6,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The Cirq Developers
Step1: Hidden linear function problem
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step6: In this notebook we consider a problem from the paper "Quantum advantage with shallow circuits" and build a quantum circuit in Cirq that solves it.
Introduction
It's well-known that some problems can be solved exponentially faster on a quantum computer than on a classical one in terms of computation time. However, there is a more subtle way in which quantum computers are more powerful. There is a problem which can be solved by a quantum circuit of constant depth, but can't be solved by a classical circuit of constant depth. In this notebook we will consider this problem.
Structure of this notebook
We start by giving a formal statement of the problem. Then we solve it in a straightforward way that follows directly from the problem definition. We will use this solution to verify our quantum solution in the next part. This part also contains helper code to generate "interesting" instances of the problem.
In the next part we solve the problem with Cirq. First, we write code that builds a quantum circuit for solving arbitrary instances of the problem. Then we use Cirq's Clifford simulator to simulate this circuit. We do this for small instances and compare the results to the brute-force solution from the previous part. Finally, we solve a larger instance to demonstrate that the problem can be solved efficiently.
The goal of this notebook is to introduce the reader to the problem and to show how Cirq can be used to solve it. We don't include proofs, but we refer the reader to the corresponding lemmas in the original paper.
Problem statement
In this problem we consider a quadratic form of a binary vector with binary coefficients (but additions and multiplications are evaluated modulo 4). Then we restrict this quadratic form, i.e. we allow only certain binary vectors as input. It turns out that under this restriction the quadratic form is equivalent to a linear function, i.e. it just evaluates the dot product of the input vector with a certain fixed vector. The task is to find this vector.
In other words, we have a linear function, which is "hidden" inside a quadratic form.
Formal statement of the problem
Consider $A \in \mathbb{F}_2^{n \times n}$, an upper-triangular binary matrix of size $n \times n$, and $b \in \mathbb{F}_2^n$, a binary vector of length $n$.
Define a function $q
Step9: For testing, we need to generate an instance of the problem. We can generate random $A$ and $b$. However, for some $A$ and $b$ the problem is trivial - that is, $\mathcal{L}_q = \{0\}$ and therefore any $z$ is a solution. In fact, the product of $|\mathcal{L}_q|$ and the number of solutions is always equal to $2^n$ (see Lemma 2 in [1]), so we want a problem with a large $\mathcal{L}_q$.
The code below can be used to generate a random problem with a given size of $\mathcal{L}_q$.
Step10: We ran this function for a while and found an instance with $n=10$ and $|\mathcal{L}_q|=16$, so only 64 of the 1024 possible vectors are solutions and the chance of randomly guessing a solution is $\frac{1}{16}$. We define this instance below by the values of $A$ and $b$, and we will use it later to verify our quantum solution.
Step14: Solution with a quantum circuit
As shown in [1], the given problem can be solved by a quantum circuit that implements the operator $H ^ {\otimes n} U_q H ^ {\otimes n}$, where
$$U_q = \prod_{1 \le i < j \le n} CZ_{ij}^{A_{ij}} \cdot \bigotimes_{j=1}^{n} S_j^{b_j} .$$
We need to apply this operator to $| 0^n \rangle$ and measure the result, which is guaranteed to be one of the solutions. Moreover, every solution occurs with equal probability.
Why does this circuit solve the problem? Define $p(z) = \left| \langle z | H ^ {\otimes n} U_q H ^ {\otimes n} | 0^n \rangle \right|^2$. It can be shown that $p(z)>0$ iff $z$ is a solution. For the proof, see Lemma 2 on page 7 in [1].
Let's generate such a circuit and simulate it.
Note that
Step15: Testing
Now, let's test this algorithm. We solve the problem with a quantum circuit 100 times and each time check that the measurement result is indeed an answer to the problem.
Step16: Let's repeat that for 10 other problems with $n=8$ and a chance of random guessing of at most $\frac{1}{4}$.
Step17: Now, let's run our algorithm on a problem with $n=200$. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
Explanation: Hidden linear function problem
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/hidden_linear_function"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/hidden_linear_function.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/hidden_linear_function.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/hidden_linear_function.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
import numpy as np
import cirq
class HiddenLinearFunctionProblem:
    """Instance of Hidden Linear Function problem.

    The problem is defined by matrix A and vector b, which are
    the coefficients of the quadratic form in which the linear function
    is "hidden".
    """
def __init__(self, A, b):
self.n = A.shape[0]
assert A.shape == (self.n, self.n)
assert b.shape == (self.n, )
for i in range(self.n):
for j in range(i+1):
assert A[i][j] == 0, 'A[i][j] can be 1 only if i<j'
self.A = A
self.b = b
def q(self, x):
        """Action of the quadratic form on a binary vector (modulo 4).

        Corresponds to `q(x)` in the problem definition.
        """
assert x.shape == (self.n, )
return (2 * (x @ self.A @ x) + (self.b @ x)) % 4
def bruteforce_solve(self):
        """Calculates, by definition, all vectors `z` which are solutions to the problem."""
# All binary vectors of length `n`.
all_vectors = [np.array([(m>>i) % 2 for i in range(self.n)]) for m in range(2**self.n)]
def vector_in_L(x):
for y in all_vectors:
if self.q( (x + y)%2 ) != (self.q(x) + self.q(y))%4:
return False
return True
# L is subspace to which we restrict domain of quadratic form.
# Corresponds to `L_q` in the problem definition.
self.L = [x for x in all_vectors if vector_in_L(x)]
# All vectors `z` which are solutions to the problem.
self.all_zs = [z for z in all_vectors if self.is_z(z)]
def is_z(self, z):
        """Checks, by definition, whether the given vector `z` is a solution to this problem."""
assert z.shape == (self.n, )
assert self.L is not None
for x in self.L:
if self.q(x) != 2 * ((z @ x) % 2):
return False
return True
Explanation: In this notebook we consider a problem from the paper "Quantum advantage with shallow circuits" and build a quantum circuit in Cirq that solves it.
Introduction
It's well-known that some problems can be solved exponentially faster on a quantum computer than on a classical one in terms of computation time. However, there is a more subtle way in which quantum computers are more powerful. There is a problem which can be solved by a quantum circuit of constant depth, but can't be solved by a classical circuit of constant depth. In this notebook we will consider this problem.
Structure of this notebook
We start by giving a formal statement of the problem. Then we solve it in a straightforward way that follows directly from the problem definition. We will use this solution to verify our quantum solution in the next part. This part also contains helper code to generate "interesting" instances of the problem.
In the next part we solve the problem with Cirq. First, we write code that builds a quantum circuit for solving arbitrary instances of the problem. Then we use Cirq's Clifford simulator to simulate this circuit. We do this for small instances and compare the results to the brute-force solution from the previous part. Finally, we solve a larger instance to demonstrate that the problem can be solved efficiently.
The goal of this notebook is to introduce the reader to the problem and to show how Cirq can be used to solve it. We don't include proofs, but we refer the reader to the corresponding lemmas in the original paper.
Problem statement
In this problem we consider a quadratic form of a binary vector with binary coefficients (but additions and multiplications are evaluated modulo 4). Then we restrict this quadratic form, i.e. we allow only certain binary vectors as input. It turns out that under this restriction the quadratic form is equivalent to a linear function, i.e. it just evaluates the dot product of the input vector with a certain fixed vector. The task is to find this vector.
In other words, we have a linear function, which is "hidden" inside a quadratic form.
Formal statement of the problem
Consider $A \in \mathbb{F}_2^{n \times n}$, an upper-triangular binary matrix of size $n \times n$, and $b \in \mathbb{F}_2^n$, a binary vector of length $n$.
Define a function $q : \mathbb{F}_2^n \to \mathbb{Z}_4$:
$$q(x) = (2 x^T A x + b^T x) ~\text{mod}~ 4 = \left(2 \sum_{i,j}A_{i,j}x_i x_j + \sum_{i} b_i x_i \right) ~\text{mod}~ 4 , $$
Also define
$$\mathcal{L}_q = \Big\{x \in \mathbb{F}_2^n : q(x \oplus y) = (q(x) + q(y)) ~\text{mod}~ 4 ~~ \forall y \in \mathbb{F}_2^n \Big\}.$$
It turns out (see Lemma 1 on page 6 in [1]) that the restriction of $q$ to $\mathcal{L}_q$ is a linear function, i.e. there exists $z \in \mathbb{F}_2^n$ such that
$$q(x) = 2 z^T x ~~\forall x \in \mathcal{L}_q.$$
Our task is, given $A$ and $b$, to find $z$. There may be multiple valid answers - we need to find any one of them.
Notation in the problem
$q$ - quadratic form; $A$ - matrix of its quadratic coefficients; $b$ - vector of its linear coefficients;
$\mathcal{L}_q$ - linear space on which we restrict $q(x)$ in order to get linear function;
$z$ - vector of coefficients of the linear function we get by restricting $q$ on $\mathcal{L}_q$. This vector is "hidden" in the coefficients of $q$ and the problem is to find it.
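As a tiny concrete illustration (a made-up instance, not from the original notebook): for $n=2$ with $A = \begin{pmatrix}0 & 1\\0 & 0\end{pmatrix}$ and $b = (1, 0)$, evaluating $q$ on $x = (1, 1)$ gives $q(x) = (2 \cdot 1 + 1) \bmod 4 = 3$:
import numpy as np
A_small = np.array([[0, 1], [0, 0]])   # upper-triangular, n = 2
b_small = np.array([1, 0])
x_small = np.array([1, 1])
print((2 * (x_small @ A_small @ x_small) + b_small @ x_small) % 4)  # prints 3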
Why is this problem interesting?
1. It's a problem without an oracle
There are other problems where the task is to find the coefficients of a linear function. But usually the linear function is represented by an oracle. See, for example, the Bernstein–Vazirani algorithm.
In this problem we avoid the use of an oracle by "hiding" the linear function in the coefficients $A$ and $b$ of the quadratic form, which are the only inputs to the problem.
2. Quantum circuits have advantage over classical when solving this problem
As we will show below, this problem can be solved with a Clifford circuit. Therefore, according to the Gottesman–Knill theorem, it can be solved in polynomial time on a classical computer. So, it might look like quantum computers aren't better than classical ones at solving this problem.
However, if we apply certain restrictions to the matrix $A$, the circuit will have a fixed depth (i.e. a fixed number of Moments). Namely, if the matrix $A$ is the adjacency matrix of a "grid" graph (whose edges can be colored with 4 colors), all CZ gates will fit in 4 Moments, and overall we will have only 8 Moments - and this doesn't depend on $n$.
But for classical circuits it can be proven (see [1]) that even if we restrict the matrix $A$ in the same way, the depth of a classical circuit (with gates of bounded fan-in) must grow as $n$ grows (in fact, it grows as $\log(n)$).
In terms of complexity theory, this problem is in QNC<sup>0</sup>, but not in NC<sup>0</sup>, which shows that
QNC<sup>0</sup> $\nsubseteq$ NC<sup>0</sup>.
Preparation and brute force solution
For small values of $n$ we can solve this problem with a trivial brute-force solution. First, we need to build $\mathcal{L}_q$. We can do that by checking, for each of the $2^n$ binary vectors, whether it belongs to $\mathcal{L}_q$, using the definition of $\mathcal{L}_q$. Then we need to try all possible $z \in \mathbb{F}_2^n$, and for each of them and for each $x \in \mathcal{L}_q$ check whether $q(x) = 2 z^T x$.
Below we implement a class which represents an instance of a problem and solves it with a brute force solution.
End of explanation
def random_problem(n, seed=None):
    """Generates an instance of the problem with given `n`.

    Args:
        n: dimension of the problem.
    """
if seed is not None:
np.random.seed(seed)
A = np.random.randint(0, 2, size=(n,n))
for i in range(n):
for j in range(i+1):
A[i][j] = 0
b = np.random.randint(0, 2, size=n)
problem = HiddenLinearFunctionProblem(A, b)
return problem
def find_interesting_problem(n, min_L_size):
Generates "interesting" instance of the problem.
Returns instance of problem with given `n`, such that size of
subspace `L_q` is at least `min_L_size`.
Args:
n: dimension of the problem.
min_L_size: minimal cardinality of subspace L.
for _ in range(1000):
problem = random_problem(n)
problem.bruteforce_solve()
if len(problem.L) >= min_L_size and not np.max(problem.A) == 0:
return problem
return None
problem = find_interesting_problem(10, 4)
print("Size of subspace L: %d" % len(problem.L))
print("Number of solutions: %d" % len(problem.all_zs))
Explanation: For testing, we need to generate an instance of the problem. We can generate random $A$ and $b$. However, for some $A$ and $b$ the problem is trivial - that is, $\mathcal{L}_q = \{0\}$ and therefore any $z$ is a solution. In fact, the product of $|\mathcal{L}_q|$ and the number of solutions is always equal to $2^n$ (see Lemma 2 in [1]), so we want a problem with a large $\mathcal{L}_q$.
The code below can be used to generate a random problem with a given size of $\mathcal{L}_q$.
End of explanation
A = np.array([[0, 1, 1, 0, 0, 1, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
b = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 1])
problem_10_64 = HiddenLinearFunctionProblem(A, b)
problem_10_64.bruteforce_solve()
print("Size of subspace L: %d" % len(problem_10_64.L))
print("Number of solutions: %d" % len(problem_10_64.all_zs))
Explanation: We ran this function for a while and found an instance with $n=10$ and $|\mathcal{L}_q|=16$, so only 64 of the 1024 possible vectors are solutions and the chance of randomly guessing a solution is $\frac{1}{16}$. We define this instance below by the values of $A$ and $b$, and we will use it later to verify our quantum solution.
End of explanation
def edge_coloring(A):
    """Solves the edge coloring problem greedily.

    Args:
        A: adjacency matrix of a graph.

    Returns a list of lists of edges, such that edges in each list
    do not share a vertex. Tries to minimize the length of this list.
    """
A = np.copy(A)
n = A.shape[0]
ans = []
while np.max(A) != 0:
edges_group = []
used = np.zeros(n, dtype=bool)
for i in range(n):
for j in range(n):
if A[i][j] == 1 and not used[i] and not used[j]:
edges_group.append((i, j))
A[i][j] = 0
used[i] = used[j] = True
ans.append(edges_group)
return ans
def generate_circuit_for_problem(problem):
    """Generates a `cirq.Circuit` which solves an instance of the Hidden Linear Function problem."""
qubits = cirq.LineQubit.range(problem.n)
circuit = cirq.Circuit()
# Hadamard gates at the beginning (creating equal superposition of all states).
circuit += cirq.Moment([cirq.H(q) for q in qubits])
# Controlled-Z gates encoding the matrix A.
for layer in edge_coloring(problem.A):
for i, j in layer:
circuit += cirq.CZ(qubits[i], qubits[j])
# S gates encoding the vector b.
circuit += cirq.Moment([cirq.S.on(qubits[i]) for i in range(problem.n) if problem.b[i] == 1])
# Hadamard gates at the end.
circuit += cirq.Moment([cirq.H(q) for q in qubits])
# Measurements.
circuit += cirq.Moment([cirq.measure(qubits[i], key=str(i)) for i in range(problem.n)])
return circuit
def solve_problem(problem, print_circuit=False):
    """Solves an instance of the Hidden Linear Function problem.

    Builds a quantum circuit for the given problem and simulates
    it with the Clifford simulator.

    Returns the measurement result as a binary vector, which is
    guaranteed to be a solution to the given problem.
    """
circuit = generate_circuit_for_problem(problem)
if print_circuit:
print(circuit)
sim = cirq.CliffordSimulator()
result = sim.simulate(circuit)
z = np.array([result.measurements[str(i)][0] for i in range(problem.n)])
return z
solve_problem(problem_10_64, print_circuit=True)
Explanation: Solution with a quantum circuit
As shown in [1], the given problem can be solved by a quantum circuit that implements the operator $H ^ {\otimes n} U_q H ^ {\otimes n}$, where
$$U_q = \prod_{1 \le i < j \le n} CZ_{ij}^{A_{ij}} \cdot \bigotimes_{j=1}^{n} S_j^{b_j} .$$
We need to apply this operator to $| 0^n \rangle$ and measure the result, which is guaranteed to be one of the solutions. Moreover, every solution occurs with equal probability.
Why does this circuit solve the problem? Define $p(z) = \left| \langle z | H ^ {\otimes n} U_q H ^ {\otimes n} | 0^n \rangle \right|^2$. It can be shown that $p(z)>0$ iff $z$ is a solution. For the proof, see Lemma 2 on page 7 in [1].
Let's generate such a circuit and simulate it.
Note that:
We use the cirq.S gate, whose matrix is $\left(\begin{smallmatrix}1 & 0\\ 0 & i\end{smallmatrix}\right)$. In the paper [1] the matrix of the S gate is defined as $\left(\begin{smallmatrix}1 & 0\\ 0 & -i\end{smallmatrix}\right)$. But for this problem it doesn't matter.
We reorder the CZ gates so that they take fewer Moments. This is a problem of minimal edge coloring, and we solve it here with a simple greedy algorithm (there are better algorithms, but finding the true optimum is not the point here). We can do that because CZ gates commute (their matrices are diagonal). This part is not essential to the solution - it just makes the circuit shorter.
All gates are Clifford gates, so we can use Clifford simulator.
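As a quick sanity check of the greedy edge coloring used above, one can run it on a small hand-made graph (a hypothetical 4-cycle, not part of the original notebook):
A_demo = np.array([[0, 1, 0, 1],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0]])
print(edge_coloring(A_demo))  # expect two groups, e.g. [[(0, 1), (2, 3)], [(0, 3), (1, 2)]]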
End of explanation
def test_problem(problem):
problem.bruteforce_solve()
tries = 100
for _ in range(tries):
z = solve_problem(problem)
assert problem.is_z(z)
test_problem(problem_10_64)
print('OK')
Explanation: Testing
Now, let's test this algorithm. We solve the problem with a quantum circuit 100 times and each time check that the measurement result is indeed an answer to the problem.
End of explanation
for _ in range(10):
test_problem(find_interesting_problem(8, 4))
print('OK')
Explanation: Let's repeat that for 10 other problems with $n=8$ and a chance of random guessing of at most $\frac{1}{4}$.
End of explanation
%%time
tries = 200
problem = random_problem(tries, seed=0)
solve_problem(problem, print_circuit=False)
Explanation: Now, let's run our algorithm on a problem with $n=200$.
End of explanation |
6,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A primer on numerical differentiation
In order to numerically evaluate a derivative $y'(x)=dy/dx$ at a point $x_0$, we approximate it by using finite differences
Step1: Why is it that the sequence does not converge? This is due to the round-off errors in the representation of the floating point numbers. To see this, we can simply type
Step2: Let's try using powers of 1/2
Step3: In addition, one could consider the midpoint difference, defined as
Step4: A more in-depth discussion about round-off errors in numerical differentiation can be found <a href="http
Step5: Notice above that gradient() uses forward and backward differences at the two ends.
Step6: More discussion about numerical differentiation, including higher order methods with error extrapolation, can be found <a href="http
Step7: One way to reduce round-off errors is simply to use the decimal package | Python Code:
dx = 1.
x = 1.
while(dx > 1.e-10):
dy = (x+dx)*(x+dx)-x*x
d = dy / dx
print("%6.0e %20.16f %20.16f" % (dx, d, d-2.))
dx = dx / 10.
Explanation: A primer on numerical differentiation
In order to numerically evaluate a derivative $y'(x)=dy/dx$ at a point $x_0$, we approximate it by using finite differences:
Therefore we find: $$\begin{eqnarray}
dx &\approx& \Delta x = x_1-x_0, \\
dy &\approx& \Delta y = y_1-y_0 = y(x_1)-y(x_0) = y(x_0+\Delta x)-y(x_0).\end{eqnarray}$$
Then we re-write the derivative in terms of discrete differences as:
$$\frac{dy}{dx} \approx \frac{\Delta y}{\Delta x}$$
Example
Let's look at the accuracy of this approximation in terms of the interval $\Delta x$. In our first example we will evaluate the derivative of $y=x^2$ at $x=1$.
End of explanation
((1.+0.0001)*(1+0.0001)-1)
Explanation: Why is it that the sequence does not converge? This is due to the round-off errors in the representation of the floating point numbers. To see this, we can simply type:
End of explanation
dx = 1.
x = 1.
while(dx > 1.e-10):
dy = (x+dx)*(x+dx)-x*x
d = dy / dx
print("%6.0e %20.16f %20.16f" % (dx, d, d-2.))
dx = dx / 2.
Explanation: Let's try using powers of 1/2
End of explanation
from math import sin, sqrt, pi
dx = 1.
while(dx > 1.e-10):
x = pi/4.
d1 = sin(x+dx) - sin(x); #forward
d2 = sin(x+dx*0.5) - sin(x-dx*0.5); # midpoint
d1 = d1 / dx;
d2 = d2 / dx;
print("%6.0e %20.16f %20.16f %20.16f %20.16f" % (dx, d1, d1-sqrt(2.)/2., d2, d2-sqrt(2.)/2.) )
dx = dx / 2.
Explanation: In addition, one could consider the midpoint difference, defined as:
$$ dy \approx \Delta y = y(x_0+\frac{\Delta x}{2})-y(x_0-\frac{\Delta x}{2}).$$
For a more complex function we need to import it from math. For instance, let's calculate the derivative of $sin(x)$ at $x=\pi/4$, including both the forward and midpoint differences.
End of explanation
%matplotlib inline
import numpy as np
from matplotlib import pyplot
y = lambda x: x*x
x1 = np.arange(0,10,1)
x2 = np.arange(0,10,0.1)
y1 = np.gradient(y(x1), 1.)
print(y1)
pyplot.plot(x1,np.gradient(y(x1),1.),'r--o');
pyplot.plot(x1[:x1.size-1],np.diff(y(x1))/np.diff(x1),'b--x');
Explanation: A more in-depth discussion about round-off errors in numerical differentiation can be found <a href="http://www.uio.no/studier/emner/matnat/math/MAT-INF1100/h10/kompendiet/kap11.pdf">here</a>
Special functions in numpy
numpy provides a simple function diff() to calculate the numerical derivatives of a dataset stored in an array by forward differences. The function gradient() calculates the derivatives by the midpoint (or central) difference, which provides a more accurate result.
End of explanation
pyplot.plot(x2,np.gradient(y(x2),0.1),'b--o');
Explanation: Notice above that gradient() uses forward and backward differences at the two ends.
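For instance, a small check of that endpoint behaviour (illustrative, reusing the x1 and y defined above):
vals = y(x1)
g = np.gradient(vals, 1.)
print(g[0], vals[1] - vals[0])      # left end: one-sided forward difference
print(g[-1], vals[-1] - vals[-2])   # right end: one-sided backward difference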
End of explanation
from scipy.misc import derivative
y = lambda x: x**2
dx = 1.
x = 1.
while(dx > 1.e-10):
d = derivative(y, x, dx, n=1, order=3)
print("%6.0e %20.16f %20.16f" % (dx, d, d-2.))
dx = dx / 10.
Explanation: More discussion about numerical differentiation, including higher order methods with error extrapolation, can be found <a href="http://young.physics.ucsc.edu/115/diff.pdf">here</a>.
The module scipy also includes methods to accurately calculate derivatives:
End of explanation
from decimal import Decimal
dx = Decimal("1.")
while(dx >= Decimal("1.e-10")):
x = Decimal("1.")
dy = (x+dx)*(x+dx)-x*x
d = dy / dx
print("%6.0e %20.16f %20.16f" % (dx, d, d-Decimal("2.")))
dx = dx / Decimal("10.")
Explanation: One way to reduce round-off errors is simply to use the decimal package
End of explanation |
6,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading of Libraries and Classes.
Step1: Create forward bond future PV (Exposure) time profile
Setting up parameters
Step2: Data input for the CouponBond portfolio
The word portfolio is used to describe just a dict of CouponBonds.
This line creates a referenceDateList
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
Create Simulator
This section creates Monte Carlo Trajectories in a wide range. Notice that the BondCoupon maturities have to be
inside the Monte Carlo simulation range [trim_start,trim_end]
Sigma has been artificially increased (OIS has smaller sigma) to allow for visualization of distinct trajectories.
# SDE parameters - Vasicek SDE
# dr(t) = k(θ − r(t))dt + σdW(t)
self.kappa = x[0]
self.theta = x[1]
self.sigma = x[2]
self.r0 = x[3]
myVasicek = MC_Vasicek_Sim()
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
For debugging uncomment this to choose a single date for the forward bond
print(startDates)
startDates = [date(2005,3,10)] # or
startDates = [date(2005,3,10) + SixMonthDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]
You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, at the first day of the CouponBond life.
Below is a way to create a random long/short bond portfolio of any size. The notional only affects the product class at the last stage of the calculation. In my case, the only parameters affected are Exposure (PV on referenceDate) and pvAvg (average PV on referenceDate).
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates))
Step3: Create Libor and portfolioScheduleOfCF. This datelist contains all dates
to be used in any calculation of the portfolio positions.
BondCoupon class has to have a method getScheduleComplete, which returns
fullSet on [0] and datelist on [1], calculated by BondCoupon as | Python Code:
%matplotlib inline
from datetime import date
import time
import pandas as pd
import numpy as np
pd.options.display.max_colwidth = 60
from Curves.Corporates.CorporateDailyVasicek import CorporateRates
from Boostrappers.CDSBootstrapper.CDSVasicekBootstrapper import BootstrapperCDSLadder
from MonteCarloSimulators.Vasicek.vasicekMCSim import MC_Vasicek_Sim
from Products.Rates.CouponBond import CouponBond
from Products.Credit.CDS import CDS
from Scheduler.Scheduler import Scheduler
import quandl
import matplotlib.pyplot as plt
import pylab
from parameters import WORKING_DIR
import itertools
marker = itertools.cycle((',', '+', '.', 'o', '*'))
from IPython.core.pylabtools import figsize
figsize(15, 4)
from pandas import ExcelWriter
import numpy.random as nprnd
from pprint import pprint
Explanation: Loading of Libraries and Classes.
End of explanation
t_step = 1.0 / 365.0
simNumber = 10
trim_start = date(2005,3,10)
trim_end = date(2010,12,31) # Last Date of the Portfolio
start = date(2005, 3, 10)
referenceDate = date(2005, 3, 10)
Explanation: Create forward bond future PV (Exposure) time profile
Setting up parameters
End of explanation
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
# Create Simulator
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek = MC_Vasicek_Sim(ReferenceDateList,xOIS,simNumber,1/365.0)
myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
# Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
# For debugging uncomment this to choose a single date for the forward bond
# print(startDates)
startDates = [date(2005,3,10)+SixMonthDelay,date(2005,3,10)+TwoYearsDelay ]
maturities = [(x+TwoYearsDelay) for x in startDates]
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
notional=(-1.0)**i
myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional,
maturity= maturities[i], freq="3M", referencedate=referenceDate,observationdate=trim_start )
Explanation: Data input for the CouponBond portfolio
The word portfolio is used to describe just a dict of CouponBonds.
This line creates a referenceDateList
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
Create Simulator
This section creates Monte Carlo Trajectories in a wide range. Notice that the BondCoupon maturities have to be
inside the Monte Carlo simulation range [trim_start,trim_end]
Sigma has been artificially increased (OIS has smaller sigma) to allow for visualization of distinct trajectories.
# SDE parameters - Vasicek SDE
# dr(t) = k(θ − r(t))dt + σdW(t)
self.kappa = x[0]
self.theta = x[1]
self.sigma = x[2]
self.r0 = x[3]
myVasicek = MC_Vasicek_Sim()
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
For debugging uncomment this to choose a single date for the forward bond
print(startDates)
startDates = [date(2005,3,10)] # or
startDates = [date(2005,3,10) + SixMonthDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]
You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, at the first day of the CouponBond life.
Below is a way to create a random long/short bond portfolio of any size. The notional only affects the product class at the last stage of the calculation. In my case, the only parameters affected are Exposure (PV on referenceDate) and pvAvg (average PV on referenceDate).
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
notional=(-1.0)**i
myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional,
maturity= maturities[i], freq="3M", referencedate=referenceDate)
End of explanation
# Create FullDateList
portfolioScheduleOfCF = set(ReferenceDateList)
for i in range(len(myPortfolio)):
portfolioScheduleOfCF=portfolioScheduleOfCF.union(myPortfolio[i].getScheduleComplete()[0]
)
portfolioScheduleOfCF = sorted(portfolioScheduleOfCF.union(ReferenceDateList))
OIS = myVasicek.getSmallLibor(datelist=portfolioScheduleOfCF)
#print(OIS)
# at this point OIS contains all dates for which the discount curve should be known.
# If the OIS doesn't contain a required date, it will not be able to discount the cashflows and the calculation will fail.
pvs={}
for t in portfolioScheduleOfCF:
pvs[t] = np.zeros([1,simNumber])
#(pvs[t])
for i in range(len(myPortfolio)):
myPortfolio[i].setLibor(OIS)
pvs[t] = pvs[t] + myPortfolio[i].getExposure(referencedate=t).values
# print(myPortfolio[i].getExposure(referencedate=t).values)
#print(pvs)
#print(OIS)
#print(myPortfolio[i].getExposure(referencedate=t).value)
pvsPlot = pd.DataFrame.from_dict(list(pvs.items()))
pvsPlot.index= list(pvs.keys())
pvs1={}
for i,t in zip(pvsPlot.values,pvsPlot.index):
pvs1[t]=i[1][0]
pvs = pd.DataFrame.from_dict(data=pvs1,orient="index")
ax=pvs.plot(legend=False)
ax.set_xlabel("Year")
ax.set_ylabel("Coupon Bond Exposure")
Explanation: Create Libor and portfolioScheduleOfCF. This datelist contains all dates
to be used in any calculation of the portfolio positions.
BondCoupon class has to have a method getScheduleComplete, which returns
fullSet on [0] and datelist on [1], calculated by BondCoupon as:
def getScheduleComplete(self):
self.datelist=self.myScheduler.getSchedule(start=self.start,end=self.maturity,freq=self.freq,referencedate=self.referencedate)
self.ntimes = len(self.datelist)
fullset = sorted(set(self.datelist)
.union([self.referencedate])
.union([self.start])
.union([self.maturity])
)
return fullset,self.datelist
portfolioScheduleOfCF is the concatenation of all fullsets. It defines the set of all dates for which Libor should be known.
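Schematically, the union logic in the code above boils down to the following sketch (illustrative, using the portfolio dict defined earlier):
all_dates = set(ReferenceDateList)
for bond in myPortfolio.values():
    all_dates = all_dates.union(bond.getScheduleComplete()[0])
all_dates = sorted(all_dates)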
End of explanation |
6,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Electron REST API
This website provides an electron insert factor REST API. A demonstration of the use of this API is given below. The source code for this heroku app is available at https
Step1: Parameterising an insert shape
To parameterise the insert shape, the x and y coordinates of the shape need to be sent to the url http
Step2: Then send the payload to the api and retrieve the results. Make sure to loop the request until complete is returned as true.
Step3: The final response from the server can be plotted as so.
Step4: Creating the electron insert spline model
Once a range of widths, lengths, and factors has been collected (at least 8) for a single applicator, ssd and energy, they can be modelled by sending them to http
Step5: Send the payload to the server api and retrieve the results.
Step6: The result from the server can be plotted in the following way if you wish. | Python Code:
# Copyright (C) 2016 Simon Biggs
# This program is free software: you can redistribute it and/or
# modify it under the terms of the GNU Affero General Public
# License as published by the Free Software Foundation, either
# version 3 of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public
# License along with this program. If not, see
# http://www.gnu.org/licenses/.
import time
import json
import requests
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib import cm
% matplotlib inline
Explanation: Electron REST API
This website provides an electron insert factor REST API. A demonstration of the use of this API is given below. The source code for this heroku app is available at https://github.com/SimonBiggs/electronfactor-server.
End of explanation
insert_x = [0.99, -0.14, -1.0, -1.73, -2.56, -3.17, -3.49, -3.57, -3.17, -2.52, -1.76,
-1.04, -0.17, 0.77, 1.63, 2.36, 2.79, 2.91, 3.04, 3.22, 3.34, 3.37, 3.08, 2.54,
1.88, 1.02, 0.99]
insert_y = [5.05, 4.98, 4.42, 3.24, 1.68, 0.6, -0.64, -1.48, -2.38, -3.77, -4.81,
-5.26, -5.51, -5.58, -5.23, -4.64, -3.77, -2.77, -1.68, -0.29, 1.23, 2.68, 3.8,
4.6, 5.01, 5.08, 5.05]
payload = json.dumps({'x': insert_x, 'y': insert_y})
payload
Explanation: Parameterising an insert shape
To parameterise the insert shape, the x and y coordinates of the shape need to be sent to the url http://electronapi.simonbiggs.net/parameterise as follows.
First create a payload with x and y coordinates.
End of explanation
complete = False
while not(complete):
response = requests.post(
"http://electronapi.simonbiggs.net/parameterise",
# "http://localhost:5000/parameterise",
data=payload)
result = json.loads(response.text)
insert_width = result['width']
insert_length = result['length']
circle = result['circle']
ellipse = result['ellipse']
complete = result['complete']
if circle is not(None):
plt.figure()
plt.plot(insert_x, insert_y)
plt.axis('equal')
plt.plot(circle['x'], circle['y'])
plt.title('Insert shape parameterisation')
plt.xlabel('Width (cm)')
plt.ylabel('Length (cm)')
plt.grid(True)
if ellipse is not(None):
plt.plot(ellipse['x'], ellipse['y'])
plt.show()
time.sleep(2)
Explanation: Then send the payload to the api and retrieve the results. Make sure to loop the request until complete is returned as true.
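A minimal variant of that polling loop with a retry cap, in case the server never reports completion (a hypothetical sketch; the intermediate plotting is omitted):
for _ in range(30):  # give up after roughly a minute
    result = json.loads(requests.post(
        "http://electronapi.simonbiggs.net/parameterise", data=payload).text)
    if result['complete']:
        break
    time.sleep(2)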
End of explanation
plt.plot(insert_x, insert_y)
plt.axis('equal')
plt.plot(circle['x'], circle['y'])
plt.plot(ellipse['x'], ellipse['y'])
plt.title('Insert shape parameterisation')
plt.xlabel('Width (cm)')
plt.ylabel('Length (cm)')
plt.grid(True)
Explanation: The final response from the server can be plotted as follows.
End of explanation
width_data = [
3.15, 3.16, 3.17, 3.17, 3.17, 3.55, 3.66, 3.71, 4.2, 4.21,
4.21, 4.21, 4.21, 4.38, 4.48, 4.59, 4.59, 4.67, 5.21, 5.25,
5.26, 5.26, 5.26, 5.34, 5.43, 5.72, 5.86, 6, 6.04, 6.08, 6.3,
6.31, 6.41, 6.53, 6.54, 6.64, 6.78, 6.9, 7.08, 7.18, 7.21, 7.36,
7.56, 7.6, 7.64, 7.82, 8.06, 8.4, 9.45]
length_data = [
3.16, 5.25, 13.64, 6.83, 9.43, 7.7, 5.04, 4.36, 4.21, 10.51,
13.65, 6.82, 8.41, 5.47, 7.29, 5.67, 6.54, 6.28, 11.4, 5.26,
10.52, 13.66, 8.41, 9.64, 11.02, 11.6, 8.62, 7.98, 9.22, 6.64,
6.33, 8.24, 8.69, 10.99, 8.41, 9.81, 10.98, 10.25, 10.77, 11.27,
9.03, 7.37, 10.05, 10.26, 8.99, 10.85, 11.85, 8.42, 9.47
]
factor_data = [
0.9294, 0.9346, 0.9533, 0.9488, 0.9488, 0.9443, 0.9434, 0.9488,
0.956, 0.9709, 0.9756, 0.9606, 0.9709, 0.9634, 0.9606, 0.9588, 0.9681,
0.9737, 0.9881, 0.9709, 0.9881, 0.9872, 0.9833, 0.993, 0.9872, 0.999,
0.9891, 0.9911, 0.999, 0.993, 0.9862, 0.9921, 0.999, 1, 0.993, 0.999,
1.007, 0.999, 1.005, 0.999, 1.0101, 1.003, 1.004, 1.0142, 1.003, 1.002,
1.007, 1.007, 1.0081
]
payload = json.dumps(
{'width': width_data, 'length':length_data, 'measuredFactor':factor_data})
payload
Explanation: Creating the electron insert spline model
Once a range of widths, lengths, and factors has been collected (at least 8) for a single applicator, ssd and energy, they can be modelled by sending them to http://electronapi.simonbiggs.net/model as follows.
First create a payload to be sent to the server.
End of explanation
response = requests.post(
"http://electronapi.simonbiggs.net/model",
# "http://localhost:5000/model",
data=payload)
result = json.loads(response.text)
model_width = np.array(result['model_width'], dtype=np.float)
model_length = np.array(result['model_length'], dtype=np.float)
model_factor = np.array(result['model_factor'], dtype=np.float)
Explanation: Send the payload to the server api and retrieve the results.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
vmin = np.min(np.concatenate([model_factor, factor_data]))
vmax = np.max(np.concatenate([model_factor, factor_data]))
vrange = vmax - vmin
for i in range(len(model_factor)):
x = model_width[i]
y = model_length[i]
z = model_factor[i]
c = (z - vmin) / vrange
ax.add_patch(
patches.Rectangle(
(x, y), # (x,y)
0.1, # width
0.1, # height
facecolor=cm.viridis(c),
edgecolor="none"
)
)
plt.scatter(
width_data, length_data, s=100, c=factor_data,
cmap='viridis', vmin=vmin, vmax=vmax, zorder=2)
ax.set_xlim([
np.min(model_width) - 0.1,
np.max(model_width) + 0.2])
ax.set_ylim([
np.min(model_length) - 0.1,
np.max(model_length) + 0.2])
plt.colorbar(label='Insert Factor')
plt.title('Insert factor model')
plt.xlabel('Width (cm)')
plt.ylabel('Length (cm)')
plt.grid(True)
Explanation: The result from the server can be plotted in the following way if you wish.
End of explanation |
6,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading CTD data with PySeabird
Author
Step1: Let's first download an example file with some CTD data
Step2: The profile dPIRX003.cnv.OK was loaded with the default rule cnv.yaml
The header (metadata)
The header is loaded into the .attributes as a dictionary. Note that the date was already converted into a datetime object.
There is a new attribute, not found in the file, that is 'md5'. This is the MD5 hash of the original file. This might be useful to double check the inputs when reproducing some analysis.
Since it's a dictionary, to extract the geographical coordinates, for example
Step3: Or for an overview of all the attributes and data
Step4: The data
The object profile behaves like a dictionary with the data. So to check the available data one can just
Step5: Each data returns as a masked array, hence all values equal to profile.attributes['bad_flag'] will return as a masked value
Step6: As a regular masked array, let's check the mean and standard deviation between the two temperature sensors
Step7: We can also export the data into a pandas DataFrame for easier data manipulation later on | Python Code:
%matplotlib inline
from seabird.cnv import fCNV
Explanation: Reading CTD data with PySeabird
Author: Guilherme Castelão
pySeabird is a package to parse and load CTD data files. It should be an easy task, but the problem is that the format has been changing over time. Working with data from multiple ships and cruises requires first understanding each file and normalizing it into a common format before starting your analysis. That can still be done with a few general regular-expression rules, but I would rather use strict rules. If I'm loading hundreds or thousands of profiles, I want to be sure that no mistake slipped by. I would rather ignore a file in doubt and warn about it than believe it was loaded correctly and let it become part of my analysis.
With that in mind, I wrote this package with the ability to load multiple rules, so new rules can be added without changing the main engine.
For more information, check the documentation.
End of explanation
!wget https://raw.githubusercontent.com/castelao/seabird/master/sampledata/CTD/dPIRX003.cnv
profile = fCNV('dPIRX003.cnv')
Explanation: Let's first download an example file with some CTD data
End of explanation
print ("The profile coordinates is latitude: %.4f, and longitude: %.4f" % \
(profile.attributes['LATITUDE'], profile.attributes['LONGITUDE']))
Explanation: The profile dPIRX003.cnv.OK was loaded with the default rule cnv.yaml
The header (metadata)
The header is loaded into the .attributes as a dictionary. Note that the date was already converted into a datetime object.
There is a new attribute, not found in the file, that is 'md5'. This is the MD5 hash of the original file. This might be useful to double check the inputs when reproducing some analysis.
Since it's a dictionary, to extract the geographical coordinates, for example:
End of explanation
print("Header: %s" % profile.attributes.keys())
print(profile.attributes)
Explanation: Or for an overview of all the attributes and data:
End of explanation
print(profile.keys())
Explanation: The data
The object profile behaves like a dictionary with the data. So to check the available data one can just
End of explanation
profile['TEMP2'][:25]
Explanation: Each data returns as a masked array, hence all values equal to profile.attributes['bad_flag'] will return as a masked value
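For example, to count how many samples were flagged as bad in a given variable (illustrative):
import numpy as np
print(np.ma.count_masked(profile['TEMP2']))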
End of explanation
print(profile['TEMP'].mean(), profile['TEMP'].std())
print(profile['TEMP2'].mean(), profile['TEMP2'].std())
from matplotlib import pyplot as plt
plt.plot(profile['TEMP'], profile['PRES'],'b')
plt.plot(profile['TEMP2'], profile['PRES'],'g')
plt.gca().invert_yaxis()
plt.xlabel('temperature')
plt.ylabel('pressure [dbar]')
plt.title(profile.attributes['filename'])
Explanation: As a regular masked array, let's check the mean and standard deviation between the two temperature sensors
End of explanation
df = profile.as_DataFrame()
df.head()
Explanation: We can also export the data into a pandas DataFrame for easier data manipulation later on:
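From here the usual pandas tools apply, for example:
df.describe()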
End of explanation |
6,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Look up columns for all tracts/block groups from the 2010 decennial census
(bunched in groups of 20 due to API quotas)
Step1: merge all dataframes to one based on pairs of tracts and block groups and drop 'state' and 'county' columns | Python Code:
census_20 = api.query_census_api('census',39, '061','*','*',['H0030001', 'H0030002', 'H0030003', 'H0040002', 'H0040004', 'H0050002', 'H0060002', 'H0060003', 'H0060004', 'H0060005', 'H0060006', 'H0060007', 'H0060008', 'H0100001', 'H0130002', 'H0130003', 'H0130004', 'H0130005', 'H0130006', 'H0130007'], 2010, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
census_40 = api.query_census_api('census',39, '061','*','*',['H0130008', 'H0140002', 'H0140003', 'H0140004', 'H0140005', 'H0140006', 'H0140007', 'H0140008', 'H0140009', 'H0140010', 'H0140011', 'H0140012', 'H0140013', 'H0140014', 'H0140015', 'H0140016', 'H0140017', 'H0150001', 'H0150002', 'H0150003'], 2010, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
census_60 = api.query_census_api('census',39, '061','*','*',['H0150004', 'H0150005', 'H0150006', 'H0150007', 'H0190003', 'H0190004', 'H0190005', 'H0190006', 'H0190007', 'H0200002', 'H0200003', 'P0010001', 'P0030002', 'P0030003', 'P0030004', 'P0030005', 'P0030006', 'P0030007', 'P0030008', 'P0110001'], 2010, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
census_80 = api.query_census_api('census',39, '061','*','*',['P0120003', 'P0120004', 'P0120005', 'P0120006', 'P0120007', 'P0120008', 'P0120009', 'P0120010', 'P0120011', 'P0120012', 'P0120013', 'P0120014', 'P0120015', 'P0120016', 'P0120017', 'P0120018', 'P0120019', 'P0120020', 'P0120021', 'P0120022'], 2010, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
census_100 = api.query_census_api('census',39, '061','*','*',['P0120023', 'P0120024', 'P0120025', 'P0120026', 'P0120027', 'P0120028', 'P0120029', 'P0120030', 'P0120031', 'P0120032', 'P0120033', 'P0120034', 'P0120035', 'P0120036', 'P0120037', 'P0120038', 'P0120039', 'P0120040', 'P0120041', 'P0120042'], 2010, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
census_111 = api.query_census_api('census',39, '061','*','*',['P0120043', 'P0120044', 'P0120045', 'P0120046', 'P0120047', 'P0120048', 'P0120049', 'P0180001', 'P0180002', 'P0180003', 'P0180004'], 2010, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
# look up the same values from the 2000 census
census_00_20 = api.query_census_api('census',39, '061','*','*',['H003001'], 2000, 'a6e317918af5d4097d792cabd992f41f2691b75e', verbose=True)
Explanation: Look up columns for all tracts/block groups from the 2010 decennial census
(bunched in groups of 20 due to API quotas)
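A sketch of how those 20-variable batches could be built programmatically instead of by hand (hypothetical helper, not part of the original notebook):
def chunk(variables, size=20):
    """Yield successive batches of at most `size` variable codes."""
    for i in range(0, len(variables), size):
        yield variables[i:i + size]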
End of explanation
census_df = [census_20, census_40, census_60, census_80, census_100, census_111]
census_df_clean = [df.drop(['state', 'county'], axis=1) for df in census_df]
census_df_clean[0].head()
census_pop_housing = reduce(lambda left,right: pd.merge(left,right,on=['tract','block group']), census_df_clean)
census_pop_housing.rename(columns={'block group': 'block_group'}, inplace=True)
census_pop_housing.to_sql('census_pop_housing', engine, schema='shape_files', if_exists='replace')
Explanation: merge all dataframes into one based on pairs of tracts and block groups and drop the 'state' and 'county' columns
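Note that the cell above relies on reduce; on Python 3 it must be imported first (presumably done among the notebook's earlier imports):
from functools import reduce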
End of explanation |
6,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\d - digit
\s - whitespace
\w - alphanumeric (letters + digits)
\t - tab
\n - newline
Step1: Reading databases from the internet
Yahoo! Finance
Google Finance
St.Louis FED (FRED)
Kenneth French's data library
World Bank
Google Analytics
pip install pandas_datareader
Step2: http
Step3: https
Step4: https | Python Code:
# treat specific values as NA
na_val = {'term' : [36]}
pd.read_csv('example_df.csv', na_values=na_val).head()
# skip rows
df.head()
pd.read_csv('example_df.csv', skiprows=[1, 2]).head()
# read only the first few rows
df_output = pd.read_csv('example_df.csv', nrows=3)
df_output
# file output
df_output.to_csv('df_output.csv', index=False, header=False)
# read a CSV file from the internet
ip_data = pd.read_csv('https://r-forge.r-project.org/scm/viewvc.php/*checkout*/pkg/fBasics/data/IP.dat.csv?revision=1&root=rmetrics&pathrev=1', sep=';')
ip_data
Explanation: \d - digit
\s - whitespace
\w - alphanumeric (letters + digits)
\t - tab
\n - newline
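For instance, a small illustration of these character classes (the sample string is made up):
import re
print(re.findall(r"\w+\t\d+", "alpha\t42\nbeta\t7"))  # ['alpha\t42', 'beta\t7']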
End of explanation
import pandas_datareader.data as web
import datetime
start = datetime.datetime(2016, 1, 1)
end = datetime.datetime(2016, 12, 31)
print(start, end)
Explanation: Reading databases from the internet
Yahoo! Finance
Google Finance
St.Louis FED (FRED)
Kenneth French's data library
World Bank
Google Analytics
pip install pandas_datareader
End of explanation
df = web.DataReader('005930.KS', 'yahoo', start, end)
df.tail()
Explanation: http://finance.yahoo.com/q?s=005930.ks
End of explanation
df = web.DataReader('KRX:005930', 'google', start, end)
df.tail()
Explanation: https://www.google.com/finance?cid=151610035517112
End of explanation
df = web.DataReader('GDP', 'fred', start, end)
df
Explanation: https://fred.stlouisfed.org/series/GDP
End of explanation |
6,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conjugate Priors
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: In the previous chapters we have used grid approximations to solve a variety of problems.
One of my goals has been to show that this approach is sufficient to solve many real-world problems.
And I think it's a good place to start because it shows clearly how the methods work.
However, as we saw in the previous chapter, grid methods will only get you so far.
As we increase the number of parameters, the number of points in the grid grows (literally) exponentially.
With more than 3-4 parameters, grid methods become impractical.
So, in the remaining three chapters, I will present three alternatives
Step2: And here's a grid approximation.
Step3: Here's the likelihood of scoring 4 goals for each possible value of lam.
Step4: And here's the update.
Step6: So far, this should be familiar.
Now we'll solve the same problem using the conjugate prior.
The Conjugate Prior
In <<_TheGammaDistribution>>, I presented three reasons to use a gamma distribution for the prior and said there was a fourth reason I would reveal later.
Well, now is the time.
The other reason I chose the gamma distribution is that it is the "conjugate prior" of the Poisson distribution, so-called because the two distributions are connected or coupled, which is what "conjugate" means.
In the next section I'll explain how they are connected, but first I'll show you the consequence of this connection, which is that there is a remarkably simple way to compute the posterior distribution.
However, in order to demonstrate it, we have to switch from the one-parameter version of the gamma distribution to the two-parameter version. Since the first parameter is called alpha, you might guess that the second parameter is called beta.
The following function takes alpha and beta and makes an object that represents a gamma distribution with those parameters.
Step7: Here's the prior distribution with alpha=1.4 again and beta=1.
Step9: Now I claim without proof that we can do a Bayesian update with k goals just by making a gamma distribution with parameters alpha+k and beta+1.
Step10: Here's how we update it with k=4 goals in t=1 game.
Step11: After all the work we did with the grid, it might seem absurd that we can do a Bayesian update by adding two pairs of numbers.
So let's confirm that it works.
I'll make a Pmf with a discrete approximation of the posterior distribution.
Step12: The following figure shows the result along with the posterior we computed using the grid algorithm.
Step13: They are the same other than small differences due to floating-point approximations.
Step14: What the Actual?
To understand how that works, we'll write the PDF of the gamma prior and the PMF of the Poisson likelihood, then multiply them together, because that's what the Bayesian update does.
We'll see that the result is a gamma distribution, and we'll derive its parameters.
Here's the PDF of the gamma prior, which is the probability density for each value of $\lambda$, given parameters $\alpha$ and $\beta$
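(The display equations from that cell are not reproduced above; the following is a sketch of the derivation they carry, using the definitions already stated.) Up to normalizing constants,
$$\mathrm{prior} \propto \lambda^{\alpha-1} e^{-\lambda \beta}, \qquad \mathrm{likelihood} \propto \lambda^{k} e^{-\lambda},$$
so their product is $\lambda^{\alpha-1+k}\, e^{-\lambda(\beta+1)}$, an unnormalized gamma distribution with parameters $\alpha+k$ and $\beta+1$ — exactly the update rule claimed earlier.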
Step15: We used the binomial distribution to compute the likelihood of the data, which was 140 heads out of 250 attempts.
Step16: Then we computed the posterior distribution in the usual way.
Step18: We can solve this problem more efficiently using the conjugate prior of the binomial distribution, which is the beta distribution.
The beta distribution is bounded between 0 and 1, so it works well for representing the distribution of a probability like x.
It has two parameters, called alpha and beta, that determine the shape of the distribution.
SciPy provides an object called beta that represents a beta distribution.
The following function takes alpha and beta and returns a new beta object.
Step19: It turns out that the uniform distribution, which we used as a prior, is the beta distribution with parameters alpha=1 and beta=1.
So we can make a beta object that represents a uniform distribution, like this
Step21: Now let's figure out how to do the update. As in the previous example, we'll write the PDF of the prior distribution and the PMF of the likelihood function, and multiply them together. We'll see that the product has the same form as the prior, and we'll derive its parameters.
Here is the PDF of the beta distribution, which is a function of $x$ with $\alpha$ and $\beta$ as parameters.
$$x^{\alpha-1} (1-x)^{\beta-1}$$
Again, I have omitted the normalizing factor, which we don't need because we are going to normalize the distribution after the update.
And here's the PMF of the binomial distribution, which is a function of $k$ with $n$ and $x$ as parameters.
$$x^{k} (1-x)^{n-k}$$
Again, I have omitted the normalizing factor.
Now when we multiply the beta prior and the binomial likelihood, the result is
$$x^{\alpha-1+k} (1-x)^{\beta-1+n-k}$$
which we recognize as an unnormalized beta distribution with parameters $\alpha+k$ and $\beta+n-k$.
So if we observe k successes in n trials, we can do the update by making a beta distribution with parameters alpha+k and beta+n-k.
That's what this function does
Step22: Again, the conjugate prior gives us insight into the meaning of the parameters; $\alpha$ is related to the number of observed successes; $\beta$ is related to the number of failures.
Here's how we do the update with the observed data.
Step23: To confirm that it works, I'll evaluate the posterior distribution for the possible values of xs and put the results in a Pmf.
Step24: And we can compare the posterior distribution we just computed with the results from the grid algorithm.
Step25: They are the same other than small differences due to floating-point approximations.
The examples so far are problems we have already solved, so let's try something new.
Step26: Lions and Tigers and Bears
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see 3 lions, 2 tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, what is the probability that the next animal we see is a bear?
To answer this question, we'll use the data to estimate the prevalence of each species, that is, what fraction of the animals belong to each species.
If we know the prevalences, we can use the multinomial distribution to compute the probability of the data.
For example, suppose we know that the fraction of lions, tigers, and bears is 0.4, 0.3, and 0.3, respectively.
In that case the probability of the data is
Step27: Now, we could choose a prior for the prevalences and do a Bayesian update using the multinomial distribution to compute the probability of the data.
But there's an easier way, because the multinomial distribution has a conjugate prior
Step28: Since we provided three parameters, the result is a distribution of three variables.
If we draw a random value from this distribution, like this
Step29: The result is an array of three values.
They are bounded between 0 and 1, and they always add up to 1, so they can be interpreted as the probabilities of a set of outcomes that are mutually exclusive and collectively exhaustive.
Let's see what the distributions of these values look like. I'll draw 1000 random vectors from this distribution, like this
Step30: The result is an array with 1000 rows and three columns. I'll compute the Cdf of the values in each column.
Step31: The result is a list of Cdf objects that represent the marginal distributions of the three variables. Here's what they look like.
Step33: Column 0, which corresponds to the lowest parameter, contains the lowest probabilities.
Column 2, which corresponds to the highest parameter, contains the highest probabilities.
As it turns out, these marginal distributions are beta distributions.
The following function takes a sequence of parameters, alpha, and computes the marginal distribution of variable i
Step34: We can use it to compute the marginal distribution for the three variables.
Step35: The following plot shows the CDF of these distributions as gray lines and compares them to the CDFs of the samples.
Step37: This confirms that the marginals of the Dirichlet distribution are beta distributions.
And that's useful because the Dirichlet distribution is the conjugate prior for the multinomial likelihood function.
If the prior distribution is Dirichlet with parameter vector alpha and the data is a vector of observations, data, the posterior distribution is Dirichlet with parameter vector alpha + data.
As an exercise at the end of this chapter, you can use this method to solve the Lions and Tigers and Bears problem.
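A sketch of that update for the data above; the full solution appears with the exercise cells later in this chapter:
```python
import numpy as np
from scipy.stats import dirichlet

prior_alpha = np.array([1, 1, 1])   # uniform Dirichlet prior
data = 3, 2, 1                      # lions, tigers, bears
posterior_alpha = prior_alpha + data
dirichlet(posterior_alpha).mean()   # posterior mean prevalences; the last entry (bears) is 2/9
```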
Summary
After reading this chapter, if you feel like you've been tricked, I understand. It turns out that many of the problems in this book can be solved with just a few arithmetic operations. So why did we go to all the trouble of using grid algorithms?
Sadly, there are only a few problems we can solve with conjugate priors; in fact, this chapter includes most of the ones that are useful in practice.
For the vast majority of problems, there is no conjugate prior and no shortcut to compute the posterior distribution.
That's why we need grid algorithms and the methods in the next two chapters, Approximate Bayesian Computation (ABC) and Markov chain Monte Carlo methods (MCMC).
Exercises
Exercise
Step38: Exercise
Step39: And here's the update.
Step40: To get you started, here's the beta distribution that we used as a uniform prior.
Step41: And here's what it looks like compared to the triangle prior.
Step42: Now you take it from there.
Step43: Exercise
Step44: Exercise | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Conjugate Priors
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
from scipy.stats import gamma
alpha = 1.4
dist = gamma(alpha)
Explanation: In the previous chapters we have used grid approximations to solve a variety of problems.
One of my goals has been to show that this approach is sufficient to solve many real-world problems.
And I think it's a good place to start because it shows clearly how the methods work.
However, as we saw in the previous chapter, grid methods will only get you so far.
As we increase the number of parameters, the number of points in the grid grows (literally) exponentially.
With more than 3-4 parameters, grid methods become impractical.
So, in the remaining three chapters, I will present three alternatives:
In this chapter we'll use conjugate priors to speed up some of the computations we've already done.
In the next chapter, I'll present Markov chain Monte Carlo (MCMC) methods, which can solve problems with tens of parameters, or even hundreds, in a reasonable amount of time.
And in the last chapter we'll use Approximate Bayesian Computation (ABC) for problems that are hard to model with simple distributions.
We'll start with the World Cup problem.
The World Cup Problem Revisited
In <<_PoissonProcesses>>, we solved the World Cup problem using a Poisson process to model goals in a soccer game as random events that are equally likely to occur at any point during a game.
We used a gamma distribution to represent the prior distribution of $\lambda$, the goal-scoring rate. And we used a Poisson distribution to compute the probability of $k$, the number of goals scored.
Here's a gamma object that represents the prior distribution.
End of explanation
import numpy as np
from utils import pmf_from_dist
lams = np.linspace(0, 10, 101)
prior = pmf_from_dist(dist, lams)
Explanation: And here's a grid approximation.
End of explanation
from scipy.stats import poisson
k = 4
likelihood = poisson(lams).pmf(k)
Explanation: Here's the likelihood of scoring 4 goals for each possible value of lam.
End of explanation
posterior = prior * likelihood
posterior.normalize()
Explanation: And here's the update.
End of explanation
def make_gamma_dist(alpha, beta):
    """Makes a gamma object."""
dist = gamma(alpha, scale=1/beta)
dist.alpha = alpha
dist.beta = beta
return dist
Explanation: So far, this should be familiar.
Now we'll solve the same problem using the conjugate prior.
The Conjugate Prior
In <<_TheGammaDistribution>>, I presented three reasons to use a gamma distribution for the prior and said there was a fourth reason I would reveal later.
Well, now is the time.
The other reason I chose the gamma distribution is that it is the "conjugate prior" of the Poisson distribution, so-called because the two distributions are connected or coupled, which is what "conjugate" means.
In the next section I'll explain how they are connected, but first I'll show you the consequence of this connection, which is that there is a remarkably simple way to compute the posterior distribution.
However, in order to demonstrate it, we have to switch from the one-parameter version of the gamma distribution to the two-parameter version. Since the first parameter is called alpha, you might guess that the second parameter is called beta.
The following function takes alpha and beta and makes an object that represents a gamma distribution with those parameters.
End of explanation
alpha = 1.4
beta = 1
prior_gamma = make_gamma_dist(alpha, beta)
prior_gamma.mean()
Explanation: Here's the prior distribution with alpha=1.4 again and beta=1.
End of explanation
def update_gamma(prior, data):
    """Update a gamma prior."""
k, t = data
alpha = prior.alpha + k
beta = prior.beta + t
return make_gamma_dist(alpha, beta)
Explanation: Now I claim without proof that we can do a Bayesian update with k goals just by making a gamma distribution with parameters alpha+k and beta+1.
End of explanation
data = 4, 1
posterior_gamma = update_gamma(prior_gamma, data)
Explanation: Here's how we update it with k=4 goals in t=1 game.
End of explanation
posterior_conjugate = pmf_from_dist(posterior_gamma, lams)
Explanation: After all the work we did with the grid, it might seem absurd that we can do a Bayesian update by adding two pairs of numbers.
So let's confirm that it works.
I'll make a Pmf with a discrete approximation of the posterior distribution.
End of explanation
from utils import decorate
def decorate_rate(title=''):
decorate(xlabel='Goal scoring rate (lam)',
ylabel='PMF',
title=title)
posterior.plot(label='grid posterior', color='C1')
posterior_conjugate.plot(label='conjugate posterior',
color='C4', ls=':')
decorate_rate('Posterior distribution')
Explanation: The following figure shows the result along with the posterior we computed using the grid algorithm.
End of explanation
np.allclose(posterior, posterior_conjugate)
Explanation: They are the same other than small differences due to floating-point approximations.
End of explanation
from utils import make_uniform
xs = np.linspace(0, 1, 101)
uniform = make_uniform(xs, 'uniform')
Explanation: What the Actual?
To understand how that works, we'll write the PDF of the gamma prior and the PMF of the Poisson likelihood, then multiply them together, because that's what the Bayesian update does.
We'll see that the result is a gamma distribution, and we'll derive its parameters.
Here's the PDF of the gamma prior, which is the probability density for each value of $\lambda$, given parameters $\alpha$ and $\beta$:
$$\lambda^{\alpha-1} e^{-\lambda \beta}$$
I have omitted the normalizing factor; since we are planning to normalize the posterior distribution anyway, we don't really need it.
Now suppose a team scores $k$ goals in $t$ games.
The probability of this data is given by the PMF of the Poisson distribution, which is a function of $k$ with $\lambda$ and $t$ as parameters.
$$\lambda^k e^{-\lambda t}$$
Again, I have omitted the normalizing factor, which makes it clearer that the gamma and Poisson distributions have the same functional form.
When we multiply them together, we can pair up the factors and add up the exponents.
The result is the unnormalized posterior distribution,
$$\lambda^{\alpha-1+k} e^{-\lambda(\beta + t)}$$
which we can recognize as an unnormalized gamma distribution with parameters $\alpha + k$ and $\beta + t$.
This derivation provides insight into what the parameters of the posterior distribution mean: $\alpha$ reflects the number of events that have occurred; $\beta$ reflects the elapsed time.
Binomial Likelihood
As a second example, let's look again at the Euro problem.
When we solved it with a grid algorithm, we started with a uniform prior:
End of explanation
from scipy.stats import binom
k, n = 140, 250
xs = uniform.qs
likelihood = binom.pmf(k, n, xs)
Explanation: We used the binomial distribution to compute the likelihood of the data, which was 140 heads out of 250 attempts.
End of explanation
posterior = uniform * likelihood
posterior.normalize()
Explanation: Then we computed the posterior distribution in the usual way.
End of explanation
import scipy.stats
def make_beta(alpha, beta):
    """Makes a beta object."""
dist = scipy.stats.beta(alpha, beta)
dist.alpha = alpha
dist.beta = beta
return dist
Explanation: We can solve this problem more efficiently using the conjugate prior of the binomial distribution, which is the beta distribution.
The beta distribution is bounded between 0 and 1, so it works well for representing the distribution of a probability like x.
It has two parameters, called alpha and beta, that determine the shape of the distribution.
SciPy provides an object called beta that represents a beta distribution.
The following function takes alpha and beta and returns a new beta object.
End of explanation
alpha = 1
beta = 1
prior_beta = make_beta(alpha, beta)
Explanation: It turns out that the uniform distribution, which we used as a prior, is the beta distribution with parameters alpha=1 and beta=1.
So we can make a beta object that represents a uniform distribution, like this:
End of explanation
def update_beta(prior, data):
    """Update a beta distribution."""
k, n = data
alpha = prior.alpha + k
beta = prior.beta + n - k
return make_beta(alpha, beta)
Explanation: Now let's figure out how to do the update. As in the previous example, we'll write the PDF of the prior distribution and the PMF of the likelihood function, and multiply them together. We'll see that the product has the same form as the prior, and we'll derive its parameters.
Here is the PDF of the beta distribution, which is a function of $x$ with $\alpha$ and $\beta$ as parameters.
$$x^{\alpha-1} (1-x)^{\beta-1}$$
Again, I have omitted the normalizing factor, which we don't need because we are going to normalize the distribution after the update.
And here's the PMF of the binomial distribution, which is a function of $k$ with $n$ and $x$ as parameters.
$$x^{k} (1-x)^{n-k}$$
Again, I have omitted the normalizing factor.
Now when we multiply the beta prior and the binomial likelihood, the result is
$$x^{\alpha-1+k} (1-x)^{\beta-1+n-k}$$
which we recognize as an unnormalized beta distribution with parameters $\alpha+k$ and $\beta+n-k$.
So if we observe k successes in n trials, we can do the update by making a beta distribution with parameters alpha+k and beta+n-k.
That's what this function does:
End of explanation
data = 140, 250
posterior_beta = update_beta(prior_beta, data)
Explanation: Again, the conjugate prior gives us insight into the meaning of the parameters; $\alpha$ is related to the number of observed successes; $\beta$ is related to the number of failures.
Here's how we do the update with the observed data.
End of explanation
posterior_conjugate = pmf_from_dist(posterior_beta, xs)
Explanation: To confirm that it works, I'll evaluate the posterior distribution for the possible values of xs and put the results in a Pmf.
End of explanation
def decorate_euro(title):
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title=title)
posterior.plot(label='grid posterior', color='C1')
posterior_conjugate.plot(label='conjugate posterior',
color='C4', ls=':')
decorate_euro(title='Posterior distribution of x')
Explanation: And we can compare the posterior distribution we just computed with the results from the grid algorithm.
End of explanation
np.allclose(posterior, posterior_conjugate)
Explanation: They are the same other than small differences due to floating-point approximations.
The examples so far are problems we have already solved, so let's try something new.
End of explanation
from scipy.stats import multinomial
data = 3, 2, 1
n = np.sum(data)
ps = 0.4, 0.3, 0.3
multinomial.pmf(data, n, ps)
Explanation: Lions and Tigers and Bears
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see 3 lions, 2 tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, what is the probability that the next animal we see is a bear?
To answer this question, we'll use the data to estimate the prevalence of each species, that is, what fraction of the animals belong to each species.
If we know the prevalences, we can use the multinomial distribution to compute the probability of the data.
For example, suppose we know that the fraction of lions, tigers, and bears is 0.4, 0.3, and 0.3, respectively.
In that case the probability of the data is:
End of explanation
from scipy.stats import dirichlet
alpha = 1, 2, 3
dist = dirichlet(alpha)
Explanation: Now, we could choose a prior for the prevalences and do a Bayesian update using the multinomial distribution to compute the probability of the data.
But there's an easier way, because the multinomial distribution has a conjugate prior: the Dirichlet distribution.
The Dirichlet Distribution
The Dirichlet distribution is a multivariate distribution, like the multivariate normal distribution we used in <<_MultivariateNormalDistribution>> to describe the distribution of penguin measurements.
In that example, the quantities in the distribution are pairs of flipper length and culmen length, and the parameters of the distribution are a vector of means and a matrix of covariances.
In a Dirichlet distribution, the quantities are vectors of probabilities, $\mathbf{x}$, and the parameter is a vector, $\mathbf{\alpha}$.
An example will make that clearer. SciPy provides a dirichlet object that represents a Dirichlet distribution.
Here's an instance with $\mathbf{\alpha} = 1, 2, 3$.
End of explanation
dist.rvs()
dist.rvs().sum()
Explanation: Since we provided three parameters, the result is a distribution of three variables.
If we draw a random value from this distribution, like this:
End of explanation
sample = dist.rvs(1000)
sample.shape
Explanation: The result is an array of three values.
They are bounded between 0 and 1, and they always add up to 1, so they can be interpreted as the probabilities of a set of outcomes that are mutually exclusive and collectively exhaustive.
Let's see what the distributions of these values look like. I'll draw 1000 random vectors from this distribution, like this:
End of explanation
from empiricaldist import Cdf
cdfs = [Cdf.from_seq(col)
for col in sample.transpose()]
Explanation: The result is an array with 1000 rows and three columns. I'll compute the Cdf of the values in each column.
End of explanation
for i, cdf in enumerate(cdfs):
label = f'Column {i}'
cdf.plot(label=label)
decorate()
Explanation: The result is a list of Cdf objects that represent the marginal distributions of the three variables. Here's what they look like.
End of explanation
def marginal_beta(alpha, i):
    """Compute the ith marginal of a Dirichlet distribution."""
total = np.sum(alpha)
return make_beta(alpha[i], total-alpha[i])
Explanation: Column 0, which corresponds to the lowest parameter, contains the lowest probabilities.
Column 2, which corresponds to the highest parameter, contains the highest probabilities.
As it turns out, these marginal distributions are beta distributions.
The following function takes a sequence of parameters, alpha, and computes the marginal distribution of variable i:
End of explanation
marginals = [marginal_beta(alpha, i)
for i in range(len(alpha))]
Explanation: We can use it to compute the marginal distribution for the three variables.
End of explanation
xs = np.linspace(0, 1, 101)
for i in range(len(alpha)):
label = f'Column {i}'
pmf = pmf_from_dist(marginals[i], xs)
pmf.make_cdf().plot(color='C5')
cdf = cdfs[i]
cdf.plot(label=label, ls=':')
decorate()
Explanation: The following plot shows the CDF of these distributions as gray lines and compares them to the CDFs of the samples.
End of explanation
# Solution
# The unnormalized posterior is
#
#     \lambda^{\alpha-1+1} e^{-(\beta + t) \lambda}
#
# which is an unnormalized gamma distribution with parameters
# `alpha+1` and `beta+t`, which means that we observed 1 goal
# in elapsed time `t`.
#
# So we can use the same update function and call it like this:
data = 1, 11/90
posterior1 = update_gamma(prior_gamma, data)
# Solution
# Here's the second update
data = 1, 12/90
posterior2 = update_gamma(posterior1, data)
# Solution
prior_gamma.mean(), posterior1.mean(), posterior2.mean()
# Solution
# And here's what the posteriors look like
pmf_from_dist(prior_gamma, lams).plot(color='C5', label='prior')
pmf_from_dist(posterior1, lams).plot(label='after 1 goal')
pmf_from_dist(posterior2, lams).plot(label='after 2 goals')
decorate_rate(title='World Cup Problem, Germany v Brazil')
Explanation: This confirms that the marginals of the Dirichlet distribution are beta distributions.
And that's useful because the Dirichlet distribution is the conjugate prior for the multinomial likelihood function.
If the prior distribution is Dirichlet with parameter vector alpha and the data is a vector of observations, data, the posterior distribution is Dirichlet with parameter vector alpha + data.
As an exercise at the end of this chapter, you can use this method to solve the Lions and Tigers and Bears problem.
Summary
After reading this chapter, if you feel like you've been tricked, I understand. It turns out that many of the problems in this book can be solved with just a few arithmetic operations. So why did we go to all the trouble of using grid algorithms?
Sadly, there are only a few problems we can solve with conjugate priors; in fact, this chapter includes most of the ones that are useful in practice.
For the vast majority of problems, there is no conjugate prior and no shortcut to compute the posterior distribution.
That's why we need grid algorithms and the methods in the next two chapters, Approximate Bayesian Computation (ABC) and Markov chain Monte Carlo methods (MCMC).
Exercises
Exercise: In the second version of the World Cup problem, the data we use for the update is not the number of goals in a game, but the time until the first goal.
So the probability of the data is given by the exponential distribution rather than the Poisson distribution.
But it turns out that the gamma distribution is also the conjugate prior of the exponential distribution, so there is a simple way to compute this update, too.
The PDF of the exponential distribution is a function of $t$ with $\lambda$ as a parameter.
$$\lambda e^{-\lambda t}$$
Multiply the PDF of the gamma prior by this likelihood, confirm that the result is an unnormalized gamma distribution, and see if you can derive its parameters.
Write a few lines of code to update prior_gamma with the data from this version of the problem, which was a first goal after 11 minutes and a second goal after an additional 12 minutes.
Remember to express these quantities in units of games, which are approximately 90 minutes.
End of explanation
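# A quick numerical check of this conjugate shortcut (sketch): compare it with
# an explicit grid update using the exponential likelihood lam * exp(-lam * t).
t = 11/90
likelihood_expo = lams * np.exp(-lams * t)
grid_posterior = pmf_from_dist(prior_gamma, lams) * likelihood_expo
grid_posterior.normalize()
conjugate_posterior = pmf_from_dist(update_gamma(prior_gamma, (1, t)), lams)
np.allclose(grid_posterior, conjugate_posterior)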
from empiricaldist import Pmf
ramp_up = np.arange(50)
ramp_down = np.arange(50, -1, -1)
a = np.append(ramp_up, ramp_down)
xs = uniform.qs
triangle = Pmf(a, xs, name='triangle')
triangle.normalize()
Explanation: Exercise: For problems like the Euro problem where the likelihood function is binomial, we can do a Bayesian update with just a few arithmetic operations, but only if the prior is a beta distribution.
If we want a uniform prior, we can use a beta distribution with alpha=1 and beta=1.
But what can we do if the prior distribution we want is not a beta distribution?
For example, in <<_TrianglePrior>> we also solved the Euro problem with a triangle prior, which is not a beta distribution.
In these cases, we can often find a beta distribution that is a good-enough approximation for the prior we want.
See if you can find a beta distribution that fits the triangle prior, then update it using update_beta.
Use pmf_from_dist to make a Pmf that approximates the posterior distribution and compare it to the posterior we just computed using a grid algorithm. How big is the largest difference between them?
Here's the triangle prior again.
End of explanation
k, n = 140, 250
likelihood = binom.pmf(k, n, xs)
posterior = triangle * likelihood
posterior.normalize()
Explanation: And here's the update.
End of explanation
alpha = 1
beta = 1
prior_beta = make_beta(alpha, beta)
prior_beta.mean()
Explanation: To get you started, here's the beta distribution that we used as a uniform prior.
End of explanation
prior_pmf = pmf_from_dist(prior_beta, xs)
triangle.plot(label='triangle')
prior_pmf.plot(label='beta')
decorate_euro('Prior distributions')
Explanation: And here's what it looks like compared to the triangle prior.
End of explanation
# Solution
data = 140, 250
posterior_beta = update_beta(prior_beta, data)
posterior_beta.mean()
# Solution
posterior_conjugate = pmf_from_dist(posterior_beta, xs)
# Solution
posterior.plot(label='grid posterior', ls=':')
posterior_conjugate.plot(label='conjugate posterior')
decorate(xlabel='Proportion of heads (x)',
ylabel='Probability',
title='Posterior distribution of x')
# Solution
# The largest absolute difference is pretty small
np.allclose(posterior, posterior_conjugate)
Explanation: Now you take it from there.
End of explanation
# Solution
# The first prior implies that most sellers are
# satisfactory most of the time, but none are perfect.
prior = make_beta(8, 2)
xs = np.linspace(0.005, 0.995, 199)
prior_pmf = pmf_from_dist(prior, xs)
prior_pmf.plot(color='C5', label='prior')
decorate(xlabel='Probability of positive rating',
ylabel='PDF')
# Solution
data1 = 10, 10
data2 = 48, 50
data3 = 186, 200
# Solution
seller1 = update_beta(prior, data1)
seller2 = update_beta(prior, data2)
seller3 = update_beta(prior, data3)
# Solution
seller1_pmf = pmf_from_dist(seller1, xs)
seller2_pmf = pmf_from_dist(seller2, xs)
seller3_pmf = pmf_from_dist(seller3, xs)
# Solution
seller1_pmf.plot(label='seller 1')
seller2_pmf.plot(label='seller 2')
seller3_pmf.plot(label='seller 3')
decorate(xlabel='Probability of positive rating',
ylabel='PDF',
xlim=(0.65, 1.0))
# Solution
seller1.mean(), seller2.mean(), seller3.mean()
# Solution
iters = 10000
a = np.empty((3, iters))
a[0] = seller1.rvs(iters)
a[1] = seller2.rvs(iters)
a[2] = seller3.rvs(iters)
# Solution
from empiricaldist import Pmf
best = np.argmax(a, axis=0)
Pmf.from_seq(best)
Explanation: Exercise: 3Blue1Brown is a YouTube channel about math; if you are not already aware of it, I recommend it highly.
In this video the narrator presents this problem:
You are buying a product online and you see three sellers offering the same product at the same price. One of them has a 100% positive rating, but with only 10 reviews. Another has a 96% positive rating with 50 total reviews. And yet another has a 93% positive rating, but with 200 total reviews.
Which one should you buy from?
Let's think about how to model this scenario. Suppose each seller has some unknown probability, x, of providing satisfactory service and getting a positive rating, and we want to choose the seller with the highest value of x.
This is not the only model for this scenario, and it is not necessarily the best. An alternative would be something like item response theory, where sellers have varying ability to provide satisfactory service and customers have varying difficulty of being satisfied.
But the first model has the virtue of simplicity, so let's see where it gets us.
As a prior, I suggest a beta distribution with alpha=8 and beta=2. What does this prior look like and what does it imply about sellers?
Use the data to update the prior for the three sellers and plot the posterior distributions. Which seller has the highest posterior mean?
How confident should we be about our choice? That is, what is the probability that the seller with the highest posterior mean actually has the highest value of x?
Consider a beta prior with alpha=0.7 and beta=0.5. What does this prior look like and what does it imply about sellers?
Run the analysis again with this prior and see what effect it has on the results.
Note: When you evaluate the beta distribution, you should restrict the range of xs so it does not include 0 and 1. When the parameters of the beta distribution are less than 1, the probability density goes to infinity at 0 and 1. From a mathematical point of view, that's not a problem; it is still a proper probability distribution. But from a computational point of view, it means we have to avoid evaluating the PDF at 0 and 1.
End of explanation
# Solution
prior_alpha = np.array([1, 1, 1])
data = 3, 2, 1
# Solution
posterior_alpha = prior_alpha + data
# Solution
marginal_bear = marginal_beta(posterior_alpha, 2)
marginal_bear.mean()
# Solution
dist = dirichlet(posterior_alpha)
# Solution
import pandas as pd
index = ['lion', 'tiger', 'bear']
pd.DataFrame(dist.mean(), index, columns=['prob'])
Explanation: Exercise: Use a Dirichlet prior with parameter vector alpha = [1, 1, 1] to solve the Lions and Tigers and Bears problem:
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see three lions, two tigers, and one bear. Assuming that every animal had an equal chance to appear in our sample, estimate the prevalence of each species.
What is the probability that the next animal we see is a bear?
End of explanation |
6,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: On-Device Training with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Note
Step3: The train function in the code above uses the GradientTape class to record operations for automatic differentiation. For more information on how to use this class, see the Introduction to gradients and automatic differentiation.
You could use the Model.train_step method of the keras model here instead of a from-scratch implementation. Just note that the loss (and metrics) returned by Model.train_step is the running average, and should be reset regularly (typically each epoch). See Customize Model.fit for details.
Note
Step4: Preprocess the dataset
Pixel values in this dataset are between 0 and 255, and must be normalized to a value between 0 and 1 for processing by the model. Divide the values by 255 to make this adjustment.
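For example (this mirrors the preprocessing cell in the code below):
```python
train_images = (train_images / 255.0).astype(np.float32)
test_images = (test_images / 255.0).astype(np.float32)
```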
Step5: Convert the data labels to categorical values by performing one-hot encoding.
Step6: Note
Step7: Note
Step8: Setup the TensorFlow Lite signatures
The TensorFlow Lite model you saved in the previous step contains several function signatures. You can access them through the tf.lite.Interpreter class and invoke each restore, train, save, and infer signature separately.
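In Python, for example, each signature can be looked up by name (this mirrors the interpreter setup in the code below):
```python
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
infer = interpreter.get_signature_runner("infer")
train = interpreter.get_signature_runner("train")
```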
Step9: Compare the output of the original model, and the converted lite model
Step10: Above, you can see that the behavior of the model is not changed by the conversion to TFLite.
Retrain the model on a device
After converting your model to TensorFlow Lite and deploying it with your app, you can retrain the model on a device using new data and the train signature method of your model. Each training run generates a new set of weights that you can save for re-use and further improvement of the model, as shown in the next section.
Note
Step11: Above you can see that the on-device training picks up exactly where the pretraining stopped.
Save the trained weights
When you complete a training run on a device, the model updates the set of weights it is using in memory. Using the save signature method you created in your TensorFlow Lite model, you can save these weights to a checkpoint file for later reuse and improve your model.
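In Python this is a single signature call (the same call appears in the code below; the checkpoint path is only an example):
```python
save = interpreter.get_signature_runner("save")
save(checkpoint_path=np.array("/tmp/model.ckpt", dtype=np.string_))
```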
Step12: In your Android application, you can store the generated weights as a checkpoint file in the internal storage space allocated for your app.
```Java
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
// Conduct the training jobs.
// Export the trained weights as a checkpoint file.
File outputFile = new File(getFilesDir(), "checkpoint.ckpt");
Map<String, Object> inputs = new HashMap<>();
inputs.put("checkpoint_path", outputFile.getAbsolutePath());
Map<String, Object> outputs = new HashMap<>();
interpreter.runSignature(inputs, outputs, "save");
}
```
Restore the trained weights
Any time you create an interpreter from a TFLite model, the interpreter will initially load the original model weights.
So after you've done some training and saved a checkpoint file, you'll need to run the restore signature method to load the checkpoint.
A good rule is "Anytime you create an Interpreter for a model, if the checkpoint exists, load it". If you need to reset the model to the baseline behavior, just delete the checkpoint and create a fresh interpreter.
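In Python the restore step looks like this (mirroring the code below):
```python
another_interpreter = tf.lite.Interpreter(model_content=tflite_model)
another_interpreter.allocate_tensors()
restore = another_interpreter.get_signature_runner("restore")
restore(checkpoint_path=np.array("/tmp/model.ckpt", dtype=np.string_))
```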
Step13: The checkpoint was generated by training and saving with TFLite. Above you can see that applying the checkpoint updates the behavior of the model.
Note
Step14: Plot the predicted labels. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
Explanation: On-Device Training with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/examples/on_device_training/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/examples/on_device_training/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/examples/on_device_training/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/examples/on_device_training/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
When deploying TensorFlow Lite machine learning model to device or mobile app, you may want to enable the model to be improved or personalized based on input from the device or end user. Using on-device training techniques allows you to update a model without data leaving your users' devices, improving user privacy, and without requiring users to update the device software.
For example, you may have a model in your mobile app that recognizes fashion items, but you want users to get improved recognition performance over time based on their interests. Enabling on-device training allows users who are interested in shoes to get better at recognizing a particular style of shoe or shoe brand the more often they use your app.
This tutorial shows you how to construct a TensorFlow Lite model that can be incrementally trained and improved within an installed Android app.
Note: The on-device training technique can be added to existing TensorFlow Lite implementations, provided the devices you are targeting support local file storage.
Setup
This tutorial uses Python to train and convert a TensorFlow model before incorporating it into an Android app. Get started by installing and importing the following packages.
End of explanation
IMG_SIZE = 28
class Model(tf.Module):
def __init__(self):
self.model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE), name='flatten'),
tf.keras.layers.Dense(128, activation='relu', name='dense_1'),
tf.keras.layers.Dense(10, name='dense_2')
])
self.model.compile(
optimizer='sgd',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
# The `train` function takes a batch of input images and labels.
@tf.function(input_signature=[
tf.TensorSpec([None, IMG_SIZE, IMG_SIZE], tf.float32),
tf.TensorSpec([None, 10], tf.float32),
])
def train(self, x, y):
with tf.GradientTape() as tape:
prediction = self.model(x)
loss = self.model.loss(y, prediction)
gradients = tape.gradient(loss, self.model.trainable_variables)
self.model.optimizer.apply_gradients(
zip(gradients, self.model.trainable_variables))
result = {"loss": loss}
return result
@tf.function(input_signature=[
tf.TensorSpec([None, IMG_SIZE, IMG_SIZE], tf.float32),
])
def infer(self, x):
logits = self.model(x)
probabilities = tf.nn.softmax(logits, axis=-1)
return {
"output": probabilities,
"logits": logits
}
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
def save(self, checkpoint_path):
tensor_names = [weight.name for weight in self.model.weights]
tensors_to_save = [weight.read_value() for weight in self.model.weights]
tf.raw_ops.Save(
filename=checkpoint_path, tensor_names=tensor_names,
data=tensors_to_save, name='save')
return {
"checkpoint_path": checkpoint_path
}
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
def restore(self, checkpoint_path):
restored_tensors = {}
for var in self.model.weights:
restored = tf.raw_ops.Restore(
file_pattern=checkpoint_path, tensor_name=var.name, dt=var.dtype,
name='restore')
var.assign(restored)
restored_tensors[var.name] = restored
return restored_tensors
Explanation: Note: The On-Device Training APIs are available in TensorFlow version 2.7 and higher.
Classify images of clothing
This example code uses the Fashion MNIST dataset to train a neural network model for classifying images of clothing. This dataset contains 60,000 small (28 x 28 pixel) grayscale images containing 10 different categories of fashion accessories, including dresses, shirts, and sandals.
<figure>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST images">
<figcaption><b>Figure 1</b>: <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).</figcaption>
</figure>
You can explore this dataset in more depth in the Keras classification tutorial.
Build a model for on-device training
TensorFlow Lite models typically have only a single exposed function method (or signature) that allows you to call the model to run an inference. For a model to be trained and used on a device, you must be able to perform several separate operations, including train, infer, save, and restore functions for the model. You can enable this functionality by first extending your TensorFlow model to have multiple functions, and then exposing those functions as signatures when you convert your model to the TensorFlow Lite model format.
The code example below shows you how to add the following functions to a TensorFlow model:
train function trains the model with training data.
infer function invokes the inference.
save function saves the trainable weights into the file system.
restore function loads the trainable weights from the file system.
End of explanation
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: The train function in the code above uses the GradientTape class to record operations for automatic differentiation. For more information on how to use this class, see the Introduction to gradients and automatic differentiation.
You could use the Model.train_step method of the keras model here instead of a from-scratch implementation. Just note that the loss (and metrics) returned by Model.train_step is the running average, and should be reset regularly (typically each epoch). See Customize Model.fit for details.
Note: The weights generated by this model are serialized into a TensorFlow 1 format checkpoint file.
Prepare the data
Get the Fashion MNIST dataset for training your model.
End of explanation
train_images = (train_images / 255.0).astype(np.float32)
test_images = (test_images / 255.0).astype(np.float32)
Explanation: Preprocess the dataset
Pixel values in this dataset are between 0 and 255, and must be normalized to a value between 0 and 1 for processing by the model. Divide the values by 255 to make this adjustment.
End of explanation
train_labels = tf.keras.utils.to_categorical(train_labels)
test_labels = tf.keras.utils.to_categorical(test_labels)
Explanation: Convert the data labels to categorical values by performing one-hot encoding.
End of explanation
NUM_EPOCHS = 100
BATCH_SIZE = 100
epochs = np.arange(1, NUM_EPOCHS + 1, 1)
losses = np.zeros([NUM_EPOCHS])
m = Model()
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
train_ds = train_ds.batch(BATCH_SIZE)
for i in range(NUM_EPOCHS):
for x,y in train_ds:
result = m.train(x, y)
losses[i] = result['loss']
if (i + 1) % 10 == 0:
print(f"Finished {i+1} epochs")
print(f" loss: {losses[i]:.3f}")
# Save the trained weights to a checkpoint.
m.save('/tmp/model.ckpt')
plt.plot(epochs, losses, label='Pre-training')
plt.ylim([0, max(plt.ylim())])
plt.xlabel('Epoch')
plt.ylabel('Loss [Cross Entropy]')
plt.legend();
Explanation: Note: Make sure you preprocess your training and testing datasets in the same way, so that your testing accurately evaluate your model's performance.
Train the model
Before converting and setting up your TensorFlow Lite model, complete the initial training of your model using the preprocessed dataset and the train signature method. The following code runs model training for 100 epochs, processing batches of 100 images at a time, and displaying the loss value after every 10 epochs. Since this training run is processing quite a bit of data, it may take a few minutes to finish.
End of explanation
SAVED_MODEL_DIR = "saved_model"
tf.saved_model.save(
m,
SAVED_MODEL_DIR,
signatures={
'train':
m.train.get_concrete_function(),
'infer':
m.infer.get_concrete_function(),
'save':
m.save.get_concrete_function(),
'restore':
m.restore.get_concrete_function(),
})
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
converter.experimental_enable_resource_variables = True
tflite_model = converter.convert()
Explanation: Note: You should complete initial training of your model before converting it to TensorFlow Lite format, so that the model has an initial set of weights, and is able to perform reasonable inferences before you start collecting data and conducting training runs on the device.
Convert model to TensorFlow Lite format
After you have extended your TensorFlow model to enable additional functions for on-device training and completed initial training of the model, you can convert it to TensorFlow Lite format. The following code converts and saves your model to that format, including the set of signatures that you use with the TensorFlow Lite model on a device: train, infer, save, restore.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
infer = interpreter.get_signature_runner("infer")
Explanation: Setup the TensorFlow Lite signatures
The TensorFlow Lite model you saved in the previous step contains several function signatures. You can access them through the tf.lite.Interpreter class and invoke each restore, train, save, and infer signature separately.
End of explanation
logits_original = m.infer(x=train_images[:1])['logits'][0]
logits_lite = infer(x=train_images[:1])['logits'][0]
#@title
def compare_logits(logits):
width = 0.35
offset = width/2
assert len(logits)==2
keys = list(logits.keys())
plt.bar(x = np.arange(len(logits[keys[0]]))-offset,
height=logits[keys[0]], width=0.35, label=keys[0])
plt.bar(x = np.arange(len(logits[keys[1]]))+offset,
height=logits[keys[1]], width=0.35, label=keys[1])
plt.legend()
plt.grid(True)
plt.ylabel('Logit')
plt.xlabel('ClassID')
delta = np.sum(np.abs(logits[keys[0]] - logits[keys[1]]))
plt.title(f"Total difference: {delta:.3g}")
compare_logits({'Original': logits_original, 'Lite': logits_lite})
Explanation: Compare the output of the original model, and the converted lite model:
End of explanation
train = interpreter.get_signature_runner("train")
NUM_EPOCHS = 50
BATCH_SIZE = 100
more_epochs = np.arange(epochs[-1]+1, epochs[-1] + NUM_EPOCHS + 1, 1)
more_losses = np.zeros([NUM_EPOCHS])
for i in range(NUM_EPOCHS):
for x,y in train_ds:
result = train(x=x, y=y)
more_losses[i] = result['loss']
if (i + 1) % 10 == 0:
print(f"Finished {i+1} epochs")
print(f" loss: {more_losses[i]:.3f}")
plt.plot(epochs, losses, label='Pre-training')
plt.plot(more_epochs, more_losses, label='On device')
plt.ylim([0, max(plt.ylim())])
plt.xlabel('Epoch')
plt.ylabel('Loss [Cross Entropy]')
plt.legend();
Explanation: Above, you can see that the behavior of the model is not changed by the conversion to TFLite.
Retrain the model on a device
After converting your model to TensorFlow Lite and deploying it with your app, you can retrain the model on a device using new data and the train signature method of your model. Each training run generates a new set of weights that you can save for re-use and further improvement of the model, as shown in the next section.
Note: Since training tasks are resource intensive, you should consider performing them when users are not actively interacting with the device, and as a background process. Consider using the WorkManager API to schedule model retraining as an asynchronous task.
On Android, you can perform on-device training with TensorFlow Lite using either Java or C++ APIs. In Java, use the Interpreter class to load a model and drive model training tasks. The following example shows how to run the training procedure using the runSignature method:
```Java
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
int NUM_EPOCHS = 100;
int BATCH_SIZE = 100;
int IMG_HEIGHT = 28;
int IMG_WIDTH = 28;
int NUM_TRAININGS = 60000;
int NUM_BATCHES = NUM_TRAININGS / BATCH_SIZE;
List<FloatBuffer> trainImageBatches = new ArrayList<>(NUM_BATCHES);
List<FloatBuffer> trainLabelBatches = new ArrayList<>(NUM_BATCHES);
// Prepare training batches.
for (int i = 0; i < NUM_BATCHES; ++i) {
FloatBuffer trainImages = FloatBuffer.allocateDirect(BATCH_SIZE * IMG_HEIGHT * IMG_WIDTH).order(ByteOrder.nativeOrder());
FloatBuffer trainLabels = FloatBuffer.allocateDirect(BATCH_SIZE * 10).order(ByteOrder.nativeOrder());
// Fill the data values...
trainImageBatches.add(trainImages.rewind());
trainImageLabels.add(trainLabels.rewind());
}
// Run training for a few steps.
float[] losses = new float[NUM_EPOCHS];
for (int epoch = 0; epoch < NUM_EPOCHS; ++epoch) {
for (int batchIdx = 0; batchIdx < NUM_BATCHES; ++batchIdx) {
Map<String, Object> inputs = new HashMap<>();
inputs.put("x", trainImageBatches.get(batchIdx));
inputs.put("y", trainLabelBatches.get(batchIdx));
Map<String, Object> outputs = new HashMap<>();
FloatBuffer loss = FloatBuffer.allocate(1);
outputs.put("loss", loss);
interpreter.runSignature(inputs, outputs, "train");
// Record the last loss.
if (batchIdx == NUM_BATCHES - 1) losses[epoch] = loss.get(0);
}
// Print the loss output for every 10 epochs.
if ((epoch + 1) % 10 == 0) {
System.out.println(
"Finished " + (epoch + 1) + " epochs, current loss: " + loss.get(0));
}
}
// ...
}
```
You can see a complete code example of model retraining inside an Android app in the model personalization demo app.
Run training for a few epochs to improve or personalize the model. In practice, you would run this additional training using data collected on the device. For simplicity, this example uses the same training data as the previous training step.
End of explanation
save = interpreter.get_signature_runner("save")
save(checkpoint_path=np.array("/tmp/model.ckpt", dtype=np.string_))
Explanation: Above you can see that the on-device training picks up exactly where the pretraining stopped.
Save the trained weights
When you complete a training run on a device, the model updates the set of weights it is using in memory. Using the save signature method you created in your TensorFlow Lite model, you can save these weights to a checkpoint file for later reuse and improve your model.
End of explanation
another_interpreter = tf.lite.Interpreter(model_content=tflite_model)
another_interpreter.allocate_tensors()
infer = another_interpreter.get_signature_runner("infer")
restore = another_interpreter.get_signature_runner("restore")
logits_before = infer(x=train_images[:1])['logits'][0]
# Restore the trained weights from /tmp/model.ckpt
restore(checkpoint_path=np.array("/tmp/model.ckpt", dtype=np.string_))
logits_after = infer(x=train_images[:1])['logits'][0]
compare_logits({'Before': logits_before, 'After': logits_after})
Explanation: In your Android application, you can store the generated weights as a checkpoint file in the internal storage space allocated for your app.
```Java
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
// Conduct the training jobs.
// Export the trained weights as a checkpoint file.
File outputFile = new File(getFilesDir(), "checkpoint.ckpt");
Map<String, Object> inputs = new HashMap<>();
inputs.put("checkpoint_path", outputFile.getAbsolutePath());
Map<String, Object> outputs = new HashMap<>();
interpreter.runSignature(inputs, outputs, "save");
}
```
Restore the trained weights
Any time you create an interpreter from a TFLite model, the interpreter will initially load the original model weights.
So after you've done some training and saved a checkpoint file, you'll need to run the restore signature method to load the checkpoint.
A good rule is "Anytime you create an Interpreter for a model, if the checkpoint exists, load it". If you need to reset the model to the baseline behavior, just delete the checkpoint and create a fresh interpreter.
End of explanation
infer = another_interpreter.get_signature_runner("infer")
result = infer(x=test_images)
predictions = np.argmax(result["output"], axis=1)
true_labels = np.argmax(test_labels, axis=1)
result['output'].shape
Explanation: The checkpoint was generated by training and saving with TFLite. Above you can see that applying the checkpoint updates the behavior of the model.
Note: Loading the saved weights from the checkpoint can take time, based on the number of variables in the model and the size of the checkpoint file.
In your Android app, you can restore the serialized, trained weights from the checkpoint file you stored earlier.
Java
try (Interpreter anotherInterpreter = new Interpreter(modelBuffer)) {
// Load the trained weights from the checkpoint file.
File outputFile = new File(getFilesDir(), "checkpoint.ckpt");
Map<String, Object> inputs = new HashMap<>();
inputs.put("checkpoint_path", outputFile.getAbsolutePath());
Map<String, Object> outputs = new HashMap<>();
anotherInterpreter.runSignature(inputs, outputs, "restore");
}
Note: When your application restarts, you should reload your trained weights prior to running new inferences.
Run Inference using trained weights
Once you have loaded previously saved weights from a checkpoint file, running the infer method uses those weights with your original model to improve predictions. After loading the saved weights, you can use the infer signature method as shown below.
Note: Loading the saved weights is not required to run an inference, but running in that configuration produces predictions using the originally trained model, without improvements.
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
def plot(images, predictions, true_labels):
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i], cmap=plt.cm.binary)
color = 'b' if predictions[i] == true_labels[i] else 'r'
plt.xlabel(class_names[predictions[i]], color=color)
plt.show()
plot(test_images, predictions, true_labels)
predictions.shape
Explanation: Plot the predicted labels.
End of explanation |
6,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recognizing digits with just NumPy
Step1: First, download the MNIST data
Step2: Q
Let's first take a look at what this data is!
Step3: Supervised Learning
<img src="supervised.svg" />
Analogy:
<img src="supervised2.svg" />
Analogy:
the Chinese Room
Step4: A look at what MNIST's y is
Step5: A look at MNIST's X
Step6: Start with a simple method
A naive method: compare directly and find the closest image.
Let's try the squared error (sum of squared differences) first
Q
Compute the squared error between u and v
```python
u = train_X[0]
v = train_X[1]
```
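One possible answer (a sketch):
```python
sq_err = ((u - v) ** 2).sum()
sq_err
```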
Step7: Q
Try to
* display test_X[0]
* find the index of the train_X image that is most similar to test_X[0]
* display that most-similar image
* then look up the corresponding train_y
* and compare with test_y[0]
Step8: Q
Do the same thing for the first 10/100 samples of test_X, then compute the accuracy.
Step9: Q
What if train_X had only 500 samples?
Use reshaping and broadcasting to compute the accuracy on test_X[:100]!
Step10: Q
What about other distance functions? e.g. np.abs(...).sum()
Switching to the inner product in place of the squared error
$$
\begin{align}
\left\Vert \mathbf{u}-\mathbf{v}\right\Vert ^{2} & =\left(\mathbf{u}-\mathbf{v}\right)\cdot\left(\mathbf{u}-\mathbf{v}\right)\\
 & =\left\Vert \mathbf{u}\right\Vert ^{2}-2\,\mathbf{u}\cdot\mathbf{v}+\left\Vert \mathbf{v}\right\Vert ^{2}
\end{align}
$$
Step11: Reduce the dimensionality with PCA
from PIL import Image
import numpy as np
Explanation: Recognizing digits with just NumPy
End of explanation
import os
import urllib
from urllib.request import urlretrieve
dataset = 'mnist.pkl.gz'
def reporthook(a,b,c):
print("\rdownloading: %5.1f%%"%(a*b*100.0/c), end="")
if not os.path.isfile(dataset):
origin = "https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz"
print('Downloading data from %s' % origin)
urlretrieve(origin, dataset, reporthook=reporthook)
import gzip
import pickle
with gzip.open(dataset, 'rb') as f:
train_set, validation_set, test_set = pickle.load(f, encoding='latin1')
Explanation: First, download the MNIST data
End of explanation
%run -i q_see_mnist_data.py
Explanation: Q
Let's first take a look at what this data is!
End of explanation
train_X, train_y = train_set
test_X, test_y = test_set
Explanation: Supervised Learning
<img src="supervised.svg" />
Analogy:
<img src="supervised2.svg" />
Analogy:
the Chinese Room
End of explanation
# Training data: the first 20 values of y
train_y[:20]
Explanation: A look at what MNIST's y is
End of explanation
from IPython.display import display
def showX(X):
int_X = (X*255).clip(0,255).astype('uint8')
# N*784 -> N*28*28 -> 28*N*28 -> 28 * 28N
int_X_reshape = int_X.reshape(-1,28,28).swapaxes(0,1).reshape(28,-1)
display(Image.fromarray(int_X_reshape))
# Training data: the first 20 images in X
showX(train_X[:20])
Explanation: A look at MNIST's X
End of explanation
%run -i q_square_error.py
Explanation: Start with a simple method
A naive method: compare directly and find the closest image.
Let's try the squared error (sum of squared differences) first
Q
Compute the squared error between u and v
```python
u = train_X[0]
v = train_X[1]
```
End of explanation
%run -i q_find_nn_0.py
Explanation: Q
Try to
* display test_X[0]
* find the index of the train_X image that is most similar to test_X[0]
* display that most-similar image
* then look up the corresponding train_y
* and compare with test_y[0]
End of explanation
%run -i q_find_nn_10.py
Explanation: Q
Do the same thing for the first 10/100 samples of test_X, then compute the accuracy.
End of explanation
# Warning: this may use a lot of memory!
#%run -i q_small_data.py
# accuracy: 85%
Explanation: Q
What if train_X had only 500 samples?
Use reshaping and broadcasting to compute the accuracy on test_X[:100]!
Hint: use np.expand_dims instead of np.reshape
End of explanation
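# One possible approach (sketch): keep only the first 500 training samples and
# use broadcasting; the intermediate array has shape (100, 500, 784).
small_X = train_X[:500]
small_y = train_y[:500]
diff = np.expand_dims(test_X[:100], 1) - np.expand_dims(small_X, 0)
d = (diff ** 2).sum(axis=2)                 # squared error of each test image vs each train image
predict_y_small = small_y[d.argmin(axis=1)]
(predict_y_small == test_y[:100]).mean()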
# Normalize the data
train_X = train_X / np.linalg.norm(train_X, axis=1, keepdims=True)
test_X = test_X / np.linalg.norm(test_X, axis=1, keepdims=True)
# Matrix multiplication == computing many inner products at once
A = test_X @ train_X.T
print(A.shape)
A.argmax(axis=1)
predict_y = train_y[A.argmax(axis=1)]
# Test data: the first 20 images in X
showX(test_set[0][:20])
# The first 20 predicted labels
predict_y[:20]
# Test data: the first 20 values of y
test_y[:20]
# Accuracy
(predict_y == test_y).mean()
Explanation: Q
What about other distance functions? e.g. np.abs(...).sum()
Switching to the inner product in place of the squared error
$$
\begin{align}
\left\Vert \mathbf{u}-\mathbf{v}\right\Vert ^{2} & =\left(\mathbf{u}-\mathbf{v}\right)\cdot\left(\mathbf{u}-\mathbf{v}\right)\\
 & =\left\Vert \mathbf{u}\right\Vert ^{2}-2\,\mathbf{u}\cdot\mathbf{v}+\left\Vert \mathbf{v}\right\Vert ^{2}
\end{align}
$$
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=60)
train_X = pca.fit_transform(train_set[0])
test_X = pca.transform(test_set[0])
train_X.shape
train_X /= np.linalg.norm(train_X, axis=1, keepdims=True)
test_X /= np.linalg.norm(test_X, axis=1, keepdims=True)
# Matrix multiplication
A = test_X @ train_X.T
predict_y = train_y[A.argmax(axis=1)]
# Accuracy
(predict_y == test_y).mean()
Explanation: Reduce the dimensionality with PCA
End of explanation |
6,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nyquist
Step1: Proportional control of the normalized DC-motor
Zero-order hold sampling of the DC motor with transfer function $G(s)=\frac{1}{s(s+1)}$ gives the discrete time system
\begin{equation} H(z) = \frac{\big(h-1+e^{-h}\big)z + \big(1-e^{-h}-he^{-h}\big)}{z^2 -\big(1+e^{-h}\big)z + e^{-h}} \end{equation}
Let $h=\ln 2 \approx 0.693$. This gives the pulse-transfer function
\begin{equation} H(z) = \frac{B(z)}{A(z)} = \frac{0.19z + 0.15}{z^2 - 1.5z + 0.5} \end{equation}
Proportional control gives the closed loop system
\begin{equation}
H_c(z) = \frac{K H(z)}{KH(z) + 1} = \frac{K B(z)}{A(z) + KB(z)}.
\end{equation}
The characteristic equation of the closed loop system is
\begin{equation}
z^2 + (-1.5+0.19K)z + (0.5+0.15K) = 0
\end{equation} | Python Code:
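A quick way to check closed-loop stability for a given gain (a sketch using python-control; the gain value is only illustrative):
```python
import numpy as np
import control.matlab as cm

H = cm.tf([0.19, 0.15], [1, -1.5, 0.5], np.log(2))
K = 1.0                        # illustrative gain
Hc = cm.feedback(K * H, 1)     # closed loop K*H / (1 + K*H)
print(np.abs(cm.pole(Hc)))     # stable iff every pole magnitude is < 1
```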
import numpy as np
import sympy as sy
import itertools
import matplotlib.pyplot as plt
import control.matlab as cm
sy.init_printing()
%matplotlib inline
Explanation: Nyquist
End of explanation
z,h = sy.symbols('z,h')
eh = sy.exp(-h)
H = ( (h-1+eh)*z + (1-eh-h*eh) )/( z*z - (1+eh)*z + eh )  # constant term is exp(-h), matching the transfer function above
B,A = sy.fraction(H)
print(A)
Hs = {}
Hs2 = {}
H0 = cm.tf([1.0], [1, 1, 0])
Hs[0] = (H0,[1.0],[1.0, 1.0, 0])
Hs2[0] = (H0,[1.0],[1.0, 1.0, 0])
for hh in [0.01, 0.1, 0.2, 0.4, 0.8, 1]:
Bp = sy.Poly(B.subs(h,hh))
Ap = sy.Poly(A.subs(h,hh))
a = []
for el in Ap.coeffs():
a.append(float(sy.N(el)))
b = []
for el in Bp.coeffs():
b.append(float(sy.N(el)))
Hs[hh] = (cm.tf(b,a,hh), b,a)
Hs2[hh] = (cm.c2d(H0, hh), b,a)
z = np.exp(1j*np.linspace(0.02,np.pi, 800))
s = 1j*np.linspace(0.3,100,800)
zz = Hs[0][0](s);
xy = np.column_stack((np.real(zz),np.imag(zz)))
np.savetxt('nyquistH0150908.out', xy, fmt='%10.5f')
for (hh,HH) in Hs2.items():
if hh > 0:
zz = HH[0](z);
xy = np.column_stack((np.real(zz),np.imag(zz)))
np.savetxt('nyquistHzz%d150908.out' % (100*hh), xy, fmt='%10.5f')
zz = Hs2[0][0](s);
zz2 = Hs2[0.01][0](z);
zz3 = Hs2[0.1][0](z);
zz4 = Hs2[0.2][0](z);
zz5 = Hs2[0.8][0](z);
plt.figure()
plt.plot(np.real(zz), np.imag(zz))
plt.plot(np.real(zz2), np.imag(zz2))
plt.plot(np.real(zz3), np.imag(zz3))
plt.plot(np.real(zz4), np.imag(zz4))
plt.plot(np.real(zz5), np.imag(zz5))
plt.ylim(-4,1)
plt.xlim(-2,1)
Hln2 = H.subs(h, np.log(2))
Hln2 = cm.tf([0.19, 0.15], [1, -1.5, 0.5], np.log(2))
Hln2
cm.rlocus(Hln2)
plt.ylim(-3,3)
plt.xlim(-3,3)
Explanation: Proportional control of the normalized DC-motor
Zero-order hold sampling of the DC motor with transfer function $G(s)=\frac{1}{s(s+1)}$ gives the discrete time system
\begin{equation} H(z) = \frac{\big(h-1+e^{-h}\big)z + \big(1-e^{-h}-he^{-h}\big)}{z^2 -\big(1+e^{-h}\big)z + e^{-h}} \end{equation}
Let $h=\ln 2 \approx 0.693$. This gives the pulse-transfer function
\begin{equation} H(z) = \frac{B(z)}{A(z)} = \frac{0.19z + 0.15}{z^2 - 1.5z + 0.5} \end{equation}
Proportional control gives the closed loop system
\begin{equation}
H_c(z) = \frac{K H(z)}{KH(z) + 1} = \frac{K B(z)}{A(z) + KB(z)}.
\end{equation}
The characteristic equation of the closed loop system is
\begin{equation}
z^2 + (-1.5+0.19K)z + (0.5+0.15K) = 0
\end{equation}
End of explanation |
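As an added, hedged sketch (not part of the original notebook), one can check numerically for which proportional gains K the roots of the characteristic polynomial above stay inside the unit circle; the gain values below are arbitrary illustration:
import numpy as np
for K in [0.5, 1.0, 2.0, 3.0, 4.0]:
    poles = np.roots([1.0, -1.5 + 0.19 * K, 0.5 + 0.15 * K])
    print("K = %4.1f  |poles| = %s  stable: %s"
          % (K, np.round(np.abs(poles), 3), bool(np.all(np.abs(poles) < 1.0))))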
6,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution
Step6: Provide complete sentence answers to the following questions in the cell below | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math as m
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
%%latex
$$F(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}$$
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    # np.exp (rather than math.exp) lets the function accept scalars or numpy arrays.
    energy = np.asarray(energy, dtype=float)
    return 1.0 / (np.exp((energy - mu) / kT) + 1.0)
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
YOUR ANSWER HERE
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
def plot_fermidist(mu, kT):
    # Plot F(epsilon) over [0, 10] as a line plot, as the exercise asks.
    energy = np.linspace(0.0, 10.0, 200)
    plt.plot(energy, fermidist(energy, mu, kT), color='purple', lw=2)
    plt.xlim(0.0, 10.0)
    plt.ylim(0.0, 1.05)
    plt.xlabel(r'Energy $\epsilon$')
    plt.ylabel(r'$F(\epsilon)$')
    plt.title(r'Fermi distribution ($\mu$=%.1f, kT=%.1f)' % (mu, kT))
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
# YOUR CODE HERE
w = interactive(plot_fermidist, mu =(0.0,5.0,0.1), kT=(0.1,10.0,0.1));
w
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
When kT is low, $F(\epsilon)$ approaches a sharp step at $\epsilon = \mu$: states below the chemical potential are almost certainly occupied and states above it are almost certainly empty.
When kT is high, that step smooths out and the occupation probability falls off slowly with energy.
Raising the chemical potential $\mu$ shifts the step to higher energies, while lowering $\mu$ shifts it to lower energies.
Since the number of particles is related to the area under the curve, raising $\mu$ increases the number of particles and lowering $\mu$ decreases it.
Explanation: Provide complete sentence answers to the following questions in the cell below:
What happens when the temperature $kT$ is low?
What happens when the temperature $kT$ is high?
What is the effect of changing the chemical potential $\mu$?
The number of particles in the system is related to the area under this curve. How does the chemical potential affect the number of particles?
Use LaTeX to typeset any mathematical symbols in your answer.
YOUR ANSWER HERE
End of explanation |
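A small added check (an illustration, not part of the graded exercise) that supports the answers above: as kT shrinks, fermidist approaches a step at energy = mu.
energy = np.linspace(0.0, 10.0, 11)
for kT in (5.0, 1.0, 0.1):
    print("kT =", kT, "->", np.round(fermidist(energy, mu=5.0, kT=kT), 3))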
6,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Intro to Sparse Data and Embeddings
Learning Objectives
Step3: Building a Sentiment Analysis Model
Let's train a sentiment-analysis model on this data that predicts if a review is generally favorable (label of 1) or unfavorable (label of 0).
To do so, we'll turn our string-value terms into feature vectors by using a vocabulary, a list of each term we expect to see in our data. For the purposes of this exercise, we've created a small vocabulary that focuses on a limited set of terms. Most of these terms were found to be strongly indicative of favorable or unfavorable, but some were just added because they're interesting.
Each term in the vocabulary is mapped to a coordinate in our feature vector. To convert the string-value terms for an example into this vector format, we encode such that each coordinate gets a value of 0 if the vocabulary term does not appear in the example string, and a value of 1 if it does. Terms in an example that don't appear in the vocabulary are thrown away.
NOTE
Step4: To confirm our function is working as expected, let's construct a TFRecordDataset for the training data, and map the data to features and labels using the function above.
Step5: Run the following cell to retrieve the first example from the training data set.
Step6: Now, let's build a formal input function that we can pass to the train() method of a TensorFlow Estimator object.
Step7: Task 1
Step8: Next, we'll construct the LinearClassifier, train it on the training set, and evaluate it on the evaluation set. After you read through the code, run it and see how you do.
Step9: Task 2
Step10: Task 3
Step11: Complete the Code Below
Step12: Solution
Click below for a solution.
Step13: Task 4
Step14: Okay, we can see that there is an embedding layer in there
Step15: Spend some time manually checking the various layers and shapes to make sure everything is connected the way you would expect it would be.
Task 5
Step16: Task 6 | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/takasawada/tarsh/blob/master/intro_to_sparse_data_and_embeddings_ipynb.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2017 Google LLC.
End of explanation
from __future__ import print_function
import collections
import io
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython import display
from sklearn import metrics
tf.logging.set_verbosity(tf.logging.ERROR)
train_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/train.tfrecord'
train_path = tf.keras.utils.get_file(train_url.split('/')[-1], train_url)
test_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/test.tfrecord'
test_path = tf.keras.utils.get_file(test_url.split('/')[-1], test_url)
Explanation: Intro to Sparse Data and Embeddings
Learning Objectives:
* Convert movie-review string data to a sparse feature vector
* Implement a sentiment-analysis linear model using a sparse feature vector
* Implement a sentiment-analysis DNN model using an embedding that projects data into two dimensions
* Visualize the embedding to see what the model has learned about the relationships between words
In this exercise, we'll explore sparse data and work with embeddings using text data from movie reviews (from the ACL 2011 IMDB dataset). This data has already been processed into tf.Example format.
Setup
Let's import our dependencies and download the training and test data. tf.keras includes a file download and caching tool that we can use to retrieve the data sets.
End of explanation
def _parse_function(record):
    """Extracts features and labels.

    Args:
      record: File path to a TFRecord file
    Returns:
      A `tuple` `(labels, features)`:
        features: A dict of tensors representing the features
        labels: A tensor with the corresponding labels.
    """
features = {
"terms": tf.VarLenFeature(dtype=tf.string), # terms are strings of varying lengths
"labels": tf.FixedLenFeature(shape=[1], dtype=tf.float32) # labels are 0 or 1
}
parsed_features = tf.parse_single_example(record, features)
terms = parsed_features['terms'].values
labels = parsed_features['labels']
return {'terms':terms}, labels
Explanation: Building a Sentiment Analysis Model
Let's train a sentiment-analysis model on this data that predicts if a review is generally favorable (label of 1) or unfavorable (label of 0).
To do so, we'll turn our string-value terms into feature vectors by using a vocabulary, a list of each term we expect to see in our data. For the purposes of this exercise, we've created a small vocabulary that focuses on a limited set of terms. Most of these terms were found to be strongly indicative of favorable or unfavorable, but some were just added because they're interesting.
Each term in the vocabulary is mapped to a coordinate in our feature vector. To convert the string-value terms for an example into this vector format, we encode such that each coordinate gets a value of 0 if the vocabulary term does not appear in the example string, and a value of 1 if it does. Terms in an example that don't appear in the vocabulary are thrown away.
NOTE: We could of course use a larger vocabulary, and there are special tools for creating these. In addition, instead of just dropping terms that are not in the vocabulary, we can introduce a small number of OOV (out-of-vocabulary) buckets to which you can hash the terms not in the vocabulary. We can also use a feature hashing approach that hashes each term, instead of creating an explicit vocabulary. This works well in practice, but loses interpretability, which is useful for this exercise. See see the tf.feature_column module for tools handling this.
Building the Input Pipeline
First, let's configure the input pipeline to import our data into a TensorFlow model. We can use the following function to parse the training and test data (which is in TFRecord format) and return a dict of the features and the corresponding labels.
End of explanation
# Create the Dataset object.
ds = tf.data.TFRecordDataset(train_path)
# Map features and labels with the parse function.
ds = ds.map(_parse_function)
ds
Explanation: To confirm our function is working as expected, let's construct a TFRecordDataset for the training data, and map the data to features and labels using the function above.
End of explanation
n = ds.make_one_shot_iterator().get_next()
sess = tf.Session()
sess.run(n)
Explanation: Run the following cell to retrieve the first example from the training data set.
End of explanation
# Create an input_fn that parses the tf.Examples from the given files,
# and split them into features and targets.
def _input_fn(input_filenames, num_epochs=None, shuffle=True):
# Same code as above; create a dataset and map features and labels.
ds = tf.data.TFRecordDataset(input_filenames)
ds = ds.map(_parse_function)
if shuffle:
ds = ds.shuffle(10000)
# Our feature data is variable-length, so we pad and batch
# each field of the dataset structure to whatever size is necessary.
ds = ds.padded_batch(25, ds.output_shapes)
ds = ds.repeat(num_epochs)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
Explanation: Now, let's build a formal input function that we can pass to the train() method of a TensorFlow Estimator object.
End of explanation
# 50 informative terms that compose our model vocabulary
informative_terms = ("bad", "great", "best", "worst", "fun", "beautiful",
"excellent", "poor", "boring", "awful", "terrible",
"definitely", "perfect", "liked", "worse", "waste",
"entertaining", "loved", "unfortunately", "amazing",
"enjoyed", "favorite", "horrible", "brilliant", "highly",
"simple", "annoying", "today", "hilarious", "enjoyable",
"dull", "fantastic", "poorly", "fails", "disappointing",
"disappointment", "not", "him", "her", "good", "time",
"?", ".", "!", "movie", "film", "action", "comedy",
"drama", "family")
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms", vocabulary_list=informative_terms)
Explanation: Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary
For our first model, we'll build a LinearClassifier model using 50 informative terms; always start simple!
The following code constructs the feature column for our terms. The categorical_column_with_vocabulary_list function creates a feature column with the string-to-feature-vector mapping.
End of explanation
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
feature_columns = [ terms_feature_column ]
classifier = tf.estimator.LinearClassifier(
feature_columns=feature_columns,
optimizer=my_optimizer,
)
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
Explanation: Next, we'll construct the LinearClassifier, train it on the training set, and evaluate it on the evaluation set. After you read through the code, run it and see how you do.
End of explanation
##################### Here's what we changed ##################################
classifier = tf.estimator.DNNClassifier( #
feature_columns=[tf.feature_column.indicator_column(terms_feature_column)], #
hidden_units=[20,20], #
optimizer=my_optimizer, #
) #
###############################################################################
try:
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
except ValueError as err:
print(err)
Explanation: Task 2: Use a Deep Neural Network (DNN) Model
The above model is a linear model. It works quite well. But can we do better with a DNN model?
Let's swap in a DNNClassifier for the LinearClassifier. Run the following cell, and see how you do.
End of explanation
# Here's an example code snippet you might use to define the feature columns:
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
Explanation: Task 3: Use an Embedding with a DNN Model
In this task, we'll implement our DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output.
NOTE: An embedding_column is usually the computationally most efficient option to use for training a model on sparse data. In an optional section at the end of this exercise, we'll discuss in more depth the implementational differences between using an embedding_column and an indicator_column, and the tradeoffs of selecting one over the other.
In the following code, do the following:
Define the feature columns for the model using an embedding_column that projects the data into 2 dimensions (see the TF docs for more details on the function signature for embedding_column).
Define a DNNClassifier with the following specifications:
Two hidden layers of 20 units each
Adagrad optimization with a learning rate of 0.1
A gradient_clip_norm of 5.0
NOTE: In practice, we might project to dimensions higher than 2, like 50 or 100. But for now, 2 dimensions is easy to visualize.
Hint
End of explanation
########################## YOUR CODE HERE ######################################
terms_embedding_column = # Define the embedding column
feature_columns = # Define the feature columns
classifier = # Define the DNNClassifier
################################################################################
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
Explanation: Complete the Code Below
End of explanation
########################## SOLUTION CODE ########################################
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[20,20],
optimizer=my_optimizer
)
#################################################################################
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
Explanation: Solution
Click below for a solution.
End of explanation
classifier.get_variable_names()
Explanation: Task 4: Convince yourself there's actually an embedding in there
The above model used an embedding_column, and it seemed to work, but this doesn't tell us much about what's going on internally. How can we check that the model is actually using an embedding inside?
To start, let's look at the tensors in the model:
End of explanation
classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights').shape
Explanation: Okay, we can see that there is an embedding layer in there: 'dnn/input_from_feature_columns/input_layer/terms_embedding/...'. (What's interesting here, by the way, is that this layer is trainable along with the rest of the model just as any hidden layer is.)
Is the embedding layer the correct shape? Run the following code to find out.
NOTE: Remember, in our case, the embedding is a matrix that allows us to project a 50-dimensional vector down to 2 dimensions.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
embedding_matrix = classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights')
for term_index in range(len(informative_terms)):
# Create a one-hot encoding for our term. It has 0s everywhere, except for
# a single 1 in the coordinate that corresponds to that term.
term_vector = np.zeros(len(informative_terms))
term_vector[term_index] = 1
# We'll now project that one-hot vector into the embedding space.
embedding_xy = np.matmul(term_vector, embedding_matrix)
plt.text(embedding_xy[0],
embedding_xy[1],
informative_terms[term_index])
# Do a little setup to make sure the plot displays nicely.
plt.rcParams["figure.figsize"] = (15, 15)
plt.xlim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())
plt.ylim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())
plt.show()
Explanation: Spend some time manually checking the various layers and shapes to make sure everything is connected the way you would expect it would be.
Task 5: Examine the Embedding
Let's now take a look at the actual embedding space, and see where the terms end up in it. Do the following:
1. Run the following code to see the embedding we trained in Task 3. Do things end up where you'd expect?
Re-train the model by rerunning the code in Task 3, and then run the embedding visualization below again. What stays the same? What changes?
Finally, re-train the model again using only 10 steps (which will yield a terrible model). Run the embedding visualization below again. What do you see now, and why?
End of explanation
# Download the vocabulary file.
terms_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt'
terms_path = tf.keras.utils.get_file(terms_url.split('/')[-1], terms_url)
# Create a feature column from "terms", using a full vocabulary file.
informative_terms = None
with io.open(terms_path, 'r', encoding='utf8') as f:
# Convert it to a set first to remove duplicates.
informative_terms = list(set(f.read().split()))
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms",
vocabulary_list=informative_terms)
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[10,10],
optimizer=my_optimizer
)
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
Explanation: Task 6: Try to improve the model's performance
See if you can refine the model to improve performance. A couple things you may want to try:
Changing hyperparameters, or using a different optimizer like Adam (you may only gain one or two accuracy percentage points following these strategies).
Adding additional terms to informative_terms. There's a full vocabulary file with all 30,716 terms for this data set that you can use at: https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt You can pick out additional terms from this vocabulary file, or use the whole thing via the categorical_column_with_vocabulary_file feature column.
End of explanation |
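The explanation above mentions the categorical_column_with_vocabulary_file feature column; a hedged sketch of that alternative (it assumes terms_path from the cell above and otherwise reuses the same column definitions):
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_file(
    key="terms", vocabulary_file=terms_path)
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [terms_embedding_column]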
6,313 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have the following datatype: | Problem:
import pandas as pd
id=["Train A","Train A","Train A","Train B","Train B","Train B"]
arrival_time = ["0"," 2016-05-19 13:50:00","2016-05-19 21:25:00","0","2016-05-24 18:30:00","2016-05-26 12:15:00"]
departure_time = ["2016-05-19 08:25:00","2016-05-19 16:00:00","2016-05-20 07:45:00","2016-05-24 12:50:00","2016-05-25 23:00:00","2016-05-26 19:45:00"]
df = pd.DataFrame({'id': id, 'arrival_time':arrival_time, 'departure_time':departure_time})
import numpy as np
def g(df):
df['arrival_time'] = pd.to_datetime(df['arrival_time'].replace('0', np.nan))
df['departure_time'] = pd.to_datetime(df['departure_time'])
df['Duration'] = (df['arrival_time'] - df.groupby('id')['departure_time'].shift()).dt.total_seconds()
return df
df = g(df.copy()) |
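A quick added check of the result (illustrative only): shift() leaves the first row of each train with no previous departure, so its Duration is NaN, while later rows hold the seconds between the previous departure and the current arrival.
print(df[['id', 'arrival_time', 'departure_time', 'Duration']])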
6,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!-- -*- coding
Step1: By definition, a Graph is a collection of nodes (vertices) along with
identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can
be any hashable object e.g., a text string, an image, an XML object, another
Graph, a customized node object, etc.
Nodes
The graph G can be grown in several ways. NetworkX includes many graph
generator functions and facilities to read and write graphs in many formats.
To get started though we’ll look at simple manipulations. You can add one node
at a time,
Step2: add a list of nodes,
Step3: or add any iterable container of nodes. You can also add nodes along with node
attributes if your container yields 2-tuples (node, node_attribute_dict).
Node attributes are discussed further below.
Step4: Note that G now contains the nodes of H as nodes of G.
In contrast, you could use the graph H as a node in G.
Step5: The graph G now contains H as a node. This flexibility is very powerful as
it allows graphs of graphs, graphs of files, graphs of functions and much more.
It is worth thinking about how to structure your application so that the nodes
are useful entities. Of course you can always use a unique identifier in G
and have a separate dictionary keyed by identifier to the node information if
you prefer.
Edges
G can also be grown by adding one edge at a time,
Step6: by adding a list of edges,
Step7: or by adding any ebunch of edges. An ebunch is any iterable
container of edge-tuples. An edge-tuple can be a 2-tuple of nodes or a 3-tuple
with 2 nodes followed by an edge attribute dictionary, e.g.,
(2, 3, {'weight'
Step8: There are no complaints when adding existing nodes or edges. For example,
after removing all nodes and edges,
Step9: we add new nodes/edges and NetworkX quietly ignores any that are
already present.
Step10: At this stage the graph G consists of 8 nodes and 3 edges, as can be seen by
Step11: We can examine the nodes and edges. Four basic graph properties facilitate
reporting
Step12: One can specify to report the edges and degree from a subset of all nodes
using an nbunch. An nbunch is any of
Step13: One can remove nodes and edges from the graph in a similar fashion to adding.
Use methods
Graph.remove_node(),
Graph.remove_nodes_from(),
Graph.remove_edge()
and
Graph.remove_edges_from(), e.g.
Step14: When creating a graph structure by instantiating one of the graph
classes you can specify data in several formats.
Step15: What to use as nodes and edges
You might notice that nodes and edges are not specified as NetworkX
objects. This leaves you free to use meaningful items as nodes and
edges. The most common choices are numbers or strings, but a node can
be any hashable object (except None), and an edge can be associated
with any object x using G.add_edge(n1, n2, object=x).
As an example, n1 and n2 could be protein objects from the RCSB Protein
Data Bank, and x could refer to an XML record of publications detailing
experimental observations of their interaction.
We have found this power quite useful, but its abuse
can lead to unexpected surprises unless one is familiar with Python.
If in doubt, consider using convert_node_labels_to_integers() to obtain
a more traditional graph with integer labels.
Accessing edges and neighbors
In addition to the views Graph.edges(), and Graph.adj(),
access to edges and neighbors is possible using subscript notation.
Step16: You can get/set the attributes of an edge using subscript notation
if the edge already exists.
Step17: Fast examination of all (node, adjacency) pairs is achieved using
G.adjacency(), or G.adj.items().
Note that for undirected graphs, adjacency iteration sees each edge twice.
Step18: Convenient access to all edges is achieved with the edges property.
Step19: Adding attributes to graphs, nodes, and edges
Attributes such as weights, labels, colors, or whatever Python object you like,
can be attached to graphs, nodes, or edges.
Each graph, node, and edge can hold key/value attribute pairs in an associated
attribute dictionary (the keys must be hashable). By default these are empty,
but attributes can be added or changed using add_edge, add_node or direct
manipulation of the attribute dictionaries named G.graph, G.nodes, and
G.edges for a graph G.
Graph attributes
Assign graph attributes when creating a new graph
Step20: Or you can modify attributes later
Step21: Node attributes
Add node attributes using add_node(), add_nodes_from(), or G.nodes
Step22: Note that adding a node to G.nodes does not add it to the graph, use
G.add_node() to add new nodes. Similarly for edges.
Edge Attributes
Add/change edge attributes using add_edge(), add_edges_from(),
or subscript notation.
Step23: The special attribute weight should be numeric as it is used by
algorithms requiring weighted edges.
Directed graphs
The DiGraph class provides additional properties specific to
directed edges, e.g.,
DiGraph.out_edges(), DiGraph.in_degree(),
DiGraph.predecessors(), DiGraph.successors() etc.
To allow algorithms to work with both classes easily, the directed versions of
neighbors() is equivalent to successors() while degree reports
the sum of in_degree and out_degree even though that may feel
inconsistent at times.
Step24: Some algorithms work only for directed graphs and others are not well
defined for directed graphs. Indeed the tendency to lump directed
and undirected graphs together is dangerous. If you want to treat
a directed graph as undirected for some measurement you should probably
convert it using Graph.to_undirected() or with
Step25: Multigraphs
NetworkX provides classes for graphs which allow multiple edges
between any pair of nodes. The MultiGraph and
MultiDiGraph
classes allow you to add the same edge twice, possibly with different
edge data. This can be powerful for some applications, but many
algorithms are not well defined on such graphs.
Where results are well defined,
e.g., MultiGraph.degree() we provide the function. Otherwise you
should convert to a standard graph in a way that makes the measurement
well defined.
Step26: Graph generators and graph operations
In addition to constructing graphs node-by-node or edge-by-edge, they
can also be generated by
Applying classic graph operations, such as
Step27: Using a (constructive) generator for a classic graph, e.g.,
Step28: Using a stochastic graph generator, e.g.,
Step29: Reading a graph stored in a file using common graph formats,
such as edge lists, adjacency lists, GML, GraphML, pickle, LEDA and others.
Step30: For details on graph formats see Reading and writing graphs
and for graph generator functions see Graph generators
Analyzing graphs
The structure of G can be analyzed using various graph-theoretic
functions such as
Step31: Some functions with large output iterate over (node, value) 2-tuples.
These are easily stored in a dict structure if you desire.
Step32: See Algorithms for details on graph algorithms
supported.
Drawing graphs
NetworkX is not primarily a graph drawing package but basic drawing with
Matplotlib as well as an interface to use the open source Graphviz software
package are included. These are part of the networkx.drawing module and will
be imported if possible.
First import Matplotlib’s plot interface (pylab works too)
Step33: You may find it useful to interactively test code using ipython -pylab,
which combines the power of ipython and matplotlib and provides a convenient
interactive mode.
To test if the import of networkx.drawing was successful draw G using one of
Step34: when drawing to an interactive display. Note that you may need to issue a
Matplotlib
Step35: command if you are not using matplotlib in interactive mode (see
Matplotlib FAQ
).
Step36: You can find additional options via draw_networkx() and
layouts via layout.
You can use multiple shells with draw_shell().
Step37: To save drawings to a file, use, for example
Step38: writes to the file path.png in the local directory. If Graphviz and
PyGraphviz or pydot, are available on your system, you can also use
nx_agraph.graphviz_layout(G) or nx_pydot.graphviz_layout(G) to get the
node positions, or write the graph in dot format for further processing. | Python Code:
import networkx as nx
G = nx.Graph()
G
Explanation: <!-- -*- coding: utf-8 -*- -->
Tutorial
This guide can help you start working with NetworkX.
Creating a graph
Create an empty graph with no nodes and no edges.
End of explanation
G.add_node(1)
Explanation: By definition, a Graph is a collection of nodes (vertices) along with
identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can
be any hashable object e.g., a text string, an image, an XML object, another
Graph, a customized node object, etc.
Nodes
The graph G can be grown in several ways. NetworkX includes many graph
generator functions and facilities to read and write graphs in many formats.
To get started though we’ll look at simple manipulations. You can add one node
at a time,
End of explanation
G.add_nodes_from([2, 3])
Explanation: add a list of nodes,
End of explanation
H = nx.path_graph(10)
G.add_nodes_from(H)
Explanation: or add any iterable container of nodes. You can also add nodes along with node
attributes if your container yields 2-tuples (node, node_attribute_dict).
Node attributes are discussed further below.
End of explanation
G.add_node(H)
Explanation: Note that G now contains the nodes of H as nodes of G.
In contrast, you could use the graph H as a node in G.
End of explanation
G.add_edge(1, 2)
e = (2, 3)
G.add_edge(*e) # unpack edge tuple*
Explanation: The graph G now contains H as a node. This flexibility is very powerful as
it allows graphs of graphs, graphs of files, graphs of functions and much more.
It is worth thinking about how to structure your application so that the nodes
are useful entities. Of course you can always use a unique identifier in G
and have a separate dictionary keyed by identifier to the node information if
you prefer.
Edges
G can also be grown by adding one edge at a time,
End of explanation
G.add_edges_from([(1, 2), (1, 3)])
Explanation: by adding a list of edges,
End of explanation
G.add_edges_from(H.edges)
Explanation: or by adding any ebunch of edges. An ebunch is any iterable
container of edge-tuples. An edge-tuple can be a 2-tuple of nodes or a 3-tuple
with 2 nodes followed by an edge attribute dictionary, e.g.,
(2, 3, {'weight': 3.1415}). Edge attributes are discussed further below
End of explanation
G.clear()
Explanation: There are no complaints when adding existing nodes or edges. For example,
after removing all nodes and edges,
End of explanation
G.add_edges_from([(1, 2), (1, 3)])
G.add_node(1)
G.add_edge(1, 2)
G.add_node("spam") # adds node "spam"
G.add_nodes_from("spam") # adds 4 nodes: 's', 'p', 'a', 'm'
G.add_edge(3, 'm')
Explanation: we add new nodes/edges and NetworkX quietly ignores any that are
already present.
End of explanation
G.number_of_nodes()
G.number_of_edges()
Explanation: At this stage the graph G consists of 8 nodes and 3 edges, as can be seen by:
End of explanation
list(G.nodes)
list(G.edges)
list(G.adj[1]) # or list(G.neighbors(1))
G.degree[1] # the number of edges incident to 1
Explanation: We can examine the nodes and edges. Four basic graph properties facilitate
reporting: G.nodes, G.edges, G.adj and G.degree. These
are set-like views of the nodes, edges, neighbors (adjacencies), and degrees
of nodes in a graph. They offer a continually updated read-only view into
the graph structure. They are also dict-like in that you can look up node
and edge data attributes via the views and iterate with data attributes
using methods .items(), .data('span').
If you want a specific container type instead of a view, you can specify one.
Here we use lists, though sets, dicts, tuples and other containers may be
better in other contexts.
End of explanation
G.edges([2, 'm'])
G.degree([2, 3])
Explanation: One can specify to report the edges and degree from a subset of all nodes
using an nbunch. An nbunch is any of: None (meaning all nodes), a node,
or an iterable container of nodes that is not itself a node in the graph.
End of explanation
G.remove_node(2)
G.remove_nodes_from("spam")
list(G.nodes)
G.remove_edge(1, 3)
Explanation: One can remove nodes and edges from the graph in a similar fashion to adding.
Use methods
Graph.remove_node(),
Graph.remove_nodes_from(),
Graph.remove_edge()
and
Graph.remove_edges_from(), e.g.
End of explanation
G.add_edge(1, 2)
H = nx.DiGraph(G) # create a DiGraph using the connections from G
list(H.edges())
edgelist = [(0, 1), (1, 2), (2, 3)]
H = nx.Graph(edgelist)
Explanation: When creating a graph structure by instantiating one of the graph
classes you can specify data in several formats.
End of explanation
G[1] # same as G.adj[1]
G[1][2]
G.edges[1, 2]
Explanation: What to use as nodes and edges
You might notice that nodes and edges are not specified as NetworkX
objects. This leaves you free to use meaningful items as nodes and
edges. The most common choices are numbers or strings, but a node can
be any hashable object (except None), and an edge can be associated
with any object x using G.add_edge(n1, n2, object=x).
As an example, n1 and n2 could be protein objects from the RCSB Protein
Data Bank, and x could refer to an XML record of publications detailing
experimental observations of their interaction.
We have found this power quite useful, but its abuse
can lead to unexpected surprises unless one is familiar with Python.
If in doubt, consider using convert_node_labels_to_integers() to obtain
a more traditional graph with integer labels.
Accessing edges and neighbors
In addition to the views Graph.edges(), and Graph.adj(),
access to edges and neighbors is possible using subscript notation.
End of explanation
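An added illustration of the point above (not part of the original tutorial): any hashable object can serve as a node, and convert_node_labels_to_integers() recovers an integer-labelled graph; the Protein namedtuple and identifiers are made up for the example.
from collections import namedtuple
Protein = namedtuple('Protein', ['pdb_id'])
P = nx.Graph()
P.add_edge(Protein('1TUP'), Protein('1YCR'), record='xml-record-placeholder')
list(P.nodes)
list(nx.convert_node_labels_to_integers(P, label_attribute='original').nodes(data=True))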
G.add_edge(1, 3)
G[1][3]['color'] = "blue"
G.edges[1, 2]['color'] = "red"
Explanation: You can get/set the attributes of an edge using subscript notation
if the edge already exists.
End of explanation
FG = nx.Graph()
FG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)])
for n, nbrs in FG.adj.items():
for nbr, eattr in nbrs.items():
wt = eattr['weight']
if wt < 0.5: print('(%d, %d, %.3f)' % (n, nbr, wt))
Explanation: Fast examination of all (node, adjacency) pairs is achieved using
G.adjacency(), or G.adj.items().
Note that for undirected graphs, adjacency iteration sees each edge twice.
End of explanation
for (u, v, wt) in FG.edges.data('weight'):
if wt < 0.5: print('(%d, %d, %.3f)' % (u, v, wt))
Explanation: Convenient access to all edges is achieved with the edges property.
End of explanation
G = nx.Graph(day="Friday")
G.graph
Explanation: Adding attributes to graphs, nodes, and edges
Attributes such as weights, labels, colors, or whatever Python object you like,
can be attached to graphs, nodes, or edges.
Each graph, node, and edge can hold key/value attribute pairs in an associated
attribute dictionary (the keys must be hashable). By default these are empty,
but attributes can be added or changed using add_edge, add_node or direct
manipulation of the attribute dictionaries named G.graph, G.nodes, and
G.edges for a graph G.
Graph attributes
Assign graph attributes when creating a new graph
End of explanation
G.graph['day'] = "Monday"
G.graph
Explanation: Or you can modify attributes later
End of explanation
G.add_node(1, time='5pm')
G.add_nodes_from([3], time='2pm')
G.nodes[1]
G.nodes[1]['room'] = 714
G.nodes.data()
Explanation: Node attributes
Add node attributes using add_node(), add_nodes_from(), or G.nodes
End of explanation
G.add_edge(1, 2, weight=4.7 )
G.add_edges_from([(3, 4), (4, 5)], color='red')
G.add_edges_from([(1, 2, {'color': 'blue'}), (2, 3, {'weight': 8})])
G[1][2]['weight'] = 4.7
G.edges[3, 4]['weight'] = 4.2
Explanation: Note that adding a node to G.nodes does not add it to the graph, use
G.add_node() to add new nodes. Similarly for edges.
Edge Attributes
Add/change edge attributes using add_edge(), add_edges_from(),
or subscript notation.
End of explanation
DG = nx.DiGraph()
DG.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
DG.out_degree(1, weight='weight')
DG.degree(1, weight='weight')
list(DG.successors(1))
list(DG.neighbors(1))
Explanation: The special attribute weight should be numeric as it is used by
algorithms requiring weighted edges.
Directed graphs
The DiGraph class provides additional properties specific to
directed edges, e.g.,
DiGraph.out_edges(), DiGraph.in_degree(),
DiGraph.predecessors(), DiGraph.successors() etc.
To allow algorithms to work with both classes easily, the directed versions of
neighbors() is equivalent to successors() while degree reports
the sum of in_degree and out_degree even though that may feel
inconsistent at times.
End of explanation
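A small added illustration (not in the original tutorial) of the weight attribute being consumed by an algorithm: shortest_path prefers the cheaper two-hop route when told to use 'weight'.
W = nx.Graph()
W.add_weighted_edges_from([(1, 2, 10.0), (1, 3, 1.0), (3, 2, 1.0)])
nx.shortest_path(W, 1, 2, weight='weight')  # goes via node 3 because 1.0 + 1.0 < 10.0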
H = nx.Graph(G) # convert G to undirected graph
Explanation: Some algorithms work only for directed graphs and others are not well
defined for directed graphs. Indeed the tendency to lump directed
and undirected graphs together is dangerous. If you want to treat
a directed graph as undirected for some measurement you should probably
convert it using Graph.to_undirected() or with
End of explanation
MG = nx.MultiGraph()
MG.add_weighted_edges_from([(1, 2, 0.5), (1, 2, 0.75), (2, 3, 0.5)])
dict(MG.degree(weight='weight'))
GG = nx.Graph()
for n, nbrs in MG.adjacency():
for nbr, edict in nbrs.items():
minvalue = min([d['weight'] for d in edict.values()])
GG.add_edge(n, nbr, weight = minvalue)
nx.shortest_path(GG, 1, 3)
Explanation: Multigraphs
NetworkX provides classes for graphs which allow multiple edges
between any pair of nodes. The MultiGraph and
MultiDiGraph
classes allow you to add the same edge twice, possibly with different
edge data. This can be powerful for some applications, but many
algorithms are not well defined on such graphs.
Where results are well defined,
e.g., MultiGraph.degree() we provide the function. Otherwise you
should convert to a standard graph in a way that makes the measurement
well defined.
End of explanation
petersen = nx.petersen_graph()
tutte = nx.tutte_graph()
maze = nx.sedgewick_maze_graph()
tet = nx.tetrahedral_graph()
Explanation: Graph generators and graph operations
In addition to constructing graphs node-by-node or edge-by-edge, they
can also be generated by
Applying classic graph operations, such as:
subgraph(G, nbunch) - induced subgraph view of G on nodes in nbunch
union(G1,G2) - graph union
disjoint_union(G1,G2) - graph union assuming all nodes are different
cartesian_product(G1,G2) - return Cartesian product graph
compose(G1,G2) - combine graphs identifying nodes common to both
complement(G) - graph complement
create_empty_copy(G) - return an empty copy of the same graph class
convert_to_undirected(G) - return an undirected representation of G
convert_to_directed(G) - return a directed representation of G
Using a call to one of the classic small graphs, e.g.,
End of explanation
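An added hedged sketch (not part of the original tutorial) of a few of the classic graph operations listed above:
G1 = nx.path_graph(4)
G2 = nx.complete_graph(3)
U = nx.disjoint_union(G1, G2)   # relabels nodes so the two vertex sets cannot collide
C = nx.complement(G1)           # edges exactly where G1 has none
S = G1.subgraph([0, 1, 2])      # induced subgraph view on a subset of nodes
print(U.number_of_nodes(), C.number_of_edges(), list(S.edges))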
K_5 = nx.complete_graph(5)
K_3_5 = nx.complete_bipartite_graph(3, 5)
barbell = nx.barbell_graph(10, 10)
lollipop = nx.lollipop_graph(10, 20)
Explanation: Using a (constructive) generator for a classic graph, e.g.,
End of explanation
er = nx.erdos_renyi_graph(100, 0.15)
ws = nx.watts_strogatz_graph(30, 3, 0.1)
ba = nx.barabasi_albert_graph(100, 5)
red = nx.random_lobster(100, 0.9, 0.9)
Explanation: Using a stochastic graph generator, e.g.,
End of explanation
nx.write_gml(red, "path.to.file")
mygraph = nx.read_gml("path.to.file")
Explanation: Reading a graph stored in a file using common graph formats,
such as edge lists, adjacency lists, GML, GraphML, pickle, LEDA and others.
End of explanation
G = nx.Graph()
G.add_edges_from([(1, 2), (1, 3)])
G.add_node("spam") # adds node "spam"
list(nx.connected_components(G))
sorted(d for n, d in G.degree())
nx.clustering(G)
Explanation: For details on graph formats see Reading and writing graphs
and for graph generator functions see Graph generators
Analyzing graphs
The structure of G can be analyzed using various graph-theoretic
functions such as:
End of explanation
sp = dict(nx.all_pairs_shortest_path(G))
sp[3]
Explanation: Some functions with large output iterate over (node, value) 2-tuples.
These are easily stored in a dict structure if you desire.
End of explanation
import matplotlib.pyplot as plt
Explanation: See Algorithms for details on graph algorithms
supported.
Drawing graphs
NetworkX is not primarily a graph drawing package but basic drawing with
Matplotlib as well as an interface to use the open source Graphviz software
package are included. These are part of the networkx.drawing module and will
be imported if possible.
First import Matplotlib’s plot interface (pylab works too)
End of explanation
G = nx.petersen_graph()
plt.subplot(121)
nx.draw(G, with_labels=True, font_weight='bold')
plt.subplot(122)
nx.draw_shell(G, nlist=[range(5, 10), range(5)], with_labels=True, font_weight='bold')
Explanation: You may find it useful to interactively test code using ipython -pylab,
which combines the power of ipython and matplotlib and provides a convenient
interactive mode.
To test if the import of networkx.drawing was successful draw G using one of
End of explanation
plt.show()
Explanation: when drawing to an interactive display. Note that you may need to issue a
Matplotlib
End of explanation
options = {
'node_color': 'black',
'node_size': 100,
'width': 3,
}
plt.subplot(221)
nx.draw_random(G, **options)
plt.subplot(222)
nx.draw_circular(G, **options)
plt.subplot(223)
nx.draw_spectral(G, **options)
plt.subplot(224)
nx.draw_shell(G, nlist=[range(5,10), range(5)], **options)
Explanation: command if you are not using matplotlib in interactive mode (see
Matplotlib FAQ
).
End of explanation
G = nx.dodecahedral_graph()
shells = [[2, 3, 4, 5, 6], [8, 1, 0, 19, 18, 17, 16, 15, 14, 7], [9, 10, 11, 12, 13]]
nx.draw_shell(G, nlist=shells, **options)
Explanation: You can find additional options via draw_networkx() and
layouts via layout.
You can use multiple shells with draw_shell().
End of explanation
nx.draw(G)
plt.savefig("path.png")
Explanation: To save drawings to a file, use, for example
End of explanation
from networkx.drawing.nx_pydot import write_dot
pos = nx.nx_agraph.graphviz_layout(G)
nx.draw(G, pos=pos)
write_dot(G, 'file.dot')
Explanation: writes to the file path.png in the local directory. If Graphviz and
PyGraphviz or pydot, are available on your system, you can also use
nx_agraph.graphviz_layout(G) or nx_pydot.graphviz_layout(G) to get the
node positions, or write the graph in dot format for further processing.
End of explanation |
6,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img align="right" src="../img/exercise_turning.png" />
Exercise
Step1: 3. Motion code
In the following program template, you must fill the gaps with the appropriate code.
The idea is to compute the turned angle of one of the encoders (no matter which one) as the difference between the current and initial values, then compute the traveled distance. The robot will move as long as this distance is lower than the target distance, then it will stop.
Step2: Solution | Python Code:
import packages.initialization
import pioneer3dx as p3dx
p3dx.init()
Explanation: <img align="right" src="../img/exercise_turning.png" />
Exercise: Turn the robot for an angle.
You are going to make a program for turning the robot from the initial position at the start of the simulation, in the center of the room.
The robot should stop after turning 90 degrees.
According to the specifications, the diameter of the wheels of the Pioneer 3DX robot is 195.3 mm
and the distance between the wheels is 330 mm.
1. Starting position
For a better visual understanding of the task, it is recommended that the robot starts at the center of the room.
You can easily relocate the robot there by simply restarting the simulation.
2. Initialization
After restarting the simulation, the robot needs to be initialized.
End of explanation
target = ... # target angle in radians
r = ... # wheel radius
L = ... # axis length
initialEncoder = ...
robotAngle = 0
while robotAngle < target:
p3dx.move(1.0,-1.0)
wheelAngle = ...
robotAngle = ...
p3dx.move(0,0)
Explanation: 3. Motion code
In the following program template, you must fill the gaps with the appropriate code.
The idea is to compute the turned angle of one of the encoders (no matter which one) as the difference between the current and initial values, then compute the traveled distance. The robot will move as long as this distance is lower than the target distance, then it will stop.
End of explanation
target = 3.1416/2 # target angle in radians
r = 0.1953 / 2 # wheel radius
L = 0.33 # axis length
initialEncoder = p3dx.leftEncoder
robotAngle = 0
while robotAngle < target:
p3dx.move(1.0,-1.0)
wheelAngle = p3dx.leftEncoder - initialEncoder
robotAngle = 2 * r * wheelAngle / L
p3dx.move(0,0)
Explanation: Solution
End of explanation |
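An optional added sanity check (not part of the exercise): from robotAngle = 2*r*wheelAngle/L, the wheel itself has to turn about target*L/(2*r) radians before the loop stops.
target, r, L = 3.1416/2, 0.1953/2, 0.33
print(target * L / (2 * r))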
6,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with model selection
Those who have used Scikit-Learn before will no doubt already be familiar with the Choosing the Right Estimator flow chart. This diagram is handy for those who are just getting started, as it models a simplified decision-making process for selecting the machine learning algorithm that is best suited to one's dataset.
Imports
Step1: Let's try it together.
Load the datasets
Step2: More than 50 samples?
First we are asked whether we have more than 50 samples for our dataset.
Step4: Predicting a quantity or a category?
Next we're asked if we're predicting a category. For the occupancy dataset, the answer is yes. For occupancy, we are predicting whether a room is occupied (0 for no, 1 for yes). Therefore, we will be looking for a classifier for our occupancy dataset.
Step5: Since our categorical dataset has fewer than 100,000 instances, we are prompted to start with sklearn.svm.LinearSVC (which will map the data to a higher dimensional feature space), or failing that, sklearn.neighbors.KNeighborsClassifier (which will assign instances to the class most common among its k nearest neighbors).
In our feature exploration of the occupancy dataset, you'll remember that the different attributes were not all on the same scale, so in addition to the other steps, we import scale so that we can standardize all the features before we run fit-predict
Step6: More than 50 samples?
Step7: Predicting a quantity or a category?
For the concrete dataset, the labels for the strength of the concrete are continuous, so we are predicting a quantity, not a category. Therefore, we will be looking for a regressor for our concrete dataset.
Step8: Meanwhile for our concrete dataset, we must determine whether we think all of the features are important, or only a few of them. If we decide to keep all the features as is, the chart suggests using sklearn.linear_model.RidgeRegression (which will identify features that are less predictive and ensure they have less influence in the model) or possibly sklearn.svm.SVR with a linear kernel (which is similar to the LinearSVC classifier). If we guess that some of the features are not important, we might decide instead to choose sklearn.linear_model.Lasso (which will drop out any features that aren't predictive) or sklearn.linear_model.ElasticNet (which will try to find a happy medium between the Lasso and Ridge methods, taking the linear combination of their L1 and L2 penalties).
Let's try a few because, why not? | Python Code:
from __future__ import print_function
import os
import numpy as np
import pandas as pd
from sklearn.preprocessing import scale
from sklearn.preprocessing import normalize
from sklearn import cross_validation as cv
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import r2_score, mean_squared_error as mse
from sklearn.svm import SVR
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import RANSACRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier
Explanation: Getting started with model selection
Those who have used Scikit-Learn before will no doubt already be familiar with the Choosing the Right Estimator flow chart. This diagram is handy for those who are just getting started, as it models a simplified decision-making process for selecting the machine learning algorithm that is best suited to one's dataset.
Imports
End of explanation
# Load the room occupancy dataset
occupancy = os.path.join('data','occupancy_data','datatraining.txt')
occupancy = pd.read_csv(occupancy, sep=',')
occupancy.columns = [
'date', 'temp', 'humid', 'light', 'co2', 'hratio', 'occupied'
]
Explanation: Let's try it together.
Load the datasets
End of explanation
print(len(occupancy))
Explanation: More than 50 samples?
First we are asked whether we have more than 50 samples for our dataset.
End of explanation
def classify(attributes, targets, model):
    """Executes classification using the specified model and returns
    a classification report."""
# Split data into 'test' and 'train' for cross validation
splits = cv.train_test_split(attributes, targets, test_size=0.2)
X_train, X_test, y_train, y_test = splits
model.fit(X_train, y_train)
y_true = y_test
y_pred = model.predict(X_test)
    print(classification_report(y_true, y_pred, target_names=['unoccupied', 'occupied']))  # two class names, matching labels 0 and 1
Explanation: Predicting a quantity or a category?
Next we're asked if we're predicting a category. For the occupancy dataset, the answer is yes. For occupancy, we are predicting whether a room is occupied (0 for no, 1 for yes). Therefore, we will be looking for a classifier for our occupancy dataset.
End of explanation
features = occupancy[['temp', 'humid', 'light', 'co2', 'hratio']]
labels = occupancy['occupied']
# Scale the features
stdfeatures = scale(features)
classify(stdfeatures, labels, LinearSVC())
classify(stdfeatures, labels, KNeighborsClassifier())
# Load the concrete compression data set
concrete = pd.read_excel(os.path.join('data','Concrete_Data.xls'))
concrete.columns = [
'cement', 'slag', 'ash', 'water', 'splast',
'coarse', 'fine', 'age', 'strength'
]
Explanation: Since our categorical dataset has fewer than 100,000 instances, we are prompted to start with sklearn.svm.LinearSVC (which will map the data to a higher dimensional feature space), or failing that, sklearn.neighbors.KNeighborsClassifier (which will assign instances to the class most common among its k nearest neighbors).
In our feature exploration of the occupancy dataset, you'll remember that the different attributes were not all on the same scale, so in addition to the other steps, we import scale so that we can standardize all the features before we run fit-predict:
End of explanation
print(len(concrete))
Explanation: More than 50 samples?
End of explanation
def regress(attributes, targets, model):
# Split data into 'test' and 'train' for cross validation
splits = cv.train_test_split(attributes, targets, test_size=0.2)
X_train, X_test, y_train, y_test = splits
model.fit(X_train, y_train)
y_true = y_test
y_pred = model.predict(X_test)
print("Mean squared error = {:0.3f}".format(mse(y_true, y_pred)))
print("R2 score = {:0.3f}".format(r2_score(y_true, y_pred)))
Explanation: Predicting a quantity or a category?
For the concrete dataset, the labels for the strength of the concrete are continuous, so we are predicting a quantity, not a category. Therefore, we will be looking for a regressor for our concrete dataset.
End of explanation
features = concrete[[
'cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age'
]]
labels = concrete['strength']
regress(features, labels, Ridge())
regress(features, labels, Lasso())
regress(features, labels, ElasticNet())
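# The chart also mentions sklearn.svm.SVR with a linear kernel when keeping all features;
# a hedged extra run with the same helper (it may be slower than the linear models above).
regress(features, labels, SVR(kernel='linear'))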
Explanation: Meanwhile for our concrete dataset, we must determine whether we think all of the features are important, or only a few of them. If we decide to keep all the features as is, the chart suggests using sklearn.linear_model.RidgeRegression (which will identify features that are less predictive and ensure they have less influence in the model) or possibly sklearn.svm.SVR with a linear kernel (which is similar to the LinearSVC classifier). If we guess that some of the features are not important, we might decide instead to choose sklearn.linear_model.Lasso (which will drop out any features that aren't predictive) or sklearn.linear_model.ElasticNet (which will try to find a happy medium between the Lasso and Ridge methods, taking the linear combination of their L1 and L2 penalties).
Let's try a few because, why not?
End of explanation |
6,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'> </a>
Step1: Fraction correctly identified
Table of contents
Define analysis free parameters
Data preprocessing
Fitting random forest
Fraction correctly identified
Spectrum
Unfolding
Step2: Define analysis free parameters
[ back to top ]
Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions
Step3: Get composition classifier pipeline
Step4: Define energy binning for this analysis
Step5: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
Step6: Fraction correctly identified
[ back to top ]
Calculate classifier performance via 10-fold CV
Step7: Plot fraction of events correctly classified vs energy
This is done via 10-fold cross-validation. This will give an idea as to how much variation there is in the classifier due to different training and testing samples.
Step8: Determine the mean and standard deviation of the fraction correctly classified for each energy bin | Python Code:
%load_ext watermark
%watermark -a 'Author: James Bourbeau' -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend
Explanation: <a id='top'> </a>
End of explanation
from __future__ import division, print_function
from collections import defaultdict
import itertools
import numpy as np
from scipy import interp
import pandas as pd
import matplotlib.pyplot as plt
import seaborn.apionly as sns
import pyprind
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc, classification_report
from sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, KFold, StratifiedKFold
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
import comptools as comp
import comptools.analysis.plotting as plotting
# color_dict allows for a consistent color-coding for each composition
color_dict = comp.analysis.get_color_dict()
%matplotlib inline
Explanation: Fraction correctly identified
Table of contents
Define analysis free parameters
Data preprocessing
Fitting random forest
Fraction correctly identified
Spectrum
Unfolding
End of explanation
config = 'IC86.2012'
# config = 'IC79'
comp_class = True
comp_key = 'MC_comp_class' if comp_class else 'MC_comp'
comp_list = ['light', 'heavy'] if comp_class else ['P', 'He', 'O', 'Fe']
Explanation: Define analysis free parameters
[ back to top ]
Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions
End of explanation
pipeline_str = 'GBDT'
pipeline = comp.get_pipeline(pipeline_str)
Explanation: Get composition classifier pipeline
End of explanation
energybins = comp.analysis.get_energybins()
Explanation: Define energy binning for this analysis
End of explanation
sim_train, sim_test = comp.load_dataframe(datatype='sim', config=config, comp_key=comp_key)
len(sim_train) + len(sim_test)
# feature_list = ['lap_cos_zenith', 'log_s125', 'log_dEdX', 'invqweighted_inice_radius_1_60']
# feature_list, feature_labels = comp.analysis.get_training_features(feature_list)
feature_list, feature_labels = comp.analysis.get_training_features()
Explanation: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
End of explanation
frac_correct_folds = comp.analysis.get_CV_frac_correct(sim_train, feature_list, pipeline_str, comp_list)
frac_correct_gen_err = {key: np.std(frac_correct_folds[key], axis=0) for key in frac_correct_folds}
Explanation: Fraction correctly identified
[ back to top ]
Calculate classifier performance via 10-fold CV
End of explanation
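# Hedged, generic sketch of the 10-fold CV idea using sklearn directly (illustration only:
# the comp.analysis helper above additionally returns the correctly classified fraction per
# energy bin, which this one-liner does not). It assumes sim_train keeps the label column
# named by comp_key and that the pipeline behaves like a scikit-learn estimator.
cv_scores = cross_val_score(pipeline, sim_train[feature_list], sim_train[comp_key],
                            cv=10, scoring='accuracy')
print('10-fold accuracy: {:0.3f} +/- {:0.3f}'.format(cv_scores.mean(), cv_scores.std()))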
fig, ax = plt.subplots()
for composition in comp_list:
# for composition in comp_list + ['total']:
print(composition)
performance_mean = np.mean(frac_correct_folds[composition], axis=0)
performance_std = np.std(frac_correct_folds[composition], axis=0)
# err = np.sqrt(frac_correct_gen_err[composition]**2 + reco_frac_stat_err[composition]**2)
plotting.plot_steps(energybins.log_energy_bins, performance_mean, yerr=performance_std,
ax=ax, color=color_dict[composition], label=composition)
plt.xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$')
ax.set_ylabel('Classification accuracy')
# ax.set_ylabel('Classification accuracy \n (statistical + 10-fold CV error)')
ax.set_ylim([0.0, 1.0])
ax.set_xlim([energybins.log_energy_min, energybins.log_energy_max])
ax.grid()
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.15),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
cv_str = 'Avg. accuracy:\n{:0.2f}\% (+/- {:0.1f}\%)'.format(np.mean(frac_correct_folds['total'])*100,
np.std(frac_correct_folds['total'])*100)
ax.text(7.4, 0.2, cv_str,
ha="center", va="center", size=10,
bbox=dict(boxstyle='round', fc="white", ec="gray", lw=0.8))
# plt.savefig('/home/jbourbeau/public_html/figures/frac-correct-{}.png'.format(pipeline_str))
plt.show()
Explanation: Plot fraction of events correctly classified vs energy
This is done via 10-fold cross-validation. This will give an idea as to how much variation there is in the classifier due to different training and testing samples.
End of explanation
avg_frac_correct_data = {'values': np.mean(frac_correct_folds['total'], axis=0), 'errors': np.std(frac_correct_folds['total'], axis=0)}
avg_frac_correct, avg_frac_correct_err = comp.analysis.averaging_error(**avg_frac_correct_data)
reco_frac, reco_frac_stat_err = comp.analysis.get_frac_correct(sim_train, sim_test, pipeline, comp_list)
# Plot fraction of events correctly classified vs energy
fig, ax = plt.subplots()
for composition in comp_list + ['total']:
err = np.sqrt(frac_correct_gen_err[composition]**2 + reco_frac_stat_err[composition]**2)
plotting.plot_steps(energybins.log_energy_bins, reco_frac[composition], err, ax,
color_dict[composition], composition)
plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
ax.set_ylabel('Fraction correctly identified')
ax.set_ylim([0.0, 1.0])
ax.set_xlim([energybins.log_energy_min, energybins.log_energy_max])
ax.grid()
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.1),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
cv_str = 'Accuracy: {:0.2f}\% (+/- {:0.1f}\%)'.format(avg_frac_correct*100, avg_frac_correct_err*100)
ax.text(7.4, 0.2, cv_str,
ha="center", va="center", size=10,
bbox=dict(boxstyle='round', fc="white", ec="gray", lw=0.8))
plt.savefig('/home/jbourbeau/public_html/figures/frac-correct-{}.png'.format(pipeline_str))
plt.show()
Explanation: Determine the mean and standard deviation of the fraction correctly classified for each energy bin
End of explanation |
6,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 4
Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to Preview the Grading for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.
This assignment requires you to find at least two datasets on the web which are related, and that you visualize these datasets to answer a question with the broad topic of religious events or traditions (see below) for the region of Ann Arbor, Michigan, United States, or the United States more broadly.
You can merge these datasets with data from different regions if you like! For instance, you might want to compare Ann Arbor, Michigan, United States to Ann Arbor, USA. In that case at least one source file must be about Ann Arbor, Michigan, United States.
You are welcome to choose datasets at your discretion, but keep in mind they will be shared with your peers, so choose appropriate datasets. Sensitive, confidential, illicit, and proprietary materials are not good choices for datasets for this assignment. You are welcome to upload datasets of your own as well, and link to them using a third party repository such as github, bitbucket, pastebin, etc. Please be aware of the Coursera terms of service with respect to intellectual property.
Also, you are welcome to preserve data in its original language, but for the purposes of grading you should provide english translations. You are welcome to provide multiple visuals in different languages if you would like!
As this assignment is for the whole course, you must incorporate principles discussed in the first week, such as having as high data-ink ratio (Tufte) and aligning with Cairo’s principles of truth, beauty, function, and insight.
Here are the assignment instructions
Step1: Visualise correlation
Let's see how the correlation looks when comparing the number of Catholic holidays to monthly spending in the US.
The target of the visualisation is to show the Catholic holidays for each month and project them onto the expenditure curve. This helps us to see whether or not there is a direct correlation between expenditure and holidays. | Python Code:
import pandas as pd
import numpy as np
data_url = "https://fred.stlouisfed.org/graph/fredgraph.csv?chart_type=line&recession_bars=on&log_scales=&bgcolor=%23e1e9f0&graph_bgcolor=%23ffffff&fo=Open+Sans&ts=12&tts=12&txtcolor=%23444444&show_legend=yes&show_axis_titles=yes&drp=0&cosd=2015-12-26&coed=2017-01-30&height=450&stacking=&range=Custom&mode=fred&id=DFXARC1M027SBEA&transformation=lin&nd=1959-01-01&ost=-99999&oet=99999&lsv=&lev=&mma=0&fml=a&fgst=lin&fgsnd=2009-06-01&fq=Monthly&fam=avg&vintage_date=&revision_date=&line_color=%234572a7&line_style=solid&lw=2&scale=left&mark_type=none&mw=2&width=1168"
df_expenditure = pd.read_csv(data_url)
df_expenditure.head()
# Find string between two strings
def find_between( s, first, last ):
try:
start = s.index( first ) + len( first )
end = s.index( last, start )
return s[start:end]
except ValueError:
return ""
from urllib.request import urlopen
link = "http://www.calendar-12.com/catholic_holidays/2016"
response = urlopen(link)
content = response.read().decode("utf-8")
# 'Poor man's' way of parsing days from HTML.
# Using this approach since the learning environment does not
# have proper packages installed for HTML parsing.
table = find_between(content, "<tbody>","</tbody>");
rows = table.split("/tr")
csv = "Day\n"
for row in rows:
day = find_between(row, '">', "</t")
day = find_between(day, "> ", "</")
csv = csv + day + "\n"
print(csv)
import sys
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
df_catholic = pd.read_csv(StringIO(csv), sep=";")
df_catholic.head()
from datetime import datetime
# Strip out weekday name
df_catholic["Date"] = df_catholic.apply(lambda row:row["Day"][row["Day"].find(",")+1:], axis=1)
# Convert to date
df_catholic["Date"] = df_catholic.apply(lambda row: datetime.strptime(row["Date"], " %B %d, %Y"), axis=1)
df_catholic["Holiday"] = 1
df_catholic.head()
# Convert to expenditure also to date
df_expenditure["Date"] = df_expenditure.apply(lambda row: datetime.strptime(row["DATE"], "%Y-%m-%d"), axis=1)
df_expenditure.head()
Explanation: Assignment 4
Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to Preview the Grading for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.
This assignment requires you to find at least two datasets on the web which are related, and that you visualize these datasets to answer a question with the broad topic of religious events or traditions (see below) for the region of Ann Arbor, Michigan, United States, or the United States more broadly.
You can merge these datasets with data from different regions if you like! For instance, you might want to compare Ann Arbor, Michigan, United States to Ann Arbor, USA. In that case at least one source file must be about Ann Arbor, Michigan, United States.
You are welcome to choose datasets at your discretion, but keep in mind they will be shared with your peers, so choose appropriate datasets. Sensitive, confidential, illicit, and proprietary materials are not good choices for datasets for this assignment. You are welcome to upload datasets of your own as well, and link to them using a third party repository such as github, bitbucket, pastebin, etc. Please be aware of the Coursera terms of service with respect to intellectual property.
Also, you are welcome to preserve data in its original language, but for the purposes of grading you should provide english translations. You are welcome to provide multiple visuals in different languages if you would like!
As this assignment is for the whole course, you must incorporate principles discussed in the first week, such as having as high data-ink ratio (Tufte) and aligning with Cairo’s principles of truth, beauty, function, and insight.
Here are the assignment instructions:
State the region and the domain category that your data sets are about (e.g., Ann Arbor, Michigan, United States and religious events or traditions).
You must state a question about the domain category and region that you identified as being interesting.
You must provide at least two links to available datasets. These could be links to files such as CSV or Excel files, or links to websites which might have data in tabular form, such as Wikipedia pages.
You must upload an image which addresses the research question you stated. In addition to addressing the question, this visual should follow Cairo's principles of truthfulness, functionality, beauty, and insightfulness.
You must contribute a short (1-2 paragraph) written justification of how your visualization addresses your stated research question.
What do we mean by religious events or traditions? For this category you might consider calendar events, demographic data about religion in the region and neighboring regions, participation in religious events, or how religious events relate to political events, social movements, or historical events.
Tips
Wikipedia is an excellent source of data, and I strongly encourage you to explore it for new data sources.
Many governments run open data initiatives at the city, region, and country levels, and these are wonderful resources for localized data sources.
Several international agencies, such as the United Nations, the World Bank, the Global Open Data Index are other great places to look for data.
This assignment requires you to convert and clean datafiles. Check out the discussion forums for tips on how to do this from various sources, and share your successes with your fellow students!
Example
Looking for an example? Here's what our course assistant put together for the Ann Arbor, MI, USA area using sports and athletics as the topic. Example Solution File
Solution
In the solution, personal food expenditure in 2016 is compared to the number of religious (Catholic) holidays in the US. The goal is to see whether there is a correlation between the number of holidays and food expenditure in the US in 2016.
Datasets
Personal consumption expenditure
U.S. Bureau of Economic Analysis, Personal consumption expenditures: Food [DFXARC1M027SBEA]
Filtered dataset for 2016 expenditure in CSV format
Catholic holidays
A List of Catholic Holidays in the 2016 Year.
Data preprocessing
End of explanation
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
%matplotlib notebook
fig, ax = plt.subplots()
a = ax.plot(list(df_expenditure["Date"]), df_expenditure["DFXARC1M027SBEA"], label="Expentiture", zorder=10)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.xlabel("Date")
plt.ylabel("Billions of Dollars")
plt.title("Catholic holiday effect on US food expenditure, 2016")
ax2 = ax.twinx()
#b = ax2.scatter(list(df_catholic["Date"]),df_catholic["Holiday"], s=60, c="red", alpha=0.7, label="Holiday")
b = ax2.bar(list(df_catholic["Date"]),df_catholic["Holiday"], alpha=0.2, label="Holiday", color="Red")
ax2 = plt.gca()
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
#my_xticks = ['','','Holiday','', '']
#plt.yticks(list(df_catholic["Holiday"]), my_xticks)
ax2 = plt.gca()
ax2.yaxis.set_visible(False)
# Combine legend
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2, loc=4, frameon = False)
months = ['Jan','Feb','Mar','Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct' ,'Nov', 'Dec', 'Jan']
plt.xticks(list(df_expenditure["Date"]), months, rotation='vertical')
fig.autofmt_xdate()
plt.show()
Explanation: Visualise correlation
Let's see how the correlation looks when comparing the number of Catholic holidays to monthly spending in the US.
The target of the visualisation is to show the Catholic holidays for each month and project them onto the expenditure curve. This helps us to see whether or not there is a direct correlation between expenditure and holidays.
End of explanation |
6,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="navigation"></a>
Single-cell Hi-C data analysis
Welcome to the second part of our analysis. Here we will work specifically with single-cell data.
The outline
Step1: In this case, we have multiple datasets, thus we have to iterate through the list of files.
Step2: <a id="filtering"></a>
2. Data filtering and binning
Go top
In the case of single-cell analysis, the contact filtering is much more sophisticated. For example, the standard PCR duplicate filter is replaced by "filterLessThanDistance". The code below is based on the hiclib single-cell scripts for Flyamer et al. 2017.
Step3: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Step4: Comparison of single-cell and bulk datasets
Step5: <a id="meta"></a>
5. Compartments and TADs
Go top
Step6: Topologically associating domains (TADs)
Step7: The problem of the variable parameter for TAD calling might be resolved via parameter optimization. For example, the best parameter could be selected so that the mean TAD size fits expectations (~500 Kb in this case). | Python Code:
import os
from hiclib import mapping
from mirnylib import h5dict, genome
bowtie_path = '/opt/conda/bin/bowtie2'
enzyme = 'DpnII'
bowtie_index_path = '/home/jovyan/GENOMES/HG19_IND/hg19_chr1'
fasta_path = '/home/jovyan/GENOMES/HG19_FASTA/'
chrms = ['1']
genome_db = genome.Genome(fasta_path, readChrms=chrms)
min_seq_len = 120
len_step = 5
nthreads = 2
temp_dir = 'tmp'
bowtie_flags = '--very-sensitive'
Explanation: <a id="navigation"></a>
Single-cell Hi-C data analysis
Welcome to the second part of our analysis. Here we will work specifically with single-cell data.
The outline:
Reads mapping
Data filtering and binning
Hi-C data visualisation
Compartments and TADs
If you have any questions, please contact Aleksandra Galitsyna ([email protected])
<a id="mapping"></a>
1. Reads mapping
Go top
In this notebook, we will be working with three datasets from Flyamer et al. 2017 (GEO ID GSE80006), placed in the DATA/FASTQ/ directory.
In contrast to the previous analysis, you can now work with single-cell Hi-C results.
End of explanation
experiment_ids = ['72-sc-1', '54-sc-1', '58-sc-1']
for exp_id in experiment_ids:
infile1 = '/home/jovyan/DATA/FASTQ1/K562_{}_R1.fastq'.format(exp_id)
infile2 = '/home/jovyan/DATA/FASTQ1/K562_{}_R2.fastq'.format(exp_id)
out1 = '/home/jovyan/DATA/SAM/K562_{}_R1.chr1.sam'.format(exp_id)
out2 = '/home/jovyan/DATA/SAM/K562_{}_R2.chr1.sam'.format(exp_id)
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile1,
out_sam_path = out1,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile2,
out_sam_path = out2,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
out = '/home/jovyan/DATA/HDF5/K562_{}.fragments.hdf5'
mapped_reads = h5dict.h5dict(out.format(exp_id))
mapping.parse_sam(
sam_basename1 = out1,
sam_basename2 = out2,
out_dict = mapped_reads,
genome_db = genome_db,
enzyme_name = enzyme,
save_seqs = False,
keep_ids = False)
Explanation: In this case, we have multiple datasets, thus we have to iterate through the list of files.
End of explanation
import h5py
import numpy as np
from hiclib import fragmentHiC
from hiclib.fragmentHiC import HiCdataset as HiCdatasetorig
from mirnylib.numutils import uniqueIndex
class HiCdataset(HiCdatasetorig):
"Modification of HiCDataset to include all filters"
def filterLessThanDistance(self):
# This is the old function used to filter "duplicates".
# After the final submission of the manuscript, it was replaced by a better function that does the same,
# but at bp resolution, not 100 bp.
M = self.N
for i in range(5):
for j in range(5):
chrStrandID = 10000000 * 10000000 * (np.array(self.chrms1 * (self.strands1 + 1), dtype = np.int64) * 100 + self.chrms2 * (self.strands2 + 1))
print(len(np.unique(chrStrandID)))
posid = np.array((self.cuts1 + i * 100) // 500, dtype = np.int64) * 10000000 + (self.cuts2 + i * 100) // 500
N = self.N
self.maskFilter(uniqueIndex(posid + chrStrandID))
print(N, "filtered to", self.N)
self.metadata["321_quasiDuplicatesRemoved"] = M - self.N
output = []
for exp_id in experiment_ids:
inp = '/home/jovyan/DATA/HDF5/K562_{}.fragments.hdf5'.format(exp_id)
out = '/home/jovyan/DATA/HDF5/K562_{}.tmp.hdf5'.format(exp_id)
outstat = '/home/jovyan/DATA/HDF5/K562_{}.stat.txt'.format(exp_id)
fragments = HiCdataset(
filename = out,
genome = genome_db,
maximumMoleculeLength= 500,
enzymeName = 1000,
mode = 'w')
fragments.parseInputData(
dictLike=inp)
fragments.filterLessThanDistance()
fs = fragments.fragmentSum()
fragments.fragmentFilter(fs < 9)
output.append(list(fragments.metadata.items()))
out_bin = '/home/jovyan/DATA/HDF5/K562_{}.binned_{}.hdf5'
res_kb = [100, 20]
for res in res_kb:
print(res)
outmap = out_bin.format(exp_id, str(res)+'kb')
fragments.saveHeatmap(outmap, res*1000)
del fragments
output
Explanation: <a id="filtering"></a>
2. Data filtering and binning
Go top
In the case of single-cell analysis, the contact filtering is much more sophisticated. For example, the standard PCR duplicate filter is replaced by "filterLessThanDistance". The code below is based on the hiclib single-cell scripts for Flyamer et al. 2017.
End of explanation
from hiclib.binnedData import binnedDataAnalysis
res = 100
data_hic = binnedDataAnalysis(resolution=res*1000, genome=genome_db)
for exp_id in experiment_ids:
data_hic.simpleLoad('/home/jovyan/DATA/HDF5/K562_{}.binned_{}.hdf5'.format(exp_id, str(res)+'kb'), exp_id)
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
%matplotlib inline
plt.figure(figsize=[10,10])
plt.imshow(data_hic.dataDict['54-sc-1'][200:500, 200:500], cmap='jet', interpolation='None')
Explanation: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
End of explanation
data_hic.simpleLoad('/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'.format(str(res)+'kb'),'bulk')
data_hic.removeDiagonal()
mtx1 = data_hic.dataDict['bulk']
mtx2 = data_hic.dataDict['54-sc-1']
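# Combine both maps in one heatmap: the bulk map (scaled by its mean) fills the upper
# triangle and the single-cell map the lower triangle, so the two can be compared directly.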
mtx_tmp = np.triu(mtx1)/np.mean(mtx1) + np.tril(mtx2)/np.mean(mtx2)
plt.figure(figsize=[10,10])
plt.imshow(mtx_tmp[200:500, 200:500], cmap='Blues', interpolation='None', vmax=900)
mtx_merged = sum([data_hic.dataDict[exp_id] for exp_id in experiment_ids])
mtx1 = data_hic.dataDict['bulk']
mtx2 = mtx_merged
mtx_tmp = np.triu(mtx1)/np.mean(mtx1) + np.tril(mtx2)/np.mean(mtx2)
plt.figure(figsize=[10,10])
plt.imshow(mtx_tmp[200:500, 200:500], cmap='Blues', interpolation='None', vmax=800)
Explanation: Comparison of single-cell and bulk datasets
End of explanation
from matplotlib import gridspec
eig = np.loadtxt('/home/jovyan/DATA/ANNOT/comp_K562_100Kb_chr1.tsv')
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])#width_ratios=[2,20],
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx_tmp[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto')
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])#, sharey=ax)
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
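# Hedged sketch (illustration only) of how a compartment eigenvector such as the one loaded
# from comp_K562_100Kb_chr1.tsv is typically derived: normalise the bulk map by the expected
# contact frequency at each genomic distance, take the correlation matrix of the result and
# keep the eigenvector of the largest eigenvalue. The provided file was computed elsewhere
# and its exact normalisation may differ from this simplified version.
obs = data_hic.dataDict['bulk']
mask = obs.sum(axis=0) > 0                                  # drop empty bins
sub = obs[np.ix_(mask, mask)]
n = sub.shape[0]
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
expected = np.array([sub.diagonal(d).mean() for d in range(n)])
oe = sub / (expected[dist] + 1e-9)                          # observed / expected
corr = np.nan_to_num(np.corrcoef(oe))
vals, vecs = np.linalg.eigh(corr)
eig_sketch = vecs[:, np.argmax(vals)]                       # first eigenvector ~ A/B compartments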
Explanation: <a id="meta"></a>
5. Compartments and TADs
Go top
End of explanation
import lavaburst
mtx = data_hic.dataDict['54-sc-1']
good_bins = mtx.astype(bool).sum(axis=0) > 1 # We have to mask rows/cols if data is missing
gammas=[0.1, 1.0, 5.0, 10.0] # set of parameters gamma for TADs calling
segments_dict = {}
for gam_current in gammas:
print(gam_current)
S = lavaburst.scoring.armatus_score(mtx, gamma=gam_current, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation() # Positions of TADs for input matrix
segments_dict[gam_current] = segments.copy()
# TADs at different parameters for particular cell (54-sc-1)
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(mtx, n_diags=100)
start_tmp = 0
end_tmp = 500
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
blues = sns.cubehelix_palette(0.4, gamma=0.5, rot=-0.3, dark=0.1, light=0.9, as_cmap=True)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap=blues)
cmap = mpl.cm.get_cmap('brg')
gammas = segments_dict.keys()
for n, gamma in enumerate(gammas):
segments = segments_dict[gamma]
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
a = segments[-1]
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5, label=gamma)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
ax.set_xlim([0,end_tmp-start_tmp])
ax.set_ylim([100,-100])
ax.legend(bbox_to_anchor=(1.1, 1.05))
ax.set_aspect(0.5)
Explanation: Topologically associating domains (TADs)
End of explanation
optimal_gammas = {}
for exp_id in experiment_ids:
mtx = data_hic.dataDict[exp_id][0:1000, 0:1000]
good_bins = mtx.astype(bool).sum(axis=0) > 1 # We have to mask rows/cols if data is missing
gammas = np.arange(2, 24, 1)*1000/3250 # Desired set of gammas for testing
means = []
for gam_current in gammas:
S = lavaburst.scoring.armatus_score(mtx, gamma=gam_current, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation() # Positions of TADs for input matrix
tad_lens = segments[:,1]-segments[:,0]
good_lens = (tad_lens>=200/res)&(tad_lens<900) # We do not consider too large or too small segments as TADs
means.append(np.mean(tad_lens[good_lens]))
idx = np.argmin(np.abs(np.array(means)-500/res))
opt_mean, opt_gamma = means[idx], gammas[idx]
print(exp_id, opt_mean*res, opt_gamma)
optimal_gammas[exp_id] = opt_gamma
# TADs in single cells compared with merged single-cell data
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(mtx_merged, n_diags=100)
start_tmp = 0
end_tmp = 500
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap='Reds')
for n, exp in enumerate(experiment_ids):
A = data_hic.dataDict[exp][bgn:end, bgn:end].copy()
good_bins = A.astype(bool).sum(axis=0) > 0
gamma = optimal_gammas[exp]
S = lavaburst.scoring.modularity_score(A, gamma=gamma, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation()
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
tad_len = a[1]-a[0]
if (tad_len<200/res)|(tad_len>=900):
continue
ax.fill_between([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [0, -(a[1]-a[0]), 0], 0,
facecolor='#6100FF', interpolate=True, alpha=0.2)
a = segments[-1]
tad_len = a[1]-a[0]
if (tad_len<200/res)|(tad_len>=900):
continue
ax.fill_between([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [0, -(a[1]-a[0]), 0], 0,
facecolor='#6100FF', interpolate=True, alpha=0.2)
ax.set_xlim([start_tmp,end_tmp])
ax.set_ylim([100,-100])
ax.set_aspect(0.5)
# TADs in single cells compared with bulk Hi-C data
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(data_hic.dataDict['bulk'], n_diags=100)
start_tmp = 0
end_tmp = 300
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap='Reds')
for n, exp in enumerate(experiment_ids):
A = data_hic.dataDict[exp][bgn:end, bgn:end].copy()
good_bins = A.astype(bool).sum(axis=0) > 0
gamma = optimal_gammas[exp]
S = lavaburst.scoring.modularity_score(A, gamma=gamma, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation()
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
tad_len = a[1]-a[0]
if (tad_len<200/res)|(tad_len>=900):
continue
ax.fill_between([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [0, -(a[1]-a[0]), 0], 0,
facecolor='#6100FF', interpolate=True, alpha=0.2)
a = segments[-1]
tad_len = a[1]-a[0]
if (tad_len<200/res)|(tad_len>=900):
continue
ax.fill_between([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [0, -(a[1]-a[0]), 0], 0,
facecolor='#6100FF', interpolate=True, alpha=0.2)
ax.set_xlim([start_tmp,end_tmp])
ax.set_ylim([100,-100])
ax.set_aspect(0.5)
Explanation: The problem of the variable parameter for TAD calling might be resolved via parameter optimization. For example, the best parameter could be selected so that the mean TAD size fits expectations (~500 Kb in this case).
End of explanation |
6,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative Filtering
Step1: Finding Similar Users
Euclidean Distance Score
One very simple way to calculate a similarity score is to use a Euclidean distance score, which takes the items that people have ranked in common and uses them as axes for a chart.
Step3: Pearson Correlation Score
A slightly more sophisticated way to determine the similarity between people's interests is to use a Pearson correlation coefficient. The correlation coefficient is a measure of how well two sets of data fit on a straight line.
Step4: Ranking the Critics
Now that you have functions for comparing two people, you can create a function that scores everyone against a given person and finds the closest matches. In this case, I'm interested in learning which movie critics have tastes similar to mine so that I know whose advice I should take when deciding on a movie.
The following function uses a Python list comprehension to compare me to every other user in the dictionary using one of the previously defined distance metrics. Then it returns the first n items of the sorted results. | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# A dictionary of movie critics and their ratings of a small
# set of movies
critics={'Lisa Rose': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.5,
'Just My Luck': 3.0, 'Superman Returns': 3.5, 'You, Me and Dupree': 2.5,
'The Night Listener': 3.0},
'Gene Seymour': {'Lady in the Water': 3.0, 'Snakes on a Plane': 3.5,
'Just My Luck': 1.5, 'Superman Returns': 5.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 3.5},
'Michael Phillips': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.0,
'Superman Returns': 3.5, 'The Night Listener': 4.0},
'Claudia Puig': {'Snakes on a Plane': 3.5, 'Just My Luck': 3.0,
'The Night Listener': 4.5, 'Superman Returns': 4.0,
'You, Me and Dupree': 2.5},
'Mick LaSalle': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'Just My Luck': 2.0, 'Superman Returns': 3.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 2.0},
'Jack Matthews': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'The Night Listener': 3.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 3.5},
'Toby': {'Snakes on a Plane':4.5,'You, Me and Dupree':1.0,'Superman Returns':4.0}}
critics['Lisa Rose']['Lady in the Water']
critics['Toby']['Snakes on a Plane']=4.5
critics['Toby']
Explanation: Collaborative Filtering
End of explanation
from math import sqrt
# Returns a distance-based similarity score for person1 and person2
def sim_distance(prefs,person1,person2):
# Get the list of shared_items
si={}
for item in prefs[person1]:
if item in prefs[person2]:
si[item]=1
# if they have no ratings in common, return 0
if len(si)==0: return 0
# Add up the squares of all the differences
sum_of_squares=sum([pow(prefs[person1][item]-prefs[person2][item],2)
for item in si])
return 1/(1+sqrt(sum_of_squares))
#This function can be called with two names to get a similarity score
#This gives you a similarity score between Lisa Rose and Gene Seymour.
sim_distance(critics,'Lisa Rose','Gene Seymour')
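# A couple of extra hedged checks with names from the critics dictionary above:
# identical preferences give 1.0, and the score shrinks as the shared ratings diverge.
print sim_distance(critics,'Toby','Toby')
print sim_distance(critics,'Toby','Mick LaSalle')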
Explanation: Finding Similar Users
Euclidean Distance Score
One very simple way to calculate a similarity score is to use a Euclidean distance score, which takes the items that people have ranked in common and uses them as axes for a chart.
End of explanation
Returns the Pearson correlation coefficient for p1 and p2. This function will return a value between −1 and 1.
A value of 1 means that the two people have exactly the same ratings for every item. Unlike with the distance metric,
you don't need to change this value to get it to the right scale.
def sim_pearson(prefs,p1,p2):
# Get the list of mutually rated items
si={}
for item in prefs[p1]:
if item in prefs[p2]: si[item]=1
# Find the number of elements
n=len(si)
# if they have no ratings in common, return 0
if n==0: return 0
# Add up all the preferences
sum1=sum([prefs[p1][it] for it in si])
sum2=sum([prefs[p2][it] for it in si])
# Sum up the squares
sum1Sq=sum([pow(prefs[p1][it],2) for it in si])
sum2Sq=sum([pow(prefs[p2][it],2) for it in si])
# Sum up the products
pSum=sum([prefs[p1][it]*prefs[p2][it] for it in si])
# Calculate Pearson score
num=pSum - (sum1*sum2/n)
den=sqrt((sum1Sq - pow(sum1,2)/n)*(sum2Sq - pow(sum2,2)/n))
if den==0: return 0
r = num/den
return r
#Run the code
print sim_pearson(critics,'Lisa Rose','Gene Seymour')
def dict_to_list(dict):
dictlist = []
for key, value in dict.iteritems():
temp = [key,value]
dictlist.append(temp)
return dictlist
#Plot two critics' ratings against each other
import collections
dim1 = critics ['Lisa Rose']
dim2 = critics['Gene Seymour']
dim1 = collections.OrderedDict(sorted(dim1.items()))
dim2 = collections.OrderedDict(sorted(dim2.items()))
print dim1
print dim2
x = list(dim1.values())
y = list(dim2.values())
print x
print y
plt.scatter(x,y, alpha=0.5)
plt.xlabel('Lisa Rose')
plt.ylabel('Gene Seymour')
plt.show()
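# Hedged side-by-side check of the two metrics for one pair of critics: Pearson is high
# whenever two critics' ratings vary together, even if one tends to rate higher overall,
# so it can exceed the Euclidean score for the same pair.
print sim_distance(critics,'Gene Seymour','Jack Matthews')
print sim_pearson(critics,'Gene Seymour','Jack Matthews')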
Explanation: Pearson Correlation Score
A slightly more sophisticated way to determine the similarity between people's interests is to use a Pearson correlation coefficient. The correlation coefficient is a measure of how well two sets of data fit on a straight line.
End of explanation
# Returns the best matches for person from the prefs dictionary.
# Number of results and similarity function are optional params.
def topMatches(prefs,person,n=5,similarity=sim_pearson):
scores=[(similarity(prefs,person,other),other)
for other in prefs if other!=person]
# Sort the list so the highest scores appear at the top
scores.sort( )
scores.reverse( )
return scores[0:n]
# Calling this function with a name gives a list of movie critics and their similarity scores, compared to the name:
topMatches(critics,'Toby',n=3)
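# The similarity metric is a keyword argument, so a hedged variation is to rank the same
# critics with the Euclidean score defined earlier instead of the default Pearson score:
topMatches(critics,'Toby',n=3,similarity=sim_distance)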
Explanation: Ranking the Critics
Now that you have functions for comparing two people, you can create a function that scores everyone against a given person and finds the closest matches. In this case, I'm interested in learning which movie critics have tastes similar to mine so that I know whose advice I should take when deciding on a movie.
The following function uses a Python list comprehension to compare me to every other user in the dictionary using one of the previously defined distance metrics. Then it returns the first n items of the sorted results.
End of explanation |
6,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>CASE - Bacterial resistance experiment</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
In this case study, we will make use of the open data affiliated with the following journal article
Step1: Reading and processing the data
The data for this use case contains the evolution of different bacteria populations when combined with different phage treatments (viruses). The evolution of the bacterial population is measured by using the optical density (OD) at 3 moments during the experiment
Step2: Read the Falcor data and subset the columns of interest
Step3: Tidy the main_experiment data
(If you're wondering what tidy data representations are, check again the pandas_08_reshaping_data.ipynb notebook)
Actually, the columns OD_0h, OD_20h and OD_72h represent the same variable (i.e. optical_density) and the column names themselves encode another variable, i.e. experiment_time_h. Hence, the table is stored in wide format and we could tidy these columns by converting them to 2 columns
Step4: <div class="alert alert-success">
**EXERCISE**
Step5: Visual data exploration
Step6: <div class="alert alert-success">
**EXERCISE**
Step7: <div class="alert alert-success">
**EXERCISE**
Use a Seaborn `violin plot` to check the distribution of the `optical_density` in each of the experiment time phases (`experiment_time_h` in the x-axis).
<details><summary>Hints</summary>
- See https
Step8: <div class="alert alert-success">
**EXERCISE**
For each `Phage_t` in an individual subplot, use a `violin plot` to check the distribution of the `optical_density` in each of the experiment time phases (`experiment_time_h`)
<details><summary>Hints</summary>
- The technical term for splitting in subplots using a categorical variable is 'faceting' (or sometimes also 'small multiple'), see https
Step9: <div class="alert alert-success">
**EXERCISE**
Create a summary table of the __average__ `optical_density` with the `Bacterial_genotype` in the rows and the `experiment_time_h` in the columns
<details><summary>Hints</summary>
- No Seaborn required here, rely on Pandas `pivot_table()` function to reshape tables.
</details>
Step10: Advanced/optional solution
Step11: <div class="alert alert-success">
**EXERCISE**
- Calculate for each combination of `Bacterial_genotype`, `Phage_t` and `experiment_time_h` the <i>mean</i> `optical_density` and store the result as a DataFrame called `density_mean` (tip
Step12: (Optional) Reproduce chart of the original paper
Check Figure 2 of the original journal paper in the 'correction' part of the <a href="http
Step13: <div class="alert alert-success">
**EXERCISE**
We will first reproduce 'Figure 2' without the error bars
Step15: Seaborn supports confidence intervals by different estimators when multiple values are combined (see this example). In this particular case, the error estimates are already provided and are not symmetrical. Hence, we need to find a method to use the lower log10 LBc and upper log10 UBc confidence intervals.
Stackoverflow can help you with this, see this thread to solve the following exercise.
<div class="alert alert-success">
**EXERCISE**
Reproduce 'Figure 2' with the error bars using the information from [this Stackoverflow thread](https | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
Explanation: <p><font size="6"><b>CASE - Bacterial resistance experiment</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
In this case study, we will make use of the open data affiliated with the following journal article:
Arias-Sánchez FI, Hall A (2016) Effects of antibiotic resistance alleles on bacterial evolutionary responses to viral parasites. Biology Letters 12(5): 20160064. https://doi.org/10.1098/rsbl.2016.0064
<img src="../img/bacteriophage.jpeg">
Check the full paper on the web version. The study handles:
Antibiotic resistance has wide-ranging effects on bacterial phenotypes and evolution. However, the influence of antibiotic resistance on bacterial responses to parasitic viruses remains unclear, despite the ubiquity of such viruses in nature and current interest in therapeutic applications. We experimentally investigated this by exposing various Escherichia coli genotypes, including eight antibiotic-resistant genotypes and a mutator, to different viruses (lytic bacteriophages). Across 960 populations, we measured changes in population density and sensitivity to viruses, and tested whether variation among bacterial genotypes was explained by their relative growth in the absence of parasites, or mutation rate towards phage resistance measured by fluctuation tests for each phage
End of explanation
main_experiment = pd.read_excel("data/Dryad_Arias_Hall_v3.xlsx",
sheet_name="Main experiment")
main_experiment = main_experiment.drop(columns=["AB_r", "Survival_72h", "PhageR_72h"]) # focus on specific subset for this use case)
Explanation: Reading and processing the data
The data for this use case contains the evolution of different bacteria populations when combined with different phage treatments (viruses). The evolution of the bacterial population is measured by using the optical density (OD) at 3 moments during the experiment: at the start (0h), after 20h and at the end (72h).
The data is available on Dryad, a general purpose data repository providing all kinds of data sets linked to journal papers. The downloaded data is available in this repository in the data folder as an excel-file called Dryad_Arias_Hall_v3.xlsx.
For the exercises, two sheets of the excel file will be used:
* Main experiment:
| Variable name | Description |
|---------------:|:-------------|
|AB_r | Antibiotic resistance |
|Bacterial_genotype | Bacterial genotype |
|Phage_t | Phage treatment |
|OD_0h | Optical density at the start of the experiment (0h) |
|OD_20h | Optical density after 20h |
|OD_72h | Optical density at the end of the experiment (72h) |
|Survival_72h | Population survival at 72h (1=survived, 0=extinct) |
|PhageR_72h | Bacterial sensitivity to the phage they were exposed to (0=no bacterial growth, 1= colony formation in the presence of phage) |
Falcor: we focus on a subset of the columns:
| Variable name | Description |
|---------------:|:-------------|
| Phage | Bacteriophage used in the fluctuation test (T4, T7 and lambda) |
| Bacterial_genotype | Bacterial genotype. |
| log10 Mc | Log 10 of corrected mutation rate |
| log10 UBc | Log 10 of corrected upper bound |
| log10 LBc | Log 10 of corrected lower bound |
Reading the main experiment data set from the corresponding sheet:
End of explanation
falcor = pd.read_excel("data/Dryad_Arias_Hall_v3.xlsx", sheet_name="Falcor",
skiprows=1)
falcor = falcor[["Phage", "Bacterial_genotype", "log10 Mc", "log10 UBc", "log10 LBc"]]
falcor.head()
Explanation: Read the Falcor data and subset the columns of interest:
End of explanation
main_experiment["experiment_ID"] = ["ID_" + str(idx) for idx in range(len(main_experiment))]
main_experiment
Explanation: Tidy the main_experiment data
(If you're wondering what tidy data representations are, check again the pandas_08_reshaping_data.ipynb notebook)
Actually, the columns OD_0h, OD_20h and OD_72h represent the same variable (i.e. optical_density) and the column names themselves encode another variable, i.e. experiment_time_h. Hence, the table is stored in wide format and we could tidy these columns by converting them to 2 columns: experiment_time_h and optical_density.
Before making any changes to the data, we will add an identifier column for each of the current rows to make sure we keep the connection in between the entries of a row when converting from wide to long format.
End of explanation
tidy_experiment = main_experiment.melt(id_vars=['Bacterial_genotype', 'Phage_t', 'experiment_ID'],
value_vars=['OD_0h', 'OD_20h', 'OD_72h'],
var_name='experiment_time_h',
value_name='optical_density', )
tidy_experiment
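# Hedged sanity check (illustration only): thanks to the experiment_ID column, the long
# table can be pivoted back to the original wide layout, so no information was lost.
tidy_experiment.pivot(index='experiment_ID', columns='experiment_time_h',
                      values='optical_density').head()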
Explanation: <div class="alert alert-success">
**EXERCISE**:
Convert the columns `OD_0h`, `OD_20h` and `OD_72h` to a long format with the values stored in a column `optical_density` and the time in the experiment as `experiment_time_h`. Save the variable as `tidy_experiment`.
<details><summary>Hints</summary>
- Have a look at `pandas_08_reshaping_data.ipynb` to find out the required function.
- Remember to check the documentation of a function using the `SHIFT` + `TAB` keystroke combination when the cursor is on the function of interest.
</details>
</div>
End of explanation
tidy_experiment.head()
Explanation: Visual data exploration
End of explanation
sns.set_style("white")
histplot = sns.displot(data=tidy_experiment, x="optical_density",
color='grey', edgecolor='white')
histplot.fig.suptitle("Optical density distribution")
histplot.axes[0][0].set_ylabel("Frequency");
Explanation: <div class="alert alert-success">
**EXERCISE**:
* Make a histogram using the [Seaborn package](https://seaborn.pydata.org/index.html) to visualize the distribution of the `optical_density`
* Change the overall theme to any of the available Seaborn themes
* Change the border color of the bars to `white` and the fill color of the bars to `grey`
Using Matplotlib, further adjust the histogram:
- Add a Figure title "Optical density distribution".
- Overwrite the y-axis label to "Frequency".
<details><summary>Hints</summary>
- See https://seaborn.pydata.org/tutorial/distributions.html#plotting-univariate-histograms.
- There are five preset seaborn themes: `darkgrid`, `whitegrid`, `dark`, `white`, and `ticks`.
- Make sure to set the theme before creating the graph.
- Seaborn relies on Matplotlib to plot the individual bars, so the available parameters (`**kwargs`) to adjust the bars that can be passed (e.g. `color` and `edgecolor`) are enlisted in the [matplotlib.axes.Axes.bar](https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.axes.Axes.bar.html) documentation.
- The output of a Seaborn plot is an object from which the Matplotlib `Figure` and `Axes` can be accessed, respectively `snsplot.fig` and `snsplot.axes`. Note that the `axes` are always returned as a 2x2 array of Axes (also if it only contains a single element).
</details>
</div>
End of explanation
sns.catplot(data=tidy_experiment, x="experiment_time_h",
y="optical_density", kind="violin")
Explanation: <div class="alert alert-success">
**EXERCISE**
Use a Seaborn `violin plot` to check the distribution of the `optical_density` in each of the experiment time phases (`experiment_time_h` in the x-axis).
<details><summary>Hints</summary>
- See https://seaborn.pydata.org/tutorial/categorical.html#violinplots.
- Whereas the previous exercise focuses on the distribution of data (`distplot`), this exercise focuses on distributions _for each category of..._ and needs the categorical functions of Seaborn (`catplot`).
</details>
End of explanation
sns.catplot(data=tidy_experiment, x="experiment_time_h", y="optical_density",
col="Phage_t", col_wrap=2, kind="violin")
Explanation: <div class="alert alert-success">
**EXERCISE**
For each `Phage_t` in an individual subplot, use a `violin plot` to check the distribution of the `optical_density` in each of the experiment time phases (`experiment_time_h`)
<details><summary>Hints</summary>
- The technical term for splitting in subplots using a categorical variable is 'faceting' (or sometimes also 'small multiple'), see https://seaborn.pydata.org/tutorial/categorical.html#showing-multiple-relationships-with-facets
- You want to wrap the number of columns on 2 subplots, look for a function argument in the documentation of the `catplot` function.
</details>
End of explanation
pd.pivot_table(tidy_experiment, values='optical_density',
index='Bacterial_genotype',
columns='experiment_time_h',
aggfunc='mean')
Explanation: <div class="alert alert-success">
**EXERCISE**
Create a summary table of the __average__ `optical_density` with the `Bacterial_genotype` in the rows and the `experiment_time_h` in the columns
<details><summary>Hints</summary>
- No Seaborn required here, rely on Pandas `pivot_table()` function to reshape tables.
</details>
End of explanation
# advanced/optional solution
tidy_experiment.groupby(['Bacterial_genotype', 'experiment_time_h'])['optical_density'].mean().unstack()
Explanation: Advanced/optional solution:
End of explanation
density_mean = (tidy_experiment
.groupby(['Bacterial_genotype','Phage_t', 'experiment_time_h'])['optical_density']
.mean().reset_index())
sns.catplot(data=density_mean, kind="bar",
x='Bacterial_genotype',
y='optical_density',
hue='Phage_t',
row="experiment_time_h",
sharey=False,
aspect=3, height=3,
palette="colorblind")
Explanation: <div class="alert alert-success">
**EXERCISE**
- Calculate for each combination of `Bacterial_genotype`, `Phage_t` and `experiment_time_h` the <i>mean</i> `optical_density` and store the result as a DataFrame called `density_mean` (tip: use `reset_index()` to convert the resulting Series to a DataFrame).
- Based on `density_mean`, make a _barplot_ of the mean optical density for each `Bacterial_genotype`, with for each `Bacterial_genotype` an individual bar and with each `Phage_t` in a different color/hue (i.e. grouped bar chart).
- Use the `experiment_time_h` to split into subplots. As we mainly want to compare the values within each subplot, make sure the scales in each of the subplots are adapted to its own data range, and put the subplots on different rows.
- Adjust the size and aspect ratio of the Figure to your own preference.
- Change the color scale of the bars to another Seaborn palette.
<details><summary>Hints</summary>
- _Calculate for each combination of..._ should remind you to the `groupby` functionality of Pandas to calculate statistics for each group.
- The exercise is still using the `catplot` function of Seaborn with `bar`s. Variables are used to vary the `hue` and `row`.
- Each subplot its own range is the same as not sharing axes (`sharey` argument).
- Seaborn in fact has six variations of matplotlib’s palette, called `deep`, `muted`, `pastel`, `bright`, `dark`, and `colorblind`. See https://seaborn.pydata.org/tutorial/color_palettes.html#qualitative-color-palettes
</details>
End of explanation
falcor.head()
Explanation: (Optional) Reproduce chart of the original paper
Check Figure 2 of the original journal paper in the 'correction' part of the <a href="http://rsbl.royalsocietypublishing.org/content/roybiolett/12/5/20160064.full.pdf">pdf</a>:
<img src="https://royalsocietypublishing.org/cms/attachment/eb511c57-4167-4575-b8b3-93fbcf728572/rsbl20160064f02.jpg" width="500">
End of explanation
falcor["Bacterial_genotype"] = falcor["Bacterial_genotype"].replace({'WT(2)': 'WT',
'MUT(2)': 'MUT'})
falcor.head()
sns.catplot(data=falcor, kind="point",
x='Bacterial_genotype',
y='log10 Mc',
row="Phage",
join=False, ci=None,
aspect=3, height=3,
color="black")
Explanation: <div class="alert alert-success">
**EXERCISE**
We will first reproduce 'Figure 2' without the error bars:
- Make sure the `WT(2)` and `MUT(2)` categories are used as respectively `WT` and `MUT` by adjusting them with Pandas first.
- Use the __falcor__ data and the Seaborn package. The 'log10 mutation rate' on the figure corresponds to the `log10 Mc` column.
<details><summary>Hints</summary>
- To replace values using a mapping (dictionary with the keys the current values and the values the new values), use the Pandas `replace` method.
- This is another example of a `catplot`, using `point`s to represent the data.
- The `join` argument defines if individual points need to be connected or not.
- One combination appears multiple times, so make sure to not yet use confidence intervals by setting `ci` to `Null`.
</details>
End of explanation
falcor["Bacterial_genotype"] = falcor["Bacterial_genotype"].replace({'WT(2)': 'WT',
'MUT(2)': 'MUT'})
def errorbar(x, y, low, high, **kws):
Utility function to link falcor data representation with the errorbar representation
plt.errorbar(x, y, (y - low, high - y), capsize=3, fmt="o", color="black", ms=4)
sns.set_style("ticks")
g = sns.FacetGrid(data=falcor, row="Phage", aspect=3, height=3)
g.map(errorbar,
"Bacterial_genotype", "log10 Mc",
"log10 LBc", "log10 UBc")
Explanation: Seaborn supports confidence intervals by different estimators when multiple values are combined (see this example). In this particular case, the error estimates are already provided and are not symmetrical. Hence, we need to find a method to use the lower log10 LBc and upper log10 UBc confidence intervals.
Stackoverflow can help you with this, see this thread to solve the following exercise.
<div class="alert alert-success">
**EXERCISE**
Reproduce 'Figure 2' with the error bars using the information from [this Stackoverflow thread](https://stackoverflow.com/questions/38385099/adding-simple-error-bars-to-seaborn-factorplot). You do not have to adjust the order of the categories in the x-axis.
<details><summary>Hints</summary>
- Do not use the `catplot` function, but first create the layout of the graph by `FacetGrid` on the `Phage` variable.
- Next, map a custom `errorbar` function to the FactgGrid as the example from Stackoverflow.
- Adjust/Simplify the `errorbar` custom function for your purpose.
- Matplotlib uses the `capsize` to draw upper and lower lines of the intervals.
</details>
End of explanation |
6,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 3
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree
Step4: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call
Step5: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
Step6: For the rest of the notebook we'll use the sqft_living variable. For plotting purposes (connected the dots) you'll need to sort by the values of sqft_living first
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial? | Python Code:
import graphlab
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed
* Use matplotlib to visualize polynomial regressions
* Use matplotlib to visualize the same polynomial degree on different subsets of the data
* Use a validation set to select a polynomial degree
* Assess the final fit using test data
We will continue to use the House data from previous notebooks.
Fire up graphlab create
End of explanation
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
Explanation: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions.
For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)
End of explanation
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
Explanation: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
End of explanation
def polynomial_sframe(feature, degree):
    # assume that degree >= 1
    # initialize the SFrame:
    poly_sframe = graphlab.SFrame()
    # and set poly_sframe['power_1'] equal to the passed feature
    poly_sframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
        for power in range(2, degree+1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # then assign poly_sframe[name] to the appropriate power of feature
            poly_sframe[name] = feature.apply(lambda x: x**power)
    return poly_sframe
Explanation: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
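If you do not have GraphLab available, a hedged, pandas-only sketch of the same column-per-power idea looks like this (purely illustrative, not part of the assignment):
import pandas as pd

def polynomial_dataframe(feature, degree):
    # same layout: one column per power of the input feature
    poly_df = pd.DataFrame({'power_1': feature})
    for power in range(2, degree + 1):
        poly_df['power_' + str(power)] = feature ** power
    return poly_df

print(polynomial_dataframe(pd.Series([1., 2., 3.]), 3))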
End of explanation
print polynomial_sframe(tmp, 3)
Explanation: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
End of explanation
sales = sales.sort('sqft_living')
Explanation: For the rest of the notebook we'll use the sqft_living variable. For plotting purposes (connected the dots) you'll need to sort by the values of sqft_living first:
End of explanation
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
Explanation: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
End of explanation
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
Explanation: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
End of explanation
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
Explanation: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
End of explanation |
6,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--NAVIGATION-->
< Setting Up | Contents | Rates Information >
Understanding the Documentation
In order to make use of the API effectively, we need to understand the input parameters and the corresponding output. Fortunately, Oanda provides a comprehensive API guide, as shown below.
Oanda
The Oanda API guide is very comprehensive. Select REST amongst the options of REST-V20, REST, JAVA, FIX, and MT4.
We are particularly interested in the Resources section.
<img src="../img/OANDA_APIv20.PNG">
OandapyV20
Follow this oandapyV20 to head to the github repository. There is a brief README.md guide that demonstrates how to use oandapy with the Oanda API.
Let's do a simple walk through here to get started.
We need to first import the oandapyv20 library.
The oandapyV20 package contains a client class, oandapyV20.API, to communicate with the REST-V20 interface. It processes requests that can be created from the endpoint classes. For its communication it relies on the requests package.
Step1: We need to provide Python with the accountID and access_token.
We will show in a later lesson how we can segregate this from our main code body, thus protecting our account.
Step2: Below is an extract from the oandapyv20 documentation. Note that you only need to provide accountID and params.
<img src="../img/v20_pricing_info.PNG">
The corresponding Oanda API for pricing information
Step3: Note the steps above | Python Code:
import oandapyV20
from oandapyV20 import API
import oandapyV20.endpoints.pricing as pricing
Explanation: <!--NAVIGATION-->
< Setting Up | Contents | Rates Information >
Understanding the Documentation
In order to make use of the API effectively, we need to understand the input parameters and the corresponding output. Fortunately, Oanda provides a comprehensive API guide, as shown below.
Oanda
The Oanda API guide is very comprehensive. Select REST amongst the options of REST-V20, REST, JAVA, FIX, and MT4.
We are particularly interested in the Resources section.
<img src="../img/OANDA_APIv20.PNG">
OandapyV20
Follow this oandapyV20 to head to the github repository. There is a brief README.md guide that demonstrates how to use oandapy with the Oanda API.
Let's do a simple walk through here to get started.
We need to first import the oandapyv20 library.
The oandapyV20 package contains a client class, oandapyV20.API, to communicate with the REST-V20 interface. It processes requests that can be created from the endpoint classes. For its communication it relies on: requests (requests).
The client keeps no state of a request. The response of a request is assigned to the request instance. The response is also returned as a return value by the client.
End of explanation
accountID = "111-111-1111111-111"
access_token = "11111111111111111111111111111111-11111111111111111111111111111111"
Explanation: We need to provide Python with the accountID and access_token.
We will show in a later lesson how we can segregate this from our main code body, thus protecting our account.
End of explanation
api = API(access_token) # step 1
params ={"instruments": "EUR_USD"} # step 2
r = pricing.PricingInfo(accountID=accountID, params=params) # step 3
api.request(r); # step 4
Explanation: Below is an extract from the oandapyv20 documentation. Note that you only need to provide accountID and params.
<img src="../img/v20_pricing_info.PNG">
The corresponding Oanda API for pricing information:
<img src="../img/OANDA_APIv20_pricing_endpoints.PNG">
We then instantiate the client, as the next step shows. You can use another variable name instead of api.
End of explanation
print(r.response)
r.response['prices']
r.response['prices'][0]['asks']
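As a small, hedged variation on the same pattern (reusing the client and the response structure shown above), several instruments can be requested at once:
# request prices for more than one instrument in a single call
params = {"instruments": "EUR_USD,GBP_USD"}
r = pricing.PricingInfo(accountID=accountID, params=params)
api.request(r)
# each entry in 'prices' carries the instrument name plus bid/ask ladders
for price in r.response["prices"]:
    print(price["instrument"], price["bids"][0]["price"], price["asks"][0]["price"])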
Explanation: Note the steps above:
Create an API client instance by invoking API(..) and store it in a variable. In our example, we use api (step 1)
Provide the needed request parameters and store them in a variable. In our example, we use params (step 2)
Create the endpoint request object and store it in a variable. In our example, we use r (step 3)
Submit the request by passing the request object to api.request(). The response is returned by the client and is also stored on the request object as r.response (step 4)
End of explanation |
6,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a left cerebellum volume source space
Generate a volume source space of the left cerebellum and plot its vertices
relative to the left cortical surface source space and the freesurfer
segmentation file.
Step1: Setup the source spaces
Step2: Plot the positions of each source space | Python Code:
# Author: Alan Leggitt <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import setup_source_space, setup_volume_source_space
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
subject = 'sample'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'
Explanation: Generate a left cerebellum volume source space
Generate a volume source space of the left cerebellum and plot its vertices
relative to the left cortical surface source space and the freesurfer
segmentation file.
End of explanation
# setup a cortical surface source space and extract left hemisphere
surf = setup_source_space(subject, subjects_dir=subjects_dir, add_dist=False)
lh_surf = surf[0]
# setup a volume source space of the left cerebellum cortex
volume_label = 'Left-Cerebellum-Cortex'
sphere = (0, 0, 0, 0.12)
lh_cereb = setup_volume_source_space(
subject, mri=aseg_fname, sphere=sphere, volume_label=volume_label,
subjects_dir=subjects_dir, sphere_units='m')
# Combine the source spaces
src = surf + lh_cereb
Explanation: Setup the source spaces
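As a quick, optional sanity check (assuming the standard MNE SourceSpaces keys), we can inspect what ended up in the combined object:
# each entry reports its kind ('surf' or 'vol') and the number of active sources
for s in src:
    print(s['type'], s['nuse'])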
End of explanation
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
surfaces='white', coord_frame='head',
src=src)
mne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,
distance=0.30, focalpoint=(-0.03, -0.01, 0.03))
Explanation: Plot the positions of each source space
End of explanation |
6,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features
Step1: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: Before proceeding lets check the geometry.
Step10: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
Step11: With the geometry and materials finished, we now just need to define simulation parameters.
Step12: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our fuel assembly geometry.
Step14: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections
Step15: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell, universe, or mesh) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property.
Step16: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE: The default value of the by_nuclide parameter is False, so the following step is not necessary but is included for illustrative purposes.
Step17: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
Step18: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
Step19: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step20: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects, so we use the optional merge parameter of Library.add_to_tallies_file(...) to combine them where possible and reduce the tallying overhead.
Step21: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
Step22: Time to run the calculation and get our results!
Step23: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
Step24: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
Step25: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step26: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
Step27: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
Step28: No geometry file needs to be written, as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group instead of a continuous-energy calculation.
Step29: Let's clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
Step30: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
Step31: At this point, the problem is set up and we can run the multi-group calculation.
Step32: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
Step33: Next, we can load the continuous-energy eigenvalue for comparison.
Step34: Let's compare the two eigenvalues, including their bias
Step35: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply due to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
Step36: We can now do the same for the Continuous-Energy results.
Step37: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
Step38: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following
Step39: Now we can re-run OpenMC to obtain our results
Step40: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
Step41: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data in the library is required to have the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering; for the water, we will leave the P3 data as-is.
Step42: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
Step43: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
Step44: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
import openmc
%matplotlib inline
Explanation: The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features:
Calculation of multi-group cross sections for a fuel assembly
Automated creation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin fission rates comparison between continuous-energy and multi-group OpenMC.
Modification of the scattering data in the library to show the flexibility of the multi-group solver
Generate Input Files
End of explanation
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_element('H', 4.9457e-2)
water.add_element('O', 2.4732e-2)
water.add_element('B', 8.0042e-6)
Explanation: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, zircaloy, water))
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
# The x0 and y0 parameters (0. and 0.) are the default values for an
# openmc.ZCylinder object. We could therefore leave them out to no effect
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:, :] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
root_universe.plot(origin=(0., 0., 0.), width=(21.42, 21.42), pixels=(500, 500), color_by='material')
Explanation: Before proceeding lets check the geometry.
End of explanation
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 600
inactive = 50
particles = 3000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.run_mode = 'eigenvalue'
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters.
End of explanation
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups([0., 0.625, 20.0e6])
Explanation: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
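The same workflow also works with finer structures; as a hedged illustration (the boundary values below, in eV, are only for demonstration):
# e.g. a coarse 8-group structure defined by 9 monotonically increasing boundaries
fine_groups = openmc.mgxs.EnergyGroups(
    [0., 0.058, 0.14, 0.625, 4.0, 5.53e3, 821.0e3, 2.231e6, 20.0e6])
print(fine_groups.num_groups)  # -> 8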
End of explanation
# Initialize a 2-group MGXS Library for OpenMC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our fuel assembly geometry.
End of explanation
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
Explanation: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections: "total", "absorption", "nu-fission", '"fission", "nu-scatter matrix", "multiplicity matrix", and "chi".
The "multiplicity matrix" type is a relatively rare cross section type. This data is needed to provide OpenMC's multi-group mode with additional information needed to accurately treat scattering multiplication (i.e., (n,xn) reactions)), including how this multiplication varies depending on both incoming and outgoing neutron energies.
End of explanation
# Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = "material"
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_materials().values()
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell, universe, or mesh) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property.
End of explanation
# Do not compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = False
Explanation: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE: The default value of the by_nuclide parameter is False, so the following step is not necessary but is included for illustrative purposes.
End of explanation
# Set the Legendre order to 3 for P3 scattering
mgxs_lib.legendre_order = 3
Explanation: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
End of explanation
# Check the library - if no errors are raised, then the library is satisfactory.
mgxs_lib.check_library_for_openmc_mgxs()
Explanation: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
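If you prefer an explicit message, a small sketch wrapping the check is shown below (it assumes a ValueError is raised for an incompatible library):
try:
    mgxs_lib.check_library_for_openmc_mgxs()
    print('MGXS Library is compatible with OpenMC multi-group mode')
except ValueError as err:
    # the exact exception type may vary between OpenMC versions
    print('MGXS Library is missing required data:', err)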
End of explanation
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
Explanation: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
Explanation: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as O(N) for N tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below.
End of explanation
# Instantiate a tally Mesh
mesh = openmc.RegularMesh()
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
Explanation: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Time to run the calculation and get our results!
End of explanation
# Move the statepoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
Explanation: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint(ce_spfile, autolink=False)
# Load the summary file in its new location
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
End of explanation
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
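As an optional check (assuming the Library.get_mgxs helper), an individual cross section can be pulled out and printed once the tallies are loaded:
# retrieve the 'total' MGXS computed for the fuel material and print it
fuel_total = mgxs_lib.get_mgxs(fuel, 'total')
fuel_total.print_xs()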
End of explanation
# Create a MGXS File which can then be written to disk
mgxs_file = mgxs_lib.create_mg_library(xs_type='macro', xsdata_names=['fuel', 'zircaloy', 'water'])
# Write the file to disk using the default filename of "mgxs.h5"
mgxs_file.export_to_hdf5()
Explanation: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
End of explanation
# Re-define our materials to use the multi-group macroscopic data
# instead of the continuous-energy data.
# 1.6% enriched fuel UO2
fuel_mg = openmc.Material(name='UO2', material_id=1)
fuel_mg.add_macroscopic('fuel')
# cladding
zircaloy_mg = openmc.Material(name='Clad', material_id=2)
zircaloy_mg.add_macroscopic('zircaloy')
# moderator
water_mg = openmc.Material(name='Water', material_id=3)
water_mg.add_macroscopic('water')
# Finally, instantiate our Materials object
materials_file = openmc.Materials((fuel_mg, zircaloy_mg, water_mg))
# Set the location of the cross sections file
materials_file.cross_sections = 'mgxs.h5'
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
End of explanation
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: No geometry file needs to be written, as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group instead of a continuous-energy calculation.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add fission and flux mesh to tally for plotting using the same mesh we've already defined
mesh_tally = openmc.Tally(name='mesh tally')
mesh_tally.filters = [openmc.MeshFilter(mesh)]
mesh_tally.scores = ['fission']
tallies_file.append(mesh_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Let's clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
End of explanation
# First lets plot the fuel data
# We will first add the continuous-energy data
fig = openmc.plot_xs(fuel, ['total'])
# We will now add in the corresponding multi-group data and show the result
openmc.plot_xs(fuel_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# Then repeat for the zircaloy data
fig = openmc.plot_xs(zircaloy, ['total'])
openmc.plot_xs(zircaloy_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# And finally repeat for the water data
fig = openmc.plot_xs(water, ['total'])
openmc.plot_xs(water_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
Explanation: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
End of explanation
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: At this point, the problem is set up and we can run the multi-group calculation.
End of explanation
# Move the StatePoint File
mg_spfile = './statepoint_mg.h5'
os.rename('statepoint.' + str(batches) + '.h5', mg_spfile)
# Move the Summary file
mg_sumfile = './summary_mg.h5'
os.rename('summary.h5', mg_sumfile)
# Rename and then load the last statepoint file and keff value
mgsp = openmc.StatePoint(mg_spfile, autolink=False)
# Load the summary file in its new location
mgsu = openmc.Summary(mg_sumfile)
mgsp.link_with_summary(mgsu)
# Get keff
mg_keff = mgsp.k_combined
Explanation: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
End of explanation
ce_keff = sp.k_combined
Explanation: Next, we can load the continuous-energy eigenvalue for comparison.
End of explanation
bias = 1.0E5 * (ce_keff - mg_keff)
print('Continuous-Energy keff = {0:1.6f}'.format(ce_keff))
print('Multi-Group keff = {0:1.6f}'.format(mg_keff))
print('bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
Explanation: Let's compare the two eigenvalues, including their bias
End of explanation
# Get the OpenMC fission rate mesh tally data
mg_mesh_tally = mgsp.get_tally(name='mesh tally')
mg_fission_rates = mg_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
mg_fission_rates.shape = (17,17)
# Normalize to the average pin power
mg_fission_rates /= np.mean(mg_fission_rates[mg_fission_rates > 0.])
Explanation: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply due to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
End of explanation
# Get the OpenMC fission rate mesh tally data
ce_mesh_tally = sp.get_tally(name='mesh tally')
ce_fission_rates = ce_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
ce_fission_rates.shape = (17,17)
# Normalize to the average pin power
ce_fission_rates /= np.mean(ce_fission_rates[ce_fission_rates > 0.])
Explanation: We can now do the same for the Continuous-Energy results.
End of explanation
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
ce_fission_rates[ce_fission_rates == 0.] = np.nan
mg_fission_rates[mg_fission_rates == 0.] = np.nan
# Plot the CE fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(ce_fission_rates, interpolation='none', cmap='jet')
plt.title('Continuous-Energy Fission Rates')
# Plot the MG fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(mg_fission_rates, interpolation='none', cmap='jet')
plt.title('Multi-Group Fission Rates')
Explanation: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
End of explanation
# Set the maximum scattering order to 0 (i.e., isotropic scattering)
settings_file.max_order = 0
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following: a truncated Legendre series of up to the 10th order, a histogram distribution, and a tabular distribution. Any combination of these representations can be used by OpenMC during the transport process, so long as all constituents of a given material use the same representation. This means it is possible to have water represented by a tabular distribution and fuel represented by a Legendre if so desired.
Note: To have the highest runtime performance OpenMC natively converts Legendre series to a tabular distribution before the transport begins. This default functionality can be turned off with the tabular_legendre element of the settings.xml file (or for the Python API, the openmc.Settings.tabular_legendre attribute).
This section will examine the following:
- Re-run the MG-mode calculation with P0 scattering everywhere using the openmc.Settings.max_order attribute
- Re-run the problem with only the water represented with P3 scattering and P0 scattering for the remaining materials using the Python API's ability to convert between formats.
Global P0 Scattering
First we begin by re-running with P0 scattering (i.e., isotropic) everywhere. If a global maximum order is requested, the most effective way to do this is to use the max_order attribute of our openmc.Settings object.
End of explanation
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Now we can re-run OpenMC to obtain our results
End of explanation
# Move the statepoint File
mgp0_spfile = './statepoint_mg_p0.h5'
os.rename('statepoint.' + str(batches) + '.h5', mgp0_spfile)
# Move the Summary file
mgp0_sumfile = './summary_mg_p0.h5'
os.rename('summary.h5', mgp0_sumfile)
# Load the last statepoint file and keff value
mgsp_p0 = openmc.StatePoint(mgp0_spfile, autolink=False)
# Get keff
mg_p0_keff = mgsp_p0.k_combined
bias_p0 = 1.0E5 * (ce_keff - mg_p0_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('P0 bias [pcm]: {0:1.1f}'.format(bias_p0.nominal_value))
Explanation: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
End of explanation
# Convert the zircaloy and fuel data to P0 scattering
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name != 'water':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('legendre', 0)
Explanation: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data in the library is required to have the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering; for the water, we will leave the P3 data as-is.
End of explanation
# Convert the formats as discussed
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name == 'zircaloy':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('histogram', 2)
elif xsdata.name == 'fuel':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('tabular', 2)
mgxs_file.export_to_hdf5('mgxs.h5')
Explanation: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
End of explanation
settings_file.max_order = None
# Export to "settings.xml"
settings_file.export_to_xml()
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
End of explanation
# Load the last statepoint file and keff value
mgsp_mixed = openmc.StatePoint('./statepoint.' + str(batches) + '.h5')
mg_mixed_keff = mgsp_mixed.k_combined
bias_mixed = 1.0E5 * (ce_keff - mg_mixed_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('Mixed Scattering bias [pcm]: {0:1.1f}'.format(bias_mixed.nominal_value))
Explanation: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution
End of explanation |
6,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preconfigured Energy Balance Models
In this document the basic use of climlab's preconfigured EBM class is shown.
Contents are how to
setup an EBM model
show and access subprocesses
integrate the model
access and plot various model variables
calculate the global mean of the temperature
Step1: Model Creation
The regular path for the EBM class is climlab.model.ebm.EBM but it can also be accessed through climlab.EBM
An EBM model instance is created through
Step2: By default many parameters are set during initialization
Step3: The model consists of one state variable (surface temperature) and a couple of defined subprocesses.
Step4: Model subprocesses
The subprocesses are stored in a dictionary and can be accessed through
Step5: So to access the time type of the Longwave Radiation subprocess for example, type
Step6: Model integration
The model time dictionary shows information about all the time related content and quantities.
Step7: To integrate the model forward in time, different methods are available
Step8: The model time step has increased from 0 to 1
Step9: Plotting model variables
A couple of interesting model variables are stored in a dictionary named diagnostics. It has the following entries
Step10: They can be accessed in two ways
Step11: The following code does the plotting for some model variables.
Step12: The energy balance is zero at every latitude. That means the model is in equilibrium. Perfect!
Global mean temperature
The model's state dictionary has the following entries
Step13: Like diagnostics, state variables can be accessed in two ways
Step14: The global mean of the model's surface temperature can be calculated through | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
Explanation: Preconfigured Energy Balance Models
In this document the basic use of climlab's preconfigured EBM class is shown.
Contents are how to
setup an EBM model
show and access subprocesses
integrate the model
access and plot various model variables
calculate the global mean of the temperature
End of explanation
# model creation
ebm_model = climlab.EBM(name='My EBM')
Explanation: Model Creation
The regular path for the EBM class is climlab.model.ebm.EBM but it can also be accessed through climlab.EBM
An EBM model instance is created through
End of explanation
# print model parameters
ebm_model.param
Explanation: By default many parameters are set during initialization:
num_lat=90, S0=const.S0, A=210., B=2., D=0.55, water_depth=10., Tf=-10, a0=0.3, a2=0.078, ai=0.62, timestep=const.seconds_per_year/90., T0=12., T2=-40
For further details see the climlab documentation.
Many of the input parameters are stored in the following dictionary:
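As a hedged illustration, the same constructor accepts overrides for these defaults (the values below are arbitrary and only show the pattern):
# a second model with a different solar constant, diffusivity and resolution
cold_ebm = climlab.EBM(name='Colder EBM', S0=1300., D=0.6, num_lat=180)
print(cold_ebm.param['S0'], cold_ebm.param['D'])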
End of explanation
# print model states and subprocesses
print(ebm_model)
Explanation: The model consists of one state variable (surface temperature) and a couple of defined subprocesses.
End of explanation
# access model subprocesses
ebm_model.subprocess.keys()
Explanation: Model subprocesses
The subprocesses are stored in a dictionary and can be accessed through
End of explanation
# access specific subprocess through dictionary
ebm_model.subprocess['LW'].time_type
# For interactive convenience, you can also use attribute access for the same thing:
ebm_model.subprocess.LW.time_type
Explanation: So to access the time type of the Longwave Radiation subprocess for example, type:
End of explanation
# accessing the model time dictionary
ebm_model.time
Explanation: Model integration
The model time dictionary shows information about all the time related content and quantities.
End of explanation
# integrate model for a single timestep
ebm_model.step_forward()
Explanation: To integrate the model forward in time, different methods are available:
End of explanation
ebm_model.time['steps']
# integrate model for 50 days
ebm_model.integrate_days(50.)
# integrate model for one year
ebm_model.integrate_years(1.)
# integrate model until solution converges
ebm_model.integrate_converge()
Explanation: The model time step has increased from 0 to 1:
End of explanation
ebm_model.diagnostics.keys()
Explanation: Plotting model variables
A couple of interesting model variables are stored in a dictionary named diagnostics. It has the following entries:
End of explanation
ebm_model.icelat
Explanation: They can be accessed in two ways:
Through dictionary methods like ebm_model.diagnostics['ASR']
As process attributes like ebm_model.ASR
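For example, both access styles can be used interchangeably for any diagnostic, such as the global-mean absorbed shortwave radiation:
# the two styles refer to the same underlying diagnostic array
print(climlab.global_mean(ebm_model.diagnostics['ASR']))
print(climlab.global_mean(ebm_model.ASR))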
End of explanation
# creating plot figure
fig = plt.figure(figsize=(15,10))
# Temperature plot
ax1 = fig.add_subplot(221)
ax1.plot(ebm_model.lat,ebm_model.Ts)
ax1.set_xticks([-90,-60,-30,0,30,60,90])
ax1.set_xlim([-90,90])
ax1.set_title('Surface Temperature', fontsize=14)
ax1.set_ylabel('(degC)', fontsize=12)
ax1.grid()
# Albedo plot
ax2 = fig.add_subplot(223, sharex = ax1)
ax2.plot(ebm_model.lat,ebm_model.albedo)
ax2.set_title('Albedo', fontsize=14)
ax2.set_xlabel('latitude', fontsize=10)
ax2.set_ylim([0,1])
ax2.grid()
# Net Radiation plot
ax3 = fig.add_subplot(222, sharex = ax1)
ax3.plot(ebm_model.lat, ebm_model.OLR, label='OLR',
color='cyan')
ax3.plot(ebm_model.lat, ebm_model.ASR, label='ASR',
color='magenta')
ax3.plot(ebm_model.lat, ebm_model.ASR-ebm_model.OLR,
label='net radiation',
color='red')
ax3.set_title('Net Radiation', fontsize=14)
ax3.set_ylabel('(W/m$^2$)', fontsize=12)
ax3.legend(loc='best')
ax3.grid()
# Energy Balance plot
net_rad = np.squeeze(ebm_model.net_radiation)
transport = np.squeeze(ebm_model.heat_transport_convergence)
ax4 = fig.add_subplot(224, sharex = ax1)
ax4.plot(ebm_model.lat, net_rad, label='net radiation',
color='red')
ax4.plot(ebm_model.lat, transport, label='heat transport',
color='blue')
ax4.plot(ebm_model.lat, net_rad+transport, label='balance',
color='black')
ax4.set_title('Energy', fontsize=14)
ax4.set_xlabel('latitude', fontsize=10)
ax4.set_ylabel('(W/m$^2$)', fontsize=12)
ax4.legend(loc='best')
ax4.grid()
plt.show()
Explanation: The following code does the plotting for some model variables.
End of explanation
ebm_model.state.keys()
Explanation: The energy balance is zero at every latitude. That means the model is in equilibrium. Perfect!
Global mean temperature
The model's state dictionary has the following entries:
End of explanation
ebm_model.Ts is ebm_model.state['Ts']
Explanation: Like diagnostics, state variables can be accessed in two ways:
With dictionary methods, ebm_model.state['Ts']
As process attributes, ebm_model.Ts
These are entirely equivalent:
End of explanation
print('The global mean temperature is %.2f deg C.' %climlab.global_mean(ebm_model.Ts))
print('The modeled ice edge is at %.2f deg latitude.' %np.max(ebm_model.icelat))
Explanation: The global mean of the model's surface temperature can be calculated through
End of explanation |
6,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integrate Ray AIR with Feast feature store
Step1: In this example, we showcase how to use Ray AIR with Feast feature store, leveraging both historical features for training a model and online features for inference.
The task is adapted from Feast credit scoring tutorial. In this example, we train a xgboost model and run some prediction on an incoming loan request to see if it is approved or rejected.
Let's first set up our workspace and prepare the data to work with.
Step2: There is already a feature repository set up in feature_repo/. It isn't necessary to create a new feature repository, but it can be done using the following command
Step3: Deploy the above defined feature store by running apply from within the feature_repo/ folder.
Step4: Generate training data
On top of the features in Feast, we also have labeled training data at data/loan_table.parquet. At the time of training, loan table will be passed into Feast as an entity dataframe for training data generation. Feast will intelligently join credit_history and zipcode_feature tables to create relevant feature vectors to augment the training data.
Step5: Now let's take a look at the training data as it is augmented by Feast.
Step6: Define Preprocessors
Preprocessor does last mile processing on Ray Datasets before feeding into training model.
Step7: Train XGBoost model using Ray AIR Trainer
Ray AIR provides a variety of Trainers that are integrated with popular machine learning frameworks. You can train a distributed model at scale leveraging Ray using the intuitive API trainer.fit(). The output is a Ray AIR Checkpoint, that will seamlessly transfer the workload from training to prediction. Let's take a look!
Step8: Inference
Now from the Checkpoint object we obtained from last session, we can construct a Ray AIR Predictor that encapsulates everything needed for inference.
The API for using Predictor is also very intuitive - simply call Predictor.predict(). | Python Code:
# !pip install feast==0.20.1 ray[air]>=1.13 xgboost_ray
Explanation: Integrate Ray AIR with Feast feature store
End of explanation
import os
WORKING_DIR = os.path.expanduser("~/ray-air-feast-example/")
%env WORKING_DIR=$WORKING_DIR
! mkdir -p $WORKING_DIR
! wget --no-check-certificate https://github.com/ray-project/air-sample-data/raw/main/air-feast-example.zip
! unzip air-feast-example.zip
! mv air-feast-example/* $WORKING_DIR
%cd $WORKING_DIR
! ls
Explanation: In this example, we showcase how to use Ray AIR with Feast feature store, leveraging both historical features for training a model and online features for inference.
The task is adapted from the Feast credit scoring tutorial. In this example, we train an XGBoost model and run a prediction on an incoming loan request to see if it is approved or rejected.
Let's first set up our workspace and prepare the data to work with.
End of explanation
!pygmentize feature_repo/features.py
Explanation: There is already a feature repository set up in feature_repo/. It isn't necessary to create a new feature repository, but it can be done using the following command: feast init -t local feature_repo.
Now let's take a look at the schema in Feast feature store, which is defined by feature_repo/features.py. There are mainly two features: zipcode_feature and credit_history, both are generated from parquet files - feature_repo/data/zipcode_table.parquet and feature_repo/data/credit_history.parquet.
End of explanation
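The file itself is only shown via pygmentize above. As a rough, illustrative sketch of what a definition inside feature_repo/features.py typically looks like for this kind of repo (the entity name, column names and dtypes below mirror the feature references used later, but the exact Feast classes and keyword arguments depend on the installed Feast version, so treat everything here as an assumption rather than the actual file contents):
# Illustrative only: the real definitions live in feature_repo/features.py
from datetime import timedelta
from feast import Entity, Feature, FeatureView, FileSource, ValueType

zipcode = Entity(name="zipcode", value_type=ValueType.INT64)

zipcode_source = FileSource(
    path="data/zipcode_table.parquet",
    event_timestamp_column="event_timestamp",
)

zipcode_features = FeatureView(
    name="zipcode_features",
    entities=["zipcode"],
    ttl=timedelta(days=3650),
    features=[
        Feature(name="city", dtype=ValueType.STRING),
        Feature(name="population", dtype=ValueType.INT64),
        # ... remaining zipcode columns
    ],
    batch_source=zipcode_source,
)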
! (cd feature_repo && feast apply)
import feast
fs = feast.FeatureStore(repo_path="feature_repo")
Explanation: Deploy the above defined feature store by running apply from within the feature_repo/ folder.
End of explanation
import pandas as pd
loan_df = pd.read_parquet("data/loan_table.parquet")
display(loan_df)
feast_features = [
"zipcode_features:city",
"zipcode_features:state",
"zipcode_features:location_type",
"zipcode_features:tax_returns_filed",
"zipcode_features:population",
"zipcode_features:total_wages",
"credit_history:credit_card_due",
"credit_history:mortgage_due",
"credit_history:student_loan_due",
"credit_history:vehicle_loan_due",
"credit_history:hard_pulls",
"credit_history:missed_payments_2y",
"credit_history:missed_payments_1y",
"credit_history:missed_payments_6m",
"credit_history:bankruptcies",
]
loan_w_offline_feature = fs.get_historical_features(
entity_df=loan_df, features=feast_features
).to_df()
# Drop some unnecessary columns for simplicity
loan_w_offline_feature = loan_w_offline_feature.drop(["event_timestamp", "created_timestamp__", "loan_id", "zipcode", "dob_ssn"], axis=1)
Explanation: Generate training data
On top of the features in Feast, we also have labeled training data at data/loan_table.parquet. At the time of training, loan table will be passed into Feast as an entity dataframe for training data generation. Feast will intelligently join credit_history and zipcode_feature tables to create relevant feature vectors to augment the training data.
End of explanation
display(loan_w_offline_feature)
# Convert into Train and Validation datasets.
import ray
loan_ds = ray.data.from_pandas(loan_w_offline_feature)
train_ds, validation_ds = loan_ds.split_proportionately([0.8])
Explanation: Now let's take a look at the training data as it is augmented by Feast.
End of explanation
categorical_features = [
"person_home_ownership",
"loan_intent",
"city",
"state",
"location_type",
]
from ray.ml.preprocessors import Chain, OrdinalEncoder, SimpleImputer
imputer = SimpleImputer(categorical_features, strategy="most_frequent")
encoder = OrdinalEncoder(columns=categorical_features)
chained_preprocessor = Chain(imputer, encoder)
Explanation: Define Preprocessors
Preprocessor does last mile processing on Ray Datasets before feeding into training model.
End of explanation
LABEL = "loan_status"
CHECKPOINT_PATH = "checkpoint"
NUM_WORKERS = 1 # Change this based on the resources in the cluster.
from ray.ml.train.integrations.xgboost import XGBoostTrainer
params = {
"tree_method": "approx",
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
}
trainer = XGBoostTrainer(
scaling_config={
"num_workers": NUM_WORKERS,
"use_gpu": 0,
},
label_column=LABEL,
params=params,
datasets={"train": train_ds, "validation": validation_ds},
preprocessor=chained_preprocessor,
num_boost_round=100,
)
checkpoint = trainer.fit().checkpoint
# This saves the checkpoint onto disk
checkpoint.to_directory(CHECKPOINT_PATH)
Explanation: Train XGBoost model using Ray AIR Trainer
Ray AIR provides a variety of Trainers that are integrated with popular machine learning frameworks. You can train a distributed model at scale leveraging Ray using the intuitive API trainer.fit(). The output is a Ray AIR Checkpoint, that will seamlessly transfer the workload from training to prediction. Let's take a look!
End of explanation
from ray.ml.checkpoint import Checkpoint
from ray.ml.predictors.integrations.xgboost import XGBoostPredictor
predictor = XGBoostPredictor.from_checkpoint(Checkpoint.from_directory(CHECKPOINT_PATH))
import numpy as np
## Now let's do some prediction.
loan_request_dict = {
"zipcode": [76104],
"dob_ssn": ["19630621_4278"],
"person_age": [133],
"person_income": [59000],
"person_home_ownership": ["RENT"],
"person_emp_length": [123.0],
"loan_intent": ["PERSONAL"],
"loan_amnt": [35000],
"loan_int_rate": [16.02],
}
# Now augment the request with online features.
zipcode = loan_request_dict["zipcode"][0]
dob_ssn = loan_request_dict["dob_ssn"][0]
online_features = fs.get_online_features(
entity_rows=[{"zipcode": zipcode, "dob_ssn": dob_ssn}],
features=feast_features,
).to_dict()
loan_request_dict.update(online_features)
loan_request_df = pd.DataFrame.from_dict(loan_request_dict, dtype=float)
loan_request_df = loan_request_df.drop(["zipcode", "dob_ssn"], axis=1)
display(loan_request_df)
# Run through our predictor using `Predictor.predict()` API.
loan_result = np.round(predictor.predict(loan_request_df)["predictions"][0])
if loan_result == 0:
print("Loan approved!")
elif loan_result == 1:
print("Loan rejected!")
Explanation: Inference
Now from the Checkpoint object we obtained from last session, we can construct a Ray AIR Predictor that encapsulates everything needed for inference.
The API for using Predictor is also very intuitive - simply call Predictor.predict().
End of explanation |
6,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Appendix B
Step1: Combining cross_fields and best_fields
Based on previous tuning, we have the following optimal parameters for each multi_match query type.
Step2: We've seen the process to optimize field boosts on two different multi_match queries but it would be interesting to see if combining them in some way might actually result in even better MRR@100. Let's give it a shot and find out.
Side note
Step3: Baseline evaluation
Step4: Query tuning
Here we'll just tune the boosts for each sub-query. Note that this takes twice as long as tuning individual queries because we have two queries combined.
Step5: Seems that there's not much to tune here, but let's keep going.
Step6: So that's the same as without tuning. What's going on?
Debugging
Plot scores from each sub-query to determine why we don't really see an improvement over individual queries. | Python Code:
%load_ext autoreload
%autoreload 2
import importlib
import os
import sys
from elasticsearch import Elasticsearch
from skopt.plots import plot_objective
# project library
sys.path.insert(0, os.path.abspath('..'))
import qopt
importlib.reload(qopt)
from qopt.notebooks import evaluate_mrr100_dev, optimize_query_mrr100
from qopt.optimize import Config
# use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/)
# es = Elasticsearch('http://localhost:9200')
es = Elasticsearch('http://35.246.195.44:9200')
# set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls
# max_concurrent_searches = 10
max_concurrent_searches = 30
index = 'msmarco-document'
template_id = 'combined'
Explanation: Appendix B: Combining queries
In this example, we'll walk through the process of building a more complex query for the MSMARCO Document dataset. This assumes that you are familiar with the other query tuning notebooks. We'll be using the cross_fields and best_fields queries and the optimal parameters found as the foundation on which we build a more complex query.
As with the previous notebook and in accordance with the MSMARCO Document ranking task, we'll continue to use MRR@100 on the dev dataset for evaluation and comparison with other approaches.
End of explanation
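As a reminder of the metric itself: MRR@100 is the mean, over queries, of 1/rank of the first relevant document within the top 100 hits (0 if no relevant document appears). The helpers imported above compute it through Elasticsearch's Rank Evaluation API; the sketch below is just the bare per-query metric, for reference:
def mrr_at_k(ranked_doc_ids, relevant_ids, k=100):
    # ranked_doc_ids: doc ids returned for one query, best first
    # relevant_ids: set of doc ids judged relevant for that query
    for rank, doc_id in enumerate(ranked_doc_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0
# averaging mrr_at_k over all queries gives MRR@100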
cross_fields_params = {
'operator': 'OR',
'minimum_should_match': 50,
'tie_breaker': 0.25,
'url|boost': 1.0129720302556104,
'title|boost': 5.818478716515356,
'body|boost': 3.736613263685484,
}
best_fields_params = {
'tie_breaker': 0.3936135232328522,
'url|boost': 0.0,
'title|boost': 8.63280262513067,
'body|boost': 10.0,
}
Explanation: Combining cross_fields and best_fields
Based on previous tuning, we have the following optimal parameters for each multi_match query type.
End of explanation
def prefix_keys(d, prefix):
return {f'{prefix}{k}': v for k, v in d.items()}
# prefix key of each sub-query
# add default boosts
all_params = {
**prefix_keys(cross_fields_params, 'cross_fields|'),
'cross_fields|boost': 1.0,
**prefix_keys(best_fields_params, 'best_fields|'),
'best_fields|boost': 1.0,
}
all_params
Explanation: We've seen the process to optimize field boosts on two different multi_match queries but it would be interesting to see if combining them in some way might actually result in even better MRR@100. Let's give it a shot and find out.
Side note: Combining queries where each sub-query is always executed may improve relevance but it will hurt performance and the query times will be quite a lot higher than with a single, simpler query. Keep this in mind when building complex queries for production!
End of explanation
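The 'combined' search template itself is not reproduced in this notebook. As a sketch of its shape (the real Mustache template in config/msmarco-document-templates.json may differ; boost and field values here are rounded from the tuned parameters above), the two sub-queries can be combined as independently boosted should clauses of a bool query:
# Illustrative shape of the combined query body
combined_query = {
    "bool": {
        "should": [
            {"multi_match": {
                "type": "cross_fields",
                "query": "{{query_string}}",
                "operator": "OR",
                "minimum_should_match": "50%",
                "tie_breaker": 0.25,
                "fields": ["url^1.01", "title^5.82", "body^3.74"],
                "boost": 1.0,
            }},
            {"multi_match": {
                "type": "best_fields",
                "query": "{{query_string}}",
                "tie_breaker": 0.39,
                "fields": ["url^0.0", "title^8.63", "body^10.0"],
                "boost": 1.0,
            }},
        ]
    }
}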
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=all_params)
Explanation: Baseline evaluation
End of explanation
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'num_iterations': 30,
'num_initial_points': 15,
'space': {
'cross_fields|boost': { 'low': 0.0, 'high': 5.0 },
'best_fields|boost': { 'low': 0.0, 'high': 5.0 },
},
'default': all_params,
}))
_ = plot_objective(metadata_boosts, sample_source='result')
Explanation: Query tuning
Here we'll just tune the boosts for each sub-query. Note that this takes twice as long as tuning individual queries because we have two queries combined.
End of explanation
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
Explanation: Seems that there's not much to tune here, but let's keep going.
End of explanation
import matplotlib.pyplot as plt
from itertools import chain
from qopt.notebooks import ROOT_DIR
from qopt.search import temporary_search_template, search_template
from qopt.trec import load_queries_as_tuple_list, load_qrels
def collect_scores():
def _search(template_id, query_string, params, doc_id):
res = search_template(es, index, template_id, query={
'id': 0,
'params': {
'query_string': query_string,
**params,
},
})
return [hit['score'] for hit in res['hits'] if hit['id'] == doc_id]
queries = load_queries_as_tuple_list(os.path.join(ROOT_DIR, 'data', 'msmarco-document-sampled-queries.1000.tsv'))
qrels = load_qrels(os.path.join(ROOT_DIR, 'data', 'msmarco', 'document', 'msmarco-doctrain-qrels.tsv'))
template_file = os.path.join(ROOT_DIR, 'config', 'msmarco-document-templates.json')
size = 100
cross_field_scores = []
best_field_scores = []
with temporary_search_template(es, template_file, 'cross_fields', size) as cross_fields_template_id:
with temporary_search_template(es, template_file, 'best_fields', size) as best_fields_template_id:
for query in queries:
doc_id = list(qrels[query[0]].keys())[0]
cfs = _search(cross_fields_template_id, query[1], cross_fields_params, doc_id)
bfs = _search(best_fields_template_id, query[1], best_fields_params, doc_id)
# keep just n scores to make sure the lists are the same length
length = min(len(cfs), len(bfs))
cross_field_scores.append(cfs[:length])
best_field_scores.append(bfs[:length])
return cross_field_scores, best_field_scores
cfs, bfs = collect_scores()
# plot scores
cfs_flat = list(chain(*cfs))
bfs_flat = list(chain(*bfs))
plt.scatter(cfs_flat, bfs_flat)
plt.show()
Explanation: So that's the same as without tuning. What's going on?
Debugging
Plot scores from each sub-query to determine why we don't really see an improvement over individual queries.
End of explanation |
6,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames Malo et al. 2014
Title
Step1: Table 1 - Target Information for Ophiuchus Sources | Python Code:
import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
import seaborn as sns  # needed for the distplot call below
Explanation: ApJdataFrames Malo et al. 2014
Title: BANYAN. III. Radial velocity, Rotation and X-ray emission of low-mass star candidates in nearby young kinematic groups
Authors: Malo L., Artigau E., Doyon R., Lafreniere D., Albert L., Gagne J.
Data is from this paper:
http://iopscience.iop.org/article/10.1088/0004-637X/722/1/311/
End of explanation
#! mkdir ../data/Malo2014
#! wget http://iopscience.iop.org/0004-637X/788/1/81/suppdata/apj494919t7_mrt.txt
! head ../data/Malo2014/apj494919t7_mrt.txt
from astropy.table import Table, Column
t1 = Table.read("../data/Malo2014/apj494919t7_mrt.txt", format='ascii')
sns.distplot(t1['Jmag'].data.data)
t1
Explanation: Table 1 - Target Information for Ophiuchus Sources
End of explanation |
6,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Read Salaries.csv as a dataframe called sal.
Step2: Check the head of the DataFrame.
Step3: Use the .info() method to find out how many entries there are.
Step4: What is the average BasePay ?
Step5: What is the highest amount of OvertimePay in the dataset ?
Step6: What is the job title of JOSEPH DRISCOLL ? Note
Step7: How much does JOSEPH DRISCOLL make (including benefits)?
Step8: What is the name of highest paid person (including benefits)?
Step9: What is the name of lowest paid person (including benefits)? Do you notice something strange about how much he or she is paid?
Step10: What was the average (mean) BasePay of all employees per year? (2011-2014) ?
Step11: How many unique job titles are there?
Step12: What are the top 5 most common jobs?
Step13: How many Job Titles were represented by only one person in 2013? (e.g. Job Titles with only one occurence in 2013?)
Step14: How many people have the word Chief in their job title? (This is pretty tricky)
Step15: Bonus | Python Code:
import pandas as pd
Explanation: <a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>
SF Salaries Exercise - Solutions
Welcome to a quick exercise for you to practice your pandas skills! We will be using the SF Salaries Dataset from Kaggle! Just follow along and complete the tasks outlined in bold below. The tasks will get harder and harder as you go along.
Import pandas as pd.
End of explanation
sal = pd.read_csv('Salaries.csv')
Explanation: Read Salaries.csv as a dataframe called sal.
End of explanation
sal.head()
Explanation: Check the head of the DataFrame.
End of explanation
sal.info() # 148654 Entries
Explanation: Use the .info() method to find out how many entries there are.
End of explanation
sal['BasePay'].mean()
Explanation: What is the average BasePay ?
End of explanation
sal['OvertimePay'].max()
Explanation: What is the highest amount of OvertimePay in the dataset ?
End of explanation
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['JobTitle']
Explanation: What is the job title of JOSEPH DRISCOLL ? Note: Use all caps, otherwise you may get an answer that doesn't match up (there is also a lowercase Joseph Driscoll).
End of explanation
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['TotalPayBenefits']
Explanation: How much does JOSEPH DRISCOLL make (including benefits)?
End of explanation
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].max()] #['EmployeeName']
# or
# sal.loc[sal['TotalPayBenefits'].idxmax()]
Explanation: What is the name of highest paid person (including benefits)?
End of explanation
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].min()] #['EmployeeName']
# or
# sal.loc[sal['TotalPayBenefits'].idxmax()]['EmployeeName']
## ITS NEGATIVE!! VERY STRANGE
Explanation: What is the name of lowest paid person (including benefits)? Do you notice something strange about how much he or she is paid?
End of explanation
sal.groupby('Year').mean()['BasePay']
Explanation: What was the average (mean) BasePay of all employees per year? (2011-2014) ?
End of explanation
sal['JobTitle'].nunique()
Explanation: How many unique job titles are there?
End of explanation
sal['JobTitle'].value_counts().head(5)
Explanation: What are the top 5 most common jobs?
End of explanation
sum(sal[sal['Year']==2013]['JobTitle'].value_counts() == 1) # pretty tricky way to do this...
Explanation: How many Job Titles were represented by only one person in 2013? (e.g. Job Titles with only one occurrence in 2013?)
End of explanation
def chief_string(title):
if 'chief' in title.lower():
return True
else:
return False
sum(sal['JobTitle'].apply(lambda x: chief_string(x)))
Explanation: How many people have the word Chief in their job title? (This is pretty tricky)
End of explanation
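For comparison, the same count can be obtained with pandas' vectorized string methods instead of apply (same sal DataFrame as above):
# Vectorized alternative to the apply-based version
sal['JobTitle'].str.lower().str.contains('chief').sum()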
sal['title_len'] = sal['JobTitle'].apply(len)
sal[['title_len','TotalPayBenefits']].corr() # No correlation.
Explanation: Bonus: Is there a correlation between length of the Job Title string and Salary?
End of explanation |
6,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Responses for Chandra/HETG
Let's see if we can figure out responses for the Chandra/HETG. This notebook will require sherpa, because we're using this for comparison purposes!
Step1: Let's load some HETG simulations of a simple power law. The input values for that spectrum are
Step2: Note
Step3: Let's take a look at that data
Step4: Let's load the ARF from a file
Step5: Let's also load the rmf
Step6: Ok, so all of our files have the same units. This is good, because this is not always the case for Chandra/HETG.
Note
Step7: Now we can make a model spectrum we can play around with! clarsach has an implementation of a Powerlaw model that matches the ISIS one that created the data
Step8: Now we can produce a model with the same resolution as the HETG
Step9: This looks pretty weird! But remember, those are the integral of the power law over the bins! Let's apply the rmf and arf to this model
Step10: Okay, that looks pretty good!
Building a Model
Let's build a model for fitting a model to the data. We're going to do nothing fancy here, just a simple Poisson likelihood. It's similar to the one in the TestAthenaXIFU notebook, except it uses the Clarsach powerlaw function and integrates over bins.
Step11: Let's fit that
Step13: Woo! It works! There's something funny with the exposure I need to ask Lia about!
Making Our Own ARF/RMF Classes
Ideally, we don't want to be dependent on the sherpa implementation.
We're now going to write our own implementations, and then test them on the model above.
Let's first write a function for applying the ARF. This is easy, because the ARF just requires a multiplication with the input spectrum.
Step14: Let's try it!
Step15: We can compare that with the result from sherpa
Step16: It works! There's also a version in the new clarsach package, which we should test and compare as well
Step17: It works!
Next, let's look at the RMF, which is more complex. This requires a matrix multiplication. However, the response matrices are compressed to remove zeros and save space in memory, so they require a little more complex fiddling. Here's an implementation that is basically almost a line-by-line translation of the C++ code
Step19: Let's see if we can make this more vectorized
Step20: Let's time the different implementations and compare them to the sherpa version (which is basically a wrapper around the C++)
Step21: So my vectorized implementation is ~20 times slower than the sherpa version, and the non-vectorized version is really slow. But are they all the same?
Step25: They are! It looks like for this particular Chandra/HETG data set, it's working!
Making an ARF/RMF Class
The ARF and RMF code would live well in a class, so let's wrap it into a class
Step26: Let's make an object of that class and then make another model with the RMF applied
Step27: Does this produce the same result as the vectorized function (it should)?
Step28: Hooray! What does the total spectrum look like?
Step29: There is, of course, also an implementation in clarsach
Step30: It all works! Hooray! I think we've got a solution for Chandra/HETG!
Let's make a final plot | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
try:
import seaborn as sns
except ImportError:
print("No seaborn installed. Oh well.")
import numpy as np
import pandas as pd
import astropy.io.fits as fits
import sherpa.astro.ui as ui
import astropy.modeling.models as models
from astropy.modeling.fitting import _fitter_to_model_params
from scipy.special import gammaln as scipy_gammaln
from clarsach.respond import RMF, ARF
Explanation: Responses for Chandra/HETG
Let's see if we can figure out responses for the Chandra/HETG. This notebook will require sherpa, because we're using this for comparison purposes!
End of explanation
datadir = "./"
Explanation: Let's load some HETG simulations of a simple power law. The input values for that spectrum are:
* norm = 1
* ph_index = 2
End of explanation
ui.load_data(id="p1", filename=datadir+"fake_heg_p1.pha")
Explanation: Note: sherpa will only load the responses correctly if the data is in the same folder as this notebook. Go figure.
End of explanation
d = ui.get_data("p1")
# conversion factor from keV to angstrom
c = 12.3984191
# This is the data in angstrom
bin_lo = d.bin_lo
bin_hi = d.bin_hi
bin_mid = bin_lo + (bin_hi - bin_lo) / 2.0
counts = d.counts
plt.figure(figsize=(10,7))
plt.plot(bin_mid, counts)
Explanation: Let's take a look at that data:
End of explanation
hdulist = fits.open("fake_heg_p1.pha")
hdulist[1].header["TUNIT3"]
exposure = hdulist[1].header["EXPOSURE"]
bin_lo = hdulist[1].data.field("BIN_LO")
bin_hi = hdulist[1].data.field("BIN_HI")
channels = hdulist[1].data.field("CHANNEL")
counts = hdulist[1].data.field("COUNTS")
arf_list = fits.open(datadir+"arfs/aciss_heg-1_cy19.garf")
arf_list[1].header["TUNIT1"]
specresp = arf_list[1].data.field("SPECRESP")
energ_lo = arf_list[1].data.field("ENERG_LO")
energ_hi = arf_list[1].data.field("ENERG_HI")
phafrac = arf_list[1].data.field("PHAFRAC")
arf_list[1].columns
Explanation: Let's load the ARF from a file:
End of explanation
rmf_list = fits.open(datadir+"rmfs/aciss_heg1_cy19.grmf")
rmf_list[1].header["TUNIT1"]
Explanation: Let's also load the rmf:
End of explanation
arf = d.get_arf()
specresp = arf.specresp
rmf = d.get_rmf()
Explanation: Ok, so all of our files have the same units. This is good, because this is not always the case for Chandra/HETG.
Note: We need to fix that in the code!
Let's load the ARF and the RMF from sherpa:
End of explanation
from clarsach.models.powerlaw import Powerlaw
pl = Powerlaw(norm=1.0, phoindex=2.0)
Explanation: Now we can make a model spectrum we can play around with! clarsach has an implementation of a Powerlaw model that matches the ISIS one that created the data:
End of explanation
m = pl.calculate(ener_lo=energ_lo, ener_hi=energ_hi)
plt.figure()
plt.plot(bin_mid, m)
Explanation: Now we can produce a model with the same resolution as the HETG:
End of explanation
m_arf = m*specresp*1.e5
m_rmf = rmf.apply_rmf(m_arf)
np.savetxt("../data/chandra_hetg_m_rmf.txt", np.array([bin_mid, m_rmf]).T)
plt.figure()
plt.plot(bin_mid, counts, label="data")
plt.plot(bin_mid, m_rmf, label="model with ARF")
plt.legend()
Explanation: This looks pretty weird! But remember, those are the integral of the power law over the bins! Let's apply the rmf and arf to this model:
End of explanation
class PoissonLikelihood(object):
def __init__(self, x_low, x_high, y, model, arf=None, rmf=None):
self.x_low = x_low
self.x_high = x_high
self.y = y
self.model = model
self.arf = arf
self.rmf = rmf
def evaluate(self, pars):
# store the new parameters in the model
self.model.norm = pars[0]
self.model.phoindex = pars[1]
# evaluate the model at the positions x
mean_model = self.model.calculate(self.x_low, self.x_high)
# run the ARF and RMF calculations
if arf is not None and rmf is not None:
m_arf = arf.apply_arf(mean_model)*arf.exposure
ymodel = rmf.apply_rmf(m_arf)
else:
ymodel = mean_model
ymodel += 1e-20
# compute the log-likelihood
loglike = np.sum(-ymodel + self.y*np.log(ymodel) \
- scipy_gammaln(self.y + 1.))
if np.isfinite(loglike):
return loglike
else:
return -1.e16
def __call__(self, pars):
l = -self.evaluate(pars)
#print(l)
return l
loglike = PoissonLikelihood(bin_lo, bin_hi, counts, pl, arf=arf, rmf=rmf)
loglike([1.0, 2.0])
Explanation: Okay, that looks pretty good!
Building a Model
Let's build a model for fitting a model to the data. We're going to do nothing fancy here, just a simple Poisson likelihood. It's similar to the one in the TestAthenaXIFU notebook, except it uses the Clarsach powerlaw function and integrates over bins.
End of explanation
from scipy.optimize import minimize
opt = minimize(loglike, [1.0, 2.0])
opt
m_fit = pl.calculate(bin_lo, bin_hi)
m_fit_arf = arf.apply_arf(m_fit) * arf.exposure
m_fit_rmf = rmf.apply_rmf(m_fit_arf)
plt.figure()
plt.plot(bin_mid, counts, label="data")
plt.plot(bin_mid, m_fit_rmf, label="model with ARF")
plt.legend()
Explanation: Let's fit that:
End of explanation
def apply_arf(spec, specresp, exposure=1.0):
Apply the ancillary response to
a spectrum.
Parameters
----------
spec : numpy.ndarray
The source spectrum in flux units
specresp: numpy.ndarray
The response
exposure : float
The exposure of the observation
Returns
-------
spec_arf : numpy.ndarray
The spectrum with the response applied.
spec_arf = spec*specresp*exposure
return spec_arf
Explanation: Woo! It works! There's something funny with the exposure I need to ask Lia about!
Making Our Own ARF/RMF Classes
Ideally, we don't want to be dependent on the sherpa implementation.
We're now going to write our own implementations, and then test them on the model above.
Let's first write a function for applying the ARF. This is easy, because the ARF just requires a multiplication with the input spectrum.
End of explanation
m_arf = apply_arf(m, arf.specresp)
Explanation: Let's try it!
End of explanation
m_arf_sherpa = arf.apply_arf(m)*1e5
m_arf_sherpa[-10:]
np.savetxt("../data/chandra_hetg_m_arf.txt", np.array([bin_mid, m_arf_sherpa]).T,
fmt="%.25f")
np.allclose(m_arf*1e5, m_arf_sherpa)
m_arf_sherpa[-10:]
m_arf[-10:]*1e5
Explanation: We can compare that with the result from sherpa:
End of explanation
from clarsach.respond import ARF
arf_c = ARF(datadir+"arfs/aciss_heg1_cy19.garf")
m_arf_c = arf_c.apply_arf(m)
np.allclose(m_arf_sherpa, m_arf_c*1e5)
Explanation: It works! There's also a version in the new clarsach package, which we should test and compare as well:
End of explanation
def rmf_fold(spec, rmf):
#current_num_groups = 0
#current_num_chans = 0
nchannels = spec.shape[0]
resp_idx = 0
first_chan_idx = 0
num_chans_idx =0
counts_idx = 0
counts = np.zeros(nchannels)
for i in range(nchannels):
source_bin_i = spec[i]
current_num_groups = rmf.n_grp[i]
while current_num_groups:
counts_idx = int(rmf.f_chan[first_chan_idx] - rmf.offset)
current_num_chans = rmf.n_chan[num_chans_idx]
first_chan_idx += 1
num_chans_idx +=1
while current_num_chans:
counts[counts_idx] += rmf.matrix[resp_idx] * source_bin_i
counts_idx += 1
resp_idx += 1
current_num_chans -= 1
current_num_groups -= 1
return counts
Explanation: It works!
Next, let's look at the RMF, which is more complex. This requires a matrix multiplication. However, the response matrices are compressed to remove zeros and save space in memory, so they require a little more complex fiddling. Here's an implementation that is basically almost a line-by-line translation of the C++ code:
End of explanation
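For intuition: if the redistribution matrix were stored dense, with shape (n_energy_bins, n_channels), folding would reduce to a single matrix-vector product. A sketch of that equivalence (illustrative only; the FITS files store a compressed representation, which is exactly what the bookkeeping above unpacks):
def rmf_fold_dense(spec, dense_rmf):
    # dense_rmf[i, j]: probability that flux in energy bin i is detected in channel j
    # spec and dense_rmf are numpy arrays; counts[j] = sum_i spec[i] * dense_rmf[i, j]
    return spec @ dense_rmf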
def rmf_fold_vector(spec, rmf):
Fold the spectrum through the redistribution matrix.
Parameters
----------
spec : numpy.ndarray
The (model) spectrum to be folded
rmf : sherpa.RMFData object
The object with the RMF data
# get the number of channels in the data
nchannels = spec.shape[0]
# an empty array for the output counts
counts = np.zeros(nchannels)
# index for n_chan and f_chan incrementation
k = 0
# index for the response matrix incrementation
resp_idx = 0
# loop over all channels
for i in range(nchannels):
# this is the current bin in the flux spectrum to
# be folded
source_bin_i = spec[i]
# get the current number of groups
current_num_groups = rmf.n_grp[i]
for j in range(current_num_groups):
counts_idx = int(rmf.f_chan[k] - rmf.offset)
current_num_chans = int(rmf.n_chan[k])
k += 1
counts[counts_idx:counts_idx+current_num_chans] += rmf.matrix[resp_idx:resp_idx+current_num_chans] * source_bin_i
resp_idx += current_num_chans
return counts
Explanation: Let's see if we can make this more vectorized:
End of explanation
# not vectorized version
m_rmf = rmf_fold(m_arf, rmf)
%timeit m_rmf = rmf_fold(m_arf, rmf)
# vectorized version
m_rmf_v = rmf_fold_vector(m_arf, rmf)
%timeit m_rmf_v = rmf_fold_vector(m_arf, rmf)
# C++ (sherpa) version
m_rmf2 = rmf.apply_rmf(m_arf)
%timeit m_rmf2 = rmf.apply_rmf(m_arf)
Explanation: Let's time the different implementations and compare them to the sherpa version (which is basically a wrapper around the C++):
End of explanation
np.allclose(m_rmf_v, m_rmf2)
np.allclose(m_rmf, m_rmf2)
Explanation: So my vectorized implementation is ~20 times slower than the sherpa version, and the non-vectorized version is really slow. But are they all the same?
End of explanation
class RMF(object):
def __init__(self, filename):
self._load_rmf(filename)
pass
def _load_rmf(self, filename):
Load an RMF from a FITS file.
Parameters
----------
filename : str
The file name with the RMF file
Attributes
----------
n_grp : numpy.ndarray
the Array with the number of channels in each
channel set
f_chan : numpy.ndarray
The starting channel for each channel group;
If an element i in n_grp > 1, then the resulting
row entry in f_chan will be a list of length n_grp[i];
otherwise it will be a single number
n_chan : numpy.ndarray
The number of channels in each channel group. The same
logic as for f_chan applies
matrix : numpy.ndarray
The redistribution matrix as a flattened 1D vector
energ_lo : numpy.ndarray
The lower edges of the energy bins
energ_hi : numpy.ndarray
The upper edges of the energy bins
detchans : int
The number of channels in the detector
# open the FITS file and extract the MATRIX extension
# which contains the redistribution matrix and
# ancillary information
hdulist = fits.open(filename)
h = hdulist["MATRIX"]
data = h.data
hdr = h.header
hdulist.close()
# extract + store the attributes described in the docstring
n_grp = np.array(data.field("N_GRP"))
f_chan = np.array(data.field('F_CHAN'))
n_chan = np.array(data.field("N_CHAN"))
matrix = np.array(data.field("MATRIX"))
self.energ_lo = np.array(data.field("ENERG_LO"))
self.energ_hi = np.array(data.field("ENERG_HI"))
self.detchans = hdr["DETCHANS"]
self.offset = 1# self.__get_tlmin(h)
self.n_grp, self.f_chan, self.n_chan, self.matrix = \
self.__flatten_arrays(n_grp, f_chan, n_chan, matrix)
return
def __get_tlmin(self, h):
Get the tlmin keyword for `F_CHAN`.
Parameters
----------
h : an astropy.io.fits.hdu.table.BinTableHDU object
The extension containing the `F_CHAN` column
Returns
-------
tlmin : int
The tlmin keyword
# get the header
hdr = h.header
# get the keys of all
keys = np.array(list(hdr.keys()))
# find the place where the tlmin keyword is defined
t = np.array(["TLMIN" in k for k in keys])
# get the index of the TLMIN keyword
tlmin_idx = np.hstack(np.where(t))[0]
# get the corresponding value
tlmin = np.int(list(hdr.items())[tlmin_idx][1])
return tlmin
def __flatten_arrays(self, n_grp, f_chan, n_chan, matrix):
# find all non-zero groups
nz_idx = (n_grp > 0)
# stack all non-zero rows in the matrix
matrix_flat = np.hstack(matrix[nz_idx])
# stack all nonzero rows in n_chan and f_chan
n_chan_flat = np.hstack(n_chan[nz_idx])
f_chan_flat = np.hstack(f_chan[nz_idx])
return n_grp, f_chan_flat, n_chan_flat, matrix_flat
def apply_rmf(self, spec):
Fold the spectrum through the redistribution matrix.
The redistribution matrix is saved as a flattened 1-dimensional
vector to save space. In reality, for each entry in the flux
vector, there exists one or more sets of channels that this
flux is redistributed into. The additional arrays `n_grp`,
`f_chan` and `n_chan` store this information:
* `n_group` stores the number of channel groups for each
energy bin
* `f_chan` stores the *first channel* that each channel
for each channel set
* `n_chan` stores the number of channels in each channel
set
As a result, for a given energy bin i, we need to look up the
number of channel sets in `n_grp` for that energy bin. We
then need to loop over the number of channel sets. For each
channel set, we look up the first channel into which flux
will be distributed as well as the number of channels in the
group. We then need to also loop over the these channels and
actually use the corresponding elements in the redistribution
matrix to redistribute the photon flux into channels.
All of this is basically a big bookkeeping exercise in making
sure to get the indices right.
Parameters
----------
spec : numpy.ndarray
The (model) spectrum to be folded
Returns
-------
counts : numpy.ndarray
The (model) spectrum after folding, in
counts/s/channel
# get the number of channels in the data
nchannels = spec.shape[0]
# an empty array for the output counts
counts = np.zeros(nchannels)
# index for n_chan and f_chan incrementation
k = 0
# index for the response matrix incrementation
resp_idx = 0
# loop over all channels
for i in range(nchannels):
# this is the current bin in the flux spectrum to
# be folded
source_bin_i = spec[i]
# get the current number of groups
current_num_groups = self.n_grp[i]
# loop over the current number of groups
for j in range(current_num_groups):
# get the right index for the start of the counts array
# to put the data into
counts_idx = int(self.f_chan[k] - self.offset)
# this is the current number of channels to use
current_num_chans = int(self.n_chan[k])
# iterate k for next round
k += 1
# add the flux to the subarray of the counts array that starts with
# counts_idx and runs over current_num_chans channels
counts[counts_idx:counts_idx+current_num_chans] = counts[counts_idx:counts_idx+current_num_chans] + \
self.matrix[resp_idx:resp_idx+current_num_chans] * \
np.float(source_bin_i)
# iterate the response index for next round
resp_idx += current_num_chans
return counts
Explanation: They are! It looks like for this particular Chandra/HETG data set, it's working!
Making an ARF/RMF Class
The ARF and RMF code would live well in a class, so let's wrap it into a class:
End of explanation
rmf_new = RMF(datadir+"rmfs/aciss_heg1_cy19.grmf")
m_rmf = rmf_new.apply_rmf(m_arf)
Explanation: Let's make an object of that class and then make another model with the RMF applied:
End of explanation
np.allclose(m_rmf, m_rmf_v)
Explanation: Does this produce the same result as the vectorized function (it should)?
End of explanation
plt.figure()
plt.plot(m_rmf)
plt.plot(m_rmf_v)
Explanation: Hooray! What does the total spectrum look like?
End of explanation
from clarsach import respond
rmf_c = respond.RMF(datadir+"rmfs/aciss_heg1_cy19.grmf")
m_rmf_c = rmf_new.apply_rmf(m_arf)
np.allclose(m_rmf_c, m_rmf_v)
Explanation: There is, of course, also an implementation in clarsach:
End of explanation
plt.figure()
plt.plot(bin_mid, counts, label="data")
plt.plot(bin_mid, m_rmf_c, label="model")
Explanation: It all works! Hooray! I think we've got a solution for Chandra/HETG!
Let's make a final plot:
End of explanation |
6,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: VRESM-1-0
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
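As a hedged, hypothetical illustration only: for an N-cardinality ENUM such as this one, the generator's "PROPERTY VALUE(S)" comment suggests recording each selected process with its own DOC.set_value call, for example:
# Hypothetical selections - replace with the processes represented in the documented model
DOC.set_value("Dry deposition")
DOC.set_value("Sedimentation")
DOC.set_value("Coagulation")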
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
6,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Piecewise exponential models and creating custom models
This section will be easier if we recall our three mathematical "creatures" and the relationships between them. First is the survival function, $S(t)$, that represents the probability of living past some time, $t$. Next is the always non-negative and non-decreasing cumulative hazard function, $H(t)$. Its relation to $S(t)$ is
Step1: This model does a poor job of fitting to our data. If I fit a non-parametric model, like the Nelson-Aalen model, to this data, the Exponential's lack of fit is very obvious.
Step2: It should be clear that the single parameter model is just averaging the hazards over the entire time period. In reality though, the true hazard rate exhibits some complex non-linear behaviour.
Piecewise Exponential models
What if we could break our model into different time periods, and fit an exponential model to each of those? For example, we define the hazard as
Step3: We can see a much better fit in this model. A quantitative measure of fit is to compare the log-likelihood between exponential model and the piecewise exponential model (higher is better). The log-likelihood went from -772 to -647, respectively. We could keep going and add more and more breakpoints, but that would end up overfitting to the data.
Univariate models in lifelines
I mentioned that the PiecewiseExponentialFitter was implemented using only its cumulative hazard function. This is not a lie. lifelines has very general semantics for univariate fitters. For example, this is how the entire ExponentialFitter is implemented
Step4: The best fit of the model to the data is
Step5: From the output, we see that the value of 76.55 is the suggested asymptote, that is
Step6: Our new asymptote is at $t\approx 100, \text{c.i.}=(87, 112)$. The model appears to fit the early times better than the previous models as well, however our $\alpha$ parameter has more uncertainty now. Continuing to add parameters isn't advisable, as we will overfit to the data.
Why fit parametric models anyways? Taking a step back, we are fitting parametric models and comparing them to the non-parametric Nelson-Aalen. Why not just always use the Nelson-Aalen model?
1) Sometimes we have scientific motivations to use a parametric model. That is, using domain knowledge, we may know the system has a parametric model and we wish to fit to that model.
2) In a parametric model, we are borrowing information from all observations to determine the best parameters. To make this more clear, imagine taking a single observation and changing its value wildly. The fitted parameters would change as well. On the other hand, imagine doing the same for a non-parametric model. In this case, only the local survival function or hazard function would change. Because parametric models can borrow information from all observations, and there are much fewer unknowns than a non-parametric model, parametric models are said to be more statistically efficient.
3) Extrapolation
Step7: 3-parameter Weibull distribution
We can easily extend the built-in Weibull model (lifelines.WeibullFitter) to include a new location parameter
Step8: Inverse Gaussian distribution
The inverse Gaussian distribution is another popular model for survival analysis. Unlike other models, its hazard does not asymptotically converge to 0, allowing for a long tail of survival. Let's model this, using the same parameterization from Wikipedia
Step9: Gompertz
Step10: APGW
From the paper, "A Flexible Parametric Modelling Framework for Survival Analysis", https
Step11: Bounded lifetimes using the beta distribution
Maybe your data is bounded between 0 and some (unknown) upperbound M? That is, lifetimes can't be more than M. Maybe you know M, maybe you don't. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from lifelines.datasets import load_waltons
waltons = load_waltons()
T, E = waltons['T'], waltons['E']
from lifelines import ExponentialFitter
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
epf = ExponentialFitter().fit(T, E)
epf.plot_hazard(ax=ax[0])
epf.plot_cumulative_hazard(ax=ax[1])
ax[0].set_title("hazard"); ax[1].set_title("cumulative_hazard")
epf.print_summary(3)
Explanation: Piecewise exponential models and creating custom models
This section will be easier if we recall our three mathematical "creatures" and the relationships between them. First is the survival function, $S(t)$, that represents the probability of living past some time, $t$. Next is the always non-negative and non-decreasing cumulative hazard function, $H(t)$. Its relation to $S(t)$ is:
$$ S(t) = \exp\left(-H(t)\right)$$
Finally, the hazard function, $h(t)$, is the derivative of the cumulative hazard:
$$h(t) = \frac{dH(t)}{dt}$$
which has the immediate relation to the survival function:
$$S(t) = \exp\left(-\int_{0}^t h(s) ds\right)$$
Notice that any of the three absolutely defines the other two. Some situations make it easier to define one vs the others. For example, in the Cox model, it's easiest to work with the hazard, $h(t)$. In this section on parametric univariate models, it'll be easiest to work with the cumulative hazard. This is because of an asymmetry in math: derivatives are much easier to compute than integrals. So, if we define the cumulative hazard, both the hazard and survival function are much easier to reason about versus if we define the hazard and ask questions about the other two.
First, let's revisit some simpler parametric models.
The Exponential model
Recall that the Exponential model has a constant hazard, that is:
$$ h(t) = \frac{1}{\lambda} $$
which implies that the cumulative hazard, $H(t)$, has a pretty simple form: $H(t) = \frac{t}{\lambda}$. Below we fit this model to some survival data.
End of explanation
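As a quick, optional sanity check (an illustrative sketch, assuming the epf object fitted above and lifelines' *_at_times accessors), the identity $S(t) = \exp(-H(t))$ can be verified numerically:
import numpy as np
ts = np.array([10., 30., 50.])
S = epf.survival_function_at_times(ts).values      # fitted survival function
H = epf.cumulative_hazard_at_times(ts).values      # fitted cumulative hazard
print(np.allclose(S, np.exp(-H)))                  # expected: True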
from lifelines import NelsonAalenFitter
ax = epf.plot(figsize=(8,5))
naf = NelsonAalenFitter().fit(T, E)
ax = naf.plot(ax=ax)
plt.legend()
Explanation: This model does a poor job of fitting to our data. If I fit a non-parametric model, like the Nelson-Aalen model, to this data, the Exponential's lack of fit is very obvious.
End of explanation
from lifelines import PiecewiseExponentialFitter
# looking at the above plot, I think there may be breaks at t=40 and t=60.
pf = PiecewiseExponentialFitter(breakpoints=[40, 60]).fit(T, E)
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
ax = pf.plot(ax=axs[1])
pf.plot_hazard(ax=axs[0])
ax = naf.plot(ax=ax, ci_show=False)
axs[0].set_title("hazard"); axs[1].set_title("cumulative_hazard")
pf.print_summary(3)
Explanation: It should be clear that the single parameter model is just averaging the hazards over the entire time period. In reality though, the true hazard rate exhibits some complex non-linear behaviour.
Piecewise Exponential models
What if we could break our model into different time periods, and fit an exponential model to each of those? For example, we define the hazard as:
$$
h(t) = \begin{cases}
\lambda_0 & \text{if $t \le \tau_0$} \\
\lambda_1 & \text{if $\tau_0 < t \le \tau_1$} \\
\lambda_2 & \text{if $\tau_1 < t \le \tau_2$} \\
\vdots &
\end{cases}
$$
This model should be flexible enough to fit better to our dataset.
The cumulative hazard is only slightly more complicated, but not too much and can still be defined in Python. In lifelines, univariate models are constructed such that one only needs to define the cumulative hazard model with the parameters of interest, and all the hard work of fitting, creating confidence intervals, plotting, etc. is taken care of.
For example, lifelines has implemented the PiecewiseExponentialFitter model. Internally, the code is a single function that defines the cumulative hazard. The user specifies where they believe the "breaks" are, and lifelines estimates the best $\lambda_i$.
End of explanation
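For intuition only — a small, self-contained numpy sketch (not lifelines code) of the cumulative hazard implied by the piecewise-constant hazard above; here the rates are the $\lambda_i$ from the formula and the breakpoints are the $\tau_i$:
import numpy as np

def piecewise_cumulative_hazard(t, breakpoints, rates):
    # H(t) accumulates rate_i * (time spent in interval i); one more rate than breakpoints
    t = np.atleast_1d(np.asarray(t, dtype=float))
    edges = np.concatenate([[0.0], np.asarray(breakpoints, dtype=float), [np.inf]])
    H = np.zeros_like(t)
    for rate, lo, hi in zip(rates, edges[:-1], edges[1:]):
        H += rate * np.clip(t - lo, 0.0, hi - lo)
    return H

# illustrative numbers only
print(piecewise_cumulative_hazard([10, 50, 70], breakpoints=[40, 60], rates=[0.02, 0.05, 0.15]))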
from lifelines.fitters import ParametricUnivariateFitter
import autograd.numpy as np
class InverseTimeHazardFitter(ParametricUnivariateFitter):
# we tell the model what we want the names of the unknown parameters to be
_fitted_parameter_names = ['alpha_']
# this is the only function we need to define. It always takes two arguments:
# params: an iterable that unpacks the parameters you'll need in the order of _fitted_parameter_names
# times: a vector of times that will be passed in.
def _cumulative_hazard(self, params, times):
alpha = params[0]
return alpha /(80 - times)
itf = InverseTimeHazardFitter()
itf.fit(T, E)
itf.print_summary()
ax = itf.plot(figsize=(8,5))
ax = naf.plot(ax=ax, ci_show=False)
plt.legend()
Explanation: We can see a much better fit in this model. A quantitative measure of fit is to compare the log-likelihood between exponential model and the piecewise exponential model (higher is better). The log-likelihood went from -772 to -647, respectively. We could keep going and add more and more breakpoints, but that would end up overfitting to the data.
Univariate models in lifelines
I mentioned that the PiecewiseExponentialFitter was implemented using only its cumulative hazard function. This is not a lie. lifelines has very general semantics for univariate fitters. For example, this is how the entire ExponentialFitter is implemented:
```python
class ExponentialFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ["lambda_"]
def _cumulative_hazard(self, params, times):
lambda_ = params[0]
return times / lambda_
```
We only need to specify the cumulative hazard function because of the 1:1:1 relationship between the cumulative hazard function and the survival function and the hazard rate. From there, lifelines handles the rest.
Defining our own survival models
To show off the flexibility of lifelines univariate models, we'll create a brand new, never before seen, survival model. Looking at the Nelson-Aalen fit, the cumulative hazard looks like there might be an asymptote at $t=80$. This may correspond to an absolute upper limit of subjects' lives. Let's start with that functional form.
$$ H_1(t; \alpha) = \frac{\alpha}{(80 - t)} $$
We subscript $1$ because we'll investigate other models. In a lifelines univariate model, this is defined in the following code.
Important: in order to compute derivatives, you must use the numpy imported from the autograd library. This is a thin wrapper around the original numpy. Note the import autograd.numpy as np below.
End of explanation
class TwoParamInverseTimeHazardFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['alpha_', 'beta_']
# Sequence of (min, max) pairs for each element in x. None is used to specify no bound
_bounds = [(0, None), (75.0001, None)]
def _cumulative_hazard(self, params, times):
alpha, beta = params
return alpha / (beta - times)
two_f = TwoParamInverseTimeHazardFitter()
two_f.fit(T, E)
two_f.print_summary()
ax = itf.plot(ci_show=False, figsize=(8,5))
ax = naf.plot(ax=ax, ci_show=False)
two_f.plot(ax=ax)
plt.legend()
Explanation: The best fit of the model to the data is:
$$H_1(t) = \frac{21.51}{80-t}$$
Our choice of 80 as an asymptote was maybe mistaken, so let's allow the asymptote to be another parameter:
$$ H_2(t; \alpha, \beta) = \frac{\alpha}{\beta-t} $$
If we define the model this way, we need to add a bound to the values that $\beta$ can take. Obviously it can't be smaller than or equal to the maximum observed duration. Generally, the cumulative hazard must be positive and non-decreasing. Otherwise the model fit will hit convergence problems.
End of explanation
from lifelines.fitters import ParametricUnivariateFitter
class ThreeParamInverseTimeHazardFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['alpha_', 'beta_', 'gamma_']
_bounds = [(0, None), (75.0001, None), (0, None)]
# this is the only function we need to define. It always takes two arguments:
# params: an iterable that unpacks the parameters you'll need in the order of _fitted_parameter_names
# times: a numpy vector of times that will be passed in by the optimizer
def _cumulative_hazard(self, params, times):
a, b, c = params
return a / (b - times) ** c
three_f = ThreeParamInverseTimeHazardFitter()
three_f.fit(T, E)
three_f.print_summary()
ax = itf.plot(ci_show=False, figsize=(8,5))
ax = naf.plot(ax=ax, ci_show=False)
ax = two_f.plot(ax=ax, ci_show=False)
ax = three_f.plot(ax=ax)
plt.legend()
Explanation: From the output, we see that the value of 76.55 is the suggested asymptote, that is:
$$H_2(t) = \frac{16.50} {76.55 - t}$$
The curve also appears to track the Nelson-Aalen model better. Let's try one additional parameter, $\gamma$, some sort of measure of decay.
$$H_3(t; \alpha, \beta, \gamma) = \frac{\alpha}{(\beta-t)^\gamma} $$
End of explanation
fig, axs = plt.subplots(3, figsize=(7, 8), sharex=True)
new_timeline = np.arange(0, 85)
three_f = ThreeParamInverseTimeHazardFitter().fit(T, E, timeline=new_timeline)
three_f.plot_hazard(label='hazard', ax=axs[0]).legend()
three_f.plot_cumulative_hazard(label='cumulative hazard', ax=axs[1]).legend()
three_f.plot_survival_function(label='survival function', ax=axs[2]).legend()
fig.subplots_adjust(hspace=0)
# Hide x labels and tick labels for all but bottom plot.
for ax in axs:
ax.label_outer()
Explanation: Our new asymptote is at $t\approx 100, \text{c.i.}=(87, 112)$. The model appears to fit the early times better than the previous models as well, however our $\alpha$ parameter has more uncertainty now. Continuing to add parameters isn't advisable, as we will overfit to the data.
Why fit parametric models anyways? Taking a step back, we are fitting parametric models and comparing them to the non-parametric Nelson-Aalen. Why not just always use the Nelson-Aalen model?
1) Sometimes we have scientific motivations to use a parametric model. That is, using domain knowledge, we may know the system has a parametric model and we wish to fit to that model.
2) In a parametric model, we are borrowing information from all observations to determine the best parameters. To make this more clear, imagine taking a single observation and changing its value wildly. The fitted parameters would change as well. On the other hand, imagine doing the same for a non-parametric model. In this case, only the local survival function or hazard function would change. Because parametric models can borrow information from all observations, and there are much fewer unknowns than a non-parametric model, parametric models are said to be more statistically efficient.
3) Extrapolation: non-parametric models are not easily extended to values outside the observed data. On the other hand, parametric models have no problem with this. However, extrapolation outside observed values is a very dangerous activity.
End of explanation
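To make point 3 concrete — an illustrative sketch only, assuming the fitted three_f object above and the survival_function_at_times accessor — a parametric fit can be evaluated past the largest observed duration (with the usual caveat that such extrapolation should be interpreted cautiously):
import numpy as np
future_times = np.array([80., 90., 95.])   # beyond the bulk of the observed durations
print(three_f.survival_function_at_times(future_times))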
import autograd.numpy as np
from autograd.scipy.stats import norm
# I'm shifting this to exaggerate the effect
T_ = T + 10
class ThreeParameterWeibullFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ["lambda_", "rho_", "theta_"]
_bounds = [(0, None), (0, None), (0, T.min()-0.001)]
def _cumulative_hazard(self, params, times):
lambda_, rho_, theta_ = params
return ((times - theta_) / lambda_) ** rho_
tpw = ThreeParameterWeibullFitter()
tpw.fit(T_, E)
tpw.print_summary()
ax = tpw.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T_, E).plot(ax=ax, ci_show=False)
Explanation: 3-parameter Weibull distribution
We can easily extend the built-in Weibull model (lifelines.WeibullFitter) to include a new location parameter:
$$ H(t) = \left(\frac{t - \theta}{\lambda}\right)^\rho $$
(When $\theta = 0$, this is just the 2-parameter case again). In lifelines custom models, this looks like:
End of explanation
from autograd.scipy.stats import norm
class InverseGaussianFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['lambda_', 'mu_']
def _cumulative_density(self, params, times):
mu_, lambda_ = params
v = norm.cdf(np.sqrt(lambda_ / times) * (times / mu_ - 1), loc=0, scale=1) + \
np.exp(2 * lambda_ / mu_) * norm.cdf(-np.sqrt(lambda_ / times) * (times / mu_ + 1), loc=0, scale=1)
return v
def _cumulative_hazard(self, params, times):
return -np.log(1-np.clip(self._cumulative_density(params, times), 1e-15, 1-1e-15))
igf = InverseGaussianFitter()
igf.fit(T, E)
igf.print_summary()
ax = igf.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T, E).plot(ax=ax, ci_show=False)
Explanation: Inverse Gaussian distribution
The inverse Gaussian distribution is another popular model for survival analysis. Unlike other models, its hazard does not asymptotically converge to 0, allowing for a long tail of survival. Let's model this, using the same parameterization from Wikipedia
End of explanation
class GompertzFitter(ParametricUnivariateFitter):
# this parameterization is slightly different than wikipedia.
_fitted_parameter_names = ['nu_', 'b_']
def _cumulative_hazard(self, params, times):
nu_, b_ = params
return nu_ * (np.expm1(times * b_))
T, E = waltons['T'], waltons['E']
ggf = GompertzFitter()
ggf.fit(T, E)
ggf.print_summary()
ax = ggf.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T, E).plot(ax=ax, ci_show=False)
Explanation: Gompertz
End of explanation
class APGWFitter(ParametricUnivariateFitter):
# this parameterization is slightly different than wikipedia.
_fitted_parameter_names = ['kappa_', 'gamma_', 'phi_']
def _cumulative_hazard(self, params, t):
kappa_, phi_, gamma_ = params
return (kappa_ + 1) / kappa_ * ((1 + ((phi_ * t) ** gamma_) /(kappa_ + 1)) ** kappa_ -1)
apg = APGWFitter()
apg.fit(T, E)
apg.print_summary(2)
ax = apg.plot_cumulative_hazard(figsize=(8,5))
ax = NelsonAalenFitter().fit(T, E).plot(ax=ax, ci_show=False)
Explanation: APGW
From the paper, "A Flexible Parametric Modelling Framework for Survival Analysis", https://arxiv.org/pdf/1901.03212.pdf
End of explanation
n = 100
T = 5 * np.random.random(n)**2
T_censor = 10 * np.random.random(n)**2
E = T < T_censor
T_obs = np.minimum(T, T_censor)
from autograd_gamma import betainc
class BetaFitter(ParametricUnivariateFitter):
_fitted_parameter_names = ['alpha_', 'beta_', "m_"]
_bounds = [(0, None), (0, None), (T.max(), None)]
def _cumulative_density(self, params, times):
alpha_, beta_, m_ = params
return betainc(alpha_, beta_, times / m_)
def _cumulative_hazard(self, params, times):
return -np.log(1-self._cumulative_density(params, times))
beta_fitter = BetaFitter().fit(T_obs, E)
beta_fitter.plot()
beta_fitter.print_summary()
Explanation: Bounded lifetimes using the beta distribution
Maybe your data is bounded between 0 and some (unknown) upperbound M? That is, lifetimes can't be more than M. Maybe you know M, maybe you don't.
End of explanation |
6,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting sensor layouts of EEG Systems
This example illustrates how to load all the EEG system montages
shipped in MNE-python, and display them on the fsaverage template.
Step1: check all montages | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Joan Massich <[email protected]>
#
# License: BSD Style.
from mayavi import mlab
import os.path as op
import mne
from mne.channels.montage import get_builtin_montages
from mne.datasets import fetch_fsaverage
from mne.viz import plot_alignment
subjects_dir = op.dirname(fetch_fsaverage())
Explanation: Plotting sensor layouts of EEG Systems
This example illustrates how to load all the EEG system montages
shipped in MNE-python, and display them on the fsaverage template.
End of explanation
for current_montage in get_builtin_montages():
montage = mne.channels.read_montage(current_montage,
unit='auto',
transform=False)
info = mne.create_info(ch_names=montage.ch_names,
sfreq=1,
ch_types='eeg',
montage=montage)
fig = plot_alignment(info, trans=None,
subject='fsaverage',
subjects_dir=subjects_dir,
eeg=['projected'],
)
mlab.view(135, 80)
mlab.title(montage.kind, figure=fig)
Explanation: check all montages
End of explanation |
6,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Install Misopy
Step1: Restart the kernel (Kernel -> Restart), so that IPython finds Misopy
Get Samtools
Step2: The samtools binary should now be in bin/samtools
Generate Indices of the bam Files
Now the bam files from the Galaxy history can be imported with get(history_id)
Then index files of the bam files can be generated via samtools
Step3: If everything went well there should now be index files (*.bai) next to the bam files.
Step4: Generate Index of an Annotation File
An annotation file is needed.
Step5: A script for the conversion of gtf(gff2) to gff3 is needed
Step6: ... to convert the annotation file to the newer gff3 standard
Step7: Now misopy is needed to generate an index for this annotation file
The index will be stored in the 'indexed' folder
Step8: Now everything that is needed should be available to create a Sashimi plot
Generate Sashimi Plot
The function for the sashimi plot is imported
Step9: A sashimi_settings file specifies aspects of the plot.
Step10: An output directory is created and a specific region of interest is plotted.
The ID of the event we want plotted comes from a valid name from the indexed gff file. (See last column of 'indexed/genes.gff')
Step11: Finally save the pdf back to the Galaxy history | Python Code:
!pip install --user --quiet misopy
Explanation: Install Misopy
End of explanation
from urllib import urlretrieve
urlretrieve("http://depot.galaxyproject.org/package/linux/x86_64/samtools/samtools-0.1.19-Linux-x86_64.tgz","samtools.tgz")
!tar -xf samtools.tgz
Explanation: Restart the kernel (Kernel -> Restart), so that IPython finds Misopy
Get Samtools
End of explanation
get(1)
get(2)
!mv 1 461177.bam
!mv 2 461178.bam
!bin/samtools index 461177.bam
!bin/samtools index 461178.bam
Explanation: The samtools binary should now be in bin/samtools
Generate Indices of the bam Files
Now the bam files from the Galaxy history can be imported with get(history_id)
Then index files of the bam files can be generated via samtools
End of explanation
!ls *.ba*
Explanation: If everything went well there should now be index files (*.bai) next to the bam files.
End of explanation
get(3)
!mv 3 dm3.ensGene.gtf
Explanation: Generate Index of an Annotation File
An annotation file is needed.
End of explanation
urlretrieve("http://genes.mit.edu/burgelab/miso/scripts/gtf2gff3.pl","gtf2gff3.pl")
Explanation: A script for the conversion of gtf(gff2) to gff3 is needed
End of explanation
!perl gtf2gff3.pl dm3.ensGene.gtf > dm3.ensGene.gff3
Explanation: ... to convert the annotation file to the newer gff3 standard
End of explanation
import misopy
from misopy import index_gff
index_gff.index_gff("dm3.ensGene.gff3", "indexed")
Explanation: Now misopy is needed to generate an index for this annotation file
The index will be stored in the 'indexed' folder
End of explanation
from misopy.sashimi_plot import sashimi_plot as sash_plot
help(sash_plot.plot_event)
Explanation: Now everything that is needed should be available to create a Sashimi plot
Generate Sashimi Plot
The function for the sashimi plot is imported
End of explanation
import ConfigParser
config = ConfigParser.RawConfigParser()
config.add_section('data')
config.set('data', 'bam_prefix', '/import')
config.set('data', 'bam_files', '["461177.bam", "461178.bam"]')
config.add_section('plotting')
config.set('plotting', 'fig_width', '7')
config.set('plotting', 'fig_height', '5')
config.set('plotting', 'intron_scale', '30')
config.set('plotting', 'exon_scale', '4')
config.set('plotting', 'logged', 'False')
config.set('plotting', 'font_size', '5')
config.set('plotting', 'ymax', '2500')
config.set('plotting', 'show_posteriors', 'False')
config.set('plotting', 'bar_posteriors', 'False')
config.set('plotting', 'number_junctions', 'True')
config.set('plotting', 'colors', '["#770022","#FF8800"]')
with open('sashimi_settings.txt', 'wb') as configfile:
config.write(configfile)
Explanation: A sashimi_settings file specifies aspects of the plot.
End of explanation
!mkdir output
sash_plot.plot_event("FBgn0039909","indexed","sashimi_settings.txt","output")
Explanation: An output directory is created and a specific region of interest is plotted.
The ID of the event we want plotted comes from a valid name from the indexed gff file. (See last column of 'indexed/genes.gff')
End of explanation
put("output/FBgn0039909.pdf")
Explanation: Finally save the pdf back to the Galaxy history
End of explanation |
6,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
seaborn.countplot
Bar graphs are useful for displaying relationships between categorical data and at least one numerical variable. seaborn.countplot is a barplot where the dependent variable is the number of occurrences of each category of the independent variable.
dataset
Step1: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
Step2: Basic plot
Step3: color by a category
Step4: make plot horizontal
Step5: Saturation
Step6: Targeting a non-default axes
Step7: Add error bars
Step8: add black bounding lines
Step9: Remove color fill | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
plt.rcParams['figure.figsize'] = (20.0, 10.0)
plt.rcParams['font.family'] = "serif"
df = pd.read_csv('../../datasets/movie_metadata.csv')
df.head()
Explanation: seaborn.countplot
Bar graphs are useful for displaying relationships between categorical data and at least one numerical variable. seaborn.countplot is a barplot where the dependent variable is the number of occurrences of each category of the independent variable.
dataset: IMDB 5000 Movie Dataset
End of explanation
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classificaitons
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
top_categories = df.groupby('Category').aggregate(sum).sort_values('Count', ascending=False).index
howmany=10
# add an indicator whether a movie is short or long, split at 100 minutes runtime
df['islong'] = df.duration.transform(lambda x: int(x > 100))
df = df.loc[df.Category.isin(top_categories[:howmany])]
# sort in descending order
#df = df.loc[df.groupby('Category').transform(sum).sort_values('Count', ascending=False).index]
df.head()
Explanation: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
End of explanation
p = sns.countplot(data=df, x = 'Category')
Explanation: Basic plot
End of explanation
p = sns.countplot(data=df,
x = 'Category',
hue = 'islong')
Explanation: color by a category
End of explanation
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong')
Explanation: make plot horizontal
End of explanation
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1)
Explanation: Saturation
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2)
sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
ax=ax[1])
Explanation: Targeting a non-default axes
End of explanation
import numpy as np
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=7*np.arange(num_categories))
Explanation: Add error bars
End of explanation
import numpy as np
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=7*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2)
Explanation: add black bounding lines
End of explanation
import numpy as np
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=7*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2,
fill=False)
import numpy as np
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=7*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2)
sns.set(font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=3*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2)
help(sns.set)
plt.rcParams['font.family'] = "cursive"
#sns.set(style="white",font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=3*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2)
plt.rcParams['font.family'] = 'Times New Roman'
#sns.set_style({'font.family': 'Helvetica'})
sns.set(style="white",font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=3*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2)
bg_color = (0.25, 0.25, 0.25)
sns.set(rc={"font.style":"normal",
"axes.facecolor":bg_color,
"figure.facecolor":bg_color,
"text.color":"black",
"xtick.color":"black",
"ytick.color":"black",
"axes.labelcolor":"black"})
#sns.set_style({'font.family': 'Helvetica'})
#sns.set(style="white",font_scale=1.25)
num_categories = df.Category.unique().size
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=1,
xerr=3*np.arange(num_categories),
edgecolor=(0,0,0),
linewidth=2)
leg = p.get_legend()
leg.set_title("Duration")
labs = leg.texts
labs[0].set_text("Short")
labs[1].set_text("Long")
leg.get_title().set_color('white')
for lab in labs:
lab.set_color('white')
p.axes.xaxis.label.set_text("Counts")
plt.text(900,0, "Bar Plot", fontsize = 95, color='white', fontstyle='italic')
p.get_figure().savefig('../figures/barplot.png')
Explanation: Remove color fill
End of explanation |
6,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Data preparation by using a template to submit a job to Cloud Dataflow
Labels
GCP, Cloud Dataflow, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.
Details
Intended use
Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.
Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Set sample parameters
Step4: Example pipeline that uses the component
Step5: Compile the pipeline
Step6: Submit the pipeline for execution
Step7: Inspect the output | Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
Explanation: Name
Data preparation by using a template to submit a job to Cloud Dataflow
Labels
GCP, Cloud Dataflow, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.
Details
Intended use
Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.
Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :----------| :----------| :---------- | :----------|
project_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | |
gcs_path | The path to a Cloud Storage bucket containing the job creation template. It must be a valid Cloud Storage URL beginning with 'gs://'. | No | GCSPath | | |
launch_parameters | The parameters that are required to launch the template. The schema is defined in LaunchTemplateParameters. The parameter jobName is replaced by a generated name. | Yes | Dict | A JSON object which has the same structure as LaunchTemplateParameters | None |
location | The regional endpoint to which the job request is directed.| Yes | GCPRegion | | None |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure.| Yes | GCSPath | | None |
validate_only | If True, the request is validated but not executed. | Yes | Boolean | | False |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |
Input data schema
The input gcs_path must contain a valid Cloud Dataflow template. The template can be created by following the instructions in Creating Templates. You can also use Google-provided templates.
Output
Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.
Caution & requirements
To use the component, the following requirements must be met:
- Cloud Dataflow API is enabled.
- The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
- The Kubeflow user service account is a member of:
- roles/dataflow.developer role of the project.
- roles/storage.objectViewer role of the Cloud Storage Object gcs_path.
- roles/storage.objectCreator role of the Cloud Storage Object staging_dir.
Detailed description
You can execute the template locally by following the instructions in Executing Templates. See the sample code below to learn how to execute the template.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml')
help(dataflow_template_op)
Explanation: Load the component using KFP SDK
End of explanation
!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code.
In this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input:
End of explanation
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Template'
OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)
Explanation: Set sample parameters
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch template pipeline',
description='Dataflow launch template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
gcs_path = 'gs://dataflow-templates/latest/Word_Count',
launch_parameters = json.dumps({
'parameters': {
'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt',
'output': OUTPUT_PATH
}
}),
location = '',
validate_only = 'False',
staging_dir = GCS_WORKING_DIR,
wait_interval = 30):
dataflow_template_op(
project_id = project_id,
gcs_path = gcs_path,
launch_parameters = launch_parameters,
location = location,
validate_only = validate_only,
staging_dir = staging_dir,
wait_interval = wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation
!gsutil cat $OUTPUT_PATH*
Explanation: Inspect the output
End of explanation |
6,338 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Clustering Data Using K-Means
| Python Code::
from sklearn.cluster import KMeans
# Step 1: Initialise kmeans clustering model for 5 clusters and
# fit on training data
k_means = KMeans(n_clusters=5,
random_state=101)
k_means.fit(X_train)
# Step 2: Predict cluster for training and test data and add results
# as a column to the respective dataframes
X_train['cluster'] = k_means.predict(X_train)
X_test['cluster'] = k_means.predict(X_test)
# Step 3: Print out cluster center arrays and inertia value
print('Cluster Centers:', k_means.cluster_centers_)
print('Inertia:', k_means.inertia_)
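A possible follow-up (an illustrative sketch, not part of the original snippet, assuming the X_train frame from above): since inertia always decreases as n_clusters grows, a common way to pick the number of clusters is to look for an "elbow" in the inertia curve:
# Refit on the original features only (drop the cluster column added above)
features = X_train.drop(columns='cluster')
inertias = {k: KMeans(n_clusters=k, random_state=101).fit(features).inertia_
            for k in range(2, 11)}
print(inertias)   # inspect where the decrease in inertia starts to flatten out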
|
6,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WT-Übung 2 - Aufgabe 7b
A chimpanzee has two urns in front of him
Step1: Simulation
$N$ random games are simulated and evaluated. In the process, the absolute frequencies of the (indicator) events
$$
Z_k = \{\text{the game ends after exactly $k$ draws}\},\quad k \leq 10
$$
are determined.
Step2: Sample game sequences
Step3: Results
Absolute Häufigkeiten der Anzahl gezogener Kugeln pro Spiel bei $N$ Durchläufen | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from random import random, choice
# Balls (the actual values do not matter yet)
black, red, green, white = 0, 1, 2, 7
# Urns
kugeln_urne1 = [white, white, white, black, black]
kugeln_urne2 = [white, green, green, red, red]
# Probabilities
p_urne1 = 0.7
p_urne2 = 0.3 # 1 - p_urne1
def color(s, c): return '\033[1;3{}m{}\033[0m'.format(c, s)
Explanation: WT-Übung 2 - Aufgabe 7b
A chimpanzee has two urns in front of him: urn 1 contains three white and two
black balls, urn 2 one white, two green and two red balls. It is known about the
chimpanzee's behaviour that he reaches into the first urn with probability 0.7 and
into the second urn with probability 0.3. The chimpanzee now draws balls (without
replacement) until he picks a red ball. What is the probability that he draws at
most three balls?
Parameters
End of explanation
N = 100000  # number of games
spiele = []  # game sequences
h_N = [0] * 11  # absolute frequencies of the events Z_k
for _ in range(N):
urne1, urne2 = list(kugeln_urne1), list(kugeln_urne2) # copy
spiel, kugel = [], None
while kugel != red:
        # choose an urn (urn 1 may be empty, in which case urn 2 is used)
urne = urne1 if random() < p_urne1 and urne1 else urne2
        # draw a random ball from the chosen urn and remember it
kugel = choice(urne)
del urne[urne.index(kugel)]
spiel.append(color('●' + '₁₂'[urne == urne2], kugel))
h_N[len(spiel)] += 1
spiele.append(spiel)
Explanation: Simulation
$N$ random games are simulated and evaluated. In the process, the absolute frequencies of the (indicator) events
$$
Z_k = \{\text{the game ends after exactly $k$ draws}\},\quad k \leq 10
$$
are determined.
End of explanation
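As an optional cross-check (an illustrative addition, not part of the original exercise sheet), the probability of stopping within three draws can also be computed exactly by recursively enumerating every possible game history:
def prob_red_within(u1, u2, draws_left):
    # u1 = (white, black) left in urn 1, u2 = (white, green, red) left in urn 2
    if draws_left == 0:
        return 0.0
    n1, n2 = sum(u1), sum(u2)
    p1 = 0.7 if n1 > 0 else 0.0      # urn 1 is only available while it still has balls
    p2 = 1.0 - p1
    total = 0.0
    if n1 > 0:                        # urn 1 never contains a red ball
        for i, c in enumerate(u1):
            if c:
                v1 = list(u1); v1[i] -= 1
                total += p1 * (c / n1) * prob_red_within(tuple(v1), u2, draws_left - 1)
    for i, c in enumerate(u2):        # index 2 of u2 is the red colour
        if c:
            if i == 2:
                total += p2 * (c / n2)
            else:
                v2 = list(u2); v2[i] -= 1
                total += p2 * (c / n2) * prob_red_within(u1, tuple(v2), draws_left - 1)
    return total

print("exact P(Z1 + Z2 + Z3) =", prob_red_within((3, 2), (1, 2, 2), 3))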
for n, spiel in zip(range(20), spiele):
print("{:2} {}".format(n + 1, ' '.join(spiel)))
Explanation: Sample game sequences
End of explanation
plt.stem(h_N)
plt.axvspan(0, 3.5, 0, 1, alpha=0.2)
plt.xlabel('Number of draws $k$')
plt.ylabel('absolute frequency $h_N(Z_k)$');
print("P(Z₁ + Z₂ + Z₃) ≅ {:.4}".format(sum(h_N[:4]) / N))
Explanation: Results
Absolute frequencies of the number of balls drawn per game over $N$ runs
End of explanation |
6,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
2 images are produced. One with a good channel and one with a channel
that does not see any evoked field.
It is also demonstrated how to reorder the epochs using a 1d spectral
embedding as described in
Step1: Set parameters
Step2: Show event related fields images | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
2 images are produced. One with a good channel and one with a channel
that does not see any evoked field.
It is also demonstrated how to reorder the epochs using a 1d spectral
embedding as described in:
Graph-based variability estimation in single-trial event-related neural
responses A. Gramfort, R. Keriven, M. Clerc, 2010,
Biomedical Engineering, IEEE Trans. on, vol. 57 (5), 1051-1061
https://hal.inria.fr/inria-00497023
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
Explanation: Set parameters
End of explanation
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.manifold import spectral_embedding  # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=0.5, vmin=-100,
vmax=250, colorbar=True, order=order_func,
overlay_times=plt_times, show=True)
Explanation: Show event related fields images
End of explanation |
6,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature Visualizer
This notebook provides examples of visualizations done in other data studies and modifies them using the Yellowbrick library
The first set of examples comes from the blog titled
Step2: Using the above example, the author explains how important it is to get a visual representation of the data. He goes on to create graphics to show the characteristics of all the features along with a correlation matrix to illustrate relationships between the features. We will do the same using the Yellowbrick API and the "concrete" data set. Instead of creating a correlation matrix, we will create a covariance matrix.
Step3: Now let's look at histograms of the features to get their characteristics.
Step4: Now let's look at the covariance matrix of the features.
Step5: The next set of examples comes from lecture notes from the University of California Irvine CS277 class.
There are several scatterplots showing different types of relationships between features, such as linear and quadratic relationships. We will take the 'concrete' data set and use Yellowbrick's best fit curve visualization to show relationships between the 'age' feature and the 'strength' target variable.
Step6: The document also shows an example using parallel coordinates as an example of multivariate visualization whereby all dimensions can be shown. Yellowbrick has an example that shows exactly how this can be achieved. | Python Code:
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
g = yb.anscombe()
Explanation: Feature Visualizer
This notebook provides examples of visualizations done in other data studies and modifies them using the Yellowbrick library
The first set of examples comes from the blog titled: Show Me The Data: Using Graphics for Exploratory Data Analysis. This blog begins with the same case outlined in the Yellowbrick examples notebook: Anscombe's quartet, used to stress the importance of visualization when conducting data analysis. It is worth showing again.
End of explanation
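A quick numerical sketch (an addition, assuming seaborn is available purely as a convenient source of the quartet data) of why the plot above matters: the four data sets share nearly identical summary statistics despite looking completely different.
import seaborn as sns
anscombe = sns.load_dataset("anscombe")  # columns: dataset, x, y
stats = anscombe.groupby("dataset").agg(x_mean=("x", "mean"), x_var=("x", "var"),
                                        y_mean=("y", "mean"), y_var=("y", "var"))
stats["xy_corr"] = anscombe.groupby("dataset").apply(lambda g: g.x.corr(g.y))
print(stats)  # nearly identical statistics across the four very different plots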
from download import download_all
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
}
def load_data(name, download=True):
Loads and wrangles the passed in dataset by name.
If download is specified, this method will download any missing files.
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
# Load the data
df = load_data('concrete')
features = ['cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age']
target = 'strength'
# Get the X and y data from the DataFrame
X = df[features].values
y = df[target].values
Explanation: Using the above example, the author explains how important it is to get a visual representation of the data. He goes on to create graphics to show the characteristics of all the features along with a correlation matrix to illustrate relationships between the features. We will do the same using the Yellowbrick API and the "concrete" data set. Instead of creating a correlation matrix, we will create a covariance matrix.
End of explanation
feature_hist = df.hist(column=features)
Explanation: Now let's look at histograms of the features to get their characteristics.
End of explanation
from yellowbrick.features.rankd import Rank2D
# Instantiate the visualizer with the covariance ranking algorithm
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()                   # Draw/show the data
Explanation: Now let's look at the covariance matrix of the features.
End of explanation
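For comparison (an added sketch, not part of the original post), the same ranking view can also be produced with the Pearson correlation algorithm:
pearson_viz = Rank2D(features=features, algorithm='pearson')
pearson_viz.fit(X, y)        # Fit the data to the visualizer
pearson_viz.transform(X)     # Transform the data
pearson_viz.show()           # Draw/show the data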
x_val = df['age'].values
fig, ax = plt.subplots()
ax.set_xlabel('age')
ax.set_ylabel('strength')
ax.set_title('Best fit curve')
ax.scatter(x_val, y)
g = yb.bestfit.draw_best_fit(x_val, y, ax, estimator='linear')
Explanation: The next set of examples comes from lecture notes from the University of California Irvine CS277 class.
There are several scatterplots showing different types of relationships between features, such as linear and quadratic relationships. We will take the 'concrete' data set and use Yellowbrick's best fit curve visualization to show relationships between the 'age' feature and the 'strength' target variable.
End of explanation
from yellowbrick.features.pcoords import ParallelCoordinates
# Load the classification data set
data = load_data('occupancy')
# Specify the features of interest and the classes of the target
features = ["temperature", "relative humidity", "light", "C02", "humidity"]
classes = ['unoccupied', 'occupied']
# Extract the numpy arrays from the data frame
X = data[features].values
y = data.occupancy.values
# Instantiate the visualizer
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()         # Draw/show the data
Explanation: The document also shows an example using parallel coordinates as an example of multivariate visualization whereby all dimensions can be shown. Yellowbrick has an example that shows exactly how this can be achieved.
End of explanation |
6,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ds1
Step1:
Step2: Analysis of various datasets, onset frames, 441 samples / frame = 100 Hz
format
Step3: conclusions (dataset 2, 100 Hz frames) | Python Code:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# bound = int(len(X)*0.8)
# X_train = X[:bound, :]
# X_test = X[bound:, :]
# y_train = y[:bound]
# y_test = y[bound:]
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)
Explanation: ds1: (97328, 111)
ds2: (345916, 111)
ds3: (5538, 111)
ds4: (723144, 111)
End of explanation
onsets = [i for i in range(len(y)) if y[i] == 1]
print(len(onsets))
# print(onsets)
def plot_frame(frame, frame_size):
fig, ax = plt.subplots(figsize=(4, 4))
_ = ax.plot(range(len(frame)), frame)
vertical_line_x = frame_size
while vertical_line_x < len(frame):
ax.axvline(x=vertical_line_x, color='red')
vertical_line_x += frame_size
shuffle(onsets)
onsets_part = onsets[:5]
print(onsets_part)
for i in onsets_part:
plot_frame(np.ravel(X[i-2:i+5, :]), X.shape[1])
Explanation:
End of explanation
# subsampling
plot_frame(X[i+6], len(X[i+6]))
plot_frame(X[i+6][::4], len(X[i+6][::4]))
Explanation: Analysis of various datasets, onset frames, 441 samples / frame = 100 Hz
format:
effective onset / first deflection
if only one value: first deflection
ds1:
[87238, 54493, 58755, 23774, 55508, 48497, 33519, 57751, 90489, 50501]
-2 / 0
-2 / 0
? / -2
? / -2
? / ?
? / -2
-2 / 0
? / ?
-2 / 0
? / -2
inconsistently annotated, sometimes clearly too late
--> -2 to 0
ds2:
[298003, 321707, 21606, 311640, 123847, 149672, 4183, 42441, 81900, 164420]
3/3
3/4
0/2
0/3
1/4
1/3
0/4
2/3
0/4
1/3
--> 2 to 4
ds3:
[1605, 1950, 2537, 2611, 1777, 4485, 437, 2983, 2686, 1263]
-4/0
-1/0
0/2
1/2
?/0
?/1
0/1
0/0
1/2
?/0
--> 0 to 2
ds4:
[219738, 149302, 95957, 657513, 667847, 653544, 654901, 698624, 244043, 340672, 486978, 14403, 553750, 638052, 454921, 627638, 152653, 404285, 485007, 21380]
?/?
?/0
0
?
1
?
?
?
1
1
0
0
0
-2
?
?
--> 0 to 1
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
clf = RandomForestClassifier(n_jobs=-1, n_estimators=30)
clf.fit(X_train, y_train)
y_train_predicted = clf.predict(X_train)
y_test_predicted = clf.predict(X_test)
# ds2
# y: only frame 0
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds2
# y: frame 1, 2, 3, 4
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds2
# y: frame 1, 2, 3, 4
# subsampling: 4x
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds1 + ds2
# y: frame 1, 2, 3, 4
# subsampling: 4x
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds1 + ds2
# y: depending on the dataset
# subsampling: 4x
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds1 + ds2 + ds3
# y: frame 1, 2, 3, 4
# subsampling: 4x
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds1 + ds2 + ds3 + ds4
# y: depending on the dataset
# subsampling: 4x
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
# ds1 + ds2 + ds3 + ds4
# y: depending on the dataset
# subsampling: 4x
# n_estimators=30
print(classification_report(y_train, y_train_predicted))
print(classification_report(y_test, y_test_predicted))
Explanation: conclusions (dataset 2, 100 Hz frames):
earliest onset: from frame 2
latest onset: up to and including frame 5
4x subsampling still seems to fit
min: from frame 2 (882)
max: up to and including frame 5 (2646)
4x subsampling still seems to fit
currently: 0-440
new: 441-2204 (4x)
also possible: 50 Hz, 882-1763 or 882-2646 or already from 0
or: overlapping
End of explanation |
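A short sketch of how the frame window estimated above could be turned into training labels (an addition; annotated_onsets and n_frames stand in for the real annotation data): frames 2 through 5 after each annotated onset are marked as positive.
import numpy as np
def make_labels(annotated_onsets, n_frames, first=2, last=5):
    # label frames first..last (inclusive) after each annotated onset as onsets
    y = np.zeros(n_frames, dtype=int)
    for onset in annotated_onsets:
        y[onset + first : onset + last + 1] = 1
    return y
# e.g. y = make_labels(annotated_onsets, X.shape[0])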
6,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seismic acquisition fiddling
The idea is to replicate what we've done so far but with 3 enhancements
Step5: Survey object
Step6: Perhaps s and r should be objects too. I think you might want to have survey.receivers.x for the list of x locations, for example.
Instantiate and plot
Step7: With a hierarchical index you can do cool things, e.g. show the last five sources
Step8: Export GeoDataFrames to GIS shapefile.
Step9: Midpoint calculations
We need midpoints. There is a midpoint between every source-receiver pair.
Hopefully it's not too inelegant to get to the midpoints now that we're using this layout object thing.
Step10: As well as knowing the (x,y) of the midpoints, we'd also like to record the distance from each s to each live r (each r in the live patch). This is easy enough to compute
Step11: Make a Geoseries of the midpoints, offsets and azimths
Step12: Save to a shapefile if desired.
Step13: Spider plot
Step14: We need lists (or arrays) to pass into the matplotlib quiver plot. This takes four main parameters
Step15: Bins
The bins are a new geometry, related to but separate from the survey itself, and the midpoints. We will model them as a GeoDataFrame of polygons. The steps are
Step16: New spatial join
Thank you to Jake Wasserman and Kelsey Jordahl for this code snippet, and many pointers.
This takes about 20 seconds to run on my iMac, compared to something close to 30 minutes for the old nested loops. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Point, LineString
import geopandas as gpd
import pandas as pd
from fiona.crs import from_epsg
%matplotlib inline
Explanation: Seismic acquisition fiddling
The idea is to replicate what we've done so far but with 3 enhancements:
With a Survey object to hold the various features of a survey.
With more GeoPandas stuff, and less fussing with (x,y)'s directly.
Making bins and assigning midpoints to them.
We'll start with the usual prelims...
End of explanation
class Survey:
A seismic survey.
def __init__(self, params):
# Assign the variables from the parameter dict,
# using dict.items() for Python 3 compatibility.
for k, v in params.items():
setattr(self, k, v)
# These are just a convenience; we could use the
# tuples directly, or make objects with attrs.
self.xmi = self.corner[0]
self.ymi = self.corner[1]
self.x = self.size[0]
self.y = self.size[1]
self.SL = self.line_spacing[0]
self.RL = self.line_spacing[1]
self.si = self.point_spacing[0]
self.ri = self.point_spacing[1]
self.shiftx = -self.si/2.
self.shifty = -self.ri/2.
@property
def lines(self):
Returns number of (src, rcvr) lines.
slines = int(self.x/self.SL) + 1
rlines = int(self.y/self.RL) + 1
return slines, rlines
@property
def points_per_line(self):
Returns number of (src, rcvr) points per line.
spoints = int(self.y/self.si) + 2
rpoints = int(self.x/self.ri) + 2
return spoints, rpoints
@property
def src(self):
s = [Point(self.xmi+line*self.SL, self.ymi+s*self.si)
for line in range(self.lines[0])
for s in range(self.points_per_line[0])
]
S = gpd.GeoSeries(s)
        S.crs = from_epsg(self.epsg)
return S
@property
def rcvr(self):
r = [Point(self.xmi + r*self.ri + self.shiftx, self.ymi + line*self.RL - self.shifty)
for line in range(self.lines[1])
for r in range(self.points_per_line[1])
]
R = gpd.GeoSeries(r)
R.crs = from_epsg(self.epsg)
return R
@property
def layout(self):
Provide a GeoDataFrame of all points,
labelled as columns and in hierarchical index.
# Feels like there might be a better way to do this...
sgdf = gpd.GeoDataFrame({'geometry': self.src, 'station': 'src'})
rgdf = gpd.GeoDataFrame({'geometry': self.rcvr, 'station': 'rcvr'})
# Concatenate with a hierarchical index
layout = pd.concat([sgdf,rgdf], keys=['sources','receivers'])
layout.crs = from_epsg(self.epsg)
return layout
Explanation: Survey object
End of explanation
params = {'corner': (5750000,4710000),
'size': (3000,1800),
'line_spacing': (600,600),
'point_spacing': (100,100),
'epsg': 26911 # http://spatialreference.org/ref/epsg/26911/
}
survey = Survey(params)
s = survey.src
r = survey.rcvr
r[:10]
layout = survey.layout
layout[:10]
Explanation: Perhaps s and r should be objects too. I think you might want to have survey.receivers.x for the list of x locations, for example.
Instantiate and plot
End of explanation
layout.loc['sources'][-5:]
layout.crs
ax = layout.plot()
Explanation: With a hierarchical index you can do cool things, e.g. show the last five sources:
End of explanation
# gdf.to_file('src_and_rcvr.shp')
Explanation: Export GeoDataFrames to GIS shapefile.
End of explanation
midpoint_list = [LineString([r, s]).interpolate(0.5, normalized=True)
                 for r in layout.loc['receivers'].geometry
                 for s in layout.loc['sources'].geometry
]
Explanation: Midpoint calculations
We need midpoints. There is a midpoint between every source-receiver pair.
Hopefully it's not too inelegant to get to the midpoints now that we're using this layout object thing.
End of explanation
offsets = [r.distance(s)
           for r in layout.loc['receivers'].geometry
           for s in layout.loc['sources'].geometry
]
azimuths = [(180.0/np.pi) * np.arctan((r.x - s.x)/(r.y - s.y))
            for r in layout.loc['receivers'].geometry
            for s in layout.loc['sources'].geometry
]
offsetx = np.array(offsets)*np.cos(np.array(azimuths)*np.pi/180.)
offsety = np.array(offsets)*np.sin(np.array(azimuths)*np.pi/180.)
Explanation: As well as knowing the (x,y) of the midpoints, we'd also like to record the distance from each s to each live r (each r in the live patch). This is easy enough to compute:
Point(x1, y1).distance(Point(x2, y2))
Then we can make a list of all the offsets when we count the midpoints into the bins.
End of explanation
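An aside (an added sketch, not in the original notebook): the plain arctan above folds azimuths into +/-90 degrees and would divide by zero if a source and receiver ever shared a y-coordinate; the half-station shift used here happens to avoid that, but numpy's arctan2 sidesteps both issues and keeps the full quadrant information.
azimuths_2 = [np.degrees(np.arctan2(r.x - s.x, r.y - s.y))
              for r in layout.loc['receivers'].geometry
              for s in layout.loc['sources'].geometry
              ]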
midpoints = gpd.GeoDataFrame({
'geometry' : midpoint_list,
'offset' : offsets,
'azimuth': azimuths,
'offsetx' : offsetx,
'offsety' : offsety
})
midpoints[:5]
ax = midpoints.plot()
Explanation: Make a GeoSeries of the midpoints, offsets and azimuths:
End of explanation
#midpt.to_file('CMPs.shp')
Explanation: Save to a shapefile if desired.
End of explanation
midpoints[:5].offsetx # Easy!
midpoints.loc[3].geometry.x # Less easy :(
Explanation: Spider plot
End of explanation
x = [m.geometry.x for i, m in midpoints.iterrows()]
y = [m.geometry.y for i, m in midpoints.iterrows()]
fig = plt.figure(figsize=(12,8))
plt.quiver(x, y, midpoints.offsetx, midpoints.offsety, units='xy', width=0.5, scale=1/0.025, pivot='mid', headlength=0)
plt.axis('equal')
plt.show()
Explanation: We need lists (or arrays) to pass into the matplotlib quiver plot. This takes four main parameters: x, y, u, and v, where x, y will be our coordinates, and u, v will be the offset vector for that midpoint.
We can get at the GeoDataFrame's attributes easily, but I can't see how to get at the coordinates in the geometry GeoSeries (seems like a user error — it feels like it should be really easy) so I am resorting to this:
End of explanation
# Factor to shift the bins relative to source and receiver points
jig = survey.si / 4.
bin_centres = gpd.GeoSeries([Point(survey.xmi + 0.5*r*survey.ri + jig, survey.ymi + 0.5*s*survey.si + jig)
for r in range(2*survey.points_per_line[1] - 3)
for s in range(2*survey.points_per_line[0] - 2)
])
# Buffers are diamond shaped so we have to scale and rotate them.
scale_factor = np.sin(np.pi/4.)/2.
bin_polys = bin_centres.buffer(scale_factor*survey.ri, 1).rotate(-45)
bins = gpd.GeoDataFrame(geometry=bin_polys)
bins[:3]
ax = bins.plot()
Explanation: Bins
The bins are a new geometry, related to but separate from the survey itself, and the midpoints. We will model them as a GeoDataFrame of polygons. The steps are:
Compute the bin centre locations with our usual list comprehension trick.
Buffer the centres with a square.
Gather the buffered polygons into a GeoDataFrame.
End of explanation
reindexed = bins.reset_index().rename(columns={'index':'bins_index'})
joined = gpd.sjoin(reindexed, midpoints)
bin_stats = joined.groupby('bins_index')['offset']\
                  .agg(fold='count', min_offset='min')
bins = gpd.GeoDataFrame(bins.join(bin_stats))
joined[:10]
bins[:10]
ax = bins.plot(column="fold")
ax = bins.plot(column="min_offset")
Explanation: New spatial join
Thank you to Jake Wasserman and Kelsey Jordahl for this code snippet, and many pointers.
This takes about 20 seconds to run on my iMac, compared to something close to 30 minutes for the old nested loops.
End of explanation |
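For contrast, here is a sketch of the naive nested loop the spatial join replaces (an addition, using the same bins and midpoints defined above); it reproduces essentially the same fold and minimum-offset statistics, up to how points on bin boundaries are counted, just far more slowly.
fold_naive = []
min_offset_naive = []
for poly in bins.geometry:
    hits = midpoints[midpoints.within(poly)]   # every midpoint tested against every bin
    fold_naive.append(len(hits))
    min_offset_naive.append(hits.offset.min() if len(hits) else np.nan)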
6,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LinearTriangle element
The LinearTriangle element is a two-dimensional finite element with local and global coordinates, characterized by a linear shape function. It can be used for plane stress and plane strain problems. This element has a modulus of elasticity $E$, a Poisson's ratio $\nu$ and a thickness $t$. Each triangle has three nodes with two in-plane degrees of freedom (ux and uy); the global coordinates of these nodes are denoted by $(x_i,y_i)$, $(x_j,y_j)$ and $(x_m,y_m)$ (as shown in the figure). The order of the nodes of each element matters: they must be listed counterclockwise, starting from any node.
<img src="src/linear-triangle-element/linear_triangle_element.PNG" width="200px">
The element stiffness matrix is given by
Step1: We define some properties to use
Step2: We create the nodes and the element
Step3: We create the model and add the nodes and elements to it
Step4: We apply the load and constraint conditions
Step5: At this point we can plot the model with the imposed conditions; for this we use the plot_model method
Step6: Finally we solve the model
Step7: We can query the nodal displacements
Step8: In addition, we use the post-processing tools to plot the displacement field in the element
Step9: Simple plate, using the NuSA Modeler
Step10: Plate with a hole
Step11: Stress concentrator | Python Code:
%matplotlib inline
from nusa import *
Explanation: LinearTriangle element
The LinearTriangle element is a two-dimensional finite element with local and global coordinates, characterized by a linear shape function. It can be used for plane stress and plane strain problems. This element has a modulus of elasticity $E$, a Poisson's ratio $\nu$ and a thickness $t$. Each triangle has three nodes with two in-plane degrees of freedom (ux and uy); the global coordinates of these nodes are denoted by $(x_i,y_i)$, $(x_j,y_j)$ and $(x_m,y_m)$ (as shown in the figure). The order of the nodes of each element matters: they must be listed counterclockwise, starting from any node.
<img src="src/linear-triangle-element/linear_triangle_element.PNG" width="200px">
The element stiffness matrix is given by:
$$ [k] = tA[B]^T[D][B] $$
Where $A$ is the area of the element, given by:
$$ A = \frac{1}{2} \left( x_i(y_j-y_m) + x_j(y_m - y_i) + x_m(y_i - y_j) \right) $$
and $[B]$ is the matrix given by:
$$
[B] = \frac{1}{2A}
\begin{bmatrix}
\beta_i & 0 & \beta_j & 0 & \beta_m & 0 \\
0 & \gamma_i & 0 & \gamma_j & 0 & \gamma_m \\
\gamma_i & \beta_i & \gamma_j & \beta_j & \gamma_m & \beta_m \\
\end{bmatrix}
$$
Where $\beta_i$, $\beta_j$, $\beta_m$, $\gamma_i$, $\gamma_j$ and $\gamma_m$ are given by:
$$ \beta_i = y_j - y_m $$
$$ \beta_j = y_m - y_i $$
$$ \beta_m = y_i - y_j $$
$$ \gamma_i = x_m - x_j $$
$$ \gamma_j = x_i - x_m $$
$$ \gamma_m = x_j - x_i $$
For the plane stress case, the matrix $D$ is given by:
$$
[D] = \frac{E}{1-\nu^2}
\begin{bmatrix}
1 & \nu & 0 \\
\nu & 1 & 0 \\
0 & 0 & \frac{1-\nu}{2} \\
\end{bmatrix}
$$
The stresses in each element are computed via:
$$ {\sigma} = [D][B]{u} $$
Where $\sigma$ is the in-plane stress vector, i.e.:
$$ \sigma = \begin{Bmatrix} \sigma_x \\ \sigma_y \\ \tau_{xy} \end{Bmatrix} $$
and $u$ is the vector of displacements at each node of the element:
$$ {u} = \begin{Bmatrix} ux_i \\ uy_i \\ ux_j \\ uy_j \\ ux_m \\ uy_m \end{Bmatrix}$$
Example 1. Simple case
In this example we will solve the simplest case: a triangular element fixed at two of its nodes with a force applied at the third.
We import NuSA and indicate that we will use matplotlib's "inline" mode in this notebook
End of explanation
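A small numpy sketch (an addition, independent of NuSA) that evaluates the formulas above for a single element, so [B], [D] and [k] can be inspected directly; node coordinates are assumed to be listed counterclockwise.
import numpy as np
def lt_stiffness(xi, yi, xj, yj, xm, ym, E, nu, t):
    # plane-stress stiffness matrix of a linear triangle
    A = 0.5 * (xi*(yj - ym) + xj*(ym - yi) + xm*(yi - yj))
    bi, bj, bm = yj - ym, ym - yi, yi - yj
    gi, gj, gm = xm - xj, xi - xm, xj - xi
    B = (1.0 / (2*A)) * np.array([[bi, 0, bj, 0, bm, 0],
                                  [0, gi, 0, gj, 0, gm],
                                  [gi, bi, gj, bj, gm, bm]])
    D = (E / (1 - nu**2)) * np.array([[1, nu, 0],
                                      [nu, 1, 0],
                                      [0, 0, (1 - nu)/2]])
    return t * A * B.T @ D @ B
# e.g. k = lt_stiffness(0, 0, 1, 0.5, 0, 1, 200e9, 0.3, 0.1)  # the element used below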
E = 200e9 # Modulus of elasticity
nu = 0.3 # Poisson's ratio
t = 0.1 # Thickness
Explanation: We define some properties to use
End of explanation
n1 = Node((0,0))
n2 = Node((1,0.5))
n3 = Node((0,1))
e1 = LinearTriangle((n1,n2,n3),E,nu,t)
Explanation: We create the nodes and the element
End of explanation
m = LinearTriangleModel("Modelo 01")
for n in (n1,n2,n3): m.add_node(n)
m.add_element(e1)
Explanation: We create the model and add the nodes and elements to it
End of explanation
m.add_constraint(n1, ux=0, uy=0)
m.add_constraint(n3, ux=0, uy=0)
m.add_force(n2, (5e3, 0))
Explanation: We apply the load and constraint conditions
End of explanation
m.plot_model()
Explanation: At this point we can plot the model with the imposed conditions; for this we use the plot_model method
End of explanation
m.solve()
Explanation: Finally we solve the model
End of explanation
for n in m.get_nodes():
print(n.ux,n.uy)
Explanation: We can query the nodal displacements
End of explanation
m.plot_nsol('ux')
Explanation: In addition, we can use the post-processing tools to plot the displacement field in the element
End of explanation
import nusa.mesh as nmsh
md = nmsh.Modeler()
BB, ES = 1, 0.1
a = md.add_rectangle((0,0),(BB,BB), esize=ES)
nc, ec = md.generate_mesh()
x,y = nc[:,0], nc[:,1]
nodos = []
elementos = []
for k,nd in enumerate(nc):
cn = Node((x[k],y[k]))
nodos.append(cn)
for k,elm in enumerate(ec):
i,j,m = int(elm[0]),int(elm[1]),int(elm[2])
ni,nj,nm = nodos[i],nodos[j],nodos[m]
ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.25)
elementos.append(ce)
m = LinearTriangleModel()
for node in nodos: m.add_node(node)
for elm in elementos: m.add_element(elm)
# Applying boundary conditions at the ends
minx, maxx = min(x), max(x)
miny, maxy = min(y), max(y)
P = 100e3/((BB/ES)+1)
for node in nodos:
if node.x == minx:
m.add_constraint(node, ux=0, uy=0)
if node.x == maxx:
m.add_force(node, (P,0))
m.plot_model()
m.solve()
# Von Mises stress
m.plot_nsol("seqv","Pa")
# Strain in the X direction
m.plot_nsol("exx", "")
Explanation: Simple plate, using the NuSA Modeler
End of explanation
%matplotlib inline
from nusa import *
import nusa.mesh as nmsh
md = nmsh.Modeler()
a = md.add_rectangle((0,0),(1,1), esize=0.1)
b = md.add_circle((0.5,0.5), 0.1, esize=0.05)
md.substract_surfaces(a,b)
nc, ec = md.generate_mesh()
x,y = nc[:,0], nc[:,1]
nodos = []
elementos = []
for k,nd in enumerate(nc):
cn = Node((x[k],y[k]))
nodos.append(cn)
for k,elm in enumerate(ec):
i,j,m = int(elm[0]),int(elm[1]),int(elm[2])
ni,nj,nm = nodos[i],nodos[j],nodos[m]
ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.1)
elementos.append(ce)
m = LinearTriangleModel()
for node in nodos: m.add_node(node)
for elm in elementos: m.add_element(elm)
# Applying boundary conditions at the ends
minx, maxx = min(x), max(x)
miny, maxy = min(y), max(y)
for node in nodos:
if node.x == minx:
m.add_constraint(node, ux=0, uy=0)
if node.x == maxx:
m.add_force(node, (10e3,0))
m.plot_model()
m.solve()
m.plot_nsol("sxx", units="Pa")
# Element solution
m.plot_esol("sxx")
Explanation: Plate with a hole
End of explanation
# generating the geometry
md = nmsh.Modeler()
g = md.geom # To access the SimpleGMSH class
p1 = g.add_point((0,0))
p2 = g.add_point((1,0))
p3 = g.add_point((2,0))
p4 = g.add_point((2,1))
p5 = g.add_point((3,1))
p6 = g.add_point((3,2))
p7 = g.add_point((0,2))
p8 = g.add_point((0.7,1.4))
p9 = g.add_point((0.7,1.7), esize=0.1)
L1 = g.add_line(p1,p2)
L2 = g.add_circle(p3,p2,p4)
L3 = g.add_line(p4,p5)
L4 = g.add_line(p5,p6)
L5 = g.add_line(p6,p7)
L6 = g.add_line(p7,p1)
L7 = g.add_circle(p8,p9)
loop1 = g.add_line_loop(L1,L2,L3,L4,L5,L6) # boundary
loop2 = g.add_line_loop(L7)# hole
g.add_plane_surface(loop1,loop2)
nc, ec = md.generate_mesh()
x,y = nc[:,0], nc[:,1]
nodos = []
elementos = []
for k,nd in enumerate(nc):
cn = Node((x[k],y[k]))
nodos.append(cn)
for k,elm in enumerate(ec):
i,j,m = int(elm[0]),int(elm[1]),int(elm[2])
ni,nj,nm = nodos[i],nodos[j],nodos[m]
ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.1)
elementos.append(ce)
m = LinearTriangleModel()
for node in nodos: m.add_node(node)
for elm in elementos: m.add_element(elm)
# Applying boundary conditions at the ends
minx, maxx = min(x), max(x)
miny, maxy = min(y), max(y)
for node in nodos:
if node.x == minx:
m.add_constraint(node, ux=0, uy=0)
if node.x == maxx:
m.add_force(node, (10e3,1))
m.plot_model()
m.solve()
m.plot_nsol("seqv") # Esfuerzo de von Mises
Explanation: Concentrador de esfuerzos
End of explanation |
6,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Identifying worthless parts of the code in the "Spring PetClinic" project
Usage data from production operation
Data source
Step1: Calculating key metrics for size and degree of usage
Step2: Overview of the overall utilization of the software units
Step3: Preparing the link to technical debt
A unique key ("fqn") is created for the software units so that usage data can subsequently be matched to the technical debt
In addition, data that is no longer needed is dropped
Step4: Technical debt in the software
To assess software quality, the technical debt per software unit is used.
Step5: Loading the current data from the quality assurance server
The current measurements of the application's technical debt are loaded.
Step6: Compiling the data needed for deeper analyses
Only the name of the measured software unit is needed, as well as the calculated duration of the technical debt
The duration of the technical debt is converted into a time duration accordingly
Step7: Preparing the mapping to usage data
A unique key is created for the software units so that technical debt can be matched to usage data
Multiple technical-debt entries are summed up per software unit
Step8: Creating the management view
Merging the data
Usage data and technical debt are combined using the software units' keys.
Step9: Average usage from a business perspective
Step10: Degree of usage and technical debt by business component
Step11: Executive Summary
Evaluation matrix from a business point of view
Step12: Summary
Findings
* The investments in the core functionality around pet care ("Pet") have paid off so far.
* There is risk in the "other modules" ("Other")
Step13: Degree of usage by technical function | Python Code:
import pandas as pd
coverage = pd.read_csv("../dataset/jacoco_production_coverage_spring_petclinic.csv")
coverage.head()
Explanation: Identifying worthless parts of the code in the "Spring PetClinic" project
Usage data from production operation
Data source: The application's operation was measured over a period of 24 hours on a weekday. For each software unit ("CLASS"), the executed lines of code were recorded.
End of explanation
coverage['lines'] = coverage.LINE_MISSED + coverage.LINE_COVERED
coverage['covered'] = coverage.LINE_COVERED / coverage.lines
coverage.head()
Explanation: Calculating key metrics for size and degree of usage
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
ax = coverage.covered.hist();
ax.set_title("Verteilung der Gesamtausnutzung")
ax.set_xlabel("Anzahl")
ax.set_ylabel("Nutzungsgrad");
Explanation: Overview of the overall utilization of the software units
End of explanation
coverage['fqn'] = coverage.PACKAGE + "." + coverage.CLASS
coverage_per_class = coverage.set_index('fqn')[['lines', 'covered']]
coverage_per_class.head()
Explanation: Preparing the link to technical debt
A unique key ("fqn") is created for the software units so that usage data can subsequently be matched to the technical debt
In addition, data that is no longer needed is dropped
End of explanation
# in C:\dev\repos\software-analytics\demos\dataset
# python -m "http.server" 28080
#URL = "http://localhost:28080/sonarqube_search.json"
Explanation: Technical debt in the software
To assess software quality, the technical debt per software unit is used.
End of explanation
#import json
#
#issues_json = ""
#
#with open ("../dataset/sonarqube_search.json") as f:
# issues_json = json.loads(f.read())
#
#json.dumps(issues_json)[:200]
import requests
KEY = "org.springframework.samples:spring-petclinic:boundedcontexts"
URL = "https://sonarcloud.io/api/issues/search?languages=java&componentKeys=" + KEY
issues_json = requests.get(URL).json()
print(str(issues_json)[:500])
Explanation: Loading the current data from the quality assurance server
The current measurements of the application's technical debt are loaded.
End of explanation
issues = pd.json_normalize(issues_json['issues'])[['component', 'debt']]
issues['debt'] = issues.debt.apply(pd.Timedelta)
issues.head()
Explanation: Compiling the data needed for deeper analyses
Only the name of the measured software unit is needed, as well as the calculated duration of the technical debt
The duration of the technical debt is converted into a time duration accordingly
End of explanation
issues['fqn'] = issues.component.str.extract("/java/(.*).java", expand=True)
issues['fqn'] = issues.fqn.str.replace("/", ".")
debt_per_class = issues.groupby('fqn')[['debt']].sum()
debt_per_class.head()
Explanation: Preparing the mapping to usage data
A unique key is created for the software units so that technical debt can be matched to usage data
Multiple technical-debt entries are summed up per software unit
End of explanation
analysis = coverage_per_class.join(debt_per_class)
analysis = analysis.fillna(0)
analysis.head()
Explanation: Creating the management view
Merging the data
Usage data and technical debt are combined using the software units' keys.
End of explanation
analysis['domain'] = "Other"
domains = ["Owner", "Pet", "Visit", "Vet", "Specialty", "Clinic"]
for domain in domains:
analysis.loc[analysis.index.str.contains(domain), 'domain'] = domain
analysis.groupby('domain')[['covered']].mean()
Explanation: Average usage from a business perspective
End of explanation
management_compatible_data = analysis.\
groupby('domain').\
agg({"covered": "mean", "debt" : "sum", "lines" : "sum"})
management_compatible_data.debt = management_compatible_data.debt.dt.total_seconds() / 60
management_compatible_data.columns = \
['Nutzungsgrad (%)', 'Technische Schulden (min)', 'Größe']
management_compatible_data.head()
Explanation: Degree of usage and technical debt by business component
End of explanation
%matplotlib inline
from ausi import portfolio
portfolio.plot_diagram(management_compatible_data, "Technische Schulden (min)", "Nutzungsgrad (%)", "Größe", "fachliche Komponenten");
Explanation: Executive Summary
Evaluation matrix from a business point of view
End of explanation
analysis['tech'] = analysis.index.str.split(".").str[-2]
analysis.groupby('tech')[['covered']].mean()
Explanation: Summary
Findings
* The investments in the core functionality around pet care ("Pet") have paid off so far.
* There is risk in the "other modules" ("Other"):
  * The component's usage is below 50%
  * The technical debt is highest here, at 120 minutes
Action: Quality-improving measures must urgently be taken for the "Other" component
Appendix
Usage by technical component
Degree of usage and technical debt by technical component
End of explanation
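One more small view (an added sketch, reusing the per-domain management_compatible_data table built above): relating the summed debt to component size gives a "debt density" that backs up the risk assessment for "Other".
debt_density = (management_compatible_data['Technische Schulden (min)']
                / management_compatible_data['Größe'] * 100)
debt_density.sort_values(ascending=False)  # debt minutes per 100 lines of code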
management_compatible_data = analysis.groupby('tech').agg({"covered": "mean", "debt" : "sum", "lines" : "sum"})
management_compatible_data.debt = management_compatible_data.debt.dt.total_seconds() / 60
management_compatible_data.columns = ['Nutzungsgrad (%)', 'Technische Schulden (min)', 'Größe']
portfolio.plot_diagram(management_compatible_data, "Technische Schulden (min)", "Nutzungsgrad (%)", "Größe", "technische Komponenten");
Explanation: Degree of usage by technical function
End of explanation |
6,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stochastic Gradient Descent
This is a simple yet very efficient approach to discriminative learning of linear classifiers under convex loss functions, such as the (linear) Support Vector Machine and Logistic Regression.
1 Introduction
The advantages of Stochastic Gradient Descent are
Step1: The concrete loss function can be set via the loss parameter,
+ loss = 'hinge'
Step2: 3 Tips
Scale each attribute ont the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. and the same scaling must be applied to the test vector to obtain meaningful results. | Python Code:
from sklearn.linear_model import SGDClassifier
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = SGDClassifier(loss='hinge', penalty='l2')
clf.fit(X, y)
clf.predict([[2., 2.]])
clf.coef_
#To get the signed distance to the hyperplane
clf.decision_function([[2., 2.]])
Explanation: Stochastic Gradient Descent
This is a simple yet very efficient approach to discriminative learning of linear classifiers under convex loss functions, such as the (linear) Support Vector Machine and Logistic Regression.
1 Introduction
The advantages of Stochastic Gradient Descent are:
+ Efficiency
+ Ease of implementation
The disadvantages of SGD include:
+ SGD requires a number of hyperparameters, such as the regularization parameter and the number of iterations
+ SGD is sensitive to feature scaling
2 Classification
End of explanation
clf = SGDClassifier(loss='log').fit(X, y)
clf.predict_proba([[1., 1.]])
Explanation: The concrete loss function can be set via the loss parameter,
+ loss = 'hinge': soft-margin linear Support Vector Machine
+ loss = 'modified_huber': smooth hinge loss
+ loss = 'log': logistic regression
Using loss = 'log' or loss='modified_huber' enables the predict_proba method, which gives a vector of probability estimates $P(y \mid x)$ per sample x:
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)  # fit the scaler on the training data only
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)  # apply the same transformation to the test data
Explanation: 3 Tips
Scale each attribute of the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. The same scaling must be applied to the test vector to obtain meaningful results.
End of explanation |
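A short sketch of how this tip is usually applied in practice: bundling the scaler and the classifier into a pipeline guarantees that the test data is always transformed with the statistics learned from the training data.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier
pipe = make_pipeline(StandardScaler(), SGDClassifier(loss='hinge', penalty='l2'))
# pipe.fit(X_train, y_train); pipe.predict(X_test)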
6,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Machines
Taught by Patrick Hebron at NYU/ITP, Fall 2017
TensorFlow Basics
Step1: Now let's try to do the same thing using TensorFlow
Step2: Where's the resulting value?
Notice that in the pure Python code, calling | Python Code:
# Create input constants:
X = 2.0
Y = 3.0
# Perform addition:
Z = X + Y
# Print output:
print(Z)
Explanation: Learning Machines
Taught by Patrick Hebron at NYU/ITP, Fall 2017
TensorFlow Basics: "Graphs and Sessions"
Let's look at a simple arithmetic procedure in pure Python:
End of explanation
# Import TensorFlow library:
import tensorflow as tf
# Print TensorFlow version, just for good measure:
print( 'TensorFlow Version: ' + tf.VERSION )
# Create input constants:
opX = tf.constant( 2.0 )
opY = tf.constant( 3.0 )
# Create addition operation:
opZ = tf.add( opX, opY )
# Print operation:
print(opZ)
Explanation: Now let's try to do the same thing using TensorFlow:
End of explanation
# Create input constants:
opX = tf.constant( 2.0 )
opY = tf.constant( 3.0 )
# Create addition operation:
opZ = tf.add( opX, opY )
# Create session:
with tf.Session() as sess:
# Run session:
Z = sess.run( opZ )
# Print output:
    print(Z)
Explanation: Where's the resulting value?
Notice that in the pure Python code, calling:
python
print(Z)
prints the resulting value:
python
5.0
But in the TensorFlow code, the print call gives us:
python
Tensor("Add:0", shape=(), dtype=float32)
TensorFlow uses a somewhat different programming model from what we're used to in conventional Python code.
Here's a brief overview from the TensorFlow Basic Usage tutorial:
Overview:
TensorFlow is a programming system in which you represent computations as graphs. Nodes in the graph are called ops (short for operations). An op takes zero or more Tensors, performs some computation, and produces zero or more Tensors. A Tensor is a typed multi-dimensional array. For example, you can represent a mini-batch of images as a 4-D array of floating point numbers with dimensions [batch, height, width, channels].
A TensorFlow graph is a description of computations. To compute anything, a graph must be launched in a Session. A Session places the graph ops onto Devices, such as CPUs or GPUs, and provides methods to execute them. These methods return tensors produced by ops as numpy ndarray objects in Python, and as tensorflow::Tensor instances in C and C++.
TensorFlow programs are usually structured into a construction phase, that assembles a graph, and an execution phase that uses a session to execute ops in the graph.
For example, it is common to create a graph to represent and train a neural network in the construction phase, and then repeatedly execute a set of training ops in the graph in the execution phase.
In other words:
The TensorFlow code above only assembles the graph to perform an addition operation on our two input constants.
To actually run the graph and retrieve the output, we need to create a session and run the addition operation through it:
End of explanation |
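A short follow-up sketch (an addition, staying with the TF 1.x graph/session API used throughout this class): with placeholders, the same two-phase structure can be re-run on different inputs via feed_dict instead of baked-in constants.
# Create placeholder inputs:
opX = tf.placeholder( tf.float32 )
opY = tf.placeholder( tf.float32 )
# Create addition operation:
opZ = tf.add( opX, opY )
# Create session and run the graph with different inputs:
with tf.Session() as sess:
    print( sess.run( opZ, feed_dict={ opX: 2.0, opY: 3.0 } ) )
    print( sess.run( opZ, feed_dict={ opX: 10.0, opY: -4.0 } ) )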
6,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: How to match images using DELF and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data
In the next code cell, we specify the URLs of the two images that we want to process with DELF in order to match and compare them.
Step3: Download, resize, save and display the images.
Step4: Apply the DELF module to the data
The DELF module takes an image as input and describes the points of interest with vectors. The following cell contains the core of this Colab's logic.
Step5: Match images using their locations and description vectors | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install scikit-image
from absl import logging
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageOps
from scipy.spatial import cKDTree
from skimage.feature import plot_matches
from skimage.measure import ransac
from skimage.transform import AffineTransform
from six import BytesIO
import tensorflow as tf
import tensorflow_hub as hub
from six.moves.urllib.request import urlopen
Explanation: How to match images using DELF and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/tf_hub_delf_module"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/tf_hub_delf_module.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a>
</td>
<td><a href="https://tfhub.dev/google/delf/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
TensorFlow Hub (TF-Hub) is a platform for sharing machine learning expertise packaged in reusable resources, in particular pre-trained modules.
In this Colab, we will use a module that packages the DELF neural network and logic to process images, identifying keypoints and their descriptors. The neural network's weights were trained on landmark images, as described in this paper.
Setup
End of explanation
#@title Choose images
images = "Bridge of Sighs" #@param ["Bridge of Sighs", "Golden Gate", "Acropolis", "Eiffel tower"]
if images == "Bridge of Sighs":
# from: https://commons.wikimedia.org/wiki/File:Bridge_of_Sighs,_Oxford.jpg
# by: N.H. Fischer
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/2/28/Bridge_of_Sighs%2C_Oxford.jpg'
# from https://commons.wikimedia.org/wiki/File:The_Bridge_of_Sighs_and_Sheldonian_Theatre,_Oxford.jpg
# by: Matthew Hoser
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/c3/The_Bridge_of_Sighs_and_Sheldonian_Theatre%2C_Oxford.jpg'
elif images == "Golden Gate":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/1/1e/Golden_gate2.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/3/3e/GoldenGateBridge.jpg'
elif images == "Acropolis":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/ce/2006_01_21_Ath%C3%A8nes_Parth%C3%A9non.JPG'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/5/5c/ACROPOLIS_1969_-_panoramio_-_jean_melis.jpg'
else:
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/d/d8/Eiffel_Tower%2C_November_15%2C_2011.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/a/a8/Eiffel_Tower_from_immediately_beside_it%2C_Paris_May_2008.jpg'
Explanation: Data
In the next code cell, we specify the URLs of the two images that we want to process with DELF in order to match and compare them.
End of explanation
def download_and_resize(name, url, new_width=256, new_height=256):
path = tf.keras.utils.get_file(url.split('/')[-1], url)
image = Image.open(path)
  image = ImageOps.fit(image, (new_width, new_height), Image.LANCZOS)
return image
image1 = download_and_resize('image_1.jpg', IMAGE_1_URL)
image2 = download_and_resize('image_2.jpg', IMAGE_2_URL)
plt.subplot(1,2,1)
plt.imshow(image1)
plt.subplot(1,2,2)
plt.imshow(image2)
Explanation: Download, resize, save and display the images.
End of explanation
delf = hub.load('https://tfhub.dev/google/delf/1').signatures['default']
def run_delf(image):
np_image = np.array(image)
float_image = tf.image.convert_image_dtype(np_image, tf.float32)
return delf(
image=float_image,
score_threshold=tf.constant(100.0),
image_scales=tf.constant([0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0]),
max_feature_num=tf.constant(1000))
result1 = run_delf(image1)
result2 = run_delf(image2)
Explanation: Apply the DELF module to the data
The DELF module takes an image as input and describes the points of interest with vectors. The following cell contains the core of this Colab's logic.
End of explanation
#@title TensorFlow is not needed for this post-processing and visualization
def match_images(image1, image2, result1, result2):
distance_threshold = 0.8
# Read features.
num_features_1 = result1['locations'].shape[0]
print("Loaded image 1's %d features" % num_features_1)
num_features_2 = result2['locations'].shape[0]
print("Loaded image 2's %d features" % num_features_2)
# Find nearest-neighbor matches using a KD tree.
d1_tree = cKDTree(result1['descriptors'])
_, indices = d1_tree.query(
result2['descriptors'],
distance_upper_bound=distance_threshold)
# Select feature locations for putative matches.
locations_2_to_use = np.array([
result2['locations'][i,]
for i in range(num_features_2)
if indices[i] != num_features_1
])
locations_1_to_use = np.array([
result1['locations'][indices[i],]
for i in range(num_features_2)
if indices[i] != num_features_1
])
# Perform geometric verification using RANSAC.
_, inliers = ransac(
(locations_1_to_use, locations_2_to_use),
AffineTransform,
min_samples=3,
residual_threshold=20,
max_trials=1000)
print('Found %d inliers' % sum(inliers))
# Visualize correspondences.
_, ax = plt.subplots()
inlier_idxs = np.nonzero(inliers)[0]
plot_matches(
ax,
image1,
image2,
locations_1_to_use,
locations_2_to_use,
np.column_stack((inlier_idxs, inlier_idxs)),
matches_color='b')
ax.axis('off')
ax.set_title('DELF correspondences')
match_images(image1, image2, result1, result2)
Explanation: Match images using their locations and description vectors
End of explanation |
6,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling the Detectability of a Population of Transients
A common LSST science case is to detect a large sample of some family of transient objects for further study. A good proxy for a scientific Figure of Merit in this case is the sample size or fraction of objects detected. In this tutorial, we will define a simple example model lightcurve, and then use MAF to quantify the detectability of a sample of such objects.
Requirements
You will need to have MAF installed. See here for instructions.
Preliminaries
Step2: Defining the Transient Object and Detectability Metric
First we need a function to generate a lightcurve that takes as input the times the lightcurve is observed, and then parameters that describe the shape of the lightcurve.
Step4: Now we need a metric that can quantify the degree to which a lightcurve has been "well observed". For this example, we will say that any lightcurve that is observed to be above the 5-sigma limiting depth twice in any filter is "well-observed".
Step5: Let's model our population of transient objects so that we can check to see if they are "well observed".
We will model each object as having a position given by RA and dec, an explosion time of t0, and a simple light curve shape governed by parameters peak and slope.
Step6: Setting up MAF
Since we have a catalog with sky positions, we want to check each of those positions. So rather than evaluate our metric on a grid on the sky (e.g., at HEALpixel locations), we will use the UserPointSlicer to find the observations that contain each of our transient objects.
We will then connect to a database and bundle our metric, slicer, and sql query in the usual way.
Step7: Let's connect to an OpSim database and set the output directory
Step8: As well as a metric and a slicer, we also need to define an SQL query. Let's look at the first 5 years of observations, and limit ourselves to the riz filters. Note how plotting labels need to be defined at bundling time.
Step9: Running the Metric
Recall
Step10: Note a possibly confusing feature in the plots here
Step11: Working with MAF Outputs
Step12: We now have some healpy arrays, and so can use standard healpy plotting methods. Empty healpixels are given value hp.UNSEEN by healbin, and are plotted in gray.
Step13: As well as maps, we might want some global summary statistics - to quote as a Figure of Merit, for example. We can do that by operating on the metricValues directly - in our case, the fraction of objects detected could be a good proxy for a Figure of Merit.
Step14: Further Analysis
As Eric Bellm points out, it might be nice to look at how detectability depends on the lightcurve properties. Let's look at how detectability, peak brightness, and slope relate. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import lsst.sims.maf.db as db
import lsst.sims.maf.utils as utils
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.metricBundles as metricBundles
from lsst.sims.utils import equatorialFromGalactic
# Make the notebook repeatable
np.random.seed(42)
Explanation: Modeling the Detectability of a Population of Transients
A common LSST science case is to detect a large sample of some family of transient objects for further study. A good proxy for a scientific Figure of Merit in this case is the sample size or fraction of objects detected. In this tutorial, we will define a simple example model lightcurve, and then use MAF to quantify the detectability of a sample of such objects.
Requirements
You will need to have MAF installed. See here for instructions.
Preliminaries
End of explanation
def toyObjectLC(t, t0, peak, slope, duration=10.):
Makes a transient object lightcurve that has an instantaneous appearance followed by a
linear rise or decline and then an instantaneous disappearance.
Parameters
----------
t : np.array
Array of observation times (days)
t0 : float
The peak time (days)
peak : float
Peak brightness (mags)
slope : float
        Slope of the lightcurve (mags/day). Negative values of slope result in
a brightening object
duration : float (10)
Time between appearance and disappearance (days)
Returns
-------
lightcurve : np.array
1-D array of magnitudes to match t
Notes
-----
Objects could be given colors by setting different parameters per filter; for now
this is assumed to be a flat SED source. If you do extend this to include filter
information, don't forget that you may be restricted by the bands that you select in
your SQL query.
# Make an array of magnitudes to match input times
lightcurve = np.zeros(np.size(t), dtype=float) + 99.
# Select only times in the lightcurve duration window
good = np.where( (t >= t0) & (t <= t0+duration) )
lightcurve[good] = peak + slope*(t[good]-t0)
return lightcurve
# Check that the lightcurves look correct
t = np.arange(0,4,.1)
t0 = 1.
duration = 1.
peak = 20
slopes = [3, 0, -3]
for slope in slopes:
lc = toyObjectLC(t, t0, peak, slope, duration=duration)
plt.plot(t, lc, label='%i' % slope)
plt.xlabel('t (days)')
plt.ylabel('mag')
# Always plot magnitudes backwards
plt.ylim([30,10])
plt.legend()
plt.title('Test Lightcurves')
Explanation: Defining the Transient Object and Detectability Metric
First we need a function to generate a lightcurve that takes as input the times the lightcurve is observed, and then parameters that describe the shape of the lightcurve.
End of explanation
class ToyDetectabilityMetric(metrics.BaseMetric):
Quantifies detectability of toyObjects.
Parameters
----------
ptsNeeded : int
Number of an object's lightcurve points required to be above the 5-sigma limiting depth
before it is considered detected.
Notes
-----
This metric assumes this will be run with a slicer that has had extra
parameters (t0, peak, and slope) added to the slicer.slicePoint
dict (which already contains ra, dec, fieldID, etc). All the observation information
(MJD of observation, 5-sigma limiting depth of each observation, etc) is contained in the
dataSlice array. We request the filter information for each observation anticipating that
more general lightcurve functions will need it as input.
def __init__(self, metricName='ToyDetectabilityMetric', mjdCol='expMJD', m5Col='fiveSigmaDepth',
filterCol='filter', ptsNeeded=2, **kwargs):
self.mjdCol = mjdCol
self.m5Col = m5Col
self.filterCol = filterCol
self.ptsNeeded = ptsNeeded
super(ToyDetectabilityMetric, self).__init__(col=[self.mjdCol, self.m5Col,
self.filterCol],
units='Detected, 0 or 1',
metricName=metricName,
**kwargs)
def run(self, dataSlice, slicePoint=None):
# Generate the lightcurve for this object
lightcurve = toyObjectLC(dataSlice[self.mjdCol], slicePoint['t0'],
slicePoint['peak'], slicePoint['slope'])
# Check if there are enough points detected in the generated lightcurve
npts = np.where( (lightcurve != 0.) & (lightcurve < dataSlice[self.m5Col]))[0].size
if npts >= self.ptsNeeded:
return 1
else:
return 0
Explanation: Now we need a metric that can quantify the degree to which a lightcurve has been "well observed". For this example, we will say that any lightcurve that is observed to be above the 5-sigma limiting depth twice in any filter is "well-observed".
End of explanation
# The fields for our transient catalog
names = ['ra', 'dec', 't0', 'peak', 'slope']
# Number of objects to create
nobjs = int(1e3) # Note, we are going to loop over each object, so try not to make this a crazy huge number.
# An empty numpy array that will hold the catalog of transient objects
transObjects = np.zeros(nobjs, dtype=zip(names, [float]*len(names)))
# Concentrate objects in the galactic plane
l = np.random.rand(nobjs)*360.
b = np.random.randn(nobjs)*20.
transObjects['ra'], transObjects['dec'] = equatorialFromGalactic(l,b)
# Generate lightcurve parameters
# Force our objects to start exploding near the start date of the survey and go for 10 years
# The 59580 value is the start date for the simulation we are using (minion_1016).
# Older simulations have different start dates...
transObjects['t0'] = np.random.rand(nobjs) * 365.25 * 10 + 59580.
transObjects['peak'] = np.random.rand(nobjs) * 3 + 20.
transObjects['slope'] = np.random.rand(nobjs) * 1.1
Explanation: Let's model our population of transient objects so that we can check to see if they are "well observed".
We will model each object as having a position given by RA and dec, an explosion time of t0, and a simple light curve shape governed by parameters peak and slope.
End of explanation
# Set up the slicer to evaluate the catalog we just made
slicer = slicers.UserPointsSlicer(transObjects['ra'], transObjects['dec'])
# Add any additional information about each object to the slicer
slicer.slicePoints['t0'] = transObjects['t0']
slicer.slicePoints['peak'] = transObjects['peak']
slicer.slicePoints['slope'] = transObjects['slope']
Explanation: Setting up MAF
Since we have a catalog with sky positions, we want to check each of those positions. So rather than evaluate our metric on a grid on the sky (e.g., at HEALpixel locations), we will use the UserPointSlicer to find the observations that contain each of our transient objects.
We will then connect to a database and bundle our metric, slicer, and sql query in the usual way.
End of explanation
runName = 'minion_1016'
opsdb = db.OpsimDatabase(runName + '_sqlite.db')
outDir = 'TransientsUPS'
resultsDb = db.ResultsDb(outDir=outDir)
Explanation: Let's connect to an OpSim database and set the output directory:
End of explanation
metric = ToyDetectabilityMetric()
sql = 'night < %i and (filter="r" or filter="i" or filter="z")' % (365.25*5)
plotDict = {'title':'Toy transient detectability', 'xlabel':'Object detected? (0 or 1)'}
plotDict['bins'] = [-0.25, 0.25, 0.75, 1.25]
bundle = metricBundles.MetricBundle(metric, slicer, sql, runName=runName, plotDict=plotDict)
Explanation: As well as a metric and a slicer, we also need to define an SQL query. Let's look at the first 5 years of observations, and limit ourselves to the riz filters. Note how plotting labels need to be defined at bundling time.
End of explanation
bundleList = [bundle]
bundleDict = metricBundles.makeBundlesDictFromList(bundleList)
bgroup = metricBundles.MetricBundleGroup(bundleDict, opsdb, outDir=outDir, resultsDb=resultsDb)
bgroup.runAll()
bgroup.plotAll(closefigs=False)
Explanation: Running the Metric
Recall: we often run more than one metric, or SQL-slicer combinations, at a time, so standard procedure is to define groups of bundles and run them all. Let's follow this procedure.
End of explanation
# If the metricValue is 1, the object was detected
num = np.where(bundle.metricValues == 1)[0].size
print 'Number of detections = %i' % num
# If the value was zero, and unmasked, that means some observations
# were taken at that point in the sky
num = np.where((bundle.metricValues == 0) & (bundle.metricValues.mask == False))[0].size
print 'Number of non-detections = %i' % num
# Find the number of transients where LSST took no observations at that spot
num = np.where( (bundle.metricValues == 0) &
(bundle.metricValues.mask == True))[0].size
print 'Number of transients with no overlapping observations at that spot = %i' % num
Explanation: Note a possibly confusing feature in the plots here: the histogram shows ~600 transients were not detected, and ~50 were, but we input 1,000. The sky map also has no points in the far north. This is a "feature" of MAF: if a point in the sky has no overlapping observations, that point is masked. Only unmasked points get plotted by default. We can check this explicitly by inspecting the metricValues.
End of explanation
from lsst.sims.utils import healbin
import healpy as hp
# Set the resolution of the healpix grid - nside=8 corresponds to ~7 degree pixels
nside = 8
allObjects = healbin(transObjects['ra'], transObjects['dec'], np.ones(nobjs),
nside=nside, reduceFunc=np.sum)
detectedObjects = healbin(transObjects['ra'], transObjects['dec'],bundle.metricValues,
nside=nside, reduceFunc=np.sum)
Explanation: Working with MAF Outputs: Detection Fraction Maps
All our metric did was return 0 or 1 for each individual object, according to its detection criteria. To visualize the results of this analysis as a detection map, we need to bin the catalog on the sky. Healpix is a good scheme for doing this, and we can use the healbin convenience function from the sims.utils package. First, we count all the objects in each healpixel, then we count all the detected objects in each healpixel.
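As a quick check on the pixel scale implied by an nside choice (an illustrative snippet, not part of the original notebook), healpy can report the approximate resolution:
```python
import healpy as hp
# nside=8 corresponds to a pixel scale of roughly 7 degrees
print hp.nside2resol(8, arcmin=True) / 60.
```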
End of explanation
Nmax = 5
hp.mollview(allObjects, title='Total Number of Objects', max=Nmax)
hp.mollview(detectedObjects, title='Number of Detected Objects', max=Nmax)
frac = detectedObjects/allObjects
frac[np.where((allObjects == 0) | (detectedObjects == hp.UNSEEN))] = hp.UNSEEN
hp.mollview(frac, title='Fraction of Objects Detected')
Explanation: We now have some healpy arrays, and so can use standard healpy plotting methods. Empty healpixels are given value hp.UNSEEN by healbin, and are plotted in gray.
End of explanation
print 'Fraction of Toy objects detected = %f' % (np.sum(bundle.metricValues)/nobjs)
Explanation: As well as maps, we might want some global summary statistics - to quote as a Figure of Merit, for example. We can do that by operating on the metricValues directly - in our case, the fraction of objects detected could be a good proxy for a Figure of Merit.
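If you would rather quote the fraction relative only to objects that had at least one overlapping observation (instead of all input objects), one possible variation is the following sketch, which relies on the masked-array behaviour discussed above:
```python
# Exclude objects whose sky positions were never observed (masked entries)
mask = np.ma.getmaskarray(bundle.metricValues)
observed = ~mask
frac_observed = np.sum(bundle.metricValues[observed]) / float(observed.sum())
print 'Fraction of observed objects detected = %f' % frac_observed
```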
End of explanation
# add a little noise to separate the points on the plot
noise = np.random.rand(bundle.metricValues.size)*.1
plt.scatter(bundle.metricValues + noise,
transObjects['peak'], c=transObjects['slope'])
plt.xlabel('detected (1 or 0)')
plt.ylabel('peak mag')
plt.ylim([23,20])
plt.xlim([-.5, 1.5])
cb = plt.colorbar()
cb.set_label('slope')
Explanation: Further Analysis
As Eric Bellm points out, it might be nice to look at how detectability depends on the lightcurve properties. Let's look at how detectability, peak brightness, and slope relate.
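One simple way to quantify that relation (an illustrative sketch, not part of the original notebook) is to bin the detected fraction by peak magnitude:
```python
# Detected fraction as a function of peak magnitude
bins = np.arange(20., 23.5, 0.5)
detected = bundle.metricValues.filled(0)
total_per_bin, _ = np.histogram(transObjects['peak'], bins=bins)
det_per_bin, _ = np.histogram(transObjects['peak'], bins=bins, weights=detected)
print det_per_bin / np.maximum(total_per_bin, 1).astype(float)
```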
End of explanation |
6,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Last edit by David Lao - 2017/12/12
<br>
<br>
Netflix Analytics - Movie Recommendation through Correlations
<br>
I love Netflix!
This project aims to build a movie recommendation mechanism within Netflix. The dataset I used here comes directly from Netflix. It consists of 4 text data files; each file contains over 20M rows, i.e. over 4K movies and 400K customers. Altogether there are over 17K movies and 500K+ customers!
<br>
One of the major challenges is to get all these data loaded into the Kernel for analysis; I have run out of Kernel memory many times and have tried many different ways to load the data more efficiently. Welcome any suggestions!!!
This kernel will be consistently updated! Welcome any suggestions! Let's get started!
<br>
Feel free to fork and upvote if this notebook is helpful to you in some ways!
Table of Content
Step1: Next let's load first data file and get a feeling of how huge the dataset is
Step2: Let's try to load the 3 remaining dataset as well
Step3: Now we combine datasets
Step4: Data viewing
Let's take a first look at how the data are spread
Step5: We can see that the rating tends to be relatively positive (>3). This may be due to the fact that unhappy customers tend to just leave instead of making efforts to rate. We can keep this in mind - low rating movies mean they are generally really bad
Data cleaning
Movie ID is really messy to import! Looping through the dataframe to add a Movie ID column WILL make the Kernel run out of memory, as it is too inefficient. I achieve the task by first creating a numpy array of the correct length and then adding the whole array as a column to the main dataframe! Let's see how it is done below
Step6: Data slicing
The data set now is super huge. I have tried many different ways but can't get the Kernel running as intended without memory errors. Therefore I tried to reduce the data volume by improving the data quality below
Step7: Now let's trim down our data, what's the difference in data size?
Step8: Let's pivot the data set and put it into a giant matrix - we need it for our recommendation system
Step9: Data mapping
Now we load the movie mapping file
Step10: Recommendation models
Well all data required is loaded and cleaned! Next let's get into the recommendation system.
Recommend with Collaborative Filtering
Evaluate the performance of collaborative filtering, with just the first 100K rows for a faster process
Step11: Below is what user 785314 liked in the past
Step12: Let's predict which movies user 785314 would love to watch
Step13: Recommend with Pearsons' R correlations
The way it works is we use Pearsons' R correlation to measure the linear correlation between review scores of all pairs of movies, then we provide the top 10 movies with highest correlations
Step14: A recommendation for you if you like 'What the #$*! Do We Know!?'
Step15: X2 | Python Code:
import pandas as pd
import numpy as np
import math
import re
from scipy.sparse import csr_matrix
import matplotlib.pyplot as plt
import seaborn as sns
from surprise import Reader, Dataset, SVD, evaluate
sns.set_style("darkgrid")
Explanation: Last edit by David Lao - 2017/12/12
<br>
<br>
Netflix Analytics - Movie Recommendation through Correlations
<br>
I love Netflix!
This project aims to build a movie recommendation mechanism within Netflix. The dataset I used here comes directly from Netflix. It consists of 4 text data files; each file contains over 20M rows, i.e. over 4K movies and 400K customers. Altogether there are over 17K movies and 500K+ customers!
<br>
One of the major challenges is to get all these data loaded into the Kernel for analysis; I have run out of Kernel memory many times and have tried many different ways to load the data more efficiently. Welcome any suggestions!!!
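One workaround for the memory pressure described above (a sketch only, not what this notebook actually does) is to stream each raw file in chunks rather than reading it in one go, which gives you a place to filter or downcast each chunk before concatenating:
```python
# Read combined_data_1.txt in 1M-row chunks, keeping only the columns we need
chunks = pd.read_csv('../input/combined_data_1.txt', header=None,
                     names=['Cust_Id', 'Rating'], usecols=[0, 1],
                     chunksize=1000000)
df1 = pd.concat([chunk for chunk in chunks])
```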
This kernel will be consistently updated! Welcome any suggestions! Let's get started!
<br>
Feel free to fork and upvote if this notebook is helpful to you in some ways!
Table of Content:
Objective
Data manipulation
Data loading
Data viewing
Data cleaning
Data slicing
Data mapping
Recommendation models
Recommend with Collaborative Filtering (Edit on 2017/11/07)
Recommend with Pearsons' R correlation
Objective
<br>
Learn from the data and recommend the best TV shows to users, based on their own & others' behaviour
<br>
Data manipulation
Data loading
Each data file (there are 4 of them) contains below columns:
Movie ID (as first line of each new movie record / file)
Customer ID
Rating (1 to 5)
Date they gave the ratings
There is another file that contains the mapping of Movie ID to the movie background, like name, year of release, etc.
Let's import the library we needed before we get started:
End of explanation
# Skip date
df1 = pd.read_csv('../input/combined_data_1.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
df1['Rating'] = df1['Rating'].astype(float)
print('Dataset 1 shape: {}'.format(df1.shape))
print('-Dataset examples-')
print(df1.iloc[::5000000, :])
Explanation: Next let's load the first data file and get a feeling for how huge the dataset is:
End of explanation
#df2 = pd.read_csv('../input/combined_data_2.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
#df3 = pd.read_csv('../input/combined_data_3.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
#df4 = pd.read_csv('../input/combined_data_4.txt', header = None, names = ['Cust_Id', 'Rating'], usecols = [0,1])
#df2['Rating'] = df2['Rating'].astype(float)
#df3['Rating'] = df3['Rating'].astype(float)
#df4['Rating'] = df4['Rating'].astype(float)
#print('Dataset 2 shape: {}'.format(df2.shape))
#print('Dataset 3 shape: {}'.format(df3.shape))
#print('Dataset 4 shape: {}'.format(df4.shape))
Explanation: Let's try to load the 3 remaining datasets as well:
End of explanation
# load less data for speed
df = df1
#df = df1.append(df2)
#df = df.append(df3)
#df = df.append(df4)
df.index = np.arange(0,len(df))
print('Full dataset shape: {}'.format(df.shape))
print('-Dataset examples-')
print(df.iloc[::5000000, :])
Explanation: Now we combine datasets:
End of explanation
p = df.groupby('Rating')['Rating'].agg(['count'])
# get movie count
movie_count = df.isnull().sum()[1]
# get customer count
cust_count = df['Cust_Id'].nunique() - movie_count
# get rating count
rating_count = df['Cust_Id'].count() - movie_count
ax = p.plot(kind = 'barh', legend = False, figsize = (15,10))
plt.title('Total pool: {:,} Movies, {:,} customers, {:,} ratings given'.format(movie_count, cust_count, rating_count), fontsize=20)
plt.axis('off')
for i in range(1,6):
ax.text(p.iloc[i-1][0]/4, i-1, 'Rating {}: {:.0f}%'.format(i, p.iloc[i-1][0]*100 / p.sum()[0]), color = 'white', weight = 'bold')
Explanation: Data viewing
Let's take a first look at how the data are spread:
End of explanation
df_nan = pd.DataFrame(pd.isnull(df.Rating))
df_nan = df_nan[df_nan['Rating'] == True]
df_nan = df_nan.reset_index()
movie_np = []
movie_id = 1
for i,j in zip(df_nan['index'][1:],df_nan['index'][:-1]):
# numpy approach
temp = np.full((1,i-j-1), movie_id)
movie_np = np.append(movie_np, temp)
movie_id += 1
# Account for last record and corresponding length
# numpy approach
last_record = np.full((1,len(df) - df_nan.iloc[-1, 0] - 1),movie_id)
movie_np = np.append(movie_np, last_record)
print('Movie numpy: {}'.format(movie_np))
print('Length: {}'.format(len(movie_np)))
# remove those Movie ID rows
df = df[pd.notnull(df['Rating'])]
df['Movie_Id'] = movie_np.astype(int)
df['Cust_Id'] = df['Cust_Id'].astype(int)
print('-Dataset examples-')
print(df.iloc[::5000000, :])
Explanation: We can see that the rating tends to be relatively positive (>3). This may be due to the fact that unhappy customers tend to just leave instead of making efforts to rate. We can keep this in mind - low rating movies mean they are generally really bad
Data cleaning
Movie ID is really messy to import! Looping through the dataframe to add a Movie ID column WILL make the Kernel run out of memory, as it is too inefficient. I achieve the task by first creating a numpy array of the correct length and then adding the whole array as a column to the main dataframe! Let's see how it is done below:
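For reference, a pandas-only alternative (just a sketch, not what this notebook uses, and untested against the full file) would be to forward-fill the movie-id marker rows instead of building the array by hand:
```python
# Marker rows like "1:" carry a NaN rating; strip the colon, forward-fill, then drop the markers
is_movie_row = df['Rating'].isnull()
movie_ids = df['Cust_Id'].where(is_movie_row).str.replace(':', '').ffill()
# df = df[~is_movie_row].assign(Movie_Id=movie_ids[~is_movie_row].astype(int))
```
Whether this is faster or lighter on memory than the numpy approach above would need to be measured.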
End of explanation
f = ['count','mean']
df_movie_summary = df.groupby('Movie_Id')['Rating'].agg(f)
df_movie_summary.index = df_movie_summary.index.map(int)
movie_benchmark = round(df_movie_summary['count'].quantile(0.8),0)
drop_movie_list = df_movie_summary[df_movie_summary['count'] < movie_benchmark].index
print('Movie minimum times of review: {}'.format(movie_benchmark))
df_cust_summary = df.groupby('Cust_Id')['Rating'].agg(f)
df_cust_summary.index = df_cust_summary.index.map(int)
cust_benchmark = round(df_cust_summary['count'].quantile(0.8),0)
drop_cust_list = df_cust_summary[df_cust_summary['count'] < cust_benchmark].index
print('Customer minimum times of review: {}'.format(cust_benchmark))
Explanation: Data slicing
The data set now is super huge. I have tried many different ways but can't get the Kernel running as intended without memory errors. Therefore I tried to reduce the data volume by improving the data quality below:
Remove movies with too few reviews (they are relatively unpopular)
Remove customers who give too few reviews (they are relatively less active)
Having the above benchmarks will significantly improve efficiency, since unpopular movies and inactive customers still occupy the same volume as popular movies and active customers in the matrix view (NaN still occupies space). This should also help improve the statistical significance.
Let's see how it is implemented:
End of explanation
print('Original Shape: {}'.format(df.shape))
df = df[~df['Movie_Id'].isin(drop_movie_list)]
df = df[~df['Cust_Id'].isin(drop_cust_list)]
print('After Trim Shape: {}'.format(df.shape))
print('-Data Examples-')
print(df.iloc[::5000000, :])
Explanation: Now let's trim down our data, what's the difference in data size?
End of explanation
df_p = pd.pivot_table(df,values='Rating',index='Cust_Id',columns='Movie_Id')
print(df_p.shape)
# Below is another way I used to sparse the dataframe...doesn't seem to work better
#Cust_Id_u = list(sorted(df['Cust_Id'].unique()))
#Movie_Id_u = list(sorted(df['Movie_Id'].unique()))
#data = df['Rating'].tolist()
#row = df['Cust_Id'].astype('category', categories=Cust_Id_u).cat.codes
#col = df['Movie_Id'].astype('category', categories=Movie_Id_u).cat.codes
#sparse_matrix = csr_matrix((data, (row, col)), shape=(len(Cust_Id_u), len(Movie_Id_u)))
#df_p = pd.DataFrame(sparse_matrix.todense(), index=Cust_Id_u, columns=Movie_Id_u)
#df_p = df_p.replace(0, np.NaN)
Explanation: Let's pivot the data set and put it into a giant matrix - we need it for our recommendation system:
End of explanation
df_title = pd.read_csv('../input/movie_titles.csv', encoding = "ISO-8859-1", header = None, names = ['Movie_Id', 'Year', 'Name'])
df_title.set_index('Movie_Id', inplace = True)
print (df_title.head(10))
Explanation: Data mapping
Now we load the movie mapping file:
End of explanation
reader = Reader()
# get just top 100K rows for faster run time
data = Dataset.load_from_df(df[['Cust_Id', 'Movie_Id', 'Rating']][:100000], reader)
data.split(n_folds=3)
svd = SVD()
evaluate(svd, data, measures=['RMSE', 'MAE'])
Explanation: Recommendation models
Well all data required is loaded and cleaned! Next let's get into the recommendation system.
Recommend with Collaborative Filtering
Evaluate the performance of collaborative filtering, with just the first 100K rows for a faster process:
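Note that evaluate() and data.split() belong to the older scikit-surprise API. On newer versions of the library roughly the same check can be written via the model_selection module (a sketch, assuming surprise >= 1.0.5):
```python
from surprise.model_selection import cross_validate
cross_validate(SVD(), data, measures=['RMSE', 'MAE'], cv=3, verbose=True)
```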
End of explanation
df_785314 = df[(df['Cust_Id'] == 785314) & (df['Rating'] == 5)]
df_785314 = df_785314.set_index('Movie_Id')
df_785314 = df_785314.join(df_title)['Name']
print(df_785314)
Explanation: Below is what user 785314 liked in the past:
End of explanation
user_785314 = df_title.copy()
user_785314 = user_785314.reset_index()
user_785314 = user_785314[~user_785314['Movie_Id'].isin(drop_movie_list)]
# getting full dataset
data = Dataset.load_from_df(df[['Cust_Id', 'Movie_Id', 'Rating']], reader)
trainset = data.build_full_trainset()
svd.train(trainset)
user_785314['Estimate_Score'] = user_785314['Movie_Id'].apply(lambda x: svd.predict(785314, x).est)
user_785314 = user_785314.drop('Movie_Id', axis = 1)
user_785314 = user_785314.sort_values('Estimate_Score', ascending=False)
print(user_785314.head(10))
Explanation: Let's predict which movies user 785314 would love to watch:
End of explanation
def recommend(movie_title, min_count):
print("For movie ({})".format(movie_title))
print("- Top 10 movies recommended based on Pearsons'R correlation - ")
i = int(df_title.index[df_title['Name'] == movie_title][0])
target = df_p[i]
similar_to_target = df_p.corrwith(target)
corr_target = pd.DataFrame(similar_to_target, columns = ['PearsonR'])
corr_target.dropna(inplace = True)
corr_target = corr_target.sort_values('PearsonR', ascending = False)
corr_target.index = corr_target.index.map(int)
corr_target = corr_target.join(df_title).join(df_movie_summary)[['PearsonR', 'Name', 'count', 'mean']]
print(corr_target[corr_target['count']>min_count][:10].to_string(index=False))
Explanation: Recommend with Pearsons' R correlations
The way it works is that we use Pearsons' R correlation to measure the linear correlation between the review scores of all pairs of movies, and then we recommend the top 10 movies with the highest correlations:
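Under the hood, corrwith computes, for each pair of movies, the correlation over the customers who rated both of them. As an illustration (the two movie ids here are picked arbitrarily, not taken from the notebook's results):
```python
# Pearson's r between two movies, using only customers who rated both
movie_a, movie_b = df_p.columns[0], df_p.columns[1]
both_rated = df_p[movie_a].notnull() & df_p[movie_b].notnull()
print(np.corrcoef(df_p.loc[both_rated, movie_a], df_p.loc[both_rated, movie_b])[0, 1])
```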
End of explanation
recommend("What the #$*! Do We Know!?", 0)
Explanation: A recommendation for you if you like 'What the #$*! Do We Know!?'
End of explanation
recommend("X2: X-Men United", 0)
Explanation: X2: X-Men United:
End of explanation |
6,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ferrofluid - Part 2
Table of Contents
Applying an external magnetic field
Magnetization curve
Remark
Step1: and set up the simulation parameters where we introduce a new dimensionless parameter
\begin{equation}
\alpha = \frac{\mu B}{k_{\text{B}}T} = \frac{\mu \mu_0 H}{k_{\text{B}}T}
\end{equation}
which is called Langevin parameter. We intentionally choose a relatively high volume fraction $\phi$ and dipolar interaction parameter $\lambda$ to clearly see the influence of the dipole-dipole interaction
Step2: Now we set up the system. As in part I, the orientation of the dipole moments is set directly on the particles, whereas the magnitude of the moments is taken into account when determining the prefactor of the dipolar P3M (for more details see part I).
Hint
Step3: We now apply the external magnetic field which is
\begin{equation}
B = \mu_0 H = \frac{\alpha~k_{\text{B}}T}{\mu}
\end{equation}
As only the current orientation of the dipole moments, i.e. the unit vector of the dipole moments, is saved in the particle list but not their magnitude, we have to use $B\cdot \mu$ as the strength of the external magnetic field.
We will apply the field in x-direction using the class <tt>constraints</tt> of ESPResSo
Step4: Exercise
Step5: Now we can visualize the current state and see that the particles mostly create chains oriented in the direction of the external magnetic field. Also some monomers should be present.
Step7: Video of the development of the system
You may want to get an insight of how the system develops in time. Thus we now create a function which will save a video and embed it in an html string to create a video of the systems development
Step8: We now can start the sampling over the <tt>animation</tt> class of <tt>matplotlib</tt>
Step9: In the visualization video we can see that the single chains break and connect to each other during time. Also some monomers are present which break from and connect to chains. If you want to have some more frames, i.e. a longer video, just adjust the <tt>frames</tt> parameter in <tt>FuncAnimation</tt>.
Magnetization curve
An important observable of a ferrofluid system is the magnetization $M$ of the system in direction of an external magnetic field $H$
\begin{equation}
M = \frac{\sum_i \mu_i^H}{V}
\end{equation}
where the index $H$ means the component of $\mu_i$ in direction of the external magnetic field $H$ and the sum runs over all particles. $V$ is the system's volume.
The magnetization plotted over the external field $H$ is called magnetization curve. For particles with non-interacting dipole moments there is an analytical solution
\begin{equation}
M = M_{\text{sat}}\cdot L(\alpha)
\end{equation}
with $L(\alpha)$ the Langevin function
\begin{equation}
L(\alpha) = \coth(\alpha)-\frac{1}{\alpha}
\end{equation}
and $\alpha$ the Langevin parameter
\begin{equation}
\alpha=\frac{\mu_0\mu}{k_{\text{B}}T}H = \frac{\mu}{k_{\text{B}}T}B
\end{equation}
$M_{sat}$ is the so called saturation magnetization which is the magnetization of a system where all dipole moments are aligned to each other. Thus it is the maximum of the magnetization. In our case all dipole moments are equal, thus
\begin{equation}
M_{\text{sat}} = \frac{N_{\text{part}}\cdot\mu}{V}
\end{equation}
For better comparability we now introduce a dimensionless magnetization
\begin{equation}
M^* = \frac{M}{M_{sat}} = \frac{\sum_i \mu_i^H}{N_{\text{part}}\cdot \mu}
\end{equation}
Thus the analytical solution for non-interacting dipole moments $M^*$ is simply the Langevin function.
For interacting dipole moments there are only approximations for the magnetization curve available.
Here we want to use the approximation of Ref. <a href='#[1]'>[1]</a> for a quasi two dimensional system, which reads with adjusted coefficients (Ref. <a href='#[1]'>[1]</a> used a different dipole-dipole interaction prefactor $\gamma = 1$)
\begin{equation}
M_{\parallel}^{\text{q2D}} = M_{\text{sat}} L(\alpha) \left( 1 + \mu_0\frac{1}{8} M_{\text{sat}} \frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} \right)
\end{equation}
and
\begin{equation}
M_{\perp}^{\text{q2D}} = M_{\text{sat}} L(\alpha) \left( 1 - \mu_0\frac{1}{4} M_{\text{sat}} \frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} \right)
\end{equation}
for the magnetization with an external magnetic field parallel and perpendicular to the monolayer plane, respectively. Here the dipole-dipole interaction is approximated as a small perturbation and
\begin{equation}
\frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} = \left( \frac{1}{\alpha^2} - \frac{1}{\sinh^2(\alpha)} \right) \cdot \frac{\mu}{k_{\text{B}}T}
\end{equation}
By comparing the magnetization curve parallel $M_{\parallel}^{\text{q2D}}$ and perpendicular $M_{\perp}^{\text{q2D}}$ to the monolayer plane we can see that the magnetization is increased in the case of an external field parallel to the monolayer plane and decreased in the case of an external field perpendicular to the monolayer plane. The latter can be explained by the fact that an orientation of all single dipole moments perpendicular to the monolayer plane results in a configuration with a repulsive dipole-dipole interaction as the particles have no freedom of movement in the direction perpendicular to the monolayer plane. This counteracts the magnetization perpendicular to the monolayer plane.
We now want to use ESPResSo to get an estimation of how the magnetization curve is affected by the dipole-dipole interaction parallel and perpendicular to the monolayer plane and compare the results with the Langevin curve and the magnetization curves of Ref. <a href='#[1]'>[1]</a>.
For the sampling of the magnetization curve we set up a new system, where we decrease the dipolar interaction parameter $\lambda$ drastically. We do this as we want to compare our results with the approximation of Ref. <a href='#[1]'>[1]</a> which is only valid for small dipole-dipole interaction between the particles (decreasing the volume fraction $\phi$ would also be an appropriate choice). For smaller dipolar interaction parameters it is possible to increase the time step. We do this to get more uncorrelated measurements.
Step10: To increase the performance we use the built-in function <tt>MagneticDipoleMoment</tt> to calculate the dipole moment of the whole system. In our case this is only the orientation as we never set the strength of the dipole moments on our particles.
Exercise
Step11: For both the magnetization perpendicular and parallel to the monolayer plane we use the same system for every value of the Langevin parameter $\alpha$. Thus we use that the system is already more or less equilibrated from the previous run so we save some equilibration time. For scientific purposes one would use a new system for every value for the Langevin parameter to ensure that the systems are independent and no correlation effects are measured. Also one would perform more than just one simulation for each value of $\alpha$ to increase the precision of the results.
Now we sample the magnetization for increasing $\alpha$ (increasing magnetic field strength) in direction perpendicular to the monolayer plane.
Exercise
Step12: For the approximations of $M_{\parallel}^{\text{q2D}}$ and $M_{\perp}^{\text{q2D}}$ of Ref. <a href='#[1]'>[1]</a> we need the dipole moment of a single particle. Thus we calculate it from our dipolar interaction parameter $\lambda$
Step13: and the saturation magnetization by using
\begin{equation}
M_{\text{sat}} = \rho \mu = \phi \frac{4}{\pi \sigma^2} \mu
\end{equation}
thus
Step14: Further we need the derivation of the Langevin function after the external field $B$ thus we define the function
Step15: Now we define the approximated magnetization curves parallel and perpendicular to the monolayer plane
Step16: Now we define the Langevin function | Python Code:
import espressomd
espressomd.assert_features('DIPOLES', 'LENNARD_JONES')
from espressomd.magnetostatics import DipolarP3M
from espressomd.magnetostatic_extensions import DLC
import numpy as np
Explanation: Ferrofluid - Part 2
Table of Contents
Applying an external magnetic field
Magnetization curve
Remark: The equilibration and sampling times used in this tutorial would not be sufficient for scientific purposes, but they are long enough to get at least a qualitative insight into the behaviour of ferrofluids. They have been shortened so we achieve reasonable computation times for the purpose of a tutorial.
Applying an external magnetic field
In this part we want to investigate the influence of a homogeneous external magnetic field applied to a ferrofluid system.
We import all necessary packages and check for the required ESPResSo features
End of explanation
# Lennard-Jones parameters
LJ_SIGMA = 1.
LJ_EPSILON = 1.
LJ_CUT = 2**(1. / 6.) * LJ_SIGMA
# Particles
N_PART = 700
# Area fraction of the mono-layer
PHI = 0.06
# Dipolar interaction parameter lambda = MU_0 m^2 /(4 pi sigma^3 kT)
DIP_LAMBDA = 4.
# Temperature
KT = 1.0
# Friction coefficient
GAMMA = 1.0
# Time step
TIME_STEP = 0.01
# Langevin parameter ALPHA = MU_0 m H / kT
ALPHA = 10.
# vacuum permeability
MU_0 = 1.
Explanation: and set up the simulation parameters where we introduce a new dimensionless parameter
\begin{equation}
\alpha = \frac{\mu B}{k_{\text{B}}T} = \frac{\mu \mu_0 H}{k_{\text{B}}T}
\end{equation}
which is called the Langevin parameter. We intentionally choose a relatively high volume fraction $\phi$ and dipolar interaction parameter $\lambda$ to clearly see the influence of the dipole-dipole interaction
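For the reduced units used in this tutorial this means that, with $\alpha = 10$ and $k_{\text{B}}T = 1$, the product of field and dipole moment is simply $B\mu = \alpha\,k_{\text{B}}T = 10$, which is exactly the value that <tt>H_dipm</tt> is set to further below.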
End of explanation
# System setup
box_size = (N_PART * np.pi * (LJ_SIGMA / 2.)**2. / PHI)**0.5
print("Box size", box_size)
# Note that the dipolar P3M and dipolar layer correction need a cubic
# simulation box for technical reasons.
system = espressomd.System(box_l=(box_size, box_size, box_size))
system.time_step = TIME_STEP
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=LJ_EPSILON, sigma=LJ_SIGMA, cutoff=LJ_CUT, shift="auto")
# Random dipole moments
np.random.seed(seed=1)
dip_phi = 2. * np.pi * np.random.random((N_PART, 1))
dip_cos_theta = 2 * np.random.random((N_PART, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
# Random positions in the monolayer
pos = box_size * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1))))
# Add particles
system.part.add(pos=pos, rotation=N_PART * [(1, 1, 1)], dip=dip, fix=N_PART * [(0, 0, 1)])
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.05)
while system.analysis.energy()["total"] > 5 * KT * N_PART:
system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1)
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# Setup dipolar P3M and dipolar layer correction (DLC)
dp3m = DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT)
dlc = DLC(maxPWerror=1E-4, gap_size=box_size - LJ_SIGMA)
system.actors.add(dp3m)
system.actors.add(dlc)
# tune verlet list skin again
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# print skin value
print('tuned skin = {}'.format(system.cell_system.skin))
Explanation: Now we set up the system. As in part I, the orientation of the dipole moments is set directly on the particles, whereas the magnitude of the moments is taken into account when determining the prefactor of the dipolar P3M (for more details see part I).
Hint:
It should be noted that we seed both the Langevin thermostat and the random number generator of numpy. The latter means that the initial configuration of our system is the same every time this script will be executed. As the time evolution of the system depends not solely on the Langevin thermostat but also on the numeric accuracy and DP3M as well as DLC (the tuned parameters are slightly different every time) it is only partly predefined. You can change the seeds to simulate with a different initial configuration and a guaranteed different time evolution.
End of explanation
# magnetic field times dipole moment
H_dipm = ALPHA * KT
H_field = [H_dipm, 0, 0]
Explanation: We now apply the external magnetic field which is
\begin{equation}
B = \mu_0 H = \frac{\alpha~k_{\text{B}}T}{\mu}
\end{equation}
As only the current orientation of the dipole moments, i.e. the unit vector of the dipole moments, is saved in the particle list but not their magnitude, we have to use $B\cdot \mu$ as the strength of the external magnetic field.
We will apply the field in x-direction using the class <tt>constraints</tt> of ESPResSo
End of explanation
# Equilibrate
print("Equilibration...")
equil_rounds = 10
equil_steps = 200
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print("progress: {:3.0f}%, dipolar energy: {:9.2f}".format(
(i + 1) * 100. / equil_rounds, system.analysis.energy()["dipolar"]), end="\r")
print("\nEquilibration done")
Explanation: Exercise:
Define a homogenous magnetic field constraint using H_field and add it to system's contraints.
python
H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
system.constraints.add(H_constraint)
Equilibrate the system.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.xlim(0, box_size)
plt.ylim(0, box_size)
plt.xlabel('x-position', fontsize=20)
plt.ylabel('y-position', fontsize=20)
plt.plot(system.part[:].pos_folded[:, 0], system.part[:].pos_folded[:, 1], 'o')
plt.show()
Explanation: Now we can visualize the current state and see that the particles mostly create chains oriented in the direction of the external magnetic field. Also some monomers should be present.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from tempfile import NamedTemporaryFile
import base64
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
with open(f.name, "rb") as g:
video = g.read()
anim._encoded_video = base64.b64encode(video).decode('ascii')
plt.close(anim._fig)
return VIDEO_TAG.format(anim._encoded_video)
animation.Animation._repr_html_ = anim_to_html
def init():
# Set x and y range
ax.set_ylim(0, box_size)
ax.set_xlim(0, box_size)
xdata, ydata = [], []
part.set_data(xdata, ydata)
return part,
def run(i):
system.integrator.run(50)
# Save current system state as a plot
xdata, ydata = system.part[:].pos_folded[:, 0], system.part[:].pos_folded[:, 1]
ax.figure.canvas.draw()
part.set_data(xdata, ydata)
print("progress: {:3.0f}%".format(i + 1), end="\r")
return part,
Explanation: Video of the development of the system
You may want to get an insight into how the system develops in time. Thus we now create a function which saves a video and embeds it in an HTML string, so that we can show a video of the system's development
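As a side note, recent matplotlib versions (assuming ffmpeg is available) can build the embeddable HTML5 video themselves, which avoids the manual temporary file and base64 handling. A minimal alternative sketch:
```python
from IPython.display import HTML

def show_animation(anim):
    # anim is any matplotlib.animation.Animation instance
    return HTML(anim.to_html5_video())
```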
End of explanation
fig, ax = plt.subplots(figsize=(10, 10))
part, = ax.plot([], [], 'o')
animation.FuncAnimation(fig, run, frames=100, blit=True, interval=0, repeat=False, init_func=init)
Explanation: We now can start the sampling over the <tt>animation</tt> class of <tt>matplotlib</tt>
End of explanation
# Dipolar interaction parameter lambda = MU_0 m^2 /(4 pi sigma^3 kT)
DIP_LAMBDA = 1.
# increase time step
TIME_STEP = 0.02
# dipole moment
dipm = np.sqrt(4 * np.pi * DIP_LAMBDA * LJ_SIGMA**3. * KT / MU_0)
# remove all particles
system.part[:].remove()
system.thermostat.turn_off()
# Random dipole moments
dip_phi = 2. * np.pi * np.random.random((N_PART, 1))
dip_cos_theta = 2 * np.random.random((N_PART, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
# Random positions in the monolayer
pos = box_size * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1))))
# Add particles
system.part.add(pos=pos, rotation=N_PART * [(1, 1, 1)], dip=dip, fix=N_PART * [(0, 0, 1)])
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(f_max=0, gamma=0.1, max_displacement=0.05)
while system.analysis.energy()["total"] > 5 * KT * N_PART:
system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1)
# tune verlet list skin
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
# Setup dipolar P3M and dipolar layer correction
system.actors.remove(dp3m)
system.actors.remove(dlc)
dp3m = DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT)
dlc = DLC(maxPWerror=1E-4, gap_size=box_size - LJ_SIGMA)
system.actors.add(dp3m)
system.actors.add(dlc)
# tune verlet list skin again
system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100)
Explanation: In the visualization video we can see that the single chains break up and connect to each other over time. Also some monomers are present, which break off from and connect to chains. If you want to have some more frames, i.e. a longer video, just adjust the <tt>frames</tt> parameter in <tt>FuncAnimation</tt>.
Magnetization curve
An important observable of a ferrofluid system is the magnetization $M$ of the system in direction of an external magnetic field $H$
\begin{equation}
M = \frac{\sum_i \mu_i^H}{V}
\end{equation}
where the index $H$ means the component of $\mu_i$ in direction of the external magnetic field $H$ and the sum runs over all particles. $V$ is the system's volume.
The magnetization plotted over the external field $H$ is called magnetization curve. For particles with non-interacting dipole moments there is an analytical solution
\begin{equation}
M = M_{\text{sat}}\cdot L(\alpha)
\end{equation}
with $L(\alpha)$ the Langevin function
\begin{equation}
L(\alpha) = \coth(\alpha)-\frac{1}{\alpha}
\end{equation}
and $\alpha$ the Langevin parameter
\begin{equation}
\alpha=\frac{\mu_0\mu}{k_{\text{B}}T}H = \frac{\mu}{k_{\text{B}}T}B
\end{equation}
$M_{sat}$ is the so called saturation magnetization which is the magnetization of a system where all dipole moments are aligned to each other. Thus it is the maximum of the magnetization. In our case all dipole moments are equal, thus
\begin{equation}
M_{\text{sat}} = \frac{N_{\text{part}}\cdot\mu}{V}
\end{equation}
For better comparability we now introduce a dimensionless magnetization
\begin{equation}
M^* = \frac{M}{M_{sat}} = \frac{\sum_i \mu_i^H}{N_{\text{part}}\cdot \mu}
\end{equation}
Thus the analytical solution for non-interacting dipole moments $M^*$ is simply the Langevin function.
For interacting dipole moments there are only approximations for the magnetization curve available.
Here we want to use the approximation of Ref. <a href='#[1]'>[1]</a> for a quasi two dimensional system, which reads with adjusted coefficients (Ref. <a href='#[1]'>[1]</a> used a different dipole-dipole interaction prefactor $\gamma = 1$)
\begin{equation}
M_{\parallel}^{\text{q2D}} = M_{\text{sat}} L(\alpha) \left( 1 + \mu_0\frac{1}{8} M_{\text{sat}} \frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} \right)
\end{equation}
and
\begin{equation}
M_{\perp}^{\text{q2D}} = M_{\text{sat}} L(\alpha) \left( 1 - \mu_0\frac{1}{4} M_{\text{sat}} \frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} \right)
\end{equation}
for the magnetization with an external magnetic field parallel and perpendicular to the monolayer plane, respectively. Here the dipole-dipole interaction is approximated as a small perturbation and
\begin{equation}
\frac{\mathrm{d} L(\alpha)}{\mathrm{d}B} = \left( \frac{1}{\alpha^2} - \frac{1}{\sinh^2(\alpha)} \right) \cdot \frac{\mu}{k_{\text{B}}T}
\end{equation}
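As a quick consistency check: differentiating $L(\alpha) = \coth(\alpha) - 1/\alpha$ gives $\mathrm{d}L/\mathrm{d}\alpha = 1/\alpha^2 - 1/\sinh^2(\alpha)$, and since $\alpha = \mu B/(k_{\text{B}}T)$ the chain rule contributes the extra factor $\mu/(k_{\text{B}}T)$, which reproduces the expression above.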
By comparing the magnetization curve parallel $M_{\parallel}^{\text{q2D}}$ and perpendicular $M_{\perp}^{\text{q2D}}$ to the monolayer plane we can see that the magnetization is increased in the case of an external field parallel to the monolayer plane and decreased in the case of an external field perpendicular to the monolayer plane. The latter can be explained by the fact that an orientation of all single dipole moments perpendicular to the monolayer plane results in a configuration with a repulsive dipole-dipole interaction as the particles have no freedom of movement in the direction perpendicular to the monolayer plane. This counteracts the magnetization perpendicular to the monolayer plane.
We now want to use ESPResSo to get an estimation of how the magnetization curve is affected by the dipole-dipole interaction parallel and perpendicular to the monolayer plane and compare the results with the Langevin curve and the magnetization curves of Ref. <a href='#[1]'>[1]</a>.
For the sampling of the magnetization curve we set up a new system, where we decrease the dipolar interaction parameter $\lambda$ drastically. We do this as we want to compare our results with the approximation of Ref. <a href='#[1]'>[1]</a> which is only valid for small dipole-dipole interaction between the particles (decreasing the volume fraction $\phi$ would also be an appropriate choice). For smaller dipolar interaction parameters it is possible to increase the time step. We do this to get more uncorrelated measurements.
End of explanation
alphas = np.array([0, 0.25, 0.5, 1, 2, 3, 4, 8])
Explanation: To increase the performance we use the built-in function <tt>MagneticDipoleMoment</tt> to calculate the dipole moment of the whole system. In our case this is only the orientation as we never set the strength of the dipole moments on our particles.
Exercise:
Import the magnetic dipole moment observable and define an observable object dipm_tot.
Use particle slicing to pass all particle ids.
python
from espressomd.observables import MagneticDipoleMoment
dipm_tot = MagneticDipoleMoment(ids=system.part[:].id)
We use the dimensionless Langevin parameter $\alpha$ as the parameter for the external magnetic field. As the interesting part of the magnetization curve is the one for small external magnetic field strengths—for large external magnetic fields the magnetization goes into saturation in all cases—we increase the spacing between the Langevin parameters $\alpha$ up to higher values and write them into a list
End of explanation
import matplotlib.pyplot as plt
Explanation: For both the magnetization perpendicular and parallel to the monolayer plane we use the same system for every value of the Langevin parameter $\alpha$. Thus we use that the system is already more or less equilibrated from the previous run so we save some equilibration time. For scientific purposes one would use a new system for every value for the Langevin parameter to ensure that the systems are independent and no correlation effects are measured. Also one would perform more than just one simulation for each value of $\alpha$ to increase the precision of the results.
Now we sample the magnetization for increasing $\alpha$ (increasing magnetic field strength) in direction perpendicular to the monolayer plane.
Exercise:
Complete the loop such that for every alpha a magnetic field of strength of the respective H_dipm is applied:
```python
sampling with magnetic field perpendicular to monolayer plane (in z-direction)
remove all constraints
system.constraints.clear()
list of magnetization in field direction
magnetization_perp = np.full_like(alphas, np.nan)
number of loops for sampling
loops = 500
for ndx, alpha in enumerate(alphas):
print("Sampling for alpha = {}".format(alpha))
H_dipm = alpha * KT
print("Set magnetic field constraint...")
# < exercise >
print("done\n")
# Equilibration
print("Equilibration...")
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print(f"progress: {(i + 1) * 100. / equil_rounds}%, dipolar energy: {system.analysis.energy()['dipolar']}", end="\r")
print("\nEquilibration done\n")
# Sampling
print("Sampling...")
magn_temp = 0
for i in range(loops):
system.integrator.run(20)
magn_temp += dipm_tot.calculate()[2]
print(f"progress: {(i + 1) * 100. / loops}%", end="\r")
print("\n")
# save average magnetization
magnetization_perp[ndx] = magn_temp / loops
print("Sampling for alpha = {} done\n".format(alpha))
print("magnetizations = {}".format(magnetization_perp))
print(f"total progress: {(alphas.index(alpha) + 1) * 100. / len(alphas)}%\n")
# remove constraint
system.constraints.clear()
print("Magnetization curve sampling done")
```
```python
sampling with magnetic field perpendicular to monolayer plane (in z-direction)
remove all constraints
system.constraints.clear()
list of magnetization in field direction
magnetization_perp = np.full_like(alphas, np.nan)
number of loops for sampling
loops = 500
for ndx, alpha in enumerate(alphas):
print("Sampling for alpha = {}".format(alpha))
H_dipm = alpha * KT
print("Set magnetic field constraint...")
H_field = [0, 0, H_dipm]
H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
system.constraints.add(H_constraint)
print("done\n")
# Equilibration
print("Equilibration...")
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print(
f"progress: {(i + 1) * 100. / equil_rounds}%, dipolar energy: {system.analysis.energy()['dipolar']}",
end="\r")
print("\nEquilibration done\n")
# Sampling
print("Sampling...")
magn_temp = 0
for i in range(loops):
system.integrator.run(20)
magn_temp += dipm_tot.calculate()[2]
print(f"progress: {(i + 1) * 100. / loops}%", end="\r")
print("\n")
# save average magnetization
magnetization_perp[ndx] = magn_temp / loops
print("Sampling for alpha = {} done\n".format(alpha))
print("magnetizations = {}".format(magnetization_perp))
print(f"total progress: {(alphas.index(alpha) + 1) * 100. / len(alphas)}%\n")
# remove constraint
system.constraints.clear()
print("Magnetization curve sampling done")
```
and now we sample the magnetization for increasing $\alpha$ or increasing magnetic field in direction parallel to the monolayer plane.
Exercise:
Use the code from the previous exercise as a template.
Now sample the magnetization curve for a magnetic field parallel to the quasi-2D layer and store the calculated magnetizations in a list named magnetization_para (analogous to magnetization_perp).
Hint: Set up the field in $x$- or $y$-direction and sample the magnetization along the same axis.
```python
sampling with magnetic field parallel to monolayer plane (in x-direction)
remove all constraints
system.constraints.clear()
list of magnetization in field direction
magnetization_para = np.full_like(alphas, np.nan)
number of loops for sampling
loops = 500
for ndx, alpha in enumerate(alphas):
print("Sample for alpha = {}".format(alpha))
H_dipm = alpha * KT
H_field = [H_dipm, 0, 0]
print("Set magnetic field constraint...")
H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
system.constraints.add(H_constraint)
print("done\n")
# Equilibration
print("Equilibration...")
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print(
f"progress: {(i + 1) * 100. / equil_rounds}%, dipolar energy: {system.analysis.energy()['dipolar']}",
end="\r")
print("\nEquilibration done\n")
# Sampling
print("Sampling...")
magn_temp = 0
for i in range(loops):
system.integrator.run(20)
magn_temp += dipm_tot.calculate()[0]
print(f"progress: {(i + 1) * 100. / loops}%", end="\r")
print("\n")
# save average magnetization
magnetization_para[ndx] = magn_temp / loops
print("Sampling for alpha = {} done\n".format(alpha))
print("magnetizations = {}".format(magnetization_para))
print(f"total progress: {(alphas.index(alpha) + 1) * 100. / len(alphas)}%\n")
# remove constraint
system.constraints.clear()
print("Magnetization curve sampling done")
```
Now we can compare the resulting magnetization curves with the Langevin curve and the more advanced ones of Ref. <a href='#[1]'>[1]</a> by plotting all of them in one figure. Thus first we import matplotlib if not already done
End of explanation
# dipole moment
dipm = np.sqrt(DIP_LAMBDA * 4 * np.pi * LJ_SIGMA**3. * KT / MU_0)
print('dipole moment = {}'.format(dipm))
Explanation: For the approximations of $M_{\parallel}^{\text{q2D}}$ and $M_{\perp}^{\text{q2D}}$ of Ref. <a href='#[1]'>[1]</a> we need the dipole moment of a single particle. Thus we calculate it from our dipolar interaction parameter $\lambda$
End of explanation
M_sat = PHI * 4. / np.pi * 1. / (LJ_SIGMA**2.) * dipm
Explanation: and the saturation magnetization by using
\begin{equation}
M_{\text{sat}} = \rho \mu = \phi \frac{4}{\pi \sigma^2} \mu
\end{equation}
thus
End of explanation
def dL_dB(alpha):
return (1. / (alpha**2.) - 1. / ((np.sinh(alpha))**2.)) * dipm / (KT)
Explanation: Further we need the derivation of the Langevin function after the external field $B$ thus we define the function
End of explanation
# approximated magnetization curve for a field parallel to the monolayer plane
def magnetization_approx_para(alpha):
return L(alpha) * (1. + MU_0 / 8. * M_sat * dL_dB(alpha))
# approximated magnetization curve for a field perpendicular to the monolayer plane
def magnetization_approx_perp(alpha):
return L(alpha) * (1. - MU_0 / 4. * M_sat * dL_dB(alpha))
Explanation: Now we define the approximated magnetization curves parallel and perpendicular to the monolayer plane
End of explanation
# Langevin function
def L(x):
return (np.cosh(x) / np.sinh(x)) - 1. / x
Explanation: Now we define the Langevin function
End of explanation |
6,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/logo.jpg" style="display
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <span style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <div style="text-align | Python Code:
shoes_in_my_drawer = int(input("How many shoes do you have in your drawer? "))
if shoes_in_my_drawer % 2 == 1:
print("You have an odd number of shoes. Something is wrong!")
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the Python learning project: a cartoon snake in yellow and blue weaving between the letters of the course name, 'Learning Python'. The slogan above the course name reads: a free project for learning programming in Hebrew.">
<p style="text-align: right; direction: rtl; float: right;">Conditions – Part 2</p>
<p style="text-align: right; direction: rtl; float: right;">Conditions – a reminder</p>
<p style="text-align: right; direction: rtl; float: right;">
Let's recall the previous notebook, in which we learned about conditions.<br>
We learned that with the keyword <code>if</code> we can ask our code to perform a series of actions only if some condition holds.<br>
In other words: we can ask a piece of code to run only if a certain boolean expression is equal to <code>True</code>.
</p>
<p style="text-align: right; direction: rtl; float: right;">
Let's see an example:
</p>
End of explanation
shoes_in_my_drawer = int(input("How many shoes do you have in your drawer? "))
if shoes_in_my_drawer % 2 == 1:
print("You have an odd number of shoes. Something is wrong!")
if shoes_in_my_drawer % 2 != 1:
print("You have an even number of shoes. Congratulations!")
Explanation: <p style="text-align: right; direction: rtl; float: right;">
In the code above, we asked the user to enter the number of shoes they have, and if the number was odd we printed that something strange is going on.<br>
But what happens if we want to print a confirmation message for the user when we find that everything is fine?
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
How would you improve the program above so that it prints a confirmation message to the user that everything is fine?<br>
Use the tools you acquired in the previous notebook. Try to think of at least 2 similar ways.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right;">What if not?<br><sub>On the else keyword</sub></span>
<p style="text-align: right; direction: rtl; float: right;">One way to solve the exercise above is this:</p>
End of explanation
shoes_in_my_drawer = int(input("How many shoes do you have in your drawer? "))
if shoes_in_my_drawer % 2 == 1:
print("You have an odd number of shoes. Something is wrong!")
else:
print("You have an even number of shoes. Congratulations!")
Explanation: <p style="text-align: right; direction: rtl; float: right;">
To print an appropriate message to the user in every case, we added a condition that is the <em>opposite</em> of the first one, which prints the user an appropriate message.<br>
One of the interesting things in the program above is that we have two conditions whose meanings are opposites of each other. One checks for evenness, and the other checks for oddness.<br>
In fact, if we wanted to phrase the code in words, we could say: <q>If the number of shoes is odd, print that there is a problem. <strong>Otherwise</strong>, print that everything is fine</q>.
</p>
<p style="text-align: right; direction: rtl; float: right;">
Python gives us a convenient tool to express this "otherwise", a tool that makes the code easier to read – the keyword <code>else</code>.<br>
Let's see how the code above can be phrased with the help of <code>else</code>:
</p>
End of explanation
is_boiler_on = input("Is your boiler turned on? [yes/no] ")
hour = int(input("Enter the hour (00-23): "))
minute = int(input("Enter the minute (00-59): "))
if is_boiler_on == 'yes':
is_boiler_on = True
else:
is_boiler_on = False
if not is_boiler_on and hour == 7 and minute > 0:
is_boiler_on = True
else:
if is_boiler_on:
is_boiler_on = False
if is_boiler_on:
boiler_status = "on"
else:
boiler_status = "off"
print("Boiler is " + boiler_status + " right now.")
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Note the indentation: the <code>else</code> itself is not indented, but the content inside it is.<br>
Remember also that <code>else</code> belongs to the <code>if</code> before it, and describes <em>the complementary case</em> of the condition in that <code>if</code>.<br>
Another way to think about <code>else</code> is that <mark>the code block inside the <code>else</code> runs when the boolean expression used as the <code>if</code> condition evaluates to <code>False</code></mark>.
</p>
<div style="text-align: right; direction: rtl; float: right; clear: both;">Examples of conditions with <code>else</code></div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the following examples, <em style='background-color: #3C6478; color: white;'>the condition appears like this</em>, and <em style='background-color: #F58B4C; color: white;'>the actions that follow it appear like this</em>.
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>If <em style='background-color: #3C6478; color: white;'>the time is before 20:00</em>, <em style='background-color: #F58B4C; color: white;'>keep the restaurant lights bright</em>. <em style='background-color: #3C6478; color: white;'>Otherwise</em>, <em style='background-color: #F58B4C; color: white;'>dim the lights</em> (a short sketch of this example appears right after this list).</li>
<li>If <em style='background-color: #3C6478; color: white;'>the user's age is at least 18</em>, <em style='background-color: #F58B4C; color: white;'>let them into the bar</em>. <em style='background-color: #3C6478; color: white;'>Otherwise</em>, <em style='background-color: #F58B4C; color: white;'>suggest other attractions to visit</em> and also <em style='background-color: #F58B4C; color: white;'>do not let them in</em>.</li>
<li>If the ATM user <em style='background-color: #3C6478; color: white;'>entered a negative amount, or more than the amount available in their account</em>, <em style='background-color: #F58B4C; color: white;'>show them an error message</em>. <em style='background-color: #3C6478; color: white;'>Otherwise</em>, <em style='background-color: #F58B4C; color: white;'>deduct the amount from their account</em> and <em style='background-color: #F58B4C; color: white;'>dispense bills for the amount they entered</em>.</li>
<li>If <em style='background-color: #3C6478; color: white;'>the boiler is off and the time is before 8:00 and the time is after 7:00</em>, <em style='background-color: #F58B4C; color: white;'>turn the boiler on</em>. <em style='background-color: #3C6478; color: white;'>Otherwise</em>, if <em style='background-color: #3C6478; color: white;'>the boiler is on</em>, <em style='background-color: #F58B4C; color: white;'>turn the boiler off</em>.</li>
</ol>
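<p style="text-align: right; direction: rtl; float: right; clear: both;">
To make the first example concrete, here is a minimal sketch. It is an added illustration rather than part of the original lesson, and the variable names and messages are assumptions made only for this example:
</p>
<pre><code># Example 1 as code: dim the restaurant lights from 20:00 onward.
hour = int(input("Enter the hour (00-23): "))
if hour < 20:
    lighting = "bright"
else:
    lighting = "dim"
print("Restaurant lighting is now " + lighting + ".")
</code></pre>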
<div style="text-align: right; direction: rtl;">זרימת התוכנית: ציור לדוגמה</div>
<figure>
<img src="images/else-flow.svg" style="display: block; margin-left: auto; margin-right: auto;" alt="איור (גרף) להמחשה של זרימת התוכנה. מתחילים בלקבל קלט, עוברים לתנאי. אם הוא לא מתקיים, בודקים אם הדוד דלוק. אם הדוד דלוק, מכבים את הדוד ומשם עוברים לסוף התוכנית. אם התנאי ההתחלתי קטן מתקיים, מדליקים את הדוד, ואז מגיעים לסוף התוכנית.">
<figcaption style="text-align: center; direction: rtl;"></figcaption>
</figure>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מימוש לדוגמה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
התוכנית המוצגת פה מעט מורכבת יותר מהשרטוט המופיע למעלה, כדי לתת לה נופך מציאותי יותר.
</p>
End of explanation
copies_sold = int(input("How many copies were sold? "))
# We use capital letters for these variable names.
# This is a convention among programmers that signals these values are constants and will not change while the program runs.
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000
if copies_sold >= DIAMOND_ALBUM:
print("Diamond album")
else:
if copies_sold >= PLATINUM_ALBUM:
print("Platinum album")
else:
if copies_sold >= GOLD_ALBUM:
print("Gold album")
else:
if copies_sold >= SILVER_ALBUM:
print("Silver album")
else:
print("Your album is not a best-seller")
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">What's the situation?<br><sub>Handling multiple cases</sub></span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the United States, album sales are ranked like this:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>A music album is considered a "<dfn>silver album</dfn>" if at least <em>100,000</em> copies of it were sold.</li>
<li>A music album is considered a "<dfn>gold album</dfn>" if at least <em>500,000</em> copies of it were sold.</li>
<li>A music album is considered a "<dfn>platinum album</dfn>" if at least <em>1,000,000</em> copies of it were sold.</li>
<li>A music album is considered a "<dfn>diamond album</dfn>" if at least <em>10,000,000</em> copies of it were sold.</li>
</ol>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Read in the number of copies sold of the hit metal album "רוצח או נזלת", and print the album's ranking.<br>
For example, if the user entered <em>520,196</em> as the number of sales, print "<samp>Gold album</samp>".<br>
If the album did not sell enough to earn any ranking, print "<samp>Your album is not a best-seller</samp>".<br>
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">A solution, and how to improve it</span>
<p style="text-align: right; direction: rtl; float: right;">
In the exercise you were asked to solve, we need several conditions that complement one another.<br>
Here are two possible solutions:
</p>
End of explanation
copies_sold = int(input("How many copies were sold? "))
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000
if copies_sold >= DIAMOND_ALBUM:
print("Diamond album")
if copies_sold >= PLATINUM_ALBUM and copies_sold < DIAMOND_ALBUM:
print("Platinum album")
if copies_sold >= GOLD_ALBUM and copies_sold < PLATINUM_ALBUM:
print("Gold album")
if copies_sold >= SILVER_ALBUM and copies_sold < GOLD_ALBUM:
print("Silver album")
if copies_sold < SILVER_ALBUM:
print("Your album is not a best-seller")
Explanation: <p style="text-align: right; direction: rtl; float: right;">
You probably noticed that the code looks a bit clumsy because of all the indentation, and it will only get clumsier as we add more possible cases.<br>
Let's try to fix that by defining an exact range for each ranking.<br>
The reworked code looks like this:
</p>
End of explanation
copies_sold = int(input("How many copies were sold? "))
SILVER_ALBUM = 100000
GOLD_ALBUM = 500000
PLATINUM_ALBUM = 1000000
DIAMOND_ALBUM = 10000000
if copies_sold >= DIAMOND_ALBUM:
print("Diamond album")
elif copies_sold >= PLATINUM_ALBUM:
print("Platinum album")
elif copies_sold >= GOLD_ALBUM:
print("Gold album")
elif copies_sold >= SILVER_ALBUM:
print("Silver album")
else:
print("Your album is not a best-seller")
Explanation: <p style="text-align: right; direction: rtl; float: right;">
The code looks considerably better, but in every <code>if</code> we also check that the previous condition did not hold, and that is quite cumbersome.<br>
We do it to avoid printing twice: every gold album has, of course, also sold enough copies to count as a silver album, but we do not want to print both messages to the user.</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
What would happen if there were no extra conditions after the logical operator <code>and</code>?<br>
Delete them, give the program <em>10000000</em> as input, and check what the result is.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
<div style="text-align: right; direction: rtl; float: right; clear: both;">Otherwise-if: elif</div>
<p style="text-align: right; direction: rtl; float: right;">
To solve the problem shown above, Python gives us another tool, called <code>elif</code>.<br>
It is simply shorthand for <code dir="ltr" style="direction: ltr">else... if (condition)</code>, or in plain words: if the previous condition did not hold, check whether...<br>
See, for example, how it turns the previous code into <em>much</em> more readable code:
</p>
End of explanation
age = int(input("Please enter your age: ")))
if age < 0:
print("Your age is invalid."
if age < 18:
print("Younger than 18.")
else
print("You are so old!")
Explanation: <div style="text-align: right; direction: rtl; float: right; clear: both;">What is happening here?</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The trick is that on a line that starts with <code>elif</code>, Python tries the condition <em>only if</em> none of the conditions before it held.<br>
In other words, the boolean expressions in all of the preceding conditions evaluated to <code>False</code>.<br>
On every line that has an <code>if</code> or an <code>elif</code>, Python checks whether the boolean expression on that line holds, and then (as the short sketch after this list demonstrates):
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>If it does, Python runs the statements indented under that condition and stops checking the remaining conditions.</li>
<li>If it does not, Python moves on to the conditions in the following <code>elif</code>s (if there are any).</li>
<li>If none of the conditions hold, the code block that belongs to the <code>else</code> runs (if there is an <code>else</code>).</li>
</ol>
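<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here is a tiny sketch of that evaluation order. It is an added illustration, not one of the exercises below, and the temperature thresholds are arbitrary:
</p>
<pre><code>temperature = 31
if temperature > 35:
    print("Extremely hot")
elif temperature > 30:
    print("Hot")        # this runs: the first check failed and 31 > 30
elif temperature > 20:
    print("Pleasant")   # skipped, because an earlier branch already ran
else:
    print("Cold")
</code></pre>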
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
ניתן לכתוב <code>if</code> בלי <code>else</code> ובלי <code>elif</code>־ים אחריו.<br>
<code>if</code> תמיד יהיה ראשון, ואם יש צורך להוסיף מקרים, <code>else</code> תמיד יהיה אחרון, וביניהם יהיו <code>elif</code>־ים.<br>
</p>
</div>
</div>
<p style="align: right; direction: rtl; float: right;">תרגול</p>
<p style="align: right; direction: rtl; float: right;">כניסה לבנק, שלב 2</p>
<p style="text-align: right; direction: rtl; float: right;">
שם המשתמש שלי לבנק הוא <em>wrong</em>, והסיסמה שלי היא <em>ads sports</em>.<br>
שם המשתמש של מנהל הבנק היא <em>admin</em>, והסיסמה שלו היא <em>is trator</em>.
<br>
קבלו מהמשתמש שם משתמש וסיסמה, והדפיסו לו הודעה שמספרת לו לאיזה משתמש הוא הצליח להתחבר.<br>
אם לא הצליח להתחבר, הדפיסו לו הודעת שגיאה מתאימה.
</p>
<p style="align: right; direction: rtl; float: right;">חשבון למתחילים, שלב 1</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
דני רוצה ללמוד חשבון, ולצורך כך הוא צריך מחשבון פשוט שיעזור לו.<br>
כתבו תוכנה שמקבלת 2 מספרים ופעולה חשבונית (<em>+</em>, <em>-</em>, <em>*</em>, <em>/</em> או <em>**</em>), ויודעת להחזיר את התשובה הנכונה לתרגיל.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לדוגמה:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both; margin-top: 0;">
<li>עבור מספר ראשון <em>5</em>, מספר שני <em>2</em> ופעולה <em>/</em> התוכנית תדפיס <samp>2.5</samp>, כיוון ש־<span dir="ltr">5/2 == 2.5</span>.</li>
<li>עבור מספר ראשון <em>9</em>, מספר שני <em>2</em> ופעולה <em>**</em> התוכנית תדפיס <samp>81</samp>, כיוון ש־<span dir="ltr">9<sup>2</sup> == 81</span>.</li>
<li>עבור מספר ראשון <em>3</em>, מספר שני <em>7</em> ופעולה <em>-</em> התוכנית תדפיס <samp dir="ltr" style="direction: ltr;">-4</samp>, כיוון ש־<span dir="ltr">3-7 == -4</span>.</li>
</ul>
<p style="align: right; direction: rtl; float: right;">מחשבון מס הכנסה</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
המיסוי על הכנסה בישראל גבוה מאוד ושיעורו נקבע לפי מדרגות כדלקמן:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>מי שמרוויח עד 6,310 ש"ח, משלם מס בשיעור 10% על הסכום הזה.</li>
<li>מי שמרוויח עד 9,050 ש"ח, משלם מס בשיעור 14% על הסכום הזה.</li>
<li>מי שמרוויח עד 14,530 ש"ח, משלם מס בשיעור 20% על הסכום הזה.</li>
<li>מי שמרוויח עד 20,200 ש"ח, משלם מס בשיעור 31% על הסכום הזה.</li>
<li>מי שמרוויח עד 42,030 ש"ח, משלם מס בשיעור 35% על הסכום הזה.</li>
<li>מי שמרוויח עד 54,130 ש"ח, משלם מס בשיעור 47% על הסכום הזה.</li>
<li>מי שמרוויח מעל הסכום האחרון, משלם מס בשיעור 50% על הסכום הזה.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הסכום תמיד משולם על ההפרש בין המדרגות.<br>
לדוגמה, אם הרווחתי בחודש מסוים 10,000 ש"ח, תשלום המיסים שלי יחושב כך:<br>
על 6,310 השקלים הראשונים אשלם 631 ש"ח, שהם 10% מאותה מדרגה.<br>
על הסכום העודף עד 9,050 שקלים, שהוא 2,740 שקלים <span dir="ltr" style="direction: ltr">(9,050 - 6,310)</span> אשלם 383.6 שקלים, שהם 14% מ־2,740 שקלים.<br>
על הסכום העודף בסך 950 שקלים <span dir="ltr" style="direction: ltr">(10,000 - 9,050)</span> אשלם 190 ש"ח, שהם 20% מ־950 שקלים.<br>
בסך הכול, אשלם למס הכנסה באותו חודש 631 + 383.6 + 190 ש"ח, שהם 1,204.6 שקלים.
</p>
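<p style="text-align: right; direction: rtl; float: right; clear: both;">
Purely as a sanity check on the arithmetic above (this is an added snippet, not the full solution to the exercise that follows), the worked example can be reproduced like this:
</p>
<pre><code># Check the worked example for a 10,000 NIS salary, using the brackets listed above.
first_bracket_tax = 6310 * 0.10            # 631.0
second_bracket_tax = (9050 - 6310) * 0.14  # 383.6
third_bracket_tax = (10000 - 9050) * 0.20  # 190.0
print(first_bracket_tax + second_bracket_tax + third_bracket_tax)  # ~1,204.6, as in the example
</code></pre>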
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו קוד למחשבון שמקבל את השכר החודשי שלכם ומדפיס את סכום המס שתשלמו.
<p>
### <span style="align: right; direction: rtl; float: right; clear: both;">ירוץ אם נתקן, אחרת...</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד הבא נפלו שגיאות רבות.<br>
תקנו אותו והריצו אותו כך שיעבוד ושידפיס הודעה אחת בלבד עבור כל מצב.
</p>
End of explanation |
6,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bad posterior geometry and how to deal with it
HMC and its variant NUTS use gradient information to draw (approximate) samples from a posterior distribution.
These gradients are computed in a particular coordinate system, and different choices of coordinate system can make HMC more or less efficient.
This is analogous to the situation in constrained optimization problems where, for example, parameterizing a positive quantity via an exponential versus softplus transformation results in distinct optimization dynamics.
Consequently it is important to pay attention to the geometry of the posterior distribution.
Reparameterizing the model (i.e. changing the coordinate system) can make a big practical difference for many complex models.
For the most complex models it can be absolutely essential. For the same reason it can be important to pay attention to some of the hyperparameters that control HMC/NUTS (in particular the max_tree_depth and dense_mass).
In this tutorial we explore models with bad posterior geometries---and what one can do to achieve better performance---with a few concrete examples.
Step1: We begin by writing a helper function to do NUTS inference.
Step2: Evaluating HMC/NUTS
In general it is difficult to assess whether the samples returned from HMC or NUTS represent accurate (approximate) samples from the posterior.
Two general rules of thumb, however, are to look at the effective sample size (ESS) and r_hat diagnostics returned by mcmc.print_summary().
If we see values of r_hat in the range (1.0, 1.05) and effective sample sizes that are comparable to the total number of samples num_samples (assuming thinning=1) then we have good reason to believe that HMC is doing a good job.
If, however, we see low effective sample sizes or large r_hats for some of the variables (e.g. r_hat = 1.15) then HMC is likely struggling with the posterior geometry.
In the following we will use r_hat as our primary diagnostic metric.
Model reparameterization
Example #1
We begin with an example (horseshoe regression; see examples/horseshoe_regression.py for a complete example script) where reparameterization helps a lot.
This particular example demonstrates a general reparameterization strategy that is useful in many models with hierarchical/multi-level structure.
For more discussion of some of the issues that can arise in hierarchical models see reference [1].
Step3: To deal with the bad geometry that results from this coordinate system we change coordinates using the following re-write logic.
Instead of
$$ \beta \sim {\rm Normal}(0, \lambda \tau) $$
we write
$$ \beta^\prime \sim {\rm Normal}(0, 1) $$
and
$$ \beta \equiv \lambda \tau \beta^\prime $$
where $\beta$ is now defined deterministically in terms of $\lambda$, $\tau$,
and $\beta^\prime$.
In effect we've changed to a coordinate system where the different
latent variables are less correlated with one another.
In this new coordinate system we can expect HMC with a diagonal mass matrix to behave much better than it would in the original coordinate system.
There are basically two ways to implement this kind of reparameterization in NumPyro
Step4: Next we do the reparameterization using numpyro.infer.reparam.
There are at least two ways to do this.
First let's use LocScaleReparam.
Step5: To show the versatility of the numpyro.infer.reparam library let's do the reparameterization using TransformReparam instead.
Step6: Finally we verify that _rep_hs_model1, _rep_hs_model2, and _rep_hs_model3 do indeed achieve better r_hats than _unrep_hs_model.
Step7: Aside
Step8: Example #3
Using dense_mass=True can be very expensive when the dimension of the latent space D is very large.
In addition it can be difficult to estimate a full-rank mass matrix with D^2 parameters using a moderate number of samples if D is large. In these cases dense_mass=True can be a poor choice.
Luckily, the argument dense_mass can also be used to specify structured mass matrices that are richer than a diagonal mass matrix but more constrained (i.e. have fewer parameters) than a full-rank mass matrix (see the docs).
In this second example we show how we can use dense_mass to specify such a structured mass matrix.
Step9: Now let's compare two choices of dense_mass.
Step10: max_tree_depth
The hyperparameter max_tree_depth can play an important role in determining the quality of posterior samples generated by NUTS. The default value in NumPyro is max_tree_depth=10. In some models, in particular those with especially difficult geometries, it may be necessary to increase max_tree_depth above 10. In other cases where computing the gradient of the model log density is particularly expensive, it may be necessary to decrease max_tree_depth below 10 to reduce compute. As an example where large max_tree_depth is essential, we return to a variant of example #2. (We note that in this particular case another way to improve performance would be to use dense_mass=True).
Example #4 | Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
from functools import partial
import numpy as np
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.diagnostics import summary
from numpyro.infer import MCMC, NUTS
assert numpyro.__version__.startswith("0.9.2")
# NB: replace cpu by gpu to run this notebook on gpu
numpyro.set_platform("cpu")
Explanation: Bad posterior geometry and how to deal with it
HMC and its variant NUTS use gradient information to draw (approximate) samples from a posterior distribution.
These gradients are computed in a particular coordinate system, and different choices of coordinate system can make HMC more or less efficient.
This is analogous to the situation in constrained optimization problems where, for example, parameterizing a positive quantity via an exponential versus softplus transformation results in distinct optimization dynamics.
Consequently it is important to pay attention to the geometry of the posterior distribution.
Reparameterizing the model (i.e. changing the coordinate system) can make a big practical difference for many complex models.
For the most complex models it can be absolutely essential. For the same reason it can be important to pay attention to some of the hyperparameters that control HMC/NUTS (in particular the max_tree_depth and dense_mass).
In this tutorial we explore models with bad posterior geometries---and what one can do to achieve better performance---with a few concrete examples.
End of explanation
def run_inference(
model, num_warmup=1000, num_samples=1000, max_tree_depth=10, dense_mass=False
):
kernel = NUTS(model, max_tree_depth=max_tree_depth, dense_mass=dense_mass)
mcmc = MCMC(
kernel,
num_warmup=num_warmup,
num_samples=num_samples,
num_chains=1,
progress_bar=False,
)
mcmc.run(random.PRNGKey(0))
summary_dict = summary(mcmc.get_samples(), group_by_chain=False)
# print the largest r_hat for each variable
for k, v in summary_dict.items():
spaces = " " * max(12 - len(k), 0)
print("[{}] {} \t max r_hat: {:.4f}".format(k, spaces, np.max(v["r_hat"])))
Explanation: We begin by writing a helper function to do NUTS inference.
End of explanation
# In this unreparameterized model some of the parameters of the distributions
# explicitly depend on other parameters (in particular beta depends on lambdas and tau).
# This kind of coordinate system can be a challenge for HMC.
def _unrep_hs_model(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
betas = numpyro.sample("betas", dist.Normal(scale=tau * lambdas))
mean_function = jnp.dot(X, betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
Explanation: Evaluating HMC/NUTS
In general it is difficult to assess whether the samples returned from HMC or NUTS represent accurate (approximate) samples from the posterior.
Two general rules of thumb, however, are to look at the effective sample size (ESS) and r_hat diagnostics returned by mcmc.print_summary().
If we see values of r_hat in the range (1.0, 1.05) and effective sample sizes that are comparable to the total number of samples num_samples (assuming thinning=1) then we have good reason to believe that HMC is doing a good job.
If, however, we see low effective sample sizes or large r_hats for some of the variables (e.g. r_hat = 1.15) then HMC is likely struggling with the posterior geometry.
In the following we will use r_hat as our primary diagnostic metric.
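As an added, minimal sketch (not part of the original tutorial): assuming `mcmc` is an MCMC object that has already been run, as in the helper above, one way to flag suspicious parameters programmatically is:
# added sketch: flag any latent variable whose max r_hat exceeds 1.05
samples = mcmc.get_samples(group_by_chain=False)
diag = summary(samples, group_by_chain=False)
suspects = {k: float(np.max(v["r_hat"])) for k, v in diag.items() if np.max(v["r_hat"]) > 1.05}
print(suspects if suspects else "all r_hat values look acceptable")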
Model reparameterization
Example #1
We begin with an example (horseshoe regression; see examples/horseshoe_regression.py for a complete example script) where reparameterization helps a lot.
This particular example demonstrates a general reparameterization strategy that is useful in many models with hierarchical/multi-level structure.
For more discussion of some of the issues that can arise in hierarchical models see reference [1].
End of explanation
# In this reparameterized model none of the parameters of the distributions
# explicitly depend on other parameters. This model is exactly equivalent
# to _unrep_hs_model but is expressed in a different coordinate system.
def _rep_hs_model1(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
unscaled_betas = numpyro.sample(
"unscaled_betas", dist.Normal(scale=jnp.ones(X.shape[1]))
)
scaled_betas = numpyro.deterministic("betas", tau * lambdas * unscaled_betas)
mean_function = jnp.dot(X, scaled_betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
Explanation: To deal with the bad geometry that results from this coordinate system we change coordinates using the following re-write logic.
Instead of
$$ \beta \sim {\rm Normal}(0, \lambda \tau) $$
we write
$$ \beta^\prime \sim {\rm Normal}(0, 1) $$
and
$$ \beta \equiv \lambda \tau \beta^\prime $$
where $\beta$ is now defined deterministically in terms of $\lambda$, $\tau$,
and $\beta^\prime$.
In effect we've changed to a coordinate system where the different
latent variables are less correlated with one another.
In this new coordinate system we can expect HMC with a diagonal mass matrix to behave much better than it would in the original coordinate system.
There are basically two ways to implement this kind of reparameterization in NumPyro:
manually (i.e. by hand)
using numpyro.infer.reparam, which automates a few common reparameterization strategies
To begin with let's do the reparameterization by hand.
End of explanation
from numpyro.infer.reparam import LocScaleReparam
# LocScaleReparam with centered=0 fully "decenters" the prior over betas.
config = {"betas": LocScaleReparam(centered=0)}
# The coordinate system of this model is equivalent to that in _rep_hs_model1 above.
_rep_hs_model2 = numpyro.handlers.reparam(_unrep_hs_model, config=config)
Explanation: Next we do the reparameterization using numpyro.infer.reparam.
There are at least two ways to do this.
First let's use LocScaleReparam.
End of explanation
from numpyro.distributions.transforms import AffineTransform
from numpyro.infer.reparam import TransformReparam
# In this reparameterized model none of the parameters of the distributions
# explicitly depend on other parameters. This model is exactly equivalent
# to _unrep_hs_model but is expressed in a different coordinate system.
def _rep_hs_model3(X, Y):
lambdas = numpyro.sample("lambdas", dist.HalfCauchy(jnp.ones(X.shape[1])))
tau = numpyro.sample("tau", dist.HalfCauchy(jnp.ones(1)))
# instruct NumPyro to do the reparameterization automatically.
reparam_config = {"betas": TransformReparam()}
with numpyro.handlers.reparam(config=reparam_config):
betas_root_variance = tau * lambdas
# in order to use TransformReparam we have to express the prior
# over betas as a TransformedDistribution
betas = numpyro.sample(
"betas",
dist.TransformedDistribution(
dist.Normal(0.0, jnp.ones(X.shape[1])),
AffineTransform(0.0, betas_root_variance),
),
)
mean_function = jnp.dot(X, betas)
numpyro.sample("Y", dist.Normal(mean_function, 0.05), obs=Y)
Explanation: To show the versatility of the numpyro.infer.reparam library let's do the reparameterization using TransformReparam instead.
End of explanation
# create fake dataset
X = np.random.RandomState(0).randn(100, 500)
Y = X[:, 0]
print("unreparameterized model (very bad r_hats)")
run_inference(partial(_unrep_hs_model, X, Y))
print("\nreparameterized model with manual reparameterization (good r_hats)")
run_inference(partial(_rep_hs_model1, X, Y))
print("\nreparameterized model with LocScaleReparam (good r_hats)")
run_inference(partial(_rep_hs_model2, X, Y))
print("\nreparameterized model with TransformReparam (good r_hats)")
run_inference(partial(_rep_hs_model3, X, Y))
Explanation: Finally we verify that _rep_hs_model1, _rep_hs_model2, and _rep_hs_model3 do indeed achieve better r_hats than _unrep_hs_model.
End of explanation
# Because rho is very close to 1.0 the posterior geometry
# is extremely skewed and using the "diagonal" coordinate system
# implied by dense_mass=False leads to bad results
rho = 0.9999
cov = jnp.array([[10.0, rho], [rho, 0.1]])
def mvn_model():
numpyro.sample("x", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov))
print("dense_mass = False (bad r_hat)")
run_inference(mvn_model, dense_mass=False, max_tree_depth=3)
print("dense_mass = True (good r_hat)")
run_inference(mvn_model, dense_mass=True, max_tree_depth=3)
Explanation: Aside: numpyro.deterministic
In _rep_hs_model1 above we used numpyro.deterministic to define scaled_betas.
We note that using this primitive is not strictly necessary; however, it has the consequence that scaled_betas will appear in the trace and will thus appear in the summary reported by mcmc.print_summary().
In other words we could also have written:
scaled_betas = tau * lambdas * unscaled_betas
without invoking the deterministic primitive.
Mass matrices
By default HMC/NUTS use diagonal mass matrices.
For models with complex geometries it can pay to use a richer set of mass matrices.
Example #2
In this first simple example we show that using a full-rank (i.e. dense) mass matrix leads to a better r_hat.
End of explanation
rho = 0.9
cov = jnp.array([[10.0, rho], [rho, 0.1]])
# In this model x1 and x2 are highly correlated with one another
# but not correlated with y at all.
def partially_correlated_model():
x1 = numpyro.sample(
"x1", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov)
)
x2 = numpyro.sample(
"x2", dist.MultivariateNormal(jnp.zeros(2), covariance_matrix=cov)
)
y = numpyro.sample("y", dist.Normal(jnp.zeros(100), 1.0))
numpyro.sample("obs", dist.Normal(x1 - x2, 0.1), jnp.ones(2))
Explanation: Example #3
Using dense_mass=True can be very expensive when the dimension of the latent space D is very large.
In addition it can be difficult to estimate a full-rank mass matrix with D^2 parameters using a moderate number of samples if D is large. In these cases dense_mass=True can be a poor choice.
Luckily, the argument dense_mass can also be used to specify structured mass matrices that are richer than a diagonal mass matrix but more constrained (i.e. have fewer parameters) than a full-rank mass matrix (see the docs).
In this second example we show how we can use dense_mass to specify such a structured mass matrix.
End of explanation
print("dense_mass = False (very bad r_hats)")
run_inference(partially_correlated_model, dense_mass=False, max_tree_depth=3)
print("\ndense_mass = True (bad r_hats)")
run_inference(partially_correlated_model, dense_mass=True, max_tree_depth=3)
# We use dense_mass=[("x1", "x2")] to specify
# a structured mass matrix in which the y-part of the mass matrix is diagonal
# and the (x1, x2) block of the mass matrix is full-rank.
# Graphically:
#
# x1 x2 y
# x1 | * * 0 |
# x2 | * * 0 |
# y | 0 0 * |
print("\nstructured mass matrix (good r_hats)")
run_inference(partially_correlated_model, dense_mass=[("x1", "x2")], max_tree_depth=3)
Explanation: Now let's compare two choices of dense_mass.
End of explanation
# Because rho is very close to 1.0 the posterior geometry is extremely
# skewed and using small max_tree_depth leads to bad results.
rho = 0.999
dim = 200
cov = rho * jnp.ones((dim, dim)) + (1 - rho) * jnp.eye(dim)
def mvn_model():
x = numpyro.sample(
"x", dist.MultivariateNormal(jnp.zeros(dim), covariance_matrix=cov)
)
print("max_tree_depth = 5 (bad r_hat)")
run_inference(mvn_model, max_tree_depth=5)
print("max_tree_depth = 10 (good r_hat)")
run_inference(mvn_model, max_tree_depth=10)
Explanation: max_tree_depth
The hyperparameter max_tree_depth can play an important role in determining the quality of posterior samples generated by NUTS. The default value in NumPyro is max_tree_depth=10. In some models, in particular those with especially difficult geometries, it may be necessary to increase max_tree_depth above 10. In other cases where computing the gradient of the model log density is particularly expensive, it may be necessary to decrease max_tree_depth below 10 to reduce compute. As an example where large max_tree_depth is essential, we return to a variant of example #2. (We note that in this particular case another way to improve performance would be to use dense_mass=True).
Example #4
End of explanation |
6,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot Emission Measure Distributions
Compute and plot emission measure distributions, $\mathrm{EM}(T)$ for the EBTEL and HYDRAD results for varying pulse duration $\tau$.
Step1: First, load the data for the EBTEL and HYDRAD results.
Step2: We'll do some very basic curve fitting on a couple of our $\mathrm{EM}$ distributions, so let's set the parameters for that.
Step3: Define some parameters for labeling
Step4: Single-fluid, Figure 1(b)
Step5: Electron Heating, Figure 3(b)
Step6: Ion Heating, Figure 5(b) | Python Code:
import os
import sys
import pickle
import numpy as np
from scipy.optimize import curve_fit
import seaborn.apionly as sns
import matplotlib.pyplot as plt
from matplotlib import ticker
sys.path.append(os.path.join(os.environ['EXP_DIR'],'EBTEL_analysis/src'))
import em_binner as emb
%matplotlib inline
plt.rcParams.update({'figure.figsize' : [8,8]})
Explanation: Plot Emission Measure Distributions
Compute and plot emission measure distributions, $\mathrm{EM}(T)$ for the EBTEL and HYDRAD results for varying pulse duration $\tau$.
End of explanation
with open(__depends__[0],'rb') as f:
ebtel_results = pickle.load(f)
with open(__depends__[1],'rb') as f:
hydrad_results = pickle.load(f)
Explanation: First, load the data for the EBTEL and HYDRAD results.
End of explanation
Ta = np.log10(6e+6)
Tb = np.log10(10e+6)
def pl_func(x,a,b):
return a + b*x
Explanation: We'll do some very basic curve fitting on a couple of our $\mathrm{EM}$ distributions, so let's set the parameters for that.
End of explanation
tau = [20,40,200,500]
Explanation: Define some parameters for labeling
End of explanation
fig = plt.figure()
ax = fig.gca()
for i in range(len(ebtel_results)):
#EBTEL
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=ebtel_results[i]['t'],temp=ebtel_results[i]['T'],
density=ebtel_results[i]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],
linestyle='solid',label=r'$\tau=%d$ $\mathrm{s}$'%tau[i])
#Curve Fitting
logT = np.log10((bin_edges[:-1]+bin_edges[1:])/2)
logem = np.log10(hist/10)
T_fit = logT[(logT>=Ta) & (logT<=Tb)]
em_fit = logem[(logT>=Ta) & (logT<=Tb)]
try:
popt,pcov = curve_fit(pl_func,T_fit,em_fit)
print('Value of the slope for %s is b=%f'%(r'$\tau=%d$ $\mathrm{s}$'%tau[i],popt[1]))
except ValueError:
print('Cannot find fit for %s'%(r'$\tau=%d$ $\mathrm{s}$'%tau[i]))
#HYDRAD
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=hydrad_results['time'],
temp=hydrad_results['single']['tau%ds'%tau[i]]['Te'],
density=hydrad_results['single']['tau%ds'%tau[i]]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],linestyle='dotted')
#aesthetics
#scale
ax.set_yscale('log')
ax.set_xscale('log')
#limits
ax.set_ylim([1e+23,1e+28])
ax.set_xlim([10**5.5,10**7.5])
#ticks
#y
ax.yaxis.set_major_locator(ticker.LogLocator(numticks=5))
#labels
ax.set_xlabel(r'$T\,\,\mathrm{(K)}$')
ax.set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax.legend(loc=2)
#save
plt.savefig(__dest__[0])
plt.show()
Explanation: Single-fluid, Figure 1(b)
End of explanation
fig = plt.figure()
ax = fig.gca()
for i in range(len(ebtel_results)):
#EBTEL
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=ebtel_results[i]['te'],temp=ebtel_results[i]['Tee'],
density=ebtel_results[i]['ne'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],
linestyle='solid',label=r'$\tau=%d$ $\mathrm{s}$'%tau[i])
#HYDRAD
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=hydrad_results['time'],
temp=hydrad_results['electron']['tau%ds'%tau[i]]['Te'],
density=hydrad_results['electron']['tau%ds'%tau[i]]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],linestyle='dotted')
#aesthetics
#scale
ax.set_yscale('log')
ax.set_xscale('log')
#limits
ax.set_ylim([1e+23,1e+28])
ax.set_xlim([10**5.5,10**7.5])
#ticks
#y
ax.yaxis.set_major_locator(ticker.LogLocator(numticks=5))
#labels
ax.set_xlabel(r'$T\,\,\mathrm{(K)}$')
ax.set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax.legend(loc=2)
#save
plt.savefig(__dest__[1])
plt.show()
Explanation: Electron Heating, Figure 3(b)
End of explanation
fig = plt.figure()
ax = fig.gca()
for i in range(len(ebtel_results)):
#EBTEL
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=ebtel_results[i]['ti'],temp=ebtel_results[i]['Tie'],
density=ebtel_results[i]['ni'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],
linestyle='solid',label=r'$\tau=%d$ $\mathrm{s}$'%tau[i])
#HYDRAD
binner = emb.EM_Binner(2.*ebtel_results[i]['loop_length'],time=hydrad_results['time'],
temp=hydrad_results['ion']['tau%ds'%tau[i]]['Te'],
density=hydrad_results['ion']['tau%ds'%tau[i]]['n'])
binner.build_em_dist()
hist,bin_edges = np.histogram(binner.T_em_flat,bins=binner.T_em_histo_bins,weights=np.array(binner.em_flat))
ax.plot((bin_edges[:-1]+bin_edges[1:])/2,hist/10,color=sns.color_palette('deep')[i],linestyle='dotted')
#aesthetics
#scale
ax.set_yscale('log')
ax.set_xscale('log')
#limits
ax.set_ylim([1e+23,1e+28])
ax.set_xlim([10**5.5,10**7.5])
#ticks
#y
ax.yaxis.set_major_locator(ticker.LogLocator(numticks=5))
#labels
ax.set_xlabel(r'$T\,\,\mathrm{(K)}$')
ax.set_ylabel(r'$\mathrm{EM}\,\,(\mathrm{cm}^{-5})$')
#legend
ax.legend(loc=2)
#save
plt.savefig(__dest__[2])
plt.show()
Explanation: Ion Heating, Figure 5(b)
End of explanation |
6,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Step 1
Step2: <div class="alert alert-success">As we can see in the above diagram, a lot of the news relates to Politics and related topics. We can also see that a total of 20 news categories are defined in this dataset, so as part of this topic modelling exercise we can try to categorize the dataset into 20 topics.
</div>
Step 2
Step3: <div class="alert alert-warning">
<b>Applying TFIDFVectorizer to pre-process the data into vectors.</b>
- max_df
Step4: <div class="alert alert-success">
Here you can notice that the transformed dataset holds a sparse matrix with a dimension of 200853x21893; where 200853 is the total number of rows and 21893 is the total word corpus.
</div>
Step 3
Step5: <div class="alert alert-danger">
Note
Step6: <div class="alert alert-warning">
Transforming the existing dataframe and adding the content with a topic id and LDA generated topics
<div>
Step7: Step 6 | Python Code:
## required installation for LDA visualization
!pip install pyLDAvis
## imports
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import pyLDAvis
from pyLDAvis import sklearn
pyLDAvis.enable_notebook()
Explanation: <a href="https://colab.research.google.com/github/rishuatgithub/MLPy/blob/master/Topic_Modelling_LDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Topic Modelling using Latent-Dirichlet Allocation
Blog URL : Topic Modelling : Latent Dirichlet Allocation, an introduction
Author : Rishu Shrivastava
End of explanation
### Reading the dataset from path
filename = 'News_Category_Dataset_v2.json'
data = pd.read_json(filename, lines=True)
data.head()
### data dimensions (rows, columns) of the dataset we are dealing with
data.shape
### Total articles by category spread - viz
plt.figure(figsize=(20,5))
sns.set_style("whitegrid")
sns.countplot(x='category',data=data, orient='h', palette='husl')
plt.xticks(rotation=90)
plt.title("Category count of article")
plt.show()
Explanation: Step 1: Loading and Understanding Data
<div class="active active-primary">
As part of this step we will load an existing dataset into a pandas DataFrame and take a brief look at the data.
- Source of the dataset is from Kaggle [News category classifier](https://www.kaggle.com/hengzheng/news-category-classifier-val-acc-0-65).</div>
End of explanation
### transform the dataset to fit the original requirement
data['Combined_Description'] = data['headline'] + data['short_description']
filtered_data = data[['category','Combined_Description']]
filtered_data.head()
## checking the dimensions of filtered data
filtered_data.shape
Explanation: <div class="alert alert-success">As we can see in the above diagram, a lot of the news relates to Politics and related topics. We can also see that a total of 20 news categories are defined in this dataset, so as part of this topic modelling exercise we can try to categorize the dataset into 20 topics.
</div>
Step 2: Transforming the dataset
<div class="alert alert-warning">For the purpose of this demo and blog, we will do the following:
1. **Combine** both the **Headline and Short Description** into one single column to bring more context to the news and corpus. Calling it as: ```Combined_Description```
2. **Drop** rest of the attributes from the dataframe other than Combined_Description and Categories.
</div>
End of explanation
df_tfidf = TfidfVectorizer(max_df=0.5, min_df=10, stop_words='english', lowercase=True)
df_tfidf_transformed = df_tfidf.fit_transform(filtered_data['Combined_Description'])
df_tfidf_transformed
Explanation: <div class="alert alert-warning">
<b>Applying TFIDFVectorizer to pre-process the data into vectors.</b>
- max_df : Ignore words that appear in more than 50% of the documents (max_df=0.5).
- min_df : Keep only words that appear in at least 10 documents in the corpus (min_df=10).
- stop_words : Remove the stop words. We can do this in separate steps or in a single step.
</div>
End of explanation
### Define the LDA model and set the topic size to 20.
topic_clusters = 20
lda_model = LatentDirichletAllocation(n_components=topic_clusters, batch_size=128, random_state=42)
### Fit the filtered data to the model
lda_model.fit(df_tfidf_transformed)
Explanation: <div class="alert alert-success">
Here you can notice that the transformed dataset is a sparse matrix with dimensions 200853x21893, where 200853 is the number of documents (rows) and 21893 is the size of the extracted vocabulary (a quick check of these numbers follows below).
</div>
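A small added check (not in the original notebook) can confirm those numbers, using the objects defined in the code above:
## added sanity check: confirm the TF-IDF matrix shape quoted above
n_docs, n_terms = df_tfidf_transformed.shape
print(n_docs, n_terms)                  # expected 200853 x 21893 per the note above
print(df_tfidf_transformed.getnnz())    # number of stored non-zero tf-idf weights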
Step 3: Building Latent-Dirichlet Algorithm using scikit-learn
End of explanation
topic_word_dict = {}
top_n_words_num = 10
for index, topic in enumerate(lda_model.components_):
topic_id = index
topic_words_max = [df_tfidf.get_feature_names()[i] for i in topic.argsort()[-top_n_words_num:]]
topic_word_dict[topic_id] = topic_words_max
print(f"Topic ID : {topic_id}; Top 10 Most Words : {topic_words_max}")
Explanation: <div class="alert alert-danger">
Note: Fitting the model to the dataset takes a long time. On success, you will see the fitted model summary as the output.
</div>
Step 4: LDA Topic Cluster
End of explanation
topic_output = lda_model.transform(df_tfidf_transformed)
filtered_data = filtered_data.copy()
filtered_data['LDA_Topic_ID'] = topic_output.argmax(axis=1)
filtered_data['Topic_word_categories'] = filtered_data['LDA_Topic_ID'].apply(lambda id: topic_word_dict[id])
filtered_data[['category','Combined_Description','LDA_Topic_ID','Topic_word_categories']].head()
Explanation: <div class="alert alert-warning">
Transforming the existing dataframe and adding the content with a topic id and LDA generated topics
</div>
End of explanation
viz = sklearn.prepare(lda_model=lda_model, dtm=df_tfidf_transformed, vectorizer=df_tfidf)
pyLDAvis.display(viz)
Explanation: Step 6: Visualizing
End of explanation |
6,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eaton & Ree (2013) single-end RAD data set
Here we demonstrate a denovo assembly for an empirical RAD data set using the ipyrad Python API. This example was run on a workstation with 20 cores and takes about 10 minutes to assemble, but you should be able to run it on a 4-core laptop in ~30-60 minutes.
For our example data we will use the 13 taxa Pedicularis data set from Eaton and Ree (2013) (Open Access). This data set is composed of single-end 75bp reads from a RAD-seq library prepared with the PstI enzyme. The data set also serves as an example for several of our analysis cookbooks that demonstrate methods for analyzing RAD-seq data files. At the end of this notebook there are also several examples of how to use the ipyrad analysis tools to run downstream analyses in parallel.
The figure below shows the ingroup taxa from this study and their sampling locations. The study includes all species within a small monophyletic clade of Pedicularis, including multiple individuals from 5 species and several subspecies, as well as an outgroup species. The sampling essentially spans from population-level variation where species boundaries are unclear, to higher-level divergence where species boundaries are quite distinct. This is a common scale at which RAD-seq data are often very useful.
<img src="https
Step1: In contrast to the ipyrad CLI, the ipyrad API gives users much more fine-scale control over the parallelization of their analysis, but this also requires learning a little bit about the library that we use to do this, called ipyparallel. This library is designed for use with jupyter-notebooks to allow massive-scale multi-processing while working interactively.
Understanding the nuts and bolts of it might take a little while, but it is fairly easy to get started using it, especially in the way it is integrated with ipyrad. To start a parallel client to you must run the command-line program 'ipcluster'. This will essentially start a number of independent Python processes (kernels) which we can then send bits of work to do. The cluster can be stopped and restarted independently of this notebook, which is convenient for working on a cluster where connecting to many cores is not always immediately available.
Step2: Download the data set (Pedicularis)
These data are archived on the NCBI sequence read archive (SRA) under accession id SRP021469. As part of the ipyrad analysis tools we have a wrapper around the SRAtools software that can be used to query NCBI and download sequence data based on accession IDs. Run the code below to download the fastq data files associated with this study. The data will be saved the specified directory which will be created if it does not already exist. The compressed file size of the data is a little over 1GB. If you pass your ipyclient to the .run() command below then the download will be parallelized.
Step3: Create an Assembly object
This object stores the parameters of the assembly and the organization of data files.
Step4: Set parameters for the Assembly. This will raise an error if any of the parameters are not allowed because they are the wrong type, or out of the allowed range.
Step5: Assemble the data set
Step6: Branch to create several final data sets with different parameter settings
Step7: View final stats
The .stats attribute shows a stats summary for each sample, and a number of stats dataframes can be accessed for each step from the .stats_dfs attribute of the Assembly.
Step8: Analysis tools
We have a lot more information about analysis tools in the ipyrad documentation. But here I'll show just a quick example of how you can easily access the data files for these assemblies and use them in downstream analysis software. The ipyrad analysis tools include convenient wrappers to make it easier to parallelize analyses of RAD-seq data. Please see the full documentation for the ipyrad.analysis tools in the ipyrad documentation for more details.
Step9: RAxML -- ML concatenation tree inference
Step10: tetrad -- quartet tree inference
Step11: STRUCTURE -- population cluster inference
Step12: TREEMIX -- ML tree & admixture co-inference
Step13: ABBA-BABA admixture inference
Step14: BPP -- species tree inference/delim
Step15: run BPP
You can either call 'write_bpp_files()' to write input files for this data set to be run in BPP, and then call BPP on those files, or you can use the '.run()' command to run the data files directly, and in parallel on the cluster. If you specify multiple reps then a different random sample of loci will be selected, and different random seeds applied to each replicate. | Python Code:
## conda install ipyrad -c ipyrad
## conda install toytree -c eaton-lab
## conda install sra-tools -c bioconda
## imports
import ipyrad as ip
import ipyrad.analysis as ipa
import ipyparallel as ipp
Explanation: Eaton & Ree (2013) single-end RAD data set
Here we demonstrate a denovo assembly for an empirical RAD data set using the ipyrad Python API. This example was run on a workstation with 20 cores and takes about 10 minutes to assemble, but you should be able to run it on a 4-core laptop in ~30-60 minutes.
For our example data we will use the 13 taxa Pedicularis data set from Eaton and Ree (2013) (Open Access). This data set is composed of single-end 75bp reads from a RAD-seq library prepared with the PstI enzyme. The data set also serves as an example for several of our analysis cookbooks that demonstrate methods for analyzing RAD-seq data files. At the end of this notebook there are also several examples of how to use the ipyrad analysis tools to run downstream analyses in parallel.
The figure below shows the ingroup taxa from this study and their sampling locations. The study includes all species within a small monophyletic clade of Pedicularis, including multiple individuals from 5 species and several subspecies, as well as an outgroup species. The sampling essentially spans from population-level variation where species boundaries are unclear, to higher-level divergence where species boundaries are quite distinct. This is a common scale at which RAD-seq data are often very useful.
<img src="https://raw.githubusercontent.com/eaton-lab/eaton-lab.github.io/master/slides/slide-images/Eaton-Ree-2012-Ped-Fig1.png">
Setup (software and data files)
If you haven't done so yet, start by installing ipyrad using conda (see ipyrad installation instructions) as well as the packages in the cell below. This is easiest to do in a terminal. Then open a jupyter-notebook, like this one, and follow along with the tuturial by copying and executing the code in the cells, and adding your own documentation between them using markdown. Feel free to modify parameters to see their effects on the downstream results.
End of explanation
# Open a terminal and type the following command to start
# an ipcluster instance with 40 engines:
# ipcluster start -n 40 --cluster-id="ipyrad" --daemonize
# After the cluster is running you can attach to it with ipyparallel
ipyclient = ipp.Client(cluster_id="ipyrad")
Explanation: In contrast to the ipyrad CLI, the ipyrad API gives users much more fine-scale control over the parallelization of their analysis, but this also requires learning a little bit about the library that we use to do this, called ipyparallel. This library is designed for use with jupyter-notebooks to allow massive-scale multi-processing while working interactively.
Understanding the nuts and bolts of it might take a little while, but it is fairly easy to get started using it, especially in the way it is integrated with ipyrad. To start a parallel client to you must run the command-line program 'ipcluster'. This will essentially start a number of independent Python processes (kernels) which we can then send bits of work to do. The cluster can be stopped and restarted independently of this notebook, which is convenient for working on a cluster where connecting to many cores is not always immediately available.
End of explanation
## download the Pedicularis data set from NCBI
sra = ipa.sratools(accessions="SRP021469", workdir="fastqs-Ped")
sra.run(force=True, ipyclient=ipyclient)
Explanation: Download the data set (Pedicularis)
These data are archived on the NCBI sequence read archive (SRA) under accession id SRP021469. As part of the ipyrad analysis tools we have a wrapper around the SRAtools software that can be used to query NCBI and download sequence data based on accession IDs. Run the code below to download the fastq data files associated with this study. The data will be saved the specified directory which will be created if it does not already exist. The compressed file size of the data is a little over 1GB. If you pass your ipyclient to the .run() command below then the download will be parallelized.
End of explanation
## you must provide a name for the Assembly
data = ip.Assembly("pedicularis")
Explanation: Create an Assembly object
This object stores the parameters of the assembly and the organization of data files.
End of explanation
## set parameters
data.set_params("project_dir", "analysis-ipyrad")
data.set_params("sorted_fastq_path", "fastqs-Ped/*.fastq.gz")
data.set_params("clust_threshold", "0.90")
data.set_params("filter_adapters", "2")
data.set_params("max_Hs_consens", (5, 5))
data.set_params("trim_loci", (0, 5, 0, 0))
data.set_params("output_formats", "psvnkua")
## see/print all parameters
data.get_params()
Explanation: Set parameters for the Assembly. This will raise an error if any of the parameters are not allowed because they are the wrong type, or out of the allowed range.
End of explanation
## run steps 1 & 2 of the assembly
data.run("12")
## access the stats of the assembly (so far) from the .stats attribute
data.stats
## run steps 3-6 of the assembly
data.run("3456")
Explanation: Assemble the data set
End of explanation
## create a branch for outputs with min_samples = 4 (lots of missing data)
min4 = data.branch("min4")
min4.set_params("min_samples_locus", 4)
min4.run("7")
## create a branch for outputs with min_samples = 13 (no missing data)
min13 = data.branch("min13")
min13.set_params("min_samples_locus", 13)
min13.run("7")
## create a branch with no missing data for ingroups, but allow
## missing data in the outgroups by setting population assignments.
## The population min-sample values overrule the min-samples-locus param
pops = data.branch("min11-pops")
pops.populations = {
"ingroup": (11, [i for i in pops.samples if "prz" not in i]),
"outgroup" : (0, [i for i in pops.samples if "prz" in i]),
}
pops.run("7")
## create a branch with no missing data and with outgroups removed
nouts = data.branch("nouts_min11", subsamples=[i for i in pops.samples if "prz" not in i])
nouts.set_params("min_samples_locus", 11)
nouts.run("7")
Explanation: Branch to create several final data sets with different parameter settings
End of explanation
## we can access the stats summary as pandas dataframes.
min4.stats
## or print the full stats file
cat $min4.stats_files.s7
## and we can access parts of the full stats outputs as dataframes
min4.stats_dfs.s7_samples
## compare this to the one above, coverage is more equal
min13.stats_dfs.s7_samples
## similarly, coverage is equal here among ingroups, but allows missing in outgroups
pops.stats_dfs.s7_samples
Explanation: View final stats
The .stats attribute shows a stats summary for each sample, and a number of stats dataframes can be accessed for each step from the .stats_dfs attribute of the Assembly.
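For example (an added illustration; it assumes the per-step tables follow the same s1-s7 naming used for the s7 tables below), the step 3 clustering table could be pulled the same way:
## added example: per-step stats table, assuming .stats_dfs exposes tables named s1-s7
min4.stats_dfs.s3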
End of explanation
import ipyrad as ip
import ipyrad.analysis as ipa
## you can re-load assemblies at a later time from their JSON file
min4 = ip.load_json("analysis-ipyrad/min4.json")
min13 = ip.load_json("analysis-ipyrad/min13.json")
nouts = ip.load_json("analysis-ipyrad/nouts_min11.json")
Explanation: Analysis tools
We have a lot more information about analysis tools in the ipyrad documentation. But here I'll show just a quick example of how you can easily access the data files for these assemblies and use them in downstream analysis software. The ipyrad analysis tools include convenient wrappers to make it easier to parallelize analyses of RAD-seq data. Please see the full documentation for the ipyrad.analysis tools in the ipyrad documentation for more details.
End of explanation
## conda install raxml -c bioconda
## conda install toytree -c eaton-lab
## create a raxml analysis object for the min13 data sets
rax = ipa.raxml(
name=min13.name,
data=min13.outfiles.phy,
workdir="analysis-raxml",
T=20,
N=100,
o=[i for i in min13.samples if "prz" in i],
)
## print the raxml command and call it
print rax.command
rax.run(force=True)
## access the resulting tree files
rax.trees
## plot a tree in the notebook with toytree
import toytree
tre = toytree.tree(rax.trees.bipartitions)
tre.draw(
width=350,
height=400,
node_labels=tre.get_node_values("support"),
);
Explanation: RAxML -- ML concatenation tree inference
End of explanation
## create a tetrad analysis object
tet = ipa.tetrad(
name=min4.name,
seqfile=min4.outfiles.snpsphy,
mapfile=min4.outfiles.snpsmap,
nboots=100,
)
## run tree inference
tet.run(ipyclient)
## access tree files
tet.trees
## plot results (just like above, but unrooted by default)
## the consensus tree here differs from the ML tree above.
import toytree
qtre = toytree.tree(tet.trees.nhx)
qtre.root(wildcard="prz")
qtre.draw(
width=350,
height=400,
node_labels=qtre.get_node_values("support"),
);
Explanation: tetrad -- quartet tree inference
End of explanation
## conda install structure clumpp -c ipyrad
## create a structure analysis object for the no-outgroup data set
## NB: As of v0.9.64 you can use instead: data=data.outfiles.snps_database
struct = ipa.structure(
name=nouts.name,
data="analysis-ipyrad/nouts_min11_outfiles/nouts_min11.snps.hdf5",
)
## set params for analysis (should be longer in real analyses)
struct.mainparams.burnin=1000
struct.mainparams.numreps=8000
## run structure across 10 random replicates of sampled unlinked SNPs
for kpop in [2, 3, 4, 5, 6]:
struct.run(kpop=kpop, nreps=10, ipyclient=ipyclient)
## wait for all of these jobs to finish
ipyclient.wait()
## collect results
tables = {}
for kpop in [2, 3, 4, 5, 6]:
tables[kpop] = struct.get_clumpp_table(kpop)
## custom sorting order
myorder = [
"41478_cyathophylloides",
"41954_cyathophylloides",
"29154_superba",
"30686_cyathophylla",
"33413_thamno",
"30556_thamno",
"35236_rex",
"40578_rex",
"35855_rex",
"39618_rex",
"38362_rex",
]
## import toyplot (packaged with toytree)
import toyplot
## plot bars for each K-value (mean of 10 reps)
for kpop in [2, 3, 4, 5, 6]:
table = tables[kpop]
table = table.ix[myorder]
## plot barplot w/ hover
canvas, axes, mark = toyplot.bars(
table,
title=[[i] for i in table.index.tolist()],
width=400,
height=200,
yshow=False,
style={"stroke": toyplot.color.near_black},
)
Explanation: STRUCTURE -- population cluster inference
End of explanation
## conda install treemix -c ipyrad
## group taxa into 'populations'
imap = {
"prz": ["32082_przewalskii", "33588_przewalskii"],
"cys": ["41478_cyathophylloides", "41954_cyathophylloides"],
"cya": ["30686_cyathophylla"],
"sup": ["29154_superba"],
"cup": ["33413_thamno"],
"tha": ["30556_thamno"],
"rck": ["35236_rex"],
"rex": ["35855_rex", "40578_rex"],
"lip": ["39618_rex", "38362_rex"],
}
## optional: loci will be filtered if they do not have data for at
## least N samples in each species. Minimums cannot be <1.
minmap = {
"prz": 2,
"cys": 2,
"cya": 1,
"sup": 1,
"cup": 1,
"tha": 1,
"rck": 1,
"rex": 2,
"lip": 2,
}
## sets a random number seed
import numpy
numpy.random.seed(12349876)
## create a treemix analysis object
tmix = ipa.treemix(
name=min13.name,
data=min13.outfiles.snpsphy,
mapfile=min13.outfiles.snpsmap,
imap=imap,
minmap=minmap,
)
## you can set additional parameter args here
tmix.params.root = "prz"
tmix.params.global_ = 1
## print the full params
tmix.params
## a dictionary for storing treemix objects
tdict = {}
## iterate over values of m
for rep in range(4):
    for mig in range(4):
## create new treemix object copy
name = "mig-{}-rep-{}".format(mig, rep)
tmp = tmix.copy(name)
## set params on new object
tmp.params.m = mig
## run treemix analysis
tmp.run()
## store the treemix object
tdict[name] = tmp
import toyplot
## select a single result
tmp = tdict["mig-1-rep-1"]
## draw the tree similar to the Treemix plotting R code
## this code is rather new and will be expanded in the future.
canvas = toyplot.Canvas(width=350, height=350)
axes = canvas.cartesian(padding=25, margin=75)
axes = tmp.draw(axes)
import toyplot
import numpy as np
## plot many results
canvas = toyplot.Canvas(width=800, height=1200)
idx = 0
for mig in range(4):
for rep in range(4):
tmp = tdict["mig-{}-rep-{}".format(mig, rep)]
ax = canvas.cartesian(grid=(4, 4, idx), padding=25, margin=(25, 50, 100, 25))
ax = tmp.draw(ax)
idx += 1
Explanation: TREEMIX -- ML tree & admixture co-inference
End of explanation
bb = ipa.baba(
data=min4.outfiles.loci,
newick="analysis-raxml/RAxML_bestTree.min13",
)
## check params
bb.params
## generate all tests from the tree where 32082 is p4
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii"],
"p3": ["30556_thamno"],
}
)
## run the tests in parallel
bb.run(ipyclient=ipyclient)
bb.results_table.sort_values(by="Z", ascending=False).head()
## most significant result (more ABBA than BABA)
bb.tests[12]
## the next most signif (more BABA than ABBA)
bb.tests[27]
Explanation: ABBA-BABA admixture inference
End of explanation
## a dictionary mapping sample names to 'species' names
imap = {
"prz": ["32082_przewalskii", "33588_przewalskii"],
"cys": ["41478_cyathophylloides", "41954_cyathophylloides"],
"cya": ["30686_cyathophylla"],
"sup": ["29154_superba"],
"cup": ["33413_thamno"],
"tha": ["30556_thamno"],
"rck": ["35236_rex"],
"rex": ["35855_rex", "40578_rex"],
"lip": ["39618_rex", "38362_rex"],
}
## optional: loci will be filtered if they do not have data for at
## least N samples/individuals in each species.
minmap = {
"prz": 2,
"cys": 2,
"cya": 1,
"sup": 1,
"cup": 1,
"tha": 1,
"rck": 1,
"rex": 2,
"lip": 2,
}
## a tree hypothesis (guidetree) (here based on tetrad results)
## for the 'species' we've collapsed samples into.
newick = "((((((rex, lip), rck), tha), cup), (cys, (cya, sup))), prz);"
## initiate a bpp object
b = ipa.bpp(
name=min4.name,
data=min4.outfiles.alleles,
imap=imap,
minmap=minmap,
guidetree=newick,
)
## set some optional params, leaving others at their defaults
## you should definitely run these longer for real analyses
b.params.burnin = 1000
b.params.nsample = 2000
b.params.sampfreq = 20
## print params
b.params
## set some optional filters leaving others at their defaults
b.filters.maxloci=100
b.filters.minsnps=4
## print filters
b.filters
Explanation: BPP -- species tree inference/delim
End of explanation
b.write_bpp_files()
b.run()
## wait for all ipyclient jobs to finish
ipyclient.wait()
## check results
## parse the mcmc table with pandas library
import pandas as pd
btable = pd.read_csv(b.files.mcmcfiles[0], sep="\t", index_col=0)
btable.describe().T
Explanation: run BPP
You can either call 'write_bpp_files()' to write the input files for this data set and then invoke BPP on them yourself, or use the '.run()' command to launch the jobs directly, in parallel on the cluster. If you specify multiple reps then each replicate gets a different random sample of loci and a different random seed.
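When several replicates have been run, each writes its own mcmc trace; a small sketch for summarizing them together (an assumption: b.files.mcmcfiles lists one file per replicate, as the [0] indexing above suggests):
## combine posterior samples from all replicate mcmc files
import pandas as pd
traces = [pd.read_csv(f, sep="\t", index_col=0) for f in b.files.mcmcfiles]
pd.concat(traces).describe().T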
End of explanation |
6,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler
Step1: Below are the lists I created that will help me narrow my search. I created the list called search because the key was only allowed to contain 3 lower case letters. Next I created a list of plain text english to help me filter out unwanted messages.
Step2: Next I create a function that will check if a line of text is plain english, by comparing its components with my english list above.
Step3: Now I begin the search for the key. This computation takes a minute or two because I am searching through all 17500 key possibilities and matching those with all 1200 cipher entries, for a total of about 20 million computations. I print every key that returns a plain text message. In the end I get very lucky, because the program only prints one key.
Step4: Now I know the key is 'god'. I will use that key to decipher the message below. I use the same method to print the message as I did to find the key.
Step5: Now that I know the message I can compute the ASCII sum of the message | Python Code:
ciphertxt = open('cipher.txt', 'r')
cipher = ciphertxt.read().split(',') #Splits the ciphertxt into a list, splits at every ,
cipher = [int(i) for i in cipher]
ciphertxt.close()
Explanation: Project Euler: Problem 59
https://projecteuler.net/problem=59
Each character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase A = 65, asterisk (*) = 42, and lowercase k = 107.
A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, 65 XOR 42 = 107, then 107 XOR 42 = 65.
For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.
Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.
Your task has been made easy, as the encryption key consists of three lower case characters. Using cipher.txt (in this directory), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text.
The following cell shows examples of how to perform XOR in Python and how to go back and forth between characters and integers:
The first step is to read in the cipher.txt and create a list containing each number as a single entry. On line two I searched stackexchange to help me split up the text file and separate the numbers by commas.
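Since that XOR example cell is not reproduced here, a quick sketch of the round trip between characters, ASCII codes, and XOR:
# 'A' is 65 and '*' is 42; XORing twice with the same key restores the original
print(ord('A'))             # 65
print(65 ^ 42)              # 107, and chr(107) is 'k'
print(chr((65 ^ 42) ^ 42))  # back to 'A'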
End of explanation
search = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
english = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z',',','?',"'",'!',';','"','.','(',')','-','1','2','3','4','5','6','7','8','9','0',' ']
Explanation: Below are the lists I created that will help me narrow my search. I created the list called search because the key was only allowed to contain 3 lower case letters. Next I created a list of plain text english to help me filter out unwanted messages.
End of explanation
def is_plain_text(text):
result = True
for letter in text:
if letter not in english:
result = False
break
return result
Explanation: Next I create a function that will check if a line of text is plain english, by comparing its components with my english list above.
End of explanation
for x in search:
for y in search:
for z in search:
message = ""
i = 0 #Counter i allows me to apply the components of key at every third entry of the message
for entry in cipher:
if i == 0 or i % 3 == 0:
message = message + chr(entry^ord(x))
elif i == 1 or (i-1) % 3 == 0:
message = message + chr(entry^ord(y))
elif i == 2 or (i-2) % 3 == 0:
message = message + chr(entry^ord(z))
i = i + 1
if is_plain_text(message) == True:
print("A potential key is: " + x + y + z)
Explanation: Now I begin the search for the key. This computation takes a minute or two because I am searching through all 17500 key possibilities and matching those with all 1200 cipher entries, for a total of about 20 million computations. I print every key that returns a plain text message. In the end I get very lucky, because the program only prints one key.
End of explanation
message = ""
i = 0
for entry in cipher:
if i == 0 or i % 3 == 0:
message = message + chr(entry^ord('g'))
elif i == 1 or (i-1) % 3 == 0:
message = message + chr(entry^ord('o'))
elif i == 2 or (i-2) % 3 == 0:
message = message + chr(entry^ord('d'))
i = i + 1
print(message)
Explanation: Now I know the key is 'god'. I will use that key to decipher the message below. I use the same method to print the message as I did to find the key.
End of explanation
sum = 0
for char in message:
sum = sum + ord(char)
print("The ASCII sum is: " + str(sum))
# This cell will be used for grading, leave it at the end of the notebook.
Explanation: Now that I know the message I can compute the ASCII sum of the message
End of explanation |
6,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Nipype with Amazon Web Services (AWS)
Several groups have been successfully using Nipype on AWS. This procedure
involves setting a temporary cluster using StarCluster and potentially
transferring files to/from S3. The latter is supported by Nipype through
DataSink and S3DataGrabber.
Using DataSink with S3
The DataSink class now supports sending output data directly to an AWS S3
bucket. It does this through the introduction of several input attributes to the
DataSink interface and by parsing the base_directory attribute. This class
uses the boto3 and
botocore Python packages to
interact with AWS. To configure the DataSink to write data to S3, the user must
set the base_directory property to an S3-style filepath.
For example
Step1: With the "s3 | Python Code:
from nipype.interfaces.io import DataSink
ds = DataSink()
ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'
Explanation: Using Nipype with Amazon Web Services (AWS)
Several groups have been successfully using Nipype on AWS. This procedure
involves setting a temporary cluster using StarCluster and potentially
transferring files to/from S3. The latter is supported by Nipype through
DataSink and S3DataGrabber.
Using DataSink with S3
The DataSink class now supports sending output data directly to an AWS S3
bucket. It does this through the introduction of several input attributes to the
DataSink interface and by parsing the base_directory attribute. This class
uses the boto3 and
botocore Python packages to
interact with AWS. To configure the DataSink to write data to S3, the user must
set the base_directory property to an S3-style filepath.
For example:
End of explanation
ds.inputs.creds_path = '/home/neuro/aws_creds/credentials.csv'
ds.inputs.encrypt_bucket_keys = True
ds.inputs.local_copy = '/home/neuro/workflow_outputs/local_backup'
Explanation: With the "s3://" prefix in the path, the DataSink knows that the output
directory to send files is on S3 in the bucket "mybucket". "path/to/output/dir"
is the relative directory path within the bucket "mybucket" where output data
will be uploaded to (Note: if the relative path specified contains folders that
don’t exist in the bucket, the DataSink will create them). The DataSink treats
the S3 base directory exactly as it would a local directory, maintaining support
for containers, substitutions, subfolders, "." notation, etc. to route output
data appropriately.
There are four new attributes introduced with S3-compatibility: creds_path,
encrypt_bucket_keys, local_copy, and bucket.
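The bucket attribute is not shown in the snippet above; a minimal sketch, assuming the DataSink accepts a pre-built boto3 Bucket object through that input:
import boto3
ds.inputs.bucket = boto3.resource('s3').Bucket('mybucket')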
End of explanation |
6,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate some validation videos random, download them from the server and then use them to visualize the results.
Step1: Load the trained model with its weigths
Step2: Extract the predictions for each video and print the scoring
Step3: Print the global classification results
Step4: Now show the temporal prediction for the activity happening at the video. | Python Code:
import random
import os
import numpy as np
from work.dataset.activitynet import ActivityNetDataset
dataset = ActivityNetDataset(
videos_path='../dataset/videos.json',
labels_path='../dataset/labels.txt'
)
videos = dataset.get_subset_videos('validation')
videos = random.sample(videos, 8)
examples = []
for v in videos:
file_dir = os.path.join('../downloads/features/', v.features_file_name)
if not os.path.isfile(file_dir):
os.system('scp imatge:~/work/datasets/ActivityNet/v1.3/features/{} ../downloads/features/'.format(v.features_file_name))
features = np.load(file_dir)
examples.append((v, features))
Explanation: Generate some validation videos at random, download them from the server, and then use them to visualize the results.
End of explanation
from keras.layers import Input, BatchNormalization, LSTM, TimeDistributed, Dense, merge
from keras.models import Model
input_features_a = Input(batch_shape=(1, 1, 4096,), name='features')
input_normalized_a = BatchNormalization(mode=1)(input_features_a)
lstm1_a = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(input_normalized_a)
lstm2_a = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1_a)
output_a = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2_a)
model_def = Model(input=input_features_a, output=output_a)
model_def.load_weights('../work/scripts/training/lstm_activity_classification/model_snapshot/lstm_activity_classification_02_e100.hdf5')
model_def.summary()
model_def.compile(loss='categorical_crossentropy', optimizer='rmsprop')
input_features = Input(batch_shape=(1, 1, 4096,), name='features')
input_normalized = BatchNormalization()(input_features)
previous_output = Input(batch_shape=(1, 1, 202,), name='prev_output')
merging = merge([input_normalized, previous_output], mode='concat', concat_axis=-1)
lstm1 = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(merging)
lstm2 = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1)
output = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2)
model_feed = Model(input=[input_features, previous_output], output=output)
model_feed.load_weights('../work/scripts/training/lstm_activity_classification_feedback/model_snapshot/lstm_activity_classification_feedback_02_e100.hdf5')
model_feed.summary()
model_feed.compile(loss='categorical_crossentropy', optimizer='rmsprop')
Explanation: Load the trained model with its weights
End of explanation
predictions_def = []
for v, features in examples:
nb_instances = features.shape[0]
X = features.reshape((nb_instances, 1, 4096))
model_def.reset_states()
prediction = model_def.predict(X, batch_size=1)
prediction = prediction.reshape(nb_instances, 201)
class_prediction = np.argmax(prediction, axis=1)
predictions_def.append((v, prediction, class_prediction))
predictions_feed = []
for v, features in examples:
nb_instances = features.shape[0]
X = features.reshape((nb_instances, 1, 4096))
prediction = np.zeros((nb_instances, 201))
X_prev_output = np.zeros((1, 202))
X_prev_output[0,201] = 1
model_feed.reset_states()
for i in range(nb_instances):
X_features = X[i,:,:].reshape(1, 1, 4096)
X_prev_output = X_prev_output.reshape(1, 1, 202)
next_output = model_feed.predict_on_batch(
{'features': X_features,
'prev_output': X_prev_output}
)
prediction[i,:] = next_output[0,:]
X_prev_output = np.zeros((1, 202))
X_prev_output[0,:201] = next_output[0,:]
class_prediction = np.argmax(prediction, axis=1)
predictions_feed.append((v, prediction, class_prediction))
Explanation: Extract the predictions for each video and print the scoring
End of explanation
from IPython.display import YouTubeVideo, display
for prediction_def, prediction_feed in zip(predictions_def, predictions_feed):
v, prediction_d, class_prediction_d = prediction_def
_, prediction_f, class_prediction_f = prediction_feed
print('Video ID: {}\t\tMain Activity: {}'.format(v.video_id, v.get_activity()))
labels = ('Default Model', 'Model with Feedback')
for prediction, label in zip((prediction_d, prediction_f), labels):
print(label)
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
vid = YouTubeVideo(v.video_id)
display(vid)
print('\n')
Explanation: Print the global classification results
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
normalize = matplotlib.colors.Normalize(vmin=0, vmax=201)
for prediction_d, prediction_f in zip(predictions_def, predictions_feed):
v, _, class_prediction_d = prediction_d
_, _, class_prediction_f = prediction_f
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Ground Truth')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(class_prediction_d, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction Default Model')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(class_prediction_f, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction Model with Feedback')
plt.show()
print('\n')
normalize = matplotlib.colors.Normalize(vmin=0, vmax=1)
for prediction_def, prediction_feed in zip(predictions_def, predictions_feed):
v, prediction_d, class_prediction_d = prediction_def
_, prediction_f, class_prediction_f = prediction_feed
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
output_index = dataset.get_output_index(v.label)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
labels = ('Default Model', 'Model with Feedback')
for prediction, label in zip((prediction_d, prediction_f), labels):
print(label)
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth/output_index, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Ground Truth')
plt.show()
# print only the positions that predicted the global ground truth category
temp_d = np.zeros((nb_instances))
temp_d[class_prediction_d==output_index] = 1
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(temp_d, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction of the ground truth class (Default model)')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(prediction_d[:,output_index], (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Probability for ground truth (Default model)')
plt.show()
# print only the positions that predicted the global ground truth category
temp_f = np.zeros((nb_instances))
temp_f[class_prediction_f==output_index] = 1
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(temp_f, (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Prediction of the ground truth class (Feedback model)')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(prediction_f[:,output_index], (2, nb_instances)), norm=normalize, interpolation='nearest')
plt.title('Probability for ground truth (Feedback model)')
plt.show()
print('\n')
Explanation: Now show the temporal prediction for the activity happening in the video.
End of explanation |
6,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JVM Thread Dump Analysis
Goal
Step1: Get Data
Dumps generated every 2 minutes and saved in one single file. Period
Step2: Thread State by Date
Problem
Step3: Average of Threads by Hour
Problem
Step4: Average of Threads by Day
Problem
Step5: Threads in TIMED_WAITING (PARKING) by Hour Each Day
Problem
Step6: Principal Component Analysis
Problem
Step7: Unsupervised Clustering
Problem
Step8: Visualizing Clustering
Problem
Step9: Comparing with Day of Week
Problem | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
import pandas as pd
plt.style.use('seaborn')
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from mpl_toolkits.mplot3d import Axes3D
import jvmthreadparser.parser as jtp
Explanation: JVM Thread Dump Analysis
Goal: Get insights about thread states in a production environment.
Inspired by: https://github.com/jakevdp/JupyterWorkflow
End of explanation
dump = jtp.open_text('threads4.txt', load_thread_content = False)
dump.head()
Explanation: Get Data
Dumps generated every 2 minutes and saved in one single file. Period: May/ 2017.
End of explanation
dump['Threads'] = 1
threads_by_state = dump.groupby(['DateTime','State']).count().unstack().fillna(0)
threads_by_state.columns = threads_by_state.columns.droplevel()
threads_by_state.head()
ax = threads_by_state.plot(figsize=(14,12), cmap='Paired', title = 'Thread State by Date')
ax.set_xlabel('Day of Month')
ax.set_ylabel('Number of Threads');
Explanation: Thread State by Date
Problem: How many threads exist in each state?
Goal: Reshape the data using the date as index and states as columns.
How:
End of explanation
ax = threads_by_state.groupby(threads_by_state.index.hour).mean().plot(figsize=(14,12), cmap='Paired', title='Threads by Hour')
ax.set_xlabel('Hour of the Day (0-23)')
ax.set_ylabel('Mean(Number of Threads)');
Explanation: Average of Threads by Hour
Problem: Are there any peak hour for thread states?
Goal: Plot thread states by hour (0-24).
How:
End of explanation
ax = threads_by_state.resample('D').mean().plot(figsize=(14,12), cmap = 'Paired')
ax.set_xlabel('Day of Month')
ax.set_ylabel('Mean(Number of Threads)');
Explanation: Average of Threads by Day
Problem: Are there any peak day for thread states?
Goal: Plot thread states by day (2017-05-01 / 2017-05-29).
How:
End of explanation
by_hour = threads_by_state.resample('H').mean()
pivoted = by_hour.pivot_table("TIMED_WAITING (PARKING)", index = by_hour.index.time, columns = by_hour.index.date).fillna(0)
ax = pivoted.plot(legend=False, alpha = 0.3, color = 'black', title = 'Day Patterns of TIMED_WAITING (PARKING) Threads by Time', figsize=(14,12))
ax.set_xlabel('Time')
ax.set_ylabel('Number of Threads');
Explanation: Threads in TIMED_WAITING (PARKING) by Hour Each Day
Problem: Is there any pattern in the TIMED_WAITING (PARKING) threads?
Goal: Plot the TIMED_WAITING (PARKING) threads, one line per day, so we can try to visualize patterns in the data.
How:
End of explanation
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(3, svd_solver='full').fit_transform(X)
X2.shape
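# (sketch) check how much of the variance the 3 retained components explain
pca = PCA(3, svd_solver='full').fit(X)
print(pca.explained_variance_ratio_)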
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X2[:, 0], X2[:, 1], X2[:, 2])
ax.set_title('PCA Dimensionality Reduction (3 Dimensions)')
ax.set_xlabel('Principal Component 1')
ax.set_ylabel('Principal Component 2')
ax.set_zlabel('Principal Component 3');
Explanation: Principal Component Analysis
Problem: Can we plot clustering patterns?
Goal: Use PCA to reduce data dimensionality to 3 dimensions.
How:
End of explanation
gmm = GaussianMixture(3).fit(X)
labels = gmm.predict(X)
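# (sketch) instead of fixing 3 clusters, compare BIC scores for a few choices
print([GaussianMixture(k).fit(X).bic(X) for k in range(1, 7)])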
fig = plt.figure(figsize=(14,10))
ax = fig.add_subplot(111, projection='3d')
cMap = ListedColormap(['green', 'blue','red'])
p = ax.scatter(X2[:, 0], X2[:, 1], X2[:, 2], c=labels, cmap=cMap)
ax.set_title('Unsupervised Clustering (3 Clusters with Colors)')
ax.set_xlabel('Principal Component 1')
ax.set_ylabel('Principal Component 2')
ax.set_zlabel('Principal Component 3');
colorbar = fig.colorbar(p, ticks=np.linspace(0,2,3))
colorbar.set_label('Cluster')
Explanation: Unsupervised Clustering
Problem: Can we color the points to identify each cluster?
Goal: Use GaussianMixture to identify clusters.
How:
End of explanation
fig, ax = plt.subplots(1, 3, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.4, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.4, ax=ax[1]);
pivoted.T[labels == 2].T.plot(legend=False, alpha=0.4, ax=ax[2]);
ax[0].set_title('Cluster 0')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Number of Threads')
ax[1].set_title('Cluster 1');
ax[1].set_xlabel('Time')
ax[2].set_title('Cluster 2');
ax[2].set_xlabel('Time')
Explanation: Visualizing Clustering
Problem: How do we identify which threads fall into each cluster?
Goal: Generate plots showing threads in each cluster.
How:
End of explanation
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
fig = plt.figure(figsize=(14, 10))
ax = fig.add_subplot(111, projection='3d')
p = ax.scatter(X2[:, 0], X2[:, 1], X2[:, 2], c=dayofweek, cmap='rainbow')
ax.set_title('Unsupervised Clustering (3 Clusters) Colored by Weekday')
ax.set_xlabel('Principal Component 1')
ax.set_ylabel('Principal Component 2')
ax.set_zlabel('Principal Component 3');
colorbar = fig.colorbar(p)
colorbar.set_label('Weekday (0=Monday, Sunday=6)')
Explanation: Comparing with Day of Week
Problem: Can weekday explain this variability?
Goal: Plot clusters using one color per weekday (Monday=0, Sunday=6).
How:
End of explanation |
6,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
因为需要全市场回测所以本章无法使用沙盒数据,《量化交易之路》中的原始示例使用的是美股市场,这里的示例改为使用A股市场。
本节建议对照阅读abu量化文档第20-23节内容
本节的基础是在abu量化文档中第20节内容完成运行后有A股训练集交易和A股测试集交易数据之后
abu量化系统github地址 (您的star是我的动力!)
abu量化文档教程ipython notebook
第11章 量化系统-机器学习•ABU
Step1: 11.1 搜索引擎与量化交易
本节建议对照abu量化文档第15节内容进行阅读
Step2: 11.2 主裁
11.2.1 角度主裁
请对照阅读ABU量化系统使用文档 :第15节 中相关内容
Step3: 耗时操作,快的电脑大概几分钟,具体根据电脑性能,cpu数量,启动多进程进行训练:
Step4: 由于不是同一份沙盒数据,所以下面结果内容与书中分析内容不符,需要按照实际情况分析
Step5: 交易快照文件保存在~/abu/data/save_png/中, 下面打开对应目录:save_png
Step6: 11.2.2 使用全局最优对分类簇集合进行筛选
Step7: 11.2.3 跳空主裁
请对照阅读ABU量化系统使用文档 :第16节 UMP主裁交易决策 中相关内容
Step8: 下面这个的这个拦截特征比较明显,两天前才发生向上跳空的交易:
Step9: 11.2.4 价格主裁
请对照阅读ABU量化系统使用文档 :第16节 UMP主裁交易决策 中相关内容
Step10: 11.2.5 波动主裁
请对照阅读ABU量化系统使用文档 :第16节 UMP主裁交易决策 中相关内容
Step11: 11.2.6 验证主裁是否称职
请对照阅读ABU量化系统使用文档 :第21节 A股UMP决策 中相关内容 | Python Code:
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak
from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd
from abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu, AbuUmpMainMul
from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, feature, AbuFeatureDegExtend
from abupy import AbuUmpEdgeDeg, AbuUmpEdgePrice, AbuUmpEdgeWave, AbuUmpEdgeFull, AbuUmpEdgeMul, AbuUmpEegeDegExtend
from abupy import AbuUmpMainDegExtend, ump, Parallel, delayed, AbuMulPidProgress, AbuProgress
import abupy
import numpy as np
# disable the sandbox (example) data
abupy.env.disable_example_env_ipython()
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
abu_result_tuple_train = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='train_cn')
abu_result_tuple_test = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_cn')
ABuProgress.clear_output()
print('训练集结果:')
metrics_train = AbuMetricsBase.show_general(*abu_result_tuple_train, only_show_returns=True)
print('测试集结果:')
metrics_test = AbuMetricsBase.show_general(*abu_result_tuple_test, only_show_returns=True)
Explanation: Because a full-market backtest is required, this chapter cannot use the sandbox data. The original examples in "The Road of Quantitative Trading" use the US stock market; here they are switched to the China A-share market.
It is recommended to read sections 20-23 of the abu quant documentation alongside this section.
This section assumes that section 20 of the abu documentation has already been run, so that A-share training-set and test-set trade data are available.
abu quant system GitHub repository (your star is my motivation!)
abu quant documentation tutorial ipython notebook
Chapter 11 Quantitative system - machine learning • ABU
End of explanation
orders_pd_train = abu_result_tuple_train.orders_pd
# pick the first 20 losing trades and draw their trade snapshots
# this is only an example; in practice select trades as needed, e.g. by rank or other criteria
plot_simple = orders_pd_train[orders_pd_train.profit_cg < 0][:20]
# save=True writes the images locally, under ~/abu/data/save_png/
ABuMarketDrawing.plot_candle_from_order(plot_simple, save=True)
Explanation: 11.1 Search engines and quantitative trading
It is recommended to read section 15 of the abu documentation alongside this section.
End of explanation
from abupy import AbuUmpMainDeg
# the argument is orders_pd
ump_deg = AbuUmpMainDeg(orders_pd_train)
# df is the DataFrame-like table produced earlier by ump_main_make_xy, as in Table 11-1
ump_deg.fiter.df.head()
Explanation: 11.2 The main referees
11.2.1 The angle (deg) main referee
See the related content in section 15 of the ABU user documentation.
End of explanation
_ = ump_deg.fit(brust_min=False)
ump_deg.cprs
max_failed_cluster = ump_deg.cprs.loc[ump_deg.cprs.lrs.argmax()]
print('失败概率最大的分类簇{0}, 失败率为{1:.2f}%, 簇交易总数{2}, ' \
'簇平均交易获利{3:.2f}%'.format(ump_deg.cprs.lrs.argmax(),
max_failed_cluster.lrs * 100,
max_failed_cluster.lcs,
max_failed_cluster.lms * 100))
cpt = int(ump_deg.cprs.lrs.argmax().split('_')[0])
print(cpt)
ump_deg.show_parse_rt(ump_deg.rts[cpt])
max_failed_cluster_orders = ump_deg.nts[ump_deg.cprs.lrs.argmax()]
# 表11-3所示
max_failed_cluster_orders
Explanation: This is a time-consuming operation -- a few minutes on a fast machine, depending on performance and CPU count; training is launched with multiple processes:
End of explanation
from abupy import ml
ml.show_orders_hist(max_failed_cluster_orders, ['buy_deg_ang21', 'buy_deg_ang42', 'buy_deg_ang60','buy_deg_ang252'])
print('分类簇中deg_ang60平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang60.mean()))
print('分类簇中deg_ang21平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang21.mean()))
print('分类簇中deg_ang42平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang42.mean()))
print('分类簇中deg_ang252平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang252.mean()))
ml.show_orders_hist(orders_pd_train, ['buy_deg_ang21', 'buy_deg_ang42', 'buy_deg_ang60', 'buy_deg_ang252'])
print('训练数据集中deg_ang60平均值为{0:.2f}'.format(
orders_pd_train.buy_deg_ang60.mean()))
print('训练数据集中deg_ang21平均值为{0:.2f}'.format(
orders_pd_train.buy_deg_ang21.mean()))
print('训练数据集中deg_ang42平均值为{0:.2f}'.format(
orders_pd_train.buy_deg_ang42.mean()))
print('训练数据集中deg_ang252平均值为{0:.2f}'.format(
orders_pd_train.buy_deg_ang252.mean()))
progress = AbuProgress(len(max_failed_cluster_orders), 0, label='plot snap')
for ind in np.arange(0, len(max_failed_cluster_orders)):
progress.show(ind)
order_ind = int(max_failed_cluster_orders.iloc[ind].ind)
# 交易快照文件保存在~/abu/data/save_png/中
ABuMarketDrawing.plot_candle_from_order(ump_deg.fiter.order_has_ret.iloc[order_ind], save=True)
Explanation: Since this is not the same sandbox data, the results below will not match the analysis in the book and should be interpreted on their own merits:
For example, in the features below the 42-day and 60-day deg values are unusually large, and the 21-day and 252-day values are also well above the training-set averages:
End of explanation
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
Explanation: The trade snapshot images are saved under ~/abu/data/save_png/; open that directory below: save_png
End of explanation
brust_min = ump_deg.brust_min()
brust_min
llps = ump_deg.cprs[(ump_deg.cprs['lps'] <= brust_min[0]) & (ump_deg.cprs['lms'] <= brust_min[1] )& (ump_deg.cprs['lrs'] >=brust_min[2])]
llps
ump_deg.choose_cprs_component(llps)
ump_deg.dump_clf(llps)
Explanation: 11.2.2 Filtering the set of classification clusters with a global optimum
End of explanation
from abupy import AbuUmpMainJump
# time-consuming: roughly ten-plus minutes, depending on machine performance and CPU count
ump_jump = AbuUmpMainJump.ump_main_clf_dump(orders_pd_train, save_order=False)
ump_jump.fiter.df.head()
Explanation: 11.2.3 The gap (jump) main referee
See the related content in section 16 of the ABU user documentation, "UMP main-referee trade decisions".
End of explanation
print('失败概率最大的分类簇{0}'.format(ump_jump.cprs.lrs.argmax()))
# 拿出跳空失败概率最大的分类簇
max_failed_cluster_orders = ump_jump.nts[ump_jump.cprs.lrs.argmax()]
# 显示失败概率最大的分类簇,表11-6所示
max_failed_cluster_orders
ml.show_orders_hist(max_failed_cluster_orders, feature_columns=['buy_diff_up_days', 'buy_jump_up_power',
'buy_diff_down_days', 'buy_jump_down_power'])
print('分类簇中jump_up_power平均值为{0:.2f}, 向上跳空平均天数{1:.2f}'.format(
max_failed_cluster_orders.buy_jump_up_power.mean(), max_failed_cluster_orders.buy_diff_up_days.mean()))
print('分类簇中jump_down_power平均值为{0:.2f}, 向下跳空平均天数{1:.2f}'.format(
max_failed_cluster_orders.buy_jump_down_power.mean(), max_failed_cluster_orders.buy_diff_down_days.mean()))
print('训练数据集中jump_up_power平均值为{0:.2f},向上跳空平均天数{1:.2f}'.format(
orders_pd_train.buy_jump_up_power.mean(), orders_pd_train.buy_diff_up_days.mean()))
print('训练数据集中jump_down_power平均值为{0:.2f}, 向下跳空平均天数{1:.2f}'.format(
orders_pd_train.buy_jump_down_power.mean(), orders_pd_train.buy_diff_down_days.mean()))
Explanation: The interception feature of this next one is fairly clear -- an upward gap occurred only two days before the trade:
End of explanation
from abupy import AbuUmpMainPrice
ump_price = AbuUmpMainPrice.ump_main_clf_dump(orders_pd_train, save_order=False)
ump_price.fiter.df.head()
print('失败概率最大的分类簇{0}'.format(ump_price.cprs.lrs.argmax()))
# 拿出价格失败概率最大的分类簇
max_failed_cluster_orders = ump_price.nts[ump_price.cprs.lrs.argmax()]
# 表11-8所示
max_failed_cluster_orders
Explanation: 11.2.4 The price main referee
See the related content in section 16 of the ABU user documentation, "UMP main-referee trade decisions".
End of explanation
from abupy import AbuUmpMainWave
ump_wave = AbuUmpMainWave.ump_main_clf_dump(orders_pd_train, save_order=False)
ump_wave.fiter.df.head()
print('失败概率最大的分类簇{0}'.format(ump_wave.cprs.lrs.argmax()))
# 拿出波动特征失败概率最大的分类簇
max_failed_cluster_orders = ump_wave.nts[ump_wave.cprs.lrs.argmax()]
# 表11-10所示
max_failed_cluster_orders
ml.show_orders_hist(max_failed_cluster_orders, feature_columns=['buy_wave_score1', 'buy_wave_score3'])
print('分类簇中wave_score1平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_wave_score1.mean()))
print('分类簇中wave_score3平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_wave_score3.mean()))
ml.show_orders_hist(orders_pd_train, feature_columns=['buy_wave_score1', 'buy_wave_score1'])
print('训练数据集中wave_score1平均值为{0:.2f}'.format(
orders_pd_train.buy_wave_score1.mean()))
print('训练数据集中wave_score3平均值为{0:.2f}'.format(
orders_pd_train.buy_wave_score1.mean()))
Explanation: 11.2.5 The volatility (wave) main referee
See the related content in section 16 of the ABU user documentation, "UMP main-referee trade decisions".
End of explanation
# keep only the orders that have a trade result (order_has_result)
order_has_result = abu_result_tuple_test.orders_pd[abu_result_tuple_test.orders_pd.result != 0]
ump_wave.best_hit_cnt_info(ump_wave.llps)
from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave
ump_deg = AbuUmpMainDeg(predict=True)
ump_jump = AbuUmpMainJump(predict=True)
ump_price = AbuUmpMainPrice(predict=True)
ump_wave = AbuUmpMainWave(predict=True)
def apply_ml_features_ump(order, predicter, progress, need_hit_cnt):
if not isinstance(order.ml_features, dict):
import ast
# 低版本pandas dict对象取出来会成为str
ml_features = ast.literal_eval(order.ml_features)
else:
ml_features = order.ml_features
progress.show()
    # pass the order's buy-moment features to each UMP main referee so it can decide whether to block the trade
return predicter.predict_kwargs(need_hit_cnt=need_hit_cnt, **ml_features)
def pararllel_func(ump, ump_name):
with AbuMulPidProgress(len(order_has_result), '{} complete'.format(ump_name)) as progress:
# 启动多进程进度条,对order_has_result进行apply
ump_result = order_has_result.apply(apply_ml_features_ump, axis=1, args=(ump, progress, 2,))
return ump_name, ump_result
# run the four main referees in parallel, one process per referee for its blocking decisions
parallel = Parallel(
n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')
out = parallel(delayed(pararllel_func)(ump, ump_name)
for ump, ump_name in zip([ump_deg, ump_jump, ump_price, ump_wave],
['ump_deg', 'ump_jump', 'ump_price', 'ump_wave']))
# gather the blocking decisions from every referee process
for sub_out in out:
order_has_result[sub_out[0]] = sub_out[1]
block_pd = order_has_result.filter(regex='^ump_*')
# sum the decisions of all the main referees
block_pd['sum_bk'] = block_pd.sum(axis=1)
block_pd['result'] = order_has_result['result']
# any trade with at least one blocking vote is intercepted
block_pd = block_pd[block_pd.sum_bk > 0]
print('四个裁判整体拦截正确率{:.2f}%'.format(block_pd[block_pd.result == -1].result.count() / block_pd.result.count() * 100))
block_pd.tail()
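# sub_ump_show is not defined in this excerpt; a minimal reconstruction (an assumption,
# mirroring the overall-accuracy calculation above): for one referee column, keep the
# trades that referee voted to block and report how often they really failed (result == -1)
def sub_ump_show(ump_name):
    sub_block = order_has_result[order_has_result[ump_name] == 1]
    blocked = sub_block.result.count()
    if blocked == 0:
        return 0.0, 0
    correct = sub_block[sub_block.result == -1].result.count()
    return correct / blocked * 100, blocked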
print('Angle referee interception accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_deg')))
print('Jump referee interception accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_jump')))
print('Wave referee interception accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_wave')))
print('Price referee interception accuracy {:.2f}%, trades blocked {}'.format(*sub_ump_show('ump_price')))
Explanation: 11.2.6 Verifying whether the main referees are competent
See the related content in section 21 of the ABU user documentation, "A-share UMP decisions".
End of explanation |
6,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RNN Sentiment Classifier
In the previous lab, you built a tweet sentiment classifier based on Bag-Of-Words features. Now we ask you to improve this model by representing it as a sequence of words.
Step 1
Step1: Step 2
Step2: Step 3 | Python Code:
import tensorflow as tf
import cPickle as pickle
from collections import defaultdict
import re, random
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
#Read data and do preprocessing
def read_data(fn):
with open(fn) as f:
data = pickle.load(f)
#Clean the text
new_data = []
pattern = re.compile('[\W_]+')
for text,label in data:
text = text.strip("\r\n ").split()
x = []
for word in text:
word = pattern.sub('', word)
word = word.lower()
if 0 < len(word) < 20:
x.append(word)
new_data.append((' '.join(x),label))
return new_data
train = read_data("data/train.p")
print train[0:10]
Explanation: RNN Sentiment Classifier
In the previous lab, you built a tweet sentiment classifier based on Bag-Of-Words features. Now we ask you to improve this model by representing it as a sequence of words.
Step 1: Input Preprocessing
Run read_data() below to read training data, normalizing the text.
End of explanation
train_x, train_y = zip(*train)
vectorizer = CountVectorizer(train_x, min_df=0.001)
vectorizer.fit(train_x)
vocab = vectorizer.vocabulary_
UNK_ID = len(vocab)
PAD_ID = len(vocab) + 1
word2id = lambda w:vocab[w] if w in vocab else UNK_ID
train_x = [[word2id(w) for w in x.split()] for x in train_x]
train_data = zip(train_x, train_y)
print train_data[0:10]
Explanation: Step 2: Build a Vocabulary
Here we will use sklearn's CountVectorizer to automatically build a vocabulary over the training set. Infrequent words are pruned to make our life easier. Here we have two special tokens: UNK_ID for unknown words and PAD_ID for special token <PAD> that is used to pad sentences to the same length.
End of explanation
import math
#build RNN model
batch_size = 20
hidden_size = 100
vocab_size = len(vocab) + 2
def lookup_table(input_, vocab_size, output_size, name):
with tf.variable_scope(name):
embedding = tf.get_variable("embedding", [vocab_size, output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(output_size)))
return tf.nn.embedding_lookup(embedding, input_)
def linear(input_, output_size, name, init_bias=0.0):
shape = input_.get_shape().as_list()
with tf.variable_scope(name):
W = tf.get_variable("Matrix", [shape[-1], output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(shape[-1])))
if init_bias is None:
return tf.matmul(input_, W)
with tf.variable_scope(name):
b = tf.get_variable("bias", [output_size], initializer=tf.constant_initializer(init_bias))
return tf.matmul(input_, W) + b
session = tf.Session()
tweets = tf.placeholder(tf.int32, [batch_size, None])
labels = tf.placeholder(tf.float32, [batch_size])
embedding = lookup_table(tweets, vocab_size, hidden_size, name="word_embedding")
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
init_state = lstm_cell.zero_state(batch_size, tf.float32)
_, final_state = tf.nn.dynamic_rnn(lstm_cell, embedding, initial_state=init_state)
sentiment = linear(final_state[1], 1, name="output")
sentiment = tf.squeeze(sentiment, [1])
loss = tf.nn.sigmoid_cross_entropy_with_logits(sentiment, labels)
loss = tf.reduce_mean(loss)
prediction = tf.to_float(tf.greater_equal(sentiment, 0.0))  # sentiment holds logits, so threshold at 0 (probability 0.5)
pred_err = tf.to_float(tf.not_equal(prediction, labels))
pred_err = tf.reduce_sum(pred_err)
optimizer = tf.train.AdamOptimizer().minimize(loss)
tf.global_variables_initializer().run(session=session)
saver = tf.train.Saver()
random.shuffle(train_data)
err_rate = 0.0
for step in xrange(0, len(train_data), batch_size):
batch = train_data[step:step+batch_size]
batch_x, batch_y = zip(*batch)
batch_x = list(batch_x)
if len(batch_x) != batch_size:
continue
max_len = max([len(x) for x in batch_x])
for i in xrange(batch_size):
len_x = len(batch_x[i])
batch_x[i] = [PAD_ID] * (max_len - len_x) + batch_x[i]
batch_x = np.array(batch_x, dtype=np.int32)
batch_y = np.array(batch_y, dtype=np.float32)
feed_map = {tweets:batch_x, labels:batch_y}
_, batch_err = session.run([optimizer, pred_err], feed_dict=feed_map)
err_rate += batch_err
if step % 1000 == 0 and step > 0:
print err_rate / step
Explanation: Step 3: Build an LSTM Encoder
A classifier requires a fixed-size input feature vector, while sentences have different lengths. Thus we need a model (called an encoder) to transform a sentence into a fixed-size vector. This can be done with a recurrent neural net (RNN), taking the last hidden state of the LSTM encoder as the feature vector. We can then build a linear (or multi-layer) classifier on top of it.
Step 3.1 Embedding Lookup Layer
At input layer, words are represented by their ID (one-hot vector). Before feeding words to LSTM cell, we need an embedding lookup layer to map words to their word vector, given their ID. You should write a function to perform this operation.
def lookup_table(input_, vocab_size, output_size)
where input_ is a matrix of sentences (sentences are padded to the same length in a batch), vocab_size is the size of vocabulary, output_size the size of word vector. You could use the tensorflow API function embedding-lookup
Step 3.2 LSTM Layer
Now that we have the embedding layer, we can build the LSTM layer on top of it. This takes three steps:
1. Create a LSTM Cell using BasicLSTMCell
2. Let's say you have a lstm_cell object, declare initial state vector by calling lstm_cell.zero_state().
3. Create a RNN Layer using dynamic-rnn, get the final state of it.
Step 3.3 Classification Layer
Now you have a fixed-size vector for sentences, build a classification layer the same as previous. Declare the cross-entropy loss function.
Step 3.4 Training
Now you need to feed the network with training data and optimize it. Note that you have to pad the sentences in a batch to the same length. To do this, put PAD_ID tokens at the front of each sentence so that they all have the same length. (Don't put them at the back, because then it would be harder to take the final hidden state of each sentence!)
The Full Code
End of explanation |
6,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
数据清洗
删除空评论
删除评论中的空格,逗号,波浪线,换行
Step1: 用百度AI分析
```python
from aip import AipNlp
https | Python Code:
df1=data_train.drop(index=(data_train.loc[(data_train["评价内容"].isnull())].index))
df1["评价内容"]=df1["评价内容"].str.replace(" ","").str.replace(",","").str.replace("~","").str.replace("\n","")
df1["分析"]=" "
Explanation: Data cleaning
Remove empty reviews
Strip spaces, commas, tildes, and newlines from the review text
End of explanation
df1["prop"]=""
df1["adj"]=""
df1["sentiment"]=""
import os
import time  # needed for time.sleep below
for index,row in df1.iterrows():
try:
info=client.commentTag(row["评价内容"], options)['items']
df1.at[index,"分析"]=str(info)
df1.at[index,"prop"]=str(info[0]["prop"] if info[0] else "")
df1.at[index,"adj"]=str(info[0]["adj"] if info[0] else "")
df1.at[index,"sentiment"]=str(info[0]["sentiment"] if info[0] else "")
!cls
os.system('clear')
print(index/4421,end = "",flush=True)
except Exception as err:
print(err)
except IndexError:
pass
time.sleep(0.2)
pass
# save the results once everything is done
df1.to_csv(r"C:\Users\admin\Desktop\用户反馈\用户评价分析.csv",index=False,encoding="utf-8")
df1.to_csv(r"C:\Users\admin\Desktop\用户反馈\用户评价分析.csv",index=False,encoding="utf-8")
print(index)
Explanation: Analysis with the Baidu AI NLP service
```python
from aip import AipNlp
https://cloud.baidu.com/doc/NLP/s/2jwvylmuc#%E8%AF%84%E8%AE%BA%E8%A7%82%E7%82%B9%E6%8A%BD%E5%8F%96
# Baidu AI SDK example
# your APP_ID, API key (AK) and secret key (SK)
APP_ID = ''
API_KEY = ''
SECRET_KEY = ''
client = AipNlp(APP_ID, API_KEY, SECRET_KEY)
text = '味道还行,速度快点就好'
# optional parameters, if any
options = {}
options["type"] = 4
# call comment opinion extraction with the options
client.commentTag(text, options)
```
End of explanation |
6,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using Dataflow </h1>
This notebook illustrates
Step1: Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Step2: NOTE
Step3: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step5: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
Step7: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options
Step8: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
Please re-run the above cell if you get a <b>failed status</b> of the job in the dataflow UI console. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: <h1> Preprocessing using Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/courses/machine_learning/deepdive2/end_to_end_ml/labs/preproc.ipynb) -- try to complete that notebook first before reviewing this solution notebook
End of explanation
!pip install --user apache-beam[interactive]==2.24.0
Explanation: Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
import apache_beam as beam
print(beam.__version__)
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
Explanation: NOTE: In the output of the above cell you can safely ignore any WARNINGS (in Yellow text) related to: "hdfscli", "hdfscli-avro", "pbr", "fastavro", "gen_client" and ERRORS (in Red text) related to the related to: "witwidget-gpu", "fairing" etc.
If you get any related errors or warnings mentioned above please rerun the above cell.
Note: Restart your kernel to use updated packages.
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
End of explanation
# Create SQL query using natality data after the year 2000
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
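To see roughly what that split will look like, a quick check on the sampled dataframe (a sketch; the 75/25 rule below mirrors the ABS(MOD(hashmonth, 4)) < 3 condition used later):
in_train = (df.hashmonth.abs() % 4) < 3
print('train fraction in this sample: {:.2f}'.format(in_train.mean()))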
End of explanation
# TODO 1
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP web console to the Dataflow section and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
Please re-run the above cell if you get a <b>failed status</b> of the job in the dataflow UI console.
End of explanation |
6,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load data
Predict the california average house value
Step1: Model with the recommendation of the cheat-sheet
- Based on the Sklearn algorithm cheat-sheet
Step2: Improve the model parametrization
Step3: Check the second cheat sheet recommendation
Step4: Build a decision tree regressor | Python Code:
from sklearn import datasets
all_data = datasets.california_housing.fetch_california_housing()
# Describe dataset
print(all_data.DESCR)
print(all_data.feature_names)
# Print some data lines
print(all_data.data[:10])
print(all_data.target)
#Randomize, normalize and separate train & test
from sklearn.utils import shuffle
X, y = shuffle(all_data.data, all_data.target, random_state=42)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
# Normalize the data
from sklearn.preprocessing import Normalizer
# Define normalizer
...
#Fit & transform over train
...
# transform test
...
Explanation: Load data
Predict the california average house value
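The normalization cell above is left as an exercise; one possible completion, using the Normalizer imported there (a sketch):
normalizer = Normalizer()
X_train = normalizer.fit_transform(X_train)
X_test = normalizer.transform(X_test)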
End of explanation
from sklearn import linear_model
# Select the correct linear model and fit it
reg = linear_model. ...
reg.fit(X_train, y_train)
# Evaluate
from sklearn.metrics import mean_absolute_error
y_test_predict = reg.predict(X_test)
print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict))
print('Variance score: ', reg.score(X_test, y_test))
# Plot a scaterplot real vs predict
import matplotlib.pyplot as plt
%matplotlib inline
# Plot the scatter plot real vs predict
...
# Save model
from sklearn.externals import joblib
joblib.dump(reg, '/tmp/reg_model.pkl')
# Load model
reg_loaded = joblib.load('/tmp/reg_model.pkl')
# View the coeficients
print('Coeficients :', reg_loaded.coef_)
print('Intercept: ', reg_loaded.intercept_ )
Explanation: Model with the recommendation of the cheat-sheet
- Based on the Sklearn algorithm cheat-sheet
End of explanation
# Use the function RidgeCV to select the best alpha using cross validation
#Define the RidgeCV model. Test alpha over the values 0.1, 1 and 10
...
reg.fit(X_train, y_train)
print('Best alpha: ', reg.alpha_)
# Build a model with the recommended alpha
reg = linear_model.Ridge (alpha = ...)
reg.fit(X_train, y_train)
y_test_predict = reg.predict(X_test)
print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict))
print('Variance score: ', reg.score(X_test, y_test))
plt.scatter(y_test, y_test_predict)
Explanation: Improve the model parametrization
End of explanation
from sklearn import svm
# Select the correct model and define it
reg_svr = ...
reg_svr.fit(X_train, y_train)
y_test_predict = reg_svr.predict(X_test)
print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict))
print('Variance score: ', reg_svr.score(X_test, y_test))
plt.scatter(y_test, y_test_predict)
Explanation: Check the second cheat sheet recommendation
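The model blank in the cell above follows the SVR branch of the cheat-sheet; one possible completion (a sketch -- the exact kernel choice is an assumption):
reg_svr = svm.SVR()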
End of explanation
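A possible completion of the SVR placeholder above (a sketch; an RBF-kernel SVR is one option, though it can be slow on ~20k samples, and svm.LinearSVR is a faster alternative):
# Support vector regression with the default RBF kernel
reg_svr = svm.SVR(kernel='rbf')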
# Import the regression tree function
from sklearn import ...
# Define the tree
...
dtree.fit(X_train, y_train)
y_test_predict = dtree.predict(X_test)
print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict))
print('Variance score: ', dtree.score(X_test, y_test))
plt.scatter(y_test, y_test_predict)
# A second model regularized by controlling the depth
# Build a second tree with a max depth of 5
...
...
y_test_predict = dtree2.predict(X_test)
print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict))
print('Variance score: ', dtree2.score(X_test, y_test))
plt.scatter(y_test, y_test_predict)
# Plot the tree
import pydotplus
from IPython.display import Image
dot_data = tree.export_graphviz(dtree2, out_file=None,
feature_names=all_data.feature_names,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())
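A possible completion of the decision tree placeholders in the cell above (a sketch):
from sklearn import tree
# Unconstrained regression tree
dtree = tree.DecisionTreeRegressor()
# Second tree regularized by limiting its depth to 5
dtree2 = tree.DecisionTreeRegressor(max_depth=5)
dtree2.fit(X_train, y_train)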
Explanation: Build a decision tree regressor
End of explanation |
6,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detached Binary
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials
Step3: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
Step4: Now we'll compute synthetics at the times provided using the default options
Step5: Plotting | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
Explanation: Detached Binary: Roche vs Rotstar
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('mesh', times=[0.75], dataset='mesh01')
Explanation: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials:
End of explanation
b['requiv@primary@component'] = 1.8
Explanation: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
End of explanation
b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')
b.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')
Explanation: Now we'll compute synthetics at the times provided using the default options
End of explanation
afig, mplfig = b.plot(model='rochemodel',show=True)
afig, mplfig = b.plot(model='rotstarmodel',show=True)
Explanation: Plotting
End of explanation |
6,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
print(source_text[:200])
print(sentences[0])
a = source_text[:200]
[word for line in a.split('\n') for word in line.split()]
[[word for word in line.split()]+['<EOS>'] for line in a.split('\n')]
#[[target_vocab_to_int.get(word) for word in line] for line in target_text.split('\n')]
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
eos_id = target_vocab_to_int['<EOS>']
# source text (English)
#set_words = set([word for line in source_text.split('\n') for word in line.split()]) # set of all the characters that appear in the data
#source_int_to_vocab = {word_i: word for word_i, word in enumerate(list(set_words))}
#source_vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
# Convert characters to ids
# source_vocab_to_int = [[source_vocab_to_int.get(word) for word in line.split()] for line in source_text.split('\n')]
# target_vocab_to_int = [[target_vocab_to_int.get(word) for word in line.split()] + [eos_id] for line in target_text.split('\n')]
source_ids = [[source_vocab_to_int.get(word) for word in line.split()] for line in source_text.split('\n')]
target_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [eos_id] for line in target_text.split('\n')]
# TODO: Implement Function
return source_ids, target_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
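A toy example of the expected behaviour of text_to_ids (the vocabularies below are made up purely for illustration):
toy_source_vti = {'hi': 4, 'there': 5}
toy_target_vti = {'<EOS>': 1, 'salut': 6, 'toi': 7}
print(text_to_ids('hi there', 'salut toi', toy_source_vti, toy_target_vti))
# -> ([[4, 5]], [[6, 7, 1]])  the <EOS> id is appended to every target sentence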
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, lr, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
# demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))
# TODO: Implement Function
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
# TODO: Implement Function
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
#train_logits = tf.contrib.rnn.DropoutWrapper(train_logits, output_keep_prob=keep_prob)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Decoder Embedding
#dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
#dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decoder RNNs
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
# dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
#training_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return (train_logits, inference_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Apply embedding to the input data for the encoder.
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data for the decoder.
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#enc_embed_target = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, dec_embedding_size)
# Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
train_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings,
enc_state, target_vocab_size, sequence_length, rnn_size, num_layers,
target_vocab_to_int, keep_prob)
return (train_logits, inference_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 100
decoding_embedding_size = 100
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
if batch_i % 10 == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
res = [vocab_to_int.get(word.lower(), vocab_to_int['<UNK>']) for word in sentence.split()]
return res
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
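A toy illustration of the expected behaviour (the vocabulary below is made up purely for illustration):
toy_vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2}
print(sentence_to_seq('He saw Paris', toy_vocab_to_int))
# -> [1, 2, 0]  'paris' is not in the vocabulary, so it maps to the <UNK> id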
# translate_sentence = 'he saw a old yellow truck .'
# translate_sentence = 'I would like to know where all the strange ones go.'
# translate_sentence = 'I am English. I am devorced and 60 years old.'
translate_sentence = 'New York is very cold in December, and it sometimes heavily snows.'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
6,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color="red" size="6"><b>Dynamic programming</b></font>
<font color="blue" size="5"><b>I Naive recursive Fibonacci</b></font>
Step1: What do you notice???
Step2: hmmm.... not great!!!
But that was to be expected: the complexity is exponential.
<font color="blue" size="5"><b>II Recursive Fibonacci + memoization</b></font>
Step3: Right away... that's better, isn't it?
Let's look at the graph
Step4: And with a for loop, without a memo?
Step5: And with a simple while loop, without for?
Step6: And with a Cython function?
We'll try compiling the function to C with Cython to see whether we really get "truly linear" performance
Step7: Of course, even more efficient Cython code can be written, see the documentation.
And with Cython we use C types, hence bounded integers, which is why I use unsigned long long, the largest native C integer type.
-- Section written by Lilian Besson (@Naereen)
Graphs with plenty of comparisons
Step8: <font color="blue" size="5"><b>III Fibonacci: bottom-up approach</b></font> | Python Code:
# 1 1 2 3 5 8 13 21 34 ...
# write the FiboRec function (using a recursive function)
def FiboRec(n) :
if n <=2 : return 1
return FiboRec(n-1)+FiboRec(n-2)
for i in range(1,40) : print(i,FiboRec(i))
Explanation: <font color="red" size="6"><b>Dynamic programming</b></font>
<font color="blue" size="5"><b>I Naive recursive Fibonacci</b></font>
End of explanation
from timeit import timeit
import matplotlib.pyplot as plt
abcisse = [5,10,15,20,25,28,30]
ordonnee=[0 for _ in range(len(abcisse))]
for i in range(len(abcisse)) :
ordonnee[i] = timeit("FiboRec(abcisse[i])", number=10, globals=globals())
# Plot the timings of the naive recursive version, in red
plt.plot(abcisse, ordonnee, "ro-") # in red
plt.show()
plt.close()
Explanation: What do you notice???
End of explanation
# Write the memoized version
def Fibo(n,memo) :
if n in memo.keys() : return memo[n]
#if n<=2 : return 1
memo[n] = Fibo(n-1,memo)+Fibo(n-2,memo)
#memo[n]=1+1
return memo[n]
def Fibomem(n):
memo={1:1,2:1}
return Fibo(n,memo)
for i in range(1,40) : print(i,Fibo(i,{1:1,2:1}))
Explanation: hmmm.... not great!!!
But that was to be expected: the complexity is exponential.
<font color="blue" size="5"><b>II Recursive Fibonacci + memoization</b></font>
End of explanation
from timeit import timeit
import matplotlib.pyplot as plt
import sys
sys.setrecursionlimit(90000)
abcisse = [i for i in range(1000,22000,3000)]
ordonnee=[0 for _ in range(len(abcisse))]
for i in range(len(abcisse)) :
ordonnee[i] = timeit(stmt="Fibomem(abcisse[i])", number=100, globals=globals())
# Plot
plt.plot(abcisse, ordonnee, "ro-") # in red
plt.show()
plt.close()
# Variant:
def Fibomem2(n,memo=None) :
if memo == None : memo={1:1,2:1}
if n in memo.keys() : return memo[n]
memo[n] = Fibomem2(n-1,memo)+Fibomem2(n-2,memo)
return memo[n]
Fibomem2(100)
from timeit import timeit
import matplotlib.pyplot as plt
import sys
sys.setrecursionlimit(90000)
abcisse2 = [i for i in range(1000,15000,3000)]
ordonnee2=[0 for _ in range(len(abcisse2))]
for i in range(len(abcisse2)) :
ordonnee2[i] = timeit(stmt="Fibomem2(abcisse2[i])", number=100, globals=globals())
# Plot
plt.plot(abcisse, ordonnee, "ro-") # in red
plt.plot(abcisse2, ordonnee2, "go-") # in green
plt.show()
plt.close()
Explanation: Right away... that's better, isn't it?
Let's look at the graph:
End of explanation
%%time
def fibo1(n: int) -> int:
fi, fip1 = 0, 1
for i in range(n):
fi, fip1 = fip1, fi + fip1
return fi
for n in range(100, 110):
print(f"fibo1({n}) = {fibo1(n)}")
for n in [10, 100, 1000, 10000]:
print(f"Pour n = {n}, fibo1 a pris ce temps : ", end='', flush=True)
%timeit fibo1(n)
Explanation: And with a for loop, without a memo?
End of explanation
%%time
def fibo2(n: int) -> int:
fi : int = 0
fip1: int = 1
fip2: int = 2
i : int = 1
while i <= n:
fip2 = fi + fip2
i += 1
fi = fip1
fip1 = fip2
return fi
for n in range(100, 110):
print(f"fibo2({n}) = {fibo2(n)}")
for n in [10, 100, 1000, 10000]:
print(f"Pour n = {n}, fibo2 a pris ce temps :", end='', flush=True)
%timeit fibo2(n)
Explanation: And with a simple while loop, without for?
End of explanation
%load_ext cython
%%cython
def fibo3(unsigned long long n):
cdef unsigned long long fi = 0
cdef unsigned long long fip1 = 1
cdef unsigned long long fip2 = 2
cdef unsigned long long i = 1
while i <= n:
fip2 = fi + fip2
i += 1
fi = fip1
fip1 = fip2
return fi
%%time
for n in range(100, 110):
print(f"fibo3({n}) = {fibo3(n)}")
for n in [10, 100, 1000, 10000]:
print(f"Pour n = {n}, fibo3 a pris ce temps :", end='', flush=True)
%timeit fibo3(n)
Explanation: And with a Cython function?
We'll try compiling the function to C with Cython to see whether we really get "truly linear" performance:
End of explanation
globals().update({'ok': 'ok'})
from timeit import timeit
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.setrecursionlimit(90000)
exp_min, exp_max, nb_tailles = 1, 4, 10
number_timeit = 1000
tailles = np.array(np.ceil(np.logspace(exp_min, exp_max, nb_tailles)), dtype=int)
print(f"Tailles = {tailles}")
fonctions = {
"Linéaire (for)": fibo1,
"Linéaire (while)": fibo2,
"Linéaire (Cython)": fibo3,
}
temps = {}
for nom, fonction in fonctions.items():
temps[nom] = np.zeros_like(tailles)
print(f"Estimation des temps pour {nom} ({fonction})...")
for i, taille in enumerate(tailles):
temps[nom][i] = timeit(stmt="fonctions[nom](taille)",
number=number_timeit,
globals={'fonction': fonction, 'taille': taille, 'fonctions': fonctions, 'nom': nom}
)
#print(f"Calling %timeit {fonction}({taille}): ")
#%timeit fonction(taille)
# Plot
plt.figure(figsize=(12, 8), dpi=300)
plt.title("Comparaison de trois fonctions calculant Fibo(n) en complexité linéaire ?")
markers = ['+', 'o', 'd']
for i, (nom, fonction) in enumerate(fonctions.items()):
plt.plot(tailles, temps[nom], label=nom, marker=markers[i % len(fonctions)])
plt.legend()
plt.show()
Explanation: Of course, even more efficient Cython code can be written, see the documentation.
And with Cython we use C types, hence bounded integers, which is why I use unsigned long long, the largest native C integer type.
-- Section written by Lilian Besson (@Naereen)
Graphs with plenty of comparisons
End of explanation
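Note that fibo3 works with 64-bit unsigned integers, so its results silently wrap around modulo 2**64 once the Fibonacci numbers get large, while the pure-Python versions use arbitrary-precision integers. A quick check (a sketch, assuming unsigned long long is 64 bits on your platform):
n = 300
print(fibo2(n))            # exact, Python integers have arbitrary precision
print(fibo3(n))            # wrapped around modulo 2**64
print(fibo2(n) % 2**64)    # matches the Cython result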
# Bottom-up approach
def FiboMonte(n) :
fib=[0 for _ in range(n+1)]
fib[1]=fib[2]=1
for i in range(3,n+1) :
fib[i] =fib[i-1]+fib[i-2]
return fib[n]
from timeit import timeit
import matplotlib.pyplot as plt
abcisse = [i for i in range(1000,22000,3000)]
ordonneeMonte=[0 for _ in range(len(abcisse))]
for i in range(len(abcisse)) :
ordonneeMonte[i] = timeit(stmt="FiboMonte(abcisse[i])", number=50, globals=globals())
# Plot: memoized version in red, bottom-up version in green
plt.plot(abcisse, ordonnee, "ro-") # in red
plt.plot(abcisse, ordonneeMonte, "go-") # in green
plt.show()
plt.close()
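Since the bottom-up table only ever needs its last two entries, the memory can be reduced to O(1) (a small sketch, equivalent to FiboMonte for n >= 1):
def FiboMonteO1(n):
    a, b = 1, 1              # fib(1), fib(2)
    for _ in range(n - 2):   # roll the pair forward
        a, b = b, a + b
    return b
print(FiboMonteO1(10))       # 55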
Explanation: <font color="blue" size="5"><b>III Fibonacci: bottom-up approach</b></font>
End of explanation |
6,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling data involves using observed datapoints to try to make a more general description of patterns that we see. It can be useful to describe the trajectory of a neuron's behavior in time, or to describe the relationship between two variables. Here we will cover the basics of modeling, as well as how we can investigate variability in data and how it affects modeling results.
Step1: Part 1
Plot the spike counts as a function of angle. A small amount of meaningless vertical noise has been added to make visualization easier.
Step2: We'll also plot the mean spiking activity over time below. Calculating the mean across time is already a kind of model. It makes the assumption that the mean is a "good" description of spiking activity at any given time point.
Step3: Bootstrap error bars
The mean is useful, but it also removes a lot of information about the data. In particular, it doesn't tell us anything about how variable the data is. For this, we should calculate error bars.
It is possible to calculate error bars analytically, i.e. with mathematical equations, but this requires making assumptions about the distribution of deviations from the mean. Instead, it is recommended to use bootstrapping if possible. This is a method for computationally calculating error bars in order to avoid making as many assumptions about your data. We'll perform this below.
Step4: As you can see, there is some variability in the calculated mean across bootstrap samples. We can incorporate this variability into our original mean plot by including error bars. We calculate these by taking the 2.5th and 97.5th percentiles of the mean at each timepoint across all of our bootstraps. This is called building a 95% confidence interval.
Step5: Advanced exercise
Do this for all neurons. Do they actually have cosine tuning as indicated by the research?
Part 2
We can also fit a parameterized model to the spike count. In this case we'll use a Poisson distribution where the rate parameter depends on the exponential of the cosine of the angle away from the arm and a scaling parameter.
$$P(n; \theta) = \frac{\lambda(\theta)^n\exp(-\lambda(\theta))}{n!}$$
where
$$\lambda = \exp\left(\alpha+\beta\cos(\theta-\theta_\text{arm})\right).$$
Step6: We'll use the fmin function in Python, which allows us to define an arbitrary "cost" function that is then minimized by tuning model parameters.
Step7: By optimizing this cost function, the model has uncovered the above structure (blue line) in the data. Does it seem to describe the data well? Try using more complicated model functions and see how it affects the result.
Advanced exercise
Is exponential better than linear-threshold?
Part 3
We can also use more powerful machine learning tools to regress onto the spike count.
We'll use Random Forests and Regression models to predict spike count as a function of arm position and velocity. For each of these models we can either regress onto the spike count treating it like a continuous value, or we can predict discrete values for spike count treating it like a classification problem.
We'll fit a number of models, then calculate their ability to predict the values of datapoints they were not trained on. This is called "cross-validating" your model, which is a crucial component of machine learning.
The Random Forest models have the integer parameter n_estimators which increases the complexity of the model. The Regression models have continuous regularization parameters | Python Code:
# Load data and pull out important values
data = si.loadmat('../../data/StevensonV2.mat')
# Only keep x, y dimensions, transpose to (trials, dims)
hand_vel = data['handVel'][:2].T
hand_pos = data['handPos'][:2].T
# (neurons, trials)
spikes = data['spikes']
# Remove all times where speeds are very slow
threshold = .015
is_good = np.where(np.linalg.norm(hand_vel, axis=1)**2 > threshold)[0]
hand_vel = hand_vel[is_good]
hand_pos = hand_pos[is_good]
spikes = spikes[:, is_good]
angle = np.arctan2(hand_vel[:, 0], hand_vel[:, 1])
Explanation: Modeling data involves using observed datapoints to try to make a more general description of patterns that we see. It can be useful to describe the trajectory of a neuron's behavior in time, or to describe the relationship between two variables. Here we will cover the basics of modeling, as well as how we can investigate variability in data and how it affects modeling results.
End of explanation
# Plot Raw Data
nNeuron = 193
fig, ax = plt.subplots()
spikes_noisy = spikes[nNeuron] + 0.75 * np.random.rand(spikes[nNeuron].shape[0])
max_s = spikes[nNeuron].max()+1
ax.plot(angle, spikes_noisy, 'r.')
md.format_plot(ax, max_s)
Explanation: Part 1
Plot the spike counts as a function of angle. A small amount of meaningless vertical noise has been added to make visualization easier.
End of explanation
# Make a simple tuning curve
angles = np.arange(-np.pi, np.pi, np.pi / 8.)
n_spikes = np.zeros(len(angles))
angle_bins = np.digitize(angle, angles)
for ii in range(len(angles)):
mask_angle = angle_bins == (ii + 1)
n_spikes[ii] = np.mean(spikes[nNeuron, mask_angle])
fig, ax = plt.subplots()
ax.plot(angle, spikes_noisy, 'r.')
ax.plot(angles + np.pi / 16., n_spikes, lw=3)
md.format_plot(ax, max_s)
Explanation: We'll also plot the mean spiking activity over time below. Calculating the mean across time is already a kind of model. It makes the assumption that the mean is a "good" description of spiking activity at any given time point.
End of explanation
n_angle_samples = angle.size
n_angles = angles.size
n_boots = 1000
simulations = np.zeros([n_boots, n_angles])
for ii in range(n_boots):
# Take a random sample of angle values
ixs = np.random.randint(0, n_angle_samples, n_angle_samples)
angle_sample = angle[ixs]
spike_sample = spikes[:, ixs]
# Group these samples by bins of angle
angle_bins = np.digitize(angle_sample, angles)
# For each angle, calculate the datapoints corresponding to that angle
# Take the mean spikes for each bin of angles
for jj in range(n_angles):
mask_angle = angle_bins == (jj + 1)
this_spikes = spike_sample[nNeuron, mask_angle]
simulations[ii, jj] = np.mean(this_spikes)
fig, ax = plt.subplots()
_ = ax.plot(angles[:, np.newaxis], simulations.T, color='k', alpha=.01)
_ = ax.plot(angles, simulations.mean(0), color='b', lw=3)
md.format_plot(ax, np.ceil(simulations.max()))
Explanation: Bootstrap error bars
The mean is useful, but it also removes a lot of information about the data. In particular, it doesn't tell us anything about how variable the data is. For this, we should calculate error bars.
It is possible to calculate error bars analytically, i.e. with mathematical equations, but this requires making assumptions about the distribution of deviations from the mean. Instead, it is recommended to use bootstrapping if possible. This is a method for computationally calculating error bars in order to avoid making as many assumptions about your data. We'll perform this below.
End of explanation
# Plot data + error bars
clo, chi = np.percentile(simulations, [2.5, 97.5], axis=0)
fig, ax = plt.subplots()
ax.plot(angle, spikes_noisy, 'r.', zorder=-1)
ax.errorbar(angles, n_spikes, yerr=[n_spikes-clo, chi-n_spikes], lw=3)
md.format_plot(ax, max_s)
Explanation: As you can see, there is some variability in the calculated mean across bootstrap samples. We can incorporate this variability into our original mean plot by including error bars. We calculate these by taking the 2.5th and 97.5th percentiles of the mean at each timepoint across all of our bootstraps. This is called building a 95% confidence interval.
End of explanation
# This package allows us to perform optimizations
from scipy import optimize as opt
Explanation: Advanced exercise
Do this for all neurons. Do they actually have cosine tuning as indicated by the research?
Part 2
We can also fit a parameterized model to the spike count. In this case we'll use a Poisson distribution where the rate parameter depends on the exponential of the cosine of the angle away from the arm and a scaling parameter.
$$P(n; \theta) = \frac{\lambda(\theta)^n\exp(-\lambda(\theta))}{n!}$$
where
$$\lambda = \exp\left(\alpha+\beta\cos(\theta-\theta_\text{arm})\right).$$
End of explanation
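The helper md.evaluate_score_ExpCos used below comes from the course module; a plausible sketch of such a cost is the Poisson negative log-likelihood of the model above (this is an assumption about the helper, shown only for illustration):
def neg_log_likelihood(params, counts, angles):
    # lambda(theta) = exp(alpha + beta * cos(theta - theta_arm))
    alpha, beta, theta_arm = params
    lam = np.exp(alpha + beta * np.cos(angles - theta_arm))
    # Poisson NLL up to a constant: the log(n!) term does not depend on params
    return np.sum(lam - counts * np.log(lam))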
initial_guess= [.8, 0.1, 4]
params = opt.fmin(md.evaluate_score_ExpCos, initial_guess,
args=(spikes[nNeuron, :], angle))
plt_angle = np.arange(-np.pi, np.pi, np.pi / 80.)
out = np.exp(params[0] + params[1] * np.cos(plt_angle - params[2]))
fig, ax = plt.subplots()
ax.plot(angle, spikes_noisy, 'r.')
ax.plot(plt_angle, out, lw=3)
md.format_plot(ax, max_s)
Explanation: We'll use the fmin function in Python, which allows us to define an arbitrary "cost" function that is then minimized by tuning model parameters.
End of explanation
from sklearn.ensemble import RandomForestRegressor as RFR, RandomForestClassifier as RFC
from sklearn.linear_model import Ridge as RR, LogisticRegression as LgR
nNeuron = 0
# First let's have some meaningful regressors
Y = spikes[nNeuron]
X = np.hstack((hand_vel, hand_pos))
models = [RFR(n_estimators=10), RFC(n_estimators=10),
RR(alpha=1.), LgR(C=1., solver='lbfgs', multi_class='multinomial')]
model_names = ['Random Forest\nRegression', 'Random Forest\nClassification',
'Ridge Regression', 'Logistic Regression']
folds = 10
mse = np.zeros((len(models), folds))
mse_train = np.zeros((len(models), folds))
def mse_func(y, y_hat):
return ((y-y_hat)**2).mean()
for ii in range(folds):
inds_train = np.arange(Y.size)
inds_test = inds_train[np.mod(inds_train, folds) == ii]
inds_train = inds_train[np.logical_not(np.mod(inds_train, folds) == ii)]
for jj, model in enumerate(models):
model.fit(X[inds_train], Y[inds_train])
mse_train[jj, ii] = mse_func(model.predict(X[inds_train]), Y[inds_train])
mse[jj, ii] = mse_func(model.predict(X[inds_test]), Y[inds_test])
f, ax = plt.subplots(figsize=(10, 4))
ax.plot(np.arange(4)-.1, mse_train, 'r.')
ax.plot(np.arange(4)+.1, mse, 'b.')
ax.plot(-10, 10, 'r.', label='Train mse')
ax.plot(-10, 10, 'b.', label='Validation mse')
plt.legend(loc='best')
ax.set_xticks(range(4))
ax.set_xticklabels(model_names)
ax.set_ylim(0, 2)
ax.set_ylabel('Mean-squared Error')
_ = ax.set_xlim(-.5, 3.5)
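One way to explore the effect of the Ridge regularization strength mentioned below (a sketch reusing mse_func and the last train/test split from the loop above; the alpha grid is arbitrary):
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    model = RR(alpha=alpha)
    model.fit(X[inds_train], Y[inds_train])
    train_err = mse_func(model.predict(X[inds_train]), Y[inds_train])
    valid_err = mse_func(model.predict(X[inds_test]), Y[inds_test])
    print(alpha, train_err, valid_err)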
Explanation: By optimizing this cost function, the model has uncovered the above structure (blue line) in the data. Does it seem to describe the data well? Try using more complicated model functions and see how it affects the result.
Advanced exercise
Is exponential better than linear-threshold?
Part 3
We can also use more powerful machine learning tools to regress onto the spike count.
We'll use Random Forests and Regression models to predict spike count as a function of arm position and velocity. For each of these models we can either regress onto the spike count treating it like a continuous value, or we can predict discrete values for spike count treating it like a classification problem.
We'll fit a number of models, then calculate their ability to predict the values of datapoints they were not trained on. This is called "cross-validating" your model, which is a crucial component of machine learning.
The Random Forest models have the integer parameter n_estimators which increases the complexity of the model. The Regression models have continuous regularization parameters: alpha for Ridge regression (larger = more regularization) and C for Logistic Regression (small = more regularization). Try changing these parameters and see how it affects training and validation performance.
End of explanation |
6,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diffusion Monte Carlo propagators
Most of the equations are taken from Chapter 24 ("Projector quantum Monte Carlo") in "Interacting Electrons" (2016) by R.M. Martin, L. Reining, and D.M. Ceperley.
Trotter breakup
Step1: In coordinate space, no importance sampling
Step2: In coordinate space, with importance sampling
Step3: Sampling the drift-diffusion term
Step4: Scaling the drift
In QMCPACK, the drift term is scaled. (From C. J. Umrigar, M. P. Nightingale, K. J. Runge "A diffusion Monte Carlo algorithm with very small time-step errors" JCP 99, 2865 (1993) doi
Step5: Values for Testing | Python Code:
T_op = Symbol('That') # Kinetic energy operator
V_op = Symbol('Vhat') # Potential energy operator
tau = Symbol('tau') # Projection time
n = Symbol('n',isinteger=True) # Number of timestep divisions
dt = Symbol(r'\Delta\tau') # Time for individual timestep
# Eq. 24.7
Eq(exp(-tau *(T_op + V_op)),
Limit(exp(-dt*T_op) * exp(-dt*V_op),n,oo))
Explanation: Diffusion Monte Carlo propagators
Most of the equations are taken from Chapter 24 ("Projector quantum Monte Carlo") in "Interacting Electrons" (2016) by R.M. Martin, L. Reining, and D.M. Ceperley.
Trotter breakup
End of explanation
R = Symbol('R')
Rp = Symbol("R'")
Rpp = Symbol("R''")
ET = Symbol("E_T") # Trial energy
N = Symbol('N',isinteger=True) # number of particles
V = Symbol('V') # potential energy
bracket = lambda a,b,c : Symbol(r'\left\langle{%s}\left|{%s}\right|{%s}\right\rangle'%(latex(a),latex(b),latex(c)))
# Kinetic energy - Eq. 24.8
Eq(bracket(R, exp(-dt*T_op), Rpp),
(2 *pi*dt)**(-3*N/2) * exp(-(R-Rpp)**2/(2*dt)))
# Potential energy - Eq. 24.9
Eq(bracket(Rpp, exp(-dt*(V_op-ET)),Rp),
exp(-dt*V(Rp))*DiracDelta(Rpp-Rp))
Explanation: In coordinate space, no importance sampling
End of explanation
F = Symbol('F_i')
psiG = Symbol('Psi_G',commutative=False)
EL = Symbol("E_L") # Local energy
H_op = Symbol("Hhat",commutative=False)
gradient = lambda x: Symbol(r'\nabla{%s}'%latex(x))
gradient_with_index = lambda x,i : Symbol(r'\nabla_{%s}{%s}'%(latex(i),latex(x)))
# Quantum force
Eq(F(R), 2*gradient_with_index(log(psiG),Symbol('i')))
# Local energy
Eq(EL(R), psiG**-1 * H_op * psiG)
drift_diffusion = exp(-(Rp-R-S.Half*dt*F(R))**2/(2*dt))
drift_diffusion
branching = exp(-dt*(EL(R)-ET))
branching
prefactor = (2*pi*dt)**(-3*N/2)
prefactor
# Eq. 24.18
prefactor*drift_diffusion*branching
Explanation: In coordinate space, with importance sampling
End of explanation
chi = Symbol('chi') # gaussian random sample with zero mean and variance delta tau
r = Symbol('r')
rp = Symbol("r'")
# Sample new positions with this formula (Eq 23.13)
# Question - how to detemine sampling formula from evolution equation/distribution above?
sample_drift_diffusion = Eq(rp, r + dt * F + chi)
sample_drift_diffusion
Explanation: Sampling the drift-diffusion term
End of explanation
Fmag = Symbol('Fmag^2')
epsilon = Symbol('epsilon')
drift_scale = Piecewise( (tau,Fmag < epsilon ),
((sqrt(1 + 2*Fmag*tau)-1)/Fmag, True))
drift_scale
scaled_drift = F*drift_scale
scaled_drift
Explanation: Scaling the drift
In QMCPACK, the drift term is scaled. (From C. J. Umrigar, M. P. Nightingale, K. J. Runge "A diffusion Monte Carlo algorithm with very small time-step errors" JCP 99, 2865 (1993) doi: 10.1063/1.465195 )
End of explanation
class SymPrinter(printing.lambdarepr.NumPyPrinter):
def _print_Symbol(self, expr):
if expr.name == r'\Delta\tau':
return 'dt'
return expr.name
# RNG corresponding to src/ParticleBase/RandomSeqGenerator.h
def gaussian_rng_list(n):
input_rng = [0.5]*(n+1)
slightly_less_than_one = 1.0 - sys.float_info.epsilon
vals = []
for i in range(0,n,2):
temp1 = math.sqrt(-2.0 * math.log(1.0- slightly_less_than_one*input_rng[i]))
temp2 = 2*math.pi*input_rng[i+1]
vals.append(temp1*math.cos(temp2))
vals.append(temp2*math.sin(temp2))
if n%2 == 1:
temp1 = math.sqrt(-2.0 * math.log(1.0- slightly_less_than_one*input_rng[n-1]))
temp2 = 2*math.pi*input_rng[n]
vals.append(temp1*math.cos(temp2))
return vals
chi_vals = np.array(gaussian_rng_list(6)).reshape((2,3))
chi_vals
r_vals = np.array( [ [1.0, 0.0, 0.0],
[0.0, 0.0, 1.0]])
tau_val = 0.1
scaled_chi_vals = chi_vals * math.sqrt(tau_val)
drift_diffuse_func = lambdify((r, F, chi, dt),sample_drift_diffusion.rhs, printer=SymPrinter)
scaled_drift_func = lambdify((tau, Fmag, F), scaled_drift.subs(epsilon, sys.float_info.epsilon) )
# For a constant wavefunction, gradient is zero
for r_val, chi_val in zip(r_vals, scaled_chi_vals):
rp_val = np.zeros(3)
rp_val = drift_diffuse_func(r_val, np.zeros(3), chi_val, tau_val)
print rp_val
# For a linear wavefunction, gradient is constant
grad_coeff = np.array([ 1.0, 2.0, 3.0])
for r_val, chi_val in zip(r_vals, scaled_chi_vals):
rp_val = np.zeros(3)
# Scaled drift is already multiplied by dt, accomodate by setting dt param to 1.0
rp_val = drift_diffuse_func(r_val, scaled_drift_func(tau_val, np.dot(grad_coeff, grad_coeff),grad_coeff), chi_val, 1.0)
print ['%.15g'%v for v in rp_val]
# Compute scaled drift
drift_scale.subs({epsilon:sys.float_info.epsilon, tau:tau_val, Fmag:np.dot(grad_coeff, grad_coeff)})
Explanation: Values for Testing
End of explanation |
6,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SMA ROC Portfolio
1. The Security is above its 200-day moving average
2. The Security closes with sma_roc > 0, buy.
3. If the Security closes with sma_roc < 0, sell your long position.
(For a Portfolio of securities.)
Step1: Some global data
Yahoo finance cryptocurrencies
Step2: Run Strategy
Step3: View log DataFrames
Step4: Generate strategy stats - display all available stats
Step5: View Performance by Symbol
Step6: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step7: Plot Equity Curves
Step8: Bar Graph
Step9: Analysis | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
pd.set_option('display.max_rows', None)
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: SMA ROC Portfolio
1. The Security is above its 200-day moving average
2. The Security closes with sma_roc > 0, buy.
3. If the Security closes with sma_roc < 0, sell your long position.
(For a Portfolio of securities.)
End of explanation
# Symbol Lists
BitCoin = ['BTC-USD']
CryptoCurrencies_2016 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD',
'XEM-USD', 'DASH-USD', 'MAID-USD', 'LSK-USD', 'DOGE-USD']
# 'DAO-USD' is a dead coin, so missing from above
CryptoCurrencies_2017 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD', 'ETC-USD',
'XEM-USD', 'MIOTA-USD', 'DASH-USD', 'BTS-USD']
# 'STRAT-USD' last trade date is 2020-11-18, so removed
CryptoCurrencies_2018 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'BCH-USD', 'EOS-USD',
'LTC-USD', 'XLM-USD', 'ADA-USD', 'TRX-USD', 'MIOTA-USD']
CryptoCurrencies_2019 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD', 'BCH-USD',
'EOS-USD', 'BNB-USD', 'USDT-USD', 'BSV-USD', 'CRO-USD']
Stocks_Bonds_Gold_Crypto = ['SPY', 'QQQ', 'TLT', 'GLD', 'BTC-USD']
# Set 'continuous_timeseries' : False (for mixed asset classes)
start_1900 = datetime.datetime(1900, 1, 1)
start_2016 = datetime.datetime(2016, 6, 26)
start_2017 = datetime.datetime(2017, 6, 25)
start_2018 = datetime.datetime(2018, 6, 24)
start_2019 = datetime.datetime(2019, 6, 30)
# Pick one of the above symbols and start pairs
symbols = CryptoCurrencies_2016
start = start_2016
capital = 10000
end = datetime.datetime.now()
# NOTE: Cryptocurrencies have 7 days a week timeseries. You can test them with
# their entire timeseries by setting stock_market_calendar=False. Alternatively,
# to trade with stock market calendar by setting stock_market_calendar=True.
# For mixed asset classes that include stocks or ETFs, you must set
# stock_market_calendar=True.
options = {
'use_adj' : False,
'use_cache' : True,
'use_continuous_calendar' : False,
'force_stock_market_calendar' : True,
'stop_loss_pct' : 1.0,
'margin' : 1,
'lookback' : 1,
'sma_timeperiod': 20,
'sma_pct_band': 3,
'use_regime_filter' : False,
'use_vola_weight' : True
}
Explanation: Some global data
Yahoo finance cryptocurrencies:
https://finance.yahoo.com/cryptocurrencies/
10 largest Crypto currencies from 5 years ago:
https://coinmarketcap.com/historical/20160626/
10 largest Crypto currencies from 4 years ago:
https://coinmarketcap.com/historical/20170625/
10 largest Crypto currencies from 3 years ago:
https://coinmarketcap.com/historical/20180624/
10 largest Crypto currencies from 2 years ago:
https://coinmarketcap.com/historical/20190630/
Some global data
End of explanation
s = strategy.Strategy(symbols, capital, start, end, options=options)
s.run()
Explanation: Run Strategy
End of explanation
s.rlog.head()
s.tlog.head()
s.dbal.tail()
Explanation: View log DataFrames: raw trade log, trade log, and daily balance
End of explanation
pf.print_full(s.stats)
Explanation: Generate strategy stats - display all available stats
End of explanation
weights = {symbol: 1 / len(symbols) for symbol in symbols}
totals = s.portfolio.performance_per_symbol(weights=weights)
totals
corr_df = s.portfolio.correlation_map(s.ts)
corr_df
Explanation: View Performance by Symbol
End of explanation
benchmark = pf.Benchmark('BTC-USD', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
End of explanation
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
Explanation: Plot Equity Curves: Strategy vs Benchmark
End of explanation
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
Explanation: Bar Graph: Strategy vs Benchmark
End of explanation
kelly = pf.kelly_criterion(s.stats, benchmark.stats)
kelly
Explanation: Analysis: Kelly Criterian
End of explanation |
6,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
6,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Tutorial
Now you are ready to start creating your own AutoML video classification model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
Step13: Quick peek at your data
This tutorial uses a version of the MIT Human Motion dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step14: Create the Dataset
Next, create the Dataset resource using the create method for the VideoDataset class, which takes the following parameters
Step15: Create and run training pipeline
To train an AutoML model, you perform two steps
Step16: Run the training pipeline
Next, you run the job to start the training job by invoking the method run, with the following parameters
Step17: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
Step18: Send a batch prediction request
Send a batch prediction to your deployed model.
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as a test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
Step19: Make a batch input file
Now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs
Step20: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
Step21: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
Step22: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format
Step23: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: AutoML training video classification model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_classification_batch.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create video classification models and do batch prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the golf swing recognition portion of the Human Motion dataset from MIT. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model will predict the start frame where a golf swing begins.
Objective
In this tutorial, you create an AutoML video classification model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:
Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aiplatform
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = "gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv"
Explanation: Tutorial
Now you are ready to start creating your own AutoML video classification model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
Explanation: Quick peek at your data
This tutorial uses a version of the MIT Human Motion dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
dataset = aiplatform.VideoDataset.create(
display_name="MIT Human Motion" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aiplatform.schema.dataset.ioformat.video.classification,
)
print(dataset.resource_name)
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the VideoDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
This operation may take several minutes.
End of explanation
job = aiplatform.AutoMLVideoTrainingJob(
display_name="hmdb_" + TIMESTAMP,
prediction_type="classification",
)
print(job)
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLVideoTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A video classification model.
object_tracking: A video object tracking model.
action_recognition: A video action recognition model.
End of explanation
model = job.run(
dataset=dataset,
model_display_name="hmdb_" + TIMESTAMP,
training_fraction_split=0.8,
test_fraction_split=0.2,
)
Explanation: Run the training pipeline
Next, you run the job to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
The run method when completed returns the Model resource.
The execution of the training pipeline can take over 3 hours to complete.
End of explanation
# Get model resource ID
models = aiplatform.Model.list(filter="display_name=hmdb_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
test_items = ! gsutil cat $IMPORT_FILE | head -n2
if len(test_items[0]) == 5:
_, test_item_1, test_label_1, _, _ = str(test_items[0]).split(",")
_, test_item_2, test_label_2, _, _ = str(test_items[1]).split(",")
else:
test_item_1, test_label_1, _, _ = str(test_items[0]).split(",")
test_item_2, test_label_2, _, _ = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
Explanation: Send a batch prediction request
Send a batch prediction to your deployed model.
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as a test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
import json
from google.cloud import storage
test_filename = "test.jsonl"
gcs_input_uri = BUCKET_NAME + "/" + test_filename
data_1 = {
"content": test_item_1,
"mimeType": "video/avi",
"timeSegmentStart": "0.0s",
"timeSegmentEnd": "5.0s",
}
data_2 = {
"content": test_item_2,
"mimeType": "video/avi",
"timeSegmentStart": "0.0s",
"timeSegmentEnd": "5.0s",
}
bucket = storage.Client(project=PROJECT_ID).bucket(BUCKET_NAME.replace("gs://", ""))
blob = bucket.blob(blob_name=test_filename)
data = json.dumps(data_1) + "\n" + json.dumps(data_2) + "\n"
blob.upload_from_string(data)
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Make a batch input file
Now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs:
content: The Cloud Storage path to the video.
mimeType: The content type. In our example, it is an avi file.
timeSegmentStart: The start timestamp in the video to do prediction on. Note, the timestamp must be specified as a string and followed by s (second), m (minute) or h (hour).
timeSegmentEnd: The end timestamp in the video to do prediction on.
End of explanation
batch_predict_job = model.batch_predict(
job_display_name="hmdb_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
batch_predict_job.wait()
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}".replace(
BUCKET_NAME + "/", ""
)
data = bucket.get_blob(gfile_name).download_as_string()
data = json.loads(data)
print(data)
Explanation: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
timeSegmentStart: The time offset in the video to the start of the video sequence.
timeSegmentEnd: The time offset in the video to the end of the video sequence.
End of explanation
# If the bucket needs to be deleted too, please set "delete_bucket" to True
delete_bucket = False
# Delete the dataset using the Vertex dataset object
dataset.delete()
# Delete the model using the Vertex model object
model.delete()
# Delete the AutoML or Pipeline training job
job.delete()
# Delete the batch prediction job using the Vertex batch prediction object
batch_predict_job.delete()
# Delete the Cloud storage bucket
if delete_bucket is True:
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Model
AutoML Training Job
Batch Job
Cloud Storage Bucket
End of explanation |
6,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natural Language Processing - Text Mining with Twitter API
Introduction
Using Natural Language Processing we can extract relevant and insightful data from social media. In this demonstration I will use the Python Twitter API tweepy to stream data into my console and extract that data into a text file for Natural Language Procssing.
In the past year we have seen the rise in fashion dominance of Adidas in the sneakers category. This was reflected in their stock price as Adidas have been climbing consistently in the past year. We will look to twitter to help acertain this trend by comparing the number of relevant tweets between Adidas and Nike.
Streaming Data from Twitter
To start off we will stream data from Twitter using the tweepy API. We will capture tweets that contains keywords Nike and Adidas.
Step1: The code above was processed through Canopy and data was exported to a text file via the Canopy command steaming_data.py > data.text. Due to the lack of processing power of my laptop, I've only obtained roughly 3000 tweets.
Dataframe Preparation
The next step is to create a dataframe to store the necessary information. For starters we will take in language and country for some quick visualizations.
Step2: Analyzing tweets
With dataframe created we can now use some of that information for some quick visualizations.
Step3: From the barchart above we can see that English was the primary language of the tweets collected.
Next we will create new columns and extract information on tweets that are Nike or Adidas related.
Step4: From the above barchart we can see that Nike is still ranked higher in terms of mentioned tweets. We will take this further and investigate whether this is still the case by placing some relevance filters.
Relevant data
We've seen Nike ranked higher in terms of mentioned tweets, but now lets add in some relevant keywords. As our focus is sneakers, we will add in shoes, sneakers and kicks as relevant keywords. The code in the following section will take those keywords and look for tweets that are relevant.
Step5: With the relevant keywords added we can now see that Adidas actually ranked higher. This is no surprise but helps ascertain that Adidas is now big player in the sneakers market.
Please note that the analysis and data above are for demonstration purposes only. For a more robust analysis, more data (tweets) is needed.
Extract Links from tweets
In this last section we will pull links of relevant tweets into a new dataframe for further analysis. | Python Code:
#Import the necessary methods from tweepy library
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
access_token = "Access Token"
access_token_secret = "Access Token Secret"
consumer_key = "Consumer Key"
consumer_secret = "Consumer Secret"
class StdOutListener(StreamListener):
def on_data(self, data):
print data
return True
def on_error(self, status):
print status
if __name__ == '__main__':
l = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, l)
#key word capture
stream.filter(track=['Nike', 'Adidas'])
Explanation: Natural Language Processing - Text Mining with Twitter API
Introduction
Using Natural Language Processing we can extract relevant and insightful data from social media. In this demonstration I will use the Python Twitter API tweepy to stream data into my console and extract that data into a text file for Natural Language Processing.
In the past year we have seen the rise in fashion dominance of Adidas in the sneakers category. This was reflected in their stock price, which has been climbing consistently over the same period. We will look to Twitter to help ascertain this trend by comparing the number of relevant tweets between Adidas and Nike.
Streaming Data from Twitter
To start off we will stream data from Twitter using the tweepy API. We will capture tweets that contain the keywords Nike and Adidas.
End of explanation
import json
import pandas as pd
import re
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
tweets_data_path = '/Users/data.txt'
tweets_data = []
tweets_file = open(tweets_data_path, "r")
for line in tweets_file:
try:
tweet = json.loads(line)
tweets_data.append(tweet)
except:
continue
tweets = pd.DataFrame()
tweets['text'] = map(lambda tweet: tweet.get('text', None),tweets_data)
tweets['lang'] = map(lambda tweet: tweet.get('lang',None),tweets_data)
tweets['country'] = map(lambda tweet: tweet.get('place',{}).get('country',{}) if tweet.get('place',None) != None else None, tweets_data)
tweets_by_lang = tweets['lang'].value_counts()
Explanation: The code above was processed through Canopy and data was exported to a text file via the Canopy command steaming_data.py > data.text. Due to the lack of processing power of my laptop, I've only obtained roughly 3000 tweets.
Dataframe Preparation
The next step is to create a dataframe to store the necessary information. For starters we will take in language and country for some quick visualizations.
End of explanation
fig, ax = plt.subplots()
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=10)
ax.set_xlabel('Languages', fontsize=15)
ax.set_ylabel('Number of tweets' , fontsize=15)
ax.set_title('Top 5 languages', fontsize=15, fontweight='bold')
tweets_by_lang[:5].plot(ax=ax, kind='bar', color='red')
Explanation: Analyzing tweets
With dataframe created we can now use some of that information for some quick visualizations.
End of explanation
#some quick data cleaning to remove rows will null values
tweets = tweets.drop(tweets.index[[878,881,886,925]])
#function to return True if a certain word in found in the text
def word_in_text(word, text):
word = word.lower()
text = text.lower()
match = re.search(word, text)
if match:
return True
return False
tweets['Nike'] = tweets['text'].apply(lambda tweet: word_in_text('nike', tweet))
tweets['Adidas'] = tweets['text'].apply(lambda tweet: word_in_text('adidas', tweet))
brands = ['nike', 'adidas']
tweets_by_brands = [tweets['Nike'].value_counts()[True], tweets['Adidas'].value_counts()[True]]
x_pos = list(range(len(brands)))
width = 0.8
fig, ax = plt.subplots()
plt.bar(x_pos, tweets_by_brands, width, alpha=1, color='g')
ax.set_ylabel('Number of tweets', fontsize=12)
ax.set_title('Nike vs Adidas', fontsize=15, fontweight='bold')
ax.set_xticks([p + 0.4 * width for p in x_pos])
ax.set_xticklabels(brands)
plt.grid()
Explanation: From the barchart above we can see that English was the primary language of the tweets collected.
Next we will create new columns and extract information on tweets that are Nike or Adidas related.
End of explanation
tweets['shoes'] = tweets['text'].apply(lambda tweet: word_in_text('shoes', tweet))
tweets['sneakers'] = tweets['text'].apply(lambda tweet: word_in_text('sneakers', tweet))
tweets['kicks'] = tweets['text'].apply(lambda tweet: word_in_text('kicks', tweet))
tweets['relevant'] = tweets['text'].apply(lambda tweet: word_in_text('shoes', tweet) or \
word_in_text('sneakers', tweet) or word_in_text('kicks', tweet))
print tweets['shoes'].value_counts()[True]
print tweets['sneakers'].value_counts()[True]
print tweets['kicks'].value_counts()[True]
print tweets['relevant'].value_counts()[True]
tweets_by_brand_rel = [tweets[tweets['relevant'] == True]['Nike'].value_counts()[True],
tweets[tweets['relevant'] == True]['Adidas'].value_counts()[True]]
x_pos = list(range(len(brands)))
width = 0.8
fig, ax = plt.subplots()
plt.bar(x_pos, tweets_by_brand_rel, width,alpha=1,color='g')
ax.set_ylabel('Number of tweets', fontsize=15)
ax.set_title('Nike vs Adidas (Relevant data)', fontsize=10, fontweight='bold')
ax.set_xticks([p + 0.4 * width for p in x_pos])
ax.set_xticklabels(brands)
plt.grid()
Explanation: From the above barchart we can see that Nike is still ranked higher in terms of mentioned tweets. We will take this further and investigate whether this is still the case by placing some relevance filters.
Relevant data
We've seen Nike ranked higher in terms of mentioned tweets, but now let's add in some relevant keywords. As our focus is sneakers, we will add in shoes, sneakers and kicks as relevant keywords. The code in the following section will take those keywords and look for tweets that are relevant.
End of explanation
def extract_link(text):
regex = r'https?://[^\s<>"]+|www\.[^\s<>"]+'
match = re.search(regex, text)
if match:
return match.group()
return ''
tweets['link'] = tweets['text'].apply(lambda tweet: extract_link(tweet))
tweets_relevant = tweets[tweets['relevant'] == True]
tweets_relevant_with_link = tweets_relevant[tweets_relevant['link'] != '']
links_rel_nike = tweets_relevant_with_link[tweets_relevant_with_link['Nike'] == True]['link']
links_rel_nike.head()
Explanation: With the relevant keywords added we can now see that Adidas actually ranked higher. This is no surprise but helps ascertain that Adidas is now a big player in the sneakers market.
Please note that the analysis and data above are for demonstration purposes only. For a more robust analysis, more data (tweets) is needed.
Extract Links from tweets
In this last section we will pull links of relevant tweets into a new dataframe for further analysis.
End of explanation |
6,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Analysis
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: This chapter presents a problem inspired by the game show The Price is Right.
It is a silly example, but it demonstrates a useful process called Bayesian decision analysis.
As in previous examples, we'll use data and prior distribution to compute a posterior distribution; then we'll use the posterior distribution to choose an optimal strategy in a game that involves bidding.
As part of the solution, we will use kernel density estimation (KDE) to estimate the prior distribution, and a normal distribution to compute the likelihood of the data.
And at the end of the chapter, I pose a related problem you can solve as an exercise.
The Price Is Right Problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on The Price is Right, an American television game show. They competed in a game called "The Showcase", where the objective is to guess the price of a collection of prizes. The contestant who comes closest to the actual price, without going over, wins the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid \$26,000.
Letia's showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel's showcase was \$25,347. His bid was too high, so he lost.
The actual price of Letia's showcase was \$21,578.
She was only off by \$78, so she won her showcase and, because her bid was off by less than 250, she also won Nathaniel's showcase.
For a Bayesian thinker, this scenario suggests several questions
Step3: The following function reads the data and cleans it up a little.
Step4: I'll read both files and concatenate them.
Step5: Here's what the dataset looks like
Step7: The first two columns, Showcase 1 and Showcase 2, are the values of the showcases in dollars.
The next two columns are the bids the contestants made.
The last two columns are the differences between the actual values and the bids.
Kernel Density Estimation
This dataset contains the prices for 313 previous showcases, which we can think of as a sample from the population of possible prices.
We can use this sample to estimate the prior distribution of showcase prices. One way to do that is kernel density estimation (KDE), which uses the sample to estimate a smooth distribution. If you are not familiar with KDE, you can read about it here.
SciPy provides gaussian_kde, which takes a sample and returns an object that represents the estimated distribution.
The following function takes sample, makes a KDE, evaluates it at a given sequence of quantities, qs, and returns the result as a normalized PMF.
Step8: We can use it to estimate the distribution of values for Showcase 1
Step9: Here's what it looks like
Step10: Exercise
Step11: Distribution of Error
To update these priors, we have to answer these questions
Step12: To visualize the distribution of these differences, we can use KDE again.
Step13: Here's what these distributions look like
Step14: It looks like the bids are too low more often than too high, which makes sense. Remember that under the rules of the game, you lose if you overbid, so contestants probably underbid to some degree deliberately.
For example, if they guess that the value of the showcase is \$40,000, they might bid \$36,000 to avoid going over.
It looks like these distributions are well modeled by a normal distribution, so we can summarize them with their mean and standard deviation.
For example, here is the mean and standard deviation of Diff for Player 1.
Step15: Now we can use these differences to model the contestant's distribution of errors.
This step is a little tricky because we don't actually know the contestant's guesses; we only know what they bid.
So we have to make some assumptions
Step16: The result is an object that provides pdf, which evaluates the probability density function of the normal distribution.
For example, here is the probability density of error=-100, based on the distribution of errors for Player 1.
Step17: By itself, this number doesn't mean very much, because probability densities are not probabilities. But they are proportional to probabilities, so we can use them as likelihoods in a Bayesian update, as we'll see in the next section.
Update
Suppose you are Player 1. You see the prizes in your showcase and your guess for the total price is \$23,000.
From your guess I will subtract away each hypothetical price in the prior distribution; the result is your error under each hypothesis.
Step18: Now suppose we know, based on past performance, that your estimation error is well modeled by error_dist1.
Under that assumption we can compute the likelihood of your error under each hypothesis.
Step19: The result is an array of likelihoods, which we can use to update the prior.
Step20: Here's what the posterior distribution looks like
Step21: Because your initial guess is in the lower end of the range, the posterior distribution has shifted to the left. We can compute the posterior mean to see by how much.
Step22: Before you saw the prizes, you expected to see a showcase with a value close to \$30,000.
After making a guess of \$23,000, you updated the prior distribution.
Based on the combination of the prior and your guess, you now expect the actual price to be about \$26,000.
Exercise
Step24: Probability of Winning
Now that we have a posterior distribution for each player, let's think about strategy.
First, from the point of view of Player 1, let's compute the probability that Player 2 overbids. To keep it simple, I'll use only the performance of past players, ignoring the value of the showcase.
The following function takes a sequence of past bids and returns the fraction that overbid.
Step25: Here's an estimate for the probability that Player 2 overbids.
Step27: Now suppose Player 1 underbids by \$5000.
What is the probability that Player 2 underbids by more?
The following function uses past performance to estimate the probability that a player underbids by more than a given amount, diff
Step28: Here's the probability that Player 2 underbids by more than \$5000.
Step29: And here's the probability they underbid by more than \$10,000.
Step31: We can combine these functions to compute the probability that Player 1 wins, given the difference between their bid and the actual price
Step32: Here's the probability that you win, given that you underbid by \$5000.
Step33: Now let's look at the probability of winning for a range of possible differences.
Step34: Here's what it looks like
Step35: If you underbid by \$30,000, the chance of winning is about 30%, which is mostly the chance your opponent overbids.
As your bid gets closer to the actual price, your chance of winning approaches 1.
And, of course, if you overbid, you lose (even if your opponent also overbids).
Exercise
Step37: Decision Analysis
In the previous section we computed the probability of winning, given that we have underbid by a particular amount.
In reality the contestants don't know how much they have underbid by, because they don't know the actual price.
But they do have a posterior distribution that represents their beliefs about the actual price, and they can use that to estimate their probability of winning with a given bid.
The following function takes a possible bid, a posterior distribution of actual prices, and a sample of differences for the opponent.
It loops through the hypothetical prices in the posterior distribution and, for each price,
Computes the difference between the bid and the hypothetical price,
Computes the probability that the player wins, given that difference, and
Adds up the weighted sum of the probabilities, where the weights are the probabilities in the posterior distribution.
Step38: This loop implements the law of total probability
Step39: Now we can loop through a series of possible bids and compute the probability of winning for each one.
Step40: Here are the results.
Step41: And here's the bid that maximizes Player 1's chance of winning.
Step42: Recall that your guess was \$23,000.
Using your guess to compute the posterior distribution, the posterior mean is about \$26,000.
But the bid that maximizes your chance of winning is \$21,000.
Exercise
Step44: Maximizing Expected Gain
In the previous section we computed the bid that maximizes your chance of winning.
And if that's your goal, the bid we computed is optimal.
But winning isn't everything.
Remember that if your bid is off by \$250 or less, you win both showcases.
So it might be a good idea to increase your bid a little
Step45: For example, if the actual price is \$35000
and you bid \$30000,
you will win about \$23,600 worth of prizes on average, taking into account your probability of losing, winning one showcase, or winning both.
Step47: In reality we don't know the actual price, but we have a posterior distribution that represents what we know about it.
By averaging over the prices and probabilities in the posterior distribution, we can compute the expected gain for a particular bid.
In this context, "expected" means the average over the possible showcase values, weighted by their probabilities.
Step48: For the posterior we computed earlier, based on a guess of \$23,000, the expected gain for a bid of \$21,000 is about \$16,900.
Step49: But can we do any better?
To find out, we can loop through a range of bids and find the one that maximizes expected gain.
Step50: Here are the results.
Step51: Here is the optimal bid.
Step52: With that bid, the expected gain is about \$17,400.
Step53: Recall that your initial guess was \$23,000.
The bid that maximizes the chance of winning is \$21,000.
And the bid that maximizes your expected gain is \$22,000.
Exercise
Step59: Summary
There's a lot going on in this chapter, so let's review the steps
Step60: To test these functions, suppose we get exactly 10 orders per week for eight weeks
Step61: If you print 60 books, your net profit is \$200, as in the example.
Step62: If you print 100 books, your net profit is \$310.
Step63: Of course, in the context of the problem you don't know how many books will be ordered in any given week. You don't even know the average rate of orders. However, given the data and some assumptions about the prior, you can compute the distribution of the rate of orders.
You'll have a chance to do that, but to demonstrate the decision analysis part of the problem, I'll start with the arbitrary assumption that order rates come from a gamma distribution with mean 9.
Here's a Pmf that represents this distribution.
Step64: And here's what it looks like
Step65: Now, we could generate a predictive distribution for the number of books ordered in a given week, but in this example we have to deal with a complicated cost function. In particular, out_of_stock_cost depends on the sequence of orders.
So, rather than generate a predictive distribution, I suggest we run simulations. I'll demonstrate the steps.
First, from our hypothetical distribution of rates, we can draw a random sample of 1000 values.
Step66: For each possible rate, we can generate a sequence of 8 orders.
Step68: Each row of this array is a hypothetical sequence of orders based on a different hypothetical order rate.
Now, if you tell me how many books you printed, I can compute your expected profits, averaged over these 1000 possible sequences.
Step69: For example, here are the expected profits if you order 70, 80, or 90 books.
Step70: Now, let's sweep through a range of values and compute expected profits as a function of the number of books you print.
Step71: Here is the optimal order and the expected profit.
Step73: Now it's your turn. Choose a prior that you think is reasonable, update it with the data you are given, and then use the posterior distribution to do the analysis I just demonstrated. | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Decision Analysis
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
# Load the data files
download('https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/data/showcases.2011.csv')
download('https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/data/showcases.2012.csv')
Explanation: This chapter presents a problem inspired by the game show The Price is Right.
It is a silly example, but it demonstrates a useful process called Bayesian decision analysis.
As in previous examples, we'll use data and prior distribution to compute a posterior distribution; then we'll use the posterior distribution to choose an optimal strategy in a game that involves bidding.
As part of the solution, we will use kernel density estimation (KDE) to estimate the prior distribution, and a normal distribution to compute the likelihood of the data.
And at the end of the chapter, I pose a related problem you can solve as an exercise.
The Price Is Right Problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on The Price is Right, an American television game show. They competed in a game called "The Showcase", where the objective is to guess the price of a collection of prizes. The contestant who comes closest to the actual price, without going over, wins the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid \$26,000.
Letia's showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel's showcase was \$25,347. His bid was too high, so he lost.
The actual price of Letia's showcase was \$21,578.
She was only off by \$78, so she won her showcase and, because her bid was off by less than \$250, she also won Nathaniel's showcase.
For a Bayesian thinker, this scenario suggests several questions:
Before seeing the prizes, what prior beliefs should the contestants have about the price of the showcase?
After seeing the prizes, how should the contestants update those beliefs?
Based on the posterior distribution, what should the contestants bid?
The third question demonstrates a common use of Bayesian methods: decision analysis.
This problem is inspired by an example in Cameron Davidson-Pilon's book, Probabilistic Programming and Bayesian Methods for Hackers.
The Prior
To choose a prior distribution of prices, we can take advantage of data from previous episodes. Fortunately, fans of the show keep detailed records.
For this example, I downloaded files containing the price of each showcase from the 2011 and 2012 seasons and the bids offered by the contestants.
The following cells load the data files.
End of explanation
import pandas as pd
def read_data(filename):
Read the showcase price data.
df = pd.read_csv(filename, index_col=0, skiprows=[1])
return df.dropna().transpose()
Explanation: The following function reads the data and cleans it up a little.
End of explanation
df2011 = read_data('showcases.2011.csv')
df2012 = read_data('showcases.2012.csv')
df = pd.concat([df2011, df2012], ignore_index=True)
print(df2011.shape, df2012.shape, df.shape)
Explanation: I'll read both files and concatenate them.
End of explanation
df.head(3)
Explanation: Here's what the dataset looks like:
End of explanation
from scipy.stats import gaussian_kde
from empiricaldist import Pmf
def kde_from_sample(sample, qs):
Make a kernel density estimate from a sample.
kde = gaussian_kde(sample)
ps = kde(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
Explanation: The first two columns, Showcase 1 and Showcase 2, are the values of the showcases in dollars.
The next two columns are the bids the contestants made.
The last two columns are the differences between the actual values and the bids.
Kernel Density Estimation
This dataset contains the prices for 313 previous showcases, which we can think of as a sample from the population of possible prices.
We can use this sample to estimate the prior distribution of showcase prices. One way to do that is kernel density estimation (KDE), which uses the sample to estimate a smooth distribution. If you are not familiar with KDE, you can read about it here.
SciPy provides gaussian_kde, which takes a sample and returns an object that represents the estimated distribution.
The following function takes sample, makes a KDE, evaluates it at a given sequence of quantities, qs, and returns the result as a normalized PMF.
End of explanation
import numpy as np
qs = np.linspace(0, 80000, 81)
prior1 = kde_from_sample(df['Showcase 1'], qs)
Explanation: We can use it to estimate the distribution of values for Showcase 1:
End of explanation
from utils import decorate
def decorate_value(title=''):
decorate(xlabel='Showcase value ($)',
ylabel='PMF',
title=title)
prior1.plot(label='Prior 1')
decorate_value('Prior distribution of showcase value')
Explanation: Here's what it looks like:
End of explanation
# Solution
qs = np.linspace(0, 80000, 81)
prior2 = kde_from_sample(df['Showcase 2'], qs)
# Solution
prior1.plot(label='Prior 1')
prior2.plot(label='Prior 2')
decorate_value('Prior distributions of showcase value')
Explanation: Exercise: Use this function to make a Pmf that represents the prior distribution for Showcase 2, and plot it.
End of explanation
sample_diff1 = df['Bid 1'] - df['Showcase 1']
sample_diff2 = df['Bid 2'] - df['Showcase 2']
Explanation: Distribution of Error
To update these priors, we have to answer these questions:
What data should we consider and how should we quantify it?
Can we compute a likelihood function; that is, for each hypothetical price, can we compute the conditional likelihood of the data?
To answer these questions, I will model each contestant as a price-guessing instrument with known error characteristics.
In this model, when the contestant sees the prizes, they guess the price of each prize and add up the prices.
Let's call this total guess.
Now the question we have to answer is, "If the actual price is price, what is the likelihood that the contestant's guess would be guess?"
Equivalently, if we define error = guess - price, we can ask, "What is the likelihood that the contestant's guess is off by error?"
To answer this question, I'll use the historical data again.
For each showcase in the dataset, let's look at the difference between the contestant's bid and the actual price:
End of explanation
qs = np.linspace(-40000, 20000, 61)
kde_diff1 = kde_from_sample(sample_diff1, qs)
kde_diff2 = kde_from_sample(sample_diff2, qs)
Explanation: To visualize the distribution of these differences, we can use KDE again.
End of explanation
kde_diff1.plot(label='Diff 1', color='C8')
kde_diff2.plot(label='Diff 2', color='C4')
decorate(xlabel='Difference in value ($)',
ylabel='PMF',
title='Difference between bid and actual value')
Explanation: Here's what these distributions look like:
End of explanation
mean_diff1 = sample_diff1.mean()
std_diff1 = sample_diff1.std()
print(mean_diff1, std_diff1)
Explanation: It looks like the bids are too low more often than too high, which makes sense. Remember that under the rules of the game, you lose if you overbid, so contestants probably underbid to some degree deliberately.
For example, if they guess that the value of the showcase is \$40,000, they might bid \$36,000 to avoid going over.
It looks like these distributions are well modeled by a normal distribution, so we can summarize them with their mean and standard deviation.
For example, here is the mean and standard deviation of Diff for Player 1.
End of explanation
from scipy.stats import norm
error_dist1 = norm(0, std_diff1)
Explanation: Now we can use these differences to model the contestant's distribution of errors.
This step is a little tricky because we don't actually know the contestant's guesses; we only know what they bid.
So we have to make some assumptions:
I'll assume that contestants underbid because they are being strategic, and that on average their guesses are accurate. In other words, the mean of their errors is 0.
But I'll assume that the spread of the differences reflects the actual spread of their errors. So, I'll use the standard deviation of the differences as the standard deviation of their errors.
Based on these assumptions, I'll make a normal distribution with parameters 0 and std_diff1.
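In symbols, this error model is $error = guess - price \sim \mathcal{N}(0, \sigma)$, where $\sigma$ is estimated from the spread of the historical differences; the likelihood of a hypothetical price is then just the normal density evaluated at guess - price.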
SciPy provides an object called norm that represents a normal distribution with the given mean and standard deviation.
End of explanation
error = -100
error_dist1.pdf(error)
Explanation: The result is an object that provides pdf, which evaluates the probability density function of the normal distribution.
For example, here is the probability density of error=-100, based on the distribution of errors for Player 1.
End of explanation
guess1 = 23000
error1 = guess1 - prior1.qs
Explanation: By itself, this number doesn't mean very much, because probability densities are not probabilities. But they are proportional to probabilities, so we can use them as likelihoods in a Bayesian update, as we'll see in the next section.
Update
Suppose you are Player 1. You see the prizes in your showcase and your guess for the total price is \$23,000.
From your guess I will subtract away each hypothetical price in the prior distribution; the result is your error under each hypothesis.
End of explanation
likelihood1 = error_dist1.pdf(error1)
Explanation: Now suppose we know, based on past performance, that your estimation error is well modeled by error_dist1.
Under that assumption we can compute the likelihood of your error under each hypothesis.
End of explanation
posterior1 = prior1 * likelihood1
posterior1.normalize()
Explanation: The result is an array of likelihoods, which we can use to update the prior.
End of explanation
prior1.plot(color='C5', label='Prior 1')
posterior1.plot(color='C4', label='Posterior 1')
decorate_value('Prior and posterior distribution of showcase value')
Explanation: Here's what the posterior distribution looks like:
End of explanation
prior1.mean(), posterior1.mean()
Explanation: Because your initial guess is in the lower end of the range, the posterior distribution has shifted to the left. We can compute the posterior mean to see by how much.
End of explanation
# Solution
mean_diff2 = sample_diff2.mean()
std_diff2 = sample_diff2.std()
print(mean_diff2, std_diff2)
# Solution
error_dist2 = norm(0, std_diff2)
# Solution
guess2 = 38000
error2 = guess2 - prior2.qs
likelihood2 = error_dist2.pdf(error2)
# Solution
posterior2 = prior2 * likelihood2
posterior2.normalize()
# Solution
prior2.plot(color='C5', label='Prior 2')
posterior2.plot(color='C15', label='Posterior 2')
decorate_value('Prior and posterior distribution of showcase value')
# Solution
print(prior2.mean(), posterior2.mean())
Explanation: Before you saw the prizes, you expected to see a showcase with a value close to \$30,000.
After making a guess of \$23,000, you updated the prior distribution.
Based on the combination of the prior and your guess, you now expect the actual price to be about \$26,000.
Exercise: Now suppose you are Player 2. When you see your showcase, you guess that the total price is \$38,000.
Use diff2 to construct a normal distribution that represents the distribution of your estimation errors.
Compute the likelihood of your guess for each actual price and use it to update prior2.
Plot the posterior distribution and compute the posterior mean. Based on the prior and your guess, what do you expect the actual price of the showcase to be?
End of explanation
def prob_overbid(sample_diff):
Compute the probability of an overbid.
return np.mean(sample_diff > 0)
Explanation: Probability of Winning
Now that we have a posterior distribution for each player, let's think about strategy.
First, from the point of view of Player 1, let's compute the probability that Player 2 overbids. To keep it simple, I'll use only the performance of past players, ignoring the value of the showcase.
The following function takes a sequence of past bids and returns the fraction that overbid.
End of explanation
prob_overbid(sample_diff2)
Explanation: Here's an estimate for the probability that Player 2 overbids.
End of explanation
def prob_worse_than(diff, sample_diff):
Probability opponent diff is worse than given diff.
return np.mean(sample_diff < diff)
Explanation: Now suppose Player 1 underbids by \$5000.
What is the probability that Player 2 underbids by more?
The following function uses past performance to estimate the probability that a player underbids by more than a given amount, diff:
End of explanation
prob_worse_than(-5000, sample_diff2)
Explanation: Here's the probability that Player 2 underbids by more than \$5000.
End of explanation
prob_worse_than(-10000, sample_diff2)
Explanation: And here's the probability they underbid by more than \$10,000.
End of explanation
def compute_prob_win(diff, sample_diff):
Probability of winning for a given diff.
# if you overbid you lose
if diff > 0:
return 0
# if the opponent overbids, you win
p1 = prob_overbid(sample_diff)
    # or if their bid is worse than yours, you win
p2 = prob_worse_than(diff, sample_diff)
# p1 and p2 are mutually exclusive, so we can add them
return p1 + p2
Explanation: We can combine these functions to compute the probability that Player 1 wins, given the difference between their bid and the actual price:
End of explanation
compute_prob_win(-5000, sample_diff2)
Explanation: Here's the probability that you win, given that you underbid by \$5000.
End of explanation
xs = np.linspace(-30000, 5000, 121)
ys = [compute_prob_win(x, sample_diff2)
for x in xs]
Explanation: Now let's look at the probability of winning for a range of possible differences.
End of explanation
import matplotlib.pyplot as plt
plt.plot(xs, ys)
decorate(xlabel='Difference between bid and actual price ($)',
ylabel='Probability of winning',
title='Player 1')
Explanation: Here's what it looks like:
End of explanation
# Solution
prob_overbid(sample_diff1)
# Solution
prob_worse_than(-5000, sample_diff1)
# Solution
compute_prob_win(-5000, sample_diff1)
# Solution
xs = np.linspace(-30000, 5000, 121)
ys = [compute_prob_win(x, sample_diff1) for x in xs]
# Solution
plt.plot(xs, ys)
decorate(xlabel='Difference between bid and actual price ($)',
ylabel='Probability of winning',
title='Player 2')
Explanation: If you underbid by \$30,000, the chance of winning is about 30%, which is mostly the chance your opponent overbids.
As your bid gets closer to the actual price, your chance of winning approaches 1.
And, of course, if you overbid, you lose (even if your opponent also overbids).
Exercise: Run the same analysis from the point of view of Player 2. Using the sample of differences from Player 1, compute:
The probability that Player 1 overbids.
The probability that Player 1 underbids by more than \$5000.
The probability that Player 2 wins, given that they underbid by \$5000.
Then plot the probability that Player 2 wins for a range of possible differences between their bid and the actual price.
End of explanation
def total_prob_win(bid, posterior, sample_diff):
Computes the total probability of winning with a given bid.
bid: your bid
posterior: Pmf of showcase value
sample_diff: sequence of differences for the opponent
returns: probability of winning
total = 0
for price, prob in posterior.items():
diff = bid - price
total += prob * compute_prob_win(diff, sample_diff)
return total
Explanation: Decision Analysis
In the previous section we computed the probability of winning, given that we have underbid by a particular amount.
In reality the contestants don't know how much they have underbid by, because they don't know the actual price.
But they do have a posterior distribution that represents their beliefs about the actual price, and they can use that to estimate their probability of winning with a given bid.
The following function takes a possible bid, a posterior distribution of actual prices, and a sample of differences for the opponent.
It loops through the hypothetical prices in the posterior distribution and, for each price,
Computes the difference between the bid and the hypothetical price,
Computes the probability that the player wins, given that difference, and
Adds up the weighted sum of the probabilities, where the weights are the probabilities in the posterior distribution.
End of explanation
total_prob_win(25000, posterior1, sample_diff2)
Explanation: This loop implements the law of total probability:
$$P(win) = \sum_{price} P(price) ~ P(win ~|~ price)$$
Here's the probability that Player 1 wins, based on a bid of \$25,000 and the posterior distribution posterior1.
End of explanation
bids = posterior1.qs
probs = [total_prob_win(bid, posterior1, sample_diff2)
for bid in bids]
prob_win_series = pd.Series(probs, index=bids)
Explanation: Now we can loop through a series of possible bids and compute the probability of winning for each one.
End of explanation
prob_win_series.plot(label='Player 1', color='C1')
decorate(xlabel='Bid ($)',
ylabel='Probability of winning',
title='Optimal bid: probability of winning')
Explanation: Here are the results.
End of explanation
prob_win_series.idxmax()
prob_win_series.max()
Explanation: And here's the bid that maximizes Player 1's chance of winning.
End of explanation
# Solution
bids = posterior2.qs
probs = [total_prob_win(bid, posterior2, sample_diff1)
for bid in bids]
prob_win_series = pd.Series(probs, index=bids)
# Solution
prob_win_series.plot(label='Player 2', color='C1')
decorate(xlabel='Bid ($)',
ylabel='Probability of winning',
title='Optimal bid: probability of winning')
# Solution
prob_win_series.idxmax()
# Solution
prob_win_series.max()
Explanation: Recall that your guess was \$23,000.
Using your guess to compute the posterior distribution, the posterior mean is about \$26,000.
But the bid that maximizes your chance of winning is \$21,000.
Exercise: Do the same analysis for Player 2.
End of explanation
def compute_gain(bid, price, sample_diff):
Compute expected gain given a bid and actual price.
diff = bid - price
prob = compute_prob_win(diff, sample_diff)
# if you are within 250 dollars, you win both showcases
if -250 <= diff <= 0:
return 2 * price * prob
else:
return price * prob
Explanation: Maximizing Expected Gain
In the previous section we computed the bid that maximizes your chance of winning.
And if that's your goal, the bid we computed is optimal.
But winning isn't everything.
Remember that if your bid is off by \$250 or less, you win both showcases.
So it might be a good idea to increase your bid a little: it increases the chance you overbid and lose, but it also increases the chance of winning both showcases.
Let's see how that works out.
The following function computes how much you will win, on average, given your bid, the actual price, and a sample of errors for your opponent.
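Written out, the rule implemented by the function above is
$$gain(bid, price) = \begin{cases} 2 \cdot price \cdot P(win) & \text{if } -250 \le bid - price \le 0 \\ price \cdot P(win) & \text{otherwise,} \end{cases}$$
where $P(win)$ is computed from the difference $bid - price$ as in the previous section.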
End of explanation
compute_gain(30000, 35000, sample_diff2)
Explanation: For example, if the actual price is \$35000
and you bid \$30000,
you will win about \$23,600 worth of prizes on average, taking into account your probability of losing, winning one showcase, or winning both.
End of explanation
def expected_gain(bid, posterior, sample_diff):
Compute the expected gain of a given bid.
total = 0
for price, prob in posterior.items():
total += prob * compute_gain(bid, price, sample_diff)
return total
Explanation: In reality we don't know the actual price, but we have a posterior distribution that represents what we know about it.
By averaging over the prices and probabilities in the posterior distribution, we can compute the expected gain for a particular bid.
In this context, "expected" means the average over the possible showcase values, weighted by their probabilities.
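Written as a formula, in the same spirit as the law of total probability above, this is
$$E[\,gain(bid)\,] = \sum_{price} P(price) ~ gain(bid, price)$$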
End of explanation
expected_gain(21000, posterior1, sample_diff2)
Explanation: For the posterior we computed earlier, based on a guess of \$23,000, the expected gain for a bid of \$21,000 is about \$16,900.
End of explanation
bids = posterior1.qs
gains = [expected_gain(bid, posterior1, sample_diff2) for bid in bids]
expected_gain_series = pd.Series(gains, index=bids)
Explanation: But can we do any better?
To find out, we can loop through a range of bids and find the one that maximizes expected gain.
End of explanation
expected_gain_series.plot(label='Player 1', color='C2')
decorate(xlabel='Bid ($)',
ylabel='Expected gain ($)',
title='Optimal bid: expected gain')
Explanation: Here are the results.
End of explanation
expected_gain_series.idxmax()
Explanation: Here is the optimal bid.
End of explanation
expected_gain_series.max()
Explanation: With that bid, the expected gain is about \$17,400.
End of explanation
# Solution
bids = posterior2.qs
gains = [expected_gain(bid, posterior2, sample_diff1) for bid in bids]
expected_gain_series = pd.Series(gains, index=bids)
# Solution
expected_gain_series.plot(label='Player 2', color='C2')
decorate(xlabel='Bid ($)',
ylabel='Expected gain ($)',
title='Optimal bid: expected gain')
# Solution
expected_gain_series.idxmax()
# Solution
expected_gain_series.max()
Explanation: Recall that your initial guess was \$23,000.
The bid that maximizes the chance of winning is \$21,000.
And the bid that maximizes your expected gain is \$22,000.
Exercise: Do the same analysis for Player 2.
End of explanation
def print_cost(printed):
Compute print costs.
printed: integer number printed
if printed < 100:
return printed * 5
else:
return printed * 4.5
def total_income(printed, orders):
Compute income.
printed: integer number printed
orders: sequence of integer number of books ordered
sold = min(printed, np.sum(orders))
return sold * 10
def inventory_cost(printed, orders):
Compute inventory costs.
printed: integer number printed
orders: sequence of integer number of books ordered
excess = printed - np.sum(orders)
if excess > 0:
return excess * 2
else:
return 0
def out_of_stock_cost(printed, orders):
Compute out of stock costs.
printed: integer number printed
orders: sequence of integer number of books ordered
weeks = len(orders)
total_orders = np.cumsum(orders)
for i, total in enumerate(total_orders):
if total > printed:
return (weeks-i) * 50
return 0
def compute_profit(printed, orders):
Compute profit.
printed: integer number printed
orders: sequence of integer number of books ordered
return (total_income(printed, orders) -
print_cost(printed)-
out_of_stock_cost(printed, orders) -
inventory_cost(printed, orders))
Explanation: Summary
There's a lot going on in this chapter, so let's review the steps:
First we used KDE and data from past shows to estimate prior distributions for the values of the showcases.
Then we used bids from past shows to model the distribution of errors as a normal distribution.
We did a Bayesian update using the distribution of errors to compute the likelihood of the data.
We used the posterior distribution for the value of the showcase to compute the probability of winning for each possible bid, and identified the bid that maximizes the chance of winning.
Finally, we used probability of winning to compute the expected gain for each possible bid, and identified the bid that maximizes expected gain.
Incidentally, this example demonstrates the hazard of using the word "optimal" without specifying what you are optimizing.
The bid that maximizes the chance of winning is not generally the same as the bid that maximizes expected gain.
Discussion
When people discuss the pros and cons of Bayesian estimation, as contrasted with classical methods sometimes called "frequentist", they often claim that in many cases Bayesian methods and frequentist methods produce the same results.
In my opinion, this claim is mistaken because Bayesian and frequentist methods produce different kinds of results:
The result of frequentist methods is usually a single value that is considered to be the best estimate (by one of several criteria) or an interval that quantifies the precision of the estimate.
The result of Bayesian methods is a posterior distribution that represents all possible outcomes and their probabilities.
Granted, you can use the posterior distribution to choose a "best" estimate or compute an interval.
And in that case the result might be the same as the frequentist estimate.
But doing so discards useful information and, in my opinion, eliminates the primary benefit of Bayesian methods: the posterior distribution is more useful than a single estimate, or even an interval.
The example in this chapter demonstrates the point.
Using the entire posterior distribution, we can compute the bid that maximizes the probability of winning, or the bid that maximizes expected gain, even if the rules for computing the gain are complicated (and nonlinear).
With a single estimate or an interval, we can't do that, even if they are "optimal" in some sense.
In general, frequentist estimation provides little guidance for decision-making.
If you hear someone say that Bayesian and frequentist methods produce the same results, you can be confident that they don't understand Bayesian methods.
Exercises
Exercise: When I worked in Cambridge, Massachusetts, I usually took the subway to South Station and then a commuter train home to Needham. Because the subway was unpredictable, I left the office early enough that I could wait up to 15 minutes and still catch the commuter train.
When I got to the subway stop, there were usually about 10 people waiting on the platform. If there were fewer than that, I figured I just missed a train, so I expected to wait a little longer than usual. And if there were more than that, I expected another train soon.
But if there were a lot more than 10 passengers waiting, I inferred that something was wrong, and I expected a long wait. In that case, I might leave and take a taxi.
We can use Bayesian decision analysis to quantify the analysis I did intuitively. Given the number of passengers on the platform, how long should we expect to wait? And when should we give up and take a taxi?
My analysis of this problem is in redline.ipynb, which is in the repository for this book. Click here to run this notebook on Colab.
Exercise: This exercise is inspired by a true story. In 2001 I created Green Tea Press to publish my books, starting with Think Python. I ordered 100 copies from a short run printer and made the book available for sale through a distributor.
After the first week, the distributor reported that 12 copies were sold. Based on that report, I thought I would run out of copies in about 8 weeks, so I got ready to order more. My printer offered me a discount if I ordered more than 1000 copies, so I went a little crazy and ordered 2000.
A few days later, my mother called to tell me that her copies of the book had arrived. Surprised, I asked how many. She said ten.
It turned out I had sold only two books to non-relatives. And it took a lot longer than I expected to sell 2000 copies.
The details of this story are unique, but the general problem is something almost every retailer has to figure out. Based on past sales, how do you predict future sales? And based on those predictions, how do you decide how much to order and when?
Often the cost of a bad decision is complicated. If you place a lot of small orders rather than one big one, your costs are likely to be higher. If you run out of inventory, you might lose customers. And if you order too much, you have to pay the various costs of holding inventory.
So, let's solve a version of the problem I faced. It will take some work to set up the problem; the details are in the notebook for this chapter.
Suppose you start selling books online. During the first week you sell 10 copies (and let's assume that none of the customers are your mother). During the second week you sell 9 copies.
Assuming that the arrival of orders is a Poisson process, we can think of the weekly orders as samples from a Poisson distribution with an unknown rate.
We can use orders from past weeks to estimate the parameter of this distribution, generate a predictive distribution for future weeks, and compute the order size that maximizes expected profit.
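Concretely, the Poisson model says that if the average rate is $\lambda$, the probability of seeing $k$ orders in a given week is
$$P(k \mid \lambda) = \frac{\lambda^k \, e^{-\lambda}}{k!},$$
and this is the likelihood you can use to update your prior distribution for $\lambda$.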
Suppose the cost of printing the book is \$5 per copy,
But if you order 100 or more, it's \$4.50 per copy.
For every book you sell, you get \$10.
But if you run out of books before the end of 8 weeks, you lose \$50 in future sales for every week you are out of stock.
If you have books left over at the end of 8 weeks, you lose \$2 in inventory costs per extra book.
For example, suppose you get orders for 10 books per week, every week. If you order 60 books,
The total cost is \$300.
You sell all 60 books, so you make \$600.
But the book is out of stock for two weeks, so you lose \$100 in future sales.
In total, your profit is \$200.
If you order 100 books,
The total cost is \$450.
You sell 80 books, so you make \$800.
But you have 20 books left over at the end, so you lose \$40.
In total, your profit is \$310.
Combining these costs with your predictive distribution, how many books should you order to maximize your expected profit?
To get you started, the following functions compute profits and costs according to the specification of the problem:
End of explanation
always_10 = [10] * 8
always_10
Explanation: To test these functions, suppose we get exactly 10 orders per week for eight weeks:
End of explanation
compute_profit(60, always_10)
Explanation: If you print 60 books, your net profit is \$200, as in the example.
End of explanation
compute_profit(100, always_10)
Explanation: If you print 100 books, your net profit is \$310.
End of explanation
from scipy.stats import gamma
alpha = 9
qs = np.linspace(0, 25, 101)
ps = gamma.pdf(qs, alpha)
pmf = Pmf(ps, qs)
pmf.normalize()
pmf.mean()
Explanation: Of course, in the context of the problem you don't know how many books will be ordered in any given week. You don't even know the average rate of orders. However, given the data and some assumptions about the prior, you can compute the distribution of the rate of orders.
You'll have a chance to do that, but to demonstrate the decision analysis part of the problem, I'll start with the arbitrary assumption that order rates come from a gamma distribution with mean 9.
Here's a Pmf that represents this distribution.
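(With SciPy's default scale of 1, a gamma distribution with shape parameter $a$ has mean $a$, which is why gamma.pdf(qs, 9) in the cell above corresponds to a mean rate of 9 orders per week.)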
End of explanation
pmf.plot(color='C1')
decorate(xlabel=r'Book ordering rate ($\lambda$)',
ylabel='PMF')
Explanation: And here's what it looks like:
End of explanation
rates = pmf.choice(1000)
np.mean(rates)
Explanation: Now, we could generate a predictive distribution for the number of books ordered in a given week, but in this example we have to deal with a complicated cost function. In particular, out_of_stock_cost depends on the sequence of orders.
So, rather than generate a predictive distribution, I suggest we run simulations. I'll demonstrate the steps.
First, from our hypothetical distribution of rates, we can draw a random sample of 1000 values.
End of explanation
np.random.seed(17)
order_array = np.random.poisson(rates, size=(8, 1000)).transpose()
order_array[:5, :]
Explanation: For each possible rate, we can generate a sequence of 8 orders.
End of explanation
def compute_expected_profits(printed, order_array):
Compute profits averaged over a sample of orders.
printed: number printed
order_array: one row per sample, one column per week
profits = [compute_profit(printed, orders)
for orders in order_array]
return np.mean(profits)
Explanation: Each row of this array is a hypothetical sequence of orders based on a different hypothetical order rate.
Now, if you tell me how many books you printed, I can compute your expected profits, averaged over these 1000 possible sequences.
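In other words, the function above computes a Monte Carlo estimate of the expected profit,
$$E[\,profit(printed)\,] \approx \frac{1}{1000} \sum_{i=1}^{1000} profit(printed, orders_i),$$
where each $orders_i$ is one of the simulated 8-week order sequences.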
End of explanation
compute_expected_profits(70, order_array)
compute_expected_profits(80, order_array)
compute_expected_profits(90, order_array)
Explanation: For example, here are the expected profits if you order 70, 80, or 90 books.
End of explanation
printed_array = np.arange(70, 110)
t = [compute_expected_profits(printed, order_array)
for printed in printed_array]
expected_profits = pd.Series(t, printed_array)
expected_profits.plot(label='')
decorate(xlabel='Number of books printed',
ylabel='Expected profit ($)')
Explanation: Now, let's sweep through a range of values and compute expected profits as a function of the number of books you print.
End of explanation
expected_profits.idxmax(), expected_profits.max()
Explanation: Here is the optimal order and the expected profit.
End of explanation
# Solution
# For a prior I chose a log-uniform distribution;
# that is, a distribution that is uniform in log-space
# from 1 to 100 books per week.
qs = np.logspace(0, 2, 101)
prior = Pmf(1, qs)
prior.normalize()
# Solution
# Here's the CDF of the prior
prior.make_cdf().plot(color='C1')
decorate(xlabel=r'Book ordering rate ($\lambda$)',
ylabel='CDF')
# Solution
# Here's a function that updates the distribution of lambda
# based on one week of orders
from scipy.stats import poisson
def update_book(pmf, data):
Update book ordering rate.
pmf: Pmf of book ordering rates
data: observed number of orders in one week
k = data
lams = pmf.index
likelihood = poisson.pmf(k, lams)
pmf *= likelihood
pmf.normalize()
# Solution
# Here's the update after week 1.
posterior1 = prior.copy()
update_book(posterior1, 10)
# Solution
# And the update after week 2.
posterior2 = posterior1.copy()
update_book(posterior2, 9)
# Solution
prior.mean(), posterior1.mean(), posterior2.mean()
# Solution
# Now we can generate a sample of 1000 values from the posterior
rates = posterior2.choice(1000)
np.mean(rates)
# Solution
# And we can generate a sequence of 8 weeks for each value
order_array = np.random.poisson(rates, size=(8, 1000)).transpose()
order_array[:5, :]
# Solution
# Here are the expected profits for each possible order
printed_array = np.arange(70, 110)
t = [compute_expected_profits(printed, order_array)
for printed in printed_array]
expected_profits = pd.Series(t, printed_array)
# Solution
# And here's what they look like.
expected_profits.plot(label='')
decorate(xlabel='Number of books printed',
ylabel='Expected profit ($)')
# Solution
# Here's the optimal order.
expected_profits.idxmax()
Explanation: Now it's your turn. Choose a prior that you think is reasonable, update it with the data you are given, and then use the posterior distribution to do the analysis I just demonstrated.
End of explanation |
6,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare photometry in the new Stripe82 catalog
to Gaia DR2 photometry and derive corrections for
gray systematics using Gmag photometry
input
Step1: <a id='dataReading'></a>
Define paths and catalogs
Step2: Simple positional match using ra/dec
Step3: apply standard cuts as in old catalog
Step4: now match to Gaia DR2...
Step5: Select good matches and compare both catalogs to Gaia DR2
Step6: Final Figures for the Paper
with Karun's 2D histogram implementation
Step7: Final Gmag-based recalibration
Recalibrate R.A. residuals
Step8: Recalibrate Dec residuals
Step9: Now save correction arrays, then apply to original file, and then test
Step10: paper plot showing the jump in Gaia Gmag
Step11: TEMP code for color corrections to go from 3.1 to 3.2 and 3.3 | Python Code:
%matplotlib inline
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.table import hstack
import matplotlib.pyplot as plt
import numpy as np
from astroML.plotting import hist
# for astroML installation see https://www.astroml.org/user_guide/installation.html
## automatically reload any modules read below that might have changed (e.g. plots)
%load_ext autoreload
%autoreload 2
# importing ZI and KT tools:
import ZItools as zit
import KTtools as ktt
Explanation: Compare photometry in the new Stripe82 catalog
to Gaia DR2 photometry and derive corrections for
gray systematics using Gmag photometry
input: N2020_stripe82calibStars.dat
output: stripe82calibStars_v3.1.dat
files with RA/Dec corrections: ZPcorrectionsRA_v3.1_final.dat and ZPcorrectionsDec_v3.1_final.dat
makes paper plots:
GmagCorrection_RA_Hess.png GmagCorrection_Dec_Hess.png
GmagCorrectionTest_Gmag_Hess.png
End of explanation
ZIdataDir = "/Users/ivezic/Work/Science/CalibrationV2/SDSS_SSC/Data"
# the original SDSS catalog from 2007
sdssOldCat = ZIdataDir + "/" + "stripe82calibStars_v2.6.dat"
# INPUT: Karun's new catalog from 2020
sdssNewCatIn = ZIdataDir + "/" + "N2020_stripe82calibStars.dat"
readFormat = 'csv'
# OUTPUT: with Gmag-based gray corrections
sdssNewCatOut = ZIdataDir + "/" + "stripe82calibStars_v3.1.dat"
# Gaia DR2
GaiaDR2Cat = ZIdataDir + "/" + "Stripe82_GaiaDR2.dat"
# Gaia DR2 with BP and RP data
GaiaDR2CatBR = ZIdataDir + "/" + "Stripe82_GaiaDR2_BPRP.dat"
# both new and old files use identical data structure
colnamesSDSS = ['calib_fla', 'ra', 'dec', 'raRMS', 'decRMS', 'nEpochs', 'AR_val',
'u_Nobs', 'u_mMed', 'u_mMean', 'u_mErr', 'u_rms_scatt', 'u_chi2',
'g_Nobs', 'g_mMed', 'g_mMean', 'g_mErr', 'g_rms_scatt', 'g_chi2',
'r_Nobs', 'r_mMed', 'r_mMean', 'r_mErr', 'r_rms_scatt', 'r_chi2',
'i_Nobs', 'i_mMed', 'i_mMean', 'i_mErr', 'i_rms_scatt', 'i_chi2',
'z_Nobs', 'z_mMed', 'z_mMean', 'z_mErr', 'z_rms_scatt', 'z_chi2']
%%time
# old
sdssOld = Table.read(sdssOldCat, format='ascii', names=colnamesSDSS)
np.size(sdssOld)
%%time
# new
sdssNew = Table.read(sdssNewCatIn, format=readFormat, names=colnamesSDSS)
np.size(sdssNew)
Explanation: <a id='dataReading'></a>
Define paths and catalogs
End of explanation
sdssOld_coords = SkyCoord(ra = sdssOld['ra']*u.degree, dec= sdssOld['dec']*u.degree)
sdssNew_coords = SkyCoord(ra = sdssNew['ra']*u.degree, dec= sdssNew['dec']*u.degree)
# this is matching sdssNew to sdssOld, so that indices are into sdssNew catalog
# makes sense in this case since the sdssOld catalog is (a little bit) bigger
# than sdssNew (1006849 vs 1005470)
idx, d2d, d3d = sdssNew_coords.match_to_catalog_sky(sdssOld_coords)
# object separation is an object with units,
# I add that as a column so that one can
# select based on separation to the nearest matching object
new_old = hstack([sdssNew, sdssOld[idx]], table_names = ['new', 'old'])
new_old['sep_2d_arcsec'] = d2d.arcsec
# good matches between the old and new catalogs
MAX_DISTANCE_ARCSEC = 0.5
sdss = new_old[(new_old['sep_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]
print(np.size(sdss))
Explanation: Simple positional match using ra/dec
End of explanation
mOK3 = sdss[sdss['ra_new']<1]
mOK3 = zit.selectCatalog(sdss, mOK3)
print(996147/1006849)
print(993774/1006849)
print(991472/1006849)
Explanation: apply standard cuts as in old catalog:
End of explanation
colnamesGaia = ['ra', 'dec', 'nObs', 'Gmag', 'flux', 'fluxErr', 'pmra', 'pmdec']
colnamesGaia = colnamesGaia + ['BPmag', 'BPeI', 'RPmag', 'RPeI', 'BRef']
gaia = Table.read(GaiaDR2CatBR, format='ascii', names=colnamesGaia)
gaia['raG'] = gaia['ra']
gaia['decG'] = gaia['dec']
gaia['GmagErr'] = gaia['fluxErr'] / gaia['flux']
gaia['BR'] = gaia['BPmag'] - gaia['RPmag']
gaia['GBP'] = gaia['Gmag'] - gaia['BPmag']
gaia['GRP'] = gaia['Gmag'] - gaia['RPmag']
sdss_coords = SkyCoord(ra = sdss['ra_old']*u.degree, dec= sdss['dec_old']*u.degree)
gaia_coords = SkyCoord(ra = gaia['raG']*u.degree, dec= gaia['decG']*u.degree)
# this is matching gaia to sdss, so that indices are into sdss catalog
# makes sense in this case since the sdss catalog is bigger than gaia
idxG, d2dG, d3dG = gaia_coords.match_to_catalog_sky(sdss_coords)
# object separation is an object with units,
# I add that as a column so that one can
# select based on separation to the nearest matching object
gaia_sdss = hstack([gaia, sdss[idxG]], table_names = ['gaia', 'sdss'])
gaia_sdss['sepSG_2d_arcsec'] = d2dG.arcsec
### code for generating new quantities, such as dra, ddec, colors, differences in mags, etc
def derivedColumns(matches):
matches['dra'] = (matches['ra_new']-matches['ra_old'])*3600
matches['ddec'] = (matches['dec_new']-matches['dec_old'])*3600
matches['ra'] = matches['ra_old']
ra = matches['ra']
matches['raW'] = np.where(ra > 180, ra-360, ra)
matches['dec'] = matches['dec_old']
matches['u'] = matches['u_mMed_old']
matches['g'] = matches['g_mMed_old']
matches['r'] = matches['r_mMed_old']
matches['i'] = matches['i_mMed_old']
matches['z'] = matches['z_mMed_old']
matches['ug'] = matches['u_mMed_old'] - matches['g_mMed_old']
matches['gr'] = matches['g_mMed_old'] - matches['r_mMed_old']
matches['ri'] = matches['r_mMed_old'] - matches['i_mMed_old']
matches['gi'] = matches['g_mMed_old'] - matches['i_mMed_old']
matches['du'] = matches['u_mMed_old'] - matches['u_mMed_new']
matches['dg'] = matches['g_mMed_old'] - matches['g_mMed_new']
matches['dr'] = matches['r_mMed_old'] - matches['r_mMed_new']
matches['di'] = matches['i_mMed_old'] - matches['i_mMed_new']
matches['dz'] = matches['z_mMed_old'] - matches['z_mMed_new']
# Gaia
matches['draGold'] = -3600*(matches['ra_old'] - matches['raG'])
matches['draGnew'] = -3600*(matches['ra_new'] - matches['raG'])
matches['ddecGold'] = -3600*(matches['dec_old'] - matches['decG'])
matches['ddecGnew'] = -3600*(matches['dec_new'] - matches['decG'])
# photometric
matches['gGr_old'] = matches['Gmag'] - matches['r_mMed_old']
matches['gGr_new'] = matches['Gmag'] - matches['r_mMed_new']
matches['gRPr_new'] = matches['RPmag'] - matches['r_mMed_new']
return
derivedColumns(gaia_sdss)
Explanation: now match to Gaia DR2...
End of explanation
# doGaiaAll(mOK)
def doGaiaGmagCorrection(d, Cstr, Gmax=20.0, yMax=0.03):
# Cstr = 'gGr_old' or 'gGr_new'
gi = d['gi']
Gr = d[Cstr]
Gmag = d['Gmag']
zit.qpBM(d, 'gi', -1, 4.5, Cstr, -2, 1.0, 56)
xBin, nPts, medianBin, sigGbin = zit.fitMedians(gi, Gr, -0.7, 4.0, 47, 0)
data = np.array([xBin, medianBin, sigGbin])
Ndata = xBin.size
### HERE WE ARE FITTING 7-th ORDER POLYNOMIAL TO Gmag-rSDSS vs. g-i ###
# get best-fit parameters
thetaCloc = zit.best_theta(data,7)
# generate best fit lines on a fine grid
xfit = np.linspace(-1.1, 4.3, 1000)
yfit = zit.polynomial_fit(thetaCloc, xfit)
## added "Poly" because switched to piecewise linear interpolation below
d['gGrFitPoly'] = zit.polynomial_fit(thetaCloc, gi)
d['dgGrPoly'] = d[Cstr] - d['gGrFitPoly']
### PIECEWISE LINEAR INTERPOLATION (AS FOR ALL OTHER COLORS AND SURVEYS)
d['gGrFit'] = np.interp(gi, xBin, medianBin)
d['dgGr'] = d[Cstr] - d['gGrFit']
# SELECT FOR RECALIBRATION wrt RA and Dec
giMin = 0.4
giMax = 3.0
Dc = d[(d['gi']>giMin)&(d['gi']<giMax)]
print('N before and after color cut:', np.size(d), np.size(Dc))
DcB = Dc[(Dc['Gmag']>14.5)&(Dc['Gmag']<Gmax)]
DcB['GrResid'] = DcB['dgGr'] - np.median(DcB['dgGr'])
zit.printStats(DcB['dgGr'])
DcBok = DcB[np.abs(DcB['dgGr'])<0.1]
print(np.size(DcB), np.size(DcBok))
zit.qpBM(DcBok, 'Gmag', 14.5, Gmax, 'GrResid', -1*yMax, yMax, 56)
zit.qpBM(DcBok, 'dec', -1.3, 1.3, 'GrResid', -1*yMax, yMax, 126)
zit.qpBM(DcBok, 'raW', -51.5, 60, 'GrResid', -1*yMax, yMax, 112)
return thetaCloc, DcBok
## first limit astrometric distance and
## require at least 4 epochs as in the old catalog
MAX_DISTANCE_ARCSEC = 0.5
m1 = gaia_sdss[(gaia_sdss['sepSG_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]
a1 = m1['g_Nobs_new']
a2 = m1['r_Nobs_new']
a3 = m1['i_Nobs_new']
mOK = m1[(a1>3)&(a2>3)&(a3>3)]
print(len(new_old))
print(len(m1))
print(len(mOK))
def plotAstro2Ddiagrams(d):
### plots
plotNameRoot = 'astroVSpm_RA_pm'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'pmra', "Xmin":-40, "Xmax":40, "Xlabel":'R.A. proper motion (mas/yr)', \
"Ystr":'draGnew', "Ymin":-0.5, "Ymax":0.5, "Ylabel":'raw SDSS R.A. - Gaia R.A. (arcsec)', \
"XminBin":-35, "XmaxBin":35, "nBin":70, \
"plotName":plotName, "Nsigma":0, "offset":-0.1, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
# need to fit draGnew vs. pmra and correct for the mean trend, then plot vs. r mag
pmra = d['pmra']
draGnew = d['draGnew']
xBin, nPts, medianBin, sigGbin = zit.fitMedians(pmra, draGnew, -60, 60, 120, 0)
### PIECEWISE LINEAR INTERPOLATION
d['draGnewFit'] = np.interp(d['pmra'], xBin, medianBin)
draCorr = d['draGnew'] - d['draGnewFit']
draCorrOK = np.where(np.abs(draCorr) < 0.25, draCorr, 0)
d['draGnewCorr'] = draCorrOK
plotNameRoot = 'astroVSpm_RA_r'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'r_mMed_new', "Xmin":14, "Xmax":21, "Xlabel":'SDSS r magnitude', \
"Ystr":'draGnewCorr', "Ymin":-0.12, "Ymax":0.12, "Ylabel":'corr. SDSS R.A. - Gaia R.A. (arcsec)', \
"XminBin":14, "XmaxBin":21, "nBin":30, \
"plotName":plotName, "Nsigma":0, "offset":0.050, "symbSize":0.05}
kw["nBinX"] = 30
kw["nBinY"] = 24
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
plotNameRoot = 'astroVSpm_Dec_pm'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'pmdec', "Xmin":-40, "Xmax":40, "Xlabel":'Dec. proper motion (mas/yr)', \
"Ystr":'ddecGnew', "Ymin":-0.5, "Ymax":0.5, "Ylabel":'raw SDSS Dec. - Gaia Dec. (arcsec)', \
"XminBin":-35, "XmaxBin":35, "nBin":70, \
"plotName":plotName, "Nsigma":0, "offset":-0.1, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
### produce astrometric plots showing correlation with proper motions
plotAstro2Ddiagrams(mOK)
# print(np.std(mOK['draGnew']), np.std(mOK['ddecGnew']))
#mOK
x = mOK['draGnewCorr']
xOK = x[np.abs(x)<0.25]
print(np.std(xOK), zit.sigG(xOK))
zit.qpBM(mOK, 'pmra', -50, 50, 'draGnew', -0.6, 0.6, 50)
zit.qpBM(mOK, 'pmdec', -50, 50, 'ddecGnew', -0.6, 0.6, 50)
theta, mOKc = doGaiaGmagCorrection(mOK, 'gGr_new')
thetaLoc = theta
## for zero point calibration, in addition to color cut in doGaiaAll, take 16 < G < 19.5
mOKcB = mOKc[(mOKc['Gmag']>16)&(mOKc['Gmag']<19.5)]
mOKcB['GrResid'] = mOKcB['dgGr'] - np.median(mOKcB['dgGr'])
mOKcBok = mOKcB[np.abs(mOKcB['dgGr'])<0.1]
print(np.size(mOKc), np.size(mOKcB), np.size(mOKcBok))
print(np.std(mOKcBok['GrResid']), zit.sigG(mOKcBok['GrResid']))
zit.qpBM(mOKcBok, 'dec', -1.3, 1.3, 'GrResid', -0.03, 0.03, 260)
zit.qpBM(mOKcBok, 'raW', -51.5, 60, 'GrResid', -0.03, 0.03, 112)
Explanation: Select good matches and compare both catalogs to Gaia DR2
End of explanation
def plotGmag2Ddiagrams(d):
### plots
plotNameRoot = 'GrVSgi'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'gi', "Xmin":0.0, "Xmax":3.5, "Xlabel":'SDSS g-i', \
"Ystr":'gGr_new', "Ymin":-1.25, "Ymax":0.25, "Ylabel":'Gaia Gmag - SDSS r', \
"XminBin":-0.5, "XmaxBin":4.0, "nBin":90, \
"plotName":plotName, "Nsigma":3, "offset":0.0, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
def plotGmag2DdiagramsX(d, kw):
    # NOTE: plotNameRoot, Ylabel and goodC were left undefined in this helper;
    # derive them here from kw and d, assuming the same conventions as
    # plotGmagCorrections below, so that the function is runnable
    plotNameRoot = kw['plotNameRoot']
    Ylabel = 'residuals for (Gmag$_{SDSS}$ - Gmag$_{GaiaDR2}$) '
    goodC = d[np.abs(d[kw['Ystr']]) < 0.1]
    # Gaia G
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_Gmag.png'
kwOC = {"Xstr":'Gmag', "Xmin":14.3, "Xmax":21.01, "Xlabel":'Gaia G (mag)', \
"Ystr":kw['Ystr'], "Ymin":-0.06, "Ymax":0.06, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.0, "nBin":130, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Gmag_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 130
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
def plotGmagCorrections(d, kw):
### REDEFINE residuals to correspond to "SDSS-others", as other cases
d['redef'] = -1*d[kw['Ystr']]
kw['Ystr'] = 'redef'
goodC = d[np.abs(d['redef'])<0.1]
### plots
plotNameRoot = kw['plotNameRoot']
# RA
print(' stats for RA binning medians:')
plotName = plotNameRoot + '_RA.png'
Ylabel = 'residuals for (Gmag$_{SDSS}$ - Gmag$_{GaiaDR2}$) '
kwOC = {"Xstr":'raW', "Xmin":-52, "Xmax":60.5, "Xlabel":'R.A. (deg)', \
"Ystr":kw['Ystr'], "Ymin":-0.07, "Ymax":0.07, "Ylabel":Ylabel, \
"XminBin":-51.5, "XmaxBin":60, "nBin":112, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_RA_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 112
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
# Dec
print('-----------')
print(' stats for Dec binning medians:')
plotName = plotNameRoot + '_Dec.png'
kwOC = {"Xstr":'dec', "Xmin":-1.3, "Xmax":1.3, "Xlabel":'Declination (deg)', \
"Ystr":kw['Ystr'], "Ymin":-0.07, "Ymax":0.07, "Ylabel":Ylabel, \
"XminBin":-1.266, "XmaxBin":1.264, "nBin":252, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Dec_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 252
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
# Gaia G
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_Gmag.png'
kwOC = {"Xstr":'Gmag', "Xmin":14.3, "Xmax":21.01, "Xlabel":'Gaia G (mag)', \
"Ystr":kw['Ystr'], "Ymin":-0.06, "Ymax":0.06, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.0, "nBin":130, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Gmag_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 130
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
mOK['GrResid'] = mOK['dgGr'] - np.median(mOK['dgGr']) + 0.006
mOKok = mOK[np.abs(mOK['dgGr'])<0.1]
print(np.size(mOK), np.size(mOKok))
keywords = {"Ystr":'GrResid', "plotNameRoot":'GmagCorrection', "symbSize":0.05}
plotGmagCorrections(mOKok, keywords)
!cp GmagCorrection_Gmag_Hess.png GmagCorrectionTest_Gmag_Hess.png
mOKokX = mOKok[(mOKok['Gmag']>15)&(mOKok['Gmag']<15.5)]
print(np.median(mOKokX['GrResid']))
mOKokX = mOKok[(mOKok['Gmag']>16)&(mOKok['Gmag']<16.2)]
print(np.median(mOKokX['GrResid']))
keywords = {"Ystr":'GrResid', "plotNameRoot":'GmagCorrection', "symbSize":0.05}
plotGmagCorrections(mOKcBok, keywords)
# for calibration: giMin = 0.4 & giMax = 3.0
mOKB = mOK[(mOK['Gmag']>16)&(mOK['Gmag']<19.5)]
plotGmag2Ddiagrams(mOKB)
mOKB
Explanation: Final Figures for the Paper
with Karun's 2D histogram implementation
End of explanation
RAbin, RAnPts, RAmedianBin, RAsigGbin = zit.fitMedians(mOKcBok['raW'], mOKcBok['GrResid'], -51.5, 60.0, 112, 1)
Explanation: Final Gmag-based recalibration
Recalibrate R.A. residuals
End of explanation
decOK = mOKcBok['dec_new']
GrResid = mOKcBok['GrResid']
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(decOK, GrResid, s=0.01, c='blue')
ax.set_xlim(-1.3,1.3)
ax.set_ylim(-0.06,0.06)
ax.set_ylim(-0.04,0.04)
ax.set_xlabel('Declination (deg)')
ax.set_ylabel('Gaia G - SDSS G')
xBin, nPts, medianBin, sigGbin = zit.fitMedians(decOK, GrResid, -1.266, 1.264, 252, 0)
ax.scatter(xBin, medianBin, s=30.0, c='black', alpha=0.9)
ax.scatter(xBin, medianBin, s=15.0, c='yellow', alpha=0.5)
TwoSigP = medianBin + 2*sigGbin
TwoSigM = medianBin - 2*sigGbin
ax.plot(xBin, TwoSigP, c='yellow')
ax.plot(xBin, TwoSigM, c='yellow')
xL = np.linspace(-100,100)
ax.plot(xL, 0*xL+0.00, c='yellow')
ax.plot(xL, 0*xL+0.01, c='red')
ax.plot(xL, 0*xL-0.01, c='red')
dCleft = -1.3
ax.plot(0*xL+dCleft, xL, c='red')
alltheta = []
for i in range(0,12):
decCol = -1.2655 + (i+1)*0.2109
ax.plot(0*xL+decCol, xL, c='red')
xR = xBin[(xBin>dCleft)&(xBin<decCol)]
yR = medianBin[(xBin>dCleft)&(xBin<decCol)]
dyR = sigGbin[(xBin>dCleft)&(xBin<decCol)]
data = np.array([xR, yR, dyR])
theta2 = zit.best_theta(data,5)
alltheta.append(theta2)
yfit = zit.polynomial_fit(theta2, xR)
ax.plot(xR, yfit, c='cyan', lw=2)
dCleft = decCol
rrr = yR - yfit
# print(i, np.median(rrr), np.std(rrr)) # 2 milimag scatter
# print(i, theta2)
plt.savefig('GmagDecCorrections.png')
# let's now correct all mags with this correction
thetaRecalib = alltheta
decLeft = -1.3
for i in range(0,12):
decRight = -1.2655 + (i+1)*0.2109
decArr = np.linspace(decLeft, decRight, 100)
thetaBin = thetaRecalib[i]
ZPfit = zit.polynomial_fit(thetaBin, decArr)
if (i==0):
decCorrGrid = decArr
ZPcorr = ZPfit
else:
decCorrGrid = np.concatenate([decCorrGrid, decArr])
ZPcorr = np.concatenate([ZPcorr, ZPfit])
decLeft = decRight
mOKtest = mOK[mOK['r_Nobs_new']>3]
# Dec correction
decGrid2correct = mOKtest['dec_new']
ZPcorrectionsDec = np.interp(decGrid2correct, decCorrGrid, ZPcorr)
# RA correction
raWGrid2correct = mOKtest['raW']
ZPcorrectionsRA = np.interp(raWGrid2correct, RAbin, RAmedianBin)
print(np.std(ZPcorrectionsDec), np.std(ZPcorrectionsRA))
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(decGrid2correct, ZPcorrectionsDec, s=0.01, c='blue')
ax.plot(decCorrGrid, ZPcorr, c='red')
ax.set_xlim(-1.3,1.3)
ax.set_ylim(-0.02,0.02)
ax.set_xlabel('Declination (deg)')
ax.set_ylabel('Correction')
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(raWGrid2correct, ZPcorrectionsRA, s=0.01, c='blue')
ax.plot(RAbin, RAmedianBin, c='red')
ax.set_xlim(-52,61)
ax.set_ylim(-0.05,0.05)
ax.set_xlabel('RA (deg)')
ax.set_ylabel('Correction')
np.min(ZPcorrectionsRA)
mOKtest['ZPcorrectionsRA'] = ZPcorrectionsRA
mOKtest['ZPcorrectionsDec'] = ZPcorrectionsDec
mOKtest['r_mMed_new'] = mOKtest['r_mMed_new'] + mOKtest['ZPcorrectionsRA'] + mOKtest['ZPcorrectionsDec']
mOKtest['gGr_new'] = mOKtest['Gmag'] - mOKtest['r_mMed_new']
mOKtest['gGrFit'] = zit.polynomial_fit(thetaCloc, mOKtest['gi'])
mOKtest['dgGr'] = mOKtest['gGr_new'] - mOKtest['gGrFit']
d = mOKtest
gi = d['gi']
Gr = d['gGr_new']
Gmag = d['Gmag']
zit.qpBM(d, 'gi', -1, 4.5, 'gGr_new', -2, 1.0, 56)
thetaCtest, DcBokTest_new = doGaiaGmagCorrection(mOKtest, 'gGr_new')
keywords = {"Ystr":'gGr_new', "plotNameRoot":'GmagCorrectionTest', "symbSize":0.05}
mOKtest2 = mOKtest[(mOKtest['gi']>0.4)&(mOKtest['gi']<3.0)]
x = mOKtest2[(mOKtest2['Gmag']>14.5)&(mOKtest2['Gmag']<15.5)]
mOKtest2['gGr_new'] = mOKtest2['gGr_new'] - np.median(x['gGr_new'])
plotGmagCorrections(mOKtest2, keywords)
Explanation: Recalibrate Dec residuals
End of explanation
# final refers to the July 2020 analysis, before the paper submission
#np.savetxt('ZPcorrectionsRA_v3.1_final.dat', (RAbin, RAmedianBin))
#np.savetxt('ZPcorrectionsDec_v3.1_final.dat', (decCorrGrid, ZPcorr))
sdssOut = sdss[sdss['ra_new']<1]
sdssOut = zit.selectCatalog(sdss, sdssOut)
sdssOut.sort('calib_fla_new')
# read back gray zero point recalibration files
zpRAgrid, zpRA = np.loadtxt('ZPcorrectionsRA_v3.1_final.dat')
zpDecgrid, zpDec = np.loadtxt('ZPcorrectionsDec_v3.1_final.dat')
sdssOut
# Dec correction
decGrid2correct = sdssOut['dec_new']
ZPcorrectionsDec = np.interp(decGrid2correct, zpDecgrid, zpDec)
# RA correction
ra = sdssOut['ra_new']
raWGrid2correct = np.where(ra > 180, ra-360, ra)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpRAgrid, zpRA)
print('gray std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
for b in ('u', 'g', 'r', 'i', 'z'):
for mtype in ('_mMed_new', '_mMean_new'):
mstr = b + mtype
# applying here gray corrections
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
SSCindexRoot = 'CALIBSTARS_'
outFile = ZIdataDir + "/" + "stripe82calibStars_v3.1_noheader_final.dat"
newSSC = open(outFile,'w')
df = sdssOut
Ngood = 0
for i in range(0, np.size(df)):
Ngood += 1
NoldCat = df['calib_fla_new'][i]
strNo = f'{Ngood:07}'
SSCindex = SSCindexRoot + strNo
SSCrow = zit.getSSCentry(df, i)
zit.SSCentryToOutFileRow(SSCrow, SSCindex, newSSC)
newSSC.close()
print(Ngood, 'rows in file', outFile)
Explanation: Now save correction arrays, then apply to original file, and then test
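(The save step referred to here corresponds to the commented-out np.savetxt calls in the cell above; a minimal sketch of what they do, assuming the arrays computed earlier:)
np.savetxt('ZPcorrectionsRA_v3.1_final.dat', (RAbin, RAmedianBin))
np.savetxt('ZPcorrectionsDec_v3.1_final.dat', (decCorrGrid, ZPcorr))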
End of explanation
np.size(zpDec)
Explanation: paper plot showing the jump in Gaia Gmag
End of explanation
### TODO: figure out where ZPcorrections2_rz_Dec.dat etc. were produced ...
## color corrections
for mtype in ('_mMed', '_mMean'):
## u band from u-r color
color = 'ur'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
print('u-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
mstr = 'u' + mtype
sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec
## g band from g-r color
color = 'gr'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
print('g-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
mstr = 'g' + mtype
sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec
## i band from r-i color
color = 'ri'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
mstr = 'i' + mtype
print('r-i std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
## z band from r-z color
color = 'rz'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
mstr = 'z' + mtype
print('r-z std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
Explanation: TEMP code for color corrections to go from 3.1 to 3.2 and 3.3
End of explanation |
6,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Redis with Python
In this notebook, we will go thourhg a similar set of commands as those described in the Redis Data Types introduction but using the redis-py Python client from a Jupyter notebook.
Remember that Redis is a server, and it can be access in a distributed way by multiple clients in an Enterprise System. This notebook acts as a single client, and is just for educative purposes. The full power of Redis comes when used in an enterprise architecture!
While working with Redis with Python, you will notice that many operations on Redis data types are also available for the Python data types that we get as a result of some operations (e.g. lists). However we have to keep in mind that they operate at very different levels. Using the Redis server operations, and not the local Python equivalents, is the way to go for enterprise applications in order to keep our system scalability and availability (i.e. large sets, concurrent access, etc).
Starting up
Interacting with a running Redis server from a Jupyter notebook using Python is as easy as installing redis-py for your Python distribution and then import the module as follows.
Step1: Then we can obtain a reference to our server. Asuming that we are running our Redis server and our Jupyter notebook server in the same host, with the default Redis server port, we can do as follows.
Step2: Getting and settings Redis Keys
Now we can use r to send Redis commands. For example, we can SET the value of the key my.key as follows.
Step3: In order to check our recently set key, we can use GET and pass the name of the key.
Step4: We can also check for the existence of a given key.
Step5: If we want to set multiple keys at once, we can use MSET and pass a Python dictionary as follows.
Step6: We can also increment the value of a given key in an atomic way.
Step7: Notice how the resulting type has been changed to integer!
Setting keys to expire
With redis-py we can also set keys with limited time to live.
Step8: Let's wait for a couple of seconds for the key to expire and check again.
Step9: Finally, del is a reserved keyword in the Python syntax. Therefore redis-py uses 'delete' instead.
Step10: Redis Lists
Redis lists are linked lists of keys. We can insert and remove elements from both ends.
The LPUSH command adds a new element into a list, on the left.
Step11: The RPUSH command adds a new element into a list, on the right.
Step12: Finally the LRANGE command extracts ranges of elements from lists.
Step13: The result is returned as a Python list. We can use LLEN to check a Redis list lenght without requiring to store the result of lrange and then use Python's len.
Step14: We can push multiple elements with a single call to push.
Step15: Finally, we have the equivalent pop operations for both, right and left ends.
Step16: Capped Lists
We can also TRIM Redis lists with redis-py. We need to pass three arguments
Step17: Notice as the last element has been dropped when triming the list. The lpush/ltrim sequence is a common pattern when inserting in a list that we want to keep size-fized.
Step18: Redis Hashes
The equivalent of Python dictionaries are Redis hashes, with field-value pairs. We use the command HMSET.
Step19: We can also set individual fields.
Step20: We have methods to get individual and multiple fields from a hash.
Step21: The result is returned as a list of values.
Increment operations are also available for hash fields.
Step22: Redis Sets
Redis Sets are unordered collections of strings. We can easily add multiple elements to a Redis set in redis-py as follows by using its implementation of SADD.
Step23: As a result, we get the size of the set. If we want to check the elements within a set, we can use SMEMBERS.
Step24: Notice that we get a Python set as a result. That opens the door to all sort of Python set operations. However, we can operate directly within the Redis server space, and still do things as checking an element membership using SISMEMBER. This is the way to go for enterprise applications in order to keep our system scalability and availability (i.e. large sets, concurrent access, etc).
Step25: The SPOP command extracts a random element (and we can use SRANDMEMBER to get one or more random elements without extraction).
Step26: Or if we want to be specific, we can just use SREM.
Step27: Set operations
In order to obtain the intersection between two sets, we can use SINTER.
Step28: That we get as a Python set. Alternatively, we can directly store the result as a new Redis set by using SINTERSTORE.
Step29: Similar operations are available for union and difference. Moreover, they can be applied to more than two sets. For example, let's create a union set with all the previous and store it in a new Redis set.
Step30: Finally, the number of elements of a given Redis set can be obtained with SCARD.
Step31: Let's clean our server before leaving this section.
Step32: Redis sorted Sets
In a Redis sorted set, every element is associated with a floating point value, called the score. Elements within the set are then ordered according to these scores. We add values to a sorted set by using the oepration ZADD.
Step33: Sorted sets' scores can be updated at any time. Just calling ZADD against an element already included in the sorted set will update its score (and position).
It doesn't matter the order in which we insert the elements. When retrieving them using ZRANGE, they will be returned as a Python list ordered by score.
Step34: And if we want also them in reverse order, we can call ZREVRANGE.
Step35: Even more, we can slice the range by score by using ZRANGEBYSCORE.
Step36: A similar schema can be used to remove elements from the sorted set by score using ZREMRANGEBYSCORE.
Step37: It is also possible to ask what is the position of an element in the set of the ordered elements by using ZRANK (or ZREVRANK if we want the order in reverse way).
Step38: Remember that ranks and scores have the same order but different values! If what we want is the score, we can use ZSCORE. | Python Code:
import redis
Explanation: An Introduction to Redis with Python
In this notebook, we will go through a similar set of commands as those described in the Redis Data Types introduction but using the redis-py Python client from a Jupyter notebook.
Remember that Redis is a server, and it can be accessed in a distributed way by multiple clients in an Enterprise System. This notebook acts as a single client, and is just for educational purposes. The full power of Redis comes when used in an enterprise architecture!
While working with Redis with Python, you will notice that many operations on Redis data types are also available for the Python data types that we get as a result of some operations (e.g. lists). However we have to keep in mind that they operate at very different levels. Using the Redis server operations, and not the local Python equivalents, is the way to go for enterprise applications in order to keep our system scalability and availability (i.e. large sets, concurrent access, etc).
Starting up
Interacting with a running Redis server from a Jupyter notebook using Python is as easy as installing redis-py for your Python distribution and then import the module as follows.
End of explanation
r = redis.StrictRedis(host='localhost', port=6379, db=0)
Explanation: Then we can obtain a reference to our server. Asuming that we are running our Redis server and our Jupyter notebook server in the same host, with the default Redis server port, we can do as follows.
End of explanation
r.set('my.key', 'value1')
Explanation: Getting and settings Redis Keys
Now we can use r to send Redis commands. For example, we can SET the value of the key my.key as follows.
End of explanation
r.get('my.key')
Explanation: In order to check our recently set key, we can use GET and pass the name of the key.
End of explanation
r.exists('my.key')
r.exists('some.other.key')
Explanation: We can also check for the existence of a given key.
End of explanation
r.mset({'my.key':'value2', 'some.other.key':123})
r.get('my.key')
r.get('some.other.key')
Explanation: If we want to set multiple keys at once, we can use MSET and pass a Python dictionary as follows.
End of explanation
r.incrby('some.other.key',10)
Explanation: We can also increment the value of a given key in an atomic way.
End of explanation
r.expire('some.other.key',1)
r.exists('some.other.key')
Explanation: Notice how the resulting type has been changed to integer!
Setting keys to expire
With redis-py we can also set keys with limited time to live.
End of explanation
from time import sleep
sleep(2)
r.exists('some.other.key')
Explanation: Let's wait for a couple of seconds for the key to expire and check again.
End of explanation
r.delete('my.key')
Explanation: Finally, del is a reserved keyword in the Python syntax. Therefore redis-py uses 'delete' instead.
End of explanation
r.lpush('my.list', 'elem1')
Explanation: Redis Lists
Redis lists are linked lists of keys. We can insert and remove elements from both ends.
The LPUSH command adds a new element into a list, on the left.
End of explanation
r.rpush('my.list', 'elem2')
Explanation: The RPUSH command adds a new element into a list, on the right.
End of explanation
r.lrange('my.list',0,-1)
r.lpush('my.list', 'elem0')
r.lrange('my.list',0,-1)
Explanation: Finally the LRANGE command extracts ranges of elements from lists.
End of explanation
r.llen('my.list')
Explanation: The result is returned as a Python list. We can use LLEN to check a Redis list length without having to store the result of lrange and then use Python's len.
End of explanation
r.rpush('my.list','elem3','elem4')
r.lrange('my.list',0,-1)
Explanation: We can push multiple elements with a single call to push.
End of explanation
r.lpop('my.list')
r.lrange('my.list',0,-1)
r.rpop('my.list')
r.lrange('my.list',0,-1)
Explanation: Finally, we have the equivalent pop operations for both, right and left ends.
End of explanation
r.lpush('my.list','elem0')
r.ltrim('my.list',0,2)
r.lrange('my.list',0,-1)
Explanation: Capped Lists
We can also TRIM Redis lists with redis-py. We need to pass three arguments: the name of the list, and the start and stop indexes.
End of explanation
r.delete('my.list')
Explanation: Notice how the last element has been dropped when trimming the list. The lpush/ltrim sequence is a common pattern when inserting into a list that we want to keep size-fixed.
End of explanation
r.hmset('my.hash', {'field1':'value1',
'field2': 1234})
Explanation: Redis Hashes
The equivalent of Python dictionaries are Redis hashes, with field-value pairs. We use the command HMSET.
End of explanation
r.hset('my.hash','field3',True)
Explanation: We can also set individual fields.
End of explanation
r.hget('my.hash','field2')
r.hmget('my.hash','field1','field2','field3')
Explanation: We have methods to get individual and multiple fields from a hash.
End of explanation
r.hincrby('my.hash','field2',10)
Explanation: The result is returned as a list of values.
Increment operations are also available for hash fields.
End of explanation
r.sadd('my.set', 1, 2, 3)
Explanation: Redis Sets
Redis Sets are unordered collections of strings. We can easily add multiple elements to a Redis set in redis-py as follows by using its implementation of SADD.
End of explanation
r.smembers('my.set')
type(r.smembers('my.set'))
Explanation: As a result, we get the size of the set. If we want to check the elements within a set, we can use SMEMBERS.
End of explanation
r.sismember('my.set', 4)
r.sismember('my.set', 1)
Explanation: Notice that we get a Python set as a result. That opens the door to all sorts of Python set operations. However, we can operate directly within the Redis server space, and still do things like checking an element membership using SISMEMBER. This is the way to go for enterprise applications in order to keep our system scalability and availability (i.e. large sets, concurrent access, etc).
End of explanation
elem = r.spop('my.set')
r.smembers('my.set')
r.sadd('my.set',elem)
r.smembers('my.set')
Explanation: The SPOP command extracts a random element (and we can use SRANDMEMBER to get one or more random elements without extraction).
End of explanation
r.srem('my.set',2)
r.smembers('my.set')
Explanation: Or if we want to be specific, we can just use SREM.
End of explanation
r.sadd('my.other.set', 'A','B',1)
r.smembers('my.other.set')
r.sinter('my.set','my.other.set')
Explanation: Set operations
In order to obtain the intersection between two sets, we can use SINTER.
End of explanation
r.sinterstore('my.intersection','my.set','my.other.set')
r.smembers('my.intersection')
Explanation: That we get as a Python set. Alternatively, we can directly store the result as a new Redis set by using SINTERSTORE.
End of explanation
r.sadd('my.intersection','batman')
r.sunionstore('my.union','my.set','my.other.set','my.intersection')
r.smembers('my.union')
Explanation: Similar operations are available for union and difference. Moreover, they can be applied to more than two sets. For example, let's create a union set with all the previous and store it in a new Redis set.
End of explanation
r.scard('my.union')
Explanation: Finally, the number of elements of a given Redis set can be obtained with SCARD.
End of explanation
r.delete('my.set','my.other.set','my.intersection','my.union')
Explanation: Let's clean our server before leaving this section.
End of explanation
r.zadd('my.sorted.set', 1, 'first')
r.zadd('my.sorted.set', 3, 'third')
r.zadd('my.sorted.set', 2, 'second')
r.zadd('my.sorted.set', 4, 'fourth')
r.zadd('my.sorted.set', 6, 'sixth')
Explanation: Redis sorted Sets
In a Redis sorted set, every element is associated with a floating point value, called the score. Elements within the set are then ordered according to these scores. We add values to a sorted set by using the oepration ZADD.
End of explanation
r.zrange('my.sorted.set',0,-1)
Explanation: Sorted sets' scores can be updated at any time. Just calling ZADD against an element already included in the sorted set will update its score (and position).
It doesn't matter the order in which we insert the elements. When retrieving them using ZRANGE, they will be returned as a Python list ordered by score.
End of explanation
r.zrevrange('my.sorted.set',0,-1)
Explanation: And if we want also them in reverse order, we can call ZREVRANGE.
End of explanation
r.zrangebyscore('my.sorted.set',2,4)
Explanation: Even more, we can slice the range by score by using ZRANGEBYSCORE.
End of explanation
r.zremrangebyscore('my.sorted.set',6,'inf')
r.zrange('my.sorted.set',0,-1)
Explanation: A similar schema can be used to remove elements from the sorted set by score using ZREMRANGEBYSCORE.
End of explanation
r.zrank('my.sorted.set','third')
Explanation: It is also possible to ask what is the position of an element in the set of the ordered elements by using ZRANK (or ZREVRANK if we want the order in reverse way).
End of explanation
r.zscore('my.sorted.set','third')
Explanation: Remember that ranks and scores have the same order but different values! If what we want is the score, we can use ZSCORE.
End of explanation |
6,378 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
So in numpy arrays there is the built in function for getting the diagonal indices, but I can't seem to figure out how to get the diagonal starting from the top right rather than top left. | Problem:
import numpy as np
a = np.array([[ 0, 1, 2, 3, 4, 5],
[ 5, 6, 7, 8, 9, 10],
[10, 11, 12, 13, 14, 15],
[15, 16, 17, 18, 19, 20],
[20, 21, 22, 23, 24, 25]])
result = np.diag(np.fliplr(a)) |
6,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises 7
Web Scraping (or Scraping) is a set of techniques used to automatically obtain the content of web pages through their HTML code.
Scraping techniques can be framed within the field of Big Data, in the first phase of data collection prior to its storage, processing, and visualization.
The goal of these techniques is to gather large amounts of data from different web pages, and their later use can be very varied
Step1: First we have to make an HTTP request to the web page to obtain its HTML. For this we will use the requests library; calling its get method (passing the URL as a parameter) gives us, among other things, the HTML of the page and its "Status Code".
From the req object, which stores a lot of data related to the HTTP request (headers, cookies, etc.), we obtain the status_code and the HTML (as a string) of the page.
Step2: From the string htmlText, which represents the internal code of the web page, we use Python's BeautifulSoup library to parse the page and obtain the data we are interested in.
BeautifulSoup provides the necessary (and very well optimized) methods to obtain the content between the HTML tags.
Step3: Exercise 1
Compute the total population of all the municipalities
Step4: Exercise 2
Get a list of the municipalities whose altitude is above 700 meters. | Python Code:
import requests
url = "https://es.wikipedia.org/wiki/Anexo:Municipios_de_la_Comunidad_de_Madrid"
# Realizamos la petición HTTP a la web
req = requests.get(url)
# Comprobamos que la petición nos devuelve un Status Code = 200
statusCode = req.status_code
if statusCode == 200:
print('La petición ha ido bien')
else:
print('Problemas con la petición...')
Explanation: Exercises 7
Web Scraping (or Scraping) is a set of techniques used to automatically obtain the content of web pages through their HTML code.
Scraping techniques can be framed within the field of Big Data, in the first phase of data collection prior to its storage, processing, and visualization.
The goal of these techniques is to gather large amounts of data from different web pages whose later use can be very varied: homogenizing data, processing content for knowledge extraction, complementing the data shown on a website, etc.
Below is a function capable of making an HTTP connection to access a web page and extract information.
The web page we are going to play with is:
"https://es.wikipedia.org/wiki/Anexo:Municipios_de_la_Comunidad_de_Madrid"
which contains data on the list of municipalities of the Community of Madrid.
End of explanation
htmlText = req.text
Explanation: First we have to make an HTTP request to the web page to obtain its HTML. For this we will use the requests library; calling its get method (passing the URL as a parameter) gives us, among other things, the HTML of the page and its "Status Code".
From the req object, which stores a lot of data related to the HTTP request (headers, cookies, etc.), we obtain the status_code and the HTML (as a string) of the page.
End of explanation
from bs4 import BeautifulSoup
if statusCode == 200:
# Pasamos el contenido HTML de la web a un objeto BeautifulSoup()
html = BeautifulSoup(req.text, 'html.parser')
content = {}
# Obtenemos cada una de las filas de la tabla
rows = html.find_all('tr')
for r in rows:
#Seleccionamos las celdas de la tabla (td)
celdas = r.find_all('td')
# ignoramos la primera celda, que no tiene elementos td sino th (ver HTML de la página web)
if len(celdas)>0:
# En lugar de un separador de miles, se ha usado un caracter parecido a un espacio en blanco
# por lo que en la celda de habitantes hay que eliminar todos los caracteres que no sean números
#content.append([celdas[0].string, ''.join(c for c in celdas[1].string if c.isdigit())])
content[celdas[0].string] = { 'población': int( ''.join(c for c in celdas[1].string if c.isdigit())),
'superficie': int( ''.join(c for c in celdas[2].string if c.isdigit())),
'altitud': int( ''.join(c for c in celdas[6].string if c.isdigit())),}
Explanation: From the string htmlText, which represents the internal code of the web page, we use Python's BeautifulSoup library to parse the page and obtain the data we are interested in.
BeautifulSoup provides the necessary (and very well optimized) methods to obtain the content between the HTML tags.
End of explanation
# Sol:
pobT = 0
for municipios, datos in content.items():
pobT = pobT + datos['población']
pobT
# Sol:
# Para nota
def municipio_por_poblacion(habitantes):
pobTotal = 0
print('\nLista de municipios cuya población es superior a %d habitantes:\n' % habitantes)
for municipio, datos in content.items():
if datos['población'] > habitantes:
print(str(municipio) + ': ' + str(datos['población']))
pobTotal = pobTotal + datos['población']
return print('\nLa población total de todos estos municipios es: ' + str(pobTotal)+'\n')
municipio_por_poblacion(50000)
Explanation: Exercise 1
Compute the total population of all the municipalities
End of explanation
# Sol:
print('\nLista de municipios cuya altitud es superior a 700m:\n')
for municipio, datos in content.items():
if datos['altitud'] > 700:
print(municipio)
# Sol:
# Para nota
def municipio_por_altitud(altitud):
print('\nLista de municipios cuya altitud es superior a %dm:\n' % altitud)
for municipio, datos in content.items():
if datos['altitud'] > altitud:
print(str(municipio) + ': ' + str(datos['altitud']) + 'm')
return
municipio_por_altitud(1200)
Explanation: 2 Ejercicio
Obtener un listado de los municipios cuya altitud sea por encima de los 700 metros.
End of explanation |
6,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eigendecomposition and Singular Value Decomposition
For a square matrix $A$, we can find several unit vectors $v$ and scalars $\lambda$ that satisfy the following equation.
$$ Av = \lambda v $$
$ A \in \mathbf{R}^{M \times M} $
$ \lambda \in \mathbf{R} $
$ v \in \mathbf{R}^{M} $
Such a real number $\lambda$ is called an eigenvalue, the unit vector $v$ is called an eigenvector, and the task of finding the eigenvalues and eigenvectors is called eigendecomposition.
For $ A \in \mathbf{R}^{M \times M} $, at most $M$ eigenvalue-eigenvector pairs can exist.
For example, for the following matrix $A$
$$
A=
\begin{bmatrix}
1 & -2 \
2 & -3
\end{bmatrix}
$$
the following unit vector and scalar value form an eigenvector-eigenvalue pair.
$$\lambda = -1$$
$$
v=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} \
\dfrac{1}{\sqrt{2}}
\end{bmatrix}
$$
When multiple eigenvectors exist, they can be written with an eigenvector matrix $V$ and an eigenvalue matrix $\Lambda$ as follows.
$$
A \left[ v_1 \cdots v_M \right] =
\left[ \lambda_1 v_1 \cdots \lambda_M v_M \right] =
\left[ v_1 \cdots v_M \right]
\begin{bmatrix}
\lambda_{1} & 0 & \cdots & 0 \
0 & \lambda_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & \lambda_{M} \
\end{bmatrix}
$$
$$ AV = V\Lambda $$
where
$$
V = \left[ v_1 \cdots v_M \right]
$$
$$
\Lambda =
\begin{bmatrix}
\lambda_{1} & 0 & \cdots & 0 \
0 & \lambda_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & \lambda_{M} \
\end{bmatrix}
$$
The numpy linalg subpackage provides the eig command for computing eigenvalues and eigenvectors.
Step1: Eigendecomposition of a symmetric matrix
If the matrix $A$ is symmetric, the transpose of the eigenvector matrix $V$ equals its inverse:
$$ V^T V = V V^T = I$$
In this case the eigendecomposition can be written as follows.
$$ A = V\Lambda V^T = \sum_{i=1}^{M} {\lambda_i} v_i v_i^T$$
$$ A^{-1} = V \Lambda^{-1} V^T = \sum_{i=1}^{M} \dfrac{1}{\lambda_i} v_i v_i^T$$
Coordinate transformation of a random variable
Since the covariance matrix $\Sigma$ of a random variable is a symmetric matrix, the relations above hold.
Therefore the probability density function of a multivariate Gaussian distribution can be written as follows.
$$
\begin{eqnarray}
\mathcal{N}(x \mid \mu, \Sigma)
&=& \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) \
&=& \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T V \Lambda^{-1} V^T (x-\mu) \right) \
&=& \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (V^T(x-\mu))^T \Lambda^{-1} (V^T (x-\mu)) \right) \
\end{eqnarray}
$$
That is, after a coordinate transformation with the matrix $V^T$, the components become mutually independent.
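A quick numerical check of this statement (a sketch; the covariance matrix below is just the example reused in the plotting code further down):
import numpy as np
cov = np.array([[2.0, 3.0], [3.0, 7.0]])
w, V = np.linalg.eig(cov)
# For a symmetric matrix the eigenvectors are orthonormal, so V.T @ cov @ V is (numerically) diagonal,
# with the eigenvalues on the diagonal: the transformed components are uncorrelated.
print(np.round(V.T @ cov @ V, 10))
print(w)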
Step2: Singular Value Decomposition
A decomposition similar to the eigendecomposition is also possible for a non-square matrix $M$; this is called the singular value decomposition.
$M \in \mathbf{R}^{m \times n}$
$$M = U \Sigma V^T$$
where
* $U \in \mathbf{R}^{m \times m}$
* $\Sigma \in \mathbf{R}^{m \times n}$
* $V \in \mathbf{R}^{n \times n}$
and the matrices $U$ and $V$ satisfy the following relations.
$$ U^T U = UU^T = I $$
$$ V^T V = VV^T = I $$
For example, for
$$\mathbf{M} = \begin{bmatrix}
1 & 0 & 0 & 0 & 2 \
0 & 0 & 3 & 0 & 0 \
0 & 0 & 0 & 0 & 0 \
0 & 2 & 0 & 0 & 0
\end{bmatrix}
$$
the singular value decomposition result is as follows.
$$
\begin{align}
\mathbf{U} &= \begin{bmatrix}
0 & 0 & 1 & 0 \
1 & 0 & 0 & 0 \
0 & 0 & 0 & -1 \
0 & 1 & 0 & 0 \
\end{bmatrix} \
\boldsymbol{\Sigma} &= \begin{bmatrix}
\sqrt{5} & 0 & 0 & 0 & 0 \
0 & 2 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 \
0 & 0 & 0 & 0 & 0
\end{bmatrix} \
\mathbf{V}^T &= \begin{bmatrix}
0 & 0 & \sqrt{0.2} & 0 & \sqrt{0.8} \
0 & 1 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 \
0 & 0 & -\sqrt{0.8} & 0 & \sqrt{0.2} \
0 & 0 & 0 & 1 & 0 \
\end{bmatrix}
\end{align}$$
This can be verified as follows.
$$\begin{align}
\mathbf{U} \mathbf{U^T} &=
\begin{bmatrix}
0 & 0 & 1 & 0 \
1 & 0 & 0 & 0 \
0 & 0 & 0 & -1 \
0 & 1 & 0 & 0 \
\end{bmatrix}
\cdot
\begin{bmatrix}
0 & 1 & 0 & 0 \
0 & 0 & 0 & 1 \
1 & 0 & 0 & 0 \
0 & 0 & -1 & 0 \
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 \
0 & 1 & 0 & 0 \
0 & 0 & 1 & 0 \
0 & 0 & 0 & 1
\end{bmatrix}
= \mathbf{I}_4 \
\mathbf{V} \mathbf{V^T} &=
\begin{bmatrix}
0 & 0 & \sqrt{0.2} & 0 & \sqrt{0.8} \
0 & 1 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 \
0 & 0 & -\sqrt{0.8} & 0 & \sqrt{0.2} \
0 & 0 & 0 & 1 & 0 \
\end{bmatrix}
\cdot
\begin{bmatrix}
0 & 0 & 1 & 0 & 0 \
0 & 1 & 0 & 0 & 0 \
\sqrt{0.2} & 0 & 0 & -\sqrt{0.8} & 0\
0 & 0 & 0 & 0 & 1 \
\sqrt{0.8} & 0 & 0 & \sqrt{0.2} & 0 \
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \
0 & 1 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 \
0 & 0 & 0 & 1 & 0 \
0 & 0 & 0 & 0 & 1
\end{bmatrix}
= \mathbf{I}_5
\end{align}$$ | Python Code:
w, V = np.linalg.eig(np.array([[1, -2], [2, -3]]))
w
V
Explanation: Eigendecomposition and Singular Value Decomposition
For a square matrix $A$, we can find several unit vectors $v$ and scalars $\lambda$ that satisfy the following equation.
$$ Av = \lambda v $$
$ A \in \mathbf{R}^{M \times M} $
$ \lambda \in \mathbf{R} $
$ v \in \mathbf{R}^{M} $
Such a real number $\lambda$ is called an eigenvalue, the unit vector $v$ is called an eigenvector, and the task of finding the eigenvalues and eigenvectors is called eigendecomposition.
For $ A \in \mathbf{R}^{M \times M} $, at most $M$ eigenvalue-eigenvector pairs can exist.
For example, for the following matrix $A$
$$
A=
\begin{bmatrix}
1 & -2 \
2 & -3
\end{bmatrix}
$$
the following unit vector and scalar value form an eigenvector-eigenvalue pair.
$$\lambda = -1$$
$$
v=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} \
\dfrac{1}{\sqrt{2}}
\end{bmatrix}
$$
When multiple eigenvectors exist, they can be written with an eigenvector matrix $V$ and an eigenvalue matrix $\Lambda$ as follows.
$$
A \left[ v_1 \cdots v_M \right] =
\left[ \lambda_1 v_1 \cdots \lambda_M v_M \right] =
\left[ v_1 \cdots v_M \right]
\begin{bmatrix}
\lambda_{1} & 0 & \cdots & 0 \
0 & \lambda_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & \lambda_{M} \
\end{bmatrix}
$$
$$ AV = V\Lambda $$
where
$$
V = \left[ v_1 \cdots v_M \right]
$$
$$
\Lambda =
\begin{bmatrix}
\lambda_{1} & 0 & \cdots & 0 \
0 & \lambda_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & \lambda_{M} \
\end{bmatrix}
$$
The numpy linalg subpackage provides the eig command for computing eigenvalues and eigenvectors.
End of explanation
mu = [2, 3]
cov = [[2, 3],[3, 7]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
x1 = np.array([0, 2])
x1_mu = x1 - mu
x2 = np.array([3, 4])
x2_mu = x2 - mu
plt.plot(x1_mu[0] + mu[0], x1_mu[1] + mu[1], 'bo', ms=20)
plt.plot(x2_mu[0] + mu[0], x2_mu[1] + mu[1], 'ro', ms=20)
plt.axis("equal")
plt.show()
w, V = np.linalg.eig(cov)
w
V
rv = sp.stats.multivariate_normal(mu, w)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
x1 = np.array([0, 2])
x1_mu = x1 - mu
x2 = np.array([3, 4])
x2_mu = x2 - mu
x1t_mu = V.T.dot(x1_mu) # 좌표 변환
x2t_mu = V.T.dot(x2_mu) # 좌표 변환
plt.plot(x1t_mu[0] + mu[0], x1t_mu[1] + mu[1], 'bo', ms=20)
plt.plot(x2t_mu[0] + mu[0], x2t_mu[1] + mu[1], 'ro', ms=20)
plt.axis("equal")
plt.show()
Explanation: Eigendecomposition of a symmetric matrix
If the matrix $A$ is symmetric, the transpose of the eigenvector matrix $V$ equals its inverse:
$$ V^T V = V V^T = I$$
In this case the eigendecomposition can be written as follows.
$$ A = V\Lambda V^T = \sum_{i=1}^{M} {\lambda_i} v_i v_i^T$$
$$ A^{-1} = V \Lambda^{-1} V^T = \sum_{i=1}^{M} \dfrac{1}{\lambda_i} v_i v_i^T$$
Coordinate transformation of a random variable
Since the covariance matrix $\Sigma$ of a random variable is a symmetric matrix, the relations above hold.
Therefore the probability density function of a multivariate Gaussian distribution can be written as follows.
$$
\begin{eqnarray}
\mathcal{N}(x \mid \mu, \Sigma)
&=& \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) \
&=& \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T V \Lambda^{-1} V^T (x-\mu) \right) \
&=& \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (V^T(x-\mu))^T \Lambda^{-1} (V^T (x-\mu)) \right) \
\end{eqnarray}
$$
That is, after a coordinate transformation with the matrix $V^T$, the components become mutually independent.
End of explanation
from pprint import pprint
M = np.array([[1,0,0,0,2],[0,0,3,0,0],[0,0,0,0,0],[0,2,0,0,0]])  # same matrix as in the worked example above
print("\nM:"); pprint(M)
U, S0, V0 = np.linalg.svd(M, full_matrices=True)
print("\nU:"); pprint(U)
S = np.hstack([np.diag(S0), np.zeros(M.shape[0])[:, np.newaxis]])
print("\nS:"); pprint(S)
V = V0.T
print("\nV:"); pprint(V)
print("\nU.dot(U.T):"); pprint(U.dot(U.T))
print("\nV.dot(V.T):"); pprint(V.dot(V.T))
print("\nU.dot(S).dot(V.T):"); pprint(U.dot(S).dot(V.T))
Explanation: Singular Value Decomposition
A decomposition similar to the eigendecomposition is also possible for a non-square matrix $M$; this is called the singular value decomposition.
$M \in \mathbf{R}^{m \times n}$
$$M = U \Sigma V^T$$
where
* $U \in \mathbf{R}^{m \times m}$
* $\Sigma \in \mathbf{R}^{m \times n}$
* $V \in \mathbf{R}^{n \times n}$
and the matrices $U$ and $V$ satisfy the following relations.
$$ U^T U = UU^T = I $$
$$ V^T V = VV^T = I $$
For example, for
$$\mathbf{M} = \begin{bmatrix}
1 & 0 & 0 & 0 & 2 \
0 & 0 & 3 & 0 & 0 \
0 & 0 & 0 & 0 & 0 \
0 & 2 & 0 & 0 & 0
\end{bmatrix}
$$
the singular value decomposition result is as follows.
$$
\begin{align}
\mathbf{U} &= \begin{bmatrix}
0 & 0 & 1 & 0 \
1 & 0 & 0 & 0 \
0 & 0 & 0 & -1 \
0 & 1 & 0 & 0 \
\end{bmatrix} \
\boldsymbol{\Sigma} &= \begin{bmatrix}
\sqrt{5} & 0 & 0 & 0 & 0 \
0 & 2 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 \
0 & 0 & 0 & 0 & 0
\end{bmatrix} \
\mathbf{V}^T &= \begin{bmatrix}
0 & 0 & \sqrt{0.2} & 0 & \sqrt{0.8} \
0 & 1 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 \
0 & 0 & -\sqrt{0.8} & 0 & \sqrt{0.2} \
0 & 0 & 0 & 1 & 0 \
\end{bmatrix}
\end{align}$$
이는 다음과 같이 확인 할 수 있다.
$$\begin{align}
\mathbf{U} \mathbf{U^T} &=
\begin{bmatrix}
0 & 0 & 1 & 0 \
1 & 0 & 0 & 0 \
0 & 0 & 0 & -1 \
0 & 1 & 0 & 0 \
\end{bmatrix}
\cdot
\begin{bmatrix}
0 & 1 & 0 & 0 \
0 & 0 & 0 & 1 \
1 & 0 & 0 & 0 \
0 & 0 & -1 & 0 \
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 \
0 & 1 & 0 & 0 \
0 & 0 & 1 & 0 \
0 & 0 & 0 & 1
\end{bmatrix}
= \mathbf{I}_4 \
\mathbf{V} \mathbf{V^T} &=
\begin{bmatrix}
0 & 0 & \sqrt{0.2} & 0 & \sqrt{0.8} \
0 & 1 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 \
0 & 0 & -\sqrt{0.8} & 0 & \sqrt{0.2} \
0 & 0 & 0 & 1 & 0 \
\end{bmatrix}
\cdot
\begin{bmatrix}
0 & 0 & 1 & 0 & 0 \
0 & 1 & 0 & 0 & 0 \
\sqrt{0.2} & 0 & 0 & -\sqrt{0.8} & 0\
0 & 0 & 0 & 0 & 1 \
\sqrt{0.8} & 0 & 0 & \sqrt{0.2} & 0 \
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \
0 & 1 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 \
0 & 0 & 0 & 1 & 0 \
0 & 0 & 0 & 0 & 1
\end{bmatrix}
= \mathbf{I}_5
\end{align}$$
End of explanation |
6,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this example, we use the Google geocoding API to translate addresses into geo-coordinates. Google imposes usage limits on the API. If you are using this script to index data, you may need to sign up for an API key to overcome the limits.
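One simple way to stay under those limits is to cache results and throttle the calls; a minimal sketch (the geocode_cached helper is hypothetical and assumes the geolocator object created in the code below):
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def geocode_cached(address):
    # crude throttle so repeated runs stay well under the API quota
    time.sleep(0.2)
    return geolocator.geocode(address, timeout=10000)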
Step1: Import Data
Import restaurant inspection data into a Pandas dataframe
Step2: Data preprocessing
Step3: Create a dictionary of unique Addresses. We do this to avoid calling the Google geocoding api multiple times for the same address
Step4: Get address for the geolocation for each address. This step can take a while because it's calling the Google geocoding API for each unique address.
Step5: Index Data | Python Code:
from geopy.geocoders import GoogleV3
geolocator = GoogleV3()
# geolocator = GoogleV3(api_key=<your_google_api_key>)
Explanation: In this example, we use the Google geocoding API to translate addresses into geo-coordinates. Google imposes usage limits on the API. If you are using this script to index data, you may need to sign up for an API key to overcome the limits.
End of explanation
t = pd.read_csv('https://data.cityofnewyork.us/api/views/43nn-pn8j/rows.csv?accessType=DOWNLOAD', header=0, sep=',', dtype={'PHONE':str, 'INSPECTION DATE':str});
## Helper Functions
from datetime import datetime
def str_to_iso(text):
if text != '':
for fmt in (['%m/%d/%Y']):
try:
#print(fmt)
#print(datetime.strptime(text, fmt))
return datetime.isoformat(datetime.strptime(text, fmt))
except ValueError:
#print(text)
pass
#raise ValueError('Changing date')
else:
return None
def getLatLon(row):
if row['Address'] != '':
location = geolocator.geocode(row['Address'], timeout=10000, sensor=False)
if location != None:
lat = location.latitude
lon = location.longitude
#print(lat,lon)
return [lon, lat]
elif row['Zipcode'] !='' or location != None:
location = geolocator.geocode(row['Zipcode'], timeout=10000, sensor=False)
if location != None:
lat = location.latitude
lon = location.longitude
#print(lat,lon)
return [lon, lat]
else:
return None
def getAddress(row):
if row['Building'] != '' and row['Street'] != '' and row['Boro'] != '':
x = row['Building']+' '+row['Street']+' '+row['Boro']+',NY'
x = re.sub(' +',' ',x)
return x
else:
return ''
def combineCT(x):
return str(x['Inspection_Date'][0][0:10])+'_'+str(x['Camis'])
Explanation: Import Data
Import restaurant inspection data into a Pandas dataframe
End of explanation
# process column names: remove spaces & use title casing
t.columns = map(str.title, t.columns)
t.columns = map(lambda x: x.replace(' ', '_'), t.columns)
# replace nan with ''
t.fillna('', inplace=True)
# Convert date to ISO format
t['Inspection_Date'] = t['Inspection_Date'].map(lambda x: str_to_iso(x))
t['Record_Date'] = t['Record_Date'].map(lambda x: str_to_iso(x))
t['Grade_Date'] = t['Grade_Date'].map(lambda x: str_to_iso(x))
#t['Inspection_Date'] = t['Inspection_Date'].map(lambda x: x.split('/'))
# Combine Street, Building and Boro information to create Address string
t['Address'] = t.apply(getAddress, axis=1)
Explanation: Data preprocessing
End of explanation
addDict = t[['Address','Zipcode']].copy(deep=True)
addDict = addDict.drop_duplicates()
addDict['Coord'] = [None]* len(addDict)
Explanation: Create a dictionary of unique Addresses. We do this to avoid calling the Google geocoding api multiple times for the same address
End of explanation
for item_id, item in addDict.iterrows():
if item_id % 100 == 0:
print(item_id)
if addDict['Coord'][item_id] == None:
addDict['Coord'][item_id] = getLatLon(item)
#print(addDict.loc[item_id]['Coord'])
# Save address dictionary to CSV
#addDict.to_csv('./dict_final.csv')
# Merge coordinates into original table
t1 = t.merge(addDict[['Address', 'Coord']])
# Keep only 1 value of score and grade per inspection
t2=t1.copy(deep=True)
t2['raw_num'] = t2.index
t2['RI'] = t2.apply(combineCT, axis=1)
yy = t2.groupby('RI').first().reset_index()['raw_num']
t2['Unique_Score'] = None
t2['Unique_Score'].loc[yy.values] = t2['Score'].loc[yy.values]
t2['Unique_Grade'] = None
t2['Unique_Grade'].loc[yy.values] = t2['Grade'].loc[yy.values]
del(t2['RI'])
del(t2['raw_num'])
del(t2['Grade'])
del(t2['Score'])
t2.rename(columns={'Unique_Grade' : 'Grade','Unique_Score':'Score'}, inplace=True)
t2['Grade'].fillna('', inplace=True)
t2.iloc[1]
Explanation: Get address for the geolocation for each address. This step can take a while because it's calling the Google geocoding API for each unique address.
End of explanation
### Create and configure Elasticsearch index
# Name of index and document type
index_name = 'nyc_restaurants';
doc_name = 'inspection'
# Delete donorschoose index if one does exist
if es.indices.exists(index_name):
es.indices.delete(index_name)
# Create donorschoose index
es.indices.create(index_name)
# Add mapping
with open('./inspection_mapping.json') as json_mapping:
d = json.load(json_mapping)
es.indices.put_mapping(index=index_name, doc_type=doc_name, body=d)
# Index data
for item_id, item in t2.iterrows():
if item_id % 1000 == 0:
print(item_id)
thisItem = item.to_dict()
#thisItem['Coord'] = getLatLon(thisItem)
thisDoc = json.dumps(thisItem);
#pprint.pprint(thisItem)
# write to elasticsearch
es.index(index=index_name, doc_type=doc_name, id=item_id, body=thisDoc)
Explanation: Index Data
End of explanation |
6,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anomaly Detection in HTTP Logs
This sample notebook demonstrates working with HTTP request logs data stored in BigQuery.
Google Cloud Logging in the Google Cloud Platform makes it simple to export HTTP request logs from AppEngine applications directly into BigQuery for further analysis. This log data includes information such as the requested resource, HTTP status code, etc. One possible use of these logs is to mine them as they are collected to detect anomalies in response latency, since this can be a signal for some unexpected deployment issue.
The sample data used in this notebook is similar to AppEngine logs. It represents anonymized HTTP logs from a hypothetical application.
Related Links
Step1: Understanding the Logs Data
It's helpful to inspect the dataset, the schema, and a sample of the data we're working with. Usually, logs are captured as multiple tables within a dataset, with new tables added per time window (such as daily logs).
Step2: Transforming Logs into a Time Series
We're going to build a timeseries over latency. In order to make it a useful metric, we'll look at 99th percentile latency of requests within a fixed 5min window using this SQL query issued to BigQuery (notice the grouping over a truncated timestamp, and quantile aggregation).
Step3: Visualizing the Time Series Data
It's helpful to visualize the data. In order to visualize this timeseries, we'll use Python, pandas, and matplotlib.
Step4: Anomaly Detection
A visual inspection of the chart, above, highlights the obvious anomalies, but we want to construct something that can inspect the data, learn from it, and detect anomalies in an automated way.
Anomaly Detector
The code below is a simple anomaly detector created for the purposes of this sample. It uses a simple algorithm that tracks the mean and standard deviation. You can give it a value indicating the number of instances it uses to train a model before it starts detecting anomalies. As new values are passed to it, it continues to update the model (to account for slowly evolving changes) and report anomalies. Any value that is off from the mean by 3x the standard deviation is considered an anomaly.
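In equations, the rule the class below implements (ignoring the warm-up period and the value clipping) is:
$$ \mu_t = \frac{1}{W}\sum_{i=t-W+1}^{t} x_i, \qquad \sigma_t^2 = \frac{1}{W}\sum_{i=t-W+1}^{t} x_i^2 - \mu_t^2 $$
$$ x_t \text{ is flagged as an anomaly} \iff |x_t - \mu_t| > 3\,\sigma_t $$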
Step5: With the anomaly detector implemented, let's run the timeseries through it to collect any anomalies and the expected mean along each point.
Step6: Then, plot the same time series, but overlay the anomalies and the mean values as well. | Python Code:
from __future__ import division
import google.datalab.bigquery as bq
import matplotlib.pyplot as plot
import numpy as np
Explanation: Anomaly Detection in HTTP Logs
This sample notebook demonstrates working with HTTP request logs data stored in BigQuery.
Google Cloud Logging in the Google Cloud Platform makes it simple to export HTTP request logs from AppEngine applications directly into BigQuery for further analysis. This log data includes information such as the requested resource, HTTP status code, etc. One possible use of these logs is to mine them as they are collected to detect anomalies in response latency, since this can be a signal for some unexpected deployment issue.
The sample data used in this notebook is similar to AppEngine logs. It represents anonymized HTTP logs from a hypothetical application.
Related Links:
Cloud Logging
BigQuery
Pandas for data analysis
Matplotlib for data visualization
End of explanation
%bq tables list --project cloud-datalab-samples --dataset httplogs
%bq tables describe -n cloud-datalab-samples.httplogs.logs_20140615
%%bq query -n logs
SELECT timestamp, latency, status, method, endpoint
FROM `cloud-datalab-samples.httplogs.logs_20140615`
ORDER by timestamp
%%bq sample --query logs --count 7
Explanation: Understanding the Logs Data
It's helpful to inspect the dataset, the schema, and a sample of the data we're working with. Usually, logs are captured as multiple tables within a dataset, with new tables added per time window (such as daily logs).
End of explanation
%%bq query -n timeseries
SELECT DIV(UNIX_SECONDS(timestamp), 300) * 300 AS five_minute_window,
APPROX_QUANTILES(latency, 99)[SAFE_ORDINAL(99)] as latency
FROM `cloud-datalab-samples.httplogs.logs_20140615`
WHERE endpoint = 'Recent'
GROUP BY five_minute_window
ORDER by five_minute_window
%%bq sample --query timeseries --count 10
Explanation: Transforming Logs into a Time Series
We're going to build a timeseries over latency. In order to make it a useful metric, we'll look at 99th percentile latency of requests within a fixed 5min window using this SQL query issued to BigQuery (notice the grouping over a truncated timestamp, and quantile aggregation).
End of explanation
# Execute and convert the results to a Pandas dataframe
timeseries_df = timeseries.execute(output_options=bq.QueryOutput.dataframe()).result()
timeseries_values = timeseries_df['latency'].values
timeseries_len = len(timeseries_values)
plot.plot(np.array(range(timeseries_len)), timeseries_values)
plot.yscale('log')
plot.grid()
Explanation: Visualizing the Time Series Data
It's helpful to visualize the data. In order to visualize this timeseries, we'll use Python, pandas, and matplotlib.
End of explanation
class AnomalyDetector(object):
def __init__(self, window = 10):
self._index = 0
self._window = window
self._values = np.zeros(window)
self._valuesSq = np.zeros(window)
self._mean = 0
self._variance = 0
self._count = 0
def observation(self, value):
anomaly = False
threshold = 3 * np.sqrt(self._variance)
if self._count > self._window:
if value > self._mean + threshold:
value = self._mean + threshold
anomaly = True
elif value < self._mean - threshold:
value = self._mean - threshold
anomaly = True
else:
self._count += 1
prev_value = self._values[self._index]
self._values[self._index] = value
self._valuesSq[self._index] = value ** 2
self._index = (self._index + 1) % self._window
self._mean = self._mean - prev_value / self._window + value / self._window
self._variance = sum(self._valuesSq) / self._window - (self._mean ** 2)
return anomaly, self._mean
Explanation: Anomaly Detection
A visual inspection of the chart, above, highlights the obvious anomalies, but we want to construct something that can inspect the data, learn from it, and detect anomalies in an automated way.
Anomaly Detector
The code below is a simple anomaly detector created for the purposes of this sample. It uses a simple algorithm that tracks the mean and standard deviation. You can give it a value indicating the number of instances it uses to train a model before it starts detecting anomalies. As new values are passed to it, it continues to update the model (to account for slowly evolving changes) and report anomalies. Any value that is off from the mean by 3x the standard deviation is considered an anomaly.
End of explanation
anomalies = np.zeros(timeseries_len)
means = np.zeros(timeseries_len)
anomaly_detector = AnomalyDetector(36)
for i, value in enumerate(timeseries_values):
anomaly, mean = anomaly_detector.observation(value)
anomalies[i] = anomaly
means[i] = mean
Explanation: With the anomaly detector implemented, let's run the timeseries through it to collect any anomalies and the expected mean along each point.
End of explanation
ticks = np.array(range(timeseries_len))
plot.plot(ticks, timeseries_values)
plot.plot(ticks[anomalies == 1], timeseries_values[anomalies == 1], 'ro')
plot.plot(ticks, means, 'g', linewidth = 1)
plot.yscale('log')
plot.grid()
Explanation: Then, plot the same time series, but overlay the anomalies and the mean values as well.
End of explanation |
6,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source localization with equivalent current dipole (ECD) fit
This shows how to fit a dipole
Step1: Let's localize the N100m (using MEG only)
Step2: Plot the result in 3D brain with the MRI image using Nilearn
In MRI coordinates and in MNI coordinates (template brain)
Step3: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
Step4: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval | Python Code:
from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
from nilearn.plotting import plot_anat
from nilearn.datasets import load_mni152_template
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
Explanation: Source localization with equivalent current dipole (ECD) fit
This shows how to fit a dipole :footcite:Sarvas1987 using mne-python.
For a comparison of fits between MNE-C and mne-python, see
this gist <https://gist.github.com/larsoner/ca55f791200fe1dc3dd2>__.
End of explanation
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain with the MRI image.
dip.plot_locations(fname_trans, 'sample', subjects_dir, mode='orthoview')
Explanation: Let's localize the N100m (using MEG only)
End of explanation
trans = mne.read_trans(fname_trans)
subject = 'sample'
mni_pos = mne.head_to_mni(dip.pos, mri_head_t=trans,
subject=subject, subjects_dir=subjects_dir)
mri_pos = mne.head_to_mri(dip.pos, mri_head_t=trans,
subject=subject, subjects_dir=subjects_dir)
t1_fname = op.join(subjects_dir, subject, 'mri', 'T1.mgz')
fig_T1 = plot_anat(t1_fname, cut_coords=mri_pos[0], title='Dipole loc.')
try:
template = load_mni152_template(resolution=1)
except TypeError: # in nilearn < 0.8.1 this did not exist
template = load_mni152_template()
fig_template = plot_anat(template, cut_coords=mni_pos[0],
title='Dipole loc. (MNI Space)')
Explanation: Plot the result in 3D brain with the MRI image using Nilearn
In MRI coordinates and in MNI coordinates (template brain)
End of explanation
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, cov=None, nave=np.inf)
# find time point with highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
print('Highest GOF %0.1f%% at t=%0.1f ms with confidence volume %0.1f cm^3'
% (dip.gof[best_idx], best_time * 1000,
dip.conf['vol'][best_idx] * 100 ** 3))
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4],
gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1],
top=0.85))
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False, time_unit='s')
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, pred_evoked], weights=[1, -1])
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2:], **plot_params)
fig.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
fig.tight_layout()
Explanation: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
End of explanation
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot(time_unit='s')
Explanation: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval
End of explanation |
6,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
np.zeros?
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
y = np.empty(len(x))
y[0] = y0
n = 0
while n < len(x)-1:
h = x[n+1] - x[n]
y[n+1] = y[n] + h*derivs(y[n],x[n])
n += 1
return y
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
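As a quick extra check of the method's first-order accuracy (a sketch of my own, not part of the assignment), halving the step size should roughly halve the global error; here the test problem dy/dx = y with y(0) = 1 is integrated to x = 1 and compared against e:
for N in (10, 20, 40):
    xs = np.linspace(0, 1, N)
    err = np.abs(solve_euler(lambda y, x: y, 1.0, xs)[-1] - np.exp(1))
    print(N, err)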
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where y
        and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
y = np.empty(len(x))
y[0] = y0
n = 0
while n < len(x)-1:
h = x[n+1] - x[n]
y[n+1] = y[n] + h*derivs(y[n] + h*derivs(y[n],x[n])/2,x[n] + h/2)
n += 1
return y
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
End of explanation
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions at y[i] = y(x[i]).
    """
y = np.empty(len(x))
for i in range(len(x)):
y[i] = 0.25*np.exp(2*x[i]) - 0.5*x[i] - 0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that computes the exact solution and follows the specification described in the docstring:
End of explanation
N = 10
x = np.linspace(0,1.0,N)
y0 = 0.0
derivs = lambda y, x: x + 2*y
y_euler = solve_euler(derivs,y0,x)
y_midpoint = solve_midpoint(derivs,y0,x)
y_odeint = odeint(derivs,y0,x)
y_exact = solve_exact(x)
plt.plot(x, y_euler, label='Euler')
plt.plot(x, y_midpoint, label='Midpoint')
plt.plot(x, y_odeint, label='odeint')
plt.plot(x, y_exact, label='Exact')
plt.legend(loc=2);
assert True # leave this for grading the plots
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a single figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
End of explanation |
6,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 1
Due Wednesday, September 2 by 2
Step1: Question 1
Pick a graph from Spurious Correlations and recreate it using
matplotlib, numpy and pandas. For whichever graph you choose, save the data in one or more CSV files and submit them with your assignment. Make sure your axes are labeled properly.
Step2: Question 2
Scikit-learn includes many datasets for playing with, one of which is iris. This dataset includes measures of many iris flowers, measured in centimeters
Step3: Create a pair-plot of the iris dataset similar to this figure using only numpy and
matplotlib (you can use scikit-learn to load the data with sklearn.datasets.load_iris, you are not
allowed to use pandas). Ensure all axes are labeled. The diagonals need to contain histograms,
the different species need to be distinguished by color or glyph, and there needs to be a
legend for the species.
Step4: Question 3
Create an array of scatter plots on the boston housing dataset
(sklearn.datasets.load_boston). This dataset contains 13 housing-related features of areas in Boston (e.g., crime rate), along with a "target" value of median value of owner-occupied homes (MEDV). After loading the dataset, try print(boston_dataset.DESCR) for more info.
For each feature, plot this feature against the target MEDV. Use
alpha to cope with overplotting. Ensure everything is labeled properly and the resulting charts can
easily be read and understood. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
Explanation: Homework 1
Due Wednesday, September 2 by 2:00 PM. Submit via email.
Some helpful setup code. Feel free to add whatever else you might need.
End of explanation
# Code here
Explanation: Question 1
Pick a graph from Spurious Correlations and recreate it using
matplotlib, numpy and pandas. For whichever graph you choose, save the data in one or more CSV files and submit them with your assignment. Make sure your axes are labeled properly.
End of explanation
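One possible starting point is sketched below; the file name and column names are made up for illustration and should be replaced with whatever data you saved (a typical spurious-correlations chart puts two series over the same years on twin y-axes):
spurious = pd.read_csv("spurious.csv")  # hypothetical columns: year, series_a, series_b
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(spurious["year"], spurious["series_a"], c="C0")
ax2.plot(spurious["year"], spurious["series_b"], c="C1")
ax1.set_xlabel("Year")
ax1.set_ylabel("Series A")
ax2.set_ylabel("Series B")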
from sklearn.datasets import load_iris
iris_dataset = load_iris()
features = iris_dataset.feature_names
data = iris_dataset.data
targets = iris_dataset.target
df = pd.DataFrame(data, columns=features)
pd.plotting.scatter_matrix(df, c=targets, figsize=(15,15),
marker='o', hist_kwds={'bins': 20}, s=60,
alpha=.8);
Explanation: Question 2
Scikit-learn includes many datasets for playing with, one of which is iris. This dataset includes measures of many iris flowers, measured in centimeters: sepal length, sepal width, petal length, and petal width. (The sepal is just another part of the flower.) The dataset also includes the species of each flower measured (setosa, versicolor, or virginica).
Often when we start with a new dataset we want to explore it visually to get a feel for it. Pandas provides us with a very quick and easy way to compare all the measurements to each other (A similar figure (and similar code) appears on page 20 of IMLP):
End of explanation
# Code here
Explanation: Create a pair-plot of the iris dataset similar to this figure using only numpy and
matplotlib (you can use scikit-learn to load the data with sklearn.datasets.load_iris, you are not
allowed to use pandas). Ensure all axes are labeled. The diagonals need to contain histograms,
the different species need to be distinguished by color or glyph, and there needs to be a
legend for the species.
End of explanation
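A rough sketch of one way to build such a grid by hand (reusing features, data and targets from the demo cell above; the species labels are left as integer codes here):
n = len(features)
fig, axes = plt.subplots(n, n, figsize=(12, 12))
for i in range(n):
    for j in range(n):
        ax = axes[i, j]
        if i == j:
            ax.hist(data[:, j], bins=20)
        else:
            for species in np.unique(targets):
                mask = targets == species
                ax.scatter(data[mask, j], data[mask, i], s=10, label=str(species))
        if i == n - 1:
            ax.set_xlabel(features[j])
        if j == 0:
            ax.set_ylabel(features[i])
axes[0, 1].legend(title="species", loc="upper right")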
# Code here
Explanation: Question 3
Create an array of scatter plots on the boston housing dataset
(sklearn.datasets.load_boston). This dataset contains 13 housing-related features of areas in Boston (e.g., crime rate), along with a "target" value of median value of owner-occupied homes (MEDV). After loading the dataset, try print(boston_dataset.DESCR) for more info.
For each feature, plot this feature against the target MEDV. Use
alpha to cope with overplotting. Ensure everything is labeled properly and the resulting charts can
easily be read and understood.
End of explanation |
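A minimal sketch of the looping approach (alpha keeps dense regions readable; the 3x5 grid leaves two unused panels that are hidden):
from sklearn.datasets import load_boston
boston = load_boston()
fig, axes = plt.subplots(3, 5, figsize=(18, 10))
for ax, column, name in zip(axes.ravel(), boston.data.T, boston.feature_names):
    ax.scatter(column, boston.target, alpha=0.3, s=10)
    ax.set_xlabel(name)
    ax.set_ylabel("MEDV")
for ax in axes.ravel()[len(boston.feature_names):]:
    ax.set_visible(False)
fig.tight_layout()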
6,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 22
Step1: By far, we'll use the plt object from the second import the most; that contains the main plotting library.
Plotting in a script
Let's say you're coding a standalone Python application, contained in a file myapp.py. You'll need to explicitly tell matplotlib to generate a figure and display it, via the show() command.
<img src="https
Step2: Note that you do NOT need to use plt.show()! When in "inline" mode, matplotlib will automatically render whatever the "active" figure is as soon as you issue some kind of plotting command.
Saving plots to files
Sometimes you'll want to save the plots you're making to files for use later, perhaps as part of a presentation to demonstrate to your bosses what you've accomplished.
In this case, you once again won't use the plt.show() command, but instead substitute in the plt.savefig() command.
<img src="https
Step3: Part 2
Step4: Matplotlib sees we've created points at (4, 9), (5, 4), and (6, 7), and it connects each of these in turn with a line, producing the above plot. It also automatically scales the x and y axes of the plot so all the data fit visibly inside.
An important side note
Step5: They'll even be plotted in different colors. How nice!
Line plots are nice, but let's say I really want a scatter plot of my data; there's no real concept of a line, but instead I have disparate data points in 2D space that I want to visualize. There's a function for that!
Step6: We use the plt.scatter() function, which operates pretty much the same way as plt.plot(), except it puts dots in for each data point without drawing lines between them.
Another very useful plot, especially in scientific circles, is the errorbar plot. This is a lot like the line plot, except each data point comes with an errorbar to quantify uncertainty or variance present in each datum.
Step7: You use the yerr argument of the function plt.errorbar() in order to specify what your error rate in the y-direction is. There's also an xerr optional argument, if your error is actually in the x-direction.
What about that statistics lecture we had not so long ago? We have a bunch of numbers and would like to visualize how they are distributed to see if we can make any inferences and predictions about that. Histograms to the rescue!
Step8: plt.hist() has only 1 required argument
Step9: The plt.imshow() method takes as input a matrix and renders it as an image. If the matrix is 3D, it considers this to be an image in RGB format (width, height, and 3 color dimensions) and uses that information to determine colors. If the matrix is only 2D, it will consider it to be grayscale.
It doesn't even have to be a "true" image. Often you want to look at a matrix that you're building, just to get a "feel" for the structure of it. imshow() is great for this as well.
Step10: We built a random matrix matrix, and as you can see it looks exactly like that
Step11: Legends
Going back to the idea of plotting multiple datasets on a single figure, it'd be nice to label them in addition to using colors to distinguish them. Luckily, we have legends we can use, but it takes a coordinated effort to use them effectively. Pay close attention
Step12: First, you'll notice that the plt.plot() call changed a little with the inclusion of an optional argument
Step13: This can potentially help center your visualizations, too.
Colors, markers, and colorbars
Matplotlib has a default progression of colors it uses in plots--you may have noticed the first data you plot is always blue, followed by green. You're welcome to stick with this, or you can manually override the colors scheme in any plot using the optional argument c (for color).
Step14: If you're making scatter plots, it can be especially useful to specify the type of marker in addition to the color you want to use. This can really help differentiate multiple scatter plots that are combined on one figure.
Step15: Finally, when you're rendering images, and especially matrices, it can help to have a colorbarthat shows the scale of colors you have in your image plot.
Step16: The matrix is clearly still random, but the colorbar tells us the values in the picture range from around -3.5 or so to +4, giving us an idea of what's in our data.
seaborn
The truth is, there is endless freedom in matplotlib to customize the look and feel; you could spend a career digging through the documentation to master the ability to change edge colors, line thickness, and marker transparencies. At least in my opinion, there's a better way. | Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: Lecture 22: Data Visualization
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
Data visualization is one of, if not the, most important method of communicating data science results. It's analogous to writing: if you can't visualize your results, you'll be hard-pressed to convince anyone else of them. By the end of this lecture, you should be able to
Define and describe some types of plots and what kinds of data they're used to visualize
Use the basic functionality of matplotlib to generate figures
Customize the look and feel of figures to suit particular formats
Part 1: Introduction to matplotlib
The Matplotlib package as we know it was originally conceived and designed by John Hunter in 2002, originally built as an IPython plugin to enable Matlab-style plotting.
IPython's creator, Fernando Perez, was at the time finishing his PhD and didn't have time to fully vet John's patch. So John took his fledgling plotting library and ran with it, releasing Matplotlib version 0.1 in 2003 and setting the stage for what would be the most flexible and cross-platform Python plotting library to date.
Matplotlib can run on a wide variety of operating systems and make use of a wide variety of graphical backends. Hence, despite some developers complaining that it can feel bloated and clunky, it easily maintains the largest active user base and team of developers, ensuring it will remain relevant in some sense for quite some time yet.
You've seen snippets of matplotlib in action in several assignments and lectures, but we haven't really formalized it yet. Like NumPy, matplotlib follows some use conventions.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
plt.plot(x, y)
Explanation: By far, we'll use the plt object from the second import the most; that contains the main plotting library.
Plotting in a script
Let's say you're coding a standalone Python application, contained in a file myapp.py. You'll need to explicitly tell matplotlib to generate a figure and display it, via the show() command.
<img src="https://raw.githubusercontent.com/eds-uga/csci1360e-su16/master/lectures/script.png" width="50%" />
Then you can run the code from the command line:
<pre>$ python myapp.py</pre>
Beware: plt.show() does a lot of things under-the-hood, including interacting with your operating system's graphical backend.
Matplotlib hides all these details from you, but as a consequence you should be careful to only use plt.show() once per Python session.
Multiple uses of show() can lead to unpredictable behavior that depends entirely on what backend is in use, so try your best to avoid it.
Plotting in a shell (e.g., IPython)
Remember back to our first lecture, when you learned how to fire up a Python prompt on the terminal? You can plot in that shell just as you can in a script!
<img src="https://raw.githubusercontent.com/eds-uga/csci1360e-su16/master/lectures/shell.png" width="75%" />
In addition, you can enter "matplotlib mode" by using the %matplotlib magic command in the IPython shell. You'll notice in the above screenshot that the prompt is hovering below line [6], but no line [7] has emerged. That's because the shell is currently not in matplotlib mode, so it will wait indefinitely until you close the figure on the right.
By contrast, in matplotlib mode, you'll immediately get the next line of the prompt while the figure is still open. You can then edit the properties of the figure dynamically to update the plot. To force an update, you can use the command plt.draw().
Plotting in a notebook (e.g., Jupyter)
This is probably the mode you're most familiar with: plotting in a notebook, such as the one you're viewing right now.
Since matplotlib's default is to render its graphics in an external window, for plotting in a notebook you will have to specify otherwise, as it's impossible to do this in a browser. You'll once again make use of the %matplotlib magic command, this time with the inline argument added to tell matplotlib to embed the figures into the notebook itself.
End of explanation
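For reference, a minimal version of the myapp.py script described above (the screenshot itself is not reproduced here) could be as simple as:
# myapp.py -- in a standalone script the figure only appears once show() is called
import matplotlib.pyplot as plt
import numpy as np
x = np.random.random(10)
y = np.random.random(10)
plt.plot(x, y)
plt.show()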
fig = plt.figure()
fig.canvas.get_supported_filetypes()
Explanation: Note that you do NOT need to use plt.show()! When in "inline" mode, matplotlib will automatically render whatever the "active" figure is as soon as you issue some kind of plotting command.
Saving plots to files
Sometimes you'll want to save the plots you're making to files for use later, perhaps as part of a presentation to demonstrate to your bosses what you've accomplished.
In this case, you once again won't use the plt.show() command, but instead substitute in the plt.savefig() command.
<img src="https://raw.githubusercontent.com/eds-uga/csci1360e-su16/master/lectures/savefig.png" width="75%" />
An image file will be created (in this case, fig.png) on the filesystem with the plot.
Matplotlib is designed to operate nicely with lots of different output formats; PNG was just the example used here.
The output format is inferred from the filename used in savefig(). You can see all the other formats matplotlib supports with the command
End of explanation
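And a minimal example of the savefig() workflow just described, writing the fig.png file mentioned in the text:
plt.plot(np.random.random(10), np.random.random(10))
plt.savefig("fig.png")  # the output format is inferred from the .png extension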
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([4, 5, 6])
y = np.array([9, 4, 7])
plt.plot(x, y)
Explanation: Part 2: Basics of plotting
Ok, let's dive in with some plotting examples and how-tos!
The most basic kind of plot you can make is the line plot. This kind of plot uses (x, y) coordinate pairs and implicitly draws lines between them. Here's an example:
End of explanation
x1 = np.array([4, 5, 6])
y1 = np.array([9, 4, 7])
plt.plot(x1, y1)
x2 = np.array([1, 2, 4])
y2 = np.array([4, 6, 9])
plt.plot(x2, y2)
Explanation: Matplotlib sees we've created points at (4, 9), (5, 4), and (6, 7), and it connects each of these in turn with a line, producing the above plot. It also automatically scales the x and y axes of the plot so all the data fit visibly inside.
An important side note: matplotlib is stateful, which means it has some memory of what commands you've issued. So if you want to, say, include multiple different plots on the same figure, all you need to do is issue additional plotting commands.
End of explanation
x = np.array([4, 5, 6])
y = np.array([9, 4, 7])
plt.scatter(x, y)
Explanation: They'll even be plotted in different colors. How nice!
Line plots are nice, but let's say I really want a scatter plot of my data; there's no real concept of a line, but instead I have disparate data points in 2D space that I want to visualize. There's a function for that!
End of explanation
# This is a great function that gives me 50 evenly-spaced values from 0 to 10.
x = np.linspace(0, 10, 50)
dy = 0.8 # The error rate.
y = np.sin(x) + dy * np.random.random(50) # Adds a little bit of noise.
plt.errorbar(x, y, yerr = dy)
Explanation: We use the plt.scatter() function, which operates pretty much the same way as plt.plot(), except it puts dots in for each data point without drawing lines between them.
Another very useful plot, especially in scientific circles, is the errorbar plot. This is a lot like the line plot, except each data point comes with an errorbar to quantify uncertainty or variance present in each datum.
End of explanation
x = np.random.normal(size = 100)
plt.hist(x, bins = 20)
Explanation: You use the yerr argument of the function plt.errorbar() in order to specify what your error rate in the y-direction is. There's also an xerr optional argument, if your error is actually in the x-direction.
What about that statistics lecture we had not so long ago? We have a bunch of numbers and would like to visualize how they are distributed to see if we can make any inferences and predictions about that. Histograms to the rescue!
End of explanation
import scipy.misc
img = scipy.misc.face()
plt.imshow(img)
Explanation: plt.hist() has only 1 required argument: a list of numbers. However, the optional bins argument is very useful, as it dictates how many bins you want to use to divide up the data in the required argument. Too many bins and every bar in the histogram will have a count of 1; too few bins and all your data will end up in just a single bar!
Picking the number of bins for histograms is an art unto itself that usually requires a lot of trial-and-error, hence the importance of having a good visualization setup!
The last type of plot we'll discuss here isn't really a "plot" in the same sense as the previous ones have been, but it is no less important: showing images!
End of explanation
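To see the bin trade-off concretely, try the same data with very different bin counts (a quick illustrative snippet, not from the original lecture):
x = np.random.normal(size = 100)
plt.hist(x, bins = 5)    # probably too coarse
plt.figure()
plt.hist(x, bins = 50)   # probably too fine for only 100 samples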
matrix = np.random.random((100, 100))
plt.imshow(matrix, cmap = "gray")
Explanation: The plt.imshow() method takes as input a matrix and renders it as an image. If the matrix is 3D, it considers this to be an image in RGB format (width, height, and 3 color dimensions) and uses that information to determine colors. If the matrix is only 2D, it will consider it to be grayscale.
It doesn't even have to be a "true" image. Often you want to look at a matrix that you're building, just to get a "feel" for the structure of it. imshow() is great for this as well.
End of explanation
x = np.linspace(0, 10, 50) # 50 evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y)
plt.xlabel("x") # This goes on the x-axis.
plt.ylabel("sin(x)") # This goes on the y-axis.
plt.title("Plot of sin(x)") # This goes at the top, as the plot title.
Explanation: We built a random matrix, matrix, and as you can see it looks exactly like that: in fact, a lot like TV static (coincidence?...). The cmap = "gray" optional argument specifies the "colormap", of which matplotlib has quite a few, but this explicitly enforces the "gray" colormap, otherwise matplotlib will attempt to predict a color scheme.
Part 3: Customizing the look and feel
You may be thinking at this point: this is all cool, but my inner graphic designer cringed at how a few of these plots looked. Is there any way to make them look, well, "nicer"?
There are, in fact, a couple things we can do to spiff things up a little, starting with how we can annotate the plots in various ways.
Axis labels and plot titles
You can add text along the axes and the top of the plot to give a little extra information about what, exactly, your plot is visualizing. For this you use the plt.xlabel(), plt.ylabel(), and plt.title() functions.
End of explanation
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y1 = np.sin(x) # Compute the sine of each of these numbers.
y2 = np.cos(x) # Compute the cosine of each number.
plt.plot(x, y1, label = "sin(x)")
plt.plot(x, y2, label = "cos(x)")
plt.legend(loc = 0)
Explanation: Legends
Going back to the idea of plotting multiple datasets on a single figure, it'd be nice to label them in addition to using colors to distinguish them. Luckily, we have legends we can use, but it takes a coordinated effort to use them effectively. Pay close attention:
End of explanation
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y)
plt.xlim([-1, 11]) # Range from -1 to 11 on the x-axis.
plt.ylim([-3, 3]) # Range from -3 to 3 on the y-axis.
Explanation: First, you'll notice that the plt.plot() call changed a little with the inclusion of an optional argument: label. This string is the label that will show up in the legend.
Second, you'll also see a call to plt.legend(). This instructs matplotlib to show the legend on the plot. The loc argument specifies the location; "0" tells matplotlib to "put the legend in the best possible spot, respecting where the graphics tend to be." This is usually the best option, but if you want to override this behavior and specify a particular location, the numbers 1-9 refer to different specific areas of the plot.
Axis limits
This will really come in handy when you need to make multiple plots that span different datasets, but which you want to compare directly. We've seen how matplotlib scales the axes so the data you're plotting are visible, but if you're plotting the data in entirely separate figures, matplotlib may scale the figures differently. If you need to set explicit axis limits:
End of explanation
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y, c = "cyan")
Explanation: This can potentially help center your visualizations, too.
Colors, markers, and colorbars
Matplotlib has a default progression of colors it uses in plots--you may have noticed the first data you plot is always blue, followed by green. You're welcome to stick with this, or you can manually override the color scheme in any plot using the optional argument c (for color).
End of explanation
X1 = np.random.normal(loc = [-1, -1], size = (10, 2))
X2 = np.random.normal(loc = [1, 1], size = (10, 2))
plt.scatter(X1[:, 0], X1[:, 1], c = "black", marker = "v")
plt.scatter(X2[:, 0], X2[:, 1], c = "yellow", marker = "o")
Explanation: If you're making scatter plots, it can be especially useful to specify the type of marker in addition to the color you want to use. This can really help differentiate multiple scatter plots that are combined on one figure.
End of explanation
matrix = np.random.normal(size = (100, 100))
plt.imshow(matrix, cmap = "gray")
plt.colorbar()
Explanation: Finally, when you're rendering images, and especially matrices, it can help to have a colorbar that shows the scale of colors you have in your image plot.
End of explanation
import seaborn as sns # THIS IS THE KEY TO EVERYTHING
x = np.linspace(0, 10, 50) # Evenly-spaced numbers from 0 to 10
y = np.sin(x) # Compute the sine of each of these numbers.
plt.plot(x, y)
Explanation: The matrix is clearly still random, but the colorbar tells us the values in the picture range from around -3.5 or so to +4, giving us an idea of what's in our data.
seaborn
The truth is, there is endless freedom in matplotlib to customize the look and feel; you could spend a career digging through the documentation to master the ability to change edge colors, line thickness, and marker transparencies. At least in my opinion, there's a better way.
End of explanation |
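One caveat worth adding (my note, not part of the original lecture): in newer seaborn releases simply importing the package no longer restyles matplotlib, so you may need to opt in explicitly:
import seaborn as sns
sns.set_theme()  # apply seaborn's default style (seaborn >= 0.11)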
6,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Checkpointer and PolicySaver
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: DQN agent
Just as in the previous Colab, we set up a DQN agent. In this Colab the details are not the main focus, so they are hidden by default, but you can click "Show code" to see them.
Hyperparameters
Step3: Environment
Step4: Agent
Step5: Data collection
Step6: Training the agent
Step8: Video generation
Step9: Video generation
Generate videos to check the performance of the policy.
Step10: Setting up the Checkpointer and PolicySaver
We are now ready to use Checkpointer and PolicySaver.
Checkpointer
Step11: Policy Saver
Step12: Train one iteration
Step13: Save to checkpoint
Step14: Restore from checkpoint
To restore from a checkpoint, the whole object has to be recreated in exactly the same way it was when the checkpoint was created.
Step15: We also save the policy and export it to a chosen location.
Step16: The policy can be loaded without any knowledge of the agent or network that was used to create it, which makes deploying the policy much easier.
Load the saved policy and check how it performs.
Step17: Export and import
The following helps you export/import the Checkpointer and policy directories so that you can continue training later and deploy the model without having to train again.
Go back to "Train one iteration" and train a few more times so that you can see the difference later. Once the results start to improve a little, continue below.
Step18: Create a zip file from the checkpoint directory.
Step19: Download the zip file.
Step20: After training for roughly 10-15 iterations, download the checkpoint zip file, go to "Runtime > Restart and run all" to reset the training, and come back to this cell. Then upload the downloaded zip file and continue training.
Step21: Once you have uploaded the checkpoint directory, go back to "Train one iteration" to continue training, or go back to "Video generation" to check the performance of the loaded policy.
Alternatively, you can save and restore the policy (model). Unlike the Checkpointer, you cannot continue training with it, but you can still deploy the model. Note that the downloaded file is much smaller than the Checkpointer's.
Step22: Upload the downloaded policy directory (exported_policy.zip) and check how the saved policy behaves.
Step23: SavedModelPyTFEagerPolicy
If you do not want to use a TF policy, you can also use the saved_model directly with a Python env via py_tf_eager_policy.SavedModelPyTFEagerPolicy.
Note that this only works when eager mode is enabled.
Step24: Convert the policy to TFLite
See "TensorFlow Lite inference" for details.
Step25: Run inference with the TFLite model
See "TensorFlow Lite inference" for details. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
#@test {"skip": true}
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg python-opengl
!pip install pyglet
!pip install 'imageio==2.4.0'
!pip install 'xvfbwrapper==0.2.9'
!pip install tf-agents[reverb]
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import io
import matplotlib
import matplotlib.pyplot as plt
import os
import shutil
import tempfile
import tensorflow as tf
import zipfile
import IPython
try:
from google.colab import files
except ImportError:
files = None
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import policy_saver
from tf_agents.policies import py_tf_eager_policy
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tempdir = os.getenv("TEST_TMPDIR", tempfile.gettempdir())
#@test {"skip": true}
# Set up a virtual display for rendering OpenAI gym environments.
import xvfbwrapper
xvfbwrapper.Xvfb(1400, 900, 24).start()
Explanation: Checkpointer and PolicySaver
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/10_checkpointer_policysaver_tutorial"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.org で表示</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
Introduction
tf_agents.utils.common.Checkpointer is a utility for saving/loading the training state, the policy state, and the replay_buffer state to/from local storage.
tf_agents.policies.policy_saver.PolicySaver is a tool for saving/loading only the policy, and it is lighter-weight than the Checkpointer. PolicySaver lets you deploy the model without any knowledge of the code that created the policy.
In this tutorial, we train a model with DQN and then use the Checkpointer and PolicySaver to show how to store and load states and models in an interactive way. Note that PolicySaver uses TF2.0's new saved_model tooling and format.
Setup
If you have not installed the following dependencies, run:
End of explanation
env_name = "CartPole-v1"
collect_steps_per_iteration = 100
replay_buffer_capacity = 100000
fc_layer_params = (100,)
batch_size = 64
learning_rate = 1e-3
log_interval = 5
num_eval_episodes = 10
eval_interval = 1000
Explanation: DQN agent
Just as in the previous Colab, we set up a DQN agent. In this Colab the details are not the main focus, so they are hidden by default, but you can click "Show code" to see them.
Hyperparameters
End of explanation
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
Explanation: Environment
End of explanation
#@title
q_net = q_network.QNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=global_step)
agent.initialize()
Explanation: Agent
End of explanation
#@title
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
collect_driver = dynamic_step_driver.DynamicStepDriver(
train_env,
agent.collect_policy,
observers=[replay_buffer.add_batch],
num_steps=collect_steps_per_iteration)
# Initial data collection
collect_driver.run()
# Dataset generates trajectories with shape [BxTx...] where
# T = n_step_update + 1.
dataset = replay_buffer.as_dataset(
num_parallel_calls=3, sample_batch_size=batch_size,
num_steps=2).prefetch(3)
iterator = iter(dataset)
Explanation: Data collection
End of explanation
#@title
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
def train_one_iteration():
# Collect a few steps using collect_policy and save to the replay buffer.
collect_driver.run()
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience)
iteration = agent.train_step_counter.numpy()
print ('iteration: {0} loss: {1}'.format(iteration, train_loss.loss))
Explanation: Training the agent
End of explanation
#@title
def embed_gif(gif_buffer):
  """Embeds a gif file in the notebook."""
tag = '<img src="data:image/gif;base64,{0}"/>'.format(base64.b64encode(gif_buffer).decode())
return IPython.display.HTML(tag)
def run_episodes_and_create_video(policy, eval_tf_env, eval_py_env):
num_episodes = 3
frames = []
for _ in range(num_episodes):
time_step = eval_tf_env.reset()
frames.append(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_tf_env.step(action_step.action)
frames.append(eval_py_env.render())
gif_file = io.BytesIO()
imageio.mimsave(gif_file, frames, format='gif', fps=60)
IPython.display.display(embed_gif(gif_file.getvalue()))
Explanation: Video generation
End of explanation
print ('global_step:')
print (global_step)
run_episodes_and_create_video(agent.policy, eval_env, eval_py_env)
Explanation: Video generation
Generate videos to check the performance of the policy.
End of explanation
checkpoint_dir = os.path.join(tempdir, 'checkpoint')
train_checkpointer = common.Checkpointer(
ckpt_dir=checkpoint_dir,
max_to_keep=1,
agent=agent,
policy=agent.policy,
replay_buffer=replay_buffer,
global_step=global_step
)
Explanation: Setting up the Checkpointer and PolicySaver
We are now ready to use Checkpointer and PolicySaver.
Checkpointer
End of explanation
policy_dir = os.path.join(tempdir, 'policy')
tf_policy_saver = policy_saver.PolicySaver(agent.policy)
Explanation: Policy Saver
End of explanation
#@test {"skip": true}
print('Training one iteration....')
train_one_iteration()
Explanation: Train one iteration
End of explanation
train_checkpointer.save(global_step)
Explanation: Save to checkpoint
End of explanation
train_checkpointer.initialize_or_restore()
global_step = tf.compat.v1.train.get_global_step()
Explanation: Restore from checkpoint
To restore from a checkpoint, the whole object has to be recreated in exactly the same way it was when the checkpoint was created.
End of explanation
tf_policy_saver.save(policy_dir)
Explanation: We also save the policy and export it to a chosen location.
End of explanation
saved_policy = tf.saved_model.load(policy_dir)
run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)
Explanation: The policy can be loaded without any knowledge of the agent or network that was used to create it, which makes deploying the policy much easier.
Load the saved policy and check how it performs.
End of explanation
#@title Create zip file and upload zip file (double-click to see the code)
def create_zip_file(dirname, base_filename):
return shutil.make_archive(base_filename, 'zip', dirname)
def upload_and_unzip_file_to(dirname):
if files is None:
return
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
shutil.rmtree(dirname)
zip_files = zipfile.ZipFile(io.BytesIO(uploaded[fn]), 'r')
zip_files.extractall(dirname)
zip_files.close()
Explanation: Export and import
The following helps you export/import the Checkpointer and policy directories so that you can continue training later and deploy the model without having to train again.
Go back to "Train one iteration" and train a few more times so that you can see the difference later. Once the results start to improve a little, continue below.
End of explanation
train_checkpointer.save(global_step)
checkpoint_zip_filename = create_zip_file(checkpoint_dir, os.path.join(tempdir, 'exported_cp'))
Explanation: Create a zip file from the checkpoint directory.
End of explanation
#@test {"skip": true}
if files is not None:
files.download(checkpoint_zip_filename) # try again if this fails: https://github.com/googlecolab/colabtools/issues/469
Explanation: Download the zip file.
End of explanation
#@test {"skip": true}
upload_and_unzip_file_to(checkpoint_dir)
train_checkpointer.initialize_or_restore()
global_step = tf.compat.v1.train.get_global_step()
Explanation: After training for roughly 10-15 iterations, download the checkpoint zip file, go to "Runtime > Restart and run all" to reset the training, and come back to this cell. Then upload the downloaded zip file and continue training.
End of explanation
tf_policy_saver.save(policy_dir)
policy_zip_filename = create_zip_file(policy_dir, os.path.join(tempdir, 'exported_policy'))
#@test {"skip": true}
if files is not None:
files.download(policy_zip_filename) # try again if this fails: https://github.com/googlecolab/colabtools/issues/469
Explanation: Once you have uploaded the checkpoint directory, go back to "Train one iteration" to continue training, or go back to "Video generation" to check the performance of the loaded policy.
Alternatively, you can save and restore the policy (model). Unlike the Checkpointer, you cannot continue training with it, but you can still deploy the model. Note that the downloaded file is much smaller than the Checkpointer's.
End of explanation
#@test {"skip": true}
upload_and_unzip_file_to(policy_dir)
saved_policy = tf.saved_model.load(policy_dir)
run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)
Explanation: Upload the downloaded policy directory (exported_policy.zip) and check how the saved policy behaves.
End of explanation
eager_py_policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(
policy_dir, eval_py_env.time_step_spec(), eval_py_env.action_spec())
# Note that we're passing eval_py_env not eval_env.
run_episodes_and_create_video(eager_py_policy, eval_py_env, eval_py_env)
Explanation: SavedModelPyTFEagerPolicy
If you do not want to use a TF policy, you can also use the saved_model directly with a Python env via py_tf_eager_policy.SavedModelPyTFEagerPolicy.
Note that this only works when eager mode is enabled.
End of explanation
converter = tf.lite.TFLiteConverter.from_saved_model(policy_dir, signature_keys=["action"])
tflite_policy = converter.convert()
with open(os.path.join(tempdir, 'policy.tflite'), 'wb') as f:
f.write(tflite_policy)
Explanation: Convert the policy to TFLite
See "TensorFlow Lite inference" for details.
End of explanation
import numpy as np
interpreter = tf.lite.Interpreter(os.path.join(tempdir, 'policy.tflite'))
policy_runner = interpreter.get_signature_runner()
print(policy_runner._inputs)
policy_runner(**{
'0/discount':tf.constant(0.0),
'0/observation':tf.zeros([1,4]),
'0/reward':tf.constant(0.0),
'0/step_type':tf.constant(0)})
Explanation: Run inference with the TFLite model
See "TensorFlow Lite inference" for details.
End of explanation |
6,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using GraphvizAnim to show how heapsort works
Import the required packages and instantiate the animation
Step1: Define a heap
Step2: Now draw it (nodes will be named as the array indices and labelled as the array values)
Step3: Define the usual iterative down heap procedure (endowed with animation actions)
Step4: Fix the heap calling down_heap on its lower half
Step5: And finally exchange the top with heap positions starting from the last one (fixing again with down_heap)
Step6: We are ready to plot the animation interactively!
Be patient | Python Code:
from gvanim import Animation
from gvanim.jupyter import interactive
ga = Animation()
Explanation: Using GraphvizAnim to show how heapsort works
Import the required packages and instantiate the animation
End of explanation
heap = [ None, 5, 6, 7, 8, 9, 10, 11, 12 ]
Explanation: Define a heap
End of explanation
ga.label_node( 1, heap[ 1 ] )
for i in range( 2, len( heap ) ):
ga.label_node( i, heap[ i ] )
ga.add_edge( i // 2, i )
Explanation: Now draw it (nodes will be named as the array indices and labelled as the array values)
End of explanation
def down_heap( i, n ):
t = heap[ i ]
while i <= n // 2:
ga.highlight_node( i )
ga.next_step()
j = 2 * i
if j < n and heap[ j ] < heap[ j + 1 ]: j += 1
ga.highlight_edge( i, j )
ga.next_step()
if t >= heap[ j ]: break
heap[ i ] = heap[ j ]
ga.highlight_node( i )
ga.highlight_node( j )
ga.label_node( i, heap[ i ] )
ga.label_node( j, heap[ j ] )
ga.next_step()
i = j
heap[ i ] = t
ga.highlight_node( i )
ga.label_node( i, heap[ i ] )
ga.next_step()
Explanation: Define the usual iterative down heap procedure (endowed with animation actions)
End of explanation
n = len( heap ) - 1
ga.next_step()
for i in range( n // 2, 0, -1 ):
down_heap( i, n )
Explanation: Fix the heap calling down_heap on its lower half
End of explanation
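As a quick sanity check of the build phase (my addition, separate from the animation): every internal node should now be at least as large as its children. This reuses heap and n from the cells above.
assert all( heap[ i ] >= heap[ 2 * i ] and ( 2 * i + 1 > n or heap[ i ] >= heap[ 2 * i + 1 ] )
            for i in range( 1, n // 2 + 1 ) )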
ga.next_step()
while n > 1:
heap[ 1 ], heap[ n ] = heap[ n ], heap[ 1 ]
ga.highlight_node( 1 )
ga.highlight_node( n )
ga.label_node( 1, heap[ 1 ] )
ga.label_node( n, heap[ n ] )
ga.next_step()
n -= 1
down_heap( 1, n )
Explanation: And finally exchange the top with heap positions starting from the last one (fixing again with down_heap)
End of explanation
interactive( ga, 400 )
Explanation: We are ready to plot the animation interactively!
Be patient: generating the required 68 graphs will take quite a bit of time; moreover, if Jupyter does not correctly resize the cell, just zoom the document in and out (with the browser).
End of explanation |
6,389 | Given the following text description, write Python code to implement the functionality described.
Description:
Given a positive integer n, return the count of the numbers of n-digit
positive integers that start or end with 1.
| Python Code:
def starts_one_ends(n):
if n == 1: return 1
return 18 * (10 ** (n - 2)) |
6,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wind Structure Calculator
This Python module provides a number of simple calculators with which the structure of line-driven winds can be determined following different analytic approaches. In the following, we briefly demonstrate the usage of these calculators using the example of the well-studied O-star zeta-Puppis.
Setting the stellar Parameters
These parameters describe the basic properties of the star. These are adopted from Noebauer & Sim 2015, tables 2 and 3.
Step1: CAK force multiplier parameters. Again these are adopted from Noebauer & Sim 2015, table 3
Step2: The grid of dimensionless radii (i.e. r/Rstar) for which the wind velocity and density will later be determined
Step3: Setting up the calculators
WindStructureCak75
Step4: Calculating the mass-loss rates
Step5: Determining and visualizing the wind structure
Let's first have a look at the wind velocity in absolute terms as predicted by the different descriptions of the wind structure.
Step6: The following illustration compares how fast the terminal wind speed is reached in the various wind descriptions
Step7: Finally, we have a look at the predicted wind density | Python Code:
mstar = 52.5 # mass; if no astropy units are provided, the calculators will assume units of solar masses
lstar = 1e6 # luminosity; if no astropy units are provided, the calculators will assume units of solar luminosities
teff = 4.2e4 # effective temperature; if no astropy units are provided, the calculators will assume kelvin
sigma = 0.3 # reference electron scattering cross section; if no astropy untis are provided, cm^2/g is assumed
gamma = 0.502 # Eddington factor with respect to electron scattering
Explanation: Wind Structure Calculator
This Python module provides a number of simple calculators with which the structure of line-driven winds can be determined following different analytic approaches. In the following, we briefly demonstrate the usage of these calculators using the example of the well-studied O-star zeta-Puppis.
Setting the stellar Parameters
These parameters describe the basic properties of the star. These are adopted from Noebauer & Sim 2015, tables 2 and 3.
End of explanation
alpha = 0.595
k = 0.381
Explanation: CAK force multiplier parameters. Again these are adopted from Noebauer & Sim 2015, table 3
End of explanation
x = np.logspace(-3, 3, 1024) + 1.
Explanation: The grid of dimensionless radii (i.e. r/Rstar) for which the wind velocity and density will later be determined
End of explanation
wind_cak75 = ws.WindStructureCak75(mstar=mstar, lstar=lstar, teff=teff, gamma=gamma, sigma=sigma, k=k, alpha=alpha)
wind_fa86 = ws.WindStructureFa86(mstar=mstar, lstar=lstar, teff=teff, gamma=gamma, sigma=sigma, k=k, alpha=alpha)
wind_kppa89 = ws.WindStructureKppa89(mstar=mstar, lstar=lstar, teff=teff, gamma=gamma, sigma=sigma, k=k, alpha=alpha)
winds = [wind_cak75, wind_fa86, wind_kppa89]
labels = ["CAK75", "FA86", "KPPA89"]
linestyles = ["solid", "dashed", "dashdot"]
Explanation: Setting up the calculators
WindStructureCak75: wind structure based on the seminal work by Castor, Abbott and Klein 1975; the central star is assumed to be a point source
WindStructureFa86: wind structure based on fits to the numerical results obtained by Friend and Abbott 1986; only the influence of the finite extent of the central star is taken into account
WindStructureKppa89: wind structure based on the approximate analytic description by Kudritzki, Pauldrach, Puls and Abbott 1989; the finite extent of the central star is taken into account
Note: in all wind structure calculators it is assumed that the ionization state is frozen-in, i.e. constant throughout the wind.
End of explanation
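For orientation (a standard result quoted from the CAK literature, not something computed by the module): the point-star CAK75 solution follows a beta-type velocity law,
$$ v(r) \simeq v_{\infty} \left(1 - \frac{R_{\star}}{r}\right)^{\beta}, \qquad \beta = \tfrac{1}{2}, $$
while the finite-disk corrected descriptions (FA86, KPPA89) typically behave more like $\beta \approx 0.8$ and therefore accelerate more gradually close to the star.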
print(" | vterm [km/s] | Mdot [solMass/yr] ")
print("==========================================")
for wind, label in zip(winds, labels):
print("{:6s} | {:7.2f} | {:.4e}".format(label, wind.vterm.value, wind.mdot.value))
Explanation: Calculating the mass-loss rates
End of explanation
plt.figure()
for wind, label, ls in zip(winds, labels, linestyles):
plt.plot(x, wind.v(x), ls=ls, label=label)
plt.legend(frameon=False, loc="lower right")
plt.xlabel(r"$r/R_{\star}$")
plt.ylabel(r"$v$ [km/s]")
plt.xlim([0.8, 1e1])
Explanation: Determining and visualizing the wind structure
Let's first have a look at the wind velocity in absolute terms as predicted by the different descriptions of the wind structure.
End of explanation
plt.figure()
for wind, label, ls in zip(winds, labels, linestyles):
plt.plot(x - 1., wind.v(x) / wind.vterm, ls=ls, label=label)
plt.xscale("log")
plt.xlim([1e-3, 1e3])
plt.ylim([0, 1])
plt.legend(loc="upper left", frameon=False)
plt.xlabel(r"$r/R_{\star} - 1$")
plt.ylabel(r"$v/v_{\infty}$")
Explanation: The following illustration compares how fast the terminal wind speed is reached in the various wind descriptions
End of explanation
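The density shown next follows from the velocity via mass continuity for a stationary, spherically symmetric outflow (a general relation, stated here for context):
$$ \rho(r) = \frac{\dot{M}}{4 \pi r^{2} v(r)}, $$
so, at fixed mass-loss rate, models that accelerate faster reach lower densities at a given radius.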
plt.figure()
for wind, label, ls in zip(winds, labels, linestyles):
plt.plot(x, wind.rho(x), ls=ls, label=label)
plt.yscale("log")
plt.ylim([1e-15, 1e-10])
plt.xlim([0.8, 10])
plt.xlabel(r"$r/R_{\star}$")
plt.ylabel(r"$\rho$ $[\mathrm{g\,cm^{-3}}]$")
plt.legend(loc="upper right", frameon=False)
Explanation: Finally, we have a look at the predicted wind density:
End of explanation |
6,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PySAL Change Log Statistics
This notebook pulls the summary statistics for use in the 6-month releases of PySAL, which is now (2017-07) a meta package.
It assumes the subpackages have been git cloned in a directory below the location of this notebook. It also requires network connectivity for some of the reporting.
Step9: Our last main release was 2017-11-03
Step10: The issues are pulled since the last release date of the meta package.
However, each package that is going into the meta release has a specific release tag that pins the code making it into the release. We don't want to report the commits post the package's tag date, so we have to do some filtering here before building our change log statistics for the meta package.
For now let's pickle the issues and pull records to filter later and not have to re-hit the GitHub API | Python Code:
from __future__ import print_function
import os
import json
import re
import sys
import pandas
from datetime import datetime, timedelta
from time import sleep
from subprocess import check_output
try:
from urllib import urlopen
except:
from urllib.request import urlopen
import ssl
import yaml
context = ssl._create_unverified_context()
with open('packages.yml') as package_file:
packages = yaml.safe_load(package_file)  # safe_load avoids the explicit Loader requirement in newer PyYAML
CWD = os.path.abspath(os.path.curdir)
Explanation: PySAL Change Log Statistics
This notebook pulls the summary statistics for use in the 6-month releases of PySAL, which is now (2017-07) a meta package.
It assumes the subpackages have been git cloned in a directory below the location of this notebook. It also requires network connectivity for some of the reporting.
End of explanation
start_date = '2017-11-03'
since_date = '--since="{start}"'.format(start=start_date)
since_date
since = datetime.strptime(start_date+" 0:0:0", "%Y-%m-%d %H:%M:%S")
since
from datetime import datetime, timedelta
ISO8601 = "%Y-%m-%dT%H:%M:%SZ"
PER_PAGE = 100
element_pat = re.compile(r'<(.+?)>')
rel_pat = re.compile(r'rel=[\'"](\w+)[\'"]')
def parse_link_header(headers):
link_s = headers.get('link', '')
urls = element_pat.findall(link_s)
rels = rel_pat.findall(link_s)
d = {}
for rel,url in zip(rels, urls):
d[rel] = url
return d
def get_paged_request(url):
    """get a full list, handling APIv3's paging"""
results = []
while url:
#print("fetching %s" % url, file=sys.stderr)
f = urlopen(url)
results.extend(json.load(f))
links = parse_link_header(f.headers)
url = links.get('next')
return results
def get_issues(project="pysal/pysal", state="closed", pulls=False):
    """Get a list of the issues from the Github API."""
which = 'pulls' if pulls else 'issues'
url = "https://api.github.com/repos/%s/%s?state=%s&per_page=%i" % (project, which, state, PER_PAGE)
return get_paged_request(url)
def _parse_datetime(s):
    """Parse dates in the format returned by the Github API."""
if s:
return datetime.strptime(s, ISO8601)
else:
return datetime.fromtimestamp(0)
def issues2dict(issues):
    """Convert a list of issues to a dict, keyed by issue number."""
idict = {}
for i in issues:
idict[i['number']] = i
return idict
def is_pull_request(issue):
    """Return True if the given issue is a pull request."""
return 'pull_request_url' in issue
def issues_closed_since(period=timedelta(days=365), project="pysal/pysal", pulls=False):
    """Get all issues closed since a particular point in time. period
    can either be a datetime object, or a timedelta object. In the
    latter case, it is used as a time before the present."""
which = 'pulls' if pulls else 'issues'
if isinstance(period, timedelta):
period = datetime.now() - period
url = "https://api.github.com/repos/%s/%s?state=closed&sort=updated&since=%s&per_page=%i" % (project, which, period.strftime(ISO8601), PER_PAGE)
allclosed = get_paged_request(url)
# allclosed = get_issues(project=project, state='closed', pulls=pulls, since=period)
filtered = [i for i in allclosed if _parse_datetime(i['closed_at']) > period]
# exclude rejected PRs
if pulls:
filtered = [ pr for pr in filtered if pr['merged_at'] ]
return filtered
def sorted_by_field(issues, field='closed_at', reverse=False):
    """Return a list of issues sorted by closing date."""
return sorted(issues, key = lambda i:i[field], reverse=reverse)
def report(issues, show_urls=False):
    """Summary report about a list of issues, printing number and title."""
# titles may have unicode in them, so we must encode everything below
if show_urls:
for i in issues:
role = 'ghpull' if 'merged_at' in i else 'ghissue'
print('* :%s:`%d`: %s' % (role, i['number'],
i['title'].encode('utf-8')))
else:
for i in issues:
print('* %d: %s' % (i['number'], i['title'].encode('utf-8')))
all_issues = {}
all_pulls = {}
total_commits = 0
issue_details = {}
pull_details = {}
for package in packages:
subpackages = packages[package].split()
for subpackage in subpackages:
prj = 'pysal/{subpackage}'.format(subpackage=subpackage)
os.chdir(CWD)
os.chdir('tmp/{subpackage}'.format(subpackage=subpackage))
#sub_issues = issues_closed_since(project=prj, period=since)
#sleep(5)
issues = issues_closed_since(since, project=prj,pulls=False)
pulls = issues_closed_since(since, project=prj,pulls=True)
issues = sorted_by_field(issues, reverse=True)
pulls = sorted_by_field(pulls, reverse=True)
issue_details[subpackage] = issues
pull_details[subpackage] = pulls
n_issues, n_pulls = map(len, (issues, pulls))
n_total = n_issues + n_pulls
all_issues[subpackage] = n_total, n_pulls
os.chdir(CWD)
Explanation: Our last main release was 2017-11-03:
End of explanation
import pickle
pickle.dump( issue_details, open( "issue_details.p", "wb" ) )
pickle.dump( pull_details, open("pull_details.p", "wb"))
Explanation: The issues are pulled since the last release date of the meta package.
However, each package that is going into the meta release has a specific release tag that pins the code making it into the release. We don't want to report the commits post the package's tag date, so we have to do some filtering here before building our change log statistics for the meta package.
For now let's pickle the issues and pull records to filter later and not have to re-hit the GitHub API
End of explanation |
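Later (a sketch of the intended follow-up; it only assumes the two pickle files written above exist in the working directory), the records can be reloaded without touching the GitHub API again:
issue_details = pickle.load(open("issue_details.p", "rb"))
pull_details = pickle.load(open("pull_details.p", "rb"))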
6,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Things go wrong when programming all the time. Some of these "problems" are errors that stop the program from making sense. Others are problems that stop the program from working in specific, special cases. These "problems" may be real, or we may want to treat them as special cases that don't stop the program from running.
These special cases can be dealt with using exceptions.
Exceptions
Let's define a function that divides two numbers.
Step2: But what happens if we try something really stupid?
Step3: So, the code works fine until we pass in input that we shouldn't. When we do, this causes the code to stop. To show how this can be a problem, consider the loop
Step4: There are three sensible results, but we only get the first.
There are many more complex, real cases where it's not obvious that we're doing something wrong ahead of time. In this case, we want to be able to try running the code and catch errors without stopping the code. This can be done in Python
Step5: The idea here is given by the names. Python will try to execute the code inside the try block. This is just like an if or a for block
Step6: We see that, as it makes no sense to divide by a string, we get a TypeError instead of a ZeroDivisionError. We could catch both errors
Step7: We could catch any error
Step8: This doesn't give us much information, and may lose information that we need in order to handle the error. We can capture the exception to a variable, and then use that variable
Step9: Here we have caught two possible types of error within the tuple (which must, in this case, have parentheses) and captured the specific error in the variable exception. This variable can then be used
Step11: The statements in the else block are only run if the try block succeeds. If it doesn't - if the statements in the try block raise an exception - then the statements in the else block are not run.
Exceptions in your own code
Sometimes you don't want to wait for the code to break at a low level, but instead stop when you know things are going to go wrong. This is usually because you can be more informative about what's going wrong. Here's a slightly artificial example
Step13: It should be obvious to the code that this is going to go wrong. Rather than letting the code hit the ZeroDivisionError exception automatically, we can raise it ourselves, with a more meaningful error message
Step15: There are a large number of standard exceptions in Python, and most of the time you should use one of those, combined with a meaningful error message. One is particularly useful
Step17: Testing
How do we know if our code is working correctly? It is not enough that the code runs and returns some value
Step18: First we check what happens if there are imaginary roots, using $x^2 + 1 = 0$
Step19: As we wanted, it has returned None. We also check what happens if the roots are zero, using $x^2 = 0$
Step20: We get the expected behaviour. We also check what happens if the roots are real, using $x^2 - 1 = 0$ which has roots $\pm 1$
Step22: Something has gone wrong. Looking at the code, we see that the x_minus line has been copied and pasted from the x_plus line, without changing the sign correctly. So we fix that error
Step23: We have changed the code, so now have to re-run all our tests, in case our change broke something else
Step24: As a final test, we check what happens if the equation degenerates to a linear equation where $a=0$, using $x + 1 = 0$ with solution $-1$
Step26: In this case we get an exception, which we don't want. We fix this problem
Step27: And we now must re-run all our tests again, as the code has changed once more
Step29: Formalizing tests
This small set of tests covers most of the cases we are concerned with. However, by this point it's getting hard to remember
what each line is actually testing, and
what the correct value is meant to be.
To formalize this, we write each test as a small function that contains this information for us. Let's start with the $x^2 - 1 = 0$ case where the roots are $\pm 1$
Step31: What this function does is check that the results of the function call match the expected value, here stored in roots. If it didn't match the expected value, it would raise an exception
Step33: Testing that one floating point number equals another can be dangerous. Consider $x^2 - 2 x + (1 - 10^{-10}) = 0$ with roots $1 \pm 10^{-5}$
Step35: We see that the solutions match to the first 14 or so digits, but this isn't enough for them to be exactly the same. In this case, and in most cases using floating point numbers, we want the result to be "close enough"
Step41: The assert_allclose statement takes options controlling the precision of our test.
We can now write out all our tests | Python Code:
from __future__ import division
def divide(numerator, denominator):
Divide two numbers.
Parameters
----------
numerator: float
numerator
denominator: float
denominator
Returns
-------
fraction: float
numerator / denominator
return numerator / denominator
print(divide(4.0, 5.0))
Explanation: Things go wrong when programming all the time. Some of these "problems" are errors that stop the program from making sense. Others are problems that stop the program from working in specific, special cases. These "problems" may be real, or we may want to treat them as special cases that don't stop the program from running.
These special cases can be dealt with using exceptions.
Exceptions
Let's define a function that divides two numbers.
End of explanation
print(divide(4.0, 0.0))
Explanation: But what happens if we try something really stupid?
End of explanation
denominators = [1.0, 0.0, 3.0, 5.0]
for denominator in denominators:
print(divide(4.0, denominator))
Explanation: So, the code works fine until we pass in input that we shouldn't. When we do, this causes the code to stop. To show how this can be a problem, consider the loop:
End of explanation
try:
print(divide(4.0, 0.0))
except ZeroDivisionError:
print("Dividing by zero is a silly thing to do!")
denominators = [1.0, 0.0, 3.0, 5.0]
for denominator in denominators:
try:
print(divide(4.0, denominator))
except ZeroDivisionError:
print("Dividing by zero is a silly thing to do!")
Explanation: There are three sensible results, but we only get the first.
There are many more complex, real cases where it's not obvious that we're doing something wrong ahead of time. In this case, we want to be able to try running the code and catch errors without stopping the code. This can be done in Python:
End of explanation
try:
print(divide(4.0, "zero"))
except ZeroDivisionError:
print("Dividing by zero is a silly thing to do!")
Explanation: The idea here is given by the names. Python will try to execute the code inside the try block. This is just like an if or a for block: each command that is indented in that block will be executed in order.
If, and only if, an error arises then the except block will be checked. If the error that is produced matches the one listed then instead of stopping, the code inside the except block will be run instead.
To show how this works with different errors, consider a different silly error:
End of explanation
try:
print(divide(4.0, "zero"))
except ZeroDivisionError:
print("Dividing by zero is a silly thing to do!")
except TypeError:
print("Dividing by a string is a silly thing to do!")
Explanation: We see that, as it makes no sense to divide by a string, we get a TypeError instead of a ZeroDivisionError. We could catch both errors:
End of explanation
try:
print(divide(4.0, "zero"))
except:
print("Some error occured")
Explanation: We could catch any error:
End of explanation
try:
print(divide(4.0, "zero"))
except (ZeroDivisionError, TypeError) as exception:
print("Some error occured: {}".format(exception))
Explanation: This doesn't give us much information, and may lose information that we need in order to handle the error. We can capture the exception to a variable, and then use that variable:
End of explanation
denominators = [1.0, 0.0, 3.0, "zero", 5.0]
results = []
divisors = []
for denominator in denominators:
try:
result = divide(4.0, denominator)
except (ZeroDivisionError, TypeError) as exception:
print("Error of type {} for denominator {}".format(exception, denominator))
else:
results.append(result)
divisors.append(denominator)
print(results)
print(divisors)
Explanation: Here we have caught two possible types of error within the tuple (which must, in this case, have parentheses) and captured the specific error in the variable exception. This variable can then be used: here we just print it out.
Normally, best practice is to be as specific as possible about the error you are trying to catch.
Extending the logic
Sometimes you may want to perform an action only if an error did not occur. For example, let's suppose we wanted to store the result of dividing 4 by a divisor, and also store the divisor, but only if the divisor is valid.
One way of doing this would be the following:
End of explanation
def divide_sum(numerator, denominator1, denominator2):
Divide a number by a sum.
Parameters
----------
numerator: float
numerator
denominator1: float
Part of the denominator
denominator2: float
Part of the denominator
Returns
-------
fraction: float
numerator / (denominator1 + denominator2)
return numerator / (denominator1 + denominator2)
divide_sum(1, 1, -1)
Explanation: The statements in the else block are only run if the try block succeeds. If it doesn't - if the statements in the try block raise an exception - then the statements in the else block are not run.
Exceptions in your own code
Sometimes you don't want to wait for the code to break at a low level, but instead stop when you know things are going to go wrong. This is usually because you can be more informative about what's going wrong. Here's a slightly artificial example:
End of explanation
def divide_sum(numerator, denominator1, denominator2):
Divide a number by a sum.
Parameters
----------
numerator: float
numerator
denominator1: float
Part of the denominator
denominator2: float
Part of the denominator
Returns
-------
fraction: float
numerator / (denominator1 + denominator2)
if (denominator1 + denominator2) == 0:
raise ZeroDivisionError("The sum of denominator1 and denominator 2 is zero!")
return numerator / (denominator1 + denominator2)
divide_sum(1, 1, -1)
Explanation: It should be obvious to the code that this is going to go wrong. Rather than letting the code hit the ZeroDivisionError exception automatically, we can raise it ourselves, with a more meaningful error message:
End of explanation
from math import sqrt
def real_quadratic_roots(a, b, c):
Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.
Parameters
----------
a : float
Coefficient of x^2
b : float
Coefficient of x^1
c : float
Coefficient of x^0
Returns
-------
roots : tuple
The roots
Raises
------
NotImplementedError
If the roots are not real.
discriminant = b**2 - 4.0*a*c
if discriminant < 0.0:
raise NotImplementedError("The discriminant is {}<0. "
"No real roots exist.".format(discriminant))
x_plus = (-b + sqrt(discriminant)) / (2.0*a)
x_minus = (-b - sqrt(discriminant)) / (2.0*a)
return x_plus, x_minus
print(real_quadratic_roots(1.0, 5.0, 6.0))
real_quadratic_roots(1.0, 1.0, 5.0)
Explanation: There are a large number of standard exceptions in Python, and most of the time you should use one of those, combined with a meaningful error message. One is particularly useful: NotImplementedError.
This exception is used when the behaviour the code is about to attempt makes no sense, is not defined, or similar. For example, consider computing the roots of the quadratic equation, but restricting to only real solutions. Using the standard formula
\begin{equation}
x_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{equation}
we know that this only makes sense if $b^2 \ge 4ac$. We put this in code as:
End of explanation
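As a quick illustration (not from the original notes), the exception raised above can be caught by the caller of the function we just defined, in exactly the same way as the earlier examples:
try:
    real_quadratic_roots(1.0, 1.0, 5.0)
except NotImplementedError as exception:
    print("No real roots: {}".format(exception))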
from math import sqrt
def real_quadratic_roots(a, b, c):
Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.
Parameters
----------
a : float
Coefficient of x^2
b : float
Coefficient of x^1
c : float
Coefficient of x^0
Returns
-------
roots : tuple or None
The roots
discriminant = b**2 - 4.0*a*c
if discriminant < 0.0:
return None
x_plus = (-b + sqrt(discriminant)) / (2.0*a)
x_minus = (-b + sqrt(discriminant)) / (2.0*a)
return x_plus, x_minus
Explanation: Testing
How do we know if our code is working correctly? It is not enough that the code runs and returns some value: as seen above, there may be times where it makes sense to stop the code even when it is correct, as it is being used incorrectly. We need to test the code to check that it works.
Unit testing is the idea of writing many small tests that check if simple cases are behaving correctly. Rather than trying to prove that the code is correct in all cases (which could be very hard), we check that it is correct in a number of tightly controlled cases (which should be more straightforward). If we later find a problem with the code, we add a test to cover that case.
Consider a function solving for the real roots of the quadratic equation again. This time, if there are no real roots we shall return None (to say there are no roots) instead of raising an exception.
End of explanation
print(real_quadratic_roots(1, 0, 1))
Explanation: First we check what happens if there are imaginary roots, using $x^2 + 1 = 0$:
End of explanation
print(real_quadratic_roots(1, 0, 0))
Explanation: As we wanted, it has returned None. We also check what happens if the roots are zero, using $x^2 = 0$:
End of explanation
print(real_quadratic_roots(1, 0, -1))
Explanation: We get the expected behaviour. We also check what happens if the roots are real, using $x^2 - 1 = 0$ which has roots $\pm 1$:
End of explanation
from math import sqrt
def real_quadratic_roots(a, b, c):
Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.
Parameters
----------
a : float
Coefficient of x^2
b : float
Coefficient of x^1
c : float
Coefficient of x^0
Returns
-------
roots : tuple or None
The roots
discriminant = b**2 - 4.0*a*c
if discriminant < 0.0:
return None
x_plus = (-b + sqrt(discriminant)) / (2.0*a)
x_minus = (-b - sqrt(discriminant)) / (2.0*a)
return x_plus, x_minus
Explanation: Something has gone wrong. Looking at the code, we see that the x_minus line has been copied and pasted from the x_plus line, without changing the sign correctly. So we fix that error:
End of explanation
print(real_quadratic_roots(1, 0, 1))
print(real_quadratic_roots(1, 0, 0))
print(real_quadratic_roots(1, 0, -1))
Explanation: We have changed the code, so now have to re-run all our tests, in case our change broke something else:
End of explanation
print(real_quadratic_roots(0, 1, 1))
Explanation: As a final test, we check what happens if the equation degenerates to a linear equation where $a=0$, using $x + 1 = 0$ with solution $-1$:
End of explanation
from math import sqrt
def real_quadratic_roots(a, b, c):
Find the real roots of the quadratic equation a x^2 + b x + c = 0, if they exist.
Parameters
----------
a : float
Coefficient of x^2
b : float
Coefficient of x^1
c : float
Coefficient of x^0
Returns
-------
roots : tuple or float or None
The root(s) (two if a genuine quadratic, one if linear, None otherwise)
Raises
------
NotImplementedError
If the equation has trivial a and b coefficients, so isn't solvable.
discriminant = b**2 - 4.0*a*c
if discriminant < 0.0:
return None
if a == 0:
if b == 0:
raise NotImplementedError("Cannot solve quadratic with both a"
" and b coefficients equal to 0.")
else:
return -c / b
x_plus = (-b + sqrt(discriminant)) / (2.0*a)
x_minus = (-b - sqrt(discriminant)) / (2.0*a)
return x_plus, x_minus
Explanation: In this case we get an exception, which we don't want. We fix this problem:
End of explanation
print(real_quadratic_roots(1, 0, 1))
print(real_quadratic_roots(1, 0, 0))
print(real_quadratic_roots(1, 0, -1))
print(real_quadratic_roots(0, 1, 1))
Explanation: And we now must re-run all our tests again, as the code has changed once more:
End of explanation
from numpy.testing import assert_equal, assert_allclose
def test_real_distinct():
Test that the roots of x^2 - 1 = 0 are \pm 1.
roots = (1.0, -1.0)
assert_equal(real_quadratic_roots(1, 0, -1), roots,
err_msg="Testing x^2-1=0; roots should be 1 and -1.")
test_real_distinct()
Explanation: Formalizing tests
This small set of tests covers most of the cases we are concerned with. However, by this point it's getting hard to remember
what each line is actually testing, and
what the correct value is meant to be.
To formalize this, we write each test as a small function that contains this information for us. Let's start with the $x^2 - 1 = 0$ case where the roots are $\pm 1$:
End of explanation
def test_should_fail():
Comparing the roots of x^2 - 1 = 0 to (1, 1), which should fail.
roots = (1.0, 1.0)
assert_equal(real_quadratic_roots(1, 0, -1), roots,
err_msg="Testing x^2-1=0; roots should be 1 and 1."
" So this test should fail")
test_should_fail()
Explanation: What this function does is check that the results of the function call match the expected value, here stored in roots. If it didn't match the expected value, it would raise an exception:
End of explanation
from math import sqrt
def test_real_distinct_irrational():
Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \pm 1e-5.
roots = (1 + 1e-5, 1 - 1e-5)
assert_equal(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,
err_msg="Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.")
test_real_distinct_irrational()
Explanation: Testing that one floating point number equals another can be dangerous. Consider $x^2 - 2 x + (1 - 10^{-10}) = 0$ with roots $1 \pm 10^{-5}$:
End of explanation
from math import sqrt
def test_real_distinct_irrational():
Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \pm 1e-5.
roots = (1 + 1e-5, 1 - 1e-5)
assert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,
err_msg="Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.")
test_real_distinct_irrational()
Explanation: We see that the solutions match to the first 14 or so digits, but this isn't enough for them to be exactly the same. In this case, and in most cases using floating point numbers, we want the result to be "close enough": to match the expected precision. There is an assertion for this as well:
End of explanation
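For reference, a small sketch (not part of the original lesson) of the options just mentioned: assert_allclose accepts rtol and atol arguments that set the relative and absolute tolerances of the comparison.
from numpy.testing import assert_allclose

# passes: the difference of 1e-9 is within the relative tolerance of 1e-8
assert_allclose(1.0 + 1e-9, 1.0, rtol=1e-8, atol=0.0)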
from math import sqrt
from numpy.testing import assert_equal, assert_allclose
def test_no_roots():
Test that the roots of x^2 + 1 = 0 are not real.
roots = None
assert_equal(real_quadratic_roots(1, 0, 1), roots,
err_msg="Testing x^2+1=0; no real roots.")
def test_zero_roots():
Test that the roots of x^2 = 0 are both zero.
roots = (0, 0)
assert_equal(real_quadratic_roots(1, 0, 0), roots,
err_msg="Testing x^2=0; should both be zero.")
def test_real_distinct():
Test that the roots of x^2 - 1 = 0 are \pm 1.
roots = (1.0, -1.0)
assert_equal(real_quadratic_roots(1, 0, -1), roots,
err_msg="Testing x^2-1=0; roots should be 1 and -1.")
def test_real_distinct_irrational():
Test that the roots of x^2 - 2 x + (1 - 10**(-10)) = 0 are 1 \pm 1e-5.
roots = (1 + 1e-5, 1 - 1e-5)
assert_allclose(real_quadratic_roots(1, -2.0, 1.0 - 1e-10), roots,
err_msg="Testing x^2-2x+(1-1e-10)=0; roots should be 1 +- 1e-5.")
def test_real_linear_degeneracy():
Test that the root of x + 1 = 0 is -1.
root = -1.0
assert_equal(real_quadratic_roots(0, 1, 1), root,
err_msg="Testing x+1=0; root should be -1.")
test_no_roots()
test_zero_roots()
test_real_distinct()
test_real_distinct_irrational()
test_real_linear_degeneracy()
Explanation: The assert_allclose statement takes options controlling the precision of our test.
We can now write out all our tests:
End of explanation |
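As a final illustrative sketch (not in the original notes), the hand-written calls above could be replaced by a small loop that runs every test function and reports which ones pass, which scales better as the number of tests grows:
tests = [test_no_roots, test_zero_roots, test_real_distinct,
         test_real_distinct_irrational, test_real_linear_degeneracy]

for test in tests:
    try:
        test()
        print("PASS:", test.__name__)
    except AssertionError as exception:
        print("FAIL:", test.__name__, exception)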
6,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: The sound signal
If you remember an earlier example, the sound function returns a value between 0 and 100, depending on the intensity of the sound. Let's try it again, but this time plotting its value on a graph.
To do that, we will collect data
Step2: Now we will plot the values we have read.
Step3: You will normally see values going up and down between 0 and 100, depending on how loudly you speak. You can also try clapping.
Then, to control the robot, we can make it react when the sound value rises above a certain threshold.
Remote control by sound
Write a program so that the robot goes straight until you tell it to stop, that is, until it detects a loud sound.
Step4: Controlled navigation
Modify the navigation program so that, instead of turning when the robot detects a touch, it does so when it detects a sound, for example a clap.
Step5: Using both sensors at the same time
Instead of replacing the touch condition with the sound one, we can also add that second condition to the first one. Programming languages can use logical operations to combine conditions. It is not as complicated as it seems; for example, the two conditions would be written like this
Step6: Indicating the direction
Controlling the robot only for it to then turn at random does not look great. It would be better if we could control the direction of the turn with sound. Remember that the robot does not recognize sounds, only changes in volume.
How could you tell it which way to turn? Perhaps, instead of using a random number, you could check the value of the sound sensor a second time. Then, with a single clap, the robot would turn to one side, and with two claps, to the other.
Step7: And the big challenge
Can you make a complete version with everything?
the robot goes straight as long as there is no touch and no loud sound is detected
if it detects a touch, it goes backwards and turns at random to the left or to the right
but if what it detected was a sound, it goes backwards, and if it detects a second sound it turns left, otherwise right
Complicated? Not that much, but you need to have the ideas clear!
Step8: Let's recap
Before continuing, disconnect the robot | Python Code:
from functions import connect, sound, forward, stop
connect()
Explanation: <img src="http://cdn.shopify.com/s/files/1/0059/3932/products/Aldebaran_Robotics_Nao_Humanoid_Robot_07_d2010cf6-cff3-468b-82c4-e2c7cfa6df9b.jpg?v=1439318386" align="right" width=200>
Sound sensor (microphone)
The robot's microphone detects the ambient noise. It cannot recognize words, but it can react to a clap or a shout. Other, more sophisticated robots, like the one in the picture, can indeed speak and recognize language.
On this page we will analyze the sensor's values and use them to control the robot.
First of all, we connect.
End of explanation
data = [] # run this code while you speak into the microphone
for i in range(200):
data.append(sound())
Explanation: The sound signal
If you remember an earlier example, the sound function returns a value between 0 and 100, depending on the intensity of the sound. Let's try it again, but this time plotting its value on a graph.
To do that, we will collect data: we will read the values of the function several times and store them in the computer's memory, so that we can plot them afterwards.
To store data in memory, programming languages use variables.
In the following example we use a variable called data, which will hold the list of values read from the sensor. It starts out empty, and inside the loop we keep appending values to it.
End of explanation
from functions import plot
plot(data)
Explanation: Now we will plot the values we have read.
End of explanation
while sound()<50:
forward()
stop()
Explanation: You will normally see values going up and down between 0 and 100, depending on how loudly you speak. You can also try clapping.
Then, to control the robot, we can make it react when the sound value rises above a certain threshold.
Remote control by sound
Write a program so that the robot goes straight until you tell it to stop, that is, until it detects a loud sound.
End of explanation
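A possible refinement (not part of the original exercise): instead of hard-coding the threshold of 50, it could be derived from the data recorded earlier, for example as the ambient average plus a margin.
# assumes the list `data` recorded above; the margin of 20 is an arbitrary choice
threshold = sum(data) / len(data) + 20
print(threshold)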
from functions import backward, left, right
from time import sleep
from random import random
try:
while True:
while sound()<50:
forward()
backward()
sleep(1)
if random() > 0.5:
left()
else:
right()
sleep(1)
except KeyboardInterrupt:
stop()
Explanation: Controlled navigation
Modify the navigation program so that, instead of turning when the robot detects a touch, it does so when it detects a sound, for example a clap.
End of explanation
from functions import backward, left, right, touch
from time import sleep
from random import random
try:
while True:
while sound()<50 and not touch():
forward()
backward()
sleep(1)
if random() > 0.5:
left()
else:
right()
sleep(1)
except KeyboardInterrupt:
stop()
Explanation: Using both sensors at the same time
Instead of replacing the touch condition with the sound one, we can also add that second condition to the first one. Programming languages can use logical operations to combine conditions. It is not as complicated as it seems; for example, the two conditions would be written like this:
while the sound is lower than 50 and there is no touch
To program it in Python, you only need to know that this "and" is written and ;-)
End of explanation
from functions import backward, left, right, touch
from time import sleep
from random import random
try:
while True:
while sound()<50 and not touch():
forward()
backward()
sleep(1)
if sound() < 50:
left()
else:
right()
sleep(1)
except KeyboardInterrupt:
stop()
Explanation: Indicating the direction
Controlling the robot only for it to then turn at random does not look great. It would be better if we could control the direction of the turn with sound. Remember that the robot does not recognize sounds, only changes in volume.
How could you tell it which way to turn? Perhaps, instead of using a random number, you could check the value of the sound sensor a second time. Then, with a single clap, the robot would turn to one side, and with two claps, to the other.
End of explanation
from functions import backward, left, right, touch
from time import sleep
from random import random
try:
while True:
while sound()<50 and not touch():
forward()
if touch():
backward()
sleep(1)
if random() < 0.5:
left()
else:
right()
sleep(1)
else:
backward()
sleep(1)
if sound() < 50:
left()
else:
right()
sleep(1)
except KeyboardInterrupt:
stop()
Explanation: And the big challenge
Can you make a complete version with everything?
the robot goes straight as long as there is no touch and no loud sound is detected
if it detects a touch, it goes backwards and turns at random to the left or to the right
but if what it detected was a sound, it goes backwards, and if it detects a second sound it turns left, otherwise right
Complicated? Not that much, but you need to have the ideas clear!
End of explanation
from functions import disconnect
disconnect()
Explanation: Let's recap
Before continuing, disconnect the robot:
End of explanation |
6,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Document AI Form Parser (async)
This notebook shows you how to analyze a set of PDFs using the Google Cloud DocumentAI API asynchronously.
Step1: Set your Processor Variables
Step3: The following code calls the asynchronous batch API and parses the form fields and values. | Python Code:
# Install necessary Python libraries and restart your kernel after.
!pip install -r ../requirements.txt
from google.cloud import documentai_v1beta3 as documentai
from google.cloud import storage
import os
import re
import pandas as pd
Explanation: Document AI Form Parser (async)
This notebook shows you how to analyze a set of PDFs using the Google Cloud DocumentAI API asynchronously.
End of explanation
# TODO(developer): Fill these variables with your values before running the sample
PROJECT_ID = "YOUR_PROJECT_ID_HERE"
LOCATION = "us" # Format is 'us' or 'eu'
PROCESSOR_ID = "PROCESSOR_ID" # Create processor in Cloud Console
GCS_INPUT_BUCKET = 'cloud-samples-data'
GCS_INPUT_PREFIX = 'documentai/async_forms/'
GCS_OUTPUT_URI = 'YOUR-OUTPUT-BUCKET'
GCS_OUTPUT_URI_PREFIX = 'TEST'
TIMEOUT = 300
Explanation: Set your Processor Variables
End of explanation
def process_document_sample():
# Instantiates a client
client_options = {"api_endpoint": "{}-documentai.googleapis.com".format(LOCATION)}
client = documentai.DocumentProcessorServiceClient(client_options=client_options)
storage_client = storage.Client()
blobs = storage_client.list_blobs(GCS_INPUT_BUCKET, prefix=GCS_INPUT_PREFIX)
document_configs = []
print("Input Files:")
for blob in blobs:
if ".pdf" in blob.name:
source = "gs://{bucket}/{name}".format(bucket = GCS_INPUT_BUCKET, name = blob.name)
print(source)
document_config = {"gcs_uri": source, "mime_type": "application/pdf"}
document_configs.append(document_config)
gcs_documents = documentai.GcsDocuments(
documents=document_configs
)
input_config = documentai.BatchDocumentsInputConfig(gcs_documents=gcs_documents)
destination_uri = f"{GCS_OUTPUT_URI}/{GCS_OUTPUT_URI_PREFIX}/"
# Where to write results
output_config = documentai.DocumentOutputConfig(
gcs_output_config={"gcs_uri": destination_uri}
)
# The full resource name of the processor, e.g.:
# projects/project-id/locations/location/processor/processor-id
# You must create new processors in the Cloud Console first.
name = f"projects/{PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}"
request = documentai.types.document_processor_service.BatchProcessRequest(
name=name,
input_documents=input_config,
document_output_config=output_config,
)
operation = client.batch_process_documents(request)
# Wait for the operation to finish
operation.result(timeout=TIMEOUT)
# Results are written to GCS. Use a regex to find
# output files
match = re.match(r"gs://([^/]+)/(.+)", destination_uri)
output_bucket = match.group(1)
prefix = match.group(2)
bucket = storage_client.get_bucket(output_bucket)
blob_list = list(bucket.list_blobs(prefix=prefix))
for i, blob in enumerate(blob_list):
# If JSON file, download the contents of this blob as a bytes object.
if ".json" in blob.name:
blob_as_bytes = blob.download_as_string()
print("downloaded")
document = documentai.types.Document.from_json(blob_as_bytes)
print(f"Fetched file {i + 1}")
# For a full list of Document object attributes, please reference this page:
# https://cloud.google.com/document-ai/docs/reference/rpc/google.cloud.documentai.v1beta3#document
document_pages = document.pages
keys = []
keysConf = []
values = []
valuesConf = []
# Grab each key/value pair and their corresponding confidence scores.
for page in document_pages:
for form_field in page.form_fields:
fieldName=get_text(form_field.field_name,document)
keys.append(fieldName.replace(':', ''))
nameConfidence = round(form_field.field_name.confidence,4)
keysConf.append(nameConfidence)
fieldValue = get_text(form_field.field_value,document)
values.append(fieldValue.replace(':', ''))
valueConfidence = round(form_field.field_value.confidence,4)
valuesConf.append(valueConfidence)
# Create a Pandas Dataframe to print the values in tabular format.
df = pd.DataFrame({'Key': keys, 'Key Conf': keysConf, 'Value': values, 'Value Conf': valuesConf})
display(df)
else:
print(f"Skipping non-supported file type {blob.name}")
# Extract shards from the text field
def get_text(doc_element: dict, document: dict):
Document AI identifies form fields by their offsets
in document text. This function converts offsets
to text snippets.
response = ""
# If a text segment spans several lines, it will
# be stored in different text segments.
for segment in doc_element.text_anchor.text_segments:
start_index = (
int(segment.start_index)
if segment in doc_element.text_anchor.text_segments
else 0
)
end_index = int(segment.end_index)
response += document.text[start_index:end_index]
return response
doc = process_document_sample()
Explanation: The following code calls the asynchronous batch API and parses the form fields and values.
End of explanation |
6,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Previous
1.6 Mapping keys to multiple values in a dictionary
Problem
How do you implement a dictionary that maps a key to multiple values (a so-called multidict)?
Solution
A dictionary is a mapping in which each key is associated with a single value. If you want a key to map to multiple values, you need to store those values in another container, such as a list or a set. For example, you can build such a dictionary like this:
Step1: Whether to use a list or a set depends on your actual needs. Use a list if you want to preserve the insertion order of the elements, and use a set if you want to eliminate duplicates (and don't care about the order).
You can conveniently use defaultdict from the collections module to build such a dictionary. A feature of defaultdict is that it automatically initializes the value for each key the first time it is used, so you only need to worry about adding elements. For example:
Step2: Note that defaultdict automatically creates a mapping entry for any key that is about to be accessed (even if the key does not currently exist in the dictionary). If you don't want that behavior, you can use the setdefault() method on an ordinary dictionary instead. For example: | Python Code:
d = {
"a" : [1, 2, 3],
"b" : [4, 5]
}
e = {
"a" : {1, 2, 3},
"b" : {4, 5}
}
Explanation: Previous
1.6 Mapping keys to multiple values in a dictionary
Problem
How do you implement a dictionary that maps a key to multiple values (a so-called multidict)?
Solution
A dictionary is a mapping in which each key is associated with a single value. If you want a key to map to multiple values, you need to store those values in another container, such as a list or a set. For example, you can build such a dictionary like this:
End of explanation
from collections import defaultdict
d = defaultdict(list)
d["a"].append(1)
d["a"].append(2)
d["b"].append(4)
d = defaultdict(set)
d["a"].add(1)
d["a"].add(2)
d["b"].add(4)
Explanation: Whether to use a list or a set depends on your actual needs. Use a list if you want to preserve the insertion order of the elements, and use a set if you want to eliminate duplicates (and don't care about the order).
You can conveniently use defaultdict from the collections module to build such a dictionary. A feature of defaultdict is that it automatically initializes the value for each key the first time it is used, so you only need to worry about adding elements. For example:
End of explanation
d = {} #A regular dictionary
d.setdefault("a", []).append(1)
d.setdefault("a", []).append(2)
d.setdefault("b", []).append(4)
Explanation: Note that defaultdict automatically creates a mapping entry for any key that is about to be accessed (even if the key does not currently exist in the dictionary). If you don't want that behavior, you can use the setdefault() method on an ordinary dictionary instead. For example:
End of explanation |
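A small sketch (not in the original text) of the side effect described above: with defaultdict, merely looking up a missing key silently creates an entry for it, which a plain dict with setdefault() does not do on lookups.
from collections import defaultdict

d = defaultdict(list)
_ = d["missing"]   # the lookup alone inserts the key
print(d)           # defaultdict(<class 'list'>, {'missing': []})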
6,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyOpenCGA Basic User Usage
[NOTE] The server methods used by pyopencga client are defined in the following swagger URL
Step1: Now is time to import pyopencga modules.
You have two options
a) You can import pyopencga directly (skip next section) if you have installed pyopencga with pip install pyopencga (remember to use sudo unless you are using your own Python
install or virtualenv)
b) If you need to import from the source code remember that Python3 does not accept relative importing, so you need to append the module path to sys.path
Preparing the environment for importing from source
Step2: Importing pyopencga
Step3: Creating some useful functions to manage the results
Step4: Setup client and login
Configuration and Credentials
You need to provide a server URL in the standard configuration format for OpenCGA as a dict or in a json file
Regarding credentials, if you don't pass the password, it will be asked for interactively without echo.
Step5: Creating ConfigClient for server connection configuration
Step6: Initialize the client configuration
You can pass a dictionary to the ClientConfiguration
Step7: Make the login
Step8: You are now connected to OpenCGA
Working with Users
Step9: The demo user has no projects of its own, but it has access to some projects from the opencga user.
Let's see how to find that out.
We need to list the project info from the project client, not from the user client.
We use the method search()
And remember that OpenCGA REST objects encapsulate the result inside the responses property, so we need to access the first element of the responses array.
Step10: User demo has access to one project called opencga@exomes_grch37
note | Python Code:
# Initialize PYTHONPATH for pyopencga
import sys
import os
from pprint import pprint
Explanation: pyOpenCGA Basic User Usage
[NOTE] The server methods used by pyopencga client are defined in the following swagger URL:
- http://bioinfo.hpc.cam.ac.uk/opencga-demo/webservices
For tutorials and more info about accessing the OpenCGA REST please read the documentation at http://docs.opencb.org/display/opencga/Python
Loading pyOpenCGA
End of explanation
cwd = os.getcwd()
print("current_dir: ...."+cwd[-10:])
base_modules_dir = os.path.dirname(cwd)
print("base_modules_dir: ...."+base_modules_dir[-10:])
sys.path.append(base_modules_dir)
Explanation: Now is time to import pyopencga modules.
You have two options
a) You can import pyopencga directly (skip next section) if you have installed pyopencga with pip install pyopencga (remember to use sudo unless you are using your own Python
install or virtualenv)
b) If you need to import from the source code remember that Python3 does not accept relative importing, so you need to append the module path to sys.path
Preparing the environment for importing from source
End of explanation
from pyopencga.opencga_config import ClientConfiguration
from pyopencga.opencga_client import OpenCGAClient
import json
Explanation: Importing pyopencga
End of explanation
def get_not_private_methods(client):
all_methods = dir(client)
# showing all methods (except the ones starting with "_", as they are private for the API)
methods = [method for method in all_methods if not method.startswith("_")]
return methods
Explanation: Creating some useful functions to manage the results
End of explanation
# server host
# user credentials
user = "demo"
passwd = "demo"
# the user demo access projects from user opencga
prj_owner = "opencga"
Explanation: Setup client and login
Configuration and Credentials
You need to provide a server URL in the standard configuration format for OpenCGA as a dict or in a json file
Regarding credentials, if you don't pass the password, it will be asked for interactively without echo.
End of explanation
# Creating ClientConfiguration dict
host = 'http://bioinfo.hpc.cam.ac.uk/opencga-demo'
config_dict = {"rest": {
"host": host
}
}
print("Config information:\n",config_dict)
Explanation: Creating ConfigClient for server connection configuration
End of explanation
config = ClientConfiguration(config_dict)
oc = OpenCGAClient(config)
Explanation: Initialize the client configuration
You can pass a dictionary to the ClientConfiguration
End of explanation
# here we put only the user in order to be asked for the password interactively
oc.login(user)
Explanation: Make the login
End of explanation
# Listing available methods for the user client object
user_client = oc.users
# showing all methods (except the ones starting with "_", as they are private for the API)
get_not_private_methods(user_client)
## getting user information
## [NOTE] User needs the query_id string directly --> (user)
uc_info = user_client.info(user).responses[0]['results'][0]
print("user info:")
print("name: {}\towned_projects: {}".format(uc_info["name"], len(uc_info["projects"])))
Explanation: You are now connected to OpenCGA
Working with Users
End of explanation
## Getting user projects
## [NOTE] Client specific methods have the query_id as a key:value (i.e (user=user_id))
project_client = oc.projects
projects_info = project_client.search().responses[0]["results"]
for project in projects_info:
print("Name: {}\tfull_id: {}".format(project["name"], project["fqn"]))
Explanation: The demo user has no projects of its own, but it has access to some projects from the opencga user.
Let's see how to find that out.
We need to list the project info from the project client, not from the user client.
We use the method search()
And remember that OpenCGA REST objects encapsulate the result inside the responses property, so we need to access the first element of the responses array.
End of explanation
project_client = oc.projects
get_not_private_methods(project_client)
## Getting all projects from logged in user
project_client = oc.projects
projects_list = project_client.search().responses[0]["results"]
for project in projects_list:
print("Name: {}\tfull_id: {}".format(project["name"], project["fqn"]))
## Getting information from a specific project
project_name = 'exomes_grch37'
project_info = project_client.info(project_name).responses[0]['results'][0]
#show the studies
for study in project_info['studies']:
print("project:{}\nstudy:{}\ttype:{}".format(project_name, study['name'], study['type'] ))
print('--')
## Fetching the studies from a project using the studies method
results = project_client.studies(project_name).responses[0]['results']
for result in results:
pprint(result)
Explanation: User demo has access to one project called opencga@exomes_grch37
note: in OpenCGA, projects and studies have a fully qualified name (fqn) with the format [owner]@[project]:[study]
Working with Projects
End of explanation |
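As a small illustrative sketch (not part of the original tutorial), an fqn string with the format described above can be split into its parts; the study name used here is hypothetical.
fqn = "opencga@exomes_grch37:some_study"   # hypothetical study name, for illustration only
owner, rest = fqn.split("@")
project, study = rest.split(":")
print(owner, project, study)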
6,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark Context
Let's start by creating a SparkContext - the entry point to a Spark application. The parameter 'local[*]' means that we create the Spark cluster locally, using all of the machine's cores. Next we check that everything is working fine.
Step1: RDD
An RDD is just a collection (like a list), but distributed. You can create one using the sc.parallelize method, which consumes a normal collection.
Step2: You can convert it back to python list using collect method
Step3: RDD operations
Main operations on RDD are transformations
Step4: Task 2 Write count using reduce
Step5: Task 3 Write sum using aggregate described here http
Step6: Task 4 Get the biggest value in rdd2 (use reduce)
Step7: Task 5 Get the second biggest value from rdd2, use reduce or aggregate.
Step8: Pair RDD
In many cases it is convenient to work with an RDD that consists of key-value pairs. We can get one by counting the different remainders in rdd2.
Step9: We only want the lengths of the ResultIterables, so we map the values of the key-value pairs
Step10: Let's sort it by count | Python Code:
import pyspark
sc = pyspark.SparkContext('local[*]')
# do something to prove it works
rdd = sc.parallelize(range(1000))
rdd.takeSample(False, 5)
Explanation: Spark Context
Let's start by creating a SparkContext - the entry point to a Spark application. The parameter 'local[*]' means that we create the Spark cluster locally, using all of the machine's cores. Next we check that everything is working fine.
End of explanation
rdd = sc.parallelize(range(100000))
rdd
Explanation: RDD
An RDD is just a collection (like a list), but distributed. You can create one using the sc.parallelize method, which consumes a normal collection.
End of explanation
rdd.collect()
Explanation: You can convert it back to python list using collect method:
End of explanation
assert rdd.map(lambda x: x * 13 % 33).take(34)[-1] == ...
Explanation: RDD operations
Main operations on RDD are transformations:
* map - apply a function to every element of the collection;
* filter - filter collection using predicate;
* flatMap - apply a function that changes each element into a collection and flatten the results;
... and actions (a. k. a. aggregations):
* collect - converts RDD to a list;
* count - counts the number of elements in the RDD;
* take(n) - takes first n elements of the RDD and returns a list;
* takeSample(withReplacement, n) - takes a sample of RDD of n elements;
* reduce(function) - reduces the collection using function;
* aggregate - aggregates elements of an RDD.
See http://spark.apache.org/docs/1.6.0/api/python/pyspark.html#pyspark.RDD for more information and other useful functions.
Remember: Transformations are lazy and actions are eager.
Task 1 Fill the ... below to avoid an AssertionError:
End of explanation
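Before tackling the tasks, here is a quick illustration (not one of the tasks) combining a transformation with some actions on the rdd defined above: filter is lazy, so nothing runs until take or count is called.
multiples_of_seven = rdd.filter(lambda x: x % 7 == 0)   # lazy transformation
print(multiples_of_seven.take(5))    # eager action: [0, 7, 14, 21, 28]
print(multiples_of_seven.count())    # eager action: 14286 values below 100000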
def fun(x, y):
return "something"
assert rdd.count() == rdd.reduce(fun)
Explanation: Task 2 Write count using reduce:
End of explanation
assert rdd.sum() == rdd.aggregate(0, lambda x, y: ..., lambda x, y: ...)
rdd2 = rdd.flatMap(lambda x: [x**2 % 8609, x**3 % 8609])
rdd2.take(10)
Explanation: Task 3 Write sum using aggregate described here http://spark.apache.org/docs/1.6.0/api/python/pyspark.html#pyspark.RDD.aggregate
End of explanation
assert rdd2.max() == rdd2.reduce(lambda x, y: ...)
Explanation: Task 4 Get the biggest value in rdd2 (use reduce):
End of explanation
rdd2.aggregate(..., ..., ...)
Explanation: Task 5 Get the second biggest value from rdd2, use reduce or aggregate.
End of explanation
reminders = rdd2.groupBy(lambda x: x)
reminders.take(10)
Explanation: Pair RDD
In many cases it is convenient to work with an RDD that consists of key-value pairs. We can get one by counting the different remainders in rdd2.
End of explanation
reminders_counts = reminders.mapValues(lambda x: len(x))
reminders_counts.take(10)
Explanation: We only want the lengths of the ResultIterables, so we map the values of the key-value pairs:
End of explanation
reminders_counts.sortBy(lambda x: x[1], ascending=False).take(15)
Explanation: Let's sort it by count:
End of explanation |
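As a closing sketch (not part of the exercises), the same counts can be obtained without materialising the intermediate ResultIterables, using map and reduceByKey:
remainder_counts = rdd2.map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b)
remainder_counts.sortBy(lambda x: x[1], ascending=False).take(15)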
6,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch normalization
When the parameters of a model change during learning, the distributions of each hidden layer's inputs also change, so each layer constantly has to adapt to those changes and to the noise they produce. Batch-normalizing a network is the process that smooths out these changes: BN is applied to the input of each neuron so that the input to each activation function has mean equal to 0 and variance equal to 1. The formula used in this function is the following
Step1: Leaky Relu
The Rectifier (Rectified Linear Unit) is an activation defined as max(0,x). This is also known as a ramp function.
The Leaky ReLU activation is a variant of ReLU, defined as max(x, alpha*x). Leaky ReLU has been shown to work well with images, avoiding the dying-ReLU problem.
Step2: BCE
Calculate the cross entropy between y and y'. This value is going to be used by the optimizer
Step3: GENERATOR AND DISCRIMINATOR FUNCTIONS
These two methods are basically the two multilayer networks used in a GAN; both use weights initialized from a random normal distribution with a standard deviation of 0.02. We have used relu as the activation function for the generator and leakyRelu for the discriminator. At each step we concatenate the Y labels, either as Y or as yb, and they act like a conditioning bias in this network. We have only used two convolution operations (conv2d and conv2d_transpose) to simplify the results and to reduce computation time.
Step4: Model
The model connects the generator and the discriminator and computes the different quantities to optimize during training. These values are calculated with the bce function, which computes the cross entropy between the discriminator's output (on the real or the generated image) and the corresponding labels.
Step5: Optimizer
The AdamOptimizer is used with a learning rate of 0.0002 and a beta1 of 0.5. These parameters determine how fast the weights and biases change. This function builds both the generator's and the discriminator's optimizers.
Step6: Sample Generator
This is the sample generator used to draw samples during training. These samples let us see how the generator improves and produces more accurate images as training progresses.
Step7: Aux functions
Step8: Load the DATA
Step9: Training Part | Python Code:
def batchnormalization(X, eps=1e-8, W=None, b=None):
if X.get_shape().ndims == 4:
mean = tf.reduce_mean(X, [0,1,2])
standar_desviation = tf.reduce_mean(tf.square(X-mean), [0,1,2])
X = (X - mean) / tf.sqrt(standar_desviation + eps)
if W is not None and b is not None:
W = tf.reshape(W, [1,1,1,-1])
b = tf.reshape(b, [1,1,1,-1])
X = X*W + b
elif X.get_shape().ndims == 2:
mean = tf.reduce_mean(X, 0)
standar_desviation = tf.reduce_mean(tf.square(X-mean), 0)
X = (X - mean) / tf.sqrt(standar_desviation + eps)
if W is not None and b is not None:
W = tf.reshape(W, [1,-1])
b = tf.reshape(b, [1,-1])
X = X*W + b
return X
Explanation: Batch normalization
When the parameters of a model change during learning, the distributions of each hidden layer's inputs also change, so each layer constantly has to adapt to those changes and to the noise they produce. Batch-normalizing a network is the process that smooths out these changes: BN is applied to the input of each neuron so that the input to each activation function has mean equal to 0 and variance equal to 1. The formula used in this function is the following:
X = (x - E[x]) / sqrt(Var[x] + eps)
The rank of the input depends on where in the process the normalization is applied: it can be a rank-2 or a rank-4 tensor depending on the step.
End of explanation
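A quick sanity check of the function above (not part of the original notebook), assuming the TensorFlow 1.x API used throughout: after normalization a random batch should have a mean close to 0 and a standard deviation close to 1.
X_check = tf.random_normal([16, 8], mean=3.0, stddev=2.0)
with tf.Session() as check_session:
    x_normalized = check_session.run(batchnormalization(X_check))
print(x_normalized.mean(), x_normalized.std())   # approximately 0 and 1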
def leakyRelu(X, alpha=0.2):
return tf.maximum(X,tf.multiply(X, alpha))
Explanation: Leaky Relu
The Rectifier (Rectified Linear Unit) is an activation defined as max(0,x). This is also known as a ramp function.
The Leaky ReLU activation is a variant of ReLU, defined as max(x, alpha*x). Leaky ReLU has been shown to work well with images, avoiding the dying-ReLU problem.
End of explanation
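A quick check of leakyRelu (illustrative only, again assuming the TF 1.x session API): negative inputs are scaled by alpha while positive inputs pass through unchanged.
with tf.Session() as check_session:
    print(check_session.run(leakyRelu(tf.constant([-1.0, 0.0, 2.0]))))   # [-0.2  0.  2.]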
def bce(x, z):
x = tf.clip_by_value(x, 1e-7, 1. - 1e-7)
return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = x, labels = z))
Explanation: BCE
Calculate the cross entropy between y and y'. This value is going to be used by the optimizer
End of explanation
def MultilayerPerceptronGenerator(Z, Y, batch_size):
kernel_W1 = [int(Z.get_shape()[1] + Y.get_shape()[1]), dim_W1]
kernel_W2 = [dim_W1 + int(Y.get_shape()[1]), dim_W2*7*7]
kernel_W3 = [5, 5, dim_W3, dim_W2 + int(Y.get_shape()[1])]
kernel_W4 = [5, 5, dim_channel, dim_W3 + int(Y.get_shape()[1])]
gen_W1 = tf.get_variable("gen_W1", kernel_W1, initializer=tf.random_normal_initializer(stddev=0.02))
gen_W2 = tf.get_variable("gen_W2", kernel_W2, initializer=tf.random_normal_initializer(stddev=0.02))
gen_W3 = tf.get_variable("gen_W3", kernel_W3, initializer=tf.random_normal_initializer(stddev=0.02))
gen_W4 = tf.get_variable("gen_W4", kernel_W4, initializer=tf.random_normal_initializer(stddev=0.02))
yb = tf.reshape(Y, [batch_size, 1, 1, int(Y.get_shape()[1])])
Z = tf.concat([Z, Y], axis=1)
op1 = tf.nn.relu(batchnormalization(tf.matmul(Z, gen_W1)))
op1 = tf.concat([op1, Y], axis=1)
op2 = tf.nn.relu(batchnormalization(tf.matmul(op1, gen_W2)))
op2 = tf.reshape(op2, [batch_size, 7, 7, dim_W2])
op2 = tf.concat([op2, yb*tf.ones([batch_size, 7, 7, int(Y.get_shape()[1])])], axis = 3)
op3 = tf.nn.conv2d_transpose(op2, gen_W3, output_shape=[batch_size, 14, 14, dim_W3], strides=[1,2,2,1])
op3 = tf.nn.relu(batchnormalization(op3))
op3 = tf.concat([op3, yb*tf.ones([batch_size, 14, 14, Y.get_shape()[1]])], axis = 3)
op4 = tf.nn.conv2d_transpose(op3, gen_W4, output_shape=[batch_size, 28, 28, dim_channel], strides=[1,2,2,1])
return op4
def MultilayerPerceptronDiscriminator(image, Y, batch_size):
kernel_W1 = [5, 5, dim_channel + int(dim_y), dim_W3]
kernel_W2 = [5, 5, dim_W3 + int(dim_y), dim_W2]
kernel_W3 = [dim_W2*7*7 + int(dim_y), dim_W1]
kernel_W4 = [dim_W1 + int(dim_y), 1]
dis_W1 = tf.get_variable("dis_W1", kernel_W1, initializer=tf.random_normal_initializer(stddev=0.02))
dis_W2 = tf.get_variable("dis_W2", kernel_W2, initializer=tf.random_normal_initializer(stddev=0.02))
dis_W3 = tf.get_variable("dis_W3", kernel_W3, initializer=tf.random_normal_initializer(stddev=0.02))
dis_W4 = tf.get_variable("dis_W4", kernel_W4, initializer=tf.random_normal_initializer(stddev=0.02))
yb = tf.reshape(Y, tf.stack([batch_size, 1, 1, int(Y.get_shape()[1])]))
X = tf.concat([image, yb*tf.ones([batch_size, 28, 28, int(Y.get_shape()[1])])], axis = 3)
op1 = leakyRelu( tf.nn.conv2d( X, dis_W1, strides=[1, 2, 2, 1], padding='SAME'))
op1 = tf.concat([op1, yb*tf.ones([batch_size, 14, 14, int(Y.get_shape()[1])])], axis = 3)
op2 = leakyRelu( tf.nn.conv2d( op1, dis_W2, strides=[1, 2, 2, 1], padding='SAME'))
op2 = tf.reshape(op2, [batch_size, -1])
op2 = tf.concat([op2, Y], axis = 1)
op3 = leakyRelu(batchnormalization(tf.matmul(op2, dis_W3)))
op3 = tf.concat([op3, Y], axis = 1)
p = tf.nn.sigmoid(tf.matmul(op3, dis_W4))
return p, op3
Explanation: GENERATOR AND DISCRIMINATOR FUNCTIONS
These two methods are basically the two multilayer networks used in a GAN; both use weights initialized from a random normal distribution with a standard deviation of 0.02. We have used relu as the activation function for the generator and leakyRelu for the discriminator. At each step we concatenate the Y labels, either as Y or as yb, and they act like a conditioning bias in this network. We have only used two convolution operations (conv2d and conv2d_transpose) to simplify the results and to reduce computation time.
End of explanation
def createModel(batch_size):
Z = tf.placeholder(tf.float32, [batch_size, dim_z])
Y = tf.placeholder(tf.float32, [batch_size, dim_y])
image_real = tf.placeholder(tf.float32, [batch_size] + image_shape)
op4_generated = MultilayerPerceptronGenerator(Z,Y, batch_size)
image_generate = tf.nn.sigmoid(op4_generated)
with tf.variable_scope("discriminator_variables") as scope:
p_real, raw_real = MultilayerPerceptronDiscriminator(image_real, Y, batch_size)
scope.reuse_variables()
p_gen, raw_gen = MultilayerPerceptronDiscriminator(image_generate, Y, batch_size)
dis_cost_real = bce(raw_real, tf.ones_like(raw_real))
dis_cost_gen = bce(raw_gen, tf.zeros_like(raw_gen))
dis_cost = dis_cost_real + dis_cost_gen
gen_cost = bce (raw_gen, tf.ones_like(raw_gen))
return Z, Y, image_real, dis_cost, gen_cost, p_real, p_gen
Explanation: Model
The model connects the generator and the discriminator and computes the different quantities to optimize during training. These values are calculated with the bce function, which computes the cross entropy between the discriminator's output (on the real or the generated image) and the corresponding labels.
End of explanation
def optimizer_function(d_cost_tf, g_cost_tf, dis_vars, gen_vars):
train_op_dis = tf.train.AdamOptimizer(learning_rate, beta1=0.5).minimize(d_cost_tf, var_list=dis_vars)
train_op_gen = tf.train.AdamOptimizer(learning_rate, beta1=0.5).minimize(g_cost_tf, var_list=gen_vars)
return train_op_dis, train_op_gen
Explanation: Optimizer
The AdamOptimizer is used with a learning rate of 0.0002 and a beta1 of 0.5. These parameters determine how fast the weights and biases change. This function builds both the generator's and the discriminator's optimizers.
End of explanation
def sample_creator(dimension):
Z = tf.placeholder(tf.float32, [dimension, dim_z])
Y = tf.placeholder(tf.float32, [dimension, dim_y])
op4 = MultilayerPerceptronGenerator(Z,Y,dimension)
image = tf.nn.sigmoid(op4)
return Z,Y,image
Explanation: Sample Generator
This is the sample generator used to draw samples during training. These samples let us see how the generator improves and produces more accurate images as training progresses.
End of explanation
def OneHot(X, n=None, negative_class=0.):
X = np.asarray(X).flatten()
if n is None:
n = np.max(X) + 1
Xoh = np.ones((len(X), n)) * negative_class
Xoh[np.arange(len(X)), X] = 1.
return Xoh
def save_visualization(X, nh_nw, save_path='tmp/sample.jpg'):
h,w = X.shape[1], X.shape[2]
img = np.zeros((h * nh_nw[0], w * nh_nw[1], 3))
for n,x in enumerate(X):
j = n // nh_nw[1]
i = n % nh_nw[1]
img[j*h:j*h+h, i*w:i*w+w, :] = x
scipy.misc.imsave(save_path, img)
Explanation: Aux functions
End of explanation
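An illustrative usage of OneHot (not in the original notebook): integer labels become one-hot rows, here with 10 classes as used for the MNIST digits.
print(OneHot([3, 0, 7], n=10))   # 3x10 array with a single 1 per row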
sys.path.append('..')
data_dir = 'data/'
def mnist():
fd = open(os.path.join(data_dir,'train-images.idx3-ubyte'))
loaded = np.fromfile(file=fd,dtype=np.uint8)
trX = loaded[16:].reshape((60000,28*28)).astype(float)
fd = open(os.path.join(data_dir,'train-labels.idx1-ubyte'))
loaded = np.fromfile(file=fd,dtype=np.uint8)
trY = loaded[8:].reshape((60000))
fd = open(os.path.join(data_dir,'t10k-images.idx3-ubyte'))
loaded = np.fromfile(file=fd,dtype=np.uint8)
teX = loaded[16:].reshape((10000,28*28)).astype(float)
fd = open(os.path.join(data_dir,'t10k-labels.idx1-ubyte'))
loaded = np.fromfile(file=fd,dtype=np.uint8)
teY = loaded[8:].reshape((10000))
trY = np.asarray(trY)
teY = np.asarray(teY)
return trX, teX, trY, teY
def mnist_with_valid_set():
trX, teX, trY, teY = mnist()
train_inds = np.arange(len(trX))
np.random.shuffle(train_inds)
trX = trX[train_inds]
trY = trY[train_inds]
vaX = trX[50000:]
vaY = trY[50000:]
trX = trX[:50000]
trY = trY[:50000]
return trX, vaX, teX, trY, vaY, teY
train_data, validation_data, test_data, train_label, validation_label, test_label = mnist_with_valid_set()
print("Train set of : " + str(train_data.shape))
print("Train label of : " + str(train_label.shape))
print("Test set of : " + str(test_data.shape))
print("Test label of : " + str(test_label.shape))
print("Validation set of : " + str(validation_data.shape))
print("Validation label of : " + str(validation_label.shape))
Explanation: Load the DATA
End of explanation
n_epochs = 100
learning_rate = 0.0002
batch_size = 128
image_shape = [28,28,1]
dim_z = 100
dim_W1 = 1024
dim_W2 = 128
dim_W3 = 64
dim_channel = 1
dim_y = 10
visualize_dimension=196
with tf.variable_scope("training_part") as scope:
Z_tf, Y_tf, image_tf, d_cost_tf, g_cost_tf, p_real, p_gen = createModel(batch_size)
session = tf.InteractiveSession()
saver = tf.train.Saver(max_to_keep=10)
scope.reuse_variables()
Z_sample, Y_sample, image_sample = sample_creator(visualize_dimension)
dis_vars = filter(lambda x: x.name.startswith(scope.name+'/dis'), tf.global_variables())
gen_vars = filter(lambda x: x.name.startswith(scope.name+'/gen'), tf.global_variables())
dis_vars = [i for i in dis_vars]
gen_vars = [i for i in gen_vars]
train_op_dis, train_op_gen = optimizer_function(d_cost_tf, g_cost_tf, dis_vars, gen_vars)
tf.global_variables_initializer().run()
Z_np_sample = np.random.uniform(-1, 1, size=(visualize_dimension, dim_z))
Y_np_sample = OneHot(np.random.randint(10, size=[visualize_dimension]))
iterations = 0
k = 2
#Information variables of the training process
sample_creation = 200 #Iteration where a sample is going to be created
show_information = 25 #Iteration where the information is going to be showed
print("Starting the training process")
for epoch in range(n_epochs):
index = np.arange(len(train_label))
np.random.shuffle(index)
train_data = train_data[index]
train_label = train_label[index]
for start, end in zip(
range(0, len(train_label), batch_size),
range(batch_size, len(train_label), batch_size)
):
Xs = train_data[start:end].reshape( [-1, 28, 28, 1]) / 255.
Ys = OneHot(train_label[start:end])
Zs = np.random.uniform(-1, 1, size=[batch_size, dim_z]).astype(np.float32)
if np.mod( iterations, k ) != 0:
_, gen_loss_val = session.run([train_op_gen, g_cost_tf],feed_dict={Z_tf:Zs,Y_tf:Ys})
discrim_loss_val, p_real_val, p_gen_val = session.run([d_cost_tf,p_real,p_gen],feed_dict={Z_tf:Zs, image_tf:Xs, Y_tf:Ys})
else:
_, discrim_loss_val = session.run([train_op_dis, d_cost_tf],feed_dict={Z_tf:Zs,Y_tf:Ys,image_tf:Xs})
gen_loss_val, p_real_val, p_gen_val = session.run([g_cost_tf, p_real, p_gen],feed_dict={Z_tf:Zs, image_tf:Xs, Y_tf:Ys})
if np.mod(iterations, show_information) == 0:
print("========== Showing information =========")
print("iteration:", iterations)
print("gen loss:", gen_loss_val)
print("discrim loss:", discrim_loss_val)
print("Average P(real)=", p_real_val.mean())
print("Average P(gen)=", p_gen_val.mean())
if np.mod(iterations, sample_creation) == 0:
generated_sample = session.run(image_sample,feed_dict={Z_sample:Z_np_sample,Y_sample:Y_np_sample})
generated_samples = (generated_sample + 1.)/2.
save_visualization(generated_samples, (14,14), save_path='image/sample_%04d.jpg' % int(iterations/sample_creation))
iterations += 1
Explanation: Training Part
End of explanation |
6,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification datasets using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot-encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-word representation for sentences
1. Learn how to use DNN/CNN/RNN model to classify text in keras
Introduction
In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., we want to be left with nytimes
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times).
Step15: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that
Step16: Let's now implement a function create_sequence that will
* take as input our titles as well as the maximum sentence length and
* returns a list of the integers corresponding to our tokens padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
Step17: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
Step18: Preparing the train/test splits
Let's split our data into train and test splits
Step19: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
Step20: Using create_sequence and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded list of integers and the labels will be one-hot-encoded 3D vectors.
Step21: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Step22: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
Step23: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Step24: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs)
Step25: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Step26: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
Step27: (Optional) Using the Keras Text Preprocessing Layer
Thanks to the new Keras preprocessing layer, we can also include the preprocessing of the text (i.e., the tokenization followed by the integer representation of the tokens) within the model itself as a standard Keras layer. Let us first import this text preprocessing layer
Step28: At instantiation, we can specify the maximum length of the sequence output as well as the maximum number of tokens to be considered
Step29: Before using this layer in our model, we need to adapt it to our data so that it generates a token-to-integer mapping. Remember our dataset looks like the following
Step30: We can directly use the Pandas Series corresponding to the titles in our dataset to adapt the data using the adapt method
Step31: At this point, the preprocessing layer can create the integer representation of our input text if we simply apply the layer to it
Step32: Exercise
Step33: Our model is now able to consume text directly as input! Again, consider the following text sample
Step34: Then we can have our model directly predict on this input
Step35: Of course the model above has not yet been trained, so its predictions are meaningless so far.
Let us train it now on our dataset as before | Python Code:
import os
import pandas as pd
from google.cloud import bigquery
%load_ext google.cloud.bigquery
Explanation: Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification dataset using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot-encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-words representation for sentences
1. Learn how to use DNN/CNN/RNN models to classify text in Keras
Introduction
In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded lists of integers, and all models will start with a Keras Embedding layer that transforms the integers representing the words into dense vectors.
The first model will be a simple bag-of-words DNN model that averages up the word vectors and feeds the resulting tensor to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a "bag-of-words"). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN, allowing us to achieve the same performance as with the DNN model but in far fewer epochs.
End of explanation
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env BUCKET = {PROJECT}
%env REGION = "us-central1"
SEED = 0
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
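# As a quick illustration of the same parsing in plain Python (the URL below is a
# made-up example, not part of the original notebook):
import re

url = "http://mobile.nytimes.com/2015/10/some-article.html"
domain = re.search(r".*://(.[^/]+)/", url).group(1)  # 'mobile.nytimes.com'
source = domain.split(".")[::-1][1]  # 'nytimes'
print(domain, source)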
regex = ".*://(.[^/]+)/"
sub_query =
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
.format(
regex
)
query =
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
.format(
sub_query=sub_query
)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print(f"The full dataset contains {len(title_dataset)} titles")
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
DATADIR = "./data/"
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = "titles_full.csv"
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
SAMPLE_DATASET_NAME = "titles_sample.csv"
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset.head()
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import (
GRU,
Conv1D,
Dense,
Embedding,
Flatten,
Lambda,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
Explanation: Let's write the sample dataset to disk.
End of explanation
LOGDIR = "./text_models"
DATA_DIR = "./data"
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ["title", "source"]
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
Explanation: Loading the dataset
Our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times).
End of explanation
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
Explanation: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:
End of explanation
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = tokenizer.texts_to_sequences(texts)
padded_sequences = pad_sequences(sequences, max_len, padding="post")
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
Explanation: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* returns a list of the integers corresponding to our tokens padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
End of explanation
CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2}
N_CLASSES = len(CLASSES)
# TODO 2
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes)
return one_hots
encode_labels(titles_df.source[:4])
Explanation: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
End of explanation
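# For intuition, to_categorical maps the three integer classes to one-hot rows
# (illustrative check, not part of the original notebook):
# to_categorical([0, 1, 2]) returns
#   [[1., 0., 0.],
#    [0., 1., 0.],
#    [0., 0., 1.]]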
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN],
titles_df.source[:N_TRAIN],
)
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:],
titles_df.source[N_TRAIN:],
)
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
sources_train.value_counts()
sources_valid.value_counts()
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
Explanation: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded list of integers and the labels will be one-hot-encoded 3D vectors.
End of explanation
def build_dnn_model(embed_dim):
model = Sequential(
[
Embedding(
VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]
), # TODO 3
Lambda(lambda x: tf.reduce_mean(x, axis=1)), # TODO 4
Dense(N_CLASSES, activation="softmax"), # TODO 5
]
)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
Explanation: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
End of explanation
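# To make the averaging Lambda layer concrete, here is a minimal shape check
# (illustrative sketch; it reuses the tf import from above):
demo_embeddings = tf.random.uniform([2, 5, 10])  # (batch, sentence length, embed_dim)
demo_average = tf.reduce_mean(demo_embeddings, axis=1)  # one vector per title
print(demo_average.shape)  # (2, 10)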
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "dnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 5
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(dnn_history.history)[["accuracy", "val_accuracy"]].plot()
dnn_model.summary()
Explanation: Below we train the model for 100 epochs, adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of epochs specified by PATIENCE. Note that we also give the model.fit method a TensorBoard callback so that we can later compare all the models using TensorBoard.
End of explanation
def build_rnn_model(embed_dim, units):
model = Sequential(
[
Embedding(
VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True
), # TODO 3
GRU(units), # TODO 5
Dense(N_CLASSES, activation="softmax"),
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
Explanation: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
End of explanation
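# A minimal illustration of what mask_zero=True does (assumed toy values, not part
# of the original notebook): padded positions (zeros) are masked out for the GRU.
demo_embedding = Embedding(input_dim=VOCAB_SIZE + 1, output_dim=3, mask_zero=True)
print(demo_embedding.compute_mask(tf.constant([[12, 7, 0, 0]])))  # [[True, True, False, False]]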
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "rnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 2
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
rnn_model.summary()
Explanation: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs):
End of explanation
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential(
[
Embedding(
VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True
), # TODO 3
Conv1D( # TODO 5
filters=filters,
kernel_size=ksize,
strides=strides,
activation="relu",
),
Flatten(), # TODO 5
Dense(N_CLASSES, activation="softmax"),
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
Explanation: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
End of explanation
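# With padding='valid' (the Conv1D default), the number of output positions is
# floor((input_len - kernel_size) / strides) + 1; for example, if MAX_LEN were 26,
# then (26 - 3) // 2 + 1 = 12 positions, flattened to 12 * 200 = 2400 features
# before the softmax layer (illustrative arithmetic; the value depends on your data).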
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "cnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 2
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(cnn_history.history)[["accuracy", "val_accuracy"]].plot()
cnn_model.summary()
Explanation: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
End of explanation
from keras.layers import TextVectorization
Explanation: (Optional) Using the Keras Text Preprocessing Layer
Thanks to the new Keras preprocessing layer, we can also include the preprocessing of the text (i.e., the tokenization followed by the integer representation of the tokens) within the model itself as a standard Keras layer. Let us first import this text preprocessing layer:
End of explanation
MAX_LEN = 26
MAX_TOKENS = 20000
preprocessing_layer = TextVectorization(
output_sequence_length=MAX_LEN, max_tokens=MAX_TOKENS
)
Explanation: At instantiation, we can specify the maximum length of the sequence output as well as the maximum number of tokens to be considered (by default, the layer also lowercases the text and strips punctuation before splitting on whitespace):
End of explanation
titles_df.head()
Explanation: Before using this layer in our model, we need to adapt it to our data so that it generates a token-to-integer mapping. Remember our dataset looks like the following:
End of explanation
preprocessing_layer.adapt(titles_df.title)
Explanation: We can directly use the Pandas Series corresponding to the titles in our dataset to adapt the data using the adapt method:
End of explanation
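# Optionally, we can inspect the vocabulary the layer has learned; the first two
# entries are reserved for padding and out-of-vocabulary tokens:
print(preprocessing_layer.get_vocabulary()[:5])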
X_train, X_valid = titles_train, titles_valid
X_train[:5]
integers = preprocessing_layer(X_train[:5])
integers
Explanation: At this point, the preprocessing layer can create the integer representation of our input text if we simply apply the layer to it:
End of explanation
def build_model_with_text_preprocessing(embed_dim, units):
model = Sequential(
[
preprocessing_layer,
Embedding(
VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True
), # TODO 3
GRU(units),
Dense(N_CLASSES, activation="softmax"),
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
Explanation: Exercise: In the following cell, implement a function
build_model_with_text_preprocessing(embed_dim, units) that returns a text model with the following sequential structure:
the preprocessing_layer we defined above followed by
an embedding layer with embed_dim dimension for the output vectors followed by
a GRU layer with units number of neurons followed by
a final dense layer for classification
End of explanation
X_train[:5]
Explanation: Our model is now able to consume text directly as input! Again, consider the following text sample:
End of explanation
model = build_model_with_text_preprocessing(embed_dim=EMBED_DIM, units=UNITS)
model.predict(X_train[:5])
Explanation: Then we can have our model directly predict on this input:
End of explanation
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "rnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 2
model = build_model_with_text_preprocessing(embed_dim=EMBED_DIM, units=UNITS)
history = model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
model.summary()
Explanation: Of course the model above has not yet been trained, so its predictions are meaningless so far.
Let us train it now on our dataset as before:
End of explanation |