4,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is a distribution?
An object-oriented exploration of one of the most useful concepts in statistics.
Copyright 2016 Allen Downey
MIT License
Step14: Playing dice with the universe
One of the recurring themes of my books is the use of object-oriented programming to explore mathematical ideas. Many mathematical entities are hard to define because they are so abstract. Representing them in Python puts the focus on what operations each entity supports -- that is, what the objects can do -- rather than on what they are.
In this notebook, I explore the idea of a probability distribution, which is one of the most important ideas in statistics, but also one of the hardest to explain.
To keep things concrete, I'll start with one of the usual examples
Step15: Each Pmf contains a dictionary named d that contains the values and probabilities. To show how this class is used, I'll create a Pmf that represents a six-sided die
Step16: Initially the "probabilities" are all 1, so the total probability in the Pmf is 6, which doesn't make a lot of sense. In a proper, meaningful, PMF, the probabilities add up to 1, which implies that one outcome, and only one outcome, will occur (for any given roll of the die).
We can take this "unnormalized" distribution and make it a proper Pmf using the normalize method. Here's what the method looks like
Step17: normalize adds up the probabilities in the PMF and divides through by the total. The result is a Pmf with probabilities that add to 1.
Here's how it's used
Step18: The fundamental operation provided by a Pmf is a "lookup"; that is, we can look up an outcome and get the corresponding probability. Pmf provides __getitem__, so we can use bracket notation to look up an outcome
Step19: And if you look up a value that's not in the Pmf, the probability is 0.
Step20: Exercise
Step21: Is that all there is?
So is a Pmf a distribution? No. At least in this framework, a Pmf is one of several representations of a distribution. Other representations include the cumulative distribution function, or CDF, and the characteristic function.
These representations are equivalent in the sense that they all contain the same information; if I give you any one of them, you can figure out the others (and we'll see how soon).
So why would we want different representations of the same information? The fundamental reason is that there are many different operations we would like to perform with distributions; that is, questions we would like to answer. Some representations are better for some operations, but none of them is the best for all operations.
So what are the questions we would like a distribution to answer? They include
Step22: Python dictionaries are implemented using hash tables, so we expect __getitem__ to be fast. In terms of algorithmic complexity, it is constant time, or $O(1)$.
Moments and expectations
The Pmf representation is also good for computing mean, variance, and other moments. Here's the implementation of Pmf.mean
Step23: This implementation is efficient, in the sense that it is $O(n)$, and because it uses a comprehension to traverse the outcomes, the overhead is low.
The implementation of Pmf.var is similar
Step24: And here's how they are used
Step25: The structure of mean and var is the same
Step26: As an example, we can use expect to compute the third central moment of the distribution
Step27: Because the distribution is symmetric, the third central moment is 0.
Addition
The next question we'll answer is the last one on the list
Step28: The outer loop traverses the outcomes and probabilities of the first Pmf; the inner loop traverses the second Pmf. Each time through the loop, we compute the sum of the outcome pair, v1 and v2, and the probability that the pair occurs.
Note that this method implicitly assumes that the two processes are independent; that is, the outcome from one does not affect the other. That's why we can compute the probability of the pair by multiplying the probabilities of the outcomes.
To demonstrate this method, we'll start with d6 again. Here's what it looks like
Step29: When we use the + operator, Python invokes the __add__ method, which returns a new Pmf object. Here's the Pmf that represents the sum of two dice
Step30: And here's the Pmf that represents the sum of three dice.
Step31: As we add up more dice, the result converges to the bell shape of the Gaussian distribution.
Exercise
Step38: Cumulative probabilities
The next few questions on the list are related to the median and other percentiles. They are harder to answer with the Pmf representation, but easier with a cumulative distribution function (CDF).
A CDF is a map from an outcome, $x$, to its cumulative probability, which is the probability that the outcome is less than or equal to $x$. In math notation
Step40: compute_cumprobs takes a dictionary that maps outcomes to probabilities, sorts the outcomes in increasing order, then makes two NumPy arrays
Step41: Here's how we use it to create a Cdf object for the sum of three dice
Step42: Because we have to sort the values, the time to compute a Cdf is $O(n \log n)$.
Here's what the CDF looks like
Step43: The range of the CDF is always from 0 to 1.
Now we can compute $CDF(x)$ by searching the xs to find the right location, or index, and then looking up the corresponding probability. Because the xs are sorted, we can use bisection search, which is $O(\log n)$.
Cdf provides cumprobs, which takes an array of values and returns the corresponding probabilities
Step44: The details here are a little tricky because we have to deal with some "off by one" problems, and if any of the values are less than the smallest value in the Cdf, we have to handle that as a special case. But the basic idea is simple, and the implementation is efficient.
Now we can look up probabilities for a sequence of values
Step45: Cdf also provides __getitem__, so we can use brackets to look up a single value
Step46: Exercise
Step47: Reverse lookup
You might wonder why I represent a Cdf with two lists rather than a dictionary. After all, a dictionary lookup is constant time and bisection search is logarithmic. The reason is that we often want to use a Cdf to do a reverse lookup; that is, given a probability, we would like to find the corresponding value. With two sorted lists, a reverse lookup has the same performance as a forward lookup, $O(\log n)$.
Here's the implementation
Step48: And here's an example that finds the 10th, 50th, and 90th percentiles
Step49: The Cdf representation is also good at generating random samples, by choosing a probability uniformly from 0 to 1 and finding the corresponding value. Here's the method Cdf provides
Step50: The result is a NumPy array with the given shape. The time to generate each random choice is $O(\log n)$
Here are some examples that use it.
Step51: Exercise
Step52: Max and min
The Cdf representation is particularly good for finding the distribution of a maximum. For example, in Dungeons and Dragons, players create characters with random properties like strength and intelligence. The properties are generated by rolling three dice and adding them, so the CDF for each property is the Cdf we used in this example. Each character has 6 properties, so we might wonder what the distribution is for the best of the six.
Here's the method that computes it
Step53: To get the distribution of the maximum, we make a new Cdf with the same values as the original, and with the ps raised to the kth power. Simple, right?
To see how it works, suppose you generate six properties and your best is only a 10. That's unlucky, but you might wonder how unlucky. So, what is the chance of rolling 3 dice six times, and never getting anything better than 10?
Well, that means that all six values were 10 or less. The probability that each of them is 10 or less is $CDF(10)$, because that's what the CDF means. So the probability that all 6 are 10 or less is $CDF(10)^6$.
Now we can generalize that by replacing $10$ with any value of $x$ and $6$ with any integer $k$. The result is $CDF(x)^k$, which is the probability that all $k$ rolls are $x$ or less, and that is the CDF of the maximum.
Here's how we use Cdf.maximum
Step54: So the chance of generating a character whose best property is 10 is less than 2%.
Exercise
Step59: Characteristic function
At this point we've answered all the questions on the list, but I want to come back to addition, because the algorithm we used with the Pmf representation is not as efficient as it could be. It enumerates all pairs of outcomes, so if there are $n$ values in each Pmf, the run time is $O(n^2)$. We can do better.
The key is the characteristic function, which is the Fourier transform (FT) of the PMF. If you are familiar with the Fourier transform and the Convolution Theorem, keep reading. Otherwise, skip the rest of this cell and get to the code, which is much simpler than the explanation.
Details for people who know about convolution
If you are familiar with the FT in the context of spectral analysis of signals, you might wonder why we would possibly want to compute the FT of a PMF. The reason is the Convolution Theorem.
It turns out that the algorithm we used to "add" two Pmf objects is a form of convolution. To see how that works, suppose we are computing the distribution of $Z = X+Y$. To make things concrete, let's compute the probability that the sum, $Z$ is 5. To do that, we can enumerate all possible values of $X$ like this
Step61: The attribute, hs, is the Fourier transform of the Pmf, represented as a NumPy array of complex numbers.
The following function takes a dictionary that maps from outcomes to their probabilities, and computes the FT of the PDF
Step62: fft computes the Fast Fourier Transform (FFT), which is called "fast" because the run time is $O(n \log n)$.
Here's what the characteristic function looks like for the sum of three dice (plotting the real and imaginary parts of hs)
Step63: The characteristic function contains all of the information from the Pmf, but it is encoded in a form that is hard to interpret. However, if we are given a characteristic function, we can find the corresponding Pmf.
CharFunc provides make_pmf, which uses the inverse FFT to get back to the Pmf representation. Here's the code
Step64: And here's an example
Step65: Now we can use the characteristic function to compute a convolution. CharFunc provides __mul__, which multiplies the hs elementwise and returns a new CharFunc object
Step66: And here's how we can use it to compute the distribution of the sum of 6 dice.
Step67: Here are the probabilities, mean, and variance.
Step68: This might seem like a roundabout way to compute a convolution, but it is efficient. The time to compute the CharFunc objects is $O(n \log n)$. Multiplying them together is $O(n)$. And converting back to a Pmf is $O(n \log n)$.
So the whole process is $O(n \log n)$, which is better than Pmf.__add__, which is $O(n^2)$.
Exercise
Step72: Distributions
Finally, let's get back to the question we started with
Step73: When you create a Dist, you provide a dictionary of values and probabilities.
Dist.__init__ calls the other three __init__ methods to create the Pmf, Cdf, and CharFunc representations. The result is an object that has all the attributes and methods of the three representations.
As an example, I'll create a Dist that represents the sum of six dice
Step74: We inherit __getitem__ from Pmf, so we can look up the probability of a value.
Step75: We also get mean and variance from Pmf
Step76: But we can also use methods from Cdf, like values
Step77: And cumprobs
Step78: And sample and plot_cdf
Step79: Dist.__add__ uses Pmf.__add__, which performs convolution the slow way
Step80: Dist.__mul__ uses CharFunc.__mul__, which performs convolution the fast way.
Python Code:
from __future__ import print_function, division
%matplotlib inline
%precision 6
import matplotlib.pyplot as plt
import numpy as np
from numpy.fft import fft, ifft
from inspect import getsourcelines
def show_code(func):
    lines, _ = getsourcelines(func)
    for line in lines:
        print(line, end='')
Explanation: What is a distribution?
An object-oriented exploration of one of the most useful concepts in statistics.
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
class Pmf:

    def __init__(self, d=None):
        """Initializes the distribution.

        d: map from values to probabilities
        """
        self.d = {} if d is None else d

    def items(self):
        """Returns a sequence of (value, prob) pairs."""
        return self.d.items()

    def __repr__(self):
        """Returns a string representation of the object."""
        cls = self.__class__.__name__
        return '%s(%s)' % (cls, repr(self.d))

    def __getitem__(self, value):
        """Looks up the probability of a value."""
        return self.d.get(value, 0)

    def __setitem__(self, value, prob):
        """Sets the probability associated with a value."""
        self.d[value] = prob

    def __add__(self, other):
        """Computes the Pmf of the sum of values drawn from self and other.

        other: another Pmf or a scalar

        returns: new Pmf
        """
        pmf = Pmf()
        for v1, p1 in self.items():
            for v2, p2 in other.items():
                pmf[v1 + v2] += p1 * p2
        return pmf

    def total(self):
        """Returns the total of the probabilities."""
        return sum(self.d.values())

    def normalize(self):
        """Normalizes this PMF so the sum of all probs is 1.

        Returns: the total probability before normalizing
        """
        total = self.total()
        for x in self.d:
            self.d[x] /= total
        return total

    def mean(self):
        """Computes the mean of a PMF."""
        return sum(p * x for x, p in self.items())

    def var(self, mu=None):
        """Computes the variance of a PMF.

        mu: the point around which the variance is computed;
            if omitted, computes the mean
        """
        if mu is None:
            mu = self.mean()
        return sum(p * (x - mu) ** 2 for x, p in self.items())

    def expect(self, func):
        """Computes the expectation of a given function, E[f(x)].

        func: function
        """
        return sum(p * func(x) for x, p in self.items())

    def display(self):
        """Displays the values and probabilities."""
        for value, prob in self.items():
            print(value, prob)

    def plot_pmf(self, **options):
        """Plots the values and probabilities."""
        xs, ps = zip(*sorted(self.items()))
        plt.plot(xs, ps, **options)
Explanation: Playing dice with the universe
One of the recurring themes of my books is the use of object-oriented programming to explore mathematical ideas. Many mathematical entities are hard to define because they are so abstract. Representing them in Python puts the focus on what operations each entity supports -- that is, what the objects can do -- rather than on what they are.
In this notebook, I explore the idea of a probability distribution, which is one of the most important ideas in statistics, but also one of the hardest to explain.
To keep things concrete, I'll start with one of the usual examples: rolling dice. When you roll a standard six-sided die, there are six possible outcomes -- numbers 1 through 6 -- and all outcomes are equally likely.
If you roll two dice and add up the total, there are 11 possible outcomes -- numbers 2 through 12 -- but they are not equally likely. The least likely outcomes, 2 and 12, only happen once in 36 tries; the most likely outcome, 7, happens 1 time in 6.
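As a quick sanity check (a small sketch added here, not part of the original notebook), we can enumerate all 36 equally likely pairs and count how often each sum appears:
from itertools import product
from collections import Counter

# count how many of the 36 ordered pairs produce each sum
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(counts[2] / 36, counts[12] / 36, counts[7] / 36)   # 1/36, 1/36, 6/36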
And if you roll three dice and add them up, you get a different set of possible outcomes with a different set of probabilities.
What I've just described are three random number generators, which are also called random processes. The output from a random process is a random variable, or more generally a set of random variables. And each random variable has a probability distribution, which is the set of possible outcomes and the corresponding set of probabilities.
There are many ways to represent a probability distribution. The most obvious is a probability mass function, or PMF, which is a function that maps from each possible outcome to its probability. And in Python, the most obvious way to represent a PMF is a dictionary that maps from outcomes to probabilities.
Here's a definition for a class named Pmf that represents a PMF.
End of explanation
d6 = Pmf()
for x in range(1, 7):
    d6[x] = 1
d6.display()
Explanation: Each Pmf contains a dictionary named d that contains the values and probabilities. To show how this class is used, I'll create a Pmf that represents a six-sided die:
End of explanation
show_code(Pmf.normalize)
Explanation: Initially the "probabilities" are all 1, so the total probability in the Pmf is 6, which doesn't make a lot of sense. In a proper, meaningful, PMF, the probabilities add up to 1, which implies that one outcome, and only one outcome, will occur (for any given roll of the die).
We can take this "unnormalized" distribution and make it a proper Pmf using the normalize method. Here's what the method looks like:
End of explanation
d6.normalize()
d6.display()
Explanation: normalize adds up the probabilities in the PMF and divides through by the total. The result is a Pmf with probabilities that add to 1.
Here's how it's used:
End of explanation
d6[3]
Explanation: The fundamental operation provided by a Pmf is a "lookup"; that is, we can look up an outcome and get the corresponding probability. Pmf provides __getitem__, so we can use bracket notation to look up an outcome:
End of explanation
d6[7]
Explanation: And if you look up a value that's not in the Pmf, the probability is 0.
End of explanation
# Solution
die = Pmf(dict(red=2, blue=4))
die.normalize()
die.display()
Explanation: Exercise: Create a Pmf that represents a six-sided die that is red on two sides and blue on the other four.
End of explanation
show_code(Pmf.__getitem__)
Explanation: Is that all there is?
So is a Pmf a distribution? No. At least in this framework, a Pmf is one of several representations of a distribution. Other representations include the cumulative distribution function, or CDF, and the characteristic function.
These representations are equivalent in the sense that they all contain the same information; if I give you any one of them, you can figure out the others (and we'll see how soon).
So why would we want different representations of the same information? The fundamental reason is that there are many different operations we would like to perform with distributions; that is, questions we would like to answer. Some representations are better for some operations, but none of them is the best for all operations.
So what are the questions we would like a distribution to answer? They include:
What is the probability of a given outcome?
What is the mean of the outcomes, taking into account their probabilities?
What is the variance, and other moments, of the outcome?
What is the probability that the outcome exceeds (or falls below) a threshold?
What is the median of the outcomes, that is, the 50th percentile?
What are the other percentiles?
How can we generate a random sample from this distribution, with the appropriate probabilities?
If we run two random processes and choose the maximum of the outcomes (or minimum), what is the distribution of the result?
If we run two random processes and add up the results, what is the distribution of the sum?
Each of these questions corresponds to a method we would like a distribution to provide. But as I said, there is no one representation that answers all of them easily and efficiently. So let's look at the different representations and see what they can do.
Getting back to the Pmf, we've already seen how to look up the probability of a given outcome. Here's the code:
End of explanation
show_code(Pmf.mean)
Explanation: Python dictionaries are implemented using hash tables, so we expect __getitem__ to be fast. In terms of algorithmic complexity, it is constant time, or $O(1)$.
Moments and expectations
The Pmf representation is also good for computing mean, variance, and other moments. Here's the implementation of Pmf.mean:
End of explanation
show_code(Pmf.var)
Explanation: This implementation is efficient, in the sense that it is $O(n)$, and because it uses a comprehension to traverse the outcomes, the overhead is low.
The implementation of Pmf.var is similar:
End of explanation
d6.mean(), d6.var()
Explanation: And here's how they are used:
End of explanation
show_code(Pmf.expect)
Explanation: The structure of mean and var is the same: they traverse the outcomes and their probabilities, x and p, and add up the product of p and some function of x.
We can generalize this structure to compute the expectation of any function of x, which is defined as
$E[f] = \sum_x p(x) f(x)$
Pmf provides expect, which takes a function object, func, and returns the expectation of func:
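As a small added illustration (using the d6 defined above), the second raw moment $E[x^2]$ recovers the variance via $Var = E[x^2] - \mu^2$:
second_moment = d6.expect(lambda x: x**2)
second_moment - d6.mean()**2    # same as d6.var()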
End of explanation
mu = d6.mean()
d6.expect(lambda x: (x-mu)**3)
Explanation: As an example, we can use expect to compute the third central moment of the distribution:
End of explanation
show_code(Pmf.__add__)
Explanation: Because the distribution is symmetric, the third central moment is 0.
Addition
The next question we'll answer is the last one on the list: if we run two random processes and add up the results, what is the distribution of the sum? In other words, if the result of the first process is a random variable, $X$, and the result of the second is $Y$, what is the distribution of $X+Y$?
The Pmf representation of the distribution can answer this question pretty well, but we'll see later that the characteristic function is even better.
Here's the implementation:
End of explanation
d6.plot_pmf()
Explanation: The outer loop traverses the outcomes and probabilities of the first Pmf; the inner loop traverses the second Pmf. Each time through the loop, we compute the sum of the outcome pair, v1 and v2, and the probability that the pair occurs.
Note that this method implicitly assumes that the two processes are independent; that is, the outcome from one does not affect the other. That's why we can compute the probability of the pair by multiplying the probabilities of the outcomes.
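To make the independence assumption concrete (a quick sketch, not in the original notebook), here is the probability that two fair dice sum to 7, computed the same way the inner loop does -- by multiplying and then summing:
# P(X + Y = 7) = sum over v of P(X = v) * P(Y = 7 - v)
sum(d6[v] * d6[7 - v] for v in range(1, 7))    # 6/36, about 0.167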
To demonstrate this method, we'll start with d6 again. Here's what it looks like:
End of explanation
twice = d6 + d6
twice.plot_pmf(color='green')
Explanation: When we use the + operator, Python invokes the __add__ method, which returns a new Pmf object. Here's the Pmf that represents the sum of two dice:
End of explanation
thrice = twice + d6
d6.plot_pmf()
twice.plot_pmf()
thrice.plot_pmf()
Explanation: And here's the Pmf that represents the sum of three dice.
End of explanation
# Solution
dice = die + die
dice.display()
Explanation: As we add up more dice, the result converges to the bell shape of the Gaussian distribution.
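To see the convergence numerically (a rough sketch added here, comparing only the first two moments), we can add up ten dice and check that the mean and variance match the Gaussian we would fit, $\mu = 10 \cdot 3.5$ and $\sigma^2 = 10 \cdot 35/12$:
ten = d6
for _ in range(9):
    ten = ten + d6
ten.mean(), ten.var()    # 35.0 and about 29.17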
Exercise: If you did the previous exercise, you have a Pmf that represents a die with red on 2 sides and blue on the other 4. Use the + operator to compute the outcomes of rolling two of these dice and the probabilities of the outcomes.
Note: if you represent the outcomes as strings, the __add__ method will concatenate them instead of adding, which actually works.
End of explanation
class Cdf:

    def __init__(self, xs, ps):
        self.xs = xs
        self.ps = ps

    def __repr__(self):
        return 'Cdf(%s, %s)' % (repr(self.xs), repr(self.ps))

    def __getitem__(self, x):
        return self.cumprobs([x])[0]

    def cumprobs(self, values):
        """Gets probabilities for a sequence of values.

        values: any sequence that can be converted to NumPy array

        returns: NumPy array of cumulative probabilities
        """
        values = np.asarray(values)
        index = np.searchsorted(self.xs, values, side='right')
        ps = self.ps[index-1]
        ps[values < self.xs[0]] = 0.0
        return ps

    def values(self, ps):
        """Returns InverseCDF(p), the value that corresponds to probability p.

        ps: sequence of numbers in the range [0, 1]

        returns: NumPy array of values
        """
        ps = np.asarray(ps)
        if np.any(ps < 0) or np.any(ps > 1):
            raise ValueError('Probability p must be in range [0, 1]')
        index = np.searchsorted(self.ps, ps, side='left')
        return self.xs[index]

    def sample(self, shape):
        """Generates a random sample from the distribution.

        shape: dimensions of the resulting NumPy array
        """
        ps = np.random.random(shape)
        return self.values(ps)

    def maximum(self, k):
        """Computes the CDF of the maximum of k samples from the distribution."""
        return Cdf(self.xs, self.ps**k)

    def display(self):
        """Displays the values and cumulative probabilities."""
        for x, p in zip(self.xs, self.ps):
            print(x, p)

    def plot_cdf(self, **options):
        """Plots the cumulative probabilities."""
        plt.plot(self.xs, self.ps, **options)
Explanation: Cumulative probabilities
The next few questions on the list are related to the median and other percentiles. They are harder to answer with the Pmf representation, but easier with a cumulative distribution function (CDF).
A CDF is a map from an outcome, $x$, to its cumulative probability, which is the probability that the outcome is less than or equal to $x$. In math notation:
$CDF(x) = Prob(X \le x)$
where $X$ is the outcome of a random process, and $x$ is the threshold we are interested in. For example, if $CDF$ is the cumulative distribution for the sum of three dice, the probability of getting 5 or less is $CDF(5)$, and the probability of getting 6 or more is $1 - CDF(5)$.
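For a single die, for example (a one-line sketch added for illustration), the cumulative probabilities are just running sums of the PMF:
np.cumsum([d6[x] for x in range(1, 7)])    # 1/6, 2/6, ..., 6/6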
To represent a CDF in Python, I use a sorted list of outcomes and the corresponding list of cumulative probabilities.
End of explanation
def compute_cumprobs(d):
    """Computes cumulative probabilities.

    d: map from values to probabilities
    """
    xs, freqs = zip(*sorted(d.items()))
    xs = np.asarray(xs)
    ps = np.cumsum(freqs, dtype=float)
    ps /= ps[-1]
    return xs, ps
Explanation: compute_cumprobs takes a dictionary that maps outcomes to probabilities, sorts the outcomes in increasing order, then makes two NumPy arrays: xs is the sorted sequence of values; ps is the sequence of cumulative probabilities:
End of explanation
xs, ps = compute_cumprobs(thrice.d)
cdf = Cdf(xs, ps)
cdf.display()
Explanation: Here's how we use it to create a Cdf object for the sum of three dice:
End of explanation
cdf.plot_cdf()
Explanation: Because we have to sort the values, the time to compute a Cdf is $O(n \log n)$.
Here's what the CDF looks like:
End of explanation
show_code(Cdf.cumprobs)
Explanation: The range of the CDF is always from 0 to 1.
Now we can compute $CDF(x)$ by searching the xs to find the right location, or index, and then looking up the corresponding probability. Because the xs are sorted, we can use bisection search, which is $O(\log n)$.
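The same idea can be sketched with the standard library's bisect module (shown only to illustrate what the np.searchsorted call in Cdf.cumprobs does; the class itself uses NumPy):
from bisect import bisect_right

def cdf_lookup(xs, ps, x):
    # index of the rightmost stored outcome that is <= x
    i = bisect_right(xs, x)
    return 0.0 if i == 0 else ps[i - 1]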
Cdf provides cumprobs, which takes an array of values and returns the corresponding probabilities:
End of explanation
cdf.cumprobs((2, 10, 18))
Explanation: The details here are a little tricky because we have to deal with some "off by one" problems, and if any of the values are less than the smallest value in the Cdf, we have to handle that as a special case. But the basic idea is simple, and the implementation is efficient.
Now we can look up probabilities for a sequence of values:
End of explanation
cdf[5]
Explanation: Cdf also provides __getitem__, so we can use brackets to look up a single value:
End of explanation
# Solution
1 - cdf[14]
Explanation: Exercise: If you roll three dice, what is the probability of getting 15 or more?
End of explanation
show_code(Cdf.values)
Explanation: Reverse lookup
You might wonder why I represent a Cdf with two lists rather than a dictionary. After all, a dictionary lookup is constant time and bisection search is logarithmic. The reason is that we often want to use a Cdf to do a reverse lookup; that is, given a probability, we would like to find the corresponding value. With two sorted lists, a reverse lookup has the same performance as a forward lookup, $O(\log n)$.
Here's the implementation:
End of explanation
cdf.values((0.1, 0.5, 0.9))
Explanation: And here's an example that finds the 10th, 50th, and 90th percentiles:
End of explanation
show_code(Cdf.sample)
Explanation: The Cdf representation is also good at generating random samples, by choosing a probability uniformly from 0 to 1 and finding the corresponding value. Here's the method Cdf provides:
End of explanation
cdf.sample(1)
cdf.sample(6)
cdf.sample((2, 2))
Explanation: The result is a NumPy array with the given shape. The time to generate each random choice is $O(\log n)$
Here are some examples that use it.
End of explanation
# Solution
def iqr(cdf):
    values = cdf.values((0.25, 0.75))
    return np.diff(values)[0]

iqr(cdf)
Explanation: Exercise: Write a function that takes a Cdf object and returns the interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
End of explanation
show_code(Cdf.maximum)
Explanation: Max and min
The Cdf representation is particularly good for finding the distribution of a maximum. For example, in Dungeons and Dragons, players create characters with random properties like strength and intelligence. The properties are generated by rolling three dice and adding them, so the CDF for each property is the Cdf we used in this example. Each character has 6 properties, so we might wonder what the distribution is for the best of the six.
Here's the method that computes it:
End of explanation
best = cdf.maximum(6)
best.plot_cdf()
best[10]
Explanation: To get the distribution of the maximum, we make a new Cdf with the same values as the original, and with the ps raised to the kth power. Simple, right?
To see how it works, suppose you generate six properties and your best is only a 10. That's unlucky, but you might wonder how unlucky. So, what is the chance of rolling 3 dice six times, and never getting anything better than 10?
Well, that means that all six values were 10 or less. The probability that each of them is 10 or less is $CDF(10)$, because that's what the CDF means. So the probability that all 6 are 10 or less is $CDF(10)^6$.
Now we can generalize that by replacing $10$ with any value of $x$ and $6$ with any integer $k$. The result is $CDF(x)^k$, which is the probability that all $k$ rolls are $x$ or less, and that is the CDF of the maximum.
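We can spot-check this reasoning with a quick simulation (a sketch added here, using NumPy's random number generator rather than anything from the notebook):
# 100,000 characters, each with 6 properties, each property the sum of 3 dice
rolls = np.random.randint(1, 7, size=(100000, 6, 3)).sum(axis=2)
np.mean(rolls.max(axis=1) <= 10)    # should be close to cdf[10] ** 6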
Here's how we use Cdf.maximum:
End of explanation
# Solution
def minimum(cdf, k):
    return Cdf(cdf.xs, 1 - (1-cdf.ps)**k)

worst = minimum(cdf, 6)
worst.plot_cdf()
Explanation: So the chance of generating a character whose best property is 10 is less than 2%.
Exercise: Write a function that takes a CDF and returns the CDF of the minimum of k values.
Hint: The minimum is greater than $x$ only if all $k$ values are greater than $x$.
End of explanation
class CharFunc:

    def __init__(self, hs):
        """Initializes the CF.

        hs: NumPy array of complex
        """
        self.hs = hs

    def __mul__(self, other):
        """Computes the elementwise product of two CFs."""
        return CharFunc(self.hs * other.hs)

    def make_pmf(self, thresh=1e-11):
        """Converts a CF to a PMF.

        Values with probabilities below `thresh` are dropped.
        """
        ps = ifft(self.hs)
        d = dict((i, p) for i, p in enumerate(ps.real) if p > thresh)
        return Pmf(d)

    def plot_cf(self, **options):
        """Plots the real and imaginary parts of the CF."""
        n = len(self.hs)
        xs = np.arange(-n//2, n//2)
        hs = np.roll(self.hs, len(self.hs) // 2)
        plt.plot(xs, hs.real, label='real', **options)
        plt.plot(xs, hs.imag, label='imag', **options)
        plt.legend()
Explanation: Characteristic function
At this point we've answered all the questions on the list, but I want to come back to addition, because the algorithm we used with the Pmf representation is not as efficient as it could be. It enumerates all pairs of outcomes, so if there are $n$ values in each Pmf, the run time is $O(n^2)$. We can do better.
The key is the characteristic function, which is the Fourier transform (FT) of the PMF. If you are familiar with the Fourier transform and the Convolution Theorem, keep reading. Otherwise, skip the rest of this cell and get to the code, which is much simpler than the explanation.
Details for people who know about convolution
If you are familiar with the FT in the context of spectral analysis of signals, you might wonder why we would possibly want to compute the FT of a PMF. The reason is the Convolution Theorem.
It turns out that the algorithm we used to "add" two Pmf objects is a form of convolution. To see how that works, suppose we are computing the distribution of $Z = X+Y$. To make things concrete, let's compute the probability that the sum, $Z$ is 5. To do that, we can enumerate all possible values of $X$ like this:
$Prob(Z=5) = \sum_x Prob(X=x) \cdot Prob(Y=5-x)$
Now we can write each of those probabilities in terms of the PMF of $X$, $Y$, and $Z$:
$PMF_Z(5) = \sum_x PMF_X(x) \cdot PMF_Y(5-x)$
And now we can generalize by replacing 5 with any value of $z$:
$PMF_Z(z) = \sum_x PMF_X(x) \cdot PMF_Y(z-x)$
You might recognize that computation as convolution, denoted with the operator $\ast$.
$PMF_Z = PMF_X \ast PMF_Y$
Now, according to the Convolution Theorem:
$FT(PMF_X \ast PMF_Y) = FT(PMF_X) \cdot FT(PMF_Y)$
Or, taking the inverse FT of both sides:
$PMF_X \ast PMF_Y = IFT(FT(PMF_X) \cdot FT(PMF_Y))$
In words, to compute the convolution of $PMF_X$ and $PMF_Y$, we can compute the FT of $PMF_X$ and $PMF_Y$ and multiply them together, then compute the inverse FT of the result.
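Here is a quick numerical check of that identity (a sketch using plain NumPy, independent of the classes in this notebook):
px = np.array([0.2, 0.5, 0.3, 0.0])
py = np.array([0.6, 0.4, 0.0, 0.0])
direct = np.convolve(px, py)[:4]              # direct convolution
via_fft = np.real(ifft(fft(px) * fft(py)))    # multiply in the frequency domain
np.allclose(direct, via_fft)                  # True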
Let's see how that works. Here's a class that represents a characteristic function.
End of explanation
def compute_fft(d, n=256):
    """Computes the FFT of a PMF of integers.

    Values must be integers less than `n`.
    """
    xs, freqs = zip(*d.items())
    ps = np.zeros(n)
    ps[xs,] = freqs
    hs = fft(ps)
    return hs
Explanation: The attribute, hs, is the Fourier transform of the Pmf, represented as a NumPy array of complex numbers.
The following function takes a dictionary that maps from outcomes to their probabilities, and computes the FT of the PDF:
End of explanation
hs = compute_fft(thrice)
cf = CharFunc(hs)
cf.plot_cf()
Explanation: fft computes the Fast Fourier Transform (FFT), which is called "fast" because the run time is $O(n \log n)$.
Here's what the characteristic function looks like for the sum of three dice (plotting the real and imaginary parts of hs):
End of explanation
show_code(CharFunc.make_pmf)
Explanation: The characteristic function contains all of the information from the Pmf, but it is encoded in a form that is hard to interpret. However, if we are given a characteristic function, we can find the corresponding Pmf.
CharFunc provides make_pmf, which uses the inverse FFT to get back to the Pmf representation. Here's the code:
End of explanation
cf.make_pmf().plot_pmf()
Explanation: And here's an example:
End of explanation
show_code(CharFunc.__mul__)
Explanation: Now we can use the characteristic function to compute a convolution. CharFunc provides __mul__, which multiplies the hs elementwise and returns a new CharFunc object:
End of explanation
sixth = (cf * cf).make_pmf()
sixth.plot_pmf()
Explanation: And here's how we can use it to compute the distribution of the sum of 6 dice.
End of explanation
sixth.display()
sixth.mean(), sixth.var()
Explanation: Here are the probabilities, mean, and variance.
End of explanation
#Solution
n = len(cf.hs)
mags = np.abs(cf.hs)
plt.plot(np.roll(mags, n//2))
None
# The result approximates a Gaussian curve because
# the PMF is approximately Gaussian and the FT of a
# Gaussian is also Gaussian
Explanation: This might seem like a roundabout way to compute a convolution, but it is efficient. The time to compute the CharFunc objects is $O(n \log n)$. Multiplying them together is $O(n)$. And converting back to a Pmf is $O(n \log n)$.
So the whole process is $O(n \log n)$, which is better than Pmf.__add__, which is $O(n^2)$.
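As a consistency check (a small sketch comparing the two code paths defined above), the FFT-based result should agree with the slow pairwise method up to floating-point error:
slow = thrice + thrice                           # Pmf.__add__, the O(n^2) path
max(abs(slow[x] - sixth[x]) for x in slow.d)     # tiny; the two paths agree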
Exercise: Plot the magnitude of cf.hs using np.abs. What does that shape look like?
Hint: it might be clearer if you use np.roll to put the peak of the CF in the middle.
End of explanation
class Dist(Pmf, Cdf, CharFunc):

    def __init__(self, d):
        """Initializes the Dist.

        Calls all three __init__ methods.
        """
        Pmf.__init__(self, d)
        Cdf.__init__(self, *compute_cumprobs(d))
        CharFunc.__init__(self, compute_fft(d))

    def __add__(self, other):
        """Computes the distribution of the sum using Pmf.__add__."""
        pmf = Pmf.__add__(self, other)
        return Dist(pmf.d)

    def __mul__(self, other):
        """Computes the distribution of the sum using CharFunc.__mul__."""
        pmf = CharFunc.__mul__(self, other).make_pmf()
        return Dist(pmf.d)
Explanation: Distributions
Finally, let's get back to the question we started with: what is a distribution?
I've said that Pmf, Cdf, and CharFunc are different ways to represent the same information. For the questions we want to answer, some representations are better than others. But how should we represent the distribution itself?
One option is to treat each representation as a mixin; that is, a class that provides a set of capabilities. A distribution inherits all of the capabilities from all of the representations. Here's a class that shows what I mean:
End of explanation
dist = Dist(sixth.d)
dist.plot_pmf()
Explanation: When you create a Dist, you provide a dictionary of values and probabilities.
Dist.__init__ calls the other three __init__ methods to create the Pmf, Cdf, and CharFunc representations. The result is an object that has all the attributes and methods of the three representations.
As an example, I'll create a Dist that represents the sum of six dice:
End of explanation
dist[21]
Explanation: We inherit __getitem__ from Pmf, so we can look up the probability of a value.
End of explanation
dist.mean(), dist.var()
Explanation: We also get mean and variance from Pmf:
End of explanation
dist.values((0.25, 0.5, 0.75))
Explanation: But we can also use methods from Cdf, like values:
End of explanation
dist.cumprobs((18, 21, 24))
Explanation: And cumprobs
End of explanation
dist.sample(10)
dist.maximum(6).plot_cdf()
Explanation: And sample and plot_cdf
End of explanation
twelfth = dist + dist
twelfth.plot_pmf()
twelfth.mean()
Explanation: Dist.__add__ uses Pmf.__add__, which performs convolution the slow way:
End of explanation
twelfth_fft = dist * dist
twelfth_fft.plot_pmf()
twelfth_fft.mean()
Explanation: Dist.__mul__ uses CharFunc.__mul__, which performs convolution the fast way.
End of explanation
4,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Transfer learning and fine-tuning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data preprocessing
Data download
In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.utils.image_dataset_from_directory utility. You can learn more about loading images in this tutorial.
Step3: Show the first nine images and labels from the training set
Step4: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set.
Step5: Configure the dataset for performance
Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the data performance guide.
Step6: Use data augmentation
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce overfitting. You can learn more about data augmentation in this tutorial.
Step7: Note
Step8: Rescale pixel values
In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in [-1, 1], but at this point, the pixel values in your images are in [0, 255]. To rescale them, use the preprocessing method included with the model.
Step9: Note
Step10: Note
Step11: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images
Step12: Feature extraction
In this step, you will freeze the convolutional base created from the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them.
Step13: Important note about BatchNormalization layers
Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
For more details, see the Transfer learning guide.
Step14: Add a classification head
To generate predictions from the block of features, average over the 5x5 spatial locations, using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features to a single 1280-element vector per image.
Step15: Apply a tf.keras.layers.Dense layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
Step16: Build a model by chaining together the data augmentation, rescaling, base_model and feature extractor layers using the Keras Functional API. As previously mentioned, use training=False as our model contains a BatchNormalization layer.
Step17: Compile the model
Compile the model before training it. Since there are two classes, use the tf.keras.losses.BinaryCrossentropy loss with from_logits=True since the model provides a linear output.
Step18: The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand trainable parameters in the Dense layer. These are divided between two tf.Variable objects, the weights and biases.
Step19: Train the model
After training for 10 epochs, you should see ~94% accuracy on the validation set.
Step20: Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNetV2 base model as a fixed feature extractor.
Step21: Note
Step22: Compile the model
As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly.
Step23: Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
Step24: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNetV2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting as the new training set is relatively small and similar to the original MobileNetV2 datasets.
After fine-tuning, the model nearly reaches 98% accuracy on the validation set.
Step25: Evaluation and prediction
Finally you can verify the performance of the model on new data using test set.
Step26: And now you are all set to use this model to predict if your pet is a cat or dog. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
Explanation: Transfer learning and fine-tuning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb?force_kitty_mode=1&force_corgi_mode=1"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network.
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You either use the pretrained model as is or use transfer learning to customize this model to a given task.
The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.
In this notebook, you will try two ways to customize a pretrained model:
Feature Extraction: Use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pretrained model so that you can repurpose the feature maps learned previously for the dataset.
You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pretrained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained.
Fine-Tuning: Unfreeze a few of the top layers of a frozen model base and jointly train both the newly-added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model in order to make them more relevant for the specific task.
You will follow the general machine learning workflow.
Examine and understand the data
Build an input pipeline, in this case using Keras ImageDataGenerator
Compose the model
Load in the pretrained base model (and pretrained weights)
Stack the classification layers on top
Train the model
Evaluate model
End of explanation
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = tf.keras.utils.image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = tf.keras.utils.image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
Explanation: Data preprocessing
Data download
In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.utils.image_dataset_from_directory utility. You can learn more about loading images in this tutorial.
End of explanation
class_names = train_dataset.class_names
plt.figure(figsize=(10, 10))
for images, labels in train_dataset.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")
Explanation: Show the first nine images and labels from the training set:
End of explanation
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset))
Explanation: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
Explanation: Configure the dataset for performance
Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the data performance guide.
End of explanation
data_augmentation = tf.keras.Sequential([
tf.keras.layers.RandomFlip('horizontal'),
tf.keras.layers.RandomRotation(0.2),
])
Explanation: Use data augmentation
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce overfitting. You can learn more about data augmentation in this tutorial.
End of explanation
for image, _ in train_dataset.take(1):
    plt.figure(figsize=(10, 10))
    first_image = image[0]
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
        plt.imshow(augmented_image[0] / 255)
        plt.axis('off')
Explanation: Note: These layers are active only during training, when you call Model.fit. They are inactive when the model is used in inference mode in Model.evaluate or Model.predict.
Let's repeatedly apply these layers to the same image and see the result.
End of explanation
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
Explanation: Rescale pixel values
In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in [-1, 1], but at this point, the pixel values in your images are in [0, 255]. To rescale them, use the preprocessing method included with the model.
End of explanation
rescale = tf.keras.layers.Rescaling(1./127.5, offset=-1)
Explanation: Note: Alternatively, you could rescale pixel values from [0, 255] to [-1, 1] using tf.keras.layers.Rescaling.
End of explanation
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
Explanation: Note: If using other tf.keras.applications, be sure to check the API doc to determine if they expect pixels in [-1, 1] or [0, 1], or use the included preprocess_input function.
Create the base model from the pre-trained convnets
You will create the base model from the MobileNet V2 model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like jackfruit and syringe. This base of knowledge will help us classify cats and dogs from our specific dataset.
First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice to depend on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality as compared to the final/top layer.
First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
End of explanation
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
Explanation: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images:
End of explanation
base_model.trainable = False
Explanation: Feature extraction
In this step, you will freeze the convolutional base created from the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them.
End of explanation
# Let's take a look at the base model architecture
base_model.summary()
Explanation: Important note about BatchNormalization layers
Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
For more details, see the Transfer learning guide.
End of explanation
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
Explanation: Add a classification head
To generate predictions from the block of features, average over the 5x5 spatial locations, using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features to a single 1280-element vector per image.
End of explanation
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
Explanation: Apply a tf.keras.layers.Dense layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
End of explanation
inputs = tf.keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)
x = preprocess_input(x)
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
Explanation: Build a model by chaining together the data augmentation, rescaling, base_model and feature extractor layers using the Keras Functional API. As previously mentioned, use training=False as our model contains a BatchNormalization layer.
End of explanation
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
Explanation: Compile the model
Compile the model before training it. Since there are two classes, use the tf.keras.losses.BinaryCrossentropy loss with from_logits=True since the model provides a linear output.
End of explanation
len(model.trainable_variables)
Explanation: The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand trainable parameters in the Dense layer. These are divided between two tf.Variable objects, the weights and biases.
End of explanation
initial_epochs = 10
loss0, accuracy0 = model.evaluate(validation_dataset)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
history = model.fit(train_dataset,
epochs=initial_epochs,
validation_data=validation_dataset)
Explanation: Train the model
After training for 10 epochs, you should see ~94% accuracy on the validation set.
End of explanation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNetV2 base model as a fixed feature extractor.
End of explanation
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
Explanation: Note: If you are wondering why the validation metrics are clearly better than the training metrics, the main factor is that layers like tf.keras.layers.BatchNormalization and tf.keras.layers.Dropout affect accuracy during training. They are turned off when calculating validation loss.
To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.
Fine tuning
In the feature extraction experiment, you were only training a few layers on top of a MobileNetV2 base model. The weights of the pre-trained network were not updated during training.
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset.
Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will forget what it has learned.
Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly more specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning.
Un-freeze the top layers of the model
All you need to do is unfreeze the base_model and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect), and resume training.
End of explanation
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer = tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10),
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
Explanation: Compile the model
As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly.
End of explanation
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_dataset,
epochs=total_epochs,
initial_epoch=history.epoch[-1],
validation_data=validation_dataset)
Explanation: Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
End of explanation
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNetV2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting as the new training set is relatively small and similar to the original MobileNetV2 datasets.
After fine tuning the model nearly reaches 98% accuracy on the validation set.
End of explanation
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
Explanation: Evaluation and prediction
Finally you can verify the performance of the model on new data using the test set.
End of explanation
# Retrieve a batch of images from the test set
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
print('Predictions:\n', predictions.numpy())
print('Labels:\n', label_batch)
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].astype("uint8"))
plt.title(class_names[predictions[i]])
plt.axis("off")
Explanation: And now you are all set to use this model to predict if your pet is a cat or dog.
End of explanation |
4,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, we need to connect to NewsroomDB and download all shootings and homicides.
Step1: Right now, we're interested in all shootings and homicides for the current month. So filter the lists based on whatever that month is.
Step2: Now let's find the days without a shooting.
Step3: And days without a homicide.
Step4: Let's get the latitudes and longitudes of every murder in 2015 (Manya Brachear asked). | Python Code:
import os
import requests
def get_table_data(table_name):
url = '%stable/json/%s' % (os.environ['NEWSROOMDB_URL'], table_name)
try:
r = requests.get(url)
return r.json()
    except:
        # naive retry on any failure; note this recurses indefinitely if the request keeps failing
        print('doh')
        return get_table_data(table_name)
homicides = get_table_data('homicides')
shootings = get_table_data('shootings')
print('Found %d homicides and %d shootings' % (len(homicides), len(shootings)))
Explanation: First, we need to connect to NewsroomDB and download all shootings and homicides.
End of explanation
from datetime import date, datetime
today = date.today()
homicides_this_month = {}
for h in homicides:
try:
dt = datetime.strptime(h['Occ Date'], '%Y-%m-%d')
except ValueError:
continue
if dt.month == today.month:
if dt.year not in homicides_this_month:
homicides_this_month[dt.year] = []
homicides_this_month[dt.year].append(h)
shootings_this_month = {}
for s in shootings:
try:
dt = datetime.strptime(s['Date'], '%Y-%m-%d')
except ValueError:
continue
if dt.month == today.month:
if dt.year not in shootings_this_month:
shootings_this_month[dt.year] = []
shootings_this_month[dt.year].append(s)
for year in sorted(shootings_this_month.keys(), reverse=True):
try:
s = len(shootings_this_month[year])
except:
s = 0
try:
h = len(homicides_this_month[year])
except:
h = 0
    print('%d:\t%d shootings\t\t%d homicides' % (year, s, h))
Explanation: Right now, we're interested in all shootings and homicides for the current month. So filter the lists based on whatever that month is.
End of explanation
from datetime import date, timedelta
test_date = date.today()
one_day = timedelta(days=1)
shooting_days = {}
for shooting in shootings:
if shooting['Date'] not in shooting_days:
shooting_days[shooting['Date']] = 0
shooting_days[shooting['Date']] += 1
while test_date.year > 2013:
if test_date.strftime('%Y-%m-%d') not in shooting_days:
        print('No shootings on %s' % test_date)
test_date -= one_day
Explanation: Now let's find the days without a shooting.
End of explanation
from datetime import date, timedelta
test_date = date.today()
one_day = timedelta(days=1)
homicide_days = {}
for homicide in homicides:
if homicide['Occ Date'] not in homicide_days:
homicide_days[homicide['Occ Date']] = 0
homicide_days[homicide['Occ Date']] += 1
while test_date.year > 2013:
if test_date.strftime('%Y-%m-%d') not in homicide_days:
        print('No homicides on %s' % test_date)
test_date -= one_day
Explanation: And days without a homicide.
End of explanation
coordinates = []
for homicide in homicides:
if not homicide['Occ Date'].startswith('2015-'):
continue
# Since the format of this field is (x, y) (or y, x? I always confuse the two) we need to extract just x and y
try:
coordinates.append(
(homicide['Geocode Override'][1:-1].split(',')[0], homicide['Geocode Override'][1:-1].split(',')[1]))
except:
# Not valid/expected lat/long format
continue
print(len(coordinates))
for coordinate in coordinates:
    print('%s,%s' % (coordinate[0].strip(), coordinate[1].strip()))
Explanation: Let's get the latitudes and longitudes of every murder in 2015 (Manya Brachear asked).
End of explanation |
4,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Perceptron
The perceptron is a binary linear classifier and the simplest feed-forward artificial neural network. It was proposed by Rosenblatt at the Cornell Aeronautical Laboratory in 1957, inspired by the work of the psychologist McCulloch and the mathematical logician Walter Pitts on mathematical models of artificial neurons, as a trial-and-error, self-adjusting machine learning method that imitates human perception.
Algorithm
There are several perceptron algorithms, such as the basic perceptron algorithm, the perceptron with margin, and the multilayer perceptron. Here we introduce the basic perceptron algorithm.
Take a linearly separable binary d-dimensional dataset $X = \{(\vec{x_i}, y_i)
Step1: Data acquisition
This dataset is a classic, so many machine learning frameworks provide an interface for it, and sklearn is no exception. More often, though, we have to handle data from all kinds of sources, so here we fetch the data in the most traditional way.
Step2: Data preprocessing
Since the features are floats while the labels are categorical, the labels need to be encoded and the features need to be standardized. We use the z-score for normalization.
Step3: Splitting the dataset
Step4: Training the model
Step5: Model evaluation
import requests
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
Explanation: Perceptron
The perceptron is a binary linear classifier and the simplest feed-forward artificial neural network. It was proposed by Rosenblatt at the Cornell Aeronautical Laboratory in 1957, inspired by the work of the psychologist McCulloch and the mathematical logician Walter Pitts on mathematical models of artificial neurons, as a trial-and-error, self-adjusting machine learning method that imitates human perception.
Algorithm
There are several perceptron algorithms, such as the basic perceptron algorithm, the perceptron with margin, and the multilayer perceptron. Here we introduce the basic perceptron algorithm.
Given a linearly separable binary d-dimensional training set $X = \{(\vec{x_i}, y_i): y_i \in \{-1, 1\}, i \in [1, n]\}$, we look for a separating hyperplane $\vec{w}^* \cdot \vec{x} = 0$.
1. Initialize $\vec{w_0} = \vec{0}$ and set $t = 0$.
2. Take a data point $\vec{x_i}$ from $X$; if $\vec{w_t} \cdot \vec{x_i} > 0$, predict $\hat{y_i} = 1$, otherwise $\hat{y_i} = -1$.
3. If $y_i \neq \hat{y_i}$, update $\vec{w_{t+1}} = \vec{w_t} + y_i \vec{x_i}$ and set $t = t + 1$.
4. Repeat steps 2 and 3 until every point in $X$ has been visited. A minimal NumPy sketch of this update rule follows below.
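The following is a minimal NumPy sketch of the update rule above; the function name and the tiny toy dataset are illustrative and not part of this notebook's code.
import numpy as np

def perceptron_train(X, y, epochs=10):
    # X has shape (n, d); y holds labels in {-1, 1}
    w = np.zeros(X.shape[1])                   # step 1: w_0 = 0
    for _ in range(epochs):                    # repeat passes over the data
        for x_i, y_i in zip(X, y):
            y_hat = 1 if w @ x_i > 0 else -1   # step 2: predict
            if y_hat != y_i:                   # step 3: update only on mistakes
                w = w + y_i * x_i
    return w

X_toy = np.array([[1.0, 2.0], [2.0, 3.0], [-1.0, -1.5], [-2.0, -1.0]])
y_toy = np.array([1, 1, -1, -1])
w = perceptron_train(X_toy, y_toy)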
Convergence and complexity
By Novikoff's theorem, the algorithm converges after at most $\frac{R^2}{\gamma^2}$ iterations, where $R$ and $\gamma$ are, respectively, the maximum norm of the data points and the minimum geometric distance from the data to the separating hyperplane.
The complexity of the perceptron algorithm is $O(n)$.
Strengths and weaknesses
The perceptron's main appeal is its simplicity, and its error bound is controllable. However, the basic perceptron algorithm cannot handle datasets that are not linearly separable, including the XOR problem: it only draws a line in the plane, so it cannot separate $\{(-1,-1), (1,1)\}$ from $\{(-1,1), (1,-1)\}$.
Development
Borrowing from neuroscience, we can describe the perceptron as a neural network with only an input layer and an output layer.
Rosenblatt and others realized that introducing hidden layers, that is, adding new layers with activation functions between the input and output layers, can solve problems that are not linearly separable. Together with the backpropagation algorithm proposed in the 1980s, this drove the development of neural networks and, eventually, today's flourishing research on deep learning.
The relevant sklearn interfaces
The perceptron-related interfaces for supervised learning in sklearn are:
Single-node linear perceptron
sklearn.linear_model.Perceptron, the single-node perceptron
sklearn.linear_model.SGDClassifier, the stochastic gradient descent classifier, which shares the same underlying implementation as the single-node perceptron but can also express a number of other algorithms
Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).
The single-node perceptron is a simple algorithm suitable for large-scale learning. Its advantages are:
It does not require a learning rate.
It does not require regularization.
It updates the model only on misclassified samples.
Compared with SGD with the hinge loss, the perceptron is slightly faster to train and the resulting models are sparser.
Multilayer perceptron (fully connected neural network)
neural_network.MLPClassifier([…]), the multilayer perceptron classifier
neural_network.MLPRegressor([…]), the multilayer perceptron regressor
sklearn is not, after all, a dedicated neural network tool. Because of the heavy computation involved, for neural networks it is better to use dedicated frameworks such as tensorflow, theano, or keras together with a GPU. This article will not go into neural networks in much depth.
Example: training a model on the iris dataset
iris is a well-known dataset with 4 continuous features and three label classes.
End of explanation
csv_content = requests.get("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data").text
row_name = ['sepal_length','sepal_width','petal_length','petal_width','label']
csv_list = csv_content.strip().split("\n")
row_matrix = [line.strip().split(",") for line in csv_list]
dataset = pd.DataFrame(row_matrix,columns=row_name)
dataset[:10]
Explanation: Data acquisition
This dataset is a classic, so many machine learning frameworks provide an interface for it, and sklearn is no exception. More often, though, we have to handle data from all kinds of sources, so here we fetch the data in the most traditional way.
End of explanation
encs = {}
encs["feature"] = StandardScaler()
encs["feature"].fit(dataset[row_name[:-1]])
table = pd.DataFrame(encs["feature"].transform(dataset[row_name[:-1]]),columns=row_name[:-1])
encs["label"]=LabelEncoder()
encs["label"].fit(dataset["label"])
table["label"] = encs["label"].transform(dataset["label"])
table[:10]
table.groupby("label").count()
Explanation: Data preprocessing
Since the features are floats while the labels are categorical, the labels need to be encoded and the features need to be standardized. We use the z-score for normalization.
End of explanation
train_set,validation_set = train_test_split(table)
train_set.groupby("label").count()
validation_set.groupby("label").count()
Explanation: Splitting the dataset
End of explanation
mlp = MLPClassifier(
hidden_layer_sizes=(100,50),
activation='relu',
solver='adam',
alpha=0.0001,
batch_size='auto',
learning_rate='constant',
learning_rate_init=0.001)
mlp.fit(train_set[row_name[:-1]], train_set["label"])
pre = mlp.predict(validation_set[row_name[:-1]])
Explanation: Training the model
End of explanation
print(classification_report(validation_set["label"],pre))
Explanation: Model evaluation
End of explanation |
4,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Turbulence example
In this notebook we show how to perform a simulation using a soundspeed field based on a Gaussian turbulence spectrum.
Step1: Configuration
The following are the parameters for our simulation
Step2: Turbulent field
Step3: Create model
We now create a model. Waves propagate in a medium which we define first.
Step4: The model is only finite and to prevent aliasing we need a Perfectly Matched Layer.
Step5: Now we create the actual model.
Step6: In this example our source excites a pulse.
Step7: We also add a receiver on the other side of the domain
Step8: Check model
To get a quick overview of all parameters, for example to check them, we can print one
Step9: To check whether the geometry is as we want it to be, we can simply draw it.
Step10: Running the simulation
Now that we've defined and checked our model we can run it.
With model.run() you can specify the number of time steps or the number of seconds it should run.
Step11: Let's see what the sound pressure field looks like now.
Step12: It might happen that you realize that you actually need to calculate a bit further. This can easily be done, since the state is remembered. Simply use model.run() again and the simulation continues.
Step13: With more steps we now see the effect of the turbulence.
Step14: We can also check the signal recorded by the receiver, which in this case is the impulse response. The method receiver.recording() returns an instance of acoustics.Signal.
Step15: If however, you want to restart the simulation you can do so with model.restart(). | Python Code:
import numpy as np
from pstd import PSTD, PML, Medium, Position2D, PointSource
from acoustics import Signal
from turbulence import Field2D, Gaussian2DTemp
#import seaborn as sns
%matplotlib inline
Explanation: Turbulence example
In this notebook we show how to perform a simulation using a soundspeed field based on a Gaussian turbulence spectrum.
End of explanation
x = 50.0
y = 40.0
z = 0.0
c_0 = 343.2
maximum_frequency_target = 200.0
Explanation: Configuration
The following are the parameters for our simulation
End of explanation
f_max = 500.0
f_margin = 1.0
# Amount of modes
max_mode_order = 100
# Maximum wavenumber
k_max = 10.0
wavenumber_resolution = k_max / max_mode_order
# We don't need it for the calculations but we do need it to create an instance.
spatial_resolution = c_0 / (2.0 * f_max * f_margin)
spectrum = Gaussian2DTemp(
max_mode_order=max_mode_order,
wavenumber_resolution=wavenumber_resolution,
mu_0=3e-2,
a=1.1,
# a=0.001,
)
field = Field2D(
x=x,
y=y,
z=y,
spatial_resolution=spatial_resolution,
spectrum=spectrum,
)
mu = field.randomize().generate().mu
print("Mu shape: {}".format(mu.shape))
c = ( mu + 1.0 ) * c_0
field.plot()
Explanation: Turbulent field
End of explanation
medium = Medium(soundspeed=c, density=1.296)
Explanation: Create model
We now create a model. Waves propagate in a medium which we define first.
End of explanation
pml = PML((1000.0, 1000.0), depth=10.0)
Explanation: The model is only finite and to prevent aliasing we need a Perfectly Matched Layer.
End of explanation
model = PSTD(
maximum_frequency=maximum_frequency_target,
pml=pml,
medium=medium,
cfl=PSTD.maximum_cfl(medium.soundspeed)/2.,
size=[x, y],
spacing = spatial_resolution,
)
Explanation: Now we create the actual model.
End of explanation
source_position = Position2D(x*2.0/5.0, y/2.0)
source = model.add_object('source', 'PointSource', position=source_position,
excitation='pulse', quantity='pressure', amplitude=0.1)
Explanation: In this example our source excites a pulse.
End of explanation
receiver_position = Position2D(x*3.0/5.0, y/2.0)
receiver = model.add_object('receiver', 'Receiver', position=receiver_position, quantity='pressure')
Explanation: We also add a receiver on the other side of the domain
End of explanation
print(model.overview())
Explanation: Check model
To get a quick overview of all parameters, for example to check them, we can print one
End of explanation
_ = model.plot_scene()
Explanation: To check whether the geometry is as we want it to be, we can simply draw it.
End of explanation
model.run(seconds=0.01)
Explanation: Running the simulation
Now that we've defined and checked our model we can run it.
With model.run() you can specify the number of time steps or the number of seconds it should run.
End of explanation
_ = model.plot_field()
Explanation: Let's see what the sound pressure field looks like now.
End of explanation
model.run(seconds=0.03)
Explanation: It might happen that you realize that you actually need to calculate a bit further. This can easily be done, since the state is remembered. Simply use model.run() again and the simulation continues.
End of explanation
_ = model.plot_field()
Explanation: With more steps we now see the effect of the turbulence.
End of explanation
_ = receiver.recording().plot()
Explanation: We can also check the signal recorded by the receiver, which in this case is the impulse response. The method receiver.recording() returns an instance of acoustics.Signal.
End of explanation
model.restart()
Explanation: If however, you want to restart the simulation you can do so with model.restart().
End of explanation |
4,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datasets
Step1: The data is from two colour spotted cDNA arrays. It has been widely studied in computational biology. There are four different time series in the data as well as induction experiments. The data is returned in the form of a pandas data frame which can be described as follows.
Step2: The first five columns are the clb2 and cln3 induction experiments. The columns that follow are the alpha, cdc15, cdc28 and elutriation time course experiments. The index gives the gene names. The columns are named according to the experiment.
Step3: And the index is given by the gene name, there are 6178 genes in total.
Step4: We also provide a variant of the data for just the cdc15 time course.
Step5: And in this data we also provide the associated time points.
Step6: As normal we include the citation information for the data.
Step7: And extra information about the data is included, as standard, under the keys info and details. | Python Code:
import pods
import pylab as plt
%matplotlib inline
data = pods.datasets.spellman_yeast()
Explanation: Datasets: The Spellman Yeast Data
Open Data Science Initiative
29th May 2014 Neil D. Lawrence
This data set collection is from a classic early microarray paper on the yeast cell cycle, Spellman et al (1998).
End of explanation
data['Y'].describe()
Explanation: The data is from two colour spotted cDNA arrays. It has been widely studied in computational biology. There are four different time series in the data as well as induction experiments. The data is returned in the form of a pandas data frame which can be described as follows.
End of explanation
print(data['Y'].columns)
Explanation: The first five columns are the clb2 and cln3 induction experiments. The columns that follow are the alpha, cdc15, cdc28 and elutriation time course experiments. The index gives the gene names. The columns are named according to the experiment.
End of explanation
print(data['Y'].index)
Explanation: And the index is given by the gene name, there are 6178 genes in total.
End of explanation
data = pods.datasets.spellman_yeast_cdc15()
Explanation: We also provide a variant of the data for just the cdc15 time course.
End of explanation
plt.plot(data['t'], data['Y']['YAR015W'],'rx')
plt.title('Gene YAR015W from Spellman et al for the cdc15 Time Course')
plt.xlabel('time')
plt.ylabel(r'$\log_2$ expression ratio')
Explanation: And in this data we also provide the associated time points.
End of explanation
print(data['citation'])
Explanation: As normal we include the citation information for the data.
End of explanation
print(data['info'])
print()
print(data['details'])
Explanation: And extra information about the data is included, as standard, under the keys info and details.
End of explanation |
4,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source of the materials
Step1: The PERMISSIVE flag indicates that a number of common
problems (see the section on common problems in PDB files below) associated with PDB files will be
ignored (but note that some atoms and/or residues will be missing). If
the flag is not present a PDBConstructionException will be
generated if any problems are detected during the parse operation.
The Structure object is then produced by letting the PDBParser object
parse a PDB file (the PDB file in this case is called ’pdb1fat.ent’,
’1fat’ is a user defined name for the structure)
Step2: You can extract the header and trailer (simple lists of strings) of the
PDB file from the PDBParser object with the get_header and
get_trailer methods. Note however that many PDB files
contain headers with incomplete or erroneous information. Many of the
errors have been fixed in the equivalent mmCIF files. Hence, if you are
interested in the header information, it is a good idea to extract
information from mmCIF files using the MMCIF2Dict tool described
below, instead of parsing the PDB header.
Now that is clarified, let’s return to parsing the PDB header. The
structure object has an attribute called header which is a Python
dictionary that maps header records to their values.
Example
Step3: The available keys are name, head, deposition_date,
release_date, structure_method, resolution, structure_reference
(which maps to a list of references), journal_reference, author, and
compound (which maps to a dictionary with various information about
the crystallized compound).
The dictionary can also be created without creating a Structure
object, ie. directly from the PDB file
Step4: Reading an mmCIF file
Similarly to the case of PDB files, first create an
MMCIFParser object
Step5: Then use this parser to create a structure object from the mmCIF file
Step6: To have some more low level access to an mmCIF file, you can use the
MMCIF2Dict class to create a Python dictionary that maps all mmCIF
tags in an mmCIF file to their values. If there are multiple values
(like in the case of tag _atom_site.Cartn_y, which holds the $y$
coordinates of all atoms), the tag is mapped to a list of values. The
dictionary is created from the mmCIF file as follows
Step7: Example
Step8: Example
Step9: Reading files in the PDB XML format
That’s not yet supported, but we are definitely planning to support that
in the future (it’s not a lot of work). Contact the Biopython developers
if you need this.
Writing PDB files
Use the PDBIO class for this. It’s easy to write out specific parts of a
structure too, of course.
Example
Step10: If you want to write out a part of the structure, make use of the
Select class (also in PDBIO). Select has four methods
Step11: If this is all too complicated for you, the Dice module contains a
handy extract function that writes out all residues in a chain between
a start and end residue.
Structure representation
The overall layout of a Structure object follows the so-called SMCRA
(Structure/Model/Chain/Residue/Atom) architecture
Step12: You can also get a list of all child Entities of a parent Entity object.
Note that this list is sorted in a specific way (e.g. according to chain
identifier for Chain objects in a Model object).
Step13: You can also get the parent from a child
Step14: At all levels of the SMCRA hierarchy, you can also extract a full id.
The full id is a tuple containing all id’s starting from the top object
(Structure) down to the current object. A full id for a Residue object
e.g. is something like
Step15: This corresponds to
Step16: You can check if the entity has a child with a given id by using the
has_id method
Step17: The length of an entity is equal to its number of children
Step18: It is possible to delete, rename, add, etc. child entities from a parent
entity, but this does not include any sanity checks (e.g. it is possible
to add two residues with the same id to one chain). This really should
be done via a nice Decorator class that includes integrity checking, but
you can take a look at the code (Entity.py) if you want to use the raw
interface.
Structure
The Structure object is at the top of the hierarchy. Its id is a user
given string. The Structure contains a number of Model children. Most
crystal structures (but not all) contain a single model, while NMR
structures typically consist of several models. Disorder in crystal
structures of large parts of molecules can also result in several
models.
Model
The id of the Model object is an integer, which is derived from the
position of the model in the parsed file (they are automatically
numbered starting from 0). Crystal structures generally have only one
model (with id 0), while NMR files usually have several models. Whereas
many PDB parsers assume that there is only one model, the Structure
class in Bio.PDB is designed such that it can easily handle PDB files
with more than one model.
As an example, to get the first model from a Structure object, use
Step19: The Model object stores a list of Chain children.
Chain
The id of a Chain object is derived from the chain identifier in the
PDB/mmCIF file, and is a single character (typically a letter). Each
Chain in a Model object has a unique id. As an example, to get the Chain
object with identifier “A” from a Model object, use
Step20: The Chain object stores a list of Residue children.
Residue
A residue id is a tuple with three elements
Step21: The reason for the hetero-flag is that many, many PDB files use the same
sequence identifier for an amino acid and a hetero-residue or a water,
which would create obvious problems if the hetero-flag was not used.
Unsurprisingly, a Residue object stores a set of Atom children. It also
contains a string that specifies the residue name (e.g. “ASN”) and the
segment identifier of the residue (well known to X-PLOR users, but not
used in the construction of the SMCRA data structure).
Let’s look at some examples. Asn 10 with a blank insertion code would
have residue id (' ', 10, ' '). Water 10 would have residue
id ('W', 10, ' '). A glucose molecule (a hetero residue
with residue name GLC) with sequence identifier 10 would have residue id
('H_GLC', 10, ' '). In this way, the three residues (with
the same insertion code and sequence identifier) can be part of the same
chain because their residue id’s are distinct.
In most cases, the hetflag and insertion code fields will be blank, e.g.
(' ', 10, ' '). In these cases, the sequence identifier can
be used as a shortcut for the full id
Step22: Each Residue object in a Chain object should have a unique id. However,
disordered residues are dealt with in a special way, as described in
section [point mutations].
A Residue object has a number of additional methods
Step23: You can use is_aa(residue) to test if a Residue object is an amino
acid.
Atom
The Atom object stores the data associated with an atom, and has no
children. The id of an atom is its atom name (e.g. “OG” for the side
chain oxygen of a Ser residue). An Atom id needs to be unique in a
Residue. Again, an exception is made for disordered atoms, as described
in section [disordered atoms].
The atom id is simply the atom name (eg. ’CA’). In practice, the atom
name is created by stripping all spaces from the atom name in the PDB
file.
However, in PDB files, a space can be part of an atom name. Often,
calcium atoms are called ’CA..’ in order to distinguish them from
C$\alpha$ atoms (which are called ’.CA.’). In cases where stripping the
spaces would create problems (ie. two atoms called ’CA’ in the same
residue) the spaces are kept.
In a PDB file, an atom name consists of 4 chars, typically with leading
and trailing spaces. Often these spaces can be removed for ease of use
(e.g. an amino acid C$ \alpha $ atom is labeled “.CA.” in a PDB file,
where the dots represent spaces). To generate an atom name (and thus an
atom id) the spaces are removed, unless this would result in a name
collision in a Residue (i.e. two Atom objects with the same atom name
and id). In the latter case, the atom name including spaces is tried.
This situation can e.g. happen when one residue contains atoms with
names “.CA.” and “CA..”, although this is not very likely.
The atomic data stored includes the atom name, the atomic coordinates
(including standard deviation if present), the B factor (including
anisotropic B factors and standard deviation if present), the altloc
specifier and the full atom name including spaces. Less used items like
the atom element number or the atomic charge sometimes specified in a
PDB file are not stored.
To manipulate the atomic coordinates, use the transform method of the
Atom object. Use the set_coord method to specify the atomic
coordinates directly.
An Atom object has the following additional methods
Step24: To represent the atom coordinates, siguij, anisotropic B factor and
sigatm Numpy arrays are used.
The get_vector method returns a Vector object representation of the
coordinates of the Atom object, allowing you to do vector operations
on atomic coordinates. Vector implements the full set of 3D vector
operations, matrix multiplication (left and right) and some advanced
rotation-related operations as well.
As an example of the capabilities of Bio.PDB’s Vector module, suppose
that you would like to find the position of a Gly residue’s C$\beta$
atom, if it had one. Rotating the N atom of the Gly residue along the
C$\alpha$-C bond over -120 degrees roughly puts it in the position of a
virtual C$\beta$ atom. Here’s how to do it, making use of the rotaxis
method (which can be used to construct a rotation around a certain axis)
of the Vector module
Step25: This example shows that it’s possible to do some quite nontrivial vector
operations on atomic data, which can be quite useful. In addition to all
the usual vector operations (cross (use **), and dot (use *)
product, angle, norm, etc.) and the above mentioned rotaxis function,
the Vector module also has methods to rotate (rotmat) or reflect
(refmat) one vector on top of another.
Extracting a specific Atom/Residue/Chain/Model from a Structure
These are some examples
Step26: Note that you can use a shortcut
Step27: Disorder
Bio.PDB can handle both disordered atoms and point mutations (i.e. a Gly
and an Ala residue in the same position).
General approach
Disorder should be dealt with from two points of view
Step28: Disordered residues
Common case
The most common case is a residue that contains one or more disordered
atoms. This is evidently solved by using DisorderedAtom objects to
represent the disordered atoms, and storing the DisorderedAtom object in
a Residue object just like ordinary Atom objects. The DisorderedAtom
will behave exactly like an ordinary atom (in fact the atom with the
highest occupancy) by forwarding all uncaught method calls to one of the
Atom objects (the selected Atom object) it contains.
Point mutations
A special case arises when disorder is due to a point mutation, i.e.
when two or more point mutants of a polypeptide are present in the
crystal. An example of this can be found in PDB structure 1EN2.
Since these residues belong to a different residue type (e.g. let’s say
Ser 60 and Cys 60) they should not be stored in a single Residue
object as in the common case. In this case, each residue is represented
by one Residue object, and both Residue objects are stored in a
single DisorderedResidue object.
Step29: In addition, you can get a list of all Atom objects (ie. all
DisorderedAtom objects are ’unpacked’ to their individual Atom
objects) using the get_unpacked_list method of a (Disordered)Residue
object.
Hetero residues
Associated problems
A common problem with hetero residues is that several hetero and
non-hetero residues present in the same chain share the same sequence
identifier (and insertion code). Therefore, to generate a unique id for
each hetero residue, waters and other hetero residues are treated in a
different way.
Remember that Residue object have the tuple (hetfield, resseq, icode) as
id. The hetfield is blank (“ ”) for amino and nucleic acids, and a
string for waters and other hetero residues. The content of the hetfield
is explained below.
Water residues
The hetfield string of a water residue consists of the letter “W”. So a
typical residue id for a water is (“W”, 1, “ ”).
Other hetero residues
The hetfield string for other hetero residues starts with “H_” followed
by the residue name. A glucose molecule e.g. with residue name “GLC”
would have hetfield “H_GLC”. Its residue id could e.g. be (“H_GLC”, 1,
“ ”).
Navigating through a Structure object
Parse a PDB file, and extract some Model, Chain, Residue and Atom objects
Step30: Iterating through all atoms of a structure
Step31: There is a shortcut if you want to iterate over all atoms in a
structure
Step32: Similarly, to iterate over all atoms in a chain, use
Step33: Iterating over all residues of a model
or if you want to iterate over all residues in a model
Step34: You can also use the Selection.unfold_entities function to get all
residues from a structure
Step35: or to get all atoms from a chain
Step36: Obviously, A=atom, R=residue, C=chain, M=model, S=structure. You can
use this to go up in the hierarchy, e.g. to get a list of (unique)
Residue or Chain parents from a list of Atoms
Step37: For more info, see the API documentation.
Extract a hetero residue from a chain (e.g. a glucose (GLC) moiety with resseq 10)
Step38: Print all hetero residues in chain
Step39: Print out the coordinates of all CA atoms in a structure with B factor greater than 50
Step40: Print out all the residues that contain disordered atoms
Step41: Loop over all disordered atoms, and select all atoms with altloc A (if present)
This will make sure that the SMCRA data structure will behave as if only
the atoms with altloc A are present.
Step42: Extracting polypeptides from a Structure object
Step43: A Polypeptide object is simply a UserList of Residue objects, and is
always created from a single Model (in this case model 1). You can use
the resulting Polypeptide object to get the sequence as a Seq object
or to get a list of C$\alpha$ atoms as well. Polypeptides can be built
using a C-N or a C$\alpha$-C$\alpha$ distance criterion.
Example
Step44: Note that in the above case only model 0 of the structure is considered
by PolypeptideBuilder. However, it is possible to use
PolypeptideBuilder to build Polypeptide objects from Model and
Chain objects as well.
Obtaining the sequence of a structure
The first thing to do is to extract all polypeptides from the structure
(as above). The sequence of each polypeptide can then easily be obtained
from the Polypeptide objects. The sequence is represented as a
Biopython Seq object, and its alphabet is defined by a
ProteinAlphabet object.
Example
Step45: Analyzing structures
Measuring distances
The minus operator for atoms has been overloaded to return the distance
between two atoms.
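For example (a minimal sketch; residue1 and residue2 stand for any two Residue objects obtained as described above):
# distance between the CA atoms of two residues, in Angstrom
distance = residue1['CA'] - residue2['CA']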
Step46: Measuring angles
Use the vector representation of the atomic coordinates, and the
calc_angle function from the Vector module
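A sketch of the idea, assuming three Atom objects atom1, atom2 and atom3 (in recent Biopython releases calc_angle is found in Bio.PDB.vectors):
from Bio.PDB.vectors import calc_angle
vector1 = atom1.get_vector()
vector2 = atom2.get_vector()
vector3 = atom3.get_vector()
angle = calc_angle(vector1, vector2, vector3)  # angle at vector2, in radians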
Step47: Measuring torsion angles
Use the vector representation of the atomic coordinates, and the
calc_dihedral function from the Vector module
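A similar sketch for a torsion angle, assuming four Atom objects that define the dihedral:
from Bio.PDB.vectors import calc_dihedral
v1 = atom1.get_vector()
v2 = atom2.get_vector()
v3 = atom3.get_vector()
v4 = atom4.get_vector()
torsion = calc_dihedral(v1, v2, v3, v4)  # dihedral angle in radians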
Step48: Determining atom-atom contacts
Use NeighborSearch to perform neighbor lookup. The neighbor lookup is
done using a KD tree module written in C (see Bio.KDTree), making it
very fast. It also includes a fast method to find all point pairs within
a certain distance of each other.
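A minimal sketch of a neighbor lookup; the 5.0 Angstrom radius and the choice of center point are arbitrary illustrations:
from Bio.PDB import NeighborSearch, Selection
atom_list = Selection.unfold_entities(structure, 'A')  # all atoms of the structure
ns = NeighborSearch(atom_list)
center = atom_list[0].get_coord()
neighbors = ns.search(center, 5.0)      # atoms within 5.0 A of the chosen point
close_pairs = ns.search_all(5.0)        # all atom pairs within 5.0 A of each other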
Superimposing two structures
Use a Superimposer object to superimpose two coordinate sets. This
object calculates the rotation and translation matrix that rotates two
lists of atoms on top of each other in such a way that their RMSD is
minimized. Of course, the two lists need to contain the same number of
atoms. The Superimposer object can also apply the rotation/translation
to a list of atoms. The rotation and translation are stored as a tuple
in the rotran attribute of the Superimposer object (note that the
rotation is right multiplying!). The RMSD is stored in the rmsd
attribute.
The algorithm used by Superimposer comes from @golub1989 [Golub & Van
Loan] and makes use of singular value decomposition (this is implemented
in the general Bio.SVDSuperimposer module).
Example
Step49: To superimpose two structures based on their active sites, use the
active site atoms to calculate the rotation/translation matrices (as
above), and apply these to the whole molecule.
Mapping the residues of two related structures onto each other
First, create an alignment file in FASTA format, then use the
StructureAlignment class. This class can also be used for alignments
with more than two structures.
Calculating the Half Sphere Exposure
Half Sphere Exposure (HSE) is a new, 2D measure of solvent exposure
@hamelryck2005. Basically, it counts the number of C$\alpha$ atoms
around a residue in the direction of its side chain, and in the opposite
direction (within a radius of $13 \AA$). Despite its simplicity, it
outperforms many other measures of solvent exposure.
HSE comes in two flavors
Step50: Determining the secondary structure
For this functionality, you need to install DSSP (and obtain a license
for it — free for academic use, see http
Step51: You can also get access to the molecular surface itself (via the
get_surface function), in the form of a Numeric Python array with the
surface points.
Common problems in PDB files
It is well known that many PDB files contain semantic errors (not the
structures themselves, but their representation in PDB files). Bio.PDB
tries to handle this in two ways. The PDBParser object can behave in two
ways
Step52: In the permissive state (DEFAULT), PDB files that obviously contain
errors are “corrected” (i.e. some residues or atoms are left out). These
errors include
Step53: The PDBList class can also be used as a command-line tool | Python Code:
from Bio.PDB.PDBParser import PDBParser
p = PDBParser(PERMISSIVE=1)
Explanation: Source of the materials: Biopython cookbook (adapted)
<font color='red'>Status: Draft</font>
Going 3D: The PDB module
Bio.PDB is a Biopython module that focuses on working with crystal
structures of biological macromolecules. Among other things, Bio.PDB
includes a PDBParser class that produces a Structure object, which can
be used to access the atomic data in the file in a convenient manner.
There is limited support for parsing the information contained in the
PDB header.
Reading and writing crystal structure files
Reading a PDB file
First we create a PDBParser object:
End of explanation
structure_id = "1fat"
filename = "data/pdb1fat.ent"
structure = p.get_structure(structure_id, filename)
Explanation: The PERMISSIVE flag indicates that a number of common
problems (see the section on common problems in PDB files below) associated with PDB files will be
ignored (but note that some atoms and/or residues will be missing). If
the flag is not present a PDBConstructionException will be
generated if any problems are detected during the parse operation.
The Structure object is then produced by letting the PDBParser object
parse a PDB file (the PDB file in this case is called ’pdb1fat.ent’,
’1fat’ is a user defined name for the structure):
End of explanation
resolution = structure.header['resolution']
keywords = structure.header['keywords']
Explanation: You can extract the header and trailer (simple lists of strings) of the
PDB file from the PDBParser object with the get_header and
get_trailer methods. Note however that many PDB files
contain headers with incomplete or erroneous information. Many of the
errors have been fixed in the equivalent mmCIF files. Hence, if you are
interested in the header information, it is a good idea to extract
information from mmCIF files using the MMCIF2Dict tool described
below, instead of parsing the PDB header.
Now that is clarified, let’s return to parsing the PDB header. The
structure object has an attribute called header which is a Python
dictionary that maps header records to their values.
Example:
End of explanation
from Bio.PDB.parse_pdb_header import parse_pdb_header
file = open(filename, 'r')
header_dict = parse_pdb_header(file)
file.close()
Explanation: The available keys are name, head, deposition_date,
release_date, structure_method, resolution, structure_reference
(which maps to a list of references), journal_reference, author, and
compound (which maps to a dictionary with various information about
the crystallized compound).
The dictionary can also be created without creating a Structure
object, ie. directly from the PDB file:
End of explanation
from Bio.PDB.MMCIFParser import MMCIFParser
parser = MMCIFParser()
Explanation: Reading an mmCIF file
Similarly to the case of PDB files, first create an
MMCIFParser object:
End of explanation
structure = parser.get_structure('1fat', 'data/1fat.cif')
Explanation: Then use this parser to create a structure object from the mmCIF file:
End of explanation
from Bio.PDB.MMCIF2Dict import MMCIF2Dict
mmcif_dict = MMCIF2Dict('data/1fat.cif')
Explanation: To have some more low level access to an mmCIF file, you can use the
MMCIF2Dict class to create a Python dictionary that maps all mmCIF
tags in an mmCIF file to their values. If there are multiple values
(like in the case of tag _atom_site.Cartn_y, which holds the $y$
coordinates of all atoms), the tag is mapped to a list of values. The
dictionary is created from the mmCIF file as follows:
End of explanation
sc = mmcif_dict['_exptl_crystal.density_percent_sol']
Explanation: Example: get the solvent content from an mmCIF file:
End of explanation
y_list = mmcif_dict['_atom_site.Cartn_y']
Explanation: Example: get the list of the $y$ coordinates of all atoms
End of explanation
from Bio.PDB import PDBIO
io = PDBIO()
io.set_structure(structure)
io.save('out.pdb')
Explanation: Reading files in the PDB XML format
That’s not yet supported, but we are definitely planning to support that
in the future (it’s not a lot of work). Contact the Biopython developers
if you need this.
Writing PDB files
Use the PDBIO class for this. It’s easy to write out specific parts of a
structure too, of course.
Example: saving a structure
End of explanation
from Bio.PDB.PDBIO import Select
class GlySelect(Select):
def accept_residue(self, residue):
        if residue.get_resname() == 'GLY':
return True
else:
return False
io = PDBIO()
io.set_structure(structure)
io.save('gly_only.pdb', GlySelect())
Explanation: If you want to write out a part of the structure, make use of the
Select class (also in PDBIO). Select has four methods:
accept_model(model)
accept_chain(chain)
accept_residue(residue)
accept_atom(atom)
By default, every method returns 1 (which means the
model/chain/residue/atom is included in the output). By subclassing
Select and returning 0 when appropriate you can exclude models,
chains, etc. from the output. Cumbersome maybe, but very powerful. The
following code only writes out glycine residues:
End of explanation
child_entity = parent_entity[child_id]
Explanation: If this is all too complicated for you, the Dice module contains a
handy extract function that writes out all residues in a chain between
a start and end residue.
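A hedged sketch of how that might look; the chain id, residue range and output filename below are made up for illustration:
from Bio.PDB.Dice import extract
# write residues 10 through 20 of chain A to a new PDB file
extract(structure, 'A', 10, 20, 'chain_A_10_20.pdb')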
Structure representation
The overall layout of a Structure object follows the so-called SMCRA
(Structure/Model/Chain/Residue/Atom) architecture:
A structure consists of models
A model consists of chains
A chain consists of residues
A residue consists of atoms
This is the way many structural biologists/bioinformaticians think about
structure, and provides a simple but efficient way to deal with
structure. Additional stuff is essentially added when needed. A UML
diagram of the Structure object (forget about the Disordered classes
for now) is not reproduced here. Such a data structure is not
necessarily best suited for the representation of the macromolecular
content of a structure, but it is absolutely necessary for a good
interpretation of the data present in a file that describes the
structure (typically a PDB or MMCIF file). If this hierarchy cannot
represent the contents of a structure file, it is fairly certain that
the file contains an error or at least does not describe the structure
unambiguously. If a SMCRA data structure cannot be generated, there is
reason to suspect a problem. Parsing a PDB file can thus be used to
detect likely problems. We will give several examples of this in section
[problem structures].
Structure, Model, Chain and Residue are all subclasses of the Entity
base class. The Atom class only (partly) implements the Entity interface
(because an Atom does not have children).
For each Entity subclass, you can extract a child by using a unique id
for that child as a key (e.g. you can extract an Atom object from a
Residue object by using an atom name string as a key, you can extract a
Chain object from a Model object by using its chain identifier as a
key).
Disordered atoms and residues are represented by DisorderedAtom and
DisorderedResidue classes, which are both subclasses of the
DisorderedEntityWrapper base class. They hide the complexity associated
with disorder and behave exactly as Atom and Residue objects.
In general, a child Entity object (i.e. Atom, Residue, Chain, Model) can
be extracted from its parent (i.e. Residue, Chain, Model, Structure,
respectively) by using an id as a key.
End of explanation
child_list = parent_entity.get_list()
Explanation: You can also get a list of all child Entities of a parent Entity object.
Note that this list is sorted in a specific way (e.g. according to chain
identifier for Chain objects in a Model object).
End of explanation
parent_entity = child_entity.get_parent()
Explanation: You can also get the parent from a child:
End of explanation
full_id = residue.get_full_id()
print(full_id)
Explanation: At all levels of the SMCRA hierarchy, you can also extract a full id.
The full id is a tuple containing all id’s starting from the top object
(Structure) down to the current object. A full id for a Residue object
e.g. is something like:
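For instance, the printed full id could look like the tuple below, assembled here from the component ids spelled out in the next explanation:
("1abc", 0, "A", (" ", 10, "A"))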
End of explanation
entity.get_id()
Explanation: This corresponds to:
The Structure with id "1abc"
The Model with id 0
The Chain with id "A"
The Residue with id (" ", 10, "A").
The Residue id indicates that the residue is not a hetero-residue (nor a
water) because it has a blank hetero field, that its sequence identifier
is 10 and that its insertion code is "A".
To get the entity’s id, use the get_id method:
End of explanation
entity.has_id(entity_id)
Explanation: You can check if the entity has a child with a given id by using the
has_id method:
End of explanation
nr_children = len(entity)
Explanation: The length of an entity is equal to its number of children:
End of explanation
first_model = structure[0]
Explanation: It is possible to delete, rename, add, etc. child entities from a parent
entity, but this does not include any sanity checks (e.g. it is possible
to add two residues with the same id to one chain). This really should
be done via a nice Decorator class that includes integrity checking, but
you can take a look at the code (Entity.py) if you want to use the raw
interface.
Structure
The Structure object is at the top of the hierarchy. Its id is a user
given string. The Structure contains a number of Model children. Most
crystal structures (but not all) contain a single model, while NMR
structures typically consist of several models. Disorder in crystal
structures of large parts of molecules can also result in several
models.
Model
The id of the Model object is an integer, which is derived from the
position of the model in the parsed file (they are automatically
numbered starting from 0). Crystal structures generally have only one
model (with id 0), while NMR files usually have several models. Whereas
many PDB parsers assume that there is only one model, the Structure
class in Bio.PDB is designed such that it can easily handle PDB files
with more than one model.
As an example, to get the first model from a Structure object, use
End of explanation
chain_A = model["A"]
Explanation: The Model object stores a list of Chain children.
Chain
The id of a Chain object is derived from the chain identifier in the
PDB/mmCIF file, and is a single character (typically a letter). Each
Chain in a Model object has a unique id. As an example, to get the Chain
object with identifier “A” from a Model object, use
End of explanation
# Full id
residue = chain[(' ', 100, ' ')]
residue = chain[100]
Explanation: The Chain object stores a list of Residue children.
Residue
A residue id is a tuple with three elements:
The hetero-field (hetfield): this is
'W' in the case of a water molecule;
'H_' followed by the residue name for other hetero
residues (e.g. 'H_GLC' in the case of a glucose molecule);
blank for standard amino and nucleic acids.
This scheme is adopted for reasons described in section
[hetero problems].
The sequence identifier (resseq), an integer describing the
position of the residue in the chain (e.g., 100);
The insertion code (icode); a string, e.g. ’A’. The insertion
code is sometimes used to preserve a certain desirable residue
numbering scheme. A Ser 80 insertion mutant (inserted e.g. between a
Thr 80 and an Asn 81 residue) could e.g. have sequence identifiers
and insertion codes as follows: Thr 80 A, Ser 80 B, Asn 81. In this
way the residue numbering scheme stays in tune with that of the wild
type structure.
The id of the above glucose residue would thus be (’H_GLC’, 100, ’A’).
If the hetero-flag and insertion code are blank, the sequence identifier
alone can be used:
End of explanation
# use full id
res10 = chain[(' ', 10, ' ')]
res10 = chain[10]
Explanation: The reason for the hetero-flag is that many, many PDB files use the same
sequence identifier for an amino acid and a hetero-residue or a water,
which would create obvious problems if the hetero-flag was not used.
Unsurprisingly, a Residue object stores a set of Atom children. It also
contains a string that specifies the residue name (e.g. “ASN”) and the
segment identifier of the residue (well known to X-PLOR users, but not
used in the construction of the SMCRA data structure).
Let’s look at some examples. Asn 10 with a blank insertion code would
have residue id (' ', 10, ' '). Water 10 would have residue
id ('W', 10, ' '). A glucose molecule (a hetero residue
with residue name GLC) with sequence identifier 10 would have residue id
('H_GLC', 10, ' '). In this way, the three residues (with
the same insertion code and sequence identifier) can be part of the same
chain because their residue id’s are distinct.
In most cases, the hetflag and insertion code fields will be blank, e.g.
(' ', 10, ' '). In these cases, the sequence identifier can
be used as a shortcut for the full id:
End of explanation
residue.get_resname() # returns the residue name, e.g. "ASN"
residue.is_disordered() # returns 1 if the residue has disordered atoms
residue.get_segid() # returns the SEGID, e.g. "CHN1"
residue.has_id(name) # test if a residue has a certain atom
Explanation: Each Residue object in a Chain object should have a unique id. However,
disordered residues are dealt with in a special way, as described in
section [point mutations].
A Residue object has a number of additional methods:
End of explanation
a.get_name() # atom name (spaces stripped, e.g. "CA")
a.get_id() # id (equals atom name)
a.get_coord() # atomic coordinates
a.get_vector() # atomic coordinates as Vector object
a.get_bfactor() # isotropic B factor
a.get_occupancy() # occupancy
a.get_altloc() # alternative location specifier
a.get_sigatm() # standard deviation of atomic parameters
a.get_siguij() # standard deviation of anisotropic B factor
a.get_anisou() # anisotropic B factor
a.get_fullname() # atom name (with spaces, e.g. ".CA.")
Explanation: You can use is_aa(residue) to test if a Residue object is an amino
acid.
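A minimal sketch of that check (a hedged illustration reusing the chain object from the earlier examples; is_aa can be imported from Bio.PDB.Polypeptide):
from Bio.PDB.Polypeptide import is_aa

for residue in chain:
    if is_aa(residue):
        # standard amino acid residue
        print(residue.get_resname())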
Atom
The Atom object stores the data associated with an atom, and has no
children. The id of an atom is its atom name (e.g. “OG” for the side
chain oxygen of a Ser residue). An Atom id needs to be unique in a
Residue. Again, an exception is made for disordered atoms, as described
in section [disordered atoms].
The atom id is simply the atom name (e.g. 'CA'). In practice, the atom
name is created by stripping all spaces from the atom name in the PDB
file.
However, in PDB files, a space can be part of an atom name. Often,
calcium atoms are called 'CA..' in order to distinguish them from
C$\alpha$ atoms (which are called '.CA.'). In cases where stripping the
spaces would create problems (i.e. two atoms called 'CA' in the same
residue) the spaces are kept.
In a PDB file, an atom name consists of 4 chars, typically with leading
and trailing spaces. Often these spaces can be removed for ease of use
(e.g. an amino acid C$\alpha$ atom is labeled ".CA." in a PDB file,
where the dots represent spaces). To generate an atom name (and thus an
atom id) the spaces are removed, unless this would result in a name
collision in a Residue (i.e. two Atom objects with the same atom name
and id). In the latter case, the atom name including spaces is tried.
This situation can e.g. happen when one residue contains atoms with
names “.CA.” and “CA..”, although this is not very likely.
The atomic data stored includes the atom name, the atomic coordinates
(including standard deviation if present), the B factor (including
anisotropic B factors and standard deviation if present), the altloc
specifier and the full atom name including spaces. Less used items like
the atom element number or the atomic charge sometimes specified in a
PDB file are not stored.
To manipulate the atomic coordinates, use the transform method of the
Atom object. Use the set_coord method to specify the atomic
coordinates directly.
An Atom object has the following additional methods:
End of explanation
from math import pi
from Bio.PDB.vectors import rotaxis  # in older Biopython releases: from Bio.PDB.Vector import rotaxis

# get atom coordinates as vectors
n = residue['N'].get_vector()
c = residue['C'].get_vector()
ca = residue['CA'].get_vector()
n = n - ca
c = c - ca
rot = rotaxis(-pi * 120.0/180.0, c)
cb_at_origin = n.left_multiply(rot)
cb = cb_at_origin + ca
Explanation: The atom coordinates, siguij, anisotropic B factor and sigatm
are stored as Numpy arrays.
The get_vector method returns a Vector object representation of the
coordinates of the Atom object, allowing you to do vector operations
on atomic coordinates. Vector implements the full set of 3D vector
operations, matrix multiplication (left and right) and some advanced
rotation-related operations as well.
As an example of the capabilities of Bio.PDB’s Vector module, suppose
that you would like to find the position of a Gly residue’s C$\beta$
atom, if it had one. Rotating the N atom of the Gly residue along the
C$\alpha$-C bond over -120 degrees roughly puts it in the position of a
virtual C$\beta$ atom. Here’s how to do it, making use of the rotaxis
method (which can be used to construct a rotation around a certain axis)
of the Vector module:
End of explanation
model = structure[0]
chain = model['A']
residue = chain[100]
atom = residue['CA']
Explanation: This example shows that it’s possible to do some quite nontrivial vector
operations on atomic data, which can be quite useful. In addition to all
the usual vector operations (cross (use **), and dot (use *)
product, angle, norm, etc.) and the above mentioned rotaxis function,
the Vector module also has methods to rotate (rotmat) or reflect
(refmat) one vector on top of another.
Extracting a specific Atom/Residue/Chain/Model from a Structure
These are some examples:
End of explanation
atom = structure[0]['A'][100]['CA']
Explanation: Note that you can use a shortcut:
End of explanation
atom.disordered_select('A') # select altloc A atom
print(atom.get_altloc())
atom.disordered_select('B') # select altloc B atom
print(atom.get_altloc())
Explanation: Disorder
Bio.PDB can handle both disordered atoms and point mutations (i.e. a Gly
and an Ala residue in the same position).
General approach[disorder problems]
Disorder should be dealt with from two points of view: the atom and the
residue points of view. In general, we have tried to encapsulate all the
complexity that arises from disorder. If you just want to loop over all
C$\alpha$ atoms, you do not care that some residues have a disordered
side chain. On the other hand it should also be possible to represent
disorder completely in the data structure. Therefore, disordered atoms
or residues are stored in special objects that behave as if there is no
disorder. This is done by only representing a subset of the disordered
atoms or residues. Which subset is picked (e.g. which of the two
disordered OG side chain atom positions of a Ser residue is used) can be
specified by the user.
Disordered atoms[disordered atoms]
Disordered atoms are represented by ordinary Atom objects, but all
Atom objects that represent the same physical atom are stored in a
DisorderedAtom object (see Fig. [fig:smcra]). Each Atom object in
a DisorderedAtom object can be uniquely indexed using its altloc
specifier. The DisorderedAtom object forwards all uncaught method
calls to the selected Atom object, by default the one that represents
the atom with the highest occupancy. The user can of course change the
selected Atom object, making use of its altloc specifier. In this way
atom disorder is represented correctly without much additional
complexity. In other words, if you are not interested in atom disorder,
you will not be bothered by it.
Each disordered atom has a characteristic altloc identifier. You can
specify that a DisorderedAtom object should behave like the Atom
object associated with a specific altloc identifier:
End of explanation
residue = chain[10]
residue.disordered_select('CYS')
Explanation: Disordered residues
Common case
The most common case is a residue that contains one or more disordered
atoms. This is evidently solved by using DisorderedAtom objects to
represent the disordered atoms, and storing the DisorderedAtom object in
a Residue object just like ordinary Atom objects. The DisorderedAtom
will behave exactly like an ordinary atom (in fact the atom with the
highest occupancy) by forwarding all uncaught method calls to one of the
Atom objects (the selected Atom object) it contains.
Point mutations[point mutations]
A special case arises when disorder is due to a point mutation, i.e.
when two or more point mutants of a polypeptide are present in the
crystal. An example of this can be found in PDB structure 1EN2.
Since these residues belong to a different residue type (e.g. let’s say
Ser 60 and Cys 60) they should not be stored in a single Residue
object as in the common case. In this case, each residue is represented
by one Residue object, and both Residue objects are stored in a
single DisorderedResidue object (see Fig. [fig:smcra]).
The DisorderedResidue object forwards all uncaught methods to the
selected Residue object (by default the last Residue object added),
and thus behaves like an ordinary residue. Each Residue object in a
DisorderedResidue object can be uniquely identified by its residue
name. In the above example, residue Ser 60 would have id “SER” in the
DisorderedResidue object, while residue Cys 60 would have id “CYS”.
The user can select the active Residue object in a DisorderedResidue
object via this id.
Example: suppose that a chain has a point mutation at position 10,
consisting of a Ser and a Cys residue. Make sure that residue 10 of this
chain behaves as the Cys residue.
End of explanation
from Bio.PDB.PDBParser import PDBParser
parser = PDBParser()
structure = parser.get_structure("test", "data/pdb1fat.ent")
model = structure[0]
chain = model["A"]
residue = chain[1]
atom = residue["CA"]
Explanation: In addition, you can get a list of all Atom objects (i.e. all
DisorderedAtom objects are 'unpacked' to their individual Atom
objects) using the get_unpacked_list method of a (Disordered)Residue
object.
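For instance (a small hedged sketch reusing the residue object from the examples above):
for atom in residue.get_unpacked_list():
    # disordered atoms are unpacked into their individual Atom objects
    print(atom.get_name(), atom.get_altloc())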
Hetero residues
Associated problems[hetero problems]
A common problem with hetero residues is that several hetero and
non-hetero residues present in the same chain share the same sequence
identifier (and insertion code). Therefore, to generate a unique id for
each hetero residue, waters and other hetero residues are treated in a
different way.
Remember that Residue objects have the tuple (hetfield, resseq, icode) as
id. The hetfield is blank (“ ”) for amino and nucleic acids, and a
string for waters and other hetero residues. The content of the hetfield
is explained below.
Water residues
The hetfield string of a water residue consists of the letter “W”. So a
typical residue id for a water is (“W”, 1, “ ”).
Other hetero residues
The hetfield string for other hetero residues starts with “H_” followed
by the residue name. A glucose molecule e.g. with residue name “GLC”
would have hetfield “H_GLC”. Its residue id could e.g. be (“H_GLC”, 1,
“ ”).
Navigating through a Structure object
Parse a PDB file, and extract some Model, Chain, Residue and Atom objects
End of explanation
p = PDBParser()
structure = p.get_structure('X', 'data/pdb1fat.ent')
for model in structure:
for chain in model:
for residue in chain:
for atom in residue:
print(atom)
Explanation: Iterating through all atoms of a structure
End of explanation
atoms = structure.get_atoms()
for atom in atoms:
print(atom)
Explanation: There is a shortcut if you want to iterate over all atoms in a
structure:
End of explanation
atoms = chain.get_atoms()
for atom in atoms:
print(atom)
Explanation: Similarly, to iterate over all atoms in a chain, use
End of explanation
residues = model.get_residues()
for residue in residues:
print(residue)
Explanation: Iterating over all residues of a model
or if you want to iterate over all residues in a model:
End of explanation
from Bio.PDB import Selection
res_list = Selection.unfold_entities(structure, 'R')
Explanation: You can also use the Selection.unfold_entities function to get all
residues from a structure:
End of explanation
atom_list = Selection.unfold_entities(chain, 'A')
Explanation: or to get all atoms from a chain:
End of explanation
residue_list = Selection.unfold_entities(atom_list, 'R')
chain_list = Selection.unfold_entities(atom_list, 'C')
Explanation: Obviously, A=atom, R=residue, C=chain, M=model, S=structure. You can
use this to go up in the hierarchy, e.g. to get a list of (unique)
Residue or Chain parents from a list of Atoms:
End of explanation
residue_id = ("H_GLC", 10, " ")
residue = chain[residue_id]
Explanation: For more info, see the API documentation.
Extract a hetero residue from a chain (e.g. a glucose (GLC) moiety with resseq 10)
End of explanation
for residue in chain.get_list():
residue_id = residue.get_id()
hetfield = residue_id[0]
if hetfield[0]=="H":
print(residue_id)
Explanation: Print all hetero residues in chain
End of explanation
for model in structure.get_list():
for chain in model.get_list():
for residue in chain.get_list():
if residue.has_id("CA"):
ca = residue["CA"]
if ca.get_bfactor() > 50.0:
print(ca.get_coord())
Explanation: Print out the coordinates of all CA atoms in a structure with B factor greater than 50
End of explanation
for model in structure.get_list():
for chain in model.get_list():
for residue in chain.get_list():
if residue.is_disordered():
resseq = residue.get_id()[1]
resname = residue.get_resname()
model_id = model.get_id()
chain_id = chain.get_id()
print(model_id, chain_id, resname, resseq)
Explanation: Print out all the residues that contain disordered atoms
End of explanation
for model in structure.get_list():
for chain in model.get_list():
for residue in chain.get_list():
if residue.is_disordered():
for atom in residue.get_list():
if atom.is_disordered() and atom.disordered_has_id("A"):
atom.disordered_select("A")
Explanation: Loop over all disordered atoms, and select all atoms with altloc A (if present)
This will make sure that the SMCRA data structure will behave as if only
the atoms with altloc A are present.
End of explanation
from Bio.PDB.Polypeptide import PPBuilder

ppb = PPBuilder()
# build_peptides accepts a Structure, Model or Chain; for a Structure only its first Model is used
polypeptide_list = ppb.build_peptides(structure)
for polypeptide in polypeptide_list:
    print(polypeptide)
Explanation: Extracting polypeptides from a Structure object
To extract polypeptides from a structure, construct a list of
Polypeptide objects from a Structure object using
PolypeptideBuilder as follows:
End of explanation
from Bio.PDB.Polypeptide import PPBuilder, CaPPBuilder

# Using C-N
ppb = PPBuilder()
for pp in ppb.build_peptides(structure):
    print(pp.get_sequence())
# Using CA-CA
ppb = CaPPBuilder()
for pp in ppb.build_peptides(structure):
    print(pp.get_sequence())
Explanation: A Polypeptide object is simply a UserList of Residue objects, and is
always created from a single Model (in this case the first model). You can use
the resulting Polypeptide object to get the sequence as a Seq object
or to get a list of C$\alpha$ atoms as well. Polypeptides can be built
using a C-N or a C$\alpha$-C$\alpha$ distance criterion.
Example:
End of explanation
seq = polypeptide.get_sequence()
print(seq)
Explanation: Note that in the above case only model 0 of the structure is considered
by PolypeptideBuilder. However, it is possible to use
PolypeptideBuilder to build Polypeptide objects from Model and
Chain objects as well.
Obtaining the sequence of a structure
The first thing to do is to extract all polypeptides from the structure
(as above). The sequence of each polypeptide can then easily be obtained
from the Polypeptide objects. The sequence is represented as a
Biopython Seq object, and its alphabet is defined by a
ProteinAlphabet object.
Example:
End of explanation
# Get some atoms
ca1 = residue1['CA']
ca2 = residue2['CA']
distance = ca1-ca2
Explanation: Analyzing structures
Measuring distances
The minus operator for atoms has been overloaded to return the distance
between two atoms.
End of explanation
from Bio.PDB.vectors import calc_angle

vector1 = atom1.get_vector()
vector2 = atom2.get_vector()
vector3 = atom3.get_vector()
angle = calc_angle(vector1, vector2, vector3)
Explanation: Measuring angles
Use the vector representation of the atomic coordinates, and the
calc_angle function from the Vector module:
End of explanation
from Bio.PDB.vectors import calc_dihedral

vector1 = atom1.get_vector()
vector2 = atom2.get_vector()
vector3 = atom3.get_vector()
vector4 = atom4.get_vector()
angle = calc_dihedral(vector1, vector2, vector3, vector4)
Explanation: Measuring torsion angles
Use the vector representation of the atomic coordinates, and the
calc_dihedral function from the Vector module:
End of explanation
from Bio.PDB import Superimposer
sup = Superimposer()
sup.set_atoms(fixed, moving)
print(sup.rotran)
print(sup.rms)
sup.apply(moving)
Explanation: Determining atom-atom contacts
Use NeighborSearch to perform neighbor lookup. The neighbor lookup is
done using a KD tree module written in C (see Bio.KDTree), making it
very fast. It also includes a fast method to find all point pairs within
a certain distance of each other.
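A minimal sketch of that lookup (a hedged illustration; the 5.0 and 3.5 Angstrom radii are arbitrary choices):
from Bio.PDB import NeighborSearch, Selection

atom_list = Selection.unfold_entities(structure, 'A')  # all atoms of the structure
ns = NeighborSearch(atom_list)
# all atoms within 5.0 Angstrom of a given atom's coordinates
close_atoms = ns.search(atom_list[0].get_coord(), 5.0)
# all pairs of atoms closer than 3.5 Angstrom to each other
pairs = ns.search_all(3.5)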
Superimposing two structures
Use a Superimposer object to superimpose two coordinate sets. This
object calculates the rotation and translation matrix that rotates two
lists of atoms on top of each other in such a way that their RMSD is
minimized. Of course, the two lists need to contain the same number of
atoms. The Superimposer object can also apply the rotation/translation
to a list of atoms. The rotation and translation are stored as a tuple
in the rotran attribute of the Superimposer object (note that the
rotation is right multiplying!). The RMSD is stored in the rms
attribute.
The algorithm used by Superimposer comes from @golub1989 [Golub & Van
Loan] and makes use of singular value decomposition (this is implemented
in the general Bio.SVDSuperimposer module).
Example:
End of explanation
from Bio.PDB import HSExposure
model = structure[0]
hse = HSExposure()
exp_ca = hse.calc_hs_exposure(model, option='CA3')
exp_cb=hse.calc_hs_exposure(model, option='CB')
exp_fs = hse.calc_fs_exposure(model)
print(exp_ca[some_residue])
Explanation: To superimpose two structures based on their active sites, use the
active site atoms to calculate the rotation/translation matrices (as
above), and apply these to the whole molecule.
Mapping the residues of two related structures onto each other
First, create an alignment file in FASTA format, then use the
StructureAlignment class. This class can also be used for alignments
with more than two structures.
Calculating the Half Sphere Exposure
Half Sphere Exposure (HSE) is a new, 2D measure of solvent exposure
@hamelryck2005. Basically, it counts the number of C$\alpha$ atoms
around a residue in the direction of its side chain, and in the opposite
direction (within a radius of $13 \AA$). Despite its simplicity, it
outperforms many other measures of solvent exposure.
HSE comes in two flavors: HSE$\alpha$ and HSE$\beta$. The former only
uses the C$\alpha$ atom positions, while the latter uses the C$\alpha$
and C$\beta$ atom positions. The HSE measure is calculated by the
HSExposure class, which can also calculate the contact number. The
latter class has methods which return dictionaries that map a Residue
object to its corresponding HSE$\alpha$, HSE$\beta$ and contact number
values.
Example:
End of explanation
from Bio.PDB import ResidueDepth
model = structure[0]
rd = ResidueDepth(model, pdb_file)
residue_depth, ca_depth=rd[some_residue]
Explanation: Determining the secondary structure
For this functionality, you need to install DSSP (and obtain a license
for it — free for academic use, see http://www.cmbi.kun.nl/gv/dssp/).
Then use the DSSP class, which maps Residue objects to their
secondary structure (and accessible surface area). The DSSP codes are
listed in the table below. Note that DSSP (the program, and
thus by consequence the class) cannot handle multiple models!
Code Secondary structure
H $\alpha$-helix
B Isolated $\beta$-bridge residue
E Strand
G 3-10 helix
I $\Pi$-helix
T Turn
S Bend
- Other
Table: DSSP codes in Bio.PDB.
The DSSP class can also be used to calculate the accessible surface
area of a residue. But see also section [subsec:residue_depth].
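A hedged sketch of that usage (assumes the dssp executable is on the PATH and reuses the parsed 1FAT structure and file from earlier):
from Bio.PDB.DSSP import DSSP

model = structure[0]
dssp = DSSP(model, "data/pdb1fat.ent")
# keys are (chain id, residue id) tuples
a_key = list(dssp.keys())[0]
print(dssp[a_key])  # per-residue tuple including the secondary structure code and accessibility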
Calculating the residue depth[subsec:residue_depth]
Residue depth is the average distance of a residue’s atoms from the
solvent accessible surface. It’s a fairly new and very powerful
parameterization of solvent accessibility. For this functionality, you
need to install Michel Sanner’s MSMS program
(http://www.scripps.edu/pub/olson-web/people/sanner/html/msms_home.html).
Then use the ResidueDepth class. This class behaves as a dictionary
which maps Residue objects to corresponding (residue depth, C$\alpha$
depth) tuples. The C$\alpha$ depth is the distance of a residue’s
C$\alpha$ atom to the solvent accessible surface.
Example:
End of explanation
# Permissive parser
parser = PDBParser(PERMISSIVE=1)
parser = PDBParser() # The same (default)
strict_parser = PDBParser(PERMISSIVE=0)
Explanation: You can also get access to the molecular surface itself (via the
get_surface function), in the form of a Numeric Python array with the
surface points.
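For example (a hedged sketch; get_surface lives in Bio.PDB.ResidueDepth, requires MSMS, and its exact signature has changed between Biopython versions):
from Bio.PDB.ResidueDepth import get_surface

surface = get_surface(model)  # array of surface points, one (x, y, z) row per point
print(surface.shape)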
Common problems in PDB files
It is well known that many PDB files contain semantic errors (not the
structures themselves, but their representation in PDB files). Bio.PDB
tries to handle this in two ways. The PDBParser object can behave in two
ways: a restrictive way and a permissive way, which is the default.
Example:
End of explanation
from Bio.PDB import PDBList
pdbl = PDBList()
pdbl.retrieve_pdb_file('1FAT')
Explanation: In the permissive state (DEFAULT), PDB files that obviously contain
errors are “corrected” (i.e. some residues or atoms are left out). These
errors include:
Multiple residues with the same identifier
Multiple atoms with the same identifier (taking into account the
altloc identifier)
These errors indicate real problems in the PDB file (for details see
@hamelryck2003a [Hamelryck and Manderick, 2003]). In the restrictive
state, PDB files with errors cause an exception to occur. This is useful
to find errors in PDB files.
Examples[problem structures]
The PDBParser/Structure class was tested on about 800 structures (each
belonging to a unique SCOP superfamily). This takes about 20 minutes, or
on average 1.5 seconds per structure. Parsing the structure of the large
ribosomal subunit (1FFK), which contains about 64000 atoms, takes 10
seconds on a 1000 MHz PC.
Three exceptions were generated in cases where an unambiguous data
structure could not be built. In all three cases, the likely cause is an
error in the PDB file that should be corrected. Generating an exception
in these cases is much better than running the chance of incorrectly
describing the structure in a data structure.
Duplicate residues
One structure contains two amino acid residues in one chain with the
same sequence identifier (resseq 3) and icode. Upon inspection it was
found that this chain contains the residues Thr A3, …, Gly A202, Leu A3,
Glu A204. Clearly, Leu A3 should be Leu A203. A couple of similar
situations exist for structure 1FFK (which e.g. contains Gly B64, Met
B65, Glu B65, Thr B67, i.e. residue Glu B65 should be Glu B66).
Duplicate atoms
Structure 1EJG contains a Ser/Pro point mutation in chain A at position
22. In turn, Ser 22 contains some disordered atoms. As expected, all
atoms belonging to Ser 22 have a non-blank altloc specifier (B or C).
All atoms of Pro 22 have altloc A, except the N atom which has a blank
altloc. This generates an exception, because all atoms belonging to two
residues at a point mutation should have non-blank altloc. It turns out
that this atom is probably shared by Ser and Pro 22, as Ser 22 misses
the N atom. Again, this points to a problem in the file: the N atom
should be present in both the Ser and the Pro residue, in both cases
associated with a suitable altloc identifier.
Automatic correction
Some errors are quite common and can be easily corrected without much
risk of making a wrong interpretation. These cases are listed below.
A blank altloc for a disordered atom
Normally each disordered atom should have a non-blank altloc identifier.
However, there are many structures that do not follow this convention,
and have a blank and a non-blank identifier for two disordered positions
of the same atom. This is automatically interpreted in the right way.
Broken chains
Sometimes a structure contains a list of residues belonging to chain A,
followed by residues belonging to chain B, and again followed by
residues belonging to chain A, i.e. the chains are “broken”. This is
correctly interpreted.
Fatal errors
Sometimes a PDB file cannot be unambiguously interpreted. Rather than
guessing and risking a mistake, an exception is generated, and the user
is expected to correct the PDB file. These cases are listed below.
Duplicate residues
All residues in a chain should have a unique id. This id is generated
based on:
The sequence identifier (resseq).
The insertion code (icode).
The hetfield string (“W” for waters and “H_” followed by the
residue name for other hetero residues)
The residue names of the residues in the case of point mutations (to
store the Residue objects in a DisorderedResidue object).
If this does not lead to a unique id something is quite likely wrong,
and an exception is generated.
Duplicate atoms
All atoms in a residue should have a unique id. This id is generated
based on:
The atom name (without spaces, or with spaces if a problem arises).
The altloc specifier.
If this does not lead to a unique id something is quite likely wrong,
and an exception is generated.
Accessing the Protein Data Bank
Downloading structures from the Protein Data Bank
Structures can be downloaded from the PDB (Protein Data Bank) by using
the retrieve_pdb_file method on a PDBList object. The argument for
this method is the PDB identifier of the structure.
End of explanation
pl = PDBList(pdb='/tmp/data/pdb')
pl.update_pdb()
Explanation: The PDBList class can also be used as a command-line tool:
```
python PDBList.py 1fat
```
The downloaded file will be called pdb1fat.ent and stored in the
current working directory. Note that the retrieve_pdb_file method also
has an optional argument pdir that specifies a specific directory in
which to store the downloaded PDB files.
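For example (a hedged sketch of the pdir option, reusing the pdbl object created above):
pdbl.retrieve_pdb_file('1FAT', pdir='/tmp/pdb')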
The retrieve_pdb_file method also has some options to specify the
compression format used for the download, and the program used for local
decompression (default .Z format and gunzip). In addition, the PDB
ftp site can be specified upon creation of the PDBList object. By
default, the server of the Worldwide Protein Data Bank
(ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/pdb/) is used.
See the API documentation for more details. Thanks again to Kristian
Rother for donating this module.
Downloading the entire PDB
The following commands will store all PDB files in the /data/pdb
directory:
```
python PDBList.py all /data/pdb
python PDBList.py all /data/pdb -d
```
The API method for this is called download_entire_pdb. Adding the -d
option will store all files in the same directory. Otherwise, they are
sorted into PDB-style subdirectories according to their PDB ID’s.
Depending on the traffic, a complete download will take 2-4 days.
Keeping a local copy of the PDB up to date
This can also be done using the PDBList object. One simply creates a
PDBList object (specifying the directory where the local copy of the
PDB is present) and calls the update_pdb method:
End of explanation |
4,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Set Information
1593 handwritten digits from around 80 persons were scanned and stretched
into a rectangular 16x16 box in a gray scale of 256 values. Then each pixel
of each image was scaled into a boolean (1/0) value using a fixed
threshold.
Each person wrote on a paper all the digits from 0 to 9, twice. The
commitment was to write the digit the first time in the normal way
(trying to write each digit accurately) and the second time in a fast
way (with no accuracy).
The best validation protocol for this dataset seems to be a 5x2 CV, with 50%
for tuning (train + test) and a completely blind 50% for validation.
Step1: Neural Net | Python Code:
import pandas as pd

data = pd.read_csv('data/semeion.csv', sep=",", header=None)
data.head()
data_train = data.sample(frac=0.9, random_state=42)
data_val = data.drop(data_train.index)
df_x_train = data_train.iloc[:,:256]
df_y_train = data_train.iloc[:,256:]
df_x_val = data_val.iloc[:,:256]
df_y_val = data_val.iloc[:,256:]  # keep all 10 one-hot label columns, as for the training set
x_train = df_x_train.values
y_train = df_y_train.values
# y_train = keras.utils.to_categorical(y_train)
x_val = df_x_val.values
y_val = df_y_val.values
# y_val = keras.utils.to_categorical(y_val)
# y_val
Explanation: Data Set Information
1593 handwritten digits from around 80 persons were scanned and stretched
into a rectangular 16x16 box in a gray scale of 256 values. Then each pixel
of each image was scaled into a boolean (1/0) value using a fixed
threshold.
Each person wrote on a paper all the digits from 0 to 9, twice. The
commitment was to write the digit the first time in the normal way
(trying to write each digit accurately) and the second time in a fast
way (with no accuracy).
The best validation protocol for this dataset seems to be a 5x2 CV, with 50%
for tuning (train + test) and a completely blind 50% for validation.
End of explanation
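As a hedged aside, the 5x2 CV protocol mentioned above could be sketched with scikit-learn; this split is only illustrative and is not used by the model below:
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

digit_labels = y_train.argmax(axis=1)  # back from one-hot encoding to digit labels
rskf = RepeatedStratifiedKFold(n_splits=2, n_repeats=5, random_state=0)
for fold, (tune_idx, val_idx) in enumerate(rskf.split(x_train, digit_labels)):
    print("fold %d: %d tune rows, %d validation rows" % (fold, len(tune_idx), len(val_idx)))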
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

hidden1_dim = 12
hidden2_dim = 12
model = Sequential()
model.add(Dense(hidden1_dim, activation='relu', input_shape=(256,)))
model.add(Dropout(0.1))
model.add(Dense(hidden2_dim, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=24,
epochs=100,
verbose=0,
shuffle=True,
validation_split=0.1)
score = model.evaluate(x_val, y_val)[1]
print(score)
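# Hedged follow-up (not part of the original notebook): convert the softmax
# outputs back to digit predictions and compute the validation accuracy directly.
import numpy as np
pred_digits = model.predict(x_val).argmax(axis=1)
true_digits = np.asarray(y_val)
if true_digits.ndim > 1:
    true_digits = true_digits.argmax(axis=1)
print("validation accuracy:", np.mean(pred_digits == true_digits))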
Explanation: Neural Net
End of explanation |
4,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure
Step1: 1. What is the voxelwise threshold?
Step2: 2. Definition of alternative
Detect 1 region
We define a 'success' as a situation in which the maximum in the active field exceeds
the threshold.
Step3: 3. How large must the statistic in a field be to exceed the threshold with power 0.80?
We quantify this by computing the expected local maximum in the field (which is a null field elevated by value D).
We use the distribution of local maxima of Cheng&Schwartzman to compute the power/effect size.
Step4: extra analysis
Step5: 5. From the required voxel statistic to Cohen's D for a given sample size
Step6: The figure per List (Tal or David)
Step7: Print median sample size and power for Neurosynth data
Step8: Compute median of sample sizes over last 5 years, for use in correlation simulation notebook.
Step9: Compute number of single-group studies with sample sizes over 100
Step10: Load neurosynth group data and compute median group size for 2015
Step11: Figure for 90% power | Python Code:
% matplotlib inline
from __future__ import division
import os
import nibabel as nib
import numpy as np
from neuropower import peakdistribution
import scipy.integrate as integrate
import pandas as pd
import matplotlib.pyplot as plt
import palettable.colorbrewer as cb
if not 'FSLDIR' in os.environ.keys():
raise Exception('This notebook requires that FSL is installed and the FSLDIR environment variable is set')
Explanation: Figure: How large should effect sizes be in neuroimaging to have sufficient power?
Specification of alternative
In a brain map in an MNI template, with smoothness of 3 times the voxel size, there is one active region with voxelwise effect size D. The (spatial) size of the region is relatively small (<200 voxels). We want to know how large D should be in order to have 80% power to detect the region using voxelwise FWE thresholding based on Random Field Theory.
Detecting the region means that the maximum in the activated area exceeds the significance threshold.
Strategy
Compute the voxelwise threshold for the specified smoothness and volume
FweThres = 5.12
Define the alternative hypothesis, so that the omnibus power is 80%
How large should the maximum statistic in a (small) region be to exceed the voxelwise threshold with 0.8 power?
muMax = 4.00
How does this voxel statistic translate to Cohen's D for a given sample size?
See Figure
End of explanation
# From smoothness + mask to ReselCount
FWHM = 3
ReselSize = FWHM**3
MNI_mask = nib.load(os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain_mask.nii.gz')).get_data()
Volume = np.sum(MNI_mask)
ReselCount = Volume/ReselSize
print("ReselSize: "+str(ReselSize))
print("Volume: "+str(Volume))
print("ReselCount: "+str(ReselCount))
print("------------")
# From ReselCount to FWE treshold
FweThres_cmd = 'ptoz 0.05 -g %s' %ReselCount
FweThres = os.popen(FweThres_cmd).read()
print("FWE voxelwise GRF threshold: "+str(FweThres))
Explanation: 1. What is the voxelwise threshold?
End of explanation
Power = 0.8
Explanation: 2. Definition of alternative
Detect 1 region
We define a 'success' as a situation in which the maximum in the active field exceeds
the threshold.
End of explanation
muRange = np.arange(1.8,5,0.01)
muSingle = []
for muMax in muRange:
# what is the power to detect a maximum
power = 1-integrate.quad(lambda x:peakdistribution.peakdens3D(x,1),-20,float(FweThres)-muMax)[0]
if power>Power:
muSingle.append(muMax)
break
print("The power is sufficient for one region if mu equals: "+str(muSingle[0]))
Explanation: 3. How large must the statistic in a field be to exceed the threshold with power 0.80?
We quantify this by computing the expected local maximum in the field (which is a null field elevated by value D).
We use the distribution of local maxima of Cheng&Schwartzman to compute the power/effect size.
End of explanation
muRange = np.arange(1.8,5,0.01)
muNinety = []
for muMax in muRange:
# what is the power to detect a maximum
power = 1-integrate.quad(lambda x:peakdistribution.peakdens3D(x,1),-20,float(FweThres)-muMax)[0]
if power>0.90:
muNinety.append(muMax)
break
print("The power is sufficient for one region if mu equals: "+str(muNinety[0]))
Explanation: extra analysis: power 0.90
End of explanation
# Read in data
Data = pd.read_csv("../SampleSize/neurosynth_study_data.txt",sep=" ",header=None,names=['year','n'])
Data['source']='Tal'
Data=Data[Data.year!=1997] #remove year with 1 entry
David = pd.read_csv("../SampleSize/david_sampsizedata.txt",sep=" ",header=None,names=['year','n'])
David['source']='David'
Data=Data.append(David)
# add detectable effect
Data['deltaSingle']=muSingle[0]/np.sqrt(Data['n'])
Data['deltaNinety']=muNinety[0]/np.sqrt(Data['n'])
# add jitter for figure
stdev = 0.01*(max(Data.year)-min(Data.year))
Data['year_jitter'] = Data.year+np.random.randn(len(Data))*stdev
# Compute medians per year (for smoother)
Medians = pd.DataFrame({'year':
np.arange(start=np.min(Data.year),stop=np.max(Data.year)+1),
'TalMdSS':'nan',
'DavidMdSS':'nan',
'TalMdDSingle':'nan',
'DavidMdDSingle':'nan',
'TalMdDNinety':'nan',
'DavidMdDNinety':'nan',
'MdSS':'nan',
'DSingle':'nan',
'DNinety':'nan'
})
for yearInd in (range(len(Medians))):
# Compute medians for Tal's data
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year==Medians.year[yearInd])])
Medians.TalMdSS[yearInd] = np.median(Data.n[yearBoolTal])
Medians.TalMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolTal])
Medians.TalMdDNinety[yearInd] = np.median(Data.deltaNinety[yearBoolTal])
# Compute medians for David's data
yearBoolDavid = np.array([a and b for a,b in zip(Data.source=="David",Data.year==Medians.year[yearInd])])
Medians.DavidMdSS[yearInd] = np.median(Data.n[yearBoolDavid])
Medians.DavidMdDSingle[yearInd] = np.median(Data.deltaSingle[yearBoolDavid])
Medians.DavidMdDNinety[yearInd] = np.median(Data.deltaNinety[yearBoolDavid])
# Compute medians for all data
yearBool = np.array(Data.year==Medians.year[yearInd])
Medians.MdSS[yearInd] = np.median(Data.n[yearBool])
Medians.DSingle[yearInd] = np.median(Data.deltaSingle[yearBool])
Medians.DNinety[yearInd] = np.median(Data.deltaNinety[yearBool])
Medians[0:5]
# add logscale
Medians['MdSSLog'] = [np.log(x) for x in Medians.MdSS]
Medians['TalMdSSLog'] = [np.log(x) for x in Medians.TalMdSS]
Medians['DavidMdSSLog'] = [np.log(x) for x in Medians.DavidMdSS]
Data['nLog']= [np.log(x) for x in Data.n]
Explanation: 5. From the required voxel statistic to Cohen's D for a given sample size
End of explanation
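The same relation can be read in reverse; a hedged illustration of the sample size needed to reach a target effect size with 80% power, using the muSingle value derived above:
import numpy as np
for D in [0.2, 0.5, 0.8]:
    n_required = (muSingle[0] / D) ** 2
    print("D = %.1f -> about %d subjects needed" % (D, int(np.ceil(n_required))))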
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter[Data.source=="Tal"],Data['nLog'][Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Data.year_jitter[Data.source=="David"],Data['nLog'][Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.TalMdSSLog,color=twocol[1],lw=3,label="Neurosynth")
axs[0].plot(Medians.year,Medians.DavidMdSSLog,color=twocol[3],lw=3,label="David et al.")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,8])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("Median Sample Size")
axs[0].legend(loc="upper left",frameon=False)
#labels=[1,5,10,20,50,150,500,1000,3000]
labels=[1,4,16,64,256,1024,3000]
axs[0].set_yticks(np.log(labels))
axs[0].set_yticklabels(labels)
axs[1].plot(Data.year_jitter[Data.source=="Tal"],Data.deltaSingle[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter[Data.source=="David"],Data.deltaSingle[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[1].plot(Medians.year,Medians.TalMdDSingle,color=twocol[1],lw=3,label="Neurosynth")
axs[1].plot(Medians.year,Medians.DavidMdDSingle,color=twocol[3],lw=3,label="David et al.")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size with 80% power")
axs[1].legend(loc="upper right",frameon=False)
plt.savefig('Figure1.svg',dpi=600)
plt.show()
Explanation: The figure per List (Tal or David)
End of explanation
Medians.loc[Medians.year>2010, lambda df: ['year', 'TalMdSS', 'TalMdDSingle']]
Explanation: Print median sample size and power for Neurosynth data
End of explanation
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year>2010)])
print('Median sample size (2011-2015):',np.median(Data.n[yearBoolTal]))
Explanation: Compute median of sample sizes over last 5 years, for use in correlation simulation notebook.
End of explanation
for year in range(2011,2016):
bigstudyBoolTal = np.array([a and b and c for a,b,c in zip(Data.source=="Tal",Data.n>100,Data.year==year)])
yearBoolTal = np.array([a and b for a,b in zip(Data.source=="Tal",Data.year==year)])
print(year,np.sum(yearBoolTal),np.sum(Data[bigstudyBoolTal].n>99))
Explanation: Compute number of single-group studies with sample sizes over 100
End of explanation
groupData = pd.read_csv("../SampleSize/neurosynth_group_data.txt",sep=" ",header=None,names=['year','n'])
yearBoolGroup=np.array([a for a in groupData.year==year])
print('%d groups found in 2015 (over %d studies)'%(np.sum(yearBoolGroup),np.unique(groupData[yearBoolGroup].index).shape[0]))
print('median group size in 2015: %f'%np.median(groupData[yearBoolGroup].n))
Explanation: Load neurosynth group data and compute median group size for 2015
End of explanation
twocol = cb.qualitative.Paired_12.mpl_colors
fig,axs = plt.subplots(1,2,figsize=(12,5))
fig.subplots_adjust(hspace=.5,wspace=.3)
axs=axs.ravel()
axs[0].plot(Data.year_jitter[Data.source=="Tal"],Data['nLog'][Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[0].plot(Data.year_jitter[Data.source=="David"],Data['nLog'][Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[0].plot(Medians.year,Medians.TalMdSSLog,color=twocol[1],lw=3,label="Neurosynth")
axs[0].plot(Medians.year,Medians.DavidMdSSLog,color=twocol[3],lw=3,label="David et al.")
axs[0].set_xlim([1993,2016])
axs[0].set_ylim([0,8])
axs[0].set_xlabel("Year")
axs[0].set_ylabel("Median Sample Size")
axs[0].legend(loc="upper left",frameon=False)
#labels=[1,5,10,20,50,150,500,1000,3000]
labels=[1,4,16,64,256,1024,3000]
axs[0].set_yticks(np.log(labels))
axs[0].set_yticklabels(labels)
axs[1].plot(Data.year_jitter[Data.source=="Tal"],Data.deltaNinety[Data.source=="Tal"],"r.",color=twocol[0],alpha=0.5,label="")
axs[1].plot(Data.year_jitter[Data.source=="David"],Data.deltaNinety[Data.source=="David"],"r.",color=twocol[2],alpha=0.5,label="")
axs[1].plot(Medians.year,Medians.TalMdDSingle,color="grey",lw=3,label="80% power")
axs[1].plot(Medians.year,Medians.DavidMdDSingle,color="grey",lw=3,label="")
axs[1].plot(Medians.year,Medians.TalMdDNinety,color=twocol[1],lw=3,label="Neurosynth, 90% power")
axs[1].plot(Medians.year,Medians.DavidMdDNinety,color=twocol[3],lw=3,label="David et al., 90% power")
axs[1].set_xlim([1993,2016])
axs[1].set_ylim([0,3])
axs[1].set_xlabel("Year")
axs[1].set_ylabel("Effect Size with 80% power")
axs[1].legend(loc="upper right",frameon=False)
plt.savefig('Figure_8090.svg',dpi=600)
plt.show()
Explanation: Figure for 90% power
End of explanation |
4,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.data - Classification, régression, anomalies - correction
The Wine Quality Data Set contains 5000 wines described by their chemical characteristics and rated by an expert. Can a machine learning model come close to the expert's ratings?
Step1: The data
They can be retrieved from github...data_2a.
Step2: There are few very bad or very good wines. We split the data into train / test sets, which will necessarily make their prediction difficult
Step3: With a bit of luck, the extreme ratings are present in both the training and the test sets, but a single rating has little influence on a model. To ensure a better train / test split, we can make sure that every rating is well represented on each side. We use the stratify parameter.
Step4: The distribution of ratings across train / test is more uniform.
First model
Step5: One column is not numeric. We use a OneHotEncoder.
Step6: The matrix is sparse.
Step7: Then these two columns must be merged back with the data, or only one of them since they are correlated. Or we write a pipeline...
Step8: A few bugs remain. We add a classifier.
Step9: Not great.
Step10: Much better.
ROC curve for each class
Step11: These numbers may look high, but they are still not great.
Step12: It is not very pretty...
Step13: But it means that for a high score, the rate of correct classification improves.
Step14: The small classes have disappeared
Step15: The previous results are not conclusive. We could change the anomaly detection model, but the conclusions stay the same. The anomaly score is not related to the prediction score.
Step16: It looks nice but the two scores have nothing to do with each other. And that was predictable, because the prediction model we use is perfectly capable of predicting what an anomaly is.
Step17: The anomaly model therefore brings no new information. This means that the initial predictive model would not improve its predictions by using the anomaly score. So there is no chance that the errors or the prediction scores are correlated with the anomaly score in one way or another.
Confidence score for a regression
Step18: Not great. But...
Step19: As expected, the model does not err more in one direction than in the other. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.data - Classification, régression, anomalies - correction
The Wine Quality Data Set contains 5000 wines described by their chemical characteristics and rated by an expert. Can a machine learning model come close to the expert's ratings?
End of explanation
from ensae_teaching_cs.data import wines_quality
from pandas import read_csv
df = read_csv(wines_quality(local=True, filename=True))
df.head()
ax = df['quality'].hist(bins=16)
ax.set_title("Distribution des notes");
Explanation: The data
They can be retrieved from github...data_2a.
End of explanation
from sklearn.model_selection import train_test_split
X = df.drop("quality", axis=1)
y = df["quality"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
X_train.shape, X_test.shape
from pandas import DataFrame
def distribution(y_train, y_test):
df_train = DataFrame(dict(color=y_train))
df_test = DataFrame(dict(color=y_test))
df_train['ctrain'] = 1
df_test['ctest'] = 1
h_train = df_train.groupby('color').count()
h_test = df_test.groupby('color').count()
merge = h_train.join(h_test, how='outer')
merge["ratio"] = merge.ctest / merge.ctrain
return merge
distribution(y_train, y_test)
ax = y_train.hist(bins=24, label="train", align="right")
y_test.hist(bins=24, label="test", ax=ax, align="left")
ax.set_title("Distribution des notes")
ax.legend();
Explanation: There are few very bad or very good wines. We split the data into train / test sets, which will necessarily make their prediction difficult: a model more or less reproduces what it sees.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.5)
X_train.shape, X_test.shape
ax = y_train.hist(bins=24, label="train", align="right")
y_test.hist(bins=24, label="test", ax=ax, align="left")
ax.set_title("Distribution des notes - statifiée")
ax.legend();
distribution(y_train, y_test)
Explanation: With a bit of luck, the extreme ratings are present in both the training and the test sets, but a single rating has little influence on a model. To ensure a better train / test split, we can make sure that every rating is well represented on each side. We use the stratify parameter.
End of explanation
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
try:
logreg.fit(X_train, y_train)
except Exception as e:
print(e)
Explanation: The distribution of ratings across train / test is more uniform.
First model
End of explanation
from sklearn.preprocessing import OneHotEncoder
one = OneHotEncoder()
one.fit(X_train[['color']])
tr = one.transform(X_test[["color"]])
tr
Explanation: One column is not numeric. We use a OneHotEncoder.
End of explanation
tr.todense()[:5]
Explanation: The matrix is sparse.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
numeric_features = [c for c in X_train if c != 'color']
pipe = Pipeline([
("prep", ColumnTransformer([
("color", Pipeline([
('one', OneHotEncoder()),
('select', ColumnTransformer([('sel1', 'passthrough', [0])]))
]), ['color']),
("others", "passthrough", numeric_features)
])),
])
pipe.fit(X_train)
pipe.transform(X_test)[:2]
from jyquickhelper import RenderJsDot
from mlinsights.plotting import pipeline2dot
dot = pipeline2dot(pipe, X_train)
RenderJsDot(dot)
Explanation: Then these two columns must be merged back with the data, or only one of them since they are correlated. Or we write a pipeline...
End of explanation
pipe = Pipeline([
("prep", ColumnTransformer([
("color", Pipeline([
('one', OneHotEncoder()),
('select', ColumnTransformer([('sel1', 'passthrough', [0])]))
]), ['color']),
("others", "passthrough", numeric_features)
])),
("lr", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
from sklearn.metrics import classification_report
print(classification_report(y_test, pipe.predict(X_test)))
Explanation: A few bugs remain. We add a classifier.
End of explanation
from sklearn.ensemble import RandomForestClassifier
pipe = Pipeline([
("prep", ColumnTransformer([
("color", Pipeline([
('one', OneHotEncoder()),
('select', ColumnTransformer([('sel1', 'passthrough', [0])]))
]), ['color']),
("others", "passthrough", numeric_features)
])),
("lr", RandomForestClassifier()),
])
pipe.fit(X_train, y_train)
print(classification_report(y_test, pipe.predict(X_test)))
Explanation: Not great.
End of explanation
from sklearn.metrics import roc_curve, auc
labels = pipe.steps[1][1].classes_
y_score = pipe.predict_proba(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
for i, cl in enumerate(labels):
fpr[cl], tpr[cl], _ = roc_curve(y_test == cl, y_score[:, i])
roc_auc[cl] = auc(fpr[cl], tpr[cl])
fig, ax = plt.subplots(1, 1, figsize=(8,4))
for k in roc_auc:
ax.plot(fpr[k], tpr[k], label="c%d = %1.2f" % (k, roc_auc[k]))
ax.legend();
Explanation: Much better.
ROC curve for each class
End of explanation
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, pipe.predict(X_test), labels=labels)
Explanation: These numbers may look high, but they are still not great.
End of explanation
def confusion_matrix_df(y_test, y_true):
conf = confusion_matrix(y_test, y_true)
labels = list(sorted(set(y_test)))
df = DataFrame(conf, columns=labels)
df.set_index(labels)
return df
confusion_matrix_df(y_test, pipe.predict(X_test))
Explanation: It is not very pretty...
End of explanation
import numpy
ind = numpy.max(pipe.predict_proba(X_test), axis=1) >= 0.6
confusion_matrix_df(y_test[ind], pipe.predict(X_test)[ind])
Explanation: But it means that for a high score, the rate of correct classification improves.
End of explanation
from sklearn.covariance import EllipticEnvelope
one = Pipeline([
("prep", ColumnTransformer([
("color", Pipeline([
('one', OneHotEncoder()),
('select', ColumnTransformer([('sel1', 'passthrough', [0])]))
]), ['color']),
("others", "passthrough", numeric_features)
])),
("lr", EllipticEnvelope()),
])
one.fit(X_train)
ano = one.predict(X_test)
ano
from pandas import DataFrame
df = DataFrame(dict(note=y_test, ano=one.decision_function(X_test),
pred=pipe.predict(X_test),
errors=y_test == pipe.predict(X_test),
proba_max=numpy.max(pipe.predict_proba(X_test), axis=1),
))
df["anoclip"] = df.ano.apply(lambda x: max(x, -200))
df.head()
import seaborn
seaborn.lmplot(x="anoclip", y="proba_max", hue="errors",
truncate=True, height=5, data=df,
logx=True, fit_reg=False);
df.corr()
Explanation: The small classes have disappeared: the model is not at all confident for ratings 3, 4 and 9. We also see that the model is often off by one rating, so it would probably be wiser to switch to a regression model rather than a classification one. However, a regression model does not provide a confidence score. It might be possible to build one with an anomaly detection model...
Anomalies
An anomaly is an outlier. This amounts to saying that the probability of such an event happening again is low. A well-known model is EllipticEnvelope. We assume that if this model detects an anomaly, a prediction model will have a harder time predicting it. We reuse the previous pipeline, changing only the last step.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
df.proba_max.hist(ax=ax[0], bins=50)
df.anoclip.hist(ax=ax[1], bins=50)
ax[0].set_title("Distribution du score de classification")
ax[1].set_title("Distribution du score d'anomalie");
Explanation: The previous results are not conclusive. We could change the anomaly detection model, but the conclusions stay the same. The anomaly score is not related to the prediction score.
End of explanation
pipe_ano = Pipeline([
("prep", ColumnTransformer([
("color", Pipeline([
('one', OneHotEncoder()),
('select', ColumnTransformer([('sel1', 'passthrough', [0])]))
]), ['color']),
("others", "passthrough", numeric_features)
])),
("lr", RandomForestClassifier()),
])
pipe_ano.fit(X_train, one.predict(X_train))
confusion_matrix_df(one.predict(X_test), pipe_ano.predict(X_test))
Explanation: It looks nice but the two scores have nothing to do with each other. And that was predictable, because the prediction model we use is perfectly capable of predicting what an anomaly is.
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
pipe_reg = Pipeline([
("prep", ColumnTransformer([
("color", Pipeline([
('one', OneHotEncoder()),
('select', ColumnTransformer([('sel1', 'passthrough', [0])]))
]), ['color']),
("others", "passthrough", numeric_features)
])),
("lr", RandomForestRegressor()),
])
pipe_reg.fit(X_train, y_train)
r2_score(y_test, pipe_reg.predict(X_test))
Explanation: The anomaly model therefore brings no new information. This means that the initial predictive model would not improve its predictions by using the anomaly score. So there is no chance that the errors or the prediction scores are correlated with the anomaly score in one way or another.
Confidence score for a regression
End of explanation
error = y_test - pipe_reg.predict(X_test)
score = numpy.max(pipe.predict_proba(X_test), axis=1)
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
seaborn.kdeplot(score, error, ax=ax[1])
ax[1].set_ylim([-1.5, 1.5])
ax[1].set_title("Densité")
ax[0].plot(score, error, ".")
ax[0].set_xlabel("score de confiance du classifieur")
ax[0].set_ylabel("Erreur de prédiction")
ax[0].set_title("Lien entre classification et prédiction");
Explanation: Not great. But...
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
seaborn.kdeplot(score, error.abs(), ax=ax[1])
ax[1].set_ylim([0, 1.5])
ax[1].set_title("Densité")
ax[0].plot(score, error.abs(), ".")
ax[0].set_xlabel("score de confiance du classifieur")
ax[0].set_ylabel("Erreur de prédiction")
ax[0].set_title("Lien entre classification et prédiction");
Explanation: As expected, the model does not err more in one direction than in the other.
End of explanation |
4,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Descent Method - Basics
The aim of this notebook is to showcase how one can build and fit SDMs to images using Menpo.
Note that this notebook assumes that the user has previously gone through the AAMs Basics notebook and he/she is already familiar with the basics of Menpo's Deformable Model Fitting framework explained in there.
1. Loading data
In this notebook, we will use the training and test sets of the LFPW database for the training and fitting SDMs.
Note that the necessary steps required for acquiring the LFPW database are explained in detail in the AAMs Basics notebook and the user is simply referred to that notebook for this matter.
Step1: 2. Train a SDM with default parameters
Training an SDM using Menpo is rather straightforward and can be done using a single line of code.
As expected, the SDM training takes some time.
Step2: 3. Fit the previous SDM
Let's try fitting the SDM to some images of the LFPW database test set!!!
Step3: Note that for the purpose of this simple fitting demonstration we will just fit the first 5 images of the LFPW test set.
Fitting a SDM to an image is as simple as calling its fit method | Python Code:
%matplotlib inline
from pathlib import Path
path_to_lfpw = Path('/vol/atlas/databases/lfpw')
path_to_lfpw = Path('/home/nontas/Dropbox/lfpw/')
import menpo.io as mio
training_images = []
# load landmarked images
for i in mio.import_images(path_to_lfpw / 'trainset', verbose=True):
# crop image
i = i.crop_to_landmarks_proportion(0.1)
# convert it to grayscale if needed
if i.n_channels == 3:
i = i.as_greyscale(mode='luminosity')
# append it to the list
training_images.append(i)
from menpowidgets import visualize_images
visualize_images(training_images)
Explanation: Supervised Descent Method - Basics
The aim of this notebook is to showcase how one can build and fit SDMs to images using Menpo.
Note that this notebook assumes that the user has previously gone through the AAMs Basics notebook and he/she is already familiar with the basics of Menpo's Deformable Model Fitting framework explained in there.
1. Loading data
In this notebook, we will use the training and test sets of the LFPW database for the training and fitting SDMs.
Note that the necessary steps required for acquiring the LFPW database are explained in detail in the AAMs Basics notebook and the user is simply referred to that notebook for this matter.
End of explanation
from menpofit.sdm import RegularizedSDM
# Note that we use fast dense sift features
# and thus cyvlfeat must be installed (use conda)
from menpo.feature import vector_128_dsift
fitter = RegularizedSDM(
training_images,
verbose=True,
group='PTS',
diagonal=200,
n_perturbations=30,
n_iterations=2,
patch_features=vector_128_dsift,
patch_shape=(24, 24),
alpha=10
)
print(fitter)
Explanation: 2. Train a SDM with default parameters
Training an SDM using Menpo is rather straightforward and can be done using a single line of code.
As expected, the SDM training takes some time.
End of explanation
import menpo.io as mio
# load test images
test_images = []
for i in mio.import_images(path_to_lfpw / 'testset' / '*.png', max_images=5, verbose=True):
# crop image
i = i.crop_to_landmarks_proportion(0.5)
# convert it to grayscale if needed
if i.n_channels == 3:
i = i.as_greyscale(mode='luminosity')
# append it to the list
test_images.append(i)
Explanation: 3. Fit the previous SDM
Let's try fitting the SDM to some images of the LFPW database test set!!!
End of explanation
from menpofit.fitter import noisy_shape_from_bounding_box
fitting_results = []
for i in test_images:
gt_s = i.landmarks['PTS'].lms
# generate perturbed landmarks
bb = noisy_shape_from_bounding_box(fitter.reference_shape.bounding_box(),
gt_s.bounding_box())
# fit image
fr = fitter.fit_from_bb(i, bb, gt_shape=gt_s)
fitting_results.append(fr)
# print fitting error
print(fr)
from menpowidgets import visualize_fitting_result
visualize_fitting_result(fitting_results)
Explanation: Note that for the purpose of this simple fitting demonstration we will just fit the first 5 images of the LFPW test set.
Fitting an SDM to an image is as simple as calling its fit method:
End of explanation |
4,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
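To make the resize-then-convolve idea concrete, here is a minimal sketch using the same TensorFlow calls this notebook relies on (tf.image.resize_nearest_neighbor and tf.layers.conv2d); the tensor name encoded and the depth of 8 are just placeholders for the 4x4x8 encoder output described above.
# upsample 4x4x8 -> 7x7x8 with nearest-neighbor interpolation, then convolve
upsampled = tf.image.resize_nearest_neighbor(encoded, (7, 7))
conv = tf.layers.conv2d(upsampled, filters=8, kernel_size=(3, 3),
                        padding='same', activation=tf.nn.relu)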
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
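As a quick numpy sketch of that noise step (imgs is assumed to be a batch of images already scaled to [0, 1]; 0.5 matches the noise_factor used in the training loop later in this notebook):
noise_factor = 0.5
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)  # add Gaussian noise
noisy_imgs = np.clip(noisy_imgs, 0., 1.)                         # clip back to [0, 1]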
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.01
inputs_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name='inputs')
targets_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2, 'same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2, 'same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, 2, 2, 'same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, filters=8, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), strides=(1, 1), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 500
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}\r".format(batch_cost), end='')
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.01
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2, 'same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2, 'same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, 2, 2, 'same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, filters=16, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, filters=32, kernel_size=(3,3), strides=(1, 1), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=(3,3), strides=(1, 1), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 500
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}\r".format(batch_cost), end='')
print()
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
4,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom training and batch prediction
<table align="left">
<td>
<a href="https
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the pillow library for loading images.
Step3: Install the numpy library for manipulation of image data.
Step4: Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
Step5: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Otherwise, set your project ID here.
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
Step13: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step14: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
Step15: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard
Step16: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image
Step17: Training script
In the next cell, you will write the contents of the training script, task.py. In summary
Step18: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters
Step19: Make a batch prediction request
Send a batch prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset
Step20: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model, defined in task.py, expects.
x_test
Step21: Prepare data for batch prediction
Before you can run the data through batch prediction, you need to save the data into one of a few possible formats.
For this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. To do this
Step22: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters
Step23: Retrieve batch prediction results
When the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.
Let's display the contents. You will get a row for each prediction. The row is the softmax probability distribution for the corresponding CIFAR10 classes.
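For orientation, each line in that results file is one JSON object; a line might look roughly like the following (the class probabilities shown are made-up illustrative values, and the evaluation code in this tutorial relies only on the "prediction" key):
{"instance": [...truncated image pixels...], "prediction": [0.02, 0.01, 0.05, 0.70, 0.02, 0.10, 0.03, 0.04, 0.02, 0.01]}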
Step24: Evaluate results
You can then run a quick evaluation on the prediction results
Step25: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Custom training and batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction.
Dataset
The dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.
The steps performed include:
Create a Vertex AI custom job for training a model.
Train a TensorFlow model.
Make a batch prediction.
Cleanup resources.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest (preview) version of Vertex SDK for Python.
End of explanation
! pip install {USER_FLAG} --upgrade google-cloud-storage
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip install {USER_FLAG} --upgrade pillow
Explanation: Install the pillow library for loading images.
End of explanation
! pip install {USER_FLAG} --upgrade numpy
Explanation: Install the numpy library for manipulation of image data.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed everything, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
if not os.getenv("IS_TESTING"):
# Get your Google Cloud project ID from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import gapic as aip
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import Vertex SDK for Python
Import the Vertex SDK for Python into your Python environment and initialize it.
End of explanation
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
See the locations where accelerators are available.
Otherwise specify (None, None) to use a container image to run on a CPU.
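For example, a CPU-only configuration following the (None, None) convention just described would be a sketch like:
TRAIN_GPU, TRAIN_NGPU = (None, None)
DEPLOY_GPU, DEPLOY_NGPU = (None, None)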
Note: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.
End of explanation
TRAIN_VERSION = "tf-gpu.2-1"
DEPLOY_VERSION = "tf2-gpu.2-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction
End of explanation
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
Explanation: Tutorial
Now you are ready to start creating your own custom-trained model with CIFAR10.
Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Define the command args for the training script
Prepare the command-line arguments to pass to your training script.
- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
%%writefile task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
MODEL_DIR = os.getenv("AIP_MODEL_DIR")
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(MODEL_DIR)
Explanation: Training script
In the next cell, you will write the contents of the training script, task.py. In summary:
Get the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. This variable is set by the training service.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(MODEL_DIR)) to the specified model directory.
End of explanation
job = aiplatform.CustomTrainingJob(
display_name=JOB_NAME,
script_path="task.py",
container_uri=TRAIN_IMAGE,
requirements=["tensorflow_datasets==1.3.0"],
model_serving_container_image_uri=DEPLOY_IMAGE,
)
MODEL_DISPLAY_NAME = "cifar10-" + TIMESTAMP
# Start the training
if TRAIN_GPU:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
)
else:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_count=0,
)
Explanation: Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters:
display_name: The user-defined name of this training pipeline.
script_path: The local path to the training script.
container_uri: The URI of the training container image.
requirements: The list of Python package dependencies of the script.
model_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.
Use the run function to start training, which takes the following parameters:
args: The command line arguments to be passed to the Python script.
replica_count: The number of worker replicas.
model_display_name: The display name of the Model if the script produces a managed Model.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.
End of explanation
# Download the images
! gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .
Explanation: Make a batch prediction request
Send a batch prediction request to your deployed model.
Get test data
Download images from the CIFAR dataset and preprocess them.
Download the test images
Download the provided set of images from the CIFAR dataset:
End of explanation
import numpy as np
from PIL import Image
# Load image data
IMAGE_DIRECTORY = "cifar_test_images"
image_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(".jpg")]
# Decode JPEG images into numpy arrays
image_data = [
np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files
]
# Scale and convert to expected format
x_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]
# Extract labels from image name
y_test = [int(file.split("_")[1]) for file in image_files]
Explanation: Preprocess the images
Before you can run the data through the endpoint, you need to preprocess it to match the format that your custom model, defined in task.py, expects.
x_test:
Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:
You can extract the labels from the image filenames. Each image's filename format is "image_{LABEL}_{IMAGE_NUMBER}.jpg"
End of explanation
import json
BATCH_PREDICTION_INSTANCES_FILE = "batch_prediction_instances.jsonl"
BATCH_PREDICTION_GCS_SOURCE = (
BUCKET_NAME + "/batch_prediction_instances/" + BATCH_PREDICTION_INSTANCES_FILE
)
# Write instances at JSONL
with open(BATCH_PREDICTION_INSTANCES_FILE, "w") as f:
for x in x_test:
f.write(json.dumps(x) + "\n")
# Upload to Cloud Storage bucket
! gsutil cp $BATCH_PREDICTION_INSTANCES_FILE $BATCH_PREDICTION_GCS_SOURCE
print("Uploaded instances to: ", BATCH_PREDICTION_GCS_SOURCE)
Explanation: Prepare data for batch prediction
Before you can run the data through batch prediction, you need to save the data into one of a few possible formats.
For this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. To do this:
In a file, write each instance as JSON on its own line.
Upload this file to Cloud Storage.
For more details on batch prediction input formats: https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#batch_request_input
End of explanation
MIN_NODES = 1
MAX_NODES = 1
# The name of the job
BATCH_PREDICTION_JOB_NAME = "cifar10_batch-" + TIMESTAMP
# Folder in the bucket to write results to
DESTINATION_FOLDER = "batch_prediction_results"
# The Cloud Storage bucket to upload results to
BATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + "/" + DESTINATION_FOLDER
# Make SDK batch_predict method call
batch_prediction_job = model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name=BATCH_PREDICTION_JOB_NAME,
gcs_source=BATCH_PREDICTION_GCS_SOURCE,
gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
)
Explanation: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- predictions_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for training.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
Compute instance scaling
You can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.
If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
End of explanation
RESULTS_DIRECTORY = "prediction_results"
RESULTS_DIRECTORY_FULL = RESULTS_DIRECTORY + "/" + DESTINATION_FOLDER
# Create missing directories
os.makedirs(RESULTS_DIRECTORY, exist_ok=True)
# Get the Cloud Storage paths for each result
! gsutil -m cp -r $BATCH_PREDICTION_GCS_DEST_PREFIX $RESULTS_DIRECTORY
# Get most recently modified directory
latest_directory = max(
[
os.path.join(RESULTS_DIRECTORY_FULL, d)
for d in os.listdir(RESULTS_DIRECTORY_FULL)
],
key=os.path.getmtime,
)
# Get downloaded results in directory
results_files = []
for dirpath, subdirs, files in os.walk(latest_directory):
for file in files:
if file.startswith("prediction.results"):
results_files.append(os.path.join(dirpath, file))
# Consolidate all the results into a list
results = []
for results_file in results_files:
# Download each result
with open(results_file, "r") as file:
results.extend([json.loads(line) for line in file.readlines()])
Explanation: Retrieve batch prediction results
When the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.
Let's display the contents. You will get a row for each prediction. The row is the softmax probability distribution for the corresponding CIFAR10 classes.
End of explanation
y_predicted = [np.argmax(result["prediction"]) for result in results]
correct = sum(y_predicted == np.array(y_test))
total = len(y_predicted)
print(
    f"Correct predictions = {correct}, Total predictions = {total}, Accuracy = {correct/total}"
)
Explanation: Evaluate results
You can then run a quick evaluation on the prediction results:
np.argmax: Convert each list of confidence levels to a label
Compare the predicted labels to the actual labels
Calculate accuracy as correct/total
To improve the accuracy, try training for a higher number of epochs.
End of explanation
delete_training_job = True
delete_model = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete the training job
job.delete()
# Delete the model
model.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Job
Model
Cloud Storage Bucket
End of explanation |
4,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: 1. Create and run the synthetic example of NST
First, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a synthetic grid, defining the location of each node and link.
Step2: 2. Create and run an example of NST using a shapefile to define the network
First, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a grid based on a polyline shapefile.
Step3: 3. Options for link color and link line widths
The dictionary below (link_color_options) outlines 4 examples of link color and line width choices
Step4: Below, we implement these 4 plotting options, first for the synthetic network, and then for the shapefile-delineated network
Step5: In addition to plotting link coloring using an existing link attribute, we can pass any array of size link. In this example, we color links using an array of random values.
Step6: 4. Options for parcel color
The dictionary below (parcel_color_options) outlines 4 examples of parcel color choices
Step7: 5. Options for parcel size
The dictionary below (parcel_size_options) outlines 4 examples of parcel size choices
Step8: 6. Plotting a subset of the parcels
In some cases, we might want to plot only a subset of the parcels on the network. Below, we plot every 50th parcel in the DataRecord.
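A minimal sketch of how this can be done, assuming plot_network_and_parcels accepts a boolean parcel_filter array with one entry per parcel (check the keyword name against your Landlab version); grid2 and parcels2 refer to the shapefile-based example in this notebook:
parcel_filter = np.zeros((parcels2.dataset.dims["item_id"],), dtype=bool)
parcel_filter[::50] = True  # keep every 50th parcel
fig = plot_network_and_parcels(grid2, parcels2, parcel_time_index=0,
                               parcel_filter=parcel_filter)
plt.show()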
Step9: 7. Select the parcel timestep to be plotted
As a default, plot_network_and_parcels plots parcel positions for the last timestep of the model run. However, NetworkSedimentTransporter tracks the motion of parcels for all timesteps. We can plot the location of parcels on the link at any timestep using parcel_time_index.
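For example, a sketch plotting an earlier timestep of the shapefile-based run (any valid index into the DataRecord's time dimension can be used):
fig = plot_network_and_parcels(grid2, parcels2, parcel_time_index=2)
plt.show()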
Step10: 8. Combining network and parcel plotting options
Nothing will stop us from making all of the choices at once. | Python Code:
import warnings
warnings.filterwarnings("ignore")
import os
import pathlib
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
from landlab import ExampleData
from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter
from landlab.data_record import DataRecord
from landlab.grid.network import NetworkModelGrid
from landlab.plot import plot_network_and_parcels
from landlab.io import read_shapefile
from matplotlib.colors import Normalize
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Using plotting tools associated with the Landlab NetworkSedimentTransporter component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to plot the results of the NetworkSedimentTransporter Landlab component using the plot_network_and_parcels tool.
In this example we will:
- create a simple instance of the NetworkSedimentTransporter using a synthetic river network
- create a simple instance of the NetworkSedimentTransporter using an input shapefile for the river network
- show options for setting the color and line widths of network links
- show options for setting the color of parcels (marked as dots on the network)
- show options for setting the size of parcels
- show options for plotting a subset of the parcels
- demonstrate changing the timestep plotted
- show an example combining many plotting controls
First, import the necessary libraries:
End of explanation
y_of_node = (0, 100, 200, 200, 300, 400, 400, 125)
x_of_node = (0, 0, 100, -50, -100, 50, -150, -100)
nodes_at_link = ((1, 0), (2, 1), (1, 7), (3, 1), (3, 4), (4, 5), (4, 6))
grid1 = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link)
grid1.at_node["bedrock__elevation"] = [0.0, 0.05, 0.2, 0.1, 0.25, 0.4, 0.8, 0.8]
grid1.at_node["topographic__elevation"] = [0.0, 0.05, 0.2, 0.1, 0.25, 0.4, 0.8, 0.8]
grid1.at_link["flow_depth"] = 2.5 * np.ones(grid1.number_of_links) # m
grid1.at_link["reach_length"] = 200 * np.ones(grid1.number_of_links) # m
grid1.at_link["channel_width"] = 1 * np.ones(grid1.number_of_links) # m
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid1.number_of_links), 30)
element_id = np.expand_dims(element_id, axis=1)
volume = 0.1 * np.ones(np.shape(element_id)) # (m3)
active_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.05 # m
mu = np.log(medianD)
sigma = np.log(2) # assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu, sigma, np.shape(element_id)
) # (m) the diameter of grains in each parcel
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume),
}
items = {"grid_element": "link", "element_id": element_id}
parcels1 = DataRecord(
grid1,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
fd1 = FlowDirectorSteepest(grid1, "topographic__elevation")
fd1.run_one_step()
nst1 = NetworkSedimentTransporter(
grid1,
parcels1,
fd1,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
timesteps = 10 # total number of timesteps
dt = 60 * 60 * 24 * 1 # length of timestep (seconds)
for t in range(0, (timesteps * dt), dt):
nst1.run_one_step(dt)
Explanation: 1. Create and run the synthetic example of NST
First, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a synthetic grid, defining the location of each node and link.
End of explanation
datadir = ExampleData("io/shapefile", case="methow").base
shp_file = datadir / "MethowSubBasin.shp"
points_shapefile = datadir / "MethowSubBasin_Nodes_4.shp"
grid2 = read_shapefile(
shp_file,
points_shapefile=points_shapefile,
node_fields=["usarea_km2", "Elev_m"],
link_fields=["usarea_km2", "Length_m"],
link_field_conversion={
"usarea_km2": "drainage_area",
"Slope": "channel_slope",
"Length_m": "reach_length",
},
node_field_conversion={
"usarea_km2": "drainage_area",
"Elev_m": "topographic__elevation",
},
threshold=0.01,
)
grid2.at_node["bedrock__elevation"] = grid2.at_node["topographic__elevation"].copy()
grid2.at_link["channel_width"] = 1 * np.ones(grid2.number_of_links)
grid2.at_link["flow_depth"] = 0.9 * np.ones(grid2.number_of_links)
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid2.number_of_links), 50)
element_id = np.expand_dims(element_id, axis=1)
volume = 1 * np.ones(np.shape(element_id)) # (m3)
active_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.15 # m
mu = np.log(medianD)
sigma = np.log(2) # assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu, sigma, np.shape(element_id)
) # (m) the diameter of grains in each parcel
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume),
}
items = {"grid_element": "link", "element_id": element_id}
parcels2 = DataRecord(
grid2,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
fd2 = FlowDirectorSteepest(grid2, "topographic__elevation")
fd2.run_one_step()
nst2 = NetworkSedimentTransporter(
grid2,
parcels2,
fd2,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
for t in range(0, (timesteps * dt), dt):
nst2.run_one_step(dt)
Explanation: 2. Create and run an example of NST using a shapefile to define the network
First, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a grid based on a polyline shapefile.
End of explanation
network_norm = Normalize(-1, 6) # see matplotlib.colors.Normalize
link_color_options = [
{}, # empty dictionary = defaults
{
"network_color": "r", # specify some simple modifications.
"network_linewidth": 7,
"parcel_alpha": 0, # make parcels transparent (not visible)
},
{
"link_attribute": "sediment_total_volume", # color links by an existing grid link attribute
"parcel_alpha": 0,
},
{
"link_attribute": "sediment_total_volume",
"network_norm": network_norm, # and normalize color scheme
"link_attribute_title": "Total Sediment Volume", # title on link color legend
"parcel_alpha": 0,
"network_linewidth": 3,
},
]
Explanation: 3. Options for link color and link line widths
The dictionary below (link_color_options) outlines 4 examples of link color and line width choices:
1. The default output of plot_network_and_parcels
2. Some simple modifications: the whole network is red, with a line width of 7, and no parcels.
3. Coloring links by an existing grid link attribute, in this case the total volume of sediment on the link (grid.at_link["sediment_total_volume"], which is created by the NetworkSedimentTransporter)
4. Similar to #3 above, but taking advantage of additional flexibility in plotting
End of explanation
for grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):
for l_opts in link_color_options:
fig = plot_network_and_parcels(grid, parcels, parcel_time_index=0, **l_opts)
plt.show()
Explanation: Below, we implement these 4 plotting options, first for the synthetic network, and then for the shapefile-delineated network:
End of explanation
random_link = np.random.randn(grid2.size("link"))
l_opts = {
"link_attribute": random_link, # use an array of size link
"network_cmap": "jet", # change colormap
"network_norm": network_norm, # and normalize
"link_attribute_title": "A random number",
"parcel_alpha": 0,
"network_linewidth": 3,
}
fig = plot_network_and_parcels(grid2, parcels2, parcel_time_index=0, **l_opts)
plt.show()
Explanation: In addition to plotting link coloring using an existing link attribute, we can pass any array of size link. In this example, we color links using an array of random values.
End of explanation
parcel_color_norm = Normalize(0, 1) # Linear normalization
parcel_color_norm2 = colors.LogNorm(vmin=0.01, vmax=1)
parcel_color_options = [
{}, # empty dictionary = defaults
{"parcel_color": "r", "parcel_size": 10}, # specify some simple modifications.
{
"parcel_color_attribute": "D", # existing parcel attribute.
"parcel_color_norm": parcel_color_norm,
"parcel_color_attribute_title": "Diameter [m]",
"parcel_alpha": 1.0,
},
{
"parcel_color_attribute": "abrasion_rate", # silly example, does not vary in our example
"parcel_color_cmap": "bone",
},
]
for grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):
for pc_opts in parcel_color_options:
fig = plot_network_and_parcels(grid, parcels, parcel_time_index=0, **pc_opts)
plt.show()
Explanation: 4. Options for parcel color
The dictionary below (parcel_color_options) outlines 4 examples of parcel color choices:
1. The default output of plot_network_and_parcels
2. Some simple modifications: all parcels are red, with a parcel size of 10
3. Color parcels by an existing parcel attribute, in this case the sediment diameter of the parcel (parcels1.dataset['D'])
4. Color parcels by an existing parcel attribute, but change the colormap.
End of explanation
parcel_size_norm = Normalize(0, 1)
parcel_size_norm2 = colors.LogNorm(vmin=0.01, vmax=1)
parcel_size_options = [
{}, # empty dictionary = defaults
{"parcel_color": "b", "parcel_size": 10}, # specify some simple modifications.
{
"parcel_size_attribute": "D", # use a parcel attribute.
"parcel_size_norm": parcel_color_norm,
"parcel_size_attribute_title": "Diameter [m]",
"parcel_alpha": 1.0, # default parcel_alpha = 0.5
},
{
"parcel_size_attribute": "D",
"parcel_size_norm": parcel_size_norm2,
"parcel_size_min": 10, # default = 5
"parcel_size_max": 100, # default = 40
"parcel_alpha": 0.1,
},
]
for grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):
for ps_opts in parcel_size_options:
fig = plot_network_and_parcels(grid, parcels, parcel_time_index=0, **ps_opts)
plt.show()
Explanation: 5. Options for parcel size
The dictionary below (parcel_size_options) outlines 4 examples of parcel size choices:
1. The default output of plot_network_and_parcels
2. Set a uniform parcel size and color
3. Size parcels by an existing parcel attribute, in this case the sediment diameter (parcels1.dataset['D']), and making the parcel markers entirely opaque.
4. Normalize parcel size on a logarithmic scale, and change the default maximum and minimum parcel sizes.
End of explanation
parcel_filter = np.zeros((parcels2.dataset.dims["item_id"]), dtype=bool)
parcel_filter[::50] = True
pc_opts = {
"parcel_color_attribute": "D", # a more complex normalization and a parcel filter.
"parcel_color_norm": parcel_color_norm2,
"parcel_color_attribute_title": "Diameter [m]",
"parcel_alpha": 1.0,
"parcel_size": 40,
"parcel_filter": parcel_filter,
}
fig = plot_network_and_parcels(grid2, parcels2, parcel_time_index=0, **pc_opts)
plt.show()
Explanation: 6. Plotting a subset of the parcels
In some cases, we might want to plot only a subset of the parcels on the network. Below, we plot every 50th parcel in the DataRecord.
End of explanation
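If it helps to see another way of building the filter, the short sketch below (added here for illustration, not part of the original tutorial) selects parcels by grain size rather than by position in the DataRecord; the 0.2 m cutoff is an arbitrary assumed value, and any boolean array of length item_id should work the same way.
# Illustrative variation: filter parcels by initial diameter instead of taking every 50th parcel.
coarse_filter = parcels2.dataset["D"].values[:, 0] > 0.2  # assumed 0.2 m cutoff
fig = plot_network_and_parcels(
    grid2,
    parcels2,
    parcel_time_index=0,
    parcel_filter=coarse_filter,
    parcel_color="k",
    parcel_size=20,
)
plt.show()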
parcel_time_options = [0, 4, 7]
for grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):
for pt_opts in parcel_time_options:
fig = plot_network_and_parcels(
grid, parcels, parcel_size=20, parcel_alpha=0.1, parcel_time_index=pt_opts
)
plt.show()
Explanation: 7. Select the parcel timestep to be plotted
As a default, plot_network_and_parcels plots parcel positions for the last timestep of the model run. However, NetworkSedimentTransporter tracks the motion of parcels for all timesteps. We can plot the location of parcels on the link at any timestep using parcel_time_index.
End of explanation
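As a quick sanity check (not shown in the original tutorial), you can inspect how many times the DataRecord has stored before picking parcel_time_index; valid indices run from 0 to the number of recorded times minus one.
# List the model times recorded in the DataRecord for parcels2.
recorded_times = parcels2.dataset["time"].values
print(len(recorded_times), "recorded timesteps")
print(recorded_times[:5])  # first few recorded model times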
parcel_color_norm = colors.LogNorm(vmin=0.01, vmax=1)
parcel_filter = np.zeros((parcels2.dataset.dims["item_id"]), dtype=bool)
parcel_filter[::30] = True
fig = plot_network_and_parcels(
grid2,
parcels2,
parcel_time_index=0,
parcel_filter=parcel_filter,
link_attribute="sediment_total_volume",
network_norm=network_norm,
network_linewidth=4,
network_cmap="bone_r",
parcel_alpha=1.0,
parcel_color_attribute="D",
parcel_color_norm=parcel_color_norm2,
parcel_size_attribute="D",
parcel_size_min=5,
parcel_size_max=150,
parcel_size_norm=parcel_size_norm,
parcel_size_attribute_title="D",
)
Explanation: 8. Combining network and parcel plotting options
Nothing will stop us from making all of the choices at once.
End of explanation |
4,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Join the data for preprocessing
Step1: Create additional features
Step2: Convert "PubDate" into two columns
Step3: More features and gap filling
Below are the results of one day of searching for meaningful patterns in the data. There are a few easily identifiable features, most of which lead to zero popularity. They are
Step4: Filling the gaps in NewsDesk, SectionName and SubsectionName.
Step5: Filling even more gaps with some clustering. I created a TF-IDF matrix and did Ward clustering on words. Four of six clusters I thought were meaningful and fitted well into existing NewsDesk/S(ubs)ectionName.
Step6: You can see what these clusters look like by typing
Step7: Finally, use a few (6) obvious keywords to categorise the data even more. After this, we are left with 950 entries where NewsDesk, SectionName and SubsectionName are NaN, but I didn't have an idea how to deal with them.
Step8: Categorical (factor) columns
First, turn the categorical data into 0/1 binary columns. Yes, it's more painful in Python than in R.
Step9: Recover train and test sets
Step10: Random Forest
Parameters for RF had been optimised with a GridSearchCV function from sklearn.
Step11: Gradient Boosting Method
Step12: Define a function for cross-validating and plotting ensemble of my two models | Python Code:
print('Max train ID: %d. Max test ID: %d' % (np.max(NYT_train_raw['UniqueID']), np.max(NYT_test_raw['UniqueID'])))
joined = NYT_train_raw.merge(NYT_test_raw, how = 'outer')
Explanation: Join the data for preprocessing
End of explanation
joined['QorE'] = joined['Headline'].str.contains(r'\!|\?').astype(int)
joined['Q&A'] = joined['Headline'].str.contains(r'Q\. and A\.').astype(int)
Explanation: Create additional features:
* "QorE": question or exclamation mark in the headline
* "Q&A": "Q. and A." phrase in the headline (I don't think it was valuable, but stayed here from my previous attemps)
End of explanation
joined['PubDate'] = pd.to_datetime(joined['PubDate'])
joined['Weekday'] = joined['PubDate'].dt.weekday
joined['Hour'] = joined['PubDate'].dt.hour
print("At the moment, we have %d entries with NewsDesk=Nan." % len(joined.loc[joined['NewsDesk'].isnull()]))
Explanation: Convert "PubDate" into two columns: Weekday and Hour:
End of explanation
joined.loc[(joined['NewsDesk'] == 'Foreign') & (joined['SectionName'].isnull())].head()
joined.loc[(joined['NewsDesk'] == 'Styles') & (joined['SectionName'].isnull()), 'NewsDesk'] = 'TStyle'
joined.loc[(joined['NewsDesk'] == 'Foreign') & (joined['SectionName'].isnull()), 'NewsDesk'] = 'History'
joined.loc[(joined['NewsDesk'].isnull()) & (joined['Headline'].str.contains(r'^1[0-9]{3}')), 'NewsDesk'] = 'History'
joined.loc[(joined['NewsDesk'].isnull()) & (joined['Headline'] == 'Daily Clip Report'), 'NewsDesk'] = 'Daily Rubric'
joined.loc[joined['NewsDesk'] == 'Daily Rubric', 'SectionName'] = 'Clip Report'
joined.loc[(joined['NewsDesk'].isnull()) & (joined['Headline'] == 'Today in Politics'), 'SectionName'] = 'Today in Politics'
joined.loc[joined['SectionName'] == 'Today in Politics', 'NewsDesk'] = 'Daily Rubric'
joined.loc[(joined['NewsDesk'].isnull()) & (joined['Headline'].str.contains(r'what we\'re reading', case=False)), 'SectionName'] = 'What we\'re reading'
joined.loc[joined['SectionName'] == 'What we\'re reading', 'NewsDesk'] = 'Daily Rubric'
joined.loc[(joined['NewsDesk'].isnull()) & (joined['Headline'].str.contains(r'first draft', case=False)), 'SectionName'] = 'First draft'
joined.loc[joined['SectionName'] == 'First draft', 'NewsDesk'] = 'Daily Rubric'
joined.loc[(joined['NewsDesk'].isnull()) & (joined['SubsectionName'] == 'Education'), 'NewsDesk'] = 'Daily Rubric'
joined.loc[(joined['Headline'].str.contains('pictures of the day|week in pictures', case=False)), 'NewsDesk'] = 'Daily Rubric'
Explanation: More features and gap filling
Below are the results of one day of searching for meaningful patterns in the data. There are a few easily identifiable features, most of which lead to zero popularity. They are:
* "History": article headings always started with a year. None of them were popular in the training set
* "Daily rubric": I added this new NewsDesk category for types of articles that appeared regularly (not necessarily daily): "Daily Clip Report", "Today in Politics", "What we're reading", "First Draft", "Pictures of the day", "Week in pictures". They also were not popular.
Now, as ask788 pointed out in this thread, the problem with data is often their structure, not the models we use on them. I agree that ideally this feature engineering should have been done automatically, but I am a novice, and had to tediously plod through the rows of data manually.
You can browse individual features that I selected by printing the head() of a subset, like so:
End of explanation
section_to_newsdesk = {'Business Day': 'Business', 'Crosswords/Games': 'Business', 'Technology': 'Business',
'Arts': 'Culture',
'World': 'Foreign',
'Magazine': 'Magazine',
'N.Y. / Region': 'Metro',
'Opinion': 'OpEd',
'Travel': 'Travel',
'Multimedia': 'Multimedia',
'Open': 'Open'}
section_to_subsection = {'Crosswords/Games': 'Crosswords/Games',
'Technology': 'Technology'}
newsdesk_to_section = {'TStyle': 'TStyle',
'Culture': 'Arts',
'OpEd': 'Opinion',
'History': 'History'}
newsdesk_to_subsection = {'TStyle': 'TStyle',
'Culture': 'Arts',
'Daily Rubric': 'Rubric',
'Magazine': 'Magazine',
'Metro': 'Metro',
'Multimedia': 'Multimedia',
'OpEd': 'OpEd',
'Science': 'Science',
'Sports': 'Sports',
'Styles': 'Styles',
'Travel': 'Travel',
'History': 'History'}
for sec in set(joined['SectionName']):
try: section_to_newsdesk[sec]
except KeyError:
pass
else:
joined['NewsDesk'].fillna(joined.loc[(joined['SectionName'] == sec)]['NewsDesk'].fillna(section_to_newsdesk[sec]), inplace=True)
try: section_to_subsection[sec]
except KeyError:
pass
else:
joined['SubsectionName'].fillna(joined.loc[(joined['SectionName'] == sec)]['SubsectionName'].fillna(section_to_subsection[sec]), inplace=True)
for nd in set(joined['NewsDesk']):
try: newsdesk_to_section[nd]
except KeyError:
pass
else:
joined['SectionName'].fillna(joined.loc[(joined['NewsDesk'] == nd)]['SectionName'].fillna(newsdesk_to_section[nd]), inplace=True)
try: newsdesk_to_subsection[nd]
except KeyError:
pass
else:
joined['SubsectionName'].fillna(joined.loc[(joined['NewsDesk'] == nd)]['SubsectionName'].fillna(newsdesk_to_subsection[nd]), inplace=True)
Explanation: Filling the gaps in NewsDesk, SectionName and SubsectionName.
End of explanation
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
nans = joined.loc[joined['NewsDesk'].isnull()]
words = list(nans.apply(lambda x:'%s' % (x['Abstract']),axis=1))
tfv = TfidfVectorizer(min_df=0.005, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1,
stop_words = 'english')
X_tr = tfv.fit_transform(words)
ward = AgglomerativeClustering(n_clusters=6,
linkage='ward').fit(X_tr.toarray())
joined.loc[joined['NewsDesk'].isnull(), 'cluster'] = ward.labels_
cluster_to = {}
cluster_to['NewsDesk'] = {4: 'Metro', 3: 'National', 2: 'Foreign', 1: 'National'}
cluster_to['SectionName'] = {4: 'N.Y. / Region', 3: 'U.S.', 2: 'Not_Asia', 1: 'U.S.'}
cluster_to['SubsectionName'] = {4: 'NYT', 3: 'Politics', 2: 'Not_Asia', 1: 'Politics'}
for key in cluster_to:
for key2 in cluster_to[key]:
joined.loc[(joined['cluster'] == key2) & (nans['NewsDesk'].isnull()), key] = cluster_to[key][key2]
Explanation: Filling even more gaps with some clustering. I created a TF-IDF matrix and did Ward clustering on words. Four of six clusters I thought were meaningful and fitted well into existing NewsDesk/S(ubs)ectionName.
End of explanation
joined.loc[joined['cluster'] == 3].head()
Explanation: You can see what these clusters look like by typing:
End of explanation
joined.drop('cluster', axis=1, inplace=True)
keywords = {}
keywords['clinton|white house|obama'] = {'NewsDesk': 'National', 'SectionName': 'U.S.', 'SubsectionName': 'Politics'}
keywords['isis|iraq'] = {'NewsDesk': 'Foreign', 'SectionName': 'Not_Asia', 'SubsectionName': 'Not_Asia'}
keywords['york'] = {'NewsDesk': 'Metro', 'SectionName': 'N.Y. / Region', 'SubsectionName': 'N.Y. / Region'}
for key in keywords:
indices = (joined['NewsDesk'].isnull()) & (joined['Abstract'].str.contains(key, case=False))
for sec in keywords[key]:
joined.loc[indices, sec] = keywords[key][sec]
print("Now we have %d entries with NewsDesk=Nan." % len(joined.loc[joined['NewsDesk'].isnull()]))
Explanation: Finally, use a few (6) obvious keywords to categorise the data even more. After this, we are left with 950 entries where NewsDesk, SectionName and SubsectionName are NaN, but I didn't have an idea how to deal with them.
End of explanation
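One simple option for the remaining uncategorised rows (a hedged suggestion, not something actually done in this notebook) is to give them an explicit 'Unknown' level so that the one-hot encoding below still produces a clean column for them:
# Hypothetical fallback: label leftover rows explicitly instead of leaving NaN.
for col in ['NewsDesk', 'SectionName', 'SubsectionName']:
    joined[col] = joined[col].fillna('Unknown')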
from sklearn.feature_extraction import DictVectorizer
def categorizeDF(df):
old_columns = df.columns
cat_cols = ['NewsDesk', 'SectionName', 'SubsectionName']
temp_dict = df[cat_cols].to_dict(orient="records")
vec = DictVectorizer()
vec_arr = vec.fit_transform(temp_dict).toarray()
new_df = pd.DataFrame(vec_arr).convert_objects(convert_numeric=True)
new_df.index = df.index
new_df.columns = vec.get_feature_names()
columns_to_add = [col for col in old_columns if col not in cat_cols]
new_df[columns_to_add] = df[columns_to_add]
new_df.drop(cat_cols, inplace=True, axis=1)
return new_df
joined_cat = categorizeDF(joined)
Explanation: Categorical (factor) columns
First, turn the categorical data into 0/1 binary columns. Yes, it's more painful in Python than in R.
End of explanation
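For comparison, pandas also has a built-in one-hot encoder; a minimal sketch of the same step with pd.get_dummies (not used in the rest of this notebook) would be:
# Alternative sketch: the prefix keeps the dummy column names readable.
cat_cols = ['NewsDesk', 'SectionName', 'SubsectionName']
joined_cat_alt = pd.get_dummies(joined, columns=cat_cols, prefix=cat_cols)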
train = joined_cat[joined_cat['UniqueID'] <= 6532]
test = joined_cat[joined_cat['UniqueID'] > 6532]
Explanation: Recover train and test sets
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn import cross_validation
Xcols = train.columns
Xcols = [x for x in Xcols if not x in ('Headline', 'Snippet', 'Abstract', 'PubDate', 'UniqueID', 'Popular', 'Q&A')]
y = train['Popular']
forest = RandomForestClassifier(n_estimators=7000, max_features=0.1, min_samples_split=24, random_state=33, n_jobs=3)
forest.fit(train[Xcols], y)
probsRF = forest.predict_proba(test[Xcols])[:,1]
print("10 Fold CV Score: ", np.mean(cross_validation.cross_val_score(forest, train[Xcols], y, cv=10, scoring='roc_auc')))
Explanation: Random Forest
Parameters for RF had been optimised with a GridSearchCV function from sklearn.
End of explanation
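The grid search itself is not shown in this notebook; a rough sketch of how it could have looked is below. The candidate values in param_grid are assumptions for illustration, not the grid that was actually used.
# Old sklearn API (matching sklearn.cross_validation above); in modern sklearn use sklearn.model_selection.GridSearchCV.
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

param_grid = {'max_features': [0.1, 0.3, 0.5],      # assumed candidates
              'min_samples_split': [8, 16, 24]}     # assumed candidates
gs = GridSearchCV(RandomForestClassifier(n_estimators=500, random_state=33, n_jobs=3),
                  param_grid, scoring='roc_auc', cv=5)
gs.fit(train[Xcols], y)
print(gs.best_params_, gs.best_score_)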
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import cross_validation
Xcols = train.columns
Xcols = [x for x in Xcols if not x in ('Headline', 'Snippet', 'Abstract', 'PubDate', 'UniqueID', 'Popular', 'Q&A')]
y = train['Popular']
est = GradientBoostingClassifier(n_estimators=3000,
learning_rate=0.005,
max_depth=4,
max_features=0.3,
min_samples_leaf=9,
random_state=33)
est.fit(train[Xcols], y)
probsGBC = est.predict_proba(test[Xcols])[:,1]
print("10 Fold CV Score: ", np.mean(cross_validation.cross_val_score(est, train[Xcols], y, cv=10, scoring='roc_auc')))
Explanation: Gradient Boosting Method
End of explanation
from sklearn import cross_validation
from sklearn.metrics import roc_auc_score
def calculate_ensemble_score(model1, model2, Xcols, ycol, dataset, cv=10):
'''Calculates the score for various weights of two models in an ensemble'''
num_points = 21
score_arr = np.zeros((cv, num_points))
kf = cross_validation.KFold(len(dataset), cv, shuffle=True)
i = 0
for xtrain, xtest in kf:
train, test = dataset.ix[xtrain], dataset.ix[xtest]
model1.fit(train[Xcols], train[ycol])
probs1 = model1.predict_proba(test[Xcols])[:,1]
model2.fit(train[Xcols], train[ycol])
probs2 = model2.predict_proba(test[Xcols])[:,1]
for wg in range(num_points):
probs = wg/(num_points-1)*probs1 + (1-wg/(num_points-1))*probs2
score_arr[i][wg] = roc_auc_score(test[ycol], probs)
i+=1
return np.mean(score_arr, axis=0)
def plot_ensemble_score(scores):
import seaborn as sbs
fig = plt.figure()
ax = fig.add_subplot(111)
ax.axhline(y=scores[0], linestyle='--', color='red')
ax.axhline(y=scores[-1], linestyle='--', color='green')
ax.text(0.03, scores[0]+0.00001, "Pure GBM", verticalalignment='bottom', horizontalalignment='left', color='red', size='larger')
ax.text(0.98, scores[-1]+0.00001, "Pure RF", verticalalignment='bottom', horizontalalignment='right', color='green', size='larger')
ax.plot(np.linspace(0,1,len(scores)), scores)
ax.set_xlabel("RF model weight")
ax.set_ylabel("AUC")
ax.set_title("Choosing the weights for two models in an ensemble (10-fold cross-validation)")
return fig
means = calculate_ensemble_score(forest, est, Xcols, 'Popular', train)
myplot = plot_ensemble_score(means)
myplot.savefig("AUC.png", dpi=300)
test['Popular'] = (0.6*probsGBC+0.4*probsRF)
test['UniqueID'] = test['UniqueID'].astype(int)
test.to_csv('preds.csv', columns=['UniqueID', 'Popular'], header=['UniqueID', 'Probability1'], index=False)
Explanation: Define a function for cross-validating and plotting ensemble of my two models:
End of explanation |
4,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Linear Classifiers
Question 1
<img src="images/lec2_pic01.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Question 2
<img src="images/lec2_pic02.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Question 3
<img src="images/lec2_pic03.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Question 4
<img src="images/lec2_pic04.png">
<img src="images/lec2_pic04-00.png">
<img src="images/lec2_pic04-001.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Step2: Question 5
<img src="images/lec2_pic05.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Watch this video for more info
https | Python Code:
import numpy as np
dummy_feature_matrix = np.array([[1.,2.5], [1.,0.3], [1.,2.8], [1.,0.5]])
dummy_coefficients = np.array([0., 1.])
sentiment = np.array([1., -1., 1., 1.])
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1. / (1 + np.exp(-scores))
# return predictions
return predictions
def compute_data_likelihood(sentiment, probability):
indicator = (sentiment==+1)
print "Indicator: ", indicator
print "Probability of +1: ", probability
# probability of (-1)= (1 - probability of +1)
probability[~indicator] = 1 - probability[~indicator]
print "Maximum likelihood: ", probability
return np.prod(probability)
probability = predict_probability(dummy_feature_matrix, dummy_coefficients)
print probability
data_likelihood = compute_data_likelihood(sentiment, probability)
print data_likelihood
Explanation: Learning Linear Classifiers
Question 1
<img src="images/lec2_pic01.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Question 2
<img src="images/lec2_pic02.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Question 3
<img src="images/lec2_pic03.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Question 4
<img src="images/lec2_pic04.png">
<img src="images/lec2_pic04-00.png">
<img src="images/lec2_pic04-001.png">
Screenshot taken from Coursera
<!--TEASER_END-->
End of explanation
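A small aside, not part of the quiz: multiplying many probabilities together underflows quickly, so the same quantity is usually reported as a log likelihood. A sketch using the helpers defined above:
# Log likelihood = sum of log P(y_i | x_i, w), i.e. the log of the product computed above.
indicator = (sentiment == +1)
probs = predict_probability(dummy_feature_matrix, dummy_coefficients)
probs[~indicator] = 1 - probs[~indicator]
log_likelihood = np.sum(np.log(probs))
print(log_likelihood)  # equals np.log(data_likelihood)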
def compute_derivative_log_likelihood(feature_vector, sentiment, probability):
    """
    Compute derivative of feature vector
    - In this case, the feature vector with respect to w1
    """
indicator = (sentiment==+1)
print "Indicator: ", indicator
# Contribution to derivative for w1
contribution = feature_vector * (indicator - probability)
print "Contribution: ", contribution
return np.sum(contribution)
probability = predict_probability(dummy_feature_matrix, dummy_coefficients)
print probability
# In this case, the feature vector (dummy_feature_matrix[:, 1]) with respect to w1
compute_derivative_log_likelihood(dummy_feature_matrix[:, 1], sentiment, probability)
Explanation: Question 5
<img src="images/lec2_pic05.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Watch this video for more info
https://www.coursera.org/learn/ml-classification/lecture/UEmJg/example-of-computing-derivative-for-logistic-regression
End of explanation |
4,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Assignment Resit - Part B
Deadline
Step2: Tip 0
Step3: Tip 1
Step4: Tip 2
Step5: Tip 3
Step6: 3. Building python modules to process files in a directory
In this exercise, you will write two python modules | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Assignments-colab/ASSIGNMENT_RESIT_B.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
# your function
# test your function
text = 'This is an example text. The text mentions a former president of the United States, Barack Obama.'
basename = 'test_text.tsv'
output_dir = 'test_dir'
text_to_conll_simple(text,
nlp,
output_dir,
basename,
start_with_index = False,
overwrite_existing_conll_file = True)
Explanation: Assignment Resit - Part B
Deadline: Friday, November 13, 2020 before 17:00
This part of the assignment should be submitted as a zip file containing two python modules:
utils.py
texts_to_conll.py
ASSIGNMENT-RESIT-A.ipynb (notebook containing part A)
Please name your zip file as follows: RESIT-ASSIGNMENT.zip and upload it via Canvas (Resit Assignment).
Please submit your assignment on Canvas: Resit Assignment
If you have questions about this topic
If you have questions about this topic, please contact [email protected].
Questions and answers will be collected in this Q&A document, so please check if your question has already been answered.
All of the covered chapters are important to this assignment. However, please pay special attention to:
Chapter 14 - Reading and writing text files
Chapter 15 - Off to analyzing text
Chapter 16 - Data formats I (CSV and TSV)
Chapter 19 - More about Natural Language Processing Tools (spaCy)
In this assignment, we are going to write code which converts raw text to a structured format frequently used in Natural Language Processing. No matter what field you will end up working in, you will always have to be able to convert data from format A to format B. You have already gained some experience with such conversions in Block 4.
The CoNLL format
Before you use the output of a text analysis system, you usually want to store the output in a structured format. One way of doing this is to use NAF - an XML-based format. In this assignment, we are going to look at CoNLL, which is a table-based format (i.e. it is similar to csv/tsv).
The format we are converting to is called CoNLL. CoNLL is the name of a conference (Conference on Natural Language Learning). Every year, the conference hosts a 'competition'. In this competition, participants have to build systems for a certain Natural Language Processing problem (usually referred to as a 'task'). To compare results, participants have to stick to the CoNLL format. The format has become a popular format for storing the output of NLP systems.
The goal of this assignment is to write a python module which processes all texts in ../Data/Dreams/. The output should be written to a new directory, in which each text is stored as a csv/tsv file following CoNLL conventions.
Text analysis with SpaCy
In part A of this assignment, you have already used SpaCy to process text. In this part of the assignment, you can make use of the code you have already written. The output files will contain the following information:
The tokens in each text
Information about the sentences in each text
Part-of-speech tags for each token
The lemma of each token
Information about entities in a text (i.e. people, places, organizations, etc. that are mentioned)
The assignment
We will guide you towards the final file-conversion step-by-step. The assignment is divided into 3 parts. We provide small toy examples you can use to develop your code. As a final step, you will be asked to transfer all your code to python modules and process a directory of text files with it.
Exercise 1: A guided tour of the CoNLL format
Exercise 2: Writing a conversion function (text_to_conll)
Exercise 3: Processing multiple files using python modules
Attention: This notebook should be placed in the same folder as the other Assignments!
1. Understanding the CoNLL format
The CoNLL format represents information about a text in table format. Each token is represented on a line. Each column contains a piece of information. Sentence boundaries are marked by empty lines. In addition, each token has an index. This index starts with 1 and identifies the position of the token in the sentence. Punctuation marks are also included.
Consider the following example text:
This is an example text. The text mentions a former president of the United States, Barack Obama.
The representation of this sentence in CoNLL format looks like this:
| | | | | | |
|----|-----------|-----|-----------|--------|---|
| 1 | This | DT | this | | O |
| 2 | is | VBZ | be | | O |
| 3 | an | DT | an | | O |
| 4 | example | NN | example | | O |
| 5 | text | NN | text | | O |
| 6 | . | . | . | | O |
| | | | | | |
| 1 | The | DT | the | | O |
| 2 | text | NN | text | | O |
| 3 | mentions | VBZ | mention | | O |
| 4 | a | DT | a | | O |
| 5 | former | JJ | former | | O |
| 6 | president | NN | president | | O |
| 7 | of | IN | of | | O |
| 8 | the | DT | the | GPE | B |
| 9 | United | NNP | United | GPE | I |
| 10 | States | NNP | States | GPE | I |
| 11 | , | , | , | | O |
| 12 | Barack | NNP | Barack | PERSON | B |
| 13 | Obama | NNP | Obama | PERSON | I |
| 14 | . | . | . | | O |
The columns represent the following information:
Column 1: Token index in sentence
Column 2: The token as it appears in the text (including punctuation)
Column 3: The part-of-speech tag
Column 4: The lemma of the token
Column 5: Information about the type of entity (if the token is part of an expression referring to an entity). For example, Barack Obama is recognized as a person
Column 6: Information about the position of the token in the entity mention. B stands for 'beginning', I stands for 'inside' and O stands for 'outside'. Anything that is not part of an entity mention is marked as 'outside'. (This is important information for dealing with entity mentions. Don't worry, you do not have to make use of this information here.)
2. Writing the conversion function
In this section of the assignment, we will guide you through writing your function. You can accomplish the entire conversion in a single function (i.e. there will be no helper functions at this point). We will first describe what your function should do and then provide small toy examples to help you with some of the steps.
The conversion function: text_to_conll
(1) Define a function called text_to_conll
(2) The function should have the following parameters:
text: The input text (str) that should be processed and written to a conll file
nlp: the SpaCy model
output_dir: the directory the file should be written to
basename: the name of the output file without the path (i.e. the file will be written to output_dir/basename)
delimiter: the field delimiter (by default, it should be a tab)
start_with_index: By default, this should be True.
overwrite_existing_conll_file: By default, this should be set to True.
(3) The function should do the following:
Convert text to CoNLL format as shown in the example in exercise 1.
The file should have the following columns:
Token index in sentence (as shown in the example). If start_with_index is set to False, the first column should be the token.
token
part of speech tag (see tips below)
lemma
entity type (see tips below)
entity iob label (indicates the position of a token in an entity expression; see tips below)
If the parameter overwrite_existing_conll_file is set to True, the file should be written to output_dir/basename.
If the parameter overwrite_existing_conll_file is set to False, the function should check whether the file (path: output_dir/basename) exists. If it does, it should print 'File exists. Set param overwrite_existing_conll_file to True if you want to overwrite it.' If it does not exist, it should write it to the specified file. (See tips below)
The delimiter between fields should be the delimiter specified by the parameter delimiter.
You can define the function in the notebook. Please test it using the following test text. Make sure to test the different parameters. Your test file should be written to test_dir/test_text.tsv.
End of explanation
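To make the specification above concrete, here is one possible sketch of such a function. It is an illustration rather than the official solution; details such as writing no header row are free choices.
import os

def text_to_conll(text, nlp, output_dir, basename,
                  delimiter='\t', start_with_index=True,
                  overwrite_existing_conll_file=True):
    """Sketch: one CoNLL-style row per token, empty line between sentences."""
    output_path = os.path.join(output_dir, basename)
    if not overwrite_existing_conll_file and os.path.isfile(output_path):
        print('File exists. Set param overwrite_existing_conll_file to True if you want to overwrite it.')
        return
    if not os.path.isdir(output_dir):
        os.mkdir(output_dir)
    doc = nlp(text)
    lines = []
    for sent in doc.sents:
        for index, token in enumerate(sent, 1):
            row = [token.text, token.tag_, token.lemma_, token.ent_type_, token.ent_iob_]
            if start_with_index:
                row = [str(index)] + row
            lines.append(delimiter.join(row))
        lines.append('')  # empty line marks the sentence boundary
    with open(output_path, 'w') as outfile:
        outfile.write('\n'.join(lines))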
import spacy
nlp = spacy.load('en_core_web_sm')
Explanation: Tip 0: Import spacy and load your model
(See part A and chapter on SpaCy for more information)
End of explanation
test = 'This is a test.'
doc = nlp(test)
tok = doc[0]
tok.text
Explanation: Tip 1: Tokens, POS tags, and lemmas
Experiment with a small example to get the tokens and pos tags. Please refer to the chapter on SpaCy for an example on how to process text with spacy.
SpaCy has different pos tags. For this exercise, it does not matter which one you use. Hint: To get a string (rather than a number), use the SpaCy attributes ending with '_'.
You can use the code below to experiment:
End of explanation
test = 'This is a test.'
doc = nlp(test)
tok = doc[0]
tok.text
dir(tok)
Explanation: Tip 2: Entities
Entity types
Entities are things (usually people/places/organizations/etc) that exist in the real world. SpaCy can tag texts with entity types. If an expression refers to an entity in the world, it will receive a label indicating the type (for example, Barack Obama will be tagged as 'PERSON'). Since the expression 'Barack Obama' consists of two tokens, each token will receive such a label. Use dir() on a token object to find out how to get this information. Hint: Everything about entities starts with 'ent_'
Position of the entity token
An expression referring to an entity can consist of multiple tokens. To indicate that multiple tokens are part of the same/of different expressions, we often use the IOB system. In this system, we indicate whether a token is outside an entity mention, inside an entity mention or at the beginning of an entity mention. In practice, most tokens of a text will thus be tagged as 'O'. 'Barack' will be tagged as 'B' and 'Obama' as 'I' (see example above). SpaCy can do this type of labeling. Use dir() on a token object to find out how to get this information.
End of explanation
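If it is useful to see the attributes from Tips 1 and 2 side by side, a small illustrative loop (using a shortened version of the example sentence) could look like this:
example_doc = nlp('The text mentions Barack Obama.')
for token in example_doc:
    # the string-valued attributes all end with '_'
    print(token.text, token.tag_, token.lemma_, token.ent_type_, token.ent_iob_)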
# Check if file exists
import os
a_path_to_a_file = '../Data/books/Macbeth.txt'
if os.path.isfile(a_path_to_a_file):
print('File exists:', a_path_to_a_file)
else:
print('File not found:', a_path_to_a_file)
another_path_to_a_file = '../Data/books/KingLear.txt'
if os.path.isfile(another_path_to_a_file):
print('File exists:', another_path_to_a_file)
else:
print('File not found:', another_path_to_a_file)
# check if directory exists
a_path_to_a_dir = '../Data/books/'
if os.path.isdir(a_path_to_a_dir):
print('Directory exists:', a_path_to_a_dir)
else:
print('Directory not found:', a_path_to_a_dir)
another_path_to_a_dir = '../Data/films/'
if os.path.isdir(another_path_to_a_dir):
print('Directory exists:', another_path_to_a_dir)
else:
print('Directory not found:', another_path_to_a_dir)
Explanation: Tip 3: Dealing with directories and files
Use os to check if files or directories exist. You can also use os to make a directory if it does not exist yet.
os.path.isdir(path_to_dir) returns a boolean value. If the directory exists, it returns True. Else it returns False. You can use this to check if a directory exists. If it does not, you can make it.
os.path.isfile(path_to_file) returns a boolean value. If the file exists, it returns True. Else it returns False.
os.mkdir(path_to_dir) makes a new directory. Try it out and create a directory called 'test_dir' in the current directory.
End of explanation
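A tiny sketch of the 'make the directory if it is missing' pattern that the tip describes:
new_dir = 'test_dir'
if not os.path.isdir(new_dir):
    os.mkdir(new_dir)
print('Directory ready:', new_dir)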
# Files in '../Data/Dreams':
%ls ../Data/Dreams/
Explanation: 3. Building python modules to process files in a directory
In this exercise, you will write two python modules:
utils.py
texts_to_conll.py
The module texts_to_conll.py should do the following:
process all text files in a specified directory (we will use '../Data/Dreams')
write conll files representing these texts to another directory
Step 1: Preparation:
Create the two python modules in the same directory as this notebook
copy your function text_to_conll to the python module texts_to_conll.py
Move the function load_text you have defined in part A to utils.py and import it in texts_to_conll.py
Move the function get_paths you have defined in part A to utils.py and import it in texts_to_conll.py
Step 2: convert all text files in ../Data/Dreams:
Use your functions to convert all files in ../Data/Dreams/. Please fulfill the following criteria:
The new files should be placed in a directory placed in the current directory called dreams_conll/
Each file should be named as follows: [original name without extension].tsv (e.g. vicky1.tsv)
The files should contain an index column
Tips:
Use a loop to iterate over the files in ../Data/Dreams.
Use string methods and slicing to create the new filename from the original filename (e.g. split on '/' and/or '.', use indices to extract certain substrings, etc.)
Look at the resulting files to check if your code works.
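As a rough sketch of what the loop in texts_to_conll.py could look like (text_to_conll is the function copied into that module; the exact signatures of get_paths and load_text from part A are assumptions here):
import os
import spacy
from utils import get_paths, load_text  # part A helpers; signatures assumed

nlp = spacy.load('en_core_web_sm')
output_dir = 'dreams_conll'
if not os.path.isdir(output_dir):
    os.mkdir(output_dir)

for path in get_paths('../Data/Dreams'):                  # assumed to return file paths
    text = load_text(path)                                # assumed to return the text as a string
    stem = os.path.splitext(os.path.basename(path))[0]    # e.g. 'vicky1'
    text_to_conll(text, nlp, output_dir, stem + '.tsv', start_with_index=True)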
Step 3: Test and submit
Please test your code carefully. Then submit all your files in a .zip file via Canvas.
Congratulations! You have completed your first file conversion exercise!
End of explanation |
4,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Indicators of Future Success in Men's Professional Tennis
May 2016
Written by John Ockay at NYU's Stern School of Business
Contact
Step1: Data (Part I)
To complete this project, I used data from two different sources. First, Jeff Sackmann is an author and entrepreneur who has worked in the fields of sports statistics and test preparation. In particular, he has an interest in the world of tennis and tennis statistics. He has created a website called TennisAbstract that contains historical match data and other tennis analytics. He has uploaded match data from 1968 to 2016 on GitHub in CSV format. I pulled the data from the Github site into Python in order to conduct the necessary analysis work for my project. I imported match data from 1984 to 2015, combined the data into one large spreadsheet, removed unnecessary columns, and adjusted the date column in order to better facilitate filtering. This process is outlined below
Step2: I utilized the above data to develop functions that allowed me to easily obtain and organize any tennis player's early career statistics, such as top 10 or top 50 victories. Those functions are listed below with Roger Federer used as an example
Step3: YearWins
Step4: CareerTopTen
Step5: YearTopTen
Step6: CareerTopFifty
Step7: YearTopFifty
Step8: CareerTourneyWins
Step9: YearTourneyWins
Step10: Data (Part II)
While the data from Jeff Sackmann allowed me to easily collect data for individual tennis players, I needed to use an Excel document in order to better organize summary data for Top 20 and Top 10 players between the years of 1995 and 2015. In addition, I collected (from atpworld.com) and organized historical rankings data and the years in which players turned pro in order to investigate players' rankings progression. The Excel document contains the following worksheets
Step11: The following cells illustrate some of the data contained within the Excel document
Step12: In order to investigate a player's early career, it is essential to identify when that player turned professional. I developed the following function based on the data within the Excel document to facilitate that process. Again, Roger Federer is used as an example.
Step13: As mentioned in the introduction, a primary goal for this project was to compare the stats of young tennis players to the early career stats of players who eventually achieved top 20 or top 10 status. Utilizing data from the Excel document, the following cells include my analysis of the early careers of players who achieved top 20 or top 10 status between the years of 1995 and 2015.
First, I compared how long, on average, it took players who eventually became top 20 or top 10 players to achieve top 50 status upon turning pro. For all analyses below, in addition to comparing the top 20 group to the top 10 group, I also made comparisons based on the year in which tennis players turned pro.
Step14: As seen above, tennis stars who turned pro before 1995 achieved top 50 status more quickly than players who turned pro after 2004. As expected, players who eventually became top 10 achieved top 50 status more quickly than those players who only became top 20 in the world. The following two graphics outline similar information in table and histogram formats
Step15: Second, I compared how many top 10 wins, on average, tennis stars had within 3 years of turning pro.
Step16: As seen above, players who turned pro prior to 2000 generally had more top 10 victories within their first 3 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format
Step17: Third, I compared how many top 10 wins, on average, tennis stars had within 5 years of turning pro.
Step18: As seen above, players who turned pro prior to 2000 generally had more top 10 victories within their first 3 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format
Step19: Fourth, I compared how many top 50 wins, on average, tennis stars had within 3 years of turning pro.
Step20: As seen above, players who turned pro prior to 2000 generally had more top 50 victories within their first 3 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format
Step21: Fifth, I compared how many top 50 wins, on average, tennis stars had within 5 years of turning pro.
Step22: As seen above, players who turned pro prior to 2000 generally had more top 50 victories within their first 5 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format
Step23: Sixth, I compared how many tournament wins, on average, tennis stars had within 3 years of turning pro.
Step24: As seen above, tennis stars that turned pro before 1995 had much more tournament success early in their careers compared with tennis stars that turned pro after 2000. The following graphs highlight similar information in table and histogram format
Step25: Lastly, I compared how many tournament wins, on average, tennis stars had within 5 years of turning pro.
Step26: Once again, while there has been a resurgence more recently, tennis stars who turned pro before 1995 generally had more early success in tournaments compared with tennis stars who turned pro after 2000. The following graphs highlight similar information in table and histogram format
Step27: Summary
The following table summarizes the data from the above graphics. These numbers serve as the foundation for the ultimate analysis of young players' early careers. As we saw above, with the exception of outliers like Andy Murray, tennis stars in recent years have taken a longer time to reach top 50 status and have had fewer top 10, 50, and tournament victories in the first few years of their careers. Since the early 2000's, tennis has largely been dominated by four players (Roger Federer, Rafael Nadal, Andy Murray, and Novak Djokovic), and this is likely the primary explanation for the increased difficulty within men's professional tennis. With such dominant players at the top, it has been challenging for younger players to break into the top 10 or to win tournaments at a young age.
Step28: As the final piece of the analysis, I developed a function to easily compare the average statistics from the early careers of the top 20 and top 10 players between 1995 and 2015 with the stats of any selected tennis player. The following functions utilized terms developed earlier in this report and allowed me to easily summarize tennis stats for any player.
TourneyWinsFirst3ProYears
Step29: TourneyWinsFirst5ProYears
Step30: Top10WinsFirst3ProYears
Step31: Top10WinsFirst5ProYears
Step32: Top50WinsFirst3ProYears
Step33: Top50WinsFirst5ProYears
Step34: YearsUntilTop50
Step35: The final function summarizes all the data seen thus far in this report in a single table. We can now compare any tennis player to the average statistics of players who eventually became top 10 or top 20 players in the world. For young tennis players, this could be used to gauge success early in their careers.
Step36: Based on the information above, Andy Murray had an incredible start to his career. For example, it took him only 2 years to achieve a top 50 ranking, and he had 35 victories against top 10 opponents in his first 5 professional seasons (compared to an average of 10 for players that eventually became top 10 in the world). Looking at this table, it is not surprising that Andy Murray eventually became a top 5 player in the world with multiple grand slam championship titles. The following function allows us to see the same information in a more visual format
Step37: As another example, we can look at the career of Steve Johnson. Steve Johnson is a perennial top 100 player in the world, but has struggled to make a significant impact on the ATP tour. Looking at his early career, it is not surprising that he has had difficulty becoming a top 20 player.
Step38: Finally, we can look at the early statistics for Roger Federer, one of the greatest men's tennis players of all time. As the graph illustrates, his play in the first few years of his career, particularly with respect to victories against top 10 opponents, made it clear that he would become a phenomenal tennis player someday. | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import xlrd
Explanation: Indicators of Future Success in Men's Professional Tennis
May 2016
Written by John Ockay at NYU's Stern School of Business
Contact: jfo262@stern.nyu.edu
Project Description
In 2015, Novak Djokovic earned over 16 million through tennis alone while Dudi Sela, the 100th best tennis player in the world, earned just over 300 thousand. In 2010, the U.S. Tennis Association reported that the annual average cost to be a “highly competitive” professional tennis player was 143 thousand. After accounting for such expenses, it is clear that if a player cannot stay within the top 100 on a consistent basis, it will be incredibly challenging to survive financially playing tennis alone.
My project utilizes historical tennis data to investigate key indicators of future success in men's professional tennis. Before continuing to invest hundreds of thousands of dollars into one's career, a young player can compare his statistics with the early careers of tennis stars (i.e., those who have achieved at least a top 20 ranking at some point during their careers). For the purposes of this project, I analyzed the early career statistics for every player that achieved a top 20 or top 10 ranking between 1995 and 2015. I looked at the following key indicators:
1) Professional seasons until a top 50 ranking: For those who became top 20 or top 10 players between 1995 and 2015, how many years did it take for them to achieve a top 50 ranking after they turned pro? Young tennis players can use this metric to compare their ranking progress with the early progress of successful tennis players.
2) Top 10 Victories: How many top 10 victories did tennis stars have in their first 3 and their first 5 professional seasons?
3) Top 50 Victories: How many top 50 victories did tennis stars have in their first 3 and their first 5 professional seasons?
4) Tournament Victories: How many tournament victories did tennis stars have in their first 3 and their first 5 professional seasons?
For each indicator, I also broke the data down based on when a top 20 or top 10 player between 1995 and 2015 turned pro. I compared the differences among the early careers of tennis stars that turned pro before 1995, between 1995 and 2000, between 2000 and 2005, and after 2005. As we will see in the graphics below, young players today may not be expected to progress as quickly or have as many top 10 or tournament victories as players that turned pro before 1995.
Ultimately, I provide the means to compare the stats of any tennis player on the ATP tour with the average, historical data of players who eventually became top 20 or top 10 players in the world.
I used the following Python packages to import both internet and internal excel data and to develop relevant graphics:
End of explanation
url2015 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2015.csv'
tennis2015 = pd.read_csv(url2015)
url2014 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2014.csv'
tennis2014 = pd.read_csv(url2014)
url2013 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2013.csv'
tennis2013 = pd.read_csv(url2013)
url2012 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2012.csv'
tennis2012 = pd.read_csv(url2012)
url2011 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2011.csv'
tennis2011 = pd.read_csv(url2011)
url2010 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2010.csv'
tennis2010 = pd.read_csv(url2010)
url2009 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2009.csv'
tennis2009 = pd.read_csv(url2009)
url2008 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2008.csv'
tennis2008 = pd.read_csv(url2008)
url2007 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2007.csv'
tennis2007 = pd.read_csv(url2007)
url2006 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2006.csv'
tennis2006 = pd.read_csv(url2006)
url2005 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2005.csv'
tennis2005 = pd.read_csv(url2005)
url2004 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2004.csv'
tennis2004 = pd.read_csv(url2004)
url2003 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2003.csv'
tennis2003 = pd.read_csv(url2003)
url2002 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2002.csv'
tennis2002 = pd.read_csv(url2002)
url2001 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2001.csv'
tennis2001 = pd.read_csv(url2001)
url2000 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_2000.csv'
tennis2000 = pd.read_csv(url2000)
url1999 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1999.csv'
tennis1999 = pd.read_csv(url1999)
url1998 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1998.csv'
tennis1998 = pd.read_csv(url1998)
url1997 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1997.csv'
tennis1997 = pd.read_csv(url1997)
url1996 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1996.csv'
tennis1996 = pd.read_csv(url1996)
url1995 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1995.csv'
tennis1995 = pd.read_csv(url1995)
url1994 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1994.csv'
tennis1994 = pd.read_csv(url1994)
url1993 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1993.csv'
tennis1993 = pd.read_csv(url1993)
url1992 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1992.csv'
tennis1992 = pd.read_csv(url1992)
url1991 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1991.csv'
tennis1991 = pd.read_csv(url1991)
url1990 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1990.csv'
tennis1990 = pd.read_csv(url1990)
url1989 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1989.csv'
tennis1989 = pd.read_csv(url1989)
url1988 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1988.csv'
tennis1988 = pd.read_csv(url1988)
url1987 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1987.csv'
tennis1987 = pd.read_csv(url1987)
url1986 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1986.csv'
tennis1986 = pd.read_csv(url1986)
url1985 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1985.csv'
tennis1985 = pd.read_csv(url1985)
url1984 = 'https://raw.githubusercontent.com/JeffSackmann/tennis_atp/master/atp_matches_1984.csv'
tennis1984 = pd.read_csv(url1984)
tennisALL = (tennis1984.append(tennis1985).append(tennis1986).append(tennis1987).append(tennis1988).
append(tennis1989).append(tennis1990).append(tennis1991).append(tennis1992).
append(tennis1993).append(tennis1994).append(tennis1995).append(tennis1996).
append(tennis1997).append(tennis1998).append(tennis1999).append(tennis2000).
append(tennis2001).append(tennis2002).append(tennis2003).append(tennis2004).
append(tennis2005).append(tennis2006).append(tennis2007).append(tennis2008).
append(tennis2009).append(tennis2010).append(tennis2011).append(tennis2012).
append(tennis2013).append(tennis2014).append(tennis2015))
tennisALL.head(2)
tennisALL = tennisALL.drop(["tourney_id","draw_size","tourney_level","match_num","winner_id",
"winner_seed","winner_entry","winner_hand","winner_ht","winner_ioc",
"winner_age","winner_rank_points","loser_id","loser_seed","loser_entry",
"loser_hand","loser_ht","loser_ioc","loser_age","loser_rank_points","score",
"best_of","minutes","w_ace","w_df","w_svpt","w_1stIn","w_1stWon","w_2ndWon",
"w_SvGms","w_bpSaved","w_bpFaced","l_ace","l_df","l_svpt","l_1stIn","l_1stWon",
"l_2ndWon","l_SvGms","l_bpSaved","l_bpFaced"],axis=1)
tennisALL["tourney_date"] = tennisALL["tourney_date"].astype(str)
tennisALL["tourney_date"] = tennisALL["tourney_date"].str[:4]
tennisALL["tourney_date"] = tennisALL["tourney_date"].astype(int)
tennisALL
Explanation: Data (Part I)
To complete this project, I used data from two different sources. First, Jeff Sackmann is an author and entrepreneur who has worked in the fields of sports statistics and test preparation. In particular, he has an interest in the world of tennis and tennis statistics. He has created a website called TennisAbstract that contains historical match data and other tennis analytics. He has uploaded match data from 1968 to 2016 on GitHub in CSV format. I pulled the data from the Github site into Python in order to conduct the necessary analysis work for my project. I imported match data from 1984 to 2015, combined the data into one large spreadsheet, removed unnecessary columns, and adjusted the date column in order to better facilitate filtering. This process is outlined below:
End of explanation
def CareerWins(player):
return len(tennisALL[(tennisALL['winner_name']==player)])
CareerWins("Roger Federer")
Explanation: I utilized the above data to develop functions that allowed me to easily obtain and organize any tennis player's early career statistics, such as top 10 or top 50 victories. Those functions are listed below with Roger Federer used as an example:
CareerWins: total career victories for any given tennis player
End of explanation
def YearWins(player,year):
return len(tennisALL[(tennisALL['tourney_date']==year) & (tennisALL['winner_name']==player)])
YearWins("Roger Federer",2014)
Explanation: YearWins: victories within a specific season for any given tennis player
End of explanation
def CareerTopTen(player):
return len(tennisALL[(tennisALL['loser_rank']<=10) & (tennisALL['winner_name']==player)])
CareerTopTen("Roger Federer")
Explanation: CareerTopTen: total career top 10 victories for any given tennis player
End of explanation
def YearTopTen(player,year):
return len(tennisALL[(tennisALL['loser_rank']<=10)&(tennisALL['winner_name']==player)
&(tennisALL['tourney_date']==year)])
YearTopTen("Roger Federer",2014)
Explanation: YearTopTen: top ten victories within a specific season for any given tennis player
End of explanation
def CareerTopFifty(player):
return len(tennisALL[(tennisALL['loser_rank']<=50) & (tennisALL['winner_name']==player)])
CareerTopFifty("Roger Federer")
Explanation: CareerTopFifty: total career top 50 victories for any given tennis player
End of explanation
def YearTopFifty(player,year):
return len(tennisALL[(tennisALL['loser_rank']<=50)&(tennisALL['winner_name']==player)
&(tennisALL['tourney_date']==year)])
YearTopFifty("Roger Federer",2014)
Explanation: YearTopFifty: top fifty victories within a specific season for any given tennis player
End of explanation
def CareerTourneyWins(player):
return len(tennisALL[(tennisALL['round']=='F') & (tennisALL['winner_name']==player)])
CareerTourneyWins("Roger Federer")
Explanation: CareerTourneyWins: total career tournament victories for any given tennis player
End of explanation
def YearTourneyWins(player,year):
return len(tennisALL[(tennisALL['round']=='F')&(tennisALL['winner_name']==player)
&(tennisALL['tourney_date']==year)])
YearTourneyWins("Roger Federer",2014)
Explanation: YearTourneyWins: tournament victories within a specific season for any given tennis player
End of explanation
path = ("C:/Users/John/Desktop/Data_Bootcamp/Project Data/ATP Tennis Data (1973-2016).xlsx")
TennisData = pd.ExcelFile(path)
Top20Summary = TennisData.parse("Top 20 Summary (1995 Onward)").set_index("Player")
Top20RankingProgression = TennisData.parse("Top 20 Ranking Progression").set_index("Player")
Top10RankingProgression = TennisData.parse("Top 10 Ranking Progression").set_index("Player")
Top20Top10Wins = TennisData.parse("Top 20 Top 10 Wins").set_index("Player")
Top10Top10Wins = TennisData.parse("Top 10 Top 10 Wins").set_index("Player")
Top20Top50Wins = TennisData.parse("Top 20 Top 50 Wins").set_index("Player")
Top10Top50Wins = TennisData.parse("Top 10 Top 50 Wins").set_index("Player")
Top20TournamentWins = TennisData.parse("Top 20 Tournament Wins").set_index("Player")
Top10TournamentWins = TennisData.parse("Top 10 Tournament Wins").set_index("Player")
AllPlayerSummary = TennisData.parse("All Player Summary").set_index("Player")
Explanation: Data (Part II)
While the data from Jeff Sackmann allowed me to easily collect data for individual tennis players, I needed to use an Excel document in order to better organize summary data for Top 20 and Top 10 players between the years of 1995 and 2015. In addition, I collected (from atpworld.com) and organized historical rankings data and the years in which players turned pro in order to investigate players' rankings progression. The Excel document contains the following worksheets:
1. All Player Summary: historical, by year rankings for all tennis players; year turned pro; years it took players to obtain top 50 status
2. Top 10 Summary (1995 Onward): historical, by year rankings for players that achieved top 10 status between 1995 and 2015
3. Top 10 Ranking Progression: years it took to achieve top 50 status upon turning pro for players that eventually became top 10 in the world
4. Top 10 Top 10 Wins: Summary of top 10 victories in first 3 and 5 professional seasons for players that achieved top 10 status between 1995 and 2015
5. Top 10 Top 50 Wins: Summary of top 50 victories in first 3 and 5 professional seasons for players that achieved top 10 status between 1995 and 2015
6. Top 10 Tournament Wins: Summary of tournament victories in first 3 and 5 professional seasons for players that achieved top 10 status between 1995 and 2015
7. Top 20 Summary (1995 Onward): historical, by year rankings for players that achieved top 20 status between 1995 and 2015
8. Top 20 Ranking Progression: years it took to achieve top 50 status upon turning pro for players that eventually became top 20 in the world
9. Top 20 Top 10 Wins: Summary of top 10 victories in first 3 and 5 professional seasons for players that achieved top 20 status between 1995 and 2015
10. Top 20 Top 50 Wins: Summary of top 50 victories in first 3 and 5 professional seasons for players that achieved top 20 status between 1995 and 2015
11. Top 20 Tournament Wins: Summary of tournament victories in first 3 and 5 professional seasons for players that achieved top 20 status between 1995 and 2015
End of explanation
Top20Summary = Top20Summary.fillna("")
Top20Summary.head(5)
Top20RankingProgression.head(5)
Top10Top10Wins.head(5)
AllPlayerSummary = AllPlayerSummary[["Years Until Top 50","Year Turned Pro"]]
AllPlayerSummary = AllPlayerSummary.fillna("Never")
AllPlayerSummary.head(5)
Explanation: The following cells illustrate some of the data contained within the Excel document:
End of explanation
def YearTurnedPro(Player):
return AllPlayerSummary.ix[Player,"Year Turned Pro"]
YearTurnedPro("Roger Federer")
def RankingProgression(Player):
print("Player Name:",Player)
print("Year Turned Pro:",YearTurnedPro(Player))
print("Rankings First Five Pro Years:",[Top20RankingProgression.ix[Player,"Year 1"],
Top20RankingProgression.ix[Player,"Year 2"],Top20RankingProgression.ix[Player,"Year 3"],
Top20RankingProgression.ix[Player,"Year 4"],Top20RankingProgression.ix[Player,"Year 5"]])
RankingProgression("Roger Federer")
Explanation: In order to investigate a player's early career, it is essential to identify when that player turned professional. I developed the following function based on the data within the Excel document to facilitate that process. Again, Roger Federer is used as an example.
End of explanation
YearsUntilTop50Comparison = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Avg Years to Reach Top 50 (For Eventual Top 20 Players)':
[Top20RankingProgression[(Top20RankingProgression['Year Turned Pro']<[1995])]
["Years Until Top 50"].mean(),
Top20RankingProgression[(Top20RankingProgression['Year Turned Pro']>=[1995])&
(Top20RankingProgression['Year Turned Pro']<[2000])]["Years Until Top 50"].mean(),
Top20RankingProgression[(Top20RankingProgression['Year Turned Pro']>=[2000])&
(Top20RankingProgression['Year Turned Pro']<[2005])]["Years Until Top 50"].mean(),
Top20RankingProgression[(Top20RankingProgression['Year Turned Pro']>=[2005])]
["Years Until Top 50"].mean()],
'Avg Years to Reach Top 50 (For Eventual Top 10 Players)':
[Top10RankingProgression[(Top10RankingProgression['Year Turned Pro']<[1995])]
["Years Until Top 50"].mean(),
Top10RankingProgression[(Top10RankingProgression['Year Turned Pro']>=[1995])&
(Top10RankingProgression['Year Turned Pro']<[2000])]["Years Until Top 50"].mean(),
Top10RankingProgression[(Top10RankingProgression['Year Turned Pro']>=[2000])&
(Top10RankingProgression['Year Turned Pro']<[2005])]["Years Until Top 50"].mean(),
Top10RankingProgression[(Top10RankingProgression['Year Turned Pro']>=[2005])]
["Years Until Top 50"].mean()]})
YearsUntilTop50Comparison = YearsUntilTop50Comparison.set_index("Year Turned Pro")
YearsUntilTop50ComparisonGraph = YearsUntilTop50Comparison.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Years")
plt.style.use('fivethirtyeight')
plt.legend(loc='lower right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("How Long Does it Take Tennis Stars to \nAchieve a Top 50 Ranking After Turning Pro?",fontsize=(16))
Explanation: As mentioned in the introduction, a primary goal for this project was to compare the stats of young tennis players to the early career stats of players who eventually achieved top 20 or top 10 status. Utilizing data from the Excel document, the following cells include my analysis of the early careers of players who achieved top 20 or top 10 status between the years of 1995 and 2015.
First, I compared how long, on average, it took players who eventually became top 20 or top 10 players to achieve top 50 status upon turning pro. For all analyses below, in addition to comparing the top 20 group to the top 10 group, I also made comparisons based on the year in which tennis players turned pro.
End of explanation
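The comparison block above builds each cohort average by hand with repeated boolean filters. As a sketch of an alternative design (not used in the original analysis), the same cohort means could be produced more compactly with pandas.cut and groupby on the 'Year Turned Pro' and 'Years Until Top 50' columns already present in the data:
# Sketch of a groupby-based alternative for the Top 20 group
bins = [0, 1995, 2000, 2005, 3000]
labels = ['Before 1995', 'Between 1995 and 1999', 'Between 2000 and 2004', 'After 2004']
cohort = pd.cut(Top20RankingProgression['Year Turned Pro'], bins=bins, labels=labels, right=False)
Top20RankingProgression.groupby(cohort)['Years Until Top 50'].mean()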
YearsUntilTop50Comparison.round(2)
Top20RankingProgression.plot(kind="hist",bins=10,y="Years Until Top 50")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Years It Took Top 20 Players from 1995-2015 to Reach Top 50\n\n",fontsize=(12))
plt.xlabel("\nYears",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: As seen above, tennis stars who turned pro before 1995 achieved top 50 status more quickly than players who turned pro after 2004. As expected, players who eventually became top 10 achieved top 50 status more quickly than those players who only became top 20 in the world. The following two graphics outline similar information in table and histogram formats:
End of explanation
Top10WinsComparison = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Top 10 Wins in First 3 Pro Years (For Eventual Top 20 Players)':
[Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']<[1995])]["Total First 3 Years"].mean(),
Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']>=[1995])&
(Top20Top10Wins['Year Turned Pro']<[2000])]["Total First 3 Years"].mean(),
Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']>=[2000])&
(Top20Top10Wins['Year Turned Pro']<[2005])]["Total First 3 Years"].mean(),
Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']>=[2005])]["Total First 3 Years"].mean()],
'Top 10 Wins in First 3 Pro Years (For Eventual Top 10 Players)':
[Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']<[1995])]["Total First 3 Years"].mean(),
Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']>=[1995])&
(Top10Top10Wins['Year Turned Pro']<[2000])]["Total First 3 Years"].mean(),
Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']>=[2000])&
(Top10Top10Wins['Year Turned Pro']<[2005])]["Total First 3 Years"].mean(),
Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']>=[2005])]["Total First 3 Years"].mean()]})
Top10WinsComparison = Top10WinsComparison.set_index("Year Turned Pro")
Top10WinsComparisonGraph = Top10WinsComparison.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Top 10 Victories")
plt.legend(loc='lower left',prop={'size':11}).get_frame().set_edgecolor('black')
plt.title("How Many Top 10 Victories Do Tennis Stars \nHave in First 3 Professional Seasons?",fontsize=(16))
Explanation: Second, I compared how many top 10 wins, on average, tennis stars had within 3 years of turning pro.
End of explanation
Top10WinsComparison.round(2)
Top20Top10Wins.plot(kind="hist",bins=10,y="Total First 3 Years")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Top 10 Victories in First 3 Pro Seasons\n for Eventual Top 20 Players",fontsize=(12))
plt.xlabel("\nTop 10 Victories",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: As seen above, players who turned pro prior to 2000 generally had more top 10 victories within their first 3 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format:
End of explanation
Top10WinsComparison2 = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Top 10 Wins in First 5 Pro Years (For Eventual Top 20 Players)':
[Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']<[1995])]["Total First 5 Years"].mean(),
Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']>=[1995])&
(Top20Top10Wins['Year Turned Pro']<[2000])]["Total First 5 Years"].mean(),
Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']>=[2000])&
(Top20Top10Wins['Year Turned Pro']<[2005])]["Total First 5 Years"].mean(),
Top20Top10Wins[(Top20Top10Wins['Year Turned Pro']>=[2005])]["Total First 5 Years"].mean()],
'Top 10 Wins in First 5 Pro Years (For Eventual Top 10 Players)':
[Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']<[1995])]["Total First 5 Years"].mean(),
Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']>=[1995])&
(Top10Top10Wins['Year Turned Pro']<[2000])]["Total First 5 Years"].mean(),
Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']>=[2000])&
(Top10Top10Wins['Year Turned Pro']<[2005])]["Total First 5 Years"].mean(),
Top10Top10Wins[(Top10Top10Wins['Year Turned Pro']>=[2005])]["Total First 5 Years"].mean()]})
Top10WinsComparison2 = Top10WinsComparison2.set_index("Year Turned Pro")
Top10WinsComparison2Graph = Top10WinsComparison2.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Top 10 Victories")
plt.legend(loc='best',prop={'size':11}).get_frame().set_edgecolor('black')
plt.title("How Many Top 10 Victories Do Tennis Stars \nHave in First 5 Professional Seasons?",fontsize=(16))
Explanation: Third, I compared how many top 10 wins, on average, tennis stars had within 5 years of turning pro.
End of explanation
Top10WinsComparison2.round(2)
Top20Top10Wins.plot(kind="hist",bins=10,y="Total First 5 Years")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Top 10 Victories in First 5 Pro Seasons\n for Eventual Top 20 Players",fontsize=(12))
plt.xlabel("\nTop 10 Victories",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: As seen above, players who turned pro prior to 2000 generally had more top 10 victories within their first 5 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format:
End of explanation
Top50WinsComparison = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Top 50 Wins in First 3 Pro Years (For Eventual Top 20 Players)':
[Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']<[1995])]["Total First 3 Years"].mean(),
Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']>=[1995])&
(Top20Top50Wins['Year Turned Pro']<[2000])]["Total First 3 Years"].mean(),
Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']>=[2000])&
(Top20Top50Wins['Year Turned Pro']<[2005])]["Total First 3 Years"].mean(),
Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']>=[2005])]["Total First 3 Years"].mean()],
'Top 50 Wins in First 3 Pro Years (For Eventual Top 10 Players)':
[Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']<[1995])]["Total First 3 Years"].mean(),
Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']>=[1995])&
(Top10Top50Wins['Year Turned Pro']<[2000])]["Total First 3 Years"].mean(),
Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']>=[2000])&
(Top10Top50Wins['Year Turned Pro']<[2005])]["Total First 3 Years"].mean(),
Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']>=[2005])]["Total First 3 Years"].mean()]})
Top50WinsComparison = Top50WinsComparison.set_index("Year Turned Pro")
Top50WinsComparisonGraph = Top50WinsComparison.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Top 50 Victories")
plt.legend(loc='best',prop={'size':11}).get_frame().set_edgecolor('black')
plt.title("How Many Top 50 Victories Do Tennis Stars \nHave in First 3 Professional Seasons?",fontsize=(16))
Explanation: Fourth, I compared how many top 50 wins, on average, tennis stars had within 3 years of turning pro.
End of explanation
Top50WinsComparison.round(2)
Top20Top50Wins.plot(kind="hist",bins=10,y="Total First 3 Years")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Top 50 Victories in First 5 Pro Seasons\n for Eventual Top 20 Players",fontsize=(12))
plt.xlabel("\nTop 50 Victories",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: As seen above, players who turned pro prior to 2000 generally had more top 50 victories within their first 3 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format:
End of explanation
Top50WinsComparison2 = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Top 50 Wins in First 5 Pro Years (For Eventual Top 20 Players)':
[Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']<[1995])]["Total First 5 Years"].mean(),
Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']>=[1995])&
(Top20Top50Wins['Year Turned Pro']<[2000])]["Total First 5 Years"].mean(),
Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']>=[2000])&
(Top20Top50Wins['Year Turned Pro']<[2005])]["Total First 5 Years"].mean(),
Top20Top50Wins[(Top20Top50Wins['Year Turned Pro']>=[2005])]["Total First 5 Years"].mean()],
'Top 50 Wins in First 5 Pro Years (For Eventual Top 10 Players)':
[Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']<[1995])]["Total First 5 Years"].mean(),
Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']>=[1995])&
(Top10Top50Wins['Year Turned Pro']<[2000])]["Total First 5 Years"].mean(),
Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']>=[2000])&
(Top10Top50Wins['Year Turned Pro']<[2005])]["Total First 5 Years"].mean(),
Top10Top50Wins[(Top10Top50Wins['Year Turned Pro']>=[2005])]["Total First 5 Years"].mean()]})
Top50WinsComparison2 = Top50WinsComparison2.set_index("Year Turned Pro")
Top50WinsComparison2Graph = Top50WinsComparison2.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Top 50 Victories")
plt.legend(loc='best',prop={'size':11}).get_frame().set_edgecolor('black')
plt.title("How Many Top 50 Victories Do Tennis Stars \nHave in First 5 Professional Seasons?",fontsize=(16))
Explanation: Fifth, I compared how many top 50 wins, on average, tennis stars had within 5 years of turning pro.
End of explanation
Top50WinsComparison2.round(2)
Top20Top50Wins.plot(kind="hist",bins=10,y="Total First 5 Years")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Top 50 Victories in First 5 Pro Seasons\n for Eventual Top 20 Players",fontsize=(12))
plt.xlabel("\nTop 50 Victories",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: As seen above, players who turned pro prior to 2000 generally had more top 50 victories within their first 5 professional seasons compared to players turning pro after 2000. Due to players like Andy Murray, who had incredible success early in his career, the trend has started to change over the last decade. The following graphs highlight similar information in table and histogram format:
End of explanation
TournamentWinsComparison = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Tournament Wins in First 3 Pro Years (For Eventual Top 20 Players)':
[Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']<[1995])]["Total First 3 Years"].mean(),
Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']>=[1995])&
(Top20TournamentWins['Year Turned Pro']<[2000])]["Total First 3 Years"].mean(),
Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']>=[2000])&
(Top20TournamentWins['Year Turned Pro']<[2005])]["Total First 3 Years"].mean(),
Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']>=[2005])]["Total First 3 Years"].mean()],
'Tournament Wins in First 3 Pro Years (For Eventual Top 10 Players)':
[Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']<[1995])]["Total First 3 Years"].mean(),
Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']>=[1995])&
(Top10TournamentWins['Year Turned Pro']<[2000])]["Total First 3 Years"].mean(),
Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']>=[2000])&
(Top10TournamentWins['Year Turned Pro']<[2005])]["Total First 3 Years"].mean(),
Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']>=[2005])]["Total First 3 Years"].mean()]},
index=[1, 2, 3, 4])
TournamentWinsComparison = TournamentWinsComparison.set_index("Year Turned Pro")
TournamentWinsComparisonGraph = TournamentWinsComparison.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Tournament Victories")
plt.legend(loc='best',prop={'size':11}).get_frame().set_edgecolor('black')
plt.title("How Many Tournament Victories Do Tennis Stars \nHave in First 3 Professional Seasons?",
fontsize=(16))
Explanation: Sixth, I compared how many tournament wins, on average, tennis stars had within 3 years of turning pro.
End of explanation
TournamentWinsComparison.round(2)
Top20TournamentWins.plot(kind="hist",bins=10,y="Total First 3 Years")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Tournament Victories in First 3 Pro Seasons\n for Eventual Top 20 Players",fontsize=(12))
plt.xlabel("\nTournament Victories",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: As seen above, tennis stars that turned pro before 1995 had much more tournament success early in their careers compared with tennis stars that turned pro after 2000. The following graphs highlight similar information in table and histogram format:
End of explanation
TournamentWinsComparison2 = pd.DataFrame({'Year Turned Pro':['Before 1995','Between 1995 and 1999',
'Between 2000 and 2004','After 2004'],
'Tournament Wins in First 5 Pro Years (For Eventual Top 20 Players)':
[Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']<[1995])]["Total First 5 Years"].mean(),
Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']>=[1995])&
(Top20TournamentWins['Year Turned Pro']<[2000])]["Total First 5 Years"].mean(),
Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']>=[2000])&
(Top20TournamentWins['Year Turned Pro']<[2005])]["Total First 5 Years"].mean(),
Top20TournamentWins[(Top20TournamentWins['Year Turned Pro']>=[2005])]["Total First 5 Years"].mean()],
'Tournament Wins in First 5 Pro Years (For Eventual Top 10 Players)':
[Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']<[1995])]["Total First 5 Years"].mean(),
Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']>=[1995])&
(Top10TournamentWins['Year Turned Pro']<[2000])]["Total First 5 Years"].mean(),
Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']>=[2000])&
(Top10TournamentWins['Year Turned Pro']<[2005])]["Total First 5 Years"].mean(),
Top10TournamentWins[(Top10TournamentWins['Year Turned Pro']>=[2005])]["Total First 5 Years"].mean()]},
index=[1, 2, 3, 4])
TournamentWinsComparison2 = TournamentWinsComparison2.set_index("Year Turned Pro")
TournamentWinsComparison2Graph = TournamentWinsComparison2.plot(kind="line",figsize=(10,5),fontsize=(14))
plt.ylabel("Tournament Victories")
plt.legend(loc='best',prop={'size':11}).get_frame().set_edgecolor('black')
plt.title("How Many Tournament Victories Do Tennis Stars \nHave in First 5 Professional Seasons?",
fontsize=(16))
Explanation: Lastly, I compared how many tournament wins, on average, tennis stars had within 5 years of turning pro.
End of explanation
TournamentWinsComparison2.round(2)
Top20TournamentWins.plot(kind="hist",bins=10,y="Total First 5 Years")
plt.legend(loc='upper right',prop={'size':12}).get_frame().set_edgecolor('black')
plt.title("Number of Tournament Victories in First 5 Pro Seasons\n for Eventual Top 20 Players",fontsize=(12))
plt.xlabel("\nTournament Victories",fontsize=(12))
plt.ylabel("Frequency\n",fontsize=(12))
Explanation: Once again, while there has been a resurgence more recently, tennis stars who turned pro before 1995 generally had more early success in tournaments compared with tennis stars who turned pro after 2000. The following graphs highlight similar information in table and histogram format:
End of explanation
TennisSummary = [YearsUntilTop50Comparison["Avg Years to Reach Top 50 (For Eventual Top 20 Players)"],
Top10WinsComparison["Top 10 Wins in First 3 Pro Years (For Eventual Top 20 Players)"],
Top10WinsComparison2["Top 10 Wins in First 5 Pro Years (For Eventual Top 20 Players)"],
Top50WinsComparison["Top 50 Wins in First 3 Pro Years (For Eventual Top 20 Players)"],
Top50WinsComparison2["Top 50 Wins in First 5 Pro Years (For Eventual Top 20 Players)"],
TournamentWinsComparison["Tournament Wins in First 3 Pro Years (For Eventual Top 20 Players)"],
TournamentWinsComparison2["Tournament Wins in First 5 Pro Years (For Eventual Top 20 Players)"],
YearsUntilTop50Comparison["Avg Years to Reach Top 50 (For Eventual Top 10 Players)"],
Top10WinsComparison["Top 10 Wins in First 3 Pro Years (For Eventual Top 10 Players)"],
Top10WinsComparison2["Top 10 Wins in First 5 Pro Years (For Eventual Top 10 Players)"],
Top50WinsComparison["Top 50 Wins in First 3 Pro Years (For Eventual Top 10 Players)"],
Top50WinsComparison2["Top 50 Wins in First 5 Pro Years (For Eventual Top 10 Players)"],
TournamentWinsComparison["Tournament Wins in First 3 Pro Years (For Eventual Top 10 Players)"],
TournamentWinsComparison2["Tournament Wins in First 5 Pro Years (For Eventual Top 10 Players)"]]
result = pd.concat(TennisSummary,axis=1)
columns = [('All Top 20 Players from 1995 to 2015','All Top 10 Players from 1995 to 2015'),
("Avg Years to Reach Top 50",
"Top 10 Wins First 3 Pro Years",
"Top 10 Wins First 5 Pro Years",
"Top 50 Wins First 3 Pro Years",
"Top 50 Wins First 5 Pro Years",
"Tournament Wins First 3 Pro Years",
"Tournament Wins First 5 Pro Years")]
result.columns=pd.MultiIndex.from_product(columns)
result.round(2)
Explanation: Summary
The following table summarizes the data from the above graphics. These numbers serve as the foundation for the ultimate analysis of young players' early careers. As we saw above, with the exception of outliers like Andy Murray, tennis stars in recent years have taken a longer time to reach top 50 status and have had fewer top 10, top 50, and tournament victories in the first few years of their careers. Since the early 2000s, tennis has largely been dominated by four players (Roger Federer, Rafael Nadal, Andy Murray, and Novak Djokovic), and this is likely the primary explanation for the increased difficulty within men's professional tennis. With such dominant players at the top, it has been challenging for younger players to break into the top 10 or to win tournaments at a young age.
End of explanation
def TourneyWinsFirst3ProYears(player):
return (YearTourneyWins(player,YearTurnedPro(player)) + YearTourneyWins(player,YearTurnedPro(player)+1) +
YearTourneyWins(player,YearTurnedPro(player)+2))
TourneyWinsFirst3ProYears("Roger Federer")
Explanation: As the final piece of the analysis, I developed a function to easily compare the average statistics from the early careers of the top 20 and top 10 players between 1995 and 2015 with the stats of any selected tennis player. The following functions build on the functions defined earlier in this report and make it easy to summarize tennis stats for any player.
TourneyWinsFirst3ProYears: Outputs number of tournament victories in first 3 professional seasons for any selected tennis player
End of explanation
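The six First3/First5 helpers defined in the next cells all repeat the same pattern: sum a per-season statistic starting from the year a player turned pro. A more general helper could express that pattern once; this is only a sketch of an alternative, relying on the YearTurnedPro and per-year functions defined above:
def StatFirstNProYears(stat_func, player, n):
    # Sum any per-season statistic (e.g. YearTourneyWins, YearTopTen) over the
    # first n professional seasons of the given player
    start = YearTurnedPro(player)
    return sum(stat_func(player, start + i) for i in range(n))
StatFirstNProYears(YearTourneyWins, "Roger Federer", 3)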
def TourneyWinsFirst5ProYears(player):
return (YearTourneyWins(player,YearTurnedPro(player)) + YearTourneyWins(player,YearTurnedPro(player)+1) +
YearTourneyWins(player,YearTurnedPro(player)+2) + YearTourneyWins(player,YearTurnedPro(player)+3) +
YearTourneyWins(player,YearTurnedPro(player)+4))
TourneyWinsFirst5ProYears("Roger Federer")
Explanation: TourneyWinsFirst5ProYears: Outputs number of tournament victories in first 5 professional seasons for any selected tennis player
End of explanation
def Top10WinsFirst3ProYears(player):
return (YearTopTen(player,YearTurnedPro(player)) + YearTopTen(player,YearTurnedPro(player)+1) +
YearTopTen(player,YearTurnedPro(player)+2))
Top10WinsFirst3ProYears("Roger Federer")
Explanation: Top10WinsFirst3ProYears: Outputs number of top 10 victories in first 3 professional seasons for any selected tennis player
End of explanation
def Top10WinsFirst5ProYears(player):
return (YearTopTen(player,YearTurnedPro(player)) + YearTopTen(player,YearTurnedPro(player)+1) +
YearTopTen(player,YearTurnedPro(player)+2) + YearTopTen(player,YearTurnedPro(player)+3) +
YearTopTen(player,YearTurnedPro(player)+4))
Top10WinsFirst5ProYears("Roger Federer")
Explanation: Top10WinsFirst5ProYears: Outputs number of top ten victories in first 5 professional seasons for any selected tennis player
End of explanation
def Top50WinsFirst3ProYears(player):
return (YearTopFifty(player,YearTurnedPro(player)) + YearTopFifty(player,YearTurnedPro(player)+1) +
YearTopFifty(player,YearTurnedPro(player)+2))
Top50WinsFirst3ProYears("Roger Federer")
Explanation: Top50WinsFirst3ProYears: Outputs number of top 50 victories in first 3 professional seasons for any selected tennis player
End of explanation
def Top50WinsFirst5ProYears(player):
return (YearTopFifty(player,YearTurnedPro(player)) + YearTopFifty(player,YearTurnedPro(player)+1) +
YearTopFifty(player,YearTurnedPro(player)+2) + YearTopFifty(player,YearTurnedPro(player)+3) +
YearTopFifty(player,YearTurnedPro(player)+4))
Top50WinsFirst5ProYears("Roger Federer")
Explanation: Top50WinsFirst5ProYears: Outputs number of top 50 victories in first 5 professional seasons for any selected tennis player
End of explanation
def YearsUntilTop50(Player):
if type(AllPlayerSummary.ix[Player,"Years Until Top 50"]) == float:
return int(AllPlayerSummary.ix[Player,"Years Until Top 50"])
else:
return AllPlayerSummary.ix[Player,"Years Until Top 50"]
YearsUntilTop50("Roger Federer")
Explanation: YearsUntilTop50: Identifies how long it took a player to achieve top 50 status
End of explanation
def WillHeBeGreat(Player):
Will = pd.DataFrame({'Player':[Player,"Average for Top 20 Players from 1995 to 2015",
"Average for Top 10 Players from 1995 to 2015"],
'Pro Years Until Top 50 Ranking':[YearsUntilTop50(Player),
Top20RankingProgression["Years Until Top 50"].mean(),
Top10RankingProgression["Years Until Top 50"].mean()],
'Top 10 Wins First 3 Pro Years':[Top10WinsFirst3ProYears(Player),
Top20Top10Wins["Total First 3 Years"].mean(),
Top10Top10Wins["Total First 3 Years"].mean()],
'Top 10 Wins First 5 Pro Years':[Top10WinsFirst5ProYears(Player),
Top20Top10Wins["Total First 5 Years"].mean(),
Top10Top10Wins["Total First 5 Years"].mean()],
'Top 50 Wins First 3 Pro Years':[Top50WinsFirst3ProYears(Player),
Top20Top50Wins["Total First 3 Years"].mean(),
Top10Top50Wins["Total First 3 Years"].mean()],
'Top 50 Wins First 5 Pro Years':[Top50WinsFirst5ProYears(Player),
Top20Top50Wins["Total First 5 Years"].mean(),
Top10Top50Wins["Total First 5 Years"].mean()],
'Tournament Wins First 3 Pro Years':[TourneyWinsFirst3ProYears(Player),
Top20TournamentWins["Total First 3 Years"].mean(),
Top10TournamentWins["Total First 3 Years"].mean()],
'Tournament Wins First 5 Pro Years':[TourneyWinsFirst5ProYears(Player),
Top20TournamentWins["Total First 5 Years"].mean(),
Top10TournamentWins["Total First 5 Years"].mean()],
'Year Turned Pro':[YearTurnedPro(Player),"",""]})
ColumnOrder = ['Year Turned Pro','Pro Years Until Top 50 Ranking','Top 10 Wins First 3 Pro Years',
'Top 10 Wins First 5 Pro Years','Top 50 Wins First 3 Pro Years',
'Top 50 Wins First 5 Pro Years','Tournament Wins First 3 Pro Years',
'Tournament Wins First 5 Pro Years']
Will = Will.set_index("Player")
Will = Will[ColumnOrder]
Will = Will.round(2)
return Will
WillHeBeGreat("Andy Murray")
Explanation: The final function summarizes all the data seen thus far in this report in a single table. We can now compare any tennis player to the average statistics of players who eventually became top 10 or top 20 players in the world. For young tennis players, this could be used to gauge success early in their careers.
End of explanation
def WillHeBeGreatGraph(Player):
WillHeBeGreat(Player).plot(y=["Pro Years Until Top 50 Ranking",
"Top 10 Wins First 3 Pro Years",
"Top 10 Wins First 5 Pro Years",
"Tournament Wins First 3 Pro Years",
"Tournament Wins First 5 Pro Years"],kind="barh",figsize=(10,5))
plt.style.use("ggplot")
plt.ylabel("")
plt.xlabel("Years or # of Victories")
plt.title("Early Career of Selected Player vs. \nEarly Careers of Eventual Top 20 or Top 10 Players")
WillHeBeGreatGraph("Andy Murray")
Explanation: Based on the information above, Andy Murray had an incredible start to his career. For example, it took him only 2 years to achieve a top 50 ranking, and he had 35 victories against top 10 opponents in his first 5 professional seasons (compared to an average of 10 for players that eventually became top 10 in the world). Looking at this table, it is not surprising that Andy Murray eventually became a top 5 player in the world with multiple grand slam championship titles. The following function allows us to see the same information in a more visual format:
End of explanation
WillHeBeGreatGraph("Steve Johnson")
Explanation: As another example, we can look at the career of Steve Johnson. Steve Johnson is a perennial top 100 player in the world, but has struggled to make a significant impact on the ATP tour. Looking at his early career, it is not surprising that he has had difficulty becoming a top 20 player.
End of explanation
WillHeBeGreatGraph("Roger Federer")
Explanation: Finally, we can look at the early statistics for Roger Federer, one of the greatest men's tennis players of all time. As the graph illustrates, his play in the first few years of his career, particularly with respect to victories against top 10 opponents, made it clear that he would become a phenomenal tennis player someday.
End of explanation |
4,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Before beginning this exercise you must download some data files, which can be retrieved from here
Step1: We have provided three images containing stars, taken with 3 different CCDs, in "stars_X.npz" where X = 0, 1, 2.
The true magnitudes of all the stars are also known, taken from a reliable reference catalogue.
OK, I'll be honest; these are simulations. The stars' centers are all at centres of pixels
to make your lives easier; there is no spatial structure in the PSF; and there is also no noise added of any kind. You may buy me a drink later.
To read the image data and the calibs say something like
Step2: Just to make sure that we're all on the same page, here's code to display the image and stars (imshow is a simple utility imported from rhlUtils -- feel free to use plt.imshow if you'd rather)
Step4: Time for you to do some work. Write some code to estimate a PSF model by simply averaging all the objects, giving each equal weight. You should add the option to use only some subset of the stars (e.g. the faintest 25%).
Your model should simply be an image of the PSF, normalised to have a maximum value of 1.0
You could use the calibs object to find all the stars, but in the real world there are stars in the data that are
not in the catalogue so you'll have to write a simple object finder (i.e. don't use the calibs!). It's sufficient to find pixels that are larger than any neighbours to the left, right, top, or bottom; no, this isn't quite how you'd do it in the real world but it's not far off.
I told you that there were three data sets, taken in different places on the sky and with CCDs with different properties. Choose one set (e.g. "stars_0.npz") to carry out the following activities, then go back and look at the other two.
Here's my version of code do estimate the PSF. I wrote my own naive object finder for DSFP; in reality I'd call the one in the LSST stack (which has heritage stretching back to SDSS via PanSTARRS).
Step5: OK, now use your PSF model to create an image of the residuals created by subtracting the scaled PSF from the stars.
How does it look? Do you see what's going on? Remember, I said that the PSF isn't a function of position, but CCDs aren't perfect.
Step6: A powerful diagnostic is to measure the fluxes of the objects in different ways, and compare with the truth (i.e. the
magnitudes given in the calibs data structure).
For the purpose of this exercise you should use
Step7: Now we can make plots for all three images. I only use the faintest 5% of the stars to estimate the PSF,
you'll see why in a moment. | Python Code:
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
import os
import numpy as np
import matplotlib.pyplot as plt
from rhlUtils import BBox, CCD, Image, imshow
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
#%matplotlib qt
#%gui qt
dataDir = # complete
mag0 = 33 # Magnitude of an object with 1 detected photon
Explanation: Before beginning this exercise you must download some data files, which can be retrieved from here:
https://northwestern.box.com/s/rsb5wjb7dztg2128flzgex4fmq19havu
Be sure to move the corresponding files to the directory where you are running this notebook. Furthermore, you will need to provide the path to these data as the variable dataDir, below.
End of explanation
data = np.load(os.path.join(dataDir, "stars_0.npz"))
image, calibs = data["image"], data["calibs"]
image0, calibs0 = image.copy(), calibs # Keep copies
Explanation: We have provided three images containing stars, taken with 3 different CCDs, in "stars_X.npz" where X = 0, 1, 2.
The true magnitudes of all the stars are also known, taken from a reliable reference catalogue.
OK, I'll be honest; these are simulations. The stars' centers are all at centres of pixels
to make your lives easier; there is no spatial structure in the PSF; and there is also no noise added of any kind. You may buy me a drink later.
To read the image data and the calibs say something like:
End of explanation
plt.figure(1)
plt.clf()
imshow(image, vmin=0, vmax=1000)
plt.title("Data")
plt.plot(calibs[:, 0], calibs[:, 1], '+') # calibs[:, 2] contains the object's magnitude (not flux)
plt.show()
Explanation: Just to make sure that we're all on the same page, here's code to display the image and stars (imshow is a simple utility imported from rhlUtils -- feel free to use plt.imshow if you'd rather)
End of explanation
def makePsfModel(image, pmin=0, pmax=100):
    """Return an image which is a model of the PSF
    Only use stars which lie within the pmin and pmax percentiles (inclusive)
    """
yc, xc = np.where(np.logical_and.reduce([image > np.roll(image.copy(), (1, 0), (0, 1)),
image > np.roll(image.copy(), (-1, 0), (0, 1)),
image > np.roll(image.copy(), (0, 1), (0, 1)),
image > np.roll(image.copy(), (0, -1), (0, 1)),
]))
I0 = image[yc, xc]
psfSize = 15
psfIm = np.zeros((psfSize, psfSize))
for x, y, I in zip(xc, yc, I0):
if I >= np.percentile(I0, [pmin]) and I <= np.percentile(I0, [pmax]):
dpsf = image[y - psfSize//2:y + psfSize//2 + 1, x - psfSize//2:x + psfSize//2 + 1].copy()
dpsf /= dpsf.max()
psfIm += dpsf
psfIm /= psfIm.max()
return psfIm, xc, yc
psfIm, xc, yc = makePsfModel(image)
imshow(psfIm, vmin=0, vmax=1.1)
plt.title("PSF model");
Explanation: Time for you to do some work. Write some code to estimate a PSF model by simply averaging all the objects, giving each equal weight. You should add the option to use only some subset of the stars (e.g. the faintest 25%).
Your model should simply be an image of the PSF, normalised to have a maximum value of 1.0
You could use the calibs object to find all the stars, but in the real world there are stars in the data that are
not in the catalogue so you'll have to write a simple object finder (i.e. don't use the calibs!). It's sufficient to find pixels that are larger than any neighbours to the left, right, top, or bottom; no, this isn't quite how you'd do it in the real world but it's not far off.
I told you that there were three data sets, taken in different places on the sky and with CCDs with different properties. Choose one set (e.g. "stars_0.npz") to carry out the following activities, then go back and look at the other two.
Here's my version of code do estimate the PSF. I wrote my own naive object finder for DSFP; in reality I'd call the one in the LSST stack (which has heritage stretching back to SDSS via PanSTARRS).
End of explanation
image = image0.copy()
psfSize = psfIm.shape[0]
for x, y in zip(xc, yc):
sub = image[y - psfSize//2:y + psfSize//2 + 1, x - psfSize//2:x + psfSize//2 + 1]
sub -= psfIm*sub.sum()/psfIm.sum()
imshow(image, vmin=image.min(), vmax=image.max()) # , vmin=0, vmax=100)
plt.title("Residuals");
Explanation: OK, now use your PSF model to create an image of the residuals created by subtracting the scaled PSF from the stars.
How does it look? Do you see what's going on? Remember, I said that the PSF isn't a function of position, but CCDs aren't perfect.
End of explanation
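One way to put a number on what the residual image is hinting at is to compare each star's summed absolute residual to its flux and see whether the mismatch grows with brightness. This is just a sketch of a possible diagnostic, reusing the image0, xc, yc, psfIm and psfSize variables defined above:
fluxes, residFrac = [], []
for x, y in zip(xc, yc):
    sub0 = image0[y - psfSize//2:y + psfSize//2 + 1, x - psfSize//2:x + psfSize//2 + 1]
    model = psfIm*sub0.sum()/psfIm.sum()   # same scaling as the subtraction above
    fluxes.append(sub0.sum())
    residFrac.append(np.abs(sub0 - model).sum()/sub0.sum())
plt.semilogx(fluxes, residFrac, 'o')
plt.xlabel("aperture flux")
plt.ylabel("|residual| / flux")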
def plotMags(image, calibs, yLabels=True):
apFlux = np.empty_like(xc, dtype='float')
calibMag = np.empty_like(apFlux)
psfFlux = np.empty_like(apFlux)
for i, (x, y) in enumerate(zip(xc, yc)):
sub = image[y - psfSize//2:y + psfSize//2 + 1, x - psfSize//2:x + psfSize//2 + 1]
apFlux[i] = sub.sum()
psfFlux[i] = psfIm.sum()*np.sum(psfIm*sub)/np.sum(psfIm**2)
delta = np.hypot(x - calibs[:, 0], y - calibs[:, 1])
iDelta = np.argmin(delta)
calibMag[i] = calibs[iDelta, 2]
apMag = mag0 - 2.5*np.log10(apFlux)
psfMag = mag0 - 2.5*np.log10(psfFlux)
plt.plot(apMag, psfMag - apMag, 'o', label="psf")
plt.plot(apMag, calibMag - apMag, 'o', label="ap")
plt.ylim(-0.11, 0.11)
if yLabels:
plt.ylabel('xxMag - apMag');
else:
plt.gca().set_yticklabels([])
plt.xlabel('apMag')
plotMags(image0, calibs0)
plt.legend();
Explanation: A powerful diagnostic is to measure the fluxes of the objects in different ways, and compare with the truth (i.e. the
magnitudes given in the calibs data structure).
For the purpose of this exercise you should use:
- aperture fluxes (the sum of all the pixels in a square centred on the star is good enough)
- psf fluxes (the flux in the best-fit PSF model, $\phi$. The amplitude is simply $A = \frac{\sum \phi I}{\sum \phi^2}$, and the flux is $A \sum \phi$)
Convert them to magnitudes using the known zero-point mag0 (the magnitude corresponding to one count), and make suitable plots.
What do you think is going on? Are there other measurements that you can make on the data to test your hypotheses? Are there other observations that you'd like (I might be able to make them for you).
End of explanation
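For completeness, the amplitude quoted above is just the linear least-squares solution for fitting the scaled PSF to the data: minimising $\sum \left(I - A\phi\right)^2$ with respect to $A$ gives
$\frac{d}{dA}\sum \left(I - A\phi\right)^2 = -2\sum \phi\left(I - A\phi\right) = 0 \quad\Rightarrow\quad A = \frac{\sum \phi I}{\sum \phi^2},$
which is what the psfFlux line in plotMags implements (multiplied by $\sum \phi$ to turn the amplitude into a flux).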
for i in range(3):
data = np.load(os.path.join(dataDir, "stars_%d.npz" % i))
image, calibs = data["image"], data["calibs"]
psfIm = makePsfModel(image.copy(), 0, 5)[0]
plt.subplot(1, 3, i+1)
plotMags(image.copy(), calibs, i==0)
plt.legend()
plt.title("stars_%d" % (i))
Explanation: Now we can make plots for all three images. I only use the faintest 5% of the stars to estimate the PSF,
you'll see why in a moment.
End of explanation |
4,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dogs vs Cats using VGG16
Author
Step1: Custom Packages
Step2: Declaring paths & global parameters
The path to the dataset is defined here. It will point to the sample folder which contains lesser number of images for quick and iterative training on the local machine. For the final training, on the cloud we must change the path to the one commented out below.
Step3: The default batchsize for training and validation purposes
Step4: Data Exploration
Instantiating the VGG16 class which implements the required utility methods
Step5: Getting the training and validation batches
Step6: Visualizing the images, only if we are exploring the samples
Step7: Finetuning
Step8: Model Testing
Due to the quirkiness of the ImageDataGenerator.flow_from_directory() used by vgg.get_batches(), we have to make a sub-directory under the test directory named 'subdir_for_keras_ImageDataGenerator'.
Step9: Keras ImageDataGenerator does not return the filenames and loads them in the same order as os.listdir() returns. Here, we extract the filenames which will serve as the indexes.
Step10: With the class_mode set to None, it will return only the batch of images without labels
Step11: Manually verifying the predicitons
Step12: We confirm the predictions manually.
Here, we make the predictions using our trained model
Step13: Results
Preparing to save the predictions as submissions to the Kaggle competetion
Step14: Save the trained model
Step15: Saving the array as a CSV
Step16: Improving the Score by using Probabilities
Loading the values predicted by the model
Step17: Visualising the distribution of probabilities
Step18: Since the kaggle competetion evaluates results based on log loss, it heavily penalises values which are 1 or 0. So, we manually modify the 1 and 0 to read 0.95 and 0.05.
Step19: Now, we will prepare the list for submission
Step20: Saving the new submission file | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Dogs vs Cats using VGG16
Author : Aman Hussain
Email : [email protected]
Description : Classifying images of dogs and cats by finetuning the VGG16 model
Import Libraries
Scientific Computing Stack
End of explanation
import os, json
from helper import utils
from helper.utils import plots
from helper import vgg16
from helper.vgg16 import Vgg16
Explanation: Custom Packages
End of explanation
# path = '../data/dogscats/sample/'
path = '../data/dogscats/'
Explanation: Declaring paths & global parameters
The path to the dataset is defined here. It will point to the sample folder, which contains a smaller number of images for quick and iterative training on the local machine. For the final training on the cloud, we must change the path to the one commented out below.
End of explanation
batchsize = 64
Explanation: The default batchsize for training and validation purposes
End of explanation
vgg = Vgg16()
Explanation: Data Exploration
Instantiating the VGG16 class which implements the required utility methods
End of explanation
batches = vgg.get_batches(path+'train', batch_size=batchsize)
val_batches = vgg.get_batches(path+'valid', batch_size=batchsize)
Explanation: Getting the training and validation batches
End of explanation
imgs, labels = next(batches)
val_imgs, val_labels = next(val_batches)
labels = ['dog' if i[0]==0 else 'cat' for i in labels]
val_labels = ['dog' if i[0]==0 else 'cat' for i in val_labels]
plots(val_imgs[:5], figsize=(20,10), titles=val_labels)
Explanation: Visualizing the images, only if we are exploring the samples
End of explanation
vgg.finetune(batches)
%%time
vgg.fit(batches, val_batches, nb_epoch=3)
Explanation: Finetuning
End of explanation
batch_size = len(os.listdir(path+'test'+'/subdir_for_keras_ImageDataGenerator'))
Explanation: Model Testing
Due to the quirkiness of the ImageDataGenerator.flow_from_directory() used by vgg.get_batches(), we have to make a sub-directory under the test directory named 'subdir_for_keras_ImageDataGenerator'.
End of explanation
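The notebook assumes that sub-directory already exists. Here is a minimal sketch of how the raw test images could be moved into it, assuming they initially sit directly under path + 'test/':
import os, shutil
from glob import glob
test_dir = path + 'test/'
sub_dir = test_dir + 'subdir_for_keras_ImageDataGenerator/'
if not os.path.exists(sub_dir):
    os.makedirs(sub_dir)
# Move every jpg still sitting at the top level of the test directory
for fname in glob(test_dir + '*.jpg'):
    shutil.move(fname, sub_dir)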
img_index = os.listdir(path+'test'+'/subdir_for_keras_ImageDataGenerator')
img_index = [os.path.splitext(file)[0] for file in img_index]
Explanation: Keras' ImageDataGenerator does not return the filenames; it loads the images in the same order that os.listdir() returns them. Here, we extract the filenames, which will serve as the indexes.
End of explanation
testbatch = vgg.get_batches(path+'test', shuffle=False, batch_size=batch_size, class_mode=None)
test_imgs = next(testbatch)
Explanation: With the class_mode set to None, it will return only the batch of images without labels
End of explanation
plots(test_imgs[:5])
probab, prediction, prediction_labels = vgg.predict(test_imgs[:5], details = True)
print(prediction_labels, probab, prediction)
img_index[:5]
Explanation: Manually verifying the predictions
End of explanation
%%time
probab, prediction, prediction_labels = vgg.predict(test_imgs, details = True)
Explanation: We confirm the predictions manually.
Here, we make the predictions using our trained model
End of explanation
np.save(path+'submissions/index', img_index)
np.save(path+'submissions/probab', probab)
np.save(path+'submissions/prediction', prediction)
np.save(path+'submissions/prediction_labels', prediction_labels)
Explanation: Results
Preparing to save the predictions as submissions to the Kaggle competition
End of explanation
vgg.model.save("../models/vgg_dogsVScats.h5")
for index, predicted in enumerate(prediction):
    # When a cat (class 0) is predicted, take the complementary value so that
    # probab holds the probability of the image being a dog
    if predicted == 0:
        probab[index] = 1 - probab[index]
img_index.insert(0, 'id')
labels_pred = [str(label) for label in prediction]
labels_pred.insert(0, 'label')
labels_prob = [str(label) for label in probab]
labels_prob.insert(0, 'label')
submission_array_pred = np.vstack((img_index, labels_pred)).T.astype('str')
submission_array_prob = np.vstack((img_index, labels_prob)).T.astype('str')
Explanation: Save the trained model
End of explanation
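The saved file can later be restored without re-running the finetuning. A quick sketch, assuming the same Keras version used for training (note that load_model returns the underlying Keras model, not the Vgg16 wrapper):
from keras.models import load_model
trained_model = load_model("../models/vgg_dogsVScats.h5")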
np.savetxt(path+'submissions/submission_pred.csv', submission_array_pred, delimiter=",", fmt='%1s')
np.savetxt(path+'submissions/submission_prob.csv', submission_array_prob, delimiter=",", fmt='%1s')
Explanation: Saving the array as a CSV
End of explanation
img_index = np.load(path+'submissions/index.npy')
probab = np.load(path+'submissions/probab.npy')
prediction = np.load(path+'submissions/prediction.npy')
Explanation: Improving the Score by using Probabilities
Loading the values predicted by the model
End of explanation
plt.hist(probab)
Explanation: Visualising the distribution of probabilities
End of explanation
np.unique(prediction)
prediction = prediction.astype(float)
prediction[prediction == 1] = 0.95
prediction[prediction == 0] = 0.05
np.unique(prediction)
Explanation: Since the Kaggle competition evaluates results based on log loss, it heavily penalises values which are 1 or 0. So, we manually modify the 1 and 0 to read 0.95 and 0.05.
End of explanation
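To see why the clipping above helps, here is a small numeric sketch (not part of the original submission code) of the per-image log loss for a confident-but-wrong prediction before and after clipping:
eps = 1e-15   # guard against log(0)
for p in [1.0, 0.95]:
    # true label is cat (0) but we submitted probability p of the image being a dog
    loss = -np.log(np.clip(1 - p, eps, 1))
    print(p, loss)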
img_index = img_index.tolist()
img_index.insert(0, 'id')
labels_pred = [str(label) for label in prediction]
labels_pred.insert(0, 'label')
submission_array_pred = np.vstack((img_index, labels_pred)).T.astype('str')
Explanation: Now, we will prepare the list for submission
End of explanation
np.savetxt(path+'submissions/submission_pred_mod.csv', submission_array_pred, delimiter=",", fmt='%1s')
Explanation: Saving the new submission file
End of explanation |
4,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: pomegranate and parallelization
pomegranate supports parallelization through a set of built in functions based off of joblib. All computationally intensive functions in pomegranate are implemented in cython with the global interpreter lock (GIL) released, allowing for multithreading to be used for efficient parallel processing. The following functions can be called for parallelization
Step2: 1. General Mixture Models
pomegranate has a very efficient implementation of mixture models, particularly Gaussian mixture models. Lets take a look at how fast pomegranate is versus sklearn, and then see how much faster parallelization can get it to be.
Step3: It looks like on a large dataset not only is pomegranate faster than sklearn at performing 15 iterations of EM on 3 million 5 dimensional datapoints with 3 clusters, but the parallelization is able to help in speeding things up.
Lets now take a look at the time it takes to make predictions using GMMs. Lets fit the model to a small amount of data, and then predict a larger amount of data drawn from the same underlying distributions.
Step4: It looks like pomegranate can be slightly slower than sklearn when using a single processor, but that it can be parallelized to get faster performance. At the same time, predictions at this level happen so quickly (millions per second) that this may not be the most reliable test for parallelization.
To ensure that we're getting the exact same results just faster, lets subtract the predictions from each other and make sure that the sum is equal to 0.
Step5: Great, no difference between the two.
Lets now make sure that pomegranate and sklearn are learning basically the same thing. Lets fit both models to some 2 dimensional 2 component data and make sure that they both extract the underlying clusters by plotting them.
Step6: It looks like we're getting the same basic results for the two. The two algorithms are initialized a bit differently, and so it can be difficult to directly compare the results between them, but it looks like they're getting roughly the same results.
3. Multivariate Gaussian HMM
Now let's move on to training a hidden Markov model with multivariate Gaussian emissions with a diagonal covariance matrix. We'll randomly generate some Gaussian distributed numbers and use pomegranate with either one or four threads to fit our model to the data.
Step7: All we had to do was pass in the n_jobs parameter to the fit function in order to get a speed improvement. It looks like we're getting a really good speed improvement, as well! This is mostly because the HMM algorithms perform a lot more operations than the other models, and so spend the vast majority of time with the GIL released. You may not notice as strong speedups when using a MultivariateGaussianDistribution because BLAS uses multithreaded operations already internally, even when only one job is specified.
Now lets look at the prediction function to make sure the we're getting speedups there as well. You'll have to use a wrapper function to parallelize the predictions for a HMM because it returns an annotated sequence rather than a single value like a classic machine learning model might.
Step8: Great, we're getting a really good speedup on that as well! Looks like the parallel processing is more efficient with a bigger, more complex model, than with a simple one. This can make sense, because all inference/training is more complex, and so there is more time with the GIL released compared to with the simpler operations.
4. Mixture of Hidden Markov Models
Let's stack another layer onto this model by making it a mixture of these hidden Markov models, instead of a single one. At this point we're sticking a multivariate Gaussian HMM into a mixture and we're going to train this big thing in parallel.
Step9: Looks like we're getting a really nice speed improvement when training this complex model. Let's take a look now at the time it takes to do inference with it. | Python Code:
%pylab inline
from sklearn.mixture import GaussianMixture
from pomegranate import *
import seaborn, time
seaborn.set_style('whitegrid')
def create_dataset(n_samples, n_dim, n_classes, alpha=1):
    """Create a random dataset with n_samples in each class."""
X = numpy.concatenate([numpy.random.normal(i*alpha, 1, size=(n_samples, n_dim)) for i in range(n_classes)])
y = numpy.concatenate([numpy.zeros(n_samples) + i for i in range(n_classes)])
idx = numpy.arange(X.shape[0])
numpy.random.shuffle(idx)
return X[idx], y[idx]
Explanation: pomegranate and parallelization
pomegranate supports parallelization through a set of built-in functions based on joblib. All computationally intensive functions in pomegranate are implemented in Cython with the global interpreter lock (GIL) released, allowing multithreading to be used for efficient parallel processing. The following functions can be called for parallelization:
fit
summarize
predict
predict_proba
predict_log_proba
log_probability
probability
These functions can all be simply parallelized by passing in n_jobs=X to the method calls. This tutorial will demonstrate how to use those calls. First we'll look at a simple multivariate Gaussian mixture model, and compare its performance to sklearn. Then we'll look at a hidden Markov model with Gaussian emissions, and lastly we'll look at a mixture of Gaussian HMMs. These can all utilize the build-in parallelization that pomegranate has.
Let's dive right in!
End of explanation
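As a quick, self-contained illustration of that pattern (a sketch, not part of the original tutorial), the same keyword is all that changes between a serial and a parallel call to one of the listed methods:
from pomegranate import NormalDistribution, GeneralMixtureModel
import numpy
Xs = numpy.concatenate([numpy.random.normal(0, 1, (5000, 1)), numpy.random.normal(5, 1, (5000, 1))])
m = GeneralMixtureModel.from_samples(NormalDistribution, 2, Xs)
logp_serial = m.log_probability(Xs)
logp_parallel = m.log_probability(Xs, n_jobs=4)   # same values, computed with 4 workers
assert numpy.allclose(logp_serial, logp_parallel)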
n, d, k = 1000000, 5, 3
X, y = create_dataset(n, d, k)
print "sklearn GMM"
%timeit GaussianMixture(n_components=k, covariance_type='full', max_iter=15, tol=1e-10).fit(X)
print
print "pomegranate GMM"
%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=15, stop_threshold=1e-10)
print
print "pomegranate GMM (4 jobs)"
%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, n_jobs=4, max_iterations=15, stop_threshold=1e-10)
Explanation: 1. General Mixture Models
pomegranate has a very efficient implementation of mixture models, particularly Gaussian mixture models. Let's take a look at how fast pomegranate is versus sklearn, and then see how much faster parallelization can get it to be.
End of explanation
d, k = 25, 2
X, y = create_dataset(1000, d, k)
a = GaussianMixture(k, n_init=1, max_iter=25).fit(X)
b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=25)
del X, y
n = 1000000
X, y = create_dataset(n, d, k)
print "sklearn GMM"
%timeit -n 1 a.predict_proba(X)
print
print "pomegranate GMM"
%timeit -n 1 b.predict_proba(X)
print
print "pomegranate GMM (4 jobs)"
%timeit -n 1 b.predict_proba(X, n_jobs=4)
Explanation: It looks like on a large dataset not only is pomegranate faster than sklearn at performing 15 iterations of EM on 3 million 5 dimensional datapoints with 3 clusters, but the parallelization is able to help in speeding things up.
Let's now take a look at the time it takes to make predictions using GMMs. Let's fit the model to a small amount of data, and then predict a larger amount of data drawn from the same underlying distributions.
End of explanation
print (b.predict_proba(X) - b.predict_proba(X, n_jobs=4)).sum()
Explanation: It looks like pomegranate can be slightly slower than sklearn when using a single processor, but that it can be parallelized to get faster performance. At the same time, predictions at this level happen so quickly (millions per second) that this may not be the most reliable test for parallelization.
To ensure that we're getting the exact same results just faster, lets subtract the predictions from each other and make sure that the sum is equal to 0.
End of explanation
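A slightly more robust way to verify that the serial and parallel prediction paths agree (an added aside, not in the original tutorial) is an element-wise tolerance check, since positive and negative differences could cancel out in a plain sum:
print numpy.allclose(b.predict_proba(X), b.predict_proba(X, n_jobs=4))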
d, k = 2, 2
X, y = create_dataset(1000, d, k, alpha=2)
a = GaussianMixture(k, n_init=1, max_iter=25).fit(X)
b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=25)
y1, y2 = a.predict(X), b.predict(X)
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("sklearn clusters", fontsize=14)
plt.scatter(X[y1==0, 0], X[y1==0, 1], color='m', edgecolor='m')
plt.scatter(X[y1==1, 0], X[y1==1, 1], color='c', edgecolor='c')
plt.subplot(122)
plt.title("pomegranate clusters", fontsize=14)
plt.scatter(X[y2==0, 0], X[y2==0, 1], color='m', edgecolor='m')
plt.scatter(X[y2==1, 0], X[y2==1, 1], color='c', edgecolor='c')
Explanation: Great, no difference between the two.
Let's now make sure that pomegranate and sklearn are learning basically the same thing. Let's fit both models to some 2-dimensional, 2-component data and make sure that they both extract the underlying clusters by plotting them.
End of explanation
X = numpy.random.randn(1000, 500, 50)
print "pomegranate Gaussian HMM (1 job)"
%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5)
print
print "pomegranate Gaussian HMM (2 jobs)"
%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5, n_jobs=2)
print
print "pomegranate Gaussian HMM (2 jobs)"
%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5, n_jobs=4)
Explanation: It looks like we're getting the same basic results for the two. The two algorithms are initialized a bit differently, and so it can be difficult to directly compare the results between them, but it looks like they're getting roughly the same results.
3. Multivariate Gaussian HMM
Now let's move on to training a hidden Markov model with multivariate Gaussian emissions with a diagonal covariance matrix. We'll randomly generate some Gaussian distributed numbers and use pomegranate with either one or four threads to fit our model to the data.
End of explanation
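The predict_proba wrapper timed in the cells further below is not defined in this excerpt; a minimal sketch of what it could look like, assuming the HMM's own predict_proba handles one sequence at a time and using joblib for the parallel case, is:
from joblib import Parallel, delayed

def predict_proba(model, X, n_jobs=1):
    # Hypothetical wrapper: run the model's per-sequence predict_proba over every
    # sequence in X, optionally spreading the sequences across n_jobs threads.
    if n_jobs == 1:
        return [model.predict_proba(sequence) for sequence in X]
    return Parallel(n_jobs=n_jobs, backend='threading')(
        delayed(model.predict_proba)(sequence) for sequence in X)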
model = HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=2, verbose=False)
print "pomegranate Gaussian HMM (1 job)"
%timeit predict_proba(model, X)
print
print "pomegranate Gaussian HMM (2 jobs)"
%timeit predict_proba(model, X, n_jobs=2)
Explanation: All we had to do was pass in the n_jobs parameter to the fit function in order to get a speed improvement, and it looks like we're getting a really good one! This is mostly because the HMM algorithms perform a lot more operations than the other models, and so spend the vast majority of their time with the GIL released. You may not notice speedups as strong when using a MultivariateGaussianDistribution because BLAS already uses multithreaded operations internally, even when only one job is specified.
Now let's look at the prediction function to make sure that we're getting speedups there as well. You'll have to use a wrapper function to parallelize the predictions for an HMM because it returns an annotated sequence rather than a single value like a classic machine learning model might.
End of explanation
def create_model(mus):
n = mus.shape[0]
starts = numpy.zeros(n)
starts[0] = 1.
ends = numpy.zeros(n)
ends[-1] = 0.5
transition_matrix = numpy.zeros((n, n))
distributions = []
for i in range(n):
transition_matrix[i, i] = 0.5
if i < n - 1:
transition_matrix[i, i+1] = 0.5
distribution = IndependentComponentsDistribution([NormalDistribution(mu, 1) for mu in mus[i]])
distributions.append(distribution)
model = HiddenMarkovModel.from_matrix(transition_matrix, distributions, starts, ends)
return model
def create_mixture(mus):
hmms = [create_model(mu) for mu in mus]
return GeneralMixtureModel(hmms)
n, d = 50, 10
mus = [(numpy.random.randn(d, n)*0.2 + numpy.random.randn(n)*2).T for i in range(2)]
model = create_mixture(mus)
X = numpy.random.randn(400, 150, d)
print "pomegranate Mixture of Gaussian HMMs (1 job)"
%timeit model.fit(X, max_iterations=5)
print
model = create_mixture(mus)
print "pomegranate Mixture of Gaussian HMMs (2 jobs)"
%timeit model.fit(X, max_iterations=5, n_jobs=2)
Explanation: Great, we're getting a really good speedup on that as well! Looks like the parallel processing is more efficient with a bigger, more complex model, than with a simple one. This can make sense, because all inference/training is more complex, and so there is more time with the GIL released compared to with the simpler operations.
4. Mixture of Hidden Markov Models
Let's stack another layer onto this model by making it a mixture of these hidden Markov models, instead of a single one. At this point we're sticking a multivariate Gaussian HMM into a mixture and we're going to train this big thing in parallel.
End of explanation
model = create_mixture(mus)
print "pomegranate Mixture of Gaussian HMMs (1 job)"
%timeit model.predict_proba(X)
print
model = create_mixture(mus)
print "pomegranate Mixture of Gaussian HMMs (2 jobs)"
%timeit model.predict_proba(X, n_jobs=2)
Explanation: Looks like we're getting a really nice speed improvement when training this complex model. Let's take a look now at the time it takes to do inference with it.
End of explanation |
4,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
확률 분포, 확률 변수, 확률 모형의 의미
분포
확률 분포
확률 변수
확률 모형
샘플링
모집단
확률 분포
자료의 분포(distribution)란 자료가 어떤 수치적인 값을 가지는지를 그 전반적인 특징을 서술한 것을 말한다.
어떤 경우에 자료의 분포가 필요할까? 다음의 세 가지 경우를 생각해보자.
우선 복수의 자료 즉, 자료의 집합이 존재하고 이 집합의 특성을 서술해야 하는 경우이다. 이는 자료의 모습을 기술(describe)하기 위한 것이라고 해서 기술 통계(descriptive statistics)라고 한다. 보통 자료의 평균, 최대값, 최소값, 분산 등의 계산 값을 사용하거나 히스토그램(histogram)이나 커널 밀도(kernel density)를 사용하기도 한다.
다음으로 아직 자료가 실제로 생성(realization)되지는 않았지만 미래에 생성될 자료 집합의 특성을 미리 서술하기 위한 것이다.
만약 자료가 실험(experiment)이나 조사(survey)등을 통해 생성된다고 하면 아직 실험이나 조사를 하기 미리 특성을 알아보기 위한 경우도 있을 수 있다.
이 때의 분포를 확률 분포(probability distribution)이라고 한다. 이 때의 확률의 의미는 앞으로 생성될 자료의 값이 확률 분포에서 지정한 빈도에 따라 생성될 것이라는 의미이므로 빈도주의 확률론(frequentist probability)이라는 용어를 사용한다
마지막으로 생각할 수 있는 경우는 실제로 하나의 자료가 생성이 되었지만 그 값을 알지 못하는 미지(unknown)의 자료 값을 고려하는 경우이다. 이 때의 확률은 아직 알지 못하는 자료의 값이 특정한 값이 되리라는 믿음(belief) 또는 가능성에 대한 상대적 척도이다. 이러한 확률을 베이지안 확률론(Bayesian probability)이라고 한다.
우리가 어떤 문제를 푸는 경우, 보통은 몇가지 후보(candidate) 값을 놓고 각각의 후보가 정답이 될 가능성을 수치로 비교할 수 있다. 베이지안 확률론의 이러한 상황에서 정답에 대한 증거 혹은 힌트가 추가될 때 마다 이 가능성들을 어떻게 바꾸어야 하는지를 나타내는 방법론이다.
현존하는 복수의 자료의 기술
복수의 자료가 이미 존재하는 경우, 자료 값들의 특성을 살펴보기 위해
기술 통계(descriptive statistics)
미래에 만들어질 자료의 예측
자료가 아직 존재하지는 않지만 미래에 복수의 자료가 만들어질 수 있는 경우, 어떤 자료 값들이 만들어질지 예측하기 위해
확률 분포 (probability distribution)
빈도주의 확률론 (frequentist probability)
미지의 자료 값에 대한 추정
하나의 자료가 이미 존재하지만 그 값을 아직 알지 못하는 경우, 그 자료의 값을 추정하기 위해
베이지안 확률론 (Bayesian probability
확률 분포를 정의하는 방법
자료의 분포를 기술하는 방법은 앞서 말한 기술 통계가 가장 간단한 방법이지만 기술통계는 언제까지나 대략적인 모습만을 그릴 뿐이고 자료 전체의 완벽한 모습을 그리기 힘들다.
히스토그램을 예로 들어 보자. 1,000개의 자료가 존재한다고 가정하고 이를 히스토그램으로 그려보자.
Step1: 이 히스토그램에서 -0.143394 부터 0.437156 사이의 값이 전체의 약 24%를 차지하고 있음을 알 수 있다. 그럼 만약 -0.01 부터 0.01 사이의 구간에 대한 정보를 얻고 싶다면? 더 세부적인 구간에 대해 정보를 구하고 싶다면 히스토그램의 구간을 더 작게 나누어야 한다.
Step2: 정확한 묘사를 위해 구간의 수를 증가시키면 몇 가지 문제가 발생한다.
우선 구간의 간격이 작아지면서 하나의 구간에 있는 자료의 수가 점점 적어진다. 만약 구간 수가 무한대에 가깝다면 하나의 구간 폭은 0으로 수렴하고 해당 구간의 자료 수도 0으로 수렴할 것이다. 따라서 분포의 상대적인 모양을 살펴보기 힘들어진다. 이 문제는 누적 분포(cumulatice distribution)를 사용하면 해결할 수 있다.
두번째는 더 근본적인 문제로 서술을 위한 정보 자체가 증가하면서 정보의 단순화라는 원래의 목적을 상실한다는 점이다.
확률 모형
확률 분포를 보다 단순하게 묘사하기 위해 고안한 것이 확률 모형(probability model)이다.
확률 모형은 분포 함수(distribution function) 또는 밀도 함수(density function)라고 불리우는 미리 정해진 함수의 수식을 사용하여 분포의 모양을 정의(define)하는 방법이다. 이 때 분포의 모양을 결정하는 함수의 계수를 분포의 모수(parameter)라고 부른다.
예를 들어 가장 널리 쓰이는 정규 분포(Normal distribution)는 다음과 같은 수식으로 정의된다.
이 수식 자체의 이름은 $N$이고 함수의 독립 변수는 자료의 값을 의미하는 변수 $x$이다. 식에서 사용된 문자 $\mu$와 $\sigma$는 평균(mean)과 표준편차(standard deviation)이라는 이름의 모수이다.
$$ N(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
The following figure, drawn with scipy, shows the shape of the standard normal distribution with mean 0 and standard deviation 1. | Python Code:
sp.random.seed(0)
x = sp.random.normal(size=1000)
x
ns, bins, ps = plt.hist(x, bins=10)
ns
bins
ps
pd.DataFrame([bins, ns/1000])
Explanation: The meaning of probability distributions, random variables, and probability models
Distribution
Probability distribution
Random variable
Probability model
Sampling
Population
Probability distributions
The distribution of data describes the overall pattern of which numerical values the data take.
When do we need the distribution of data? Consider the following three cases.
First, a collection of data already exists and we need to describe the characteristics of this set. Because the goal is to describe the shape of the data, this is called descriptive statistics. Usually computed summaries such as the mean, maximum, minimum, and variance are used, or a histogram or kernel density estimate.
Next, we may want to describe in advance the characteristics of a data set that has not yet been realized but will be generated in the future.
If the data are generated through an experiment or a survey, we may want to know their characteristics before actually running the experiment or survey.
The distribution in this case is called a probability distribution. Here, probability means that the values of the data to be generated will occur with the frequencies specified by the probability distribution, so the term frequentist probability is used.
The last case to consider is when a single datum has actually been generated but its value is unknown. Here, probability is a relative measure of the belief, or plausibility, that the unknown value equals a particular value. This kind of probability is called Bayesian probability.
When we solve a problem, we can usually line up several candidate values and compare, numerically, how likely each candidate is to be the answer. Bayesian probability is a methodology for how these likelihoods should be updated whenever evidence or hints about the answer are added.
Describing multiple existing data
When multiple data already exist, to examine the characteristics of the data values
Descriptive statistics
Predicting data to be generated in the future
When the data do not yet exist but multiple data can be generated in the future, to predict which data values will be produced
Probability distribution
Frequentist probability
Estimating an unknown data value
When a single datum already exists but its value is not yet known, to estimate that value
Bayesian probability
How to define a probability distribution
Descriptive statistics, mentioned above, is the simplest way to describe the distribution of data, but it only sketches a rough picture and cannot capture the complete shape of the data.
Take the histogram as an example. Suppose 1,000 data points exist and draw them as a histogram.
End of explanation
ns, bins, ps = plt.hist(x, bins=100)
pd.DataFrame([bins, ns/1000])
Explanation: From this histogram we can see that values between -0.143394 and 0.437156 account for about 24% of the total. What if we want information about the interval from -0.01 to 0.01? To get information about finer intervals, the histogram must be divided into smaller bins.
End of explanation
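One remedy mentioned in this notebook for vanishing per-bin counts is the cumulative distribution; as a small added example (not in the original), matplotlib can draw the cumulative histogram directly:
ns, bins, ps = plt.hist(x, bins=100, cumulative=True)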
x = np.linspace(-3, 3, 100)
y = sp.stats.norm.pdf(x)
plt.plot(x, y)
Explanation: Increasing the number of bins for a more accurate description causes a few problems.
First, as the bin width shrinks, the number of data points in each bin decreases. If the number of bins approaches infinity, the width of each bin converges to zero and so does the number of data points in it, which makes it hard to see the relative shape of the distribution. This problem can be solved by using the cumulative distribution.
The second, more fundamental problem is that the amount of information needed for the description itself increases, defeating the original purpose of simplifying the information.
Probability models
The probability model was devised to describe a probability distribution more simply.
A probability model defines the shape of a distribution using a predetermined mathematical expression called a distribution function or density function. The coefficients of this function that determine the shape of the distribution are called the parameters of the distribution.
For example, the most widely used normal distribution is defined by the following formula.
The name of this formula is $N$, and its independent variable is $x$, which represents the data value. The symbols $\mu$ and $\sigma$ in the formula are parameters called the mean and the standard deviation.
$$ N(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
The following figure, drawn with scipy, shows the shape of the standard normal distribution with mean 0 and standard deviation 1.
End of explanation |
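As a small added follow-up (not in the original notebook), the model's shape can be compared against the data by overlaying the density on a normalized histogram; note that newer matplotlib versions use density=True instead of normed=True:
plt.hist(x, bins=50, normed=True)
xs = np.linspace(-3, 3, 100)
plt.plot(xs, sp.stats.norm.pdf(xs), lw=3)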
4,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What's the fuzz all about?
Randomized data generation for robust testing
Moritz Gronbach, Blue Yonder
EuroPython 2015, Bilbao, Spain
About me and why I want to talk about this
Predictive Analytics
<img src="gfx/fortune-teller.jpg" width=300, height=auto align='center'>
(picture CC by Nancy Nance)
Predictive Analytics
Step2: Big Data
<img src="gfx/cube.png" align='center'>
Big Complex Data
<img src="gfx/contraption.jpg" width=600 height=auto align='center'>
If things go wrong in our code...
<img src="gfx/empty.jpg" width=600 height=auto align='center'>
Actually the reality is closer to this
<img src="gfx/redlight.jpg" width=400 height=auto align='center'>
For general happiness...
need to check as many edge cases as possible
before going into production
even the cases we don't think about!
Dynamic unit testing can help!
<img src="gfx/happy.jpg" width=300 height=auto align='center'>
What is dynamic unit testing?
Property-based testing + Fuzzing
Test cases are generated automatically
Fuzzing
Parameter Templates
Function behaviour is checked for
crashes
timeouts
universal properties
Static Testing
test cases provided by the user
function behaviour precisely defined by the user
Dynamic and Static
Attributes of tests
Precision
How closely is the expected behaviour defined?
Case Coverage
What proportion of the input space is covered?
Does case coverage matter?
Does it really matter if we check 5 out of 2^64 cases, or 5000 out of 2^64 cases?
Often, yes!
Let's say there is a numerical instability in your algorithm
Only one percent of all inputs are affected
Probability to detect this instability using five cases
1 - 0.99^5, about 0.05
Probability to detect this instability using 5000 cases
1 - 0.99^5000, nearly 1
Dynamic tests help you find case classes you didn't think about
Static and Dynamic
Static
high precision, low case coverage*
Dynamic
low precision, higher case coverage*
* usually approximately true
Static and Dynamic testing complement each other
Uncertainty principle of unit testing
Step3: What happened?
Sampling
hypothesis samples integers until it finds a falsifying example
Shrinking
hypothesis tries to simplify the falsifying example
here: simplest means smallest integer | Python Code:
import secret_algorithms
def create_pipeline():
pipeline = []
pipeline.append(TimeSeriesProcessor())
pipeline.append(WeatherData())
pipeline.append(secret_algorithms.SuperModel())
return Pipeline(pipeline)
Explanation: What's the fuzz all about?
Randomized data generation for robust testing
Moritz Gronbach, Blue Yonder
EuroPython 2015, Bilbao, Spain
About me and why I want to talk about this
Predictive Analytics
<img src="gfx/fortune-teller.jpg" width=300, height=auto align='center'>
(picture CC by Nancy Nance)
Predictive Analytics
End of explanation
from math import sqrt
def fib(n):
    """Computes the n-th Fibonacci number.
    fib(0) == fib(1) == 1
    fib(n) == fib(n - 1) + fib(n - 2)
    1, 1, 2, 3, 5, 8, ...
    """
sqrt_5 = sqrt(5)
p = (1 + sqrt_5) / 2
q = 1/p
return int((p**n + q**n) / sqrt_5 + 0.5)
print('Defined Fibonacci function!')
def test_fib():
assert(fib(1) == 1)
assert(fib(2) == 1)
assert(fib(3) == 2)
assert(fib(6) == 8)
assert(fib(50) == 12586269025)
print("Tests passed!")
test_fib()
from hypothesis import given
from hypothesis.strategies import integers
from hypothesis import Settings, Verbosity
# settings to increase chances of a smooth presentation
Settings.default.derandomize = True
Settings.default.max_iterations = 50
Settings.default.timeout = 20
Settings.database = None
@given(integers(min_value=3))
def test_fib_recurrence(n):
assert(fib(n) == fib(n - 1) + fib(n - 2))
test_fib_recurrence()
@given(integers(min_value=3),
settings=Settings(verbosity=Verbosity.verbose))
def test_fib_recurrence(n):
assert(fib(n) == fib(n - 1) + fib(n - 2))
test_fib_recurrence()
Explanation: Big Data
<img src="gfx/cube.png" align='center'>
Big Complex Data
<img src="gfx/contraption.jpg" width=600 height=auto align='center'>
If things go wrong in our code...
<img src="gfx/empty.jpg" width=600 height=auto align='center'>
Actually the reality is closer to this
<img src="gfx/redlight.jpg" width=400 height=auto align='center'>
For general happiness...
need to check as many edge cases as possible
before going into production
even the cases we don't think about!
Dynamic unit testing can help!
<img src="gfx/happy.jpg" width=300 height=auto align='center'>
What is dynamic unit testing?
Property-based testing + Fuzzing
Test cases are generated automatically
Fuzzing
Parameter Templates
Function behaviour is checked for
crashes
timeouts
universal properties
Static Testing
test cases provided by the user
function behaviour precisely defined by the user
Dynamic and Static
Attributes of tests
Precision
How closely is the expected behaviour defined?
Case Coverage
What proportion of the input space is covered?
Does case coverage matter?
Does it really matter if we check 5 out of 2^64 cases, or 5000 out of 2^64 cases?
Often, yes!
Let's say there is a numerical instability in your algorithm
Only one percent of all inputs are affected
Probability to detect this instability using five cases
1 - 0.99^5, about 0.05
Probability to detect this instability using 5000 cases
1 - 0.99^5000, nearly 1
Dynamic tests help you find case classes you didn't think about
Static and Dynamic
Static
high precision, low case coverage*
Dynamic
low precision, higher case coverage*
* usually approximately true
Static and Dynamic testing complement each other
Uncertainty principle of unit testing: can't have both high precision and high case coverage
Dynamic Testing in Python
We use hypothesis
QuickCheck-style testing for Python
stable, but in ongoing development
a lot of innovative features
Let's do an example!
End of explanation
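As a quick back-of-the-envelope check of the case-coverage numbers quoted earlier (an added illustration, not part of the talk's original code):
print(1 - 0.99 ** 5)     # detection probability with 5 cases, roughly 0.049
print(1 - 0.99 ** 5000)  # detection probability with 5000 cases, effectively 1.0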
from urllib import quote
def test_quote():
assert(quote('') == '')
s = 'abc def'
expected = 'abc%20def'
assert(quote(s) == expected)
print("Tests passed!")
test_quote()
from urllib import unquote
from hypothesis.strategies import text
@given(text())
def test_quote_unquote(s):
assert unquote(quote(s)) == s
test_quote_unquote()
from urllib import unquote
from hypothesis.strategies import text
import string
@given(text(alphabet=string.printable))
def test_quote_unquote(s):
assert unquote(quote(s)) == s
test_quote_unquote()
Explanation: What happened?
Sampling
hypothesis samples integers until it finds a falsifying example
Shrinking
hypothesis tries to simplify the falsifying example
here: simplest means smallest integer
Example II: Departure from Math-Wonderland
End of explanation |
4,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook to TreatGeoSelf with gridded climate data set coordinates
Case study
Step1: Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
Step3: If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRods server as a Generic Resource.
2. Generate a mapping file for the study site of interest
Get the files
Here, we will retrieve two data objects then catalog the files within the mapping file. The Hydroshare resource 'https://www.hydroshare.org/resource/3629f2d5315b48fdb8eb851c1dd9ce63/' contains the mapping file for a test study site.
Step4: Unzip the tar.gz file to a folder of files
Step5: Examine the mapping file
Step6: 4. Remap the file directories for each gridded cell centroid in the mapping file
Consider the help tool to understand the function and its parameters
Step7: Examine the mapping file | Python Code:
# data processing
import os
import ogh
import tarfile
# data migration library
from utilities import hydroshare
# silencing warning
# import warnings
# warnings.filterwarnings("ignore")
Explanation: Notebook to TreatGeoSelf with gridded climate data set coordinates
Case study: the Sauk-Suiattle river watershed
Use this Jupyter Notebook to:
1. HydroShare setup and preparation
2. Retrieve a mapping file (contains gridded cell centroids) for the study site of interest
3. Retrieve a folder of datafiles that were previously obtained for the study site of interest
4. Remap the file directories for each gridded cell centroid in the mapping file
<img src="http://www.sauk-suiattle.com/images/Elliott.jpg"
style="float:right;width:150px;padding:20px">
<br/><br/><br/>
<img src="https://www.washington.edu/brand/files/2014/09/W-Logo_Purple_Hex.png"
style="float:right;width:150px;padding:20px">
<br/><br/>
This data is compiled to digitally observe the watersheds, powered by HydroShare. <br/>Provided by the Watershed Dynamics Group, Dept. of Civil and Environmental Engineering, University of Washington
1. HydroShare setup and preparation
To run this notebook, we must import several libaries. These are listed in order of 1) Python standard libraries, 2) hs_utils library provides functions for interacting with HydroShare, including resource querying, dowloading and creation, and 3) the observatory_gridded_hydromet library that is downloaded with this notebook.
If the python library basemap-data-hires is not installed, please uncomment and run the following lines in terminal.
End of explanation
hs=hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)
print('Data will be loaded from and saved to: '+homedir)
Explanation: Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
End of explanation
# Sample mapping file and previously downloaded files
# List of available data
hs.getResourceFromHydroShare('3629f2d5315b48fdb8eb851c1dd9ce63')
folderpath = hs.getContentPath('3629f2d5315b48fdb8eb851c1dd9ce63') # the folder
mappingfile1 = os.path.abspath(hs.content['Sauk_mappingfile.csv']) # the mapping file in the folder
zipfolder = os.path.abspath(hs.content['salathe2014.tar.gz']) # the zipfolder in the folder
os.listdir(folderpath)
Explanation: If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRods server as a Generic Resource.
2. Generate a mapping file for the study site of interest
Get the files
Here, we will retrieve two data objects then catalog the files within the mapping file. The Hydroshare resource 'https://www.hydroshare.org/resource/3629f2d5315b48fdb8eb851c1dd9ce63/' contains the mapping file for a test study site. The zipfolder contains the WRF ASCII files (described in Salathe et al. 2014) from a previous data download run, and may contain more files than is necessary for the study of our study site.
First, we will need to migrate these objects into the computing environment, and designate their path directories. Then, we will unzip the zipfolder, then catalog the two data products into the mappingfile under two dataset names.
End of explanation
tar = tarfile.open(zipfolder)
tar.extractall(path=folderpath) # untar file into same directory
tar.close()
os.remove(zipfolder)
os.listdir(folderpath)
Explanation: Unzip the tar.gz file to a folder of files
End of explanation
mapdf, nstations = ogh.mappingfileToDF(os.path.abspath(mappingfile1))
Explanation: Examine the mapping file
End of explanation
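As an optional peek at the catalog (assuming, as the names suggest, that mapdf is a pandas DataFrame and nstations is the number of catalogued grid cells):
print('Number of gridded cell centroids: {0}'.format(nstations))
mapdf.head()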
help(ogh.remapCatalog)
# dailywrf_salathe2014
ogh.remapCatalog(mappingfile=mappingfile1,
catalog_label='dailywrf_salathe2014',
homedir=folderpath,
subdir='salathe2014/WWA_1950_2010/raw')
# dailywrf_bcsalathe2014
ogh.remapCatalog(mappingfile=mappingfile1,
catalog_label='dailywrf_bcsalathe2014',
homedir=folderpath,
subdir='salathe2014/WWA_1950_2010/bc')
Explanation: 4. Remap the file directories for each gridded cell centroid in the mapping file
Consider the help tool to understand the function and its parameters
End of explanation
mapdf, nstations = ogh.mappingfileToDF(os.path.abspath(mappingfile1))
Explanation: Examine the mapping file
End of explanation |
4,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filter of the 10 crimes with the most occurrences in March
Step1: All criminal occurrences in March
Step2: Number of crimes by region
Step3: The 5 regions with the most occurrences
Step4: Above we can see that region 4 had the highest number of criminal occurrences
We can now look at these occurrences in more detail
Step5: An analysis of the 5 most common occurrences
Step6: Filter of the 10 times of day with the most occurrences in March
Step7: Filter of the 5 times of day with the most occurrences in region 4 (the region with the most occurrences in March)
Step8: Filter of the 10 neighborhoods with the most occurrences in March
Step9: The neighborhood with the highest number of occurrences in March was Jangurussu
Let us now look in more detail at what these crimes were
Step10: The 5 most common neighborhoods in region 4
Step11: Analysis of the Bom Jardim neighborhood | Python Code:
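# Note: this analysis picks up mid-notebook. `df_marco` (the DataFrame of March
# occurrences) and `all_crime_tipos` (occurrences grouped and counted by crime type)
# are assumed to have been built in earlier cells that are not included here.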
all_crime_tipos.head(10)
all_crime_tipos_top10 = all_crime_tipos.head(10)
all_crime_tipos_top10.plot(kind='barh', figsize=(12,6), color='#3f3fff')
plt.title('Top 10 crimes por tipo (Mar 2017)')
plt.xlabel('Número de crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 10 crimes with the most occurrences in March
End of explanation
all_crime_tipos
Explanation: All criminal occurrences in March
End of explanation
group_df_marco = df_marco.groupby('CLUSTER')
crimes = group_df_marco['NATUREZA DA OCORRÊNCIA'].count()
crimes.plot(kind='barh', figsize=(10,7), color='#3f3fff')
plt.title('Número de crimes por região (Mar 2017)')
plt.xlabel('Número')
plt.ylabel('Região')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Number of crimes by region
End of explanation
regioes = df_marco.groupby('CLUSTER').count()
grupo_de_regioes = regioes.sort_values('NATUREZA DA OCORRÊNCIA', ascending=False)
grupo_de_regioes['TOTAL'] = grupo_de_regioes.ID
top_5_regioes_qtd = grupo_de_regioes.TOTAL.head(6)
top_5_regioes_qtd.plot(kind='barh', figsize=(10,4), color='#3f3fff')
plt.title('Top 5 regiões com mais crimes')
plt.xlabel('Número de crimes')
plt.ylabel('Região')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: The 5 regions with the most occurrences
End of explanation
regiao_4_detalhe = df_marco[df_marco['CLUSTER'] == 4]
regiao_4_detalhe
Explanation: Above we can see that region 4 had the highest number of criminal occurrences
We can now look at these occurrences in more detail
End of explanation
crime_types = regiao_4_detalhe[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = regiao_4_detalhe[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
crimes_top_5 = all_crime_types.head(5)
crimes_top_5.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 crimes na região 4')
plt.xlabel('Número de crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: An analysis of the 5 most common occurrences
End of explanation
horas_mes = df_marco.HORA.value_counts()
horas_mes_top10 = horas_mes.head(10)
horas_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')
plt.title('Crimes por hora (Mar 2017)')
plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 10 times of day with the most occurrences in March
End of explanation
crime_hours = regiao_4_detalhe[['HORA']]
crime_hours_total = crime_hours.groupby('HORA').size()
crime_hours_counts = regiao_4_detalhe[['HORA']].groupby('HORA').sum()
crime_hours_counts['TOTAL'] = crime_hours_total
all_hours_types = crime_hours_counts.sort_values(by='TOTAL', ascending=False)
all_hours_types.head(5)
all_hours_types_top5 = all_hours_types.head(5)
all_hours_types_top5.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 crimes por hora na região 4')
plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 5 times of day with the most occurrences in region 4 (the region with the most occurrences in March)
End of explanation
crimes_mes = df_marco.BAIRRO.value_counts()
crimes_mes_top10 = crimes_mes.head(10)
crimes_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')
plt.title('Top 10 Bairros com mais crimes (Mar 2017)')
plt.xlabel('Número de ocorrências')
plt.ylabel('Bairro')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 10 neighborhoods with the most occurrences in March
End of explanation
messejana = df_marco[df_marco['BAIRRO'] == 'JANGURUSSU']
crime_types = messejana[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = messejana[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
all_crime_tipos_5 = all_crime_types.head(5)
all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')
plt.title('Top 5 crimes no Jangurussú')
plt.xlabel('Número de Crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: The neighborhood with the highest number of occurrences in March was Jangurussu
Let us now look in more detail at what these crimes were
End of explanation
crime_types_bairro = regiao_4_detalhe[['BAIRRO']]
crime_type_total_bairro = crime_types_bairro.groupby('BAIRRO').size()
crime_type_counts_bairro = regiao_4_detalhe[['BAIRRO']].groupby('BAIRRO').sum()
crime_type_counts_bairro['TOTAL'] = crime_type_total_bairro
all_crime_types_bairro = crime_type_counts_bairro.sort_values(by='TOTAL', ascending=False)
crimes_top_5_bairro = all_crime_types_bairro.head(5)
crimes_top_5_bairro.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 bairros na região 4')
plt.xlabel('Quantidade')
plt.ylabel('Bairro')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: The 5 most common neighborhoods in region 4
End of explanation
bom_jardim = df_marco[df_marco['BAIRRO'] == 'BOM JARDIM']
crime_types = bom_jardim[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = bom_jardim[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
all_crime_tipos_5 = all_crime_types.head(5)
all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')
plt.title('Top 5 crimes no Bom Jardim')
plt.xlabel('Número de Crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Analysis of the Bom Jardim neighborhood
End of explanation |
4,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: We will create a grid with 41 rows and 5 columns, and dx is 5 m (a long, narrow, hillslope). The initial elevation is 0 at all nodes.
We set-up boundary conditions so that material can leave the hillslope at the two short ends.
Step2: Now we import and initialize the LinearDiffuser component.
Step3: We now initialize a few more parameters.
Step4: Now we figure out the analytical solution for the elevation of the steady-state profile.
Step5: Before we evolve the landscape, let's look at the initial topography. (This is just verifying that it is flat with zero elevation.)
Step6: Now we are ready to evolve the landscape and compare it to the steady state solution.
Below is the time loop that does all the calculations.
Step7: Now we plot the final cross-section.
Step8: Now we plot the steepest slope in the downward direction across the landscape.
(To calculate the steepest slope at a location, we need to route flow across the landscape.) | Python Code:
# below is to make plots show up in the notebook
%matplotlib inline
# Code Block 1
import numpy as np
from matplotlib.pyplot import figure, legend, plot, show, title, xlabel, ylabel, ylim
from landlab.plot.imshow import imshow_grid
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
Linear diffusion exercise with Landlab
This notebook is adapted from Landscape Evolution Modeling with CHILD by Gregory Tucker and Stephen Lancaster. This notebook was created by Nicole Gasparini at Tulane University.
<hr>
For tutorials on learning Landlab, click here: <a href="https://github.com/landlab/landlab/wiki/Tutorials">https://github.com/landlab/landlab/wiki/Tutorials</a>
<hr>
What is this notebook?
This notebook illustrates the evolution of landforms dominated by processes that result in linear diffusion of sediment. In other words, the downhill flow of soil is proportional to the (downhill) gradient of the land surface multiplied by a transport coefficient.
The notebook first illustrates a simple example of a diffusing hillslope. We then provide a number of exercises for students to do on their own. This set of exercises is recomended for students in a quantitative geomorphology class, who have been introduced to the linear diffusion equation in class.
Application of linear diffusion transport law:
For relatively gentle, soil-mantled slopes, there is reasonably strong support for a transport law of the form:
\begin{equation}
q_s = -D \nabla z
\end{equation}
where ${q}_s$ is the transport rate with dimensions of L$^2$T$^{-1}$; $D$ is a transport coefficient with dimensions of L$^2$T$^{-1}$; and $z$ is elevation. $\nabla z$ is the gradient in elevation. If distance is increasing downslope, $\nabla z$ is negative downslope, hence the negative in front of $D$.
Changes in elevation, or erosion, are calculated from conservation of mass:
\begin{equation}
\frac{dz}{dt} = U-\nabla q_s
\end{equation}
where $U$ is the rock uplift rate, with dimensions LT$^{-1}$.
How will we explore this with Landlab?
We will use the Landlab component LinearDiffuser, which implements the equations above, to explore how hillslopes evolve when linear diffusion describes hillslope sediment transport. We will explore both steady state, here defined as erosion rate equal to rock uplift rate, and also how a landscape gets to steady state.
The first example illustrates how to set-up the model and evolve a hillslope to steady state, along with how to plot some variables of interest. We assume that you have knowledge of how to derive the steady-state form of a uniformly uplifting, steady-state, diffusive hillslope. For more information on hillslope sediment transport laws, this paper is a great overview:
Roering, Joshua J. (2008) "How well can hillslope evolution models “explain” topography? Simulating soil transport and production with high-resolution topographic data." Geological Society of America Bulletin.
Based on the first example, you are asked to first think about what will happen as you change a parameter, and then you explore this numerically by changing the code.
Start at the top by reading each block of text and sequentially running each code block (shift - enter OR got to the Cell pulldown menu at the top and choose Run Cells).
Remember that you can always go to the Kernel pulldown menu at the top and choose Restart & Clear Output or Restart & Run All if you change things and want to start afresh. If you just change one code block and rerun only that code block, only the parts of the code in that code block will be updated. (E.g. if you change parameters but don't reset the code blocks that initialize run time or topography, then these values will not be reset.)
Now on to the code example
Import statements. You should not need to edit this.
End of explanation
# Code Block 2
# setup grid
from landlab import RasterModelGrid
mg = RasterModelGrid((41, 5), 5.0)
z_vals = mg.add_zeros("topographic__elevation", at="node")
# initialize some values for plotting
ycoord_rast = mg.node_vector_to_raster(mg.node_y)
ys_grid = ycoord_rast[:, 2]
# set boundary condition.
mg.set_closed_boundaries_at_grid_edges(True, False, True, False)
Explanation: We will create a grid with 41 rows and 5 columns, and dx is 5 m (a long, narrow, hillslope). The initial elevation is 0 at all nodes.
We set-up boundary conditions so that material can leave the hillslope at the two short ends.
End of explanation
# Code Block 3
from landlab.components import LinearDiffuser
D = 0.01 # initial value of 0.01 m^2/yr
lin_diffuse = LinearDiffuser(mg, linear_diffusivity=D)
Explanation: Now we import and initialize the LinearDiffuser component.
End of explanation
# Code Block 4
# Uniform rate of rock uplift
uplift_rate = 0.0001 # meters/year, originally set to 0.0001
# Total time in years that the model will run for.
runtime = 1000000 # years, originally set to 1,000,000
# Stability criteria for timestep dt. Coefficient can be changed
# depending on our tolerance for stability vs tolerance for run time.
dt = 0.5 * mg.dx * mg.dx / D
# nt is number of time steps
nt = int(runtime // dt)
# Below is to keep track of time for labeling plots
time_counter = 0
# length of uplift over a single time step, meters
uplift_per_step = uplift_rate * dt
Explanation: We now initialize a few more parameters.
End of explanation
# Code Block 5
ys = np.arange(mg.number_of_node_rows * mg.dx - mg.dx)
# location of divide or ridge crest -> middle of grid
# based on boundary conds.
divide_loc = (mg.number_of_node_rows * mg.dx - mg.dx) / 2
# half-width of the ridge
half_width = (mg.number_of_node_rows * mg.dx - mg.dx) / 2
# analytical solution for elevation under linear diffusion at steady state
zs = (uplift_rate / (2 * D)) * (np.power(half_width, 2) - np.power(ys - divide_loc, 2))
Explanation: Now we figure out the analytical solution for the elevation of the steady-state profile.
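For reference, the closed form coded below follows from setting $dz/dt = 0$ in the equations above (a brief added derivation): at steady state $U - \nabla q_s = 0$ with $q_s = -D\,dz/dy$, so $d^2z/dy^2 = -U/D$. Integrating twice, with $dz/dy = 0$ at the divide $y_{divide}$ and $z = 0$ at the open boundaries a half-width $L$ away, gives
$$ z(y) = \frac{U}{2D}\left[ L^{2} - \left(y - y_{divide}\right)^{2} \right], $$
which is exactly the expression assigned to zs.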
End of explanation
# Code Block 6
figure(1)
imshow_grid(mg, "topographic__elevation")
title("initial topography")
figure(2)
elev_rast = mg.node_vector_to_raster(mg.at_node["topographic__elevation"])
plot(ys_grid, elev_rast[:, 2], "r-", label="model")
plot(ys, zs, "k--", label="analytical solution")
ylim((-5, 50)) # may want to change upper limit if D changes
xlabel("horizontal distance (m)")
ylabel("vertical distance (m)")
legend(loc="lower center")
title("initial topographic cross section")
Explanation: Before we evolve the landscape, let's look at the initial topography. (This is just verifying that it is flat with zero elevation.)
End of explanation
# Code Block 7
for i in range(nt):
mg["node"]["topographic__elevation"][mg.core_nodes] += uplift_per_step
lin_diffuse.run_one_step(dt)
time_counter += dt
# All landscape evolution is the first two lines of loop.
# Below is simply for plotting the topography halfway through the run
if i == int(nt // 2):
figure(1)
imshow_grid(mg, "topographic__elevation")
title("topography at time %s, with D = %s" % (time_counter, D))
figure(2)
elev_rast = mg.node_vector_to_raster(mg.at_node["topographic__elevation"])
plot(ys_grid, elev_rast[:, 2], "k-", label="model")
plot(ys, zs, "g--", label="analytical solution - SS")
plot(ys, zs * 0.75, "b--", label="75% of analytical solution")
plot(ys, zs * 0.5, "r--", label="50% of analytical solution")
xlabel("horizontal distance (m)")
ylabel("vertical distance (m)")
legend(loc="lower center")
title("topographic__elevation at time %s, with D = %s" % (time_counter, D))
Explanation: Now we are ready to evolve the landscape and compare it to the steady state solution.
Below is the time loop that does all the calculations.
End of explanation
# Code Block 8
elev_rast = mg.node_vector_to_raster(mg.at_node["topographic__elevation"])
plot(ys_grid, elev_rast[:, 2], "k-", label="model")
plot(ys, zs, "g--", label="analytical solution - SS")
plot(ys, zs * 0.75, "b--", label="75% of analytical solution")
plot(ys, zs * 0.5, "r--", label="50% of analytical solution")
xlabel("horizontal distance (m)")
ylabel("vertical distance (m)")
legend(loc="lower center")
title("topographic cross section at time %s, with D = %s" % (time_counter, D))
Explanation: Now we plot the final cross-section.
End of explanation
# Code Block 9
from landlab.components import FlowAccumulator
fr = FlowAccumulator(mg) # intializing flow routing
fr.run_one_step()
plot(
mg.node_y[mg.core_nodes],
mg.at_node["topographic__steepest_slope"][mg.core_nodes],
"k-",
)
xlabel("horizontal distance (m)")
ylabel("topographic slope (m/m)")
title("slope of the hillslope at time %s, with D = %s" % (time_counter, D))
Explanation: Now we plot the steepest slope in the downward direction across the landscape.
(To calculate the steepest slope at a location, we need to route flow across the landscape.)
End of explanation |
4,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ejercicio de visualizacion de informacion con Pandas - Soluciones
Este es un pequenio ejercicio para revisar las diferentes graficas que nos permite generar Pandas.
* NOTA
Step1: Recrea la siguiente grafica de puntos de b contra a.
Step2: Crea un histograma de la columna 'a'.
Step3: Las graficas se ven muy bien, pero deseamos que se vean un poco mas profesional, asi que utiliza la hoja de estilo 'ggplot' y genera el histograma nuevamente, ademas investiga como agregar mas divisiones.
Step4: Crea una grafica de cajas comparando las columnas 'a' y 'b'.
Step5: Crea una grafica kde plot de la columna 'd'
Step6: Crea una grafica de area para todas las columnas, utilizando hasta 30 filas (tip | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
df3 = pd.read_csv('../data/df3')
%matplotlib inline
df3.plot.scatter(x='a',y='b',c='red',s=50
df3.info()
df3.head()
Explanation: Pandas data visualization exercise - Solutions
This is a short exercise to review the different plots that Pandas lets us generate.
* NOTA: Utilizar el archivo df3 que se encuentra en la carpeta data
End of explanation
df3.plot.scatter(x='a',y='b',c='red',s=50,figsize=(12,3))
Explanation: Recreate the following scatter plot of b against a.
End of explanation
df3['a'].plot.hist()
Explanation: Create a histogram of column 'a'.
End of explanation
plt.style.use('ggplot')
df3['a'].plot.hist(alpha=0.5,bins=25)
Explanation: The plots look good, but we want them to look a bit more professional, so use the 'ggplot' style sheet and generate the histogram again; also look into how to add more bins.
End of explanation
df3[['a','b']].plot.box()
Explanation: Create a box plot comparing columns 'a' and 'b'.
End of explanation
df3['d'].plot.kde()
Explanation: Create a KDE plot of column 'd'
End of explanation
df3.loc[0:30].plot.area(alpha=0.4)
Explanation: Create an area plot of all the columns, using up to 30 rows (tip: use .loc).
End of explanation |
4,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I. Setting up the Problem
Step1: 1) Peeking into the Data
Step2: II. Preparing data
1) Keep only players that have a Rater Image
Step3: 2) Getting rif of referees and grouping data by soccer player
We need to aggregate the information about referees and group the result by soccer player. It means that each line will correspond to a soccer player, with the sum of all the cards he got, and we won't know anymore who gaves the cards.
Step4: III. Unsupervized machine learning
The first idea we got is to start an unsupervized learning kept as simple as possible.
We will have to take player position, the three types of cards and the skin color
Step5: (We show only skin color and number of "red cards" because it's a 2D plot, but we actually used 5 parameters
Step6: So, do we have any new information ? What can we conclude of this ?
We can use the "silhouette score", which is a metric showing if the two clusters are well separated. It it's equals to 1, the clusters are perfectly separated, and if it's 0, the clustering doesn't make any sense.
Step7: We got a silhouette score of 58%, which is honestly not enough to predict precisely the skin color of new players. A value closer to +1 would have indicated with higher confidence a difference between the clusters. 60% is enough to distinguish the two clusters but, still, we cannot rely on this model.
Let's try to remove features iterately, starting with skin color.
Step8: Seems like removing skin color from the input didn't change anything for the clustering performance !
Let's do this with removing another parameter
Step9: Player position doesn't have much impact either. We can try to remove the number of games, but it won't make sense | Python Code:
import pandas as pd
import numpy as np
from IPython.display import Image
import matplotlib.pyplot as plt
# Import the random forest package
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
filename ="CrowdstormingDataJuly1st.csv"
Data = pd.read_csv(filename)
Explanation: I. Setting up the Problem
End of explanation
Data.ix[:10,:13]
Data.ix[:10,13:28]
Explanation: 1) Peeking into the Data
End of explanation
# Remove the players without rater 1 / 2 (ie: without photo) because we won't be
# able to train or test the values (this can be done as bonus later)
Data_hasImage = Data[pd.notnull(Data['photoID'])]
Explanation: II. Preparing data
1) Keep only players that have a Rater Image
End of explanation
# Group by player and do the sum of every column, except for mean_rater (skin color) that we need to move away during the calculation (we don't want to sum skin color values !)
Data_aggregated = Data_hasImage.drop(['refNum', 'refCountry'], 1)
Data_aggregated = Data_aggregated.groupby(['playerShort', 'position'])['games','yellowCards', 'yellowReds', 'redCards'].sum()
Data_aggregated = Data_aggregated.reset_index()
# Take information of skin color for each player
Data_nbGames_skinColor = Data_hasImage
Data_nbGames_skinColor.drop_duplicates('playerShort')
Data_nbGames_skinColor['skinColor']=(Data_nbGames_skinColor['rater1']+Data_hasImage['rater2'])/2
Data_nbGames_skinColor = pd.DataFrame(Data_nbGames_skinColor[['playerShort','skinColor']])
Data_aggregated = pd.merge(left=Data_aggregated,right=Data_nbGames_skinColor, how='left', left_on='playerShort', right_on='playerShort')
Data_aggregated = Data_aggregated.drop_duplicates('playerShort')
Data_aggregated = Data_aggregated.reset_index(drop=True)
Data_aggregated
Explanation: 2) Getting rif of referees and grouping data by soccer player
We need to aggregate the information about referees and group the result by soccer player. It means that each line will correspond to a soccer player, with the sum of all the cards he got, and we won't know anymore who gaves the cards.
End of explanation
# Input
x = Data_aggregated
x = x.drop(['playerShort'], 1)
# We have to convert every columns to floats, to be able to train our model
mapping = {'Center Back': 1, 'Attacking Midfielder': 2, 'Right Midfielder': 3, 'Center Midfielder': 4, 'Defensive Midfielder': 5, 'Goalkeeper':6, 'Left Fullback':7, 'Left Midfielder':8, 'Right Fullback':9, 'Center Forward':10, 'Left Winger':11, 'Right Winger':12}
x = x.replace({'position': mapping})
x
# Output with the same length as the input, that will contains the associated cluster
y = pd.DataFrame(index=x.index, columns=['targetCluster'])
y.head()
# K Means Cluster
model = KMeans(n_clusters=2)
model = model.fit(x)
model
# We got a model with two clusters
model.labels_
# View the results
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap for the two clusters
colormap = np.array(['blue', 'lime'])
# Plot the Model Classification PARTIALLY
plt.scatter((0.5*x.yellowCards + x.yellowReds + x.redCards)/x.games, x.skinColor, c=colormap[model.labels_], s=40)
plt.xlabel('Red cards per game (yellow = half a red card)')
plt.ylabel('Skin color')
plt.title('K Mean Classification')
plt.show()
Explanation: III. Unsupervized machine learning
The first idea we got is to start an unsupervized learning kept as simple as possible.
We will have to take player position, the three types of cards and the skin color: that makes 5 dimensions to deal with !
Instead, let say we only look at the total number of cards the players got, and their skin color. Then we would be able to display something in 2 dimensions only:
<img src="resources/axis_assumption.jpg" alt="Drawing" style="width: 600px;"/>
Then, we would try to obtain two clusters that might lead to really simple conclusion such as "dark people slightly tend to get more cards":
<img src="resources/axis_assumption_clustered.jpg" alt="Drawing" style="width: 600px;"/>
Again, this is totally hypothetical. So let's give it a try.
We try to use a K means clustering methode to obtain 2 distinct clusters, with the help of this website:
http://stamfordresearch.com/k-means-clustering-in-python/
End of explanation
cluster = pd.DataFrame(pd.Series(model.labels_, name='cluster'))
Data_Clustered = Data_aggregated
Data_Clustered['cluster'] = cluster
Data_Clustered
Explanation: (We show only skin color and number of "red cards" because it's a 2D plot, but we actually used 5 parameters: position, yellowCards, yellowReds, redCards and number of games. So this graph doesn't really represent how our data has been clustered.
This is only to check if some clustering has ben done. Here we don't really see two distincts clusters. It looks like more random coloring ! :x
Now, let's add the result to each player:
End of explanation
score = silhouette_score(x, model.labels_)
score
Explanation: So, do we have any new information ? What can we conclude of this ?
We can use the "silhouette score", which is a metric showing if the two clusters are well separated. It it's equals to 1, the clusters are perfectly separated, and if it's 0, the clustering doesn't make any sense.
End of explanation
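One caveat worth flagging (an added suggestion, not part of the original analysis): the features sit on very different scales (games is in the hundreds while skinColor lies between 0 and 1), and KMeans is distance-based, so the largest-scale column tends to dominate the clustering. A quick, hypothetical re-run with standardized features would look like this:
from sklearn.preprocessing import StandardScaler

x_scaled = StandardScaler().fit_transform(x)
model_scaled = KMeans(n_clusters=2).fit(x_scaled)
print(silhouette_score(x_scaled, model_scaled.labels_))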
x_noSkinColor = x.drop(['skinColor'], 1)
model = KMeans(n_clusters=2)
model = model.fit(x_noSkinColor)
score_noSkinColor = silhouette_score(x_noSkinColor, model.labels_)
score_noSkinColor
score_noSkinColor / score
Explanation: We got a silhouette score of 58%, which is honestly not enough to predict precisely the skin color of new players. A value closer to +1 would have indicated with higher confidence a difference between the clusters. 60% is enough to distinguish the two clusters but, still, we cannot rely on this model.
Let's try to remove features iterately, starting with skin color.
End of explanation
x_noPosition = x.drop(['position'], 1)
model = KMeans(n_clusters=2)
model = model.fit(x_noPosition)
score_noPosition= silhouette_score(x_noPosition, model.labels_)
score_noPosition
score_noPosition / score
Explanation: Seems like removing skin color from the input didn't change anything for the clustering performance !
Let's do this with removing another parameter: position.
End of explanation
x_noGameNumber = x.drop(['games'], 1)
model = KMeans(n_clusters=2)
model = model.fit(x_noGameNumber)
score_noGameNumber = silhouette_score(x_noGameNumber, model.labels_)
score_noGameNumber
score_noGameNumber / score
Explanation: Player position doesn't have much impact either. We can try to remove the number of games, but it won't make sense: some player will have an absolute higher number of cards, only because they played a lot more games. But we will lost this information.
End of explanation |
4,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: some unexpected errors are present
Step2: executing the same codes again removes the errors, not sure why!!
Loading pickled data as instructed in the TEMPLATE Jupyter Notebook
Step3: Step 1
Step4: Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step5: Create a histogram that depicts the overall dataset distribution for the training set.
Step6: Step 2
Step7: Question 1
Describe how you preprocessed the data. Why did you choose that technique?
Answer
Step8: Question 3
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow
from the classroom.
Answer
Step9: Question 4
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer
Step10: Question 6
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.
Answer
Step11: Question 8
Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example | Python Code:
# Imports all libraries required
import os
import cv2
import csv
import time
import pickle
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
from pylab import rcParams
from skimage import transform
from sklearn.utils import shuffle
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
from tensorflow.contrib.layers import flatten
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
%matplotlib inline
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
MY Testing Environments
1) CPU: Intel Core i7-7700K (4.6Ghz OC) 4 CPU CORES, 8 threads
2) RAM: 16GB DDR4
3) GPU: nVidia GeForce GTX 970 4GB GGDR5
4) OS: WINDOWS 10 PRO 64bit
5) nVidia Software: Nvidia CUDA and cuDNN V 5.1
Step 0: Load The Data
import all necessary libraries in one go
End of explanation
# Imports all libraries required
import os
import cv2
import csv
import time
import pickle
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
from pylab import rcParams
from skimage import transform
from sklearn.utils import shuffle
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
from tensorflow.contrib.layers import flatten
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
%matplotlib inline
Explanation: some unexpected errors are present
End of explanation
# Load pickled data
training_file = 'train.p'
validating_file = 'valid.p'
testing_file = 'test.p'
with open(training_file, mode='rb') as f:train = pickle.load(f)
with open(validating_file, mode='rb') as f:valid = pickle.load(f)
with open(testing_file, mode='rb') as f:test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
Explanation: executing the same codes again removes the errors, not sure why!!
Loading pickled data as instructed in the TEMPLATE Jupyter Notebook
End of explanation
### Replace each question mark with the appropriate value.
# TODO: Number of training examples
n_train = len(X_train)
n_valid = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_train))
print("Number of training examples =", n_train)
print("Number of validating examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 2D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below.
End of explanation
### Data exploration visualization
fig = plt.figure(figsize=(15, 5))
image_seq = np.random.randint(1,len(X_train),10)
# Load image labels from csv
label_csv = csv.reader(open('signnames.csv', 'r'))
label_names = []
for row in label_csv:
label_names.append(row[1])
label_names.pop(0)
for ind,val in enumerate(image_seq):
img = fig.add_subplot(2,5,ind+1)
plt.imshow(X_train[val-1])
#Add corresponding label
img.set_xlabel("{0} ({1})".format(y_train[val-1], label_names[y_train[val-1]]))
#Remove the axis ticks
img.set_xticks([])
img.set_yticks([])
plt.show()
Explanation: Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
Display randomly 10 images from the training set.
End of explanation
# A= unique B = counts
A, B = np.unique(y_train, return_counts=True)
fig = plt.figure(figsize=(15,10))
plt.bar(A, B, color='green')
label = [label for label in label_names]
plt.xticks(np.arange(0.5,n_classes+0.5), label, rotation=45,ha='right')
plt.ylabel('Frequency')
plt.title('Training Data Distribution')
plt.show()
Explanation: Create a histogram that depicts the overall dataset distribution for the training set.
End of explanation
def preprocess(X):
# Normalize to range 0-1
X = (X - X.mean())/(np.max(X) - np.min(X))
    # grayscale conversion; note the weights below follow BGR channel order (as the
    # trailing comment says), while the pickled images are typically RGB, so the R/B
    # weights end up swapped, which still yields a usable grayscale
    X = 0.114 * X[...,0] + 0.587 * X[...,1] + 0.299 * X[...,2] # BGR->Gray
return X
X_train = preprocess(X_train)
X_valid = preprocess(X_valid)
X_test = preprocess(X_test)
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem:
Neural network architecture
Play around preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
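# Optional sketch (not used in this submission): the notes above also suggest generating
# extra "fake" training data. One hedged way to do that is to randomly rotate existing
# images with skimage.transform; the rotation range and copy count here are illustrative
# assumptions, not tuned values.
def augment_batch(images, labels, n_copies=1, max_angle=10.0):
    aug_images, aug_labels = [], []
    for img, lbl in zip(images, labels):
        for _ in range(n_copies):
            angle = np.random.uniform(-max_angle, max_angle)
            aug_images.append(transform.rotate(img, angle, mode='edge'))
            aug_labels.append(lbl)
    return np.array(aug_images), np.array(aug_labels)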
layer_depth = {
'layer_1': 12,
'layer_2': 32,
'fully_connected_1': 512,
'fully_connected_2': 256,
'fully_connected_3': 128,
'out': n_classes,
}
weights = {
'layer_1': tf.Variable(tf.truncated_normal(
[5, 5, 1, layer_depth['layer_1']], mean=0, stddev=0.1)),
'layer_2': tf.Variable(tf.truncated_normal(
[5, 5, layer_depth['layer_1'], layer_depth['layer_2']], mean=0, stddev=0.1)),
'fully_connected_1': tf.Variable(tf.truncated_normal(
[5*5*layer_depth['layer_2'], layer_depth['fully_connected_1']])),
'fully_connected_2': tf.Variable(tf.truncated_normal(
[layer_depth['fully_connected_1'], layer_depth['fully_connected_2']], mean=0, stddev=0.1)),
'fully_connected_3': tf.Variable(tf.truncated_normal(
[layer_depth['fully_connected_2'], layer_depth['fully_connected_3']], mean=0, stddev=0.1)),
'out': tf.Variable(tf.truncated_normal(
[layer_depth['fully_connected_3'], layer_depth['out']], mean=0, stddev=0.1))
}
biases = {
'layer_1': tf.Variable(tf.zeros(layer_depth['layer_1'])),
'layer_2': tf.Variable(tf.zeros(layer_depth['layer_2'])),
'fully_connected_1': tf.Variable(tf.zeros(layer_depth['fully_connected_1'])),
'fully_connected_2': tf.Variable(tf.zeros(layer_depth['fully_connected_2'])),
'fully_connected_3': tf.Variable(tf.zeros(layer_depth['fully_connected_3'])),
'out': tf.Variable(tf.zeros(layer_depth['out']))
}
# Define 2 more functions
def conv2d(x, W, b, strides=1):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding = 'VALID')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
return tf.nn.max_pool(x, ksize=[1, k, k, 1],
strides=[1, k, k, 1],
padding='VALID')
# Define Architecture
keep_prob = tf.placeholder(tf.float32)
def LeNet(x):
x = tf.expand_dims(x, -1)
conv1 = conv2d(x, weights['layer_1'], biases['layer_1'])
conv1 = tf.nn.relu(conv1)
conv1 = maxpool2d(conv1)
#________________________________________________________________________________________
conv2 = conv2d(conv1, weights['layer_2'], biases['layer_2'])
conv2 = tf.nn.relu(conv2)
conv2 = maxpool2d(conv2)
#________________________________________________________________________________________
fc0 = flatten(conv2)
fc1 = tf.add(tf.matmul(fc0, weights['fully_connected_1']), biases['fully_connected_1'])
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob=keep_prob)
#________________________________________________________________________________________
fc2 = tf.add(tf.matmul(fc1, weights['fully_connected_2']), biases['fully_connected_2'])
fc2 = tf.nn.relu(fc2)
fc2 = tf.nn.dropout(fc2, keep_prob=keep_prob)
#________________________________________________________________________________________
fc3 = tf.add(tf.matmul(fc2, weights['fully_connected_3']), biases['fully_connected_3'])
fc3 = tf.nn.relu(fc3)
fc3 = tf.nn.dropout(fc3, keep_prob=keep_prob)
logits = tf.add(tf.matmul(fc3, weights['out']), biases['out'])
return logits
Explanation: Question 1
Describe how you preprocessed the data. Why did you choose that technique?
Answer:
I encapsulate 2 operations within a single 'preprocess' function here.
The first operation is normalisation using the simple formula X = (X - X.mean())/(np.max(X) - np.min(X)). The purpose of normalisation is to help the gradient descent optimizer (Adam Optimizer) converge faster by restricting the range of feature values.
The 2nd operation is grayscale conversion, which is supposed to be detrimental to model performance; however, I was at a loss to explain why grayscale conversion turned out to be conducive to the test accuracy in the end.
Question 2
Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?
Answer:
Since I have 3 separate files for training, validation and testing respectively, it makes sense to avoid using training/testing split function here. train.p is used for training exclusively, valid.p is earmarked for validation during the training process, while the test.p is strictly reserved for testing purpose once the training is completed.
End of explanation
saver = tf.train.Saver()
# Add placeholder for input and data labels
x = tf.placeholder(tf.float32, (None, 32, 32))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
learning_rate = 0.0005
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# training Without regulariztaion
training_operation = optimizer.minimize(loss_operation)
# Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_tunning = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, batch_size):
end = offset + batch_size
batch_x, batch_y = X_data[offset:end], y_data[offset:end]
accuracy = sess.run(accuracy_tunning, feed_dict={x: batch_x, y: batch_y, keep_prob: 1})
total_accuracy += (accuracy * len(batch_x))
accuracy = total_accuracy / num_examples
return accuracy
epochs = 100
batch_size = 64
# Run Training and save model
total_time = time.time()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print('Number of training samples: {}'.format(num_examples))
print('Training in progress......\n\n')
for i in range(epochs):
start_time = time.time()
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, batch_size):
end = offset + batch_size
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob:0.6})
validation_accuracy = evaluate(X_valid, y_valid)
validation_percent = validation_accuracy*100
print("\nEPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}%".format(validation_percent))
end_time = time.time() - start_time
print("Time taken for the last epoch: %.3f seconds" %end_time)
test_accuracy = evaluate(X_test, y_test)
test_percent = test_accuracy*100
print("\n\n\nAccuracy compared to test set = {:.3f}%".format(test_percent))
final_time = time.time() - total_time
print("Total Training: %.3f seconds" %final_time)
saver.save(sess, '.\model')
print('Model successfully Saved to current directory!')
# reload and test the model
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
test_percent = test_accuracy*100
print("Test Accuracy = {:.3f}%".format(test_percent))
Explanation: Question 3
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow
from the classroom.
Answer:
The model is based on the LeNet Lab, with a different number of convolution layers and fully connected layers.
I have tried some other configurations such as one with just 2 full connected layers, however, I was unable to record any noticeable performance improvement.
Several dropout values were tested from 0.3 all the way to 0.9, 0.6 seemed to fit the bill.
Layer 1 : 5x5 Filter with depth 12
Layer 2 : 5x5 Filter with depth 32
Fully Connected Layer A : n = 512
Dropout Layer : Dropout Value = 0.6
Fully Connected Layer B : n = 256
Dropout Layer : Dropout Value = 0.6
Fully Connected Layer C: n = 128
Dropout Layer : Dropout Value = 0.6
End of explanation
# load up new test images
df = pd.read_csv('signnames.csv')
import glob
images_resized = []
images = []
for j in glob.glob('./extra_German_sign/*.jpg'):
image = plt.imread(j)
image_resized = cv2.resize(image, (32, 32), interpolation=cv2.INTER_AREA)
images_resized.append(image_resized)
image_preprocessed = preprocess(image_resized)
images.append(image_preprocessed[np.newaxis,...])
images = np.vstack(images)
with tf.Session() as sess:
new_saver = tf.train.import_meta_graph('model.meta')
new_saver.restore(sess, tf.train.latest_checkpoint('./'))
out = sess.run(tf.argmax(logits, 1), feed_dict={x: images, keep_prob: 1})
# Plot Images with prediction
new_label_list = [np.argmax(row) for row in out]
plt.figure(figsize=(12,12))
for i in range(0,images.shape[0]):
with sns.axes_style("white"):
plt.subplot(4, 4, i+1)
plt.imshow(np.squeeze(images_resized[i]), cmap='gray')
plt.tick_params(axis='both', which='both', bottom='on', top='on', labelbottom='off', right='off', left='off', labelleft='off')
plt.xlabel(df.loc[out[i]].SignName)
plt.tight_layout()
Explanation: Question 4
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer:
The reason why I chose AdamOptimizer is that it is both quicker and more accurate than a standard stochastic gradient descent optimizer. In addition, I decided to set a relatively large epoch count of 100 and a small batch size of 64, as I could rely on my powerful nVidia GTX970 GPU to complete the task efficiently. A rather ambitious learning rate of 0.0005 was also chosen. The code responsible for training is in cell 19, while preparations for the training took place in cells 14 to 18.
At the end of the training, the training model was saved to three files with prefix 'model'.
The validation accuracy you observed in the result sheet above referred to the accuracy of the training model when it was compared with the data located in a separate file called 'valid.p'. No training-testing data splitting was required.
The test accuracy at the end of the result sheet above referred to the accuracy of the training model when it was compared with the data located in a separate file called 'test.p'
The final test accuracy stood between 95% and 96% after several tests, which was quite satisfactory I thought. However, the validation accuracy kinda plateaued at around epoch 10! Which is good to know!
Question 5
What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.
Answer:
The Convnet LeNet Lab template in the nanodegree course is a solid convolutional neural network upon which my implementation is largely based. I tried various configurations: 1 conv + 1 FC layers, 1 conv + 2 FC layers, 2 conv + 1 FC layers, 2 conv + 2 FC layers, 3 conv + 2 FC (this one didn't work at all due to dimension errors) and 2 conv + 3 FC layers, which was the one I chose for the final showdown.
1 conv + 1 FC, 1 conv + 2 FC and 2 conv + 1 FC all had significantly worse accuracies than 2 conv + 2 FC and 2 conv + 3 FC.
The 2 conv + 2 FC layers configuration has a very similar performance to that of the 2 conv + 3 FC layers configuration. In hindsight, maybe I should've chosen the 2 conv + 2 FC one as the GPU would have had less work to do, but I just wanted to stretch its muscles a bit, lol.
Step 3: Test a Model on New Images
Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
Testing on German Traffic Signs
End of explanation
k = 5
with tf.Session() as sess:
new_saver = tf.train.import_meta_graph('model.meta')
new_saver.restore(sess, tf.train.latest_checkpoint('./'))
out_prob = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=k), feed_dict={x: images, keep_prob: 1})
plt.rcParams['figure.figsize'] = (15, 30)
image_indices = (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14)
#image_indices = np.arange(0, len(images_resized))
for i, im in enumerate(image_indices):
with sns.axes_style("white"):
plt.subplot(len(image_indices), 2, (2*i)+1)
plt.imshow(np.squeeze(images_resized[im]), cmap='gray')
plt.axis('on')
plt.xlabel(df.loc[out[i]].SignName)
plt.subplot(len(image_indices) ,2, (2*i)+2)
plt.barh(np.arange(k), out_prob.values[im])
plt.yticks(np.arange(k)+0.3, df.loc[out_prob.indices[im]].SignName)
plt.tight_layout()
Explanation: Question 6
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.
Answer:
I can think of several problems that can make life extremely difficult for the model.
1) The orientation of the traffic signs.
2) The actual clarity of the traffic signs.
3) The angles and perspective at which the photos were taken.
4) Multiple signs in one image.
Question 7
Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.
NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.
Answer:
The model correctly guessed the signs in the 1st, 8th, 12th, 13th and 14th (partially) images, giving an accuracy of 5/15 = 33.3%, which is significantly worse than the test accuracy obtained earlier! Please correct me if I'm wrong as I don't drive due to medical conditions.
End of explanation
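# Hypothetical sketch of the manual accuracy check described above, reusing `df`
# (signnames.csv) and `out` (argmax predictions) from the prediction cell.
# The hand-labeled class ids below are placeholders (assumptions), not the author's
# actual ground truth for the downloaded photos.
true_class_ids = np.array([22, 12, 25, 16, 9])
predictions = np.array(out[:len(true_class_ids)])
manual_accuracy = np.mean(predictions == true_class_ids)
for pred, true in zip(predictions, true_class_ids):
    print('predicted:', df.loc[pred].SignName, '| actual:', df.loc[true].SignName)
print('Manual accuracy: {:.1%}'.format(manual_accuracy))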
'''
plot_loss_accuracy(batches, loss_batch, train_acc_batch, valid_acc_batch)
'''
'''
if not offset % 50:
# Calculate Training and Validation accuracy
training_accuracy = sess.run(accuracy_tunning, feed_dict={x: X_train,
y: y_train, keep_prob: 0.8 })
validation_accuracy = sess.run(accuracy_tunning, feed_dict={x: X_valid,
y: y_valid, keep_prob: 1})
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(50 + previous_batch)
loss_batch.append(c)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
'''
Explanation: Question 8
Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.
Take this numpy array as an example:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
The model is most certain about the following:
(predicted: bumpy road, actual: bumpy road) correctly classified
(predicted: priority road, actual:slope of gradient of 8%)
understandably a very tough sign to classify as not many roads share the same slope. The correct label is NOT in the top 5 most likely classifications.
(predicted:Road work, actual:Road work)
Correctly classified.
(predicted: dangerous curve to the right, actual:motor vehicle prohibited)
no idea why the model is so far off. The correct label is NOT in the top 5 most likely classifications.
(predicted: End of no passing, actual: diversion) The correct label is NOT in the top 5 most likely classifications.
(predicted: no passing, actual: overtaking allowed)
grayscale conversion is definitely responsible for the wrong prediction here. The correct label is NOT in the top 5 most likely classifications.
(predicted:right-of-way at the next intersection, actual: speed limit of 60km/h) The correct label is NOT in the top 5 most likely classifications.
(predicted: speed limit(50km/h), actual: motor vehicle prohibited)
This prediction makes absolutely no sense. The correct label is NOT in the top 5 most likely classifications.
(predicted: no entry, actual: no entry) correctly classified
(predicted: priority road, actual: priority road) correctly classified
(predicted: road narrow on the right, actual: road narrow on both side) partially correctly classified, The correct label is NOT in the top 5 most likely classifications.
(predicted: keep right, actual: 30km/h zone) The correct label is NOT in the top 5 most likely classifications.
The model is most uncertain about:
(predicted:bicyle crossing, actual:wild animal crossing)
incorrectly classified, however, the correct label is in the top 5 most likely classifications.
(predicted: roundabout mandatory, actual: bicycle crossing)
The correct label is NOT in the top 5 most likely classifications.
(predicted: speed limits(50km), actual: roundabout mandatory)
incorrectly classified, however, the correct label is in the top 5 most likely classifications.
THANK YOU FOR VIEWING.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
reference:
https://www.tensorflow.org/
https://github.com/
https://developer.nvidia.com/
graveyard functions (please ignore)
End of explanation |
4,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NLTK experiments
based on NLTK with Python 3 for Natural Language Processing by Sentdex
Step1: Tokenizing
based on
- https
Step2: Stop words
sources
video
Step3: Stemming
source
video
Step4: Part of Speech Tagging
source
video
```
POS tag list
Step6: Chunking
source
video | Python Code:
import nltk
from nltk import tokenize
# TODO: we don't really want to download packages each time we launch this script
# so it would be better to check whether the packages are already present, or download them on demand
# nltk.download()
Explanation: NLTK experiments
based on NLTK with Python 3 for Natural Language Processing by Sentdex
End of explanation
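# A minimal sketch of the "download on demand" idea from the TODO above: look each
# resource up first and only fetch the ones that are missing. The resource paths listed
# here are assumptions based on what this notebook uses.
import nltk

def ensure_nltk_resources(resources=('tokenizers/punkt',
                                     'corpora/stopwords',
                                     'corpora/state_union')):
    for path in resources:
        try:
            nltk.data.find(path)                # already installed, nothing to do
        except LookupError:
            nltk.download(path.split('/')[-1])  # fetch only the missing package

ensure_nltk_resources()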
example = 'Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. ' \
'The sky is pinkish-blue. You shouldn\'t eat cardboard.'
tokenize.sent_tokenize(example)
tokenize.word_tokenize(example)
Explanation: Tokenizing
based on
- https://pythonprogramming.net/tokenizing-words-sentences-nltk-tutorial/
- youtube
End of explanation
from nltk import corpus, tokenize
example_sentence = 'This is a sample sentence, showing off the stop words filtration.'
stop_words = set(corpus.stopwords.words('english'))
words = tokenize.word_tokenize(example_sentence)
filtered_sentence = [w for w in words if w not in stop_words]
print(filtered_sentence)
Explanation: Stop words
sources
video
End of explanation
from nltk import stem, tokenize
ps = stem.PorterStemmer()
example_words = ['python', 'pythoner', 'pythoning', 'pythoned', 'pythonly', 'pythonic', 'pythonista']
['{} --> {}'.format(w, ps.stem(w)) for w in example_words]
example_text = 'It is important to by very pythonly while you are pythoning with python. '\
'All pythoners have pythoned poorly at least once.'
['{} --> {}'.format(w, ps.stem(w)) for w in tokenize.word_tokenize(example_text)]
Explanation: Stemming
source
video
End of explanation
import nltk
from nltk import corpus, tokenize
train_text = corpus.state_union.raw('2005-GWBush.txt')
sample_text = corpus.state_union.raw('2006-GWBush.txt')
# Map tag to description, useful for annotations
tag_to_description = {
'CC': 'coordinating conjunction',
'CD': 'cardinal digit',
'DT': 'determiner',
'EX': 'existential there (like: "there is" ... think of it like "there exists")',
'FW': 'foreign word',
'IN': 'preposition/subordinating conjunction',
'JJ': 'adjective "big"',
'JJR': 'adjective, comparative "bigger"',
'JJS': 'adjective, superlative "biggest"',
'LS': 'list marker 1)',
'MD': 'modal could, will',
'NN': 'noun, singular "desk"',
'NNS': 'noun plural "desks"',
'NNP': 'proper noun, singular "Harrison"',
'NNPS': 'proper noun, plural "Americans"',
    'PDT': 'predeterminer "all the kids"',
'POS': 'possessive ending parent"s',
    'PRP': 'personal pronoun I, he, she',
'PRP$': 'possessive pronoun my, his, hers',
'RB': 'adverb very, silently,',
'RBR': 'adverb, comparative better',
'RBS': 'adverb, superlative best',
'RP': 'particle give up',
'TO': 'to go "to" the store.',
'UH': 'interjection errrrrrrrm',
'VB': 'verb, base form take',
'VBD': 'verb, past tense took',
'VBG': 'verb, gerund/present participle taking',
'VBN': 'verb, past participle taken',
'VBP': 'verb, sing. present, non-3d take',
'VBZ': 'verb, 3rd person sing. present takes',
'WDT': 'wh-determiner which',
'WP': 'wh-pronoun who, what',
'WP$': 'possessive wh-pronoun whose',
    'WRB': 'wh-adverb where, when',
}
from collections import Counter
from operator import itemgetter, attrgetter
custom_sent_tokenizer = tokenize.PunktSentenceTokenizer(train_text)
tokenized_text = custom_sent_tokenizer.tokenize(sample_text)
total_counts = Counter()
for i in tokenized_text[:5]:
words = nltk.word_tokenize(i)
tagged = nltk.pos_tag(words)
print('# Sentence:')
print(i)
print('# Words:')
print(words)
print('# Tagged:')
print(tagged)
counts = Counter(tag for word, tag in tagged)
total_counts += counts
print('\n')
total = sum(total_counts.values())
freq = dict((word, float(count) / total) for word, count in sorted(total_counts.items()))
print('# Counts:')
print('\n\n-----\n\n'.join(['{}\n[{}] {}'.format(f, tag, tag_to_description.get(tag, tag)) for tag, f in sorted(freq.items(), key=itemgetter(1), reverse=True)]))
Explanation: Part of Speech Tagging
source
video
```
POS tag list:
CC coordinating conjunction
CD cardinal digit
DT determiner
EX existential there (like: "there is" ... think of it like "there exists")
FW foreign word
IN preposition/subordinating conjunction
JJ adjective 'big'
JJR adjective, comparative 'bigger'
JJS adjective, superlative 'biggest'
LS list marker 1)
MD modal could, will
NN noun, singular 'desk'
NNS noun plural 'desks'
NNP proper noun, singular 'Harrison'
NNPS proper noun, plural 'Americans'
PDT predeterminer 'all the kids'
POS possessive ending parent's
PRP personal pronoundß I, he, she
PRP$ possessive pronoun my, his, hers
RB adverb very, silently,
RBR adverb, comparative better
RBS adverb, superlative best
RP particle give up
TO to go 'to' the store.
UH interjection errrrrrrrm
VB verb, base form take
VBD verb, past tense took
VBG verb, gerund/present participle taking
VBN verb, past participle taken
VBP verb, sing. present, non-3d take
VBZ verb, 3rd person sing. present takes
WDT wh-determiner which
WP wh-pronoun who, what
WP$ possessive wh-pronoun whose
WRB wh-adverb where, when
```
End of explanation
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
chunkParser = nltk.RegexpParser(chunkGram)
for i in tokenized_text[:5]:
words = nltk.word_tokenize(i)
tagged = nltk.pos_tag(words)
chunked = chunkParser.parse(tagged)
# TODO: should fix it
    # I'm running Jupyter inside Docker, so maybe that is why the Tk-based draw() window doesn't open :(
# I've found this one https://stackoverflow.com/questions/31779707/how-do-you-make-nltk-draw-trees-that-are-inline-in-ipython-jupyter
    # but haven't checked it yet; printing the chunks as text is a headless-friendly fallback.
chunked.draw()
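# A hedged, headless-friendly alternative to chunked.draw() (useful inside Docker):
# print the matched chunks as text instead of opening a Tk window. Reuses chunkParser
# and tokenized_text from above.
for sentence in tokenized_text[:5]:
    tagged_words = nltk.pos_tag(nltk.word_tokenize(sentence))
    chunk_tree = chunkParser.parse(tagged_words)
    for subtree in chunk_tree.subtrees(filter=lambda t: t.label() == 'Chunk'):
        print(subtree)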
Explanation: Chunking
source
video
End of explanation |
4,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PUMP IT UP
Introduction
Step2: Data Analysis
Step3: cols_values_counts_dataframe
As we can see in above describe output, we seem to have lots of categorical values so let start exploring them a bit.
Lets start taking into believe everything is a Categorical Columns and check their data
Step4: Example of how np-log transforms data
>>> np.log([0.001, 0.01, 0.1, 1, 10, 100, 1000])
array([-6.90775528, -4.60517019, -2.30258509, 0. , 2.30258509,
4.60517019, 6.90775528])
As you can see in np-log example, we can learn that when a list of values vary significantly(exponentially) then their logarithms moves linearly. As we(I) feel comfortable in studying linear plot and linear information, we did a np.log to values counts.
Step5: Checking rest of the columns
Step6: cols_categorical_check
Here in this project, cols_categorical_check refers to list of columns for which caution check is considered. Reason for this check is, we would need more data to explain other columns & target cols with respect to it.
Lets consider these columns with more 5% of values as non categorical values and since our problem statement is choosing which category, we will try to minimise the category and see how our performance changes(improves or not)
To begin we will consider that those categories with more than cols_value_count_limit_fraction percentage as the upper limit allowed. Any column with other data will pruged to become some to other information
Step7: All cols_date_numerics, are date & other numeric data which can be made into buckets or reducing precision. Thus we can bound number of categories in data as the more variety of data we have, we need more information specific to each category which all might end with curse of dimensionality.
During pre-processing states we shall do following
TODO
* limiting check experiments on our cols_date_numerics & cols_categorical_check to be under cols_value_count_limit_fraction
Observations & TODO
Most of the data seems categorical
Need to check cols_date_numerics(TODO1)
we shall convert date -> day, month, year, weekday, total_no_of_day_from_reference_point. These splits for two reasons.
Reason1
Step8: Int Transformations
Step10: Text Data Tranformations
For cols_categorical_check, we are going to basic clean action like, lower and upper case issue. Clearning of non ascii values.
Step11: Custom Labeler
Loading Custom Labeler is for the the purpose of reducing categories varieties by ignoring groups with lower frequencies and covering 80%(default) of the original data.
Step12: funder
Step13: Label Encoder
Label Encoder with DefaultDict for quick data transformation
http
Step14: Pickle
Pickle Save
Step15: Feature Selection
Step16: Variance Threshold
To remove all features that are either one or zero (on or off) in more than 80% of the samples.
http
Step17: Select K Best
For regression
Step18: kbest conclusion
Step19: PCA
Step20: PCA
Step21: Saving Processed Data
Step22: Unsupervised Learning
Unsupervised Learning Exploration(Gaussian Process, Neural Nets)
Loading Pre-Processed Data
Step23: Gaussian
Step24: KMeans
Step25: Supervised Learning
Supervised Learning(GBT Trees, Nearest Neighbours, RF, One-vs-One)
Test-Train Split
Step26: GBT Trees
Step27: Nearest Neighbours
Step29: Random Forest
Step30: Model selection Evaluation
Step31: Multi Class
Step32: One Vs One
Step33: One vs Rest
Step34: Multiclass Model Selection
Step35: Parameter tuning
From above analysis we can see that Random Forest CLF performed better than most other and so here we shall optimise it.
Step36: Random Forest
Step37: Checking "clf_rf" RF performance
GBT
Step38: XGBOOST
Submission
Model Selection
Check for which model is performing best and using it.
Check to apply the one-vs-many//one-vs-one wrapper.
Check for 'test_train_split' for which X,y to be used for training | Python Code:
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scripts.tools import game
%matplotlib inline
# %load_ext writeandexecute
plt.style.use('ggplot')
sns.set(color_codes=True)
# seed
np.random.seed(69572)
# import sys
# sys.path = sys.path + ['/Users/sampathkumarm/Desktop/devbox/Sam-DS/Kaggle/datadriven']
import scripts
import imp
imp.reload(scripts)
from scripts.sam_value_counts import sam_dataframe_cols_value_count_analysis, sam_dataframe_markup_value_counts
from scripts.sam_confusion_matrix import sam_plot_confusion_matrix, sam_confusion_maxtrix
import sys
from __future__ import absolute_import
from IPython.core.getipython import get_ipython
from IPython.core.magic import (Magics, magics_class, cell_magic)
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
from markdown import markdown
from IPython.core.display import HTML
from IPython.display import display
@magics_class
class MarkdownMagics(Magics):
@cell_magic
def asmarkdown(self, line, cell):
buffer = StringIO()
stdout = sys.stdout
sys.stdout = buffer
try:
exec(cell, locals(), self.shell.user_ns)
except:
sys.stdout = stdout
raise
sys.stdout = stdout
return HTML("<p>{}</p>".format(markdown(buffer.getvalue(), extensions=['markdown.extensions.extra'])))
return buffer.getvalue() + 'test'
def timer_message(self, start_time):
# print self
time_diff = (now() - start_time).total_seconds()
if time_diff < 0.001:
time_diff = 0
print('\n<pre>In', time_diff, 'Secs</pre>')
else:
print('\n<pre>In', time_diff, 'Secs</pre>')
@cell_magic
def timer(self, line, cell):
import datetime
now = datetime.datetime.now
start_time = now()
buffer = StringIO()
stdout = sys.stdout
sys.stdout = buffer
try:
exec(cell, locals(), self.shell.user_ns)
self.timer_message(start_time)
except:
sys.stdout = stdout
raise
sys.stdout = stdout
return HTML("<p>{}</p>".format(markdown(buffer.getvalue(), extensions=['markdown.extensions.extra'])))
return buffer.getvalue() + 'test'
get_ipython().register_magics(MarkdownMagics)
Explanation: PUMP IT UP
Introduction:
Using the data gathered from Taarifa and the Tanzanian Ministry of Water, can we predict which pumps are functional, which need some repairs, and which don't work at all? Predicting one of these three classes based and a smart understanding of which waterpoints will fail, can improve the maintenance operations and ensure that clean, potable water is available to communities across Tanzania.
This is also an intermediate-level competition by DataDriven! All code & support scripts are in Github Repo
Imports
End of explanation
RAW_X = pd.read_csv('data/traning_set_values.csv', index_col='id')
RAW_y = pd.read_csv('data/training_set_labels.csv', index_col='id')
RAW_TEST_X = pd.read_csv('data/test_set_values.csv', index_col='id')
# proportion of labels available
RAW_y.status_group.value_counts() / RAW_y.size
def check_shape(*dfs):
for df in dfs:
print('Share of Data Frame is', df.shape)
def df_check_stats(*dfs):
    """To print DataFrames Shape & Cols details.

    Input: X, y, TEST_X, ...
    """
stmt = "Data Frame Shape: %1.15s TotColumns: %1.15s ObjectCols: %1.15s"
for df in dfs:
df_shape = str(df.shape)
        obj_cols, all_cols = len(df.dtypes[df.dtypes == 'O']), len(df.dtypes)  # 'O' is the object dtype code
print(stmt % (df_shape, obj_cols, all_cols))
return
print('Shape of RAW_X', RAW_X.shape)
print('Shape of RAW_y', RAW_y.shape)
print('Shape of RAW_TEST_X', RAW_TEST_X.shape)
# ('Shape of RAW_X', (59400, 39))
# ('Shape of RAW_y', (59400, 1))
# ('Shape of RAW_TEST_X', (14850, 39))
for i, col in enumerate(RAW_X.columns):
print('|%d|%s|%d|' % (i, col, len(RAW_X[col].value_counts())))
# integer colums
cols_ints = '''amount_tsh
gps_height
longitude
latitude
num_private
region_code
district_code
population
construction_year'''.splitlines()
# bool
cols_bool = 'public_meeting permit'.split()
# date
cols_date = ['date_recorded']
print('INT COlS: ', len(cols_ints))
print('BOOL COLS:', len(cols_bool))
print('Date COLS:', len(cols_date))
def df_check_stats(*dfs):
for df in dfs:
df_shape = str(df.shape)
        obj_cols, all_cols = len(df.dtypes[df.dtypes == 'O']), len(df.dtypes)  # use df (not the global X) and dtype code 'O'
print("Share of Data Frame: %1.15s All Columns: %1.15s Object Cols: %1.15s" % (df_shape, obj_cols, all_cols))
def show_object_dtypes(df,others=True):
dtype = object
if others:
return df.dtypes[df.dtypes == dtype]
else:
return df.dtypes[df.dtypes != dtype]
show_object_dtypes(RAW_TEST_X, True)
show_object_dtypes(RAW_TEST_X, False)
Explanation: Data Analysis
End of explanation
columns = RAW_X.columns
values_counts_bag = [len(RAW_X[column].value_counts()) for column in columns]
_ = sns.distplot(values_counts_bag, hist=True, kde=False,)
Explanation: cols_values_counts_dataframe
As we can see in the describe output above, we seem to have lots of categorical values, so let's start exploring them a bit.
Let's begin by treating every column as categorical and checking its data.
End of explanation
cols_values_counts_dataframe = pd.DataFrame(np.log(values_counts_bag), index=columns, columns=['Value Counts'])
print('Values Counts:', values_counts_bag)
print('\nLog of Values Counts:', cols_values_counts_dataframe.T.values)
_ = sns.distplot(cols_values_counts_dataframe.T.values, hist=True, kde=False,)
plt.title('Historgram of Object Feature`s (log2 of) Unique Values counts')
plt.xlabel('Features')
cols_values_counts_dataframe.plot(kind='barh', figsize=(12, 12))
_ = plt.plot((2, 2), (0, 38))
_ = plt.plot((4, 4), (0, 38), '-g')
_ = plt.plot((6, 6), (0, 38), '-r')
_ = plt.plot((8, 8), (0, 38), '-y')
print('We seem to have some special categories where value counts are high.')
plt.title('Features Values Counts for comparision')
plt.xlabel ('Log of Unique Values')
sam_dataframe_cols_value_count_analysis(RAW_X)
Explanation: Example of how np-log transforms data
>>> np.log([0.001, 0.01, 0.1, 1, 10, 100, 1000])
array([-6.90775528, -4.60517019, -2.30258509, 0. , 2.30258509,
4.60517019, 6.90775528])
As the np.log example shows, when a list of values varies exponentially, their logarithms move roughly linearly. Since linear plots are easier to read, we applied np.log to the value counts.
End of explanation
cols_value_count_limit_fraction = 0.01
cols_value_count_limit_log_value = np.log(RAW_X.shape[0] * cols_value_count_limit_fraction)
print('Total Number of Records:', RAW_X.shape[0], '- Log val is:', np.log(RAW_X.shape[0]))
print('%s percent of Number of Records:' % (cols_value_count_limit_fraction * 100),\
RAW_X.shape[0] * cols_value_count_limit_fraction,\
' - Log val is:', cols_value_count_limit_log_value)
Explanation: Checking rest of the columns
End of explanation
cols_non_categorical = show_object_dtypes(RAW_X, True).index.tolist()
cols_date_numerics = show_object_dtypes(RAW_X, True).index.tolist()
cols_categorical_check = []
for col, vc in cols_values_counts_dataframe.iterrows():
if col in cols_non_categorical:
if float(vc) > cols_value_count_limit_log_value:
cols_categorical_check.append(col)
print('Columns we need to moderate are:', cols_categorical_check)
Explanation: cols_categorical_check
Here in this project, cols_categorical_check refers to the list of columns that need a cautious check: we would need more data to explain the other columns and the target column with respect to them.
Let's treat columns whose number of distinct values exceeds 5% of the records as non-categorical. Since our problem statement is choosing a category, we will try to reduce the number of categories and see how performance changes (improves or not).
To begin, we will use cols_value_count_limit_fraction as the upper limit allowed; categories beyond it will be purged, i.e. merged into some other bucket.
End of explanation
# Reloading the data
RAW_X = pd.read_csv('data/traning_set_values.csv', index_col='id')
RAW_y = pd.read_csv('data/training_set_labels.csv', index_col='id')
RAW_TEST_X = pd.read_csv('data/test_set_values.csv', index_col='id')
Explanation: All cols_date_numerics are dates and other numeric data that can be bucketed or reduced in precision. This bounds the number of categories in the data: the more variety we have, the more information we need per category, which can end in the curse of dimensionality.
During pre-processing states we shall do following
TODO
* limiting check experiments on our cols_date_numerics & cols_categorical_check to be under cols_value_count_limit_fraction
Observations & TODO
Most of the data seems categorical
Need to check cols_date_numerics(TODO1)
we shall convert date -> day, month, year, weekday, total_no_of_day_from_reference_point. These splits for two reasons.
Reason1: It might be possible that in some location all specific set of complaints are registered on a start/mid/at end of the month. It might also be possible that they are registered on every Monday or so.
Reason2: Taking as much information as possible.
Need to check cols_categorical_check(TODO2)
longitutude & latitude seem to hold (0,0) instead of NULL which is acting as outlier for now
Following pairs looks closesly related - cleanup (TODO3)
quantity & quantity_group
quality_group & water_quality
extraction_type, extraction_type_class & extraction_type_group
Other - cleanup (TODO4)
recorded_by, seems to hold only a single value
population & amount_tsh, values are for some given as zero
Data Processing
Generic Transformations
Num/Bool Tranformations
date_recorded to Int
public_meeting to Int
permit to Int
longitude to Float(less precision)
latitude to Float(less precision)
Precision Description of Longititude and Latitude is available here at below link
End of explanation
import datetime
strptime = datetime.datetime.strptime
DATE_FORMAT = "%Y-%m-%d"
REFERENCE_DATE_POINT = strptime('2000-01-01', DATE_FORMAT)
if RAW_X.date_recorded.dtype == 'O':
# convert it to datetime format
f = lambda x: strptime(str(x), DATE_FORMAT)
RAW_X.date_recorded = RAW_X.date_recorded.apply(f)
RAW_TEST_X.date_recorded = RAW_TEST_X.date_recorded.apply(f)
# week day
f = lambda x: x.weekday()
RAW_X['date_recorded_weekday'] = RAW_X.date_recorded.apply(f)
RAW_TEST_X['date_recorded_weekday'] = RAW_TEST_X.date_recorded.apply(f)
# date
f = lambda x: x.day
RAW_X['date_recorded_date'] = RAW_X.date_recorded.apply(f)
RAW_TEST_X['date_recorded_date'] = RAW_TEST_X.date_recorded.apply(f)
# month
f = lambda x: x.month
RAW_X['date_recorded_month'] = RAW_X.date_recorded.apply(f)
RAW_TEST_X['date_recorded_month'] = RAW_TEST_X.date_recorded.apply(f)
# year
f = lambda x: x.year
RAW_X['date_recorded_year'] = RAW_X.date_recorded.apply(f)
RAW_TEST_X['date_recorded_year'] = RAW_TEST_X.date_recorded.apply(f)
# total days
f = lambda x: (x - REFERENCE_DATE_POINT).days
RAW_X.date_recorded = RAW_X.date_recorded.apply(f)
RAW_TEST_X.date_recorded = RAW_TEST_X.date_recorded.apply(f)
# Longitude & Latitude -- zero values fix
# Filling Missing/OUTLIAR Values
_ = np.mean(RAW_X[u'latitude'][RAW_X.latitude < -1.0].values)
if not RAW_X.loc[RAW_X.latitude >= -1.0, u'latitude'].empty:
RAW_X.loc[RAW_X.latitude >= -1.0, u'latitude'] = _
RAW_TEST_X.loc[RAW_TEST_X.latitude >= -1.0, u'latitude'] = _
# Filling Missing/OUTLIAR Values
_ = np.mean(RAW_X[u'longitude'][RAW_X[u'longitude'] > 1.0].values)
if not RAW_X.loc[RAW_X[u'longitude'] <= 1.0, u'longitude'].empty:
RAW_X.loc[RAW_X[u'longitude'] <= 1.0, u'longitude'] = _
RAW_TEST_X.loc[RAW_TEST_X[u'longitude'] <= 1.0, u'longitude'] = _
def f(x):
if x is True:
return 1
elif x is False:
return 2
else:
return 3
if (RAW_X.public_meeting.dtype != 'bool') and (RAW_X.permit.dtype != 'bool'):
# public_meeting
RAW_X.public_meeting = RAW_X.public_meeting.apply(f)
RAW_TEST_X.public_meeting = RAW_TEST_X.public_meeting.apply(f)
# permit
RAW_X.permit = RAW_X.permit.apply(f)
RAW_TEST_X.permit = RAW_TEST_X.permit.apply(f)
print('Dtype of public_meetings & permit:',RAW_X.public_meeting.dtype, RAW_X.permit.dtype)
print('')
# checking
if list(RAW_TEST_X.dtypes[RAW_TEST_X.dtypes != RAW_X.dtypes]):
raise Exception('RAW_X.dtypes and RAW_TEST_X.dtypes are not in Sync')
else:
print('All in Good Shape')
show_object_dtypes(RAW_X, True)
show_object_dtypes(RAW_X, False)
# Reducing geo location precision to ~111 meters (0.001 degree)
LONG_LAT_PRECISION = 0.001
# Reducing Precision of Lat.
if RAW_X.longitude.mean() < 50:
RAW_X.longitude = RAW_X.longitude // LONG_LAT_PRECISION
RAW_X.latitude = RAW_X.latitude // LONG_LAT_PRECISION
RAW_TEST_X.longitude = RAW_TEST_X.longitude // LONG_LAT_PRECISION
RAW_TEST_X.latitude = RAW_TEST_X.latitude // LONG_LAT_PRECISION
Explanation: Int Transformations
End of explanation
def text_transformation(name):
    """Clean up basic text issues in name (input).

    Removes capitalisation/case differences, extra spaces and other non-text ASCII
    characters except space.
    """
if name:
name = name.lower().strip()
name = ''.join([i if 96 < ord(i) < 128 else ' ' for i in name])
if 'and' in name:
name = name.replace('and', ' ')
# clear double space
while ' ' in name:
name = name.replace(' ', ' ')
return name.strip()
return ''
# saving transformed data
pickle.dump(obj=RAW_X, file=open('tmp\clean_X.pkl', 'wb'))
pickle.dump(RAW_TEST_X, open('tmp\clean_TEST_X.pkl', 'wb'))
# pickle.dump(y, open('tmp\y.pkl', 'wb'))
TEST_X, X = RAW_TEST_X, RAW_X
Explanation: Text Data Tranformations
For cols_categorical_check, we are going to basic clean action like, lower and upper case issue. Clearning of non ascii values.
End of explanation
from collections import defaultdict
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
from scripts import sam_custom_labeler
CUST_CATEGORY_LABELER = sam_custom_labeler.CUST_CATEGORY_LABELER
Explanation: Custom Labeler
Loading Custom Labeler is for the purpose of reducing the variety of categories by ignoring groups with lower frequencies while still covering 80% (default) of the original data.
End of explanation
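# A rough sketch of the idea behind CUST_CATEGORY_LABELER (the real implementation lives
# in scripts.sam_custom_labeler and differs in details): keep the most frequent categories
# until they cover `coverage_limit` percent of the rows and map the long tail to 'other'.
def reduce_rare_categories(series, coverage_limit=80.0):
    counts = series.value_counts()
    coverage = counts.cumsum() / counts.sum() * 100
    keep = set(coverage[coverage <= coverage_limit].index)
    keep.add(counts.index[0])          # always keep the most frequent category
    return series.where(series.isin(keep), other='other')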
##################################
######### IMPLEMENT ##############
#################################
if 'custom_labler' not in dir():
custom_labler = defaultdict(CUST_CATEGORY_LABELER)
tmp = { 'funder': 97,
'installer': 97,
'wpt_name': 80,
'subvillage': 80,
'ward': 80,
'scheme_name': 85
}
for col, limit in tmp.items():
labler = custom_labler[col]
labler.DATA_COVERAGE_LIMIT = limit
labler.fit(X[col])
print('')
print('-' * 15, col.upper())
# custom_labler[col].check_data_coverage(limit)
RAW_X[col] = labler.transform()
else:
print('"custom_labler" seems is already defined, please check')
print(RAW_X.shape, RAW_TEST_X.shape, all(RAW_X.columns == RAW_TEST_X.columns))
Explanation: funder:
100.0 percentage of DATA coverage mean, 1881 (in number) groups
97.0 percentage of DATA coverage mean, 592 (in number) groups ##
90.5 percentage of DATA coverage mean, 237 (in number) groups
installer:
100.0 percentage of DATA coverage mean, 1867 (in number) groups
97.0 percentage of DATA coverage mean, 599 (in number) groups ##
wpt_name:
80.0 percentage of DATA coverage mean, 24838 (in number) groups ##
subvillage:
80.5 percentage of DATA coverage mean, 8715 (in number) groups ##
83.0 percentage of DATA coverage mean, 9458 (in number) groups
ward:
80.0 percentage of DATA coverage mean, 998 (in number) groups ##
91.5 percentage of DATA coverage mean, 1397 (in number) groups
100.0 percentage of DATA coverage mean, 2093 (in number) groups
scheme_name:
100.0 percentage of DATA coverage mean, 2486 (in number) groups
91.5 percentage of DATA coverage mean, 870 (in number) groups
80.5 percentage of DATA coverage mean, 363 (in number) groups
85.0 percentage of DATA coverage mean, 524 (in number) groups ##
NOTE :
Marked with double hashes are the selected values for coverage
End of explanation
from collections import defaultdict
from sklearn import preprocessing
print(RAW_X.shape, RAW_TEST_X.shape)
RAW_X.dtypes[RAW_X.dtypes == 'O'].index.tolist()
RAW_X.dropna(inplace=True)
d = defaultdict(preprocessing.LabelEncoder)
tmp = RAW_X.dtypes[RAW_X.dtypes == 'O'].index.tolist()
RAW_X[tmp] = RAW_X[tmp].fillna('Other')
RAW_TEST_X[tmp] = RAW_TEST_X[tmp].fillna('Other')
# Labels Fit
sam = pd.concat([RAW_X, RAW_TEST_X]).apply(lambda x: d[x.name].fit(x))
# Labels Transform - Training Data
X = RAW_X.apply(lambda x: d[x.name].transform(x))
TEST_X = RAW_TEST_X.apply(lambda x: d[x.name].transform(x))
le = preprocessing.LabelEncoder().fit(RAW_y[u'status_group'])
y = le.transform(RAW_y[u'status_group'])
show_object_dtypes(RAW_X, True)
show_object_dtypes(X, True)
Explanation: Label Encoder
Label Encoder with DefaultDict for quick data transformation
http://stackoverflow.com/questions/24458645/label-encoding-across-multiple-columns-in-scikit-learn
End of explanation
# saving transformed data
pickle.dump(X, open('tmp\processed_X.pkl', 'wb'))
pickle.dump(TEST_X, open('tmp\processed_TEST_X.pkl', 'wb'))
pickle.dump(y, open('tmp\processed_y.pkl', 'wb'))
# saving label transformers
pickle.dump(d, open('tmp\d.pkl', 'wb'))
pickle.dump(le, open('tmp\le.pkl', 'wb'))
Explanation: Pickle
Pickle Save
End of explanation
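# Hedged sketch of how the encoders saved above are meant to be reused at submission time:
# `clf` is a hypothetical fitted classifier (the actual models come later in the notebook),
# and the output file name is an assumption. le.inverse_transform maps predicted integer
# classes back to the original status_group strings.
def make_submission(clf, test_df, label_encoder, raw_index, path='submission.csv'):
    preds = label_encoder.inverse_transform(clf.predict(test_df))
    pd.DataFrame({'status_group': preds}, index=raw_index).to_csv(path, index_label='id')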
X = pickle.load(open('tmp\processed_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\processed_TEST_X.pkl', 'rb'))
y = pickle.load(open('tmp\processed_y.pkl', 'rb'))
# # Load this when you are about to do text transformation and submission
# d = pickle.load(open('tmp\d.pkl'))
# le = pickle.load(open('tmp\le.pkl'))
print(X.shape, y.shape, y[:5])
Explanation: Feature Selection
End of explanation
X.dtypes[X.dtypes != np.int64]
from scripts.sam_variance_check import get_low_variance_columns
X, removed_features, ranking_variance_thresholds = get_low_variance_columns(dframe=X,
threshold=(0.85 * (1 - 0.85)),
autoremove=True)
print('\nLow Variance Columns', removed_features)
if removed_features:
TEST_X.drop(removed_features, axis=1, inplace=True)
print('cleanup completed!')
print('Shape of X is', X.shape)
print('Shape of TEST_X is', TEST_X.shape)
Explanation: Variance Threshold
To remove all features that are either one or zero (on or off) in more than 80% of the samples.
http://scikit-learn.org/stable/modules/feature_selection.html#removing-features-with-low-variance
http://stackoverflow.com/questions/29298973/removing-features-with-low-variance-scikit-learn/34850639#34850639
End of explanation
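# Hedged sketch of what get_low_variance_columns presumably does with sklearn's
# VarianceThreshold (assumption: the real helper in scripts.sam_variance_check differs
# in details such as the returned ranking).
from sklearn.feature_selection import VarianceThreshold

def drop_low_variance_columns(dframe, threshold=0.85 * (1 - 0.85)):
    selector = VarianceThreshold(threshold=threshold)
    selector.fit(dframe)
    kept = selector.get_support()                   # boolean mask of retained columns
    removed = dframe.columns[~kept].tolist()
    return dframe.loc[:, kept], removed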
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
kbest_cols = len(X.columns) -5
print(('Shape of X:', X.shape))
for fns in [chi2, f_classif, mutual_info_classif]:
print((str(fns.__name__),game(SelectKBest(score_func=fns, k=kbest_cols).fit(X, y).transform(X), y, model='rf')))
for fns in [chi2, f_classif, mutual_info_classif]:
print((str(fns.__name__),game(SelectKBest(score_func=fns, k=kbest_cols).fit(X, y).transform(X), y, model='gbt')))
print('''
('Shape of X:', (59400, 42))
('chi2', (0.98478114478114476, 0.79548821548821547))
('f_classif', (0.98381593714927051, 0.79569023569023567))
('mutual_info_classif', (0.98505050505050507, 0.79919191919191923))
('chi2', (0.75800224466891131, 0.75535353535353533))
('f_classif', (0.75755331088664424, 0.75575757575757574))
('mutual_info_classif', (0.75795735129068464, 0.75515151515151513))
'''.replace(', (', ', ').replace('))', ')').replace('(', '|').replace(')', '|').replace(', ', '|'))
bag = [
# {'cols': 1, 'test': 0.5511111111111111, 'train': 0.5545679012345679},
# {'cols': 9, 'test': 0.650976430976431, 'train': 0.658294051627385},
{'cols': 17, 'test': 0.7022895622895623, 'train': 0.7092480359147025},
{'cols': 25, 'test': 0.7517171717171717, 'train': 0.7534455667789001},
{'cols': 25, 'test': 0.75171717171717167, 'train': 0.75344556677890007},
{'cols': 28, 'test': 0.7531986531986532, 'train': 0.75537598204264866},
{'cols': 31, 'test': 0.75346801346801351, 'train': 0.7551290684624018},
{'cols': 33, 'test': 0.7545454545454545, 'train': 0.7562738496071829},
{'cols': 34, 'test': 0.75535353535353533, 'train': 0.75658810325476988},
{'cols': 35, 'test': 0.75542087542087544, 'train': 0.75665544332210999},
{'cols': 36, 'test': 0.75427609427609432, 'train': 0.75586980920314251},
{'cols': 37, 'test': 0.75535353535353533, 'train': 0.75800224466891131},
{'cols': 38, 'test': 0.75582491582491584, 'train': 0.75797979797979798},
{'cols': 39, 'test': 0.75589225589225584, 'train': 0.75797979797979798}]
for kbest_cols in range(18, 25):
# for kbest_cols in range(23, 33, 2):
# for kbest_cols in range(26, 29):
fit = SelectKBest(score_func=chi2, k=kbest_cols).fit(X, y)
cols_names = X.columns
kbest_selected_cols = [_ for _ in cols_names[:kbest_cols]]
kbest_X = pd.DataFrame(fit.transform(X.copy()))
kbest_TEST_X = pd.DataFrame(fit.transform(TEST_X.copy()))
# kbest_X.columns = kbest_selected_cols
# kbest_TEST_X.columns = kbest_selected_cols
# print('Before KBest', X.shape, TEST_X.shape, len(y))
# print('After KBest', kbest_X.shape, kbest_TEST_X.shape, len(y))
train_score, test_score = game(kbest_X, y, model='gbt')
bag.append({'cols': kbest_cols, 'train': train_score, 'test': test_score})
# print(', '.join(kbest_selected_cols).upper())
bag
# sorted(bag, key=lambda x : x['cols'])
bag = [
# {'cols': 1, 'test': 0.5511111111111111, 'train': 0.5545679012345679},
# {'cols': 9, 'test': 0.650976430976431, 'train': 0.658294051627385},
{'cols': 17, 'test': 0.7022895622895623, 'train': 0.7092480359147025},
{'cols': 25, 'test': 0.7517171717171717, 'train': 0.7534455667789001},
{'cols': 28, 'test': 0.7531986531986532, 'train': 0.75537598204264866},
{'cols': 31, 'test': 0.75346801346801351, 'train': 0.7551290684624018},
{'cols': 33, 'test': 0.7545454545454545, 'train': 0.7562738496071829},
{'cols': 34, 'test': 0.75535353535353533, 'train': 0.75658810325476988},
{'cols': 35, 'test': 0.75542087542087544, 'train': 0.75665544332210999},
{'cols': 36, 'test': 0.75427609427609432, 'train': 0.75586980920314251},
{'cols': 37, 'test': 0.75535353535353533, 'train': 0.75800224466891131},
{'cols': 38, 'test': 0.75582491582491584, 'train': 0.75797979797979798},
{'cols': 39, 'test': 0.75589225589225584, 'train': 0.75797979797979798},
{'cols': 18, 'test': 0.70430976430976433, 'train': 0.70985409652076314},
{'cols': 19, 'test': 0.70430976430976433, 'train': 0.70985409652076314},
{'cols': 20, 'test': 0.70484848484848484, 'train': 0.70904601571268233},
{'cols': 21, 'test': 0.70397306397306403, 'train': 0.71160493827160498},
{'cols': 22, 'test': 0.70801346801346798, 'train': 0.71331088664421993},
{'cols': 23, 'test': 0.75077441077441076, 'train': 0.75173961840628511},
{'cols': 24, 'test': 0.75077441077441076, 'train': 0.75173961840628511}]
bag = pd.DataFrame(bag)
sns.pointplot(x='cols', y='test', data=bag, color='green')
sns.pointplot(x='cols', y='train', data=bag, color='red', markers="x", linestyles='--')
plt.title('GBT KBest Columns Selection')
plt.legend(['test(green)',' train(red)'], )
plt.ylabel('GBT Score')
Explanation: Select K Best
For regression: f_regression, mutual_info_regression
For classification: chi2, f_classif, mutual_info_classif
Random Forest Classifier score: RandomForestClassifier(n_estimators=150, criterion='entropy', class_weight="balanced_subsample", n_jobs=-1)
* chi2 0.81225589225589223
* f_classif 0.81138047138047142
* mutual_info_classif 0.81037037037037041
End of explanation
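# The scoring helper game(...) comes from scripts.tools; this is a hypothetical sketch of
# what it presumably does (train/test split, fit the requested model, return train and
# test accuracy). The estimators, their parameters and the split fraction are assumptions.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def game_sketch(X, y, model='rf', test_size=0.25, random_state=69572):
    clf = (RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=random_state)
           if model == 'rf' else GradientBoostingClassifier(random_state=random_state))
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size,
                                                        random_state=random_state)
    clf.fit(X_train, y_train)
    return clf.score(X_train, y_train), clf.score(X_test, y_test)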
kbest_cols = 26
fit = SelectKBest(score_func=chi2, k=kbest_cols).fit(X, y)
cols_names = X.columns
kbest_selected_cols = [_ for _ in cols_names[:kbest_cols]]
kbest_X = pd.DataFrame(fit.transform(X))
kbest_TEST_X = pd.DataFrame(fit.transform(TEST_X))
kbest_X.shape, kbest_TEST_X.shape, y.shape
pickle.dump(kbest_X, open('tmp\kbest_X.pkl', 'wb'))
pickle.dump(kbest_TEST_X, open('tmp\kbest_TEST_X.pkl', 'wb'))
pickle.dump(y, open('tmp\kbest_y.pkl', 'wb'))
Explanation: kbest conclusion :
Best selected columns
AMOUNT_TSH, DATE_RECORDED, FUNDER, GPS_HEIGHT, INSTALLER, LONGITUDE, LATITUDE, NUM_PRIVATE, BASIN, SUBVILLAGE, REGION, REGION_CODE, DISTRICT_CODE, LGA, WARD, POPULATION, PUBLIC_MEETING, SCHEME_MANAGEMENT, SCHEME_NAME, PERMIT, CONSTRUCTION_YEAR, EXTRACTION_TYPE, EXTRACTION_TYPE_GROUP, EXTRACTION_TYPE_CLASS, MANAGEMENT, MANAGEMENT_GROUP, PAYMENT, PAYMENT_TYPE
``` Python
results of previous runs
[{'cols': 1, 'test': 0.52659932659932662, 'train': 0.57483726150392822},
{'cols': 5, 'test': 0.68962962962962959, 'train': 0.94240179573512906},
{'cols': 9, 'test': 0.7211447811447812, 'train': 0.97638608305274976},
{'cols': 13, 'test': 0.75380471380471381, 'train': 0.97955106621773291},
{'cols': 17, 'test': 0.76134680134680133, 'train': 0.98071829405162736},
{'cols': 21, 'test': 0.76511784511784509, 'train': 0.98076318742985413},
{'cols': 25, 'test': 0.80033670033670035, 'train': 0.98316498316498313},
{'cols': 29, 'test': 0.80053872053872055, 'train': 0.98379349046015707},
{'cols': 33, 'test': 0.80040404040404045, 'train': 0.98390572390572395},
{'cols': 37, 'test': 0.79993265993265994, 'train': 0.98341189674523011}]
[{'cols': 23, 'test': 0.7976430976430976, 'train': 0.9836812570145903},
{'cols': 25, 'test': 0.80033670033670035, 'train': 0.98316498316498313},
{'cols': 27, 'test': 0.80101010101010106, 'train': 0.9829405162738496},
{'cols': 29, 'test': 0.80053872053872055, 'train': 0.98379349046015707},
{'cols': 31, 'test': 0.80000000000000004, 'train': 0.98381593714927051}]
[{'cols': 26, 'test': 0.80309764309764309, 'train': 0.98359147025813698},
{'cols': 27, 'test': 0.80101010101010106, 'train': 0.9829405162738496},
{'cols': 28, 'test': 0.80222222222222217, 'train': 0.98334455667789}]
```
Following Occam's razor, we are going to select the simplest model that still performs well. Luckily, kbest_selected_cols at 26 is comparatively the top performer among the K-selections and is also lower than the actual number of columns
End of explanation
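As a side note (an assumed refinement, not something run in this notebook, and relying on the X, y and imports from the cells above), SelectKBest.get_support() returns the boolean mask of the columns the selector actually kept, which is a direct way to recover the selected column names:
```python
# Sketch: recover the exact column names kept by the selector.
fit = SelectKBest(score_func=chi2, k=26).fit(X, y)
kbest_selected_cols = list(X.columns[fit.get_support()])
print(kbest_selected_cols)
```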
!mkdir tmp
load = 2
if load == 2:
# this will load kbest
print('Loading KBest Processed Data')
X = pickle.load(open('tmp\kbest_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\kbest_TEST_X.pkl', 'rb'))
y = pickle.load(open('tmp\kbest_y.pkl', 'rb'))
elif load == 1:
# this will load processed data
print('Loading Normal Processed Data')
X = pickle.load(open('tmp\processed_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\processed_TEST_X.pkl', 'rb'))
# # y = pickle.load(open('tmp\processed_y.pkl'))
Explanation: PCA
End of explanation
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# feature extraction
pca = PCA(n_components=15)
fit = pca.fit(X)
plt.figure(figsize=(12, 3))
_ = plt.scatter(range(len(fit.explained_variance_ratio_)), fit.explained_variance_ratio_.cumsum())
_ = plt.xlabel('number of components')
_ = plt.ylabel('cumulative explained variance ratio')
print(fit.explained_variance_ratio_.cumsum())
print()
print(('Score', game(pca.transform(X), y, 'gbt')))
# (0.97580246913580249, 0.60511784511784517) # KBest dataset
# (0.97564534231200895, 0.60552188552188557) # Normal Dataset
ss = pd.DataFrame(fit.components_)
ss = ss.applymap(lambda x: x if x > 0 else -1 * x)
display(ss.describe().T)
ss.plot(kind='bar', figsize=(125, 10))
# feature extraction
lda = LinearDiscriminantAnalysis(n_components=16)
fit = lda.fit(X, y)
plt.figure(figsize=(12, 3))
_ = plt.scatter(range(len(fit.explained_variance_ratio_)), fit.explained_variance_ratio_.cumsum())
_ = plt.xlabel('number of components')
_ = plt.ylabel('cumulative explained variance ratio')
print(fit.explained_variance_ratio_.cumsum())
print(('\nScore', game(lda.transform(X), y)))
# (0.97580246913580249, 0.60511784511784517) # KBest dataset
# (0.97564534231200895, 0.60552188552188557) # Normal Dataset
ss = pd.DataFrame(fit.coef_)
ss = ss.applymap(lambda x: x if x > 0 else -1 * x)
display(ss.describe().T)
ss.plot(kind='bar', figsize=(125, 10))
X = pca.transform(X)
TEST_X = pca.transform(TEST_X)
X.shape, TEST_X.shape
Explanation: PCA
End of explanation
pickle.dump(X, open('tmp\pca_X.pkl', 'wb'))
pickle.dump(TEST_X, open('tmp\pca_TEST_X.pkl', 'wb'))
# pickle.dump(y, open('tmp\pca_y.pkl', 'wb'))
Explanation: Saving Processed Data
End of explanation
load = 3
if load == 1:
print('Loading PCA Processed Data')
X = pickle.load(open('tmp\pca_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\pca_TEST_X.pkl', 'rb'))
print(game(X, y, model='rf'))
elif load == 2:
# this will load kbest
print('Loading KBest Processed Data')
X = pickle.load(open('tmp\kbest_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\kbest_TEST_X.pkl', 'rb'))
print(game(X, y, model='rf'))
elif load == 3:
# this will load processed data
print('Loading normal Processed Data')
X = pickle.load(open('tmp\processed_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\processed_TEST_X.pkl', 'rb'))
print(game(X, y, model='rf'))
# # y = pickle.load(open('tmp\processed_y.pkl'))
print(X.shape, y.shape, TEST_X.shape)
Explanation: Unsupervised Learning
Unsupervised Learning Exploration(Gaussian Process, Neural Nets)
Loading Pre-Processed Data
End of explanation
from sklearn.mixture import GaussianMixture as GMM
from sklearn.metrics import silhouette_score
# For future analysis
GMM_Centers = []
__check_for = 1000
print ('clusters | score for top 1000')
for i in range(2, 7):
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = GMM(n_components=i, random_state=42)
clusterer.fit(X)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(X)
# TODO: Find the cluster centers
GMM_Centers.append(clusterer.means_)
# score = silhouette_score(X, preds)
score = silhouette_score(X[:__check_for], preds[:__check_for])
print(i, score)
# clusters | score for top 1000
# 2 0.484879234998
# 3 0.377180934294
# 4 0.334333476259
# 5 0.29213724894
# 6 0.27643712696
Explanation: Gaussian
End of explanation
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
# For future analysis
KMM_Centers = []
# Testing each category
for i in range(2, 7):
clusterer = KMeans(init='k-means++', n_clusters=i, n_init=10)
clusterer.fit(X)
preds = clusterer.predict(X)
centers = clusterer.cluster_centers_
KMM_Centers.append(centers)
# score = silhouette_score(X, preds)
score = silhouette_score(X[:__check_for], preds[:__check_for])
print(i, score)
# clusters | score for top 1000
# 2 0.502005229628
# 3 0.377168744959
# 4 0.325091546516
# 5 0.303811069492
# 6 0.304265445159
i = 2
clusterer = KMeans(init='k-means++', n_clusters=i, n_init=10)
clusterer.fit(X)
preds = clusterer.predict(X)
score = silhouette_score(X[:__check_for], preds[:__check_for])
print(i, score)
X = pd.DataFrame(X)
X['new'] = clusterer.predict(X)
TEST_X = pd.DataFrame(TEST_X)
TEST_X['new'] = clusterer.predict(TEST_X)
print(X.shape, TEST_X.shape)
Explanation: KMeans
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.metrics import auc
from sklearn.metrics import roc_auc_score
load = 2
np.set_printoptions(precision=4)
print('------------------------------------------------')
if load == 1:
print('Loading PCA Processed Data')
X = pickle.load(open('tmp\pca_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\pca_TEST_X.pkl', 'rb'))
y = pickle.load(open('tmp\processed_y.pkl', 'rb'))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
model_rf(X_train, y_train)
elif load == 2:
# this will load kbest
print('Loading KBest Processed Data')
X = pickle.load(open('tmp\kbest_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\kbest_TEST_X.pkl', 'rb'))
y = pickle.load(open('tmp\processed_y.pkl', 'rb'))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
model_rf(X_train, y_train)
elif load == 3:
# this will load processed data
print('Loading normal Processed Data')
X = pickle.load(open('tmp\processed_X.pkl', 'rb'))
TEST_X = pickle.load(open('tmp\processed_TEST_X.pkl', 'rb'))
y = pickle.load(open('tmp\processed_y.pkl', 'rb'))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
model_rf(X_train, y_train)
print('------------------------------------------------')
print(X.shape, y.shape)
import sklearn.metrics as sk_metrics
def check_metric(y_pred, y_test, show_cm=True):
if show_cm:
print('------------------------------------------------')
print(sk_metrics.classification_report(y_pred, y_test))
print('------------------------------------------------')
print('AC Score:', sk_metrics.accuracy_score(y_pred, y_test),
'F1 Score:', sk_metrics.f1_score(y_pred, y_test, average='weighted'))
Explanation: Supervised Learning
Supervised Learning(GBT Trees, Nearest Neighbours, RF, One-vs-One)
Test-Train Split
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
clf_gbt = GradientBoostingClassifier(random_state=192)
clf_gbt = clf_gbt.fit(X_train, y_train)
# metric
y_train_pred = clf_gbt.predict(X_train)
y_pred = clf_gbt.predict(X_test)
print('GradientBoostingClassifier(random_state=192)')
print('----------')
print('Training Score')
check_metric(y_train_pred, y_train, show_cm=False)
print('Testing Score')
check_metric(y_pred, y_test, show_cm=False)
Explanation: GBT Trees
End of explanation
from sklearn.neighbors import KNeighborsClassifier
# modelling
clf_knn = KNeighborsClassifier()
clf_knn.fit(X_train, y_train)  # fit on the training split, not the test split
# metric
y_train_pred = clf_knn.predict(X_train)
y_pred = clf_knn.predict(X_test)
print('KNeighborsClassifier()')
print('----------')
print('Training Score')
check_metric(y_train_pred, y_train, show_cm=False)
print('Testing Score')
check_metric(y_pred, y_test, show_cm=False)
Explanation: Nearest Neighbours
End of explanation
from sklearn.ensemble import RandomForestClassifier
def model_rf(X_train, y_train):
"""Random Forest"""
clf_rf = RandomForestClassifier(random_state=192)
clf_rf = clf_rf.fit(X_train, y_train)
y_pred = clf_rf.predict(X_test)
# metric
y_train_pred = clf_rf.predict(X_train)
y_pred = clf_rf.predict(X_test)
print('RandomForestClassifier(random_state=192)')
print('----------')
print('Training Score')
check_metric(y_train_pred, y_train, show_cm=False)
print('Testing Score')
check_metric(y_pred, y_test, show_cm=False)
model_rf(X_train, y_train)
print(list(zip(X.columns, clf_rf.feature_importances_)))
X[kbest_selected_cols].size / 40., X[kbest_selected_cols].shape
# n_estimators=150, criterion='entropy', class_weight="balanced_subsample",
clf_rf = RandomForestClassifier(random_state=192, n_jobs=-1)
# class_weight="balanced_subsample"/"balanced"
# criterion="gini"/"entropy"
clf_rf = clf_rf.fit(X_train[kbest_selected_cols], y_train)
# pred = clf_rf.predict_proba(X_test)
clf_rf.score(X_test[kbest_selected_cols], y_test)
Explanation: Random Forest
End of explanation
# metric
for clf in [clf_gbt,
# clf_knn,
clf_rf]:
y_train_pred = clf.predict(X_train)
y_pred = clf.predict(X_test)
print(clf)
# print('----------')
# print('Training Score')
# check_metric(y_train_pred, y_train)
print('----------')
print('Testing Score')
check_metric(y_pred, y_test)
Explanation: Model selection Evaluation
End of explanation
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
Explanation: Multi Class
End of explanation
clf_multiclass_rf = OneVsOneClassifier(RandomForestClassifier(
n_estimators=200,criterion='entropy', class_weight="balanced_subsample",
random_state=192, n_jobs=-1
))
clf_multiclass_rf = OneVsOneClassifier(RandomForestClassifier(n_estimators=150,
criterion='entropy',
class_weight="balanced_subsample",
n_jobs=-1, random_state=192
))
clf_multiclass_rf = clf_multiclass_rf.fit(X_train, y_train)
print('Classifier:', clf_multiclass_rf)
print('Score:', clf_multiclass_rf.score(X_train, y_train))
# print('Score:', clf_multiclass_rf.score(X_test, y_test))
y_pred = clf_multiclass_rf.predict(X_test)
check_metric(y_pred, y_test)
# Score: 0.999775533109
# Score: 0.813602693603
# RandomForestClassifier(n_estimators=150, criterion='entropy', class_weight="balanced_subsample", n_jobs=-1, random_state=192)
Explanation: One Vs One
End of explanation
clf_multiclass_rf = OneVsRestClassifier(RandomForestClassifier(
n_estimators=200,criterion='entropy', class_weight="balanced_subsample",
random_state=192, n_jobs=-1
))
clf_multiclass_rf = clf_multiclass_rf.fit(X_train, y_train)
print('Classifier:', clf_multiclass_rf)
print('Train Score: ', clf_multiclass_rf.score(X_train, y_train))
# print('Test Score:', clf_multiclass_rf.score(X_test, y_test))
y_pred = clf_multiclass_rf.predict(X_test)
check_metric(y_pred, y_test)
Explanation: One vs Rest
End of explanation
# Random Forest
clf_multiclass1_rf = OneVsOneClassifier(RandomForestClassifier(
random_state=192, n_jobs=-1
))
clf_multiclass2_rf = OneVsRestClassifier(RandomForestClassifier(
random_state=192, n_jobs=-1
))
# Gradient Boosting
clf_multiclass1_gb = OneVsOneClassifier(GradientBoostingClassifier(
random_state=192
))
clf_multiclass2_gb = OneVsRestClassifier(GradientBoostingClassifier(
random_state=192
))
clf_multiclass1_rf = clf_multiclass1_rf.fit(X_train, y_train)
clf_multiclass2_rf = clf_multiclass2_rf.fit(X_train, y_train)
clf_multiclass1_gb = clf_multiclass1_gb.fit(X_train, y_train)
clf_multiclass2_gb = clf_multiclass2_gb.fit(X_train, y_train)
for clf in [clf_multiclass1_gb, clf_multiclass2_gb, clf_multiclass1_rf, clf_multiclass2_rf]:
y_train_pred = clf.predict(X_train)
y_pred = clf.predict(X_test)
print('---------------------------------------------------------------------------')
print(clf)
print('---------------------------------------------------------------------------')
print('Training Score')
check_metric(y_train_pred, y_train)
print('---------------------------------------------------------------------------')
print('Testing Score')
check_metric(y_pred, y_test)
Explanation: Multiclass Model Selection
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
Explanation: Parameter tuning
From the above analysis we can see that the Random Forest classifier performed better than most of the others, so here we shall optimise it.
End of explanation
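Both searchers imported above share the same fit/best_params_ interface; a minimal sketch of the exhaustive variant (illustrative only, assuming the parameters dict defined just below) would be:
```python
# GridSearchCV tries every combination; RandomizedSearchCV samples a subset of them.
grid = GridSearchCV(RandomForestClassifier(), parameters, n_jobs=-1)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```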
# max_features
np.sqrt(len(X_train.columns)), np.log(len(X_train.columns))
np.log2(len(X_train.columns)), np.sqrt (len(X_train.columns)), len(X_train.columns)
'balanced_subsample balanced'.split(), 'gini entropy'.split()
parameters = {
'n_estimators': [10, 50, 100, 150, 200],
'class_weight': ['balanced_subsample', 'balanced'],
'criterion': ['gini', 'entropy'],
'max_features': ['log2', 'auto', 25],
'random_state': [192]
}
# clf_rf = RandomForestClassifier(n_estimators=150, criterion='entropy', class_weight="balanced_subsample", n_jobs=-1, random_state=192)
# 0.81346801346801345
GS_CV = RandomizedSearchCV(RandomForestClassifier(), parameters)
GS_CV.fit(X, y)
print(GS_CV.best_params_, GS_CV.best_score_)
# {'n_estimators': 200, 'max_features': 'log2', 'random_state': 192, 'criterion': 'entropy',
# 'class_weight': 'balanced_subsample'} 0.806717171717
cv_results = pd.DataFrame(GS_CV.cv_results_, columns=[u'mean_fit_time', u'mean_score_time', u'mean_test_score',
u'mean_train_score', u'param_class_weight', u'param_criterion',
u'param_max_features', u'param_n_estimators', u'params'])
cv_results.head(2)
import seaborn as sns
sns.set(color_codes=True)
np.random.seed(sum(map(ord, "regression")))
ax=plt.figure(figsize=(8,8))
_ = sns.lmplot(x="mean_test_score", y="mean_train_score", hue="param_max_features", data=cv_results)
Explanation: Random Forest
End of explanation
GradientBoostingClassifier?
X.shape
parameters = {
'n_estimators': range(25, 250, 50),
'random_state': [192],
'min_samples_split': range(2, 8, 2),
# 'min_samples_leaf': [.001, .01, .1, .3, .5],
# 'max_depth': range(3, 8)
}
GS_CV = RandomizedSearchCV(GradientBoostingClassifier(), parameters)
GS_CV.fit(X, y)
GS_CV.best_params_, GS_CV.best_score_
cv_results = pd.DataFrame(GS_CV.cv_results_, columns=[u'mean_test_score', u'mean_train_score', # two standard
# here params keys
u'param_min_samples_split', u'param_n_estimators', u'params'])
sns.pairplot(data=cv_results, x_vars=['mean_test_score', 'mean_train_score'],
y_vars=['param_min_samples_split', 'param_n_estimators'])
parameters = {
'n_estimators': [215, 225, 235,], # 225 best
'random_state': [192],
'min_samples_split': [5, 6],
'min_samples_leaf': [.001, .01, .1, .3, .5],
# 'max_depth': range(3, 8)
}
GS_CV = RandomizedSearchCV(RandomForestClassifier(), parameters)
GS_CV.fit(X, y)
cv_results = pd.DataFrame(GS_CV.cv_results_, columns=[u'mean_test_score', u'mean_train_score', # two standard
# here params keys
u'param_min_samples_leaf',
u'param_min_samples_split', u'param_n_estimators',
u'params'])
_ = sns.pairplot(data=cv_results, x_vars=['mean_test_score', 'mean_train_score'],
y_vars=['param_min_samples_split', 'param_n_estimators', u'param_min_samples_leaf',])
print(GS_CV.best_params_, GS_CV.best_score_)
Explanation: Checking "clf_rf" RF performance
GBT
End of explanation
GS_CV.best_params_
clf_rf = OneVsOneClassifier(RandomForestClassifier(n_estimators=150,
random_state=192,
max_features='log2',
class_weight='balanced_subsample',
criterion='gini'))
print (clf_rf)
clf_rf = clf_rf.fit(X, y)
# saving the index
test_ids = RAW_TEST_X.index
# predicint the values
predictions = clf_rf.predict(TEST_X)
print(predictions.shape)
# Converting int to its respective Labels
predictions_labels = le.inverse_transform(predictions)
# setting up column name & save file
sub = pd.DataFrame(predictions_labels, columns=['status_group'])
sub.head()
sub.insert(loc=0, column='id', value=test_ids)
sub.reset_index()
sub.to_csv('submit.csv', index=False)
sub.head()
scores = '''
0.7970
0.786838161735000
0.799663299663
0.803097643097
0.796498316498000
0.699393939394
0.709020068049
0.779259259259000
0.778047138047
0.813468013468000
0.808821548822
'''.strip().splitlines()
scores = map(lambda x: float(x) if x else None, scores)
stages = '''
Benchmark
Algorithmn Selection
KSelect chi2 Test(40+cols)
KSelect Best(26)Cols
KBest Processed Data
PCA Processed Data Score
Normal Processed Data
OneVsOneClassifier
OneVsRestClassifier
Tuning
Result
'''.strip().splitlines()
print(list(scores))
plt.plot([0.797, 0.786838161735, 0.799663299663, 0.803097643097,
0.796498316498, 0.699393939394, 0.709020068049,
0.779259259259, 0.778047138047, 0.813468013468, 0.808821548822], )
# plt.xticks(range(3), 'a b c'.split())
Explanation: XGBOOST
Submission
Model Selection
Check which model is performing best and use it.
Check whether to apply the one-vs-rest or one-vs-one wrapper.
Check the train_test_split to decide which X, y should be used for training.
End of explanation |
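One hedged sketch for the first checklist item (not run in this notebook, and assuming the X, y and classifiers already in scope): scoring each candidate with cross-validation instead of a single split makes the "which model performs best" check more robust.
```python
from sklearn.model_selection import cross_val_score

# Illustrative model-comparison loop for the checklist above.
for name, clf in [('rf', RandomForestClassifier(random_state=192)),
                  ('gbt', GradientBoostingClassifier(random_state=192))]:
    scores = cross_val_score(clf, X, y, cv=3, n_jobs=-1)
    print(name, scores.mean())
```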
4,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Youtube Videos
Step1: Class Methods
Step2: Class Methods can be used to create alternate constructors
Step3: Static Methods
Instance methods take self as the first argument
Class methods take cls as the first argument
Static methods don't take instance or class as their argument, we just pass the arguments we want to work with.
Static methods don't operate on instance or class.
Step4: Inheritance - Creating subclasses
Step5: Now what if you want Developer's raise_amount to be 10%?
Step6: Now what if we want the Developer class to have an extra attribute like prog_lang?
Step7: Gotcha - Mutable default arguments
* https
Step8: Magic or Dunder Methods
https
Step9: __add__
__len__
Step10: Property Decorators
Step11: Abstract Base Classes in Python
What are Abstract Base Classes good for? A while ago I had a discussion about which pattern to use for implementing a maintainable class hierarchy in Python. More specifically, the goal was to define a simple class hierarchy for a service backend in the most programmer-friendly and maintainable way.
There was a BaseService that defines a common interface and several concrete implementations that do different things but all provide the same interface (MockService, RealService, and so on). To make this relationship explicit the concrete implementations all subclass BaseService.
To be as maintainable and programmer-friendly as possible the idea was to make sure that | Python Code:
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
def __init__(self, fname, lname):
self.fname = fname
self.lname = lname
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
emp1 = Employee('Sri', 'Paladugu')
emp2 = Employee('Dhruv', 'Paladugu')
print( emp1.get_fullname() )
print( Employee.emp_count )
# Trouble ensues when you treat class variables as instance attributes.
# What the interpreter does in this case is create an instance attribute with the same name and assign to it.
# The class variable still remains intact with the old value.
emp1.company = 'Verily'
print(emp1.company)
print(emp1.get_company())
print(emp2.company)
print(emp2.email)
Explanation: Youtube Videos:
* https://www.youtube.com/watch?v=rq8cL2XMM5M&index=3&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc
* https://www.youtube.com/watch?v=RSl87lqOXDE&index=4&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc
Online References:
* https://jeffknupp.com/blog/2017/03/27/improve-your-python-python-classes-and-object-oriented-programming/
* https://dbader.org/blog/abstract-base-classes-in-python
End of explanation
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname):
self.fname = fname
self.lname = lname
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
@classmethod
def set_raise_amt(cls, amount):
cls.raise_amount = amount
emp1 = Employee('Sri', 'Paladugu')
emp2 = Employee('Dhruv', 'Paladugu')
Employee.set_raise_amt(1.05)
print(Employee.raise_amount)
print(emp1.raise_amount)
print(emp2.raise_amount)
Explanation: Class Methods
End of explanation
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
@classmethod
def set_raise_amt(cls, amount):
cls.raise_amount = amount
@classmethod
def from_string(cls, emp_str):
fname, lname, salary = emp_str.split("-")
return cls(fname, lname, salary)
new_emp = Employee.from_string("Pradeep-Koganti-10000")
print(new_emp.email)
Explanation: Class Methods can be used to create alternate constructors
End of explanation
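The standard library uses the same pattern; for instance (an illustrative aside, not from the original material), datetime.date exposes alternate constructors implemented as class methods:
```python
import datetime

d1 = datetime.date(2016, 7, 10)   # regular constructor
d2 = datetime.date.today()        # alternate constructor implemented as a classmethod
print(d1, d2)
```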
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
@classmethod
def set_raise_amt(cls, amount):
cls.raise_amount = amount
@classmethod
def from_string(cls, emp_str):
fname, lname, salary = emp_str.split("-")
return cls(fname, lname, salary)
@staticmethod
def is_workday(day):
if day.weekday() == 5 or day.weekday() == 6:
return False
else:
return True
import datetime
my_date = datetime.date(2016, 7, 10)
print(Employee.is_workday(my_date))
Explanation: Static Methods
Instance methods take self as the first argument
Class methods take cls as the first argument
Static methods don't take the instance or the class as their first argument; we just pass the arguments we want to work with.
Static methods don't operate on instance or class.
End of explanation
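A small illustrative check (assuming the Employee class and the datetime import from the cell above): all three kinds of methods can be reached from either the class or an instance; what differs is only the implicit first argument.
```python
emp = Employee('Sri', 'Paladugu', 1000)
print(Employee.is_workday(datetime.date(2016, 7, 11)))  # static method: no self or cls is passed
print(emp.is_workday(datetime.date(2016, 7, 11)))       # the same call works through an instance
Employee.set_raise_amt(1.06)                             # class method: cls is the Employee class
print(emp.raise_amount)                                  # instances see the updated class variable
```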
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
def apply_raise(self):
self.salary = self.salary * self.raise_amount
class Developer(Employee):
pass
dev1 = Developer('Sri', 'Paladugu', 1000)
print(dev1.get_fullname())
print(help(Developer)) # This command prints the Method resolution order.
# Indicating the order in which the interpreter is going to look for methods.
Explanation: Inheritance - Creating subclasses
End of explanation
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
def apply_raise(self):
self.salary = self.salary * self.raise_amount
class Developer(Employee):
raise_amount = 1.10
dev1 = Developer('Sri', 'Paladugu', 1000)
dev1.apply_raise()
print(dev1.salary)
Explanation: Now what if you want Developer's raise_amount to be 10%?
End of explanation
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
def apply_raise(self):
self.salary = self.salary * self.raise_amount
class Developer(Employee):
raise_amount = 1.10
def __init__(self, fname, lname, salary, prog_lang):
super().__init__(fname, lname, salary)
# or you can also use the following syntax
# Employee.__init__(self, fname, lname, salary)
self.prog_lang = prog_lang
dev1 = Developer('Sri', 'Paladugu', 1000, 'Python')
print(dev1.get_fullname())
print(dev1.prog_lang)
Explanation: Now what if we want the Developer class to have an extra attribute like prog_lang?
End of explanation
class Employee:
emp_count = 0 # Class Variable
company = 'Google' # Class Variable
raise_amount = 1.04
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
Employee.emp_count += 1
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def get_company(self):
return 'Company Name is: {}'.format(Employee.company)
def apply_raise(self):
self.salary = self.salary * self.raise_amount
class Developer(Employee):
raise_amount = 1.10
def __init__(self, fname, lname, salary, prog_lang):
super().__init__(fname, lname, salary)
# or you can also use the following syntax
# Employee.__init__(self, fname, lname, salary)
self.prog_lang = prog_lang
class Manager(Employee):
def __init__(self, fname, lname, salary, employees = None): # Use None as default not empty list []
super().__init__(fname, lname, salary)
if employees is None:
self.employees = []
else:
self.employees = employees
def add_employee(self, emp):
if emp not in self.employees:
self.employees.append(emp)
def remove_employee(self, emp):
if emp in self.employees:
self.employees.remove(emp)
def print_emps(self):
for emp in self.employees:
print('--->', emp.get_fullname())
dev_1 = Developer('Sri', 'Paladugu', 1000, 'Python')
dev_2 = Developer('Dhruv', 'Paladugu', 2000, 'Java')
mgr_1 = Manager('Sue', 'Smith', 9000, [dev_1])
print(mgr_1.email)
print(mgr_1.print_emps())
mgr_1.add_employee(dev_2)
print(mgr_1.print_emps())
print('Is dev_1 an instance of Developer: ', isinstance(dev_1, Developer))
print('Is dev_1 an instance of Employee: ', isinstance(dev_1, Employee))
print('Is Developer an Subclass of Developer: ', issubclass(Developer, Developer))
print('Is Developer an Subclass of Employee: ', issubclass(Developer, Employee))
Explanation: Gotcha - Mutable default arguments
* https://pythonconquerstheuniverse.wordpress.com/2012/02/15/mutable-default-arguments/
End of explanation
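A minimal illustration of the gotcha itself (an assumed example, not taken from the linked post): the mutable default is created once, at function definition time, and then shared between calls, which is why the Manager class above uses None instead.
```python
def add_employee_bad(emp, employees=[]):   # the [] default is evaluated only once
    employees.append(emp)
    return employees

print(add_employee_bad('dev_a'))  # ['dev_a']
print(add_employee_bad('dev_b'))  # ['dev_a', 'dev_b'] -- the same list is reused
```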
class Employee:
company = 'Google'
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
def __repr__(self): # For other developers
return "Employee('{}','{}','{}')".format(self.fname, self.lname, self.salary)
def __str__(self): # For end user
return '{} - {}'.format(self.get_fullname(), self.email)
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
emp1 = Employee('Sri', 'Paladugu', 5000)
print(emp1)
print(repr(emp1))
Explanation: Magic or Dunder Methods
https://www.youtube.com/watch?v=3ohzBxoFHAY&index=5&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc
Dunder methods:
1. __repr__
2. __str__
End of explanation
# if you do: 1 + 2 internally the interpreter calls the dunder method __add__
print(int.__add__(1,2))
# Similarly # if you do: [2,3] + [4,5] internally the interpreter calls the dunder method __add__
print(list.__add__([2,3],[4,5]))
print('Paladugu'.__len__()) # This is same as len('Paladugu')
class Employee:
company = 'Google'
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'
def __repr__(self): # For other developers
return "Employee('{}','{}','{}')".format(self.fname, self.lname, self.salary)
def __str__(self): # For end user
return '{} - {}'.format(self.get_fullname(), self.email)
def get_fullname(self):
return '{} {}'.format(self.fname, self.lname)
def __add__(self, other):
return self.salary + other.salary
def __len__(self):
return len(self.get_fullname())
emp1 = Employee('Sri', 'Paladugu', 5000)
emp2 = Employee('Dhruv', 'Paladugu', 5000)
print(emp1 + emp2)
print(len(emp1))
Explanation: __add__
__len__
End of explanation
class Employee:
company = 'Google'
def __init__(self, fname, lname, salary):
self.fname = fname
self.lname = lname
self.salary = salary
@property
def email(self):
return '{}.{}@{}.com'.format(self.fname, self.lname, self.company)
@property
def fullname(self):
return '{} {}'.format(self.fname, self.lname)
@fullname.setter
def fullname(self, name):
first, last = name.split(' ')
self.fname = first
self.lname = last
@fullname.deleter
def fullname(self):
print('Delete Name!')
self.fname = None
self.lname = None
emp1 = Employee('Sri', 'Paladugu', 5000)
print(emp1.email)
print(emp1.fullname)
emp1.fullname = 'Ramki Paladugu'
print(emp1.email)
del emp1.fullname
print(emp1.email)
Explanation: Property Decorators
End of explanation
from abc import ABCMeta, abstractmethod
class Base(metaclass=ABCMeta):
@abstractmethod
def foo(self):
pass
@abstractmethod
def bar(self):
pass
class Concrete(Base):
def foo(self):
pass
# We forget to declare bar()
c = Concrete()
Explanation: Abstract Base Classes in Python
What are Abstract Base Classes good for? A while ago I had a discussion about which pattern to use for implementing a maintainable class hierarchy in Python. More specifically, the goal was to define a simple class hierarchy for a service backend in the most programmer-friendly and maintainable way.
There was a BaseService that defines a common interface and several concrete implementations that do different things but all provide the same interface (MockService, RealService, and so on). To make this relationship explicit the concrete implementations all subclass BaseService.
To be as maintainable and programmer-friendly as possible the idea was to make sure that:
instantiating the base class is impossible; and
forgetting to implement interface methods in one of the subclasses raises an error as early as possible.
End of explanation |
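Both guarantees can be checked directly (a sketch, assuming the Base and Concrete classes defined above): instantiating the abstract base fails, and Concrete stays abstract until bar() is implemented.
```python
try:
    Base()
except TypeError as err:
    print(err)   # can't instantiate the abstract base class

try:
    Concrete()
except TypeError as err:
    print(err)   # Concrete is still abstract while bar() is missing
```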
4,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AlienVault OTX <> Graphistry
Step1: Start
Step2: Continue | Python Code:
#!pip install graphistry -q
#!pip install OTXv2 -q
import graphistry
import pandas as pd
from OTXv2 import OTXv2, IndicatorTypes
from gotx import G_OTX
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
otx = OTXv2("MY_KEY")
g_otx = G_OTX(graphistry, otx)
Explanation: AlienVault OTX <> Graphistry: LockerGoga investigation
End of explanation
lockergoga_pulses = otx.search_pulses('LockerGoga').get('results')
lockergoga_pulses_df = g_otx.pulses_to_df(lockergoga_pulses)
lockergoga_indicators_df = g_otx.pulses_to_indicators_df(lockergoga_pulses)
g = g_otx.indicatormap(lockergoga_pulses_df, lockergoga_indicators_df)
g.plot()
Explanation: Start: rough hits
We find there are 3 clusters of activity
End of explanation
ip_pulses = otx.get_indicator_details_by_section(IndicatorTypes.IPv4, lockergoga_indicators_df[lockergoga_indicators_df['indicator_type'] == 'IPv4'].values[0][0])
ip_pulses_df = g_otx.indicator_details_by_section_to_pulses_df(ip_pulses)
ip_indicators_df = g_otx.indicator_details_by_section_to_indicators_df(ip_pulses)
g_otx.indicatormap(ip_pulses_df, ip_indicators_df).plot()
Explanation: Continue: Expand on IPv4 hits
Let's expand the small cluster related to "Powershell Backdoor calling back on port 443". Use the OTX API to get other pulses containing the same IP address and then expand them and create a new graph
End of explanation |
4,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of Machine Learning Methods vs Rule Based
Traditionally, educational institutions use rule-based models to generate a risk score, which then informs resource allocation. For example, Hiller et al, 1999
Instead, we'll build a simple model using basic ML techniques and demonstrate why the risk scores generated are better
Step1: Setup
First, we need to generate simulated data and read it into a data frame
Step2: Determine if the student undermatched-or-was-properly matched
Step3: Rule Based Model
Simple GPA and PSAT rule
Step5: The Rules
* We have 3 observed variables - GPA, PSAT, race
* Predict which college based on those observed variables.
* Rules based on Hoxby, et al 2013
Step6: Machine Learning Model
Simple Logisitic Regression | Python Code:
## Imports
import pandas as pd
import seaborn as sns
sns.set(color_codes=True)
import matplotlib.pyplot as plt
Explanation: Comparison of Machine Learning Methods vs Rule Based
Traditionally, educational institutions use rule-based models to generate a risk score, which then informs resource allocation. For example, Hiller et al, 1999
Instead, we'll build a simple model using basic ML techniques and demonstrate why the risk scores generated are better
End of explanation
# Gen Data
%run sim.py
stud_df.gpa = pd.to_numeric(stud_df.gpa)
stud_df.honors = pd.to_numeric(stud_df.honors)
stud_df.psat = pd.to_numeric(stud_df.psat)
Explanation: Setup
First, we need to generate simulated data and read it into a data frame
End of explanation
avg_gpas = stud_df.groupby('college').gpa.mean()
def isUndermatched(student):
if student.gpa >= (avg_gpas[student.college] + .50):
return True
else:
return False
stud_df['undermatch_status'] = stud_df.apply(isUndermatched, axis =1 )
#stud_df.groupby('race').undermatch_status.value_counts()
Explanation: Determine if the student undermatched-or-was-properly matched
End of explanation
msk = np.random.rand(len(stud_df)) < 0.8
train = stud_df[msk]
test = stud_df[~msk]
print("Training Set Length: ", len(train))
print("Testing Set Length: ", len(test))
Explanation: Rule Based Model
Simple GPA and PSAT rule
End of explanation
stud_df.psat.hist()
def rule_based_model(student_r):
returns a college for each student passed
risk_score = 0
if student_r.race == 'aa':
risk_score += 1
if student_r.race == 'latino':
risk_score += .5
if student_r.psat >= 170 and student_r.honors <= 3:
risk_score += 1
return risk_score
test['risk_score'] = test.apply(rule_based_model, axis = 1)
Explanation: The Rules
* We have 3 observed variables - GPA, PSAT, race
* Predict which college based on those observed variables.
* Rules based on Hoxby, et al 2013
End of explanation
from sklearn import linear_model
feature_cols = ['psat', 'gpa', 'honors']
X = train[feature_cols]
y = train['undermatch_status']
# instantiate, fit
lm = linear_model.LogisticRegression()
lm.fit(X, y)
# The coefficients
print('Coefficients: \n', lm.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((lm.predict(test[feature_cols]) - test['undermatch_status']) ** 2))
# Explained variance score: 1 is perfect prediction
lm.predict(train[feature_cols])
sns.lmplot(x='psat', y='undermatch_status', data=test, logistic=True)
Explanation: Machine Learning Model
Simple Logistic Regression
End of explanation |
4,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of Unicode Character Names
Character data from Python unicodedata module
Step1: Character data from UnicodeData.txt
Step2: Difference between names from unicodedata module and UnicodeData.txt
Note | Python Code:
import sys
import unicodedata
sys.maxunicode
unicodedata.unidata_version
def python_named_chars():
for code in range(sys.maxunicode):
char = chr(code)
try:
yield char, unicodedata.name(char)
except ValueError: # no such name
continue
l_py = list(python_named_chars())
len(l_py)
l_py[0]
l_py[:5], l_py[-5:]
set_py = {name for _, name in l_py}
import collections
words = collections.Counter()
for _, name in l_py:
parts = name.replace('-', ' ').split()
words.update(parts)
len(words)
for word, count in words.most_common(10):
print(f'{count:6d} {word}')
mc = [(w, c) for w, c in words.most_common() if c > 1]
len(mc)
mc[len(mc)//100]
Explanation: Analysis of Unicode Character Names
Character data from Python unicodedata module
End of explanation
len(list(open('UnicodeData.txt')))
import ucd # local module
l_ucd = list(ucd.parser())
len(l_ucd)
l_ucd[:5], l_ucd[-5:]
set_ucd = {rec.name for rec in l_ucd}
Explanation: Character data from UnicodeData.txt
End of explanation
set_py > set_ucd
set_ucd > set_py
ucd_only = sorted(set_ucd - set_py)
len(ucd_only)
ucd_only[:7], ucd_only[-7:]
py_only = sorted(set_py - set_ucd)
len(py_only)
py_only[:7], py_only[-7:]
import collections
words = collections.Counter()
for name in py_only:
if 'CJK UNIFIED IDEOGRAPH' in name:
continue
parts = name.replace('-', ' ').split()
words.update(parts)
len(words)
for word, count in words.most_common(10):
print(f'{count:6d} {word}')
words = collections.Counter()
for name in sorted(set_ucd):
parts = name.replace('-', ' ').split()
words.update(parts)
len(words)
for word, count in words.most_common(100):
print(f'{count:6d} {word}')
max(words, key=len)
singles = sorted((count, word) for word, count in words.items() if len(word)==1)
len(singles)
for count, word in reversed(singles):
print(f'{count:6d} {word}')
unique = sorted(word for word, count in words.items() if count==1)
len(unique)
unique[:50], unique[-50:]
Explanation: Difference between names from unicodedata module and UnicodeData.txt
Note: UnicodeData.txt does not contain algorithmically derived names such as 'CJK UNIFIED IDEOGRAPH-20004'
End of explanation |
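A quick illustrative check (assuming the sets built above): the CJK name is derived algorithmically by unicodedata, so it appears in the Python-derived set but not in the set parsed from UnicodeData.txt.
```python
name = unicodedata.name(chr(0x20004))
print(name)             # 'CJK UNIFIED IDEOGRAPH-20004'
print(name in set_py)   # True  -- algorithmically derived by unicodedata
print(name in set_ucd)  # False -- absent from UnicodeData.txt
```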
4,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pynamical
Step1: First, let's see the population values the logistic map produces for a range of growth rate parameters
Step2: Now let's visualize the system attractors for a large range of growth rate parameters, using bifurcation diagrams
Step3: In the chaotic regime (r=3.6 to 4=4.0), the system has a strange attractor with fractal structure
Step4: Now let's visualize the system's sensitive dependence on initial conditions
Step5: In part 2, I look at phase diagrams that let us visualize our strange attractors and disambiguate chaos from random noise | Python Code:
import IPython.display as IPdisplay
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pynamical
from pynamical import simulate, bifurcation_plot, save_fig
%matplotlib inline
title_font = pynamical.get_title_font()
label_font = pynamical.get_label_font()
Explanation: Pynamical: demo of the logistic map and bifurcation diagrams
Citation info: Boeing, G. 2016. "Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction." Systems, 4 (4), 37. doi:10.3390/systems4040037.
Pynamical documentation: http://pynamical.readthedocs.org
This notebook implements a logistic map and plots its results, bifurcation diagrams, and phase diagrams
End of explanation
# run the logistic model for 20 generations for 7 growth rates between 0.5 and 3.5 then view the output
pops = simulate(num_gens=20, rate_min=0.5, rate_max=3.5, num_rates=7)
pops.applymap(lambda x: '{:03.3f}'.format(x))
def get_colors(cmap, n, start=0., stop=1., alpha=1., reverse=False):
'''return n-length list of rgba colors from the passed colormap name and alpha,
limit extent by start/stop values and reverse list order if flag is true'''
colors = [cm.get_cmap(cmap)(x) for x in np.linspace(start, stop, n)]
colors = [(r, g, b, alpha) for r, g, b, _ in colors]
return list(reversed(colors)) if reverse else colors
# plot the results of the logistic map run for these 7 different growth rates
#color_list = ['#cc00cc', '#4B0082', '#0066cc', '#33cc00', '#cccc33', '#ff9900', '#ff0000']
color_list = get_colors('viridis', n=len(pops.columns), start=0., stop=1)
for color, rate in reversed(list(zip(color_list, pops.columns))):
ax = pops[rate].plot(kind='line', figsize=[10, 6], linewidth=2.5, alpha=0.95, c=color)
ax.grid(True)
ax.set_ylim([0, 1])
ax.legend(title='Growth Rate', loc=3, bbox_to_anchor=(1, 0.525))
ax.set_title('Logistic Model Results by Growth Rate', fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
save_fig('logistic-map-growth-rates')
plt.show()
Explanation: First, let's see the population values the logistic map produces for a range of growth rate parameters
End of explanation
# run the model for 100 generations across 1000 growth rate steps from 0 to 4 then plot the bifurcation diagram
pops = simulate(num_gens=100, rate_min=0, rate_max=4, num_rates=1000, num_discard=1)
bifurcation_plot(pops, filename='logistic-map-bifurcation-0')
# plot the bifurcation diagram for 200 generations, but this time throw out the first 100 rows
# 200-100=100, so we still have 100 generations in the plot, just like in the previous cell
# this will show us only the attractors (aka, the values that each growth rate settles on over time)
pops = simulate(num_gens=100, rate_min=0, rate_max=4, num_rates=1000, num_discard=100)
bifurcation_plot(pops, filename='logistic-map-bifurcation-1')
# run the model for 300 generations across 1,000 growth rate steps from 2.8 to 4, and plot the bifurcation diagram
# this plot is a zoomed-in look at the first plot and shows the period-doubling path to chaos
pops = simulate(num_gens=100, rate_min=2.8, rate_max=4, num_rates=1000, num_discard=200, initial_pop=0.1)
bifurcation_plot(pops, xmin=2.8, xmax=4, filename='logistic-map-bifurcation-2')
# run the model for 200 generations across 1,000 growth rate steps from 3.7 to 3.9, and plot the bifurcation diagram
# this plot is a zoomed-in look at the first plot and shows more detail in the chaotic regimes
pops = simulate(num_gens=100, rate_min=3.7, rate_max=3.9, num_rates=1000, num_discard=100)
bifurcation_plot(pops, xmin=3.7, xmax=3.9, filename='logistic-map-bifurcation-3')
Explanation: Now let's visualize the system attractors for a large range of growth rate parameters, using bifurcation diagrams
End of explanation
# run the model for 500 generations across 1,000 growth rate steps from 3.84 to 3.856, and plot the bifurcation diagram
# throw out the first 300 generations, so we end up with 200 generations in the plot
# this plot is a zoomed-in look at the first plot and shows the same structure we saw at the macro-level
pops = simulate(num_gens=200, rate_min=3.84, rate_max=3.856, num_rates=1000, num_discard=300)
bifurcation_plot(pops, xmin=3.84, xmax=3.856, ymin=0.445, ymax=0.552, filename='logistic-map-bifurcation-4')
Explanation: In the chaotic regime (r=3.6 to 4=4.0), the system has a strange attractor with fractal structure
End of explanation
# plot the numeric output of the logistic model for growth rates of 3.9 and 3.90001
# this demonstrates sensitive dependence on the parameter
rate1 = 3.9
rate2 = rate1 + 0.00001
pops = simulate(num_gens=40, rate_min=rate1, rate_max=rate2, num_rates=2)
ax = pops.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_title('Logistic Model Results by Growth Rate', fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(title='Growth Rate', loc=3)
save_fig('logistic-map-parameter-sensitivity')
plt.show()
# plot the numeric output of the logistic model at growth rate 3.9 for 2 similar starting population values
# this demonstrates sensitive dependence on initial conditions, as they diverge through chaos
r = 3.9
pops1 = simulate(num_gens=55, rate_min=r, rate_max=4.0, num_rates=1, initial_pop=0.5)
pops2 = simulate(num_gens=55, rate_min=r, rate_max=4.0, num_rates=1, initial_pop=0.50001)
pops = pd.concat([pops1, pops2], axis=1)
pops.columns = ['0.5', '0.50001']
ax = pops.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_title('Logistic Model Results by Initial Conditions, r={}'.format(r), fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(title='Initial Population', loc=3)
save_fig('logistic-map-initial-conditions')
plt.show()
# plot the numeric output of the logistic model at growth rate 3.65 for 2 similar starting population values
# this demonstrates how very similar conditions do not diverge when the rate is not chaotic
r = 3.65
pops1 = simulate(num_gens=55, rate_min=r, num_rates=1, initial_pop=0.5)
pops2 = simulate(num_gens=55, rate_min=r, num_rates=1, initial_pop=0.50001)
pops = pd.concat([pops1, pops2], axis=1)
pops.columns = ['0.5', '0.50001']
ax = pops.plot(kind='line', figsize=[10, 6], linewidth=3, alpha=0.6, style=['#003399','#cc0000'])
ax.grid(True)
ax.set_title('Logistic Model Results by Initial Conditions, r={}'.format(r), fontproperties=title_font)
ax.set_xlabel('Generation', fontproperties=label_font)
ax.set_ylabel('Population', fontproperties=label_font)
ax.legend(title='Initial Population', loc=3)
save_fig('logistic-map-initial-conditions-stable')
plt.show()
Explanation: Now let's visualize the system's sensitive dependence on initial conditions
End of explanation
# here's an example of the phase diagrams that I create in pynamical-demo-phase-diagrams.ipynb
IPdisplay.Image(url='images/3d-logistic-map-attractor-1.png', width=500)
Explanation: In part 2, I look at phase diagrams that let us visualize our strange attractors and disambiguate chaos from random noise:
pynamical-demo-phase-diagrams.ipynb
End of explanation |
4,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serial Numbers, How I love thee...
No one really like serial numbers, but keeping track of them is one of the "brushing your teeth" activities that everyone needs to take care of. It's like eating your brussel sprouts. Or listening to your mom. You're just better of if you do it quickly as it just gets more painful over time.
Not only is it just good hygene, but you may be subject to regulations, like eRate in the United States where you have to be able to report on the location of any device by serial number at any point in time.
Trust me, having to play hide-and-go seek with an SSH session is not something you want to do when government auditors are looking for answers.
I'm sure you've already guessed what I'm about to say, but I"ll say it anyway...
There's an API for that!!!
HPE IMC base platform has a great network assets function that automatically gathers all the details of your various devices, assuming of course they support RFC 4133, otherwise known as the Entity MIB. On the bright side, most vendors have chosen to support this standards based MIB, so chances are you're in good shape.
And if they don't support it, they really should. You should ask them. Ok?
So without further ado, let's get started.
Importing the required libraries
I'm sure you're getting used to this part, but it's import to know where to look for these different functions. In this case, we're going to look at a new library that is specifically designed to deal with network assets, including serial numbers.
Step1: How many assets in a Cisco Router?
As some of you may have heard, HPE IMC is a multi-vendor tool and offers support for many of the common devices you'll see in your daily travels.
In this example, we're going to use a Cisco 2811 router to showcase the basic function.
Routers, like chassis switches have multiple components. As any one who's ever been the ~~victem~~ owner of a Smartnet contract, you'll know that you have individual components which have serial numbers as well and all of them have to be reported for them to be covered. So let's see if we managed to grab all of those by first checking out how many individual items we got back in the asset list for this cisco router.
Step2: What's in the box???
Now we know that we've got an idea of how many assets are in here, let's take a look to see exactly what's in one of the asset records to see if there's anything useful in here.
Step3: What can we do with this?
With some basic python string manipulation we could easily print out some of the attributes that we want into what could easily turn into a nicely formated report.
Again realise that the example below is just a subset of what's available in the JSON above. If you want more, just add it to the list.
Step4: Why not just write that to disk?
Although we could go directly to the formated report without a lot of extra work, we would be losing a lot of data which we may have use for later. Instead why don't we export all the available data from the JSON above into a CSV file which can be later opened in your favourite spreadsheet viewer and manipulated to your hearst content.
Pretty cool, no?
Step5: Reading it back
Now we'll read it back from disk to make sure it worked properly. When working with data like this, I find it useful to think about who's going to be consuming the data. For example, when looking at this remember this is a CSV file which can be easily opened in python, or something like Microsoft Excel to manipuate further. It's not realy intended to be read by human beings in this particular format. You'll need another program to consume and munge the data first to turn it into something human consumable.
Step6: What about all my serial numbers at once?
That's a great question! I'm glad you asked. One of the most beautiful things about learning to automate things like asset gathering through an API is that it's often not much more work to do something 1000 times than it is to do it a single time.
This time instead of using the get_dev_asset_details function that we used above which gets us all the assets associated with a single device, let's grab ALL the devices at once.
Step7: That's a lot of assets!
Exactly why we automate things. Now let's write the all_assets list to disk as well.
**note for reasons unknown to me at this time, although the majority of the assets have 27 differnet fields, a few of them actually have 28 different attributes. Something I'll have to dig into later.
Step8: Well That's not good....
So it looks like there are a few network assets that have a different number of attributes than the first one in the list. We'll write some quick code to figure out how big of a problem this is.
Step9: Well that's not so bad
It looks like the items which don't have exactly 27 attribues have exactly 28 attributes. So we'll just pick one of the longer ones to use as the headers for our CSV file and then run the script again.
For this one, I'm going to ask you to trust me that the file is on disk and save us all the trouble of having to print out 1013 seperate assets into this blog post. | Python Code:
from pyhpeimc.auth import *
from pyhpeimc.plat.netassets import *
import csv
auth = IMCAuth("http://", "10.101.0.203", "8080", "admin", "admin")
ciscorouter = get_dev_asset_details('10.101.0.1', auth.creds, auth.url)
Explanation: Serial Numbers, How I love thee...
No one really like serial numbers, but keeping track of them is one of the "brushing your teeth" activities that everyone needs to take care of. It's like eating your brussel sprouts. Or listening to your mom. You're just better of if you do it quickly as it just gets more painful over time.
Not only is it just good hygene, but you may be subject to regulations, like eRate in the United States where you have to be able to report on the location of any device by serial number at any point in time.
Trust me, having to play hide-and-go seek with an SSH session is not something you want to do when government auditors are looking for answers.
I'm sure you've already guessed what I'm about to say, but I"ll say it anyway...
There's an API for that!!!
HPE IMC base platform has a great network assets function that automatically gathers all the details of your various devices, assuming of course they support RFC 4133, otherwise known as the Entity MIB. On the bright side, most vendors have chosen to support this standards based MIB, so chances are you're in good shape.
And if they don't support it, they really should. You should ask them. Ok?
So without further ado, let's get started.
Importing the required libraries
I'm sure you're getting used to this part, but it's important to know where to look for these different functions. In this case, we're going to look at a new library that is specifically designed to deal with network assets, including serial numbers.
End of explanation
len(ciscorouter)
Explanation: How many assets in a Cisco Router?
As some of you may have heard, HPE IMC is a multi-vendor tool and offers support for many of the common devices you'll see in your daily travels.
In this example, we're going to use a Cisco 2811 router to showcase the basic function.
Routers, like chassis switches, have multiple components. If you've ever been the ~~victim~~ owner of a Smartnet contract, you'll know that individual components have serial numbers as well and all of them have to be reported for them to be covered. So let's see if we managed to grab all of those by first checking how many individual items we got back in the asset list for this Cisco router.
End of explanation
ciscorouter[0]
Explanation: What's in the box???
Now we know that we've got an idea of how many assets are in here, let's take a look to see exactly what's in one of the asset records to see if there's anything useful in here.
End of explanation
for i in ciscorouter:
print ("Device Name: " + i['deviceName'] + " Device Model: " + i['model'] +
"\nAsset Name is: " + i['name'] + " Asset Serial Number is: " +
i['serialNum']+ "\n")
Explanation: What can we do with this?
With some basic Python string manipulation we could easily print out some of the attributes that we want into what could easily turn into a nicely formatted report.
Again realise that the example below is just a subset of what's available in the JSON above. If you want more, just add it to the list.
End of explanation
keys = ciscorouter[0].keys()
with open('ciscorouter.csv', 'w') as file:
dict_writer = csv.DictWriter(file, keys)
dict_writer.writeheader()
dict_writer.writerows(ciscorouter)
Explanation: Why not just write that to disk?
Although we could go directly to the formatted report without a lot of extra work, we would be losing a lot of data which we may have use for later. Instead, why don't we export all the available data from the JSON above into a CSV file, which can later be opened in your favourite spreadsheet viewer and manipulated to your heart's content.
Pretty cool, no?
End of explanation
with open('ciscorouter.csv') as file:
print (file.read())
Explanation: Reading it back
Now we'll read it back from disk to make sure it worked properly. When working with data like this, I find it useful to think about who's going to be consuming the data. For example, remember that this is a CSV file which can easily be opened in Python, or in something like Microsoft Excel for further manipulation. It's not really intended to be read by human beings in this particular format. You'll need another program to consume and munge the data first to turn it into something human consumable.
End of explanation
all_assets = get_dev_asset_details_all(auth.creds, auth.url)
len (all_assets)
Explanation: What about all my serial numbers at once?
That's a great question! I'm glad you asked. One of the most beautiful things about learning to automate things like asset gathering through an API is that it's often not much more work to do something 1000 times than it is to do it a single time.
This time instead of using the get_dev_asset_details function that we used above which gets us all the assets associated with a single device, let's grab ALL the devices at once.
End of explanation
keys = all_assets[0].keys()
with open('all_assets.csv', 'w') as file:
dict_writer = csv.DictWriter(file, keys)
dict_writer.writeheader()
dict_writer.writerows(all_assets)
Explanation: That's a lot of assets!
Exactly why we automate things. Now let's write the all_assets list to disk as well.
**Note: for reasons unknown to me at this time, although the majority of the assets have 27 different fields, a few of them actually have 28 different attributes. Something I'll have to dig into later.
End of explanation
print ("The length of the first items keys is " + str(len(keys)))
for i in all_assets:
if len(i) != len(all_assets[0].keys()):
print ("The length of index " + str(all_assets.index(i)) + " is " + str(len(i.keys())))
Explanation: Well That's not good....
So it looks like there are a few network assets that have a different number of attributes than the first one in the list. We'll write some quick code to figure out how big of a problem this is.
End of explanation
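# An alternative sketch (optional): rather than picking one of the longer records to supply the
# CSV headers, take the union of every key seen across all_assets so no column is ever dropped.
all_keys = set()
for asset in all_assets:
    all_keys.update(asset.keys())
# csv.DictWriter(file, sorted(all_keys), restval='') would then cope with rows missing a field.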
keys = all_assets[879].keys()
with open ('all_assets.csv', 'w') as file:
dict_writer = csv.DictWriter(file, keys)
dict_writer.writeheader()
dict_writer.writerows(all_assets)
Explanation: Well that's not so bad
It looks like the items which don't have exactly 27 attributes have exactly 28 attributes. So we'll just pick one of the longer ones to use as the headers for our CSV file and then run the script again.
For this one, I'm going to ask you to trust me that the file is on disk and save us all the trouble of having to print out 1013 separate assets into this blog post.
End of explanation |
4,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Two Lists
Step2: Iterate Over Both Lists As A Single Sequence | Python Code:
from itertools import chain
Explanation: Title: Chain Together Lists
Slug: chain_together_lists
Summary: Chain Together Lists Using Python.
Date: 2017-02-02 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Preliminaries
End of explanation
# Create a list of allies
allies = ['Spain', 'Germany', 'Namibia', 'Austria']
# Create a list of enemies
enemies = ['Mexico', 'United Kingdom', 'France']
Explanation: Create Two Lists
End of explanation
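# A related sketch: chain.from_iterable does the same job when the lists are already gathered
# in a single iterable of iterables.
for country in chain.from_iterable([allies, enemies]):
    print(country)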
# For each country in allies and enemies
for country in chain(allies, enemies):
# print the country
print(country)
Explanation: Iterate Over Both Lists As A Single Sequence
End of explanation |
4,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NEB using ASE
1. Setting up an EAM calculator.
Suppose we want to calculate the minimum energy path of adatom diffusion on a (100) surface. We first need to choose an energy model, and in ASE, this is done by defining a "calculator". Let's choose our calculator to be Zhou's aluminum EAM potential, which we've used in previous labs.
We first import ASE's built-in EAM calculator class
Step1: Then set the potential file
Step2: We just have to point ASE to the potential file. It automatically parses the file and constructs the corresponding EAM calculator for us
Step3: Next, we define our surface. Let's put the atom in the "hollow" site on the (100) surface. You can find out the adatom sites that are available for different types of surfaces in the ASE documentation
Step4: Let's see what the slab looks like
Step5: Let's set the calculator of the slab to the EAM calculator we defined above
Step6: This lets us calculate the potential energy and forces on the atoms
Step7: Notice the nonzero forces, and in particular the strong force in the z-direction acting on the adatom. That's a signal that we're not relaxed.
2. Structure relaxation in ASE.
We can use one of ASE's built-in structure optimizers to relax the structure and find the local energy minimum predicted by the EAM potential. The optimization terminates when the maximum force on an atom falls below fmax, which we set to 0.1 meV/A.
Step8: To compute the activation barrier for adatom diffusion, we first need to know the endpoint of the transition path, which we can determine by looking at the atomic coordinates.
Step9: We again relax the structure
Step10: Nudged elastic band calculations.
We've succeeded in relaxing the endpoints. We can proceed to search for the saddle point using NEB.
Step11: Let's plot the results.
Step12: FLARE code
1. Setup
In this lab, we'll use the Bayesian ML code FLARE ("fast learning of atomistic rare events") that has recently been made open source. To set it up on Google cloud, vbox, or your personal machine, you'll need to pull it from github. I'll give the commands for setting everything up on an AP275 Google cloud instance, but the steps will be pretty much the same on any machine.
git clone https
Step13: Let's put equal and opposite forces on our atoms.
Step14: The FLARE code uses Gaussian process regression (GPR) to construct a covariant force field based on atomic forces, which are the training labels. GPR is a kernel based machine learning method, which means that it makes predictions by comparing test points to structures in the training set. For this simple system, we choose a two-body kernel, which compares pairwise distances in two structures.
Step15: The GP models are local, and require you to choose a cutoff. We'll pick 4 A. The kernel has a few hyperparameters which control uncertainty estimates and length scales. They can be optimized in a rigorous way by maximizing the likelihood of the training data, but for this lab we'll just set the hyperparameters to reasonable values.
Step16: The GP models take structure objects as input, which contain information about the cell and atomic coordinates (much like the Atoms class in ASE).
Step17: We train the GP model by giving it training structures and corresponding forces
Step18: As a quick check, let's make sure we get reasonable results on the training structure
Step19: To make it easier to get force and energy estimates from the GP model, we can wrap it in an ASE calculator
Step20: Now let's test the covariance property of the model. Let's rotate the structure by 90 degrees, and see what forces we get.
Step21: 3. Two plus three body model
In the lab, we'll add a three-body term to the potential, which makes the model significantly more accurate for certain systems. Let's see how this would work for our (100) slab. | Python Code:
from ase.calculators.eam import EAM
Explanation: NEB using ASE
1. Setting up an EAM calculator.
Suppose we want to calculate the minimum energy path of adatom diffusion on a (100) surface. We first need to choose an energy model, and in ASE, this is done by defining a "calculator". Let's choose our calculator to be Zhou's aluminum EAM potential, which we've used in previous labs.
We first import ASE's built-in EAM calculator class:
End of explanation
import os
pot_file = os.environ.get('LAMMPS_POTENTIALS') + '/Al_zhou.eam.alloy'
print(pot_file)
Explanation: Then set the potential file:
End of explanation
zhou = EAM(potential=pot_file)
Explanation: We just have to point ASE to the potential file. It automatically parses the file and constructs the corresponding EAM calculator for us:
End of explanation
from ase.build import fcc100, add_adsorbate
slab = fcc100('Al', size=(3, 3, 3))
add_adsorbate(slab, 'Al', 2, 'hollow') # put adatom 2 A above the slab
slab.center(vacuum=5.0, axis=2) # 5 A of vacuum on either side
Explanation: Next, we define our surface. Let's put the atom in the "hollow" site on the (100) surface. You can find out the adatom sites that are available for different types of surfaces in the ASE documentation: https://wiki.fysik.dtu.dk/ase/ase/build/surface.html
End of explanation
from ase.visualize import view
view(slab, viewer='x3d')
Explanation: Let's see what the slab looks like:
End of explanation
slab.set_calculator(zhou)
Explanation: Let's set the calculator of the slab to the EAM calculator we defined above:
End of explanation
slab.get_potential_energy() # energy in eV
slab.get_forces() # forces in eV/A
Explanation: This lets us calculate the potential energy and forces on the atoms:
End of explanation
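# A small optional check (a sketch): the adatom was added last, so its force vector is the
# final row of the forces array.
print(slab.get_forces()[-1])  # expect a large z-component before relaxation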
from ase.optimize import BFGS
dyn = BFGS(slab)
dyn.run(fmax=0.0001)
Explanation: Notice the nonzero forces, and in particular the strong force in the z-direction acting on the adatom. That's a signal that we're not relaxed.
2. Structure relaxation in ASE.
We can use one of ASE's built-in structure optimizers to relax the structure and find the local energy minimum predicted by the EAM potential. The optimization terminates when the maximum force on an atom falls below fmax, which we set to 0.1 meV/A.
End of explanation
slab_2 = fcc100('Al', size=(3, 3, 3))
add_adsorbate(slab_2, 'Al', 2, 'hollow') # put adatom 2 A above the slab
slab_2.center(vacuum=5.0, axis=2) # 5 A of vacuum on either side
slab_2.set_calculator(EAM(potential=pot_file))
slab_2.positions
slab_2.positions[-1][0:2] = slab_2.positions[10][0:2] # notice the adatom is directly above atom 9
view(slab_2, viewer='x3d')
Explanation: To compute the activation barrier for adatom diffusion, we first need to know the endpoint of the transition path, which we can determine by looking at the atomic coordinates.
End of explanation
dyn = BFGS(slab_2)
dyn.run(fmax=0.0001)
Explanation: We again relax the structure:
End of explanation
from ase.neb import NEB
import numpy as np
# make band
no_images = 15
images = [slab]
images += [slab.copy() for i in range(no_images-2)]
images += [slab_2]
neb = NEB(images)
# interpolate middle images
neb.interpolate()
# set calculators of middle images
pot_dir = os.environ.get('LAMMPS_POTENTIALS')
for image in images[1:no_images-1]:
image.set_calculator(EAM(potential=pot_file))
# optimize the NEB trajectory
optimizer = BFGS(neb)
optimizer.run(fmax=0.01)
# calculate the potential energy of each image
pes = np.zeros(no_images)
pos = np.zeros((no_images, len(images[0]), 3))
for n, image in enumerate(images):
pes[n] = image.get_potential_energy()
pos[n] = image.positions
Explanation: Nudged elastic band calculations.
We've succeeded in relaxing the endpoints. We can proceed to search for the saddle point using NEB.
End of explanation
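# A quick follow-up sketch: once the band is optimized, the activation barrier is simply the
# maximum energy along the path relative to the first image.
barrier = pes.max() - pes[0]
print('Approximate activation barrier (eV):', barrier)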
import matplotlib.pyplot as plt
plt.plot(pes-pes[0], 'k.', markersize=10) # plot energy difference in eV w.r.t. first image
plt.plot(pes-pes[0], 'k--', markersize=10)
plt.xlabel('image #')
plt.ylabel('energy difference (eV)')
plt.show()
Explanation: Let's plot the results.
End of explanation
import numpy as np
from ase import Atoms
positions = np.array([[0, 0, 0], [1, 0, 0]])
cell = np.eye(3) * 10
two_atoms = Atoms(positions=positions, cell=cell)
from ase.visualize import view
view(two_atoms, viewer='x3d')
Explanation: FLARE code
1. Setup
In this lab, we'll use the Bayesian ML code FLARE ("fast learning of atomistic rare events") that has recently been made open source. To set it up on Google cloud, vbox, or your personal machine, you'll need to pull it from github. I'll give the commands for setting everything up on an AP275 Google cloud instance, but the steps will be pretty much the same on any machine.
git clone https://github.com/mir-group/flare.git
The code is written in Python, but inner loops are accelerated with Numba, which you'll need to install with pip.
sudo apt install python3-pip
pip3 install numba
You'll see warnings from Numba that a more recent version of "colorama" needs to be installed. You can install it with pip:
pip3 install colorama
You may find it helpful to add the FLARE directory to your Python path and bash environment, which makes it easier to directly import files.
nano .profile
export FLARE=\$HOME/Software/flare
export PYTHONPATH=\$PYTHONPATH:\$FLARE:\$FLARE/otf_engine:\$FLARE/modules
source .profile
2. A toy example
Let's look at a simple example to get a feel for how the code works. Let's put two atoms in a box along the x axis:
End of explanation
forces = np.array([[-1, 0, 0], [1, 0, 0]])
Explanation: Let's put equal and opposite forces on our atoms.
End of explanation
from kernels import two_body, two_body_grad, two_body_force_en
from gp import GaussianProcess
Explanation: The FLARE code uses Gaussian process regression (GPR) to construct a covariant force field based on atomic forces, which are the training labels. GPR is a kernel based machine learning method, which means that it makes predictions by comparing test points to structures in the training set. For this simple system, we choose a two-body kernel, which compares pairwise distances in two structures.
End of explanation
hyps = np.array([1, 1, 1e-3]) # signal std, length scale, noise std
cutoffs = np.array([2])
gp_model = GaussianProcess(kernel=two_body, kernel_grad=two_body_grad, hyps=hyps,
cutoffs=cutoffs, energy_force_kernel=two_body_force_en)
Explanation: The GP models are local, and require you to choose a cutoff. We'll pick 4 A. The kernel has a few hyperparameters which control uncertainty estimates and length scales. They can be optimized in a rigorous way by maximizing the likelihood of the training data, but for this lab we'll just set the hyperparameters to reasonable values.
End of explanation
import struc
training_struc = struc.Structure(cell=cell, species=['A']*2, positions=positions)
Explanation: The GP models take structure objects as input, which contain information about the cell and atomic coordinates (much like the Atoms class in ASE).
End of explanation
gp_model.update_db(training_struc, forces)
gp_model.set_L_alpha()
Explanation: We train the GP model by giving it training structures and corresponding forces:
End of explanation
gp_model.predict(gp_model.training_data[0], 2) # second argument is the force component (x=1, y=2, z=3)
Explanation: As a quick check, let's make sure we get reasonable results on the training structure:
End of explanation
from gp_calculator import GPCalculator
gp_calc = GPCalculator(gp_model)
Explanation: To make it easier to get force and energy estimates from the GP model, we can wrap it in an ASE calculator:
End of explanation
# print positions, energy, and forces before rotation
two_atoms.set_calculator(gp_calc)
print(two_atoms.positions)
print(two_atoms.get_potential_energy())
print(two_atoms.get_forces())
two_atoms.rotate(90, 'z') # rotate the atoms 90 degrees about the z axis
two_atoms.set_calculator(gp_calc) # set calculator to gp model
# print positions, energy, and forces after rotation
print(two_atoms.positions)
print(two_atoms.get_potential_energy())
print(two_atoms.get_forces())
Explanation: Now let's test the covariance property of the model. Let's rotate the structure by 90 degrees, and see what forces we get.
End of explanation
# initialize gp model
import kernels
import gp
import numpy as np
kernel = kernels.two_plus_three_body
kernel_grad = kernels.two_plus_three_body_grad
hyps = np.array([1, 1, 0.1, 1, 1e-3]) # sig2, ls2, sig3, ls3, noise std
cutoffs = np.array([4.96, 4.96])
energy_force_kernel = kernels.two_plus_three_force_en
gp_model = gp.GaussianProcess(kernel, kernel_grad, hyps, cutoffs,
energy_force_kernel=energy_force_kernel)
# make slab structure in ASE
from ase.build import fcc100, add_adsorbate
import os
from ase.calculators.eam import EAM
slab = fcc100('Al', size=(3, 3, 3))
add_adsorbate(slab, 'Al', 2, 'hollow') # put adatom 2 A above the slab
slab.center(vacuum=5.0, axis=2) # 5 A of vacuum on either side
pot_file = os.environ.get('LAMMPS_POTENTIALS') + '/Al_zhou.eam.alloy'
zhou = EAM(potential=pot_file)
slab.set_calculator(zhou)
# make training structure
import struc
training_struc = struc.Structure(cell=slab.cell,
species=['Al']*len(slab),
positions=slab.positions)
training_forces = slab.get_forces()
# add atoms to training database
gp_model.update_db(training_struc, training_forces)
gp_model.set_L_alpha()
# wrap in ASE calculator
from gp_calculator import GPCalculator
gp_calc = GPCalculator(gp_model)
# test on training structure
slab.set_calculator(gp_calc)
GP_forces = slab.get_forces()
# check accuracy by making a parity plot
import matplotlib.pyplot as plt
plt.plot(training_forces.reshape(-1), GP_forces.reshape(-1), '.')
plt.show()
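# A simple companion metric (a sketch): the mean absolute error of the GP forces on the
# training structure, to go along with the parity plot.
mae = np.mean(np.abs(training_forces - GP_forces))
print('MAE on training forces (eV/A):', mae)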
Explanation: 3. Two plus three body model
In the lab, we'll add a three-body term to the potential, which makes the model significantly more accurate for certain systems. Let's see how this would work for our (100) slab.
End of explanation |
4,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting $\sin(x^2+y^2)$ for a regular grid with a total of 40,000 points or 20,000 points and on 20,000 random points
We create (x,y) points first and plot a scatter plot on them with gray level given by $\sin(x^2+y^2)$
Importing vector and plotting libraries. %matplotlib inline to see plots in notebook
Step1: First we show the case of a regular grid with a total of 40,000 points
We plot the function between -10 and 10, so for a total of 40,000 points we need a spacing of $0.1=20/\sqrt{40000}$ in a regular grid
Step2: Already at this resolution one can see interference patterns. These patterns are clearer when using fewer points in the grid,
below for 20,000 points, for which the spacing is $0.1414=20/\sqrt{20000}$
Step3: Now for 20,000 random points | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Plotting $\sin(x^2+y^2)$ for a regular grid with a total of 40,000 points or 20,000 points and on 20,000 random points
We create (x,y) points first and plot a scatter plot on them with gray level given by $\sin(x^2+y^2)$
Importing vector and plotting libraries. %matplotlib inline to see plots in notebook
End of explanation
xlist = np.arange(-10.0, 10.0, 0.1) # vector of x values
ylist = np.arange(-10.0, 10.0, 0.1) # vector of y values
X, Y = np.meshgrid(xlist, ylist) # regular mesh from x and y values
Z = np.sin(X**2 + Y**2) # function to plot
fig, axes = plt.subplots(figsize=(4,4)) # Making fig square
plt.title('Hi res regular grid')
plt.scatter(X, Y, c=Z, s=1) # Scatter plot
plt.gray()
Explanation: First we show the case of a regular grid with a total of 40,000 points
We plot the function between -10 and 10, so for a total of 40,000 points we need a spacing of $0.1=20/\sqrt{40000}$ in a regular grid
End of explanation
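# A small helper sketch: the grid spacing for any target number of points over [-10, 10) is
# 20 / sqrt(n_points); for 40,000 points that reproduces the 0.1 used above.
print(20 / np.sqrt(40000))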
xlist = np.arange(-10.0, 10.0, 0.1414) # vector of x values
ylist = np.arange(-10.0, 10.0, 0.1414) # vector of y values
X, Y = np.meshgrid(xlist, ylist) # regular mesh from x and y values
Z = np.sin(X**2 + Y**2) # function to plot
fig, axes = plt.subplots(figsize=(4,4)) # Making fig square
plt.title('Lower res regular grid')
plt.scatter(X, Y, c=Z, s=1) # Scatter plot
plt.gray()
Explanation: Already at this resolution one can see interference patterns. These patterns are clearer when using fewer points in the grid,
below for 20,000 points, for which the spacing is $0.1414=20/\sqrt{20000}$
End of explanation
X = np.random.uniform(low=-10, high=10, size=(20000,))
Y= np.random.uniform(low=-10, high=10, size=(20000,))
Z = np.sin(X**2 + Y**2)
plt.figure()
fig, axes = plt.subplots(figsize=(4,4)) # Making fig square
plt.title('random points')
plt.scatter(X, Y, c=Z, s=1)
plt.gray()
Explanation: Now for 20,000 random points
End of explanation |
4,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wayne H Nixalo - 09 Aug 2017
This JNB is an attempt to do the neural artistic style transfer and super-resolution examples done in class, on a GPU using PyTorch for speed.
Lesson NB
Step1: Setup
Step2: Create Model | Python Code:
%matplotlib inline
import importlib
import os, sys; sys.path.insert(1, os.path.join('../utils'))
from utils2 import *
import torch, torch.nn as nn, torch.nn.functional as F, torch.optim as optim
from torch.autograd import Variable
from torch.utils.serialization import load_lua
from torch.utils.data import DataLoader
from torchvision import transforms, models, datasets
Explanation: Wayne H Nixalo - 09 Aug 2017
This JNB is an attempt to do the neural artistic style transfer and super-resolution examples done in class, on a GPU using PyTorch for speed.
Lesson NB: neural-style-pytorch
Neural Style Transfer
Style Transfer / Super Resolution Implementation in PyTorch
End of explanation
path = '../data/nst/'
fnames = pickle.load(open(path+'fnames.pkl','rb'))
img = Image.open(path + fnames[0]); img
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1,1,1,3))
preproc = lambda x: (x - rn_mean)[:,:,:,::-1]
img_arr = preproc(np.expand_dims(np.array(img),0))
shp = img_arr.shape
deproc = lambda x: x[:,:,:,::-1] + rn_mean
Explanation: Setup
End of explanation
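# A quick sanity-check sketch: deproc should undo preproc, since it re-reverses the channel
# order and adds the mean back.
x_check = np.expand_dims(np.array(img), 0)
print(np.allclose(deproc(preproc(x_check)), x_check))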
def download_convert_vgg16_model():
model_url = 'http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7'
file = get_file(model_url, cache_subdir='models')
vgglua = load_lua(file).parameters()
vgg = models.VGGFeature()
for (src, dst) in zip(vgglua[0], vgg.parameters()): dst[:] = src[:]
torch.save(vgg.state_dict(), path + 'vgg16_feature.pth')
url = 'https://s3-us-west-2.amazonaws.com/jcjohns-models/'
fname = 'vgg16-00b39a1b.pth'
file = get_file(fname, url+fname, cache_subdir='models')
vgg = models.vgg.vgg16()
vgg.load_state_dict(torch.load(file))
optimizer = optim.Adam(vgg.parameters())
vgg.cuda();
arr_lr = bcolz.open(path + 'trn_resized_72.bc')[:]
arr_hr = bcolz.open(path + 'trn_resized_288.bc')[:]
arr = bcolz.open(dpath + 'trn_resized.bc')[:]
x = Variable(arr[0])
y = model(x)
url = 'http://www.files.fast.ai/models/'
fname = 'imagenet_class_index.json'
fpath = get_file(fname, url + fname, cache_subdir='models')
class ResidualBlock(nn.Module):
def __init__(self, num):
super(ResidualBlock, self).__init__()
self.c1 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
self.c2 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
self.b1 = nn.BatchNorm2d(num)
self.b2 = nn.BatchNorm2d(num)
def forward(self, x):
h = F.relu(self.b1(self.c1(x)))
h = self.b2(self.c2(h))
return h + x
class FastStyleNet(nn.Module):
def __init__(self):
super(FastStyleNet, self).__init__()
self.cs = [nn.Conv2d(3, 32, kernel_size=9, stride=1, padding=4),
nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)]
self.b1s = [nn.BatchNorm2d(i) for i in [32, 64, 128]]
self.rs = [ResidualBlock(128) for i in range(5)]
self.ds = [nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)]
self.b2s = [nn.BatchNorm2d(i) for i in [64, 32]]
self.d3 = nn.Conv2d(32, 3, kernel_size=9, stride=1, padding=4)
def forward(self, h):
for i in range(3): h = F.relu(self.b1s[i](self.cs[i](h)))
for r in self.rs: h = r(h)
for i in range(2): h = F.relu(self.b2s[i](self.ds[i](h)))
return self.d3(h)
Explanation: Create Model
End of explanation |
4,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 21
Step1: With our setup in place, let's do a few standard imports to get the ball rolling.
Step2: The ntext step we want to do is load our dataset. We're using a small dataset we've prepared that's pulled out of the larger GDB benchmarks. The dataset contains the atomization energies for 1K small molecules.
Step3: We now need a way to transform molecules that is useful for prediction of atomization energy. This representation draws on foundational work [1] that represents a molecule's 3D electrostatic structure as a 2D matrix $C$ of distances scaled by charges, where the $ij$-th element is represented by the following charge structure.
$C_{ij} = \frac{q_i q_j}{r_{ij}^2}$
If you're observing carefully, you might ask, wait doesn't this mean that molecules with different numbers of atoms generate matrices of different sizes? In practice the trick to get around this is that the matrices are "zero-padded." That is, if you're making coulomb matrices for a set of molecules, you pick a maximum number of atoms $N$, make the matrices $N\times N$ and set to zero all the extra entries for this molecule. (There's a couple extra tricks that are done under the hood beyond this. Check out reference [1] or read the source code in DeepChem!)
DeepChem has a built in featurization class dc.feat.CoulombMatrixEig that can generate these featurizations for you.
Step4: Note that in this case, we set the maximum number of atoms to $N = 23$. Let's now load our dataset file into DeepChem. As in the previous tutorials, we use a Loader class, in particular dc.data.SDFLoader to load our .sdf file into DeepChem. The following snippet shows how we do this
Step5: For the purposes of this tutorial, we're going to do a random split of the dataset into training, validation, and test. In general, this split is weak and will considerably overestimate the accuracy of our models, but for now in this simple tutorial isn't a bad place to get started.
Step6: One issue that Coulomb matrix featurizations have is that the range of entries in the matrix $C$ can be large. The charge $q_1q_2/r^2$ term can range very widely. In general, a wide range of values for inputs can throw off learning for the neural network. For this, a common fix is to normalize the input values so that they fall into a more standard range. Recall that the normalization transform applies to each feature $X_i$ of datapoint $X$
$\hat{X_i} = \frac{X_i - \mu_i}{\sigma_i}$
where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th feature. This transformation enables the learning to proceed smoothly. A second point is that the atomization energies also fall across a wide range. So we apply an analogous transformation normalization transformation to the output to scale the energies better. We use DeepChem's transformation API to make this happen
Step7: Now that we have the data cleanly transformed, let's do some simple machine learning. We'll start by constructing a random forest on top of the data. We'll use DeepChem's hyperparameter tuning module to do this.
Step8: Let's build one more model, a kernel ridge regression, on top of this raw data. | Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
Explanation: Tutorial Part 21: Exploring Quantum Chemistry with GDB1k
Most of the tutorials we've walked you through so far have focused on applications to the drug discovery realm, but DeepChem's tool suite works for molecular design problems generally. In this tutorial, we're going to walk through an example of how to train a simple molecular machine learning for the task of predicting the atomization energy of a molecule. (Remember that the atomization energy is the energy required to form 1 mol of gaseous atoms from 1 mol of the molecule in its standard state under standard conditions).
Colab
This tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine.
End of explanation
import deepchem as dc
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge
Explanation: With our setup in place, let's do a few standard imports to get the ball rolling.
End of explanation
tasks = ["atomization_energy"]
dataset_file = "../../datasets/gdb1k.sdf"
smiles_field = "smiles"
mol_field = "mol"
Explanation: The next step is to load our dataset. We're using a small dataset we've prepared that's pulled out of the larger GDB benchmarks. The dataset contains the atomization energies for 1K small molecules.
End of explanation
featurizer = dc.feat.CoulombMatrixEig(23, remove_hydrogens=False)
Explanation: We now need a way to transform molecules that is useful for prediction of atomization energy. This representation draws on foundational work [1] that represents a molecule's 3D electrostatic structure as a 2D matrix $C$ of distances scaled by charges, where the $ij$-th element is represented by the following charge structure.
$C_{ij} = \frac{q_i q_j}{r_{ij}^2}$
If you're observing carefully, you might ask, wait doesn't this mean that molecules with different numbers of atoms generate matrices of different sizes? In practice the trick to get around this is that the matrices are "zero-padded." That is, if you're making coulomb matrices for a set of molecules, you pick a maximum number of atoms $N$, make the matrices $N\times N$ and set to zero all the extra entries for this molecule. (There's a couple extra tricks that are done under the hood beyond this. Check out reference [1] or read the source code in DeepChem!)
DeepChem has a built in featurization class dc.feat.CoulombMatrixEig that can generate these featurizations for you.
End of explanation
loader = dc.data.SDFLoader(
tasks=["atomization_energy"],
featurizer=featurizer)
dataset = loader.create_dataset(dataset_file)
Explanation: Note that in this case, we set the maximum number of atoms to $N = 23$. Let's now load our dataset file into DeepChem. As in the previous tutorials, we use a Loader class, in particular dc.data.SDFLoader to load our .sdf file into DeepChem. The following snippet shows how we do this:
End of explanation
random_splitter = dc.splits.RandomSplitter()
train_dataset, valid_dataset, test_dataset = random_splitter.train_valid_test_split(dataset)
Explanation: For the purposes of this tutorial, we're going to do a random split of the dataset into training, validation, and test. In general, this split is weak and will considerably overestimate the accuracy of our models, but for now in this simple tutorial isn't a bad place to get started.
End of explanation
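# A quick optional look (a sketch, assuming these DeepChem dataset objects support len()):
# how many molecules ended up in each split.
print(len(train_dataset), len(valid_dataset), len(test_dataset))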
transformers = [
dc.trans.NormalizationTransformer(transform_X=True, dataset=train_dataset),
dc.trans.NormalizationTransformer(transform_y=True, dataset=train_dataset)]
for dataset in [train_dataset, valid_dataset, test_dataset]:
for transformer in transformers:
dataset = transformer.transform(dataset)
Explanation: One issue that Coulomb matrix featurizations have is that the range of entries in the matrix $C$ can be large. The charge $q_1q_2/r^2$ term can range very widely. In general, a wide range of values for inputs can throw off learning for the neural network. For this, a common fix is to normalize the input values so that they fall into a more standard range. Recall that the normalization transform applies to each feature $X_i$ of datapoint $X$
$\hat{X_i} = \frac{X_i - \mu_i}{\sigma_i}$
where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th feature. This transformation enables the learning to proceed smoothly. A second point is that the atomization energies also fall across a wide range. So we apply an analogous transformation normalization transformation to the output to scale the energies better. We use DeepChem's transformation API to make this happen:
End of explanation
def rf_model_builder(model_dir, **model_params):
sklearn_model = RandomForestRegressor(**model_params)
return dc.models.SklearnModel(sklearn_model, model_dir)
params_dict = {
"n_estimators": [10, 100],
"max_features": ["auto", "sqrt", "log2", None],
}
metric = dc.metrics.Metric(dc.metrics.mean_absolute_error)
optimizer = dc.hyper.GridHyperparamOpt(rf_model_builder)
best_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search(
params_dict, train_dataset, valid_dataset, output_transformers=transformers,
metric=metric, use_max=False)
for key, value in all_rf_results.items():
print(f'{key}: {value}')
print('Best hyperparams:', best_rf_hyperparams)
Explanation: Now that we have the data cleanly transformed, let's do some simple machine learning. We'll start by constructing a random forest on top of the data. We'll use DeepChem's hyperparameter tuning module to do this.
End of explanation
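# An optional follow-up sketch (assuming the usual DeepChem evaluate API): score the tuned
# random forest on the held-out test set as well.
test_scores = best_rf.evaluate(test_dataset, [metric], transformers)
print(test_scores)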
def krr_model_builder(model_dir, **model_params):
sklearn_model = KernelRidge(**model_params)
return dc.models.SklearnModel(sklearn_model, model_dir)
params_dict = {
"kernel": ["laplacian"],
"alpha": [0.0001],
"gamma": [0.0001]
}
metric = dc.metrics.Metric(dc.metrics.mean_absolute_error)
optimizer = dc.hyper.GridHyperparamOpt(krr_model_builder)
best_krr, best_krr_hyperparams, all_krr_results = optimizer.hyperparam_search(
params_dict, train_dataset, valid_dataset, output_transformers=transformers,
metric=metric, use_max=False)
for key, value in all_krr_results.items():
print(f'{key}: {value}')
print('Best hyperparams:', best_krr_hyperparams)
Explanation: Let's build one more model, a kernel ridge regression, on top of this raw data.
End of explanation |
4,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
La Magia de la television
Capitulo 1
Step2: En psicologia probablemente se harian un festin analizando estas canciones, la protagonista se nombra a si misma tantas veces que no deja lugar a dudas de quien es el programa. Aplicando lo que sabemos de cadenas y secuencias en general, vamos a escribir un poco de codigo para poder ponerle numeros a esta afirmacion, contando cuantas veces dicen "Simona" los temas de Simona.
Si Si Simona
Step4: Simona Va
Step6: Soy Como Soy
Step7: Metodos del tipo cadena
Por ahora los metodos son las funciones que puedo llamar para un tipo de datos dado, utilizando . . La forma de verlos todos es utilizando la funcion dir
Step8: OJO
Step10: Si bien quiero solo contar las apariciones, podria hacer una funcion que me devuelva las apariciones y contar cuantos elementos me devolvio. De esa forma, tengo una funcion que sirve para mas de un contexto.
Dado que todo lo que vamos a usar son operaciones de secuencias en gral, podemos nombrar la funcion de forma general[1]
Step11: Discusiones importantes para recordar
Step12: Version abusando de Split y Join
Step13: Ahora en una sola linea
Step14: Version haciendo templates con format strings
Supongamos que es bastante comun esto de solo cambiar el nombre, andar cortando y haciendo copias de listas, cadenas o lo que sea, no es como una idea muy copada. Usando cadenas con formato, se puede hacer mas amigable.
Paso 1
Step15: Usando el template que generamos, ahora podemos usar format para obtener nuestras multiples versiones de la cancion
Step16: Format Strings (Cadenas con formato)
Las cadenas con formato son de la forma blablabla {} blablabla donde {} luego se va a reemplazar por el/los valores que le pasamos como parametro al metodo format. En el ejemplo anterior, utilizamos {0}, porque si no especificamos nada, asume que cada {} corresponde a un parametro distinto. En este caso, el 0 representa al primer parametro.
Step19: Ahora con esto, podemos hasta hacer una funcion hermosa que nos genere premisas de novelas completamente ~~genericas~~ novedosas.
Step22: Desde Python 3.6 en adelante, hay una forma mucho mas linda de hacer esto, ya que mejora bastante la legibilidad (notar que en el caso anterior no tengo idea de a que parametro representa cada {} a menos que vaya contando, y para mas de un parametro, poner todo con numeros es bastante molesto)
Step23: Para poder lograr este efecto, las cadenas deben tener una letra f justo antes de las comillas al inicio y entre llaves expresiones. Y digo expresiones, porque esto vale
Step24: Parte 3
Step25: Mas de uno se quedo estudiando hasta tarde y se topó con estos programas donde hay que adivinar que palabra se forma con esas letras. Si bien las reglas dicen que solo se pueden formar palabras conectando letras de celdas contiguas, el 90% de la gente que llama cree que la respuesta es "queso", "mozzarela" o "Estambul" por alguna razon.
Es bastante evidente que la respuesta ~~es azufre~~ se encuentra en alguna rotacion de "freazu" en sentido horario o anti-horario. Escribamos codigo que nos imprima todas las posibles rotaciones de una cadena a ver si encontramos la respuesta | Python Code:
Image(filename='./clase-09-04_images/i1.jpg')
Explanation: The Magic of Television
Chapter 1: Argentine television is one giant template
Part 0: General review of sequences
| |Strings|Tuples|Lists|
|:---|:---|:---|:---|
|Access by index|Yes|Yes|Yes|
|Iterate over indices|Yes|Yes|Yes|
|Iterate over elements|Yes|Yes|Yes|
|Elements|Characters|Anything|Anything|
|Mutability|Immutable|Immutable|Mutable|
|Operator + (concatenation)|Copy|Copy|Copy|
|Slices|Copy|Copy|Copy|
|Append|Doesn't exist|Doesn't exist|Appends (mutates)|
|Find an element|Yes|Yes|Yes|
Part 1: Simona and narcissism on TV
End of explanation
si_si_simona = Si te vas yo voy con vos a algún lugar
Si jugás yo tengo ganas de jugar
Si te quedas yo me quedo
Y si te alejas yo me muero
Soy quien soy
Si soy algo distraída que mas da
Puede ser que sin querer me fui a soñar
Si sonríes yo sonrío
Si me miras yo suspiro
Soy quien soy
Si, si, si Simona si
Simona es así
Simona
Si, si, si Simona si
Simona es así
Simona
(Si, si, si) Simona
(Si, si, si) Simona
Si soy algo distraída que mas da
Puede ser que sin querer me fui a soñar
Si sonríes yo sonrío
Si me miras yo suspiro
Soy quien soy
Si, si, si
Simona si
Simona es así
Simona
Si, si, si Simona si
Simona es así
Simona
Simona
Simona, si
Si sonríes yo sonrío
Si me mirras yo suspiro
Soy quien soy
Si, si, si Simona si
Simona es así
Simona
Si, si, si Simona si
Simona es así
Simona
Si, si, si Simona si
Simona es así
Simona
Si, si, si Simona si
Simona es así
Simona
(Simona, Simona)
Simona
(Simona, Simona)
Simona
Explanation: Psychologists would probably have a field day analyzing these songs: the protagonist names herself so many times that there is no doubt about whose show this is. Applying what we know about strings and sequences in general, let's write a bit of code to put numbers behind this claim by counting how many times the Simona songs say "Simona".
Si Si Simona
End of explanation
simona_va = Es un lindo día para pedir un deseo
No me importa si se cumple o no
Es un lindo día para jugarnos enteros
Vamos a cantar una canción
Amanece más temprano cuando quiero
Porque dentro mío brilla el sol
Al final siempre consigo lo que quiero
Por que lo hago con amor
Solo hay que cruzar bien los dedos
Y desear que pase con todo el corazón
Vamos a cubrirte los miedos
Vamos que ya llega lo mejor
Simona va, Simona va
Andando se hace el camino
Yo manejo mi destino
Simona va, Simona va
Si piso dejo mi huella
Voy a alcanzar las estrellas
Ella va, va, va, viene y va
Es un lindo dia para hacer algo bien bueno
Disfrutando de la sensacion
De animarse aunque no te salga perfecto
Cada vez ira mejor
[?] cuando siento
Que mañana voy a verte a vos
Porque yo siempre consigo lo que quiero
Nadie me dice que no
Solo hay que cruzar bien los dedos
Y desear que pase con todo el corazón
Vamos sacudite los miedos
Vamos que ya llega lo mejor
Simona va, Simona va
Andando se hace el camino
Yo manejo mi destino
Simona va, Simona va
Si piso dejo mi huella
Voy a alcanzar las estrellas
Ella va, va, va, viene y va
Ya va, yo tengo mi tiempo
Are you ready for funky?
Verás yo voy con lo puesto
Como no?
No da que no seas sincero
Yo no, no le tengo miedo a na na na na na
Simona va, Simona va
Andando se hace el camino
Yo manejo mi destino
Simona va, Simona va
Si piso dejo mi huella
Voy a alcanzar las estrellas
Ella va, va, va
Simona va, Simona va
Andando se hace el camino
Yo manejo mi destino
Simona va, Simona va
Si piso dejo mi huella
Voy a alcanzar las estrellas
Ella va, va, va, viene y va
Explanation: Simona Va
End of explanation
soy_como_soy = Soy especial
A veces no me entienden, es normal
Yo digo siempre lo que pienso
Aunque te caiga mal
Ya vez mi personalidad
Tengo la música en la sangre prefiero bailar
No es necesario interpretar, adivinar
Soy lo que siento
Soy Simona
Cantar mi corazón ilusiona
Mi sueño voy de a poco alcanzando
Y no puedo dejar de pensar en tu amor
Cada vez que un recuerdo se asoma
Intento al menos no ser tan obvia
Y que no te des cuenta que muero por vos
Soy como soy pero mi amor
Es mas lindo cuando somos dos
Vuelvo a empezar
No se porque doy tantas vueltas
Si en verdad
Tu amor es todo lo que quiero
Me cuesta tanto disimular
Ya vez mi personalidad
Tengo la música en la sangre prefiero bailar
No es necesario interpretar, adivinar
Soy lo que siento
Soy Simona
Cantar mi corazón ilusiona
Mi sueño voy de a poco alcanzando
Y no puedo dejar de pensar en tu amor
Cada vez que un recuerdo se asoma
Intento al menos no ser tan obvia
Y que no te des cuenta que muero por vos
Soy como soy, pero mi amor
Es mas lindo cuando somos dos
(Cuando somos dos)
(Es mas lindo cuando somos dos)
Soy Simona
Cantar mi corazón ilusiona
Mi sueño voy de a poco alcanzando
Y no puedo dejar de pensar en tu amor
Cada vez que un recuerdo se asoma (se asoma)
Intento al menos no ser tan obvia
Y que no te des cuenta que muero por vos
Soy como soy, pero mi amor
Es mas lindo cuando somos dos
Explanation: Soy Como Soy
End of explanation
dir("")
Explanation: Methods of the string type
For now, methods are the functions we can call on a given data type using the dot (.). The way to see them all is with the dir function
End of explanation
help("".find)
Explanation: CAREFUL: Notice that some names start with __; that tells us we must not touch them under any circumstances (more on this a few classes from now)
Now, to see the specific documentation of the method we want to use, we can call help like this:
End of explanation
def buscar_todas(secuencia,sub_secuencia):
Recibe una secuencia, devuelve una lista con los indices de todas las apariciones de la sub_secuencia dentro de la secuencia original.
ocurrencias = []
anterior = secuencia.find(sub_secuencia)
while anterior != -1:
ocurrencias.append(anterior)
anterior = secuencia.find(sub_secuencia , anterior+1)
return ocurrencias
# Podemos probar que la funcion se comporta como esperamos
# Si algo de esto imprime False, claramente no anda
print ("Buscar devuelve vacio si no encuentra :",buscar_todas("Hola","pp") == [] )
print ("Buscar devuelve a :",buscar_todas("Hola","Hola") == [0] )
print("\n"*2)
# Ahora a contar
print("Apariciones de Simona en 'Si Si Simona': ",len(buscar_todas(si_si_simona,"Simona")))
print("Apariciones de Simona en 'Simona va': ",len(buscar_todas(simona_va,"Simona")))
print("Apariciones de Simona en 'Soy como soy': ",len(buscar_todas(soy_como_soy,"Simona")))
Explanation: Even though I only want to count the occurrences, I could write a function that returns the occurrences and then count how many elements it gave back. That way I have a function that is useful in more than one context.
Since everything we are going to use is a generic sequence operation, we can give the function a generic name[1]
End of explanation
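# A shortcut worth noting (a sketch): for plain counting, str.count gives the same totals as
# len(buscar_todas(...)) without writing a loop.
print(si_si_simona.count("Simona"))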
palabras = si_si_simona.split(" ")
for i in range(len (palabras)):
if palabras[i] == "Simona":
palabras[i] = "Ramona"
print(" ".join(palabras))
Explanation: Important points to remember from the discussion:
* Why does find return -1 and not something else?
* Should we return a list or a tuple?
* Should we use a list or a tuple internally? And what if I have to return a tuple?
Homework for the reader:
* Check whether claim [1] holds or not (hint: try it with lists)
Less important remarks, by way of conclusion:
* Notice that she gets named even in the songs that don't carry her name in the title
* So whatever this was supposed to prove is hereby considered proven
Part 2: What do we do if they shut this place down?
What happens if they have to end the soap opera over problems with the actors and start a new one? Clearly, someone who writes the protagonist's name 34 times into a song's lyrics doesn't have much imagination, so they will probably just change the characters' names and carry on with the same story. Let's write something that can replace Simona's name with that of a new protagonist in the songs.
Version using Split + Join
End of explanation
partes_cancion = simona_va.split("Simona")
cancion_nueva = "Ramona".join(partes_cancion)
print(cancion_nueva)
Explanation: Version abusing Split and Join
End of explanation
print("Ramona".join(si_si_simona.split("Simona")))
Explanation: Now in a single line
End of explanation
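# Another one-liner sketch: str.replace swaps every occurrence directly, no split/join needed.
print(si_si_simona.replace("Simona", "Ramona"))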
template = si_si_simona.replace("Simona","{0}")
print(template)
Explanation: Version using templates with format strings
Suppose that changing only the name is a fairly common task; going around slicing and copying lists, strings or whatever is not a great idea. Using format strings, this can be made much friendlier.
Step 1: Replace every occurrence of Simona with '{0}' to generate the template (we use replace because it ships with Python and I don't feel like doing it by hand)
End of explanation
print(template.format("Ramona"))
Explanation: Using the template we generated, we can now use format to obtain our multiple versions of the song
End of explanation
"Cuando vengo digo {} y cuando me voy digo {}".format("hola","chau")
"Cuando vengo digo {1} y cuando me voy digo {0}".format("hola","chau")
"Cuando vengo digo {1} y cuando me voy digo {1}".format("hola","chau")
Explanation: Format Strings
Format strings have the form blahblah {} blahblah, where each {} is later replaced by the value(s) passed as parameters to the format method. In the previous example we used {0}, because if we don't specify anything, each {} is assumed to correspond to a different parameter. In this case, the 0 refers to the first parameter.
End of explanation
def obtener_nueva_premisa( nombre_protagonista_femenino,
barrio_del_conurbano,
nombre_protagonista_masculino,
razon_de_no_ser_rica,
razon_de_ser_rico,
razon_para_ahora_ser_rica):
Devuelve una cadena con la premisa para una nueva y exitosa novela argentina, dados los datos pasados como parametro.
return Ella es {}, una mucama que vive en {}, y esta enamorada de su jefe {}.
Su amor no puede ser porque ella es {} y el es {}.
Pero todo cambia cuando ella descubre que es {} y su amor florece (porque aparentemente no existe el amor en la clase media)..format(
nombre_protagonista_femenino,
barrio_del_conurbano,
nombre_protagonista_masculino,
razon_de_no_ser_rica,
razon_de_ser_rico,
razon_para_ahora_ser_rica
)
print(obtener_nueva_premisa("Ramona","Quilmes","Esteban","la hija del carnicero","el hijo de Barack Obama","una Rockefeller"))
Explanation: Now with this, we can even write a lovely function that generates premises for completely ~~generic~~ novel soap operas.
End of explanation
def obtener_nueva_premisa( nombre_protagonista_femenino,
barrio_del_conurbano,
nombre_protagonista_masculino,
razon_de_no_ser_rica,
razon_de_ser_rico,
razon_para_ahora_ser_rica):
Devuelve una cadena con la premisa para una nueva y exitosa novela argentina, dados los datos pasados como parametro.
return fElla es {nombre_protagonista_femenino}, una mucama que vive en {barrio_del_conurbano}.
Ella esta enamorada de su jefe {nombre_protagonista_masculino}, pero su amor no puede ser porque ella es {razon_de_no_ser_rica} y el es {razon_de_ser_rico}.
Pero todo cambia cuando ella descubre que es {razon_para_ahora_ser_rica} y su amor florece (porque aparentemente no existe el amor en la clase media).
print(obtener_nueva_premisa("Leona","Tigre","Alberto","una persona de bajos ingresos","mucho muy rico","la ganadora del gordo de navidad"))
Explanation: From Python 3.6 onwards there is a much nicer way to do this, since it improves readability quite a bit (note that in the previous case I have no idea which parameter each {} refers to unless I count them, and with more than one parameter, numbering everything is rather annoying)
End of explanation
cantidad_a = 10
cantidad_b = 7
f"Si tengo {cantidad_a} manzanas, y me como {cantidad_b}, cuantas me queda? Rta:({cantidad_a - cantidad_b})"
Explanation: To achieve this effect, the string must have the letter f right before the opening quotes and expressions between the braces. And I say expressions, because this works:
End of explanation
Image(filename='./clase-09-04_images/i2.jpg')
Explanation: Part 3: After midnight, "azufre" and "queso" apparently have the same letters
End of explanation
def imprimir_soluciones_posibles(incognita):
largo = len(incognita) #Lo vamos a usar varias veces, lo "calculamos" una sola
#Recorrido en sentido horario
for i in range(largo):
for j in range(i,i+largo):
print(incognita[j % largo],end="")
print()
#Recorrido en sentido anti-horario
for i in range(largo):
for j in range( i , i-largo , -1 ):
print(incognita[j % largo],end="")
print()
imprimir_soluciones_posibles("freazu")
Explanation: More than one of us has stayed up studying late and run into these TV game shows where you have to guess which word those letters form. Even though the rules say words can only be formed by connecting letters in adjacent cells, 90% of the callers think the answer is "queso", "mozzarela" or "Estambul" for some reason.
It's fairly obvious that the answer ~~is azufre~~ lies in some clockwise or counter-clockwise rotation of "freazu". Let's write code that prints every possible rotation of a string and see whether we find the answer
End of explanation |
4,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 13 - Hydrogen functions
Start with some imports from the Symbolic Python library
Step1: Define some variables, radial, polar, azimuthal, time, and two frequencies
Step2: Look at a few of the radial equations and the spherical harmonics
Notice that instead of Ylm the name is Ynm... the arguments to the function are still the quantum numbers l and m
Step3: Write the equation for the $|nlm\rangle = |100\rangle$ state. Use the sympy method .expand(func=True) to convert to the actual expression. To create this state, we combine the Radial function and the Ylm function. Make sure to set n, l, and m to the correct values. The fourth argument to R_nl is Z which we set to 1 since we are talking about a 1-proton nucleus.
The combination of R_nl and Ynm should look like the following (replace N, L, and M with the appropriate values)
Step4: Integrating over all space
Remember spherical coordinate integrals of function $f(r,\theta,\phi)$ over all space look like
Step5: Now do the $|210\rangle$ state
Step6: Note, if you compare these to listed solutions (for example at http
Step7: Now calculate $\langle z \rangle$
Step8: No surprise, the average z position of the electron in the hydrogen atom is 0.
Now for problem 13.21
find $\langle z \rangle(t)$. Use the same integral, but add a time-dependent piece to each term in the wavefunction, add them together and multiply by the complex conjugate. | Python Code:
from sympy.physics.hydrogen import R_nl
from sympy.functions.special.spherical_harmonics import Ynm
from sympy import *
Explanation: Chapter 13 - Hydrogen functions
Start with some imports from the Symbolic Python library:
End of explanation
var("r theta phi t w1 w2")
Explanation: Define some variables, radial, polar, azimuthal, time, and two frequencies:
End of explanation
R_nl(1, 0, r, 1) # the n = 1, l = 0 radial function
Ynm(0,0,theta,phi).expand(func=True) # the l = 0, m = 0 spherical harmonic
Explanation: Look at a few of the radial equations and the spherical harmonics
Notice that instead of Ylm the name is Ynm... the arguments to the function are still the quantum numbers l and m
End of explanation
# this is the |100> state:
psi100 = R_nl(1, 0, r, 1)*Ynm(0,0,theta,phi).expand(func=True)
psi100 # check to see how it looks as an expression
Explanation: Write the equation for the $|nlm\rangle = |100\rangle$ state. Use the sympy method .expand(func=True) to convert to the actual expression. To create this state, we combine the Radial function and the Ylm function. Make sure to set n, l, and m to the correct values. The fourth argument to R_nl is Z which we set to 1 since we are talking about a 1-proton nucleus.
The combination of R_nl and Ynm should look like the following (replace N, L, and M with the appropriate values):
R_nl(N, L, r, 1)*Ynm(L, M, theta, phi).expand(func=True)
End of explanation
integrate(r**2*sin(theta) * (psi100)**2 ,(r,0,oo),(theta,0,pi),(phi,0,2*pi))
Explanation: Integrating over all space
Remember spherical coordinate integrals of function $f(r,\theta,\phi)$ over all space look like: $$\int_0^\infty\int_0^\pi\int_0^{2\pi}r^2\sin(\theta)drd\theta d\phi \,\,f(r,\theta,\phi)$$ so you alwasy need to add a factor of r**2*sin(theta) and then integrate r from 0 to infinity, theta from $0-\pi$ and phi from $0-2\pi$. As a check, you should integrate the square of the psi100 wavefunction over all space to see that it equals 1 (i.e. it is normalized)
End of explanation
psi210 = R_nl(2, 1, r, 1)*Ynm(1,0,theta,phi).expand(func=True)
psi210 # check how it looks
Explanation: Now do the $|210\rangle$ state:
End of explanation
psi211 = R_nl(2, 1, r, 1)*Ynm(1,1,theta,phi).expand(func=True)
psi211
Explanation: Note, if you compare these to listed solutions (for example at http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydwf.html#c3) you see that there are not any factors of $a_0$. This is because the R_nl function is defined in units of $a_0$. $a_0$ is the Bohr Radius: http://en.wikipedia.org/wiki/Bohr_radius
End of explanation
expect = integrate(r**2*sin(theta)* (r*cos(theta)) * (psi100*psi100),(r,0,oo),(theta,0,pi),(phi,0,2*pi))
expect
Explanation: Now calculate $\langle z \rangle$:
To calculate $\langle z \rangle$ we need to convert to spherical coordinates: $z = r\cos\theta$. The terms in the following integral are the $r^2\sin\theta$ then $z$ (in spherical coords) then the wave function squared.
End of explanation
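# A related check (an added sketch): the same machinery gives <r> for the 1s state, which
# should come out to 3/2 (in units of the Bohr radius).
expect_r = integrate(r**2*sin(theta) * r * (psi100*psi100), (r,0,oo), (theta,0,pi), (phi,0,2*pi))
expect_r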
psi = 1/sqrt(2)*(psi100*exp(1j*w1*t) + psi210*exp(1j*w2*t))
psi_conj = 1/sqrt(2)*(psi100*exp(-1j*w1*t) + psi210*exp(-1j*w2*t))
outer = (psi*psi_conj).simplify()
outer
expect2 = integrate(r**2 * sin(theta) * (r*cos(theta)) * outer,(r,0,oo),(theta,0,pi),(phi,0,2*pi))
expect2
expect2.simplify()
Explanation: No surprise, the average z position of the electron in the hydrogen atom is 0.
Now for problem 13.21
find $\langle z \rangle(t)$. Use the same integral, but add a time-dependent piece to each term in the wavefunction, add them together and multiply by the complex conjugate.
End of explanation |
4,644 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
look at my code below: | Problem:
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
import numpy as np
X, y = load_data()
clf = ExtraTreesClassifier(random_state=42)
clf = clf.fit(X, y)
model = SelectFromModel(clf, prefit=True)
column_names = X.columns[model.get_support()] |
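As a brief illustrative follow-up (assuming X is the pandas DataFrame loaded above), the kept features can then be pulled out either through the selector itself or directly by the recovered column names:
X_selected = model.transform(X)     # numpy array containing only the selected columns
X_selected_df = X[column_names]     # equivalent DataFrame view using the recovered names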
4,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast Fourier Transform snippets
Documentation
Numpy implementation
Step1: Make data
Step2: Fourier transform with Numpy
Do the fourier transform
Step3: Filter
Step4: Do the reverse transform | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
Explanation: Fast Fourier Transform snippets
Documentation
Numpy implementation: http://docs.scipy.org/doc/numpy/reference/routines.fft.html
Scipy implementation: http://docs.scipy.org/doc/scipy/reference/fftpack.html
Import directives
End of explanation
pattern = np.zeros((4, 4))
pattern[1:3,1:3] = 1
pattern
signal = np.tile(pattern, (2, 2))
fig = plt.figure(figsize=(16.0, 10.0))
ax = fig.add_subplot(111)
ax.imshow(signal, interpolation='nearest', cmap=cm.gray)
Explanation: Make data
End of explanation
transformed_signal = np.fft.fft2(signal)
#transformed_signal
fig = plt.figure(figsize=(16.0, 10.0))
ax = fig.add_subplot(111)
ax.imshow(abs(transformed_signal), interpolation='nearest', cmap=cm.gray)
Explanation: Fourier transform with Numpy
Do the fourier transform
End of explanation
max_value = np.max(abs(transformed_signal))
filtered_transformed_signal = transformed_signal * (abs(transformed_signal) > max_value*0.5)
#filtered_transformed_signal[6, 6] = 0
#filtered_transformed_signal[2, 2] = 0
#filtered_transformed_signal[2, 6] = 0
#filtered_transformed_signal[6, 2] = 0
#filtered_transformed_signal[1, 6] = 0
#filtered_transformed_signal[6, 1] = 0
#filtered_transformed_signal[1, 2] = 0
#filtered_transformed_signal[2, 1] = 0
#filtered_transformed_signal
fig = plt.figure(figsize=(16.0, 10.0))
ax = fig.add_subplot(111)
ax.imshow(abs(filtered_transformed_signal), interpolation='nearest', cmap=cm.gray)
Explanation: Filter
End of explanation
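One optional way to see what the threshold keeps (purely illustrative, using the arrays defined above): count the Fourier coefficients that survive the 50%-of-maximum cut.
kept = np.abs(filtered_transformed_signal) > 0
print(kept.sum(), "of", kept.size, "coefficients kept")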
filtered_signal = np.fft.ifft2(filtered_transformed_signal)
#filtered_signal
fig = plt.figure(figsize=(16.0, 10.0))
ax = fig.add_subplot(111)
ax.imshow(abs(filtered_signal), interpolation='nearest', cmap=cm.gray)
#shifted_filtered_signal = np.fft.ifftshift(transformed_signal)
#shifted_filtered_signal
#shifted_transformed_signal = np.fft.fftshift(transformed_signal)
#shifted_transformed_signal
Explanation: Do the reverse transform
End of explanation |
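As a small illustrative extra, building on the commented-out lines above: np.fft.fftshift moves the zero-frequency component to the centre of the spectrum, which is usually easier to inspect, and np.fft.ifftshift undoes the shift before calling the inverse transform.
shifted_spectrum = np.fft.fftshift(transformed_signal)
fig = plt.figure(figsize=(16.0, 10.0))
ax = fig.add_subplot(111)
ax.imshow(np.abs(shifted_spectrum), interpolation='nearest', cmap=cm.gray)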
4,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network Classifier
Neural networks can learn
Step1: Load Iris Data
Step2: Targets 0, 1, 2 correspond to three species
Step3: Split into Training and Testing
Step4: Let's test out a Logistic Regression Classifier
Step5: Let's Train a Neural Network Classifier
Step6: Defining the Network
we have four features and three classes
input layer must have 4 units
output must have 3
we'll add a single hidden layer (choose 16 units)
Step7: What's happening here?
optimizer
Step8: Nice! Much better performance than logistic regression!
How about training with stochastic gradient descent? | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.linear_model import LogisticRegressionCV
from sklearn import datasets
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils
Explanation: Neural Network Classifier
Neural networks can learn
End of explanation
iris = datasets.load_iris()
iris_df = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
iris_df.head()
Explanation: Load Iris Data
End of explanation
sns.pairplot(iris_df, hue="target")
X = iris_df.values[:, :4]
Y = iris_df.values[: , 4]
Explanation: Targets 0, 1, 2 correspond to three species: setosa, versicolor, and virginica.
End of explanation
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, train_size=0.5, random_state=0)
Explanation: Split into Training and Testing
End of explanation
lr = LogisticRegressionCV()
lr.fit(train_X, train_Y)
print("Accuracy = {:.2f}".format(lr.score(test_X, test_Y)))
Explanation: Let's test out a Logistic Regression Classifier
End of explanation
# Let's Encode the Output in a vector (one hot encoding)
# since this is what the network outputs
def one_hot_encode_object_array(arr):
'''One hot encode a numpy array of objects (e.g. strings)'''
uniques, ids = np.unique(arr, return_inverse=True)
return np_utils.to_categorical(ids, len(uniques))
train_y_ohe = one_hot_encode_object_array(train_Y)
test_y_ohe = one_hot_encode_object_array(test_Y)
Explanation: Let's Train a Neural Network Classifier
End of explanation
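For instance (a small illustrative call, not required for the rest of the notebook), the helper defined above maps the three numeric class labels to rows of a 3-wide indicator matrix:
one_hot_encode_object_array(np.array([0., 1., 2., 1.]))
# rows are one-hot indicators: [1,0,0], [0,1,0], [0,0,1], [0,1,0]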
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation("sigmoid"))
# define output layer
model.add(Dense(3))
# softmax is used here, because there are three classes (sigmoid only works for two classes)
model.add(Activation("softmax"))
# define loss function and optimization
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
Explanation: Defining the Network
we have four features and three classes
input layer must have 4 units
output must have 3
we'll add a single hidden layer (choose 16 units)
End of explanation
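A quick optional way to confirm the 4 -> 16 -> 3 layout described above:
model.summary()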
model.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0)
loss, accuracy = model.evaluate(test_X, test_y_ohe, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
Explanation: What's happening here?
optimizer: examples include stochastic gradient descent (stepping in the direction of steepest descent)
ADAM (the one selected above) stands for Adaptive Moment Estimation
similar to stochastic gradient descent, but it uses exponentially decaying averages of past gradients and has a different update rule
loss: classification error or mean square error are fine options
Categorical Cross Entropy is a better option for computing the gradient supposedly
End of explanation
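To make the categorical cross-entropy loss mentioned above concrete, here is a tiny hand-computed sketch (illustrative numbers only): for a one-hot target y and predicted probabilities p the loss is -sum(y * log(p)), which shrinks as the correct class receives more probability.
y_true = np.array([0., 1., 0.])
y_pred = np.array([0.1, 0.8, 0.1])
-np.sum(y_true * np.log(y_pred))   # roughly 0.22; it would be about 2.3 if the correct class only got 0.1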
stochastic_net = Sequential()
stochastic_net.add(Dense(16, input_shape=(4,)))
stochastic_net.add(Activation("sigmoid"))
stochastic_net.add(Dense(3))
stochastic_net.add(Activation("softmax"))
stochastic_net.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
stochastic_net.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0)
loss, accuracy = stochastic_net.evaluate(test_X, test_y_ohe, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
Explanation: Nice! Much better performance than logistic regression!
How about training with stochastic gradient descent?
End of explanation |
4,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thermochemistry Validation Test
Han, Kehang ([email protected])
This notebook is designed to use a big set of tricyclics for testing the performance of the new polycyclics thermo estimator. Currently the dataset contains 2903 tricyclics that passed the isomorphism check.
Set up
Step3: Validation Test
Collect data from heuristic algorithm and qm library
Step4: Create pandas dataframe for easy data validation
Step5: categorize error sources
Step6: Parity Plot
Step7: Histogram of abs(heuristic-qm) | Python Code:
from rmgpy.data.rmg import RMGDatabase
from rmgpy import settings
from rmgpy.species import Species
from rmgpy.molecule import Molecule
from rmgpy.molecule import Group
from rmgpy.rmg.main import RMG
from rmgpy.cnn_framework.predictor import Predictor
from IPython.display import display
import numpy as np
import os
import pandas as pd
from pymongo import MongoClient
import logging
logging.disable(logging.CRITICAL)
from bokeh.charts import Histogram
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
host = 'mongodb://user:[email protected]/admin'
port = 27018
client = MongoClient(host, port)
db = getattr(client, 'sdata134k')
db.collection_names()
def get_data(db, collection_name):
collection = getattr(db, collection_name)
db_cursor = collection.find()
# collect data
print('reading data...')
db_mols = []
for db_mol in db_cursor:
db_mols.append(db_mol)
print('done')
return db_mols
model = '/home/mjliu/Code/RMG-Py/examples/cnn/evaluate/test_model'
h298_predictor = Predictor()
predictor_input = os.path.join(model,
'predictor_input.py')
h298_predictor.load_input(predictor_input)
param_path = os.path.join(model,
'saved_model',
'full_train.h5')
h298_predictor.load_parameters(param_path)
# fetch testing dataset
collection_name = 'small_cyclic_table'
db_mols = get_data(db, collection_name)
print len(db_mols)
Explanation: Thermochemistry Validation Test
Han, Kehang ([email protected])
This notebook is designed to use a big set of tricyclics for testing the performance of the new polycyclics thermo estimator. Currently the dataset contains 2903 tricyclics that passed the isomorphism check.
Set up
End of explanation
filterList = [
    Group().fromAdjacencyList("""1 R u0 p0 c0 {2,[S,D,T]} {9,[S,D,T]}
2 R u0 p0 c0 {1,[S,D,T]} {3,[S,D,T]}
3 R u0 p0 c0 {2,[S,D,T]} {4,[S,D,T]}
4 R u0 p0 c0 {3,[S,D,T]} {5,[S,D,T]}
5 R u0 p0 c0 {4,[S,D,T]} {6,[S,D,T]}
6 R u0 p0 c0 {5,[S,D,T]} {7,[S,D,T]}
7 R u0 p0 c0 {6,[S,D,T]} {8,[S,D,T]}
8 R u0 p0 c0 {7,[S,D,T]} {9,[S,D,T]}
9 R u0 p0 c0 {1,[S,D,T]} {8,[S,D,T]}
    """),
    Group().fromAdjacencyList("""1 R u0 p0 c0 {2,S} {5,S}
2 R u0 p0 c0 {1,S} {3,D}
3 R u0 p0 c0 {2,D} {4,S}
4 R u0 p0 c0 {3,S} {5,S}
5 R u0 p0 c0 {1,S} {4,S} {6,S} {9,S}
6 R u0 p0 c0 {5,S} {7,S}
7 R u0 p0 c0 {6,S} {8,D}
8 R u0 p0 c0 {7,D} {9,S}
9 R u0 p0 c0 {5,S} {8,S}
    """),
]
test_size = 0
R = 1.987 # unit: cal/mol/K
validation_test_dict = {} # key: spec.label, value: (thermo_heuristic, thermo_qm)
spec_labels = []
spec_dict = {}
H298s_qm = []
Cp298s_qm = []
H298s_cnn = []
Cp298s_cnn = []
for db_mol in db_mols:
smiles_in = str(db_mol["SMILES_input"])
spec_in = Species().fromSMILES(smiles_in)
for grp in filterList:
if spec_in.molecule[0].isSubgraphIsomorphic(grp):
break
else:
spec_labels.append(smiles_in)
# qm: just free energy but not free energy of formation
G298_qm = float(db_mol["G298"])*627.51 # unit: kcal/mol
H298_qm = float(db_mol["Hf298(kcal/mol)"]) # unit: kcal/mol
Cv298_qm = float(db_mol["Cv298"]) # unit: cal/mol/K
Cp298_qm = Cv298_qm + R # unit: cal/mol/K
H298s_qm.append(H298_qm)
# cnn
H298_cnn = h298_predictor.predict(spec_in.molecule[0]) # unit: kcal/mol
H298s_cnn.append(H298_cnn)
spec_dict[smiles_in] = spec_in
Explanation: Validation Test
Collect data from heuristic algorithm and qm library
End of explanation
# create pandas dataframe
validation_test_df = pd.DataFrame(index=spec_labels)
validation_test_df['H298_cnn(kcal/mol)'] = pd.Series(H298s_cnn, index=validation_test_df.index)
validation_test_df['H298_qm(kcal/mol)'] = pd.Series(H298s_qm, index=validation_test_df.index)
heuristic_qm_diff = abs(validation_test_df['H298_cnn(kcal/mol)']-validation_test_df['H298_qm(kcal/mol)'])
validation_test_df['H298_cnn_qm_diff(kcal/mol)'] = pd.Series(heuristic_qm_diff, index=validation_test_df.index)
display(validation_test_df.head())
print "Validation test dataframe has {0} tricyclics.".format(len(spec_labels))
validation_test_df['H298_cnn_qm_diff(kcal/mol)'].describe()
Explanation: Create pandas dataframe for easy data validation
End of explanation
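Optionally (an illustrative aggregate, using only the columns created above), the same dataframe also gives overall error metrics in one line each:
mae = validation_test_df['H298_cnn_qm_diff(kcal/mol)'].mean()
rmse = np.sqrt(((validation_test_df['H298_cnn(kcal/mol)'] - validation_test_df['H298_qm(kcal/mol)'])**2).mean())
print "MAE = {0:.2f} kcal/mol, RMSE = {1:.2f} kcal/mol".format(mae, rmse)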
diff20_df = validation_test_df[(validation_test_df['H298_cnn_qm_diff(kcal/mol)'] > 15)
                               & (validation_test_df['H298_cnn_qm_diff(kcal/mol)'] <= 500)]
len(diff20_df)
print len(diff20_df)
for smiles in diff20_df.index:
print "***********cnn = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_cnn(kcal/mol)'])
print "***********qm = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_qm(kcal/mol)'])
spe = spec_dict[smiles]
display(spe)
Explanation: categorize error sources
End of explanation
p = figure(plot_width=500, plot_height=400)
# plot_df = validation_test_df[validation_test_df['H298_heuristic_qm_diff(kcal/mol)'] < 10]
plot_df = validation_test_df
# add a square renderer with a size, color, and alpha
p.circle(plot_df['H298_cnn(kcal/mol)'], plot_df['H298_qm(kcal/mol)'],
size=5, color="green", alpha=0.5)
x = np.array([-50, 200])
y = x
p.line(x=x, y=y, line_width=2, color='#636363')
p.line(x=x, y=y+10, line_width=2,line_dash="dashed", color='#bdbdbd')
p.line(x=x, y=y-10, line_width=2, line_dash="dashed", color='#bdbdbd')
p.xaxis.axis_label = "H298 CNN (kcal/mol)"
p.yaxis.axis_label = "H298 Quantum (kcal/mol)"
p.xaxis.axis_label_text_font_style = "normal"
p.yaxis.axis_label_text_font_style = "normal"
p.xaxis.axis_label_text_font_size = "16pt"
p.yaxis.axis_label_text_font_size = "16pt"
p.xaxis.major_label_text_font_size = "12pt"
p.yaxis.major_label_text_font_size = "12pt"
show(p)
len(plot_df.index)
Explanation: Parity Plot: CNN vs. QM
End of explanation
from bokeh.models import Range1d
hist = Histogram(validation_test_df,
                 values='H298_cnn_qm_diff(kcal/mol)', xlabel='H298 Prediction Error (kcal/mol)',
ylabel='Number of Testing Molecules',
bins=50,\
plot_width=500, plot_height=300)
# hist.y_range = Range1d(0, 1640)
hist.x_range = Range1d(0, 20)
show(hist)
with open('validation_test_sdata134k_2903_pyPoly_dbPoly.csv', 'w') as fout:
validation_test_df.to_csv(fout)
Explanation: Histogram of abs(cnn-qm)
End of explanation |
4,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
Step1: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Here, just creating some placeholders like normal.
Step6: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper
Step7: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note
Step9: Model Loss
Calculating the loss like before, nothing new here.
Step11: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
Step12: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. | Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
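A tiny optional check of the scale helper defined above: the extremes of an 8-bit image should map onto the tanh range the generator produces.
scale(np.array([0., 255.]))   # expected: array([-1., 1.])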
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
End of explanation
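If you want to convince yourself of the "double the width and height" behaviour described above, here is a minimal illustrative shape check using the same TF 1.x layers API (run in a scratch graph so it does not touch the model being built):
with tf.Graph().as_default():
    tmp = tf.placeholder(tf.float32, (None, 4, 4, 512))
    out = tf.layers.conv2d_transpose(tmp, 256, 5, strides=2, padding='same')
    print(out.get_shape())   # -> (?, 8, 8, 256)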
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
End of explanation
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
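As a minimal illustration of why the tf.control_dependencies wrapper is needed, batch normalization registers its moving mean/variance updates in the UPDATE_OPS collection; a small scratch-graph sketch makes this visible:
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, (None, 8))
    _ = tf.layers.batch_normalization(x, training=True)
    print(tf.get_collection(tf.GraphKeys.UPDATE_OPS))   # the assign ops for the moving mean and variance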
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
Explanation: Here is a function for displaying generated images.
End of explanation
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation |
4,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#PageRank" data-toc-modified-id="PageRank-1"><span class="toc-item-num">1 </span>PageRank</a></span><ul class="toc-item"><li><span><a href="#Taxation" data-toc-modified-id="Taxation-1.1"><span class="toc-item-num">1.1 </span>Taxation</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: PageRank
PageRank is a function that assigns a number weighting each page in the Web; the intent is that the higher the PageRank of a page, the more important the page is. We can think of the Web as a directed graph, where the pages are the nodes, and if there exists a link that connects page1 to page2 then there would be an edge connecting the two nodes.
Imagine a toy example where there are only 4 pages/nodes, ${A, B, C, D}$
Step2: Given this graph, we can build a transition matrix to depict what is the probability of landing on a given page after 1 step. If we look at an example below, the matrix has
Step3: Now suppose we start at any of the $n$ pages of the Web with equal probability. Then the initial vector $v_0$ will have $1/n$ for each page. If $M$ is the transition matrix of the Web, then after one step, the distribution
of us landing on each of the pages can be computed by a matrix vector multiplication, $v_0 M$
Step4: As we can see after 1 step, the probability of landing on the first page, page $A$, is higher than the probability of landing on the other pages. We can repeat this matrix vector multiplication multiple times and our results will eventually converge, giving us an estimated probability of landing on each page, which in turn is PageRank's estimate of how important a given page is compared to all the other pages in the Web.
Step5: This sort of convergence behavior is an example of the Markov Chain processes. It is known that the distribution of $v = Mv$ converges, provided two conditions are met
Step6: As predicted, all the PageRank is at node $C$, since once we land there, there's no way for us to leave.
The other problem, the dead end, describes pages that have no out-links; as a result, pages that reach these dead ends will not have any PageRank.
Step7: As we see, the result tells us the probability of us being anywhere goes to 0 as the number of steps increases.
To avoid the two problems mentioned above, we will modify the calculation of PageRank. At each step, we will give it a small probability of "teleporting" to a random page, rather than following an out-link from the current page. In notation form, the description above would be
Step8: The result looks much more reasonable after introducing the taxation. We can also compare the it with the pagerank function from networkx. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
%watermark -a 'Ethen' -d -t -v -p numpy,networkx,matplotlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#PageRank" data-toc-modified-id="PageRank-1"><span class="toc-item-num">1 </span>PageRank</a></span><ul class="toc-item"><li><span><a href="#Taxation" data-toc-modified-id="Taxation-1.1"><span class="toc-item-num">1.1 </span>Taxation</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
nodes = ['A', 'B', 'C', 'D']
edges = [
('A', 'B'),
('A', 'C'),
('A', 'D'),
('B', 'A'),
('B', 'D'),
('D', 'B'),
('D', 'C'),
('C', 'A')
]
graph = nx.DiGraph()
graph.add_nodes_from(nodes)
graph.add_edges_from(edges)
graph
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
# quick and dirty visualization of the graph we've defined above
nx.draw(graph, with_labels=True, node_color='skyblue', alpha=0.7)
Explanation: PageRank
PageRank is a function that assigns a number weighting each page in the Web; the intent is that the higher the PageRank of a page, the more important the page is. We can think of the Web as a directed graph, where the pages are the nodes, and if there exists a link that connects page1 to page2 then there would be an edge connecting the two nodes.
Imagine a toy example where there are only 4 pages/nodes, ${A, B, C, D}$:
$A$ has links connecting itself to each of the other three pages.
$B$ has links to $A$ and $D$.
$D$ has links to $B$ and $C$.
$C$ has links only to $A$.
End of explanation
trans_matrix = nx.to_numpy_array(graph)
trans_matrix /= trans_matrix.sum(axis=1, keepdims=True)
trans_matrix
Explanation: Given this graph, we can build a transition matrix to depict what is the probability of landing on a given page after 1 step. If we look at an example below, the matrix has:
$n$ rows and columns if there are $n$ pages
Each element in the matrix, $m_{ij}$ takes on the value of $1 / k$ if page $i$ has $k$ edges and one of them is $j$. Otherwise $m_{ij}$ is 0.
End of explanation
n_nodes = trans_matrix.shape[0]
init_vector = np.repeat(1 / n_nodes, n_nodes)
init_vector @ trans_matrix
Explanation: Now suppose we start at any of the $n$ pages of the Web with equal probability. Then the initial vector $v_0$ will have $1/n$ for each page. If $M$ is the transition matrix of the Web, then after one step, the distribution
of us landing on each of the pages can be computed by a matrix vector multiplication, $v_0 M$
End of explanation
# we can tweak the number of iterations parameter
# and see that the resulting probability remains
# the same even if we increased the number to 50
n_iters = 30
result = init_vector
for _ in range(n_iters):
result = result @ trans_matrix
result
Explanation: As we can see after 1 step, the probability of landing on the first page, page $A$, is higher than the probability of landing on the other pages. We can repeat this matrix vector multiplication multiple times and our results will eventually converge, giving us an estimated probability of landing on each page, which in turn is PageRank's estimate of how important a given page is compared to all the other pages in the Web.
End of explanation
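A quick optional convergence check: once the vector has settled, one more multiplication by the transition matrix should leave it essentially unchanged.
np.allclose(result, result @ trans_matrix)   # expected to evaluate to True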
# we replaced C's out-link, ('C', 'A'),
# from the list of edges with a link within
# the page itself ('C', 'C'), note that
# we can also avoid this problem by not
# including self-loops in the edges
nodes = ['A', 'B', 'C', 'D']
edges = [
('A', 'B'),
('A', 'C'),
('A', 'D'),
('B', 'A'),
('B', 'D'),
('D', 'B'),
('D', 'C'),
('C', 'C')
]
graph = nx.DiGraph()
graph.add_nodes_from(nodes)
graph.add_edges_from(edges)
# not showing the self-loop edge for node C
nx.draw(graph, with_labels=True, node_color='skyblue', alpha=0.7)
# notice in the transition probability matrix, the third row, node C
# contains 1 for a single entry
trans_matrix = nx.to_numpy_array(graph)
trans_matrix /= trans_matrix.sum(axis=1, keepdims=True)
trans_matrix
n_iters = 40
result = init_vector
for _ in range(n_iters):
result = result @ trans_matrix
result
Explanation: This sort of convergence behavior is an example of the Markov Chain processes. It is known that the distribution of $v = Mv$ converges, provided two conditions are met:
The graph is strongly connected; that is, it is possible to get from any
node to any other node.
There are no dead ends: nodes that have no edges out.
If we stare at the formula $v = Mv$ long enough, we can observe that our final result vector $v$ is an eigenvector of the matrix $M$ (recall an eigenvector of a matrix $M$ is a vector $v$ that satisfies $Mv = \lambda v$ for some constant eigenvalue $\lambda$; here the eigenvalue is 1).
Taxation
The vanilla PageRank that we've introduced above needs some tweaks to handle data that can appear in real world scenarios. The two problems that we need to avoid are what are called spider traps and dead ends.
A spider trap is a set of nodes whose outgoing edges all link back within the set itself. This causes the PageRank calculation to place all of the PageRank score within the spider trap.
End of explanation
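To make the eigenvector remark above concrete, here is a small illustrative verification (the original 4-node matrix is written out by hand because trans_matrix now holds the spider-trap version): the converged PageRank vector is, up to normalization, the eigenvector of the transposed transition matrix associated with eigenvalue 1.
original_matrix = np.array([[0. , 1/3, 1/3, 1/3],
                            [1/2, 0. , 0. , 1/2],
                            [1. , 0. , 0. , 0. ],
                            [0. , 1/2, 1/2, 0. ]])
eigvals, eigvecs = np.linalg.eig(original_matrix.T)
stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
stationary / stationary.sum()   # should match the converged power-iteration result computed earlier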
# we remove C's out-link, ('C', 'A'),
# from the list of edges
nodes = ['A', 'B', 'C', 'D']
edges = [
('A', 'B'),
('A', 'C'),
('A', 'D'),
('B', 'A'),
('B', 'D'),
('D', 'B'),
('D', 'C')
]
graph = nx.DiGraph()
graph.add_nodes_from(nodes)
graph.add_edges_from(edges)
nx.draw(graph, with_labels=True, node_color='skyblue', alpha=0.7)
# trick for numpy for dealing with zero division
# https://stackoverflow.com/questions/26248654/numpy-return-0-with-divide-by-zero
trans_matrix = nx.to_numpy_array(graph)
summed = trans_matrix.sum(axis=1, keepdims=True)
trans_matrix = np.divide(trans_matrix, summed,
out=np.zeros_like(trans_matrix), where=summed!=0)
# notice in the transition probability matrix, the third row, node C
# consists of all 0
trans_matrix
n_iters = 40
result = init_vector
for _ in range(n_iters):
result = result @ trans_matrix
result
Explanation: As predicted, all the PageRank is at node $C$, since once we land there, there's no way for us to leave.
The other problem, the dead end, describes pages that have no out-links; as a result, pages that reach these dead ends will not have any PageRank.
End of explanation
def build_trans_matrix(graph: nx.DiGraph, beta: float=0.9) -> np.ndarray:
n_nodes = len(graph)
trans_matrix = nx.to_numpy_array(graph)
# assign uniform probability to dangling nodes (nodes without out links)
const_vector = np.repeat(1.0 / n_nodes, n_nodes)
row_sum = trans_matrix.sum(axis=1)
dangling_nodes = np.where(row_sum == 0)[0]
if len(dangling_nodes):
for node in dangling_nodes:
trans_matrix[node] = const_vector
row_sum[node] = 1
trans_matrix /= row_sum.reshape(-1, 1)
return beta * trans_matrix + (1 - beta) * const_vector
trans_matrix = build_trans_matrix(graph)
trans_matrix
n_iters = 20
result = init_vector
for _ in range(n_iters):
result = result @ trans_matrix
result
Explanation: As we see, the result tells us the probability of us being anywhere goes to 0 as the number of steps increases.
To avoid the two problems mentioned above, we will modify the calculation of PageRank. At each step, we will give it a small probability of "teleporting" to a random page, rather than following an out-link from the current page. In notation form, the description above would be:
\begin{align}
v^\prime = \beta M v + (1 - \beta) e / n
\end{align}
where:
$\beta$ is a chosen constant, usually in the range of 0.8 to 0.9.
$e$ is a vector of all 1s with the appropriate number of elements so that the matrix addition adds up.
$n$ is the number of pages/nodes in the Web graph.
The term $\beta M v$ denotes that at this step there is a probability $\beta$ that we will follow an out-link from the current page.
Notice the term $(1 - \beta) e / n$ does not depend on $v$, thus if there are some dead ends in the graph, there will always be some fraction of opportunity to jump out of that rabbit hole.
This idea of adding $\beta$ is referred to as taxation (networkx package calls this damping factor).
End of explanation
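One small optional sanity check on the construction above: every row of the taxed matrix should still be a proper probability distribution.
np.allclose(trans_matrix.sum(axis=1), 1.0)   # expected to evaluate to True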
pagerank_score = nx.pagerank(graph, alpha=0.9)
pagerank_score
Explanation: The result looks much more reasonable after introducing the taxation. We can also compare it with the pagerank function from networkx.
End of explanation |
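For an illustrative side-by-side, the hand-rolled power iteration above and networkx's implementation can be listed next to each other; the numbers should be very close for this small graph.
[(node, score, pagerank_score[node]) for node, score in zip(nodes, result)]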
4,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Art Style Transfer
This notebook is an implementation of the algorithm described in "A Neural Algorithm of Artistic Style" (http
Step1: Load the pretrained weights into the network
Step2: Choose the Photo to be Enhanced
Step3: Executing the cell below will iterate through the images in the ./images/art-style/photos directory, so you can choose the one you want
Step4: Choose the photo with the required 'Style'
Step5: Executing the cell below will iterate through the images in the ./images/art-style/styles directory, so you can choose the one you want
Step6: This defines various measures of difference that we'll use to compare the current output image with the original sources.
Step7: Here are the GoogLeNet layers that we're going to pay attention to
Step8: Precompute layer activations for photo and artwork
This takes ~ 20 seconds
Step9: Define the overall loss / badness function
Step10: The Famous Symbolic Gradient operation
Step11: Get Ready for Optimisation by SciPy
Step12: Initialize with the original photo, since going from noise (the code that's commented out) takes many more iterations.
Step13: Optimize all those losses, and show the image
To refine the result, just keep hitting 'run' on this cell (each iteration is about 60 seconds) | Python Code:
import theano
import theano.tensor as T
import lasagne
from lasagne.utils import floatX
import numpy as np
import scipy
import matplotlib.pyplot as plt
%matplotlib inline
import os # for directory listings
import pickle
import time
AS_PATH='./images/art-style'
from model import googlenet
net = googlenet.build_model()
net_input_var = net['input'].input_var
net_output_layer = net['prob']
Explanation: Art Style Transfer
This notebook is an implementation of the algorithm described in "A Neural Algorithm of Artistic Style" (http://arxiv.org/abs/1508.06576) by Gatys, Ecker and Bethge. Additional details of their method are available at http://arxiv.org/abs/1505.07376 and http://bethgelab.org/deepneuralart/.
An image is generated which combines the content of a photograph with the "style" of a painting. This is accomplished by jointly minimizing the squared difference between feature activation maps of the photo and generated image, and the squared difference of feature correlation between painting and generated image. A total variation penalty is also applied to reduce high frequency noise.
This notebook was originally sourced from Lasagne Recipes, but has been modified to use a GoogLeNet network (pre-trained and pre-loaded), and given some features to make it easier to experiment with.
End of explanation
params = pickle.load(open('./data/googlenet/blvc_googlenet.pkl', 'rb'), encoding='iso-8859-1')
model_param_values = params['param values']
#classes = params['synset words']
lasagne.layers.set_all_param_values(net_output_layer, model_param_values)
IMAGE_W=224
print("Loaded Model parameters")
Explanation: Load the pretrained weights into the network :
End of explanation
photos = [ '%s/photos/%s' % (AS_PATH, f) for f in os.listdir('%s/photos/' % AS_PATH) if not f.startswith('.')]
photo_i=-1 # will be incremented in next cell (i.e. to start at [0])
Explanation: Choose the Photo to be Enhanced
End of explanation
photo_i += 1
photo = plt.imread(photos[photo_i % len(photos)])
photo_rawim, photo = googlenet.prep_image(photo)
plt.imshow(photo_rawim)
Explanation: Executing the cell below will iterate through the images in the ./images/art-style/photos directory, so you can choose the one you want
End of explanation
styles = [ '%s/styles/%s' % (AS_PATH, f) for f in os.listdir('%s/styles/' % AS_PATH) if not f.startswith('.')]
style_i=-1 # will be incremented in next cell (i.e. to start at [0])
Explanation: Choose the photo with the required 'Style'
End of explanation
style_i += 1
art = plt.imread(styles[style_i % len(styles)])
art_rawim, art = googlenet.prep_image(art)
plt.imshow(art_rawim)
Explanation: Executing the cell below will iterate through the images in the ./images/art-style/styles directory, so you can choose the one you want
End of explanation
def plot_layout(combined):
def no_axes():
plt.gca().xaxis.set_visible(False)
plt.gca().yaxis.set_visible(False)
plt.figure(figsize=(9,6))
plt.subplot2grid( (2,3), (0,0) )
no_axes()
plt.imshow(photo_rawim)
plt.subplot2grid( (2,3), (1,0) )
no_axes()
plt.imshow(art_rawim)
plt.subplot2grid( (2,3), (0,1), colspan=2, rowspan=2 )
no_axes()
plt.imshow(combined, interpolation='nearest')
plt.tight_layout()
def gram_matrix(x):
x = x.flatten(ndim=3)
g = T.tensordot(x, x, axes=([2], [2]))
return g
def content_loss(P, X, layer):
p = P[layer]
x = X[layer]
loss = 1./2 * ((x - p)**2).sum()
return loss
def style_loss(A, X, layer):
a = A[layer]
x = X[layer]
A = gram_matrix(a)
G = gram_matrix(x)
N = a.shape[1]
M = a.shape[2] * a.shape[3]
loss = 1./(4 * N**2 * M**2) * ((G - A)**2).sum()
return loss
def total_variation_loss(x):
return (((x[:,:,:-1,:-1] - x[:,:,1:,:-1])**2 + (x[:,:,:-1,:-1] - x[:,:,:-1,1:])**2)**1.25).sum()
Explanation: This defines various measures of difference that we'll use to compare the current output image with the original sources.
End of explanation
layers = [
# used for 'content' in photo - a mid-tier convolutional layer
'inception_4b/output',
# used for 'style' - conv layers throughout model (not same as content one)
'conv1/7x7_s2', 'conv2/3x3', 'inception_3b/output', 'inception_4d/output',
]
#layers = [
# # used for 'content' in photo - a mid-tier convolutional layer
# 'pool4/3x3_s2',
#
# # used for 'style' - conv layers throughout model (not same as content one)
# 'conv1/7x7_s2', 'conv2/3x3', 'pool3/3x3_s2', 'inception_5b/output',
#]
layers = {k: net[k] for k in layers}
Explanation: Here are the GoogLeNet layers that we're going to pay attention to :
End of explanation
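An optional illustrative peek before computing activations: the symbolic output shapes of the chosen layers can be read straight from the network definition loaded above.
{name: lasagne.layers.get_output_shape(layer) for name, layer in layers.items()}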
input_im_theano = T.tensor4()
outputs = lasagne.layers.get_output(layers.values(), input_im_theano)
photo_features = {k: theano.shared(output.eval({input_im_theano: photo}))
for k, output in zip(layers.keys(), outputs)}
art_features = {k: theano.shared(output.eval({input_im_theano: art}))
for k, output in zip(layers.keys(), outputs)}
# Get expressions for layer activations for generated image
generated_image = theano.shared(floatX(np.random.uniform(-128, 128, (1, 3, IMAGE_W, IMAGE_W))))
gen_features = lasagne.layers.get_output(layers.values(), generated_image)
gen_features = {k: v for k, v in zip(layers.keys(), gen_features)}
Explanation: Precompute layer activations for photo and artwork
This takes ~ 20 seconds
End of explanation
losses = []
# content loss
cl = 10 /1000.
losses.append(cl * content_loss(photo_features, gen_features, 'inception_4b/output'))
# style loss
sl = 20 *1000.
losses.append(sl * style_loss(art_features, gen_features, 'conv1/7x7_s2'))
losses.append(sl * style_loss(art_features, gen_features, 'conv2/3x3'))
losses.append(sl * style_loss(art_features, gen_features, 'inception_3b/output'))
losses.append(sl * style_loss(art_features, gen_features, 'inception_4d/output'))
#losses.append(sl * style_loss(art_features, gen_features, 'inception_5b/output'))
# total variation penalty
vp = 0.01 /1000. /1000.
losses.append(vp * total_variation_loss(generated_image))
total_loss = sum(losses)
Explanation: Define the overall loss / badness function
End of explanation
grad = T.grad(total_loss, generated_image)
Explanation: The Famous Symbolic Gradient operation
End of explanation
# Theano functions to evaluate loss and gradient - takes around 1 minute (!)
f_loss = theano.function([], total_loss)
f_grad = theano.function([], grad)
# Helper functions to interface with scipy.optimize
def eval_loss(x0):
x0 = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))
generated_image.set_value(x0)
return f_loss().astype('float64')
def eval_grad(x0):
x0 = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))
generated_image.set_value(x0)
return np.array(f_grad()).flatten().astype('float64')
Explanation: Get Ready for Optimisation by SciPy
End of explanation
generated_image.set_value(photo)
#generated_image.set_value(floatX(np.random.uniform(-128, 128, (1, 3, IMAGE_W, IMAGE_W))))
x0 = generated_image.get_value().astype('float64')
iteration=0
Explanation: Initialize with the original photo, since going from noise (the code that's commented out) takes many more iterations.
End of explanation
t0 = time.time()
scipy.optimize.fmin_l_bfgs_b(eval_loss, x0.flatten(), fprime=eval_grad, maxfun=40)
x0 = generated_image.get_value().astype('float64')
iteration += 1
if False:
plt.figure(figsize=(8,8))
plt.imshow(googlenet.deprocess(x0), interpolation='nearest')
plt.axis('off')
plt.text(270, 25, '# {} in {:.1f}sec'.format(iteration, (float(time.time() - t0))), fontsize=14)
else:
plot_layout(googlenet.deprocess(x0))
print('Iteration {}, ran in {:.1f}sec'.format(iteration, float(time.time() - t0)))
Explanation: Optimize all those losses, and show the image
To refine the result, just keep hitting 'run' on this cell (each iteration is about 60 seconds) :
End of explanation |
4,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
Step1: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE
Step2: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A
Step3: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
Step4: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
Step5: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
Step7: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks
Step8: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
Step10: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
Step11: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image. | Python Code:
# As usual, a bit of setup
import time, os, json
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
from cs231n.classifiers.pretrained_cnn import PretrainedCNN
from cs231n.data_utils import load_tiny_imagenet
from cs231n.image_utils import blur_image, deprocess_image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image Gradients
In this notebook we'll introduce the TinyImageNet dataset and a deep CNN that has been pretrained on this dataset. You will use this pretrained model to compute gradients with respect to images, and use these image gradients to produce class saliency maps and fooling images.
End of explanation
data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)
Explanation: Introducing TinyImageNet
The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images.
We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B; for this exercise you will work with TinyImageNet-100-A.
To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_a.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory.
NOTE: The full TinyImageNet-100-A dataset will take up about 250MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory.
End of explanation
for i, names in enumerate(data['class_names']):
print i, ' '.join('"%s"' % name for name in names)
Explanation: TinyImageNet-100-A classes
Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A:
End of explanation
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = deprocess_image(data['X_train'][train_idx], data['mean_image'])
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
Explanation: Visualize Examples
Run the following to visualize some example images from random classes in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images.
End of explanation
model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')
Explanation: Pretrained model
We have trained a deep CNN for you on the TinyImageNet-100-A dataset that we will use for image visualization. The model has 9 convolutional layers (with spatial batch normalization) and 1 fully-connected hidden layer (with batch normalization).
To get the model, run the script get_pretrained_model.sh from the cs231n/datasets directory. After doing so, run the following to load the model from disk.
End of explanation
batch_size = 100
# Test the model on training data
mask = np.random.randint(data['X_train'].shape[0], size=batch_size)
X, y = data['X_train'][mask], data['y_train'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Training accuracy: ', (y_pred == y).mean()
# Test the model on validation data
mask = np.random.randint(data['X_val'].shape[0], size=batch_size)
X, y = data['X_val'][mask], data['y_val'][mask]
y_pred = model.loss(X).argmax(axis=1)
print 'Validation accuracy: ', (y_pred == y).mean()
Explanation: Pretrained model performance
Run the following to test the performance of the pretrained model on some random training and validation set images. You should see training accuracy around 90% and validation accuracy around 60%; this indicates a bit of overfitting, but it should work for our visualization experiments.
End of explanation
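# Optional (a sketch): accuracy over the full validation set, computed in batches,
# gives a more stable estimate than a single random sample of 100 images.
def full_val_accuracy(model, data, batch_size=100):
    correct = 0
    N = data['X_val'].shape[0]
    for start in range(0, N, batch_size):
        X_batch = data['X_val'][start:start + batch_size]
        y_batch = data['y_val'][start:start + batch_size]
        correct += (model.loss(X_batch).argmax(axis=1) == y_batch).sum()
    return float(correct) / N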
def compute_saliency_maps(X, y, model):
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images, of shape (N, 3, H, W)
- y: Labels for X, of shape (N,)
- model: A PretrainedCNN that will be used to compute the saliency map.
Returns:
- saliency: An array of shape (N, H, W) giving the saliency maps for the input
images.
saliency = None
##############################################################################
# TODO: Implement this function. You should use the forward and backward #
# methods of the PretrainedCNN class, and compute gradients with respect to #
# the unnormalized class score of the ground-truth classes in y. #
##############################################################################
N, _, H, W = X.shape
out, cache = model.forward(X)
dout = np.zeros_like(out)
dout[np.arange(N),y] = 1
dX, grads = model.backward(dout,cache)
saliency = np.max(np.abs(dX),axis=1)
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [1].
As mentioned in Section 2 of the paper, you should compute the gradient of the image with respect to the unnormalized class score, not with respect to the normalized class probability.
You will need to use the forward and backward methods of the PretrainedCNN class to compute gradients with respect to the image. Open the file cs231n/classifiers/pretrained_cnn.py and read the documentation for these methods to make sure you know how they work. For example usage, you can see the loss method. Make sure to run the model in test mode when computing saliency maps.
[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
End of explanation
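# Sanity check (a sketch): saliency maps should have shape (N, H, W) and be
# non-negative, since the implementation takes an absolute value before the max.
X_check = data['X_val'][:2]
y_check = data['y_val'][:2]
sal_check = compute_saliency_maps(X_check, y_check, model)
assert sal_check.shape == (2, 64, 64)
assert (sal_check >= 0).all()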
def show_saliency_maps(mask):
mask = np.asarray(mask)
X = data['X_val'][mask]
y = data['y_val'][mask]
saliency = compute_saliency_maps(X, y, model)
for i in xrange(mask.size):
plt.subplot(2, mask.size, i + 1)
plt.imshow(deprocess_image(X[i], data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[i]][0])
plt.subplot(2, mask.size, mask.size + i + 1)
plt.title(mask[i])
plt.imshow(saliency[i])
plt.axis('off')
plt.gcf().set_size_inches(10, 4)
plt.show()
# Show some random images
mask = np.random.randint(data['X_val'].shape[0], size=5)
show_saliency_maps(mask)
# These are some cherry-picked images that should give good results
show_saliency_maps([128, 3225, 2417, 1640, 4619])
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on the validation set of TinyImageNet-100-A.
End of explanation
def make_fooling_image(X, target_y, model):
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image, of shape (1, 3, 64, 64)
- target_y: An integer in the range [0, 100)
- model: A PretrainedCNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
X_fooling = X.copy()
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. Use gradient ascent on the target class score, using #
# the model.forward method to compute scores and the model.backward method #
# to compute image gradients. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
##############################################################################
N = X.shape[0]
i = 0
max_iter = 500
while i<max_iter:
out, cache = model.forward(X_fooling)
target_out = np.argmax(out,axis=1)[0]
if target_y == target_out:
break
dout = np.zeros_like(out)
dout[np.arange(N),target_y] = 1
dX, grads = model.backward(dout,cache)
X_fooling += 1000 * dX
i += 1
if i==max_iter:
print "incomplete !"
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [2]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[2] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
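# Optional variant (a sketch): a gradient step normalized to unit L2 norm can help
# when the raw gradient scale varies a lot between images; it would replace the
# fixed 1000 * dX update used above.
def normalized_step(dX, lr=200.0):
    # Scale the gradient so that lr directly controls the pixel-space step size.
    return lr * dX / (np.linalg.norm(dX) + 1e-8)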
# Find a correctly classified validation image
while True:
i = np.random.randint(data['X_val'].shape[0])
X = data['X_val'][i:i+1]
y = data['y_val'][i:i+1]
y_pred = model.loss(X)[0].argmax()
if y_pred == y: break
target_y = 67
X_fooling = make_fooling_image(X, target_y, model)
# Make sure that X_fooling is classified as y_target
scores = model.loss(X_fooling)
assert scores[0].argmax() == target_y, 'The network is not fooled!'
# Show original image, fooling image, and difference
plt.subplot(1, 3, 1)
plt.imshow(deprocess_image(X, data['mean_image']))
plt.axis('off')
plt.title(data['class_names'][y[0]][0])
plt.subplot(1, 3, 2)
plt.imshow(deprocess_image(X_fooling, data['mean_image'], renorm=True))
plt.title(data['class_names'][target_y][0])
plt.axis('off')
plt.subplot(1, 3, 3)
plt.title('Difference')
plt.imshow(deprocess_image(X - X_fooling, data['mean_image']))
plt.axis('off')
plt.show()
Explanation: Run the following to choose a random validation set image that is correctly classified by the network, and then make a fooling image.
End of explanation |
4,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Guided Project 3
Learning Objective
Step1: Step 1. Environment setup
Environment Variables
Set up your Kubeflow Pipelines endpoint below the same way you did in guided projects 1 & 2.
Step2: You may need to restart the kernel at this point.
skaffold tool setup
Step3: Modify the PATH environment variable so that skaffold is available
Step4: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
Step5: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
Step6: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the same Chicago Taxi dataset and ML model as
the Airflow Tutorial.
Here is a brief introduction to each of the Python files
Step7: Step 4. Create the artifact store bucket
Note | Python Code:
import os
Explanation: Guided Project 3
Learning Objective:
Learn how to customize the tfx template to your own dataset
Learn how to modify the Keras model scaffold provided by tfx template
In this guided project, we will use the tfx template tool to create a TFX pipeline for the covertype project, but this time, instead of re-using an already implemented model as we did in guided project 2, we will adapt the model scaffold generated by tfx template so that it can train on the covertype dataset
Note: The covertype dataset is located at
gs://workshop-datasets/covertype/small/dataset.csv
End of explanation
ENDPOINT = '' # Enter your ENDPOINT here.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
Explanation: Step 1. Environment setup
Environment Variables
Set up your Kubeflow Pipelines endpoint below the same way you did in guided projects 1 & 2.
End of explanation
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
Explanation: You may need to restart the kernel at this point.
skaffold tool setup
End of explanation
!which skaffold
Explanation: Modify the PATH environment variable so that skaffold is available:
At this point, you should see the skaffold tool with the which command:
End of explanation
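# Optional (a sketch): print the skaffold version to confirm the binary runs.
!skaffold version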
PIPELINE_NAME = 'guided_project_3' # Your pipeline name
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
Explanation: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
End of explanation
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
Explanation: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
End of explanation
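# Optional (a sketch): list the files the template copied into the project directory.
!ls -R {PROJECT_DIR}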
!python -m models.features_test
!python -m models.keras.model_test
Explanation: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the same Chicago Taxi dataset and ML model as
the Airflow Tutorial.
Here is a brief introduction to each of the Python files:
pipeline - This directory contains the definition of the pipeline
* configs.py — defines common constants for pipeline runners
* pipeline.py — defines TFX components and a pipeline
models - This directory contains ML model definitions.
* features.py, features_test.py — defines features for the model
* preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf::Transform
models/estimator - This directory contains an Estimator based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using TF estimator
models/keras - This directory contains a Keras based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using Keras
beam_dag_runner.py, kubeflow_dag_runner.py — define runners for each orchestration engine
Running the tests:
You might notice that there are some files with _test.py in their name.
These are unit tests of the pipeline and it is recommended to add more unit
tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with -m flag.
You can usually get a module name by deleting the .py extension and replacing / with ..
For example:
End of explanation
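# The same -m pattern works for the other test modules listed above, e.g. (a sketch):
!python -m models.preprocessing_test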
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil ls gs://{GCS_BUCKET_NAME} | grep {GCS_BUCKET_NAME} || gsutil mb gs://{GCS_BUCKET_NAME}
Explanation: Step 4. Create the artifact store bucket
Note: You probably already have completed this step in guided project 1, so you may
skip it if this is the case.
Components in the TFX pipeline will generate outputs for each run as
ML Metadata Artifacts, and they need to be stored somewhere.
You can use any storage which the KFP cluster can access, and for this example we
will use Google Cloud Storage (GCS).
Let us create this bucket if you haven't created it in guided project 1.
Its name will be <YOUR_PROJECT>-kubeflowpipelines-default.
End of explanation |
4,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in [1]_. It illustrates how one can
fit a linear classifier to identify a discriminatory topography at a given time
instant and subsequently assess whether this linear model can accurately
predict all of the time samples of a second set of conditions.
References
.. [1] King & Dehaene (2014) 'Characterizing the dynamics of mental
representations
Step1: We will train the classifier on all left visual vs auditory trials
and test on all right visual vs auditory trials.
Step2: Score on the epochs where the stimulus was presented to the right.
Step3: Plot | Python Code:
# Authors: Jean-Remi King <[email protected]>
# Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
events_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels
raw.filter(1., 30., fir_design='firwin') # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
decim = 2 # decimate to make the example faster to run
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
proj=True, picks=picks, baseline=None, preload=True,
reject=dict(mag=5e-12), decim=decim)
Explanation: Decoding sensor space data with generalization across time and conditions
This example runs the analysis described in [1]_. It illustrates how one can
fit a linear classifier to identify a discriminatory topography at a given time
instant and subsequently assess whether this linear model can accurately
predict all of the time samples of a second set of conditions.
References
.. [1] King & Dehaene (2014) 'Characterizing the dynamics of mental
representations: the Temporal Generalization method', Trends In
Cognitive Sciences, 18(4), 203-210. doi: 10.1016/j.tics.2014.01.002.
End of explanation
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=1,
verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
y=epochs['Left'].events[:, 2] > 2)
Explanation: We will train the classifier on all left visual vs auditory trials
and test on all right visual vs auditory trials.
End of explanation
scores = time_gen.score(X=epochs['Right'].get_data(),
y=epochs['Right'].events[:, 2] > 2)
Explanation: Score on the epochs where the stimulus was presented to the right.
End of explanation
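# Optional (a sketch): the diagonal of the score matrix is the usual
# "train and test at the same time point" decoding curve.
import numpy as np
diag_scores = np.diag(scores)
print('peak AUC %0.2f at t = %0.3f s' % (diag_scores.max(),
                                         epochs.times[diag_scores.argmax()]))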
fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
plt.show()
Explanation: Plot
End of explanation |
4,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook gives a Nengo implementation of the Spiking Elementary Motion Detector (sEMD) from doi
Step1: Now let's re-create Figure 2
Step2: Now let's see what the performance is as we vary different parameters. To do this, I'm using pytry, a simple Python package for running experiments and gathering data. (You can install it with pip install pytry)
Step3: Now let's see how the spike count varies as we adjust dt. We run the experiment varying dt and it will save data in a directory called exp2.
Step4: And we can now plot the data.
Step5: That looks great! Now let's try varying w_fac (the weight for the facilitation input).
Step6: And let's also check varying w_trig. This should give the identical results as varying w_fac, since they are just multiplied together.
Step7: Now let's vary the time constant for the trigger synapse.
Step8: And finally, let's very the time constant for the facilitation synapse. | Python Code:
# the facilitation spikes
def stim_1_func(t):
index = int(t/0.001)
if index in [100, 1100, 2100]:
return 1000
else:
return 0
# the trigger spikes
def stim_2_func(t):
index = int(t/0.001)
if index in [90, 1500, 2150]:
return 1000
else:
return 0
# the operation we're going to do on the two different inputs to the sEMD neuron
def dendrite_func(t, x):
return x[0]*x[1]
# the trigger weight (w_e2 in the paper)
w = 2.0
model = nengo.Network()
with model:
stim1 = nengo.Node(stim_1_func)
stim2 = nengo.Node(stim_2_func)
# this will handle the non-linearity we need for the input
dendrite = nengo.Node(dendrite_func, size_in=2)
# the facilitation input gets a low-pass filter of 10ms but the trigger is unfiltered
nengo.Connection(stim1, dendrite[0], synapse=0.01)
nengo.Connection(stim2, dendrite[1], transform=w, synapse=None)
# one simple leaky integrate-and-fire neuron
ens = nengo.Ensemble(n_neurons=1, dimensions=1, gain=np.ones(1), bias=np.zeros(1))
# a low-pass filter of 5 ms for the output from the dendritic nonlinearity
nengo.Connection(dendrite, ens.neurons, synapse=0.005)
# now let's probe a bunch of data so we can plot things
pd = nengo.Probe(dendrite, synapse=0.005)
p1_n = nengo.Probe(stim1, synapse=None)
p1 = nengo.Probe(stim1, synapse=0.01)
p2 = nengo.Probe(stim2, synapse=None)
pn = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model)
with sim:
sim.run(3)
Explanation: This notebook gives a Nengo implementation of the Spiking Elementary Motion Detector (sEMD) from doi:10.1162/neco_a_01112
First, let's try to replicate Figure 2.
End of explanation
plt.figure(figsize=(14,5))
plt.subplot(3,1,1)
import nengo.utils.matplotlib
nengo.utils.matplotlib.rasterplot(sim.trange(), np.hstack([sim.data[p1_n], sim.data[p2]]))
plt.xlim(0, sim.trange()[-1])
plt.ylim(0.5,2.5)
plt.subplot(3, 1, 2)
plt.plot(sim.trange(), sim.data[p1])
plt.plot(sim.trange(), sim.data[pd])
plt.xlim(0, sim.trange()[-1])
plt.subplot(3, 1, 3)
plt.plot(sim.trange(), sim.data[pn])
plt.xlim(0, sim.trange()[-1])
plt.show()
Explanation: Now let's re-create Figure 2
End of explanation
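# Quick check (a sketch): total output spikes in the run above; sim.data[pn] holds
# spikes scaled by 1/dt, so dividing the summed trace by 1000 gives a spike count.
print('output spikes: %g' % (np.sum(sim.data[pn]) / 1000))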
import pytry
class SEMDTrial(pytry.PlotTrial):
def params(self):
self.param('trigger weight', w_trig=1.0)
self.param('facilitation weight', w_fac=1.0)
self.param('time delay between facilitation spike and trigger spike', dt=0)
self.param('facilitation synapse', syn_fac=0.01)
self.param('trigger synapse', syn_trig=0.005)
def evaluate(self, p, plt):
model = nengo.Network()
with model:
stim1 = nengo.Node(lambda t: 1000 if int(t/0.001)==100 else 0)
stim2 = nengo.Node(lambda t: 1000 if int((t-p.dt)/0.001)==100 else 0)
dendrite = nengo.Node(lambda t, x: x[0]*x[1], size_in=2)
nengo.Connection(stim1, dendrite[0], transform=p.w_fac, synapse=p.syn_fac)
nengo.Connection(stim2, dendrite[1], transform=p.w_trig, synapse=None)
ens = nengo.Ensemble(n_neurons=1, dimensions=1, gain=np.ones(1), bias=np.zeros(1))
nengo.Connection(dendrite, ens.neurons, synapse=p.syn_trig)
pn = nengo.Probe(ens.neurons)
sim = nengo.Simulator(model, progress_bar=False)
with sim:
sim.run(0.1+p.dt+0.2)
if plt:
plt.plot(sim.trange(), sim.data[pn]) # neuron output
plt.axvline(0.1, color='g') # facilitation spike
plt.axvline(0.1+p.dt, color='b') # trigger spike
spike_count = np.sum(sim.data[pn])/1000
return dict(spike_count=spike_count)
SEMDTrial().run(plt=True, dt=0.02)
Explanation: Now let's see what the performance is as we vary different parameters. To do this, I'm using pytry, a simple Python package for running experiments and gathering data. (You can install it with pip install pytry)
End of explanation
dts = (np.arange(99)+1)*0.001
for dt in dts:
SEMDTrial().run(verbose=False, dt=dt, data_dir='exp2')
Explanation: Now let's see how the spike count varies as we adjust dt. We run the experiment varying dt and it will save data in a directory called exp2.
End of explanation
df = pandas.DataFrame(pytry.read('exp2'))
seaborn.lineplot('dt', 'spike_count', data=df)
Explanation: And we can now plot the data.
End of explanation
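# Optional (a sketch): report which delay produced the most output spikes.
best = df.loc[df['spike_count'].idxmax()]
print('peak spike count %g at dt = %g s' % (best['spike_count'], best['dt']))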
dts = (np.arange(0,100,5)+1)*0.001
ws = [0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
for dt in dts:
for w_fac in ws:
SEMDTrial().run(verbose=False, dt=dt, w_fac=w_fac, data_dir='exp3')
df = pandas.DataFrame(pytry.read('exp3'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='w_fac', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
Explanation: That looks great! Now let's try varying w_fac (the weight for the facilitation input).
End of explanation
dts = (np.arange(0,100,5)+1)*0.001
ws = [0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
for dt in dts:
for w_trig in ws:
SEMDTrial().run(verbose=False, dt=dt, w_trig=w_trig, data_dir='exp4')
df = pandas.DataFrame(pytry.read('exp4'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='w_trig', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
Explanation: And let's also check varying w_trig. This should give the identical results as varying w_fac, since they are just multiplied together.
End of explanation
dts = (np.arange(0,100,5)+1)*0.001
syns = [0.001, 0.002, 0.005, 0.1, 0.2]
syns = [0.01, 0.02, 0.05]
for dt in dts:
for syn_trig in syns:
SEMDTrial().run(verbose=False, dt=dt, syn_trig=syn_trig, data_dir='exp5')
df = pandas.DataFrame(pytry.read('exp5'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='syn_trig', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
Explanation: Now let's vary the time constant for the trigger synapse.
End of explanation
dts = (np.arange(0,100,5)+1)*0.001
syns = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2]
for dt in dts:
for syn_fac in syns:
SEMDTrial().run(verbose=False, dt=dt, syn_fac=syn_fac, data_dir='exp6')
df = pandas.DataFrame(pytry.read('exp6'))
plt.figure(figsize=(14,7))
seaborn.pointplot('dt', 'spike_count', hue='syn_fac', data=df)
plt.xticks(range(len(dts)), ['%g'%x for x in dts], rotation='vertical')
plt.show()
Explanation: And finally, let's vary the time constant for the facilitation synapse.
End of explanation |
4,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy를 활용한 선형대수 입문
선형대수(linear algebra)는 데이터 분석에 필요한 각종 계산을 위한 기본적인 학문이다.
데이터 분석을 하기 위해서는 실제로 수많은 숫자의 계산이 필요하다. 하나의 데이터 레코드(record)가 수십개에서 수천개의 숫자로 이루어져 있을 수도 있고 수십개에서 수백만개의 이러한 데이터 레코드를 조합하여 계산하는 과정이 필요할 수 있다.
선형대수를 사용하는 첫번째 장점은 이러한 데이터 계산의 과정을 아주 단순한 수식으로 서술할 수 있다는 점이다. 그러기 위해서는 선형대수에서 사용되는 여러가지 기호와 개념을 익혀야 한다.
데이터의 유형
선형대수에서 다루는 데이터는 크게 스칼라(scalar), 벡터(vector), 행렬(matrix), 이 세가지 이다.
간단하게 말하자면 스칼라는 숫자 하나로 이루어진 데이터이고 벡터는 여러개의 숫자로 이루어진 데이터 레코드이며 행렬은 벡터, 즉 데이터 레코드가 여러개 있는 데이터 집합이라고 볼 수 있다.
스칼라
스칼라는 하나의 숫자를 말한다. 예를 들어 어떤 붓꽃(iris) 샘플의 꽃잎의 길이를 측정하는 하나의 숫자가 나올 것이다. 이 숫자는 스칼라이다.
스칼라는 보통 $x$와 같이 알파벳 소문자로 표기하며 실수(real number)인 숫자 중의 하나이므로 실수 집합의 원소라는 의미에서 다음과 같이 표기한다.
$$ x \in \mathbb{R} $$
벡터
벡터는 복수개의 숫자가 특정 순서대로 모여 있는 것을 말한다. 사실 대부분의 데이터 분석에서 하나의 데이터 레코드는 여러개의 숫자로 이루어진 경우가 많다. 예를 들어 붓꽃의 종을 알아내기 위해 크기를 측정하게 되면 꽃잎의 길이 $x_1$ 뿐 아니라 꽃잎의 폭 $x_2$ , 꽃받침의 길이 $x_3$ , 꽃받침의 폭 $x_4$ 이라는 4개의 숫자를 측정할 수 있다. 이렇게 측정된 4개의 숫자를 하나의 쌍(tuple) $x$ 로 생각하여 다음과 같이 표기한다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
x_{3} \
x_{4} \
\end{bmatrix}
$$
여기에서 주의할 점은 벡터는 복수의 행(row)을 가지고 하나의 열(column)을 가지는 형태로 위에서 아래로 표기한다는 점이다.
이 때 $x$는 4개의 실수(real number)로 이루어져 있기 때문에 4차원 벡터라고 하고 다음과 같이 4차원임을 표기한다.
$$ x \in \mathbb{R}^4 $$
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 벡터를 feature vector라고 하기도 한다.
만약 4개가 아니라 $N$개의 숫자가 모여 있는 경우의 표기는 다음과 같다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
,\;\;\;\;
x \in \mathbb{R}^N
$$
NumPy를 사용하면 벡터는 1차원 ndarray 객체 혹은 열의 갯수가 1개인 2차원 ndarray 객체로 표현한다. 벡터를 처리하는 프로그램에 따라서는 두 가지 중 특정한 형태만 원하는 경우도 있을 수 있기 때문에 주의해야 한다. 예를 들어 파이썬 scikit-learn 패키지에서는 벡터를 요구하는 경우에 열의 갯수가 1개인 2차원 ndarray 객체를 선호한다.
Step1: 행렬
행렬은 복수의 차원을 가지는 데이터 레코드가 다시 여러개 있는 경우의 데이터를 합쳐서 표기한 것이다. 예를 들어 앞서 말한 붓꽃의 예에서 6개의 붓꽃에 대해 크기를 측정하였다면 4차원 붓꽃 데이터가 6개가 있다. 즉, $4 \times 6 = 24$개의 실수 숫자가 있는 것이다. 이 숫자 집합을
행렬로 나타내면 다음과 같다. 행렬은 보통 $X$와 같이 알파벳 대문자로 표기한다.
$$X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
$$
행렬 안에서 원소의 위치를 표기할 때는 $x_{2, 3}$ 처럼 두 개의 숫자 쌍을 아랫 첨자(sub-script)로 붙여서 표기한다. 첫번째 숫자가 행(row)을 뜻하고 두번째 숫자가 열(column)을 뜻한다. 예를 들어 $x_{2, 3}$ 는 두번째 행(위에서 아래로 두번째), 세번째 열(왼쪽에서 오른쪽으로 세번째)의 숫자를 뜻한다.
붓꽃의 예에서는 하나의 데이터 레코드가 4차원이였다는 점을 기억하자. 따라서 이 행렬 표기에서는 하나의 행(row)이 붓꽃 하나에 대한 데이터 레코드가 된다.
하나의 데이터 레코드를 나타낼 때는 하나의 열(column)로 나타내고 복수의 데이터 레코드 집합을 나타낼 때는 하나의 데이터 레코드가 하나의 행(row)으로 표기하는 것은 일관성이 없어 보지만 데이터 분석에서 쓰는 일반적인 관례이므로 익히고 있어야 한다.
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 행렬를 feature matrix라고 하기도 한다.
이 행렬의 크기를 수식으로 표시할 때는 행의 크기 곱하기 열의 크기의 형태로 다음과 같이 나타낸다.
$$ X \in \mathbb{R}^{6\times 4} $$
벡터도 열의 수가 1인 특수한 행렬이기 때문에 벡터의 크기를 표시할 때 행렬 표기에 맞추어 다음과 같이 쓰기도 한다.
$$ x \in \mathbb{R}^{4\times 1} $$
NumPy를 이용하여 행렬을 표기할 때는 2차원 ndarray 객체를 사용한다.
Step2: 특수한 행렬
몇가지 특수한 행렬에 대해서는 별도의 이름이 붙어있다.
행렬에서 행의 숫자와 열의 숫자가 같은 위치를 대각(diagonal)이라고 하고 대각 위치에 있지 않은 것들은 비대각(off-diagonal)이라고 한다. 모든 비대각 요소가 0인 행렬을 대각 행렬(diagonal matrix)이라고 한다.
$$ D \in \mathbb{R}^{N \times N} $$
$$
D =
\begin{bmatrix}
D_{1} & 0 & \cdots & 0 \
0 & D_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & D_{N} \
\end{bmatrix}
$$
NumPy로 대각행렬을 생성하려면 diag 명령을 사용한다.
Step3: 대각 행렬 중에서도 모든 대각 성분의 값이 1인 대각 행렬을 단위 행렬(identity matrix)이라고 한다. 단위 행렬은 보통 알파벳 대문자 $I$로 표기하는 경우가 많다.
$$ I \in \mathbb{R}^{N \times N} $$
$$
I =
\begin{bmatrix}
1 & 0 & \cdots & 0 \
0 & 1 & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & 1 \
\end{bmatrix}
$$
NumPy로 단위행렬을 생성하려면 identity 혹은 eye 명령을 사용한다.
Step4: 연산
행렬의 연산을 이용하면 대량의 데이터에 대한 계산을 간단한 수식으로 나타낼 수 있다. 물론 행렬에 대한 연산은 보통의 숫자 즉, 스칼라에 대한 사칙 연산과는 다른 규칙을 적용하므로 이 규칙을 외워야 한다.
전치 연산
전치(transpose) 연산은 행렬의 행과 열을 바꾸는 연산을 말한다. 벡터 기호에 $T$라는 윗첨자(super-script)를 붙어서 표기한다. 예를 들어 앞에서 보인 $4\times 6$ 차원의 행렬을 전치 연산하면 $6\times 4$ 차원의 행렬이 된다.
$$
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
\;\;\;
\rightarrow
\;\;\;
X^T =
\begin{bmatrix}
x_{1, 1} & x_{2, 1} & x_{3, 1} & x_{4, 1} & x_{5, 1} & x_{6, 1} \
x_{1, 2} & x_{2, 2} & x_{3, 2} & x_{4, 2} & x_{5, 2} & x_{6, 2} \
x_{1, 3} & x_{2, 3} & x_{3, 3} & x_{4, 3} & x_{5, 3} & x_{6, 3} \
x_{1, 4} & x_{2, 4} & x_{3, 4} & x_{4, 4} & x_{5, 4} & x_{6, 4} \
\end{bmatrix}
$$
벡터도 열의 수가 1인 특수한 행렬이므로 벡터에 대해서도 전치 연산을 적용할 수 있다. 이 때 $x$와 같이 열의 수가 1인 행렬을 열 벡터(column vector)라고 하고 $x^T$와 같이 행의 수가 1인 행렬을 행 벡터(row vector)라고 한다.
$$
x =
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
\; \rightarrow \;
x^T =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
$$
NumPy에서는 ndarray 객체의 T라는 속성을 이용하여 전치 행렬을 구한다. 이 때 T는 메서드(method)가 아닌 속성(attribute)에 유의한다.
Step5: 행렬의 행 표기법과 열 표기법
전치 연산과 행 벡터, 열 벡터를 이용하면 행렬을 다음과 같이 복수의 열 벡터들 $c_i$, 또는 복수의 행 벡터들 $r_j^T$ 을 합친(concatenated) 형태로 표기할 수도 있다.
$$
X
=
\begin{bmatrix}
c_1 & c_2 & \cdots & c_M
\end{bmatrix}
=
\begin{bmatrix}
r_1^T \ r_2^T \ \vdots \ r_N^T
\end{bmatrix}
$$
$$ X \in \mathbb{R}^{N\times M} ,\;\;\; c_i \in R^{N \times 1} \; (i=1,\cdots,M) ,\;\;\; r_j \in R^{M \times 1} \; (j=1,\cdots,N) $$
행렬 덧셈과 뺄셈
행렬의 덧셈과 뺄셈은 같은 크기를 가진 두개의 행렬에 대해 정의되며 각각의 원소에 대해 덧셈과 뺄셈을 하면 된다. 이러한 연산을 element-wise 연산이라고 한다.
Step6: 벡터 곱셈
두 행렬의 곱셈을 정의하기 전에 우선 두 벡터의 곱셈을 알아보자. 벡터의 곱셈에는 내적(inner product)과 외적(outer product) 두 가지가 있다 여기에서는 내적에 대해서만 설명한다. 내적은 dot product라고 하기도 한다.
두 벡터의 곱(내적)이 정의되려면 우선 두 벡터의 길이가 같으며 앞의 벡터가 행 벡터이고 뒤의 벡터가 열 벡터이어야 한다. 이때 두 벡터의 곱은 다음과 같이 각 원소들을 element-by-element로 곱한 다음에 그 값들을 다시 모두 합해서 하나의 스칼라값으로 계산된다.
$$
x^T y =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{N} \
\end{bmatrix}
= x_1 y_1 + \cdots + x_N y_N
= \sum_{i=1}^N x_i y_i
$$
$$ x \in \mathbb{R}^{N \times 1} , \; y \in \mathbb{R}^{N \times 1} \; \rightarrow \; x^T y \in \mathbb{R} $$
벡터의 곱은 왜 이렇게 복잡하게 정의된 것일까. 벡터의 곱을 사용한 예를 몇가지 살펴보자
가중합
가중합(weighted sum)이란 복수의 데이터를 단순히 합하는 것이 아니라 각각의 수에 중요도에 따른 어떤 가중치를 곱한 후 이 값을 합하는 것을 말한다. 만약 데이터가 $x_1, \cdots, x_N$ 이고 가중치가 $w_1, \cdots, w_N$ 이면 가중합은 다음과 같다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i $$
이를 벡터의 곱으로 나타내면 다음과 같이 $w^Tx$ 또는 $x^Tw$ 라는 간단한 수식으로 표시할 수 있다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i =
\begin{bmatrix}
w_{1} && w_{2} && \cdots && w_{N}
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_N
\end{bmatrix}
= w^Tx =
\begin{bmatrix}
x_{1} && x_{2} && \cdots && x_{N}
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= x^Tw $$
NumPy에서 벡터 혹은 이후에 설명할 행렬의 곱은 dot이라는 명령으로 계산한다. 2차원 행렬로 표시한 벡터의 경우에는 결과값이 스칼라가 아닌 2차원 행렬값임에 유의한다.
Step7: 제곱합
데이터 분석시에 분산(variance), 표준 편차(standard deviation)을 구하는 경우에는 각각의 데이터를 제곱한 값을 모두 더하는 계산 즉 제곱합(sum of squares)을 계산하게 된다. 이 경우에도 벡터의 곱을 사용하여 $x^Tx$로 쓸 수 있다.
$$
x^T x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} x_i^2
$$
행렬의 곱셈
벡터의 곱셈을 정의한 후에는 다음과 같이 행렬의 곱셈을 정의할 수 있다.
$A$ 행렬과 $B$ 행렬을 곱한 결과인 $C$ 행렬의 $i$번째 행, $j$번째 열의 원소의 값은 $A$ 행렬의 $i$번째 행 벡터 $a_i^T$와 $B$ 행렬의 $j$번째 열 벡터 $b_j$의 곱으로 계산된 숫자이다.
$$ C = AB \; \rightarrow \; [c_{ij}] = a_i^T b_j $$
이 정의가 성립하려면 앞의 행렬 $A$의 열의 수가 뒤의 행렬 $B$의 행의 수와 일치해야만 한다.
$$ A \in \mathbb{R}^{N \times L} , \; B \in \mathbb{R}^{L \times M} \; \rightarrow \; AB \in \mathbb{R}^{N \times M} $$
Step8: 그럼 이러한 행렬의 곱셈은 데이터 분석에서 어떤 경우에 사용될까. 몇가지 예를 살펴본다.
가중 벡터합
어떤 데이터 레코드 즉, 벡터의 가중합은 $w^Tx$ 또는 $x^Tw$로 표시할 수 있다는 것을 배웠다. 그런데 만약 이렇게 $w$ 가중치를 사용한 가중합을 하나의 벡터 $x$가 아니라 여러개의 벡터 $x_1, \cdots, x_M$개에 대해서 모두 계산해야 한다면 이 계산을 다음과 같이 $Xw$라는 기호로 간단하게 표시할 수 있다.
$$
\begin{bmatrix}
w_1 x_{1,1} + w_2 x_{1,2} + \cdots + w_N x_{1,N} \
w_1 x_{2,1} + w_2 x_{2,2} + \cdots + w_N x_{2,N} \
\vdots \
w_1 x_{M,1} + w_2 x_{M,2} + \cdots + w_N x_{M,N} \
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,N} \
x_{2,1} & x_{2,2} & \cdots & x_{2,N} \
\vdots & \vdots & \vdots & \vdots \
x_{M,1} & x_{M,2} & \cdots & x_{M,N} \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
=
\begin{bmatrix}
x_1^T \
x_2^T \
\vdots \
x_M^T \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= X w
$$
잔차
선형 회귀 분석(linear regression)을 한 결과는 가중치 벡터 $w$라는 형태로 나타나고 예측치는 이 가중치 벡터를 사용한 독립 변수 데이터 레코드 즉, 벡터 $x_i$의 가중합 $w^Tx_i$이 된다. 이 예측치와 실제 값 $y_i$의 차이를 오차(error) 혹은 잔차(residual) $e_i$ 이라고 한다. 이러한 잔차 값을 모든 독립 변수 벡터에 대해 구하면 잔차 벡터 $e$가 된다.
$$ e_i = y_i - w^Tx_i $$
잔차 벡터는 다음과 같이 $y-Xw$로 간단하게 표기할 수 있다.
$$
e =
\begin{bmatrix}
e_{1} \
e_{2} \
\vdots \
e_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
w^T x_{1} \
w^T x_{2} \
\vdots \
w^T x_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1}w \
x^T_{2}w \
\vdots \
x^T_{M}w \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1} \
x^T_{2} \
\vdots \
x^T_{M} \
\end{bmatrix}
w
= y - Xw
$$
$$
e = y - Xw
$$
Step9: 잔차 제곱합
잔차의 크기는 잔차 벡터의 각 원소를 제곱한 후 더한 잔차 제곱합(RSS
Step10: 이차 형식
벡터의 이차 형식(Quadratic Form) 이란 어떤 벡터의 각 원소에 대해 가능한 모든 쌍의 조합 $(x_i, x_j)$을 구한 다음 그 곱셈$x_ix_j$을 더한 것을 말한다. 이 때 각 쌍에 대해 서로 다른 가중치 $a_{i,j}$를 적용하여 $a_{i,j}x_ix_j$의 합을 구한다면 다음과 같이 $x^TAx$라는 간단한 식으로 쓸 수 있다.
$$
x^T A x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \
\vdots & \vdots & \ddots & \vdots \
a_{N,1} & a_{N,2} & \cdots & a_{N,N} \
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} x_i x_j
$$
예를 들어 $ x = [1, 2, 3]^T $ 이고 A가 다음과 같다면
$$ A =
\begin{pmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9 \
\end{pmatrix}
$$
NumPy 에서 벡터의 이차 형식은 다음과 같이 계산한다. | Python Code:
x = np.array([1, 2, 3, 4])
x
x = np.array([[1], [2], [3], [4]])
x
Explanation: NumPy를 활용한 선형대수 입문
선형대수(linear algebra)는 데이터 분석에 필요한 각종 계산을 위한 기본적인 학문이다.
데이터 분석을 하기 위해서는 실제로 수많은 숫자의 계산이 필요하다. 하나의 데이터 레코드(record)가 수십개에서 수천개의 숫자로 이루어져 있을 수도 있고 수십개에서 수백만개의 이러한 데이터 레코드를 조합하여 계산하는 과정이 필요할 수 있다.
선형대수를 사용하는 첫번째 장점은 이러한 데이터 계산의 과정을 아주 단순한 수식으로 서술할 수 있다는 점이다. 그러기 위해서는 선형대수에서 사용되는 여러가지 기호와 개념을 익혀야 한다.
데이터의 유형
선형대수에서 다루는 데이터는 크게 스칼라(scalar), 벡터(vector), 행렬(matrix), 이 세가지 이다.
간단하게 말하자면 스칼라는 숫자 하나로 이루어진 데이터이고 벡터는 여러개의 숫자로 이루어진 데이터 레코드이며 행렬은 벡터, 즉 데이터 레코드가 여러개 있는 데이터 집합이라고 볼 수 있다.
스칼라
스칼라는 하나의 숫자를 말한다. 예를 들어 어떤 붓꽃(iris) 샘플의 꽃잎의 길이를 측정하는 하나의 숫자가 나올 것이다. 이 숫자는 스칼라이다.
스칼라는 보통 $x$와 같이 알파벳 소문자로 표기하며 실수(real number)인 숫자 중의 하나이므로 실수 집합의 원소라는 의미에서 다음과 같이 표기한다.
$$ x \in \mathbb{R} $$
벡터
벡터는 복수개의 숫자가 특정 순서대로 모여 있는 것을 말한다. 사실 대부분의 데이터 분석에서 하나의 데이터 레코드는 여러개의 숫자로 이루어진 경우가 많다. 예를 들어 붓꽃의 종을 알아내기 위해 크기를 측정하게 되면 꽃잎의 길이 $x_1$ 뿐 아니라 꽃잎의 폭 $x_2$ , 꽃받침의 길이 $x_3$ , 꽃받침의 폭 $x_4$ 이라는 4개의 숫자를 측정할 수 있다. 이렇게 측정된 4개의 숫자를 하나의 쌍(tuple) $x$ 로 생각하여 다음과 같이 표기한다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
x_{3} \
x_{4} \
\end{bmatrix}
$$
여기에서 주의할 점은 벡터는 복수의 행(row)을 가지고 하나의 열(column)을 가지는 형태로 위에서 아래로 표기한다는 점이다.
이 때 $x$는 4개의 실수(real number)로 이루어져 있기 때문에 4차원 벡터라고 하고 다음과 같이 4차원임을 표기한다.
$$ x \in \mathbb{R}^4 $$
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 벡터를 feature vector라고 하기도 한다.
만약 4개가 아니라 $N$개의 숫자가 모여 있는 경우의 표기는 다음과 같다.
$$
x = \begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
,\;\;\;\;
x \in \mathbb{R}^N
$$
NumPy를 사용하면 벡터는 1차원 ndarray 객체 혹은 열의 갯수가 1개인 2차원 ndarray 객체로 표현한다. 벡터를 처리하는 프로그램에 따라서는 두 가지 중 특정한 형태만 원하는 경우도 있을 수 있기 때문에 주의해야 한다. 예를 들어 파이썬 scikit-learn 패키지에서는 벡터를 요구하는 경우에 열의 갯수가 1개인 2차원 ndarray 객체를 선호한다.
End of explanation
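# A small check (a sketch): reshape converts between the 1-D form and the (N, 1)
# column-vector form that some libraries (e.g. scikit-learn) prefer.
x1 = np.array([1, 2, 3, 4])
x2 = x1.reshape(-1, 1)
print(x1.shape, x2.shape)   # (4,) (4, 1)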
X = np.array([[11,12,13],[21,22,23]])
X
Explanation: 행렬
행렬은 복수의 차원을 가지는 데이터 레코드가 다시 여러개 있는 경우의 데이터를 합쳐서 표기한 것이다. 예를 들어 앞서 말한 붓꽃의 예에서 6개의 붓꽃에 대해 크기를 측정하였다면 4차원 붓꽃 데이터가 6개가 있다. 즉, $4 \times 6 = 24$개의 실수 숫자가 있는 것이다. 이 숫자 집합을
행렬로 나타내면 다음과 같다. 행렬은 보통 $X$와 같이 알파벳 대문자로 표기한다.
$$X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
$$
행렬 안에서 원소의 위치를 표기할 때는 $x_{2, 3}$ 처럼 두 개의 숫자 쌍을 아랫 첨자(sub-script)로 붙여서 표기한다. 첫번째 숫자가 행(row)을 뜻하고 두번째 숫자가 열(column)을 뜻한다. 예를 들어 $x_{2, 3}$ 는 두번째 행(위에서 아래로 두번째), 세번째 열(왼쪽에서 오른쪽으로 세번째)의 숫자를 뜻한다.
붓꽃의 예에서는 하나의 데이터 레코드가 4차원이였다는 점을 기억하자. 따라서 이 행렬 표기에서는 하나의 행(row)이 붓꽃 하나에 대한 데이터 레코드가 된다.
하나의 데이터 레코드를 나타낼 때는 하나의 열(column)로 나타내고 복수의 데이터 레코드 집합을 나타낼 때는 하나의 데이터 레코드가 하나의 행(row)으로 표기하는 것은 일관성이 없어 보지만 데이터 분석에서 쓰는 일반적인 관례이므로 익히고 있어야 한다.
만약 이 데이터를 이용하여 붓꽃의 종을 결정하는 예측 문제를 풀고 있다면 이 행렬를 feature matrix라고 하기도 한다.
이 행렬의 크기를 수식으로 표시할 때는 행의 크기 곱하기 열의 크기의 형태로 다음과 같이 나타낸다.
$$ X \in \mathbb{R}^{6\times 4} $$
벡터도 열의 수가 1인 특수한 행렬이기 때문에 벡터의 크기를 표시할 때 행렬 표기에 맞추어 다음과 같이 쓰기도 한다.
$$ x \in \mathbb{R}^{4\times 1} $$
NumPy를 이용하여 행렬을 표기할 때는 2차원 ndarray 객체를 사용한다.
End of explanation
np.diag([1, 2, 3])
Explanation: 특수한 행렬
몇가지 특수한 행렬에 대해서는 별도의 이름이 붙어있다.
행렬에서 행의 숫자와 열의 숫자가 같은 위치를 대각(diagonal)이라고 하고 대각 위치에 있지 않은 것들은 비대각(off-diagonal)이라고 한다. 모든 비대각 요소가 0인 행렬을 대각 행렬(diagonal matrix)이라고 한다.
$$ D \in \mathbb{R}^{N \times N} $$
$$
D =
\begin{bmatrix}
D_{1} & 0 & \cdots & 0 \
0 & D_{2} & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & D_{N} \
\end{bmatrix}
$$
NumPy로 대각행렬을 생성하려면 diag 명령을 사용한다.
End of explanation
np.identity(3)
np.eye(4)
Explanation: 대각 행렬 중에서도 모든 대각 성분의 값이 1인 대각 행렬을 단위 행렬(identity matrix)이라고 한다. 단위 행렬은 보통 알파벳 대문자 $I$로 표기하는 경우가 많다.
$$ I \in \mathbb{R}^{N \times N} $$
$$
I =
\begin{bmatrix}
1 & 0 & \cdots & 0 \
0 & 1 & \cdots & 0 \
\vdots & \vdots & \ddots & \vdots \
0 & 0 & \cdots & 1 \
\end{bmatrix}
$$
NumPy로 단위행렬을 생성하려면 identity 혹은 eye 명령을 사용한다.
End of explanation
X = np.array([[11,12,13],[21,22,23]])
X
X.T
Explanation: 연산
행렬의 연산을 이용하면 대량의 데이터에 대한 계산을 간단한 수식으로 나타낼 수 있다. 물론 행렬에 대한 연산은 보통의 숫자 즉, 스칼라에 대한 사칙 연산과는 다른 규칙을 적용하므로 이 규칙을 외워야 한다.
전치 연산
전치(transpose) 연산은 행렬의 행과 열을 바꾸는 연산을 말한다. 벡터 기호에 $T$라는 윗첨자(super-script)를 붙어서 표기한다. 예를 들어 앞에서 보인 $4\times 6$ 차원의 행렬을 전치 연산하면 $6\times 4$ 차원의 행렬이 된다.
$$
X =
\begin{bmatrix}
x_{1, 1} & x_{1, 2} & x_{1, 3} & x_{1, 4} \
x_{2, 1} & x_{2, 2} & x_{2, 3} & x_{2, 4} \
x_{3, 1} & x_{3, 2} & x_{3, 3} & x_{3, 4} \
x_{4, 1} & x_{4, 2} & x_{4, 3} & x_{4, 4} \
x_{5, 1} & x_{5, 2} & x_{5, 3} & x_{5, 4} \
x_{6, 1} & x_{6, 2} & x_{6, 3} & x_{6, 4} \
\end{bmatrix}
\;\;\;
\rightarrow
\;\;\;
X^T =
\begin{bmatrix}
x_{1, 1} & x_{2, 1} & x_{3, 1} & x_{4, 1} & x_{5, 1} & x_{6, 1} \
x_{1, 2} & x_{2, 2} & x_{3, 2} & x_{4, 2} & x_{5, 2} & x_{6, 2} \
x_{1, 3} & x_{2, 3} & x_{3, 3} & x_{4, 3} & x_{5, 3} & x_{6, 3} \
x_{1, 4} & x_{2, 4} & x_{3, 4} & x_{4, 4} & x_{5, 4} & x_{6, 4} \
\end{bmatrix}
$$
벡터도 열의 수가 1인 특수한 행렬이므로 벡터에 대해서도 전치 연산을 적용할 수 있다. 이 때 $x$와 같이 열의 수가 1인 행렬을 열 벡터(column vector)라고 하고 $x^T$와 같이 행의 수가 1인 행렬을 행 벡터(row vector)라고 한다.
$$
x =
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix}
\; \rightarrow \;
x^T =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
$$
NumPy에서는 ndarray 객체의 T라는 속성을 이용하여 전치 행렬을 구한다. 이 때 T는 메서드(method)가 아닌 속성(attribute)에 유의한다.
End of explanation
x = np.array([10, 11, 12, 13, 14])
x
y = np.array([0, 1, 2, 3, 4])
y
x + y
x - y
Explanation: 행렬의 행 표기법과 열 표기법
전치 연산과 행 벡터, 열 벡터를 이용하면 행렬을 다음과 같이 복수의 열 벡터들 $c_i$, 또는 복수의 행 벡터들 $r_j^T$ 을 합친(concatenated) 형태로 표기할 수도 있다.
$$
X
=
\begin{bmatrix}
c_1 & c_2 & \cdots & c_M
\end{bmatrix}
=
\begin{bmatrix}
r_1^T \ r_2^T \ \vdots \ r_N^T
\end{bmatrix}
$$
$$ X \in \mathbb{R}^{N\times M} ,\;\;\; c_i \in R^{N \times 1} \; (i=1,\cdots,M) ,\;\;\; r_j \in R^{M \times 1} \; (j=1,\cdots,N) $$
행렬 덧셈과 뺄셈
행렬의 덧셈과 뺄셈은 같은 크기를 가진 두개의 행렬에 대해 정의되며 각각의 원소에 대해 덧셈과 뺄셈을 하면 된다. 이러한 연산을 element-wise 연산이라고 한다.
End of explanation
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
np.dot(x, y)
x.dot(y)
x = np.array([[1], [2], [3]])
y = np.array([[4], [5], [6]])
np.dot(x.T, y)
Explanation: 벡터 곱셈
두 행렬의 곱셈을 정의하기 전에 우선 두 벡터의 곱셈을 알아보자. 벡터의 곱셈에는 내적(inner product)과 외적(outer product) 두 가지가 있다 여기에서는 내적에 대해서만 설명한다. 내적은 dot product라고 하기도 한다.
두 벡터의 곱(내적)이 정의되려면 우선 두 벡터의 길이가 같으며 앞의 벡터가 행 벡터이고 뒤의 벡터가 열 벡터이어야 한다. 이때 두 벡터의 곱은 다음과 같이 각 원소들을 element-by-element로 곱한 다음에 그 값들을 다시 모두 합해서 하나의 스칼라값으로 계산된다.
$$
x^T y =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{N} \
\end{bmatrix}
= x_1 y_1 + \cdots + x_N y_N
= \sum_{i=1}^N x_i y_i
$$
$$ x \in \mathbb{R}^{N \times 1} , \; y \in \mathbb{R}^{N \times 1} \; \rightarrow \; x^T y \in \mathbb{R} $$
벡터의 곱은 왜 이렇게 복잡하게 정의된 것일까. 벡터의 곱을 사용한 예를 몇가지 살펴보자
가중합
가중합(weighted sum)이란 복수의 데이터를 단순히 합하는 것이 아니라 각각의 수에 중요도에 따른 어떤 가중치를 곱한 후 이 값을 합하는 것을 말한다. 만약 데이터가 $x_1, \cdots, x_N$ 이고 가중치가 $w_1, \cdots, w_N$ 이면 가중합은 다음과 같다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i $$
이를 벡터의 곱으로 나타내면 다음과 같이 $w^Tx$ 또는 $x^Tw$ 라는 간단한 수식으로 표시할 수 있다.
$$ w_1 x_1 + \cdots + w_N x_N = \sum_{i=1}^N w_i x_i =
\begin{bmatrix}
w_{1} && w_{2} && \cdots && w_{N}
\end{bmatrix}
\begin{bmatrix}
x_1 \ x_2 \ \vdots \ x_N
\end{bmatrix}
= w^Tx =
\begin{bmatrix}
x_{1} && x_{2} && \cdots && x_{N}
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= x^Tw $$
NumPy에서 벡터 혹은 이후에 설명할 행렬의 곱은 dot이라는 명령으로 계산한다. 2차원 행렬로 표시한 벡터의 경우에는 결과값이 스칼라가 아닌 2차원 행렬값임에 유의한다.
End of explanation
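# Example (a sketch): an ordinary average is just a weighted sum with equal weights 1/N.
x = np.array([10.0, 20.0, 30.0])
w = np.ones(3) / 3
print(np.dot(w, x), x.mean())   # both give 20.0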
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[1, 2], [3, 4], [5, 6]])
C = np.dot(A, B)
A
B
C
Explanation: 제곱합
데이터 분석시에 분산(variance), 표준 편차(standard deviation)을 구하는 경우에는 각각의 데이터를 제곱한 값을 모두 더하는 계산 즉 제곱합(sum of squares)을 계산하게 된다. 이 경우에도 벡터의 곱을 사용하여 $x^Tx$로 쓸 수 있다.
$$
x^T x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} x_i^2
$$
행렬의 곱셈
벡터의 곱셈을 정의한 후에는 다음과 같이 행렬의 곱셈을 정의할 수 있다.
$A$ 행렬과 $B$ 행렬을 곱한 결과인 $C$ 행렬의 $i$번째 행, $j$번째 열의 원소의 값은 $A$ 행렬의 $i$번째 행 벡터 $a_i^T$와 $B$ 행렬의 $j$번째 열 벡터 $b_j$의 곱으로 계산된 숫자이다.
$$ C = AB \; \rightarrow \; [c_{ij}] = a_i^T b_j $$
이 정의가 성립하려면 앞의 행렬 $A$의 열의 수가 뒤의 행렬 $B$의 행의 수와 일치해야만 한다.
$$ A \in \mathbb{R}^{N \times L} , \; B \in \mathbb{R}^{L \times M} \; \rightarrow \; AB \in \mathbb{R}^{N \times M} $$
End of explanation
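# Example (a sketch): the sum of squares x^T x described above, computed with dot.
x = np.array([1, 2, 3, 4])
print(np.dot(x, x))   # 1 + 4 + 9 + 16 = 30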
from sklearn.datasets import make_regression
X, y = make_regression(4,3)
X
y
w = np.linalg.lstsq(X, y)[0] #역행렬
w
e = y - np.dot(X, w)
e
Explanation: 그럼 이러한 행렬의 곱셈은 데이터 분석에서 어떤 경우에 사용될까. 몇가지 예를 살펴본다.
가중 벡터합
어떤 데이터 레코드 즉, 벡터의 가중합은 $w^Tx$ 또는 $x^Tw$로 표시할 수 있다는 것을 배웠다. 그런데 만약 이렇게 $w$ 가중치를 사용한 가중합을 하나의 벡터 $x$가 아니라 여러개의 벡터 $x_1, \cdots, x_M$개에 대해서 모두 계산해야 한다면 이 계산을 다음과 같이 $Xw$라는 기호로 간단하게 표시할 수 있다.
$$
\begin{bmatrix}
w_1 x_{1,1} + w_2 x_{1,2} + \cdots + w_N x_{1,N} \
w_1 x_{2,1} + w_2 x_{2,2} + \cdots + w_N x_{2,N} \
\vdots \
w_1 x_{M,1} + w_2 x_{M,2} + \cdots + w_N x_{M,N} \
\end{bmatrix}
=
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,N} \
x_{2,1} & x_{2,2} & \cdots & x_{2,N} \
\vdots & \vdots & \vdots & \vdots \
x_{M,1} & x_{M,2} & \cdots & x_{M,N} \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
=
\begin{bmatrix}
x_1^T \
x_2^T \
\vdots \
x_M^T \
\end{bmatrix}
\begin{bmatrix}
w_1 \ w_2 \ \vdots \ w_N
\end{bmatrix}
= X w
$$
잔차
선형 회귀 분석(linear regression)을 한 결과는 가중치 벡터 $w$라는 형태로 나타나고 예측치는 이 가중치 벡터를 사용한 독립 변수 데이터 레코드 즉, 벡터 $x_i$의 가중합 $w^Tx_i$이 된다. 이 예측치와 실제 값 $y_i$의 차이를 오차(error) 혹은 잔차(residual) $e_i$ 이라고 한다. 이러한 잔차 값을 모든 독립 변수 벡터에 대해 구하면 잔차 벡터 $e$가 된다.
$$ e_i = y_i - w^Tx_i $$
잔차 벡터는 다음과 같이 $y-Xw$로 간단하게 표기할 수 있다.
$$
e =
\begin{bmatrix}
e_{1} \
e_{2} \
\vdots \
e_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
w^T x_{1} \
w^T x_{2} \
\vdots \
w^T x_{M} \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1}w \
x^T_{2}w \
\vdots \
x^T_{M}w \
\end{bmatrix}
=
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{M} \
\end{bmatrix}
-
\begin{bmatrix}
x^T_{1} \
x^T_{2} \
\vdots \
x^T_{M} \
\end{bmatrix}
w
= y - Xw
$$
$$
e = y - Xw
$$
End of explanation
np.dot(e.T,e)
Explanation: 잔차 제곱합
잔차의 크기는 잔차 벡터의 각 원소를 제곱한 후 더한 잔차 제곱합(RSS: Residual Sum of Squares)를 이용하여 구한다. 이 값은 $e^Te$로 간단하게 쓸 수 있으며 그 값은 다음과 같이 계산한다.
$$
e^Te = \sum_{i=1}^{N} (y_i - w^Tx_i)^2 = (y - Xw)^T (y - Xw)
$$
End of explanation
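# Quick check (a sketch): the dot-product form of RSS matches the element-wise sum.
print(np.dot(e.T, e), np.sum(e ** 2))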
x = np.array([1,2,3])
x
A = np.arange(1, 10).reshape(3,3)
A
np.dot(np.dot(x, A), x)
Explanation: 이차 형식
벡터의 이차 형식(Quadratic Form) 이란 어떤 벡터의 각 원소에 대해 가능한 모든 쌍의 조합 $(x_i, x_j)$을 구한 다음 그 곱셈$x_ix_j$을 더한 것을 말한다. 이 때 각 쌍에 대해 서로 다른 가중치 $a_{i,j}$를 적용하여 $a_{i,j}x_ix_j$의 합을 구한다면 다음과 같이 $x^TAx$라는 간단한 식으로 쓸 수 있다.
$$
x^T A x =
\begin{bmatrix}
x_{1} & x_{2} & \cdots & x_{N}
\end{bmatrix}
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,N} \
a_{2,1} & a_{2,2} & \cdots & a_{2,N} \
\vdots & \vdots & \ddots & \vdots \
a_{N,1} & a_{N,2} & \cdots & a_{N,N} \
\end{bmatrix}
\begin{bmatrix}
x_{1} \
x_{2} \
\vdots \
x_{N} \
\end{bmatrix} = \sum_{i=1}^{N} \sum_{j=1}^{N} a_{i,j} x_i x_j
$$
예를 들어 $ x = [1, 2, 3]^T $ 이고 A가 다음과 같다면
$$ A =
\begin{pmatrix}
1 & 2 & 3 \
4 & 5 & 6 \
7 & 8 & 9 \
\end{pmatrix}
$$
NumPy 에서 벡터의 이차 형식은 다음과 같이 계산한다.
End of explanation |
4,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: SSGAN Demo
This notebook is a demo of Generative Adversarial Networks (GANs) trained on ImageNet without labels using self-supervised techniques. Both generator and discriminator models are available on TF Hub.
For more information about the models and the training procedure see our paper [1].
The code for training these models is available on GitHub.
To get started, connect to a runtime and follow these steps
Step2: Select a model
Step3: Sample
Step4: Discriminator | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# @title Imports and utility functions
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import IPython
from IPython.display import display
import numpy as np
import PIL.Image
import pandas as pd
import six
import tensorflow as tf
import tensorflow_hub as hub
def imgrid(imarray, cols=8, pad=1):
pad = int(pad)
assert pad >= 0
cols = int(cols)
assert cols >= 1
N, H, W, C = imarray.shape
rows = int(np.ceil(N / float(cols)))
batch_pad = rows * cols - N
assert batch_pad >= 0
post_pad = [batch_pad, pad, pad, 0]
pad_arg = [[0, p] for p in post_pad]
imarray = np.pad(imarray, pad_arg, 'constant')
H += pad
W += pad
grid = (imarray
.reshape(rows, cols, H, W, C)
.transpose(0, 2, 1, 3, 4)
.reshape(rows*H, cols*W, C))
return grid[:-pad, :-pad]
def imshow(a, format='png', jpeg_fallback=True):
a = np.asarray(a, dtype=np.uint8)
if six.PY3:
str_file = six.BytesIO()
else:
str_file = six.StringIO()
PIL.Image.fromarray(a).save(str_file, format)
png_data = str_file.getvalue()
try:
disp = display(IPython.display.Image(png_data))
except IOError:
if jpeg_fallback and format != 'jpeg':
print ('Warning: image was too large to display in format "{}"; '
'trying jpeg instead.').format(format)
return imshow(a, format='jpeg')
else:
raise
return disp
class Generator(object):
def __init__(self, module_spec):
self._module_spec = module_spec
self._sess = None
self._graph = tf.Graph()
self._load_model()
@property
def z_dim(self):
return self._z.shape[-1].value
@property
def conditional(self):
return self._labels is not None
def _load_model(self):
with self._graph.as_default():
self._generator = hub.Module(self._module_spec, name="gen_module",
tags={"gen", "bsNone"})
input_info = self._generator.get_input_info_dict()
inputs = {k: tf.placeholder(v.dtype, v.get_shape().as_list(), k)
for k, v in self._generator.get_input_info_dict().items()}
self._samples = self._generator(inputs=inputs, as_dict=True)["generated"]
print("Inputs:", inputs)
print("Outputs:", self._samples)
self._z = inputs["z"]
self._labels = inputs.get("labels", None)
def _init_session(self):
if self._sess is None:
self._sess = tf.Session(graph=self._graph)
self._sess.run(tf.global_variables_initializer())
def get_noise(self, num_samples, seed=None):
if np.isscalar(seed):
np.random.seed(seed)
return np.random.normal(size=[num_samples, self.z_dim])
z = np.empty(shape=(len(seed), self.z_dim), dtype=np.float32)
for i, s in enumerate(seed):
np.random.seed(s)
z[i] = np.random.normal(size=[self.z_dim])
return z
def get_samples(self, z, labels=None):
with self._graph.as_default():
self._init_session()
feed_dict = {self._z: z}
if self.conditional:
assert labels is not None
assert labels.shape[0] == z.shape[0]
feed_dict[self._labels] = labels
samples = self._sess.run(self._samples, feed_dict=feed_dict)
return np.uint8(np.clip(256 * samples, 0, 255))
class Discriminator(object):
def __init__(self, module_spec):
self._module_spec = module_spec
self._sess = None
self._graph = tf.Graph()
self._load_model()
@property
def conditional(self):
return "labels" in self._inputs
@property
def image_shape(self):
return self._inputs["images"].shape.as_list()[1:]
def _load_model(self):
with self._graph.as_default():
self._discriminator = hub.Module(self._module_spec, name="disc_module",
tags={"disc", "bsNone"})
input_info = self._discriminator.get_input_info_dict()
self._inputs = {k: tf.placeholder(v.dtype, v.get_shape().as_list(), k)
for k, v in input_info.items()}
self._outputs = self._discriminator(inputs=self._inputs, as_dict=True)
print("Inputs:", self._inputs)
print("Outputs:", self._outputs)
def _init_session(self):
if self._sess is None:
self._sess = tf.Session(graph=self._graph)
self._sess.run(tf.global_variables_initializer())
def predict(self, images, labels=None):
with self._graph.as_default():
self._init_session()
feed_dict = {self._inputs["images"]: images}
if "labels" in self._inputs:
assert labels is not None
assert labels.shape[0] == images.shape[0]
feed_dict[self._inputs["labels"]] = labels
return self._sess.run(self._outputs, feed_dict=feed_dict)
Explanation: SSGAN Demo
This notebook is a demo of Generative Adversarial Networks (GANs) trained on ImageNet without labels using self-supervised techniques. Both generator and discriminator models are available on TF Hub.
For more information about the models and the training procedure see our paper [1].
The code for training these models is available on GitHub.
To get started, connect to a runtime and follow these steps:
(Optional) Select a model in the second code cell below.
Click Runtime > Run all to run each cell in order.
Afterwards, the interactive visualizations should update automatically when you modify the settings using the sliders and dropdown menus.
Note: if you run into any issues, you can try restarting the runtime and rerunning all cells from scratch by clicking Runtime > Restart and run all....
[1] Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby, Self-Supervised GANs via Auxiliary Rotation Loss, CVPR 2019.
Setup
End of explanation
# @title Load Model
model_name = "SSGAN 128x128 (FID 20.6, IS 24.9)"
models = {
"SSGAN 128x128": "https://tfhub.dev/google/compare_gan/ssgan_128x128/1",
}
module_spec = models[model_name.split(" (")[0]]
print("Module spec:", module_spec)
tf.reset_default_graph()
print("Loading model...")
sampler = Generator(module_spec)
print("Model loaded.")
Explanation: Select a model
End of explanation
# @title Sampling { run: "auto" }
num_rows = 3 # @param {type: "slider", min:1, max:16}
num_cols = 4 # @param {type: "slider", min:1, max:16}
noise_seed = 23 # @param {type:"slider", min:0, max:100, step:1}
num_samples = num_rows * num_cols
z = sampler.get_noise(num_samples, seed=noise_seed)
samples = sampler.get_samples(z)
imshow(imgrid(samples, cols=num_cols))
# @title Interpolation { run: "auto" }
num_samples = 1 # @param {type: "slider", min: 1, max: 6, step: 1}
num_interps = 6 # @param {type: "slider", min: 2, max: 10, step: 1}
noise_seed_A = 11 # @param {type: "slider", min: 0, max: 100, step: 1}
noise_seed_B = 0 # @param {type: "slider", min: 0, max: 100, step: 1}
def interpolate(A, B, num_interps):
alphas = np.linspace(0, 1, num_interps)
if A.shape != B.shape:
raise ValueError('A and B must have the same shape to interpolate.')
return np.array([((1-a)*A + a*B)/np.sqrt(a**2 + (1-a)**2) for a in alphas])
def interpolate_and_shape(A, B, num_interps):
interps = interpolate(A, B, num_interps)
return (interps.transpose(1, 0, *range(2, len(interps.shape)))
.reshape(num_samples * num_interps, -1))
z_A = sampler.get_noise(num_samples, seed=noise_seed_A)
z_B = sampler.get_noise(num_samples, seed=noise_seed_B)
z = interpolate_and_shape(z_A, z_B, num_interps)
samples = sampler.get_samples(z)
imshow(imgrid(samples, cols=num_interps))
Explanation: Sample
End of explanation
disc = Discriminator(module_spec)
batch_size = 4
num_classes = 1000
images = np.random.random(size=[batch_size] + disc.image_shape)
disc.predict(images)
Explanation: Discriminator
End of explanation |
4,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import data
Step1: Data exploration
Shape, types, distribution, modalities and potential missing values
Step3: Data processing
Step4: Feature engineering
Step5: Modelling
This model aims to answer the question: what is the interest profile of the people who got the most matches?
Variables
Step6: Variables selection
Step7: Decision Tree
Step8: Tuning Parameters
Step9: Check | Python Code:
raw_dataset = pd.read_csv(source_path + "Speed_Dating_Data.csv")
Explanation: Import data
End of explanation
raw_dataset.head(3)
raw_dataset_copy = raw_dataset
#merged_datasets = raw_dataset.merge(raw_dataset_copy, left_on="pid", right_on="iid")
#merged_datasets[["iid_x","gender_x","pid_y","gender_y"]].head(5)
#same_gender = merged_datasets[merged_datasets["gender_x"] == merged_datasets["gender_y"]]
#same_gender.head()
columns_by_types = raw_dataset.columns.to_series().groupby(raw_dataset.dtypes).groups
raw_dataset.dtypes.value_counts()
raw_dataset.isnull().sum().head(3)
summary = raw_dataset.describe() #.transpose()
print(summary)
#raw_dataset.groupby("gender").agg({"iid": pd.Series.nunique})
raw_dataset.groupby('gender').iid.nunique()
raw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(5)
raw_dataset.groupby(["gender","match"]).iid.nunique()
Explanation: Data exploration
Shape, types, distribution, modalities and potential missing values
End of explanation
local_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/"
local_filename = "Speed_Dating_Data.csv"
my_variables_selection = ["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]
class RawSetProcessing(object):
This class aims to load and clean the dataset.
def __init__(self,source_path,filename,features):
self.source_path = source_path
self.filename = filename
self.features = features
# Load data
def load_data(self):
raw_dataset_df = pd.read_csv(self.source_path + self.filename)
return raw_dataset_df
# Select variables to process and include in the model
def subset_features(self, df):
sel_vars_df = df[self.features]
return sel_vars_df
@staticmethod
# Remove ids with missing values
def remove_ids_with_missing_values(df):
sel_vars_filled_df = df.dropna()
return sel_vars_filled_df
@staticmethod
def drop_duplicated_values(df):
df = df.drop_duplicates()
return df
# Combine processing stages
def combiner_pipeline(self):
raw_dataset = self.load_data()
subset_df = self.subset_features(raw_dataset)
subset_no_dup_df = self.drop_duplicated_values(subset_df)
subset_filled_df = self.remove_ids_with_missing_values(subset_no_dup_df)
return subset_filled_df
raw_set = RawSetProcessing(local_path, local_filename, my_variables_selection)
dataset_df = raw_set.combiner_pipeline()
dataset_df.head(3)
# Number of unique participants
dataset_df.iid.nunique()
dataset_df.shape
Explanation: Data processing
End of explanation
def get_partner_features(df):
#print df[df["iid"] == 1]
df_partner = df.copy()
df_partner = df_partner.drop(['pid','match'], 1).drop_duplicates()
#print df_partner.shape
merged_datasets = df.merge(df_partner, how = "inner",left_on="pid", right_on="iid",suffixes=('_me','_partner'))
#print merged_datasets[merged_datasets["iid_me"] == 1]
return merged_datasets
feat_eng_df = get_partner_features(dataset_df)
feat_eng_df.head(3)
Explanation: Feature engineering
End of explanation
import sklearn
print(sklearn.__version__)
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
import subprocess
Explanation: Modelling
This model aims to answer the question: what is the interest profile of the people who got the most matches?
Variables:
* gender
* date (In general, how frequently do you go on dates?)
* go out (How often do you go out (not necessarily on dates)?
* sports: Playing sports/ athletics
* tvsports: Watching sports
* exercise: Body building/exercising
* dining: Dining out
* museums: Museums/galleries
* art: Art
* hiking: Hiking/camping
* gaming: Gaming
* clubbing: Dancing/clubbing
* reading: Reading
* tv: Watching TV
* theater: Theater
* movies: Movies
* concerts: Going to concerts
* music: Music
* shopping: Shopping
* yoga: Yoga/meditation
End of explanation
#features = list(["gender","age_o","race_o","goal","samerace","imprace","imprelig","date","go_out","career_c"])
features = list(["gender","date","go_out","sports","tvsports","exercise","dining","museums","art",
"hiking","gaming","clubbing","reading","tv","theater","movies","concerts","music",
"shopping","yoga"])
suffix_me = "_me"
suffix_partner = "_partner"
#add suffix to each element of list
def process_features_names(features, suffix_1, suffix_2):
features_me = [feat + suffix_1 for feat in features]
features_partner = [feat + suffix_2 for feat in features]
features_all = features_me + features_partner
return features_all
features_model = process_features_names(features, suffix_me, suffix_partner)
label = "match"  # target column (assumed from context: the model predicts the 'match' outcome)
explanatory = feat_eng_df[features_model]
explained = feat_eng_df[label]
Explanation: Variables selection
End of explanation
clf = tree.DecisionTreeClassifier(min_samples_split=20,min_samples_leaf=10,max_depth=4)
clf = clf.fit(explanatory, explained)
# Download http://www.graphviz.org/
with open("data.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f, feature_names= features_model, class_names="match")
import subprocess
subprocess.call(['dot', '-Tpdf', 'data.dot', '-o' 'data.pdf'])
Explanation: Decision Tree
End of explanation
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(explanatory, explained, test_size=0.3, random_state=0)
parameters = [
{'criterion': ['gini','entropy'], 'max_depth': [4,6,10,12,14],
'min_samples_split': [10,20,30], 'min_samples_leaf': [10,15,20]
}
]
scores = ['precision', 'recall']
dtc = tree.DecisionTreeClassifier()
clf = GridSearchCV(dtc, parameters,n_jobs=3, cv=5, refit=True)
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print("")
clf = GridSearchCV(dtc, parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print("")
print(clf.best_params_)
print("")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print("")
best_param_dtc = tree.DecisionTreeClassifier(criterion="entropy",min_samples_split=10,min_samples_leaf=10,max_depth=14)
best_param_dtc = best_param_dtc.fit(explanatory, explained)
best_param_dtc.feature_importances_
raw_dataset.rename(columns={"age_o":"age_of_partner","race_o":"race_of_partner"},inplace=True)
Explanation: Tuning Parameters
End of explanation
raw_data = {
'subject_id': ['14', '15', '16', '17', '18'],
'first_name': ['Sue', 'Maria', 'Sandra', 'Kate', 'Aurelie'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan'],
'pid': ['4', '5', '6', '7', '8'],}
df_a = pd.DataFrame(raw_data, columns = ['subject_id', 'first_name', 'last_name','pid'])
df_a
raw_data = {
'subject_id': ['4', '5', '6', '7', '8'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan'],
'pid': ['14', '15', '16', '17', '18'],}
df_b = pd.DataFrame(raw_data, columns = ['subject_id', 'first_name', 'last_name','pid'])
df_b
df_a.merge(df_b, left_on='pid', right_on='subject_id', how='outer', suffixes=('_me','_partner'))
Explanation: Check
End of explanation |
4,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 1 - Subject selection
written by R.A.I. Bethlehem, D. Margulies and M. Falkiewicz for the Autism Gradients project at Brainhack Cambridge 2017
Subjects are selected based on
Step1: Check for missing data
Step2: Now load the phenotype file and check to see the filenames match the selected ones and have the SUB_IN_SMP set to 1 | Python Code:
# imports
from __future__ import print_function
import numpy as np
import os
import nibabel as nib
from os import listdir
from os.path import isfile, join
import os.path
# little helper function to return the proper filelist with the full path but that skips hidden files
def listdir_nohidden(path):
for f in os.listdir(path):
if not f.startswith('.'):
yield f
def listdir_fullpath(d):
return [os.path.join(d, f) for f in listdir_nohidden(d)]
# and create a filelist
onlyfiles = listdir_fullpath("./data/Input/")
# print the file list length
"There are " + str(len(onlyfiles)) + " files to be processed."
Explanation: Step 1 - Subject selection
written by R.A.I. Bethlehem, D. Margulies and M. Falkiewicz for the Autism Gradients project at Brainhack Cambridge 2017
Subjects are selected based on:
- Missing data
- SUB_IN_SMP variable (subjects used in the original paper)
End of explanation
# check to see which files contains nodes with missing information
missingarray = []
for i in onlyfiles:
# load timeseries
filename = i
ts_raw = np.loadtxt(filename)
# check zero columns
missingn = np.where(~ts_raw.any(axis=0))[0]
missingarray.append(missingn)
# select the ones that don't have missing data
ids = np.where([len(i) == 0 for i in missingarray])[0]
selected_filename_only = [onlyfiles[i] for i in ids]
# could be useful to have one without pathnames later on
selected_full_path = [os.path.basename(onlyfiles[i]) for i in ids]
"There are " + str(len(selected_filename_only)) + " files that are selected."
Explanation: Check for missing data
End of explanation
import pandas as pd
# read in csv
df_phen = pd.read_csv('./data/Phenotypic_V1_0b_preprocessed1_filt.csv')
# add a column that matches the filename
df_phen['filename_1D'] = df_phen['FILE_ID'] + "_rois_cc400.1D"
df_phen['filename_npy'] = df_phen['FILE_ID'] + "_rois_cc400.1D.npy"
df_phen['selected'] = np.where(df_phen['filename_1D'].isin((selected_full_path)), 1, 0 )
df_phen = df_phen.loc[df_phen["SUB_IN_SMP"] == 1]
df_phen = df_phen.loc[df_phen["selected"] == 1]
df_phen.to_csv('./data/SelectedSubjects.csv')
"There are " + str(len(df_phen.index)) + " in the final selection."
Explanation: Now load the phenotype file and check to see the filenames match the selected ones and have the SUB_IN_SMP set to 1
End of explanation |
4,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Curvature Matrix error estimation
Please cite
Step1: Read in the network and set up coordinates
Step2: Set up the grid of source points
Step3: Set source power and run the curvature matrix math
Step4: Plotting | Python Code:
%pylab inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import simulation_functions as sf
from mpl_toolkits.basemap import Basemap
from coordinateSystems import TangentPlaneCartesianSystem, GeographicSystem
c0 = 3.0e8 # m/s
dt_rms = 23.e-9 # seconds
Explanation: Curvature Matrix error estimation
Please cite: V. C. Chmielewski and E. C. Bruning (2016), Lightning Mapping Array flash detection performance with variable receiver thresholds, J. Geophys. Res. Atmos., 121, 8600-8614, doi:10.1002/2016JD025159
If any results from this model are presented.
Contact:
[email protected]
End of explanation
Network = 'grid_LMA' # Choose network from csv
stations = pd.read_csv('network.csv')
stations.set_index('network').loc[Network]
aves = np.array(stations.set_index('network').loc[Network])[:,:-1].astype('float')
center = (np.mean(aves[:,1]), np.mean(aves[:,2]), np.mean(aves[:,0]))
geo = GeographicSystem()
tanp = TangentPlaneCartesianSystem(center[0], center[1], center[2])
alt, lat, lon = aves[:,:3].T
stations_ecef = np.array(geo.toECEF(lon, lat, alt)).T
stations_local = tanp.toLocal(stations_ecef.T).T
ordered_threshs = aves[:,-1]
Explanation: Read in the network and set up coordinates
End of explanation
xmin, xmax, xint = -200001, 199999, 5000 # Extent of grid (km)
alts = np.array([7000])
initial_points = np.array(np.meshgrid(np.arange(xmin,xmax+xint,xint),
np.arange(xmin,xmax+xint,xint), alts))
points = initial_points.reshape(3,int(np.size(initial_points)/3)).T
Explanation: Set up the grid of source points
End of explanation
# power = 0.84 # 98% FDE or OK/COLMA 95% in Watts
# power = 1.57 # NALMA 95%
# power = 1.36 # 95% FDE
# power = 2.67 # 90% FDE
# power = 4.34 # 85% FDE
# power = 6.79 # 80% FDE
power = 9.91 # 75% FDE
# power = 14.00 # 70% FDE
# power = 10000 # High powered source for full station contribution in domain
means = np.empty((np.shape(points)[0],4))
for i in range(np.shape(points)[0]):
means[i] = sf.curvature_matrix(points[i],stations_local,ordered_threshs,c0,power=power,timing_error=dt_rms,min_stations=6)
means = means.T.reshape((4,np.shape(initial_points)[1],np.shape(initial_points)[2]))
means = np.ma.masked_where(np.isnan(means) , means)
Explanation: Set source power and run the curvature matrix math
End of explanation
fig = plt.figure()
ax1 = fig.add_subplot(141)
sf.nice_plot(means[0,:,:]/1000.,xmin,xmax,xint,
center[0],center[1],stations_local,color='inferno_r',cmin=0,cmax=3,levels_t=(0.05,0.1,0.5,1,5))
plt.title('X RMSE')
ax2 = fig.add_subplot(142)
sf.nice_plot(means[1,:,:]/1000.,xmin,xmax,xint,
center[0],center[1],stations_local,color='inferno_r',cmin=0,cmax=3,levels_t=(0.05,0.1,0.5,1,5))
plt.title('Y RMSE')
ax3 = fig.add_subplot(143)
sf.nice_plot(means[2,:,:]/1000.,xmin,xmax,xint,
center[0],center[1],stations_local,color='inferno_r',cmin=0,cmax=3,levels_t=(0.05,0.1,0.5,1,5))
plt.title('Z RMSE')
ax4 = fig.add_subplot(144)
sf.nice_plot(means[3,:,:]/1000.,xmin,xmax,xint,
center[0],center[1],stations_local,color='inferno_r',cmin=0,cmax=3,levels_t=(0.05,0.1,0.5,1,5))
plt.title('CT RMSE')
plt.show()
sf.nice_plot(means[2,:,:]/1000.,xmin,xmax,xint,
center[0],center[1],stations_local,color='inferno_r',cmin=0,cmax=3,levels_t=(0.05,0.1,0.5,1,5))
Explanation: Plotting
End of explanation |
4,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting Forest Change using Landsat 8 imagery
In this notebook forest change is detected by observing two contiguous acquisitions from landsat_8 imagery. The comparison between before and after images offers insight into changes in vegetation and inundation over varied topography, giving an indication of deforestation. A broader history spanning one year into the past is also considered.
The diagram below shows a series of very simple rules applied to a given point in the imagery to classify it as forest changed or not changed.
Rule 1
Step1: In this step, the comparison of before and after NDVI values can indicate a loss of vegetation and is used as the first criterion for detecting deforestation.
The comparison relies on differencing between acquisitions.
Step2: Through a process of visual survey, a threshold is determined for the NDVI decrease. In our example the threshold is 0.18, which means that pixels whose NDVI difference is smaller than -0.18 will be retained; pixels with a larger difference will not be considered as possible points of deforestation.
Step3: <br>
Rule 2
Step4: Rule 3
Step5: <br>
In this step, the entire MNDWI history (time component) is analyzed and the maximum MNDWI is taken into account for each pixel. If, at some point in time, any MNDWI surpasses some threshold, the assumption is made that the area is not a forest.
<br>
Step6: A threshold is empirically evaluated to reduce the rate of false positives caused by inundated areas. In this case that threshold is 0. Pixels with max_MNDWI smaller than 0 will be retained; pixels with greater values will be discarded.
Step7: Rule 4
Step8: Rule 5
Step9: <br>
Step10: Run on a small area
The area
Step11: A quick visualization
Step12: Loading Data for the Area
The following lines of code load in ASTER DEM data, and Landsat7 data through datacube's python api.
Step13: Define the loads
Step14: Landsat 7
Step15: ASTER GDEM V2
Step16: Load Products
Step17: Pick a time
Step18: A quick RGB visualization function
Step19: Generate a boolean mask to filter clouds
Step20: <br>
Step21: Display contents
Step22: Add landsat and cloud mask to a common Xarray
Step23: Unfiltered image at selected time
Step24: Retained Pixels after filtering (in red)
Step25: Filtered Imagery
Step26: STEP 1
Step27: NDVI change mask (in red)
Step28: NDVI change + previous mask
Step29: Remaining pixels after filtering
Step30: Step 2
Step31: <br>
The lack of validation data with regard to height and agriculture means we pick an arbitrary threshold for filtering. A threshold of 5 was chosen.
<br>
Step32: Valid Height Mask
Step33: Height Mask + Previous Filters
Step34: Remaining
Step35: STEP 3
Step36: Inundation Mask
Step37: Inundation Mask + Previous Masks
Step38: Remaining
Step39: STEP 4
Step40: Non-Repeat NDVI Mask
Step41: Non-Repeat NDVI Mask + Previous Masks
Step42: Remaining
Step43: STEP 5
Step44: Deforestation Mask
Step45: Deforestation Mask in the context of previous filter results
Step46: Deforestation After Filtering | Python Code:
def ndvi(dataset):
return ((dataset.nir - dataset.red)/(dataset.nir + dataset.red)).rename("NDVI")
Explanation: Detecting Forest Change using Landsat 8 imagery
In this notebook forest change is detected by observing two contiguous acquisitions from landsat_8 imagery. The comparison between before and after images offers insight into changes in vegetation and inundation over varied topography, giving an indication of deforestation. A broader history spanning one year into the past is also considered.
The diagram below shows a series of very simple rules applied to a given point in the imagery to classify it as forest changed or not changed.
Rule 1: vegetation decreases significantly
NDVI is an index that correlates highly with the existence of vegetation. Its formulation is simple
$$pixel_{NDVI} = \frac{pixel_{NIR} - pixel_{red}}{pixel_{NIR} + pixel_{red}} $$
Implementation, with the help of xarray data structures, is equally simple.
End of explanation
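As a quick numeric sanity check of the formula, here is an illustrative sketch with made-up reflectance values (not part of the original notebook):
import xarray as xr
toy = xr.Dataset({"nir": xr.DataArray([0.50]), "red": xr.DataArray([0.10])})  # hypothetical healthy-vegetation pixel
print(ndvi(toy).values)  # (0.50 - 0.10) / (0.50 + 0.10) = 0.666..., a high NDVI consistent with dense vegetation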
def delta(before, after):
return after - before
def vegetation_change(before, after):
vegetation_before = ndvi(before)
vegetation_after = ndvi(after)
return delta(vegetation_before,
vegetation_after)
Explanation: In this step, the comparison of before and after NDVI values can indicate a loss of vegetation and is used as the first criterion for detecting deforestation.
The comparison relies on differencing between acquisitions.
End of explanation
def meets_devegetation_criteria(before,after, threshold = -0.18):
#In this code the following comparison is applied to all pixels.
#It returns a grid of True and False values to be used for filtering our dataset.
return (vegetation_change(before, after) < threshold).rename("devegetation_mask").astype(bool)
Explanation: Through a process of visual survey, a threshold is determined for the NDVI decrease. In our example the threshold is 0.18, which means that pixels whose NDVI difference is smaller than -0.18 will be retained; pixels with a larger difference will not be considered as possible points of deforestation.
End of explanation
## ASTER
def meets_minimum_height_criteria(aster_ds, threshold = 30):
return (aster_ds.num > threshold).astype(bool).rename("height_mask")
Explanation: <br>
Rule 2: vegetation decrease is not the result of agricultural activity
Agricultural cultivation often occurs on flat and low land. This assumption is used to deal with the possibility of agricultural activity interfering with the detection of deforestation/devegetation.
Thus, pixels with DEM value smaller than 30m are discarded.
End of explanation
def mndwi(dataset):
    return ((dataset.green - dataset.swir1)/(dataset.green + dataset.swir1)).rename("MNDWI")
#In this code the following arithmetic is applied to all pixels in a dataset
Explanation: Rule 3: vegetation decrease is not in an area that is known for inundation
Most forests are tall, with canopies obscuring the view of the forest floor. Visually speaking, flooding should have very little effect on NDVI values since flooding happens below the canopy.
The presence of water at some point in the year might indicate that the detected vegetation is not forest, but an agricultural area or a pond covered by water-fern vegetation.
MNDWI is a calculated index that correlates highly with the existence of water. Like NDVI, its formulation is simple.
<br>
$$MNDWI = \frac{pixel_{green} - pixel_{swir1}}{pixel_{green} + pixel_{swir1}} $$
So is the implementation in code
End of explanation
def max_mndwi(dataset):
#In this code, max is applied across all pixels along the time dimension.
_max = mndwi(dataset).max(dim = ['time'])
return _max.rename("max_MNDWI")
Explanation: <br>
In this step, the entire MNDWI history (time component) is analyzed and the maximum MNDWI is taken into account for each pixel. If, at some point in time, any MNDWI surpasses some threshold, the assumption is made that the area is not a forest.
<br>
End of explanation
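For intuition, a minimal sketch of how the time-wise max collapses each pixel's history to its wettest observation (toy values, purely illustrative):
import numpy as np
import xarray as xr
history = xr.DataArray(np.array([[-0.4, -0.3], [-0.1, 0.2], [-0.5, -0.2]]),
                       dims=["time", "pixel"])  # hypothetical MNDWI for 2 pixels over 3 acquisitions
print(history.max(dim=["time"]).values)  # [-0.1  0.2]: the second pixel exceeded 0 at least once, so it fails Rule 3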
def meets_inundation_criteria(dataset, threshold = 0.0):
return (max_mndwi(dataset) < threshold).rename("inundation_mask")
Explanation: A threshold is empirically evaluated to reduce the rate of false positives caused by inundated areas. In this case that threshold is 0. Pixels with max_MNDWI smaller than 0 will be retained; pixels with greater values will be discarded.
End of explanation
def create_ndvi_matrix(ds):
ndvi_matrix = ndvi(ds)
return ndvi_matrix.where(ds.cloud_mask)
def delta_matrix(ds):
return np.diff(ds, axis = 0)
def vegetation_sum_threshold(ds, threshold = 10):
ndvi_delta_matrix = delta_matrix(create_ndvi_matrix(ds)) # lat,lon,t-1 matrix
ndvi_change_magnitude = np.absolute(ndvi_delta_matrix) # lat,lon,t-1 matrix
cummulative_ndvi_change_magnitude = np.nansum(ndvi_change_magnitude, axis = 0) #lat, lon matrix,
ndvi_change_repeat_mask = cummulative_ndvi_change_magnitude < threshold #lat, lon boolean matrix
return xr.DataArray(ndvi_change_repeat_mask,
dims = ["latitude", "longitude"],
coords = {"latitude": ds.latitude, "longitude": ds.longitude},
attrs = {"threshold": threshold},
name = "repeat_devegetation_mask")
Explanation: Rule 4: vegetation decrease isn't a recurring or normal phenomenon
End of explanation
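A small numeric sketch of the idea behind this rule (hypothetical NDVI series, not real data): a seasonal field swings up and down repeatedly, so its cumulative |delta NDVI| over the year is large, while forest cleared once accumulates only a single drop.
import numpy as np
seasonal_crop  = np.array([0.2, 0.7, 0.2, 0.7, 0.2])   # hypothetical yearly NDVI values
cleared_forest = np.array([0.8, 0.8, 0.8, 0.8, 0.3])
print(np.nansum(np.abs(np.diff(seasonal_crop))))    # ~2.0, recurring change, excluded by the repeat mask
print(np.nansum(np.abs(np.diff(cleared_forest))))   # ~0.5, a single change, kept as a deforestation candidate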
from skimage import measure
import numpy as np
def group_pixels(array, connectivity = 1):
arr = measure.label(array, connectivity=connectivity)
    return [np.dstack(np.where(arr == y))[0] for y in range(1, np.amax(arr) + 1)]  # labels run from 1 to the max label inclusive
def numpy_group_mask(boolean_np_array, min_size = 5):
all_groups = group_pixels(boolean_np_array.astype(int))
candidate_groups = filter(lambda group:
(len(group) > min_size) & (group != 0).all(),
all_groups)
candidate_pixels = (pixel for group in candidate_groups for pixel in group)
dynamic_array = np.zeros(boolean_np_array.shape)
for x,y in candidate_pixels:
dynamic_array[x][y] = 1
return dynamic_array.astype(bool)
Explanation: Rule 5: vegetation decrease happens in large areas
The assumption is made that deforestation happens on a sufficiently large scale. Another assumption is made that detected deforestation in a pixel is typically spatially correlated with deforestation in adjacent pixels.
Following this reasoning, the final filter retains groups of deforestation larger than 5 pixels per group. The diagram below
illustrates a candidate for deforestation that meets this grouping criterion, and one that is rejected by it.
<br>
<br>
The implementation relies on adequately grouping pixels in an efficient manner. The use of skimage's measure module yields segmented arrays of pixel groups.
<br>
End of explanation
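To make the grouping step concrete, a minimal sketch (toy 4x4 mask, illustrative only) of how measure.label assigns a distinct integer to each connected region before the size filter is applied:
import numpy as np
from skimage import measure
toy_mask = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 1],
                     [0, 0, 0, 1],
                     [0, 0, 0, 1]])
print(measure.label(toy_mask, connectivity=1))
# Two connected regions of 3 pixels each (labelled 1 and 2); with min_size = 5 both would be
# rejected by numpy_group_mask, so neither survives as a deforestation candidate.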
def boolean_xarray_segmentation_filter(da, min_size = 5):
mask_np = numpy_group_mask(da.values,min_size = 5)
return xr.DataArray(mask_np,
dims = da.dims,
coords = da.coords,
attrs = {"group_size": min_size},
name = "filtered_chunks_mask")
Explanation: <br>
End of explanation
# Zanzibar, Tanzania
latitude = (-6.2238, -6.1267)
longitude = (39.2298, 39.2909)
date_range = ('2000-01-01', '2000-12-31')
Explanation: Run on a small area
The area
End of explanation
## The code below renders a map that can be used to orient yourself with the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = latitude, longitude = longitude)
Explanation: A quick visualization
End of explanation
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
Explanation: Loading Data for the Area
The following lines of code load in ASTER DEM data, and Landsat7 data through datacube's python api.
End of explanation
common_load_params = dict(latitude = latitude, longitude = longitude,
dask_chunks={'time':5, 'latitude':1000, 'longitude':1000})
Explanation: Define the loads
End of explanation
ls7_extents = dict(platform = 'LANDSAT_7',
product = 'ls7_usgs_sr_scene',
time = date_range,
measurements = ['red','green','blue','nir','swir1', 'swir2', 'pixel_qa'],
**common_load_params)
Explanation: Landsat 7
End of explanation
aster_extents = dict(platform = "TERRA",
product = "terra_aster_gdm",
**common_load_params)
Explanation: ASTER GDEM V2
End of explanation
landsat_dataset = dc.load(**ls7_extents)
aster_dataset = dc.load(**aster_extents)
Explanation: Load Products
End of explanation
import time
def reformat_n64(t):
return time.strftime("%Y-%m-%d", time.gmtime(t.astype(int)/1000000000))
acq_dates = list(map(reformat_n64, landsat_dataset.time.values))
print("Choose the target date")
print(acq_dates[1:]) # There must be at least one acquisition before the target date.
target_date = '2001-01-12'
target_index = acq_dates.index(target_date)
from datetime import datetime , timedelta
year_before_target_date = datetime.strptime(target_date, '%Y-%m-%d') - timedelta(days = 365)
year_of_landsat_dataset = landsat_dataset.sel(time = slice(year_before_target_date, target_date))
year_of_landsat_dataset
Explanation: Pick a time
End of explanation
from utils.data_cube_utilities.dc_displayutil import display_at_time
Explanation: A quick RGB visualization function
End of explanation
from functools import reduce
import numpy as np
import xarray as xr
def ls7_qa_mask(dataset, keys):
land_cover_endcoding = dict( fill = [1],
clear = [66, 130],
water = [68, 132],
shadow = [72, 136],
snow = [80, 112, 144, 176],
cloud = [96, 112, 160, 176, 224],
low_conf = [66, 68, 72, 80, 96, 112],
med_conf = [130, 132, 136, 144, 160, 176],
high_conf= [224]
)
def merge_lists(a, b):
return a.union(set(land_cover_endcoding[b]))
relevant_encodings = reduce(merge_lists, keys,set())
return xr.DataArray( np.isin(dataset.pixel_qa,list(relevant_encodings)),
coords = dataset.pixel_qa.coords,
dims = dataset.pixel_qa.dims,
name = "cloud_mask",
attrs = dataset.attrs)
Explanation: Generate a boolean mask to filter clouds
End of explanation
is_clear_mask = ls7_qa_mask(year_of_landsat_dataset, ['clear', 'water'])
Explanation: <br>
End of explanation
is_clear_mask
Explanation: Display contents
End of explanation
product_dataset = xr.merge([year_of_landsat_dataset, is_clear_mask])
Explanation: Add landsat and cloud mask to a common Xarray
End of explanation
from utils.data_cube_utilities.dc_rgb import rgb
rgb(landsat_dataset, at_index = target_index, width = 15)
Explanation: Unfiltered image at selected time
End of explanation
rgb(product_dataset, at_index = -1, paint_on_mask = [((product_dataset.cloud_mask.values), (255,0,0))], width = 15)
Explanation: Retained Pixels after filtering (in red)
End of explanation
rgb(product_dataset, at_index = -1, paint_on_mask = [(np.invert(product_dataset.cloud_mask.values), (0,0,0))], width = 15)
Explanation: Filtered Imagery
End of explanation
after = product_dataset .isel(time = -1)
before = product_dataset .isel(time = -2)
devegetation_mask = meets_devegetation_criteria(before,after,threshold = -0.18)
product_dataset = xr.merge([product_dataset, devegetation_mask])
def nan_to_bool(nd_array):
nd = nd_array.astype(float)
nd[np.isnan(nd)] = 0
return nd.astype(bool)
Explanation: STEP 1: NDVI Decrease
End of explanation
rgb(product_dataset, at_index = -1,
paint_on_mask = [(nan_to_bool(product_dataset.where(devegetation_mask).red.values), (255,0,0))], width = 15)
Explanation: NDVI change mask (in red)
End of explanation
rgb(product_dataset, at_index = -1,
paint_on_mask = [(nan_to_bool(product_dataset.where(devegetation_mask).red.values), (255,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))],
width = 15)
Explanation: NDVI change + previous mask
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Remaining pixels after filtering
End of explanation
aster_dataset.num.plot.hist(figsize = (15,5), bins = 16)
Explanation: Step 2: Filter DEM
Elevation in Salgar, Colombia might have a unique distribution of heights. Therefore, it makes sense to look at a histogram of all heights and determine an appropriate threshold specific to this area.
<br>
End of explanation
height_mask = meets_minimum_height_criteria(aster_dataset.isel(time = 0), threshold = 5)
product_dataset = xr.merge([product_dataset, height_mask])
Explanation: <br>
The lack of validation data with regard to height and agriculture means we pick an arbitrary threshold for filtering. A threshold of 5 was chosen.
<br>
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [((nan_to_bool(product_dataset.where(height_mask).red.values)), (255,0,0))], width = 15)
Explanation: Valid Height Mask
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [((nan_to_bool(product_dataset.where(height_mask).red.values)), (255,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Height Mask + Previous Filters
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Remaining
End of explanation
cloudless_dataset = product_dataset.where(product_dataset.cloud_mask)
inundation_mask = meets_inundation_criteria(cloudless_dataset)
product_dataset = xr.merge([product_dataset, inundation_mask])
Explanation: STEP 3: Filter inundation criteria
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [((nan_to_bool(product_dataset.where(inundation_mask).red.values)), (255,0,0))], width = 15)
Explanation: Inundation Mask
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [((nan_to_bool(product_dataset.where(inundation_mask).red.values)), (255,0,0)),
(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Inundation Mask + Previous Masks
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [(np.invert(nan_to_bool(product_dataset.where(inundation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Remaining
End of explanation
non_recurring_ndvi_change = vegetation_sum_threshold(product_dataset, threshold = 1)
product_dataset = xr.merge([product_dataset, non_recurring_ndvi_change])
Explanation: STEP 4: Recurring NDVI change
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [((nan_to_bool(product_dataset.where(non_recurring_ndvi_change).red.values)), (255,0,0))], width = 15)
Explanation: Non-Repeat NDVI Mask
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [
((nan_to_bool(product_dataset.where(non_recurring_ndvi_change).red.values)),(255,0,0)),
(np.invert(nan_to_bool(product_dataset.where(inundation_mask ).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Non-Repeat NDVI Mask + Previous Masks
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [(np.invert(nan_to_bool(product_dataset.where(non_recurring_ndvi_change).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(inundation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))], width = 15)
Explanation: Remaining
End of explanation
remaining_pixel_mask = product_dataset.isel(time = -1).cloud_mask & product_dataset.devegetation_mask & product_dataset.height_mask & product_dataset.inundation_mask & product_dataset.repeat_devegetation_mask
deforestation_mask = boolean_xarray_segmentation_filter(remaining_pixel_mask).rename("deforestation_mask").drop("time")
product_dataset = xr.merge([product_dataset, deforestation_mask])
Explanation: STEP 5: Filter in favor of sufficiently large pixel groups
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [((nan_to_bool(product_dataset.where(deforestation_mask).red.values)), (255,0,0))])
Explanation: Deforestation Mask
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [
((nan_to_bool(product_dataset.where(deforestation_mask).red.values)),(255,0,0)),
(np.invert(nan_to_bool(product_dataset.where(non_recurring_ndvi_change).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(inundation_mask ).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))])
Explanation: Deforestation Mask in the context of previous filter results
End of explanation
rgb(product_dataset,
at_index = -1,
paint_on_mask = [
(np.invert(nan_to_bool(product_dataset.where(deforestation_mask).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(non_recurring_ndvi_change).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(inundation_mask ).red.values)),(0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(height_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.where(devegetation_mask).red.values)), (0,0,0)),
(np.invert(nan_to_bool(product_dataset.cloud_mask.values)), (0,0,0))])
display_map(latitude = latitude, longitude = longitude)
Explanation: Deforestation After Filtering
End of explanation |
4,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = []
for sentence in source_text.split("\n"):
id_sentence = []
for word in sentence.split():
#print(source_vocab_to_int[word])
id_sentence.append(source_vocab_to_int[word])
source_id_text.append(id_sentence)
target_id_text = []
for sentence in target_text.split("\n"):
id_sentence = []
for word in sentence.split():
id_sentence.append(target_vocab_to_int[word])
id_sentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(id_sentence)
print("len source_id text "+str(len(source_id_text)))
print("len target_id text "+str(len(target_id_text)))
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
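For intuition, a toy illustration of the expected mapping (hypothetical vocabularies and ids, not the project's real data):
toy_source_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'nice': 7}
toy_target_vocab = {'<EOS>': 1, 'new': 8, 'jersey': 9, 'est': 10, 'joli': 11}
toy_src, toy_tgt = text_to_ids('new jersey is nice', 'new jersey est joli',
                               toy_source_vocab, toy_target_vocab)
print(toy_src)  # [[4, 5, 6, 7]]
print(toy_tgt)  # [[8, 9, 10, 11, 1]]  <- the <EOS> id is appended to every target sentence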
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32,shape=[None,None],name="input")
targets = tf.placeholder(tf.int32,shape=[None,None], name="target")
learningRate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return inputs, targets, learningRate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
print(dec_input)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
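For intuition, a quick check one could run on toy ids (hypothetical <GO> = 3, <EOS> = 1): the last token of each row is dropped and the <GO> id is prepended.
sess = tf.Session()
toy_targets = tf.constant([[10, 11, 12, 1],
                           [20, 21, 22, 1]], dtype=tf.int32)
print(sess.run(process_decoding_input(toy_targets, {'<GO>': 3}, 2)))
# [[ 3 10 11 12]
#  [ 3 20 21 22]]
sess.close()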
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
# enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
output, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
training_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],sequence_length - 1, vocab_size,
decoding_scope, output_fn, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
embed_target = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, dec_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
#dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, embed_target)
training_logits, inference_logits = decoding_layer(embed_target, dec_embeddings, encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
4,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Meeting 07
Step1: Configuring the library
Step2: Loading the graph
Step3: Let's run a simulation of $k$ iterations of the Hub/Authority algorithm
Step4: Consider the following definitions | Python Code:
import sys
sys.path.append('..')
import numpy as np
import socnet as sn
Explanation: Meeting 07: Hub/Authority Simulation and Demonstration
Importing the libraries:
End of explanation
sn.graph_width = 225
sn.graph_height = 225
Explanation: Configuring the library:
End of explanation
g = sn.load_graph('graph.gml', has_pos=True)
sn.show_graph(g)
Explanation: Loading the graph:
End of explanation
k = 10
# arbitrarily initialize hub scores
g.node[0]['h'] = 0
g.node[1]['h'] = 0
g.node[2]['h'] = 0
g.node[3]['h'] = 0
# arbitrarily initialize authority scores
g.node[0]['a'] = 2
g.node[1]['a'] = 6
g.node[2]['a'] = 4
g.node[3]['a'] = 3
for _ in range(k):
# update hub scores from authority scores
for n in g.nodes():
g.node[n]['h'] = sum([g.node[m]['a'] for m in g.successors(n)])
# update authority scores from hub scores
for n in g.nodes():
g.node[n]['a'] = sum([g.node[m]['h'] for m in g.predecessors(n)])
# sum of hub scores
sh = sum([g.node[n]['h'] for n in g.nodes()])
# sum of authority scores
sa = sum([g.node[n]['a'] for n in g.nodes()])
# print normalized hub and authority scores
for n in g.nodes():
print('{}: hub {:04.2f}, authority {:04.2f}'.format(n, g.node[n]['h'] / sh, g.node[n]['a'] / sa))
Explanation: Let's run a simulation of $k$ iterations of the Hub/Authority algorithm:
End of explanation
k = 10
# build the adjacency matrix
A = sn.build_matrix(g)
# build the transposed matrix
At = A.transpose()
# arbitrarily initialize hub scores
h = np.array([[0], [0], [0], [0]])
# arbitrarily initialize authority scores
a = np.array([[2], [6], [4], [3]])
for _ in range(k):
# update hub scores from authority scores
h = A.dot(a)
# update authority scores from hub scores
a = At.dot(h)
# sum of hub scores
sh = np.sum(h)
# sum of authority scores
sa = np.sum(a)
# print normalized hub and authority scores
for n in g.nodes():
print('{}: hub {:04.2f}, authority {:04.2f}'.format(n, h[n, 0] / sh, a[n, 0] / sa))
Explanation: Consider the following definitions:
$A$ is the adjacency matrix of g;
$h^k$ is the vector of hub scores at the end of iteration $k$;
$a^k$ is the vector of authority scores at the end of iteration $k$.
Note that:
$h^k = A a^{k-1}$;
$a^k = A^t h^k$.
Let's run a new simulation of $k$ iterations of the Hub/Authority algorithm, this time using matrix algebra:
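As a small optional variation (not part of the original activity), the same matrix iteration can be written with a normalization step inside the loop, which keeps the scores from growing without bound; A, At and k are the same objects defined in the cell above:
```python
h = np.ones((A.shape[0], 1))
a = np.ones((A.shape[0], 1))
for _ in range(k):
    h = A.dot(a)        # h^k = A a^(k-1)
    a = At.dot(h)       # a^k = A^t h^k
    h = h / np.sum(h)   # normalize so the scores stay comparable between iterations
    a = a / np.sum(a)
```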
End of explanation |
4,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook you are asked to write various handlers for commands sent from the computer. You can also implement your own additional set of commands for your own tasks.
Connecting all the libraries
First, import all the libraries installed in the preparatory article
Step1: Standard functions for working with UART
Standard functions for working with UART are already provided here. They are given only for convenience and their use is not mandatory.
Step2: Checking that everything works
To check that everything works, short the RXD and TXD pins of the converter, then flash the packet-echo program you wrote and run the cell again. If an error occurs, you most likely lack permissions (this should not be a problem on MacOS)
Step3: The actuator device
In this part you are asked to write the first command - controlling an LED. But before starting the implementation, you need to decide on the data exchange format between the devices. For simplicity, it is suggested to use two blocks
Step4: Write the same handler for the green LED and light it
Step5: Try to turn off both LEDs at the same time
Step6: Only one of them turned off, because the MCU cannot process two commands at once, so you must first wait for the response
Step9: Beat detection in a music track
This section is optional
The goal of this example is to show how the computer and the MCU can be used together to solve a common task. The task is to detect the beat of a piece of music. The microcontroller does not have enough performance to process audio and do heavy computations, but it can easily blink an LED at the right moments. If you like, you can improve this example by connecting a LED bar graph to the MCU. With 3 indicators you can already build a simple spectrum analyzer. This is how the MCU can work as an actuator.
Run the code below. The idea is to capture an audio window every 1024 samples and compute the points where the energy in the low frequencies increases sharply. For a detailed look at a simpler version of the algorithm, see this article.
Step10: Use the music.wav file located in the same folder as the notebook. You can play any other file, but it must be mono and in wav format. 400 blocks of 1024 samples each are enough for about 10 seconds of playback.
Step11: You can take a look at the filtered signal. The sharp spikes with amplitude >600 are exactly the moments where the energy of the track changes abruptly (drum kick)
Step12: Checking a button press
Now write a handler that returns the state of the button. Let the code of this request be 0.
The handler in manage_requests should fit in a single line
Step13: Seven-segment display
Now write a handler that shows the number passed as an argument on the seven-segment display. Let the code of this command be 1. Then make a counter that increments the value every second. Use sleep to create the delay. Documentation here.
Step14: Encoder
Write a handler for reading the current rotation angle of the encoder.
Bonus (starred) task
Step15: PWM
Write a command that lets you change the brightness of the LEDs
Bonus (starred) task | Python Code:
import serial
import pyaudio
import numpy as np
import wave
import scipy.signal as signal
import warnings
warnings.filterwarnings('ignore')
Explanation: In this notebook you are asked to write various handlers for commands sent from the computer. You can also implement your own additional set of commands for your own tasks.
Connecting all the libraries
First, import all the libraries installed in the preparatory article
End of explanation
def serial_init(speed):
dev = serial.Serial(
# The device to work with is specified here
# /dev/ttyUSBx - for Linux
# /dev/tty.SLAB_USBtoUART - for MacOS
port='/dev/ttyUSB0',
# Baud rate
baudrate=speed,
# Parity bit usage
parity=serial.PARITY_NONE,
# Stop bit length
stopbits=serial.STOPBITS_ONE,
# Byte size
bytesize=serial.EIGHTBITS,
# Maximum device timeout
timeout=0.1
)
return dev
def serial_recv(dev):
# For simplicity, the maximum number of characters to read is 255; the timeout is 0.1
# decode is needed to convert the received bytes into a string
string = dev.read(255).decode()
return string
def serial_send(dev, string):
# encode converts a utf-8 string into a sequence of bytes
dev.write(string.encode('utf-8'))
Explanation: Standard functions for working with UART
Standard functions for working with UART are already provided here. They are given only for convenience and their use is not mandatory.
End of explanation
dev = serial_init(115200)
serial_send(dev, "Hello, world!")
ans = serial_recv(dev)
print(ans)
Explanation: Checking that everything works
To check that everything works, short the RXD and TXD pins of the converter, then flash the packet-echo program you wrote and run the cell again. If an error occurs, you most likely lack permissions (this should not be a problem on MacOS):
sh
sudo adduser YOUR_USER_NAME dialout
sudo chmod a+rw /dev/ttyUSB0
End of explanation
serial_send(dev, "8 1")
Explanation: The actuator device
In this part you are asked to write the first command - controlling an LED. But before starting the implementation, you need to decide on the data exchange format between the devices. For simplicity, it is suggested to use two blocks: the first stores the command number, the second the required arguments. To do this, declare the following structure in main.c:
```c
typedef struct {
// Command number
uint8_t cmd;
// Required parameters
uint8_t params[10];
// Flag indicating that a new command has been received
uint8_t active;
} uart_req_t;
```
After that, declare a static global variable of this type:
c
static uart_req_t uart_req;
Now the USART1 interrupt handler has to be modified slightly: once reception starts, the first byte must be written to the cmd field of the uart_req structure, and all remaining bytes to params, until the IDLE flag is raised:
```c
void USART1_IRQHandler(void)
{
static uint8_t pos = 0;
if (LL_USART_IsActiveFlag_RXNE(USART1)) {
/*
* If pos is 0, the byte should be stored in cmd,
* otherwise in params
* Don't forget to increment pos
*/
}
if (LL_USART_IsActiveFlag_IDLE(USART1)) {
/*
* If the IDLE flag was raised, reception has finished,
* so reset pos and set the active flag
*/
LL_USART_ClearFlag_IDLE(USART1);
}
return;
}
```
Now it is time to write the request manager itself:
```c
static void manage_requests(void) {
/*
* Each handler assigns a status to this variable after
* it finishes: 1 - error, 0 - no errors
*/
uint8_t is_ok = 0;
/*
* If there are no active requests - just return
*/
if (!uart_req.active)
return;
/*
* All the handlers go here, each with its own command code
*/
switch (uart_req.cmd) {
default:
is_ok = 1;
break;
}
/*
* The response is sent here
* 0x30 is needed to turn the digit into an ASCII character
*/
while (!LL_USART_IsActiveFlag_TXE(USART1));
LL_USART_TransmitData8(USART1, is_ok + 0x30);
/*
* Reset the request flag
*/
uart_req.active = 0;
return;
}
```
Now add a call to it inside the infinite loop in main.
The first handler - controlling the LEDs
After writing the manager, write a handler that controls an LED. Let the character 8 be the command code for switching LED 8 on port GPIOC on and off. If the character 0 is passed as the argument, the LED must be turned off; if 1, it must be turned on.
c
// This case should be added to the request manager
case '8': {
if (uart_req.params[1] == '1')
LL_GPIO_SetOutputPin(GPIOC, LL_GPIO_PIN_8);
else
LL_GPIO_ResetOutputPin(GPIOC, LL_GPIO_PIN_8);
is_ok = 1;
break;
}
Flash the firmware and try to light the blue LED with the following command from the computer:
End of explanation
serial_send(dev, "9 1")
Explanation: Write the same handler for the green LED and light it:
End of explanation
serial_send(dev, "8 0")
serial_send(dev, "9 0")
Explanation: Try to turn off both LEDs at the same time
End of explanation
serial_send(dev, "8 1")
serial_recv(dev)
serial_send(dev, "9 0")
Explanation: Only one of them turned off, because the MCU cannot process two commands at once, so you must first wait for the response
End of explanation
class AudioFile:
chunk = 1024
def __init__(self, file):
Init audio stream
self.wf = wave.open(file, 'rb')
self.p = pyaudio.PyAudio()
self.stream = self.p.open(
format = self.p.get_format_from_width(self.wf.getsampwidth()),
channels = self.wf.getnchannels(),
rate = self.wf.getframerate(),
output = True
)
self.beatframe = np.empty(0)
def play(self, dev, max_samples):
block_cnt = 0
B, A = signal.butter(N=3, Wn=0.9, output='ba')
self.beatframe = np.empty(0)
self.peak = np.zeros(max_samples)
data = self.wf.readframes(self.chunk)
led_lock = 10
while data != '' and block_cnt != max_samples:
block_cnt += 1
self.stream.write(data)
data = self.wf.readframes(self.chunk)
sample = np.frombuffer(data, dtype=np.int16)
# Extracting low band
fft = np.abs(np.fft.rfft(sample))
flg_diff = (fft[:30]**2).mean()/float(0xFFFFFFFF)
# Filtering
self.beatframe = np.append(self.beatframe, flg_diff)
fft_final = np.diff(self.beatframe)
if (block_cnt <= 13):
continue
fft_final = signal.filtfilt(B, A, fft_final)
fft_final = np.where(fft_final < 0, 0, fft_final)
# Detecting peaks
fft_range_window = np.max(fft_final[-5:])/np.max(fft_final[-25:])
if (fft_range_window >= 0.90 and led_lock >= 10):
serial_send(dev, "8 1")
led_lock = 0
else:
serial_send(dev, "8 0")
led_lock += 1
return fft_final
def close(self):
Graceful shutdown
self.stream.close()
self.p.terminate()
Explanation: Beat detection in a music track
This section is optional
The goal of this example is to show how the computer and the MCU can be used together to solve a common task. The task is to detect the beat of a piece of music. The microcontroller does not have enough performance to process audio and do heavy computations, but it can easily blink an LED at the right moments. If you like, you can improve this example by connecting a LED bar graph to the MCU. With 3 indicators you can already build a simple spectrum analyzer. This is how the MCU can work as an actuator.
Run the code below. The idea is to capture an audio window every 1024 samples and compute the points where the energy in the low frequencies increases sharply. For a detailed look at a simpler version of the algorithm, see this article.
End of explanation
dev = serial_init(115200)
a = AudioFile("music.wav")
fft = a.play(dev, 400)
a.close()
Explanation: Use the music.wav file located in the same folder as the notebook. You can play any other file, but it must be mono and in wav format. 400 blocks of 1024 samples each are enough for about 10 seconds of playback.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(20,7))
plt.plot(fft, label='filtered low pass')
plt.axis('tight')
plt.legend()
plt.show()
Explanation: You can take a look at the filtered signal. The sharp spikes with amplitude >600 are exactly the moments where the energy of the track changes abruptly (drum kick)
End of explanation
serial_send(dev, '0')
state = serial_recv(dev)
if (state == '0'):
print("Button is not pressed:(")
else:
print("Button is pressed:)")
Explanation: Checking a button press
Now write a handler that returns the state of the button. Let the code of this request be 0.
The handler in manage_requests should fit in a single line
End of explanation
# your_code
Explanation: Seven-segment display
Now write a handler that shows the number passed as an argument on the seven-segment display. Let the code of this command be 1. Then make a counter that increments the value every second, as sketched below. Use sleep to create the delay. Documentation here.
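A minimal host-side sketch of the counter (one possible way to fill the cell above), assuming command code 1 takes the number to display as its argument:
```python
import time

for value in range(60):
    serial_send(dev, "1 {}".format(value))  # command 1 = show `value` on the display
    serial_recv(dev)                        # wait for the MCU status byte before the next command
    time.sleep(1)
```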
End of explanation
# your_code
Explanation: Encoder
Write a handler for reading the current rotation angle of the encoder.
Bonus (starred) task: based on these readings, try to compute the angular velocity $\omega$ and the angular acceleration $\varepsilon$. Plot the results; a possible sketch follows.
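A possible host-side sketch for the bonus part. The command code '2' and the reply format (the angle as a plain number) are assumptions here - use whatever you actually implemented in manage_requests:
```python
import time

angles, stamps = [], []
for _ in range(200):
    serial_send(dev, '2')                    # hypothetical "read encoder angle" command
    angles.append(float(serial_recv(dev)))
    stamps.append(time.time())
    time.sleep(0.05)
omega = np.diff(angles) / np.diff(stamps)          # angular velocity by finite differences
epsilon = np.diff(omega) / np.diff(stamps)[1:]     # angular acceleration
```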
End of explanation
# your_code
Explanation: PWM
Write a command that lets you change the brightness of the LEDs; a possible sketch is given below.
Bonus (starred) task: using the code from the beat-detection example, write a program that changes the brightness of the LED depending on the intensity of the audio signal
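A possible sketch for the brightness command. The command code '3' and the 0-255 duty-cycle range are assumptions that depend on how you set up the PWM on the MCU side:
```python
import time

# ramp the brightness up and then back down
for duty in list(range(0, 256, 5)) + list(range(255, -1, -5)):
    serial_send(dev, "3 {}".format(duty))   # hypothetical "set LED brightness" command
    serial_recv(dev)                        # wait for the MCU status byte
    time.sleep(0.02)
```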
End of explanation |
4,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We start with a fasta file for our RNA
Step1: Folding using RNAfold from Vienna suite
RNAfold performs MFE folding at the given temperature (-T option) and outputs the top three structures in bracket notation. It also creates a .ps file with the drawing of the top structure.
Step2: This is a regular postscript file
Step3: ... except that at the end it includes base pairing probabilities for the structure
Step4: From RNAfold manpage
It also produces PostScript files with plots of the resulting secondary structure graph and a "dot plot" of the base pairing matrix. The dot plot shows a matrix of squares with area proportional to the pairing probability in the upper right half, and one square for each pair in the minimum free energy structure in the lower left half. For each pair i−j with probability p>10E−6 there is a line of the form
i j sqrt(p) ubox
in the PostScript file, so that the pair probabilities can be easily extracted.
Step5: Final script
Now we can put it all together in a bash script that will take an RNA fasta file, temperature range, run RNAfold for each T and extract and save base pairing probabilities in a .txt file.
For example
Step6: Let's run it for hHSR for 35<sup>o</sup>C-45<sup>o</sup>C range
Step7: The resulting .txt files are saved in the same directory as the starting fasta file | Python Code:
ls -lah ../data/
!head ../data/rose.fa
Explanation: We start with a fasta file for our RNA
End of explanation
%%bash
cd ../data/
RNAfold -p -d2 --noPS --noLP -T 37 < rose.fa
cd -
ls -lah ../data/
Explanation: Folding using RNAfold from Vienna suite
RNAfold performs MFE folding at the given temperature (-T option) and outputs the top three structures in bracket notation. It also creates a .ps file with the drawing of the top structure.
End of explanation
!head -n 25 ../data/ROSE1_dp.ps
Explanation: This is a regular postscript file
End of explanation
!tail -n 25 ../data/ROSE1_dp.ps
Explanation: ... except that at the end it includes base pairing probabilities for the structure:
End of explanation
%%bash
cat ../data/ROSE1_dp.ps | grep "^[0-9].*ubox$"
%%bash
awk '/^>/' ../data/rose.fa | head -1
Explanation: From RNAfold manpage
It also produces PostScript files with plots of the resulting secondary structure graph and a "dot plot" of the base pairing matrix. The dot plot shows a matrix of squares with area proportional to the pairing probability in the upper right half, and one square for each pair in the minimum free energy structure in the lower left half. For each pair i−j with probability p>10E−6 there is a line of the form
i j sqrt(p) ubox
in the PostScript file, so that the pair probabilities can be easily extracted.
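Since each such line has the fixed form "i j sqrt(p) ubox", the pair probabilities can be read back into Python with a few lines like this sketch (using the ROSE1_dp.ps file from above):
```python
probs = {}
with open('../data/ROSE1_dp.ps') as ps:
    for line in ps:
        fields = line.split()
        if len(fields) == 4 and fields[3] == 'ubox' and fields[0].isdigit():
            i, j, sqrt_p = int(fields[0]), int(fields[1]), float(fields[2])
            probs[(i, j)] = sqrt_p ** 2   # the file stores sqrt(p), so square it
```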
End of explanation
%%writefile ../scripts/fold_temp.sh
#!/bin/bash
# Runs RNAfold for the given RNA sequence over the range of temperatures,
# extracts base pairing probabilities and saves them in .txt files
# for later analysis
# PostScript file generated by RNAfold ends with this
psext="_dp.ps"
# FASTA file with RNA sequence, rna.fa by default
rna_fa=${1:-rna.fa}
dirname=$(dirname "$rna_fa")
# Temperature interval limits
T1=${2:-37}
T2=${3:-43}
# Check the input file exists
if [[ ! -f $rna_fa ]]
then
echo "Could not find $rna_fa ... Exiting."
exit 1
fi
# Get the base_name either from the fasta file or the filename
base_name=`awk '/^>/' $rna_fa | head -1`
if [[ -z "$base_name" ]]
then
base_name="${rna_fa%.*}"
else
base_name=${base_name##>}
fi
# Iterate over the T range and save probabilities to .txt file
for T in $(seq $T1 $T2)
do
echo "Running RNAfold for Temp=$T ..."
RNAfold -p -d2 --noPS --noLP -T $T < $rna_fa
tmpf=$(ls | grep _dp.ps)
grep "^[0-9].*ubox$" "$tmpf" > "${dirname}/${base_name}_${T}.txt"
done
# Cleanup
rm ${base_name}_dp.ps
Explanation: Final script
Now we can put it all together in a bash script that will take an RNA fasta file, temperature range, run RNAfold for each T and extract and save base pairing probabilities in a .txt file.
For example:
bash
$./fold_temp.sh ../data/rose.fa 37 43
End of explanation
%%bash
../scripts/fold_temp.sh ../data/hHSR.fa 35 45
Explanation: Let's run it for hHSR for 35<sup>o</sup>C-45<sup>o</sup>C range
End of explanation
ls -lah ../data
Explanation: The resulting .txt files are saved in the same directory as the starting fasta file
End of explanation |
4,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Classes and Object Oriented Programming
In an earlier section we discussed classes as a way of representing an abstract object, such as a polynomial. The resulting code
Step2: allowed polynomials to be created, displayed, and multiplied together. However, the language is a little cumbersome. We can take advantage of a number of useful features of Python, many of which carry over to other programming languages, to make it easier to use the results.
Remember that the __init__ function is called when a variable is created. There are a number of special class functions, each of which has two underscores before and after the name. This is another Python convention that is effectively a rule
Step4: Another special function that is very useful is __repr__. This gives a representation of the class. In essence, if you ask Python to print a variable, it will print the string returned by the __repr__ function. This was the role played by our display method, so we can just change the name of the function, making the Polynomial class easier to use. We can use this to create a simple string representation of the polynomial
Step6: The final special function we'll look at (although there are many more, many of which may be useful) is __mul__. This allows Python to multiply two variables together. We did this before using the multiply method, but by using the __mul__ method we can multiply together two polynomials using the standard * operator. With this we can take the product of two polynomials
Step8: We now have a simple class that can represent polynomials and multiply them together, whilst printing out a simple string form representing itself. This can obviously be extended to be much more useful.
Inheritance
As we can see above, building a complete class from scratch can be lengthy and tedious. If there is another class that does much of what we want, we can build on top of that. This is the idea behind inheritance.
In the case of the Polynomial we declared that it started from the object class in the first line defining the class
Step9: Variables of the Monomial class are also variables of the Polynomial class, so can use all the methods and functions from the Polynomial class automatically
Step11: We note that these functions, methods and variables may not be exactly right, as they are given for the general Polynomial class, not by the specific Monomial class. If we redefine these functions and variables inside the Monomial class, they will override those defined in the Polynomial class. We do not have to override all the functions and variables, just the parts we want to change
Step12: This has had no effect on the original Polynomial class and variables, which can be used as before
Step13: And, as Monomial variables are Polynomials, we can multiply them together to get a Polynomial
Step15: In fact, we can be a bit smarter than this. Note that the __init__ function of the Monomial class is identical to that of the Polynomial class, just with the leading_term set explicitly to 1. Rather than duplicating the code and modifying a single value, we can call the __init__ function of the Polynomial class directly. This is because the Monomial class is built on the Polynomial class, so knows about it. We regenerate the class, but only change the __init__ function | Python Code:
class Polynomial(object):
Representing a polynomial.
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def display(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def multiply(self, other):
roots = self.roots + other.roots
leading_term = self.leading_term * other.leading_term
return Polynomial(roots, leading_term)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
Explanation: Classes and Object Oriented Programming
In an earlier section we discussed classes as a way of representing an abstract object, such as a polynomial. The resulting code
End of explanation
p_roots = (1, 2, -3)
p_leading_term = 2
p = Polynomial(p_roots, p_leading_term)
p.explain_to("Alice")
q = Polynomial((1,1,0,-2), -1)
q.explain_to("Bob")
Explanation: allowed polynomials to be created, displayed, and multiplied together. However, the language is a little cumbersome. We can take advantage of a number of useful features of Python, many of which carry over to other programming languages, to make it easier to use the results.
Remember that the __init__ function is called when a variable is created. There are a number of special class functions, each of which has two underscores before and after the name. This is another Python convention that is effectively a rule: functions surrounded by two underscores have special effects, and will be called by other Python functions internally. So now we can create a variable that represents a specific polynomial by storing its roots and the leading term:
End of explanation
class Polynomial(object):
Representing a polynomial.
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def __repr__(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
p = Polynomial(p_roots, p_leading_term)
print(p)
q = Polynomial((1,1,0,-2), -1)
print(q)
Explanation: Another special function that is very useful is __repr__. This gives a representation of the class. In essence, if you ask Python to print a variable, it will print the string returned by the __repr__ function. This was the role played by our display method, so we can just change the name of the function, making the Polynomial class easier to use. We can use this to create a simple string representation of the polynomial:
End of explanation
class Polynomial(object):
Representing a polynomial.
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def __repr__(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def __mul__(self, other):
roots = self.roots + other.roots
leading_term = self.leading_term * other.leading_term
return Polynomial(roots, leading_term)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
p = Polynomial(p_roots, p_leading_term)
q = Polynomial((1,1,0,-2), -1)
r = p*q
print(r)
Explanation: The final special function we'll look at (although there are many more, many of which may be useful) is __mul__. This allows Python to multiply two variables together. We did this before using the multiply method, but by using the __mul__ method we can multiply together two polynomials using the standard * operator. With this we can take the product of two polynomials:
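As one example of those "many more" special functions, a __call__ method would let us evaluate a polynomial at a point using function-call syntax. This is only an illustrative sketch and is not used in the rest of the notebook:
```python
class CallablePolynomial(Polynomial):
    def __call__(self, x):
        # evaluate leading_term * (x - root_1) * ... * (x - root_n)
        result = self.leading_term
        for root in self.roots:
            result = result * (x - root)
        return result

c = CallablePolynomial(p_roots, p_leading_term)
print(c(0.5))   # value of 2(x - 1)(x - 2)(x + 3) at x = 0.5
```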
End of explanation
class Monomial(Polynomial):
Representing a monomial, which is a polynomial with leading term 1.
def __init__(self, roots):
self.roots = roots
self.leading_term = 1
self.order = len(roots)
Explanation: We now have a simple class that can represent polynomials and multiply them together, whilst printing out a simple string form representing itself. This can obviously be extended to be much more useful.
Inheritance
As we can see above, building a complete class from scratch can be lengthy and tedious. If there is another class that does much of what we want, we can build on top of that. This is the idea behind inheritance.
In the case of the Polynomial we declared that it started from the object class in the first line defining the class: class Polynomial(object). But we can build on any class, by replacing object with something else. Here we will build on the Polynomial class that we've started with.
A monomial is a polynomial whose leading term is simply 1. A monomial is a polynomial, and could be represented as such. However, we could build a class that knows that the leading term is always 1: there may be cases where we can take advantage of this additional simplicity.
We build a new monomial class as follows:
End of explanation
m = Monomial((-1, 4, 9))
m.explain_to("Caroline")
print(m)
Explanation: Variables of the Monomial class are also variables of the Polynomial class, so can use all the methods and functions from the Polynomial class automatically:
End of explanation
class Monomial(Polynomial):
Representing a monomial, which is a polynomial with leading term 1.
explanation = "I am a monomial"
def __init__(self, roots):
self.roots = roots
self.leading_term = 1
self.order = len(roots)
def __repr__(self):
string = ""
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
m = Monomial((-1, 4, 9))
m.explain_to("Caroline")
print(m)
Explanation: We note that these functions, methods and variables may not be exactly right, as they are given for the general Polynomial class, not by the specific Monomial class. If we redefine these functions and variables inside the Monomial class, they will override those defined in the Polynomial class. We do not have to override all the functions and variables, just the parts we want to change:
End of explanation
s = Polynomial((2, 3), 4)
s.explain_to("David")
print(s)
Explanation: This has had no effect on the original Polynomial class and variables, which can be used as before:
End of explanation
t = m*s
t.explain_to("Erik")
print(t)
Explanation: And, as Monomial variables are Polynomials, we can multiply them together to get a Polynomial:
End of explanation
class Monomial(Polynomial):
Representing a monomial, which is a polynomial with leading term 1.
explanation = "I am a monomial"
def __init__(self, roots):
Polynomial.__init__(self, roots, 1)
def __repr__(self):
string = ""
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
v = Monomial((2, -3))
v.explain_to("Fred")
print(v)
Explanation: In fact, we can be a bit smarter than this. Note that the __init__ function of the Monomial class is identical to that of the Polynomial class, just with the leading_term set explicitly to 1. Rather than duplicating the code and modifying a single value, we can call the __init__ function of the Polynomial class directly. This is because the Monomial class is built on the Polynomial class, so knows about it. We regenerate the class, but only change the __init__ function:
End of explanation |
4,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab
Step1: Download the MNIST dataset using the module bundled with TensorFlow.
Step2: Prepare the Tensors that will be the inputs to the neural network with tf.placeholder.
Because the weights are updated with randomly sampled data during training, we use tf.placeholder so that the data to be fed can be changed later.
We also specify the Tensor shape as [None, 784] so that we can freely decide later how many examples to pass at a time.
In TensorFlow, unknown dimensions of a tf.placeholder shape can be given as None.
Note, however, that some operations cannot be executed unless the Tensor shape is fully defined.
Step3: The code below uses tf.layers to create the parts that correspond to the nodes and edges of the neural network.
tf.layers.dense is a function that adds a standard fully connected layer.
Step4: Define cross entropy as the loss function.
Step5: Although it is not strictly needed for training, prepare an operation that computes the accuracy.
Step6: Pass the cross_entropy we want minimized and create an operation that updates the tf.Variables with gradient descent.
When using tf.layers the tf.Variables are hidden, but behind the scenes the parts corresponding to the edges of the network (the weights) are created as tf.Variables and added to the computation graph.
Step7: Create an operation that initializes the tf.Variables.
Step8: Once the computation graph is built, all that is left is to pick operations (nodes) and run them.
Run train_op repeatedly while feeding randomly sampled data to the tf.placeholders. | Python Code:
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
print(tf.__version__)
Explanation: Lab: tf.layers
tf.layers lets you describe neural networks flexibly while hiding the matrix operations and the existence of Variables.
In TensorFlow v1.0 it moved out of contrib and became a stable module that rarely changes.
It offers a good balance between convenience and flexibility, and is a recommended way of writing models.
End of explanation
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Download the MNIST dataset using the module bundled with TensorFlow.
End of explanation
x_ph = tf.placeholder(tf.float32, [None, 784])
y_ph = tf.placeholder(tf.float32, [None, 10])
Explanation: Prepare the Tensors that will be the inputs to the neural network with tf.placeholder.
Because the weights are updated with randomly sampled data during training, we use tf.placeholder so that the data to be fed can be changed later.
We also specify the Tensor shape as [None, 784] so that we can freely decide later how many examples to pass at a time.
In TensorFlow, unknown dimensions of a tf.placeholder shape can be given as None.
Note, however, that some operations cannot be executed unless the Tensor shape is fully defined.
End of explanation
hidden = tf.layers.dense(x_ph, 20)
logits = tf.layers.dense(hidden, 10)
y = tf.nn.softmax(logits)
Explanation: The code below uses tf.layers to create the parts that correspond to the nodes and edges of the neural network.
tf.layers.dense is a function that adds a standard fully connected layer.
End of explanation
cross_entropy = -tf.reduce_mean(y_ph * tf.log(y))
Explanation: Define cross entropy as the loss function.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_ph, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Although it is not strictly needed for training, prepare an operation that computes the accuracy.
End of explanation
train_op = tf.train.GradientDescentOptimizer(1e-1).minimize(cross_entropy)
Explanation: Pass the cross_entropy we want minimized and create an operation that updates the tf.Variables with gradient descent.
When using tf.layers the tf.Variables are hidden, but behind the scenes the parts corresponding to the edges of the network (the weights) are created as tf.Variables and added to the computation graph.
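One quick optional check (not required for the lab) is to list the trainable variables that tf.layers.dense registered in the graph behind the scenes:
```python
# each dense layer contributes a kernel (weight matrix) and a bias variable
for v in tf.trainable_variables():
    print(v.name, v.shape)
```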
End of explanation
init_op = tf.global_variables_initializer()
Explanation: Create an operation that initializes the tf.Variables.
End of explanation
with tf.Session() as sess:
sess.run(init_op)
for i in range(3001):
x_train, y_train = mnist.train.next_batch(100)
sess.run(train_op, feed_dict={x_ph: x_train, y_ph: y_train})
if i % 100 == 0:
train_loss = sess.run(cross_entropy, feed_dict={x_ph: x_train, y_ph: y_train})
test_loss = sess.run(cross_entropy, feed_dict={x_ph: mnist.test.images, y_ph: mnist.test.labels})
tf.logging.info("Iteration: {0} Training Loss: {1} Test Loss: {2}".format(i, train_loss, test_loss))
test_accuracy = sess.run(accuracy, feed_dict={x_ph: mnist.test.images, y_ph: mnist.test.labels})
tf.logging.info("Accuracy: {}".format(test_accuracy))
Explanation: Once the computation graph is built, all that is left is to pick operations (nodes) and run them.
Run train_op repeatedly while feeding randomly sampled data to the tf.placeholders.
End of explanation |
4,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 1a
Step2: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
<h2> Explore data </h2>
The data is natality data (record of births in the US). The goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. We'll first create a SQL query using the natality data after the year 2000.
Step3: Let's create a BigQuery client that we can use throughout the notebook.
Step4: Let's now examine the result of a BigQuery call in a Pandas DataFrame using our newly created client.
Step6: First, let's get the set of all valid column names in the natality dataset. We can do this by accessing the INFORMATION_SCHEMA for the table from the dataset.
Step7: We can print our valid columns set to see all of the possible columns we have available in the dataset. Of course, you could also find this information by going to the Schema tab when selecting the table in the BigQuery UI.
Step10: Lab Task #1
Step12: Lab Task #2
Step13: Make a bar plot to see is_male with avg_wt linearly scaled and num_babies logarithmically scaled.
Step14: Make a bar plot to see mother_age with avg_wt linearly scaled and num_babies linearly scaled.
Step15: Make a bar plot to see plurality with avg_wt linearly scaled and num_babies logarithmically scaled.
Step16: Make a bar plot to see gestation_weeks with avg_wt linearly scaled and num_babies logarithmically scaled. | Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
from google.cloud import bigquery
Explanation: LAB 1a: Exploring natality dataset.
Learning Objectives
Use BigQuery to explore natality dataset
Use Cloud AI Platform Notebooks to plot data explorations
Introduction
In this notebook, we will explore the natality dataset before we begin model development and training to predict the weight of a baby before it is born. We will use BigQuery to explore the data and use Cloud AI Platform Notebooks to plot data explorations.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(YEAR AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
Explanation: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
<h2> Explore data </h2>
The data is natality data (record of births in the US). The goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. We'll first create a SQL query using the natality data after the year 2000.
End of explanation
bq = bigquery.Client()
Explanation: Let's create a BigQuery client that we can use throughout the notebook.
End of explanation
# Call BigQuery and examine in dataframe
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: Let's now examine the result of a BigQuery call in a Pandas DataFrame using our newly created client.
End of explanation
# Query to get all column names within table schema
sql =
SELECT
column_name
FROM
publicdata.samples.INFORMATION_SCHEMA.COLUMNS
WHERE
table_name = "natality"
# Send query through BigQuery client and store output to a dataframe
valid_columns_df = bq.query(sql).to_dataframe()
# Convert column names in dataframe to a set
valid_columns_set = valid_columns_df["column_name"].tolist()
Explanation: First, let's get the set of all valid column names in the natality dataset. We can do this by accessing the INFORMATION_SCHEMA for the table from the dataset.
End of explanation
print(valid_columns_set)
Explanation: We can print our valid columns set to see all of the possible columns we have available in the dataset. Of course, you could also find this information by going to the Schema tab when selecting the table in the BigQuery UI.
End of explanation
# TODO: Create function that gets distinct value statistics from BigQuery
def get_distinct_values(valid_columns_set, column_name):
Gets distinct value statistics of BigQuery data column.
Args:
valid_columns_set: set, the set of all possible valid column names in
table.
column_name: str, name of column in BigQuery.
Returns:
Dataframe of unique values, their counts, and averages.
assert column_name in valid_columns_set, (
"{column_name} is not a valid column_name".format(
column_name=column_name))
sql =
pass
Explanation: Lab Task #1: Use BigQuery to explore natality dataset.
Using the above code as an example, write a query to find the unique values for each of the columns and the count of those values for babies born after the year 2000.
For example, we want to get these values:
<pre>
is_male num_babies avg_wt
False 16245054 7.104715
True 17026860 7.349797
</pre>
This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
Hint (highlight to see): <p style='color:white'>Use COUNT(), AVG() and GROUP BY. For example:
<pre style='color:white'>
SELECT
is_male,
COUNT(1) AS num_babies,
AVG(weight_pounds) AS avg_wt
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY
is_male
</pre>
</p>
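If you get stuck, one possible completion consistent with the hint above looks like the sketch below; the _sketch suffix is only used so it does not clobber your own answer, and it reuses the bq client created earlier:
```python
def get_distinct_values_sketch(valid_columns_set, column_name):
    assert column_name in valid_columns_set, (
        "{column_name} is not a valid column_name".format(column_name=column_name))
    sql = """
    SELECT
        {column_name},
        COUNT(1) AS num_babies,
        AVG(weight_pounds) AS avg_wt
    FROM
        publicdata.samples.natality
    WHERE
        year > 2000
    GROUP BY
        {column_name}
    """.format(column_name=column_name)
    return bq.query(sql).to_dataframe()
```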
End of explanation
# TODO: Create function that plots distinct value statistics from BigQuery
def plot_distinct_values(valid_columns_set, column_name, logy=False):
Plots distinct value statistics of BigQuery data column.
Args:
valid_columns_set: set, the set of all possible valid column names in
table.
column_name: str, name of column in BigQuery.
logy: bool, if plotting counts in log scale or not.
pass
Explanation: Lab Task #2: Use Cloud AI Platform Notebook to plot explorations.
Which factors seem to play a part in the baby's weight?
<b>Bonus:</b> Draw graphs to illustrate your conclusions
Hint (highlight to see):
<p style='color:white'># TODO: Reusing the get_distinct_values function you just implemented, create function that plots distinct value statistics from BigQuery
Hint (highlight to see): <p style='color:white'> The simplest way to plot is to use Pandas' built-in plotting capability
<pre style='color:white'>
df = get_distinct_values(valid_columns_set, column_name)
df = df.sort_values(column_name)
df.plot(x=column_name, y="num_babies", kind="bar", figsize=(12, 5))
df.plot(x=column_name, y="avg_wt", kind="bar", figsize=(12, 5))
</pre>
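And a matching sketch for the plotting helper, following the Pandas hint above (again only a reference - write your own version first):
```python
def plot_distinct_values_sketch(valid_columns_set, column_name, logy=False):
    df = get_distinct_values_sketch(valid_columns_set, column_name)
    df = df.sort_values(column_name)
    df.plot(x=column_name, y="num_babies", kind="bar", logy=logy, figsize=(12, 5))
    df.plot(x=column_name, y="avg_wt", kind="bar", figsize=(12, 5))
```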
End of explanation
# TODO: Plot is_male
Explanation: Make a bar plot to see is_male with avg_wt linearly scaled and num_babies logarithmically scaled.
End of explanation
# TODO: Plot mother_age
Explanation: Make a bar plot to see mother_age with avg_wt linearly scaled and num_babies linearly scaled.
End of explanation
# TODO: Plot plurality
Explanation: Make a bar plot to see plurality with avg_wt linearly scaled and num_babies logarithmically scaled.
End of explanation
# TODO: Plot gestation_weeks
Explanation: Make a bar plot to see gestation_weeks with avg_wt linearly scaled and num_babies logarithmically scaled.
End of explanation |
4,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise
Step1: Let's start by taking a look at the rotor function, to see what we are going to need and which parameters we will work with.
Step2: We can draw a few curves to see what our results will look like. For example, how the characteristics of the propeller change with flight speed, for an example propeller spinning at a given speed.
Step3: Defining the genome
Let's define a generic individual
Step4: Our rotor depends on several parameters, but in general we will look for the optimal value of some while keeping the others under control. For example, the forward speed and the altitude are normally imposed, since we will want to optimize for a given flight speed and altitude.
In our algorithm, we will use the optimization parameters as the genome, and we will control the circumstantial variables by hand.
Suggestion (this is one way to organize the variables, but you can choose another)
Optimization parameters
Step5: Next we will create a dictionary of genes. In it we will store the names of the parameters and the number of bits we will use to define them. The more bits, the more resolution
E.g.
Step6: Now we will create a function that fills these genomes with random data
Step7: Working with the individual
Now we need a function that turns those genes into meaningful values. Each gene is a binary number whose value will be between 0 and 2 ^ n, with n the number of bits we chose. These translated variables will be stored in another dictionary, already with their values. These genes are not floating around on their own; they are stored inside the individual they belong to, so the function must be able to extract them from the individual and store the results back inside the individual.
Step8: The next step is to use these traits (parameters) to compute the performances of the motor. This is where the model of the motor itself comes in.
Step9: Let's check that everything works!
Step10: The last step we have to carry out on the individual is one of the most critical ones
Step11: We are done with everything we need at the individual level!
Let the Games begin!
It is time to work at the algorithm level, and for that, the first thing is to create a society made up of random individuals. Let's define a function for that.
Step12: Now we can create our society
Step13: We already have our small society, let's grow it a bit more by breeding the citizens with the best fitness! We are going to extend our population by mixing the genomes of other individuals. Individuals with better fitness are more likely to reproduce. In addition, we will introduce slight random mutations in the new individuals.
Step14: Now that we have a large society, it is time for "natural" selection to act
Step15: Our algorithm is practically finished! | Python Code:
%matplotlib inline
import numpy as np # We will work with arrays
import matplotlib.pyplot as plt # And we are going to draw plots
from optrot.rotor import calcular_rotor # This is the function we will use to compute the rotor
import random as random # We will need random numbers
Explanation: Exercise: A genetic algorithm to optimize a rotor or propeller, step by step
The problem
Often, in engineering, when we face a problem we cannot solve it directly or isolate the solution as in the simple problems typical of mathematics or classical physics. A very common way in which we encounter problems is in the form of a simulation: we have a set of parameters and a model, and we can simulate it to obtain its characteristics, but without any explicit formula relating parameters and results that would let us obtain an inverse function.
In this exercise we will consider a problem of that kind: we have a function that computes the properties of a propeller as a function of a set of parameters, but we do not know the calculations it performs internally. To us, it is a black box.
To optimize, we will gradually rebuild the genetic algorithm functions that were covered in the theory part.
End of explanation
help(calcular_rotor)
Explanation: Let's start by taking a look at the rotor function, to see what we are going to need and which parameters we will work with.
End of explanation
vel = np.linspace(0, 30, 100)
efic = np.zeros_like(vel)
T = np.zeros_like(vel)
P = np.zeros_like(vel)
mach = np.zeros_like(vel)
for i in range(len(vel)):
T[i], P[i], efic[i], mach[i] = calcular_rotor(130, vel[i], 0.5, 3)
plt.plot(vel, T)
plt.title('Tracción de la hélice')
plt.plot(vel, P)
plt.title('Potencia consumida')
plt.plot(vel, efic)
plt.title('Eficiencia de la hélice')
plt.plot(vel, mach)
plt.title('Mach en la punta de las palas')
Explanation: We can draw a few curves to see what our results will look like. For example, how the characteristics of the propeller change with flight speed, for an example propeller spinning at a given speed.
End of explanation
class Individual (object):
def __init__(self, genome):
self.genome = genome
self.traits = {}
self.performances = {}
self.fitness = 0
Explanation: Defining the genome
Let's define a generic individual: each individual will be a possible rotor design, with a given set of characteristics.
End of explanation
15 * np.pi / 180
Explanation: Our rotor depends on several parameters, but in general we will look for the optimal value of some while keeping the others under control. For example, the forward speed and the altitude are normally imposed, since we will want to optimize for a given flight speed and altitude.
In our algorithm, we will use the optimization parameters as the genome, and we will control the circumstantial variables by hand.
Suggestion (this is one way to organize the variables, but you can choose another)
Optimization parameters:
omega (rotation speed) (between 0 and 200 radians/second)
R (propeller radius) (between 0.1 and 2 meters)
b (number of blades) (between 2 and 5 blades)
theta0 (collective pitch angle) (between -0.26 and 0.26 radians)(corresponding to -15 and 15 degrees)
p (twist parameter) (between -5 and 20 degrees)
cuerda (blade chord width) (between 0.01 and 0.2 meters)
Circumstantial parameters:
vz (flight speed)
h (flight altitude)
Variables that will be kept fixed
twist law (hyperbolic)
chord params format: a single number, so that the width is constant along the blade
End of explanation
#Fill in this dictionary with the variables you have chosen and the number of bits you will use
dict_genes = {
'omega' : 10,
'R': 10,
'b': 2
}
Explanation: Next we will create a dictionary of genes. In it we will store the names of the parameters and the number of bits we will use to define them. The more bits, the more resolution
E.g.: 1 bit : 2 values, 2 bits : 4 values, 10 bits : 1024 values
End of explanation
def generate_genome (dict_genes):
    #Compute the total number of bits with a loop over the dictionary
    n_bits = ?
    #Generate a random array of 1s and 0s of that length with numpy
    genome = np.random.randint(0, 2, n_bits)
    #Turn the array into a list before returning it
    return list(genome)
# We can try out our function, to see what the DNA of a rotor looks like:
generate_genome(dict_genes)
Explanation: Now we will create a function that fills these genomes with random data (one possible completion is sketched right after this cell):
End of explanation
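As a hint for the blank above, here is a minimal sketch of one way to complete generate_genome. It assumes dict_genes maps each gene name to its number of bits, exactly as defined earlier:
def generate_genome(dict_genes):
    # The total genome length is the sum of the bit counts of all genes
    n_bits = sum(dict_genes[gen] for gen in dict_genes)
    # Random array of 0s and 1s of that length
    genome = np.random.randint(0, 2, n_bits)
    # Return it as a plain Python list
    return list(genome)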
def calculate_traits (individual, dict_genes):
    genome = individual.genome
    integer_temporal_list = []
    for gen in dict_genes:  #Loop over the gene dictionary to decode the binary genome
        ???  #Find the bits that correspond to the gene in question
        ???  #Convert the binary list into an integer
        integer_temporal_list.append(??)  #Append the integer to the list
    # Turn each integer into a physically meaningful variable:
    # For example, the integer for Omega lies between 0 and 1023 (10 bits),
    # but the real Omega variable should lie between 0 and 200 radians per second:
    omega = integer_temporal_list[0] * 200 / 1023
    #likewise, for R:
    R = 0.1 + integer_temporal_list[1] * 1.9 / 1023  #This gives a radius between 0.1 and 2 meters
    #The number of blades must be an integer, so be careful:
    b = integer_temporal_list[2] + 2  #(between 2 and 5 blades)
    #Carry on with the rest of the variables you have chosen!
    dict_traits = {  #Here we store the traits, i.e. the decoded parameters
        'omega' : omega,
        'R': R
    }
    individual.traits = dict_traits  #Finally, store the traits in the individual
Explanation: Working with the individual
We now need a function that turns those genes into meaningful values. Each gene is a binary number whose value lies between 0 and 2^n - 1, where n is the number of bits we chose for it. We will store these decoded variables, now with their real values, in another dictionary. These genes are not floating around loose: they are stored inside the individual they belong to, so the function must be able to extract them from the individual and, in turn, store the results back inside the individual. A sketch of the binary-to-integer decoding is given right after this cell.
End of explanation
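As a hint for the ??? blanks above, this is a minimal sketch of one way to slice the genome and decode each gene into an integer. It assumes the genes are stored consecutively in the genome, in the same order as dict_genes:
def genome_to_integers(genome, dict_genes):
    # Decode a flat list of bits into one integer per gene
    integers = []
    start = 0
    for gen in dict_genes:
        n_bits = dict_genes[gen]
        bits = genome[start:start + n_bits]            # bits belonging to this gene
        value = int(''.join(str(b) for b in bits), 2)  # binary list -> integer
        integers.append(value)
        start += n_bits
    return integers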
def calculate_performances (individual):
    dict_traits = individual.traits
    #We can set our flight conditions here, or pass them as arguments to the function
    h = 2000  #Flight altitude in meters
    vz = 70   #Forward speed in m/s, about 250 km/h
    #Extract the traits from the dictionary:
    omega = dict_traits['omega']
    R = dict_traits['R']
    #... etc
    T, P, efic, mach_punta = calcular_rotor(omega, vz, R, b, h...)  #Pass in the variables you use as parameters.
                                                                    # Check the help to make sure you use the
                                                                    # correct format!
    dict_perfo = {
        'T' : T,                  #Propeller thrust
        'P' : P,                  #Power consumed by the propeller
        'efic': efic,             #Propulsive efficiency of the propeller
        'mach_punta': mach_punta  #Mach number at the blade tips
    }
    individual.performances = dict_perfo
Explanation: The next step is to use these traits (parameters) to compute the performances (the resulting characteristics) of the rotor. This is where the rotor model itself comes in.
End of explanation
individuo = Individual(generate_genome(dict_genes))
calculate_traits(individuo, dict_genes)
calculate_performances(individuo)
print(individuo.traits)
print(individuo.performances)
Explanation: Let's check that everything works!
End of explanation
def calculate_fitness (individual):
    dict_traits = individual.traits
    dict_performances = individual.performances
    fitness = ?????  #Be Creative!
    individual.fitness = fitness
Explanation: The last step we need to perform on the individual is one of the most critical: turning the performances into a single value (fitness) that expresses how good the individual is with respect to the optimization goal. The fitness function may depend on both parameters (traits) and performances, depending on what we want to optimize.
For example, if we only wanted maximum thrust without caring about anything else, the fitness value would simply be equal to T:
fitness = T
If we want to impose constraints, for example that the power must stay below 1000 watts, we can add statements such as:
if P > 1000:
    fitness -= 1000
We can make the fitness depend on several parameters in a weighted way:
fitness = important_parameter * 10 + less_important_parameter * 0.5
We can also combine different nonlinear functions:
fitness = parameter_1 * parameter_2 - parameter_3 ** 2 * log(parameter_4)
Now it's your turn to be creative! Choose the objective you want to optimize the propeller for!
Suggestions for possible optimization objectives (a hedged example fitness function is sketched right after this cell):
Smallest possible radius, while keeping a minimum thrust of 30 Newtons
Lowest possible power, maximum efficiency, and, to a lesser extent, smallest possible radius, while keeping a minimum thrust of 40 Newtons and a blade-tip Mach number of at most 0.7
Lowest possible power and maximum efficiency when flying at 70 m/s, thrust greater than 50 Newtons at take-off (vz = 0), lowest possible weight (computed from the radius, number and width of the blades) (you may have to rewrite the performances function and dictionary!)
End of explanation
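A minimal sketch of one possible fitness function, following the first suggested objective above (smallest possible radius while keeping at least 30 N of thrust). The penalty value is an arbitrary assumption, not part of the original exercise:
def calculate_fitness(individual):
    traits = individual.traits
    perfo = individual.performances
    # Objective: smallest possible radius -> reward small R
    fitness = -traits['R']
    # Constraint: heavily penalize designs producing less than 30 N of thrust
    if perfo['T'] < 30:
        fitness -= 1000
    individual.fitness = fitness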
def immigration (society, target_population, dict_genes):
    while len(society) < target_population:
        new_individual = Individual (generate_genome (dict_genes))  # Generate a random individual
        calculate_traits (new_individual, dict_genes)   # Compute its traits
        calculate_performances (new_individual)         # Compute its performances
        calculate_fitness (new_individual)              # Compute its fitness
        society.append (new_individual)                 # Our new citizen is ready to join the group!
Explanation: We now have everything we need at the individual level!
Let the Games begin!
It is time to work at the algorithm level, and the first step is to create a society made up of random individuals. Let's define a function for that.
End of explanation
society = []
immigration (society, 12, dict_genes)  #12 as an example; it can be any number
#Let's see what the genes of the population look like
plt.matshow([individual.genome for individual in society], cmap=plt.cm.gray)
Explanation: Now we can create our society:
End of explanation
#This function was taken from Eli Bendersky's website
#It returns an index of a list called "weights",
#where the content of each element in "weights" is the probability of this index to be returned.
#For this function to be as fast as possible we need to pass it a list of weights in descending order.
def weighted_choice_sub(weights):
rnd = random.random() * sum(weights)
for i, w in enumerate(weights):
rnd -= w
if rnd < 0:
return i
def crossover (society, reproduction_rate, mutation_rate):
#First we create a list with the fitness values of every individual in the society
fitness_list = [individual.fitness for individual in society]
#We sort the individuals in the society in descending order of fitness.
society_sorted = [x for (y, x) in sorted(zip(fitness_list, society), key=lambda x: x[0], reverse=True)]
#We then create a list of relative probabilities in descending order,
#so that the fittest individual in the society has N times more chances to reproduce than the least fit,
#where N is the number of individuals in the society.
probability = [i for i in reversed(range(1,len(society_sorted)+1))]
#We create a list of weights with the probabilities of non-mutation and mutation
mutation = [1 - mutation_rate, mutation_rate]
#For every new individual to be created through reproduction:
for i in range (int(len(society) * reproduction_rate)):
#We select two parents randomly, using the list of probabilities in "probability".
father, mother = society_sorted[weighted_choice_sub(probability)], society_sorted[weighted_choice_sub(probability)]
#We randomly select two cutting points for the genome.
a, b = random.randrange(0, len(father.genome)), random.randrange(0, len(father.genome))
#And we create the genome of the child putting together the genome slices of the parents in the cutting points.
child_genome = father.genome[0:min(a,b)]+mother.genome[min(a,b):max(a,b)]+father.genome[max(a,b):]
#For every bit in the not-yet-born child, we generate a list containing
#1's in the positions where the genome must mutate (i.e. the bit must switch its value)
#and 0's in the positions where the genome must stay the same.
n = [weighted_choice_sub(mutation) for ii in range(len(child_genome))]
#This line switches the bits of the genome of the child that must mutate.
mutant_child_genome = [abs(n[i] - child_genome[i]) for i in range(len(child_genome))]
#We finally append the newborn individual to the society
newborn = Individual(mutant_child_genome)
calculate_traits (newborn, dict_genes)
calculate_performances (newborn)
calculate_fitness (newborn)
society.append(newborn)
Explanation: We already have our small society; let's grow it a bit more by mixing the citizens with the best fitness! We will extend the population by combining the genomes of existing individuals. Individuals with better fitness are more likely to reproduce. In addition, we will introduce slight random mutations into the new individuals.
End of explanation
def tournament(society, target_population):
while len(society) > target_population:
fitness_list = [individual.fitness for individual in society]
society.pop(fitness_list.index(min(fitness_list)))
Explanation: Now that we have a large society, it is time for "natural" selection to act: we will remove the individuals with the worst fitness from the society until we reach a target population.
End of explanation
society = []
fitness_max = []
for generation in range(30):
    immigration (society, 100, dict_genes)  #Add random individuals to the society until it has 100
    fitness_max += [max([individual.fitness for individual in society])]
    tournament (society, 15)     #Make them compete until only 15 remain
    crossover(society, 5, 0.05)  #The winners reproduce, adding 75 offspring
plt.plot(fitness_max)
plt.title('Evolution of the fitness value')
tournament (society, 1)  #Keep only the best of them all
winner = society[0]
print(winner.traits)  #Check its characteristics
print(winner.performances)
Explanation: Our algorithm is now practically finished!
End of explanation |
4,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ant Colony Optimization (ACO) Algorithm
As we saw in the theory part, the traveling salesman problem is a classic problem
Step1: The first thing we will do is create a map containing our cities. We pass the number of cities as the first argument and the map size as the second. When created, the map will automatically generate the cities at random positions.
Step2: We can view the map with the cities joined by lines whose thickness depends on the distance between them
Step3: Next, what we have to do is create a swarm of ants. With this function, we can do it easily.
Step4: If we want to see where our ants are at a given moment, we can do so
Step5: We can easily check that the distance matrix is symmetric, and that the pheromone matrix is still empty
Step6: Let's start moving our ants!
To make the first generation of ants traverse the map, we will call the swarm_generation() function
Step7: Let's see how the pheromones have changed!
Step8: To find a good route, we will need a few more generations to go by... Let's say 50
Step9: It looks like the ants have already settled on their favorite paths!
Let's see what the best path they have found looks like
Step10: We can erase the pheromones and restart the algorithm, to see whether they always reach the same solution. Don't worry, the algorithm will not erase the best route found.
Step11: The algorithm may have found a better route; let's check
Step12: We can observe how the maximum, minimum and mean lengths of the ants' paths have varied in each generation, and compare them with the length of the best path found
Step13: We can also plot the minimum lengths of each run of the algorithm
Step14: Fine-tuning the pheromones
Now let's suppose we want to optimize a route between 40 cities
Step15: What's going on? There are no pheromones!
The solution is very simple | Python Code:
#Let's start by importing the required packages:
%matplotlib inline
import numpy as np               # We will use arrays
import matplotlib.pyplot as plt  # For plotting results
import ants as ants              # The algorithm's objects live here
Explanation: Ant Colony Optimization (ACO) Algorithm
As we saw in the theory part, the traveling salesman problem is a classic one:
Imagine a distribution of cities on a map. We are a salesman who wants to visit all of them, each exactly once, spending as little fuel as possible.
The algorithm is based on several successive generations of ants that traverse the map travelling from city to city, choosing their next city at random until they have visited them all. At each leg of the journey, the ants choose to move from one city to another according to the following rules (a sketch of the resulting transition rule is given right after this cell):
Each city must be visited exactly once, except the starting one, which is visited twice (departure and final arrival);
A distant city has a lower chance of being chosen (visibility);
The more intense the pheromone trail on an edge between two cities, the higher the probability that that edge is chosen;
After completing its tour, each ant deposits pheromones on all the edges it visited, in greater quantity the shorter the total distance travelled;
After each generation, some of the pheromones evaporate.
End of explanation
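The ants module used here is a black box, but the rules above correspond to the standard ACO transition rule, where the probability of an ant at city i choosing an unvisited city j grows with the pheromone level and shrinks with the distance. The sketch below only illustrates that rule with assumed inputs; it is not the module's actual implementation:
import numpy as np

def transition_probabilities(i, unvisited, tau, dist, alpha=1.0, beta=2.0):
    # tau: pheromone matrix, dist: distance matrix (square numpy arrays)
    # alpha and beta weight pheromone intensity vs. visibility (1/distance)
    weights = np.array([tau[i, j] ** alpha * (1.0 / dist[i, j]) ** beta
                        for j in unvisited])
    return weights / weights.sum()  # normalized selection probabilities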
map1 = ants.Mapa(10)
Explanation: The first thing we will do is create a map containing our cities. We pass the number of cities as the first argument and the map size as the second. When created, the map will automatically generate the cities at random positions.
End of explanation
map1.draw_distances()
Explanation: We can view the map with the cities joined by lines whose thickness depends on the distance between them:
End of explanation
map1.swarm_create(100)  # Create a swarm of 100 ants
Explanation: Next, what we have to do is create a swarm of ants. With this function, we can do it easily.
End of explanation
map1.swarm_show()
Explanation: If we want to see where our ants are at a given moment, we can do so:
End of explanation
map1.show_distances_matrix()
map1.show_feromones_matrix()
Explanation: We can easily check that the distance matrix is symmetric, and that the pheromone matrix is still empty:
End of explanation
map1.swarm_generation()
Explanation: Let's start moving our ants!
To make the first generation of ants traverse the map, we call the swarm_generation() function:
End of explanation
map1.show_feromones_matrix()
map1.draw_feromones()
Explanation: Let's see how the pheromones have changed!
End of explanation
for i in range(50):
print(i, end = '·')
map1.swarm_generation()
map1.show_feromones_matrix()
map1.draw_feromones()
Explanation: To find a good route, we will need a few more generations to go by... Let's say 50
End of explanation
map1.draw_best_path()
Explanation: It looks like the ants have already settled on their favorite paths!
Let's see what the best path they have found looks like:
End of explanation
for j in range(3):
map1.feromone_reset()
print()
    print('Run', j+1, ', generation: ')
for i in range(50):
print(i+1, end = '·')
map1.swarm_generation()
map1.draw_feromones()
Explanation: We can erase the pheromones and restart the algorithm, to see whether the ants always reach the same solution. Don't worry, the algorithm will not erase the best route found.
End of explanation
map1.draw_best_path()
Explanation: The algorithm may have found a better route; let's check:
End of explanation
map1.draw_results()
Explanation: We can observe how the maximum, minimum and mean lengths of the ants' paths have varied in each generation, and compare them with the length of the best path found:
End of explanation
map1.draw_best_results()
Explanation: We can also plot the minimum lengths of each run of the algorithm:
End of explanation
map2 = ants.Mapa(40)
map2.swarm_create(200)
map2.swarm_generation()
map2.show_feromones_matrix()
map2.draw_feromones()
Explanation: Fine-tuning the pheromones
Now let's suppose we want to optimize a route between 40 cities:
End of explanation
#A value of 5 is enough
map2.feromone_fine_tune()
map2.swarm_generation()
map2.show_feromones_matrix()
map2.draw_feromones()
for i in range(25):
print(i, end = '·')
map2.swarm_generation()
map2.show_feromones_matrix()
map2.draw_feromones()
map2.draw_best_path()
map2.draw_results()
Explanation: What's going on? There are no pheromones!
The solution is very simple: the "standard" ants are tuned for smaller maps, and they do not leave enough pheromone behind to keep it all from evaporating.
To fix it, we can adapt our ants to this environment:
End of explanation |
4,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2 - Spark SQL
This Lab will show you how to work with Spark SQL
Step 1
<h3>Getting started
Step1: Step 2
<h3>Download a JSON Recordset to work with</h3>
Let's download the data. We can run commands on the console of the server (or Docker image) that the notebook environment is using; to do so we simply put a "!" in front of the command that we want to run. For example
Step2: Step 3
<h3>Create a Dataframe</h3>
Now you can create the Dataframe, note that if you wanted to see where you downloaded the file you can run !pwd or !ls
To create the Dataframe type
Step3: <h3>We can look at the schema with this command
Step4: <h3>Dataframes work like RDDs, you can map, reduce, groupby, etc.
<br>Take a look at the first two rows of data using "take"</h3>
Step5: Step 4
<h3>Register a table</h3>
Using
DataframeObject.registerTempTable("name_of_table")
Create a table named "world_bank"
Step6: Step 5
<h3>Writing SQL Statements</h3>
Using SQL Get the first 2 records
sqlContext.sql("SQL Statement") will return a Dataframe with the records
Step7: Step 6
<h3>Creating simple graphs</h3>
Using Pandas we can create some simple visualizations.
First, create a SQL statement that returns a reasonable number of items.
For example, you can count the number of projects (rows) by countryname
<br>or in other words
Step8: Step 7
<h3>Creating a dataframe "manually" by adding a schema to an RDD</h3>
First, we need to create an RDD of pairs or triplets. This can be done using code (for loop) as
seen in the instructor's example, or more simply by assigning values to an array.
Step9: Use first the StructField method, following these steps | Python Code:
#Create the SQLContext
Explanation: Lab 2 - Spark SQL
This Lab will show you how to work with Spark SQL
Step 1
<h3>Getting started: Create a SQL Context</h3>
<b>Type:</b>
from pyspark.sql import SQLContext<br>
sqlContext = SQLContext(sc)
End of explanation
#enter the commands to remove and download file here
Explanation: Step 2
<h3>Download a JSON Recordset to work with</h3>
Let's download the data. We can run commands on the console of the server (or Docker image) that the notebook environment is using; to do so we simply put a "!" in front of the command that we want to run. For example:
!pwd
To get the data we will download a file to the environment. Simply run these two commands; the first just ensures that the file is removed if it already exists:
!rm world_bank.json.gz -f <br>
!wget https://raw.githubusercontent.com/bradenrc/sparksql_pot/master/world_bank.json.gz
End of explanation
#create the Dataframe here:
Explanation: Step 3
<h3>Create a Dataframe</h3>
Now you can create the Dataframe, note that if you wanted to see where you downloaded the file you can run !pwd or !ls
To create the Dataframe type:
example1_df = sqlContext.read.json("world_bank.json.gz")
End of explanation
#print out the schema
Explanation: <h3>We can look at the schema with this command:</h3>
example1_df.printSchema()
End of explanation
#Use take on the dataframe to pull out 2 rows
Explanation: <h3>Dataframes work like RDDs, you can map, reduce, groupby, etc.
<br>Take a look at the first two rows of data using "take"</h3>
End of explanation
#Create the table to be referenced via SQL
Explanation: Step 4
<h3>Register a table</h3>
Using
DataframeObject.registerTempTable("name_of_table")
Create a table named "world_bank"
End of explanation
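One possible completion of the cell above, using the exact call pattern described in this step (the DataFrame name example1_df comes from Step 3):
example1_df.registerTempTable("world_bank")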
#Use SQL to select from table limit 2 and print the output
#Extra credit, take the Dataframe you created with the two records and convert it into Pandas
#Now Calculate a Simple count based on a group, for example "regionname"
# With JSON data you can reference the nested data
# If you look at Schema above you can see that Sector.Name is a nested column
# Select that column and limit to reasonable output (like 2)
Explanation: Step 5
<h3>Writing SQL Statements</h3>
Using SQL, get the first 2 records.
sqlContext.sql("SQL Statement") will return a Dataframe with the records
End of explanation
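A possible completion for the first of the queries above, shown only as a sketch (the extra-credit and group-by queries follow the same pattern):
first_two = sqlContext.sql("select * from world_bank limit 2")
first_two.toPandas()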
# we need to tell the charting library (matplotlib) to display charts inline
# just run this paragraph
%matplotlib inline
import matplotlib.pyplot as plt, numpy as np
# first write the sql statment and look at the data, remember to add .toPandas() to have it look nice
# an even easier option is to create a variable and set it to the SQL statement
# for example:
# query = "select count(*) as Count, countryname from world_bank group by countryname"
# chart1_df = sqlContext.sql(query).toPandas()
# print chart1_df
# now take the variable (or same sql statement) and use the method:
# .plot(kind='bar', x='countryname', y='Count', figsize=(12, 5))
Explanation: Step 6
<h3>Creating simple graphs</h3>
Using Pandas we can create some simple visualizations.
First, create a SQL statement that returns a reasonable number of items.
For example, you can count the number of projects (rows) by countryname
<br>or in other words:
<br>count(*), countryname from table group by countryname
End of explanation
# Default array defined below. Feel free to change as desired.
array=[[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5]]
my_rdd = sc.parallelize(array)
my_rdd.collect()
Explanation: Step 7
<h3>Creating a dataframe "manually" by adding a schema to an RDD</h3>
First, we need to create an RDD of pairs or triplets. This can be done using code (for loop) as
seen in the instructor's example, or more simply by assigning values to an array.
End of explanation
from pyspark.sql.types import *
# The schema is encoded in a string. Complete the string below
schemaString = ""
# MissingType() should be either StringType() or IntegerType(). Please replace as required.
fields = [StructField(field_name, MissingType(), True) for field_name in schemaString.split()]
schema = StructType(fields)
# Apply the schema to the RDD.
schemaExample = sqlContext.createDataFrame(use_your_rdd_name_here, schema)
# Register the DataFrame as a table. Add table name below as parameter to registerTempTable.
schemaExample.registerTempTable("")
# Run some select statements on your newly created DataFrame and display the output
Explanation: First use the StructField method, following these steps:<br>
1- Define your schema columns as a string<br>
2- Build the schema object using StructField<br>
3- Apply the schema object to the RDD<br>
Note: The cell below is missing some code and will not run properly until the missing code has
been completed.
End of explanation |
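One possible way to fill in the missing pieces above; the column names are arbitrary assumptions, and IntegerType matches the integer triplets in the default array:
from pyspark.sql.types import *

schemaString = "col_a col_b col_c"
fields = [StructField(field_name, IntegerType(), True) for field_name in schemaString.split()]
schema = StructType(fields)
schemaExample = sqlContext.createDataFrame(my_rdd, schema)
schemaExample.registerTempTable("example_table")
sqlContext.sql("select col_a, col_b from example_table limit 5").toPandas()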
4,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OT for domain adaptation
This example introduces a domain adaptation problem in a 2D setting and the 4 OTDA
approaches currently supported in POT.
Step1: Generate data
Step2: Instantiate the different transport algorithms and fit them
Step3: Fig 1
Step4: Fig 2 | Python Code:
# Authors: Remi Flamary <[email protected]>
# Stanislas Chambon <[email protected]>
#
# License: MIT License
import matplotlib.pylab as pl
import ot
Explanation: OT for domain adaptation
This example introduces a domain adaptation problem in a 2D setting and the 4 OTDA
approaches currently supported in POT.
End of explanation
n_source_samples = 150
n_target_samples = 150
Xs, ys = ot.datasets.get_data_classif('3gauss', n_source_samples)
Xt, yt = ot.datasets.get_data_classif('3gauss2', n_target_samples)
Explanation: Generate data
End of explanation
# EMD Transport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport with Group lasso regularization
ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0)
ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt)
# Sinkhorn Transport with Group lasso regularization l1l2
ot_l1l2 = ot.da.SinkhornL1l2Transport(reg_e=1e-1, reg_cl=2e0, max_iter=20,
verbose=True)
ot_l1l2.fit(Xs=Xs, ys=ys, Xt=Xt)
# transport source samples onto target samples
transp_Xs_emd = ot_emd.transform(Xs=Xs)
transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=Xs)
transp_Xs_lpl1 = ot_lpl1.transform(Xs=Xs)
transp_Xs_l1l2 = ot_l1l2.transform(Xs=Xs)
Explanation: Instantiate the different transport algorithms and fit them
End of explanation
pl.figure(1, figsize=(10, 5))
pl.subplot(1, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Source samples')
pl.subplot(1, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Target samples')
pl.tight_layout()
Explanation: Fig 1 : plots source and target samples
End of explanation
param_img = {'interpolation': 'nearest', 'cmap': 'spectral'}
pl.figure(2, figsize=(15, 8))
pl.subplot(2, 4, 1)
pl.imshow(ot_emd.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nEMDTransport')
pl.subplot(2, 4, 2)
pl.imshow(ot_sinkhorn.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornTransport')
pl.subplot(2, 4, 3)
pl.imshow(ot_lpl1.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornLpl1Transport')
pl.subplot(2, 4, 4)
pl.imshow(ot_l1l2.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornL1l2Transport')
pl.subplot(2, 4, 5)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nEmdTransport')
pl.legend(loc="lower left")
pl.subplot(2, 4, 6)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_sinkhorn[:, 0], transp_Xs_sinkhorn[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nSinkhornTransport')
pl.subplot(2, 4, 7)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_lpl1[:, 0], transp_Xs_lpl1[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nSinkhornLpl1Transport')
pl.subplot(2, 4, 8)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_l1l2[:, 0], transp_Xs_l1l2[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nSinkhornL1l2Transport')
pl.tight_layout()
pl.show()
Explanation: Fig 2 : plot optimal couplings and transported samples
End of explanation |
4,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from Google.
Training
Download some data, for example
Step1: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
Step2: This will create a text8-phrases that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the original data as input for word2vec.
Train the model using the word2phrase output.
Step3: That generated a text8.bin file containing the word vectors in a binary format.
Do the clustering of the vectors based on the trained model.
Step4: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Predictions
Step5: Import the word2vec binary file created above
Step6: We can take a look at the vocabulary as a numpy array
Step7: Or take a look at the whole matrix
Step8: We can retrieve the vector of individual words
Step9: We can do simple queries to retrieve words similar to "socks" based on cosine similarity
Step10: This returned a tuple with 2 items
Step11: There is a helper function to create a combined response
Step12: It is easy to make that numpy array a pure python response
Step13: Phrases
Since we trained the model with the output of word2phrase we can ask for similarity of "phrases"
Step14: Analogies
It's possible to do more complex queries like analogies such as
Step15: Clusters
Step16: We can get the cluster number for individual words
Step17: We can get all the words grouped in a specific cluster
Step18: We can add the clusters to the word2vec model and generate a response that includes the clusters | Python Code:
import word2vec
Explanation: word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from Google.
Training
Download some data, for example: http://mattmahoney.net/dc/text8.zip
End of explanation
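A possible way to fetch and unpack the example corpus from the URL above, assuming wget and unzip are available in the notebook environment:
!wget http://mattmahoney.net/dc/text8.zip
!unzip text8.zip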
word2vec.word2phrase('./text8', './text8-phrases', verbose=True)
Explanation: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
End of explanation
word2vec.word2vec('./text8-phrases', './text8.bin', size=100, verbose=True)
Explanation: This will create a text8-phrases that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the original data as input for word2vec.
Train the model using the word2phrase output.
End of explanation
word2vec.word2clusters('./text8', './text8-clusters.txt', 100, verbose=True)
Explanation: That generated a text8.bin file containing the word vectors in a binary format.
Do the clustering of the vectors based on the trained model.
End of explanation
import word2vec
Explanation: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Predictions
End of explanation
model = word2vec.load('./text8.bin')
Explanation: Import the word2vec binary file created above
End of explanation
model.vocab
Explanation: We can take a look at the vocabulary as a numpy array
End of explanation
model.vectors.shape
model.vectors
Explanation: Or take a look at the whole matrix
End of explanation
model['dog'].shape
model['dog'][:10]
Explanation: We can retrieve the vector of individual words
End of explanation
indexes, metrics = model.cosine('socks')
indexes, metrics
Explanation: We can do simple queries to retrieve words similar to "socks" based on cosine similarity:
End of explanation
model.vocab[indexes]
Explanation: This returned a tuple with 2 items:
1. numpy array with the indexes of the similar words in the vocabulary
2. numpy array with cosine similarity to each word
It's possible to get the words at those indexes
End of explanation
model.generate_response(indexes, metrics)
Explanation: There is a helper function to create a combined response: a numpy record array
End of explanation
model.generate_response(indexes, metrics).tolist()
Explanation: It is easy to make that numpy array a pure python response:
End of explanation
indexes, metrics = model.cosine('los_angeles')
model.generate_response(indexes, metrics).tolist()
Explanation: Phrases
Since we trained the model with the output of word2phrase we can ask for similarity of "phrases"
End of explanation
indexes, metrics = model.analogy(pos=['king', 'woman'], neg=['man'], n=10)
indexes, metrics
model.generate_response(indexes, metrics).tolist()
Explanation: Analogies
It's possible to do more complex queries like analogies such as: king - man + woman = queen
This method returns the same as cosine: the indexes of the words in the vocab and the metric
End of explanation
clusters = word2vec.load_clusters('./text8-clusters.txt')
Explanation: Clusters
End of explanation
clusters['dog']
Explanation: We can get the cluster number for individual words
End of explanation
clusters.get_words_on_cluster(90).shape
clusters.get_words_on_cluster(90)[:10]
Explanation: We can get all the words grouped in a specific cluster
End of explanation
model.clusters = clusters
indexes, metrics = model.analogy(pos=['paris', 'germany'], neg=['france'], n=10)
model.generate_response(indexes, metrics).tolist()
Explanation: We can add the clusters to the word2vec model and generate a response that includes the clusters
End of explanation |
4,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is to test several models for measuring the drop in $f_{features}$ in the FERENGI-fied galaxies. Refer to the link below for the final version, where zeta is calculated with the chosen model.
Link to Zeta.ipynb
Step1: Using simple normalisation
Step2: $\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$
$\zeta = \zeta[0]+\zeta[1] * \mu$
Step3: $\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = constant$
Step4: $\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = \zeta[0]+\zeta[1] * \mu$
Step5: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a}, \zeta_{b} = constant$
Step6: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
Step7: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
Step8: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
$\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
Step9: Using alternative normalisation, as in eqn. 4
Step10: $\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$
$\zeta = \zeta[0]+\zeta[1] * \mu$
Step11: $\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = constant$
Step12: $\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = \zeta[0]+\zeta[1] * \mu$
Step13: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a}, \zeta_{b} = constant$
Step14: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
Step15: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
Step16: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
$\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $ | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from astropy.table import Table,Column
from astropy.io import fits
from scipy import optimize
from scipy.optimize import minimize
from scipy import stats
from scipy.stats import distributions as dist
import numpy as np
import os
import requests
import warnings
import matplotlib as mpl
warnings.filterwarnings('ignore', category=RuntimeWarning, append=True)
warnings.filterwarnings('ignore', category=UserWarning, append=True)
# Load data from Dropbox folder instead of clogging up Github
def download_from_dropbox(url):
local_filename = "../{:}".format(url.split("/")[-1].split("?")[0])
r = requests.get(url, stream=True)
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
return local_filename
# Use only galaxies with surface brightness/redshift ranges that are considered "debiasable"
# This table is computed in STEP 1 - p_features_thresholds_slope_method.ipynb
ferengi_filename = '../../data/ferengi_data_with_categories_new_sb.fits'
alldata = Table.read(ferengi_filename)
data = alldata[alldata['Correctable_Category'] == 'correctable']
# Limit to galaxies that have data at z_sim = z0, since that's what we're normalizing to.
z0 = 0.3
unique_galaxies = set(data['sdss_id'])
z0ind = np.zeros(len(data),dtype=bool)
eps = 1e-3
for ug in unique_galaxies:
ind = (data['sdss_id'] == ug)
if data[ind]['sim_redshift'].min() < (z0+eps):
z0ind[ind] = True
data_z0 = data[z0ind]
def fzeta_exp_mu_none(p,x,mu):
return np.exp(-1 * (x-z0)/p[0])
def fzeta_exp_mu_lin(p,x,mu):
q0 = p[0] + p[1]*mu
return np.exp(-1 * (x-z0)/q0)
def fzeta_lin_mu_none(p,x,mu):
return 1 - p[0] * (x-z0)
def fzeta_lin_mu_lin(p,x,mu):
zeta = p[0] + p[1]*mu
return 1 - zeta * (x-z0)
def fzeta_qud_mu_none(p,x,mu):
return 1 - p[0] * (x-z0) + p[1] * (x-z0)**2
def fzeta_qud_mu_lin(p,x,mu):
q0 = p[0] + p[1]*mu
q1 = p[2] + p[3]*mu
return 1 - q0 * (x-z0) + q1 * (x-z0)**2
def fzeta_cub_mu_none(p,x,mu):
return 1 - p[0] * (x-z0) + p[1] * (x-z0)**2 + p[2] * (x-z0)**3
def fzeta_cub_mu_lin(p,x,mu):
q0 = p[0] + p[1]*mu
q1 = p[2] + p[3]*mu
q2 = p[4] + p[5]*mu
return 1 - q0 * (x-z0) + q1 * (x-z0)**2 + q2 * (x-z0)**3
def negloglike(p,func,x,mu,y):
# Assuming a Normal scatter isn't ideal, since we know the value is bounded to (0,1).
# Maybe some kind of beta function would be more suitable, but not bother with now.
# Calculate the negative log-likelihood as the negative sum of the log of a normal
# PDF where the observed values are normally distributed around the mean (yPred)
# with a standard deviation of sd
sd = p[-1]
p = p[:-1]
nll = -np.sum( stats.norm.logpdf(y, loc=func(p,x,mu), scale=sd) )
return(nll)
def get_AIC(k,nll):
log_likelihood = -nll
    AIC = 2 * k - 2 * log_likelihood  # AIC = 2k - 2 ln(L)
return AIC
# create dataset combining all galaxies
# get all unique galaxies
unique_galaxies = set(data['sdss_id'])
x = []
mu = []
y = []
yn = []
ym = []
yd = []
for gal in unique_galaxies:
#loop over different evolutions for each galaxy
this_galaxy = data[data['sdss_id']==gal]
for evo in set(this_galaxy['sim_evolution']):
# Find data for this galaxy at all redshifts
ind = this_galaxy['sim_evolution']==evo
#Make sure minimum simulated redshift is 0.3; some were removed because of bad surface brightness measurments
if np.min(this_galaxy[ind]['sim_redshift'])<0.4:
#Store data for each galaxy/evo combination:
galaxy = this_galaxy[ind]
galaxy.sort('sim_redshift') #make sure data is in order, helps for plotting later
#arrays for storing info:
# set x,y
y_abs = np.array(galaxy['p_features']) #unnormlaized p_features for galaxy
p_at_3 = galaxy[galaxy['sim_redshift']==0.3]['p_features'][0]
if p_at_3 == 0:
continue
x.extend(galaxy['sim_redshift'])
y.extend(y_abs) #normalized p_features
yn.extend(y_abs/p_at_3) #normalized p_features
ym.extend((1-p_at_3 )/(1-y_abs ))
yd.extend(y_abs - p_at_3) #normalized p_features
mu_at_3 = galaxy[galaxy['sim_redshift']==0.3]['GZ_MU_I'][0]
mu.extend(mu_at_3 * np.ones_like(y_abs))
x = np.asarray(x)
mu = np.asarray(mu)
y = np.asarray(y) # no normalisation: f(z)
yn = np.asarray(yn) # normalised: f(z) / f(z0)
ym = np.asarray(ym) # normalised as eqn. 4 in paper: (f(z0)-1)/(f(z)-1)
yd = np.asarray(yd) # subtracted: f(z) - f(z0)
def fit_and_plot(x, y, mu, func, n, cmap='jet'):
p0 = [0.5, 0.01, 0.5, 0.01, 0.5, 0.01, 0.5]
# DON'T INCLUDE FIRST POINT (AT Z=Z0) SINCE IT IS USED TO NORMALIZE DATA.
p = minimize(negloglike, p0[:n], args=(func, x[1:], mu[1:], y[1:]), method='nelder-mead')
jet = cm = plt.get_cmap(cmap)
cNorm = mpl.colors.Normalize(vmin=mu.min(), vmax=mu.max())
scalarMap = mpl.cm.ScalarMappable(norm=cNorm, cmap=jet)
plt.scatter(x + np.random.normal(0, 0.01, len(x)), y, s=5, c=mu, edgecolor='none',
norm=cNorm, cmap=cmap)
xfit = np.linspace(x.min(), x.max(), 100)
mufit = np.linspace(mu.min(), mu.max(), 5)[1:-1]
for mu in mufit:
yfit = func(p.x, xfit, mu)
plt.plot(xfit, yfit, '-', color=scalarMap.to_rgba(mu))
plt.ylim(0, 2)
plt.colorbar()
aic = get_AIC(n, p.fun)
print('AIC for this model = %s'%aic)
#return p
Explanation: This notebook is to test several models for measuring the drop in f_features in the FERENGI-fied galaxies. Refer to the link below for the final version, where zeta is calculated with the chosen model.
Link to Zeta.ipynb: https://github.com/willettk/gzhubble/blob/master/python/creating_debiased_catalog/STEP_2_zeta.ipynb
We will use the AIC parameter to evaluate which model is the most appropriate.
The Akaike information criterion is a measure of the relative quality of statistical models for a given set of data. It is defined as:
AIC = 2k - 2 ln(L)
Where k is the number of parameters in the model and L is the maximum value of the likelihood function for the model. It is applicable to non-nested models, which is necessary in our case since we wish to test different types of models. A low AIC value favors both high goodness-of-fit (measured by L) and low complexity (as measured by the number of parameters, k). Therefore the model with the lowest AIC is the preferred model.
End of explanation
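A toy illustration of the comparison, with made-up numbers rather than values from this notebook:
# model A: k = 2, max log-likelihood = -100  ->  AIC = 2*2 - 2*(-100) = 204
# model B: k = 4, max log-likelihood = -95   ->  AIC = 2*4 - 2*(-95)  = 198
# model B is preferred despite its extra parameters, because its fit improves enough.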
fit_and_plot(x, yn, mu, fzeta_lin_mu_none, 2)
Explanation: Using simple normalisation: f(z)/f(z0) (equation 2 from first draft of GZH paper)
$\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$
$\zeta = constant$
End of explanation
fit_and_plot(x, yn, mu, fzeta_lin_mu_lin, 3)
Explanation: $\frac{f}{f_{0}}=1 - {\zeta} * (z-z_{0})$
$\zeta = \zeta[0]+\zeta[1] * \mu$
End of explanation
fit_and_plot(x, yn, mu, fzeta_exp_mu_none, 2)
Explanation: $\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = constant$
End of explanation
p = fit_and_plot(x, yn, mu, fzeta_exp_mu_lin, 3)
Explanation: $\frac{f}{f_{0}}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = \zeta[0]+\zeta[1] * \mu$
End of explanation
fit_and_plot(x, yn, mu, fzeta_qud_mu_none, 3)
Explanation: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a}, \zeta_{b} = constant$
End of explanation
fit_and_plot(x, yn, mu, fzeta_qud_mu_lin, 5)
Explanation: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
End of explanation
fit_and_plot(x, yn, mu, fzeta_cub_mu_none, 4)
Explanation: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
End of explanation
fit_and_plot(x, yn, mu, fzeta_cub_mu_lin, 7)
Explanation: $\frac{f}{f_{0}}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
$\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
End of explanation
fit_and_plot(x, ym, mu, fzeta_lin_mu_none, 2)
Explanation: Using alternative normalisation, as in eqn. 4: (f(z0)-1) / (f(z)-1)
$\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$
$\zeta = constant$
End of explanation
fit_and_plot(x, ym, mu, fzeta_lin_mu_lin, 3)
Explanation: $\frac{1-f_{0}}{1-f}=1 - {\zeta} * (z-z_{0})$
$\zeta = \zeta[0]+\zeta[1] * \mu$
End of explanation
fit_and_plot(x, ym, mu, fzeta_exp_mu_none, 2)
Explanation: $\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = constant$
End of explanation
fit_and_plot(x, ym, mu, fzeta_exp_mu_lin, 3)
Explanation: $\frac{1-f_{0}}{1-f}=e^{\frac{-(z-z_0)}{\zeta}}$
$\zeta = \zeta[0]+\zeta[1] * \mu$
End of explanation
fit_and_plot(x, ym, mu, fzeta_qud_mu_none, 3)
Explanation: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a}, \zeta_{b} = constant$
End of explanation
fit_and_plot(x, ym, mu, fzeta_qud_mu_lin, 5)
Explanation: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
End of explanation
fit_and_plot(x, ym, mu, fzeta_cub_mu_none, 4)
Explanation: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a}, \zeta_{b}, \zeta_{c} = constant$
End of explanation
fit_and_plot(x, ym, mu, fzeta_cub_mu_lin, 7)
Explanation: $\frac{1-f_{0}}{1-f}= 1 - \zeta_{a}(z-z_{0}) + \zeta_{b}(z-z_{0})^2 + \zeta_{c}*(z-z_{0})^3$
$\zeta_{a} = \zeta_{a}[0] + \zeta_{a}[1] * \mu $
$\zeta_{b} = \zeta_{b}[0] + \zeta_{b}[1] * \mu $
$\zeta_{c} = \zeta_{c}[0] + \zeta_{c}[1] * \mu $
End of explanation |
4,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RiiDataFrame
Here, a little bit more detail about RiiDataFrame class will be given.
Step1: RiiDataFrame has an attribute named catalog that is a Pandas DataFrame provinding the catalog of experimental data as shown below.
The columns formula and tabulated indicate the type of data. If n or k is included in the column tamulated, the experimentally observed refractive index n or extinction coefficient k is given in tabulated form, respectively. If tabulated is f, only coefficients of formula are given.
On the other hand, the number written in the column formula indicates the number of dispetsion formula that fits the experimental data. If the number is 0, only the tabulated data are given.
Step2: The experimental data are given by raw_data
Step3: where n is the refractive index and k is the extinction coefficient at the vacuum wavelength wl_n (wl_k) in the unit of μm.
The column c gives the coefficients for the dielectric function model.
In the above example, no coefficient is given because only the tabulated data are given (formula number in catalog is 0).
On the other hand, if formula number is not 0, some coefficeints are given in the column c as shown below.
In this case, formula 21 means Drude-Lorentz model, which is explained in Dispersion formulas.
Step4: Using the method load_grid_data(), you can get grid data calculated at 200 wavelength values in the range [wl_min, wl_max], which is the intersection between the domain of n [wl_n_min, wl_n_max] and the domain of k [wl_k_min, wl_k_max]. These values are shown in catalog.
Step5: Helper Methods
By using the functionality of Pandas, you may find what you want, easily. But, here some simple helper methods are implemented.
plot
plot(id
Step6: search
search(name
Step7: select
select(condition
Step8: show
show(id
Step9: read
read(id, as_dict=False)
This method returns the contants of a page associated with the id.
If you want the page contents as a python dict, give True for argument as_dict.
Step10: references
references(id
Step11: material
material(params | Python Code:
import riip
ri = riip.RiiDataFrame()
Explanation: RiiDataFrame
Here, a little bit more detail about RiiDataFrame class will be given.
End of explanation
ri.catalog.head(3)
Explanation: RiiDataFrame has an attribute named catalog that is a Pandas DataFrame providing the catalog of experimental data as shown below.
The columns formula and tabulated indicate the type of data. If n or k is included in the column tabulated, the experimentally observed refractive index n or extinction coefficient k is given in tabulated form, respectively. If tabulated is f, only the coefficients of a formula are given.
On the other hand, the number written in the column formula indicates the number of the dispersion formula that fits the experimental data. If the number is 0, only the tabulated data are given.
End of explanation
ri.raw_data.loc[3].head(5) # first 5 rows for the material whose id is 3
Explanation: The experimental data are given by raw_data:
End of explanation
ri.catalog.tail(3)
ri.raw_data.loc[2911].head(5) # first 5 rows for the material whose id is 2912
Explanation: where n is the refractive index and k is the extinction coefficient at the vacuum wavelength wl_n (wl_k) in the unit of μm.
The column c gives the coefficients for the dielectric function model.
In the above example, no coefficient is given because only the tabulated data are given (formula number in catalog is 0).
On the other hand, if the formula number is not 0, some coefficients are given in the column c as shown below.
In this case, formula 21 means the Drude-Lorentz model, which is explained in Dispersion formulas.
End of explanation
grid_data = ri.load_grid_data(3)
grid_data
Explanation: Using the method load_grid_data(), you can get grid data calculated at 200 wavelength values in the range [wl_min, wl_max], which is the intersection between the domain of n [wl_n_min, wl_n_max] and the domain of k [wl_k_min, wl_k_max]. These values are shown in catalog.
End of explanation
import matplotlib.pyplot as plt
ri.plot(3, "n")
plt.show()
ri.plot(3, "k")
plt.show()
ri.plot(3, "eps")
plt.show()
Explanation: Helper Methods
By using the functionality of Pandas, you can usually find what you want easily. Still, some simple helper methods are implemented here.
plot
plot(id: int, comp: str = "n", fmt1: Optional[str] = "-", fmt2: Optional[str] = "--", **kwargs)
* id (int): ID number.
* comp (str): 'n', 'k' or 'eps'.
* fmt1 (Union[str, None]): Plot format for n and Re(eps).
* fmt2 (Union[str, None]): Plot format for k and Im(eps).
Plots the refractive index (if comp="n"), the extinction coefficient (comp="k") or the permittivity (comp="eps").
End of explanation
ri.search("NaCl")
ri.search("sodium") # upper or lower case is not significant
Explanation: search
search(name: str) -> DataFrame
This method searches data whose book or book_name contain given name and return a simplified catalog for them.
End of explanation
ri.select("2.5 < n < 3 and 0.4 < wl < 0.8").head(10)
ri.plot(157)
Explanation: select
select(condition: str) -> DataFrame
This method makes a query with the given condition and returns a simplified catalog. It will pick up materials whose experimental data contain values that fulfill the given condition.
End of explanation
ri.show(1)
Explanation: show
show(id: int | Sequence[int]) -> DataFrame
This method shows a simplified catalog for the given id.
End of explanation
print(ri.read(0))
ri.read(0, as_dict=True)
Explanation: read
read(id, as_dict=False)
This method returns the contants of a page associated with the id.
If you want the page contents as a python dict, give True for argument as_dict.
End of explanation
ri.references(20)
Explanation: references
references(id: int)
This method returns the REFERENCES of a page associated with the id.
End of explanation
water = ri.material({'id': 428})
water.catalog
Explanation: material
material(params: dict) -> Material
Create Material-class instance for given parameter dict params.
params can include the following parameters,
* 'id': ID number. (int)
* 'book': book value in catalog of RiiDataFrame. (str)
* 'page': page value in catalog of RiiDataFrame. (str)
* 'RI': Constant refractive index. (complex)
* 'e': Constant permittivity. (complex)
* 'bound_check': True if bound check should be done. Defaults to True. (bool)
* 'im_factor': A magnification factor multiplied to the imaginary part of permittivity. Defaults to 1.0. (float)
End of explanation |
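As a further illustration of the parameter list above, a material with a constant refractive index can be created directly from the documented 'RI' key; the chosen value 1.5 is just an example:
glass_like = ri.material({'RI': 1.5})  # constant refractive index of 1.5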
4,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: The docstring describes the component and provides some simple examples
Step2: The __init__ docstring lists the parameters
Step3: Example 1
Step4: To use DrainageDensity, we need to have drainage directions and areas pre-calculated. We'll do that with the FlowAccumulator component. We'll have the component do D8 flow routing (each DEM cell drains to whichever of its 8 neighbors lies in the steepest downslope direction), and fill pits (depressions in the DEM that would otherwise block the flow) using the LakeMapperBarnes component. The latter two arguments below tell the lake mapper to update the flow directions and drainage areas after filling the pits.
Step5: Now run DrainageDensity and display the map of $L$ values
Step6: Display the channel mask
Step7: Example 2 | Python Code:
import copy
import numpy as np
import matplotlib as mpl
from landlab import RasterModelGrid, imshow_grid
from landlab.io import read_esri_ascii
from landlab.components import FlowAccumulator, DrainageDensity
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
Using the DrainageDensity Component
Overview
Drainage density is defined as the total map-view length of stream channels, $\Lambda$, within a region with map-view surface area $A$, divided by that area:
$$D_d = \Lambda / A$$
The measure has dimensions of inverse length. The traditional method for measuring drainage density was to measure $\Lambda$ on a paper map by tracing out each stream. An alternative method, which lends itself to automated calculation from digital elevation models (DEMs), is to derive drainage density from a digital map that depicts the flow-path distance from each grid node to the nearest channel node, $L$ (Tucker et al., 2001). If the average flow-path distance to channels is $\overline{L}$, then the corresponding average drainage density is:
$$D_d = \frac{1}{2\overline{L}}$$
An advantage of this alternative approach is that $L$ can be mapped and analyzed statistically to reveal spatial variations, correlations with other geospatial attributes, and so on.
The DrainageDensity component is designed to calculate $L$, and then derive $D_d$ from it using the second equation above. Given a grid with drainage directions and drainage area, along with either a grid of channel locations or a threshold from which to generate channel locations, DrainageDensity component calculates the flow-path distance to the nearest channel node for each node in the grid. The values of $L$ are stored in a new at-node field called surface_to_channel__minimum_distance.
The component assumes that drainage directions and drainage area have already been calculated and the results stored in the following node fields:
flow__receiver_node: ID of the neighboring node to which each node sends flow (its "receiver")
flow__link_to_receiver_node: ID of the link along which each node sends flow to its receiver
flow__upstream_node_order: downstream-to-upstream ordered array of node IDs
topographic__steepest_slope: gradient from each node to its receiver
The FlowAccumulator generates all four of these fields, and should normally be run before DrainageDensity.
Identifying channels
The DrainageDensity component is NOT very sophisticated about identifying channels. There are (currently) two options for handling channel identification:
specify the parameters of an area-slope channelization threshold, or
map the channels separately, and pass the result to DrainageDensity as a "channel mask" array
Area-slope channel threshold
This option identifies a channel as occurring at any grid node where the actual drainage area, represented by the field drainage_area, exceeds a threshold, $T_c$:
$$C_A A^{m_r} C_s S^{n_r} > T_c$$
Here $A$ is drainage_area, $S$ is topographic__steepest_slope, and $C_A$, $C_s$, $m_r$, and $n_r$ are parameters. For example, to create a channel mask in which nodes with a drainage area greater than $10^5$ m$^2$ are identified as channels, the DrainageDensity component would be initialized as:
dd = DrainageDensity(grid,
area_coefficient=1.0,
slope_coefficient=1.0,
area_exponent=1.0,
slope_exponent=0.0,
channelization_threshold=1.0e5)
Channel mask
This option involves creating a number-of-nodes-long array, of type np.uint8, containing a 1 for channel nodes and a 0 for others.
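As a sketch of this second option, such a mask can be built from any criterion you like. For example, assuming a grid whose drainage_area field has already been computed by FlowAccumulator (the 10^5 m^2 threshold is arbitrary):
import numpy as np
# 1 where drainage area exceeds 1e5 m2, 0 elsewhere, as a uint8 array
channel_mask = (grid.at_node["drainage_area"] > 1.0e5).astype(np.uint8)
dd = DrainageDensity(grid, channel__mask=channel_mask)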
Imports and inline docs
First, import what we'll need:
End of explanation
print(DrainageDensity.__doc__)
Explanation: The docstring describes the component and provides some simple examples:
End of explanation
print(DrainageDensity.__init__.__doc__)
Explanation: The __init__ docstring lists the parameters:
End of explanation
# read the DEM
(grid_geog, elev) = read_esri_ascii("west_bijou_escarpment_snippet.asc")
grid = RasterModelGrid(
(grid_geog.number_of_node_rows, grid_geog.number_of_node_columns), xy_spacing=30.0
)
grid.add_field("topographic__elevation", elev, at="node")
cmap = copy.copy(mpl.cm.get_cmap("pink"))
imshow_grid(grid, elev, cmap=cmap, colorbar_label="Elevation (m)")
Explanation: Example 1: channelization threshold
In this example, we read in a small digital elevation model (DEM) from NASADEM for an area on the Colorado high plains (USA) that includes a portion of an escarpment along the west side of a drainage known as West Bijou Creek (see Rengers & Tucker, 2014).
The DEM file is in ESRI Ascii format, but is in a geographic projection, with horizontal units of decimal degrees. To calculate slope gradients properly, we'll first read the DEM into a Landlab grid object that has this geographic projection. Then we'll create a second grid with 30 m cell spacing (approximately equal to the NASADEM's resolution), and copy the elevation field from the geographic DEM. This isn't a proper projection of course, but it will do for purposes of this example.
End of explanation
fa = FlowAccumulator(
grid,
flow_director="FlowDirectorD8", # use D8 routing
depression_finder="LakeMapperBarnes", # pit filler
method="D8", # pit filler use D8 too
redirect_flow_steepest_descent=True, # re-calculate flow dirs
    reaccumulate_flow=True,  # re-calculate drainage area
)
fa.run_one_step() # run the flow accumulator
cmap = copy.copy(mpl.cm.get_cmap("Blues"))
imshow_grid(
grid,
    np.log10(grid.at_node["drainage_area"] + 1.0),  # log10 helps show drainage
cmap=cmap,
colorbar_label="Log10(drainage area (m2))",
)
Explanation: To use DrainageDensity, we need to have drainage directions and areas pre-calculated. We'll do that with the FlowAccumulator component. We'll have the component do D8 flow routing (each DEM cell drains to whichever of its 8 neighbors lies in the steepest downslope direction), and fill pits (depressions in the DEM that would otherwise block the flow) using the LakeMapperBarnes component. The latter two arguments below tell the lake mapper to update the flow directions and drainage areas after filling the pits.
End of explanation
dd = DrainageDensity(
grid,
area_coefficient=1.0,
slope_coefficient=1.0,
area_exponent=1.0,
slope_exponent=0.0,
channelization_threshold=2.0e4,
)
ddens = dd.calculate_drainage_density()
imshow_grid(
grid,
grid.at_node["surface_to_channel__minimum_distance"],
cmap="viridis",
colorbar_label="Distance to channel (m)",
)
print("Drainage density = " + str(ddens) + " m/m2")
Explanation: Now run DrainageDensity and display the map of $L$ values:
End of explanation
imshow_grid(
grid,
grid.at_node["channel__mask"],
colorbar_label="Channel present (1 = yes)",
)
Explanation: Display the channel mask:
End of explanation
# make a copy of the mask from the previous example
chanmask = grid.at_node["channel__mask"].copy()
# re-make the grid (this will remove all the previously created fields)
grid = RasterModelGrid(
(grid_geog.number_of_node_rows, grid_geog.number_of_node_columns), xy_spacing=30.0
)
grid.add_field("topographic__elevation", elev, at="node")
# instantiate and run the flow accumulator
fa = FlowAccumulator(
grid,
flow_director="FlowDirectorD8", # use D8 routing
depression_finder="LakeMapperBarnes", # pit filler
method="D8", # pit filler use D8 too
redirect_flow_steepest_descent=True, # re-calculate flow dirs
    reaccumulate_flow=True,  # re-calculate drainage area
)
fa.run_one_step() # run the flow accumulator
# instantiate and run DrainageDensity component
dd = DrainageDensity(grid, channel__mask=chanmask)
ddens = dd.calculate_drainage_density()
# display distance-to-channel
imshow_grid(
grid,
grid.at_node["surface_to_channel__minimum_distance"],
cmap="viridis",
colorbar_label="Distance to channel (m)",
)
print("Drainage density = " + str(ddens) + " m/m2")
Explanation: Example 2: calculating from an independently derived channel mask
This example demonstrates how to run the component with an independently derived channel mask. For the sake of illustration, we will just use the channel mask from the previous example, in which case the $L$ field should look identical.
End of explanation |
4,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple model of epidemic dynamics
Step1: Setup a python function that specifies the dynamics
Step2: The function SIR above takes three arguments, $U$, $t$, and $p$ that represent the states of the system, the time and the parameters, respectively.
Outbreak condition
The condition
\begin{equation}
\frac{\alpha}{\beta}x(t)>1 , \quad y>0
\end{equation}
defines a threshold for a full epidemic outbreak. An equivalent condition is
\begin{equation}
x>\frac{\beta}{\alpha }, \quad y>0
\end{equation}
Therefore, with the parameters $(\alpha,\beta)$=(0.5,0.1), there will be an outbreak if the initial condition for $x(t)>1/5$ with $y>0$.
Notice that the initial value for $z$ can be interpreted as the initial proportion of immune individuals within the population.
The dynamics related to the outbreak condition can be studied by defining a variable $B(t) = x(t) \alpha/\beta$, called by some authors "effective reproductive number". If $x(t)\approx 1$, the corresponding $B(t)$ is called "basic reproductive number", or $R_o$.
Let's define a python dictionary containing parameters and initial conditions to perform simulations.
Step3: Integrate numerically and plot the results | Python Code:
#Import the necessary modules and perform the necessary tests
import scipy as sc
import pylab as gr
sc.test("all",verbose=0)
%matplotlib inline
Explanation: Simple model of epidemic dynamics: SIR
Prof. Marco Arieli Herrera-Valdez,
Facultad de Ciencias, Universidad Nacional Autónoma de México
Created March 7, 2016
Let $x$, $y$, and $z$ represent the fractions of susceptible, infected, and recovered individuals within a population. Assume homogeneous mixing, with an infection rate $\alpha$ (per contact with an infected individual) and an average removal time $\beta^{-1}$ from the infected group, by recovery or death due to infection. The population dynamics are given by
\begin{eqnarray}
\partial_t x &=& -\alpha xy
\\
\partial_t y &=& \left( \alpha x - \beta \right) y
\\
\partial_t z &=& \beta y
\end{eqnarray}
Notice that the population size does not matter because it is kept constant.
End of explanation
def SIR(U,t,p):
x,y,z=U
yNew= p["alpha"] * y * x
zNew= p["beta"] * y
dx = -yNew
dy = yNew - zNew
dz = zNew
return dx, dy, dz
Explanation: Setup a python function that specifies the dynamics
End of explanation
p={"alpha": 0.15, "beta":0.1, "timeStop":300.0, "timeStep":0.01 }
p["Ro"]=p["alpha"]/p["beta"]
p["sampTimes"]= sc.arange(0,p["timeStop"],p["timeStep"])
N= 1e4; i0= 1e1; r0=0; s0=N-i0-r0
x0=s0/N; y0=i0/N; z0=r0/N;
p["ic"]=[x0,y0,z0]
print("N=%g with initial conditions (S,I,R)=(%g,%g,%g)"%(N,s0,i0,r0))
print("Initial conditions: ", p["ic"])
print("B(0)=%g"%(p["ic"][0]*p["Ro"]))
Explanation: The function SIR above takes three arguments, $U$, $t$, and $p$ that represent the states of the system, the time and the parameters, respectively.
Outbreak condition
The condition
\begin{equation}
\frac{\alpha}{\beta}x(t)>1 , \quad y>0
\end{equation}
defines a threshold for a full epidemic outbreak. An equivalent condition is
\begin{equation}
x>\frac{\beta}{\alpha }, \quad y>0
\end{equation}
Therefore, with the parameters $(\alpha,\beta)$=(0.5,0.1), there will be an outbreak if the initial condition for $x(t)>1/5$ with $y>0$.
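As a quick sanity check of that threshold (the snippet is illustrative only):
print(0.1 / 0.5)   # 0.2, i.e. the 1/5 quoted above for (alpha, beta) = (0.5, 0.1)
print(0.1 / 0.15)  # ~0.67, the threshold for the (0.15, 0.1) values used in the code below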
Notice that the initial value for $z$ can be interpreted as the initial proportion of immune individuals within the population.
The dynamics related to the outbreak condition can be studied by defining a variable $B(t) = x(t) \alpha/\beta$, called by some authors "effective reproductive number". If $x(t)\approx 1$, the corresponding $B(t)$ is called "basic reproductive number", or $R_o$.
Let's define a python dictionary containing parameters and initial conditions to perform simulations.
End of explanation
# Numerical integration
xyz= sc.integrate.odeint(SIR, p["ic"], p["sampTimes"], args=(p,)).transpose()
# Calculate the outbreak indicator
B= xyz[0]*p["alpha"]/p["beta"]
# Figure
fig=gr.figure(figsize=(11,5))
gr.ioff()
rows=1; cols=2
ax=list()
for n in sc.arange(rows*cols):
ax.append(fig.add_subplot(rows,cols,n+1))
ax[0].plot(p["sampTimes"], xyz[0], 'k', label=r"$(t,x(t))$")
ax[0].plot(p["sampTimes"], xyz[1], 'g', lw=3, label=r"$(t,y(t))$")
ax[0].plot(p["sampTimes"], xyz[2], 'b', label=r"$(t,z(t))$")
ax[0].plot(p["sampTimes"], B, 'r', label=r"$(t,B(t))$")
ax[0].plot([0, p["timeStop"]], [1,1], 'k--', alpha=0.4)
ax[1].plot(xyz[0], xyz[1], 'g', lw=3, label=r"$(x(t),y(t))$")
ax[1].plot(xyz[0], xyz[2], 'b', label=r"$(x(t),z(t))$")
ax[1].plot(xyz[0], B, 'r', label=r"$(x(t),B(t))$")
ax[1].plot([0, 1], [1,1], 'k--', alpha=0.4)
ax[0].legend(); ax[1].legend(loc="upper left")
gr.ion(); gr.draw()
Explanation: Integrate numerically and plot the results
End of explanation |
4,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Bioinformatics
This Jupyter notebook is intented to be used alongside the book Python for Bioinformatics
Note
Step1: Chapter 9
Step2: Test Installation
Step3: Tip
Step4: Seq Objects as a String
Step5: MutableSeq
Step6: SeqRecord
Step7: Align
Listing 9.1
Step8: AlignIO
Step9: AlignInfo
Step10: SeqIO
Step11: Listing 9.2
Step12: Listing 9.3
Step13: Listing 9.4
Step14: BLAST
Listing 9.5
Step15: Listing 9.7
Step16: Listing 9.9
Step17: eUtils
Step18: eUtils
Step19: Listing 9.14
Step20: PROSITE
Step21: DNA Utils
Step22: Protein Utils
Listing 9.15
Step23: Listing 9.16
Step24: Listing 9.17
Step25: Listing 9.18
Step26: Listing 9.19
Step27: CHECK FROM HERE DOWN WHETHER THIS IS FINE OR WHETHER IT SHOULD BE MOVED ABOVE!!!!
How to download a file | Python Code:
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
Explanation: Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Note: Before opening the file, this file should be accessible from this Jupyter notebook. In order to do so, the following commands will download these files from Github and extract them into a directory called samples.
End of explanation
import platform
platform.platform()
!pip install biopython
Explanation: Chapter 9: Introduction to Biopython
End of explanation
import Bio
Bio.__version__
import Bio.Alphabet
Bio.Alphabet.ThreeLetterProtein.letters
from Bio.Alphabet import IUPAC
IUPAC.IUPACProtein.letters
IUPAC.unambiguous_dna.letters
IUPAC.ambiguous_dna.letters
IUPAC.ExtendedIUPACProtein.letters
IUPAC.ExtendedIUPACDNA.letters
from Bio.Seq import Seq
import Bio.Alphabet
seq = Seq('CCGGGTT', Bio.Alphabet.IUPAC.unambiguous_dna)
seq.transcribe()
seq.translate()
rna_seq = Seq('CCGGGUU',Bio.Alphabet.IUPAC.unambiguous_rna)
rna_seq.transcribe()
rna_seq.translate()
rna_seq.back_transcribe()
Explanation: Test Installation
End of explanation
from Bio.Seq import translate, transcribe, back_transcribe
dnaseq = 'ATGGTATAA'
translate(dnaseq)
transcribe(dnaseq)
rnaseq = transcribe(dnaseq)
translate(rnaseq)
back_transcribe(rnaseq)
Explanation: Tip: The Transcribe Function in Biopython
End of explanation
seq = Seq('CCGGGTTAACGTA',Bio.Alphabet.IUPAC.unambiguous_dna)
seq[:5]
len(seq)
print(seq)
Explanation: Seq Objects as a String
End of explanation
seq[0] = 'T'  # raises a TypeError because Seq objects are immutable
mut_seq = seq.tomutable()
mut_seq
mut_seq[0] = 'T'
mut_seq
mut_seq.reverse()
mut_seq
mut_seq.complement()
mut_seq
mut_seq.reverse_complement()
mut_seq
Explanation: MutableSeq
End of explanation
from Bio.SeqRecord import SeqRecord
SeqRecord(seq, id='001', name='MHC gene')
from Bio.SeqRecord import SeqRecord
from Bio.Seq import Seq
from Bio.Alphabet import generic_protein
rec = SeqRecord(Seq('mdstnvrsgmksrkkkpkttvidddddcmtcsacqs'
'klvkisditkvsldyintmrgntlacaacgsslkll',
generic_protein),
id = 'P20994.1', name = 'P20994',
description = 'Protein A19',
dbxrefs = ['Pfam:PF05077', 'InterPro:IPR007769',
'DIP:2186N'])
rec.annotations['note'] = 'A simple note'
print(rec)
Explanation: SeqRecord
End of explanation
from Bio.Alphabet import generic_protein
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
seq1 = 'MHQAIFIYQIGYPLKSGYIQSIRSPEYDNW'
seq2 = 'MH--IFIYQIGYALKSGYIQSIRSPEY-NW'
seq_rec_1 = SeqRecord(Seq(seq1, generic_protein), id = 'asp')
seq_rec_2 = SeqRecord(Seq(seq2, generic_protein), id = 'unk')
align = MultipleSeqAlignment([seq_rec_1, seq_rec_2])
print(align)
seq3 = 'M---IFIYQIGYAAKSGYIQSIRSPEY--W'
seq_rec_3 = SeqRecord(Seq(seq3, generic_protein), id = 'cas')
align.extend([seq_rec_3])
print(align)
align[0]
print(align[:2,5:11])
len(align)
from Bio.SeqUtils.ProtParam import ProteinAnalysis
for seq in align:
print(ProteinAnalysis(str(seq.seq)).isoelectric_point())
Explanation: Align
Listing 9.1: Using Align module
End of explanation
from Bio import AlignIO
align = AlignIO.read('samples/cas9align.fasta', 'fasta')
print(align)
from Bio import AlignIO
for alignment in AlignIO.parse('samples/example.aln', 'clustal'):
print(len(alignment))
from Bio import AlignIO
AlignIO.convert(open('samples/cas9align.fasta'), 'fasta', 'cas9align.aln', 'clustal')
Explanation: AlignIO
End of explanation
from Bio import AlignIO
from Bio.Align.AlignInfo import SummaryInfo
from Bio.Alphabet import ProteinAlphabet
align = AlignIO.read('samples/cas9align.fasta', 'fasta', alphabet=ProteinAlphabet())
summary = SummaryInfo(align)
print(summary.information_content())
summary.dumb_consensus(consensus_alpha=ProteinAlphabet())
summary.gap_consensus(consensus_alpha=ProteinAlphabet())
print(summary.alignment)
print(summary.pos_specific_score_matrix())
from Bio.Align.Applications import ClustalwCommandline
clustalw_exe = 'clustalw2'
ccli = ClustalwCommandline(clustalw_exe, infile="samples/input4align.fasta", outfile='../../aoutput.aln')
print(ccli)
clustalw_exe = 'clustalw2'  # typical executable name on Linux or macOS
clustalw_exe = 'c:\\windows\\program file\\clustal\\clustalw.exe'  # example path on a Windows system
from Bio.Align.Applications import ClustalwCommandline
clustalw_exe = 'clustalw2'
ccli = ClustalwCommandline(clustalw_exe,
infile="samples/input4align.fasta", outfile='../../aoutput.aln')
ccli()
from Bio import AlignIO
seqs = AlignIO.read('samples/aoutput.aln', 'clustal')
seqs[0]
seqs[1]
seqs[2]
from Bio.Align.Applications import ClustalwCommandline
clustalw_exe = 'clustalw2'
ccli = ClustalwCommandline(clustalw_exe,
infile="input4align.fasta", outfile='../../aoutput.aln',
pwgapopen=5)
print(ccli)
from Bio.Align.Applications import ClustalwCommandline
ccli = ClustalwCommandline()
help(ccli)
Explanation: AlignInfo
End of explanation
from Bio import SeqIO
f_in = open('samples/a19.gp')
seq = SeqIO.parse(f_in, 'genbank')
next(seq)
f_in = open('samples/a19.gp')
SeqIO.read(f_in, 'genbank')
Explanation: SeqIO
End of explanation
from Bio import SeqIO
FILE_IN = 'samples/3seqs.fas'
with open(FILE_IN) as fh:
for record in SeqIO.parse(fh, 'fasta'):
id_ = record.id
seq = record.seq
print('Name: {0}, size: {1}'.format(id_, len(seq)))
Explanation: Listing 9.2: readfasta.py: Read a FASTA file
End of explanation
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
with open('samples/NC2033.txt') as fh:
with open('NC2033.fasta','w') as f_out:
rawseq = fh.read().replace('\n','')
record = (SeqRecord(Seq(rawseq),'NC2033.txt','',''),)
SeqIO.write(record, f_out,'fasta')
from Bio import SeqIO
fo_handle = open('myseqs.fasta','w')
readseq = SeqIO.parse(open('samples/myseqs.gbk'), 'genbank')
SeqIO.write(readseq, fo_handle, "fasta")
fo_handle.close()
from Bio import AlignIO
fn = open('samples/secu3.aln')
align = AlignIO.read(fn, 'clustal')
print(align)
Explanation: Listing 9.3: rwfasta.py: Read a file and write it as a FASTA sequence
End of explanation
fi = open('samples/example.aln')
with open('samples/example.phy', 'w') as fo:
align = AlignIO.read(fi, 'clustal')
AlignIO.write([align], fo, 'phylip')
Explanation: Listing 9.4: Alignments
End of explanation
from Bio.Blast.Applications import NcbiblastnCommandline as blastn
BLAST_EXE = '~/opt/ncbi-blast-2.6.0+/bin/blastn'  # adjust to your local blastn path
f_in = '../../samples/seq3.txt'
b_db = 'db/samples/TAIR8cds'
blastn_cline = blastn(cmd=BLAST_EXE, query=f_in, db=b_db,
                      evalue=.0005, outfmt=5)
rh, eh = blastn_cline()
rh.readline()
rh.readline()
eh.readline()
fh = open('testblast.xml','w')
fh.write(rh.read())
fh.close()
from Bio.Blast import NCBIXML
for blast_record in NCBIXML.parse(rh):
    pass  # process each blast_record here
Explanation: BLAST
Listing 9.5: runblastn.py: Running a local NCBI BLAST
End of explanation
from Bio.Blast import NCBIXML
with open('samples/sampleXblast.xml') as xmlfh:
for record in NCBIXML.parse(xmlfh):
for align in record.alignments:
print(align.title)
align.length
align.hit_id
align.hit_def
align.hsps
from Bio.Blast import NCBIXML
threshold = 0.0001
xmlfh = open('samples/other.xml')
blast_record = next(NCBIXML.parse(open(xmlfh)))
for align in blast_record.alignments:
if align.hsps[0].expect < threshold:
print(align.accession)
from Bio.Data import IUPACData
IUPACData.ambiguous_dna_values['M']
IUPACData.ambiguous_dna_values['H']
IUPACData.ambiguous_dna_values['X']
Explanation: Listing 9.7: BLASTparser1.py: Extract alignments title from a BLAST output
End of explanation
from Bio.Data.IUPACData import protein_weights as pw
protseq = input('Enter your protein sequence: ')
total_w = 0
for aa in protseq:
total_w += pw.get(aa.upper(),0)
total_w -= 18*(len(protseq)-1)
print('The net weight is: {0}'.format(total_w))
from Bio.Data.CodonTable import unambiguous_dna_by_id
bact_trans=unambiguous_dna_by_id[11]
bact_trans.forward_table['GTC']
bact_trans.back_table['R']
from Bio.Data import CodonTable
print (CodonTable.generic_by_id[2])
Explanation: Listing 9.9: protwwbiopy.py: Protein weight calculator with Biopython
End of explanation
from Bio import Entrez
my_em = '[email protected]'
db = "pubmed"
# Search de Entrez website using esearch from eUtils
# esearch returns a handle (called h_search)
h_search = Entrez.esearch(db=db, email=my_em,
term='python and bioinformatics')
# Parse the result with Entrez.read()
record = Entrez.read(h_search)
# Get the list of Ids returned by previous search
res_ids = record["IdList"]
# For each id in the list
for r_id in res_ids:
# Get summary information for each id
h_summ = Entrez.esummary(db=db, id=r_id, email=my_em)
# Parse the result with Entrez.read()
summ = Entrez.read(h_summ)
print(summ[0]['Title'])
print(summ[0]['DOI'])
print('==============================================')
Explanation: eUtils: Retrieving Bibliography
Listing 9.12: entrez1.py: Retrieve and display data from PubMed
End of explanation
from Bio import Entrez
my_em = '[email protected]'
db = "gene"
term = 'cobalamin synthase homo sapiens'
h_search = Entrez.esearch(db=db, email=my_em, term=term)
record = Entrez.read(h_search)
res_ids = record["IdList"]
for r_id in res_ids:
h_summ = Entrez.esummary(db=db, id=r_id, email=my_em)
s = Entrez.read(h_summ)
print(r_id)
name = s['DocumentSummarySet']['DocumentSummary'][0]['Name']
print(name)
su = s['DocumentSummarySet']['DocumentSummary'][0]['Summary']
print(su)
print('==============================================')
n = "nucleotide"
handle = Entrez.efetch(db=n, id="326625", rettype='fasta')
print (handle.read())
handle = Entrez.efetch(db=n, id="326625", retmode='xml')
record = Entrez.read(handle)
record[0]['GBSeq_moltype']
record[0]['GBSeq_sequence']
record[0]['GBSeq_organism']
from Bio.PDB.PDBParser import PDBParser
pdbfn = '../../samples/1FAT.pdb'
parser = PDBParser(PERMISSIVE=1)
structure = parser.get_structure("1fat", pdbfn)
structure.child_list
model = structure[0]
model.child_list
chain = model['B']
chain.child_list[:5]
residue = chain[4]
residue.child_list
atom = residue['CB']
atom.bfactor
Explanation: eUtils: Retrieving Gene Information
Listing 9.13: entrez2.py: Retrieve and display data from PubMed
End of explanation
import gzip
import io
from Bio.PDB.PDBParser import PDBParser
def disorder(structure):
for chain in structure[0].get_list():
for residue in chain.get_list():
for atom in residue.get_list():
if atom.is_disordered():
print(residue, atom)
return None
pdbfn = 'samples/pdb1apk.ent.gz'
handle = gzip.GzipFile(pdbfn)
handle = io.StringIO(handle.read().decode('utf-8'))
parser = PDBParser()
structure = parser.get_structure('test', handle)
disorder(structure)
Explanation: Listing 9.14: pdb2.py: Parse a gzipped PDB file
End of explanation
from Bio.ExPASy import Prosite
handle = open("prosite.dat")
records = Prosite.parse(handle)
for r in records:
print(r.accession)
print(r.name)
print(r.description)
print(r.pattern)
print(r.created)
print(r.pdoc)
print("===================================")
from Bio import Restriction
Restriction.EcoRI
from Bio.Seq import Seq
from Bio.Alphabet.IUPAC import IUPACAmbiguousDNA
alfa = IUPACAmbiguousDNA()
gi1942535 = Seq('CGCGAATTCGCG', alfa)
Restriction.EcoRI.search(gi1942535)
Restriction.EcoRI.catalyse(gi1942535)
enz1 = Restriction.EcoRI
enz2 = Restriction.HindIII
batch1 = Restriction.RestrictionBatch([enz1, enz2])
batch1.search(gi1942535)
dd = batch1.search(gi1942535)
dd.get(Restriction.EcoRI)
dd.get(Restriction.HindIII)
batch1.add(Restriction.EarI)
batch1
batch1.remove(Restriction.EarI)
batch1
batch2 = Restriction.CommOnly
an1 = Restriction.Analysis(batch1,gi1942535)
an1.full()
an1.print_that()
an1.print_as('map')
an1.print_that()
an1.only_between(1,8)
Explanation: PROSITE
End of explanation
from Bio.SeqUtils import GC
GC('gacgatcggtattcgtag')
from Bio.SeqUtils import MeltingTemp
MeltingTemp.Tm_staluc('tgcagtacgtatcgt')
print('%.2f'%MeltingTemp.Tm_staluc('tgcagtacgtatcgt'))
from Bio.SeqUtils import CheckSum
myseq = 'acaagatgccattgtcccccggcctcctgctgctgct'
CheckSum.gcg(myseq)
CheckSum.crc32(myseq)
CheckSum.crc64(myseq)
CheckSum.seguid(myseq)
Explanation: DNA Utils
End of explanation
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from Bio.SeqUtils import ProtParamData
from Bio import SeqIO
with open('samples/pdbaa') as fh:
for rec in SeqIO.parse(fh,'fasta'):
myprot = ProteinAnalysis(str(rec.seq))
print(myprot.count_amino_acids())
print(myprot.get_amino_acids_percent())
print(myprot.molecular_weight())
print(myprot.aromaticity())
print(myprot.instability_index())
print(myprot.flexibility())
print(myprot.isoelectric_point())
print(myprot.secondary_structure_fraction())
print(myprot.protein_scale(ProtParamData.kd, 9, .4))
Explanation: Protein Utils
Listing 9.15: protparam.py: Apply PropParam functions to a group of proteins
End of explanation
import pprint
from Bio.Sequencing import Phd
fn = 'samples/phd1'
with open(fn) as fh:
rp = Phd.read(fh)
# All the comments are in a dictionary
pprint.pprint(rp.comments)
# Sequence information
print('Sequence: %s' % rp.seq)
# Quality information for each base
print('Quality: %s' % rp.sites)
from Bio import SeqIO
fn = '../../samples/phd1'
fh = open(fn)
seqs = SeqIO.parse(fh,'phd')
for s in seqs:
print(s.seq)
from Bio.Sequencing import Ace
fn='836CLEAN-100.fasta.cap.ace'
acefilerecord=Ace.read(open(fn))
acefilerecord.ncontigs
acefilerecord.nreads
acefilerecord.wa[0].info
acefilerecord.wa[0].date
Explanation: Listing 9.16: phd1.py: Extract data from a .phd.1 file
End of explanation
from Bio.Sequencing import Ace
fn = 'samples/contig1.ace'
acefilerecord = Ace.read(open(fn))
# For each contig:
for ctg in acefilerecord.contigs:
print('==========================================')
print('Contig name: %s'%ctg.name)
print('Bases: %s'%ctg.nbases)
print('Reads: %s'%ctg.nreads)
print('Segments: %s'%ctg.nsegments)
print('Sequence: %s'%ctg.sequence)
print('Quality: %s'%ctg.quality)
# For each read in contig:
for read in ctg.reads:
print('Read name: %s'%read.rd.name)
print('Align start: %s'%read.qa.align_clipping_start)
print('Align end: %s'%read.qa.align_clipping_end)
print('Qual start: %s'%read.qa.qual_clipping_start)
print('Qual end: %s'%read.qa.qual_clipping_end)
print('Read sequence: %s'%read.rd.sequence)
print('==========================================')
Explanation: Listing 9.17: ace.py: Retrieve data from an “.ace” file
End of explanation
from Bio import SwissProt
with open('samples/spfile.txt') as fh:
records = SwissProt.parse(fh)
for record in records:
print('Entry name: %s' % record.entry_name)
print('Accession(s): %s' % ','.join(record.accessions))
print('Keywords: %s' % ','.join(record.keywords))
print('Sequence: %s' % record.sequence)
Explanation: Listing 9.18: Retrieve data from a SwissProt file
End of explanation
from Bio import SwissProt
with open('samples/spfile.txt') as fh:
record = next(SwissProt.parse(fh))
for att in dir(record):
if not att.startswith('__'):
print(att, getattr(record,att))
Explanation: Listing 9.19: Attributes of a SwissProt record
End of explanation
!curl https://s3.amazonaws.com/py4bio/cas9align.fasta -o cas9align.fasta
with open('cas9align.fasta') as f_in:
print(f_in.read())
print(dir(align))
from Bio import AlignIO
AlignIO.write(align, 'cas9align.phy', 'phylip')
from Bio.Alphabet import ProteinAlphabet
align._alphabet = ProteinAlphabet()
print(align)
print(align[3:,:5])
with open('cas9align.aln') as f_in:
print(f_in.read())
from Bio.Align.AlignInfo import print_info_content
print_info_content(align)
summary.pos_specific_score_matrix()
from Bio import Alphabet
for record in align:
print(Alphabet._get_base_alphabet(record.seq.alphabet))
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/cas9align.fasta -o archivo2.txt
with open('archivo2.txt') as fh:
print(fh.read())
# http://biopython.org/DIST/docs/api/Bio.Align.Applications._ClustalOmega.ClustalOmegaCommandline-class.html
Explanation: CHECK FROM HERE DOWN WHETHER THIS IS FINE OR WHETHER IT SHOULD BE MOVED ABOVE!!!!
How to download a file
End of explanation |
4,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An information filter is used to combine a noisy measurement and a noisy prediction about the state of a system into a better estimate of the real state of said system
Step1: One of the most basic forms of information filter is the H-filter; it uses a parameter $h$ to adjust our prediction of the current state of the system by using the residual $ r = Z(t) - P_t$, where $Z(t)$ is the measured state at time $t$ and $P_t$ is the predicted state at time $t$
$h$ measures how much we trust our measurement, if it is 1 our estimate becomes $$\hat x_t = P_t + 1 (Z(t) - P_t) = Z(t)$$
If $h=0$ on the other hand, we have $$ \hat x_t= P_t + 0 (Z(t) - P_t) = P_t$$
Step2: We can see that the estimated speed eventually converges to the real speed, even with the noisy measurements,
the $h$ parameter will dictate how fast it converges, but it will also allow for the measurement noise to filter into the estimations.
Note that we also "guessed" the correct rate of change (the acceleration, in this case), if we guess it wrong the estimates will be biased.
Step3: Even with the negative acceleration the estimated speed is increasing with each iteration, although it is biased. We can see the estimate curve is really close to the predictions, which are off because we guessed the wrong rate of change. We can fix this by increasing $h$ and telling the filter that we would rather trust the measurements than the prediction, but this will let sensor noise pollute the estimates.
Having to correctly guess the rate of change is a problem since it may not be easy to do. To fix that we can adjust our rate of change much like the way we adjust our predictions: the residual $r_t = Z(t) - P_t$ tells us by how much our prediction is off from the measurement, so if we then increase our rate of change by that amount we should correct the prediction for the next iteration. Of course, since $Z(t)$ is noisy, we also add a parameter - $g$ - to tell the filter how much of this information we want to incorporate in the rate of change. This is known as the GH-Filter (or sometimes, the $\alpha\beta$-filter)
Step4: We can write down the equations for each step of the filter. We have the initial guesses
$$\hat x_0 = \epsilon; \delta x_0 = \delta$$
The prediction is a function of the estimated state, its rate of change and some time step for scaling if needed
$$ P_t = P(\hat x_{t-1}, \delta x_{t-1}, \delta t) $$
We can then compute the residual by subtracting our prediction from the measured state
$$ r_t = Z(t) - P_t$$
Now we allow some of the information from the residual to be incorporated into the prediction, producing the final estimate
$$ E_t = P_t + h r_t $$
We also course correct our rate of change for the next step
$$ \delta x_{t} = \delta x_{t-1} + g \frac{r_t}{\delta t} $$ | Python Code:
# Lets define some constants so we can play with the simulations later
# I used 81 time steps so at each time step the speed will increase by 1
TIME_STEPS = 81
V_0 = 40
V_F = 120
# Create the true speeds from V_0 to V_F
real_speeds = np.linspace(start=V_0, stop=V_F, num=TIME_STEPS)
# Define a generator that will add a random gaussian noise to the measurements
def speeds():
for s in real_speeds:
yield s + np.random.normal(loc=0, scale=3)
print(real_speeds)
Explanation: An information filter is used to combine a noisy measurement and a noisy prediction about the state of a system into a better estimate of the real state of said system
End of explanation
def h_filter(initial_estimate, change_rate, measurements, h, time_step=1):
estimates = [initial_estimate]
predictions = []
measured = []
for z in measurements:
measured.append(z)
prediction = estimates[-1] + change_rate * time_step
predictions.append(prediction)
residual = z - prediction
estimate = prediction + h * residual
estimates.append(estimate)
return measured, predictions, estimates
# Let's set some parameters for the simulations; note that we do not have to be accurate about the initial speed
# estimate
initial_speed = 0
accel = 1
factor = .2
Z, P, E = h_filter(initial_speed, accel, speeds(), factor)
def plot(times, real, Z, P, E):
fig, ax = plt.subplots(figsize=(15,6))
ax.plot(times, real, '--', label="Real", linewidth=2)
ax.scatter(times, Z, label="Measurements", s=3, c = 'red')
ax.plot(times, P, label="Predictions", linewidth=.5)
ax.plot(times, E[1:], label="Estimates", linewidth=1)
ax.legend();
times = np.arange(TIME_STEPS)
plot(times, real_speeds, Z, P, E)
Explanation: One of the most basic forms of information filter is the H-filter; it uses a parameter $h$ to adjust our prediction of the current state of the system by using the residual $ r = Z(t) - P_t$, where $Z(t)$ is the measured state at time $t$ and $P_t$ is the predicted state at time $t$
$h$ measures how much we trust our measurement, if it is 1 our estimate becomes $$\hat x_t = P_t + 1 (Z(t) - P_t) = Z(t)$$
If $h=0$ on the other hand, we have $$ \hat x_t= P_t + 0 (Z(t) - P_t) = P_t$$
End of explanation
initial_speed = 0
accel = -2 # Guessing a negativa acceleration
factor = .2
Z, P, E = h_filter(initial_speed, accel, speeds(), factor)
def plot(times, real, Z, P, E):
fig, ax = plt.subplots(figsize=(15,6))
ax.plot(times, real, '--', label="Real", linewidth=2)
ax.scatter(times, Z, label="Measurements", s=3, c = 'red')
ax.plot(times, P, label="Predictions", linewidth=.5)
ax.plot(times, E[1:], label="Estimates", linewidth=1)
ax.legend();
times = np.arange(TIME_STEPS)
plot(times, real_speeds, Z, P, E)
Explanation: We can see that the estimated speed eventually converges to the real speed, even with the noisy measurements,
the $h$ parameter will dictate how fast it converges, but it will also allow for the measurement noise to filter into the estimations.
Note that we also "guessed" the correct rate of change (the acceleration, in this case), if we guess it wrong the estimates will be biased.
End of explanation
def gh_filter(initial_estimate, change_rate, measurements, h, g, time_step=1):
estimates = [initial_estimate]
predictions = []
measured = []
for z in measurements:
measured.append(z)
prediction = estimates[-1] + change_rate * time_step
predictions.append(prediction)
residual = z - prediction
estimate = prediction + h * residual
estimates.append(estimate)
change_rate = change_rate + g * residual/time_step
return measured, predictions, estimates
Explanation: Even with the negative acceleration the estimated speed is increasing with each iteration, although it is biased. We can see the estimate curve is really close to the predictions, which are off because we guessed the wrong rate of change. We can fix this by increasing $h$ and telling the filter that we would rather trust the measurements than the prediction, but this will let sensor noise pollute the estimates.
Having to correctly guess the rate of change is a problem since it may not be easy to do. To fix that we can adjust our rate of change much like the way we adjust our predictions: the residual $r_t = Z(t) - P_t$ tells us by how much our prediction is off from the measurement, so if we then increase our rate of change by that amount we should correct the prediction for the next iteration. Of course, since $Z(t)$ is noisy, we also add a parameter - $g$ - to tell the filter how much of this information we want to incorporate in the rate of change. This is known as the GH-Filter (or sometimes, the $\alpha\beta$-filter)
End of explanation
# Here we have a wrong initial guess and an initial rate of change that is way off; see how the filter quickly corrects
# the rate of change so it can start tracking the true speed
initial_speed = 0
accel = -50
h = .6
g = .2
Z, P, E = gh_filter(initial_speed, accel, speeds(), h, g)
times = np.arange(TIME_STEPS)
plot(times, real_speeds, Z, P, E)
Explanation: We can write down the equations for each step of the filter. We have the initial guesses
$$\hat x_0 = \epsilon; \delta x_0 = \delta$$
The prediction is a function of the estimated state, its rate of change and some time step for scaling if needed
$$ P_t = P(\hat x_{t-1}, \delta x_{t-1}, \delta t) $$
We can then compute the residual by subtracting our prediction from the measured state
$$ r_t = Z(t) - P_t$$
Now we allow some of the information from the residual to be incorporated into the prediction, producing the final estimate
$$ E_t = P_t + h r_t $$
We also course correct our rate of change for the next step
$$ \delta x_{t} = \delta x_{t-1} + g \frac{r_t}{\delta t} $$
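As a sanity check of these update equations, here is one hand-worked step with arbitrary numbers, using $h=0.6$, $g=0.2$ and $\delta t=1$:
x_est, dx = 40.0, 1.0                   # previous estimate and rate of change (arbitrary values)
z = 43.0                                # new noisy measurement (arbitrary value)
prediction = x_est + dx * 1.0           # 41.0
residual = z - prediction               # 2.0
estimate = prediction + 0.6 * residual  # 42.2
dx = dx + 0.2 * residual / 1.0          # 1.4
print(estimate, dx)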
End of explanation |
4,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 1
Step1: Overview of a simulation script
Typically, a simulation script consists of the following parts
Step2: The next step would be to create an instance of the System class and to seed espresso. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist.
Step3: It can be used to manipulate the crucial system parameters like the time step and the size of the simulation box (<tt>time_step</tt>, and <tt>box_l</tt>).
Step4: Choosing the thermodynamic ensemble, thermostat
Simulations can be carried out in different thermodynamic ensembles such as NVE (particle __N__umber, __V__olume, __E__nergy), NVT (particle __N__umber, __V__olume, __T__emperature) or NPT-isotropic (particle __N__umber, __P__ressure, __T__emperature).
The NVE ensemble is simulated without a thermostat. A previously enabled thermostat can be switched off as follows
Step5: The NVT and NPT ensembles require a thermostat. In this tutorial, we use the Langevin thermostat.
In ESPResSo, the thermostat is set as follows
Step6: Use a Langevin thermostat (NVT or NPT ensemble); in the code below the temperature is set to 0.728 and the damping coefficient gamma to 1.0.
Placing and accessing particles
Particles in the simulation can be added and accessed via the <tt>part</tt> property of the System class. Individual particles are referred to by an integer id, e.g., <tt>system.part[0]</tt>. If <tt>id</tt> is unspecified, an unused particle id is automatically assigned. It is also possible to use common python iterators and slicing operations to add or access several particles at once.
Particles can be grouped into several types, so that, e.g., a binary fluid can be simulated. Particle types are identified by integer ids, which are set via the particles' <tt>type</tt> attribute. If it is not specified, zero is implied.
Step7: Many objects in ESPResSo have a string representation, and thus can be displayed via python's <tt>print</tt> function
Step8: Setting up non-bonded interactions
Non-bonded interactions act between all particles of a given combination of particle types. In this tutorial, we use the Lennard-Jones non-bonded interaction. The interaction of two particles of type 0 can be setup as follows
Step9: Warmup
In many cases, including this tutorial, particles are initially placed randomly in the simulation box. It is therefore possible that particles overlap, resulting in a huge repulsive force between them. In this case, integrating the equations of motion would not be numerically stable. Hence, it is necessary to remove this overlap. This is done by limiting the maximum force between two particles, integrating the equations of motion, and increasing the force limit step by step as follows
Step10: Integrating equations of motion and taking measurements
Once warmup is done, the force capping is switched off by setting it to zero.
Step11: At this point, we have set the necessary environment and warmed up our system. Now, we integrate the equations of motion and take measurements. We first plot the radial distribution function which describes how the density varies as a function of distance from a tagged particle. The radial distribution function is averaged over several measurements to reduce noise.
The potential and kinetic energies can be monitored using the analysis method <tt>system.analysis.energy()</tt>. <tt>kinetic_temperature</tt> here refers to the measured temperature obtained from kinetic energy and the number of degrees of freedom in the system. It should fluctuate around the preset temperature of the thermostat.
The particles' mean square displacement,
\begin{equation}
\mathrm{msd}(t) =\langle (x(t_0+t) -x(t_0))^2\rangle,
\end{equation}
can be calculated using "observables and correlators". An observable is an object which takes a measurement on the system. It can depend on parameters specified when the observable is instanced, such as the ids of the particles to be considered.
Step12: We now use the plotting library <tt>matplotlib</tt> available in Python to visualize the measurements.
Step13: Since the ensemble average $\langle E_\text{kin}\rangle=3/2 N k_B T$ is related to the temperature, we may compute the actual temperature of the system via $k_B T= 2/(3N) \langle E_\text{kin}\rangle$. The temperature is fixed and does not fluctuate in the NVT ensemble! The instantaneous temperature is calculated via $2/(3N) E_\text{kin}$ (without ensemble averaging), but it is not the temperature of the system.
Step14: Simple Error Estimation on Time Series Data
A simple way to estimate the error of an observable is to use the standard error of the mean (SE) for $N$
uncorrelated samples | Python Code:
import espressomd
print(espressomd.features())
required_features = ["LENNARD_JONES"]
espressomd.assert_features(required_features)
Explanation: Tutorial 1: Lennard-Jones Liquid
Table of Contents
Introduction
Background
The Lennard-Jones Potential
Units
First steps
Overview of a simulation script
System setup
Choosing the thermodynamic ensemble, thermostat
Placing and accessing particles
Setting up non-bonded interactions
Warmup
Integrating equations of motion and taking measurements
Simple Error Estimation on Time Series Data
Exercises
Binary Lennard-Jones Liquid
References
Introduction
Welcome to the basic ESPResSo tutorial!
In this tutorial, you will learn, how to use the ESPResSo package for your
research. We will cover the basics of ESPResSo, i.e., how to set up and modify a physical system, how to run a simulation, and how to load, save and analyze the produced simulation data.
More advanced features and algorithms available in the ESPResSo package are
described in additional tutorials.
Background
Today's research on Soft Condensed Matter has brought the need for a flexible, extensible, reliable, and efficient (parallel) molecular simulation package. For this reason ESPResSo (Extensible Simulation Package for Research on Soft Matter Systems) [1] has been developed at the Max Planck Institute for Polymer Research, Mainz, and at the Institute for Computational Physics at the University of Stuttgart in the group of Prof. Dr. Christian Holm [2,3]. The ESPResSo package is probably the most flexible and extensible simulation package on the market. It is specifically developed for coarse-grained molecular dynamics (MD) simulation of polyelectrolytes but is not necessarily limited to this. For example, it could also be used to simulate granular media. ESPResSo has been nominated for the Heinz-Billing-Preis for Scientific Computing in 2003 [4].
The Lennard-Jones Potential
A pair of neutral atoms or molecules is subject to two distinct forces in the limit of large separation and small separation: an attractive force at long ranges (van der Waals force, or dispersion force) and a repulsive force at short ranges (the result of overlapping electron orbitals, referred to as Pauli repulsion from the Pauli exclusion principle). The Lennard-Jones potential (also referred to as the L-J potential, 6-12 potential or, less commonly, 12-6 potential) is a simple mathematical model that represents this behavior. It was proposed in 1924 by John Lennard-Jones. The L-J potential is of the form
\begin{equation}
V(r) = 4\epsilon \left[ \left( \dfrac{\sigma}{r} \right)^{12} - \left( \dfrac{\sigma}{r} \right)^{6} \right]
\end{equation}
where $\epsilon$ is the depth of the potential well and $\sigma$ is the (finite) distance at which the inter-particle potential is zero and $r$ is the distance between the particles. The $\left(\frac{1}{r}\right)^{12}$ term describes repulsion and the $(\frac{1}{r})^{6}$ term describes attraction. The Lennard-Jones potential is an
approximation. The form of the repulsion term has no theoretical justification; the repulsion force should depend exponentially on the distance, but the repulsion term of the L-J formula is more convenient due to the ease and efficiency of computing $r^{12}$ as the square of $r^6$.
In practice, the L-J potential is cut off beyond a specified distance $r_{c}$ and shifted so that the potential at the cutoff distance is zero.
<figure>
<img src='figures/lennard-jones-potential.png' alt='missing' style='width: 600px;'/>
<center>
<figcaption>Figure 1: Lennard-Jones potential</figcaption>
</center>
</figure>
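The formula above can also be written down directly as a small helper function, for example (a sketch in reduced units; the cutoff of $2.5\,\sigma$ is the conventional choice that is also used later in this tutorial):
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0, r_cut=2.5):
    # truncated (unshifted) Lennard-Jones pair potential
    r = np.asarray(r, dtype=float)
    v = 4.0 * epsilon * ((sigma / r)**12 - (sigma / r)**6)
    return np.where(r < r_cut, v, 0.0)

print(lj_potential([1.0, 2.0**(1.0 / 6.0), 3.0]))  # 0 at r=sigma, -epsilon at the minimum, 0 beyond the cutoff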
Units
Novice users must understand that Espresso has no fixed unit system. The unit
system is set by the user. Conventionally, reduced units are employed, in other
words LJ units.
First steps
What is ESPResSo? It is an extensible, efficient Molecular Dynamics package specially powerful on simulating charged systems. In depth information about the package can be found in the relevant sources [1,4,2,3].
ESPResSo consists of two components. The simulation engine is written in C and C++ for the sake of computational efficiency. The steering or control
level is interfaced to the kernel via an interpreter of the Python scripting language.
The kernel performs all computationally demanding tasks. Before all, integration of Newton's equations of motion, including calculation of energies and forces. It also takes care of internal organization of data, storing the data about particles, communication between different processors or cells of the cell-system.
The scripting interface (Python) is used to setup the system (particles, boundary conditions, interactions etc.), control the simulation, run analysis, and store and load results. The user has at hand the full reliability and functionality of the scripting language. For instance, it is possible to use the SciPy package for analysis and PyPlot for plotting.
With a certain overhead in efficiency, it can also be used to reject/accept new configurations in combined MD/MC schemes. In principle, any parameter which is accessible from the scripting level can be changed at any moment of runtime. In this way methods like thermodynamic integration become readily accessible.
Note: This tutorial assumes that you already have a working ESPResSo
installation on your system. If this is not the case, please consult the first chapters of the user's guide for installation instructions.
Python simulation scripts can be run conveniently:
End of explanation
# Importing other relevant python modules
import numpy as np
# System parameters
n_part = 100
density = 0.5
box_l = np.power(n_part / density, 1.0 / 3.0) * np.ones(3)
Explanation: Overview of a simulation script
Typically, a simulation script consists of the following parts:
System setup (box geometry, thermodynamic ensemble, integrator parameters)
Placing the particles
Setup of interactions between particles
Warm up (bringing the system into a state suitable for measurements)
Integration loop (propagate the system in time and record measurements)
System setup
The functionality of ESPResSo for python is provided via a python module called <tt>espressomd</tt>. At the beginning of the simulation script, it has to be imported.
End of explanation
system = espressomd.System(box_l=box_l)
system.seed = 42
Explanation: The next step would be to create an instance of the System class and to seed espresso. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist.
End of explanation
skin = 0.4
time_step = 0.01
eq_tstep = 0.001
temperature = 0.728
system.time_step = time_step
system.cell_system.skin = skin
Explanation: It can be used to manipulate the crucial system parameters like the time step and the size of the simulation box (<tt>time_step</tt>, and <tt>box_l</tt>).
End of explanation
system.thermostat.turn_off()
Explanation: Choosing the thermodynamic ensemble, thermostat
Simulations can be carried out in different thermodynamic ensembles such as NVE (particle __N__umber, __V__olume, __E__nergy), NVT (particle __N__umber, __V__olume, __T__emperature) or NPT-isotropic (particle __N__umber, __P__ressure, __T__emperature).
The NVE ensemble is simulated without a thermostat. A previously enabled thermostat can be switched off as follows:
End of explanation
system.thermostat.set_langevin(kT=temperature, gamma=1.0, seed=42)
Explanation: The NVT and NPT ensembles require a thermostat. In this tutorial, we use the Langevin thermostat.
In ESPResSo, the thermostat is set as follows:
End of explanation
# Add particles to the simulation box at random positions
for i in range(n_part):
system.part.add(type=0, pos=np.random.random(3) * system.box_l)
# Access position of a single particle
print(system.part[0].pos)
# Iterate over the first five particles for the purpose of demonstration.
# For accessing all particles, do not splice system.part
for p in system.part[:5]:
print(p.pos)
print(p.v)
# Obtain all particle positions
cur_pos = system.part[:].pos
Explanation: Above, a Langevin thermostat (NVT ensemble) was used, with the temperature set to 0.728 and the damping coefficient gamma set to 1.0.
Placing and accessing particles
Particles in the simulation can be added and accessed via the <tt>part</tt> property of the System class. Individual particles are referred to by an integer id, e.g., <tt>system.part[0]</tt>. If <tt>id</tt> is unspecified, an unused particle id is automatically assigned. It is also possible to use common python iterators and slicing operations to add or access several particles at once.
Particles can be grouped into several types, so that, e.g., a binary fluid can be simulated. Particle types are identified by integer ids, which are set via the particles' <tt>type</tt> attribute. If it is not specified, zero is implied.
End of explanation
print(system.part[0])
Explanation: Many objects in ESPResSo have a string representation, and thus can be displayed via python's <tt>print</tt> function:
End of explanation
lj_eps = 1.0
lj_sig = 1.0
lj_cut = 2.5 * lj_sig
lj_cap = 0.5
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig, cutoff=lj_cut, shift='auto')
system.force_cap = lj_cap
Explanation: Setting up non-bonded interactions
Non-bonded interactions act between all particles of a given combination of particle types. In this tutorial, we use the Lennard-Jones non-bonded interaction. The interaction of two particles of type 0 can be setup as follows:
End of explanation
warm_steps = 100
warm_n_time = 2000
min_dist = 0.87
i = 0
act_min_dist = system.analysis.min_dist()
while i < warm_n_time and act_min_dist < min_dist:
system.integrator.run(warm_steps)
act_min_dist = system.analysis.min_dist()
i += 1
lj_cap += 1.0
system.force_cap = lj_cap
Explanation: Warmup
In many cases, including this tutorial, particles are initially placed randomly in the simulation box. It is therefore possible that particles overlap, resulting in a huge repulsive force between them. In this case, integrating the equations of motion would not be numerically stable. Hence, it is necessary to remove this overlap. This is done by limiting the maximum force between two particles, integrating the equations of motion, and increasing the force limit step by step as follows:
End of explanation
system.force_cap = 0
Explanation: Integrating equations of motion and taking measurements
Once warmup is done, the force capping is switched off by setting it to zero.
End of explanation
# Integration parameters
sampling_interval = 100
sampling_iterations = 100
from espressomd.observables import ParticlePositions
from espressomd.accumulators import Correlator
# Pass the ids of the particles to be tracked to the observable.
part_pos = ParticlePositions(ids=range(n_part))
# Initialize MSD correlator
msd_corr = Correlator(obs1=part_pos,
tau_lin=10, delta_N=10,
tau_max=10000 * time_step,
corr_operation="square_distance_componentwise")
# Calculate results automatically during the integration
system.auto_update_accumulators.add(msd_corr)
# Set parameters for the radial distribution function
r_bins = 70
r_min = 0.0
r_max = system.box_l[0] / 2.0
avg_rdf = np.zeros((r_bins,))
# Take measurements
time = np.zeros(sampling_iterations)
instantaneous_temperature = np.zeros(sampling_iterations)
etotal = np.zeros(sampling_iterations)
for i in range(1, sampling_iterations + 1):
system.integrator.run(sampling_interval)
# Measure radial distribution function
r, rdf = system.analysis.rdf(rdf_type="rdf", type_list_a=[0], type_list_b=[0],
r_min=r_min, r_max=r_max, r_bins=r_bins)
avg_rdf += rdf / sampling_iterations
# Measure energies
energies = system.analysis.energy()
kinetic_temperature = energies['kinetic'] / (1.5 * n_part)
etotal[i - 1] = energies['total']
time[i - 1] = system.time
instantaneous_temperature[i - 1] = kinetic_temperature
# Finalize the correlator and obtain the results
msd_corr.finalize()
msd = msd_corr.result()
Explanation: At this point, we have set the necessary environment and warmed up our system. Now, we integrate the equations of motion and take measurements. We first plot the radial distribution function which describes how the density varies as a function of distance from a tagged particle. The radial distribution function is averaged over several measurements to reduce noise.
The potential and kinetic energies can be monitored using the analysis method <tt>system.analysis.energy()</tt>. <tt>kinetic_temperature</tt> here refers to the measured temperature obtained from kinetic energy and the number of degrees of freedom in the system. It should fluctuate around the preset temperature of the thermostat.
The particles' mean square displacement,
\begin{equation}
\mathrm{msd}(t) =\langle (x(t_0+t) -x(t_0))^2\rangle,
\end{equation}
can be calculated using "observables and correlators". An observable is an object which takes a measurement on the system. It can depend on parameters specified when the observable is instanced, such as the ids of the particles to be considered.
End of explanation
import matplotlib.pyplot as plt
plt.ion()
fig1 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
fig1.set_tight_layout(False)
plt.plot(r, avg_rdf, '-', color="#A60628", linewidth=2, alpha=1)
plt.xlabel('$r$', fontsize=20)
plt.ylabel('$g(r)$', fontsize=20)
plt.show()
fig2 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
fig2.set_tight_layout(False)
plt.plot(time, instantaneous_temperature, '-', color="red", linewidth=2,
alpha=0.5, label='Instantaneous Temperature')
plt.plot([min(time), max(time)], [temperature] * 2, '-', color="#348ABD",
linewidth=2, alpha=1, label='Set Temperature')
plt.xlabel('Time', fontsize=20)
plt.ylabel('Temperature', fontsize=20)
plt.legend(fontsize=16, loc=0)
plt.show()
Explanation: We now use the plotting library <tt>matplotlib</tt> available in Python to visualize the measurements.
End of explanation
fig3 = plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
fig3.set_tight_layout(False)
plt.plot(msd[:, 0], msd[:, 2] + msd[:, 3] + msd[:, 4],
'o-', color="#348ABD", linewidth=2, alpha=1)
plt.xlabel('Time', fontsize=20)
plt.ylabel('Mean squared displacement', fontsize=20)
plt.xscale('log')
plt.yscale('log')
plt.show()
Explanation: Since the ensemble average $\langle E_\text{kin}\rangle=3/2 N k_B T$ is related to the temperature, we may compute the actual temperature of the system via $k_B T= 2/(3N) \langle E_\text{kin}\rangle$. The temperature is fixed and does not fluctuate in the NVT ensemble! The instantaneous temperature is calculated via $2/(3N) E_\text{kin}$ (without ensemble averaging), but it is not the temperature of the system.
End of explanation
# calculate the standard error of the mean of the total energy
standard_error_total_energy = np.sqrt(etotal.var()) / np.sqrt(sampling_iterations)
print(standard_error_total_energy)
Explanation: Simple Error Estimation on Time Series Data
A simple way to estimate the error of an observable is to use the standard error of the mean (SE) for $N$
uncorrelated samples:
\begin{equation}
SE = \sqrt{\frac{\sigma^2}{N}},
\end{equation}
where $\sigma^2$ is the variance
\begin{equation}
\sigma^2 = \left\langle x^2 - \langle x\rangle^2 \right\rangle
\end{equation}
End of explanation |
4,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ndmg Tutorial
Step1: Check for dependencies, Set Directories
The below code is a simple check that makes sure AFNI and FSL are installed. <br>
We also set the input, data, and atlas paths.
Make sure that AFNI and FSL are installed
Step2: Set Input, Output, and Atlas Locations
Here, you set
Step3: Choose input parameters
Naming Conventions
Here, we define input variables to the pipeline.
To run the ndmg pipeline, you need four files
Step4: Parameter Choices and Output Directory
Here, we choose the parameters to run the pipeline with.
If you are inexperienced with diffusion MRI theory, feel free to just use the default parameters.
atlases = ['desikan', 'CPAC200', 'DKT', 'HarvardOxfordcort', 'HarvardOxfordsub', 'JHU', 'Schaefer2018-200', 'Talairach', 'aal', 'brodmann', 'glasser', 'yeo-7-liberal', 'yeo-17-liberal']
Step5: Get masks and labels
The pipeline needs these two variables as input. <br>
Running the pipeline via ndmg_bids does this for you.
Step6: Run the pipeline! | Python Code:
import os
import os.path as op
import glob
import shutil
import warnings
import subprocess
from pathlib import Path
from ndmg.scripts import ndmg_dwi_pipeline
from ndmg.scripts.ndmg_bids import get_atlas
from ndmg.utils import cloud_utils
Explanation: Ndmg Tutorial: Running Inside Python
This tutorial provides a basic overview of how to run ndmg manually within Python. <br>
We begin by checking for dependencies,
then we set our input parameters,
then we smiply run the pipeline.
Running the pipeline is quite simple: call ndmg_dwi_pipeline.ndmg_dwi_worker with the correct arguments. <br>
Note that, although you can run the pipeline in Python, the absolute easiest way (outside Gigantum) is to run the pipeline from the command line once all dependencies are installed using the following command: <br>
ndmg_bids </absolute/input/dir> </absolute/output/dir>. <br>
This will run a single session from the input directory, and output the results into your output directory.
But for now, let's look at running in Python -- <br>
Let's begin!
End of explanation
# FSL
try:
print(f"Your fsl directory is located here: {os.environ['FSLDIR']}")
except KeyError:
raise AssertionError("You do not have FSL installed! See installation instructions here: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation")
# AFNI
try:
print(f"Your AFNI directory is located here: {subprocess.check_output('which afni', shell=True, universal_newlines=True)}")
except subprocess.CalledProcessError:
raise AssertionError("You do not have AFNI installed! See installation instructions here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/main_toc.html")
Explanation: Check for dependencies, Set Directories
The below code is a simple check that makes sure AFNI and FSL are installed. <br>
We also set the input, data, and atlas paths.
Make sure that AFNI and FSL are installed
End of explanation
# get atlases
ndmg_dir = Path.home() / ".ndmg"
atlas_dir = ndmg_dir / "ndmg_atlases"
get_atlas(str(atlas_dir), "2mm")
# These are the input and output directories used by the pipeline
input_dir = ndmg_dir / "input"
out_dir = ndmg_dir / "output"
print(f"Your input and output directory will be : {input_dir} and {out_dir}")
assert op.exists(input_dir), f"You must have an input directory with data. Your input directory is located here: {input_dir}"
Explanation: Set Input, Output, and Atlas Locations
Here, you set:
1. the input_dir - this is where your input data lives.
2. the out_dir - this is where your output data will go.
End of explanation
# Specify base directory and paths to input files (dwi, bvecs, bvals, and t1w required)
subject_id = 'sub-0025864'
# Define the location of our input files.
t1w = str(input_dir / f"{subject_id}/ses-1/anat/{subject_id}_ses-1_T1w.nii.gz")
dwi = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.nii.gz")
bvecs = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bvec")
bvals = str(input_dir / f"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bval")
print(f"Your anatomical image location: {t1w}")
print(f"Your dwi image location: {dwi}")
print(f"Your bvector location: {bvecs}")
print(f"Your bvalue location: {bvals}")
Explanation: Choose input parameters
Naming Conventions
Here, we define input variables to the pipeline.
To run the ndmg pipeline, you need four files:
1. a t1w - this is a high-resolution anatomical image.
2. a dwi - the diffusion image.
3. bvecs - this is a text file that defines the gradient vectors created by a DWI scan.
4. bvals - this is a text file that defines magnitudes for the gradient vectors created by a DWI scan.
The naming convention is in the BIDs spec.
End of explanation
# Use the default parameters.
atlas = 'desikan'
mod_type = 'prob'
track_type = 'local'
mod_func = 'csd'
reg_style = 'native'
vox_size = '2mm'
seeds = 1
Explanation: Parameter Choices and Output Directory
Here, we choose the parameters to run the pipeline with.
If you are inexperienced with diffusion MRI theory, feel free to just use the default parameters.
atlases = ['desikan', 'CPAC200', 'DKT', 'HarvardOxfordcort', 'HarvardOxfordsub', 'JHU', 'Schaefer2018-200', 'Talairach', 'aal', 'brodmann', 'glasser', 'yeo-7-liberal', 'yeo-17-liberal'] : The atlas that defines the node location of the graph you create.
mod_types = ['det', 'prob'] : Deterministic or probabilistic tractography.
track_types = ['local', 'particle'] : Local or particle tracking.
mods = ['csa', 'csd'] : Constant Solid Angle or Constrained Spherical Deconvolution.
regs = ['native', 'native_dsn', 'mni'] : Registration style. If native, do all registration in each scan's space; if mni, register scans to the MNI atlas; if native_dsn, do registration in native space, and then fit the streamlines to MNI space.
vox_size = ['1mm', '2mm'] : Whether our voxels are 1mm or 2mm.
seeds = int : Seeding density for tractography. More seeds generally results in a better graph, but at a much higher computational cost.
End of explanation
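# Optional sanity check (an aside, not part of the original tutorial): confirm that the
# parameters chosen above are among the recognized values listed in the cell above.
allowed = {
    'mod_type': ['det', 'prob'],
    'track_type': ['local', 'particle'],
    'mod_func': ['csa', 'csd'],
    'reg_style': ['native', 'native_dsn', 'mni'],
    'vox_size': ['1mm', '2mm'],
}
chosen = {'mod_type': mod_type, 'track_type': track_type, 'mod_func': mod_func,
          'reg_style': reg_style, 'vox_size': vox_size}
for name, value in chosen.items():
    assert value in allowed[name], f"{name}={value!r} is not a recognized option"
print("All parameter choices look valid.")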
# Auto-set paths to neuroparc files
mask = str(atlas_dir / "atlases/mask/MNI152NLin6_res-2x2x2_T1w_descr-brainmask.nii.gz")
labels = [str(i) for i in (atlas_dir / "atlases/label/Human/").glob(f"*{atlas}*2x2x2.nii.gz")]
print(f"mask location : {mask}")
print(f"atlas location : {labels}")
Explanation: Get masks and labels
The pipeline needs these two variables as input. <br>
Running the pipeline via ndmg_bids does this for you.
End of explanation
ndmg_dwi_pipeline.ndmg_dwi_worker(dwi=dwi, bvals=bvals, bvecs=bvecs, t1w=t1w, atlas=atlas, mask=mask, labels=labels, outdir=str(out_dir), vox_size=vox_size, mod_type=mod_type, track_type=track_type, mod_func=mod_func, seeds=seeds, reg_style=reg_style, clean=False, skipeddy=True, skipreg=True)
Explanation: Run the pipeline!
End of explanation |
4,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Contextual Bandits
We'll look into a policy-gradient based agent.
Step1: The Contextual Bandits
Here we define our contextual bandits. In this example, we are using three four-armed bandits. What this means is that each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm method generates a random number from a normal distribution with a mean of 0. The lower the value of the chosen arm for the current bandit, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit-arm that will most often give a positive reward, depending on the Bandit presented.
Step2: The Policy-Based Agent
The code below establishes our simple neural agent. It takes as input the current state, and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the value of the return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward. | Python Code:
import tensorflow as tf
import numpy as np
import tensorflow.contrib.slim as slim
Explanation: The Contextual Bandits
We'll look into a policy-gradient based agent.
End of explanation
class contextual_bandit():
def __init__(self):
self.state = 0
#List out our bandits. Currently arms 4, 2, and 1 (respectively) are the most optimal.
self.bandits = np.array([[0.2,0,-0.0,-5],[0.1,-5,1,0.25],[-5,5,5,5]])
self.num_bandits = self.bandits.shape[0]
self.num_actions = self.bandits.shape[1]
def getBandit(self):
self.state = np.random.randint(0,len(self.bandits)) #Returns a random state for each episode.
return self.state
def pullArm(self,action):
#Get a random number.
bandit = self.bandits[self.state,action]
result = np.random.randn(1)
if result > bandit:
#return a positive reward.
return 1
else:
#return a negative reward.
return -1
Explanation: The Contextual Bandits
Here we define our contextual bandits. In this example, we are using three four-armed bandits. What this means is that each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm method generates a random number from a normal distribution with a mean of 0. The lower the value of the chosen arm for the current bandit, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit-arm that will most often give a positive reward, depending on the Bandit presented.
End of explanation
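# Quick illustrative check of the environment defined above (an aside, not part of the
# original tutorial): draw a random bandit and pull one of its arms.
env = contextual_bandit()
s = env.getBandit()    # random bandit index in {0, 1, 2}
r = env.pullArm(0)     # reward is +1 or -1, depending on arm 0 of bandit s
print("bandit:", s, "reward:", r)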
class agent():
def __init__(self, lr, s_size,a_size):
#These lines established the feed-forward part of the network. The agent takes a state and produces an action.
self.state_in= tf.placeholder(shape=[1],dtype=tf.int32)
state_in_OH = slim.one_hot_encoding(self.state_in,s_size)
output = slim.fully_connected(state_in_OH,a_size,\
biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())
self.output = tf.reshape(output,[-1])
self.chosen_action = tf.argmax(self.output,0)
#The next six lines establish the training proceedure. We feed the reward and chosen action into the network
#to compute the loss, and use it to update the network.
self.reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
self.action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
self.responsible_weight = tf.slice(self.output,self.action_holder,[1])
self.loss = -(tf.log(self.responsible_weight)*self.reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)
self.update = optimizer.minimize(self.loss)
tf.reset_default_graph() #Clear the Tensorflow graph.
cBandit = contextual_bandit() #Load the bandits.
myAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.
weights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.
total_episodes = 10000 #Set total number of episodes to train agent on.
total_reward = np.zeros([cBandit.num_bandits,cBandit.num_actions]) #Set scoreboard for bandits to 0.
e = 0.1 #Set the chance of taking a random action.
init = tf.initialize_all_variables()
# Launch the tensorflow graph
with tf.Session() as sess:
sess.run(init)
i = 0
while i < total_episodes:
s = cBandit.getBandit() #Get a state from the environment.
#Choose either a random action or one from our network.
if np.random.rand(1) < e:
action = np.random.randint(cBandit.num_actions)
else:
action = sess.run(myAgent.chosen_action,feed_dict={myAgent.state_in:[s]})
reward = cBandit.pullArm(action) #Get our reward for taking an action given a bandit.
#Update the network.
feed_dict={myAgent.reward_holder:[reward],myAgent.action_holder:[action],myAgent.state_in:[s]}
_,ww = sess.run([myAgent.update,weights], feed_dict=feed_dict)
#Update our running tally of scores.
total_reward[s,action] += reward
if i % 500 == 0:
print("Mean reward for each of the " + str(cBandit.num_bandits) + " bandits: " + str(np.mean(total_reward,axis=1)))
i+=1
for a in range(cBandit.num_bandits):
print("The agent thinks action " + str(np.argmax(ww[a])+1) + " for bandit " + str(a+1) + " is the most promising....")
if np.argmax(ww[a]) == np.argmin(cBandit.bandits[a]):
print("...and it was right!")
else:
print("...and it was wrong!")
Explanation: The Policy-Based Agent
The code below establishes our simple neural agent. It takes as input the current state, and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the value of the return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
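In terms of the loss defined in the code above, the update follows the REINFORCE-style objective $L = -\log(w_a) \cdot R$, where $w_a$ is the output weight of the chosen action and $R$ is the reward; its gradient with respect to $w_a$ is $-R/w_a$, so gradient descent nudges the chosen action's weight up after a reward of $+1$ and down after a reward of $-1$.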
End of explanation |
4,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http
Step1: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written
Step2: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
Step3: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
Step4: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http
Step5: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
Step6: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http
Step7: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
Step8: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
Step9: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import requests
from io import BytesIO
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
Explanation: Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('http://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp
# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))
# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
Explanation: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:
$$
y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t
$$
Each period, the regime transitions according to the following matrix of transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
p_{01} & p_{11}
\end{bmatrix}
$$
where $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$.
The model class is MarkovAutoregression in the time-series part of Statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.
After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
End of explanation
fig, axes = plt.subplots(2, figsize=(7,7))
ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')
ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Smoothed probability of recession')
fig.tight_layout()
Explanation: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
End of explanation
print(res_hamilton.expected_durations)
Explanation: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
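Concretely, because regime durations in a Markov chain are geometrically distributed, the expected duration of regime $i$ is $1/(1 - p_{ii})$; this is the quantity reported by expected_durations, so the roughly one-year recession duration quoted below corresponds to a recession-regime persistence of about $p_{00} \approx 0.75$.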
End of explanation
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')
dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()
# Plot the dataset
dta_kns[0].plot(title='Excess returns', figsize=(12, 3))
# Fit the model
mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)
res_kns = mod_kns.fit()
res_kns.summary()
Explanation: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.
The model in question is:
$$
\begin{align}
y_t & = \varepsilon_t \\
\varepsilon_t & \sim N(0, \sigma_{S_t}^2)
\end{align}
$$
Since there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).
End of explanation
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')
ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime for stock returns')
ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-variance regime for stock returns')
fig.tight_layout()
Explanation: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
End of explanation
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')
dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()
dta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio
dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100
dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()
# Plot the data
dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))
plt.figure()
dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));
Explanation: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.
In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression of Hamilton (1989).
Each period, the regime now transitions according to the following matrix of time-varying transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00,t} & p_{10,t} \\
p_{01,t} & p_{11,t}
\end{bmatrix}
$$
where $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:
$$
p_{ij,t} = \frac{\exp{ x_{t-1}' \beta_{ij} }}{1 + \exp{ x_{t-1}' \beta_{ij} }}
$$
Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.
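As a concrete reading of this formula: $x_{t-1}$ here contains a constant and the demeaned leading indicator, so when the indicator sits at its mean value of zero the transition probability reduces to $\exp{\beta_{ij,\text{const}}} / (1 + \exp{\beta_{ij,\text{const}}})$, i.e. the constant-probability case, and deviations of the indicator from its mean push the probabilities up or down through this logistic function.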
End of explanation
mod_filardo = sm.tsa.MarkovAutoregression(
dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,
exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))
np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
Explanation: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
End of explanation
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
Explanation: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
End of explanation
res_filardo.expected_durations[0].plot(
title='Expected duration of a low-production state', figsize=(12,3));
Explanation: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
End of explanation |
4,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ModelSelection.ipynb
Choosing the number of states and a suitable timescale for hidden Markov models
One of the challenges associated with using hidden Markov models is specifying the correct model. For example, how many hidden states should the model have? At what timescale should we bin our observations? How much data do we need in order to train an effective/useful/representative model?
One possibility (which is conceptually very appealing) is to use a nonparametric Bayesian extension to the HMM, the HDP-HMM (hierarchical Dirichlet process hidden Markov model), in which the number of states can be directly inferred from the data, and moreover, where the number of states are allowed to grow as we obtain more and more data.
Fortunately, even if we choose to use a simple HMM, model selection is perhaps not as important as one might at first think. More specifically, we will show that for a wide range of model states, and for a wide range of timescales, the HMM should return plausible and usable models, so that we can use them to learn something about the data even if we don't have a good idea of what the model parameters should be.
Nevertheless, shifting over to the HDP-HMMs and especially to the HDP-HSMMs (semi-Markov models) where state durations are explicitly specified or learned, is certainly something that I would highly recommend.
TODO
Step1: Load data
Here we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data.
Step2: Find most appropriate number of states using cross validation
Here we split the data into training, validation, and test sets. We monitor the average log probability per sequence (normalized by length) for each of these sets, and we use the validation set to choose the number of model states $m$.
Note to self
Step3: Remarks
Step4: Remarks
Step5: Remarks
Step6: Remarks
Step7: Remarks
Step8: Remarks
Step9: then we start to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.
This observation is even more pronounced if we increase the number of states further
Step10: With enough expressiveness in the number of states, we see the S-shaped curve reappear, which suggests an overexpression of the reward locations, which is consistent with what we see with place cells in animals. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sys
from IPython.display import display, clear_output
sys.path.insert(0, 'helpers')
from efunctions import * # load my helper function(s) to save pdf figures, etc.
from hc3 import load_data, get_sessions
from hmmlearn import hmm # see https://github.com/ckemere/hmmlearn
import klabtools as klab
import seqtools as sq
import importlib
importlib.reload(sq) # reload module here only while prototyping...
importlib.reload(klab) # reload module here only while prototyping...
%matplotlib inline
sns.set(rc={'figure.figsize': (12, 4),'lines.linewidth': 1.5})
sns.set_style("white")
Explanation: ModelSelection.ipynb
Choosing the number of states and a suitable timescale for hidden Markov models
One of the challenges associated with using hidden Markov models is specifying the correct model. For example, how many hidden states should the model have? At what timescale should we bin our observations? How much data do we need in order to train an effective/useful/representative model?
One possibility (which is conceptually very appealing) is to use a nonparametric Bayesian extension to the HMM, the HDP-HMM (hierarchical Dirichlet process hidden Markov model), in which the number of states can be directly inferred from the data, and moreover, where the number of states are allowed to grow as we obtain more and more data.
Fortunately, even if we choose to use a simple HMM, model selection is perhaps not as important as one might at first think. More specifically, we will show that for a wide range of model states, and for a wide range of timescales, the HMM should return plausible and usable models, so that we can use them to learn something about the data even if we don't have a good idea of what the model parameters should be.
Nevertheless, shifting over to the HDP-HMMs and especially to the HDP-HSMMs (semi-Markov models) where state durations are explicitly specified or learned, is certainly something that I would highly recommend.
TODO: Take a look at e.g. https://www.cs.cmu.edu/~ggordon/siddiqi-gordon-moore.fast-hmm.pdf : fast HMM (order of magnitude faster than Baum-Welch) and better model fit: V-STACS.
Import packages and initialization
End of explanation
datadirs = ['/home/etienne/Dropbox/neoReader/Data',
'C:/etienne/Dropbox/neoReader/Data',
'/Users/etienne/Dropbox/neoReader/Data']
fileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)
animal = 'gor01'; month,day = (6,7); session = '16-40-19' # 91 units
spikes = load_data(fileroot=fileroot, datatype='spikes',animal=animal, session=session, month=month, day=day, fs=32552, verbose=False)
eeg = load_data(fileroot=fileroot, datatype='eeg', animal=animal, session=session, month=month, day=day,channels=[0,1,2], fs=1252, starttime=0, verbose=False)
posdf = load_data(fileroot=fileroot, datatype='pos',animal=animal, session=session, month=month, day=day, verbose=False)
speed = klab.get_smooth_speed(posdf,fs=60,th=8,cutoff=0.5,showfig=False,verbose=False)
Explanation: Load data
Here we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data.
End of explanation
## bin ALL spikes
ds = 0.125 # bin spikes into 125 ms bins (theta-cycle inspired)
binned_spikes_all = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)
## identify boundaries for running (active) epochs and then bin those observations into separate sequences:
runbdries = klab.get_boundaries_from_bins(eeg.samprate,bins=speed.active_bins,bins_fs=60)
binned_spikes_bvr = klab.bin_spikes(spikes.data, fs=spikes.samprate, boundaries=runbdries, boundaries_fs=eeg.samprate, ds=ds)
## stack data for hmmlearn:
seq_stk_bvr = sq.data_stack(binned_spikes_bvr, verbose=True)
seq_stk_all = sq.data_stack(binned_spikes_all, verbose=True)
## split data into train, test, and validation sets:
tr_b,vl_b,ts_b = sq.data_split(seq_stk_bvr, tr=60, vl=20, ts=20, randomseed = 0, verbose=False)
Smax = 40
S = np.arange(start=5,step=1,stop=Smax+1)
tr_ll = []
vl_ll = []
ts_ll = []
for num_states in S:
clear_output(wait=True)
print('Training and evaluating {}-state hmm'.format(num_states))
sys.stdout.flush()
myhmm = sq.hmm_train(tr_b, num_states=num_states, n_iter=30, verbose=False)
tr_ll.append( (np.array(list(sq.hmm_eval(myhmm, tr_b)))/tr_b.sequence_lengths ).mean())
vl_ll.append( (np.array(list(sq.hmm_eval(myhmm, vl_b)))/vl_b.sequence_lengths ).mean())
ts_ll.append( (np.array(list(sq.hmm_eval(myhmm, ts_b)))/ts_b.sequence_lengths ).mean())
clear_output(wait=True)
print('Done!')
sys.stdout.flush()
num_states = 35
fig = plt.figure(1, figsize=(12, 4))
ax = fig.add_subplot(111)
ax.annotate('plateau at approx ' + str(num_states), xy=(num_states, -38.5), xycoords='data',
xytext=(-140, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="angle3,angleA=0,angleB=-90"),
)
ax.plot(S, tr_ll, lw=1.5, label='train')
ax.plot(S, vl_ll, lw=1.5, label='validation')
ax.plot(S, ts_ll, lw=1.5, label='test')
ax.legend(loc=2)
ax.set_xlabel('number of states')
ax.set_ylabel('normalized (to single time bin) log likelihood')
ax.axhspan(-38.5, -37.5, facecolor='0.75', alpha=0.25)
ax.set_xlim([5, S[-1]])
Explanation: Find most appropriate number of states using cross validation
Here we split the data into training, validation, and test sets. We monitor the average log probability per sequence (normalized by length) for each of these sets, and we use the validation set to choose the number of model states $m$.
Note to self: I should re-write my data splitting routines to allow me to extract as many subsets as I want, so that I can do k-fold cross validation.
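A rough sketch of how fold indices for k-fold cross validation could be generated (purely illustrative; wiring the folds into the stacked-sequence objects would still require the re-write mentioned above, and treating binned_spikes_bvr as a per-sequence container is an assumption):
from sklearn.model_selection import KFold
n_sequences = len(binned_spikes_bvr)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
folds = list(kf.split(np.arange(n_sequences)))  # list of (train_idx, test_idx) pairs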
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
vth = ii+1
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, verbose=False)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth), y=1.02)
ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.axis('tight')
Explanation: Remarks: We see that the training error is decreasing (equivalently, the training log probability is increasing) over the entire range of states considered. Indeed, we have computed this for a much larger number of states, and the training error keeps on decreasing, whereas both the validation and test errors reach a plateau at around 30 or 35 states.
As expected, the training set has the largest log probability (best agreement with model), but we might expect the test and validation sets to be about the same. For different subsets of our data this is indeed the case, but the more important thing in model selection is that the validation and test sets should have the same shape or behavior, so that we can choose an appropriate model parameter.
However, if we wanted to predict what our log probability for any given sequence would be, then we probably need a little bit more data, for which the test and validation errors should agree more.
Finally, we have also repeated the above analysis when we restricted ourselves to only using place cells in the model, and although the log probabilities were uniformly increased to around $-7$ or $-8$, the overall shape and characteristic behavior were left unchanged, so that model selection could be done either way.
Place field visualization
Previously we have only considered varying the number of model states for model selection, but of course choosing an appropriate timescale is perhaps just as important. We know, for example, that if our timescale is too short (or fast), then most of the bins will be empty, making it difficult for the model to learn appropriate representations and transitions. On the other hand, if our timescale is too coarse (or long or slow) then we will certainly miss SWR events, and we may even miss some behavioral events as well.
Since theta is around 8 Hz for rodents, it might make sense to consider a timescale of 125 ms or even 62.5 ms for behaviorally relevant events, so that we can hope to capture half or full theta cycles in the observations.
One might also reasonably ask: "even though the log probability has been optimized, how do we know that the learned model makes any sense? That is, that the model is plausible and useful?" One way to try to answer this question is to again consider the place fields that we learn from the data. Place field visualization is considered in more detail in StateClustering.ipynb, but here we simply want to see if we get plausible, behaviorally relevant state representations out when choosing different numbers of states, and different timescales, for example.
Place fields for varying velocity thresholds
We train our models on RUN data, so we might want to know how sensitive our model is to a specific velocity threshold. Using a smaller threshold will include more quiescent data, and using a larger threshold will exclude more data from being used to learn in the model.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
num_states = 5 + ii*5
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states), y=1.02)
ax.axis('tight')
saveFigure('posterfigs/numstates.pdf')
Explanation: Remarks: As can be expected, with low velocity thresholds, we see an overrepresentation of the reward locations, and only a relatively small number of states that are dedicated to encoding the position along the track.
Recall that the track was shortened halfway through the recording session. Here, the reward locations for the longer track (first half of the experiment) and shorter track (second half of the experiment) are shown by the ends of the dashed lines.
We notice that at some point, the movement velocity (for fixed state evolution) appears to be constant, and that at e.g. 8 units/sec we see a clear bifurcation in the place fields, so that states encode both positions before and after the track was shortened.
Place fields for varying number of states
Next, we take a look at how the place fields are affected by changing the number of states in the model.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
ds = (ii+1)*0.03125
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.axis('tight')
Explanation: Remarks: First, we see that independent of the number of states, the model captures the place field like nature of the underlying states very well. Furthermore, the bifurcation of some states to represent both the first and second halves of the experiment becomes clear with as few as 15 states, but interestingly this bifurcation fades as we add more states to the model, since there is enough flexibility to encode those shifting positions by their own states.
Warning: However, in the case where we have many states so that the states are no longer bimodal, the strict linear ordering that we impose (ordering by peak firing location) can easily mask the underlying structural change in the environment.
Place fields for varying timescales
Next we investigate how the place fields are affected by changing the timescale of our observations. First, we consider timescales in the range of 31.25 ms to 375 ms, in increments of 31.25 ms.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 35
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
#state_pos, peakorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth)
fig, axes = plt.subplots(4, 3, figsize=(17, 11))
axes = [item for sublist in axes for item in sublist]
for ii, ax in enumerate(axes):
ds = (ii+1)*0.0625
state_pos, peakorder, stateorder = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True)
ax.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')
#ax.set_xlabel('position bin')
ax.set_ylabel('state')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_title('learned place fields; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=1)
ax.axis('tight')
Explanation: Remarks: We notice that we clearly see the bimodal place fields when the timescales are sufficiently small, with a particularly clear example at 62.5 ms, for example. Larger timescales tend to focus on the longer track piece, with a single trajectory being skewed away towards the shorter track piece.
Next we consider timescales in increments of 62.5 ms.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 25
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
saveFigure('posterfigs/expsplit.pdf')
Explanation: Remarks: Again, we see that with larger timescales, the spatial resolution becomes more coarse, because we don't have sufficiently many observations, and the modes of the place fields tend to lie close to those associated with the longer track.
Splitting the experimment in half
Just as a confirmation of what we've seen so far, we next consider the place fields obtained when we split the experiment into its first and second halves, correponding to when the track was longer, and shorter, respectively.
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 45
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
Explanation: Remarks: We clearly see the bimodal place fields when we use all of the data, and we see the unimodal place fields emerge as we focus on either the first, or the second half of the experiment.
Notice that the reward locations are more concentrated, but that the velocity (with fixed state progression) is roughly constant.
However, if we increase the number of states:
End of explanation
from placefieldviz import hmmplacefieldposviz
num_states = 100
ds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)
vth = 8 # units/sec velocity threshold for place fields
state_pos_b, peakorder_b, stateorder_b = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='both')
state_pos_1, peakorder_1, stateorder_1 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='first')
state_pos_2, peakorder_2, stateorder_2 = hmmplacefieldposviz(num_states=num_states, ds=ds, posdf=posdf, spikes=spikes, speed=speed, vth=vth, normalize=True, experiment='second')
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
Explanation: then we start to see the emergence of the S-shaped place field progressions again, indicating that the reward locations are overexpressed by several different states.
This observation is even more pronounced if we increase the number of states further:
End of explanation
import matplotlib.patches as patches
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[stateorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[stateorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 13], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 7], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.plot([35, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([41, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax2.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
8, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax2.add_patch(
patches.Rectangle(
(41, 0), # (x,y)
11, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.matshow(state_pos_2[stateorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 13], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 7], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.plot([35, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([41, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
ax3.add_patch(
patches.Rectangle(
(-1, 0), # (x,y)
14, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
ax3.add_patch(
patches.Rectangle(
(35, 0), # (x,y)
15, # width
num_states, # height
hatch='/',
facecolor='w',
alpha=0.5
)
)
fig.suptitle('State ordering not by peak location, but by the state transition probability matrix', y=1.08, fontsize=14)
saveFigure('posterfigs/zigzag.pdf')
state_pos_b[state_pos_b < np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 0
state_pos_b[state_pos_b == np.transpose(np.tile(state_pos_b.max(axis=1),[state_pos_b.shape[1],1]))] = 1
state_pos_1[state_pos_1 < np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 0
state_pos_1[state_pos_1 == np.transpose(np.tile(state_pos_1.max(axis=1),[state_pos_1.shape[1],1]))] = 1
state_pos_2[state_pos_2 < np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 0
state_pos_2[state_pos_2 == np.transpose(np.tile(state_pos_2.max(axis=1),[state_pos_2.shape[1],1]))] = 1
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(17, 3))
ax1.matshow(state_pos_b[peakorder_b,:], interpolation='none', cmap='OrRd')
ax1.set_ylabel('state')
ax1.set_xticklabels([])
ax1.set_yticklabels([])
ax1.set_title('learned place fields BOTH; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax1.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax1.axis('tight')
ax2.matshow(state_pos_1[peakorder_1,:], interpolation='none', cmap='OrRd')
ax2.set_ylabel('state')
ax2.set_xticklabels([])
ax2.set_yticklabels([])
ax2.set_title('learned place fields FIRST; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax2.plot([13, 35], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax2.plot([7, 41], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax2.axis('tight')
ax3.matshow(state_pos_2[peakorder_2,:], interpolation='none', cmap='OrRd')
ax3.set_ylabel('state')
ax3.set_xticklabels([])
ax3.set_yticklabels([])
ax3.set_title('learned place fields SECOND; RUN > ' + str(vth) + '; m = ' + str(num_states) + '; ds = ' + str(ds), y=1.02)
ax3.plot([13, 35], [0, num_states], color='k', linestyle='dashed', linewidth=2)
ax3.plot([7, 41], [0, num_states], color='gray', linestyle='dashed', linewidth=1)
ax3.axis('tight')
Explanation: With enough expressiveness in the number of states, we see the S-shaped curve reappear, which suggests an overexpression of the reward locations, which is consistent with what we see with place cells in animals.
End of explanation |
4,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 1
Imports
Step2: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0
Step3: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step4: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 1
Imports
End of explanation
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array."""
board = np.ones((size,size), dtype=float)
for i in range(size):
if i%2==0:
board[i,1:size:2]=0
else:
board[i,0:size:2]=0
va.enable()
return board
checkerboard(10)
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
End of explanation
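# An alternative, fully vectorized construction (an aside, not required by the exercise):
# the 1.0 squares are exactly where row index + column index is even.
def checkerboard_vectorized(size):
    """Same checkerboard as above, built without an explicit Python loop."""
    ii, jj = np.indices((size, size))
    return ((ii + jj + 1) % 2).astype(float)
assert np.allclose(checkerboard_vectorized(5), checkerboard(5))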
va.set_block_size(10)
checkerboard(20)
assert True
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
va.set_block_size(5)
checkerboard(27)
assert True
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation |
4,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Linear Algebra Review
From xkcd
Step2: Linear Algebra and Linear Systems
A lot of problems in statistical computing can be described mathematically using linear algebra. This lecture is meant to serve as a review of concepts you have covered in linear algebra courses, so that we may discuss some important matrix decompositions used in statistical analyses.
Motivation - Simultaneous Equations
Consider a set of $m$ linear equations in $n$ unknowns
Step3: Note that in the standard basis, the coordinates of $e_1$ are $(1,0)$. This is because
Step4: Important Facts
Step5: Let's see what the new matrix looks like
Step6: What does all this have to do with linear systems?
Linear Independence
Step7: Inner Products
Inner products are closely related to norms and distance. The (standard) inner product (or dot product) of two $n$ dimensional vectors $v$ and $w$ is given by
Step8: There is a more abstract formulation of an inner product, that is useful when considering more general vector spaces, especially function vector spaces
Step9: Extended example
Step10: Trace and Determinant of Matrices
The trace of a matrix $A$ is the sum of its diagonal elements. It is important for a couple of reasons
Step11: Column space, Row space, Rank and Kernel
Let $A$ be an $m\times n$ matrix. We can view the columns of $A$ as vectors, say $\textbf{a_1},...,\textbf{a_n}$. The space of all linear combinations of the $\textbf{a_i}$ are the column space of the matrix $A$. Now, if $\textbf{a_1},...,\textbf{a_n}$ are linearly independent, then the column space is of dimension $n$. Otherwise, the dimension of the column space is the size of the maximal set of linearly independent $\textbf{a_i}$. Row space is exactly analogous, but the vectors are the rows of $A$.
The rank of a matrix A is the dimension of its column space - and - the dimension of its row space. These are equal for any matrix. Rank can be thought of as a measure of non-degeneracy of a system of linear equations, in that it is the dimension of the image of the linear transformation determined by $A$.
The kernel of a matrix A is the dimension of the space mapped to zero under the linear transformation that $A$ represents. The dimension of the kernel of a linear transformation is called the nullity.
Index theorem | Python Code:
import os
import sys
import glob
import matplotlib.pyplot as plt
import matplotlib.patches as patch
import numpy as np
import pandas as pd
%matplotlib inline
%precision 4
plt.style.use('ggplot')
from scipy import linalg
np.set_printoptions(suppress=True)
# Students may (probably should) ignore this code. It is just here to make pretty arrows.
def plot_vectors(vs):
    """Plot vectors in vs assuming origin at (0,0)."""
n = len(vs)
X, Y = np.zeros((n, 2))
U, V = np.vstack(vs).T
plt.quiver(X, Y, U, V, range(n), angles='xy', scale_units='xy', scale=1)
xmin, xmax = np.min([U, X]), np.max([U, X])
ymin, ymax = np.min([V, Y]), np.max([V, Y])
xrng = xmax - xmin
yrng = ymax - ymin
xmin -= 0.05*xrng
xmax += 0.05*xrng
ymin -= 0.05*yrng
ymax += 0.05*yrng
plt.axis([xmin, xmax, ymin, ymax])
Explanation: Linear Algebra Review
From xkcd:
End of explanation
# Again, this code is not intended as a coding example.
a1 = np.array([3,0]) # axis
a2 = np.array([0,3])
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([a1, a2])
v1 = np.array([2,3])
plot_vectors([a1,v1])
plt.text(2,3,"(2,3)",fontsize=16)
plt.tight_layout()
Explanation: Linear Algebra and Linear Systems
A lot of problems in statistical computing can be described mathematically using linear algebra. This lecture is meant to serve as a review of concepts you have covered in linear algebra courses, so that we may discuss some important matrix decompositions used in statistical analyses.
Motivation - Simultaneous Equations
Consider a set of $m$ linear equations in $n$ unknowns:
\begin{align}
a_{11} x_1 + &a_{12} x_2& +& ... + &a_{1n} x_n &=& b_1\\
\vdots && &&\vdots &= &\vdots\\
a_{m1} x_1 + &a_{m2} x_2& +& ... + &a_{mn} x_n &=&b_m
\end{align}
We can let:
\begin{align}
A =
\begin{bmatrix}
a_{11}&\cdots&a_{1n}\\
\vdots & &\vdots\\
a_{m1}&\cdots&a_{mn}
\end{bmatrix}, & &
x =
\begin{bmatrix}
x_1\\
\vdots\\
x_n
\end{bmatrix} & \;\;\;\;\textrm{ and } &
b = \begin{bmatrix}b_1\\
\vdots\\
b_m\end{bmatrix}
\end{align}
And re-write the system:
$$
Ax = b
$$
This reduces the problem to a matrix equation, and now solving the system amounts to finding $A^{-1}$ (or sort of). Certain properties of the matrix $A$ yield important information about the linear system.
Most students in elementary linear algebra courses learn to use Gaussian elimination to solve systems such as the one above. To understand more advanced techniques and matrix decompositions (more on those later), we'll need to recall some mathematical concepts.
Vector Spaces
Technically, a vector space is a field of coefficients $\mathbb{F}$, together with a commutative group (over addition) $V$ such that
If $c\in \mathbb{F}$ and $v\in V$, then $cv\in V$
If $v_1,v_2\in V$ and $c\in \mathbb{F}$ then
$c(v_1+v_2) = c v_1 + c v_2$
If $c_1,c_2\in \mathbb{F}$ and $v\in V$, then
$(c_1+c_2)v = c_1v + c_2v$
If $c_1,c_2\in \mathbb{F}$ and $v\in V$, then
$(c_1c_2)v = c_1(c_2v)$
If $1$ is the multiplicative identity in $\mathbb{F}$, then
$1\cdot v = v$
That may not seem to be particularly useful for the purposes of this course, and for many of our purposes we can simplify this a bit. We are mostly interested in finite dimensional 'real' vector spaces. So our vectors will be elements of $\mathbb{R}^n$, i.e. points in $n$ dimensional space. The 'coefficents' are also real numbers. This leads to the idea that vectors are simply $n$-tuples of numbers. This is a nice, concrete way of seeing things, but it is a little oversimplified. It obscures a bit the need for a basis, and what 'coordinates' actually are. It also doesn't help much when we want to consider vector spaces of things that are not numbers, such as functions (yes - we can do that!! and it is helpful even in statistics)
Therefore, I hope you will indulge me and first think of the 'vectors' (usually denoted $u,v,w,x,y$) and their 'coefficients' (usually denoted $a,b,c$) as fundamentally different objects.
Conceptually: Think of vectors as linear combinations of Things(Tm). Think of the $v's$ as objects of some sort (functions, apples, cookies) and the $c's$ as numbers (real, complex, quaternions...)
Linear Independence and Basis
A collection of vectors $v_1,...,v_n$ is said to be linearly independent if
$$c_1v_1 + \cdots c_nv_n = 0$$
$$\iff$$
$$c_1=\cdots=c_n=0$$
In other words, any linear combination of the vectors that results in a zero vector is trivial.
Another interpretation of this is that no vector in the set may be expressed as a linear combination of the others. In this sense, linear independence is an expression of non-redundancy in a set of vectors.
Fact: Any linearly independent set of $n$ vectors spans an $n$-dimensional space. (I.e. the collection of all possible linear combinations is $V$ - this is actually the definition of dimension) Such a set of vectors is said to be a basis of $V$. Another term for basis is minimal spanning set.
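A quick numerical check of linear (in)dependence (a small sketch; the three vectors are chosen here purely for illustration): stack the vectors as rows and compare the rank of the resulting matrix to the number of vectors.
vecs = np.array([[1, 0, 2],
                 [0, 1, 1],
                 [1, 1, 3]])        # third row = first + second, so the set is dependent
print(np.linalg.matrix_rank(vecs))  # 2, which is less than 3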
Example
We can consider the vector space of polynomials of degree $\leq 2$ over $\mathbb{R}$. A basis for this space is
$$\left\{1,x,x^2\right\}$$
Any vector may be written
$$c_1\cdot 1 + c_2x + c_3 x^2 = c_1 + c_2 x +c_ 3 x^2$$
where $c_1,c_2,c_3\in \mathbb{R}$
Coordinates
When we have a set of basis vectors $\left\{v_1,...,v_n\right\}$ for a vector space, as we have said, any vector may be represented as:
$$c_1v_1+...+c_nv_n$$
The $c_i's$ are called coordinates. For example, in the space of $2^{nd}$ degree polynomials, the vector:
$$2 x +\pi x^2$$
has coordinates $(0,2,\pi)$.
You probably think of coordinates in terms of the coordinate plane, and equate the coordinates with the $n$-tuples that label the points. This is all true - but skips a step. Now that we have separated our basis vectors from their coordinates, let's see how this applies in the case of the real vector spaces you are accustomed to.
The coordinates of the pictured vector (below) are $(2,3)$. But what does that mean? It means we have assumed the standard basis, $\left\{e_1,e_2\right\}$, and the vector $(2,3)$ really means:
$$2e_1 + 3e_2$$
where $e_1$ is a unit vector (length = 1) on the horizontal axis and $e_2$ is a unit vector along the vertical axis. This is a choice of coordinates. We could equally well choose the basis $\left\{v,e_2\right\}$ where $v$ is any vector that is linearly independent of $e_2$. Then all vectors would be considered of the form:
$$c_1 v + c_2 e_2$$.
End of explanation
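To make the choice of coordinates concrete in code (a small sketch; the vector used for the non-standard basis is an arbitrary choice): the coordinates of a point in a basis are found by solving the linear system whose columns are the basis vectors.
v_basis = np.array([1.0, 1.0])            # any vector independent of e2
e2_basis = np.array([0.0, 1.0])
P = np.column_stack([v_basis, e2_basis])  # basis vectors as columns
point = np.array([2.0, 3.0])              # (2,3) in the standard basis
coords = np.linalg.solve(P, point)
print(coords)                             # [2. 1.], i.e. point = 2*v_basis + 1*e2_basis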
a1 = np.array([7,0]) # axis
a2 = np.array([0,5])
A = np.array([[2,1],[1,1]]) # transformation f in standard basis
v2 =np.dot(A,v1)
plt.figure(figsize=(8,8))
plot_vectors([a1, a2])
v1 = np.array([2,3])
plot_vectors([v1,v2])
plt.text(2,3,"v1 =(2,3)",fontsize=16)
plt.text(6,5,"Av1 = ", fontsize=16)
plt.text(v2[0],v2[1],"(7,5)",fontsize=16)
print(v2[1])
Explanation: Note that in the standard basis, the coordinates of $e_1$ are $(1,0)$. This is because:
$$e_1 = 1\cdot e_1 + 0\cdot e_2$$
Similarly, the coordinates of $e_2$ are $(0,1)$ because
$$e_2 = 0\cdot e_1 + 1\cdot e_2$$
In the basis $\left\{v,e_1\right\}$, the coordinates of $e_1$ are $(0,1)$, because
$$e_1 = 0\cdot v + 1\cdot e_1$$
and the coordinates of $v$ are $(1,0)$.
We'll need these concepts in a moment when we talk about change of basis.
Matrices and Linear Transformations
So we have this vector space and it consists of linear combinations of vectors. It's not terribly interesting just sitting there. So let's do something with it.
This is mathematics, and once mathematicians have objects collected into some set or 'space', we like to send them to other spaces, or back into the space itself, but changing one object into another. This is called a 'transformation'.
Let's suppose we have two vector spaces, $V$ and $W$. We'd like to define a transformation - but there is a catch. We want our transformation to act on all the vectors. Let's suppose $V=W=\mathbb{R}^2$. That seems simple enough. But there are still infinitely many vectors. Defining a transformation sounds laborious.
Ah, but we are clever. We have defined our space in such a way that for certain transformations, we need only define our transformation on a finite set (in the case of finite dimensional vector spaces).
Linear Transformations
A linear transformation $f:V\rightarrow W$ is a map from $V$ to $W$ such that
$$f(c_1 v_1+c_2v_2) = c_1f(v_1)+c_2f(v_2)$$
Now, recall that a basis essentially generates the entire vector space via linear combinations. So, once we define a linear transformation $f$ on a basis, we have it for the whole space.
Matrices, Transformations and Geometric Interpretation
Thinking back to real vector spaces, what does a matrix do to a vector? Matrix multiplication has a geometric interpretation. When we multiply a vector, we either rotate, reflect, dilate or some combination of those three. So multiplying by a matrix transforms one vector into another vector. These are linear transformations.
See the cell below for an example of a vector ($v_1 = (2,3)$) transformed by a matrix
$$A = \left(\begin{matrix}2 & 1\\1&1\end{matrix}\right)$$
so that
$$v_2 = Av_1$$
End of explanation
e1 = np.array([1,0])
e2 = np.array([0,1])
B = np.array([[1,4],[3,1]])
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([e1, e2])
plt.subplot(1,2,2)
plot_vectors([B.dot(e1), B.dot(e2)])
plt.Circle((0,0),2)
#plt.show()
#plt.tight_layout()
Explanation: Important Facts:
Any matrix defines a linear transformation
Every linear transformation may be represented by a matrix. This form is NOT unique (it depends on the chosen basis - more on that in a moment)
We need only define a transformation by saying what it does to a basis
Suppose we have a matrix $A$ that defines some transformation. We can take any invertible matrix $B$ and
$$B^{-1}AB$$
defines the same transformation. This operation is called a change of basis, because we are simply expressing the transformation with respect to a different basis.
This is an important concept in matrix decompositions.
Example - Find a Matrix Representation of a Linear Transformation
Note that we say find 'a' matrix representation - not 'the' matrix representation. That is because the matrix representation is dependent on the choice of basis. Just to motivate you as to why this is important, recall our linear system:
$$Ax=b$$
Some forms of $A$ are much simpler to invert. For example, suppose $A$ is diagonal. Then we can solve each equation easily:
$$Ax =b \iff \left(\begin{matrix}d_1 & 0& \cdots & 0\\0 & d_2 & \cdots & 0\\ \vdots & & &\vdots\\ 0 &0&\cdots &d_n
\end{matrix}\right)
\left(\begin{matrix}x_1\\ \vdots\\x_n\end{matrix}\right)= \left(\begin{matrix}b_1\\ \vdots\\b_n\end{matrix}\right) \iff x_1 = \frac{b_1}{d_1},...,x_n=\frac{b_n}{d_n}$$
So, if we could find a basis in which the transformation defined by $A$ is diagonal, our system is very easily solved. Of course, this is not always possible - but we can often simplify our system via change of basis so that the resulting system is easier to solve. (These are 'matrix decomposition methods', and we will talk about them in detail, once we have the tools to do so).
Now, let $f(x)$ be the linear transformation that takes $e_1=(1,0)$ to $f(e_1)=(2,3)$ and $e_2=(0,1)$ to $f(e_2) = (1,1)$. A matrix representation of $f$ would be given by:
$$A = \left(\begin{matrix}2 & 1\\3&1\end{matrix}\right)$$
This is the matrix we use if we consider the vectors of $\mathbb{R}^2$ to be linear combinations of the form
$$c_1 e_1 + c_2 e_2$$
Example - Change to a Different Basis
Now, consider a second pair of (linearly independent) vectors in $\mathbb{R}^2$, say $v_1$ and $v_2$, and suppose that the coordinates of $v_1$ in the basis $e_1,e_2$ are $(1,3)$ and that the coordinates of $v_2$ in the basis $e_1,e_2$ are $(4,1)$. We first find the transformation that takes $e_1$ to $v_1$ and $e_2$ to $v_2$. A matrix representation for this (in the $e_1, e_2$ basis) is:
$$B = \left(\begin{matrix}1 & 4\\3&1\end{matrix}\right)$$
Our original transformation $f$ can be expressed with respect to the basis $v_1, v_2$ via
$$BAB^{-1}$$
Here is what the new basis looks like:
End of explanation
A = np.array([[2,1],[3,1]]) # transformation f in standard basis
e1 = np.array([1,0]) # standard basis vectors e1,e2
e2 = np.array([0,1])
print(A.dot(e1)) # demonstrate that Ae1 is (2,3)
print(A.dot(e2)) # demonstrate that Ae2 is (1,1)
# new basis vectors
v1 = np.array([1,3])
v2 = np.array([4,1])
# How v1 and v2 are transformed by A
print("Av1: ")
print(A.dot(v1))
print("Av2: ")
print(A.dot(v2))
# Change of basis from standard to v1,v2
B = np.array([[1,4],[3,1]])
print(B)
B_inv = linalg.inv(B)
print("B B_inv ")
print(B.dot(B_inv)) # check inverse
# Matrix of the transformation with respect to the new basis
T = B.dot(A.dot(B_inv)) # B A B^{-1}
print(T)
print(B_inv)
np.dot(B_inv,(T.dot(e1)))
Explanation: Let's see what the new matrix looks like:
End of explanation
# norm of a vector
# Note: The numpy linalg package is imported at the top of this notebook
v = np.array([1,2])
linalg.norm(v)
# distance between two vectors
w = np.array([1,1])
linalg.norm(v-w)
Explanation: What does all this have to do with linear systems?
Linear Independence:
If $A$ is an $m\times n$ matrix and $m>n$, if all $m$ rows are linearly independent, then the system is overdetermined and inconsistent. The system cannot be solved exactly. This is the usual case in data analysis, and why least squares is so important. For example, we may be finding the parameters of a linear model, where there are $m$ data points and $n$ parameters.
If $A$ is an $m\times n$ matrix and $m<n$, if all $m$ rows are linearly independent, then the system is underdetermined and there are infinite solutions.
If $A$ is an $m\times n$ matrix and some of its rows are linearly dependent, then the system is reducible. We can get rid of some equations. In other words, there are equations in the system that do not give us any new information.
If $A$ is a square matrix and its rows are linearly independent, the system has a unique solution. ($A$ is invertible.) This is a lovely case that happens mostly in the realm of pure mathematics and pretty much never in practice.
Change of Basis
We can often transform a linear system into a simpler form, simply via a change of basis.
More Properties of Vectors, Vector Spaces and Matrices
Linear algebra has a whole lot more to tell us about linear systems, so we'll review a few basics.
Norms and Distance of Vectors
You probably learned that the 'norm' of a vector $v \in \mathbb{R}^n$, denoted $||v||$ is simply its length. For a vector with components
$$v = \left(v_1,...,v_n\right)$$
the norm of $v$ is given by:
$$||v|| = \sqrt{v_1^2+...+v_n^2}$$
This natural definition of a norm comes from the distance formula. Recall that for two points $(x_1,y_1),(x_0,y_0)$ in the plane, the distance between them is given by:
$$D = \sqrt{(x_1-x_0)^2+(y_1-y_0)^2}$$
The length of a vector in $\mathbb{R}^n$ is the distance from the origin, so
$$||v|| = \sqrt{(v_1 -0 )^2 +...+(v_n-0)^2} = \sqrt{v_1^2+...+v_n^2}$$
The distance between two vectors is the length of their difference:
$$d(v,w) = ||v-w||$$
Examples
End of explanation
e1 = np.array([1,0])
e2 = np.array([0,1])
A = np.array([[2,3],[3,1]])
v1=A.dot(e1)
v2=A.dot(e2)
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([e1, e2])
plt.subplot(1,2,2)
plot_vectors([v1,v2])
plt.tight_layout()
#help(plt.Circle)
plt.Circle(np.array([0,0]),radius=1)
plt.Circle.draw
Explanation: Inner Products
Inner products are closely related to norms and distance. The (standard) inner product (or dot product) of two $n$ dimensional vectors $v$ and $w$ is given by:
$$<v,w> = v_1w_1+...+v_nw_n$$
I.e. the inner product is just the sum of the product of the components. Certain 'special' matrices also define inner products, and we will see some of those later.
The standard inner product is related to the standard norm via:
$$||v|| = <v,v>^{\frac12}$$
Geometric Interpretation:
The inner product of two vectors is proportional to the cosine of the angle between them. In fact:
$$<v,w> = ||v|| \cdot ||w|| \cos(\theta)$$
where $\theta$ is the angle between $v$ and $w$.
End of explanation
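A quick numerical illustration of the angle formula, reusing the v and w defined a few cells above (just a sketch connecting the formula to code):
cos_theta = v.dot(w) / (linalg.norm(v) * linalg.norm(w))
print(np.degrees(np.arccos(cos_theta)))   # angle between v = (1,2) and w = (1,1), roughly 18.4 degrees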
np.outer(v,w)
Explanation: There is a more abstract formulation of an inner product, that is useful when considering more general vector spaces, especially function vector spaces:
General Inner Product
We'll state the definition for vector spaces over $\mathbb{R}$, but note that all may be extended for any field of coefficients.
An inner product on a vector space $V$ is a symmetric, positive definite, bilinear form. This means an inner product is any map $<,>_A$ (the A is just to make the point that this is different from the standard inner product).
$$<,>_A: V\times V:\rightarrow \mathbb{R}$$
with the following properties:
Symmetric: For any $v_1,v_2\in V\times V$,
$$<v_1,v_2>_A = <v_2,v_1>_A$$
Positive Definite: For any $v\in V$,
$$<v,v>_A \geq 0$$
with equality only when $v=0$ (note that $0$ means the zero vector).
Bilinear: For any $c_1,c_2\in\mathbb{R}$ and $v_1,v_2,v\in V$,
$$<c_1v_1+c_2v_2,v>_A = c_1<v_1,v>_A + c_2<v_2,v>_A$$
Note that symmetry gives that this is true for the second component. This means that the inner product is linear in each of its two components.
Important: Any inner product defines a norm via
$$||v|| = <v,v>^{\frac12}$$
We will discuss this a bit more when we learn about positive-definite matrices!
General Norms
There is also a more abstract definition of a norm - a norm is function from a vector space to the real numbers, that is positive definite, absolutely scalable and satisfies the triangle inequality.
We'll mostly be dealing with norms that come from inner products, but it is good to note that not all norms must come from an inner product.
Outer Products
Note that the inner product is just matrix multiplication of a $1\times n$ vector with an $n\times 1$ vector. In fact, we may write:
$$<v,w> = v^tw$$
The outer product of two vectors is just the opposite. It is given by:
$$v\otimes w = vw^t$$
Note that I am considering $v$ and $w$ as column vectors. The result of the inner product is a scalar. The result of the outer product is a matrix.
Example
End of explanation
# We have n observations of p variables
n, p = 10, 4
v = np.random.random((p,n))
# The covariance matrix is a p by p matrix
np.cov(v)
# From the definition, the covariance matrix
# is just the outer product of the normalized
# matrix where every variable has zero mean
# divided by the number of degrees of freedom
w = v - v.mean(1)[:, np.newaxis]
w.dot(w.T)/(n - 1)
Explanation: Extended example: the covariance matrix is an outer product.
End of explanation
n = 6
M = np.random.randint(100,size=(n,n))
print(M)
np.linalg.det(M)
Explanation: Trace and Determinant of Matrices
The trace of a matrix $A$ is the sum of its diagonal elements. It is important for a couple of reasons:
It is an invariant of a matrix under change of basis (more on this later).
It defines a matrix norm (more on that later)
The determinant of a matrix is defined to be the alternating sum of the product of permutations of the elements of a matrix.
$$\det(A) = \sum_{\sigma \in S_n} sgn(\sigma) \prod_{i=1}^n a_{i,\sigma_i}$$
Let's not dwell on that though. It is important to know that the determinant of a $2\times 2$ matrix is
$$\left|\begin{matrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{matrix}\right| = a_{11}a_{22} - a_{12}a_{21}$$
This may be extended to an $n\times n$ matrix by minor expansion. I will leave that for you to google. We will be computing determinants using tools such as:
np.linalg.det(A)
What is most important about the determinant:
Like the trace, it is also invariant under change of basis
An $n\times n$ matrix $A$ is invertible $\iff$ det$(A)\neq 0$
The rows(columns) of an $n\times n$ matrix $A$ are linearly independent $\iff$ det$(A)\neq 0$
End of explanation
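A small sketch of the invariance claims above, using the same A and B as in the change-of-basis example earlier (values re-stated so the snippet stands alone):
A_cb = np.array([[2., 1.], [3., 1.]])
B_cb = np.array([[1., 4.], [3., 1.]])
T_cb = B_cb.dot(A_cb).dot(np.linalg.inv(B_cb))        # same transformation in another basis
print(np.trace(A_cb), np.trace(T_cb))                 # traces agree
print(np.linalg.det(A_cb), np.linalg.det(T_cb))       # determinants agree (up to rounding)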
A = np.array([[1,2,-1,1,2],[3,-4,0,2,3],[0,2,1,0,4],[2,2,-3,2,0],[-2,6,-1,-1,-1]])
np.linalg.matrix_rank(A)
np.linalg.det(A)
Explanation: Column space, Row space, Rank and Kernel
Let $A$ be an $m\times n$ matrix. We can view the columns of $A$ as vectors, say $\textbf{a_1},...,\textbf{a_n}$. The space of all linear combinations of the $\textbf{a_i}$ are the column space of the matrix $A$. Now, if $\textbf{a_1},...,\textbf{a_n}$ are linearly independent, then the column space is of dimension $n$. Otherwise, the dimension of the column space is the size of the maximal set of linearly independent $\textbf{a_i}$. Row space is exactly analogous, but the vectors are the rows of $A$.
The rank of a matrix A is the dimension of its column space - and - the dimension of its row space. These are equal for any matrix. Rank can be thought of as a measure of non-degeneracy of a system of linear equations, in that it is the dimension of the image of the linear transformation determined by $A$.
The kernel of a matrix A is the dimension of the space mapped to zero under the linear transformation that $A$ represents. The dimension of the kernel of a linear transformation is called the nullity.
Index theorem: For an $m\times n$ matrix $A$,
rank($A$) + nullity($A$) = $n$.
Matrix Norms
We can extend the notion of a norm of a vector to a norm of a matrix. Matrix norms are used in determining the condition of a matrix (we will define this in the next lecture.) There are many matrix norms, but three of the most common are so called 'p' norms, and they are based on p-norms of vectors. So, for an $n$-dimensional vector $v$ and for $1\leq p <\infty$
$$||v||_p = \left(\sum\limits_{i=1}^n |v_i|^p\right)^{\frac{1}{p}}$$
and for $p =\infty$:
$$||v||_\infty = \max{|v_i|}$$
Similarly, the corresponding matrix norms are:
$$||A||_p = \sup_x \frac{||Ax||_p}{||x||_p}$$
$$||A||_{1} = \max_j\left(\sum\limits_{i=1}^n|a_{ij}|\right)$$
(column sum)
$$||A||_{\infty} = \max_i\left(\sum\limits_{j=1}^n|a_{ij}|\right)$$
(row sum)
FACT: The matrix 2-norm, $||A||_2$ is given by the largest eigenvalue of $\left(A^TA\right)^\frac12$ - otherwise known as the largest singular value of $A$. We will define eigenvalues and singular values formally in the next lecture.
Another norm that is often used is called the Frobenius norm. It is one of the simplest to compute:
$$||A||_F = \left(\sum_i\sum_j \left(a_{ij}\right)^2\right)^{\frac{1}{2}}$$
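All of these matrix norms are available directly through np.linalg.norm (a short sketch with an arbitrary matrix):
M_norm = np.array([[1., -2.], [3., 4.]])
print(np.linalg.norm(M_norm, 1))        # max column sum = 6
print(np.linalg.norm(M_norm, np.inf))   # max row sum = 7
print(np.linalg.norm(M_norm, 2))        # largest singular value
print(np.linalg.norm(M_norm, 'fro'))    # Frobenius norm = sqrt(1 + 4 + 9 + 16)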
Special Matrices
Some matrices have interesting properties that allow us either simplify the underlying linear system or to understand more about it.
Square Matrices
Square matrices have the same number of columns (usually denoted $n$). We refer to an arbitrary square matrix as and $n\times n$ or we refer to it as a 'square matrix of dimension $n$'. If an $n\times n$ matrix $A$ has full rank (i.e. it has rank $n$), then $A$ is invertible, and its inverse is unique. This is a situation that leads to a unique solution to a linear system.
Diagonal Matrices
A diagonal matrix is a matrix with all entries off the diagonal equal to zero. Strictly speaking, such a matrix should be square, but we can also consider rectangular matrices of size $m\times n$ to be diagonal, if all entries $a_{ij}$ are zero for $i\neq j$
Symmetric and Skew Symmetric
A matrix $A$ is (skew) symmetric iff $a_{ij} = (-)a_{ji}$.
Equivalently, $A$ is (skew) symmetric iff
$$A = (-)A^T$$
Upper and Lower Triangular
A matrix $A$ is (upper|lower) triangular if $a_{ij} = 0$ for all $i (>|<) j$
Banded and Sparse Matrices
These are matrices with lots of zero entries. Banded matrices have non-zero 'bands', and this structure can be exploited to simplify computations. Sparse matrices are matrices where there are 'few' non-zero entries, but there is no pattern to where non-zero entries are found.
Orthogonal and Orthonormal
A matrix $A$ is orthogonal iff
$$A A^T = I$$
In other words, $A$ is orthogonal iff
$$A^T=A^{-1}$$
Facts:
The rows and columns of an orthogonal matrix are an orthonormal set of vectors.
Geometrically speaking, orthogonal transformations preserve lengths and angles between vectors
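For instance, a 2-D rotation matrix is orthogonal (a sketch; the angle is arbitrary):
angle = np.pi / 6
Q = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
print(np.allclose(Q.dot(Q.T), np.eye(2)))            # True: Q Q^T = I
u = np.array([3., 4.])
print(np.linalg.norm(u), np.linalg.norm(Q.dot(u)))   # both 5.0: lengths are preserved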
Positive Definite
Positive definite matrices are an important class of matrices with very desirable properties. A square matrix $A$ is positive definite if
$$u^TA u > 0$$
for any non-zero n-dimensional vector $u$.
A symmetric, positive-definite matrix $A$ is a positive-definite matrix such that
$$A = A^T$$
IMPORTANT:
Symmetric, positive-definite matrices have 'square-roots' (in a sense)
Any symmetric, positive-definite matrix is diagonizable!!!
Co-variance matrices are symmetric and positive-definite
Now that we have the basics down, we can move on to numerical methods for solving systems - aka matrix decompositions.
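A practical check of positive definiteness (a sketch): a symmetric matrix is positive definite iff all of its eigenvalues are positive, and np.linalg.cholesky succeeds only for positive-definite input.
S = np.array([[2., 1.], [1., 2.]])
print(np.linalg.eigvalsh(S))           # [1. 3.]: all positive, so S is positive definite
L = np.linalg.cholesky(S)              # raises LinAlgError for non positive-definite matrices
print(np.allclose(L.dot(L.T), S))      # the 'square root' property mentioned above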
<font color=red>Exercises</font>
1. Determine whether the following system of equations has no solution, infinite solutions or a unique solution without solving the system
$$\begin{eqnarray}
x+2y-z+w &=& 2\\
3x-4y+2w &=& 3\\
2y+z &=& 4\\
2x+2y-3z+2w&=&0\\
-2x+6y-z-w&=&-1
\end{eqnarray}$$
End of explanation |
4,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Static & Transient DataFrames in PyNastran
The iPython notebook for this demo can be found in
Step1: Solid Bending
Let's show off combine=True/False. We'll talk about the keys soon.
Step2: Single Subcase Buckling Example
The keys cannot be "combined" despite us telling the program that it was OK.
We'll get the following values that we need to handle.
isubcase, analysis_code, sort_method, count, subtitle
isubcase -> the same key that you're used to accessing
sort_method -> 1 (SORT1), 2 (SORT2)
count -> the optimization count
subtitle -> the analysis subtitle (changes for superelements)
analysis code -> the "type" of solution
### Partial code for calculating analysis code
Step3: Keys
Step4: Static Table
Step5: Transient Table | Python Code:
import os
import pandas as pd
import pyNastran
from pyNastran.op2.op2 import read_op2
pkg_path = pyNastran.__path__[0]
model_path = os.path.join(pkg_path, '..', 'models')
Explanation: Static & Transient DataFrames in PyNastran
The iPython notebook for this demo can be found in:
- docs\quick_start\demo\op2_pandas_multi_case.ipynb
- https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_pandas_multi_case.ipynb
End of explanation
solid_bending_op2 = os.path.join(model_path, 'solid_bending', 'solid_bending.op2')
solid_bending = read_op2(solid_bending_op2, combine=False, debug=False)
print(solid_bending.displacements.keys())
solid_bending_op2 = os.path.join(model_path, 'solid_bending', 'solid_bending.op2')
solid_bending2 = read_op2(solid_bending_op2, combine=True, debug=False)
print(solid_bending2.displacements.keys())
Explanation: Solid Bending
Let's show off combine=True/False. We'll talk about the keys soon.
End of explanation
op2_filename = os.path.join(model_path, 'sol_101_elements', 'buckling_solid_shell_bar.op2')
model = read_op2(op2_filename, combine=True, debug=False, build_dataframe=True)
stress_keys = model.cquad4_stress.keys()
print (stress_keys)
# isubcase, analysis_code, sort_method, count, subtitle
key0 = (1, 1, 1, 0, 'DEFAULT1')
key1 = (1, 8, 1, 0, 'DEFAULT1')
Explanation: Single Subcase Buckling Example
The keys cannot be "combined" despite us telling the program that it was OK.
We'll get the following values that we need to handle.
isubcase, analysis_code, sort_method, count, subtitle
isubcase -> the same key that you're used to accessing
sort_method -> 1 (SORT1), 2 (SORT2)
count -> the optimization count
subtitle -> the analysis subtitle (changes for superelements)
analysis code -> the "type" of solution
### Partial code for calculating analysis code:
if trans_word == 'LOAD STEP': # nonlinear statics
analysis_code = 10
elif trans_word in ['TIME', 'TIME STEP']: # TODO check name
analysis_code = 6
elif trans_word == 'EIGENVALUE': # normal modes
analysis_code = 2
elif trans_word == 'FREQ': # TODO check name
analysis_code = 5
elif trans_word == 'FREQUENCY':
analysis_code = 5
elif trans_word == 'COMPLEX EIGENVALUE':
analysis_code = 9
else:
raise NotImplementedError('transient_word=%r is not supported...' % trans_word)
Let's look at an odd case:
You can do buckling as one subcase or two subcases (makes parsing it a lot easier!).
However, you have to do this once you start messing around with superelements or multi-step optimization.
For optimization, sometimes Nastran will downselect elements and do an optimization on that and print out a subset of the elements.
At the end, it will rerun an analysis to double check the constraints are satisfied.
It does not always do multi-step optimization.
End of explanation
stress_static = model.cquad4_stress[key0].data_frame
stress_transient = model.cquad4_stress[key1].data_frame
# The final calculated factor:
# Is it a None or not?
# This defines if it's static or transient
print('stress_static.nonlinear_factor = %s' % model.cquad4_stress[key0].nonlinear_factor)
print('stress_transient.nonlinear_factor = %s' % model.cquad4_stress[key1].nonlinear_factor)
print('data_names = %s' % model.cquad4_stress[key1].data_names)
print('loadsteps = %s' % model.cquad4_stress[key1].lsdvmns)
print('eigenvalues = %s' % model.cquad4_stress[key1].eigrs)
Explanation: Keys:
* key0 is the "static" key
* key1 is the "buckling" key
Similarly:
* Transient solutions can have preload
* Frequency solutions can have loadsets (???)
Moving onto the data frames
The static case is the initial deflection state
The buckling case is "transient", where the modes (called load steps or lsdvmn here) represent the "times"
pyNastran reads these tables differently and handles them differently internally. They look very similar though.
End of explanation
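A small sketch (not part of the original demo) that tells static results from transient ones using only the nonlinear_factor attribute shown above:
for key in model.cquad4_stress.keys():
    case = model.cquad4_stress[key]
    kind = 'transient' if case.nonlinear_factor is not None else 'static'
    print(key, '->', kind)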
# Sets default precision of real numbers for pandas output\n"
pd.set_option('precision', 2)
stress_static.head(20)
Explanation: Static Table
End of explanation
# Sets default precision of real numbers for pandas output\n"
pd.set_option('precision', 3)
#import numpy as np
#np.set_printoptions(formatter={'all':lambda x: '%g'})
stress_transient.head(20)
Explanation: Transient Table
End of explanation |
4,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Behavior of the median filter with noised sine waves
DW 2015.11.12
Step1: 1. Create all needed arrays and data.
Step2: Figure 1. Behavior of the median filter with given window length and different S/N ratio.
Step3: Figure 1.1 Behavior of the median filter with given window length and different S/N ratio.
Step4: Figure 2
Step5: Figure 2.1 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import medfilt
import gitInformation
%matplotlib inline
gitInformation.printInformation()
Explanation: Behavior of the median filter with noised sine waves
DW 2015.11.12
End of explanation
# Sine wave, 16 wave numbers, 16*128 samples.
x = np.linspace(0, 2, 16*128)
data = np.sin(16*np.pi*x)
# Different noises with different standard deviations (spread or "width")
# will be saved in, so we can generate different signal to noise ratios
diff_noise = np.zeros((140,len(data)))
# Noised sine waves.
noised_sines = np.zeros((140,len(data)))
# Median filtered wave.
medfilter = np.zeros((140,len(data)))
# Filtered sine waves (noised_sines - medfilter)
filtered_sines = np.zeros((140,len(data)))
# Behavior of the median filter. Save the RMS values of the filtered waves in it.
behav = np.zeros(140)
# Lists with used window lengths and Signal to noise ratios
wl = [17,33,65,97, 129, 161, 193, 225, 257, 289, 321, 353, 385, 417, 449]
sn = [1, 1.5, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Explanation: 1. Create all needed arrays and data.
End of explanation
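Before the full parameter sweep, a single filtered trace helps to see the scale of things (a sketch; the window length 129 is one of the values in wl and spans roughly one period of the sine, since 16*128 samples / 16 cycles = 128 samples per period):
noisy = data + np.random.normal(0, 0.706341266, len(data))      # S/N ratio of 1
smoothed = medfilt(noisy, 129)
print(np.sqrt(np.mean(np.square(noisy - smoothed))))            # RMS of the residual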
# Calculate and save all values.
# Because the for loop doesn't count from 1 to 10 for example,
# we need a counter to iterate through the array.
# The counter is assigned to -1, so we can iterate from 0 to len(values)
count = -1
count2 = -1
values = np.zeros((len(sn), len(wl)))
for w in wl[:11]:
count = count + 1
for x in sn:
count2 = count2 + 1
for i in range (len(diff_noise)):
# Create different noises, with x we change the signal to noise
            # ratio from 1 to 10.
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
            # Add noise to each sine wave, to create a realistic signal.
noised_sines[i, :] = data + diff_noise[i, :]
            # Filter all the noised sine waves.
medfilter[i, :] = medfilt(noised_sines[i, :], w)
# Subtract the filtered wave from the noised sine waves.
filtered_sines[i, :] = noised_sines[i, :] - medfilter[i, :]
# Calculate the root mean square (RMS) of each sine wave
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
        # Calculate the mean of the behavior, so we can see how
        # the signal to noise ratio affects the median filter
# with different window lengths.
mean = np.mean(behav)
# Save the result in the 'values' array
values[count2:count2+1:,count] = mean
# Set coun2 back to -1, so we can iterate again from 0 to len(values).
# Otherwise the counter would get higher and is out of range.
count2 = - 1
# Save the array, because the calculation take some time.
# Load the array with "values = np.loadtxt('values.txt')".
np.savetxt("values.txt", values)
plt.plot(noised_sines[1,:])
values = np.loadtxt("values.txt")
plt.plot(values)
viridis_data = np.loadtxt('viridis_data.txt')
plasma_data = np.loadtxt('plasma_data.txt')
# viris_data and plasma_data taken from
# https://github.com/BIDS/colormap/blob/master/colormaps.py
fig = plt.figure(figsize=(20, 7))
for p in range(0,11):
ax = plt.subplot(2, 5, p)
plt.axis([0, 11, 0, 1.5])
plt.plot(sn,values[:,p], 'o-', color=viridis_data[(p*25)-25,:])
plt.savefig('Behavior with given SN ratio and different wl.png',dpi=300)
fig = plt.figure()
values3 = np.zeros((len(sn),len(wl)))
for p in range(6):
ax = plt.subplot()
values3[:,p] = values[::,p]/0.7069341
plt.axis([0, 11, 0, 2])
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('S/N Ratio', size = 14)
plt.hlines(1,1,10, color = 'b', linestyle = '--')
plt.plot(sn,values3[:,p], color=plasma_data[(p*40),:])
plt.savefig('Behavior with given SN ratio and different wl3.png',dpi=300)
Explanation: Figure 1. Behavior of the median filter with given window length and different S/N ratio.
End of explanation
# Alternative we subtract the filtered wave from the original sine wave,
# not from the noised sine wave.
count = -1
count2 = -1
values = np.zeros((len(sn), len(wl)))
for w in wl[:11]:
count = count + 1
for x in sn:
count2 = count2 + 1
for i in range (len(diff_noise)):
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
noised_sines[i, :] = data + diff_noise[i, :]
medfilter[i, :] = medfilt(noised_sines[i, :], w)
filtered_sines[i, :] = data - medfilter[i, :]
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
mean = np.mean(behav)
values[count2:count2+1:,count] = mean
count2 = - 1
np.savetxt("valuesA.txt", values)
valuesA = np.loadtxt("valuesA.txt")
fig = plt.figure()
values3 = np.zeros((len(sn),len(wl)))
for p in range(6):
ax = plt.subplot()
values3[::,p] = valuesA[::,p]/0.7069341
plt.axis([0, 11, 0, 2])
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('S/N Ratio', size = 14)
plt.hlines(1,1,101, color = 'b', linestyle = '--')
plt.plot(sn,values3[::,p], color=plasma_data[(p*40),:])
plt.savefig('Behavior with given SN ratio and different wl3A.png',dpi=300)
Explanation: Figure 1.1 Behavior of the median filter with given window length and different S/N ratio.
End of explanation
values = np.zeros((len(wl), len(sn)))
count = -1
count2 = -1
for x in sn:
count = count + 1
for w in wl:
count2 = count2 + 1
for i in range (len(diff_noise)):
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
noised_sines[i, :] = data + diff_noise[i, :]
medfilter[i, :] = medfilt(noised_sines[i, :], w)
filtered_sines[i, :] = noised_sines[i, :] - medfilter[i, :]
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
mean = np.mean(behav)
values[count2:count2+1:,-count] = mean
count2 = -1
np.savetxt("values2.txt", values)
values2 = np.loadtxt("values2.txt")
fig = plt.figure(figsize=(30,7))
for p in range(11):
ax = plt.subplot(2,5,p)
plt.axis([0, 450, 0, 1.5])
xticks = np.arange(0, max(wl), 64)
ax.set_xticks(xticks)
x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))]
ax.set_xticklabels(x_label)
plt.plot(wl,values2[::,p], color=viridis_data[(p*25),:])
plt.savefig('Behavior with given wl and different SN ratio.png',dpi=300)
fig = plt.figure()
values4 = np.zeros((len(wl), len(sn)))
for p in range (10):
# Normalize the RMS with the RMS of a normal sine wave
values4[::,p] = values2[::,p]/0.7069341
ax = plt.subplot()
plt.axis([0, 450, 0, 2])
# Set xticks at each 64th point
xticks = np.arange(0, max(wl) + 1, 64)
ax.set_xticks(xticks)
# x_label = pi at each 64th point
x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))]
ax.set_xticklabels(x_label)
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('Window length', size = 14)
plt.plot(wl,values4[::,p], color=viridis_data[(p*25)-25,:])
plt.hlines(1,1,max(wl), color = 'b', linestyle = '--')
plt.savefig('Behavior with given wl and different SN ratio3.png',dpi=300)
Explanation: Figure 2: Behavior of the median filter with given window length and different S/N ratio
End of explanation
# Alternative
values = np.zeros((len(wl), len(sn)))
count = -1
count2 = -1
for x in sn:
count = count + 1
for w in wl:
count2 = count2 + 1
for i in range (len(diff_noise)):
diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data))
noised_sines[i, :] = data + diff_noise[i, :]
medfilter[i, :] = medfilt(noised_sines[i, :], w)
filtered_sines[i, :] = data - medfilter[i, :]
behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :])))
mean = np.mean(behav)
values[count2:count2+1:,-count] = mean
count2 = -1
np.savetxt("values2A.txt", values)
values2A = np.loadtxt("values2A.txt")
fig = plt.figure()
values4 = np.zeros((len(wl), len(sn)))
for i in range (11):
# Normalize the RMS with the RMS of a normal sine wave
values4[::,i] = values2A[::,i]/0.7069341
ax = plt.subplot()
plt.axis([0, 450, 0, 2])
# Set xticks at each 64th point
xticks = np.arange(0, max(wl) + 1, 64)
ax.set_xticks(xticks)
# x_label = pi at each 64th point
x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))]
ax.set_xticklabels(x_label)
plt.ylabel('Normalized RMS', size = 14)
plt.xlabel('Window length', size = 14)
plt.plot(wl,values4[::,i], color=viridis_data[(i*25)-25,:])
plt.hlines(1,1,max(wl), color = 'b', linestyle = '--')
plt.savefig('Behavior with given wl and different SN ratio2A.png',dpi=300)
Explanation: Figure 2.1: Behavior of the median filter with given window length and different S/N ratio
End of explanation |
4,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Setup
Loading auxiliary files and importing the necessary libraries.
Step2: Grading
We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running submit function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want.
Step4: Variational Autoencoder
Recall that Variational Autoencoder is a probabilistic model of data based on a continious mixture of distributions. In the lecture we covered the mixture of gaussians case, but here we will apply VAE to binary MNIST images (each pixel is either black or white). To better model binary data we will use a continuous mixture of binomial distributions
Step6: Encoder / decoder definition
Task 2 Read the code below that defines encoder and decoder networks and implement sampling with reparametrization trick in the provided space.
Step7: Training the model
Task 3 Run the cells below to train the model with the default settings. Modify the parameters to get better results. Especially pay attention to the encoder/decoder architectures (e.g. using more layers, maybe making them convolutional), learning rate, and the number of epochs.
Step8: Load and prepare the data
Step9: Train the model
Step10: Visualize reconstructions for train and validation data
In the picture below you can see the reconstruction ability of your network on training and validation data. In each of the two images, the left column is MNIST images and the right column is the corresponding image after passing through autoencoder (or more precisely the mean of the binomial distribution over the output images).
Note that getting the best possible reconstruction is not the point of VAE, the KL term of the objective specifically hurts the reconstruction performance. But the reconstruction should be anyway reasonable and they provide a visual debugging tool.
Step11: Sending the results of your best model as Task 3 submission
Step12: Hallucinating new data
Task 4 Write code to generate new samples of images from your trained VAE. To do that you have to sample from the prior distribution $p(t)$ and then from the likelihood $p(x \mid t)$.
Note that the sampling you've written in Task 2 was for the variational distribution $q(t \mid x)$, while here you need to sample from the prior.
Step13: Conditional VAE
In the final task, you will modify your code to obtain Conditional Variational Autoencoder [1]. The idea is very simple
Step14: Define the loss and the model
Step15: Train the model
Step16: Visualize reconstructions for train and validation data
Step17: Conditionally hallucinate data
Task 5.2 Implement the conditional sampling from the distribution $p(x \mid t, \text{label})$ by firstly sampling from the prior $p(t)$ and then sampling from the likelihood $p(x \mid t, \text{label})$.
Step18: Authorization & Submission
To submit assignment parts to Cousera platform, please, enter your e-mail and token into variables below. You can generate a token on this programming assignment's page. <b>Note
Step19: Playtime (UNGRADED)
Once you passed all the tests, modify the code above to work with the mixture of Gaussian distributions (in contrast to the mixture of Binomial distributions), and redo the experiments with CIFAR-10 dataset, which are full-color natural images with much more diverse structure. | Python Code:
%tensorflow_version 1.x
Explanation: <a href="https://colab.research.google.com/github/saketkc/notebooks/blob/master/coursera-BayesianML/05_Vae_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
First things first
Click File -> Save a copy in Drive and click Open in new tab in the pop-up window to save your progress in Google Drive.
Click Runtime -> Change runtime type and select GPU in Hardware accelerator box to enable faster GPU training.
Variational Autoencoder
In this assignment, you will build Variational Autoencoder, train it on the MNIST dataset, and play with its architecture and hyperparameters.
End of explanation
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
print("Downloading Colab files")
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/bayesian-methods-for-ml/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.load_data_week5()
import tensorflow as tf
import keras
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Input, Dense, Lambda, InputLayer, concatenate
from keras.models import Model, Sequential
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
from keras.utils import np_utils
from w5_grader import VAEGrader
Explanation: Setup
Loading auxiliary files and importing the necessary libraries.
End of explanation
grader = VAEGrader()
Explanation: Grading
We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running submit function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want.
End of explanation
def vlb_binomial(x, x_decoded_mean, t_mean, t_log_var):
    """Returns the value of negative Variational Lower Bound
    The inputs are tf.Tensor
    x: (batch_size x number_of_pixels) matrix with one image per row with zeros and ones
    x_decoded_mean: (batch_size x number_of_pixels) mean of the distribution p(x | t), real numbers from 0 to 1
    t_mean: (batch_size x latent_dim) mean vector of the (normal) distribution q(t | x)
    t_log_var: (batch_size x latent_dim) logarithm of the variance vector of the (normal) distribution q(t | x)
    Returns:
    A tf.Tensor with one element (averaged across the batch), VLB
    """
### YOUR CODE HERE
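    # Notes added for clarity: 'kl' below is the closed-form
    # KL( N(t_mean, diag(exp(t_log_var))) || N(0, I) )
    #   = 0.5 * sum( exp(t_log_var) + t_mean^2 - 1 - t_log_var ),
    # and 'eq' is the negative Bernoulli log-likelihood of the pixels;
    # both are averaged over the mini-batch.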
kl = K.mean(0.5 * K.sum(-t_log_var + K.exp(t_log_var) + K.square(t_mean) - 1, axis=1))
eq = K.mean(-K.sum(x * K.log(x_decoded_mean+1e-6) + (1-x) * K.log(1-x_decoded_mean+1e-6), axis=1))
return (eq+kl)
# Start tf session so we can run code.
#import tensorflow.compat.v1 as tfc
#sess = tf.compat.v1.keras.backend.get_session()
sess = tf.InteractiveSession()
# Connect keras to the created session.
K.set_session(sess)
grader.submit_vlb(sess, vlb_binomial)
Explanation: Variational Autoencoder
Recall that Variational Autoencoder is a probabilistic model of data based on a continious mixture of distributions. In the lecture we covered the mixture of gaussians case, but here we will apply VAE to binary MNIST images (each pixel is either black or white). To better model binary data we will use a continuous mixture of binomial distributions: $p(x \mid w) = \int p(x \mid t, w) p(t) dt$, where the prior distribution on the latent code $t$ is standard normal $p(t) = \mathcal{N}(0, I)$, but probability that $(i, j)$-th pixel is black equals to $(i, j)$-th output of the decoder neural detwork: $p(x_{i, j} \mid t, w) = \text{decoder}(t, w)_{i, j}$.
To train this model we would like to maximize marginal log-likelihood of our dataset $\max_w \log p(X \mid w)$, but it's very hard to do computationally, so instead we maximize the Variational Lower Bound w.r.t. both the original parameters $w$ and variational distribution $q$ which we define as encoder neural network with parameters $\phi$ which takes input image $x$ and outputs parameters of the gaussian distribution $q(t \mid x, \phi)$: $\log p(X \mid w) \geq \mathcal{L}(w, \phi) \rightarrow \max_{w, \phi}$.
So overall our model looks as follows: encoder takes an image $x$, produces a distribution over latent codes $q(t \mid x)$ which should approximate the posterior distribution $p(t \mid x)$ (at least after training), samples a point from this distribution $\widehat{t} \sim q(t \mid x, \phi)$, and finally feeds it into a decoder that outputs a distribution over images.
In the lecture, we also discussed that variational lower bound has an expected value inside which we are going to approximate with sampling. But it is not trivial since we need to differentiate through this approximation. However, we learned about reparametrization trick which suggests instead of sampling from distribution $\widehat{t} \sim q(t \mid x, \phi)$ sample from a distribution which doesn't depend on any parameters, e.g. standard normal, and then deterministically transform this sample to the desired one: $\varepsilon \sim \mathcal{N}(0, I); ~~\widehat{t} = m(x, \phi) + \varepsilon \sigma(x, \phi)$. This way we don't have to worry about our stochastic gradient being biased and can straightforwardly differentiate our loss w.r.t. all the parameters while treating the current sample $\varepsilon$ as constant.
Negative Variational Lower Bound
Task 1 Derive and implement Variational Lower Bound for the continuous mixture of Binomial distributions.
Note that in lectures we discussed maximizing the VLB (which is typically a negative number), but in this assignment, for convenience, we will minimize the negated version of VLB (which will be a positive number) instead of maximizing the usual VLB. In what follows we always talk about negated VLB, even when we use the term VLB for short.
Also note that to pass the test, your code should work with any mini-batch size.
To do that, we need a stochastic estimate of VLB:
$$\text{VLB} = \sum_{i=1}^N \text{VLB}_i \approx \frac{N}{M}\sum_{i_s}^M \text{VLB}_{i_s}$$
where $N$ is the dataset size, $\text{VLB}_i$ is the term of VLB corresponding to the $i$-th object, and $M$ is the mini-batch size. But instead of this stochastic estimate of the full VLB we will use an estimate of the negated VLB normalized by the dataset size, i.e. in the function below you need to return average across the mini-batch $-\frac{1}{M}\sum{i_s}^M \text{VLB}_{i_s}$. People usually optimize this normalized version of VLB since it doesn't depend on the dataset set - you can write VLB function once and use it for different datasets - the dataset size won't affect the learning rate too much. The correct value for this normalized negated VLB should be around $100 - 170$ in the example below.
End of explanation
batch_size = 100
original_dim = 784 # Number of pixels in MNIST images.
latent_dim = 100 # d, dimensionality of the latent code t.
intermediate_dim = 256 # Size of the hidden layer.
epochs = 50
x = Input(batch_shape=(batch_size, original_dim))
def create_encoder(input_dim):
# Encoder network.
# We instantiate these layers separately so as to reuse them later
encoder = Sequential(name='encoder')
encoder.add(InputLayer([input_dim]))
encoder.add(Dense(intermediate_dim, activation='relu'))
encoder.add(Dense(2 * latent_dim))
return encoder
encoder = create_encoder(original_dim)
get_t_mean = Lambda(lambda h: h[:, :latent_dim])
get_t_log_var = Lambda(lambda h: h[:, latent_dim:])
h = encoder(x)
t_mean = get_t_mean(h)
t_log_var = get_t_log_var(h)
# Sampling from the distribution
# q(t | x) = N(t_mean, exp(t_log_var))
# with reparametrization trick.
def sampling(args):
    """Returns sample from a distribution N(args[0], diag(args[1]))
    The sample should be computed with reparametrization trick.
    The inputs are tf.Tensor
    args[0]: (batch_size x latent_dim) mean of the desired distribution
    args[1]: (batch_size x latent_dim) logarithm of the variance vector of the desired distribution
    Returns:
    A tf.Tensor of size (batch_size x latent_dim), the samples.
    """
t_mean, t_log_var = args
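    # Reparametrization trick: t = mean + sigma * eps with eps ~ N(0, I),
    # so the sample stays differentiable w.r.t. t_mean and t_log_var.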
return t_mean + K.exp(0.5*t_log_var)* K.random_normal(t_mean.shape)
# YOUR CODE HERE
t = Lambda(sampling)([t_mean, t_log_var])
def create_decoder(input_dim):
# Decoder network
# We instantiate these layers separately so as to reuse them later
decoder = Sequential(name='decoder')
decoder.add(InputLayer([input_dim]))
decoder.add(Dense(intermediate_dim, activation='relu'))
decoder.add(Dense(original_dim, activation='sigmoid'))
return decoder
decoder = create_decoder(latent_dim)
x_decoded_mean = decoder(t)
grader.submit_samples(sess, sampling)
Explanation: Encoder / decoder definition
Task 2 Read the code below that defines encoder and decoder networks and implement sampling with reparametrization trick in the provided space.
End of explanation
loss = vlb_binomial(x, x_decoded_mean, t_mean, t_log_var)
vae = Model(x, x_decoded_mean)
# Keras will provide input (x) and output (x_decoded_mean) to the function that
# should construct loss, but since our function also depends on other
# things (e.g. t_means), it is easier to build the loss in advance and pass
# a function that always returns it.
vae.compile(optimizer=keras.optimizers.RMSprop(lr=0.001), loss=lambda x, y: loss)
Explanation: Training the model
Task 3 Run the cells below to train the model with the default settings. Modify the parameters to get better results. Especially pay attention to the encoder/decoder architectures (e.g. using more layers, maybe making them convolutional), learning rate, and the number of epochs.
End of explanation
# train the VAE on MNIST digits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# One hot encoding.
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
Explanation: Load and prepare the data
End of explanation
hist = vae.fit(x=x_train, y=x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test),
verbose=2)
Explanation: Train the model
End of explanation
fig = plt.figure(figsize=(10, 10))
for fid_idx, (data, title) in enumerate(
zip([x_train, x_test], ['Train', 'Validation'])):
n = 10 # figure with 10 x 2 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * 2))
decoded = sess.run(x_decoded_mean, feed_dict={x: data[:batch_size, :]})
for i in range(10):
figure[i * digit_size: (i + 1) * digit_size,
:digit_size] = data[i, :].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
digit_size:] = decoded[i, :].reshape(digit_size, digit_size)
ax = fig.add_subplot(1, 2, fid_idx + 1)
ax.imshow(figure, cmap='Greys_r')
ax.set_title(title)
ax.axis('off')
plt.show()
Explanation: Visualize reconstructions for train and validation data
In the picture below you can see the reconstruction ability of your network on training and validation data. In each of the two images, the left column is MNIST images and the right column is the corresponding image after passing through autoencoder (or more precisely the mean of the binomial distribution over the output images).
Note that getting the best possible reconstruction is not the point of VAE, the KL term of the objective specifically hurts the reconstruction performance. But the reconstruction should be anyway reasonable and they provide a visual debugging tool.
End of explanation
grader.submit_best_val_loss(hist)
Explanation: Sending the results of your best model as Task 3 submission
End of explanation
n_samples = 10 # To pass automatic grading please use at least 2 samples here.
# YOUR CODE HERE.
# ...
# sampled_im_mean is a tf.Tensor of size 10 x 784 with 10 random
# images sampled from the vae model.
z = tf.random_normal((n_samples, latent_dim))
sampled_im_mean = decoder(z)
sampled_im_mean_np = sess.run(sampled_im_mean)
# Show the sampled images.
plt.figure()
for i in range(n_samples):
ax = plt.subplot(n_samples // 5 + 1, 5, i + 1)
plt.imshow(sampled_im_mean_np[i, :].reshape(28, 28), cmap='gray')
ax.axis('off')
plt.show()
grader.submit_hallucinating(sess, sampled_im_mean)
Explanation: Hallucinating new data
Task 4 Write code to generate new samples of images from your trained VAE. To do that you have to sample from the prior distribution $p(t)$ and then from the likelihood $p(x \mid t)$.
Note that the sampling you've written in Task 2 was for the variational distribution $q(t \mid x)$, while here you need to sample from the prior.
End of explanation
# One-hot labels placeholder.
x = Input(batch_shape=(batch_size, original_dim))
label = Input(batch_shape=(batch_size, 10))
# YOUR CODE HERE.
cencoder = create_encoder(original_dim + 10)
stacked_x = concatenate([x, label])
h = cencoder(stacked_x)
cond_t_mean = get_t_mean(h)
cond_t_log_var = get_t_log_var(h)
t = Lambda(sampling)([cond_t_mean, cond_t_log_var])
stacked_t = concatenate([t, label])
cdecoder = create_decoder(latent_dim + 10)
#cond_t_mean =
#cond_t_log_var = # Logarithm of the variance of the latent code (without label) for cvae model.
cond_x_decoded_mean = cdecoder(stacked_t) # Final output of the cvae model.
Explanation: Conditional VAE
In the final task, you will modify your code to obtain Conditional Variational Autoencoder [1]. The idea is very simple: to be able to control the samples you generate, we condition all the distributions on some additional information. In our case, this additional information will be the class label (the digit on the image, from 0 to 9).
So now both the likelihood and the variational distributions are conditioned on the class label: $p(x \mid t, \text{label}, w)$, $q(t \mid x, \text{label}, \phi)$.
The only thing you have to change in your code is to concatenate the input image $x$ with the (one-hot) label of this image to pass into the encoder $q$, and to concatenate the latent code $t$ with the same label to pass into the decoder $p$. Note that it's slightly harder to do with a convolutional encoder/decoder model.
[1] Sohn, Kihyuk, Honglak Lee, and Xinchen Yan. “Learning Structured Output Representation using Deep Conditional Generative Models.” Advances in Neural Information Processing Systems. 2015.
Final task
Task 5.1 Implement the CVAE model. You may reuse the create_encoder and create_decoder modules defined previously (now you can see why they accept the input size as an argument ;) ). You may also need the concatenate Keras layer to concatenate the labels with the input data and the latent code.
To finish this task, you should go to the Conditionally hallucinate data section and find Task 5.2 there.
End of explanation
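For reference, and assuming (as the sampling code later in the notebook suggests) a standard normal prior over $t$ that does not depend on the label, the per-object objective being maximized is the conditional analogue of the usual variational lower bound:
$$\mathcal{L} = \mathbb{E}_{q(t \mid x, \text{label}, \phi)} \log p(x \mid t, \text{label}, w) - \text{KL}\big(q(t \mid x, \text{label}, \phi) \,\|\, p(t)\big)$$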
conditional_loss = vlb_binomial(x, cond_x_decoded_mean, cond_t_mean, cond_t_log_var)
cvae = Model([x, label], cond_x_decoded_mean)
cvae.compile(optimizer=keras.optimizers.RMSprop(lr=0.001), loss=lambda x, y: conditional_loss)
Explanation: Define the loss and the model
End of explanation
hist = cvae.fit(x=[x_train, y_train],
y=x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=([x_test, y_test], x_test),
verbose=2)
Explanation: Train the model
End of explanation
fig = plt.figure(figsize=(10, 10))
for fid_idx, (x_data, y_data, title) in enumerate(
zip([x_train, x_test], [y_train, y_test], ['Train', 'Validation'])):
n = 10 # figure with 10 x 2 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * 2))
decoded = sess.run(cond_x_decoded_mean,
feed_dict={x: x_data[:batch_size, :],
label: y_data[:batch_size, :]})
for i in range(10):
figure[i * digit_size: (i + 1) * digit_size,
:digit_size] = x_data[i, :].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
digit_size:] = decoded[i, :].reshape(digit_size, digit_size)
ax = fig.add_subplot(1, 2, fid_idx + 1)
ax.imshow(figure, cmap='Greys_r')
ax.set_title(title)
ax.axis('off')
plt.show()
Explanation: Visualize reconstructions for train and validation data
End of explanation
# Prepare one hot labels of form
# 0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 ...
# to sample five zeros, five ones, etc
curr_labels = np.eye(10)
curr_labels = np.repeat(curr_labels, 5, axis=0) # Its shape is 50 x 10.
# YOUR CODE HERE.
# ...
# cond_sampled_im_mean is a tf.Tensor of size 50 x 784 with 5 random zeros,
# then 5 random ones, etc sampled from the cvae model.
z = tf.random_normal((50, latent_dim))
labels = tf.convert_to_tensor(curr_labels, dtype=tf.float32)
stacked_z = concatenate([z, labels])
cond_sampled_im_mean = cdecoder(stacked_z)
cond_sampled_im_mean_np = sess.run(cond_sampled_im_mean)
# Show the sampled images.
plt.figure(figsize=(10, 10))
global_idx = 0
for digit in range(10):
for _ in range(5):
ax = plt.subplot(10, 5, global_idx + 1)
plt.imshow(cond_sampled_im_mean_np[global_idx, :].reshape(28, 28), cmap='gray')
ax.axis('off')
global_idx += 1
plt.show()
# Submit Task 5 (both 5.1 and 5.2).
grader.submit_conditional_hallucinating(sess, cond_sampled_im_mean)
Explanation: Conditionally hallucinate data
Task 5.2 Implement the conditional sampling from the distribution $p(x \mid t, \text{label})$ by firstly sampling from the prior $p(t)$ and then sampling from the likelihood $p(x \mid t, \text{label})$.
End of explanation
STUDENT_EMAIL = "[email protected]" # EMAIL HERE
STUDENT_TOKEN = "7vivXLipZ6P2kJdf" # TOKEN HERE
grader.status()
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
Explanation: Authorization & Submission
To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment's page. <b>Note:</b> The token expires 30 minutes after generation.
End of explanation
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
plt.imshow(x_train[7, :])
plt.show()
Explanation: Playtime (UNGRADED)
Once you have passed all the tests, modify the code above to work with a mixture of Gaussian distributions (in contrast to the mixture of Binomial distributions), and redo the experiments with the CIFAR-10 dataset, which consists of full-color natural images with much more diverse structure.
End of explanation |
4,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exact DKL (Deep Kernel Learning) Regression w/ KISS-GP
Overview
In this notebook, we'll give a brief tutorial on how to use deep kernel learning for regression on a medium scale dataset using SKI. This also demonstrates how to incorporate standard PyTorch modules in to a Gaussian process model.
Step1: Loading Data
For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.
Note
Step2: Defining the DKL Feature Extractor
Next, we define the neural network feature extractor used to define the deep kernel. In this case, we use a fully connected network with the architecture d -> 1000 -> 500 -> 50 -> 2, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.
Step3: Defining the DKL-GP Model
We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an RBF base kernel.
The forward method
In deep kernel learning, the forward method is where most of the interesting new stuff happens. Before calling the mean and covariance modules on the data as in the simple GP regression setting, we first pass the input data x through the neural network feature extractor. Then, to ensure that the output features of the neural network remain in the grid bounds expected by SKI, we scale the resulting features so that they lie between -1 and 1.
Only after this processing do we call the mean and covariance module of the Gaussian process. This example also demonstrates the flexibility of defining GP models that allow for learned transformations of the data (in this case, via a neural network) before calling the mean and covariance function. Because the neural network in this case maps to two final output features, we will have no problem using SKI.
Step4: Training the model
The cell below trains the DKL model above, learning both the hyperparameters of the Gaussian process and the parameters of the neural network in an end-to-end fashion using Type-II MLE. We run 60 iterations of training using the Adam optimizer built into PyTorch. With a decent GPU, this should only take a few seconds.
Step5: Making Predictions
The next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in preds.mean()) using the standard SKI testing code, with no acceleration or precomputation. | Python Code:
import math
import tqdm
import torch
import gpytorch
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
Explanation: Exact DKL (Deep Kernel Learning) Regression w/ KISS-GP
Overview
In this notebook, we'll give a brief tutorial on how to use deep kernel learning for regression on a medium scale dataset using SKI. This also demonstrates how to incorporate standard PyTorch modules in to a Gaussian process model.
End of explanation
import urllib.request
import os
from scipy.io import loadmat
from math import floor
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')
if smoke_test: # this is for running the notebook in our testing framework
X, y = torch.randn(20, 3), torch.randn(20)
else:
data = torch.Tensor(loadmat('../elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
train_n = int(floor(0.8 * len(X)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()
test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
Explanation: Loading Data
For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.
Note: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.
End of explanation
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor()
Explanation: Defining the DKL Feature Extractor
Next, we define the neural network feature extractor used to define the deep kernel. In this case, we use a fully connected network with the architecture d -> 1000 -> 500 -> 50 -> 2, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.
End of explanation
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.GridInterpolationKernel(
gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=2)),
num_dims=2, grid_size=100
)
self.feature_extractor = feature_extractor
def forward(self, x):
# We're first putting our data through a deep net (feature extractor)
# We're also scaling the features so that they're nice values
projected_x = self.feature_extractor(x)
projected_x = projected_x - projected_x.min(0)[0]
projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1
mean_x = self.mean_module(projected_x)
covar_x = self.covar_module(projected_x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPRegressionModel(train_x, train_y, likelihood)
if torch.cuda.is_available():
model = model.cuda()
likelihood = likelihood.cuda()
Explanation: Defining the DKL-GP Model
We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an RBF base kernel.
The forward method
In deep kernel learning, the forward method is where most of the interesting new stuff happens. Before calling the mean and covariance modules on the data as in the simple GP regression setting, we first pass the input data x through the neural network feature extractor. Then, to ensure that the output features of the neural network remain in the grid bounds expected by SKI, we scale the resulting features so that they lie between -1 and 1.
Only after this processing do we call the mean and covariance module of the Gaussian process. This example also demonstrates the flexibility of defining GP models that allow for learned transformations of the data (in this case, via a neural network) before calling the mean and covariance function. Because the neural network in this case maps to two final output features, we will have no problem using SKI.
End of explanation
training_iterations = 2 if smoke_test else 60
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.feature_extractor.parameters()},
{'params': model.covar_module.parameters()},
{'params': model.mean_module.parameters()},
{'params': model.likelihood.parameters()},
], lr=0.01)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
def train():
iterator = tqdm.notebook.tqdm(range(training_iterations))
for i in iterator:
# Zero backprop gradients
optimizer.zero_grad()
# Get output from model
output = model(train_x)
# Calc loss and backprop derivatives
loss = -mll(output, train_y)
loss.backward()
iterator.set_postfix(loss=loss.item())
optimizer.step()
%time train()
Explanation: Training the model
The cell below trains the DKL model above, learning both the hyperparameters of the Gaussian process and the parameters of the neural network in an end-to-end fashion using Type-II MLE. We run 60 iterations of training using the Adam optimizer built into PyTorch. With a decent GPU, this should only take a few seconds.
End of explanation
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
Explanation: Making Predictions
The next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in preds.mean()) using the standard SKI testing code, with no acceleration or precomputation.
End of explanation |
4,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Robust Process Scheduling (with Python)
Nominal Model
Necessary imports
Step1: We instantiate the nominal model and solve it using the default solver (Gurobi).
The data of the nominal model can be changed from within STN.__init__().
Step2: Plot the nominal schedule
Step3: Attained objective
Step4: Robust Model
A robust STN is instantiated with a reference to a particular nominal STN (e.g., the one created in the previous section).
Step5: The module robust_STN.py provides two functions to generate appropriate uncertainty sets
Step6: Plot the robust schedule
Step7: We can then simulate an uncertain event, and see how the schedule has to be adapted to accommodate it.
* The important feature to observe is that events are accommodated without disrupting the core of the schedule (assignments), but just by regulating processed quantities.
* Note also that no more heavy computations are required to calculate the adapted schedule.
Step8: Events are indexed according to the decision they affect. The column parameter is used to simulate appropriate delays, when they can be longer than 1 time step (column=2 would simulate a delay by two time steps). Check the implementation of robust_STN.simulate_uncertain_event() for more details on how the parameters are to be understood.
Delay Example #2
Step9: We see that the optimizer does not schedule any reaction 3 on reactor 2. In the above plots we can also inspect how the schedule has to be adjusted when the delay does occur.
Unit Swap Example
Step10: And wen can again check the resulting schedules (remember events are counted from from_t) | Python Code:
%matplotlib inline
from robust_STN import *
Explanation: Robust Process Scheduling (with Python)
Nominal Model
Necessary imports:
End of explanation
stn = STN()
stn.solve()
Explanation: We instantiate the nominal model and solve it using the default solver (Gurobi).
The data of the nominal model can be changed from within STN.__init__().
End of explanation
stn.plot_schedule()
Explanation: Plot the nominal schedule:
End of explanation
stn.model.value
Explanation: Attained objective:
End of explanation
rSTN = robust_STN(stn)
Explanation: Robust Model
A robust STN is instantiated with a reference to a particular nominal STN (e.g., the one created in the previous section).
End of explanation
rSTN.W = rSTN.build_uncertainty_set_for_time_delay(units=(0,), tasks=(0,), delay=1)
rSTN.solve()
Explanation: The module robust_STN.py provides two functions to generate appropriate uncertainty sets:
* robust_STN.build_uncertainty_set_for_time_delay(units=(0,), tasks=(0,), delay=1, from_t=0, to_t=T)
* robust_STN.build_uncertainty_set_for_unit_swap(from_unit=2, to_unit=1, tasks=(1,), from_t=0, to_t=T)
Below are some examples of how they can be used. These are sufficient to reproduce the results in [1].
Delay Example #1: Heater Delay
We first produce a robust schedule that is immune to possible time delays of the heater (unit=0), when performing heating (task=0), of at most 1 time step (delay=1), which can happen anytime in the scheduling horizon (from_t and to_t assigned default values)
End of explanation
rSTN.plot_schedule()
Explanation: Plot the robust schedule
End of explanation
rSTN.simulate_uncertain_event(event=[0,0,0], column=1)
rSTN.simulate_uncertain_event(event=[0,0,5], column=1)
Explanation: We can then simulate an uncertain event, and see how the schedule has to be adapted to accommodate it.
* The important feature to observe is that events are accommodated without disrupting the core of the schedule (assignments), but just by regulating processed quantities.
* Note also that no more heavy computations are required to calculate the adapted schedule.
End of explanation
rSTN.W = rSTN.build_uncertainty_set_for_time_delay(units=(2,), tasks=(1,3), delay=1)
rSTN.solve()
rSTN.plot_schedule()
rSTN.simulate_uncertain_event(event=[2,1,0], column=1)
rSTN.simulate_uncertain_event(event=[2,1,5], column=1)
Explanation: Events are indexed according to the decision they affect. The column parameter is used to simulate appropriate delays, when they can be longer than 1 time step (column=2 would simulate a delay by two time steps). Check the implementation of robust_STN.simulate_uncertain_event() for more details on how the parameters are to be understood.
Delay Example #2: Reactor 2, delay when processing Reaction 1 or Reaction 3
End of explanation
rSTN.W = rSTN.build_uncertainty_set_for_unit_swap(from_unit=1, to_unit=2, tasks=(2,), from_t=4)
rSTN.solve()
Explanation: We see that the optimizer does not schedule any reaction 3 on reactor 2. In the above plots we can also inspect how the schedule has to be adjusted when the delay does occur.
Unit Swap Example: swap processing of Reaction 2, from Reactor 1 to Reactor 2, starting from t = 4h
End of explanation
rSTN.plot_schedule()
rSTN.simulate_uncertain_event(event=[1,2,4])
Explanation: And we can again check the resulting schedules (remember events are counted from from_t):
End of explanation |
4,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create the engine, passing in the database address
Step1: Run the query, specifying the engine you want to use
Step2: Link to the Pandas NB to see join, merge, append, etc.
Adding a new record to our "Customer" table with pandas
Step3: Adding a new table to our db from pandas
Step4: Now let's do the same with sqlalchemy
Step5: Adding elements to Wine with pandas
engine = create_engine('postgresql://celia@localhost:5432/mytestdb')
engine
df_customer.to_json('/tmp/test.json')
json_df = pd.read_json('/home/celia/Downloads/MOCK_DATA.json')
json_df
json_df.to_sql('Customer', engine, index=None)
Explanation: Create the engine, passing in the database address
End of explanation
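For reference, SQLAlchemy connection URLs follow the pattern dialect+driver://username:password@host:port/database; the engines below are only illustrations and use placeholder credentials and file names.
from sqlalchemy import create_engine
# General form: dialect+driver://username:password@host:port/database
engine_pg = create_engine('postgresql://user:secret@localhost:5432/mytestdb')  # placeholder credentials
engine_sqlite = create_engine('sqlite:///local_test.db')  # file-based SQLite database, no server needed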
df_customer = pd.read_sql_query('select email, bday, first_name from "Customer"',con=engine)
df_customer
df_customer.info()
df_customer.describe()
Explanation: Run the query, specifying the engine you want to use
End of explanation
json_df.columns
new_df = pd.DataFrame([[pd.datetime(1990, 3, 19),'[email protected]', 'celia', 'Female', 10000, 'cintas']], columns=json_df.columns)
new_df
new_df.to_sql('Customer', engine, if_exists='append', index=None)
df_customer = pd.read_sql_query('select * from "Customer" WHERE id = 10000;', con=engine)
df_customer
Explanation: Link to the Pandas NB to see join, merge, append, etc.
Adding a new record to our "Customer" table with pandas
End of explanation
new_table = pd.DataFrame([], columns=['WineCode', 'Type', 'Vintage'])
new_table
new_table.to_sql('Wine', engine, index=None)
Explanation: Adding a new table to our db from pandas
End of explanation
from sqlalchemy import MetaData, types
from sqlalchemy import Table, Column
metadata = MetaData()
time = Table('Time', metadata,
Column('TimeCode', types.Integer, primary_key=True),
Column('Date', types.DateTime, nullable=False),
)
metadata.create_all(engine)
Explanation: Now let's do the same with sqlalchemy
End of explanation
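As a small illustrative follow-up, a row could also be inserted into the Time table defined above using SQLAlchemy Core directly; the values here are made up for the example.
import datetime
# Insert one illustrative row into the Time table; engine.begin() opens a
# transaction and commits it on exit.
with engine.begin() as conn:
    conn.execute(time.insert(), [{'TimeCode': 1, 'Date': datetime.datetime(2016, 1, 1)}])
pd.read_sql_query('select * from "Time"', con=engine)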
data = [[1, 'White', 2000],
[2, 'red', 2015],
[3, 'rose', 2014]]
new_df = pd.DataFrame(data, columns=new_table.columns)
new_df.to_sql('Wine', engine, if_exists='append', index=None)
df_wine = pd.read_sql_query('select * from "Wine"',con=engine)
df_wine
data = [[1, 'White', pd.datetime(2000, 10, 10)],
[2, 'red', pd.datetime(2010, 9, 9)],
[3, 'rose', pd.datetime(2011, 9, 9)]]
new_df = pd.DataFrame(data, columns=df_wine.columns)
new_df['Vintage']
new_df.to_json('/tmp/lero.json', date_unit='ns')
json_demo = pd.read_json('/tmp/lero.json')
json_demo
new_df
json_demo['Vintage'] = pd.to_datetime(json_demo['Vintage'], unit='ns')
json_demo
new_df
json_demo.columns.values
json_demo.values
pd.merge(json_demo, new_df, on=list(json_demo.columns.values), how='outer')
pd.Series?
score = pd.Series([10, 9, 8], name='score')
score
out = pd.concat([json_demo, score], axis=1)
new_row = pd.DataFrame([[4, 'espumeante',pd.datetime(2000,2,2)]], columns=new_df.columns)
append_df = new_df.append(new_row)
append_df.to_sql('Wine', engine, if_exists='append', index=None)
Explanation: Adding elements to Wine with pandas
End of explanation |
4,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing linear relationships
Many datasets contain multiple quantitative variables, and the goal of the analysis is often to relate those variables to one another
Step1: regplot() and lmplot() draw a scatter plot of two variables, x and y, then fit the regression model y ~ x and plot the resulting regression line together with a 95% confidence interval for that regression
Step2: You should note that the resulting plots are identical, except that the figure shapes are different. We will explain why this is shortly. For now, the other main difference to know about is that regplot() accepts the x and y variables in a variety of formats including simple numpy arrays, pandas Series objects, or as references to variables in a pandas DataFrame object passed to data. In contrast, lmplot() has data as a required parameter and the x and y variables must be specified as strings. This data format is called “long-form” or “tidy” data Other than this input flexibility, regplot() possesses a subset of lmplot()‘s features, so we will demonstrate them using the latter.
It’s possible to fit a linear regression when one of the variables takes discrete values, however, the simple scatterplot produced by this kind of dataset is often not optimal
Fitting with discrete values
Step3: In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to down-weight relatively large residuals
Step4: Drawing subplots with col
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
np.random.seed(sum(map(ord, "regression")))
tips = sns.load_dataset("tips")
Explanation: Visualizing linear relationships
Many datasets contain multiple quantitative variables, and the goal of the analysis is often to relate those variables to one another
End of explanation
sns.regplot(x="total_bill", y="tip", data=tips)
plt.show()
sns.lmplot(x="total_bill", y="tip", data=tips)
plt.show()
Explanation: regplot() and lmplot() draw a scatter plot of two variables, x and y, then fit the regression model y ~ x and plot the resulting regression line together with a 95% confidence interval for that regression
End of explanation
sns.lmplot(x="size", y="tip", data=tips)
plt.show()
# We can also add jitter (noise)
sns.lmplot(x="size", y="tip", data=tips, x_jitter=.05)
plt.show()
# A second option is to collapse the observations at each discrete value and plot an estimate of central tendency along with a confidence interval
sns.lmplot(x="size", y="tip", data=tips, x_estimator=np.mean)
plt.show()
# Polynomial fit
anscombe = sns.load_dataset("anscombe")
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'II'"),
order=2, ci=None, scatter_kws={"s": 80})
plt.show()
Explanation: You should note that the resulting plots are identical, except that the figure shapes are different. We will explain why this is shortly. For now, the other main difference to know about is that regplot() accepts the x and y variables in a variety of formats including simple numpy arrays, pandas Series objects, or as references to variables in a pandas DataFrame object passed to data. In contrast, lmplot() has data as a required parameter and the x and y variables must be specified as strings. This data format is called “long-form” or “tidy” data Other than this input flexibility, regplot() possesses a subset of lmplot()‘s features, so we will demonstrate them using the latter.
It’s possible to fit a linear regression when one of the variables takes discrete values, however, the simple scatterplot produced by this kind of dataset is often not optimal
Fitting with discrete values
End of explanation
sns.lmplot(x="x", y="y", data=anscombe.query("dataset == 'III'"),
robust=True, ci=None, scatter_kws={"s": 80}) # a more robust fit
plt.show()
Explanation: In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to down-weight relatively large residuals
End of explanation
sns.lmplot(x="total_bill", y="tip", hue="smoker", col="time", data=tips)
plt.show()
sns.jointplot(x="total_bill", y="tip", data=tips, kind="reg")
plt.show()
iris = sns.load_dataset("iris")
g = sns.PairGrid(iris, hue="species")
g.map_diag(plt.hist) # diagonal panels
g.map_offdiag(plt.scatter) # off-diagonal panels
g.add_legend()
plt.show()
g = sns.pairplot(iris, hue="species", palette="Set2", diag_kind="kde", size=2.5)
plt.show()
Explanation: Drawing subplots with col
End of explanation |
4,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Uncertainty analysis for drillholes in Gippsland Basin Model
We here evaluate how to analyse and visualise uncertainties in a kinematic model. The basic idea is that we have a set of drillhole locations and depths and want to know how uncertain the model is at these specific locations.
The required methods are implemented in an Experiment subclass and tested here with an application to the Gippsland Basin model.
Step1: Creating an experiment object
First, we start with generating a pynoddy experiment object. The experiment class inherits all the methods from the base pynoddy.history class and we can directly import the Gippsland Basin model that we want to analyse into the object
Step2: Some basic information about the model can be obtained with
Step3: We can have a quick look at the model in a section view (note that Noddy is now executed in the background when required - and the output automatically generated in the required resolution)
Step4: The base plot is not very useful - but we can create a section plot with a defined vertical exaggeration (keyword ve) and plot the colorbar in horizontal orientation
Step5: Note
Step6: Generating random perturbations of the model
Before generating random prerturbations, we should now store the base version so that we can always revert to it at a later stage
Step7: For a reproducible experiment, we can also set the random seed
Step8: And now, let's perturb the model
Step9: Let's see what happened
Step10: ...and another perturbation | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# here the usual imports. If any of the imports fails, make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop' or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('/Users/flow/git/pynoddy/')
sys.path.append(repo_path)
import pynoddy.history
import pynoddy.experiment
rcParams.update({'font.size': 20})
Explanation: Uncertainty analysis for drillholes in Gippsland Basin Model
We here evaluate how to analyse and visualise uncertainties in a kinematic model. The basic idea is that we have a set of drillhole locations and depths and want to know how uncertain the model is at these specific locations.
The required methods are implemented in an Experiment subclass and tested here with an application to the Gippsland Basin model.
End of explanation
import importlib
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.output)
importlib.reload(pynoddy.experiment)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4_b.his")
gipps_topo_ex = pynoddy.experiment.Experiment(history = history_file)
Explanation: Creating an experiment object
First, we start with generating a pynoddy experiment object. The experiment class inherits all the methods from the base pynoddy.history class and we can directly import the Gippsland Basin model that we want to analyse into the object:
End of explanation
print(gipps_topo_ex)
Explanation: Some basic information about the model can be obtained with:
End of explanation
gipps_topo_ex.plot_section('y')
Explanation: We can have a quick look at the model in a section view (note that Noddy is now executed in the background when required - and the output automatically generated in the required resolution):
End of explanation
# gipps_topo_ex.determine_model_stratigraphy()
gipps_topo_ex.plot_section('x', ve = 5, position = 'centre',
cmap = 'YlOrRd',
title = '',
colorbar = False)
gipps_topo_ex.plot_section('y', position = 100, ve = 5.,
cmap = 'YlOrRd',
title = '',
colorbar_orientation = 'horizontal')
Explanation: The base plot is not very useful - but we can create a section plot with a defined vertical exaggeration (keyword ve) and plot the colorbar in horizontal orientation:
End of explanation
importlib.reload(pynoddy.experiment)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4_b.his")
gipps_topo_ex = pynoddy.experiment.Experiment(history = history_file)
gipps_topo_ex.load_parameter_file(os.path.join(repo_path, "examples/gipps_params_2.csv"))
Explanation: Note: The names of the model stratigraphy (colorbar labels) are unfortunately not defined correctly in the input file - we need to fix that, then we should get useful labels, as well!
Loading parameter ranges from file
We now need to define the parameter ranges. This step can either be done through explicit definition in the notebook (see the previous notebook on the Experiment class), or a list of parameters and defined ranges plus statistics can be read in from a csv file. This enables convenient parameter definition in a spreadsheet (for example through Excel).
In order to be read in correctly, the header should contain the labels:
'event' : event id
'parameter' : Noddy parameter ('Dip', 'Dip Direction', etc.)
'min' : minimum value
'max' : maximum value
'initial' : initial value
In addition, it is possible to define PDF type and parameters. For now, the following settings are supported:
'type' = 'normal'
'stdev' : standard deviation
'mean' : mean value (default: 'initial' value)
We can read in the parameters simply with:
End of explanation
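To make the expected file format concrete, a minimal parameter file could be written as follows — the event ids, ranges and standard deviations are purely hypothetical illustrations and are not the contents of the real examples/gipps_params_2.csv:
example_params = """event,parameter,min,max,initial,type,stdev
2,Dip,55.,75.,65.,normal,5.
2,Dip Direction,80.,100.,90.,normal,5.
"""
with open("my_params.csv", "w") as f:
    f.write(example_params)
# a file like this could then be loaded with:
# gipps_topo_ex.load_parameter_file("my_params.csv")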
gipps_topo_ex.freeze()
Explanation: Generating random perturbations of the model
Before generating random perturbations, we should now store the base version so that we can always revert to it at a later stage:
End of explanation
gipps_topo_ex.set_random_seed(12345)
Explanation: For a reproducible experiment, we can also set the random seed:
End of explanation
gipps_topo_ex.random_perturbation()
Explanation: And now, let's perturb the model:
End of explanation
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
gipps_topo_ex.plot_section(ax = ax1, direction = 'x', model_type = "base",
colorbar = False, title = "", ve = 5.)
gipps_topo_ex.plot_section(ax = ax2, direction = 'x', colorbar = False,
title = "", ve = 5.)
#
# Note: keep these lines only for debugging!
#
importlib.reload(pynoddy.output)
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.experiment)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4_b.his")
gipps_topo_ex = pynoddy.experiment.Experiment(history = history_file)
gipps_topo_ex.load_parameter_file(os.path.join(repo_path, "examples/gipps_params.csv"))
# freeze base state
gipps_topo_ex.freeze()
# set seed
gipps_topo_ex.set_random_seed(12345)
# randomize
gipps_topo_ex.random_perturbation()
b1 = gipps_topo_ex.get_section('x', resolution = 50, model_type = 'base')
# b1.plot_section(direction = 'x', colorbar = False, title = "", ve = 5.)
b2 = gipps_topo_ex.get_section('x', resolution = 50, model_type = 'current')
# b1.plot_section(direction = 'x', colorbar = True, title = "", ve = 5.)
b1 -= b2
# b1.plot_section(direction = 'x', colorbar = True, title = "", ve = 5.)
print(np.min(b1.block), np.max(b1.block))
type(b1)
Explanation: Let's see what happened: we can compare the new model to the base model as we stored it before:
End of explanation
gipps_topo_ex.random_perturbation()
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
gipps_topo_ex.plot_section(ax = ax1, direction = 'x', model_type = "base",
colorbar = False, title = "", ve = 5.)
gipps_topo_ex.plot_section(ax = ax2, direction = 'x', colorbar = False,
title = "", ve = 5.)
# plot difference
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
gipps_topo_ex.plot_section(ax = ax1, direction = 'x', model_type = "base",
colorbar = False, title = "", ve = 5.)
gipps_topo_ex.plot_section(ax = ax2, direction = 'x', colorbar = False,
title = "", ve = 5.)
gipps_topo_ex.param_stats
Explanation: ...and another perturbation:
End of explanation |
4,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Requirements
Step1: PyTorch deployment requires using an unstable version of PyTorch (1.0.0+).
In order to install this version, use the "Preview" option when choosing the PyTorch version.
https://pytorch.org/
torch.__version__
Explanation: Requirements
End of explanation
# Let's create an example model using ResNet-18
model = torchvision.models.resnet18()
model
# Creating a sample of the input
# It will be passed through the network so the tracer can record the operations for this input shape
sample = torch.rand(size=(1, 3, 224, 224))
# Creating a so-called "traced Torch script"
traced_script_module = torch.jit.trace(model, sample)
traced_script_module
# The TracedModule is capable of making predictions
sample_prediction = traced_script_module(torch.ones(size=(1, 3, 224, 224)))
sample_prediction.shape
# Serializing the script module
traced_script_module.save('./models_deployment/model.pt')
Explanation: PyTorch deployment requires using an unstable version of PyTorch (1.0.0+).
In order to install this version, use the "Preview" option when choosing the PyTorch version.
https://pytorch.org/
End of explanation |
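A minimal follow-up sketch (not part of the original notebook): the serialized module can be loaded back with torch.jit.load to confirm it still produces outputs of the expected shape; the path matches the save call above.
# Load the serialized script module back and run a dummy input through it
loaded_module = torch.jit.load('./models_deployment/model.pt')
check = loaded_module(torch.ones(size=(1, 3, 224, 224)))
print(check.shape)  # ResNet-18 ends in a 1000-way classifier, so expect torch.Size([1, 1000])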
4,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word Count Lab
Step1: Part 1
Step3: (1b) Pluralize and test
Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If your implementation is correct it will print 1 test passed.
This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO
Step4: (1c) Apply makePlural to the base RDD
Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD.
Step5: (1d) Pass a lambda function to map
Let's create the same RDD using a lambda function.
Step6: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.
Step7: (1f) Pair RDDs
The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.
We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.
Step8: Part 2
Step9: (2b) Use groupByKey() to obtain the counts
Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.
Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.
Step10: (2c) Counting using reduceByKey
A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.
Step11: (2d) All together
The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.
Step12: Part 3
Step13: (3b) Mean using reduce
Find the mean number of words per unique word in wordCounts.
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.
Step15: Part 4
Step17: (4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are
Step18: (4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines.
Step19: (4d) Words from lines
Before we can use the wordcount() function, we have to address two issues with the format of the RDD
Step20: (4e) Remove empty elements
The next step is to filter out the empty elements. Remove all entries where the word is ''.
Step21: (4f) Count the words
We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts. | Python Code:
labVersion = 'cs190_week2_word_count_v_1_0'
Explanation: Word Count Lab: Building a word count application
This lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to find the most common words on the Internet.
During this lab we will cover:
Part 1: Creating a base RDD and pair RDDs
Part 2: Counting with pair RDDs
Part 3: Finding unique words and a mean value
Part 4: Apply word count to a file
Note that, for reference, you can look up the details of the relevant methods in Spark's Python API
End of explanation
wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat']
wordsRDD = sc.parallelize(wordsList, 4)
# Print out the type of wordsRDD
print type(wordsRDD)
Explanation: Part 1: Creating a base RDD and pair RDDs
In this part of the lab, we will explore creating a base RDD with parallelize and using pair RDDs to count words.
(1a) Create a base RDD
We'll start by generating a base RDD by using a Python list and the sc.parallelize method. Then we'll print out the type of the base RDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def makePlural(word):
Adds an 's' to `word`.
Note:
This is a simple function that only adds an 's'. No attempt is made to follow proper
pluralization rules.
Args:
word (str): A string.
Returns:
str: A string with 's' added to it.
return word + "s"
print makePlural('cat')
# One way of completing the function
def makePlural(word):
return word + 's'
print makePlural('cat')
# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Pluralize and test (1b)
Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')
Explanation: (1b) Pluralize and test
Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If your implementation is correct it will print 1 test passed.
This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO: Replace <FILL IN> with appropriate code on its first line. Once the <FILL IN> sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()
# TEST Apply makePlural to the base RDD(1c)
Test.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralRDD')
Explanation: (1c) Apply makePlural to the base RDD
Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
pluralLambdaRDD = wordsRDD.map(lambda a: a + "s")
print pluralLambdaRDD.collect()
# TEST Pass a lambda function to map (1d)
Test.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralLambdaRDD (1d)')
Explanation: (1d) Pass a lambda function to map
Let's create the same RDD using a lambda function.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
pluralLengths = (pluralRDD
.map(lambda a: len(a))
.collect())
print pluralLengths
# TEST Length of each word (1e)
Test.assertEquals(pluralLengths, [4, 9, 4, 4, 4],
'incorrect values for pluralLengths')
Explanation: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
wordPairs = wordsRDD.map(lambda a: (a,1))
print wordPairs.collect()
# TEST Pair RDDs (1f)
Test.assertEquals(wordPairs.collect(),
[('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],
'incorrect value for wordPairs')
Explanation: (1f) Pair RDDs
The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.
We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Note that groupByKey requires no parameters
wordsGrouped = wordPairs.groupByKey()
for key, value in wordsGrouped.collect():
print '{0}: {1}'.format(key, list(value))
# TEST groupByKey() approach (2a)
Test.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()),
[('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])],
'incorrect value for wordsGrouped')
Explanation: Part 2: Counting with pair RDDs
Now, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others.
A naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.
(2a) groupByKey() approach
An approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. There are two problems with using groupByKey():
The operation requires a lot of data movement to move all the values into the appropriate partitions.
The lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) would be huge and could exhaust the available memory in a worker.
Use groupByKey() to generate a pair RDD of type ('word', iterator).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
wordCountsGrouped = wordsGrouped.map(lambda (a,b): (a, sum(b)))
print wordCountsGrouped.collect()
# TEST Use groupByKey() to obtain the counts (2b)
Test.assertEquals(sorted(wordCountsGrouped.collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsGrouped')
Explanation: (2b) Use groupByKey() to obtain the counts
Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.
Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Note that reduceByKey takes in a function that accepts two values and returns a single value
wordCounts = wordPairs.reduceByKey(lambda a,b: a+b)
print wordCounts.collect()
# TEST Counting using reduceByKey (2c)
Test.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCounts')
Explanation: (2c) Counting using reduceByKey
A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
wordCountsCollected = (wordsRDD
.map(lambda a: (a,1))
.reduceByKey(lambda a,b: a+b)
.collect())
print wordCountsCollected
# TEST All together (2d)
Test.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsCollected')
Explanation: (2d) All together
The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
uniqueWords = wordsRDD.distinct().count()
print uniqueWords
# TEST Unique words (3a)
Test.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords')
Explanation: Part 3: Finding unique words and a mean value
(3a) Unique words
Calculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
from operator import add
totalCount = (wordCounts
.map(lambda (a,b): b)
.reduce(lambda a,b: a+b))
average = totalCount / float(wordCounts.distinct().count())
print totalCount
print round(average, 2)
# TEST Mean using reduce (3b)
Test.assertEquals(round(average, 2), 1.67, 'incorrect value of average')
Explanation: (3b) Mean using reduce
Find the mean number of words per unique word in wordCounts.
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def wordCount(wordListRDD):
Creates a pair RDD with word counts from an RDD of words.
Args:
wordListRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
return (wordListRDD
.map(lambda a : (a,1))
.reduceByKey(lambda a,b: a+b))
print wordCount(wordsRDD).collect()
# TEST wordCount function (4a)
Test.assertEquals(sorted(wordCount(wordsRDD).collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect definition for wordCount function')
Explanation: Part 4: Apply word count to a file
In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.
(4a) wordCount function
First, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
import re
def removePunctuation(text):
Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
Only spaces, letters, and numbers should be retained. Other characters should should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
text (str): A string.
Returns:
str: The cleaned up string.
return re.sub("[^a-zA-Z0-9 ]", "", text.strip(" ").lower())
print removePunctuation('Hi, you!')
print removePunctuation(' No under_score!')
print removePunctuation(' * Remove punctuation then spaces * ')
# TEST Capitalization and punctuation (4b)
Test.assertEquals(removePunctuation(" The Elephant's 4 cats. "),
'the elephants 4 cats',
'incorrect definition for removePunctuation function')
Explanation: (4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:
Words should be counted independent of their capitialization (e.g., Spark and spark should be counted as the same word).
All punctuation should be removed.
Any leading or trailing spaces on a line should be removed.
Define the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful.
If you are unfamiliar with regular expressions, you may want to review this tutorial from Google. Also, this website is a great resource for debugging your regular expression.
End of explanation
# Just run this code
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')
fileName = os.path.join(baseDir, inputPath)
shakespeareRDD = (sc
.textFile(fileName, 8)
.map(removePunctuation))
print '\n'.join(shakespeareRDD
.zipWithIndex() # to (line, lineNum)
.map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'
.take(15))
Explanation: (4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
shakespeareWordsRDD = shakespeareRDD.flatMap(lambda a: a.split(" "))
shakespeareWordCount = shakespeareWordsRDD.count()
print shakespeareWordsRDD.top(5)
print shakespeareWordCount
# TEST Words from lines (4d)
# This test allows for leading spaces to be removed either before or after
# punctuation is removed.
Test.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908,
'incorrect value for shakespeareWordCount')
Test.assertEquals(shakespeareWordsRDD.top(5),
[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],
'incorrect value for shakespeareWordsRDD')
Explanation: (4d) Words from lines
Before we can use the wordCount() function, we have to address two issues with the format of the RDD:
The first issue is that we need to split each line by its spaces. Performed in (4d).
The second issue is that we need to filter out empty lines. Performed in (4e).
Apply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be. Note that you should not use the default implementation of split(), but should instead pass in a separator value. For example, to split line by commas you would use line.split(',').
End of explanation
# TODO: Replace <FILL IN> with appropriate code
shakeWordsRDD = shakespeareWordsRDD.filter(lambda a: a != "")
shakeWordCount = shakeWordsRDD.count()
print shakeWordCount
# TEST Remove empty elements (4e)
Test.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount')
Explanation: (4e) Remove empty elements
The next step is to filter out the empty elements. Remove all entries where the word is ''.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
top15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, lambda(a,b): -b)
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))
# TEST Count the words (4f)
Test.assertEquals(top15WordsAndCounts,
[(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],
'incorrect value for top15WordsAndCounts')
Explanation: (4f) Count the words
We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts.
End of explanation |
4,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Context
On 21 June 2017, the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep) released a study on the average pay of teachers working in basic education. The study comes from a new Inep methodology that links School Census records to the 2014 Relação Anual de Informações Sociais (RAIS) of the Ministry of Labour and Social Security. The survey covered a population of 2,080,619 teachers.$^{[1]}$
According to the study, which refers to 2014 data, a 40-hour weekly contract in the Porto Alegre school network corresponds to average monthly pay of R\$ 10,947.15. Porto Alegre would thus be the best-paying network among the state capitals (whose average is R\$ 3,116), and would also pay more than federal, state and private schools.$^{[2]}$
The Inep study is cited in several news pieces, for example by G1, by MEC, and in "10 anos de metodologia de coleta de dados individualizada dos censos educacionais". The full study unfortunately could not be located, but the presentation "Remuneração Média dos Docentes da Educação Básica" can be downloaded in full.
Sources
Step1: According to the presentation, the RAIS variables used are
Step2: According to slide 10, teachers were identified using the following CNAE and CBO codes
Step3: Note, however, that CBO groups 23 and 33 break down into many specific occupations, according to the 'ocupação' sheet of the RAIS_vinculos_layout file $^{[1]}$. The last occupations in group 23 are related to education but look more like administrative and/or support roles than classroom teaching proper (Coordenador Pedagogico, Orientador Educacional, Pedagogo, Psicopedagogo, Supervisor de Ensino, Designer Educacional). Group 33 is even more heterogeneous, including Auxiliar de Desenvolvimento Infantil, Instrutor de Auto-Escola, Instrutor de Cursos Livres, Inspetor de Alunos de Escola Privada, Inspetor de Alunos de Escola Publica and Monitor de Transporte Escolar.
231105
Step4: Legal nature (Natureza Jurídica) and codes
||
|---------------------------------------|
|POD EXEC FE|1015|SOC MISTA|2038|SOC SIMP PUR|2232|FUN DOM EXT|3212|
|POD EXEC ES|1023|SA ABERTA|2046|SOC SIMP LTD|2240|ORG RELIG|3220|
|POD EXEC MU|1031|SA FECH|2054|SOC SIMP COL|2259|COMUN INDIG|3239|
|POD LEG FED|1040|SOC QT LTDA|2062|SOC SIMP COM|2267|FUNDO PRIVAD|3247|
|POD LEG EST|1058|SOC COLETV|2070|EMPR BINAC|2275|OUTR ORG|3999|
|POD LEG MUN|1066|OC COLETV07|2076|CONS EMPREG|2283|EMP IND IMO|4014|
|POD JUD FED|1074|SOC COMD SM|2089|CONS SIMPLES|2291|SEG ESPEC|4022|
|POD JUD EST|1082|SOC COMD AC|2097|CARTORIO|3034|CONTR IND|4080|
|AUTARQ FED|1104|SOC CAP IND|2100|ORG SOCIAL|3042|CONTR IND07|4081|
|AUTARQ EST|1112|SOC CIVIL|2119|OSCIP|3050|CAN CARG POL|4090|
|AUTARQ MUN|1120|SOC CTA PAR|2127|OUT FUND PR|3069|LEILOEIRO|4111|
|FUNDAC FED|1139|FRM MER IND|2135|SERV SOC AU|3077|ORG INTERN|5002|
|FUNDAC EST|1147|COOPERATIVA|2143|CONDOMIN|3085|ORG INTERNAC|5010|
|FUNDAC MUN|1155|CONS EMPRES|2151|UNID EXEC|3093|REPR DIPL ES|5029|
|ORG AUT FED|1163|GRUP SOC|2160|COM CONC|3107|OUT INST EXT|5037|
|ORG AUT EST|1171|FIL EMP EXT|2178|ENT MED ARB|3115|IGNORADO|-1|
|COM POLINAC|1198|FIL ARG-BRA|2194|PART POLIT|3123|
|FUNDO PUBLIC|1201|ENT ITAIPU|2208|ENT SOCIAL|3130|
|ASSOC PUBLIC|1210|EMP DOM EXT|2216|ENT SOCIAL07|3131|
|EMP PUB|2011|FUN INVEST|2224|FIL FUN EXT|3204|
Step5: According to the Municipal Department of Education (Secretaria Municipal de Educação)
Step6: Dummy variables according to the layout$^{[1]}$
Step7: Trabalhando com dados de todas as UFs | Python Code:
# Começamos importando as bibliotecas a serem utilizadas:
import numpy as np
import pandas as pd
import seaborn as sns; sns.set()
%matplotlib inline
# Importando os microdados do arquivo .zip:
rs = pd.read_table('/mnt/part/Data/RAIS/2014/RS2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
rs.head() # mostra as primeiras 5 linhas do DataFrame
print('O formato do DataFrame é: ' , rs.shape, '(linhas, colunas)')
print('\n', 'O tipo de cada coluna:', '\n\n',rs.dtypes)
# Visualizando quais são
print('Bairros Fortaleza unique values:', rs['Bairros Fortaleza'].unique())
print('Bairros RJ unique values:', rs['Bairros RJ'].unique())
print('CBO 2002 unique values:', rs['CBO Ocupaτπo 2002'].unique())
print('Distritos SP unique values:', rs['Distritos SP'].unique())
print('Tipo Estab.1 unique values:', rs['Tipo Estab.1'].unique())
# Assign -1 to the '0000-1' values in the CBO column only, then convert the column from 'object' to 'float':
rs.loc[rs['CBO Ocupaτπo 2002'] == '0000-1', 'CBO Ocupaτπo 2002'] = -1
# rs['CBO Ocupaτπo 2002'].dropna(inplace = True)
rs['CBO Ocupaτπo 2002'] = rs['CBO Ocupaτπo 2002'].astype(dtype = float)
rs['CBO Ocupaτπo 2002'].dtypes
Explanation: 1. Context
On 21 June 2017, the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep) released a study on the average pay of teachers working in basic education. The study comes from a new Inep methodology that links School Census records to the 2014 Relação Anual de Informações Sociais (RAIS) of the Ministry of Labour and Social Security. The survey covered a population of 2,080,619 teachers.$^{[1]}$
According to the study, which refers to 2014 data, a 40-hour weekly contract in the Porto Alegre school network corresponds to average monthly pay of R\$ 10,947.15. Porto Alegre would thus be the best-paying network among the state capitals (whose average is R\$ 3,116), and would also pay more than federal, state and private schools.$^{[2]}$
The Inep study is cited in several news pieces, for example by G1, by MEC, and in "10 anos de metodologia de coleta de dados individualizada dos censos educacionais". The full study unfortunately could not be located, but the presentation "Remuneração Média dos Docentes da Educação Básica" can be downloaded in full.
Sources:
$^{[1]}$ Inep divulga estudo sobre salário de professor da educação básica
$^{[2]}$ Itamar Melo. Salário em alta, ensino em baixa. Zero Hora, 30 June 2017.
2. Data source:
When I went looking for the RAIS microdata I was (not so) hugely surprised to find them very hard to locate. The Ministry of Labour website has no reference to RAIS (or anything that remotely resembles it). A news item on the Federal Government website announces a new link for consulting RAIS data; unfortunately the link ends in a 'Not Found' page. The (official) RAIS website only mentions downloads of GDRAIS 2016, the generic GDRAIS (1976-2015), the instruction manual, the layout and the RAIS ordinance for base year 2016, and nothing about the microdata. Fortunately, the website of the Laboratório de Estudos Econômicos of the Universidade Federal de Juiz de Fora gathers links to the original microdata of several Brazilian surveys, RAIS among them.
After clicking the RAIS 2014 link$^{[1]}$, the microdata for each Brazilian state can be downloaded manually (on Linux, you can open a terminal and type wget -m ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/2014/ and the data will be downloaded automatically to your home folder).
Notes on the RAIS worker files$^{[2]}$: the fields are separated by ';', the decimal separator is ',', and values such as '-1', '{ñ class}' or '{ñclass}' (or partial text) should be treated as missing.
Sources:
$^{[1]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/2014/
$^{[2]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/Layouts/v%C3%ADnculos/RAIS_vinculos_layout.xls
3. Software:
I used the Python 3.6.1 programming language with the Anaconda distribution from Continuum Analytics, because it is free and open source as well as fast and versatile, which guarantees the full reproducibility of this study. This document is a Jupyter notebook mixing text (Markdown and HTML), code (Python) and visualizations (tables, charts, images, etc.).
Libraries used: Pandas (data structures and data analysis), Jupyter (create and share documents that contain live code, equations, visualizations and explanatory text), StatsModels.
End of explanation
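Following the layout note above, the '-1' and '{ñ class}' markers can also be declared as missing values already at load time; a sketch, assuming the same file path used above:
# Treat the RAIS 'ignored' markers as NaN when reading the file (sketch only).
na_markers = ['-1', '{ñ class}', '{ñclass}']
rs_na = pd.read_table('/mnt/part/Data/RAIS/2014/RS2014.zip', sep=';', encoding='cp860',
                      decimal=',', na_values=na_markers)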
df = pd.DataFrame({'Munic': rs['Municφpio'],
'Remuneracao': rs['Vl Remun MΘdia Nom'],
'CBO': rs['CBO Ocupaτπo 2002'],
'CNAE': rs['CNAE 95 Classe'],
'Natureza': rs['Natureza Jurφdica'],
'Horas': rs['Qtd Hora Contr'],
'Tempo': rs['Tempo Emprego'],
'Admissao': rs['MΩs Admissπo'],
'Desligamento': rs['MΩs Desligamento'],
'Sexo': rs['Sexo Trabalhador'],
'Idade': rs['Faixa Etßria'],
'Raca': rs['Raτa Cor']})
print('Número de observações na base original:', len(df))
df.dropna(axis = 0, how = 'any', inplace = True)
print('Número de observações após excluir missings: ', len(df))
df.head()
print('Mun Trab é igual a Municφpio em' , round(((rs['Mun Trab'] == rs['Municφpio']).sum() / len(rs)) * 100, 4), '% dos casos')
Explanation: According to the presentation, the RAIS variables used are: state (UF), municipality, the worker's average annual pay, the Brazilian Classification of Occupations (CBO), the economic activity class (CNAE), CPF, year, legal nature (Natureza Jurídica, CONCLA), contracted hours per week, the worker's job tenure, hiring date and month of separation.
There are two municipality variables ('Mun Trab' and 'Municφpio'); since they are equal in most cases (99%), I use 'Municφpio'. There are several pay variables; I use 'Vl Remun MΘdia Nom' (variable description: worker's average pay, nominal value, available from 1999 onwards). There is a single CBO column. For CNAE I use 'CNAE 95 Classe'. The microdata downloaded from the site above contain no CPF column. The year is 2014 for every row, since only the 2014 RAIS is used. There is a single legal-nature column, 'Natureza Jurφdica', and a single contracted-hours column, 'Qtd Hora Contr'. Job tenure is 'Tempo Emprego', month of hiring is 'MΩs Admissπo', and month of separation is 'MΩs Desligamento'.
End of explanation
cnae = [75116, 80136, 80144, 80152, 80209, 80969, 80977, 80993]
cbo = [23, 33]
df1 = df[df['CNAE'].isin(cnae) | (df['CBO'] / 10000).apply(np.floor).isin(cbo)]
Explanation: According to slide 10, teachers were identified using the following CNAE and CBO codes:
CNAE:
75116: Administração Pública em Geral
80136: Educação Infantil creche
80144: Educação Infantil pré-escola
80152: Ensino Fundamental
80209: Ensino Médio
80969: Educação Profissional de Nível Técnico
80977: Educação Profissional de Nível Tecnológico
80993: Outras Atividades de Ensino
CBO:
23: Profissionais do Ensino
33: Professores leigos e de nível médio
We can use a list of values to slice the DataFrame. Source: Wouter Overmeire, https://stackoverflow.com/questions/12096252/use-a-list-of-values-to-select-rows-from-a-pandas-dataframe
End of explanation
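The CBO part of the filter keeps any occupation whose first two digits are 23 or 33; on a toy example (illustrative values only) the idea looks like this:
exemplo = pd.Series([231105, 331205, 223505, 414105])
print(exemplo // 10000)                    # first two digits: 23, 33, 22, 41
print((exemplo // 10000).isin([23, 33]))   # True, True, False, False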
ed_basica = [231105, 231110, 231205, 231210, 231305, 231310, 231315,
231320, 231325, 231330, 231335, 231340, 331105, 332105]
print('Remuneração dos professores gaúchos: R$', df1['Remuneracao'].mean())
print('Remuneração dos professores do ensino básico gaúchos: R$',
df1[df1['CBO'].isin(ed_basica)]['Remuneracao'].mean())
print('Remuneração dos professores do ensino básico de Porto Alegre: R$',
df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]['Remuneracao'].mean())
df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]['Remuneracao'].hist();
poa = df1[(df1['Munic'] == 431490)]
Explanation: Note, however, that CBO groups 23 and 33 break down into many specific occupations, according to the 'ocupação' sheet of the RAIS_vinculos_layout file $^{[1]}$. The last occupations in group 23 are related to education but look more like administrative and/or support roles than classroom teaching proper (Coordenador Pedagogico, Orientador Educacional, Pedagogo, Psicopedagogo, Supervisor de Ensino, Designer Educacional). Group 33 is even more heterogeneous, including Auxiliar de Desenvolvimento Infantil, Instrutor de Auto-Escola, Instrutor de Cursos Livres, Inspetor de Alunos de Escola Privada, Inspetor de Alunos de Escola Publica and Monitor de Transporte Escolar.
231105:Professor de Nivel Superior na Educacao Infantil (Quatro a Seis Anos)
231110:Professor de Nivel Superior na Educacao Infantil (Zero a Tres Anos)
231205:Professor da Educacao de Jovens e Adultos do Ensino Fundamental (Primeira a Quarta Serie)
231210:Professor de Nivel Superior do Ensino Fundamental (Primeira a Quarta Serie)
231305:Professor de Ciencias Exatas e Naturais do Ensino Fundamental
231310:Professor de Educacao Artistica do Ensino Fundamental
231315:Professor de Educacao Fisica do Ensino Fundamental
231320:Professor de Geografia do Ensino Fundamental
231325:Professor de Historia do Ensino Fundamental
231330:Professor de Lingua Estrangeira Moderna do Ensino Fundamental
231335:Professor de Lingua Portuguesa do Ensino Fundamental
231340:Professor de Matematica do Ensino Fundamental
232105:Professor de Artes no Ensino Medio
232110:Professor de Biologia no Ensino Medio
232115:Professor de Disciplinas Pedagogicas no Ensino Medio
232120:Professor de Educacao Fisica no Ensino Medio
232125:Professor de Filosofia no Ensino Medio
232130:Professor de Fisica no Ensino Medio
232135:Professor de Geografia no Ensino Medio
232140:Professor de Historia no Ensino Medio
232145:Professor de Lingua e Literatura Brasileira no Ensino Medio
232150:Professor de Lingua Estrangeira Moderna no Ensino Medio
232155:Professor de Matematica no Ensino Medio
232160:Professor de Psicologia no Ensino Medio
232165:Professor de Quimica no Ensino Medio
232170:Professor de Sociologia no Ensino Medio
233105:Professor da Area de Meio Ambiente
233110:Professor de Desenho Tecnico
233115:Professor de Tecnicas Agricolas
233120:Professor de Tecnicas Comerciais e Secretariais
233125:Professor de Tecnicas de Enfermagem
233130:Professor de Tecnicas Industriais
233135:Professor de Tecnologia e Calculo Tecnico
233205:Instrutor de Aprendizagem e Treinamento Agropecuario
233210:Instrutor de Aprendizagem e Treinamento Industrial
233215:Professor de Aprendizagem e Treinamento Comercial
233220:Professor Instrutor de Ensino e Aprendizagem Agroflorestal
233225:Professor Instrutor de Ensino e Aprendizagem em Servicos
234105:Professor de Matematica Aplicada (No Ensino Superior)
234110:Professor de Matematica Pura (No Ensino Superior)
234115:Professor de Estatistica (No Ensino Superior)
234120:Professor de Computacao (No Ensino Superior)
234125:Professor de Pesquisa Operacional (No Ensino Superior)
234205:Professor de Fisica (Ensino Superior)
234210:Professor de Quimica (Ensino Superior)
234215:Professor de Astronomia (Ensino Superior)
234305:Professor de Arquitetura
234310:Professor de Engenharia
234315:Professor de Geofisica
234320:Professor de Geologia
234405:Professor de Ciencias Biologicas do Ensino Superior
234410:Professor de Educacao Fisica no Ensino Superior
234415:Professor de Enfermagem do Ensino Superior
234420:Professor de Farmacia e Bioquimica
234425:Professor de Fisioterapia
234430:Professor de Fonoaudiologia
234435:Professor de Medicina
234440:Professor de Medicina Veterinaria
234445:Professor de Nutricao
234450:Professor de Odontologia
234455:Professor de Terapia Ocupacional
234460:Professor de Zootecnia do Ensino Superior
234505:Professor de Ensino Superior na Area de Didatica
234510:Professor de Ensino Superior na Area de Orientacao Educacional
234515:Professor de Ensino Superior na Area de Pesquisa Educacional
234520:Professor de Ensino Superior na Area de Pratica de Ensino
234604:Professor de Lingua Alema
234608:Professor de Lingua Italiana
234612:Professor de Lingua Francesa
234616:Professor de Lingua Inglesa
234620:Professor de Lingua Espanhola
234624:Professor de Lingua Portuguesa
234628:Professor de Literatura Brasileira
234632:Professor de Literatura Portuguesa
234636:Professor de Literatura Alema
234640:Professor de Literatura Comparada
234644:Professor de Literatura Espanhola
234648:Professor de Literatura Francesa
234652:Professor de Literatura Inglesa
234656:Professor de Literatura Italiana
234660:Professor de Literatura de Linguas Estrangeiras Modernas
234664:Professor de Outras Linguas e Literaturas
234668:Professor de Linguas Estrangeiras Modernas
234672:Professor de Linguistica e Linguistica Aplicada
234676:Professor de Filologia e Critica Textual
234680:Professor de Semiotica
234684:Professor de Teoria da Literatura
234705:Professor de Antropologia do Ensino Superior
234710:Professor de Arquivologia do Ensino Superior
234715:Professor de Biblioteconomia do Ensio Superior
234720:Professor de Ciencia Politica do Ensino Superior
234725:Professor de Comunicacao Social do Ensino Superior
234730:Professor de Direito do Ensino Superior
234735:Professor de Filosofia do Ensino Superior
234740:Professor de Geografia do Ensino Superior
234745:Professor de Historia do Ensino Superior
234750:Professor de Jornalismo
234755:Professor de Museologia do Ensino Superior
234760:Professor de Psicologia do Ensino Superior
234765:Professor de Servico Social do Ensino Superior
234770:Professor de Sociologia do Ensino Superior
234805:Professor de Economia
234810:Professor de Administracao
234815:Professor de Contabilidade
234905:Professor de Artes do Espetaculo no Ensino Superior
234910:Professor de Artes Visuais no Ensino Superior (Artes Plasticas e Multimidia)
234915:Professor de Musica no Ensino Superior
239205:Professor de Alunos com Deficiencia Auditiva e Surdos
239210:Professor de Alunos com Deficiencia Fisica
239215:Professor de Alunos com Deficiencia Mental
239220:Professor de Alunos com Deficiencia Multipla
239225:Professor de Alunos com Deficiencia Visual
239405:Coordenador Pedagogico
239410:Orientador Educacional
239415:Pedagogo
239420:Professor de Tecnicas e Recursos Audiovisuais
239425:Psicopedagogo
239430:Supervisor de Ensino
239435:Designer Educacional
331105:Professor de Nivel Medio na Educacao Infantil
331110:Auxiliar de Desenvolvimento Infantil
331205:Professor de Nivel Medio no Ensino Fundamental
331305:Professor de Nivel Medio no Ensino Profissionalizante
332105:Professor Leigo no Ensino Fundamental
332205:Professor Pratico no Ensino Profissionalizante
333105:Instrutor de Auto-Escola
333110:Instrutor de Cursos Livres
333115:Professores de Cursos Livres
334105:Inspetor de Alunos de Escola Privada
334110:Inspetor de Alunos de Escola Publica
334115:Monitor de Transporte Escolar
$^{[1]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/Layouts/v%C3%ADnculos/RAIS_vinculos_layout.xls
End of explanation
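To gauge how much the broader 23/33 definition matters, the basic-education subset can be compared with the other occupations kept by the filter; a sketch using the objects defined above:
em_sala = df1['CBO'].isin(ed_basica)
print('Mean pay, basic-education teaching occupations: R$', df1.loc[em_sala, 'Remuneracao'].mean())
print('Mean pay, other occupations kept by the CNAE/CBO filter: R$', df1.loc[~em_sala, 'Remuneracao'].mean())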
# Conta o número de observações no DataFrame 'poa' por grupo de 'Natureza' (Natureza Jurídica):
poa['Admissao'].groupby(by = poa['Natureza']).count()
Explanation: Legal nature (Natureza Jurídica) and codes
||
|---------------------------------------|
|POD EXEC FE|1015|SOC MISTA|2038|SOC SIMP PUR|2232|FUN DOM EXT|3212|
|POD EXEC ES|1023|SA ABERTA|2046|SOC SIMP LTD|2240|ORG RELIG|3220|
|POD EXEC MU|1031|SA FECH|2054|SOC SIMP COL|2259|COMUN INDIG|3239|
|POD LEG FED|1040|SOC QT LTDA|2062|SOC SIMP COM|2267|FUNDO PRIVAD|3247|
|POD LEG EST|1058|SOC COLETV|2070|EMPR BINAC|2275|OUTR ORG|3999|
|POD LEG MUN|1066|OC COLETV07|2076|CONS EMPREG|2283|EMP IND IMO|4014|
|POD JUD FED|1074|SOC COMD SM|2089|CONS SIMPLES|2291|SEG ESPEC|4022|
|POD JUD EST|1082|SOC COMD AC|2097|CARTORIO|3034|CONTR IND|4080|
|AUTARQ FED|1104|SOC CAP IND|2100|ORG SOCIAL|3042|CONTR IND07|4081|
|AUTARQ EST|1112|SOC CIVIL|2119|OSCIP|3050|CAN CARG POL|4090|
|AUTARQ MUN|1120|SOC CTA PAR|2127|OUT FUND PR|3069|LEILOEIRO|4111|
|FUNDAC FED|1139|FRM MER IND|2135|SERV SOC AU|3077|ORG INTERN|5002|
|FUNDAC EST|1147|COOPERATIVA|2143|CONDOMIN|3085|ORG INTERNAC|5010|
|FUNDAC MUN|1155|CONS EMPRES|2151|UNID EXEC|3093|REPR DIPL ES|5029|
|ORG AUT FED|1163|GRUP SOC|2160|COM CONC|3107|OUT INST EXT|5037|
|ORG AUT EST|1171|FIL EMP EXT|2178|ENT MED ARB|3115|IGNORADO|-1|
|COM POLINAC|1198|FIL ARG-BRA|2194|PART POLIT|3123|
|FUNDO PUBLIC|1201|ENT ITAIPU|2208|ENT SOCIAL|3130|
|ASSOC PUBLIC|1210|EMP DOM EXT|2216|ENT SOCIAL07|3131|
|EMP PUB|2011|FUN INVEST|2224|FIL FUN EXT|3204|
End of explanation
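To make the count above easier to read, the numeric codes can be mapped back to the labels in the table (only a handful of codes shown, taken from the table above):
natureza_labels = {1023: 'POD EXEC ES', 1031: 'POD EXEC MU', 2062: 'SOC QT LTDA', 3999: 'OUTR ORG'}
contagem = poa['Admissao'].groupby(by=poa['Natureza']).count()
print(contagem.rename(index=natureza_labels))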
df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]['Remuneracao'].groupby(by = df1['Idade']).mean()
from statsmodels.stats.weightstats import ztest
# Source: http://www.statsmodels.org/dev/generated/statsmodels.stats.weightstats.ztest.html#statsmodels.stats.weightstats.ztest
print(ztest(x1 = df1[(df1['CBO'].isin(ed_basica)) & (df1['Munic'] == 431490)]['Remuneracao'], x2=None,
value=10000, alternative='two-sided', usevar='pooled', ddof=1.0))
print(ztest(x1 = df1[(df1['CBO'].isin(ed_basica)) & (df1['Munic'] == 431490)]['Remuneracao'], x2=None,
value=10000, alternative='smaller', usevar='pooled', ddof=1.0))
from statsmodels.stats.weightstats import DescrStatsW
# Source: http://www.statsmodels.org/dev/generated/statsmodels.stats.weightstats.DescrStatsW.html#statsmodels.stats.weightstats.DescrStatsW
stats = DescrStatsW(df1[(df1['CBO'].isin(ed_basica)) & (df1['Munic'] == 431490)]['Remuneracao'])
print(stats.mean)
print(stats.var)
print(stats.std_mean)
print(stats.ttest_mean(value = 10000, alternative = 'larger'))
print(stats.ztest_mean(value = 10000, alternative = 'larger'))
# tstat, pval, df
print(stats.ttost_mean(low = 4000, upp = 6000))
print(stats.ztost_mean(low = 4000, upp = 6000))
# TOST: two one-sided t tests
# null hypothesis: m < low or m > upp alternative hypothesis: low < m < upp
# returns: pvalue; t1, pv1, df1; t2, pv2, df2
# DescrStatsW.ttest()
Explanation: According to the Municipal Department of Education: "The Municipal School Network (Rede Municipal de Ensino, RME) comprises 99 schools with about 4 thousand teachers and 900 staff, serving more than 50 thousand students in early childhood education, primary education, secondary education, technical-level vocational education, youth and adult education (EJA) and special education."
However, the RAIS data show only 148 teachers under the Municipal Executive ("POD EXEC MU 1031"). Most teachers (106,728) fall under the State Executive (POD EXEC ES 1023).
End of explanation
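A natural follow-up is to break the Porto Alegre averages down by legal nature, so the municipal network can be compared directly with the state network; a sketch using the objects already defined:
poa_basica = df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]
print(poa_basica.groupby('Natureza')['Remuneracao'].agg(['count', 'mean']))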
import statsmodels.api as sm
# The categorical variables (Natureza, Sexo, Idade, Raca) enter the regression below as the
# integer codes defined in the RAIS layout (see the tables above).
X = pd.concat([df1['Natureza'], df1['Horas'], df1['Tempo'], df1['Sexo'], df1['Idade'], df1['Raca']], axis = 1)
X = sm.add_constant(X)
y = df1['Remuneracao']
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
# #Open all the .csv file in a directory
# #Author: Gaurav Singh
# #Source: https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe
# import pandas as pd
# import glob
# #get data file names
# path =r'/mnt/part/Data/RAIS/2014/'
# allFiles = glob.glob(path + "/*.zip")
# df = pd.DataFrame()
# list_ = []
# for file_ in allFiles:
# frame = pd.read_csv(file_, sep = ';', encoding = 'cp860', decimal = ',')
# list_.append(frame)
# df = pd.concat(list_, axis=0)
Explanation: Dummy variables according to the layout$^{[1]}$:
||FAIXA ETÁRIA|
|---------------|
|01| 10 A 14 anos|
|02| 15 A 17 anos|
|03| 18 A 24 anos|
|04| 25 A 29 anos|
|05| 30 A 39 anos|
|06| 40 A 49 anos|
|07| 50 A 64 anos|
|08| 65 anos ou mais|
|{ñ class}| {ñ class}|
||FAIXA HORA CONTRATUAL |
|-----------------------|
|01| Até 12 horas|
|02| 13 a 15 horas|
|03| 16 a 20 horas|
|04| 21 a 30 horas|
|05| 31 a 40 horas|
|06| 41 a 44 horas|
|grau de instruçao |escolaridade após 2005 |
|-------------------------------------------|
|ANALFABETO | 1|
|ATE 5.A INC| 2|
|5.A CO FUND| 3|
|6. A 9. FUND| 4|
|FUND COMPL | 5|
|MEDIO INCOMP| 6|
|MEDIO COMPL| 7|
|SUP. INCOMP| 8|
|SUP. COMP | 9|
|MESTRADO | 10|
|DOUTORADO | 11|
|IGNORADO | -1|
$^{[1]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/Layouts/v%C3%ADnculos/RAIS_vinculos_layout.xls
End of explanation
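If the categorical codes from the layout are to enter the regression as indicator (dummy) variables rather than as raw integers, pandas can expand them. A sketch (note that this changes the model specification relative to the OLS fit above):
dummies = pd.get_dummies(df1[['Sexo', 'Idade', 'Raca']].astype(str), drop_first=True)
X_d = sm.add_constant(pd.concat([df1[['Horas', 'Tempo']], dummies], axis=1))
print(sm.OLS(df1['Remuneracao'], X_d.astype(float)).fit().summary())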
# ac = pd.read_csv('/mnt/part/Data/RAIS/2014/AC2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# al = pd.read_csv('/mnt/part/Data/RAIS/2014/AL2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# am = pd.read_csv('/mnt/part/Data/RAIS/2014/AM2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ap = pd.read_csv('/mnt/part/Data/RAIS/2014/AP2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ba = pd.read_csv('/mnt/part/Data/RAIS/2014/BA2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ce = pd.read_csv('/mnt/part/Data/RAIS/2014/CE2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# df = pd.read_csv('/mnt/part/Data/RAIS/2014/DF2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# es = pd.read_csv('/mnt/part/Data/RAIS/2014/ES2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# go = pd.read_csv('/mnt/part/Data/RAIS/2014/GO2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ma = pd.read_csv('/mnt/part/Data/RAIS/2014/MA2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# mg = pd.read_csv('/mnt/part/Data/RAIS/2014/MG2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ms = pd.read_csv('/mnt/part/Data/RAIS/2014/MS2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# mt = pd.read_csv('/mnt/part/Data/RAIS/2014/MT2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pa = pd.read_csv('/mnt/part/Data/RAIS/2014/PA2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pb = pd.read_csv('/mnt/part/Data/RAIS/2014/PB2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pe = pd.read_csv('/mnt/part/Data/RAIS/2014/PE2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pi = pd.read_csv('/mnt/part/Data/RAIS/2014/PI2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pr = pd.read_csv('/mnt/part/Data/RAIS/2014/PR2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rj = pd.read_csv('/mnt/part/Data/RAIS/2014/RJ2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rn = pd.read_csv('/mnt/part/Data/RAIS/2014/RN2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ro = pd.read_csv('/mnt/part/Data/RAIS/2014/RO2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rr = pd.read_csv('/mnt/part/Data/RAIS/2014/RR2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rs = pd.read_csv('/mnt/part/Data/RAIS/2014/RS2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# sc = pd.read_csv('/mnt/part/Data/RAIS/2014/SC2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# se = pd.read_csv('/mnt/part/Data/RAIS/2014/SE2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# sp = pd.read_csv('/mnt/part/Data/RAIS/2014/SP2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# to = pd.read_csv('/mnt/part/Data/RAIS/2014/TO2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# Concatenar os DataFrameS individuais a partir de uma lista:
# lista = [ac, al, am, ap, ba, ce, df, es, go, ma, mg, ms, mt, pa, pb, pe, pi, pr, rj, rn, ro, rr, rs, sc, se, sp, to]
# rais = pd.concat(lista)
# Gerando uma variável com a UF a partir do Município:
# np.floor(rais['Municφpio'] / 10000).unique()
Explanation: Working with data from all states (UFs)
End of explanation |
4,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experimental
Step1: 1. Write Eager code that is fast and scalable
TF.Eager gives you more flexibility while coding, but at the cost of losing the benefits of TensorFlow graphs. For example, Eager does not currently support distributed training, exporting models, and a variety of memory and computation optimizations.
Autograph gives you the best of both worlds
Step2: ... into a TF graph-building function
Step3: You can then use the converted function as you would any regular TF op -- you can pass Tensor arguments and it will return Tensors
Step4: 2. Case study
Step5: Try replacing the continue in the above code with break -- Autograph supports that as well!
The Python code above is much more readable than the matching graph code. Autograph takes care of tediously converting every piece of Python code into the matching TensorFlow graph version for you, so that you can quickly write maintainable code, but still benefit from the optimizations and deployment benefits of graphs.
Let's try some other useful Python constructs, like print and assert. We automatically convert Python assert statements into the equivalent tf.Assert code.
Step6: You can also use print functions in-graph
Step7: We can convert lists to TensorArray, so appending to lists also works, with a few modifications
Step9: And all of these functionalities, and more, can be composed into more complicated code
Step10: 3. Case study
Step11: First, we'll define a small three-layer neural network using the Keras API
Step12: Let's connect the model definition (here abbreviated as m) to a loss function, so that we can train our model.
Step13: Now the final piece of the problem specification (before loading data, and clicking everything together) is backpropagating the loss through the model, and optimizing the weights using the gradient.
Step14: These are some utility functions to download data and generate batches for training
Step15: This function specifies the main training loop. We instantiate the model (using the code above), instantiate an optimizer (here we'll use SGD with momentum, nothing too fancy), and we'll instantiate some lists to keep track of training and test loss and accuracy over time.
In the loop inside this function, we'll grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
Step16: Everything is ready to go, let's train the model and plot its performance!
Step20: 4. Case study
Step23: Next, we set up the RNNColobot model, which is very similar to the one we used in the main exercise.
Autograph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
Step24: The train and test functions are also similar to the ones used in the Eager notebook. Since the network requires a fixed batch size, we'll train in a single shot, rather than by epoch.
Step25: Finally, we add code to run inference on a single input, which we'll read from the input.
Note the do_not_convert annotation that lets us disable conversion for certain functions and run them as a py_func instead, so you can still call them from compiled code.
Step27: Finally, we put everything together.
Note that the entire training and testing code is all compiled into a single op (tf_train_model) that you only execute once! We also still use a sess.run loop for the inference part, because that requires keyboard input. | Python Code:
# Install TensorFlow; note that Colab notebooks run remotely, on virtual
# instances provided by Google.
!pip install -U -q tf-nightly
import os
import time
import tensorflow as tf
from tensorflow.contrib import autograph
import matplotlib.pyplot as plt
import numpy as np
import six
from google.colab import widgets
Explanation: Experimental: TF Autograph
TensorFlow Dev Summit, 2018.
This interactive notebook demonstrates autograph, an experimental source-code transformation library to automatically convert TF.Eager and Python code to TensorFlow graphs.
Note: this is pre-alpha software! The notebook works best with Python 2, for now.
Table of Contents
Write Eager code that is fast and scalable.
Case study: complex control flow.
Case study: training MNIST with Keras.
Case study: building an RNN.
End of explanation
def g(x):
if x > 0:
x = x * x
else:
x = 0
return x
Explanation: 1. Write Eager code that is fast and scalable
TF.Eager gives you more flexibility while coding, but at the cost of losing the benefits of TensorFlow graphs. For example, Eager does not currently support distributed training, exporting models, and a variety of memory and computation optimizations.
Autograph gives you the best of both worlds: write your code in an Eager style, and we will automatically transform it into the equivalent TF graph code. The graph code can be executed eagerly (as a single op), included as part of a larger graph, or exported.
For example, autograph can convert a function like this:
End of explanation
print(autograph.to_code(g))
Explanation: ... into a TF graph-building function:
End of explanation
tf_g = autograph.to_graph(g)
with tf.Graph().as_default():
g_ops = tf_g(tf.constant(9))
with tf.Session() as sess:
tf_g_result = sess.run(g_ops)
print('g(9) = %s' % g(9))
print('tf_g(9) = %s' % tf_g_result)
Explanation: You can then use the converted function as you would any regular TF op -- you can pass Tensor arguments and it will return Tensors:
End of explanation
def sum_even(numbers):
s = 0
for n in numbers:
if n % 2 > 0:
continue
s += n
return s
tf_sum_even = autograph.to_graph(sum_even)
with tf.Graph().as_default():
with tf.Session() as sess:
result = sess.run(tf_sum_even(tf.constant([10, 12, 15, 20])))
print('Sum of even numbers: %s' % result)
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(sum_even))
Explanation: 2. Case study: complex control flow
Autograph can convert a large chunk of the Python language into graph-equivalent code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in autograph.
Autograph will automatically convert most Python control flow statements into their correct graph equivalent.
We support common statements like while, for, if, break, return and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand:
End of explanation
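For comparison, a hand-written graph version of sum_even built directly on tf.while_loop and tf.cond might look roughly like this (a sketch, not the exact code Autograph generates):
def sum_even_by_hand(numbers):
  # Loop over the tensor with tf.while_loop and add only the even elements.
  n = tf.size(numbers)
  def body(i, s):
    x = numbers[i]
    s = tf.cond(tf.equal(x % 2, 0), lambda: s + x, lambda: s)
    return i + 1, s
  _, total = tf.while_loop(lambda i, s: i < n, body, [0, 0])
  return total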
def f(x):
assert x != 0, 'Do not pass zero!'
return x * x
tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
with tf.Session() as sess:
try:
print(sess.run(tf_f(tf.constant(0))))
except tf.errors.InvalidArgumentError as e:
print('Got error message: %s' % e.message)
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
Explanation: Try replacing the continue in the above code with break -- Autograph supports that as well!
The Python code above is much more readable than the matching graph code. Autograph takes care of tediously converting every piece of Python code into the matching TensorFlow graph version for you, so that you can quickly write maintainable code, but still benefit from the optimizations and deployment benefits of graphs.
Let's try some other useful Python constructs, like print and assert. We automatically convert Python assert statements into the equivalent tf.Assert code.
End of explanation
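Under the hood, the converted assert is roughly equivalent to wiring a tf.Assert op into the graph by hand; a sketch of the manual version:
def f_by_hand(x):
  # Attach the assertion as a control dependency of the multiplication.
  assert_op = tf.Assert(tf.not_equal(x, 0), ['Do not pass zero!'])
  with tf.control_dependencies([assert_op]):
    return x * x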
def print_sign(n):
if n >= 0:
print(n, 'is positive!')
else:
print(n, 'is negative!')
return n
tf_print_sign = autograph.to_graph(print_sign)
with tf.Graph().as_default():
with tf.Session() as sess:
sess.run(tf_print_sign(tf.constant(1)))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_sign))
Explanation: You can also use print functions in-graph:
End of explanation
def f(n):
numbers = []
# We ask you to tell us about the element dtype.
autograph.utils.set_element_type(numbers, tf.int32)
for i in range(n):
numbers.append(i)
return autograph.stack(numbers) # Stack the list so that it can be used as a Tensor
tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
with tf.Session() as sess:
print(sess.run(tf_f(tf.constant(5))))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(f))
Explanation: We can convert lists to TensorArray, so appending to lists also works, with a few modifications:
End of explanation
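For reference, the hand-written graph equivalent uses tf.TensorArray together with tf.while_loop; a sketch of roughly what the conversion produces:
def f_by_hand(n):
  # Write each loop index into a TensorArray and stack it into a single Tensor.
  ta = tf.TensorArray(tf.int32, size=n)
  def body(i, ta):
    return i + 1, ta.write(i, i)
  _, ta = tf.while_loop(lambda i, _: i < n, body, [0, ta])
  return ta.stack()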
def print_primes(n):
  """Returns all the prime numbers less than n."""
assert n > 0
primes = []
autograph.utils.set_element_type(primes, tf.int32)
for i in range(2, n):
is_prime = True
for k in range(2, i):
if i % k == 0:
is_prime = False
break
if not is_prime:
continue
primes.append(i)
all_primes = autograph.stack(primes)
print('The prime numbers less than', n, 'are:')
print(all_primes)
return tf.no_op()
tf_print_primes = autograph.to_graph(print_primes)
with tf.Graph().as_default():
with tf.Session() as sess:
n = tf.constant(50)
sess.run(tf_print_primes(n))
# Uncomment the line below to print the generated graph code
# print(autograph.to_code(print_primes))
Explanation: And all of these functionalities, and more, can be composed into more complicated code:
End of explanation
import gzip
import shutil
from six.moves import urllib
def download(directory, filename):
filepath = os.path.join(directory, filename)
if tf.gfile.Exists(filepath):
return filepath
if not tf.gfile.Exists(directory):
tf.gfile.MakeDirs(directory)
url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz'
zipped_filepath = filepath + '.gz'
print('Downloading %s to %s' % (url, zipped_filepath))
urllib.request.urlretrieve(url, zipped_filepath)
with gzip.open(zipped_filepath, 'rb') as f_in, open(filepath, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
os.remove(zipped_filepath)
return filepath
def dataset(directory, images_file, labels_file):
images_file = download(directory, images_file)
labels_file = download(directory, labels_file)
def decode_image(image):
# Normalize from [0, 255] to [0.0, 1.0]
image = tf.decode_raw(image, tf.uint8)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [784])
return image / 255.0
def decode_label(label):
label = tf.decode_raw(label, tf.uint8)
label = tf.reshape(label, [])
return tf.to_int32(label)
images = tf.data.FixedLengthRecordDataset(
images_file, 28 * 28, header_bytes=16).map(decode_image)
labels = tf.data.FixedLengthRecordDataset(
labels_file, 1, header_bytes=8).map(decode_label)
return tf.data.Dataset.zip((images, labels))
def mnist_train(directory):
return dataset(directory, 'train-images-idx3-ubyte',
'train-labels-idx1-ubyte')
def mnist_test(directory):
return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte')
Explanation: 3. Case study: training MNIST with Keras
As we've seen, writing control flow in Autograph is easy. So running a training loop in graph should be easy as well!
Here, we show an example of such a training loop for a simple Keras model that trains on MNIST.
End of explanation
def mlp_model(input_shape):
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, activation='relu', input_shape=input_shape),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')])
model.build()
return model
Explanation: First, we'll define a small three-layer neural network using the Keras API
End of explanation
def predict(m, x, y):
y_p = m(x)
losses = tf.keras.losses.categorical_crossentropy(y, y_p)
l = tf.reduce_mean(losses)
accuracies = tf.keras.metrics.categorical_accuracy(y, y_p)
accuracy = tf.reduce_mean(accuracies)
return l, accuracy
Explanation: Let's connect the model definition (here abbreviated as m) to a loss function, so that we can train our model.
End of explanation
def fit(m, x, y, opt):
l, accuracy = predict(m, x, y)
opt.minimize(l)
return l, accuracy
Explanation: Now the final piece of the problem specification (before loading data, and clicking everything together) is backpropagating the loss through the model, and optimizing the weights using the gradient.
End of explanation
def setup_mnist_data(is_training, hp, batch_size):
if is_training:
ds = mnist_train('/tmp/autograph_mnist_data')
ds = ds.shuffle(batch_size * 10)
else:
ds = mnist_test('/tmp/autograph_mnist_data')
ds = ds.repeat()
ds = ds.batch(batch_size)
return ds
def get_next_batch(ds):
itr = ds.make_one_shot_iterator()
image, label = itr.get_next()
x = tf.to_float(tf.reshape(image, (-1, 28 * 28)))
y = tf.one_hot(tf.squeeze(label), 10)
return x, y
Explanation: These are some utility functions to download data and generate batches for training
End of explanation
def train(train_ds, test_ds, hp):
m = mlp_model((28 * 28,))
opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)
train_losses = []
train_losses = autograph.utils.set_element_type(train_losses, tf.float32)
test_losses = []
test_losses = autograph.utils.set_element_type(test_losses, tf.float32)
train_accuracies = []
train_accuracies = autograph.utils.set_element_type(train_accuracies,
tf.float32)
test_accuracies = []
test_accuracies = autograph.utils.set_element_type(test_accuracies,
tf.float32)
i = tf.constant(0)
while i < hp.max_steps:
train_x, train_y = get_next_batch(train_ds)
test_x, test_y = get_next_batch(test_ds)
step_train_loss, step_train_accuracy = fit(m, train_x, train_y, opt)
step_test_loss, step_test_accuracy = predict(m, test_x, test_y)
if i % (hp.max_steps // 10) == 0:
print('Step', i, 'train loss:', step_train_loss, 'test loss:',
step_test_loss, 'train accuracy:', step_train_accuracy,
'test accuracy:', step_test_accuracy)
train_losses.append(step_train_loss)
test_losses.append(step_test_loss)
train_accuracies.append(step_train_accuracy)
test_accuracies.append(step_test_accuracy)
i += 1
return (autograph.stack(train_losses), autograph.stack(test_losses),
autograph.stack(train_accuracies),
autograph.stack(test_accuracies))
Explanation: This function specifies the main training loop. We instantiate the model (using the code above), instantiate an optimizer (here we'll use SGD with momentum, nothing too fancy), and we'll instantiate some lists to keep track of training and test loss and accuracy over time.
In the loop inside this function, we'll grab a batch of data, apply an update to the weights of our model to improve its performance, and then record its current training loss and accuracy. Every so often, we'll log some information about training as well.
End of explanation
with tf.Graph().as_default():
hp = tf.contrib.training.HParams(
learning_rate=0.05,
max_steps=500,
)
train_ds = setup_mnist_data(True, hp, 50)
test_ds = setup_mnist_data(False, hp, 1000)
tf_train = autograph.to_graph(train)
(train_losses, test_losses, train_accuracies,
test_accuracies) = tf_train(train_ds, test_ds, hp)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
(train_losses, test_losses, train_accuracies,
test_accuracies) = sess.run([train_losses, test_losses, train_accuracies,
test_accuracies])
plt.title('MNIST train/test losses')
plt.plot(train_losses, label='train loss')
plt.plot(test_losses, label='test loss')
plt.legend()
plt.xlabel('Training step')
plt.ylabel('Loss')
plt.show()
plt.title('MNIST train/test accuracies')
plt.plot(train_accuracies, label='train accuracy')
plt.plot(test_accuracies, label='test accuracy')
plt.legend(loc='lower right')
plt.xlabel('Training step')
plt.ylabel('Accuracy')
plt.show()
Explanation: Everything is ready to go, let's train the model and plot its performance!
End of explanation
def parse(line):
  """Parses a line from the colors dataset.

  Args:
    line: A comma-separated string containing four items:
      color_name, red, green, and blue, representing the name and
      respectively the RGB value of the color, as an integer
      between 0 and 255.

  Returns:
    A tuple of three tensors (rgb, chars, length), of shapes: (batch_size, 3),
    (batch_size, max_sequence_length, 256) and respectively (batch_size).
  """
items = tf.string_split([line], ",").values
rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
color_name = items[0]
chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)
length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)
return rgb, chars, length
def maybe_download(filename, work_directory, source_url):
  """Downloads the data from source url."""
if not tf.gfile.Exists(work_directory):
tf.gfile.MakeDirs(work_directory)
filepath = os.path.join(work_directory, filename)
if not tf.gfile.Exists(filepath):
temp_file_name, _ = six.moves.urllib.request.urlretrieve(source_url)
tf.gfile.Copy(temp_file_name, filepath)
with tf.gfile.GFile(filepath) as f:
size = f.size()
print('Successfully downloaded', filename, size, 'bytes.')
return filepath
def load_dataset(data_dir, url, batch_size, training=True):
  """Loads the colors data at path into a tf.PaddedDataset."""
path = maybe_download(os.path.basename(url), data_dir, url)
dataset = tf.data.TextLineDataset(path)
dataset = dataset.skip(1)
dataset = dataset.map(parse)
dataset = dataset.cache()
dataset = dataset.repeat()
if training:
dataset = dataset.shuffle(buffer_size=3000)
dataset = dataset.padded_batch(batch_size, padded_shapes=([None], [None, None], []))
return dataset
train_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/extras/colorbot/data/train.csv"
test_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/extras/colorbot/data/test.csv"
data_dir = "tmp/rnn/data"
Explanation: 4. Case study: building an RNN
In this exercise we build and train a model similar to the RNNColorbot model that was used in the main Eager notebook. The model is adapted for converting and training in graph mode.
To get started, we load the colorbot dataset. The code is identical to that used in the other exercise and its details are unimportant.
End of explanation
def model_components():
lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
lower_cell.build(tf.TensorShape((None, 256)))
upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
upper_cell.build(tf.TensorShape((None, 256)))
relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
relu_layer.build(tf.TensorShape((None, 128)))
return lower_cell, upper_cell, relu_layer
def rnn_layer(chars, cell, batch_size, training):
  """A simple RNN layer.

  Args:
    chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
    cell: An object of type tf.contrib.rnn.LSTMBlockCell
    batch_size: Int, the batch size to use
    training: Boolean, whether the layer is used for training

  Returns:
    A Tensor of shape (max_sequence_length, batch_size, output_size).
  """
hidden_outputs = []
autograph.utils.set_element_type(hidden_outputs, tf.float32)
state, output = cell.zero_state(batch_size, tf.float32)
n = tf.shape(chars)[0]
i = 0
while i < n:
ch = chars[i]
cell_output, (state, output) = cell.call(ch, (state, output))
hidden_outputs.append(cell_output)
i += 1
hidden_outputs = autograph.stack(hidden_outputs)
if training:
hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
return hidden_outputs
def model(inputs, lower_cell, upper_cell, relu_layer, batch_size, training):
  """RNNColorbot model.

  The model consists of two RNN layers (made by lower_cell and upper_cell),
  followed by a fully connected layer with ReLU activation.

  Args:
    inputs: A tuple (chars, length)
    lower_cell: An object of type tf.contrib.rnn.LSTMBlockCell
    upper_cell: An object of type tf.contrib.rnn.LSTMBlockCell
    relu_layer: An object of type tf.layers.Dense
    batch_size: Int, the batch size to use
    training: Boolean, whether the layer is used for training

  Returns:
    A Tensor of shape (batch_size, 3) - the model predictions.
  """
(chars, length) = inputs
chars_time_major = tf.transpose(chars, [1, 0, 2])
chars_time_major.set_shape((None, batch_size, 256))
hidden_outputs = rnn_layer(chars_time_major, lower_cell, batch_size, training)
final_outputs = rnn_layer(hidden_outputs, upper_cell, batch_size, training)
# Grab just the end-of-sequence from each output.
indices = tf.stack([length - 1, range(batch_size)], axis=1)
sequence_ends = tf.gather_nd(final_outputs, indices)
return relu_layer(sequence_ends)
def loss_fn(labels, predictions):
return tf.reduce_mean((predictions - labels) ** 2)
Explanation: Next, we set up the RNNColobot model, which is very similar to the one we used in the main exercise.
Autograph doesn't fully support classes yet (but it will soon!), so we'll write the model using simple functions.
End of explanation
def train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
iterator = train_data.make_one_shot_iterator()
step = 0
while step < num_steps:
labels, chars, sequence_length = iterator.get_next()
predictions = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, batch_size, training=True)
loss = loss_fn(labels, predictions)
optimizer.minimize(loss)
if step % (num_steps // 10) == 0:
print('Step', step, 'train loss', loss)
step += 1
return step
def test(eval_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps):
total_loss = 0.0
iterator = eval_data.make_one_shot_iterator()
step = 0
while step < num_steps:
labels, chars, sequence_length = iterator.get_next()
predictions = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, batch_size, training=False)
total_loss += loss_fn(labels, predictions)
step += 1
print('Test loss', total_loss)
return total_loss
def train_model(train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps):
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train(optimizer, train_data, lower_cell, upper_cell, relu_layer, batch_size, num_steps=tf.constant(train_steps))
test(eval_data, lower_cell, upper_cell, relu_layer, 50, num_steps=tf.constant(2))
print('Colorbot is ready to generate colors!\n\n')
# In graph mode, every op needs to be a dependent of another op.
# Here, we create a no_op that will drive the execution of all other code in
# this function. Autograph will add the necessary control dependencies.
return tf.no_op()
Explanation: The train and test functions are also similar to the ones used in the Eager notebook. Since the network requires a fixed batch size, we'll train in a single shot, rather than by epoch.
End of explanation
@autograph.do_not_convert(run_as=autograph.RunMode.PY_FUNC)
def draw_prediction(color_name, pred):
pred = pred * 255
pred = pred.astype(np.uint8)
plt.axis('off')
plt.imshow(pred)
plt.title(color_name)
plt.show()
def inference(color_name, lower_cell, upper_cell, relu_layer):
_, chars, sequence_length = parse(color_name)
chars = tf.expand_dims(chars, 0)
sequence_length = tf.expand_dims(sequence_length, 0)
pred = model((chars, sequence_length), lower_cell, upper_cell, relu_layer, 1, training=False)
pred = tf.minimum(pred, 1.0)
pred = tf.expand_dims(pred, 0)
draw_prediction(color_name, pred)
# Create an op that will drive the entire function.
return tf.no_op()
Explanation: Finally, we add code to run inference on a single input, which we'll read from the input.
Note the do_not_convert annotation that lets us disable conversion for certain functions and run them as a py_func instead, so you can still call them from compiled code.
End of explanation
def run_input_loop(sess, inference_ops, color_name_placeholder):
  """Helper function that reads from input and calls the inference ops in a loop."""
tb = widgets.TabBar(["RNN Colorbot"])
while True:
with tb.output_to(0):
try:
color_name = six.moves.input("Give me a color name (or press 'enter' to exit): ")
except (EOFError, KeyboardInterrupt):
break
if not color_name:
break
with tb.output_to(0):
tb.clear_tab()
sess.run(inference_ops, {color_name_placeholder: color_name})
plt.show()
with tf.Graph().as_default():
# Read the data.
batch_size = 64
train_data = load_dataset(data_dir, train_url, batch_size)
eval_data = load_dataset(data_dir, test_url, 50, training=False)
# Create the model components.
lower_cell, upper_cell, relu_layer = model_components()
# Create the helper placeholder for inference.
color_name_placeholder = tf.placeholder(tf.string, shape=())
# Compile the train / test code.
tf_train_model = autograph.to_graph(train_model)
train_model_ops = tf_train_model(
train_data, eval_data, batch_size, lower_cell, upper_cell, relu_layer, train_steps=100)
# Compile the inference code.
tf_inference = autograph.to_graph(inference)
inference_ops = tf_inference(color_name_placeholder, lower_cell, upper_cell, relu_layer)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Run training and testing.
sess.run(train_model_ops)
# Run the inference loop.
run_input_loop(sess, inference_ops, color_name_placeholder)
Explanation: Finally, we put everything together.
Note that the entire training and testing code is all compiled into a single op (tf_train_model) that you only execute once! We also still use a sess.run loop for the inference part, because that requires keyboard input.
End of explanation |
4,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
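A minimal implementation, assuming x holds 8-bit pixel values in the range [0, 255], could be as simple as:
def normalize(x):
    # Scale pixel values from [0, 255] to [0.0, 1.0]; the shape is unchanged.
    return x / 255.0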
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 0
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
import helper
import random
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)
#%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
#display 20 random images from the dataset
num_images = 20
num_cols = 4
num_rows = 5
features, labels = helper.load_cfar10_batch(cifar10_dataset_folder_path, 1)
label_names = helper._load_label_names()
rand_list = random.sample(range(len(features)), num_images)
#fig = plt.figure()
fig, axs = plt.subplots(num_rows,num_cols,figsize=(15,15))
fig.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0.2, hspace=0.25)
#fig.subplots(num_rows,num_cols)
for i, sample in enumerate(rand_list):
    sample_img = features[sample]
    sample_label_name = label_names[labels[sample]]
    # draw on the axes grid created by plt.subplots above instead of stacking
    # extra axes on top with fig.add_subplot
    ax = axs.flat[i]
    ax.imshow(sample_img)
    ax.set_title(sample_label_name)
    ax.axis('off')
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
    # scale the 8-bit pixel values into the range [0, 1]
    return x / 255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
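# --- Added sketch (not part of the project template): an equivalent min-max
# formulation of normalize() that does not hard-code the 0-255 pixel range.
# Shown for comparison only; the graded function above is what the tests check.
def normalize_minmax(image_data):
    image_data = np.asarray(image_data, dtype=np.float32)
    # guard against a constant image, where max == min
    span = image_data.max() - image_data.min()
    return (image_data - image_data.min()) / span if span else np.zeros_like(image_data)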
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
x_df = pd.DataFrame(x)
enc = OneHotEncoder(n_values = 10)
return enc.fit_transform(x_df).toarray()
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
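# --- Added sketch: a dependency-free alternative to the sklearn-based encoder
# above, assuming labels are integers in 0-9. np.eye(n_classes) is an identity
# matrix, so indexing its rows by label yields the corresponding one-hot vectors.
def one_hot_encode_np(labels, n_classes=10):
    return np.eye(n_classes)[np.asarray(labels, dtype=int)]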
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
import numpy as np  # used later by flatten(), which calls np.prod
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, image_shape[0], image_shape[1], image_shape[2]), name = 'x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, n_classes), name = 'y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name = 'keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
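# --- Added sketch: quick shape sanity check for the input helpers, run in a
# throwaway graph so the default graph used by the unit tests is left untouched.
with tf.Graph().as_default():
    print(neural_net_image_input((32, 32, 3)).get_shape())  # (?, 32, 32, 3)
    print(neural_net_label_input(10).get_shape())           # (?, 10)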
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
#print(x_tensor)
#print(conv_num_outputs)
#print(conv_ksize)
#print(conv_strides)
#print(pool_ksize)
#print(pool_strides)
input_channel_depth = int(x_tensor.get_shape()[3])
weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], input_channel_depth, conv_num_outputs],mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
layer = tf.nn.conv2d(x_tensor, weight, strides=[1,conv_strides[0],conv_strides[1],1], padding='SAME')
layer = tf.nn.bias_add(layer,bias)
layer = tf.nn.relu(layer)
return tf.nn.max_pool(layer, ksize=[1,pool_ksize[0],pool_ksize[1],1], strides=[1,pool_strides[0],pool_strides[1],1], padding='SAME')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
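# --- Added sketch: with SAME padding, each op only divides the spatial size by
# its stride (rounded up). For example, a 32x32 input with conv_strides=(2, 2)
# and pool_strides=(2, 2) comes out 8x8 before flattening.
import math
def same_padding_output_size(in_size, conv_stride, pool_stride):
    return math.ceil(math.ceil(in_size / conv_stride) / pool_stride)
print(same_padding_output_size(32, 2, 2))  # 8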
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
#print(shape)
dim = np.prod(shape[1:])
#print(dim)
return tf.reshape(x_tensor, [-1,dim])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
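# --- Added sketch: the "shortcut option" mentioned above, assuming a TF 1.x
# install where the contrib layers package is available.
def flatten_shortcut(x_tensor):
    return tf.contrib.layers.flatten(x_tensor)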
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weight = tf.Variable(tf.truncated_normal((x_tensor.get_shape().as_list()[1], num_outputs),mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.nn.relu(tf.matmul(x_tensor,weight) + bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
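# --- Added sketch: the equivalent layer via tf.layers (assuming TF 1.x, as used
# elsewhere in this notebook), with the same ReLU activation as the hand-rolled
# version above.
def fully_conn_shortcut(x_tensor, num_outputs):
    return tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)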
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
#print(x_tensor)
#print(num_outputs)
weight = tf.Variable(tf.truncated_normal((x_tensor.get_shape().as_list()[1], num_outputs),mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.matmul(x_tensor,weight) + bias
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
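# --- Added sketch: the same output layer via tf.layers (assuming TF 1.x). Note
# there is no activation here, since the logits feed softmax cross entropy later.
def output_shortcut(x_tensor, num_outputs):
    return tf.layers.dense(x_tensor, num_outputs)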
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
num_classes = 10
image_size = x.get_shape().as_list()
#print(image_size)
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 255
conv_ksize = [2,2]
conv_strides = [2,2]
pool_ksize = [2,2]
pool_strides = [2,2]
layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
layer = flatten(layer)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
layer = tf.nn.dropout(fully_conn(layer, 255), keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
layer = output(layer, num_classes)
# TODO: return output
return layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
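# --- Added sketch: a deeper variant showing how the helper functions above
# compose; the hyperparameters here are illustrative, not tuned.
def conv_net_deeper(x, keep_prob):
    layer = conv2d_maxpool(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))
    layer = conv2d_maxpool(layer, 64, (3, 3), (1, 1), (2, 2), (2, 2))
    layer = flatten(layer)
    layer = tf.nn.dropout(fully_conn(layer, 512), keep_prob)
    return output(layer, 10)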
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
train_loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
val_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print("Loss = {:>10.4f}, Accuracy = {:.04f}".format(train_loss, val_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 64
batch_size = 4096*2
keep_probability = 0.6
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
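# --- Added sketch: rough sanity check on the batch size. Each preprocessed
# CIFAR-10 batch file holds roughly 9,000 training samples (an assumption based
# on the 10% validation split mentioned earlier), so a very large batch_size
# gives only a couple of weight updates per batch file per epoch.
samples_per_batch_file = 9000  # approximate figure, see assumption above
updates_per_epoch = -(-samples_per_batch_file // batch_size)  # ceiling division
print('~{} weight update(s) per CIFAR-10 batch file per epoch'.format(updates_per_epoch))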
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
4,699 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I may be missing something obvious, but I can't find a way to compute this: the element-wise "signed max" of two tensors x and y, i.e. for each position keep the value with the larger absolute magnitude together with its original sign.
import numpy as np
import pandas as pd
import torch
x, y = load_data()
# element-wise maximum of the absolute values
maxs = torch.max(torch.abs(x), torch.abs(y))
# recover the sign of whichever input supplied the winning magnitude
xSigns = (maxs == torch.abs(x)) * torch.sign(x)
ySigns = (maxs == torch.abs(y)) * torch.sign(y)
finalSigns = xSigns.int() | ySigns.int()
# re-attach the sign to the winning magnitude
signed_max = maxs * finalSigns
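# --- Added usage sketch with hypothetical inputs (no ties in absolute value):
# x = torch.tensor([ 1., -4.,  3.])
# y = torch.tensor([-2.,  3., -1.])
# signed_max -> tensor([-2., -4.,  3.])
# Note: when |x| == |y| with opposite signs, the bitwise OR of the signs is -1,
# so ties resolve to the negative value.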