repo_name | path | license | content
---|---|---|---|
mne-tools/mne-tools.github.io | dev/_downloads/31239620dd9631320a99b07ac4a81074/interpolate_bad_channels.ipynb | bsd-3-clause | # Authors: Denis A. Engemann <[email protected]>
# Mainak Jas <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fname = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# plot with bads
evoked.plot(exclude=[], picks=('grad', 'eeg'))
"""
Explanation: Interpolate bad channels for MEG/EEG channels
This example shows how to interpolate bad MEG/EEG channels:
Using spherical splines from :footcite:PerrinEtAl1989 for EEG data.
Using field interpolation for MEG and EEG data.
In this example, the bad channels will still be marked as bad.
Only the data in those channels is replaced.
End of explanation
"""
evoked_interp = evoked.copy().interpolate_bads(reset_bads=False)
evoked_interp.plot(exclude=[], picks=('grad', 'eeg'))
"""
Explanation: Compute interpolation (also works with Raw and Epochs objects)
End of explanation
"""
evoked_interp_mne = evoked.copy().interpolate_bads(
reset_bads=False, method=dict(eeg='MNE'), verbose=True)
evoked_interp_mne.plot(exclude=[], picks=('grad', 'eeg'))
"""
Explanation: You can also use minimum-norm for EEG as well as MEG
End of explanation
"""
|
michaelbrundage/vowpal_wabbit | python/examples/Learning_to_Search.ipynb | bsd-3-clause | from __future__ import print_function
from vowpalwabbit import pyvw
"""
Explanation: A basic part of speech tagger
This tutorial walks you through writing learning to search code using the VW python interface. Once you've completed this, you can graduate to the C++ version, which will be faster for the computer but more painful for you.
The "learning to search" paradigm solves problems that look like the following. You have a sequence of decisions to make. At the end of making these decisions, the world tells you how bad your decisions were. You want to condition later decisions on earlier decisions. But thankfully, at "training time," you have access to an oracle that will tell you the right answer.
Let's start with a simple example: sequence labeling for Part of Speech tagging. The goal is to take a sequence of words ("the monster ate a big sandwich") and label them with their parts of speech (in this case: Det Noun Verb Det Adj Noun).
We will choose to solve this problem with left-to-right search. I.e., we'll label the first word, then the second then the third and so on.
For any vw project in python, we have to start by importing the pyvw library:
End of explanation
"""
DET = 1
NOUN = 2
VERB = 3
ADJ = 4
my_dataset = [ [(DET , 'the'),
(NOUN, 'monster'),
(VERB, 'ate'),
(DET , 'a'),
(ADJ , 'big'),
(NOUN, 'sandwich')],
[(DET , 'the'),
(NOUN, 'sandwich'),
(VERB, 'was'),
(ADJ , 'tasty')],
[(NOUN, 'it'),
(VERB, 'ate'),
(NOUN, 'it'),
(ADJ , 'all')] ]
print(my_dataset[2])
"""
Explanation: Now, let's define our data first. We'll do this first by defining the labels (one annoying thing is that labels in vw have to be integers):
End of explanation
"""
class SequenceLabeler(pyvw.SearchTask):
def __init__(self, vw, sch, num_actions):
# you must must must initialize the parent class
# this will automatically store self.sch <- sch, self.vw <- vw
pyvw.SearchTask.__init__(self, vw, sch, num_actions)
# set whatever options you want
sch.set_options( sch.AUTO_HAMMING_LOSS | sch.AUTO_CONDITION_FEATURES )
def _run(self, sentence): # it's called _run to remind you that you shouldn't call it directly!
output = []
for n in range(len(sentence)):
pos,word = sentence[n]
# use "with...as..." to guarantee that the example is finished properly
with self.vw.example({'w': [word]}) as ex:
pred = self.sch.predict(examples=ex, my_tag=n+1, oracle=pos, condition=[(n,'p'), (n-1, 'q')])
output.append(pred)
return output
"""
Explanation: Here we have an example of a (correctly) tagged sentence.
Now, we need to write the structured prediction code. To do this, we have to write a new class that derives from the pyvw.SearchTask class.
This class must have two functions: __init__ and _run.
The initialization function takes three arguments (plus self): a vw object (vw), a search object (sch), and the number of actions (num_actions) that this object has been initialized with. Within the initialization function, we must first initialize the parent class, and then we can set whatever options we want via sch.set_options(...). Of course we can also do whatever additional initialization we like.
The _run function executes the sequence of decisions on a given input. The input will be of whatever type our data is (so, in the above example, it will be a list of (label,word) pairs).
Here is a basic implementation of sequence labeling:
End of explanation
"""
vw = pyvw.vw("--search 4 --audit --quiet --search_task hook --ring_size 1024")
"""
Explanation: Let's unpack this a bit.
The __init__ function is simple. It first calls the parent initializer and then sets some options. The options it sets are two things designed to make the programmer's life easier. The first is AUTO_HAMMING_LOSS. Remember earlier we said that when the sequence of decision is made, you have to say how bad it was? This says that we want this to be computed automatically by comparing the individual decisions to the oracle decisions, and defining the loss to be the sum of incorrect decisions.
The second is AUTO_CONDITION_FEATURES. This is a bit subtler. Later in the _run function, we will say that the label of the nth word depends on the label of the n-1th word. In order to get the underlying classifier to pay attention to that conditioning, we need to add features. We could do that manually (we'll do this later) or we can ask vw to do it automatically for us. For simplicity, we choose the latter.
The _run function takes a sentence (list of pos/word pairs) as input. We loop over each word position n in the sentence and extract the pos,word pair. We then construct a VW example that consists of a single feature (the word) in the 'w' namespace. Given that example ex, we make a search-based prediction by calling self.sch.predict(...). This is a fairly complicated function that takes a number of arguments. Here, we are calling it with the following:
examples=ex: This tells the predictor what features to use.
my_tag=n+1: In general, we want to condition the prediction of the nth label on the n-1th label. In order to do this, we have to give each prediction a "name" so that we can refer back to it in the future. This name needs to be an integer >= 1. So we'll call the first word 1, the second word 2, and so on. It has to be n+1 and not n because of the 1-based requirement.
oracle=pos: As mentioned before, on training data, we need to tell the system what the "true" (or "best") decision is at this point in time. Here, it is the given part of speech label.
condition=[(n,'p'), (n-1,'q')]: This says that this prediction depends on the output of whichever-prediction-was-called-n (and, in the same way, on the one called n-1, under the name 'q'), and that the "nature" of that condition is called 'p' (for "predecessor" in this case, though this is entirely up to you)
Now, we're ready to train the model. We do this in three steps. First, we initialize a vw object, telling it that we have a --search task with 4 labels, second that the specific type of --search_task is hook (you will always use the hook task) and finally that we want it to be quiet and use a larger ring_size (you can ignore the ring_size for now).
End of explanation
"""
sequenceLabeler = vw.init_search_task(SequenceLabeler)
"""
Explanation: Next, we need to initialize the search task. We use the vw.init_search_task function to do this:
End of explanation
"""
for i in range(10):
sequenceLabeler.learn(my_dataset)
"""
Explanation: Finally, we can train on the dataset we defined earlier, using sequenceLabeler.learn (the .learn function is inherited from the pyvw.SearchTask class). The .learn function takes any iterator over data. Whatever type of data it iterates over is what it will feed to your _run function.
End of explanation
"""
test_example = [ (0,w) for w in "the sandwich ate a monster".split() ]
print(test_example)
"""
Explanation: Of course, we want to see if it's learned anything. So let's create a single test example:
End of explanation
"""
out = sequenceLabeler.predict(test_example)
print(out)
"""
Explanation: We've used 0 as the labels so you can be sure that vw isn't cheating and it's actually making predictions:
End of explanation
"""
class SequenceLabeler2(pyvw.SearchTask):
def __init__(self, vw, sch, num_actions):
pyvw.SearchTask.__init__(self, vw, sch, num_actions)
def _run(self, sentence):
output = []
loss = 0.
for n in range(len(sentence)):
pos,word = sentence[n]
prevPred = output[n-1] if n > 0 else '<s>'
with self.vw.example({'w': [word], 'p': [prevPred]}) as ex:
pred = self.sch.predict(examples=ex, my_tag=n+1, oracle=pos, condition=(n,'p'))
output.append(pred)
if pred != pos:
loss += 1.
self.sch.loss(loss)
return output
sequenceLabeler2 = vw.init_search_task(SequenceLabeler2)
sequenceLabeler2.learn(my_dataset)
print(sequenceLabeler2.predict( [(0,w) for w in "the sandwich ate a monster".split()] ))
"""
Explanation: If we look back at our POS tag definitions, this is DET NOUN VERB DET NOUN, which is indeed correct!
Removing the AUTO features
In the above example we used both AUTO_HAMMING_LOSS and AUTO_CONDITION_FEATURES. To make more explicit what these are doing, let's rewrite our SequenceLabeler class without them! Here's a version that gets rid of both simultaneously. It is only modestly more complex:
End of explanation
"""
# the label for each word is its parent, or -1 for root
my_dataset = [ [("the", 1), # 0
("monster", 2), # 1
("ate", -1), # 2
("a", 5), # 3
("big", 5), # 4
("sandwich", 2) ] # 5
,
[("the", 1), # 0
("sandwich", 2), # 1
("is", -1), # 2
("tasty", 2)] # 3
,
[("a", 1), # 0
("sandwich", 2), # 1
("ate", -1), # 2
("itself", 2), # 3
]
]
"""
Explanation: If executed correctly, this should have printed [1, 2, 3, 1, 2].
There are essentially two things that changed here. In order to get rid of AUTO_HAMMING_LOSS, we had to keep track of how many errors the predictor had made. This is done by checking whether pred != pos inside the inner loop, and then at the end calling self.sch.loss(loss) to tell the search procedure how well we did.
In order to get rid of AUTO_CONDITION_FEATURES, we need to explicitly add the previous prediction as features to the example we are predicting with. Here, we've done this by extracting the previous prediction (prevPred) and explicitly adding it as a feature (in the 'p' namespace) during the example construction.
Important Note: even though we're not using AUTO_CONDITION_FEATURES, we still must tell the search process that this prediction depends on the previous prediction. We need to do this because the learning algorithm automatically memoizes certain computations, and so it needs to know that, when memoizing, to remember that this prediction might have been different if a previous decision were different.
Very silly Covington-esque dependency parsing
Let's also try a variant of dependency parsing to see that this approach doesn't work just for sequence-labeling-like tasks. First we need to define some data:
End of explanation
"""
class CovingtonDepParser(pyvw.SearchTask):
def __init__(self, vw, sch, num_actions):
pyvw.SearchTask.__init__(self, vw, sch, num_actions)
sch.set_options( sch.AUTO_HAMMING_LOSS | sch.AUTO_CONDITION_FEATURES )
def _run(self, sentence):
N = len(sentence)
# initialize our output so everything is a root
output = [-1 for i in range(N)]
for n in range(N):
wordN,parN = sentence[n]
for m in range(-1,N):
if m == n: continue
wordM = sentence[m][0] if m > 0 else "*root*"
# ask the question: is m the parent of n?
isParent = 2 if m == parN else 1
# construct an example
dir = 'l' if m < n else 'r'
with self.vw.example({'a': [wordN, dir + '_' + wordN], 'b': [wordM, dir + '_' + wordN], 'p': [wordN + '_' + wordM, dir + '_' + wordN + '_' + wordM],
'd': [ str(m-n <= d) + '<=' + str(d) for d in [-8, -4, -2, -1, 1, 2, 4, 8] ] +
[ str(m-n >= d) + '>=' + str(d) for d in [-8, -4, -2, -1, 1, 2, 4, 8] ] }) as ex:
pred = self.sch.predict(examples = ex,
my_tag = (m+1)*N + n + 1,
oracle = isParent,
condition = [ (max(0, (m )*N + n + 1), 'p'),
(max(0, (m+1)*N + n ), 'q') ])
if pred == 2:
output[n] = m
break
return output
"""
Explanation: For instance, in the first sentence, the parent of "the" is "monster"; the parent of "monster" is "ate"; and "ate" is the root.
The basic idea of a Covington-style dependency parser is to loop over all O(N^2) word pairs and ask if one is the parent of the other. In a real parser you would want to make sure that you don't have cycles, that you have a unique root and (perhaps) that the resulting graph is projective. I'm not doing that here. Hopefully I'll add a shift-reduce parser example later that does do this. Here's an implementation of this idea:
End of explanation
"""
vw = pyvw.vw("--search 2 --quiet --search_task hook --ring_size 1024")
task = vw.init_search_task(CovingtonDepParser)
for p in range(10): # do ten passes over the training data
task.learn(my_dataset)
print('testing')
print(task.predict( [(w,-1) for w in "the monster ate a sandwich".split()] ))
print('should have printed [ 1 2 -1 4 2 ]')
"""
Explanation: In this, output stores the predicted tree and is initialized with every word being a root. We then loop over every word (n) and every possible parent (m, which can be -1, though that's really kind of unnecessary).
The features are basically the words under consideration, the words paired with the direction of the edge, the pair of words, and then a bunch of (binned) distance features.
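To make that dense feature construction concrete, here is a small standalone sketch (a hypothetical candidate edge, with no vw object involved) of the dictionary that gets built for the question "is 'ate' (m=2) the parent of 'monster' (n=1)?":
```python
# Hypothetical candidate edge: n = 1 ('monster'), m = 2 ('ate')
wordN, wordM, n, m = 'monster', 'ate', 1, 2
dir = 'l' if m < n else 'r'   # direction of the edge, as in the class above
features = {'a': [wordN, dir + '_' + wordN],
            'b': [wordM, dir + '_' + wordN],
            'p': [wordN + '_' + wordM, dir + '_' + wordN + '_' + wordM],
            'd': [str(m-n <= d) + '<=' + str(d) for d in [-8, -4, -2, -1, 1, 2, 4, 8]] +
                 [str(m-n >= d) + '>=' + str(d) for d in [-8, -4, -2, -1, 1, 2, 4, 8]]}
print(features)
```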
We can train and run this parser with:
End of explanation
"""
class CovingtonDepParserLDF(pyvw.SearchTask):
def __init__(self, vw, sch, num_actions):
pyvw.SearchTask.__init__(self, vw, sch, num_actions)
sch.set_options( sch.AUTO_HAMMING_LOSS | sch.IS_LDF | sch.AUTO_CONDITION_FEATURES )
def makeExample(self, sentence, n, m):
wordN = sentence[n][0]
wordM = sentence[m][0] if m >= 0 else '*ROOT*'
dir = 'l' if m < n else 'r'
ex = self.vw.example( { 'a': [wordN, dir + '_' + wordN],
'b': [wordM, dir + '_' + wordN],
'p': [wordN + '_' + wordM, dir + '_' + wordN + '_' + wordM],
'd': [ str(m-n <= d) + '<=' + str(d) for d in [-8, -4, -2, -1, 1, 2, 4, 8] ] +
[ str(m-n >= d) + '>=' + str(d) for d in [-8, -4, -2, -1, 1, 2, 4, 8] ] },
labelType=self.vw.lCostSensitive)
ex.set_label_string(str(m+2) + ":0")
return ex
def _run(self, sentence):
N = len(sentence)
# initialize our output so everything is a root
output = [-1 for i in range(N)]
for n in range(N):
# make LDF examples
examples = [ self.makeExample(sentence,n,m) for m in range(-1,N) if n != m ]
# truth
parN = sentence[n][1]
oracle = parN+1 if parN < n else parN # have to -1 because we excluded n==m from list
# make a prediction
pred = self.sch.predict(examples = examples,
my_tag = n+1,
oracle = oracle,
condition = [ (n, 'p'), (n-1, 'q') ] )
output[n] = pred-1 if pred < n else pred # have to +1 because n==m excluded
for ex in examples: ex.finish() # clean up
return output
"""
Explanation: One could argue that a more natural way to do this would be with LDF rather than the inner loop over m. We'll do that next.
LDF-based Covington-style dependency parser
One of the weirdnesses in the previous parser implementation is that it makes N-many binary decisions per word ("is word n my parent?") rather than a single N-way decision. The latter makes more sense.
The challenge is that you cannot set this up as a vanilla multiclass classification problem because (a) the number of "classes" is a function of the input (a length N sentence will have N classes) and (b) class "1" and "2" don't mean anything consistently across examples.
The way around this is label-dependent features (LDF). In LDF mode, the class ids are (essentially -- see caveat below) irrelevant. Instead, you simply provide features that depend on the label (hence "LDF"). In particular, for each possible label, you provide a different feature vector, and the goal of learning is to pick one of those as the "correct" one.
Here's a re-implementation of Covington using LDF:
End of explanation
"""
vw = pyvw.vw("--search 0 --csoaa_ldf m --search_task hook --ring_size 1024 --quiet")
task = vw.init_search_task(CovingtonDepParserLDF)
for p in range(2): # do two passes over the training data
task.learn(my_dataset)
print(task.predict( [(w,-1) for w in "the monster ate a sandwich".split()] ))
"""
Explanation: There are a few things going on here. Let's focus first on the __init__ function. The only difference here is that when we call sch.set_options we provide sch.IS_LDF to declare that this is an LDF task.
Let's skip the makeExample function for a minute and look at the _run function. You should recognize most of this from the non-LDF version. We initialize the output (parent) of every word to -1 (meaning that every word is connected to the root).
For each word n, we construct N-many examples: one for every -1..(N-1), except for the current word n because you cannot have self-loops. If we were being more clever, we would only do the ones that won't result in the creation of a cycle, but we're not being clever.
Now, because the "labels" are just examples, it's a bit more complicated to specify the oracle. The oracle is an index into the examples list. So if oracle is the oracle action, then examples[oracle] is the corresponding example. We compute the oracle as follows. parN is the actual parent, which is going to be something in the range -1 .. (N-1). If parN < n (this is a left arrow), then the oracle index is parN+1 because the root (-1) is index 0 and so on. If parN > n (note: it cannot be equal to n) then, because n == m is left out of the examples list, the correct index is just parN. Phew.
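Here is a quick standalone sanity check of that index bookkeeping, using hypothetical values of N and n:
```python
# Candidate parents for word n are m = -1, 0, ..., N-1 with m == n skipped.
N, n = 5, 2
candidates = [m for m in range(-1, N) if m != n]   # [-1, 0, 1, 3, 4]
for parN in candidates:
    oracle = parN + 1 if parN < n else parN        # the rule described above
    assert candidates[oracle] == parN              # the oracle index points at the true parent
print('index bookkeeping checks out')
```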
We then ask for a prediction. Now, instead of giving a single example, with give the list of examples. The tag works the same way, as does the conditioning.
Once we get a prediction out (called pred) we need to figure out what parent it actually corresponds to. This is simply undoing the computation from two paragraphs ago!
Finally -- and this is skippable if you trust the Python garbage collector -- we tell VW that we're done with all the examples we created. We do this just to be pedantic; it's optional.
Okay, now let's go back to the makeExample function. This takes two word ids (n and m) and makes an example that roughly says "what would it look like if I had an edge from n to m?" We construct basically the same features as before. There are two major changes, though:
When we run self.vw.example(...) we provide labelType=self.vw.lCostSensitive as an argument. This is because under the hood, vw treats LDF examples as cost-sensitive classification examples. This means they need to have cost-sensitive labels, so that's how we need to create them.
We explicitly set the label of this example to str(m+2)+":0". What is this? Well, this is optional but recommended. Here's the issue. In LDF mode, recall that labels have no intrinsic meaning. This means that when vw does auto-conditioning, it's not really clear what to use as the "previous prediction." By giving explicit label names (in this case, m+2) we're recording the position of the last parent, which may be useful for predicting the next parent. We could avoid this necessity if we did our own feature engineering on the history, but for now, this seems to capture the right intuition.
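For a concrete sense of those label names, here is a tiny sketch for a hypothetical 4-word sentence (ignoring the skipped m == n case):
```python
for m in range(-1, 4):
    print('candidate parent m = {:2d} -> label string {!r}'.format(m, str(m + 2) + ':0'))
```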
Given all this, we can now train and test our parser:
End of explanation
"""
my_dataset = [
( "the blue house".split(),
([0], [2], [1]),
"la maison bleue".split() ),
( "the house".split(),
([0], [1]),
"la maison".split() ),
( "the flower".split(),
([0], [1]),
"la fleur".split() )
]
"""
Explanation: The correct parse of this sentence is [1, 2, -1, 4, 2] which is what this should have printed.
There are two major things to notice in the initialization of VW. The first is that we say --search 0. Giving zero as the number of labels to --search declares that this is going to be an LDF task. We also have to tell VW that we want an LDF-enabled cost-sensitive learner, which is what --csoaa_ldf m does (if you're wondering, m means "multiline" -- just treat it as something you have to do). The rest should be familiar.
A simple word-alignment model
Okay, as a last example we'll do a simple word alignment model in the spirit of the IBM models. Note that this will be a supervised model; doing unsupervised stuff is a bit trickier.
Here's some word alignment data. The dataset is triples of E, A, F where A[i] = list of words E[i] aligned to, or [] for null-aligned:
End of explanation
"""
def alignmentError(true, sys):
t = set(true)
s = set(sys)
if len(t | s) == 0: return 0.
return 1. - float(len(t & s)) / float(len(t | s))
"""
Explanation: It's going to be useful to compute alignment mismatches at the word level between true alignments (like [1,2]) and predicted alignments (like [2,3,4]). We use intersection-over-union error:
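For example, using the function defined in the cell above: if the true alignment is [1, 2] and the predicted one is [2, 3, 4], the intersection has size 1 and the union has size 4, so the error is 1 - 1/4 = 0.75.
```python
print(alignmentError([1, 2], [2, 3, 4]))   # 0.75
print(alignmentError([], []))              # 0.0 (both null-aligned)
```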
End of explanation
"""
class WordAligner(pyvw.SearchTask):
def __init__(self, vw, sch, num_actions):
pyvw.SearchTask.__init__(self, vw, sch, num_actions)
sch.set_options( sch.AUTO_HAMMING_LOSS | sch.IS_LDF | sch.AUTO_CONDITION_FEATURES )
def makeExample(self, E, F, i, j0, l):
f = 'Null' if j0 is None else [ F[j0+k] for k in range(l+1) ]
ex = self.vw.example( { 'e': E[i],
'f': f,
'p': '_'.join(f),
'l': str(l),
'o': [str(i-j0), str(i-j0-l)] if j0 is not None else [] },
labelType = self.vw.lCostSensitive )
lab = 'Null' if j0 is None else str(j0+l)
ex.set_label_string(lab + ':0')
return ex
def _run(self, alignedSentence):
E,A,F = alignedSentence
# for each E word, we pick a F span
covered = {} # which F words have been covered so far?
output = []
for i in range(len(E)):
examples = [] # contains vw examples
spans = [] # contains triples (alignment error, index in examples, [range])
# empty span:
examples.append( self.makeExample(E, F, i, None, None) )
spans.append( (alignmentError(A[i], []), 0, []) )
# non-empty spans
for j0 in range(len(F)):
for l in range(3): # max phrase length of 3
if j0+l >= len(F): break
if (j0+l) in covered: break
id = len(examples)
examples.append( self.makeExample(E, F, i, j0, l) )
spans.append( (alignmentError(A[i], range(j0,j0+l+1)), id, range(j0,j0+l+1)) )
sortedSpans = []
for s in spans: sortedSpans.append(s)
sortedSpans.sort()
oracle = []
for id in range(len(sortedSpans)):
if sortedSpans[id][0] > sortedSpans[0][0]: break
oracle.append( sortedSpans[id][1] )
pred = self.sch.predict(examples = examples,
my_tag = i+1,
oracle = oracle,
condition = [ (i, 'p'), (i-1, 'q') ] )
for ex in examples: ex.finish()
output.append( spans[pred][2] )
for j in spans[pred][2]:
covered[j] = True
return output
"""
Explanation: And now we can define our structured prediction task. This is also an LDF problem. Basically for each word on the English side, we'll loop over all possible phrases on the Foreign side to which it could align (maximum phrase length of three). For each of these options we'll create an example to be fed into the LDF classifier. We also ensure that the same foreign word cannot be covered by multiple English words, though this might not be a good idea in general.
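As a small illustration of the span enumeration described above, here is a standalone sketch (hypothetical 3-word foreign sentence, nothing covered yet) of the candidate spans considered for a single English word:
```python
F = "la fleur bleue".split()
covered = {}                 # which F words have been covered so far
spans = [[]]                 # the empty (null-alignment) span
for j0 in range(len(F)):
    for l in range(3):       # max phrase length of 3
        if j0 + l >= len(F): break
        if (j0 + l) in covered: break
        spans.append(list(range(j0, j0 + l + 1)))
print(spans)   # [[], [0], [0, 1], [0, 1, 2], [1], [1, 2], [2]]
```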
End of explanation
"""
vw = pyvw.vw("--search 0 --csoaa_ldf m --search_task hook --ring_size 1024 --quiet -q ef -q ep")
task = vw.init_search_task(WordAligner)
for p in range(10):
task.learn(my_dataset)
print(task.predict( ("the blue flower".split(), ([],[],[]), "la fleur bleue".split()) ))
"""
Explanation: The only really complicated thing here is computing the oracle. What we do is, for each possible alignment, compute an intersection-over-union error rate. The oracle is then that alignment that achieves the smallest (local) error rate. This is not perfect, but is good enough. One interesting thing here is that now the oracle could be a list; this is completely supported by the underlying algorithms.
We can train and test this model to make sure it does the right thing:
End of explanation
"""
|
tpin3694/tpin3694.github.io | sql/delete_a_table.ipynb | mit | # Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
"""
Explanation: Title: Delete A Table
Slug: delete_a_table
Summary: Delete an entire table in SQL.
Date: 2016-05-01 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
"""
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
"""
Explanation: Create Data
End of explanation
"""
%%sql
-- Delete the table called 'criminals'
DROP TABLE criminals
"""
Explanation: Delete A Table
End of explanation
"""
%%sql
-- Select everything
SELECT *
-- From the table 'criminals'
FROM criminals
"""
Explanation: View Table
End of explanation
"""
|
dmolina/es_intro_python | 01-Instalación.ipynb | gpl-3.0 | #from IPython.display import HTML
#HTML('''<script>
#code_show=true;
#function code_toggle() {
# if (code_show){
# $('div.input').hide();
# } else {
# $('div.input').show();
# }#
# code_show = !code_show
#}
#$( ocument ).ready(code_toggle);
#</script>
#The raw code for this IPython notebook is by default hidden for easier reading.
#To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
"""
Explanation: Installation
The first thing is to install Python. The best way to do that is to download Anaconda, which is available for Windows, Linux and MacOS. <img src="http://www.gurobi.com/images/logo-anaconda.png" alt="Anaconda" style="width: 200px;"/>
Download Anaconda
Remember to choose the 64-bit version, accessible directly from the buttons that appear. The versions further down are for other architectures or for 32 bits, which can cause problems if the OS is 64-bit.
Verify that it is installed
conda --version
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo("qb7FT68tcA8")
"""
Explanation: Creating an environment
Anaconda lets us have different environments, each with different libraries (and library versions).
That way we avoid conflicts if we want to have applications that require libraries that are incompatible with each other.
We are going to create the environment for Business Intelligence (IN) with Python version 3 and the scikit-learn library; it is created as follows:
conda create -n IN
And to activate it, run
source activate IN
If everything goes well, on Linux you should see "(IN)" on the command line.
To deactivate it you can do:
source deactivate
Why Python
Python is a language widely used in Machine Learning and Data Science in general. It has many advantages:
It is Free Software, so there are no licensing problems.
It is a very easy language to learn.
Very good scientific libraries.
Easy to integrate with other libraries.
Libraries we will use
The libraries we are going to use already come installed by default. They are:
numpy, a very powerful mathematical library that achieves the efficiency of working in C.
pandas, a library for working with data tables (like Excel) and reading and writing them to .csv and Excel files.
scikit-learn, a Machine Learning library.
Development Environment
There are many development environments, such as PyCharm, Spyder or PyDev (an Eclipse environment).
We are going to use Jupyter, which offers a web environment to write Python code and see the results quite interactively.
It is installed automatically with conda. To run it, first go to the directory you want to work in and do:
jupyter notebook
and then open the address in the browser, and we can start working.
End of explanation
"""
print("Hola a todos")
"""
Explanation: Notebooks
Python notebooks are files ending in ".ipynb" that can be opened from the browser using jupyter. These notebooks are divided into cells that can contain text and Python code that can be executed, showing the output of its execution.
Github understands the format and allows you to view a notebook, but the code cannot be executed there; for that you need to download the file and edit it locally.
Resources
There is a large number of very useful notebooks; Jupyter's own wiki hosts an extensive gallery of interesting notebooks.
Example of the Python language
Let's start with the Hello, World. While in C it would be
#include <stdio.h>
int main(void) {
printf("Hello World\n");
return 0;
}
The Python code is much simpler.
End of explanation
"""
sumcars = 0
sumwords = 0
for word in ['hola', 'a', 'todos']:
print("Frase: ", word)
sumcars += len(word)
sumwords += 1
print("Se han mostrado ", sumwords, " palabras y ", sumwords, " caracteres")
"""
Explanation: As you can see, there are no semicolons at the end of each statement; the end of the line is enough. And instead of printf it uses print, which is much simpler.
A slightly more complete example
End of explanation
"""
%pylab inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(30)
plt.plot(x, x**2)
"""
Explanation: Visualizando datos
Vamos a visualizar unos pocos datos
End of explanation
"""
# example with a legend and latex symbols
fig, ax = plt.subplots()
ax.plot(x, x**2, label=r"$y = \alpha^2$")
ax.plot(x, x**3, label=r"$y = \alpha^3$")
ax.legend(loc=2) # upper left corner
ax.set_xlabel(r'$\alpha$', fontsize=18)
ax.set_ylabel(r'$y$', fontsize=18)
ax.set_title('Ejemplo más completo');
"""
Explanation: A more complete example
End of explanation
"""
import sklearn.datasets
import sklearn.cluster
import matplotlib.pyplot as plot
# Create the points
n = 1000
k = 4
# Generate fake data
data, labels = sklearn.datasets.make_blobs(
n_samples=n, n_features=2, centers=k)
"""
Explanation: Making use of Machine Learning
Python has the excellent scikit-learn library for working with Machine Learning, which implements many interesting methods. We will show it by applying clustering using the K-means algorithm.
First we load the libraries
End of explanation
"""
plot.scatter(data[:, 0], data[:, 1])
"""
Explanation: First we plot the points
End of explanation
"""
# scikit-learn
kmeans = sklearn.cluster.KMeans(k, max_iter=300)
kmeans.fit(data)
means = kmeans.cluster_centers_
plot.scatter(data[:, 0], data[:, 1], c=labels)
plot.scatter(means[:, 0], means[:, 1], linewidths=2, color='r')
plot.show()
"""
Explanation: Applying k-means
End of explanation
"""
import seaborn as sns
iris = sns.load_dataset("iris")
g = sns.PairGrid(iris, hue="species")
g.map(plt.scatter);
g = g.add_legend()
from sklearn import datasets
# load the iris dataset
iris = datasets.load_iris()
# start with the first two features: sepal length (cm) and sepal width (cm)
X = iris.data[:100,:2]
# save the target values as y
y = iris.target[:100]
# Define bounds on the X and Y axes
X_min, X_max = X[:,0].min()-.5, X[:,0].max()+.5
y_min, y_max = X[:,1].min()-.5, X[:,1].max()+.5
for target in set(y):
x = [X[i,0] for i in range(len(y)) if y[i]==target]
z = [X[i,1] for i in range(len(y)) if y[i]==target]
plt.scatter(x,z,color=['red','blue'][target], label=iris.target_names[:2][target])
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
plt.xlim(X_min,X_max)
plt.ylim(y_min,y_max)
plt.title('Scatter Plot of Sepal Length vs. Sepal Width')
plt.legend(iris.target_names[:2], loc='lower right')
plt.show()
"""
Explanation: Detecting criteria to classify iris flowers
The "hello world" of machine learning is learning to detect the type of iris flower from four attributes.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/04_features/a_features.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
"""
Explanation: Trying out features
Learning Objectives:
* Improve the accuracy of a model by adding new features with the appropriate representation
The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.
Set Up
In this first cell, we'll load the necessary libraries.
End of explanation
"""
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
"""
Explanation: Next, we'll load our data set.
End of explanation
"""
df.head()
df.describe()
"""
Explanation: Examine and split the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
End of explanation
"""
np.random.seed(seed=1) #makes result reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
"""
Explanation: Now, split the data into two parts -- training and evaluation.
End of explanation
"""
def add_more_features(df):
df['avg_rooms_per_house'] = df['total_rooms'] / df['households'] #expect positive correlation
df['avg_persons_per_room'] = df['population'] / df['total_rooms'] #expect negative correlation
return df
# Create pandas input function
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = add_more_features(df),
y = df['median_house_value'] / 100000, # will talk about why later in the course
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
# Define your feature columns
def create_feature_cols():
return [
tf.feature_column.numeric_column('housing_median_age'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), boundaries = np.arange(32.0, 42, 1).tolist()),
tf.feature_column.numeric_column('avg_rooms_per_house'),
tf.feature_column.numeric_column('avg_persons_per_room'),
tf.feature_column.numeric_column('median_income')
]
# Create estimator train and evaluate function
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.compat.v1.estimator.LinearRegressor(model_dir = output_dir, feature_columns = create_feature_cols())
train_spec = tf.estimator.TrainSpec(input_fn = make_input_fn(traindf, None),
max_steps = num_train_steps)
eval_spec = tf.estimator.EvalSpec(input_fn = make_input_fn(evaldf, 1),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds,
throttle_secs = 5) # evaluate every N seconds
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
OUTDIR = './trained_model'
# Run the model
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate(OUTDIR, 2000)
"""
Explanation: Training and Evaluation
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target).
We'll modify the feature_cols and input function to represent the features you want to use.
We divide total_rooms by households to get avg_rooms_per_house which we expect to positively correlate with median_house_value.
We also divide population by total_rooms to get avg_persons_per_room which we expect to negatively correlate with median_house_value.
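As an optional sanity check of these expectations, you could inspect the empirical correlations on the training split (this assumes traindf and add_more_features from the cells in this notebook are already defined):
```python
checked = add_more_features(traindf.copy())
print(checked[['avg_rooms_per_house', 'avg_persons_per_room',
               'median_income', 'median_house_value']].corr()['median_house_value'])
```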
End of explanation
"""
|
mayankjohri/LetsExplorePython | Section 1 - Core Python/Chapter 02 - Data Types Part - 1/Lists.ipynb | gpl-3.0 | fruits = ['Apple', 'Mango', 'Grapes', 'Jackfruit',
'Apple', 'Banana', 'Grapes', [1, "Orange"]]
# processing the entire list
for fruit in fruits:
print(fruit, type(fruit))
#
print("*"*30)
fruits.insert(3, "Water Melon")
print(fruits)
# !! Gotcha's
fr = fruits
print(id(fr))
print(id(fruits))
ft1 = list(fruits)
print(id(ft1))
print(id(fruits))
print(id(ft1[2]))
print(id(fruits[2]))
ft1 = fruits[:]
print(id(ft1))
print(id(fruits))
print(id(ft1[2]))
print(id(fruits[2]))
fruits.append('Camel')
print(fruits)
fruits.append(['kiwi', 'Apple', 'Camel'])
print(fruits)
fruits.extend(['kiwi', 'Apple', 'Camel'])
print(fruits)
"""
Explanation: Lists
Lists are collections of heterogeneous objects, which can be of any type, including other lists.
Lists in Python are mutable and can be changed at any time. Lists can be sliced in the same way as strings, but as lists are mutable, it is possible to make assignments to the list items.
Syntax:
list = [a, b, ..., z]
Common operations with lists:
End of explanation
"""
fruits.extend(['kiwi', ['Apple', 'Camel']])
print(fruits)
"""
Explanation: NOTE: Only one level of extending happens; 'Apple' and 'Camel' are still inside a sub-list
End of explanation
"""
## Removing the second instance of Grapes
x = 0
y = 0
for fruit in fruits:
if x == 1 and fruit == 'Grapes':
# del (fruits[y])
fruits.pop(y)
elif fruit == 'Grapes':
x = 1
y +=1
print(fruits)
fruits.remove('Grapes')
"""
Explanation: Removing
End of explanation
"""
print(fruits)
fruits.append("Grapes")
"""
Explanation: Appending
End of explanation
"""
# These will work on only homogeneous list and will fail for heterogeneous
try:
fruits.sort()
print(fruits)
except Exception as e:
print(e)
help(list.sort)
"""
Explanation: Ordering
End of explanation
"""
fruits.reverse()
print(fruits)
fruits = ['kiwi', 'Apple', 'Camel']
print(fruits[::-1])
# # # prints with number order
fruits = ['Apple', 'Mango', 'Grapes', 'Jackfruit',
'Apple', 'Banana', 'Grapes']
for i, prog in enumerate(fruits):
print( i + 1, '=>', prog)
"""
Explanation: Inverting
End of explanation
"""
my_list = ['A', 'B', 'C']
for a, b in enumerate(my_list):
print(a, b)
my_list = ['A', 'B', 'C']
print ('list:', my_list)
# # The empty list is evaluated as false
while my_list:
# In queues, the first item is the first to go out
# pop(0) removes and returns the first item
print ('Left', my_list.pop(0), ', remain', len(my_list), my_list)
my_list.append("G")
# # More items on the list
my_list += ['D', 'E', 'F']
print ('list:', my_list)
while my_list:
# On stacks, the first item is the last to go out
# pop() removes and retorns the last item
print ('Left', my_list.pop(), ', remain', len(my_list), my_list)
l = ['D', 'E', 'F', "G", "H"]
print(l)
k = ('D', "E", "G", "H")
print(dir(l))
print("*"*8)
print(dir(k))
"""
Explanation: The function enumerate() returns a tuple of two elements in each iteration: a sequence number and an item from the corresponding sequence.
The list has a pop() method that helps the implementation of queues and stacks:
End of explanation
"""
t = ([1, 2], 4)
print(t)
print(" :: Error :: ")
try:
t[0] = 3
print(t)
except Exception as e:
print(e)
print(" :: Error :: ")
try:
t[0] = [1, 2, 3]
print(t)
except Exception as e:
print(e)
t[0].append(3)
print(t)
t[0][0] = [1, 2, 3]
print(t)
ta = (1, 2, 3, 4, 5)
for a in ta:
print (a)
ta1 = [1, 2, 3, 4, 5]
for a in ta1:
print(a)
"""
Explanation: The sort (sort) and reverse (reverse) operations are performed in place on the list and do not create new lists.
Tuples
Similar to lists, but immutable: it's not possible to append, delete or make assignments to the items.
Syntax:
my_tuple = (a, b, ..., z)
The parentheses are optional.
Feature: a tuple with only one element is represented as:
t1 = (1,)
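For example, a quick check of why the trailing comma matters:
```python
not_a_tuple = (1)    # just the integer 1 -- parentheses alone do not make a tuple
one_tuple = (1,)     # the trailing comma does
print(type(not_a_tuple), type(one_tuple))
```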
The tuple elements can be referenced the same way as the elements of a list:
first_element = tuple[0]
Lists can be converted into tuples:
my_tuple = tuple(my_list)
And tuples can be converted into lists:
my_list = list(my_tuple)
While a tuple can contain mutable elements, those elements cannot undergo assignment, as this would change the reference to the object.
Example :
End of explanation
"""
|
katelynneese/dmdd | dmdd_tutorial.ipynb | mit | I. Nuclear-recoil rates
-----
______
`dmdd` has three modules that let you calculate differential rate $\frac{dR}{dE_R}$ and total rate $R(E_R)$ of nuclear-recoil events:
I) `rate_UV`: rates for a variety of UV-complete theories (from Gresham & Zurek, 2014)
II) `rate_genNR`: rates for all non-relativistic scattering operators, including interference terms (from Fitzpatrick et al., 2013)
III) `rate_NR`: rates for individual nuclear responses compatible with the EFT, not automatically including interference terms (from Fitzpatrick et al., 2013)
Appropriate nuclear response functions (accompanied by the right momentum and energy dependencies of the rate) are automatically folded in, and for a specified target element natural abundance of its isotopes (with their specific response functions) are taken into account.
"""
Explanation: Welcome to the dmdd tutorial!
A python package that enables simple simulation and Bayesian posterior analysis
of nuclear-recoil data from dark matter direct detection experiments
for a wide variety of theories of dark matter-nucleon interactions.
dmdd has the following features:
Calculation of the nuclear-recoil rates for various non-standard momentum-, velocity-, and spin-dependent scattering models.
Calculation of the appropriate nuclear response functions triggered by the chosen scattering model.
Inclusion of natural abundances of isotopes for a variety of target elements: Xe, Ge, Ar, F, I, Na.
Simple simulation of data (where data is a list of nuclear recoil energies, including Poisson noise) under different models.
Bayesian analysis (parameter estimation and model selection) of data using MultiNest.
All rate and response functions directly implement the calculations of Anand et al. (2013) and Fitzpatrick et al. (2013) (for non-relativistic operators, in rate_genNR and rate_NR), and Gresham & Zurek (2014) (for UV-motivated scattering models in rate_UV). Simulations follow the prescription from Gluscevic & Peter (2014), and Gluscevic et al. (2015).
This document demonstrates basic usage and describes inputs and outputs so you can quickly get started with dmdd. For more details, refer to the online documentation, or raise an issue on GitHub with questions or feedback.
End of explanation
"""
%matplotlib inline
import numpy as np
import dmdd
# array of nuclear-recoil energies at which to evaluate the rate:
energies = np.linspace(1,100,5)
SI_rate = dmdd.rate_UV.dRdQ(energies, mass=50., sigma_si=70., fnfp_si=1.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon')
ED_rate = dmdd.rate_UV.dRdQ(energies, mass=50., sigma_elecdip=70.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon')
print SI_rate
print ED_rate
"""
Explanation: Let's calculate, separately, a differential rate for a standard spin-independent interaction (with $f_n/f_p=1$), and for an electric-dipole interaction with a massive mediator, assuming a xenon target, and a WIMP mass of 50 GeV, for standard values of the velocity parameters and local DM density:
End of explanation
"""
Rtot_SI = dmdd.rate_UV.R(dmdd.eff.efficiency_unit, mass=50.,
sigma_si=70., fnfp_si=1.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon', Qmin=5, Qmax=50)
Rtot_ED = dmdd.rate_UV.R(dmdd.eff.efficiency_unit, mass=50.,
sigma_elecdip=70.,
v_lag=220, v_rms=220, v_esc=540, rho_x=0.3,
element='xenon', Qmin=5, Qmax=50)
print 'Total spin-independent rate: {:.1e} events/sec/kg'.format(Rtot_SI)
print 'Total electric-dipole rate: {:.1e} events/sec/kg'.format(Rtot_ED)
"""
Explanation: Get the total rate for the same scenario, in the energy window between 5 and 40 keV (assuming unit efficiency):
End of explanation
"""
dmdd.dp.plot_spectrum('xenon',Qmin=5,Qmax=50,exposure=1000,
sigma_name='sigma_si',sigma_val=70,
fnfp_name='fnfp_si', fnfp_val=1,
mass=50, title='theory: SI',color='BlueViolet')
dmdd.dp.plot_spectrum('xenon',Qmin=5,Qmax=50,exposure=1000,
sigma_name='sigma_elecdip',sigma_val=70,
mass=50, title='theory: ED',color='DarkBlue')
"""
Explanation: You can also plot the corresponding recoil-energy spectra; e.g. for 1000 kg-year exposure:
End of explanation
"""
# intialize and instances of Experiment object with a germanium target, with energy resolution,
# and lower energy threshold of keV, upper threshold of 100 keV, and 200 kg-year exposure:
ge = dmdd.Experiment('Ge','germanium',1,100,200,dmdd.eff.efficiency_unit, energy_resolution=True)
# and a similar fluorine target with no energy resolution:
flu = dmdd.Experiment('F','fluorine',1,100,200,dmdd.eff.efficiency_unit, energy_resolution=False)
print 'experiment: {} ({:.0f} kg-yr)'.format(ge.name, ge.exposure)
minimum_mx = ge.find_min_mass(v_esc=540., v_lag=220., mx_guess=1.)
# this is the minimum detectable WIMP mass,
# given the recoil-energy threshold, and escape velocity
# from the Galaxy in the lab frame = v_esc + v_lag.
print 'minimum detectable WIMP mass: {:.1f} GeV'.format(minimum_mx)
# this is how to get the projected reach for such experiment for mx=50GeV,
# for sigma_p under a given theory, in this case, the standard spin-dependent scattering,
# assuming the experiment has 4 expected background events:
sigma = ge.sigma_limit(sigma_name='sigma_sd', fnfp_name='fnfp_sd', fnfp_val=-1.1,
mass=50, Nbackground=4, sigma_guess = 1e10, mx_guess=1.,
v_esc=540., v_lag=220., v_rms=220., rho_x=0.3)
sigma_normalized = sigma * dmdd.PAR_NORMS['sigma_sd']
print 'projected exclusion for SD scattering @ 50 GeV: sigma_p = {:.2e} cm^2'.format(sigma_normalized)
"""
Explanation: NOTES:
Values of the cross-sections passed to the rate functions are normalized with normalizations stored in PAR_NORMS dictionary in globals module; the values used in all calculations are always of this form: sigma_si * dmdd.PAR_NORMS['sigma_si']
v_rms variable is equal to 3/2 * (Maxwellian rms velocity of ~155km/sec) ~ 220 km/sec
v_esc is in the Galactic frame
II. Experiment Object
This object packages all the information that defines a single "experiment". For statistical analysis, a list of these objects is passed to initialize an instance of a MultinestRun object, or to initialize an instance of a Simulation object. It can also be used on its own to explore the capabilities of a theoretical experiment. Experiments set up here can either have perfect energy resolution in a given analysis window, or no resolution (controlled by the parameter energy_resolution, default being True).
This is how you can define and use an instance of Experiment:
End of explanation
"""
# more general way that uses a general Model class:
# set all sigma_p to zero by default:
default_rate_parameters = dict(mass=50., sigma_si=0., sigma_sd=0., sigma_anapole=0., sigma_magdip=0., sigma_elecdip=0.,
sigma_LS=0., sigma_f1=0., sigma_f2=0., sigma_f3=0.,
sigma_si_massless=0., sigma_sd_massless=0.,
sigma_anapole_massless=0., sigma_magdip_massless=0., sigma_elecdip_massless=0.,
sigma_LS_massless=0., sigma_f1_massless=0., sigma_f2_massless=0., sigma_f3_massless=0.,
fnfp_si=1., fnfp_sd=1.,
fnfp_anapole=1., fnfp_magdip=1., fnfp_elecdip=1.,
fnfp_LS=1., fnfp_f1=1., fnfp_f2=1., fnfp_f3=1.,
fnfp_si_massless=1., fnfp_sd_massless=1.,
fnfp_anapole_massless=1., fnfp_magdip_massless=1., fnfp_elecdip_massless=1.,
fnfp_LS_massless=1., fnfp_f1_massless=1., fnfp_f2_massless=1., fnfp_f3_massless=1.,
v_lag=220., v_rms=220., v_esc=544., rho_x=0.3)
elecdip = dmdd.Model('Elec.dip.light', ['mass','sigma_elecdip'],
dmdd.rate_UV.dRdQ, dmdd.rate_UV.loglikelihood,
default_rate_parameters)
# shortcut for scattering models corresponding to rates coded in rate_UV:
elecdip = dmdd.UV_Model('Elec.dip.', ['mass','sigma_elecdip'])
print 'model: {}, parameters: {}'.format(elecdip.name, elecdip.param_names)
# if you wish to set some of the parameters to be fixed
# when this model is used to fit data, you can define a dict fixed_params, e.g.:
millicharge = dmdd.UV_Model('Millicharge', ['mass', 'sigma_si_massless'],
fixed_params={'fnfp_si_massless': 0})
print 'model: {}, parameters: {}; fixed: {}'.format(millicharge.name,
millicharge.param_names,
millicharge.fixed_params)
"""
Explanation: NOTE: initialization of this class requires passing of the efficiency function. Flat unit efficiency is available in dmdd.dmdd_efficiency module. You may want to include in there any new specific efficiency function you'd like to use.
III. Model Object
This object facilitates handling of a "hypothesis" that describes the scattering interaction at hand (to be used either to simulate recoil spectra, or to fit to the simulated recoil events). You have an option to set any parameter to have a fixed value, which will not be varied if the model is used to fit data.
Here's how you can use a general Model object, or its sub-class UV_Model:
End of explanation
"""
# intialize an Experiment with iodine target, to be passed to Simulation:
iod = dmdd.Experiment('I','iodine',5,80,1000,dmdd.eff.efficiency_unit, energy_resolution=True)
# initialize a simulation with iod, for elecdip model defined above,
# for 50 GeV WIMP, for sigma_si = 70*PAR_NORMS['sigma_elecdip'] = 7e-43 cm^2:
test = dmdd.Simulation('simdemo', iod, elecdip, {'mass':50.,'sigma_elecdip':70.})
# you can easily access various attributes of this class, e.g.:
print 'simulation \'{}\' was done for experiment \'{}\', \
it had N={:.0f} events (<N>={:.0f} events), \n and \
the parameters passed to dRdQ were:\n\n {}'.format(test.name,
test.experiment.name,
test.N,
test.model_N,
test.dRdQ_params)
print '\n List of energies generated in {} is: \n\n'.format(test.name),test.Q
"""
Explanation: IV. Simulation Object
This object handles a single simulated data set (nuclear recoil energy spectrum). It is generaly initialized and used by the MultinestRun object, but can be used stand-alone.
Simulation data will only be generated if a simulation with the right parameters and name does not already exist, or if force_sim=True is provided upon Simulation initialization; if the data exist, it will just be read in. (Data is a list of nuclear recoil energies of "observed" events.) Initializing Simulation with given parameters for the first time will produce 3 files, located by default at $DMDD_PATH/simulations (or ./simulations if $DMDD_PATH not defined):
.dat file with a list of nuclear-recoil energies (keV), drawn from a Poisson distribution with $<N>$ = number of events expected at a given energy for a given underlying scattering model and given experimental parameters.
.pkl file with all relevant initialization parameters for record
.pdf plot of the simulated recoil-energy spectrum with simulated data points (with Poisson error bars) on top of the underlying model
Below is an example of Simulation.
End of explanation
"""
# simulate and analyze data from germanium and xenon targets:
xe = dmdd.Experiment('Xe', 'xenon', 5, 40, 1000, dmdd.eff.efficiency_unit)
ge = dmdd.Experiment('Ge', 'germanium', 0.4, 100, 100, dmdd.eff.efficiency_unit)
# simulate data for anapole interaction:
simmodel = dmdd.UV_Model('Anapole', ['mass','sigma_anapole'])
# fit data with standard SI interaction
fitmodel = dmdd.UV_Model('SI', ['mass', 'sigma_si'], fixed_params={'fnfp_si': 1.})
# initialize run:
run = dmdd.MultinestRun('simdemo1', [xe,ge], simmodel,{'mass':50.,'sigma_anapole':45.},
fitmodel, prior_ranges={'mass':(1,1000), 'sigma_si':(0.001,10000)})
# now run MultiNest and visualize data:
run.fit()
run.visualize()
"""
Explanation: V. MultinestRun Object
This is a "master" class of dmdd that makes use of all other objects. It takes in experimental parameters, particle-physics parameters, and astrophysical parameters, and then generates a simulation (if it doesn't already exist), and prepares to perform MultiNest analysis of simulated data. It has methods to do a MultiNest run (.fit() method) and to visualize outputs (.visualize() method). Model used for simulation does not have to be the same as the Model used for fitting. Simulated spectra from multiple experiments will be analyzed jointly if MultiNest run is initialized with a list of appropriate Experiment objects.
The likelihod function is an argument of the fitting model (Model object); for UV models it is set to dmdd.rate_UV.loglikelihood, and for models that would correspond to rate_genNR, dmdd.rate_genNR.loglikelihood. Both likelihood functions include the Poisson factor, and, if energy_resolution=True of the Experiment at hand, the factors that evaluate probability of each individual event, given the fitting model.
Example usage of MultinestRun is given below:
End of explanation
"""
print run.chainspath
"""
Explanation: The .visualize() method produces 2 types of plots (shown above):
recoil spectra for each experiment used in the analysis, where data points, theory model, and best-fit model are all shown.
2d (marginalized) posteriors for every pair or fitting parameters, showing typically mass vs. cross-section $\sigma_p$.
Simulations are saved in $DMDD_PATH/simulations directory directly, and MultiNest chains and plots produced by the .visualize() method are saved in the appropriate chains file, in this case the following directory:
End of explanation
"""
|
robertoneil/coursera_images | Week2_Part2.ipynb | mit | %matplotlib inline
#import typical packages I'll be using
import cv2
import numpy as np
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = 10, 10 #boiler plate to set the size of the figures
#Load a test image - Lena
im = cv2.imread("lena.tiff")
im_temp = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
plt.imshow(im_temp[232:296,232:296])
#convert for matplotlib from brg to Y CR CB for display
im = cv2.cvtColor(im, cv2.COLOR_BGR2YCR_CB)
#Split into 3 channels, and only use a portion of the image
im_y = im[232:296,232:296,0]
im_cr = im[232:296,232:296,1]
im_cb = im[232:296,232:296,2]
w,h = im_y.shape
block_size = 8
blocks_y = np.zeros((block_size,block_size,w/block_size,h/block_size),np.int)
blocks_cr = np.zeros((block_size,block_size,w/block_size,h/block_size),np.int)
blocks_cb = np.zeros((block_size,block_size,w/block_size,h/block_size),np.int)
for r in range(h/block_size):
for c in range(w/block_size):
blocks_y[r,c] = (im_y[r*block_size : (r+1)*block_size, c*block_size : (c+1)*block_size])
blocks_cr[r,c] = (im_cr[r*block_size : (r+1)*block_size, c*block_size : (c+1)*block_size])
blocks_cb[r,c] = (im_cb[r*block_size : (r+1)*block_size, c*block_size : (c+1)*block_size])
dct_y = np.empty_like(blocks_y).astype(np.float32)
dct_cr = np.empty_like(blocks_cr).astype(np.float32)
dct_cb = np.empty_like(blocks_cb).astype(np.float32)
for r in range(h/block_size):
for c in range(w/block_size):
dct_y[r,c] = cv2.dct(np.float32(blocks_y[r,c]))
dct_cr[r,c] = cv2.dct(np.float32(blocks_cr[r,c]))
dct_cb[r,c] = cv2.dct(np.float32(blocks_cb[r,c]))
#quantize matrix from book 8.30b
normalization = np.asarray(
[16, 11, 10, 16, 24, 40, 51, 61,
12, 12, 14, 19, 26, 58, 60, 55,
14, 13, 16, 24, 40, 57, 69, 56,
14, 17, 22, 29, 51, 87, 80, 62,
18, 22, 37, 56, 68, 109, 103, 77,
24, 35, 55, 64, 81, 104, 113, 92,
49, 64, 78, 87, 103, 121, 120, 101,
72, 92, 95, 98, 112, 100, 103, 99]
).reshape(8,8)
quantized_y = np.empty_like(dct_y)
quantized_cr = np.empty_like(dct_cr)
quantized_cb = np.empty_like(dct_cb)
for r in range(h/block_size):
for c in range(w/block_size):
quantized_y[r,c] = dct_y[r,c]/normalization
quantized_cr[r,c] = dct_cr[r,c]/normalization
quantized_cb[r,c] = dct_cb[r,c]/normalization
quantized_y = quantized_y.astype(np.int)
quantized_cr = quantized_cr.astype(np.int)
quantized_cb = quantized_cb.astype(np.int)
inverted_y = np.empty_like(quantized_y)
inverted_cr = np.empty_like(quantized_cr)
inverted_cb = np.empty_like(quantized_cb)
for r in range(h/block_size):
for c in range(w/block_size):
inverted_y[r,c] = cv2.idct(np.float32(quantized_y[r,c]*normalization))
inverted_cr[r,c] = cv2.idct(np.float32(quantized_cr[r,c]*normalization))
inverted_cb[r,c] = cv2.idct(np.float32(quantized_cb[r,c]*normalization))
#Combine the 3 parts back into a 3 channel image
im_result = np.zeros((64,64,3)).astype(np.uint8)
#Recombine the 3 channels into one image
for r in range(h//block_size):
    for c in range(w//block_size):
im_result[r*block_size:(r+1)*block_size, c*block_size:(c+1)*block_size, 0] = inverted_y[r,c]
im_result[r*block_size:(r+1)*block_size, c*block_size:(c+1)*block_size, 1] = inverted_cr[r,c]
im_result[r*block_size:(r+1)*block_size, c*block_size:(c+1)*block_size, 2] = inverted_cb[r,c]
im_temp = cv2.cvtColor(im_result, cv2.COLOR_YCR_CB2RGB)
plt.imshow(im_temp)
"""
Explanation: Color Images and Beyond
Do JPEG now for color images.
In Matlab, use the rgb2ycbcr command to convert the Red-Green-Blue image to a Luminance and Chrominance one;
then perform the JPEG-style compression on each one of the three channels independently.
After inverting the compression, invert the color transform and visualize the result.
End of explanation
"""
quantized_y = np.empty_like(dct_y)
quantized_cr = np.empty_like(dct_cr)
quantized_cb = np.empty_like(dct_cb)
for r in range(h//block_size):
    for c in range(w//block_size):
quantized_y[r,c] = dct_y[r,c]/normalization
quantized_cr[r,c] = dct_cr[r,c]/(2*normalization)
quantized_cb[r,c] = dct_cb[r,c]/(2*normalization)
quantized_y = quantized_y.astype(int)
quantized_cr = quantized_cr.astype(int)
quantized_cb = quantized_cb.astype(int)
inverted_y = np.empty_like(quantized_y)
inverted_cr = np.empty_like(quantized_cr)
inverted_cb = np.empty_like(quantized_cb)
for r in range(h//block_size):
    for c in range(w//block_size):
inverted_y[r,c] = cv2.idct(np.float32(quantized_y[r,c]*normalization))
inverted_cr[r,c] = cv2.idct(np.float32(quantized_cr[r,c]*2*normalization))
inverted_cb[r,c] = cv2.idct(np.float32(quantized_cb[r,c]*2*normalization))
#Combine the 3 parts back into a 3 channel image
im_result = np.zeros((64,64,3)).astype(np.uint8)
#Recombine the 3 channels into one image
for r in range(h//block_size):
    for c in range(w//block_size):
im_result[r*block_size:(r+1)*block_size, c*block_size:(c+1)*block_size, 0] = inverted_y[r,c]
im_result[r*block_size:(r+1)*block_size, c*block_size:(c+1)*block_size, 1] = inverted_cr[r,c]
im_result[r*block_size:(r+1)*block_size, c*block_size:(c+1)*block_size, 2] = inverted_cb[r,c]
im_temp = cv2.cvtColor(im_result, cv2.COLOR_YCR_CB2RGB)
plt.imshow(im_temp)
"""
Explanation: While keeping the compression ratio constant for the Y channel, increase the compression of the two chrominance channels and observe the results.
End of explanation
"""
#To do
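# Below is a minimal, unofficial sketch of one possible solution. It reuses
# the luminance block `im_y` defined above and interprets the offsets as
# (row, column) displacements from the current pixel; borders are simply cropped.
img = im_y.astype(np.int16)
cur = img[1:, :-1]            # current pixel, offset (0, 0)
up = img[:-1, :-1]            # neighbour at (-1, 0)
right = img[1:, 1:]           # neighbour at (0, 1)
upright = img[:-1, 1:]        # neighbour at (-1, 1)
err1 = cur - up                               # predictor 1: pixel at (-1, 0)
err2 = cur - right                            # predictor 2: pixel at (0, 1)
err3 = cur - (up + upright + right) / 3.0     # predictor 3: average of the three
fig, ax = plt.subplots(1, 4, figsize=(16, 4))
for axis, data, title in zip(ax,
                             [img, err1, err2, err3],
                             ['original', 'error, (-1,0)', 'error, (0,1)', 'error, average']):
    axis.hist(data.ravel(), bins=64)
    axis.set_title(title)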
"""
Explanation: Compute the histogram of a given image and of its prediction errors. If the pixel being processed is at coordinate (0,0), consider
predicting based on just the pixel at (-1,0);
predicting based on just the pixel at (0,1);
predicting based on the average of the pixels at (-1,0), (-1,1), and (0,1).
End of explanation
"""
#To do
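# Minimal, unofficial sketch, assuming `img`, `err1`, `err2` and `err3` from the
# sketch in the previous cell: first-order (Shannon) entropy in bits per pixel.
def entropy(values):
    # Empirical entropy of the rounded sample values.
    _, counts = np.unique(np.round(values), return_counts=True)
    p = counts / float(counts.sum())
    return -np.sum(p * np.log2(p))

for name, data in [('original', img), ('(-1,0)', err1), ('(0,1)', err2), ('average', err3)]:
    print(name, 'entropy:', entropy(data), 'bits/pixel')
# The predictor with the lowest entropy should compress best.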
"""
Explanation: Compute the entropy for each one of the predictors in the previous exercise. Which predictor will compress better?
End of explanation
"""
|
michael-isaev/cse6040_qna | PythonQnA_7_sets.ipynb | apache-2.0 | a = set ([1, 2, 3])
b = set ([2, 3, 4])
print ("Set a is", a)
print ("Set b is", b)
print ("Set intersection is", a & b)
print ("Set union is", a | b)
print ("Set symmetric difference is", a ^ b)
print ("Set difference 'a - b' is", a - b)
print ("Set difference 'b - a' is", b - a)
"""
Explanation: 7. Set Yourself Up for Success
A Python set is an unordered collection: the elements of a set do not have a position or order, so you cannot do indexing, slicing, or other sequence-like operations on sets as you would do on, for instance, lists.
Sets in Python mimic mathematical sets: the elements do not repeat. They are especially handy if you have other collections, like lists or tuples, and need to create a new collection of unique values, or the union of two collections, or a superset (you see, set here is not a coincidence!).
There are many methods (functions) available for a set, and there is also a rich collection of overloaded operators. That includes '&', '|', '-', and others, which correspond to their "natural" mathematical analogues. Here are some of the most useful of those overloaded operations:
* A <= B or A < B: Check if A is a (proper) subset of B
* A >= B / A > B: Check if A is a (proper) superset of B
* A | B: Compute the union of two sets
* A & B: Intersection
* A - B: Difference
* A ^ B: Symmetric difference
As usual, you can read more about that in the docs. Below are some examples of several set operations:
End of explanation
"""
c = frozenset ([1, 2, 3])
d = frozenset ([2, 3, 4])
print ("Set c is", c)
print ("Set d is", d)
print ("Set intersection is", c & d)
print ("Set union is", c | d)
print ("Set symmetric difference is", c ^ d)
print ("Set difference 'c - d' is", c - d)
print ("Set difference 'd - c' is", d - c)
"""
Explanation: Read-only sets: frozenset. Besides the "normal" set, we have another helpful friend in the set family, the frozenset. A frozenset supports the same operations as a normal set, except that the results of those operations are themselves frozensets. (Huh?)
End of explanation
"""
try:
dict1 = {set([1, 3]): 'set as key'}
print(dict1)
except Exception as e:
print(e)
try:
dict2 = {frozenset([1, 3]): 'frozenset as key'}
print(dict2)
except Exception as e:
print(e)
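# Added illustration of the "set of sets" use case mentioned in the text:
# frozensets are hashable, so they can be members of another set.
set_of_sets = {frozenset([1, 2]), frozenset([2, 3])}
print(set_of_sets)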
"""
Explanation: The difference between these two types of set is that a frozenset is immutable while the normal set is mutable. This property gives us the choice to use a frozenset as a key for a dictionary, or to use it when we want to maintain a set of sets. Below you can see an example of trying to use a set and a frozenset as dictionary keys.
End of explanation
"""
print ("Set intersection between {} and {} is {}".format(a, d, a & d))
print ("Set intersection between {} and {} is {}".format(c, b, c & b))
print ("Set difference between {} and {} is {}".format(a, d, a - d))
print ("Set difference between {} and {} is {}".format(d, a, d - a))
"""
Explanation: Exercise. One curious thing about sets and frozensets is that you can mix them together in binary operations. What could be the result of such operations? Well, it's not very obvious. Try to predict what the operations below should return, then check what they actually return. If you're interested in diving deeper inside Python, you can look here to get some insights.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/cmcc-cm2-sr5/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-sr5', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-SR5
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution is used and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
rishuatgithub/MLPy | nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/05-POS-Assessment.ipynb | apache-2.0 | # RUN THIS CELL to perform standard imports:
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy import displacy
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Parts of Speech Assessment
For this assessment we'll be using the short story The Tale of Peter Rabbit by Beatrix Potter (1902). <br>The story is in the public domain; the text file was obtained from Project Gutenberg.
End of explanation
"""
with open('../TextFiles/peterrabbit.txt') as f:
doc = nlp(f.read())
"""
Explanation: 1. Create a Doc object from the file peterrabbit.txt<br>
HINT: Use with open('../TextFiles/peterrabbit.txt') as f:
End of explanation
"""
# Enter your code here:
for tokens in list(doc.sents)[3]:
print(f"{tokens.text:{15}} {tokens.pos_:{10}} {tokens.tag_:{10}} {spacy.explain(tokens.tag_)} ")
"""
Explanation: 2. For every token in the third sentence, print the token text, the POS tag, the fine-grained TAG tag, and the description of the fine-grained tag.
End of explanation
"""
POS_counts = doc.count_by(spacy.attrs.POS)
for k,v in sorted(POS_counts.items()):
print(f'{k}. {doc.vocab[k].text:{10}} {v}')
"""
Explanation: 3. Provide a frequency list of POS tags from the entire document
End of explanation
"""
total_tokens = len([tokens for tokens in doc])
noun_tokens = len([tokens for tokens in doc if tokens.pos_ == 'NOUN'])
(noun_tokens / total_tokens) * 100
"""
Explanation: 4. CHALLENGE: What percentage of tokens are nouns?<br>
HINT: the attribute ID for 'NOUN' is 91
End of explanation
"""
displacy.render(list(doc.sents)[3],style='dep', jupyter=True, options={'distance':50})
"""
Explanation: 5. Display the Dependency Parse for the third sentence
End of explanation
"""
for ent in doc.ents[:2]:
print(ent.text+' - '+ent.label_+' - '+str(spacy.explain(ent.label_)))
"""
Explanation: 6. Show the first two named entities from Beatrix Potter's The Tale of Peter Rabbit
End of explanation
"""
len([s for s in doc.sents])
"""
Explanation: 7. How many sentences are contained in The Tale of Peter Rabbit?
End of explanation
"""
list_of_sents = [nlp(sent.text) for sent in doc.sents]
list_of_ners = [doc for doc in list_of_sents if doc.ents]
len(list_of_ners)
"""
Explanation: 8. CHALLENGE: How many sentences contain named entities?
End of explanation
"""
displacy.render(list_of_sents[0], style='ent', jupyter=True)
"""
Explanation: 9. CHALLENGE: Display the named entity visualization for list_of_sents[0] from the previous problem
End of explanation
"""
|
essicolo/GCI733-A2015 | barriere-capillaire.ipynb | mit | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
from scipy.integrate import quad
from scipy.interpolate import interp1d
"""
Explanation: Suction profiles and lateral drainage in capillary barriers
To run a cell, Ctrl + Enter. To run a cell and move to the next one, Shift + Enter. To run the whole notebook, select Run All in the Cell menu.
Load the libraries
End of explanation
"""
def vanGenuchten(thR, thS, aVG, nVG, mVG, ksat, psi, lVG=0.5):
th = thR + (thS - thR) * (1+(aVG * psi) ** nVG) ** (-mVG)
k = ksat*((1-((aVG*psi)**(nVG*mVG))* \
((1+((aVG*psi)**nVG))**(-mVG)))**2) / \
((1+((aVG*psi)**nVG))**(mVG*lVG))
return(pd.DataFrame({'psi':psi, 'theta':th, 'k':k}))
"""
Explanation: Hydraulic models
The hydraulic properties of unsaturated porous media can be described by their water retention and hydraulic conductivity properties. The van Genuchten (1980) model will be used.
Water retention curve, van Genuchten (1980):
\begin{align}
\theta(\psi) = \theta_{r} + (\theta_{s} - \theta_{r}) (1+(a_{VG} \psi)^{n_{VG}})^{-m_{VG}}
\end{align}
Hydraulic conductivity function, van Genuchten et al. (1991), based on van Genuchten (1980) and Mualem (1976):
\begin{align}
k(\psi) = k_{sat} \frac{\left(1-(a_{VG} \psi)^{n_{VG} m_{VG}} \left(1+(a_{VG} \psi)^{n_{VG}}\right)^{-m_{VG}}\right)^2}{\left(1+(a_{VG} \psi)^{n_{VG}}\right)^{m_{VG} l_{VG}}}
\end{align}
End of explanation
"""
# CBL
cbl_thR = 0.017
cbl_thS = 0.37
cbl_aVG = 3.5 * 9.807 # 9.807 converts values given in 1/kPa to 1/m
cbl_nVG = 3.0
cbl_mVG = 1 - 1/cbl_nVG
cbl_lVG = 0.5
cbl_ksat = 2.3e-3 # m/s
# MRL
mrl_thR = 0.1
mrl_thS = 0.4
mrl_aVG = 1.8 * 9.807 # 9.807 converts values given in 1/kPa to 1/m
mrl_nVG = 1.3
mrl_mVG = 1 - 1/mrl_nVG
mrl_lVG = 0.5
mrl_ksat = 3e-4 # m/s
"""
Explanation: A capillary barrier includes a capillary break layer (CBL) on top of which a moisture retaining layer (MRL) is installed. Let's define the van Genuchten parameters for the CBL and the MRL.
End of explanation
"""
npoints = 1000
cbl_psi = np.logspace(start = -2, stop = 2, num = npoints, endpoint = True)
mrl_psi = np.logspace(start = -2, stop = 2, num = npoints, endpoint = True)
"""
Explanation: We will also need psi vectors to tell our vanGenuchten function at which suction values (in metres) the water content and hydraulic conductivity should be computed.
End of explanation
"""
cbl_VG = vanGenuchten(thR=cbl_thR, thS=cbl_thS, aVG=cbl_aVG, nVG=cbl_nVG,
mVG=cbl_mVG, ksat=cbl_ksat, psi=cbl_psi, lVG=cbl_lVG)
mrl_VG = vanGenuchten(thR=mrl_thR, thS=mrl_thS, aVG=mrl_aVG, nVG=mrl_nVG,
mVG=mrl_mVG, ksat=mrl_ksat, psi=mrl_psi, lVG=mrl_lVG)
"""
Explanation: Let's pass these parameters to the vanGenuchten function.
End of explanation
"""
cbl_VG.head()
"""
Explanation: As requested in the vanGenuchten function, the output of the function is a table (a pandas DataFrame). Let's look at the head of cbl_VG, for example.
End of explanation
"""
fig1, axes = plt.subplots(nrows=2, ncols=1, figsize=(6, 12))
# Water retention curve
## Decorations
axes[0].set_ylabel(r'$\theta (m^3/m^3)$')
axes[0].set_xscale('log')
axes[0].set_xticks([0.1, 1, 10, 100, 1000])
## Plot
axes[0].plot(cbl_VG.psi * 9.807, cbl_VG.theta, linewidth=2, label="CBL material")
axes[0].plot(mrl_VG.psi * 9.807, mrl_VG.theta, label="MRL material")
axes[0].legend()
# Hydraulic conductivity function
## Decorations
axes[1].set_xlabel(r'$\psi (kPa)$')
axes[1].set_ylabel(r'$k (m/s)$')
axes[1].set_xscale('log')
axes[1].set_xticks([0.1, 1, 10, 100, 1000])
axes[1].set_yscale('log')
axes[1].set_ylim([1e-16, 1e-2])
## Plot
axes[1].plot(cbl_VG.psi * 9.807, cbl_VG.k, linewidth=2, label="CBL material")
axes[1].plot(mrl_VG.psi * 9.807, mrl_VG.k, label="MRL material")
"""
Explanation: Let's plot the water retention curves as well as the hydraulic conductivity function.
End of explanation
"""
def kisch(thR, thS, aVG, nVG, mVG, ksat, psi, q, lVG=0.5, psi_min=1e-3, z_min=0):
model_init = vanGenuchten(thR=thR, thS=thS, aVG=aVG, nVG=nVG, mVG=mVG, ksat=ksat, lVG=lVG,
psi=psi)
interp_func = interp1d(np.log10(model_init.k), np.log10(model_init.psi))
psi_q = 10**interp_func(np.log10(q))
model_kisch = vanGenuchten(thR=thR, thS=thS, aVG=aVG, nVG=nVG, mVG=mVG, ksat=ksat, lVG=lVG,
psi=np.logspace(start = np.log10(psi_min), stop = np.log10(psi_q),
num = model_init.shape[0]))
delta_psi_kisch = model_kisch.psi.diff().shift(-1)
z = (delta_psi_kisch / (1 - q/model_kisch.k)).cumsum() + z_min
return(pd.DataFrame({'psi': model_kisch.psi, 'z':z}))
"""
Explanation: Suction profile in a column of porous material subjected to a unit flux
A similar demonstration was first published by Kisch (1959).
\begin{align}
q = k(\psi) \frac{dh}{dz}
\end{align}
\begin{align}
h = z + p = z - \psi
\end{align}
\begin{align}
q = k(\psi) \frac{dz - d\psi}{dz}
\end{align}
\begin{align}
q = k(\psi) (1 - \frac{d\psi}{dz})
\end{align}
\begin{align}
\frac {q}{k(\psi)} = 1 - \frac{d\psi}{dz}
\end{align}
\begin{align}
\frac{d\psi}{dz} = 1-\frac {q}{k(\psi)}
\end{align}
\begin{align}
dz = \frac{d\psi}{1-\frac {q}{k(\psi)}}
\end{align}
\begin{align}
z(\psi) = \int_{\psi_{min}}^{\psi} \frac{1}{1-\frac {q}{k(\psi)}}d\psi
\end{align}
The integral can be approximated by:
\begin{align}
z(\psi) = \sum_{i=1}^{n} \frac{\Delta\psi}{1-\frac {q}{k_n(\psi)}}
\end{align}
The kisch function is one way among others to encode this in Python:
End of explanation
"""
unit_flow=1e-8
cbl_kisch = kisch(thR=cbl_thR, thS=cbl_thS, aVG=cbl_aVG, nVG=cbl_nVG,
mVG=cbl_mVG, ksat=cbl_ksat, psi=cbl_psi,
q=unit_flow, lVG=cbl_lVG, z_min=0)
cbl_kisch.head()
plt.plot(cbl_kisch.psi, cbl_kisch.z, '-')
plt.xlabel(r'$\psi (m)$')
plt.ylabel('Elevation (m)')
"""
Explanation: For example, let's take a unit flux crossing a column of the CBL soil with a suction of 0 kPa at its base
End of explanation
"""
cbl_kisch.psi.max()
plt.plot(cbl_VG.psi, cbl_VG.k)
plt.xscale('log')
plt.yscale('log')
plt.xlim([1e-2, 1])
plt.ylim([1e-12, 1e-2])
plt.xlabel(r'$\psi (m)$')
plt.ylabel('k (m/s)')
plt.axhline(unit_flow, ls=':')
plt.axvline(cbl_kisch.psi.max(), ls=':')
"""
Explanation: The plot shows that the suction increases linearly with elevation, then bends to converge towards a value of about 0.15 m.
End of explanation
"""
cbl_thickness = 0.3
mrl_kisch = kisch(thR=mrl_thR, thS=mrl_thS, aVG=mrl_aVG, nVG=mrl_nVG,
mVG=mrl_mVG, ksat=mrl_ksat, psi=mrl_psi,
q=unit_flow, lVG=mrl_lVG, psi_min=cbl_kisch.psi.max(), z_min=cbl_thickness)
"""
Explanation: An MRL can be superposed on top by specifying its elevation and the suction value at its base (equal to the maximum suction value of the CBL).
End of explanation
"""
cbl_kisch_VG = vanGenuchten(thR=cbl_thR, thS=cbl_thS, aVG=cbl_aVG, nVG=cbl_nVG,
mVG=cbl_mVG, ksat=cbl_ksat, psi=cbl_kisch.psi, lVG=cbl_lVG)
cbl_kisch_VG['z'] = cbl_kisch.z
mrl_kisch_VG = vanGenuchten(thR=mrl_thR, thS=mrl_thS, aVG=mrl_aVG, nVG=mrl_nVG,
mVG=mrl_mVG, ksat=mrl_ksat, psi=mrl_kisch.psi, lVG=mrl_lVG)
mrl_kisch_VG['z'] = mrl_kisch.z
mrl_kisch_VG.head()
"""
Explanation: Plots of the suction profile in a capillary barrier
End of explanation
"""
interface_VG = pd.DataFrame({'k':cbl_kisch_VG.k.min(),
'psi': cbl_kisch_VG.psi.max(),
'theta': cbl_kisch_VG.theta.min(),
'z': cbl_thickness},
index=['interface'])
kisch_VG = pd.concat([cbl_kisch_VG, interface_VG, mrl_kisch_VG]).dropna(axis=0)
"""
Explanation: To ensure continuity between the CBL and the MRL, let's add points at the interface.
End of explanation
"""
mrl_thickness = 0.8
kisch_VG = kisch_VG.loc[kisch_VG.z <= (cbl_thickness + mrl_thickness), :]
fig2, axes = plt.subplots(nrows=1, ncols=3, figsize=(16,6))
# Suction profile
axes[0].set_xlabel(r'$\psi (kPa)$')
axes[0].set_ylabel(r'$Elevation (m)$')
axes[0].axhline(cbl_thickness, linestyle=':')
axes[0].plot(kisch_VG.psi * 9.807, kisch_VG.z, linestyle='-')
# WC profile
axes[1].set_xlabel(r'$\theta (kPa)$')
axes[1].yaxis.set_visible(False)
axes[1].axhline(cbl_thickness, ls=':')
axes[1].plot(kisch_VG.theta, kisch_VG.z, linestyle='-')
# k profile
axes[2].set_xlabel(r'$k (m/s)$')
axes[2].yaxis.set_visible(False)
axes[2].set_xscale('log')
axes[2].axhline(cbl_thickness, linestyle=':')
axes[2].plot(kisch_VG.k, kisch_VG.z, linestyle='-')
"""
Explanation: Let's set an MRL thickness as the upper limit of the plot.
End of explanation
"""
def k_vanGenuchten(x, aVG, nVG, mVG, ksat, lVG = 0.5):
k = ksat * ((1 - ((aVG * x)**(nVG * mVG)) * \
((1 + ((aVG * x)**nVG))**(-mVG)))**2) / \
((1 + ((aVG * x)**nVG))**(mVG * lVG))
return(k)
"""
Explanation: Lateral drainage in an inclined capillary barrier
Ross (1990) describes the following model.
\begin{align}
Q_{max} = \tan(\phi) \int_{\psi_{CBC}}^{\psi_{CRC}} k(\psi) d\psi \\
L = \frac {Q_{max}}{q}
\end{align}
End of explanation
"""
pente = 0.25
Qmax = pente * quad(k_vanGenuchten, # function to integrate
                    cbl_kisch.psi.max(), mrl_kisch.psi.max(), # integration bounds
                    args=(mrl_aVG, mrl_nVG, mrl_mVG, mrl_ksat, mrl_lVG))[0] # function arguments
L = Qmax / unit_flow
print ('The maximum diversion capacity of the capillary barrier is', Qmax, 'm²/s.')
print ('The maximum diversion length of the capillary barrier is', L, 'm.')
"""
Explanation: Diversion capacity and diversion length
End of explanation
"""
cbl_kisch.psi.max()
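# Rough added estimate (not from the original notebook): the elevation where the
# CBL suction profile reaches ~95 % of its asymptotic value (an arbitrary threshold)
# gives an idea of the minimum CBL thickness needed for the capillary break.
z95 = cbl_kisch.z[cbl_kisch.psi >= 0.95 * cbl_kisch.psi.max()].min()
print('Approximate minimum CBL thickness:', z95, 'm')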
"""
Explanation: Minimum CBL thickness
End of explanation
"""
mrl_psi = np.linspace(start = cbl_kisch.psi.max(), stop = mrl_kisch.psi.max(), num = 20, endpoint = True)
mrl_thickness = mrl_psi - cbl_kisch.psi.max()
Qmax_thickness = np.array([])
for i in range(0, len(mrl_psi)):
Qmax_thickness = np.append(Qmax_thickness, pente * quad(k_vanGenuchten, cbl_kisch.psi.max(), mrl_psi[i], args=(mrl_aVG, mrl_nVG, mrl_mVG, mrl_ksat, mrl_lVG))[0])
L_thickness = Qmax_thickness / unit_flow
Qmax_thickness
plt.axhline(L, linestyle=':')
plt.plot(mrl_thickness, L_thickness)
plt.xlabel('MRL thickness (m)')
plt.ylabel('Diversion length (m)')
"""
Explanation: Thickness of the moisture retaining layer (CRC or MRL)
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/tf_hub_generative_image_module.ipynb | apache-2.0 | # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
# Install imageio for creating animations.
!pip -q install imageio
!pip -q install scikit-image
!pip install git+https://github.com/tensorflow/docs
#@title Imports and function definitions
from absl import logging
import imageio
import PIL.Image
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.random.set_seed(0)
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
import time
try:
from google.colab import files
except ImportError:
pass
from IPython import display
from skimage import transform
# We could retrieve this value from module.get_input_shapes() if we didn't know
# beforehand which module we will be using.
latent_dim = 512
# Interpolates between two vectors that are non-zero and don't both lie on a
# line going through origin. First normalizes v2 to have the same norm as v1.
# Then interpolates between the two vectors on the hypersphere.
def interpolate_hypersphere(v1, v2, num_steps):
v1_norm = tf.norm(v1)
v2_norm = tf.norm(v2)
v2_normalized = v2 * (v1_norm / v2_norm)
vectors = []
for step in range(num_steps):
interpolated = v1 + (v2_normalized - v1) * step / (num_steps - 1)
interpolated_norm = tf.norm(interpolated)
interpolated_normalized = interpolated * (v1_norm / interpolated_norm)
vectors.append(interpolated_normalized)
return tf.stack(vectors)
# Simple way to display an image.
def display_image(image):
image = tf.constant(image)
image = tf.image.convert_image_dtype(image, tf.uint8)
return PIL.Image.fromarray(image.numpy())
# Given a set of images, show an animation.
def animate(images):
images = np.array(images)
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images)
return embed.embed_file('./animation.gif')
logging.set_verbosity(logging.ERROR)
"""
Explanation: Generating artificial faces with the CelebA progressive GAN model
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/tf_hub_generative_image_module"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tf_hub_generative_image_module.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/tf_hub_generative_image_module.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/tf_hub_generative_image_module.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/google/progan-128/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See TF Hub model</a></td>
</table>
This Colab demonstrates how to use a TF-Hub module based on a generative adversarial network (GAN). The module maps from an N-dimensional vector, called the latent space, to an RGB image.
Two examples are provided:
Mapping from the latent space to images, and
Given a target image, using gradient descent to find a latent vector that generates an image similar to the target image.
Optional prerequisites
Familiarity with low-level TensorFlow concepts.
Generative adversarial network on Wikipedia.
The paper on progressive GANs: Progressive Growing of GANs for Improved Quality, Stability, and Variation.
More models
Here you can find all models currently hosted on tfhub.dev that can be used to generate images.
Setup
End of explanation
"""
progan = hub.load("https://tfhub.dev/google/progan-128/1").signatures['default']
def interpolate_between_vectors():
v1 = tf.random.normal([latent_dim])
v2 = tf.random.normal([latent_dim])
# Creates a tensor with 25 steps of interpolation between v1 and v2.
vectors = interpolate_hypersphere(v1, v2, 50)
# Uses module to generate images from the latent space.
interpolated_images = progan(vectors)['default']
return interpolated_images
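# Side note (my own addition, not part of the original tutorial): a plain linear
# interpolation between two random latent vectors typically has a smaller norm near
# the midpoint, which is why interpolate_hypersphere (defined earlier) renormalizes
# every step back onto the hypersphere.
demo_v1 = tf.random.normal([latent_dim])
demo_v2 = tf.random.normal([latent_dim])
linear_mid = (demo_v1 + demo_v2) / 2.0
spherical_mid = interpolate_hypersphere(demo_v1, demo_v2, 3)[1]
print('norm of v1:                ', tf.norm(demo_v1).numpy())
print('norm of linear midpoint:   ', tf.norm(linear_mid).numpy())
print('norm of spherical midpoint:', tf.norm(spherical_mid).numpy())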
interpolated_images = interpolate_between_vectors()
animate(interpolated_images)
"""
Explanation: Latent space interpolation
Random vectors
Latent space interpolation between two randomly initialized vectors. We will use the TF-Hub module progan-128, which contains a pre-trained progressive GAN.
End of explanation
"""
image_from_module_space = True # @param { isTemplate:true, type:"boolean" }
def get_module_space_image():
vector = tf.random.normal([1, latent_dim])
images = progan(vector)['default'][0]
return images
def upload_image():
uploaded = files.upload()
image = imageio.imread(uploaded[list(uploaded.keys())[0]])
return transform.resize(image, [128, 128])
if image_from_module_space:
target_image = get_module_space_image()
else:
target_image = upload_image()
display_image(target_image)
"""
Explanation: Finding the closest vector in latent space
Pick a target image, for example an image generated by the module or an image you upload yourself.
End of explanation
"""
tf.random.set_seed(42)
initial_vector = tf.random.normal([1, latent_dim])
display_image(progan(initial_vector)['default'][0])
def find_closest_latent_vector(initial_vector, num_optimization_steps,
steps_per_image):
images = []
losses = []
vector = tf.Variable(initial_vector)
optimizer = tf.optimizers.Adam(learning_rate=0.01)
loss_fn = tf.losses.MeanAbsoluteError(reduction="sum")
for step in range(num_optimization_steps):
if (step % 100)==0:
print()
print('.', end='')
with tf.GradientTape() as tape:
image = progan(vector.read_value())['default'][0]
if (step % steps_per_image) == 0:
images.append(image.numpy())
target_image_difference = loss_fn(image, target_image[:,:,:3])
# The latent vectors were sampled from a normal distribution. We can get
# more realistic images if we regularize the length of the latent vector to
# the average length of vector from this distribution.
regularizer = tf.abs(tf.norm(vector) - np.sqrt(latent_dim))
loss = target_image_difference + regularizer
losses.append(loss.numpy())
grads = tape.gradient(loss, [vector])
optimizer.apply_gradients(zip(grads, [vector]))
return images, losses
num_optimization_steps=200
steps_per_image=5
images, loss = find_closest_latent_vector(initial_vector, num_optimization_steps, steps_per_image)
plt.plot(loss)
plt.ylim([0,max(plt.ylim())])
animate(np.stack(images))
"""
Explanation: Once we have defined the target image and the image generated from the latent-space variable, we can use gradient descent to find the variable values that minimize the loss.
End of explanation
"""
display_image(np.concatenate([images[-1], target_image], axis=1))
"""
Explanation: Compare the result with the target:
End of explanation
"""
|
elmaso/tno-ai | aind2-cnn/mnist-mlp/mnist_mlp.ipynb | gpl-3.0 | from keras.datasets import mnist
# use Keras to import pre-shuffled MNIST database
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("The MNIST database has a training set of %d examples." % len(X_train))
print("The MNIST database has a test set of %d examples." % len(X_test))
"""
Explanation: Artificial Intelligence Nanodegree
Convolutional Neural Networks
In this notebook, we train an MLP to classify images from the MNIST database.
1. Load MNIST Database
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np
# plot first six training images
fig = plt.figure(figsize=(20,20))
for i in range(6):
ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
ax.imshow(X_train[i], cmap='gray')
ax.set_title(str(y_train[i]))
"""
Explanation: 2. Visualize the First Six Training Images
End of explanation
"""
def visualize_input(img, ax):
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
ax.annotate(str(round(img[x][y],2)), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)
"""
Explanation: 3. View an Image in More Detail
End of explanation
"""
# rescale [0,255] --> [0,1]
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
"""
Explanation: 4. Rescale the Images by Dividing Every Pixel in Every Image by 255
End of explanation
"""
from keras.utils import np_utils
# print first ten (integer-valued) training labels
print('Integer-valued labels:')
print(y_train[:10])
# one-hot encode the labels
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
# print first ten (one-hot) training labels
print('One-hot labels:')
print(y_train[:10])
"""
Explanation: 5. Encode Categorical Integer Labels Using a One-Hot Scheme
End of explanation
"""
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
# summarize the model
model.summary()
"""
Explanation: 6. Define the Model Architecture
End of explanation
"""
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
"""
Explanation: 7. Compile the Model
End of explanation
"""
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
"""
Explanation: 8. Calculate the Classification Accuracy on the Test Set (Before Training)
End of explanation
"""
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='mnist.model.best.hdf5',
verbose=1, save_best_only=True)
hist = model.fit(X_train, y_train, batch_size=128, epochs=10,
validation_split=0.2, callbacks=[checkpointer],
verbose=1, shuffle=True)
"""
Explanation: 9. Train the Model
End of explanation
"""
# load the weights that yielded the best validation accuracy
model.load_weights('mnist.model.best.hdf5')
"""
Explanation: 10. Load the Model with the Best Classification Accuracy on the Validation Set
End of explanation
"""
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
"""
Explanation: 11. Calculate the Classification Accuracy on the Test Set
End of explanation
"""
|
halfak/are-the-bots-really-fighting | analysis/main/5-1-descriptive-stats.ipynb | mit | import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pickle
import datetime
%matplotlib inline
start = datetime.datetime.now()
"""
Explanation: Section 5.1: Descriptive statistics on the bot-bot revert dataset
This is the first data analysis script used to produce findings in the paper, which you can run based entirely off the files in this GitHub repository.
This entire notebook can be run from the beginning with Kernel -> Restart & Run All in the menu bar. It takes about 1 minute to run on a laptop running a Core i5-2540M processor.
End of explanation
"""
!unxz -kf ../../datasets/parsed_dataframes/df_all_2016.pickle.xz
!ls -lah ../../datasets/parsed_dataframes/*
with open("../../datasets/parsed_dataframes/df_all_2016.pickle", "rb") as f:
df_all = pickle.load(f)
len(df_all)
"""
Explanation: Data processing
End of explanation
"""
df_all.sample(2).transpose()
"""
Explanation: Format of dataset
End of explanation
"""
gb = df_all[df_all['page_namespace']==0].groupby(["language","reverting_year"])
sns.set(font_scale=1.5)
gb['rev_id'].count().unstack().transpose()
"""
Explanation: Descriptive statistics
Number of bot-bot reverts per language over time, articles only
EGBF looked at bot-bot reverts from 2001-2010, how have things changed since 2010?
Paper section:
End of explanation
"""
sns.set(font_scale=1.5)
sns.set_style("whitegrid")
groupby_unstack = gb['revisions_reverted'].count().unstack().transpose()
ax = groupby_unstack.plot(kind='line', logy=True, figsize=[10,6], colormap="Accent")
plt.xlim(2004,2018)
plt.ylabel("Number of bot-bot reverts (log scaled)")
plt.xlabel("Year of reverting edit")
#plt.suptitle("Bot-bot reverts per language by reverting year, articles only")
leg = plt.legend()
for legobj in leg.legendHandles:
legobj.set_linewidth(8.0)
plt.savefig("reverts-yearly-counts.pdf", bbox_inches='tight', dpi=600)
"""
Explanation: Plot
End of explanation
"""
gb['rev_id'].count().unstack().transpose().sum()
"""
Explanation: Number of bot-bot reverts per language, all years, articles only
End of explanation
"""
gb['rev_id'].count().unstack().transpose().sum().sum()
"""
Explanation: Total number of bot-bot reverts, all 7 languages, all years, articles only
End of explanation
"""
gb_lang_nstype = df_all.groupby(["language", "namespace_type"])
gb_lang_nstype['revisions_reverted'].count().unstack().transpose()
"""
Explanation: Number of bot-bot reverts per language over time, all namespaces
End of explanation
"""
sns.set(font_scale=2)
sns.set_style("whitegrid")
g = sns.factorplot(data=df_all,
x='language',
y=None,
hue='namespace_type',
kind='count',
size=8,
palette="Accent",
aspect = 1)
plt.savefig("reverts-namespace-counts.pdf", bbox_inches='tight', dpi=600)
"""
Explanation: Plot
End of explanation
"""
gb_lang_nstype['revisions_reverted'].count().unstack().transpose().sum()
"""
Explanation: Number of bot-bot reverts per language, all years, all namespaces
End of explanation
"""
gb_lang_nstype['revisions_reverted'].count().unstack().sum()
"""
Explanation: Number of bot-bot reverts by namespace type, all 7 languages, all years, all namespaces
End of explanation
"""
df_all['namespace_type'].value_counts(normalize=True)
"""
Explanation: Proportion of bot-bot reverts by namespace type, all 7 languages, all years, all namespaces
End of explanation
"""
1 - df_all['namespace_type'].value_counts(normalize=True)['article']
"""
Explanation: Proportion of bot-bot reverts outside of the main/article namespace:
Referenced in paper section 5.1
End of explanation
"""
end = datetime.datetime.now()
time_to_run = end - start
minutes = int(time_to_run.seconds/60)
seconds = time_to_run.seconds % 60
print("Total runtime: ", minutes, "minutes, ", seconds, "seconds")
"""
Explanation: Runtime
End of explanation
"""
|
GoogleCloudPlatform/oss-test-infra | ml/tf-prow-squad.ipynb | apache-2.0 | vm_image_project='deeplearning-platform-release'
vm_image_family='tf-ent-2-8-cu113-notebooks'
machine_type='n1-standard-8'
location='us-central1-a'
accelerator_type='CHOOSE' # eg, 'NVIDIA_TESLA_V100'
accelerator_cores=1
project='MY_PROJECT_ID'
instance_name='MY_INSTANCE_NAME'
print('Run the following command:')
print(' \\\n '.join([
f' gcloud notebooks instances create {instance_name}',
f'--project={project}',
f'--vm-image-project={vm_image_project}',
f'--vm-image-family={vm_image_family}',
f'--machine-type={machine_type}',
f'--location={location}',
f'--accelerator_type={accelerator_type}',
f'--accelerator_cores={accelerator_cores}',
]))
"""
Explanation: Fine-tuning BERT to answer SQuAD questions using TensorFlow2
Overview
This notebook demonstrates how to:
Download the SQuAD dataset using tensorflow_datasets
Download a pretrained BERT model using tensorflow_hub and its corresponding tokenizer
Process the squad dataset to:
Tokenize the contexts and questions
Pack each context+question pair into a TensorFlow BERT input
Identify the start and end token in the context
Convert the start/end context token index into the expected outputs
Construct a model which converts the input into output predictions
Choose a loss for each output
Use the predictions to print out the answer
Find the highest probability start and end index
Table of contents
Setup notebook
Setup dependencies
Tokenizer
Dataset
Model
Train
Export
Evaluate
Setup notebook
Back to Table of Contents
First you will need a machine with jupyter installed as well as a GPU/TPU.
One option to consider is to use Google Colaboratory.
This notebook was created with a user-managed notebook on Google Cloud's Vertex AI platform.
Create a notebook in your own project by going to https://notebook.new/ or else customizing the following command:
End of explanation
"""
print('Stop your notebook:')
print(f' gcloud notebooks instances stop {instance_name} --project={project} --location={location}')
print('Delete your notebook:')
print(f' gcloud notebooks instances delete {instance_name} --project={project} --location={location}')
"""
Explanation: Remember that these machines are expensive, many hundreds of dollars a month. So make sure you stop the VM when you are not using it, either at the Vertex AI workbench or else using gcloud:
End of explanation
"""
!pip install -U "tensorflow-text==2.8.*"
# tf-models-official 2.8.0 breaks the official.nlp.bert.configs import below for some reason, so use the previous version.
!pip install tf-models-official==2.7.1
!pip install pydot
!sudo apt install graphviz
"""
Explanation: After creating the notebook:
1) head over to the Vertex AI workbench
2) Wait for the OPEN JUPYTERLAB button to appear next to the MY_INSTANCE_NAME you chose
3) Click on the button.
This should open JupyterLab, showing you a launcher tab. Click File -> New Launcher to create another one.
On the left side there should be a few buttons, one of which is a folder icon. Clicking this will close/open the file browser on the left sidebar.
This file browser should have a + button (which creates a new launcher tab) as well as an up arrow, which allows you to upload files.
Click the upload button and upload this .ipynb python notebook. After uploading it should now appear in your file list.
Double-click the file you uploaded in the browser, and it should open this notebook. You are ready to go!
Setup dependencies
Back to Table of Contents
Make sure the following packages are installed:
End of explanation
"""
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
# Load the required submodules
from official.nlp import optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
"""
Explanation: Now you can import all of the python modules that this notebook depends on
End of explanation
"""
import tensorflow_text as text # A dependency of the preprocessing model
import tensorflow_addons as tfa
"""
Explanation: Additional imports that may be necessary
End of explanation
"""
default_strategy = tf.distribute.get_strategy()
if os.environ.get('COLAB_TPU_ADDR'):
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
print('Using TPU')
elif tf.config.list_physical_devices('GPU'):
# https://www.tensorflow.org/guide/distributed_training
strategy = tf.distribute.MirroredStrategy()
# TODO(fejta): strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
# TODO(fejta): default_strategy = tf.distribute.get_strategy()
print('Using GPU')
else:
raise ValueError('Running on CPU is not recommended.')
"""
Explanation: A couple of different strategies for distributing the work; running this cell early also gets some of TensorFlow's debug output out of the way
End of explanation
"""
print('Select pretrained bert model')
# Pre-trained model
tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4'
# Matching encoder
tfhub_handle_preprocess = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
print(' ', tfhub_handle_encoder)
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12"
print('Files in', gs_folder_bert)
gs_files = tf.io.gfile.listdir(gs_folder_bert)
print(' ', '\n '.join(gs_files))
"""
Explanation: Tokenizer
Back to Table of Contents
First let's download the BERT model we wil use. The Choose a BERT model to fine-tune section of the GLUE fine-tuning (and similar tf BERT docs) contain a Toggle code button that lists other encoder/preprocessing pairs.
Let's choose the one with hyperparameters that match the BERT paper: 12 layers, 768 hidden features, 12 attention heads.
End of explanation
"""
print('Create reversible tokenizer')
tokenizer = bert.tokenization.FullTokenizer(
vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
do_lower_case=True)
"""
Explanation: The tfhub_handle_preprocess does one-way preprocessing, which is going to be hard to work with. The SQuAD dataset returns a start index as well as the answer text. We will need to identify the matching tokens in the text and their start/end index.
Additionally, when we get predictions we will need to be able to convert a list of tokens back into text. Using the BERT tokenizer section from the Fine-tuning BERT documentation provides a better way to do this, which we will copy:
End of explanation
"""
def to_token_ids(s):
"""Converts 'FUN stuffing' into ['fun', 'stuff', '##ing'] and then [7, 2089, 88]."""
return tokenizer.convert_tokens_to_ids(tokenizer.tokenize(s))
def from_token_ids(ids, lossy=True):
"""Converts [7, 2089, 88] into ['fun', 'stuff', '##ing'] and then 'fun stuff ##ing' or 'fun stuffing'."""
s = ' '.join(tokenizer.convert_ids_to_tokens(ids))
if lossy:
s = s.replace('[CLS] ', '').replace(' [PAD]', '').replace(' [SEP]', '\n\n').replace(' ##', '')
return s
"""
Explanation: Let's define a couple helper functions to help us go between strings and tokens.
End of explanation
"""
print('String to token id list:')
orig = 'This is a very interesting sentence.'
ids = to_token_ids(orig)
print(orig, 'becomes:', ids)
print('Token ids to string:')
s = from_token_ids(ids)
print(ids, 'becomes:', s)
"""
Explanation: Now let's try it out!
End of explanation
"""
print('Load squad from tfds...')
out = tfds.load('squad/v1.1', with_info=True, batch_size=-1) # -1 means whole dataset in mem
squad, info = out
print('Done!')
"""
Explanation: Dataset
Back to Table of Contents
The SQuAD dataset is a popular dataset that tests reading comprehension. People are given a wikipedia paragraph and a question about it. The goal is to highlight the correct answer in the text.
The BERT paper mentions this as one of the fine-tuning exercises it does very well on. We will now try to replicate its results.
The tensorflow_datasets module includes squad, which makes it easy to download and prepare for training.
End of explanation
"""
squad.keys(), squad['train'].keys()
print(info)
"""
Explanation: Let's take a look at what we got:
End of explanation
"""
def decode(t):
"""Decode a tensor string into printable one."""
return t.numpy().decode('utf-8')
patches = {} # No patches, see -Copy1.ipynb, TODO(fejta): add these
# BERT uses [CLS] for the start and [SEP] separates the context and question
tok_cls, tok_sep = tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
seq_length = 384
zeros = np.zeros(seq_length, int)
SKIP = (zeros, zeros, zeros), (-1, -1)
def tokenize_example(ex):
"""Returns a packed example after tokenizing the input/finding output."""
context = ex['context']
context_txt = decode(context)
context_ids = to_token_ids(context_txt)
question_ids = to_token_ids(decode(ex['question']))
# TODO(fejta): handle impossible questions
if 'answers' not in ex: # Probably an example we are predicting.
start_idx, end_idx = 0, 0
else: # Try and identify the start and end index.
# Check if this is a patched example
exid = decode(ex['id'])
ctx_start, atext_txt = patches.get(exid, (None, None))
if ctx_start and ctx_start < 0: # patch says to SKIP
return SKIP
# Now identify where the answer appears in the context.
atext_txt = atext_txt or decode(ex['answers']['text'][0])
answer_ids = to_token_ids(atext_txt)
if not ctx_start:
astart = ex['answers']['answer_start']
ctx_start = int(astart[0])
if ctx_start == 1:
ctx_start = 0
ctx_left = context_txt[:ctx_start]
left_ids = to_token_ids(ctx_left)
start_idx = len(left_ids)
end_idx = start_idx + len(answer_ids)
context_answer_ids = context_ids[start_idx:end_idx]
# Make sure have the answer
if context_answer_ids != answer_ids:
return SKIP
return pack_example(context_ids, question_ids, start_idx, end_idx)
def pack_example(context_ids, question_ids, start_idx, end_idx):
"""Returns a ((words, types, mask), (start, end) tuple given the inputs"""
# Format is [CLS, CTX1, CTX2, ..., CTXN, SEP, Q1, Q2, ..., SEP, 0, 0, ...]
# AKA, CLS token, context tokens, SEP token, question tokens, SEP token, padding.
# The CLS and SEP tokens are special tokens BERT expects.
# NOTE: the BERT paper puts the question before the context, but
# this seems easier.
words = [tok_cls] + context_ids + [tok_sep] + question_ids + [tok_sep]
# NOTE: the BERT paper does something fancier here, we just SKIP inputs that
# are too long for now.
if len(words) > seq_length:
return SKIP
# The types input distinguishes context and question.
types = [0] * (len(context_ids)+2) + [1] * (len(question_ids) + 1)
# The mask input specifies non-padding tokens.
masks = [1] * len(types)
# Padding ensures that it is exactly seq_length.
pad_len = seq_length - len(masks)
padding = [0] * pad_len
types += padding
masks += padding
words += padding
# Sanity check the input
assert len(words) == len(types) == len(masks) == seq_length
if start_idx or end_idx:
# Sanity check the output
assert start_idx >= 0 and end_idx >= 0, (start_idx, end_idx)
assert start_idx < seq_length
assert end_idx < seq_length
ans_start = start_idx + 1
ans_end = end_idx + 1
else:
ans_start = -1
ans_end = -1
return (words, types, masks), (ans_start, ans_end)
def yield_examples(ds, stop=None):
for (i, ex) in enumerate(ds):
if i % 1000 == 0:
print(i, end=' ', flush=True)
if i == stop:
print('Stopping early')
break
yield tokenize_example(ex)
print('Done!')
"""
Explanation: We will need to encode each question into the format expected by bert. This involves tokenizing the question and context, converting the tokens into numbers and then also including the word type and word mask inputs as well.
The GLUE fine-tuning tutorial shows how to do this dynamically with as part of the keras/tensorflow graph, but this seems takes up enough GPU ram that shrinks the batch size and increases training time. It also makes it harder to figure out the correct start/end index to train on.
So instead we are going to preprocess everything so that the input and expected outputs are easy to obtain.
A fun thing about SQuAD is that it appears to have been created by paying people to write questions. A small percentage of them are hilariously bad. For example here's one bad question:
I couldn't could up with another question. But i need to fill this space because I can't submit the hit.
There are also issues where the answer might be 2 and the dataset indicates the first character of 2019, but the tokizer is usually word or word-piece based, not character based. So we won't find the correct answer at this location.
This is a small part of the data, so I set up some infrastructure to patch it with better answers, but eventually got bored. So now we'll just identify these as SKIP examples and then filter them out of the dataset we train/validate on.
Something to improve if you can!
End of explanation
"""
def process_examples(*a, **kw):
"""Returns an input, output for each example in the dataset."""
inputs = []
labels = []
for x, y in yield_examples(*a, **kw):
inputs.append(x)
labels.append(y)
i = tf.constant(inputs, tf.int32)
words, types, masks = tf.unstack(tf.transpose(i, [1,0,2]))
l = tf.constant(labels)
starts, ends = tf.unstack(tf.transpose(l, [1, 0]))
assert len(words) == len(starts)
return {
'input_word_ids': words,
'input_type_ids': types,
'input_mask': masks,
}, {
'label_start': starts,
'label_end': ends,
}
"""
Explanation: Add labels
End of explanation
"""
squad['train']['question']
"""
Explanation: Now let's process the dataset to generate our expected inputs and outputs.
If we look inside the squad dataset, we'll find a dictionary that maps each key to a tensor of values, for example:
End of explanation
"""
print('Pack the dictionary of tensors into an iterable dataset')
raw_train_ds = tf.data.Dataset.from_tensor_slices(squad['train'])
raw_valid_ds = tf.data.Dataset.from_tensor_slices(squad['validation'])
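# Tiny illustration (toy data, not from SQuAD) of the reshaping that from_tensor_slices
# performs: one dict of columns becomes an iterable of per-example dicts.
toy_ds = tf.data.Dataset.from_tensor_slices({'x': [1, 2, 3], 'y': [10, 11, 12]})
for toy_ex in toy_ds.take(2):
    print({k: v.numpy() for k, v in toy_ex.items()})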
"""
Explanation: Let's first pack the tensors into a dataset, which will reshape things from a single dictionary with many items in each key:
{
'x': [1,2,3,...],
'y': [10,11,12,...],
}
Into a list of dictionaries with a single item in each key:
[
{'x': [1], 'y': [10]},
{'x': [2], 'y': [11]},
...
]
End of explanation
"""
print('Computing validation labels')
valid_stop = None # 1000 to get started
valid_x, valid_y = process_examples(raw_valid_ds, stop=valid_stop)
print('Computing training labels')
train_stop = None # 10000 to get started
train_x, train_y = process_examples(raw_train_ds, stop=train_stop)
"""
Explanation: Now they are in the format our processing functions from above expects, so lets generate all the inputs and outputs:
End of explanation
"""
def decode_x(xs, ys=None, trueys=None, stop=None):
"""Decodes each x (and optional y, ground truth y) and prints it."""
lastctx = None # Try and avoid repeating the same question.
for i, toks in enumerate(xs['input_word_ids']):
toks = toks.numpy()
q = from_token_ids(toks)
ctx = q.split('\n')[0]
if lastctx and lastctx == ctx:
q = '\n'.join(q.split('\n')[1:])
else:
if lastctx:
print('-'*40)
lastctx = ctx
if ys or trueys:
q = q and q[:-2]
if ys:
q += ' ' + answer(toks, ys, i)
if trueys:
a = answer(toks, trueys, i)
q += f' (GTRUTH: {a})'
print(q)
if i == stop:
break
def answer(toks, ys, i):
"""Returns the answer extracted from the input tokens."""
s, e = ys['label_start'][i], ys['label_end'][i]
return from_token_ids(toks[s:e])
"""
Explanation: Lets also define a function that is able to decode the input/output into a format that's easier for a person to read:
End of explanation
"""
print('Training example')
print('='*80)
decode_x(train_x,train_y, stop=1)
print('Validation example')
print('='*80)
decode_x(valid_x, valid_y, stop=1)
"""
Explanation: Let's check an example from each dataset split:
End of explanation
"""
train_ds = tf.data.Dataset.from_tensor_slices({'x': train_x, 'y': train_y})
valid_ds = tf.data.Dataset.from_tensor_slices({'x': valid_x, 'y': valid_y})
"""
Explanation: Now let's package things back up into a format that is easy to send to our model:
End of explanation
"""
print('Dropping bad examples')
ignore_rejects = lambda ex: ex['y']['label_end']>=0
filt_valid_ds = valid_ds.filter(ignore_rejects)
filt_train_ds = train_ds.filter(ignore_rejects)
"""
Explanation: And now filter out the bad examples:
End of explanation
"""
# tf.data.AUTOTUNE tells tf.data to tune values like prefetch buffer sizes dynamically at runtime.
AUTOTUNE = tf.data.AUTOTUNE
def softmax(name, inp):
# The input here will be something like (seq_length, hidden)
# So we'll wind up doing (seq_len, hidden) * (hidden, 1)
# and wind up with (seq_len, 1), aka an output for each
# position in the input sequence.
net = tf.keras.layers.Dense(1, name=name, use_bias=False)(inp)
# Flatten this, aka change (seq_len, 1) to (seq_len,)
net = tf.keras.layers.Flatten()(net)
# Now apply softmax, aka an S-ish shape with a min of 0 and max of 1
net = tf.keras.layers.Activation(tf.keras.activations.softmax)(net)
return net
def build_highlighter_model():
sentence_features = [
'input_word_ids',
'input_type_ids',
'input_mask',
]
# Input tells the network what it should expect.
    # The (None,) here means that it will be* a rank 1 tensor of integers
# This represents the input token ids, and should match seq_length
# of 384
#
# *: actually this should be a batch of rank 1 tensors, making this
# a rank two tensor of (batch_size, seq_length).
inp = {
ft: tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name=ft)
for ft in sentence_features
}
# This handle is a URL to a pretrained BERT model.
# It will cause tensorflow_hub load the right architecture
# And preconfigure all the weights.
encodings = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT')(inp)
#
net = encodings['sequence_output']
net = tf.keras.layers.Dropout(0.1)(net)
start = softmax('start_logit', net)
end = softmax('end_logit', net)
# so they have a name
outs = {
'label_start': tf.keras.layers.Lambda(tf.identity, name='start_pos')(start),
'label_end': tf.keras.layers.Lambda(tf.identity, name='end_pos')(end),
}
return tf.keras.Model(inputs=inp, outputs=outs, name='highlighter')
"""
Explanation: We are now ready to construct our model!
Model
Back to Table of Contents
Fine-tuning BERT is fairly easy, which is one of its primary innovations.
The architecture will send the inputs through BERT and get the sequence output,
which represents an embedding for each input token.
In addition to sequence_output there is also pooled_output and encoder_output, see Using the BERT model in Classify text with BERT for info about these outputs. We want the sequence_output because we want to make a prediction for each token and pick the best one.
These embeddings then get sent to a start and an end output, each of which gets flattened.
The flattened outputs are then sent through a softmax, which can represent the probability that
the start/end is at this token.
Later we will use np.argmax to identify the index with the highest probability and
choose that for our answer.
End of explanation
"""
highlighter = build_highlighter_model()
"""
Explanation: Now let's construct the model!
End of explanation
"""
highlighter.summary()
tf.keras.utils.plot_model(highlighter)
"""
Explanation: Let's see what it looks like
End of explanation
"""
for ex in filt_valid_ds.batch(1):
predy = highlighter(ex['x']) # Print this to see the prediction for all 384 positions
print(np.argmax(predy['label_start']), np.argmax(predy['label_end']))
break
"""
Explanation: Let's try it out!
End of explanation
"""
def prepare_ds(dataset, batch_size, training):
num_examples = len(list(dataset)) # Maybe there's a faster way to do this...
if training:
dataset = dataset.shuffle(num_examples)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda ex: (ex['x'], ex['y']))
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
return dataset, num_examples
def train(model, batch_size, epochs, init_lr):
print('Preparing validation data...')
vds, valid_len = prepare_ds(filt_valid_ds, batch_size, training=False)
print('Preparing training data...')
tds, train_len = prepare_ds(filt_train_ds, batch_size, training=True)
steps_per_epoch = train_len / batch_size
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = num_train_steps / 10
validation_steps = valid_len / batch_size
print('Ready to train!')
with default_strategy.scope():
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw',
)
loss = {
'label_start': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
'label_end': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
}
metrics = ['accuracy']
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model.fit(
x=tds,
validation_data=vds,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_steps=validation_steps,
)
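# Quick illustration (my own toy example, not part of the original notebook) of why the
# *sparse* loss is convenient here: it takes the integer position directly instead of a
# one-hot vector, and both losses agree on the same prediction.
toy_pred = tf.constant([[0.05, 0.05, 0.8, 0.1]])  # probabilities over 4 positions
sparse_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
dense_loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)
print('sparse label 2    :', sparse_loss(tf.constant([2]), toy_pred).numpy())
print('one-hot [0,0,1,0] :', dense_loss(tf.constant([[0., 0., 1., 0.]]), toy_pred).numpy())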
"""
Explanation: Train
Back to Table of Contents
These will just be random values; now we need to train it to do a better job.
The goal, for example, is for the network to predict a start index of 7 and an end index of 15.
Ideally this means the start probability at position 7 should be 1 (and everything else 0), and the end probability at position 15 is 1 (and everywhere else zero).
The sparse categorical crossentropy loss function matches that goal. It is similar to categorical crossentropy except that we can just specify 7 and 15 instead of a one-hot encoded vector (1 at index 7 and 0 everywhere else).
We also make sure to shuffle the training dataset and map the inputs/outputs to what a tensorflow model expects, namely an (x,y) tuple.
End of explanation
"""
train(
model=highlighter,
epochs=3,
batch_size=16,
init_lr=5e-5,
)
"""
Explanation: Now we are ready to run our training program, choosing a couple of hyperparameters:
* epochs: how many times the model should see each training example
- We choose 3, matching the BERT paper
* batch_size: how many examples to compute in each mini-batch
- largely constrained by GPU RAM
- We choose 16 instead of 32 that the BERT paper uses
* init_lr: the initial learning rate, which controls how fast it adjusts weights
- Choose 5e-5 to match the BERT paper
Each epoch takes around 1h to complete on the full dataset.
NOTE: you can open up a terminal and use nvidia-smi -l to monitor stats about the GPU.
End of explanation
"""
def save_model(tfds_name='squad'):
main_save_path = './my_models'
bert_type = tfhub_handle_encoder.split('/')[-2]
saved_model_name = f'{tfds_name.replace("/", "_")}_{bert_type}'
saved_model_path = os.path.join(main_save_path, saved_model_name)
print('Saving', saved_model_path)
# Save everything on the Colab host (even the variables from TPU memory)
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
highlighter.save(saved_model_path, include_optimizer=True,options=save_options)
return saved_model_path
saved_model_path = save_model()
with tf.device('/job:localhost'):
default_model_path = './my_models/squad_bert_en_uncased_L-12_H-768_A-12'
reloaded_model = tf.saved_model.load(globals().get('saved_model_path', default_model_path))
"""
Explanation: Export
We are done! Let's first save the model so we can reload/distribute it later.
End of explanation
"""
def evaluate(xs, true_ys=None, stop=None, batch=16, model=reloaded_model):
ds = tf.data.Dataset.from_tensor_slices({'x': xs, 'y': true_ys}).batch(batch)
for i, batch in enumerate(ds):
x = batch['x']
pred = model(x)
y = {k: np.argmax(pred[k], 1) for k in ['label_start', 'label_end']}
true_y = batch.get('y')
if i == 0:
print('INPUT')
print(ex)
print('RAW OUTPUT PREDICTIONS')
print(pred)
print('ARGMAX OUTPUT')
print(y)
print('='*40)
decode_x(x, y, true_y)
if i == stop:
print('Stopping early')
break
else:
print('Done!')
"""
Explanation: Evaluate
Back to Table of Contents
Let's write a function to see how our model performs:
End of explanation
"""
print('Training example')
print('='*80)
evaluate(train_x,train_y, stop=1, batch=4)
"""
Explanation: Let's first try it on the examples it trained about:
End of explanation
"""
print('Validation example')
print('='*80)
evaluate(valid_x, valid_y, stop=1, batch=4)
"""
Explanation: Now let's see how it does against the validation data it never saw:
End of explanation
"""
# Helper function to help write multiple questions about a single context paragraph.
def user_dataset(contexts):
"""Converts a [(ctx, (q1, q2, q3)), ...] into [{'context': ctx, 'question': q1}]."""
for context, questions in contexts:
for q in questions:
yield {'context': context, 'question': q}
user_ds = user_dataset([
(
'''
Spyglass is a pluggable artifact viewer framework for Prow.
It collects artifacts (usually files in a storage bucket) from various sources and distributes them to registered viewers,
which are responsible for consuming them and rendering a view.
''',
(
'What does spyglass collect?',
'Where does spyglass collect artifacts from?',
'What is spyglass?',
'What are the registered viewers responsible for?',
'What is spyglass a framework for?',
),
),
(
'''
The HTML generated by a lens can reference static assets that will be served by Deck on behalf of your lens.
Scripts and stylesheets can be referenced in the output of the Header() function
(which is inserted into the <head> element).
Relative references into your directory will work:
spyglass adds a <base> tag that references the expected output directory.
Spyglass lenses have access to a spyglass global that provides a number of APIs
to interact with your lens backend and the rest of the world.
Your lens is rendered in a sandboxed iframe, so you generally cannot interact without using these APIs.
''',
(
'What can lenses reference?',
'What serves the HTML generated by the lens?',
'How do relative references work?',
'What provides the spyglass APIs?',
'What do the spyglass APIs allow?',
'Where is your lens rendered?',
),
),
(
'''
Fragment URLs (the part after the #) are supported fairly transparently, despite being in an iframe.
The parent page muxes all the lens's fragments and ensures that if the page is loaded,
each lens receives the fragment it expects.
Changing your fragment will automatically update the parent page's fragment.
If the fragment matches the ID or name of an element, the page will scroll such that that element is visible.
Anchor links (<a href="#something">) would usually not work well in conjunction with the <base> tag.
To resolve this, we rewrite all links of this form to behave as expected both on page load and on DOM modification.
In most cases, this should be transparent.
If you want users to copy links via right click -> copy link, however, this will not work nicely.
Instead, consider setting the href attribute to something from spyglass.makeFragmentLink,
but handling clicks by manually setting location.hash to the desired fragment.
''',
(
'What is a fragment URL?',
'When a fragment matches the ID, what does the page do?',
'How well does copying via right click work?',
'What should you set the href attribute to?',
),
),
(
'''
The three sizes are Standard, Compact, and Super Compact.
You can also specify width=X in the URL (X > 3) to customize the width.
For small widths, this may mean the date and/or changelist, or other custom headers, are no longer visible.
''',
(
'How many sizes are there?',
'How do you customize the width?',
'What might happen when the width is small?',
),
),
])
"""
Explanation: Now let's try some really random data it has not seen. Here is a paragraph taken from the spyglass and its lens documentation for prow as well as testgrid.
End of explanation
"""
def repackage():
out = {}
for ex in user_ds:
for key in ex:
out.setdefault(key, []).append(ex[key])
return out
raw_user_ds = tf.data.Dataset.from_tensor_slices(repackage())
user_x, user_y = process_examples(raw_user_ds)
user_ds = tf.data.Dataset.from_tensor_slices({'x': user_x})
"""
Explanation: Note that there are no known answers to the questions above. How well will it answer these questions?
Let's repackage and process the above questions into a user_x and user_y that we can send into evaluate.
End of explanation
"""
evaluate(user_x)
"""
Explanation: Now let's see its predictions!
End of explanation
"""
|
yttty/python3-scraper-tutorial | Python_Spider_Tutorial_07.ipynb | gpl-3.0 | import json
from urllib.request import urlopen
def getCountry(ipAddress):
response = urlopen("http://freegeoip.net/json/"+ipAddress).read().decode('utf-8')
responseJson = json.loads(response)
return responseJson.get("country_code")
"""
Explanation: Developing Web Scrapers with Python 3
By Terrill Yang (Github: https://github.com/yttty)
Developing Web Scrapers with Python 3 - Chapter 07: Using APIs
In this chapter we will try to use a few APIs to do our scraping. Let's start with an example that looks up geographic information for an IP address.
End of explanation
"""
print(getCountry("50.78.253.58"))
print(getCountry(""))
"""
Explanation: The site http://freegeoip.net returns the location corresponding to your IP address. If you pass an IP address in as a parameter, it returns the location for that IP, as shown below.
End of explanation
"""
import json
jsonString = '{"arrayOfNums":[{"number":0},{"number":1},{"number":2}],"arrayOfFruits":[{"fruit":"apple"},{"fruit":"banana"},{"fruit":"pear"}]}'
jsonObj = json.loads(jsonString)
"""
Explanation: Parsing JSON with Python
End of explanation
"""
print(jsonObj.get("arrayOfNums"))
print(jsonObj.get("arrayOfNums")[1])
print(jsonObj.get("arrayOfNums")[1].get("number")+jsonObj.get("arrayOfNums")[2].get("number"))
print(jsonObj.get("arrayOfFruits")[2].get("fruit"))
"""
Explanation: JSON (JavaScript Object Notation) is a lightweight data-interchange format based on a subset of ECMAScript. JSON uses a text format that is completely language independent, but follows conventions familiar from the C family of languages (including C, C++, C#, Java, JavaScript, Perl, Python, and others). These properties make JSON an ideal data-interchange language: easy for humans to read and write, and also easy for machines to parse and generate (it is commonly used to improve network transfer efficiency). (From Baidu Baike)
If we pretty-print the jsonString above, we can see that it actually looks like this:
{
"arrayOfNums": [
{
"number": 0
},
{
"number": 1
},
{
"number": 2
}
],
"arrayOfFruits": [
{
"fruit": "apple"
},
{
"fruit": "banana"
},
{
"fruit": "pear"
}
]
}
Square brackets [] enclose arrays, while curly braces {} enclose objects. Below we try a few operations on this jsonObj.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/8b7a85d4b98927c93b7d9ca1da8d2ab2/compute_mne_inverse_volume.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
from nilearn.plotting import plot_stat_map
from nilearn.image import index_img
from mne.datasets import sample
from mne import read_evokeds
from mne.minimum_norm import apply_inverse, read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-vol-7-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
# Compute inverse solution
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
stc.crop(0.0, 0.2)
# Export result as a 4D nifti object
img = stc.as_volume(src,
mri_resolution=False) # set True for full MRI resolution
# Save it as a nifti file
# nib.save(img, 'mne_%s_inverse.nii.gz' % method)
t1_fname = data_path + '/subjects/sample/mri/T1.mgz'
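# Optional (my own addition): actually write the nifti file mentioned in the comment
# above, using nibabel, which nilearn already depends on.
import nibabel as nib
nib.save(img, 'mne_%s_inverse.nii.gz' % method)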
"""
Explanation: Compute MNE-dSPM inverse solution on evoked data in volume source space
Compute dSPM inverse solution on MNE evoked dataset in a volume source
space and stores the solution in a nifti file for visualisation.
End of explanation
"""
plot_stat_map(index_img(img, 61), t1_fname, threshold=8.,
title='%s (t=%.1f s.)' % (method, stc.times[61]))
"""
Explanation: Plot with nilearn:
End of explanation
"""
|
ilanman/gdi | week3/03_Week3_II_numpy_sol.ipynb | mit | x = np.array([1,2,3,4,5,6])
print "x =", x
print 'dtype:', x.dtype
print 'shape:', x.shape
print 'ndim:', x.ndim
print 'size:', x.size
print 'type:', type(x)
x.shape = (2,3) # make it into a 2x3 matrix
print x
print 'dtype:', x.dtype
print 'shape:', x.shape
print 'ndim:', x.ndim
print 'size:', x.size
print 'type:', type(x)
"""
Explanation: Basic concepts of ML
Linear Regression
NDArray
Find beta
More functions
Review Problems
Statistics and Machine Learning<a id='ml'></a>
Supervised Learning
<ul>
<li>Given a data set where we already know what the correct output should look like
<li>Having the idea that there is a relationship between the input and the output
<li>Categorized into *regression* and *classification* problems
<li>Example: Can you predict marathon times based on 10K performance?
<li>**Can you think of another?**
</ul>
Unsupervised Learning
<ul>
<li>Problems with little or no idea what our results should look like
<li>Derive structure from data where we don't necessarily know the effect of the variables
<li>No feedback based on the prediction results, i.e., there is no teacher to correct you
<li>Clustering, association, dimensionality reduction
<li>Example: What are the different types of people who use our product? What are the important features that distinguish our users?
<li>**Can you think of another?**
</ul>
Very common to combine, i.e. dimensionality reduction and linear regression
Basic model building process
<ol>
<li>Identify a problem to be solved
<li>Get some data
<li>Clean the data - standardize columns
<li>Exploratory analysis - histograms, scatter plots, summaries
<li>Clean it some more
<li>Identify target features (if supervised learning)
<li>Fit a model to the data - Machine Learning!
<ul>
<li>Split the data into a training and testing set (more on this later)
<li>Train a model. Tune parameters.
<li>Select the appropriate measure of fit. Accuracy, F1 score, AUC, Confusion matrix (more later!)
<li>Test the model. Beware of overfitting and underfitting! (more on this later)
<li>If you're happy with the results, you're done!
<li>Otherwise:
<ol>
<li>The model is misspecified. Repeat the above steps.
<li>The data aren't telling the whole story - are you missing any? Is there too much noise?
<li>The question isn't answerable using the data you have
</ol>
</ul>
</ol>
Linear Regression<a id='linreg'></a>
Motivation
<ul>
<li>Make predictions about real-world quantities, like sales or life expectancy
<li>Understand relationship between variables
<li>Examples:
<ul>
<li>How does sales volume change with changes in price. How is this affected by changes in the weather?
<li>How are the conversions on an ecommerce website affected by two different page titles in an A/B comparison?
<li>How is the interest rate charged on a loan affected by credit history and by loan amount?
</ul>
</ul>
Model set up
Simple linear regression takes the following form:
$y = \beta_0 + \beta_1x$
$y$ is the response (or dependent variable or target)
sometimes you see it as $\hat{y_i}$ which represents an estimate (or prediction) rather than the true value
$x$ is the feature (or covariate or predictor or independent variable or feature)
$\beta_0$ is the intercept
$\beta_1$ is the coefficient for x
many assumptions are baked into this model (random noise, constant variance, ... all of which are out of scope)
Together, $\beta_0$ and $\beta_1$ are called the model coefficients (or feature weights). To create your model, you must "learn" the values of these coefficients.
Another way to write it!
<ul>
<li>In practice use matrices and vectors because we have many data points, i.e.$(x,y)$ pairs
<li>Written as $\mathbf{y = \beta x}$ where<br><br>
</ul>
$\mathbf{y} = \begin{pmatrix} y_1 \ y_2 \ \vdots \ y_n \end{pmatrix}, \mathbf{x} = \begin{pmatrix} 1 & x_1 \ 1 & x_2 \ \vdots & \vdots \ 1 & x_n \end{pmatrix}, \beta = \begin{pmatrix} \beta_1 \ \beta_2 \end{pmatrix}$ <br>
Question:
<ol>
<li>Why did we add the column of 1's?<br>
<li>Why did we write it as $\mathbf{y = \beta x}$ instead of $\mathbf{y = x\beta}$?
</ol>
How to solve for $\beta?$
Using numpy
<ul>
<li>The foundation for numerical computation in Python is the `numpy` package, and essentially all scientific libraries in Python are built on this - e.g. `scipy`, `pandas`, `statsmodels`, `scikit-learn`, `cv2` etc.
<li>The basic data structure in `numpy` is the NDArray, and it is essential to become familiar with how to slice and dice this object.
<li>Lots of helper methods built in - much better than using lists for anything numeric
<li>`numpy` has a bunch of other helpful modules like `random` and `linalg`
</ul>
NDArray<a id='array'></a>
The base structure in numpy is ndarray, used to represent vectors, matrices and higher-dimensional arrays. Each ndarray has the following attributes:
dtype = correspond to data types in C
shape = dimensions of array
Each element of the array is of the same type
dimensions are called axes
End of explanation
"""
y = np.array([1,2,3]) # note the square brackets
y
y = np.array([1,2,3], dtype = np.float64) # note the decimals
y
print np.arange(10) # numpy object
print range(10) # regular list
print np.arange(10)+1
print range(10)+1
print np.array([[1,2,3],[4,5,6],[7,8,9]]) # multi dimensional
print "\nshape =",np.array([[1,2,3],[4,5,6],[7,8,9]]).shape
print np.array([[1,2,3],[4,5,6],[7,8,9,10]]) # not all sublists are same length - no error
print "\nshape =",np.array([[1,2,3],[4,5,6],[7,8,9,10]]).shape
"""
Explanation: Array creation
<ul>
<li>`dtype`
<li>`arange`
<li>`reshape`, `repeat`, `diag`, `ones`, `zeros`
<li>3d array
</ul>
End of explanation
"""
np.ones(10)
np.zeros((5,7))
np.zeros(5*7).reshape(5,7) # specify how many element, then their size
np.eye(5) # identity
np.diag(np.arange(1,6))
np.repeat([1,2,3,4],4) # how to stack arrays?
np.repeat([1,2,3,4],4).reshape(4,4).T # transpose is EXTREMELY handy. learn to love it
"""
Explanation: numpy helper functions to initialize array
End of explanation
"""
x
x[0]
print x[0][0] # preferred
print x[0,0]
x[0,:]
x[:,1] # what is this doing? # note: loses it's shape when you pull out a vector
x[:,1:3]
x[:,-1]
x[:,:-1]
"""
Explanation: Array indexing
End of explanation
"""
x > 2
x[x > 2] # note the result is different shape
"""
Explanation: Boolean indexing
End of explanation
"""
## SOLUTION
import numpy as np
x = np.arange(100).reshape(10,10)
even = sum(x[x % 2 == 0])
odd = sum(x[x % 2 != 0])
print abs(even - odd)
## SOLUTION
print np.diag(1 + np.arange(4), k=-1)
## SOLUTION
z = np.zeros((8,8), dtype=int)
z[1::2,::2] = 1
z[::2,1::2] = 1
print z
## SOLUTION
print np.diag(np.arange(0,111)[::-10])
## SOLUTION
X = np.ones((10,10))
X[1:-1,1:-1] = 0
print X
"""
Explanation: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
EXERCISE TIME
1) Find the absolute difference between the sum of the even numbered elements and the odd numbered elements of the following matrix:
x = np.arange(100).reshape(10,10)
2) Create a 5x5 matrix of zeros with values 1,2,3,4 just below the diagonal.
python
[[0 0 0 0 0]
[1 0 0 0 0]
[0 2 0 0 0]
[0 0 3 0 0]
[0 0 0 4 0]]
3) Create an 8x8 matrix and fill it with a checkerboard pattern of 1 and 0
python
[[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
4) Create the following matrix:
python
[[110 0 0 0 0 0 0 0 0 0 0 0]
[ 0 100 0 0 0 0 0 0 0 0 0 0]
[ 0 0 90 0 0 0 0 0 0 0 0 0]
[ 0 0 0 80 0 0 0 0 0 0 0 0]
[ 0 0 0 0 70 0 0 0 0 0 0 0]
[ 0 0 0 0 0 60 0 0 0 0 0 0]
[ 0 0 0 0 0 0 50 0 0 0 0 0]
[ 0 0 0 0 0 0 0 40 0 0 0 0]
[ 0 0 0 0 0 0 0 0 30 0 0 0]
[ 0 0 0 0 0 0 0 0 0 20 0 0]
[ 0 0 0 0 0 0 0 0 0 0 10 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0]]
5) Create a 10x10 matrix with 1's on the border and 0's inside:
python
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
End of explanation
"""
x
print np.array([x,x])
print "\nshape:",np.array([x,x]).shape # 3 dim array
np.r_[x,x]
np.vstack([x, x])
np.concatenate([x, x], axis=0)
np.c_[x,x]
np.hstack([x, x])
y = np.r_[x, x]
y
a, b, c = np.hsplit(y, 3)
a
b
c
"""
Explanation: Combining and splitting arrays
End of explanation
"""
y = y[:2]
y.sum(), np.sum(y)
y.sum(0), np.sum(y, axis = 0) # column sum. np.sum() is much faster
y.sum(1), np.sum(y, axis = 1) # row sum
"""
Explanation: Reductions
End of explanation
"""
y = np.array([1,2,3,4])
z = (y - np.mean(y))/np.std(y)
z
z.mean(), z.std()
"""
Explanation: Standardize matrix
End of explanation
"""
from __future__ import division
"""
Explanation: ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
End of explanation
"""
## SOLUTION
def normalize_transition(M):
P = np.empty(M.size).reshape(M.shape[0],M.shape[1])
# normalize by row
for i in range(M.shape[0]):
P[i] = M[i,:]/np.sum(M, axis=1)[i]
return P
import numpy as np
M = np.array([7,8,8,1,3,8,9,2,1.0]).reshape(3,3)
normalize_transition(M)
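## A fully vectorized alternative (my own addition, same result without the loop):
## keepdims=True keeps the row sums as a column so broadcasting divides each row by its sum.
M / M.sum(axis=1, keepdims=True)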
"""
Explanation: EXERCISE TIME!
Given the following matrix:
python
[[7, 8, 8],
[1, 3, 8],
[9, 2, 1]]
Normalize the matrix so that all rows sum to 1.0.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
End of explanation
"""
X = np.column_stack([x,np.ones(len(x))])
np.dot(X.T,X)
"""
Explanation: How to figure out $\beta_0$ and $\beta_1$<a id='findbeta'></a>
<ul>
<li>Want to find line of best fit
<li>Minimize the distances between the line and each data point
<li>Adjust $\beta$s in order to make the sum of the squared residuals, $SSE_{res}$, as small as possible (hence called "least squares").
<li>Residual, $r_i$, is the vertical distance between a data point, $y_i$, and the line estimate, $\hat{y_i}$.
$$\text{min}\sum_{i=1}^{n} SSE_{res}= \text{min}\sum_{i=1}^{n}r_i^2 = \text{min}\sum_{i=1}^{n}(y_i - \hat{y_i})^2 = \text{min}\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i)^2$$
</ul>
Can be solved in 3 ways:
1) Using calculus. This is used for simple cases only. Not in practice.<br>
$$\beta_1 = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) }{\sum_{i=1}^n (x_i - \bar{x})^2} $$<br>
$$\beta_0 = \bar{y} - \beta_1\bar{x}$$
2) Using linear algebra. Inverses and matrix multiplication can be expensive. In practice these quantities are approximated.<br>
$y=x\beta$<br>
$x^Ty=x^T x\beta$ // multiply both sides by $x^T$<br>
$(x^T x)^{-1}x^Ty=\beta$ // multiply both sides by $(x^Tx)^{-1}$ (informally, "divide" both sides by $x^Tx$)
3) Using numerical approximation - this is what's used in practice. Out of scope.
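As a side note on option 3, NumPy already ships a numerically stable least-squares solver. A minimal sketch, assuming `X` is the design matrix (with an intercept column) and `y` the targets:
```python
# solves min ||X.dot(beta) - y||^2 without forming the inverse explicitly
beta, residuals, rank, sv = np.linalg.lstsq(X, y)
```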
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
End of explanation
"""
## SOLUTION
from numpy import linalg as la
x = np.array([1,2,3,4,5,6,7])
y = np.array([0,-0.5,3,6,11,5,15])
b1 = sum((np.mean(x)-x)*(np.mean(y)-y))/sum((np.mean(x)-x)**2)
b0 = np.mean(y) - b1*np.mean(x)
print "Using the formulas"
print "b1 =", b1
print "b0 =", b0
print
X = np.column_stack([x,np.ones(len(x))])
b1 = la.inv(np.dot(X.T,X)).dot(X.T).dot(y.T)[0]
b0 = la.inv(np.dot(X.T,X)).dot(X.T).dot(y.T)[1]
print "Using linear algebra"
print "b1 =", b1
print "b0 =", b0
plt.figure(figsize=(10,8))
plt.title("Fitting a least squares line",size=16)
plt.scatter(x,y,s=40,color='red')
plt.plot(x, b0 + b1*x, color='blue', linewidth=3)
plt.ylabel('y',size=14)
plt.xlabel('x',size=14)
plt.show()
"""
Explanation: EXERCISE TIME
Given the following data points:
python
x = np.array([1,2,3,4,5,6,7])
y = np.array([0,-0.5,3,6,11,5,15])
1) Solve for $\beta$ using the formulas for $\beta_1$ and $\beta_0$. You should have a value for $\beta_1$ and $\beta_0$
2) Solve for $\beta$ using linear algebra. Is it the same as above?
Hint 1: Remember to keep the dimensions correct<br>
Hint 2: Always use np.dot(a,b) or a.dot(b) when doing matrix multiplication.<br>
Hint 3: Look up the inv() function in the linalg library of numpy.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
End of explanation
"""
import re
from sklearn import linear_model
with open('regression.txt') as f:
text = f.readlines()
data = np.empty(len(text)*4).reshape(len(text),4)
for e, r in enumerate(text):
temp = re.sub(' + ',' ',r)
data[e] = temp[re.search("\d", temp).start():].strip('\n').split(' ') # use regex to skip Nation variable
data[0]
x = data.T[0] # birth rate
y = data.T[3] # per capita income
X = np.column_stack([x,np.ones(len(x))])
b1 = la.inv(np.dot(X.T,X)).dot(X.T).dot(y.T)[0]
b0 = la.inv(np.dot(X.T,X)).dot(X.T).dot(y.T)[1]
print "b1 =", b1
print "b0 =", b0
clf = linear_model.LinearRegression(fit_intercept=False) # include an intercept term
clf.fit(X,y)
pred = clf.predict(X)
print "b1 =",clf.coef_[0]
print "b0 =",clf.coef_[1]
plt.figure(figsize=(10,8))
plt.title("Comparing birth rate to per capita income",size=16)
plt.scatter(x,y,s=40,color='red')
plt.plot(x, pred, color='blue', linewidth=3)
plt.ylabel('y',size=14)
plt.xlabel('x',size=14)
plt.xlabel('birth rate')
plt.ylabel("per capita income")
plt.show()
"""
Explanation: Let's do it using real data!
Dataset: birthrate.dat<br>
Source: R. Weintraub (1962). "The Birth Rate and Economic Development:
An Empirical Study", Econometrica, Vol. 40, #4, pp 812-817.
Description: Birth Rates, per capita income, proportion (ratio?) of
population in farming, and infant mortality during early 1950s for
30 nations.
Variables/Columns:
Nation 1-20
Birth Rate 22-25 /* 1953-1954 (Units not given) */
Per Capita Income 30-33 /* 1953-1954 in 1948 US $ */
Proportion of population on farms 38-41 /* Circa 1950 */
Infant Mortality Rate 45-49 /* 1953-1954 */
End of explanation
"""
np.linspace(0,100,11)
z = np.random.random(10)
print z
print np.argmax(z)
print z[np.argsort(z)]
print z[np.argsort(z)[::-1]] ## reverse order
z = np.linspace(1,100,20)
c = np.random.choice(z)
print z
print c
idx = np.where(z == c)
print idx
print z[idx]
z = np.arange(10*10).reshape(5,20)
print z
print z.ravel()
print np.tile( [[1, 2],[-2, -1]], [5, 3])
"""
Explanation: Lots more to cover about Linear Regression...in another class!
More useful functions<a id='func'></a>
<ul>
<li>`linspace()`
<li>`argsort()`
<li>`argmax()`, `where()`
<li>`ravel()`
<li>`tile()`
</ul>
End of explanation
"""
## SOLUTION - 1
Z = np.tile( np.array([[0,1],[1,0]]), (4,4))
print Z
## SOLUTION - 2
Z = np.arange(100)
v = np.random.uniform(0,100)
index = (np.abs(Z-v)).argmin()
print Z[index]
"""
Explanation: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
EXERCISE TIME
Write a program to create a checkerboard 8x8 matrix using the tile function
Write a program to find the closest value (to a given number) in an array ?
End of explanation
"""
# SOLUTION
A = np.arange(25).reshape(5,5)
A[[0,1]] = A[[1,0]]
print A
"""
Explanation: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Review Problems<a id='hmwk'></a>
Q1.
Write a program to swap any 2 rows of a numpy array. Hint: This is a one liner.
End of explanation
"""
## SOLUTION
A = np.arange(25).reshape(5,5)
B = np.arange(25).reshape(5,5)
print np.diag(np.dot(A, B))
print np.sum(A * B.T, axis=1)
"""
Explanation: Q2.
Write two ways to get the diagonal elements of a dot product of two matrices, A and B.
End of explanation
"""
## SOLUTION
import numpy as np
n = 12
def nested_loops(n):
print("... using nested for loops...")
for_list = np.empty([n,n],dtype=int) # initialize 12 x 12 numpy array
for i in range(1,n+1):
row_vals = np.empty([n]) # initialize an array for each row
        for j in range(1, n+1):
row_vals[j-1] = j*i
for_list[i-1] = row_vals
return for_list
print nested_loops(n)
print
def fromfunction(n):
print("... using numpy fromfunction array constructor...")
return np.fromfunction(lambda i, j: (i+1) * (j+1), (n, n), dtype=int)
print fromfunction(n)
print
def broadcasting(n):
print("...using numpy broadcasting...")
a = np.arange(1,n+1)
b = np.arange(1,n+1)
return np.reshape(a,(n,1))*b # calculate outer product
print broadcasting(n)
"""
Explanation: Q3. Write a 12 by 12 times table matrix shown below. Do this
using nested for loops
uisng numpy fromfunction array constructor
using numpy broadcasting
array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24],
[ 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36],
[ 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48],
[ 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60],
[ 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72],
[ 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84],
[ 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96],
[ 9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 108],
[ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120],
[ 11, 22, 33, 44, 55, 66, 77, 88, 99, 110, 121, 132],
[ 12, 24, 36, 48, 60, 72, 84, 96, 108, 120, 132, 144]])
End of explanation
"""
## SOLUTION
def raise_nth_power(n,transition):
''' Solve for stationary distribution by raising to the nth power '''
temp = transition
for i in range(n-1):
temp = transition.dot(temp)
return temp
print raise_nth_power(3,P)
"""
Explanation: Q4.
Here is the normalized transition matrix from the exercise above:
python
P = [[ 0.30434783 0.34782609 0.34782609]
[ 0.08333333 0.25 0.66666667]
[ 0.75 0.16666667 0.08333333]]
Find the stationary distribution. You can do this by raising this matrix to a very large power, until the result doesn't change. For example:
$P^1$ =
python
[[ 0.30434783 0.34782609 0.34782609]
[ 0.08333333 0.25 0.66666667]
[ 0.75 0.16666667 0.08333333]]
$P^2$ =
python
[[ 0.38248267 0.25078765 0.36672968]
[ 0.54619565 0.20259662 0.25120773]
[ 0.30464976 0.31642512 0.37892512]]
$P^3$ =
python
[[ 0.412354 0.25685598 0.33079002]
[ 0.37152231 0.28249821 0.34597949]
[ 0.40328209 0.2482256 0.34849231]]
With a large enough power, $P^{n} = P^{n+1}$. Write a function that can raise $P$ to any arbitrary power, n.
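A shortcut worth knowing (not required for the exercise): NumPy can raise a square matrix to an integer power directly, which avoids the explicit loop. A sketch:
```python
import numpy as np

P = np.array([[0.30434783, 0.34782609, 0.34782609],
              [0.08333333, 0.25      , 0.66666667],
              [0.75      , 0.16666667, 0.08333333]])
print np.linalg.matrix_power(P, 50)  # rows converge to the stationary distribution
```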
End of explanation
"""
## SOLUTION
def discrete_mgf(x, t):
assert np.allclose(sum(x[1]),1.0)
assert len(x[0]) == len(x[1])
return sum(np.exp(x[0]*t)*x[1])
x = np.array([[1,2,-1],[1./6,2./6,3./6]])
t = 2
discrete_mgf(x,t)
"""
Explanation: Q5. Calculating moments: Moment Generating Functions
Write a function that evaluates the moment generating function at $t$ for a given discrete probability distribution, $x$. The function should have the signature discrete_mgf(x, t) where:<br>
x is a discrete 2D probability vector where:
The first dimension are values that X can take.
The second dimension is the probabilities that X takes on those values.
These should add to 1.0
Both dimensions of x should be of the same length
t is the moment.
Moment Generating Functions are defined as:<br>
$$
M_x(t) = E[e^{tX}]
$$
For example:<br>
$$
p_X(k) = \left\{
\begin{array}{ll}
\frac{1}{6} & \quad k=1\\
\frac{2}{6} & \quad k=2\\
\frac{3}{6} & \quad k=-1
\end{array} \right.
$$
$E[e^{tX}] = \sum_{k} e^{tk}\,p_X(k) = \frac{1}{6}e^{(1)t} + \frac{2}{6}e^{(2)t} + \frac{3}{6}e^{(-1)t}$.<br>
<br>
When $t=2, E[e^{2X}] = \frac{1}{6}e^{(1)\times2} + \frac{2}{6}e^{(2)\times2} + \frac{3}{6}e^{(-1)\times2} = 19.5$
End of explanation
"""
## SOLUTION
def moving_average(a, n) :
ret = np.cumsum(a)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
a = [0,3,3,3,9.,6,9,9,12]
print moving_average(a, 3)
"""
Explanation: Q6.
Write a function that computes moving averages for an array and window size. For example
```python
a = [0, 3, 3, 3, 9, 6, 9, 9, 12]
size = 3
moving_average(a, size = 3) = [ 2. 3. 5. 6. 8. 8. 10.]
```
End of explanation
"""
|
vicolab/neural-network-intro | 4-gan/2-gan-mnist.ipynb | mit | import numpy as np
from keras.datasets import mnist
import admin.tools as tools
# Load MNIST data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_data = np.concatenate((X_train, X_test))
"""
Explanation: Generative Adversarial Networks 2
<div class="alert alert-warning">
This is a continuation of the previous notebook, where we learned the gist of what a generative adversarial network (GAN) is and how to learn a 1-d multimodal distribution. Please refer back to the last notebook if you are unsure about what a GAN is.
</div>
Example: MNIST Dataset
In this notebook we will use a GAN to generate samples coming from the familiar MNIST dataset.
We will start by loading our data.
<div class="alert alert-info">
<strong>In the following snippet of code we will:</strong>
<ul>
<li>Load data from MNIST </li>
<li>Merge the training and test set</li>
</ul>
</div>
End of explanation
"""
def normalize_images(images):
"""
    Normalise images from the range [0, 255] to [-1, 1].
:param images: Np tensor with N x R x C x CH.
Where R = Number of rows in a image
Where C = Number of cols in a image
Where CH = Number of channles in a image
:return: images with its values normalized to [-1,1].
"""
images = None
return images
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Test normalisation function and normalise the data if it passes
tests.test_normalize_images(normalize_images)
X_data = normalize_images(X_data)
"""
Explanation: Input Pre-Processing
As we have done previously with MNIST, the first thing we will be doing is normalisation. However, this time we will normalise the 8-bit images from [0, 255] to [-1, 1].
Previous research with GANs indicates that this normalisation yields better results (reference paper).
Task I: Implement an Image Normalisation Function
<div class="alert alert-success">
**Task**: Implement a function that normalises the images to the interval [-1,1].
<ul>
<li>Inputs are integers in the interval [0,255]</li>
<li>Outputs should be floats in the interval [-1,1]</li>
</ul>
</div>
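If you want to check your answer, one possible body for the function (a sketch; the constant 127.5 is simply half of 255):
```python
def normalize_images(images):
    # map 8-bit values in [0, 255] onto floats in [-1, 1]
    return images.astype(np.float32) / 127.5 - 1.0
```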
End of explanation
"""
X_data = np.expand_dims(X_data, axis=-1)
print('Shape of X_data {}'.format(X_data.shape))
"""
Explanation: As we did in a previous notebook we will add an extra dimension to our greyscale images.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Transform `X_data` from $(28,28)$ to $(28,28,1)$</li>
</ul>
</div>
End of explanation
"""
# Import some useful keras libraries
import keras
from keras.models import Model
from keras.layers import *
def generator(z_dim, nb_outputs, ouput_shape):
# Define the input_noise as a function of Input()
latent_var = None
# Insert the desired amount of layers for your network
x = None
# Map you latest layer to n_outputs
x = None
# Reshape you data
x = Reshape(ouput_shape)(x)
model = Model(inputs=latent_var, outputs=x)
return model
"""
Explanation: Task II: Implement a Generator Network
<div class="alert alert-success">
<strong>Task:</strong>
<ul>
<li>Make a network that accepts inputs where the shape is defined by `z_dim` $\rightarrow$ `shape=(z_dim,)`</li>
<li>The number of outputs of your network needs to be defined as `nb_outputs`</li>
<li>Reshape the final layer to be in the shape of `output_shape`</li>
</ul>
</div>
Since the data lies in the range [-1,1] try using the 'tanh' as the final activation function.
Keras references: Reshape()
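One possible way to fill in the template (a sketch only; the layer widths are arbitrary choices, not part of the exercise):
```python
def generator(z_dim, nb_outputs, ouput_shape):
    latent_var = Input(shape=(z_dim,))
    x = Dense(128, activation='relu')(latent_var)
    x = Dense(256, activation='relu')(x)
    x = Dense(nb_outputs, activation='tanh')(x)  # data is normalised to [-1, 1]
    x = Reshape(ouput_shape)(x)
    return Model(inputs=latent_var, outputs=x)
```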
End of explanation
"""
# Define the dimension of the latent vector
z_dim = 100
# Dimension of our sample
sample_dimentions = (28, 28, 1)
# Calculate the number of dimensions in a sample
n_dimensions=1
for x in list(sample_dimentions):
n_dimensions *= x
print('A sample of data has shape {} composed of {} dimension(s)'.format(sample_dimentions, n_dimensions))
# Create the generative network
G = generator(z_dim, n_dimensions, sample_dimentions)
# We recommend the followin optimiser
g_optim = keras.optimizers.adam(lr=0.002, beta_1=0.5, beta_2=0.999, epsilon=1e-08, decay=0.0)
# Compile network
G.compile (loss='binary_crossentropy', optimizer=g_optim)
# Network Summary
G.summary()
"""
Explanation: Now, let's build a generative network using the function you just made.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Define the number of dimensions of the latent vector $\mathbf{z}$</li>
<li>Find out the shape of a sample of data</li>
<li>Compute the number of dimensions in a sample of data</li>
<li>Create the network using your function</li>
<li>Display a summary of your generator network</li>
</ul>
</div>
End of explanation
"""
def discriminator(input_shape, nb_inputs):
# Define the network input to have input_shape shape
input_x = None
# Reshape your input
x = None
# Implement the rest of you classifier
x = None
probabilities = Dense(1, activation='sigmoid')(x)
model = Model(inputs=input_x, outputs=probabilities)
return model
"""
Explanation: Task III: Implement a Discriminative Network
The discriminator network is a simple binary classifier where the output indicates the probability of the input data being real or fake.
<div class="alert alert-success">
<strong>Task:</strong>
<ul>
<li> Create a network where the input shape is `input_shape`</li>
<li> We recommend reshaping your network just after the input. This way you can have a vector with shape `(None, nb_inputs)`</li>
<li> Implement a simple network that can classify data</li>
</ul>
</div>
Keras references: Reshape()
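Again, one possible body for the template (a sketch with arbitrary layer sizes):
```python
def discriminator(input_shape, nb_inputs):
    input_x = Input(shape=input_shape)
    x = Reshape((nb_inputs,))(input_x)
    x = Dense(256, activation='relu')(x)
    x = Dense(128, activation='relu')(x)
    probabilities = Dense(1, activation='sigmoid')(x)
    return Model(inputs=input_x, outputs=probabilities)
```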
End of explanation
"""
# We already computed the shape and number of dimensions in a data sample
print('The data has shape {} composed of {} dimension(s)'.format(sample_dimentions, n_dimensions))
# Discriminative Network
D = discriminator(sample_dimentions,n_dimensions)
# Recommended optimiser
d_optim = keras.optimizers.adam(lr=0.002, beta_1=0.5, beta_2=0.999, epsilon=1e-08, decay=0.0)
# Compile Network
D.compile(loss='binary_crossentropy', optimizer=d_optim)
# Network summary
D.summary()
"""
Explanation: Now, let's build a discriminator network using the function you just made.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Create the network using your function</li>
<li>Display a summary of your generator network</li>
</ul>
</div>
End of explanation
"""
from keras.models import Sequential
def build(generator, discriminator):
"""Build a base model for a Generative Adversarial Networks.
Parameters
----------
generator : keras.engine.training.Model
A keras model built either with keras.models ( Model, or Sequential ).
This is the model that generates the data for the Generative Adversarial networks.
Discriminator : keras.engine.training.Model
A keras model built either with keras.models ( Model, or Sequential ).
This is the model that is a binary classifier for REAL/GENERATED data.
Returns
-------
(keras.engine.training.Model)
It returns a Sequential Keras Model by connecting a Generator model to a
Discriminator model. [ generator-->discriminator]
"""
model = Sequential()
model.add(generator)
discriminator.trainable = False
model.add(discriminator)
return model
# Create GAN
G_plus_D = build(G, D)
G_plus_D.compile(loss='binary_crossentropy', optimizer=g_optim)
D.trainable = True
"""
Explanation: Putting the GAN together
In the following code we will put the generator and discriminator together so we can train our adversarial model.
<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Use the generator and discriminator to construct a GAN</li>
</ul>
</div>
End of explanation
"""
BATCH_SIZE = 32
NB_EPOCHS = 50
"""
Explanation: Task IV: Define Hyperparameters
Please define the following hyper-parameters to train your GAN.
<br>
<div class="alert alert-success">
<strong>Task:</strong> Please define the following hyperparameters to train your GAN:
<ul>
<li> Batch size</li>
<li>Number of training epochs</li>
</ul>
</div>
End of explanation
"""
# Imports used by this cell for the live plot
import time
import matplotlib.pyplot as plt

# Figure for live plot
fig, ax = plt.subplots(1,1)
# Allocate space for noise variable
z = np.zeros((BATCH_SIZE, z_dim))
# number of batches per epoch
number_of_batches = int(X_data.shape[0] / BATCH_SIZE)
for epoch in range(NB_EPOCHS):
for index in range(number_of_batches):
        # Sample a minibatch of m=BATCH_SIZE examples from the data-generating distribution
# in other words :
# Grab a batch of the real data
data_batch = X_data[index*BATCH_SIZE:(index+1)*BATCH_SIZE]
# Sample minibatch of m= BATCH_SIZE noise samples
# in other words, we sample from a uniform distribution
z = np.random.uniform(-1, 1, (BATCH_SIZE,z_dim))
        # Use the generator to map the noise minibatch to m=BATCH_SIZE fake samples
generated_batch = G.predict(z, verbose=0)
# Update/Train discriminator D
X = np.concatenate((data_batch, generated_batch))
y = [1] * BATCH_SIZE + [0.0] * BATCH_SIZE
d_loss = D.train_on_batch(X, y)
# Sample minibatch of m= BATCH_SIZE noise samples
# in other words, we sample from a uniform distribution
z = np.random.uniform(-1, 1, (BATCH_SIZE,z_dim))
#Update Generator while not updating discriminator
D.trainable = False
# to do gradient ascent we just flip the labels ...
g_loss = G_plus_D.train_on_batch(z, [1] * BATCH_SIZE)
D.trainable = True
# Plot data every 10 mini batches
if index % 10 == 0:
ax.clear()
            # Image grid of the current batch (real and generated samples)
image =tools.combine_images(X)
image = image*127.5+127.5
ax.imshow(image.astype(np.uint8))
fig.canvas.draw()
time.sleep(0.01)
# End of epoch ....
print("epoch %d : g_loss : %f | d_loss : %f" % (epoch, g_loss, d_loss))
"""
Explanation: <div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Train the constructed GAN</li>
<li>Live plot the generated data</li>
</ul>
</div>
End of explanation
"""
|
gregorjerse/rt2 | 2015_2016/lab13/Extending values on vertices-template.ipynb | gpl-3.0 | from itertools import combinations, chain
def simplex_closure(a):
"""Returns the generator that iterating over all subsimplices (of all dimensions) in the closure
of the simplex a. The simplex a is also included.
"""
return chain.from_iterable([combinations(a, l) for l in range(1, len(a) + 1)])
def closure(K):
"""Add all missing subsimplices to K in order to make it a simplicial complex."""
return list({s for a in K for s in simplex_closure(a)})
def contained(a, b):
"""Returns True is a is a subsimplex of b, False otherwise."""
return all((v in b for v in a))
def star(s, cx):
"""Return the set of all simplices in the cx that contais simplex s.
"""
return {p for p in cx if contained(s, p)}
def intersection(s1, s2):
"""Return the intersection of s1 and s2."""
return list(set(s1).intersection(s2))
def link(s, cx):
"""Returns link of the simplex s in the complex cx.
"""
# Link consists of all simplices from the closed star that have
# empty intersection with s.
return [c for c in closure(star(s, cx)) if not intersection(s, c)]
def simplex_value(s, f, aggregate):
"""Return the value of f on vertices of s
aggregated by the aggregate function.
"""
return aggregate([f[v] for v in s])
def lower_link(s, cx, f):
"""Return the lower link of the simplex s in the complex cx.
The dictionary f is the mapping from vertices (integers)
to the values on vertices.
"""
    sval = simplex_value(s, f, min)
    return [c for c in link(s, cx)
            if simplex_value(c, f, max) < sval]
"""
Explanation: Extending values on vertices to a discrete gradient vector field
During the extension algorithm one has to compute the lower link of every vertex in the complex, so let us implement the search for the lower link first. It requires quite a lot of code: first we find the star, then the link, and finally the lower link of a given simplex.
End of explanation
"""
K = closure([(1, 2, 3)])
f = {1: 0, 2: 1, 3: 2}
for v in (1, 2, 3):
print"{0}: {1}".format((v,), lower_link((v,), K, f))
"""
Explanation: Let us test the above function on the simple example: full triangle with values 0, 1 and 2 on the vertices labeled with 1, 2 and 3.
End of explanation
"""
def join(a, b):
"""Return the join of 2 simplices a and b."""
return tuple(sorted(set(a).union(b)))
def extend(K, f):
"""Extend the field to the complex K.
Function on vertices is given in f.
Returns the pair V, C, where V is the dictionary containing discrete gradient vector field
and C is the list of all critical cells.
"""
V = dict()
C = []
for v in (s for s in K if len(s)==1):
# Add your own code
pass
return V, C
"""
Explanation: Now let us implement an extension algorithm. We are leaving out the cancelling step for clarity.
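For reference, one possible way to complete the template (a sketch of the usual recursive lower-link construction; it skips all optimisations and, as noted, the cancelling step):
```python
def extend(K, f):
    V, C = dict(), []
    for v in (s for s in K if len(s) == 1):
        ll = lower_link(v, K, f)
        if not ll:
            C.append(v)                      # empty lower link: critical vertex
        else:
            V_ll, C_ll = extend(ll, f)
            # vertex of the lower link with the smallest value
            w = min((s for s in ll if len(s) == 1), key=lambda s: f[s[0]])
            V[v] = join(v, w)                # pair v with the edge towards w
            for c in C_ll:
                if c != w:
                    C.append(join(v, c))     # cone over the remaining critical cells
            for a, b in V_ll.items():
                V[join(v, a)] = join(v, b)   # cone over the induced pairings
    return V, C
```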
End of explanation
"""
K = closure([(1, 2, 3)])
f = {1: 0, 2: 1, 3: 2}
extend(K, f)
K = closure([(1, 2, 3), (2, 3, 4)])
f = {1: 0, 2: 1, 3: 2, 4: 0}
extend(K, f)
K = closure([(1, 2, 3), (2, 3, 4)])
f = {1: 0, 2: 1, 3: 2, 4: 3}
extend(K, f)
"""
Explanation: Let us test the algorithm on the example from the previous step (full triangle).
End of explanation
"""
|
ggData/tweetharvest | example.ipynb | mit | import pymongo
"""
Explanation: Part 1: tweetharvest Example Analysis
This is an example notebook demonstrating how to establish a connection to a database of tweets collected using tweetharvest. It presupposes that all the setup instructions have been completed (see README file for that repository) and that MongoDB server is running as described there. We start by importing core packages the PyMongo package, the official package to access MongoDB databases.
End of explanation
"""
db = pymongo.MongoClient().tweets_db
coll = db.emotweets
coll
"""
Explanation: Next we establish a link with the database. We know that the database created by tweetharvester is called tweets_db and within it is a collection of tweets that goes by the name of the project, in this example: emotweets.
End of explanation
"""
coll.count()
"""
Explanation: We now have an object, coll, that offers full access to the MongoDB API where we can analyse the data in the collected tweets. For instance, in our small example collection, we can count the number of tweets:
End of explanation
"""
query = {'coordinates': {'$ne': None}}
coll.find(query).count()
"""
Explanation: Or we can count the number of tweets that are geolocated with a field containing the latitude and longitude of the user when they sent the tweet. We construct a MongoDB query that looks for a non-empty field called coordinates.
End of explanation
"""
query = {'hashtags': {'$in': ['happy']}}
coll.find(query).count()
"""
Explanation: Or how many tweets had the hashtag #happy in them?
End of explanation
"""
coll.find_one()
"""
Explanation: Pre-requisites for Analysis
In order to perform these analyses there are a few things one needs to know:
At the risk of stating the obvious: how to code in Python (there is also an excellent tutorial). Please note that the current version of tweetharvest uses Python 2.7, and not Python 3.
How to perform mongoDB queries, including aggregation, counting, grouping of subsets of data. There is a most effective short introduction (The Little Book on MongoDB by Karl Seguin), as well as extremely rich documentation on the parent website.
How to use PyMongo to interface with the MongoDB API.
Apart from these skills, one needs to know how each status is stored in the database. Here is an easy way to look at the data structure of one tweet.
End of explanation
"""
%matplotlib inline
import pymongo # in case we have run Part 1 above
import pandas as pd # for data manipulation and analysis
import matplotlib.pyplot as plt
"""
Explanation: This JSON data structure is documented on the Twitter API website where each field is described in detail. It is recommended that this description is studied in order to understand how to construct valid queries.
tweetharvest is faithful to the core structure of the tweets as described in that documentation, but with minor differences created for convenience:
All date fields are stored as MongoDB Date objects and returned as Python datetime objects. This makes it easier to work on date ranges, sort by date, and do other date- and time-related manipulation (a short example follows this list).
A hashtags field is created for convenience. This contains a simple array of all the hashtags contained in a particular tweet and can be queried directly instead of looking for tags inside a dictionary, inside a list of other entities. It is included for ease of querying but may be ignored if one prefers.
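For example, because the date fields are real Date objects, range queries with Python `datetime` values work directly (the dates below are made up purely for illustration):
```python
from datetime import datetime

query = {'created_at': {'$gte': datetime(2015, 6, 1),
                        '$lt': datetime(2015, 6, 2)}}
coll.find(query).count()
```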
Next Steps
This notebook establishes how you can connect to the database of tweets that you have harvested and how you can use the power of Python and MongoDB to access and analyse your collections. Good luck!
Part 2: tweetharvest Further Analysis
Assuming we need some more advanced work to be done on the dataset we have collected, below are some sample analyses to dip our toes in the water.
The examples below are further illustration of using our dataset with standard Python modules used in datascience. The typical idion is that of queryiong MongoDB to get a cursor on our dataset, importing that into an analytic tool such as Pandas, and then producing the analysis. The analyses below require that a few packages are installed on our system:
matplotlib: a python 2D plotting library (documentation)
pandas: "an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools" (documentation)
Important Note
The dataset used in this notebook is not published on the Github repository. If you want to experiment with your own data, you need to install the tweetharvest package, harvest some tweets to replicate the emotweets project embedded there, and then run the notebook. The intended use of this example notebook is simply as an illustration of the type of analysis one might want to do using your own tools.
End of explanation
"""
db = pymongo.MongoClient().tweets_db
COLL = db.emotweets
COLL
"""
Explanation: Establish a Link to the Dataset as a MongoDB Collection
End of explanation
"""
COLL.count()
def count_by_tag(coll, hashtag):
query = {'hashtags': {'$in': [hashtag]}}
count = coll.find(query).count()
return count
print 'Number of #happy tweets: {}'.format(count_by_tag(COLL, 'happy'))
print 'Number of #sad tweets: {}'.format(count_by_tag(COLL, 'sad'))
"""
Explanation: Descriptive Statistics
Number of Tweets in Dataset
End of explanation
"""
query = {'coordinates': {'$ne': None}}
COLL.find(query).count()
"""
Explanation: Number of Geolocated Tweets
End of explanation
"""
# return a cursor that iterates over all documents and returns the creation date
cursor = COLL.find({}, {'created_at': 1, '_id': 0})
# list all the creation times and convert to Pandas DataFrame
times = pd.DataFrame(list(cursor))
times = pd.to_datetime(times.created_at)
earliest_timestamp = min(times)
latest_timestamp = max(times)
print 'Creation time for EARLIEST tweet in dataset: {}'.format(earliest_timestamp)
print 'Creation time for LATEST tweet in dataset: {}'.format(latest_timestamp)
"""
Explanation: Range of Creation Times for Tweets
End of explanation
"""
query = {} # empty query means find all documents
# return just two columns, the date of creation and the id of each document
projection = {'created_at': 1}
df = pd.DataFrame(list(COLL.find(query, projection)))
times = pd.to_datetime(df.created_at)
df.set_index(times, inplace=True)
df.drop('created_at', axis=1, inplace=True)
tweets_all = df.resample('60Min', how='count')
tweets_all.plot(figsize=[12, 7], title='Number of Tweets per Hour', legend=None);
"""
Explanation: Plot Tweets per Hour
End of explanation
"""
query = { # find all documents that:
'hashtags': {'$in': ['happy']}, # contain #happy hashtag
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = COLL.find(query, projection)
for tags in cursor[:10]:
print tags['hashtags']
"""
Explanation: More Complex Query
As an example of a more complex query, the following demonstrates how to extract all tweets that are not retweets, contain the hashtag #happy as well at least one other hashtag, and that are written in English. These attributes are passed to the .find method as a dictionary, and the hashtags are then extracted.
The hashtags of the first ten tweets meeting this specification are then printed out.
End of explanation
"""
from itertools import combinations
import networkx as nx
"""
Explanation: Build a Network of Hashtags
We could use this method to produce a network of hashtags. The following illustrates this by:
creating a generator function that yields every possible combination of two hashtags from each tweet
adding these pairs of tags as edges in a NetworkX graph
deleting the node happy (since it is connected to all the others by definition)
deleting those edges that are below a threshold weight
plotting the result
In order to run this, we need to install the NetworkX package (pip install networkx, documentation) and import it as well as the combinations function from Python's standard library itertools module.
End of explanation
"""
def gen_edges(coll, hashtag):
query = { # find all documents that:
'hashtags': {'$in': [hashtag]}, # contain hashtag of interest
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = coll.find(query, projection)
for tags in cursor:
hashtags = tags['hashtags']
for edge in combinations(hashtags, 2):
yield edge
"""
Explanation: Generate list of all pairs of hashtags
End of explanation
"""
def build_graph(coll, hashtag, remove_node=True):
g = nx.Graph()
for u,v in gen_edges(coll, hashtag):
if g.has_edge(u,v):
# add 1 to weight attribute of this edge
g[u][v]['weight'] = g[u][v]['weight'] + 1
else:
# create new edge of weight 1
g.add_edge(u, v, weight=1)
if remove_node:
# since hashtag is connected to every other node,
# it adds no information to this graph; remove it.
g.remove_node(hashtag)
return g
G = build_graph(COLL, 'happy')
"""
Explanation: Build graph with weighted edges between hashtags
End of explanation
"""
def trim_edges(g, weight=1):
# function from http://shop.oreilly.com/product/0636920020424.do
g2 = nx.Graph()
for u, v, edata in g.edges(data=True):
if edata['weight'] > weight:
g2.add_edge(u, v, edata)
return g2
"""
Explanation: Remove rarer edges
Finally we remove rare edges (defined here arbitrarily as edges with a weight of 25 or less), then print a table of the remaining edges sorted in descending order by weight.
End of explanation
"""
G2 = trim_edges(G, weight=25)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort(['weight'], ascending=False, inplace=True)
df
"""
Explanation: View as Table
End of explanation
"""
G3 = trim_edges(G, weight=35)
pos=nx.circular_layout(G3) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3.edges()]
weight_list = [edata['weight']/5.0 for u, v, edata in G3.edges(data=True)]
# edges
nx.draw_networkx_edges(G3, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3, pos, font_size=20,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(10, 10)
plt.axis('off');
"""
Explanation: Plot the Network
End of explanation
"""
G_SAD = build_graph(COLL, 'sad')
G2S = trim_edges(G_SAD, weight=5)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2S.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort(['weight'], ascending=False, inplace=True)
df
"""
Explanation: Repeat for #sad
End of explanation
"""
G3S = trim_edges(G_SAD, weight=5)
pos=nx.spring_layout(G3S) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3S, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3S.edges()]
weight_list = [edata['weight'] for u, v, edata in G3S.edges(data=True)]
# edges
nx.draw_networkx_edges(G3S, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3S, pos, font_size=12,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(13, 13)
plt.axis('off');
"""
Explanation: Graph is drawn with a spring layout to bring out more clearly the disconnected sub-graphs.
End of explanation
"""
|
JENkt4k/pynotes-general | Linux Tools & Tricks.ipynb | gpl-3.0 | %colors Linux
%history
%dirs
%magic
%pwd
%quickref
"""
Explanation: Python magic
https://ipython.org/ipython-doc/3/interactive/magics.html
End of explanation
"""
print "this is a test of the emergency broadcast system"
%%html
<style>
html {
font-size: 62.5% !important; }
body {
font-size: 1.5em !important; /* currently ems cause chrome bug misinterpreting rems on body element */
line-height: 1.6 !important;
font-weight: 400 !important;
font-family: "Raleway", "HelveticaNeue", "Helvetica Neue", Helvetica, Arial, sans-serif !important;
color: #222 !important; }
div{ border-radius: 0px !important; }
div.CodeMirror-sizer{ background: rgb(244, 244, 248) !important; }
div.input_area{ background: rgb(244, 244, 248) !important; }
div.out_prompt_overlay:hover{ background: rgb(244, 244, 248) !important; }
div.input_prompt:hover{ background: rgb(244, 244, 248) !important; }
h1, h2, h3, h4, h5, h6 {
color: #333 !important;
margin-top: 0 !important;
margin-bottom: 2rem !important;
font-weight: 300 !important; }
h1 { font-size: 4.0rem !important; line-height: 1.2 !important; letter-spacing: -.1rem !important;}
h2 { font-size: 3.6rem !important; line-height: 1.25 !important; letter-spacing: -.1rem !important; }
h3 { font-size: 3.0rem !important; line-height: 1.3 !important; letter-spacing: -.1rem !important; }
h4 { font-size: 2.4rem !important; line-height: 1.35 !important; letter-spacing: -.08rem !important; }
h5 { font-size: 1.8rem !important; line-height: 1.5 !important; letter-spacing: -.05rem !important; }
h6 { font-size: 1.5rem !important; line-height: 1.6 !important; letter-spacing: 0 !important; }
@media (min-width: 550px) {
h1 { font-size: 5.0rem !important; }
h2 { font-size: 4.2rem !important; }
h3 { font-size: 3.6rem !important; }
h4 { font-size: 3.0rem !important; }
h5 { font-size: 2.4rem !important; }
h6 { font-size: 1.5rem !important; }
}
p {
margin-top: 0 !important; }
a {
color: #1EAEDB !important; }
a:hover {
color: #0FA0CE !important; }
code {
padding: .2rem .5rem !important;
margin: 0 .2rem !important;
font-size: 90% !important;
white-space: nowrap !important;
background: #F1F1F1 !important;
border: 1px solid #E1E1E1 !important;
border-radius: 4px !important; }
pre > code {
display: block !important;
padding: 1rem 1.5rem !important;
white-space: pre !important; }
button{ border-radius: 0px !important; }
.navbar-inner{ background-image: none !important; }
select, textarea{ border-radius: 0px !important; }
</style>
"""
Explanation: Sticky keys causing issues? Need Password feedback?
Had an issue with my keyboard where a few worn keys were sticking: presses were either not detected or registered twice. This caused constant password authentication failures, so a quick Google search turned up the following fixes:
1) Change Password Entry To Show * (asterisk) instead of no feedback - less secure!
bash
#run command
sudo visudo
bash
#change
Defaults env_reset
#to
Defaults env_reset,pwfeedback
2) Change from VI to Nano or Emacs etc..
bash
export VISUAL=nano; visudo
Notes * use spaces not tabs
Changing Git author info
source
Check out clean repo:
bash
git clone --bare https://github.com/[user]/[repo].git
cd [repo].git
create git-author-rewrite.sh file:
```bash
!/bin/sh
git filter-branch --env-filter '
OLD_EMAIL="[email protected]"
CORRECT_NAME="Your Correct Name"
CORRECT_EMAIL="[email protected]"
if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ]
then
export GIT_COMMITTER_NAME="$CORRECT_NAME"
export GIT_COMMITTER_EMAIL="$CORRECT_EMAIL"
fi
if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ]
then
export GIT_AUTHOR_NAME="$CORRECT_NAME"
export GIT_AUTHOR_EMAIL="$CORRECT_EMAIL"
fi
' --tag-name-filter cat -- --branches --tags
```
make executable:
bash
chmod +x git-author-rewrite.sh
review changes:
bash
git log
push changes:
bash
git push --force --tags origin 'refs/heads/*'
cleanup:
bash
cd ..
rm -rf [repo].git
Managing Remotes
[Managing Remotes Documentation](https://git-scm.com/book/ch2-5.html)
[Multiple push remotes](http://stackoverflow.com/questions/14290113/git-pushing-code-to-two-remotes)
Show current remotes:
bash
git remote -v
Add a "all" remote
bash
git remote add all git://original/repo.git
git remote -v
Add another repo to the remote
bash
git remote set-url --add --push all git://another/repo.git
This will replace your original push URL, so simply add it back in
bash
git remote set-url --add --push all git://original/repo.git
Now you should see both pushes
bash
git remote -v
Git general
Quick Reference
What's my name?
Linux Kernel Version
bash
uname -r
Ubuntu version
bash
lsb_release -sc
End of explanation
"""
import sys
import os
from subprocess import PIPE, Popen
import re
def get_active_window_title():
    id_w = None  # initialise so the check below cannot hit an unbound name
    root = Popen(['xprop', '-root', '_NET_ACTIVE_WINDOW'], stdout=PIPE)
for line in root.stdout:
m = re.search(b'^_NET_ACTIVE_WINDOW.* ([\w]+)$', line)
if m != None:
id_ = m.group(1)
id_w = Popen(['xprop', '-id', id_, 'WM_NAME'], stdout=PIPE)
break
if id_w != None:
for line in id_w.stdout:
match = re.match(b"WM_NAME\(\w+\) = (?P<name>.+)$", line)
if match != None:
return match.group("name")
return "Active window not found"
get_active_window_title()
import time
time.sleep(2)
get_active_window_title()
"""
Explanation: Get the Active Window on Linux
Get active window title in X
- Original Code had the following error: TypeError: can't use a string pattern on a bytes-like object
<br/>
Obtain Active window using Python
Corrected code is now here
"import wnck" only works with python 2.x, using python3.x pypie and wx were the only options I found so far
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/d1b18c3376911723f0257fe5003a8477/plot_linear_model_patterns.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Linear classifier on sensor data with plot patterns and filters
Here decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable [1]_ than the classifier filters (weight vectors).
The patterns explain how the MEG and EEG data were generated from the
discriminant neural sources which are extracted by the filters.
Note patterns/filters in MEG data are more similar than EEG data
because the noise is less spatially correlated in MEG than EEG.
References
.. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., & Bießmann, F. (2014). On the interpretation of
weight vectors of linear models in multivariate neuroimaging.
NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25, fir_design='firwin')
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=2, baseline=None, preload=True)
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
"""
Explanation: Set parameters
End of explanation
"""
clf = LogisticRegression(solver='lbfgs')
scaler = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)
# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
# We fitted the linear model onto Z-scored data. To make the filters
# interpretable, we must reverse this normalization step
coef = scaler.inverse_transform([coef])[0]
# The data was vectorized to fit a single model across all time points and
# all channels. We thus reshape it:
coef = coef.reshape(len(meg_epochs.ch_names), -1)
# Plot
evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='MEG %s' % name, time_unit='s')
"""
Explanation: Decoding in sensor space using a LogisticRegression classifier
End of explanation
"""
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]
# Define a unique pipeline to sequentially:
clf = make_pipeline(
Vectorizer(), # 1) vectorize across time and channels
StandardScaler(), # 2) normalize features across trials
LinearModel(
LogisticRegression(solver='lbfgs'))) # 3) fits a logistic regression
clf.fit(X, y)
# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
# The `inverse_transform` parameter will call this method on any estimator
# contained in the pipeline, in reverse order.
coef = get_coef(clf, name, inverse_transform=True)
evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='EEG %s' % name[:-1], time_unit='s')
"""
Explanation: Let's do the same on EEG data using a scikit-learn pipeline
End of explanation
"""
|
bp-kelley/rdkit | Docs/Notebooks/RGroupDecomposition-RingSubstitution.ipynb | bsd-3-clause | from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
from rdkit.Chem import rdRGroupDecomposition
from IPython.display import HTML
from rdkit import rdBase
rdBase.DisableLog("rdApp.debug")
import pandas as pd
from rdkit.Chem import PandasTools
core = Chem.MolFromSmarts("*1CCCC1")
core
smiles = ["C1CCCC1-C2CCC2Cl", "N1CCCC1-C2CCC2Cl", "O1CCCC1-C2CCC2Cl", "N1OCCC1-C2CCC2Cl", "N1OCSC1-C2CCC2Cl"]
mols = [Chem.MolFromSmiles(smi) for smi in smiles]
from rdkit.Chem import Draw
Draw.MolsToGridImage(mols)
core.GetSubstructMatch(core)
"""
Explanation: This example shows how ring substitutions are handled.
End of explanation
"""
rgroups = rdRGroupDecomposition.RGroupDecomposition(core)
for i,m in enumerate(mols):
rgroups.Add(m)
if i == 10:
break
"""
Explanation: Make RGroup decomposition!
End of explanation
"""
rgroups.Process()
groups = rgroups.GetRGroupsAsColumns()
frame = pd.DataFrame(groups)
PandasTools.ChangeMoleculeRendering(frame)
"""
Explanation: We need to call process after all molecules are added to optimize the RGroups.
End of explanation
"""
HTML(frame.to_html())
"""
Explanation: I would have preferred new cores to appear and the [:2]-N-[:2] depiction is a bit annoying... Perhaps for round 2
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a/td2a_correction_session_2E.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.i - Sérialisation - correction
Serialization of objects, in particular dataframes, with speed measurements.
End of explanation
"""
import random
values = [ [random.random() for i in range(0,20)] for _ in range(0,100000) ]
col = [ "col%d" % i for i in range(0,20) ]
import pandas
df = pandas.DataFrame( values, columns = col )
"""
Explanation: Exercise 1: serializing a large dataframe
Step 1: build a large dataframe filled with random numbers
End of explanation
"""
df.to_csv("df_text.txt", sep="\t")
df.to_pickle("df_text.bin")
"""
Explanation: Step 2: save the dataframe in two formats, text and serialized (binary)
End of explanation
"""
%timeit pandas.read_csv("df_text.txt", sep="\t")
%timeit pandas.read_pickle("df_text.bin")
"""
Explanation: Step 3: measure the loading time
End of explanation
"""
obj = dict(a=[50, "r"], gg=(5, 't'))
import jsonpickle
frozen = jsonpickle.encode(obj)
frozen
"""
Explanation: Exercise 2: json
A first attempt.
End of explanation
"""
frozen = jsonpickle.encode(df)
len(frozen), type(frozen), frozen[:55]
"""
Explanation: This module is equivalent to the json module for the standard Python types (lists, dictionaries, numbers, ...), but the json module does not work on dataframes.
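A quick check of that claim (a sketch):
```python
import json
try:
    json.dumps(df)
except TypeError as e:
    print(e)  # a DataFrame is not JSON serializable
```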
End of explanation
"""
def to_json(obj, filename):
frozen = jsonpickle.encode(obj)
with open(filename, "w", encoding="utf-8") as f:
f.write(frozen)
def read_json(filename):
with open(filename, "r", encoding="utf-8") as f:
enc = f.read()
return jsonpickle.decode(enc)
to_json(df, "df_text.json")
try:
df = read_json("df_text.json")
except Exception as e:
print(e)
"""
Explanation: The to_json method would also give a satisfactory result, but it cannot be applied to a machine-learning model produced by scikit-learn.
End of explanation
"""
import jsonpickle.ext.numpy as jsonpickle_numpy
jsonpickle_numpy.register_handlers()
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X,y)
clf.predict_proba([[0.1, 0.2]])
to_json(clf, "logreg.json")
try:
clf2 = read_json("logreg.json")
except AttributeError as e:
    # For an unknown reason, probably a bug, this code does not work.
print(e)
"""
Explanation: Clearly, this does not work on DataFrames. One would have to take inspiration from the numpyson module.
json + scikit-learn
Read issue 147 to understand why the next two lines are needed.
End of explanation
"""
class EncapsulateLogisticRegression:
def __init__(self, obj):
self.obj = obj
def __getstate__(self):
return {k: v for k, v in sorted(self.obj.__getstate__().items())}
def __setstate__(self, data):
self.obj = LogisticRegression()
self.obj.__setstate__(data)
enc = EncapsulateLogisticRegression(clf)
to_json(enc, "logreg.json")
enc2 = read_json("logreg.json")
clf2 = enc2.obj
clf2.predict_proba([[0.1, 0.2]])
with open("logreg.json", "r") as f:
content = f.read()
content
"""
Explanation: So we try another way. If the previous code does not work and the following does, it is a bug in jsonpickle.
End of explanation
"""
|
xpharry/Udacity-DLFoudation | tutorials/intro-to-rnns/Anna KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
chars[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
    batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
"""
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
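A tiny illustration of the one-character shift between inputs and targets (the numbers are made up):
```python
demo = np.array([31, 7, 2, 9, 4])
x_demo, y_demo = demo[:-1], demo[1:]
print(x_demo)  # [31  7  2  9]
print(y_demo)  # [ 7  2  9  4]
```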
End of explanation
"""
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
train_x[:,:50]
"""
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
    # This makes a list where each element is one step in the sequence
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
    # Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
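As a concrete sanity check on the "approximate number of parameters" advice above, here is a rough sketch I find handy. It is not part of the original notebook; it assumes the standard LSTM gate shapes, and the vocabulary size of roughly 80 characters is only an illustrative guess.
python
# Rough estimate only: each LSTM layer has 4 gates, each with a (input_dim + lstm_size) x lstm_size
# weight matrix plus a bias, followed by the final softmax layer.
def approx_char_rnn_params(lstm_size, num_layers, vocab_size):
    first_layer = 4 * lstm_size * (vocab_size + lstm_size + 1)
    deeper_layers = (num_layers - 1) * 4 * lstm_size * (2 * lstm_size + 1)
    softmax = lstm_size * vocab_size + vocab_size
    return first_layer + deeper_layers + softmax
approx_char_rnn_params(512, 2, 80)  # a bit over 3 million parameters with the settings used here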
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
"""
Explanation: Training
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then use that new character to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
checkpoint = "checkpoints/____.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
jinzishuai/learn2deeplearn | deeplearning.ai/C4.CNN/week3_ObjectDetection/hw/Car detection for Autonomous Driving/Autonomous driving application - Car detection - v1.ipynb | gpl-3.0 | import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
"""
Explanation: Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).
You will learn to:
- Use object detection on a car detection dataset
- Deal with bounding boxes
Run the following cell to load the packages and dependencies that are going to be useful for your journey!
End of explanation
"""
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = None
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = None
box_class_scores = None
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = None
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = None
boxes = None
classes = None
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
"""
Explanation: Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).
1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
</center></caption>
<img src="nb_images/driveai.png" style="width:100px;height:100;">
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> Figure 1 </u>: Definition of a box<br> </center></caption>
If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
2 - YOLO
YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
2.1 - Model details
First things to know:
- The input is a batch of images of shape (m, 608, 608, 3)
- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
Lets look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> Figure 2 </u>: Encoding architecture for YOLO<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> Figure 3 </u>: Flattening the last two dimensions<br> </center></caption>
Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> Figure 4 </u>: Find the class detected by each box<br> </center></caption>
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> Figure 5 </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> Figure 6 </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.
2.2 - Filtering with a threshold on class scores
You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- box_confidence: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- boxes: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.
- box_class_probs: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
Exercise: Implement yolo_filter_boxes().
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
2. For each box, find:
- the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
- the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. (Hint)
Reminder: to call a Keras function, you should use K.function(...).
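If you want a non-graded reference while debugging, one possible sketch of those four steps is shown below; treat it as an illustration of the hints rather than the official solution.
python
box_scores = box_confidence * box_class_probs                  # step 1: elementwise product
box_classes = K.argmax(box_scores, axis=-1)                    # step 2: best class index per box
box_class_scores = K.max(box_scores, axis=-1)                  #         and its score
filtering_mask = box_class_scores >= threshold                 # step 3: boolean mask
scores = tf.boolean_mask(box_class_scores, filtering_mask)     # step 4: keep only masked entries
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)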
End of explanation
"""
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = None
yi1 = None
xi2 = None
yi2 = None
inter_area = None
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = None
box2_area = None
union_area = None
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = None
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
2.3 - Non-max suppression
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> Figure 7 </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called "Intersection over Union", or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> Figure 8 </u>: Definition of "Intersection over Union". <br> </center></caption>
Exercise: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)
- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
- xi1 = maximum of the x1 coordinates of the two boxes
- yi1 = maximum of the y1 coordinates of the two boxes
- xi2 = minimum of the x2 coordinates of the two boxes
- yi2 = minimum of the y2 coordinates of the two boxes
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
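One possible sketch of the computation follows; it is an illustration rather than the graded answer, and the max(..., 0) clamp is an extra guard of mine for boxes that do not overlap at all.
python
xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
union_area = box1_area + box2_area - inter_area
iou = inter_area / union_area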
End of explanation
"""
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors obviously has to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = None
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = None
boxes = None
classes = None
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**iou = **
</td>
<td>
0.14285714285714285
</td>
</tr>
</table>
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your iou() implementation):
- tf.image.non_max_suppression()
- K.gather()
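For orientation only (the graded cell above is intentionally left blank), the body of the function can be sketched as:
python
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)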
End of explanation
"""
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = None
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = None
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = None
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
Exercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes
python
boxes = scale_boxes(boxes, image_shape)
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
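Putting the pieces together, a sketch of the pipeline inside yolo_eval could look like this (again an illustration, not the graded solution):
python
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
boxes = yolo_boxes_to_corners(box_xy, box_wh)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
boxes = scale_boxes(boxes, image_shape)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)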
End of explanation
"""
sess = K.get_session()
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
<font color='blue'>
Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
3 - Test YOLO pretrained model on images
In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.
End of explanation
"""
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
"""
Explanation: 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
End of explanation
"""
yolo_model = load_model("model_data/yolo.h5")
"""
Explanation: 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
End of explanation
"""
yolo_model.summary()
"""
Explanation: This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
End of explanation
"""
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
"""
Explanation: Note: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
Reminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
3.3 - Convert output of the model to usable bounding box tensors
The output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
End of explanation
"""
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
"""
Explanation: You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.
3.4 - Filtering boxes
yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Lets now call yolo_eval, which you had previously implemented, to do this.
End of explanation
"""
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = None
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
"""
Explanation: 3.5 - Run the graph on an image
Let the fun begin. You have created a (sess) graph that can be summarized as follows:
<font color='purple'> yolo_model.input </font> is given to yolo_model. The model is used to compute the output <font color='purple'> yolo_model.output </font>
<font color='purple'> yolo_model.output </font> is processed by yolo_head. It gives you <font color='purple'> yolo_outputs </font>
<font color='purple'> yolo_outputs </font> goes through a filtering function, yolo_eval. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
Exercise: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute scores, boxes, classes.
The code below also uses the following function:
python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
Important note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
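So the one missing line is a single session run over the three output tensors; a sketch (using the global scores, boxes and classes built earlier in this notebook) is:
python
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes],
                                              feed_dict={yolo_model.input: image_data,
                                                         K.learning_phase(): 0})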
End of explanation
"""
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
"""
Explanation: Run the following cell on the "test.jpg" image to verify that your function is correct.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_compute_mne_inverse_epochs_in_label.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.minimum_norm import apply_inverse
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
event_id, tmin, tmax = 1, -0.2, 0.5
# Using the same inverse operator when inspecting single trials Vs. evoked
snr = 3.0 # Standard assumption for average data but using it for single trial
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Set up pick list
include = []
# Add a bad channel
raw.info['bads'] += ['EEG 053'] # bads + 1 more
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# Get evoked data (averaging across trials in sensor space)
evoked = epochs.average()
# Compute inverse solution and stcs for each epoch
# Use the same inverse operator as with evoked data (i.e., set nave)
# If you use a different nave, dSPM just scales by a factor sqrt(nave)
stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, label,
pick_ori="normal", nave=evoked.nave)
stc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori="normal")
stc_evoked_label = stc_evoked.in_label(label)
# Mean across trials but not across vertices in label
mean_stc = sum(stcs) / len(stcs)
# compute sign flip to avoid signal cancellation when averaging signed values
flip = mne.label_sign_flip(label, inverse_operator['src'])
label_mean = np.mean(mean_stc.data, axis=0)
label_mean_flip = np.mean(flip[:, np.newaxis] * mean_stc.data, axis=0)
# Get inverse solution by inverting evoked data
stc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori="normal")
# apply_inverse() does whole brain, so sub-select label of interest
stc_evoked_label = stc_evoked.in_label(label)
# Average over label (not caring to align polarities here)
label_mean_evoked = np.mean(stc_evoked_label.data, axis=0)
"""
Explanation: Compute MNE-dSPM inverse solution on single epochs
Compute dSPM inverse solution on single trial epochs restricted
to a brain label.
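As a side note that is not part of the original example, apply_inverse_epochs returns a list of SourceEstimate objects, so the single-trial label time courses can be stacked into one array if that is more convenient:
python
# Hypothetical convenience step: array of shape (n_epochs, n_vertices_in_label, n_times)
single_trial_data = np.array([stc.data for stc in stcs])
print(single_trial_data.shape)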
End of explanation
"""
times = 1e3 * stcs[0].times # times in ms
plt.figure()
h0 = plt.plot(times, mean_stc.data.T, 'k')
h1, = plt.plot(times, label_mean, 'r', linewidth=3)
h2, = plt.plot(times, label_mean_flip, 'g', linewidth=3)
plt.legend((h0[0], h1, h2), ('all dipoles in label', 'mean',
'mean with sign flip'))
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.show()
"""
Explanation: View activation time-series to illustrate the benefit of aligning/flipping
End of explanation
"""
# Single trial
plt.figure()
for k, stc_trial in enumerate(stcs):
plt.plot(times, np.mean(stc_trial.data, axis=0).T, 'k--',
label='Single Trials' if k == 0 else '_nolegend_',
alpha=0.5)
# Single trial inverse then average.. making linewidth large to not be masked
plt.plot(times, label_mean, 'b', linewidth=6,
label='dSPM first, then average')
# Evoked and then inverse
plt.plot(times, label_mean_evoked, 'r', linewidth=2,
label='Average first, then dSPM')
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.legend()
plt.show()
"""
Explanation: Viewing single trial dSPM and average dSPM for unflipped pooling over label
Compare to (1) Inverse (dSPM) then average, (2) Evoked then dSPM
End of explanation
"""
|
nre-aachen/GeMpy | Prototype Notebook/Example_2_Simple-Deprecated.ipynb | mit | # Importing
import theano.tensor as T
import sys, os
sys.path.append("../GeMpy")
# Importing GeMpy modules
import GeMpy
# Reloading (only for development purposes)
import importlib
importlib.reload(GeMpy)
# Usuful packages
import numpy as np
import pandas as pn
import matplotlib.pyplot as plt
# This was to choose the gpu
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# Default options for printing
np.set_printoptions(precision = 6, linewidth= 130, suppress = True)
%matplotlib inline
#%matplotlib notebook
"""
Explanation: Example 2: Simple model
This notebook is a series of independent cells showing how to create a simple model from the beginning to the end using GeMpy
Importing dependencies
End of explanation
"""
geo_data = GeMpy.import_data([0,10,0,10,0,10], [50,50,50])
# =========================
# DATA GENERATION IN PYTHON
# =========================
# Layers coordinates
layer_1 = np.array([[0.5,4,7], [2,4,6.5], [4,4,7], [5,4,6]])#-np.array([5,5,4]))/8+0.5
layer_2 = np.array([[3,4,5], [6,4,4],[8,4,4], [7,4,3], [1,4,6]])
layers = np.asarray([layer_1,layer_2])
# Foliations coordinates
dip_pos_1 = np.array([7,4,7])#- np.array([5,5,4]))/8+0.5
dip_pos_2 = np.array([2.,4,4])
# Dips
dip_angle_1 = float(15)
dip_angle_2 = float(340)
dips_angles = np.asarray([dip_angle_1, dip_angle_2], dtype="float64")
# Azimuths
azimuths = np.asarray([90,90], dtype="float64")
# Polarity
polarity = np.asarray([1,1], dtype="float64")
# Setting foliations and interfaces values
GeMpy.set_interfaces(geo_data, pn.DataFrame(
data = {"X" :np.append(layer_1[:, 0],layer_2[:,0]),
"Y" :np.append(layer_1[:, 1],layer_2[:,1]),
"Z" :np.append(layer_1[:, 2],layer_2[:,2]),
"formation" : np.append(
np.tile("Layer 1", len(layer_1)),
np.tile("Layer 2", len(layer_2))),
"labels" : [r'${\bf{x}}_{\alpha \, 0}^1$',
r'${\bf{x}}_{\alpha \, 1}^1$',
r'${\bf{x}}_{\alpha \, 2}^1$',
r'${\bf{x}}_{\alpha \, 3}^1$',
r'${\bf{x}}_{\alpha \, 0}^2$',
r'${\bf{x}}_{\alpha \, 1}^2$',
r'${\bf{x}}_{\alpha \, 2}^2$',
r'${\bf{x}}_{\alpha \, 3}^2$',
r'${\bf{x}}_{\alpha \, 4}^2$'] }))
GeMpy.set_foliations(geo_data, pn.DataFrame(
data = {"X" :np.append(dip_pos_1[0],dip_pos_2[0]),
"Y" :np.append(dip_pos_1[ 1],dip_pos_2[1]),
"Z" :np.append(dip_pos_1[ 2],dip_pos_2[2]),
"azimuth" : azimuths,
"dip" : dips_angles,
"polarity" : polarity,
"formation" : ["Layer 1", "Layer 2"],
"labels" : [r'${\bf{x}}_{\beta \,{0}}$',
r'${\bf{x}}_{\beta \,{1}}$'] }))
GeMpy.get_raw_data(geo_data)
# Plotting data
GeMpy.plot_data(geo_data)
GeMpy.PlotData.annotate_plot(GeMpy.get_raw_data(geo_data),
'labels','X', 'Z', size = 'x-large')
"""
Explanation: Visualize data
End of explanation
"""
GeMpy.i_set_data(geo_data)
"""
Explanation: Interactive pandas Dataframe
Using qgrid it is possible to modify the tables in place as follows:
End of explanation
"""
from ipywidgets import widgets
from ipywidgets import interact
def cov_cubic_f(r,a = 6, c_o = 1):
if r <= a:
return c_o*(1-7*(r/a)**2+35/4*(r/a)**3-7/2*(r/a)**5+3/4*(r/a)**7)
else:
return 0
def cov_cubic_d1_f(r,a = 6., c_o = 1):
SED_dips_dips = r
f = c_o
return (f * ((-14 /a ** 2) + 105 / 4 * SED_dips_dips / a ** 3 -
35 / 2 * SED_dips_dips ** 3 / a ** 5 + 21 / 4 * SED_dips_dips ** 5 / a ** 7))
def cov_cubic_d2_f(r, a = 6, c_o = 1):
SED_dips_dips = r
f = c_o
return 7*f*(9*r**5-20*a**2*r**3+15*a**4*r-4*a**5)/(2*a**7)
def plot_potential_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0,12,50)
y = [cov_cubic_f(i, a = a, c_o = c_o) for i in x]
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.hlines(0,0,12, linestyles = "--")
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+c_o)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_Z(r)$', y = 1.08, fontsize=15, fontweight='bold')
def plot_potential_direction_var( a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0,12,50)
y = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
#ax2.scatter(0,c_o)
plt.title("Cross-Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C\'_Z / r$', y = 1.08, fontsize=15, fontweight='bold')
def plot_directionU_directionU_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0.01,12,50)
d1 = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
d2 = np.asarray([cov_cubic_d2_f(i, a = a, c_o = c_o) for i in x])
y = -(d2) # (0.5*x**2)/(x**2)*
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+y[0], s = 20)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_{\partial {Z}/ \partial x, \, \partial {Z}/ \partial x}(h_x)$'
, y = 1.08, fontsize=15)
def plot_directionU_directionV_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0.01,12,50)
d1 = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
d2 = np.asarray([cov_cubic_d2_f(i, a = a, c_o = c_o) for i in x])
y = -(d2-d1) # (0.5*x**2)/(x**2)*
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+y[0], s = 20)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_{\partial {Z}/ \partial x, \, \partial {Z}/ \partial y}(h_x,h_y)$'
, y = 1.08, fontsize=15)
def plot_all(a = 10, c_o = 1, nugget_effect = 0):
plot_potential_direction_var(a, c_o, nugget_effect)
plot_directionU_directionU_var(a, c_o, nugget_effect)
plot_directionU_directionV_var(a, c_o, nugget_effect)
"""
Explanation: Grid and potential field
We can see the potential field generated out of the data above
End of explanation
"""
GeMpy.compute_block_model(geo_data)
GeMpy.plot_section(geo_data, 13)
"""
Explanation: From potential field to block
The potential field describes the deposition form and direction of a basin. However, in most scenarios the real goal of structural modeling is the segmentation into layers of areas with a significant change of properties (e.g. shales and carbonates). Since we need to provide at least one point per interface, we can easily compute the value of the potential field at the intersections between two layers. Therefore, by a simple comparison between a concrete value of the potential field and the values at the interfaces, it is possible to segment the domain into layers (Fig X).
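As a purely conceptual sketch (this is not GeMpy's internal implementation), the segmentation step boils down to comparing the field value of every grid cell with the field values at the interfaces:
python
import numpy as np
potential_field = np.random.rand(50 * 50 * 50)           # stand-in for the field evaluated on the grid
interface_values = np.array([0.4, 0.7])                  # hypothetical field values at the two interfaces
block = np.digitize(potential_field, interface_values)   # 0, 1 or 2: one integer formation id per cell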
End of explanation
"""
layer_3 = np.array([[2,4,3], [8,4,2], [9,4,3]])
dip_pos_3 = np.array([1,4,1])
dip_angle_3 = float(80)
azimuth_3 = 90
polarity_3 = 1
GeMpy.set_interfaces(geo_data, pn.DataFrame(
data = {"X" :layer_3[:, 0],
"Y" :layer_3[:, 1],
"Z" :layer_3[:, 2],
"formation" : np.tile("Layer 3", len(layer_3)),
"labels" : [ r'${\bf{x}}_{\alpha \, 0}^3$',
r'${\bf{x}}_{\alpha \, 1}^3$',
r'${\bf{x}}_{\alpha \, 2}^3$'] }), append = True)
GeMpy.get_raw_data(geo_data,"interfaces")
GeMpy.set_foliations(geo_data, pn.DataFrame(data = {
"X" : dip_pos_3[0],
"Y" : dip_pos_3[1],
"Z" : dip_pos_3[2],
"azimuth" : azimuth_3,
"dip" : dip_angle_3,
"polarity" : polarity_3,
"formation" : [ 'Layer 3'],
"labels" : r'${\bf{x}}_{\beta \,{2}}$'}), append = True)
GeMpy.get_raw_data(geo_data, 'foliations')
GeMpy.set_data_series(geo_data, {'younger': ('Layer 1', 'Layer 2'),
'older': 'Layer 3'}, order_series = ['younger', 'older'])
GeMpy.plot_data(geo_data)
"""
Explanation: Combining potential fields: Depositional series
In reality, most geological settings are formed by a concatenation of depositional phases separated clearly by unconformity boundaries. Each of these phases can be modeled by a potential field. In order to capture this behavior, we can classify the formations that belong to an individual depositional phase into categories or series. The potential field computed for each of these series could be seen as a sort of evolution of the basin if an unconformity had not occurred. Finally, sorting the temporal relations between series allows us to superpose the corresponding potential field at a specific location.
In the next example, we add a new series consisting of a single layer---'Layer 3' (Fig X)---which generates the potential field of Fig X and subsequently the block of Figure X.
End of explanation
"""
GeMpy.plot_potential_field(geo_data,4, n_pf=1, direction='y',
colorbar = True, cmap = 'magma' )
GeMpy.get_raw_data(geo_data)
"""
Explanation: This potential field gives the following block
End of explanation
"""
GeMpy.compute_block_model(geo_data, series_number= 'all', verbose = 0)
GeMpy.plot_section(geo_data, 13)
"""
Explanation: Combining both potential field where the first potential field is younger than the second we can obtain the following structure.
End of explanation
"""
plot_potential_var(10,10**2 / 14 / 3 , 0.01)
plot_all(10,10**2 / 14 / 3 , 0.01) # 0**2 /14/3
"""
Explanation: Side note: Example of covariances involved in the cokriging system
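For reference, the cubic covariance implemented in cov_cubic_f is
$$C_Z(r) = c_o\left(1 - 7\left(\frac{r}{a}\right)^2 + \frac{35}{4}\left(\frac{r}{a}\right)^3 - \frac{7}{2}\left(\frac{r}{a}\right)^5 + \frac{3}{4}\left(\frac{r}{a}\right)^7\right) \quad \text{for } r \leq a, \qquad C_Z(r) = 0 \quad \text{for } r > a,$$
while cov_cubic_d1_f and cov_cubic_d2_f correspond to $C'_Z(r)/r$ and $C''_Z(r)$ respectively, which is what the cross-covariance and gradient-gradient panels plot.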
End of explanation
"""
|
chengsoonong/crowdastro | notebooks/5_training_data.ipynb | mit | import os.path
import pprint
import sys
import astropy.io.fits
import matplotlib.colors
import matplotlib.pyplot
import numpy
import pymongo
import requests
import scipy.ndimage.filters
import sklearn.cross_validation
import sklearn.decomposition
import sklearn.ensemble
import sklearn.linear_model
import sklearn.neural_network
import sklearn.svm
sys.path.insert(1, '..')
import crowdastro.rgz_analysis.consensus
%matplotlib inline
matplotlib.pyplot.rcParams['image.cmap'] = 'gray'
HOST = 'localhost'
PORT = 27017
DB_NAME = 'radio'
DATA_PATH = os.path.join('..', 'data')
ATLAS_CATALOGUE_PATH = os.path.join(DATA_PATH, 'ATLASDR3_cmpcat_23July2015.dat')
TILE_SIZE = '2x2'
FITS_IMAGE_WIDTH = 200
FITS_IMAGE_HEIGHT = 200
CLICK_IMAGE_WIDTH = 500
CLICK_IMAGE_HEIGHT = 500
CLICK_TO_FITS_X = FITS_IMAGE_WIDTH / CLICK_IMAGE_WIDTH
CLICK_TO_FITS_Y = FITS_IMAGE_HEIGHT / CLICK_IMAGE_HEIGHT
CLICK_TO_FITS = numpy.array([CLICK_TO_FITS_X, CLICK_TO_FITS_Y])
# Setup Mongo DB.
client = pymongo.MongoClient(HOST, PORT)
db = client[DB_NAME]
"""
Explanation: Training Data
In this notebook, I will try to assemble training data pairs: Input subjects from the Radio Galaxy Zoo database and potential hosts from the associated IR image, and output classifications.
End of explanation
"""
subjects = list(db.radio_subjects.find({'metadata.survey': 'atlas', 'state': 'complete', 'metadata.contour_count': 1}))
print('Found {} subjects.'.format(len(subjects)))
"""
Explanation: "Simple" subjects
My first task is to screen out what I think would be a simple set of subjects. In the fits-format notebook, I found that about 30% of ATLAS subjects have just one set of radio contours.
I want to screen out all of these and use them as the training subjects. It's a lot easier to look for just the subjects that have contour_count = 1 — the number of contours seems to be mostly unrelated to the number of radio sources, but if there's only one contour, there should only be one source. The benefit of doing things this way is that I can ignore the classifications collection for a bit.
End of explanation
"""
def open_fits(subject, field, wavelength):
"""Opens a FITS image.
subject: RGZ subject.
field: 'elais' or 'cdfs'.
wavelength: 'ir' or 'radio'.
-> FITS image file handle.
"""
if field not in {'elais', 'cdfs'}:
raise ValueError('field must be either "elais" or "cdfs".')
if wavelength not in {'ir', 'radio'}:
raise ValueError('wavelength must be either "ir" or "radio".')
assert subject['metadata']['survey'] == 'atlas', 'Subject not from ATLAS survey.'
cid = subject['metadata']['source']
filename = '{}_{}.fits'.format(cid, wavelength)
path = os.path.join(DATA_PATH, field, TILE_SIZE, filename)
return astropy.io.fits.open(path, ignore_blank=True)
def plot_contours(subject, colour='green'):
uri = subject['location']['contours']
contours = requests.get(uri).json()['contours']
for row in contours:
for col in row:
xs = []
ys = []
for pair in col['arr']:
xs.append(pair['x'])
ys.append(pair['y'])
matplotlib.pyplot.plot(xs, FITS_IMAGE_HEIGHT - numpy.array(ys), c=colour)
def imshow(im, contrast=0.05):
"""Helper function for showing an image."""
im = im - im.min() + contrast
return matplotlib.pyplot.imshow(im,
origin='lower',
norm=matplotlib.colors.LogNorm(
vmin=im.min(),
vmax=im.max(),
),
)
def show_subject(subject):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
with open_fits(subject, 'cdfs', 'radio') as fits_file:
radio = fits_file[0].data
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 2, 1)
matplotlib.pyplot.title(subject['zooniverse_id'] + ' IR')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(ir)
plot_contours(subject)
matplotlib.pyplot.subplot(1, 2, 2)
matplotlib.pyplot.title(subject['zooniverse_id'] + ' Radio')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(radio)
plot_contours(subject)
show_subject(subjects[10])
"""
Explanation: That's a lot less than ideal (and less than expected) but we can fix this later. Let's have a look at some.
End of explanation
"""
def potential_hosts(subject, sigma=0.5, threshold=0):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
neighborhood = numpy.ones((10, 10))
blurred_ir = scipy.ndimage.filters.gaussian_filter(ir, sigma) > threshold
local_max = scipy.ndimage.filters.maximum_filter(blurred_ir, footprint=neighborhood) == blurred_ir
region_labels, n_labels = scipy.ndimage.measurements.label(local_max)
maxima = numpy.array(
[numpy.array((region_labels == i + 1).nonzero()).T.mean(axis=0)
for i in range(n_labels)]
)
maxima = maxima[numpy.logical_and(maxima[:, 1] != 0, maxima[:, 1] != 499)]
return maxima
with open_fits(subjects[10], 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 2, 1)
matplotlib.pyplot.title(subjects[10]['zooniverse_id'] + ' IR')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(ir)
maxima = potential_hosts(subjects[10], sigma=1, threshold=0.05)
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0])
matplotlib.pyplot.show()
"""
Explanation: Potential hosts
Since we're representing this as a binary classification problem, let's get all the potential hosts in an image using the method from the potential_host_counting notebook. This is not ideal — it includes far too many hosts — but it'll do for now.
End of explanation
"""
def crowdsourced_label(subject):
answers = crowdastro.rgz_analysis.consensus.consensus(subject['zooniverse_id'])['answer']
answer = [answer for answer in answers.values() if answer['ind'] == 0][0]
if 'ir' in answer:
return answer['ir']
if 'ir_peak' in answer:
return answer['ir_peak']
return None
with open_fits(subjects[10], 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
matplotlib.pyplot.figure(figsize=(15, 15))
matplotlib.pyplot.subplot(1, 2, 1)
matplotlib.pyplot.title(subjects[10]['zooniverse_id'] + ' IR')
matplotlib.pyplot.xlim((0, FITS_IMAGE_WIDTH))
matplotlib.pyplot.ylim((0, FITS_IMAGE_HEIGHT))
imshow(ir)
maxima = potential_hosts(subjects[10], sigma=1, threshold=0.05)
matplotlib.pyplot.scatter(maxima[:, 1], maxima[:, 0])
label = crowdsourced_label(subjects[10])
# Clicks are upside-down, whereas the image and peaks found from it are not.
matplotlib.pyplot.scatter([CLICK_TO_FITS_X * label[0]], [FITS_IMAGE_HEIGHT - CLICK_TO_FITS_Y * label[1]], c='r')
matplotlib.pyplot.show()
"""
Explanation: This is not a fantastic result, but it will do for now. Julie said that the rgz-analysis code found peaks through Gaussian fitting. I can't find the code for that, but I can use the idea later to get better potential hosts.
Crowdsourced labels
We also need to retrieve the labels for each subject. I'll use the rgz_analysis.consensus code for that.
End of explanation
"""
def get_training_pairs(subject):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
with open_fits(subject, 'cdfs', 'radio') as fits_file:
radio = fits_file[0].data
radius = 20
ir = numpy.pad(ir, radius, mode='linear_ramp')
radio = numpy.pad(radio, radius, mode='linear_ramp')
hosts = potential_hosts(subject, sigma=1, threshold=0.05)
actual_host = crowdsourced_label(subject)
if actual_host is None:
return []
actual_host = numpy.array(actual_host) * CLICK_TO_FITS
nearest_host = min(hosts, key=lambda host: numpy.hypot(actual_host[0] - host[1], actual_host[1] - host[0]))
pairs = []
for host in hosts:
host_y, host_x = host
ir_neighbourhood = ir[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
radio_neighbourhood = radio[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
input_vec = numpy.ndarray.flatten(radio_neighbourhood)
label = (nearest_host == host).all()
pairs.append((input_vec, label))
return pairs
training_data = [pair for subject in subjects for pair in get_training_pairs(subject)]
print('Number of training samples:', len(training_data))
"""
Explanation: That seems a reasonable answer.
Assembling the data
We now have
- IR images
- Radio contours
- Radio images
- A single point to classify
- A way to label the points
That's effectively all we need. I want to throw all of this into logistic regression. What I'll do is get a neighbourhood of pixels around the potential host, do the same for the radio image, and naïvely throw it all into scikit-learn. This will almost certainly be ineffective, but it's a start.
Edit, 27/03/2016: According to the results of mean_images, the IR image doesn't really matter. We can quite possibly just ignore it for now, and I do this below.
End of explanation
"""
xs = [x for x, _ in training_data]
ys = [int(y) for _, y in training_data]
xs_train, xs_test, ys_train, ys_test = sklearn.cross_validation.train_test_split(xs, ys, test_size=0.2, random_state=0)
lr = sklearn.linear_model.LogisticRegression(C=1e5, class_weight='auto') # Note - auto deprecated from 0.17.
lr.fit(xs_train, ys_train)
n_true_positive = numpy.logical_and(lr.predict(xs_test) == numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
n_true_negative = numpy.logical_and(lr.predict(xs_test) == numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_positive = numpy.logical_and(lr.predict(xs_test) != numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_negative = numpy.logical_and(lr.predict(xs_test) != numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
print('True positives:', n_true_positive)
print('True negatives:', n_true_negative)
print('False positives:', n_false_positive)
print('False negatives:', n_false_negative)
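# The raw counts above are dominated by the negative class, so it can help to also look at
# precision/recall computed from the same counts. This is just a quick sketch of the
# arithmetic; the max(..., 1) guards against division by zero.
precision = n_true_positive / float(max(n_true_positive + n_false_positive, 1))
recall = n_true_positive / float(max(n_true_positive + n_false_negative, 1))
print('Precision:', precision)
print('Recall:', recall)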
"""
Explanation: Training
Here, I throw the data into logistic regression and see what happens.
End of explanation
"""
import keras.layers
import keras.models
model = keras.models.Sequential()
radius = 20
input_shape = (1, radius * 2, radius * 2)
n_conv_filters = 10
conv_width = 4
hidden_dim = 256
model = keras.models.Sequential()
model.add(keras.layers.Convolution2D(n_conv_filters, conv_width, conv_width, border_mode='valid', input_shape=input_shape))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(hidden_dim))
model.add(keras.layers.Activation('sigmoid'))
model.add(keras.layers.Dense(1))
model.add(keras.layers.Activation('sigmoid'))
model.compile(optimizer='sgd', loss='mse')
def get_training_pairs_im(subject):
with open_fits(subject, 'cdfs', 'ir') as fits_file:
ir = fits_file[0].data
with open_fits(subject, 'cdfs', 'radio') as fits_file:
radio = fits_file[0].data
radius = 20
ir = numpy.pad(ir, radius, mode='linear_ramp')
radio = numpy.pad(radio, radius, mode='linear_ramp')
hosts = potential_hosts(subject, sigma=1, threshold=0.05)
actual_host = crowdsourced_label(subject)
if actual_host is None:
return []
actual_host = numpy.array(actual_host) * CLICK_TO_FITS
nearest_host = min(hosts, key=lambda host: numpy.hypot(actual_host[0] - host[1], actual_host[1] - host[0]))
pairs = []
for host in hosts:
host_y, host_x = host
ir_neighbourhood = ir[host_x : host_x + 2 * radius, host_y : host_y + 2 * radius]
radio_neighbourhood = radio[int(host_x) : int(host_x) + 2 * radius, int(host_y) : int(host_y) + 2 * radius]
input_vec = radio_neighbourhood
label = (nearest_host == host).all()
pairs.append((input_vec, label))
return pairs
training_data_im = [pair for subject in subjects for pair in get_training_pairs_im(subject)]
xs = [x.reshape((1, radius * 2, radius * 2)) for x, _ in training_data_im]
ys = [[int(y)] for _, y in training_data_im]
xs_train, xs_test, ys_train, ys_test = sklearn.cross_validation.train_test_split(xs, ys, test_size=0.2, random_state=0)
xs_train = numpy.array(xs_train)
ys_train = numpy.array(ys_train)
xs_test = numpy.array(xs_test)
ys_test = numpy.array(ys_test)
tp = []
tn = []
fp = []
fn = []
correct_pos = []
correct_neg = []
total_epochs = 0
import IPython.display
for i in range(10):
print('Epoch', i + 1)
model.fit(xs_train, ys_train.reshape((-1, 1)), nb_epoch=1, batch_size=1)
for i, kernel in enumerate(model.get_weights()[0]):
kernel = kernel[0]
matplotlib.pyplot.subplot(10, 10, i + 1)
matplotlib.pyplot.axis('off')
matplotlib.pyplot.imshow(kernel, cmap='gray')
matplotlib.pyplot.subplots_adjust(hspace=0, wspace=0)
n_true_positive = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
n_true_negative = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_positive = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_negative = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
tp.append(n_true_positive)
tn.append(n_true_negative)
fp.append(n_false_positive)
fn.append(n_false_negative)
correct_pos.append(n_true_positive / (n_true_positive + n_false_negative))
correct_neg.append(n_true_negative / (n_true_negative + n_false_positive))
total_epochs += 1
IPython.display.clear_output(wait=True)
print('Convolutional filters:')
matplotlib.pyplot.show()
# IPython.display.display(matplotlib.pyplot.gcf())
print('Model over time:')
epoch_range = numpy.arange(total_epochs) + 1
matplotlib.pyplot.plot(epoch_range, correct_pos)
matplotlib.pyplot.plot(epoch_range, correct_neg)
matplotlib.pyplot.xlabel('Epochs')
matplotlib.pyplot.ylabel('% Correct')
matplotlib.pyplot.legend(['Positive', 'Negative'])
matplotlib.pyplot.show()
# IPython.display.display(matplotlib.pyplot.gcf())
n_true_positive = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
n_true_negative = numpy.logical_and(model.predict(xs_test).round() == numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_positive = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 0).sum()
n_false_negative = numpy.logical_and(model.predict(xs_test).round() != numpy.array(ys_test), numpy.array(ys_test) == 1).sum()
print('True positives:', n_true_positive)
print('True negatives:', n_true_negative)
print('False positives:', n_false_positive)
print('False negatives:', n_false_negative)
# TODO: Class weights. Can we fake some data by adding Gaussian noise?
# TODO: IID. The data are not independent - can we use this?
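# A minimal sketch of the class weighting mentioned in the TODOs above: weights inversely
# proportional to class frequency. How they get used is up to you (e.g. passed as a
# per-sample weight to fit if the installed Keras version supports sample_weight, or used
# to oversample the positive class). The names class_weights/sample_weights are ours.
n_pos = sum(y[0] for y in ys)
n_neg = len(ys) - n_pos
class_weights = {0: len(ys) / (2.0 * n_neg), 1: len(ys) / (2.0 * n_pos)}
sample_weights = numpy.array([class_weights[int(y)] for y in ys_train.ravel()])
print('Suggested class weights:', class_weights)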
"""
Explanation: Originally, the logistic regression had essentially learned to output False, which makes sense — the examples are overwhelmingly False, so you can get to a very easy minimum by always outputting False. I said that some ways to get around this might be to inflate the number of True examples, or to change the output encoding in some way. Cheng suggested just weighting logistic regression's cost function to balance the Trues and Falses — there's an argument for this. The result is that there are far more attempts to assign True.
ConvNet
Let's try a nonlinear model that learns some features.
This doesn't correctly weight the classes, since Keras doesn't support class weights and I haven't manually weighted yet, but it does learn features.
End of explanation
"""
|
quantopian/research_public | notebooks/lectures/Arbitrage_Pricing_Theory/notebook.ipynb | apache-2.0 | import numpy as np
import pandas as pd
from statsmodels import regression
import matplotlib.pyplot as plt
"""
Explanation: Arbitrage Pricing Theory
By Evgenia "Jenny" Nitishinskaya, Delaney Granizo-Mackenzie, and Maxwell Margenot.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Arbitrage pricing theory is a major asset pricing theory that relies on expressing the returns using a linear factor model:
$$R_i = a_i + b_{i1} F_1 + b_{i2} F_2 + \ldots + b_{iK} F_K + \epsilon_i$$
This theory states that if we have modelled our rate of return as above, then the expected returns obey
$$ E(R_i) = R_F + b_{i1} \lambda_1 + b_{i2} \lambda_2 + \ldots + b_{iK} \lambda_K $$
where $R_F$ is the risk-free rate, and $\lambda_j$ is the risk premium - the return in excess of the risk-free rate - for factor $j$. This premium arises because investors require higher returns to compensate them for incurring risk. This generalizes the capital asset pricing model (CAPM), which uses the return on the market as its only factor.
We can compute $\lambda_j$ by constructing a portfolio that has a sensitivity of 1 to factor $j$ and 0 to all others (called a <i>pure factor portfolio</i> for factor $j$), and measure its return in excess of the risk-free rate. Alternatively, we could compute the factor sensitivities for $K$ well-diversified (no asset-specific risk, i.e. $\epsilon_p = 0$) portfolios, and then solve the resulting system of linear equations.
Arbitrage
There are generally many, many securities in our universe. If we use different ones to compute the $\lambda$s, will our results be consistent? If our results are inconsistent, there is an <i>arbitrage opportunity</i> (in expectation). Arbitrage is an operation that earns a profit without incurring risk and with no net investment of money, and an arbitrage opportunity is an opportunity to conduct such an operation. In this case, we mean that there is a risk-free operation with <i>expected</i> positive return that requires no net investment. It occurs when expectations of returns are inconsistent, i.e. risk is not priced consistently across securities.
For instance, there is an arbitrage opportunity in the following case: say there is an asset with expected rate of return 0.2 for the next year and a $\beta$ of 1.2 with the market, while the market is expected to have a rate of return of 0.1, and the risk-free rate on 1-year bonds is 0.05. Then the APT model tells us that the expected rate of return on the asset should be
$$ R_F + \beta \lambda = 0.05 + 1.2 (0.1 - 0.05) = 0.11$$
This does not agree with the prediction that the asset will have a rate of return of 0.2. So, if we buy \$100 of our asset, short \$120 of the market, and buy \$20 of bonds, we will have invested no net money and are not exposed to any systematic risk (we are market-neutral), but we expect to earn $0.2 \cdot 100 - 0.1 \cdot 120 + 20 \cdot 0.05 = 9$ dollars at the end of the year.
The APT assumes that these opportunities will be taken advantage of until prices shift and the arbitrage opportunities disappear. That is, it assumes that there are arbitrageurs who have sufficient amounts of patience and capital. This provides a justification for the use of empirical factor models in pricing securities: if the model were inconsistent, there would be an arbitrage opportunity, and so the prices would adjust.
Goes Both Ways
Often knowing $E(R_i)$ is incredibly difficult, but notice that this model tells us what the expected returns should be if the market is fully arbitraged. This lays the groundwork for long-short equity strategies based on factor model ranking systems. If you know what the expected return of an asset is given that the market is arbitraged, and you hypothesize that the market will be mostly arbitraged over the timeframe on which you are trading, then you can construct a ranking.
Long-Short Equity
To do this, estimate the expected return for each asset on the market, then rank them. Long the top percentile and short the bottom percentile, and you will make money on the difference in returns. Said another way, if the assets at the top of the ranking on average tend to make $5\%$ more per year than the market, and assets at the bottom tend to make $5\%$ less, then you will make $(M + 0.05) - (M - 0.05) = 0.10$ or $10\%$ percent per year, where $M$ is the market return that gets canceled out.
Long-short equity accepts that any individual asset is very difficult to model and instead relies on broad trends holding true. We can't accurately predict expected returns for a single asset, but we can predict the expected returns for a group of 1000 assets, as the errors average out.
We will have a full lecture on long-short models later.
How many factors do you want?
As discussed in other lectures, notably the one on Overfitting, having more factors will explain more and more of your returns, but at the cost of being more and more fit to noise in your data. To discover true signals and make good predictions going forward, you want to select as few parameters as possible that still explain a large amount of the variance in returns.
Example: Computing Expected Returns for Two Assets
End of explanation
"""
start_date = '2014-06-30'
end_date = '2015-06-30'
# We will look at the returns of an asset one-month into the future to model future returns.
offset_start_date = '2014-07-31'
offset_end_date = '2015-07-31'
# Get returns data for our assets
asset1 = get_pricing('HSC', fields='price', start_date=offset_start_date, end_date=offset_end_date).pct_change()[1:]
asset2 = get_pricing('MSFT', fields='price', start_date=offset_start_date, end_date=offset_end_date).pct_change()[1:]
# Get returns for the market
bench = get_pricing('SPY', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# Use an ETF that tracks 3-month T-bills as our risk-free rate of return
treasury_ret = get_pricing('BIL', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# Define a constant to compute intercept
constant = pd.TimeSeries(np.ones(len(asset1.index)), index=asset1.index)
df = pd.DataFrame({'R1': asset1,
'R2': asset2,
'SPY': bench,
'RF': treasury_ret,
'Constant': constant})
df = df.dropna()
"""
Explanation: Let's get some data.
End of explanation
"""
OLS_model = regression.linear_model.OLS(df['R1'], df[['SPY', 'RF', 'Constant']])
fitted_model = OLS_model.fit()
print 'p-value', fitted_model.f_pvalue
print fitted_model.params
R1_params = fitted_model.params
OLS_model = regression.linear_model.OLS(df['R2'], df[['SPY', 'RF', 'Constant']])
fitted_model = OLS_model.fit()
print 'p-value', fitted_model.f_pvalue
print fitted_model.params
R2_params = fitted_model.params
"""
Explanation: We'll start by computing static regressions over the whole time period.
End of explanation
"""
model = pd.stats.ols.MovingOLS(y = df['R1'], x=df[['SPY', 'RF']],
window_type='rolling',
window=100)
rolling_parameter_estimates = model.beta
rolling_parameter_estimates.plot();
plt.hlines(R1_params['SPY'], df.index[0], df.index[-1], linestyles='dashed', colors='blue')
plt.hlines(R1_params['RF'], df.index[0], df.index[-1], linestyles='dashed', colors='green')
plt.hlines(R1_params['Constant'], df.index[0], df.index[-1], linestyles='dashed', colors='red')
plt.title('Asset1 Computed Betas');
plt.legend(['Market Beta', 'Risk Free Beta', 'Intercept', 'Market Beta Static', 'Risk Free Beta Static', 'Intercept Static']);
model = pd.stats.ols.MovingOLS(y = df['R2'], x=df[['SPY', 'RF']],
window_type='rolling',
window=100)
rolling_parameter_estimates = model.beta
rolling_parameter_estimates.plot();
plt.hlines(R2_params['SPY'], df.index[0], df.index[-1], linestyles='dashed', colors='blue')
plt.hlines(R2_params['RF'], df.index[0], df.index[-1], linestyles='dashed', colors='green')
plt.hlines(R2_params['Constant'], df.index[0], df.index[-1], linestyles='dashed', colors='red')
plt.title('Asset2 Computed Betas');
plt.legend(['Market Beta', 'Risk Free Beta', 'Intercept', 'Market Beta Static', 'Risk Free Beta Static', 'Intercept Static']);
"""
Explanation: As we've said before in other lectures, these numbers don't tell us too much by themselves. We need to look at the distribution of estimated coefficients and whether it's stable. Let's look at the rolling 100-day regression to see how it looks.
End of explanation
"""
model = pd.stats.ols.MovingOLS(y = df['R2'], x=df[['SPY', 'RF']],
window_type='rolling',
window=100)
rolling_parameter_estimates = model.beta
rolling_parameter_estimates['SPY'].plot();
plt.hlines(R2_params['SPY'], df.index[0], df.index[-1], linestyles='dashed', colors='blue')
plt.title('Asset2 Computed Betas');
plt.legend(['Market Beta', 'Market Beta Static']);
"""
Explanation: It might seem like the market betas are stable here, but let's zoom in to check.
End of explanation
"""
start_date = '2014-07-25'
end_date = '2015-07-25'
# We will look at the returns of an asset one-month into the future to model future returns.
offset_start_date = '2014-08-25'
offset_end_date = '2015-08-25'
# Get returns data for our assets
asset1 = get_pricing('HSC', fields='price', start_date=offset_start_date, end_date=offset_end_date).pct_change()[1:]
# Get returns for the market
bench = get_pricing('SPY', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# Use an ETF that tracks 3-month T-bills as our risk-free rate of return
treasury_ret = get_pricing('BIL', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# Define a constant to compute intercept
constant = pd.TimeSeries(np.ones(len(asset1.index)), index=asset1.index)
df = pd.DataFrame({'R1': asset1,
'SPY': bench,
'RF': treasury_ret,
'Constant': constant})
df = df.dropna()
"""
Explanation: As you can see, the plot scale massively affects how we perceive estimate quality.
Predicting the Future
Let's use this model to predict future prices for these assets.
End of explanation
"""
OLS_model = regression.linear_model.OLS(df['R1'], df[['SPY', 'RF', 'Constant']])
fitted_model = OLS_model.fit()
print 'p-value', fitted_model.f_pvalue
print fitted_model.params
b_SPY = fitted_model.params['SPY']
b_RF = fitted_model.params['RF']
a = fitted_model.params['Constant']
"""
Explanation: We'll perform a historical regression to get our model parameter estimates.
End of explanation
"""
start_date = '2015-07-25'
end_date = '2015-08-25'
# Get returns for the market
last_month_bench = get_pricing('SPY', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# Use an ETF that tracks 3-month T-bills as our risk-free rate of return
last_month_treasury_ret = get_pricing('BIL', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
"""
Explanation: Get the factor data for the last month so we can predict the next month.
End of explanation
"""
predictions = b_SPY * last_month_bench + b_RF * last_month_treasury_ret + a
predictions.index = predictions.index + pd.DateOffset(months=1)
plt.plot(asset1.index[-30:], asset1.values[-30:], 'b-')
plt.plot(predictions.index, predictions, 'b--')
plt.ylabel('Returns')
plt.legend(['Actual', 'Predicted']);
"""
Explanation: Make our predictions.
End of explanation
"""
|
GSimas/EEL7045 | Aula 9.2 - Indutores.ipynb | mit | print("Exemplo 6.8")
import numpy as np
from sympy import *
L = 0.1
t = symbols('t')
i = 10*t*exp(-5*t)
v = L*diff(i,t)
w = (L*i**2)/2
print("Tensão no indutor:",v,"V")
print("Energia:",w,"J")
"""
Explanation: Inductors
Jupyter Notebook developed by Gustavo S.S.
An inductor consists of a coil of conducting wire.
Any conductor of electric current has inductive properties and may be regarded
as an inductor. But in order to enhance the inductive effect, a practical inductor
is usually formed into a cylindrical coil with many turns of conducting wire,
as illustrated in Figure 6.21.
If a current passes through an inductor, the voltage across it is found to be
directly proportional to the rate of change of the current:
\begin{align}
{\Large v = L \frac{di}{dt}}
\end{align}
where L is the constant of proportionality, called the inductance of the inductor.
Inductance is the property whereby an inductor opposes the change in current
flowing through it, and it is measured in henrys (H).
The inductance of an inductor depends on its physical dimensions and
construction:
\begin{align}
{\Large L = \frac{N^2 \mu A}{l}}
\end{align}
where N is the number of turns, l is the length, A is the cross-sectional area,
and \mu is the magnetic permeability of the core.
Current-voltage relationship:
\begin{align}
{\Large i = \frac{1}{L} \int_{t_0}^{t} v(\tau)d\tau + i(t_0)}
\end{align}
Power delivered to the inductor:
\begin{align}
{\Large p = vi = \left(L \frac{di}{dt}\right)i}
\end{align}
Stored energy:
\begin{align}
{\Large w = \int_{-\infty}^{t} p(\tau)d\tau = L \int_{-\infty}^{t} \frac{di}{d\tau} i \, d\tau = L \int_{i(-\infty)}^{i(t)} i \, di}
\end{align}
\begin{align}
{\Large w = \frac{1}{2} Li^2}
\end{align}
An inductor acts like a short circuit to DC.
The current through an inductor cannot change instantaneously.
Like the ideal capacitor, the ideal inductor does not dissipate energy; the energy stored in it can be recovered later. The inductor absorbs power from the circuit when it is storing energy and delivers power to the circuit when it returns the previously stored energy.
A real, nonideal inductor has a significant resistive component, as shown in Figure 6.26. This is because the inductor is made of a conducting material such as copper, which has some resistance, called the winding resistance Rw, which appears in series with the inductance of the inductor. The presence of Rw makes it both an energy-storage device and an energy-dissipation device. Since Rw is usually very small, it is ignored in most cases. The nonideal inductor also has a winding capacitance Cw, due to the capacitive coupling between the conducting coils. Cw is very small and can be ignored in most cases, except at high frequencies.
Example 6.8
The current through a 0.1 H inductor is i(t) = 10te^{-5t} A. Find the voltage across the
inductor and the energy stored in it.
End of explanation
"""
print("Problema Prático 6.8")
m = 10**-3  # definition of the milli prefix
L = 1*m
i = 60*cos(100*t)*m
v = L*diff(i,t)
w = (L*i**2)/2
print("Tensão:",v,"V")
print("Energia:",w,"J")
"""
Explanation: Practice Problem 6.8
If the current through a 1 mH inductor is i(t) = 60 cos(100t) mA, find the terminal voltage and the energy stored.
End of explanation
"""
print("Exemplo 6.9")
L = 5
v = 30*t**2
i = integrate(v,t)/L
print("Corrente:",i,"A")
w = L*(i.subs(t,5)**2)/2
print("Energia:",w,"J")
"""
Explanation: Example 6.9
Find the current through a 5 H inductor if the voltage across it is
v(t):
30t^2, t>0
0, t<0
Also, find the energy stored at t = 5 s. Assume i(0) = 0.
End of explanation
"""
print("Problema Prático 6.9")
L = 2
v = 10*(1 - t)
i0 = 2
i = integrate(v,t)/L + i0
i4 = i.subs(t,4)
print("Corrente no instante t = 4s:",i4,"A")
p = v*i
w = integrate(p,(t,0,4))
print("Energia no instante t = 4s:",w,"J")
"""
Explanation: Practice Problem 6.9
The terminal voltage of a 2 H inductor is v = 10(1 – t) V. Find the current
flowing through it at t = 4 s and the energy stored in it at t = 4 s.
Assume i(0) = 2 A.
End of explanation
"""
print("Exemplo 6.10")
Req = 1 + 5
Vf = 12
C = 1
L = 2
i = Vf/Req
print("Corrente i:",i,"A")
# vc = voltage across the capacitor = voltage across the 5-ohm resistor
vc = 5*i
print("Tensão Vc:",vc,"V")
print("Corrente il:",i,"A")
wl = (L*i**2)/2
wc = (C*vc**2)/2
print("Energia no Indutor:",wl,"J")
print("Energia no Capacitor:",wc,"J")
"""
Explanation: Example 6.10
Consider the circuit in Figure 6.27a. Under DC conditions, find:
(a) i, vC, and iL;
(b) the energy stored in the capacitor and in the inductor.
End of explanation
"""
print("Problema Prático 6.10")
Cf = 10
C = 4
L = 6
il = 10*6/(6 + 2)  # current divider
vc = 2*il
wl = (L*il**2)/2
wc = (C*vc**2)/2
print("Corrente il:",il,"A")
print("Tensão vC:",vc,"V")
print("Energia no Capacitor:",wc,"J")
print("Energia no Indutor:",wl,"J")
"""
Explanation: Practice Problem 6.10
Determine vC, iL, and the energy stored in the capacitor and in the inductor in the circuit
of Figure 6.28 under DC conditions.
End of explanation
"""
print("Exemplo 6.11")
Leq1 = 20 + 12 + 10
Leq2 = Leq1*7/(Leq1 + 7)
Leq3 = 4 + Leq2 + 8
print("Indutância Equivalente:",Leq3,"H")
"""
Explanation: Inductors in Series and Parallel
The equivalent inductance of series-connected inductors is the sum of the
individual inductances:
\begin{align}
L_{eq} = L_1 + L_2 + ... + L_N = \sum_{i = 1}^{N}L_i
\end{align}
The equivalent inductance of parallel inductors is the reciprocal of the sum of the
reciprocals of the individual inductances:
\begin{align}
\frac{1}{L_{eq}} = \frac{1}{L_1} + \frac{1}{L_2} + ... + \frac{1}{L_N}
\quad \Rightarrow \quad
L_{eq} = \left(\sum_{i = 1}^{N} \frac{1}{L_i}\right)^{-1}
\end{align}
Or, for two inductances:
\begin{align}
L_{eq} = \frac{L_1 L_2}{L_1 + L_2}
\end{align}
Example 6.11
Find the equivalent inductance of the circuit shown in Figure 6.31.
End of explanation
"""
print("Problema Prático 6.11")
def Leq(x,y):  # helper function: equivalent inductance of two inductors in parallel
L = x*y/(x + y)
return L
Leq1 = 40*m + 20*m
Leq2 = Leq(30*m,Leq1)
Leq3 = Leq2 + 100*m
Leq4 = Leq(40*m,Leq3)
Leq5 = 20*m + Leq4
Leq6 = Leq(Leq5,50*m)
print("Indutância Equivalente:",Leq6,"H")
"""
Explanation: Practice Problem 6.11
Calculate the equivalent inductance of the inductive ladder network in Figure 6.32.
End of explanation
"""
print("Exemplo 6.12")
i = 4*(2 - exp(-10*t))*m
i2_0 = -1*m
i1_0 = i.subs(t,0) - i2_0
print("Corrente i1(0):",i1_0,"A")
Leq1 = Leq(4,12)
Leq2 = Leq1 + 2
v = Leq2*diff(i,t)
v1 = 2*diff(i,t)
v2 = v - v1
print("Tensão v(t):",v,"V")
print("Tensão v1(t):",v1,"V")
print("Tensão v2(t):",v2,"V")
i1 = integrate(v1,(t,0,t))/4 + i1_0
i2 = integrate(v2,(t,0,t))/12 + i2_0
print("Corrente i1(t):",i1,"A")
print("Corrente i2(t):",i2,"A")
"""
Explanation: Example 6.12
For the circuit in Figure 6.33,
i(t) = 4(2 – e^{–10t}) mA.
If i2(0) = –1 mA, find:
(a) i1(0);
(b) v(t), v1(t), and v2(t);
(c) i1(t) and i2(t).
End of explanation
"""
print("Problema Prático 6.12")
i1 = 0.6*exp(-2*t)
i_0 = 1.4
i2_0 = i_0 - i1.subs(t,0)
print("Corrente i2(0):",i2_0,"A")
v1 = 6*diff(i1,t)
i2 = integrate(v1,(t,0,t))/3 + i2_0
i = i1 + i2
print("Corrente i2(t):",i2,"A")
print("Corrente i(t):",i,"A")
Leq1 = Leq(3,6)
Leq2 = Leq1 + 8
v = Leq2*diff(i)
v2 = v - v1
print("Tensão v1(t):",v1,"V")
print("Tensão v2(t):",v2,"V")
print("Tensão v(t):",v,"V")
"""
Explanation: Practice Problem 6.12
In the circuit of Figure 6.34,
i1(t) = 0.6e^{–2t} A.
If i(0) = 1.4 A, find:
(a) i2(0);
(b) i2(t) and i(t);
(c) v1(t), v2(t), and v(t).
End of explanation
"""
|
padipadou/CADL | session-1/lecture-1.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
"""
Explanation: Session 1: Introduction to Tensorflow
<p class='lead'>
Creative Applications of Deep Learning with Tensorflow<br />
Parag K. Mital<br />
Kadenze, Inc.<br />
</p>
<a name="learning-goals"></a>
Learning Goals
Learn the basic idea behind machine learning: learning from data and discovering representations
Learn how to preprocess a dataset using its mean and standard deviation
Learn the basic components of a Tensorflow Graph
Table of Contents
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Promo
Session Overview
Learning From Data
Deep Learning vs. Machine Learning
Invariances
Scope of Learning
Existing datasets
Preprocessing Data
Understanding Image Shapes
The Batch Dimension
Mean/Deviation of Images
Dataset Preprocessing
Histograms
Histogram Equalization
Tensorflow Basics
Variables
Tensors
Graphs
Operations
Tensor
Sessions
Tensor Shapes
Many Operations
Convolution
Creating a 2-D Gaussian Kernel
Convolving an Image with a Gaussian
Convolve/Filter an image using a Gaussian Kernel
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
Manipulating an image with this Gabor
Homework
Next Session
Reading Material
<!-- /MarkdownTOC -->
<a name="introduction"></a>
Introduction
This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.
<a name="promo"></a>
Promo
Deep learning has emerged at the forefront of nearly every major computational breakthrough in the last 4 years. It is no wonder that it is already in many of the products we use today, from netflix or amazon's personalized recommendations; to the filters that block our spam; to ways that we interact with personal assistants like Apple's Siri or Microsoft Cortana, even to the very ways our personal health is monitored. And sure deep learning algorithms are capable of some amazing things. But it's not just science applications that are benefiting from this research.
Artists too are starting to explore how Deep Learning can be used in their own practice. Photographers are starting to explore different ways of exploring visual media. Generative artists are writing algorithms to create entirely new aesthetics. Filmmakers are exploring virtual worlds ripe with potential for procedural content.
In this course, we're going straight to the state of the art. And we're going to learn it all. We'll see how to make an algorithm paint an image, or hallucinate objects in a photograph. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets to using them to self organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of other images. We'll even see how to teach a computer to read and synthesize new phrases.
But we won't just be using other peoples code to do all of this. We're going to develop everything ourselves using Tensorflow and I'm going to show you how to do it. This course isn't just for artists nor is it just for programmers. It's for people that want to learn more about how to apply deep learning with a hands on approach, straight into the python console, and learn what it all means through creative thinking and interaction.
I'm Parag Mital, artist, researcher and Director of Machine Intelligence at Kadenze. For the last 10 years, I've been exploring creative uses of computational models making use of machine and deep learning, film datasets, eye-tracking, EEG, and fMRI recordings exploring applications such as generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora.
But this course isn't just about me. It's about bringing all of you together. It's about bringing together different backgrounds, different practices, and sticking all of you in the same virtual room, giving you access to state of the art methods in deep learning, some really amazing stuff, and then letting you go wild on the Kadenze platform. We've been working very hard to build a platform for learning that rivals anything else out there for learning this stuff.
You'll be able to share your content, upload videos, comment and exchange code and ideas, all led by the course I've developed for us. But before we get there, we're going to have to cover a lot of groundwork: the basics that we'll use to develop state-of-the-art algorithms in deep learning. And that's really so we can better interrogate what's possible, ask the bigger questions, and be able to explore just where all this is heading in more depth. With all of that in mind, let's get started.
Join me as we learn all about Creative Applications of Deep Learning with Tensorflow.
<a name="session-overview"></a>
Session Overview
We're first going to talk about Deep Learning, what it is, and how it relates to other branches of learning. We'll then talk about the major components of Deep Learning, the importance of datasets, and the nature of representation, which is at the heart of deep learning.
If you've never used Python before, we'll be jumping straight into using libraries like numpy, matplotlib, and scipy. Before starting this session, please check the resources section for a notebook introducing some fundamentals of python programming. When you feel comfortable with loading images from a directory, resizing, cropping, how to change an image datatype from unsigned int to float32, and what the range of each data type should be, then come back here and pick up where you left off. We'll then get our hands dirty with Tensorflow, Google's library for machine intelligence. We'll learn the basic components of creating a computational graph with Tensorflow, including how to convolve an image to detect interesting features at different scales. This groundwork will finally lead us towards automatically learning our handcrafted features/algorithms.
<a name="learning-from-data"></a>
Learning From Data
<a name="deep-learning-vs-machine-learning"></a>
Deep Learning vs. Machine Learning
So what is this word I keep using, Deep Learning. And how is it different to Machine Learning? Well Deep Learning is a type of Machine Learning algorithm that uses Neural Networks to learn. The type of learning is "Deep" because it is composed of many layers of Neural Networks. In this course we're really going to focus on supervised and unsupervised Deep Learning. But there are many other incredibly valuable branches of Machine Learning such as Reinforcement Learning, Dictionary Learning, Probabilistic Graphical Models and Bayesian Methods (Bishop), or Genetic and Evolutionary Algorithms. And any of these branches could certainly even be combined with each other or with Deep Networks as well. We won't really be able to get into these other branches of learning in this course. Instead, we'll focus more on building "networks", short for neural networks, and how they can do some really amazing things. Before we can get into all that, we're going to need to understand a bit more about data and its importance in deep learning.
<a name="invariances"></a>
Invariances
Deep Learning requires data. A lot of it. It's really one of the major reasons as to why Deep Learning has been so successful. Having many examples of the thing we are trying to learn is the first thing you'll need before even thinking about Deep Learning. Often, it is the biggest blocker to learning about something in the world. Even as a child, we need a lot of experience with something before we begin to understand it. I find I spend most of my time just finding the right data for a network to learn. Getting it from various sources, making sure it all looks right and is labeled. That is a lot of work. The rest of it is easy as we'll see by the end of this course.
Let's say we would like build a network that is capable of looking at an image and saying what object is in the image. There are so many possible ways that an object could be manifested in an image. It's rare to ever see just a single object in isolation. In order to teach a computer about an object, we would have to be able to give it an image of an object in every possible way that it could exist.
We generally call these ways of existing "invariances". That just means we are trying not to vary based on some factor. We are invariant to it. For instance, an object could appear to one side of an image, or another. We call that translation invariance. Or it could be from one angle or another. That's called rotation invariance. Or it could be closer to the camera, or farther, and that would be scale invariance. There are plenty of other types of invariances, such as perspective, brightness, or exposure, to give a few more examples for photographic images.
<a name="scope-of-learning"></a>
Scope of Learning
With Deep Learning, you will always need a dataset that will teach the algorithm about the world. But you aren't really teaching it everything. You are only teaching it what is in your dataset! That is a very important distinction. If I show my algorithm only faces of people which are always placed in the center of an image, it will not be able to understand anything about faces that are not in the center of the image! Well at least that's mostly true.
That's not to say that a network is incapable of transferring what it has learned to learn new concepts more easily, or of learning things that might be necessary for it to learn other representations. For instance, a network that has been trained to learn about birds probably knows a good bit about trees, branches, and other bird-like hangouts, depending on the dataset. But, in general, we are limited to learning what our dataset has access to.
So if you're thinking about creating a dataset, you're going to have to think about what it is that you want to teach your network. What sort of images will it see? What representations do you think your network could learn given the data you've shown it?
One of the major contributions to the success of Deep Learning algorithms is the amount of data out there. Datasets have grown from orders of hundreds to thousands to many millions. The more data you have, the more capable your network will be at determining whatever its objective is.
<a name="existing-datasets"></a>
Existing datasets
With that in mind, let's try to find a dataset that we can work with. There are a ton of datasets out there that current machine learning researchers use. For instance, if I do a quick Google search for Deep Learning Datasets, I can see a link on deeplearning.net listing a few interesting ones, e.g. http://deeplearning.net/datasets/, including MNIST, CalTech, CelebNet, LFW, CIFAR, MS Coco, Illustration2Vec, and there are a ton more. And these are primarily image based. But if you are interested in finding more, just do a quick search or drop a quick message on the forums if you're looking for something in particular.
MNIST
CalTech
CelebNet
ImageNet: http://www.image-net.org/
LFW
CIFAR10
CIFAR100
MS Coco: http://mscoco.org/home/
WLFDB: http://wlfdb.stevenhoi.com/
Flickr 8k: http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html
Flickr 30k
<a name="preprocessing-data"></a>
Preprocessing Data
In this section, we're going to learn a bit about working with an image based dataset. We'll see how image dimensions are formatted as a single image and how they're represented as a collection using a 4-d array. We'll then look at how we can perform dataset normalization. If you're comfortable with all of this, please feel free to skip to the next video.
We're first going to load some libraries that we'll be making use of.
End of explanation
"""
from libs import utils
# utils.<tab>
files = utils.get_celeb_files()
"""
Explanation: I'll be using a popular image dataset for faces called the CelebFaces dataset. I've provided some helper functions which you can find on the resources page, which will just help us with manipulating images and loading this dataset.
End of explanation
"""
img = plt.imread(files[50])
# img.<tab>
print(img)
"""
Explanation: Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, img, and inspect a bit further what's going on:
End of explanation
"""
# If nothing is drawn and you are using notebook, try uncommenting the next line:
#%matplotlib inline
plt.imshow(img)
"""
Explanation: When I print out this image, I can see all the numbers that represent this image. We can use the function imshow to see this:
End of explanation
"""
img.shape
# (218, 178, 3)
"""
Explanation: <a name="understanding-image-shapes"></a>
Understanding Image Shapes
Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor:
End of explanation
"""
plt.imshow(img[:, :, 0], cmap='gray')
plt.imshow(img[:, :, 1], cmap='gray')
plt.imshow(img[:, :, 2], cmap='gray')
"""
Explanation: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels.
End of explanation
"""
imgs = utils.get_celeb_imgs()
"""
Explanation: We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image.
Let's use another helper function which will load every image file in the celeb dataset rather than just give us the filenames like before. By default, this will just return the first 100 images because loading the entire dataset is a bit cumbersome. In one of the later sessions, I'll show you how tensorflow can handle loading images using a pipeline so we can load this same dataset. For now, let's stick with this:
End of explanation
"""
plt.imshow(imgs[0])
"""
Explanation: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets:
End of explanation
"""
imgs[0].shape
"""
Explanation: <a name="the-batch-dimension"></a>
The Batch Dimension
Remember that an image has a shape describing the height, width, channels:
End of explanation
"""
data = np.array(imgs)
data.shape
"""
Explanation: It turns out we'll often use another convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll stick on a new dimension on the beginning... giving us number of images x the height x the width x the number of color channels.
N x H x W x C
A Color image should have 3 color channels, RGB.
We can combine all of our images to have these 4 dimensions by telling numpy to give us an array of all the images.
End of explanation
"""
mean_img = np.mean(data, axis=0)
plt.imshow(mean_img.astype(np.uint8))
"""
Explanation: This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. You'll need them all to be the same size. If you are unsure of how to get all of your images into the same size, then please refer to the online resources for the notebook I've provided, which shows you exactly how to take a bunch of images of different sizes, and crop and resize them the best we can to make them all the same size.
<a name="meandeviation-of-images"></a>
Mean/Deviation of Images
Now that we have our data in a single numpy variable, we can do a lot of cool stuff. Let's look at the mean over the batch dimension:
End of explanation
"""
std_img = np.std(data, axis=0)
plt.imshow(std_img.astype(np.uint8))
"""
Explanation: This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like. There is one other very useful statistic which we can look at very easily:
End of explanation
"""
plt.imshow(np.mean(std_img, axis=2).astype(np.uint8))
"""
Explanation: So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation.
We're looking at this per color channel. So we'll see variance for each color channel represented separately, and then combined as a color image. We can try to look at the average variance over all color channels by taking their mean:
End of explanation
"""
flattened = data.ravel()
print(data[:1])
print(flattened[:10])
"""
Explanation: This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image.
<a name="dataset-preprocessing"></a>
Dataset Preprocessing
Think back to when I described what we're trying to accomplish when we build a model for machine learning? We're trying to build a model that understands invariances. We need our model to be able to express all of the things that can possibly change in our data. Well, this is the first step in understanding what can change. If we are looking to use deep learning to learn something complex about our data, it will often start by modeling both the mean and standard deviation of our dataset. We can help speed things up by "preprocessing" our dataset by removing the mean and standard deviation. What does this mean? Subtracting the mean, and dividing by the standard deviation. Another word for that is "normalization".
<a name="histograms"></a>
Histograms
Let's have a look at our dataset another way to see why this might be a useful thing to do. We're first going to convert our batch x height x width x channels array into a 1 dimensional array. Instead of having 4 dimensions, we'll now just have 1 dimension of every pixel value stretched out in a long vector, or 1 dimensional array.
End of explanation
"""
plt.hist(flattened.ravel(), 255)
"""
Explanation: We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will be based on the last dimensions order. So we'll have: [<font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>205</font>, <font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>206</font>, <font color='red'>253</font>, <font color='green'>240</font>, <font color='blue'>207</font>, ...]
We can visualize what the "distribution", or range and frequency of possible values are. This is a very useful thing to know. It tells us whether our data is predictable or not.
End of explanation
"""
plt.hist(mean_img.ravel(), 255)
"""
Explanation: The last line is saying: give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears. In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that.
<a name="histogram-equalization"></a>
Histogram Equalization
The mean of our dataset looks like this:
End of explanation
"""
bins = 20
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0]).ravel(), bins)
axs[0].set_title('img distribution')
axs[1].hist((mean_img).ravel(), bins)
axs[1].set_title('mean distribution')
axs[2].hist((data[0] - mean_img).ravel(), bins)
axs[2].set_title('(img - mean) distribution')
"""
Explanation: When we subtract our mean image from an image, we remove all of this information from it. And that means that the remaining information is really what is important for describing what is unique about that image.
Let's try and compare the histogram before and after "normalizing our data":
End of explanation
"""
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0] - mean_img).ravel(), bins)
axs[0].set_title('(img - mean) distribution')
axs[1].hist((std_img).ravel(), bins)
axs[1].set_title('std deviation distribution')
axs[2].hist(((data[0] - mean_img) / std_img).ravel(), bins)
axs[2].set_title('((img - mean) / std_dev) distribution')
"""
Explanation: What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset:
End of explanation
"""
axs[2].set_xlim([-150, 150])
axs[2].set_xlim([-100, 100])
axs[2].set_xlim([-50, 50])
axs[2].set_xlim([-10, 10])
axs[2].set_xlim([-5, 5])
"""
Explanation: Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on:
End of explanation
"""
import tensorflow as tf
"""
Explanation: What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. This is the effect of normalizing our data: most of the data will be around 0, where some deviations of it will follow between -3 to 3.
If our data does not end up looking like this, then we should either (1) get much more data to calculate our mean/std deviation, or (2) try another method of normalization, such as scaling the values between 0 and 1, or -1 and 1, or possibly not bother with normalization at all. There are other options that one could explore, including different types of normalization such as local contrast normalization for images or PCA-based normalization, but we won't have time to get into those in this course.
<a name="tensorflow-basics"></a>
Tensorflow Basics
Let's now switch gears and start working with Google's Library for Numerical Computation, TensorFlow. This library can do most of the things we've done so far. However, it has a very different approach for doing so. And it can do a whole lot more cool stuff which we'll eventually get into. The major difference to take away from the remainder of this session is that instead of computing things immediately, we first define things that we want to compute later using what's called a Graph. Everything in Tensorflow takes place in a computational graph and running and evaluating anything in the graph requires a Session. Let's take a look at how these both work and then we'll get into the benefits of why this is useful:
<a name="variables"></a>
Variables
We're first going to import the tensorflow library:
End of explanation
"""
x = np.linspace(-3.0, 3.0, 100)
# Immediately, the result is given to us. An array of 100 numbers equally spaced from -3.0 to 3.0.
print(x)
# We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values
print(x.shape)
# and a `dtype`, in this case float64, or 64 bit floating point values.
print(x.dtype)
"""
Explanation: Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function:
End of explanation
"""
x = tf.linspace(-3.0, 3.0, 100)
print(x)
"""
Explanation: <a name="tensors"></a>
Tensors
In tensorflow, we could try to do the same thing using their linear space function:
End of explanation
"""
g = tf.get_default_graph()
"""
Explanation: Instead of a numpy.array, we are returned a tf.Tensor. The name of it is "LinSpace:0". Wherever we see this colon 0, that just means the output of. So the name of this Tensor is saying, the output of LinSpace.
Think of tf.Tensors the same way as you would the numpy.array. It is described by its shape, in this case, only 1 dimension of 100 values. And it has a dtype, in this case, float32. But unlike the numpy.array, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of a tf.Operation which has been already been added to Tensorflow's default computational graph. The result of that operation is the tensor that we are returned.
<a name="graphs"></a>
Graphs
Let's try and inspect the underlying graph. We can request the "default" graph where all of our operations have been added:
End of explanation
"""
[op.name for op in g.get_operations()]
"""
Explanation: <a name="operations"></a>
Operations
And from this graph, we can get a list of all the operations that have been added, and print out their names:
End of explanation
"""
g.get_tensor_by_name('LinSpace' + ':0')
"""
Explanation: So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace.
<a name="tensor"></a>
Tensor
We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name:
End of explanation
"""
# We're first going to create a session:
sess = tf.Session()
# Now we tell our session to compute anything we've created in the tensorflow graph.
computed_x = sess.run(x)
print(computed_x)
# Alternatively, we could tell the previous Tensor to evaluate itself using this session:
computed_x = x.eval(session=sess)
print(computed_x)
# We can close the session after we're done like so:
sess.close()
"""
Explanation: What I've done is asked for the tf.Tensor that comes from the operation "LinSpace". So remember, the result of a tf.Operation is a tf.Tensor. Remember that was the same name as the tensor x we created before.
<a name="sessions"></a>
Sessions
In order to actually compute anything in tensorflow, we need to create a tf.Session. The session is responsible for evaluating the tf.Graph. Let's see how this works:
End of explanation
"""
sess = tf.Session(graph=g)
sess.close()
"""
Explanation: We could also explicitly tell the session which graph we want to manage:
End of explanation
"""
g2 = tf.Graph()
"""
Explanation: By default, it grabs the default graph. But we could have created a new graph like so:
End of explanation
"""
sess = tf.InteractiveSession()
x.eval()
"""
Explanation: And then used this graph only in our session.
To simplify things, since we'll be working in IPython's interactive console, we can create a tf.InteractiveSession:
End of explanation
"""
# We can find out the shape of a tensor like so:
print(x.get_shape())
# %% Or in a more friendly format
print(x.get_shape().as_list())
"""
Explanation: Now we didn't have to explicitly tell the eval function about our session. We'll leave this session open for the rest of the lecture.
<a name="tensor-shapes"></a>
Tensor Shapes
End of explanation
"""
# The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma.
mean = 0.0
sigma = 1.0
# Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula.
z = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
"""
Explanation: <a name="many-operations"></a>
Many Operations
Let's try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes referred to as the bell curve or normal curve.
End of explanation
"""
res = z.eval()
plt.plot(res)
# if nothing is drawn, and you are using ipython notebook, uncomment the next two lines:
#%matplotlib inline
#plt.plot(res)
"""
Explanation: Just like before, amazingly, we haven't actually computed anything. We have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the eval function:
End of explanation
"""
# Let's store the number of values in our Gaussian curve.
ksize = z.get_shape().as_list()[0]
# Let's multiply the two to get a 2d gaussian
z_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))
# Execute the graph
plt.imshow(z_2d.eval())
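# A quick shape check of the outer product described below: an (N, 1) column vector times
# a (1, N) row vector yields an (N, N) matrix, which is exactly our 2-d kernel.
print(tf.reshape(z, [ksize, 1]).get_shape().as_list(),
      'x', tf.reshape(z, [1, ksize]).get_shape().as_list(),
      '->', z_2d.get_shape().as_list())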
"""
Explanation: <a name="convolution"></a>
Convolution
<a name="creating-a-2-d-gaussian-kernel"></a>
Creating a 2-D Gaussian Kernel
Let's try creating a 2-dimensional Gaussian. This can be done by multiplying a vector by its transpose. If you aren't familiar with matrix math, I'll review a few important concepts. This is about 98% of what neural networks do so if you're unfamiliar with this, then please stick with me through this and it'll be smooth sailing. First, to multiply two matrices, their inner dimensions must agree, and the resulting matrix will have the shape of the outer dimensions.
So let's say we have two matrices, X and Y. In order for us to multiply them, X's columns must match Y's rows. I try to remember it like so:
<pre>
(X_rows, X_cols) x (Y_rows, Y_cols)
| | | |
| |___________| |
| ^ |
| inner dimensions |
| must match |
| |
|__________________________|
^
resulting dimensions
of matrix multiplication
</pre>
But our matrix is actually a vector, or a 1 dimensional matrix. That means its dimensions are N x 1. So to multiply them, we'd have:
<pre>
(N, 1) x (1, N)
| | | |
| |___________| |
| ^ |
| inner dimensions |
| must match |
| |
|__________________________|
^
resulting dimensions
of matrix multiplication
</pre>
End of explanation
"""
# Let's first load an image. We're going to need a grayscale image to begin with. skimage has some images we can play with. If you do not have the skimage module, you can load your own image, or get skimage by pip installing "scikit-image".
from skimage import data
img = data.camera().astype(np.float32)
plt.imshow(img, cmap='gray')
print(img.shape)
"""
Explanation: <a name="convolving-an-image-with-a-gaussian"></a>
Convolving an Image with a Gaussian
A very common operation that we'll come across with Deep Learning is convolution. We're going to explore what this means using our new gaussian kernel that we've just created. For now, just think of it as a way of filtering information. We're going to effectively filter our image using this Gaussian function, as if the gaussian function is the lens through which we'll see our image data. What it will do is at every location we tell it to filter, it will average the image values around it based on what the kernel's values are. The Gaussian's kernel is basically saying, take a lot the center, a then decesasingly less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploratin of convolution, this website is great:
http://setosa.io/ev/image-kernels/
End of explanation
"""
# We could use the numpy reshape function to reshape our numpy array
img_4d = img.reshape([1, img.shape[0], img.shape[1], 1])
print(img_4d.shape)
# but since we'll be using tensorflow, we can use the tensorflow reshape function:
img_4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1])
print(img_4d)
"""
Explanation: Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many images and combine them in a single numpy array, the resulting shape has the number of images first.
N x H x W x C
In order to perform 2d convolution with tensorflow, we'll need the same dimensions for our image. With just 1 grayscale image, this means the shape will be:
1 x H x W x 1
End of explanation
"""
print(img_4d.get_shape())
print(img_4d.get_shape().as_list())
"""
Explanation: Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the shape parameter like we did with the numpy array. But instead, we can use get_shape(), and get_shape().as_list():
End of explanation
"""
# Reshape the 2d kernel to tensorflow's required 4d format: H x W x I x O
z_4d = tf.reshape(z_2d, [ksize, ksize, 1, 1])
print(z_4d.get_shape().as_list())
"""
Explanation: The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel.
We'll also have to reshape our Gaussian Kernel to be 4-dimensional as well. The dimensions for kernels are slightly different! Remember that the image is:
Number of Images x Image Height x Image Width x Number of Channels
we have:
Kernel Height x Kernel Width x Number of Input Channels x Number of Output Channels
Our Kernel already has a height and width of ksize so we'll stick with that for now. The number of input channels should match the number of channels on the image we want to convolve. And for now, we just keep the same number of output channels as the input channels, but we'll later see how this comes into play.
End of explanation
"""
convolved = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME')
res = convolved.eval()
print(res.shape)
"""
Explanation: <a name="convolvefilter-an-image-using-a-gaussian-kernel"></a>
Convolve/Filter an image using a Gaussian Kernel
We can now use our previous Gaussian Kernel to convolve our image:
End of explanation
"""
# Matplotlib cannot handle plotting 4D images! We'll have to convert this back to the original shape. There are a few ways we could do this. We could plot by "squeezing" the singleton dimensions.
plt.imshow(np.squeeze(res), cmap='gray')
# Or we could specify the exact dimensions we want to visualize:
plt.imshow(res[0, :, :, 0], cmap='gray')
"""
Explanation: There are two new parameters here: strides, and padding. Strides says how to move our kernel across the image. Basically, we'll only ever use it for one of two sets of parameters:
[1, 1, 1, 1], which means, we are going to convolve every single image, every pixel, and every color channel by whatever the kernel is.
and the second option:
[1, 2, 2, 1], which means, we are going to convolve every single image, but every other pixel, in every single color channel.
Padding says what to do at the borders. If we say "SAME", that means we want the same dimensions going in as we do going out. In order to do this, zeros must be padded around the image. If we say "VALID", that means no padding is used, and the image dimensions will actually change.
End of explanation
"""
xs = tf.linspace(-3.0, 3.0, ksize)
"""
Explanation: <a name="modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel"></a>
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
We've now seen how to use tensorflow to create a set of operations which create a 2-dimensional Gaussian kernel, and how to use that kernel to filter or convolve another image. Let's create another interesting convolution kernel called a Gabor. This is a lot like the Gaussian kernel, except we use a sine wave to modulate that.
<graphic: draw 1d gaussian wave, 1d sine, show modulation as multiplication and resulting gabor.>
We first use linspace to get a set of values the same range as our gaussian, which should be from -3 standard deviations to +3 standard deviations.
End of explanation
"""
ys = tf.sin(xs)
plt.figure()
plt.plot(ys.eval())
"""
Explanation: We then calculate the sine of these values, which should give us a nice wave
End of explanation
"""
ys = tf.reshape(ys, [ksize, 1])
"""
Explanation: And for multiplication, we'll need to convert this 1-dimensional vector to a matrix: N x 1
End of explanation
"""
ones = tf.ones((1, ksize))
wave = tf.matmul(ys, ones)
plt.imshow(wave.eval(), cmap='gray')
"""
Explanation: We then repeat this wave across the matrix by using a multiplication of ones:
End of explanation
"""
gabor = tf.multiply(wave, z_2d)
plt.imshow(gabor.eval(), cmap='gray')
"""
Explanation: We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel:
End of explanation
"""
# This is a placeholder which will become part of the tensorflow graph, but
# which we have to later explicitly define whenever we run/evaluate the graph.
# Pretty much everything you do in tensorflow can have a name. If we don't
# specify the name, tensorflow will give a default one, like "Placeholder_0".
# Let's use a more useful name to help us understand what's happening.
img = tf.placeholder(tf.float32, shape=[None, None], name='img')
# We'll reshape the 2d image to a 3-d tensor just like before:
# Except now we'll make use of another tensorflow function, expand dims, which adds a singleton dimension at the axis we specify.
# We use it to reshape our H x W image to include a channel dimension of 1
# our new dimensions will end up being: H x W x 1
img_3d = tf.expand_dims(img, 2)
dims = img_3d.get_shape()
print(dims)
# And again to get: 1 x H x W x 1
img_4d = tf.expand_dims(img_3d, 0)
print(img_4d.get_shape().as_list())
# Let's create another set of placeholders for our Gabor's parameters:
mean = tf.placeholder(tf.float32, name='mean')
sigma = tf.placeholder(tf.float32, name='sigma')
ksize = tf.placeholder(tf.int32, name='ksize')
# Then finally redo the entire set of operations we've done to convolve our
# image, except with our placeholders
x = tf.linspace(-3.0, 3.0, ksize)
z = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
z_2d = tf.matmul(
tf.reshape(z, tf.stack([ksize, 1])),
tf.reshape(z, tf.stack([1, ksize])))
ys = tf.sin(x)
ys = tf.reshape(ys, tf.stack([ksize, 1]))
ones = tf.ones(tf.stack([1, ksize]))
wave = tf.matmul(ys, ones)
gabor = tf.multiply(wave, z_2d)
gabor_4d = tf.reshape(gabor, tf.stack([ksize, ksize, 1, 1]))
# And finally, convolve the two:
convolved = tf.nn.conv2d(img_4d, gabor_4d, strides=[1, 1, 1, 1], padding='SAME', name='convolved')
convolved_img = convolved[0, :, :, 0]
"""
Explanation: <a name="manipulating-an-image-with-this-gabor"></a>
Manipulating an image with this Gabor
We've already gone through the work of convolving an image. The only thing that has changed is the kernel that we want to convolve with. We could have made life easier by specifying in our graph which elements we wanted to be specified later. Tensorflow calls these "placeholders", meaning, we're not sure what these are yet, but we know they'll fit in the graph like so, generally the input and output of the network.
Let's rewrite our convolution operation using a placeholder for the image and the kernel and then see how the same operation could have been done. We're going to set the image dimensions to None x None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter.
End of explanation
"""
convolved_img.eval()
"""
Explanation: What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation.
If we try to evaluate it without specifying placeholders beforehand, we will get an error InvalidArgumentError: You must feed a value for placeholder tensor 'img' with dtype float and shape [512,512]:
End of explanation
"""
convolved_img.eval(feed_dict={img: data.camera()})
"""
Explanation: It's saying that we didn't specify our placeholder for img. In order to "feed a value", we use the feed_dict parameter like so:
End of explanation
"""
res = convolved_img.eval(feed_dict={
img: data.camera(), mean:0.0, sigma:1.0, ksize:100})
plt.imshow(res, cmap='gray')
"""
Explanation: But that's not the only placeholder in our graph! We also have placeholders for mean, sigma, and ksize. Once we specify all of them, we'll have our result:
End of explanation
"""
res = convolved_img.eval(feed_dict={
img: data.camera(),
mean: 0.0,
sigma: 0.5,
ksize: 32
})
plt.imshow(res, cmap='gray')
"""
Explanation: Now, instead of having to rewrite the entire graph, we can just specify the different placeholders.
End of explanation
"""
|
TylerJensen1107/tylerjensen1107.github.io | .ipynb_checkpoints/Recursion-checkpoint.ipynb | mit | def pathTo(x, y, path):
#basecase
if x == 0 and y == 0:
print path
#recursive case
#this is an elif because we don't want to recurse forever once we are too far to the right, or too high up
elif x >= 0 and y >= 0:
pathTo(x - 1, y, path + "Right ") #choose right, explore
pathTo(x, y - 1, path + "Up ") #choose up, explore
#pathTo(5, 5, "")
"""
Explanation: Tyler Jensen
Recursive Backtracking || Brute Force Solutions
Why use recursion?
You now have a couple of tools for solving programming problems, namely iteration and recursion. Both can be used in many situations, but recursion allows us to solve problems in a way that human beings cannot.
For example, let's consider guessing someone's PIN. 8800 is mine. A human being could guess every single possible combination of numbers for a PIN (10,000 possible combinations), but that would take forever. 10,000 guesses is actually a relatively small number of guesses for a computer.
While it's possible to solve this with iteration, it's much easier to do with recursion, and specifically recursive backtracking.
Visualizing Recursive Backtracking
How is recursive backtracking different?
Recursive backtracking still follows all the principles of recursion. Those being :
1. A recursive algorithm must have a base case.
2. A recursive algorithm must change its state and move toward the base case.
3. A recursive algorithm must call itself, recursively.
Recursive backtracking will always have a base case, or it will go on forever. In recursive backtracking, we add a concept called "Choose, Explore, Unchoose". When we want to change our state and move towards the base case (the second principle), we will generally have a few choices to make (following the PIN example, 10 choices, one for each number). When we implement recursive backtracking, we do this with Choose, Explore, Unchoose.
Problem 1 : Pathfinding
Another use for recursive backtracking is finding all the possible different paths to a point. Consider a basic graph; we may want to find all the paths from the origin to the point (5, 5) given that we can only go up or right. So for example, two possible paths might be :
Up Up Up Up Up Right Right Right Right Right
Up Up Up UP Right Right Right Right Right Up
Base Case :
Generally the easiest case; in this problem it is when the coordinates we are given are (0, 0)
Recursive Case :
At every point, we have two choices to make (How many recursive calls do you think we will make each time through the method?)
We have to move towards the base case (subtract 1 from X or Y to eventually get to (0, 0))
End of explanation
"""
def pathTo(x, y, path):
#basecase
if x == 0 and y == 0:
print path
#recursive case
#this is an elif because we don't want to recurse forever once we are too far to the right, or too high up
elif x >= 0 and y >= 0:
pathTo(x - 1, y, path + "E ") #choose right, explore
pathTo(x, y - 1, path + "N ") #choose up, explore
pathTo(x - 1, y - 1, path + "NE ") #choose diagnal, explore
#pathTo(5, 5, "")
"""
Explanation: Questions?
What happens if we change the order?
How can we make another choice?
Why don't we have to Unchoose?
How do we stop from going too far?
End of explanation
"""
def hackPassword(correctPassword):
hackPass(correctPassword, "")
def hackPass(correctPassword, guess):
#base case : guess is the correct password
if guess == correctPassword:
print guess
#recursive case : we don't have more than 3 numbers, so make 10 choices
elif len(correctPassword) > len(guess):
for number in range(10):
#choice : add number to guess
#explore : make the recursive call
hackPass(correctPassword, guess + str(number))
hackPassword("8800")
"""
Explanation: Problem 2 : PIN Guesser
Given a PIN, we can use recursive backtracking to "brute force" our way into a solution. This means we are essentially just exhausting all possible guesses.
We are going to need a second parameter here to start out our solution
Base Case :
The PIN numbers match
Recursive Case :
At every point, we have 10 choices to make (one for each number). This looks more like a loop with a recursive call rather than 10 recursive calls.
End of explanation
"""
import sys
def possibleSteps(steps):
myList = [] #we have to make this list in here so that we have a way to store steps
#gonna draw the staircase for fun
for number in range(steps)[::-1]:
for stepNum in range(number):
sys.stdout.write(' ')
print "__|"
print ""
possibleStepsRecurse(myList, steps)
def possibleStepsRecurse(myList, steps):
#base case : no steps left
if steps == 0:
print myList
#recursive case : don't recurse if we are past the number of steps needed
elif steps > 0:
myList.append(1) # choose
possibleStepsRecurse(myList, steps - 1) # explore
myList.pop() #unchoose
myList.append(2) # choose
possibleStepsRecurse(myList, steps - 2) # explore
myList.pop() # unchoose
possibleSteps(5)
#test comment
"""
Explanation: Questions?
Why don't we have to unchoose?
Problem 3 : Climbing Stairs
We've all climbed stairs two stairs at a time. Given a number of steps, how many different combinations of stepping once and stepping twice can we climb the given staircase in?
Base Case :
The easiest staircase to climb is when we're already at the top, so 0 stairs, or 0 steps left.
Recursive Case :
At every point, we have 2 choices to make. 1 step or 2 steps.
What makes this problem more difficult is how we are going to choose to store these steps. In this case, a list is the easiest. Every time we make a choice we will append either 1 or 2 to the list.
We finally get to see Unchoose in action here! We have to undo our choice of 1 step before we explore solutions with 2 steps.
End of explanation
"""
|
kit-cel/lecture-examples | mloc/ch6_Unsupervised_Learning/Expectation_Maximization_for_GMMs.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math
# initialize random seed to have reproducible results
np.random.seed(1)
"""
Explanation: Illustration of Expectation Maximization for Gaussian Mixture Models (GMMs)
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* Expectation-Maximization (EM) algorithm to perform soft-clustering on a toy dataset
End of explanation
"""
# generate an example of data
# specify means and covariance matrices of 3 Gaussian mixtures
means = [np.array([4,3]), np.array([-0.3,0]), np.array([1,-3])]
covariances = [np.array([[1,0],[0,1]]), np.array([[0.5,0.3],[0.3,0.4]]), np.array([[2,0],[0,1]])]
# specify weighting factors
weights = np.array([0.2,0.2,0.6])
"""
Explanation: Generate an exemplary data set. Here, we simply sample from a mixture of 3 Gaussians and then use the EM algorithm to fit a GMM model to this dataset. We can verify that the EM algorithm works by checking whether the estimated parameters correspond to the values that we specify here
End of explanation
"""
length = 1000
occur = np.random.rand(length)
tpi = np.cumsum(np.append([0], weights))
examples = np.zeros((length, 2))
# generate examples
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=True)
plt.rcParams['text.usetex'] = True
plt.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}\usepackage{amssymb}\usepackage{bm}']
plt.figure(1,figsize=(12,6))
plt.subplot(121)
for k in range(len(weights)):
idx = (occur >= tpi[k]) & (occur < tpi[k+1])
x, y = np.random.multivariate_normal(means[k], covariances[k], sum(idx)).T
plt.scatter(x,y)
examples[idx,0] = x
examples[idx,1] = y
plt.title('Scatter plot with knowledge of class')
plot_range = plt.axis('equal')
plt.xlabel('$x_{i,1}$', fontsize=18)
plt.ylabel('$x_{i,2}$', fontsize=18)
# shuffle examples
np.random.shuffle(examples)
plt.subplot(122)
plt.scatter(examples[:,0], examples[:,1])
plt.title('Scatter plot without class ($\mathbb{X}^{[\\textsf{train}]}$)')
plt.axis('equal')
plt.xlabel('$x_{i,1}$', fontsize=18)
plt.ylabel('$x_{i,2}$', fontsize=18)
#plt.savefig('GMM_m3_initial.pdf',bbox_inches='tight')
plt.show()
"""
Explanation: Plot the data set and shuffle the examples so that they are no longer assigned to a known class
End of explanation
"""
# multivariate Gaussian pdf
# implemented such that x can be an array with each row containing a different example x
def mvnorm(x, mu, sigma):
D = len(mu)
temp = x-mu
sigma_det = np.linalg.det(sigma)
sigma_inv = np.linalg.inv(sigma)
result = np.dot(sigma_inv, temp.T)
exponent = np.array([np.dot(temp[k,:],result[:,k]) for k in range(x.shape[0])])
constant = np.sqrt(1 / ((2*math.pi)**D * sigma_det))
return constant * np.exp(-0.5*exponent)
def plot_nice(mus, sigmas, pis, ax=None, title=None):
ax = ax or plt.gca()
xx, yy = np.mgrid[-ext_max:ext_max:200j, -ext_max:ext_max:200j]
myinput = np.concatenate( (np.reshape(xx,(-1,1)), np.reshape(yy,(-1,1))), axis=1)
f = pis[0]*mvnorm(myinput, mus[0], sigmas[0])
for k in range(1,len(pis)):
f += pis[k]*mvnorm(np.concatenate( (np.reshape(xx,(-1,1)), np.reshape(yy,(-1,1))), axis=1), mus[k], sigmas[k])
f = np.reshape(f, xx.shape)
ax.set_xlim(plot_range[0], plot_range[1])
ax.set_ylim(plot_range[2], plot_range[3])
cfset = ax.contourf(xx, yy, f, 20,cmap='coolwarm')
ax.imshow(np.rot90(f), cmap='coolwarm', extent=[-ext_max, ext_max, -ext_max, ext_max])
cset = ax.contour(xx, yy, f, 20, colors='k',linewidths=0.3)
ax.set_xlabel('$x_{i,1}$', fontsize=18)
ax.set_ylabel('$x_{i,2}$', fontsize=18)
ax.set_title(title)
"""
Explanation: Helper functions to evaluate a multivariate normal distribution and plot a 2D Gaussian distribution using a contour plot.
End of explanation
"""
# number of classes
m = 3
# needed for plotting
ext_max = 1.2*np.max(np.max(np.abs(examples),axis=0))
# randomly distribute initial means so that they lie somewhere within plotting area
mus = [[np.random.uniform(low=plot_range[0], high=plot_range[1]), np.random.uniform(low=plot_range[2], high=plot_range[3])] for k in range(m)]
# start with unit covariance matrices
sigmas = [np.eye(2) for k in range(m)]
# assume that each class is used equally often
pis = np.ones(m)/m
# maximum number of iterations
max_iterations = 200
# number of examples N
N = examples.shape[0]
# initialize space for gammas
gammas = np.zeros((N,m))
# assume that log-likelihood is -infinity before starting
init_log_likelihood = -np.Inf
_, (ax1, ax2) = plt.subplots(1,2, figsize=(12,6))
plot_nice(mus, sigmas, pis, ax1, 'Initial random start')
# carry out EM algorithm
for iter in range(max_iterations):
# E-step, compute gammas
for k in range(m):
gammas[:,k] = pis[k] * mvnorm(examples, mus[k], sigmas[k])
summe = np.sum(gammas, axis=1)
gammas = gammas / summe[:,np.newaxis]
# M-step, re-optimize parameters
Nk = np.sum(gammas,axis=0)
for k in range(m):
# maximize means
mus[k] = np.sum(examples * np.tile(gammas[:,k], (2,1)).T, axis=0) / Nk[k]
# maximize covariance matrices
sigmas[k] = np.zeros((2,2))
for n in range(N):
sigmas[k] += gammas[n,k] * np.outer(examples[n,:]-mus[k], examples[n,:]-mus[k])
sigmas[k] = sigmas[k] / Nk[k]
# maximize weights
pis[k] = Nk[k] / N
# compute log-likelihood
lsumme = np.zeros(N)
for k in range(m):
lsumme += pis[k]*mvnorm(examples, mus[k], sigmas[k])
log_likelihood = np.sum(np.log(lsumme))
# stopping criterion
if abs(log_likelihood-init_log_likelihood) < 1e-4:
print('Breaking after %d iterations as likelihood converged' % iter)
break
init_log_likelihood = log_likelihood
# output
plot_nice(mus, sigmas, pis, ax2, 'After convergence')
#plt.savefig('GMM_m3_afterconvergence.pdf',bbox_inches='tight')
print('\nObtained means after expectation maximization:')
print(mus)
print('\nObtained covariance matrices after expectation maximization:')
[print(sigmas[k]) for k in range(m)]
print('\nObtained weights after expectation maximization:')
print(pis)
"""
Explanation: In the following cell, we run the EM algorithm to fit a mixture of $m$ Gaussians to the data set. The EM algorithm can be summarized as
The Expectation-Maximization Algorithm
Initialize $\boldsymbol{\mu}_\ell$, $\boldsymbol{\Sigma}_\ell$, $\pi_\ell$ and compute initial log-likelihood as $\mathcal{L}^{(0)}$. Set iteration counter $I=1$
<font color=blue>Expectation step</font>: Evaluate, using the current $\boldsymbol{\mu}_\ell$, $\boldsymbol{\Sigma}_\ell$ and $\pi_\ell$
\begin{equation}
\gamma(\boldsymbol{x}_i^{[\textsf{train}]},y_{i,\ell}) = \frac{\pi_\ell\mathcal{N}(\boldsymbol{x}_i^{[\textsf{train}]}; \boldsymbol{\mu}_\ell,\boldsymbol{\Sigma}_\ell)}{\sum_{k=1}^m\pi_k\mathcal{N}(\boldsymbol{x}_i^{[\textsf{train}]};\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}
\end{equation}
<font color=blue>Maximization step</font>: Re-estimate the parameters as
\begin{align}
\boldsymbol{\mu}_\ell^{\textsf{new}} &= \frac{1}{N_\ell}\sum_{i=1}^N\gamma(\boldsymbol{x}_i^{[\textsf{train}]}, y_{i,\ell})\boldsymbol{x}_i^{[\textsf{train}]}\\
\boldsymbol{\Sigma}_\ell^{\textsf{new}} &= \frac{1}{N_\ell}\sum_{i=1}^N\gamma(\boldsymbol{x}_i^{[\textsf{train}]}, y_{i,\ell})(\boldsymbol{x}_i^{[\textsf{train}]}-\boldsymbol{\mu}_\ell^{\textsf{new}})(\boldsymbol{x}_i^{[\textsf{train}]}-\boldsymbol{\mu}_\ell^{\textsf{new}})^{\mathrm{T}} \\
\pi_\ell^{\textsf{new}} &= \frac{N_\ell}{N}
\end{align}
where
\begin{equation}
N_\ell = \sum\limits_{i=1}^N\gamma(\boldsymbol{x}_i^{[\textsf{train}]}, y_{i,\ell})
\end{equation}
Evaluate the log-likelihood
\begin{equation}
\mathcal{L}^{(I)} = \sum\limits_{i=1}^N\log\left(\sum\limits_{\ell=1}^m\pi_\ell\mathcal{N}(\boldsymbol{x}_i^{[\textsf{train}]};\boldsymbol{\mu}_\ell,\boldsymbol{\Sigma}_\ell)\right)
\end{equation}
If $|\mathcal{L}^{(I)}-\mathcal{L}^{(I-1)}|<\epsilon$ abort, otherwise go to step 2. ($\epsilon$: small constant)
End of explanation
"""
cmap = plt.get_cmap("tab10")
colors = np.zeros((N,3))
for k in range(m):
for n in range(N):
colors[n,:] += np.multiply(gammas[n,k],list(cmap(k))[0:3])
np.clip(colors,0,1)
plt.figure(1,figsize=(6,6))
plt.scatter(examples[:,0], examples[:,1], c=colors)
plt.title('$\gamma(\\bm{x}_i^{[\\textsf{train}]},y_{i,\ell})$ determines colors')
plt.axis('equal')
plt.xlabel('$x_{i,1}$', fontsize=18)
plt.ylabel('$x_{i,2}$', fontsize=18)
#plt.savefig('GMM_m3_scattercolored.pdf',bbox_inches='tight')
"""
Explanation: Plot another scatter plot, but this time take the examples and use the corresponding $\gamma(\boldsymbol{x}_i^{[\textsf{train}]}, y_{i,\ell})$ to interpolate between the colors. If we compare this scatter plot with the one above, we can see that some points are incorrectly classified (some points of the "bottom" cluster lie within the tilted cluster). The algorithm cannot distinguish whether these points belong to the bottom cluster and instead assumes that they are close to the nearest one.
End of explanation
"""
%matplotlib notebook
# Generate animation
from matplotlib import animation, rc
from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.
np.random.seed(200)
# less examples to have slower convergence
new_examples = False
if new_examples == True:
ani_length = 300
examples = np.zeros((ani_length, 2))
occur = np.random.rand(ani_length)
for k in range(len(weights)):
idx = (occur >= tpi[k]) & (occur < tpi[k+1])
examples[idx,0], examples[idx,1] = np.random.multivariate_normal(means[k], covariances[k], sum(idx)).T
m = 4
# randomly distribute initial means so that they lie somewhere within plotting area
mus = [[np.random.uniform(low=plot_range[0], high=plot_range[1]), np.random.uniform(low=plot_range[2], high=plot_range[3])] for k in range(m)]
# start with unit covariance matrices
sigmas = [0.5*np.eye(2) for k in range(m)]
# assume that each class is used equally often
pis = np.ones(m)/m
# maximum number of iterations
max_iterations = 200
# number of examples N
N = examples.shape[0]
# initialize space for gammas
gammas = np.zeros((N,m))
# assume that log-likelihood is -infinity before starting
init_log_likelihood = -np.Inf
fig, ax = plt.subplots(1, figsize=(6,6))
#plot_nice(mus, sigmas, pis, ax, 'After 0 iterations')
#plt.show()
written = False
def animate(i):
global gammas, mus, sigmas, pis, init_log_likelihood, written
ax.clear()
plot_nice(mus, sigmas, pis, ax, 'After %d iterations' % (i))
if i==0:
return
# E-step
for k in range(m):
gammas[:,k] = pis[k] * mvnorm(examples, mus[k], sigmas[k])
summe = np.sum(gammas, axis=1)
gammas = gammas / summe[:,np.newaxis]
# M-step
Nk = np.sum(gammas,axis=0)
for k in range(m):
# maximize means
mus[k] = np.sum(examples * np.tile(gammas[:,k], (2,1)).T, axis=0) / Nk[k]
# maximize covariance matrices
sigmas[k] = np.zeros((2,2))
for n in range(N):
sigmas[k] += gammas[n,k] * np.outer(examples[n,:]-mus[k], examples[n,:]-mus[k])
sigmas[k] = sigmas[k] / Nk[k]
# maximize weights
pis[k] = Nk[k] / N
# compute log-likelihood
lsumme = np.zeros(N)
for k in range(m):
lsumme += pis[k]*mvnorm(examples, mus[k], sigmas[k])
log_likelihood = np.sum(np.log(lsumme))
# stopping criterion
if abs(log_likelihood-init_log_likelihood) < 1e-4 and not written:
print('Breaking after %d iterations as likelihood converged' % i)
written = True
init_log_likelihood = log_likelihood
anim = animation.FuncAnimation(fig, animate, frames=100, interval=200, blit=False)
fig.show()
#anim.save('expectation_maximation_for_GMMs.gif', writer=PillowWriter(fps=10))
"""
Explanation: Generate videos showing the convergence of the EM algorithm
End of explanation
"""
|
cesarcontre/Simulacion2017 | Modulo2/.ipynb_checkpoints/Clase16_ProbabilidadPrecio-Umbral-checkpoint.ipynb | mit | # Importamos librerías
# Creamos la función
# Descargamos datos de microsoft en el 2016
# Grafiquemos
"""
Explanation: Aplicando Python para análisis de precios: probabilidad precio-umbral
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://c2.staticflickr.com/4/3673/9761565422_8da861e1c8_b.jpg" width="400px" height="125px" />
Ya habíamos visto como importar precios de cierre de acciones desde Yahoo Finance con la libreria pandas-datareader. En la clase pasada, además, vimos como pronosticar escenarios de evolución de precios suponiendo que los rendimientos diarios distribuyen normalmente. Como esta evolución de precios es aleatoria, utilizaremos la simulación montecarlo (hacer muchas simulaciones de escenarios de evolución de precios) para obtener probabilidades de que los precios de cierre estén encima de un valor umbral y tomar decisiones con base en estas probabilidades.
Referencias:
- http://pandas.pydata.org/
- http://www.learndatasci.com/python-finance-part-yahoo-finance-api-pandas-matplotlib/
1. Descargando datos, una vez más
Recordamos una vez más como descargar los precios de cierre ajustados desde yahoo finance.
Esta vez haremos una función.
End of explanation
"""
# Función que devuelve rendimientos diarios, media y desviación estándar
# Calculamos con la función anterior
# Graficamos rendimientos diarios
"""
Explanation: 2. Proyección de rendimientos diarios
Recordemos que los precios diarios de cierre ajustados no son un proceso estocástico estacionario, pero los rendimientos diarios si lo son. Por tanto calculamos los rendimientos a partir de los precios de cierre, obtenemos sus propiedades estadísticas muestrales y proyectamos los rendimientos. Luego, obtenemos la proyección de los precios.
Los rendimientos diarios se pueden calcular con los precios de cierre de la siguiente manera:
$$r_i=\frac{p_i-p_{i-1}}{p_{i-1}},$$
donde $r_i$ es el rendimiento en el día $i$ y $p_i$ es el precio de cierre ajustado en el día $i$.
En la clase pasada, vimos que una buena aproximación de la anterior expresión es:
$$r_i=\frac{p_i-p_{i-1}}{p_{i-1}}\approx \ln\left(\frac{p_i}{p_{i-1}}\right).$$
Además, supusimos que los rendimientos diarios eran una variable aleatoria con distribución normal (que se caracteriza con su media y varianza). Por tanto obtenemos la media y desviación estandar muestrales. Hagamos una función que retorne lo anterior.
End of explanation
"""
# Función que simula varios escenarios de rendimientos diarios
# Simulamos 100 escenarios para todoo el 2017
"""
Explanation: Habiendo caracterizado los rendimientos diarios como una variable aleatoria normal con la media y la varianza muestral obtenida de los datos del 2016, podemos generar números aleatorios con estas características para simular el comportamiento de los precios de cierre de las acciones en el 2017 (hay un supuesto de que las cosas no cambiarán fundamentalmente).
Sin embargo, cada simulación que hagamos nos conducirá a distintos resultados (los precios siguen evolucionando aleatoriamente). Entonces, lo que haremos es simular varios escenarios para así ver alguna tendencia y tomar decisiones.
Hagamos una una función que simule varios escenarios de rendimientos diarios rendimientos diarios y que devuelva un dataframe con esta simulación.
End of explanation
"""
# Función de proyección de precios
# Proyección de precios y concatenación con precios de 2016
# Gráfico
"""
Explanation: 3. Proyección de precios de cierre
Por tanto, para calcular los precios, tenemos:
$$\begin{align}
p_i&=p_{i-1}\exp(r_i)\
p_{i+1}&=p_i\exp(r_{i+1})=p_{i-1}\exp(r_i)\exp(r_{i+1})=p_{i-1}\exp(r_i+r_{i+1})\
&\vdots\
p_{i+k}&=p_{i-1}\exp(r_i+\cdots+r_{i+k}).
\end{align}$$
Si hacemos $i=0$ en la última ecuación, tenemos que $p_{k}=p_{-1}\exp(r_0+\cdots+r_{k})$, donde $p_{-1}$ es el último precio reportado en el 2016.
End of explanation
"""
K = 65
dates = pd.date_range('20170101',periods=ndays)
strike = pd.DataFrame({'Strike':K*np.ones(ndays)},index=dates)
simul = pd.concat([closes.T,simdata.T,strike.T]).T
simul.plot(figsize=(8,6),legend=False);
strike = pd.DataFrame(K*np.ones(ndays*ntraj).reshape((ndays,ntraj)),index=dates)
count = simdata>strike
prob = count.T.sum()/ntraj
prob.plot(figsize=(8,6),legend=False);
((K-closes.iloc[-1,:])/closes.iloc[-1,:]).values
"""
Explanation: 4. Probabilidad Precio-Umbral
Ya que tenemos muchos escenarios de precios proyectados, podemos ver varias cosas. Por ejemplo, ¿cuál es la probabilidad de que el precio de cierre sobrepase algún valor umbral en algún momento?
End of explanation
"""
|
google/brax | notebooks/basics.ipynb | apache-2.0 | #@title Colab setup and imports
from matplotlib.lines import Line2D
from matplotlib.patches import Circle
import matplotlib.pyplot as plt
import numpy as np
try:
import brax
except ImportError:
from IPython.display import clear_output
!pip install git+https://github.com/google/brax.git@main
clear_output()
import brax
"""
Explanation: Brax: a differentiable physics engine
Brax simulates physical systems made up of rigid bodies, joints, and actuators. Brax provides the function:
$$
\text{qp}_{t+1} = \text{step}(\text{system}, \text{qp}_t, \text{act})
$$
where:
* $\text{system}$ is the static description of the physical system: each body in the world, its weight and size, and so on
* $\text{qp}_t$ is the dynamic state of the system at time $t$: each body's position, rotation, velocity, and angular velocity
* $\text{act}$ is dynamic input to the system in the form of motor actuation
Brax simulations are differentiable: the gradient $\Delta \text{step}$ can be used for efficient trajectory optimization. But Brax is also well-suited to derivative-free optimization methods such as evolutionary strategy or reinforcement learning.
Let's review how $\text{system}$, $\text{qp}_t$, and $\text{act}$ are used:
End of explanation
"""
#@title A bouncy ball scene
bouncy_ball = brax.Config(dt=0.05, substeps=20, dynamics_mode='pbd')
# ground is a frozen (immovable) infinite plane
ground = bouncy_ball.bodies.add(name='ground')
ground.frozen.all = True
plane = ground.colliders.add().plane
plane.SetInParent() # for setting an empty oneof
# ball weighs 1kg, has equal rotational inertia along all axes, is 1m long, and
# has an initial rotation of identity (w=1,x=0,y=0,z=0) quaternion
ball = bouncy_ball.bodies.add(name='ball', mass=1)
cap = ball.colliders.add().capsule
cap.radius, cap.length = 0.5, 1
# gravity is -9.8 m/s^2 in z dimension
bouncy_ball.gravity.z = -9.8
"""
Explanation: Brax Config
Here's a brax config that defines a bouncy ball:
End of explanation
"""
def draw_system(ax, pos, alpha=1):
for i, p in enumerate(pos):
ax.add_patch(Circle(xy=(p[0], p[2]), radius=cap.radius, fill=False, color=(0, 0, 0, alpha)))
if i < len(pos) - 1:
pn = pos[i + 1]
ax.add_line(Line2D([p[0], pn[0]], [p[2], pn[2]], color=(1, 0, 0, alpha)))
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
draw_system(ax, [[0, 0, 0.5]])
plt.title('ball at rest')
plt.show()
"""
Explanation: We visualize this system config like so:
End of explanation
"""
qp = brax.QP(
# position of each body in 3d (z is up, right-hand coordinates)
pos = np.array([[0., 0., 0.], # ground
[0., 0., 3.]]), # ball is 3m up in the air
# velocity of each body in 3d
vel = np.array([[0., 0., 0.], # ground
[0., 0., 0.]]), # ball
# rotation about center of body, as a quaternion (w, x, y, z)
rot = np.array([[1., 0., 0., 0.], # ground
[1., 0., 0., 0.]]), # ball
# angular velocity about center of body in 3d
ang = np.array([[0., 0., 0.], # ground
[0., 0., 0.]]) # ball
)
"""
Explanation: Brax State
$\text{QP}$, brax's dynamic state, is a structure with the following fields:
End of explanation
"""
#@title Simulating the bouncy ball config { run: "auto"}
bouncy_ball.elasticity = 0.85 #@param { type:"slider", min: 0, max: 1.0, step:0.05 }
ball_velocity = 1 #@param { type:"slider", min:-5, max:5, step: 0.5 }
sys = brax.System(bouncy_ball)
# provide an initial velocity to the ball
qp.vel[1, 0] = ball_velocity
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
for i in range(100):
draw_system(ax, qp.pos[1:], i / 100.)
qp, _ = sys.step(qp, [])
plt.title('ball in motion')
plt.show()
"""
Explanation: Brax Step Function
Let's observe $\text{step}(\text{config}, \text{qp}_t)$ with a few different variants of $\text{config}$ and $\text{qp}$:
End of explanation
"""
#@title A pendulum config for Brax
pendulum = brax.Config(dt=0.01, substeps=20, dynamics_mode='pbd')
# start with a frozen anchor at the root of the pendulum
anchor = pendulum.bodies.add(name='anchor', mass=1.0)
anchor.frozen.all = True
# now add a middle and bottom ball to the pendulum
pendulum.bodies.append(ball)
pendulum.bodies.append(ball)
pendulum.bodies[1].name = 'middle'
pendulum.bodies[2].name = 'bottom'
# connect anchor to middle
joint = pendulum.joints.add(name='joint1', parent='anchor',
child='middle', angular_damping=20)
joint.angle_limit.add(min = -180, max = 180)
joint.child_offset.z = 1.5
joint.rotation.z = 90
# connect middle to bottom
pendulum.joints.append(joint)
pendulum.joints[1].name = 'joint2'
pendulum.joints[1].parent = 'middle'
pendulum.joints[1].child = 'bottom'
# gravity is -9.8 m/s^2 in z dimension
pendulum.gravity.z = -9.8
"""
Explanation: Joints
Joints constrain the motion of bodies so that they move in tandem:
End of explanation
"""
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
# rather than building our own qp like last time, we ask brax.System to
# generate a default one for us, which is handy
qp = brax.System(pendulum).default_qp()
draw_system(ax, qp.pos)
plt.title('pendulum at rest')
plt.show()
"""
Explanation: Here is our system at rest:
End of explanation
"""
#@title Simulating the pendulum config { run: "auto"}
ball_impulse = 8 #@param { type:"slider", min:-15, max:15, step: 0.5 }
sys = brax.System(pendulum)
qp = sys.default_qp()
# provide an initial velocity to the ball
qp.vel[2, 0] = ball_impulse
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
for i in range(50):
draw_system(ax, qp.pos, i / 50.)
qp, _ = sys.step(qp, [])
plt.title('pendulum in motion')
plt.show()
"""
Explanation: Let's observe $\text{step}(\text{config}, \text{qp}_t)$ by smacking the bottom ball with an initial impulse, simulating a pendulum swing.
End of explanation
"""
#@title A single actuator on the pendulum
actuated_pendulum = brax.Config()
actuated_pendulum.CopyFrom(pendulum)
# actuating the joint connecting the anchor and middle
angle = actuated_pendulum.actuators.add(name='actuator', joint='joint1',
strength=100).angle
angle.SetInParent() # for setting an empty oneof
"""
Explanation: Actuators
Actuators provide dynamic input to the system during every physics step. They provide control parameters for users to manipulate the system interactively via the $\text{act}$ parameter.
End of explanation
"""
#@title Simulating the actuated pendulum config { run: "auto"}
target_angle = 45 #@param { type:"slider", min:-90, max:90, step: 1 }
sys = brax.System(actuated_pendulum)
qp = sys.default_qp()
act = np.array([target_angle])
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
for i in range(100):
draw_system(ax, qp.pos, i / 100.)
qp, _ = sys.step(qp, act)
plt.title('actuating a pendulum joint')
plt.show()
"""
Explanation: Let's observe $\text{step}(\text{config}, \text{qp}_t, \text{act})$ by raising the middle ball to a desired target angle:
End of explanation
"""
|
darkomen/TFG | ipython_notebooks/07_conclusiones/.ipynb_checkpoints/Conclusiones-checkpoint.ipynb | cc0-1.0 | %pylab inline
#Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns
#Show the versions used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Open the files with the data
conclusiones = pd.read_csv('Conclusiones.csv')
columns=['bq','formfutura','filastruder']
#Show a summary of the obtained data
conclusiones[columns].describe()
"""
Explanation: Analysis of the obtained data
Comparison of three different filaments
BQ filament
formfutura filament
filastruder filament
End of explanation
"""
graf=conclusiones[columns].plot(figsize=(16,10),ylim=(1.5,2.5))
graf.axhspan(1.65,1.85, alpha=0.2)
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
conclusiones[columns].boxplot(return_type='axes')
"""
Explanation: We plot both diameters and the puller speed on the same graph
End of explanation
"""
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: By increasing the speed we managed to reduce the maximum value; however, the minimum value has also decreased. For the next iteration, we will go back to speeds of 1.5-3.4 and add more rules with smaller speed increments, to avoid saturating the traction speed at both the high and the low end.
Comparison of Diametro X versus Diametro Y to see the filament ratio
Data filtering
We assume the samples with $d_x >= 0.9$ or $d_y >= 0.9$ to be sensor errors, so we filter them out of the collected samples.
End of explanation
"""
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
"""
Explanation: X/Y representation
End of explanation
"""
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: We analyze the ratio data
End of explanation
"""
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Quality limits
We compute the number of times we cross the quality limits.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation
"""
|
NREL/bifacial_radiance | docs/tutorials/15 - New Functionalities Examples.ipynb | bsd-3-clause | import bifacial_radiance
import os
from pathlib import Path
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_15')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
print ("Your simulation will be stored in %s" % testfolder)
"""
Explanation: 15 - NEW FUNCTIONALITIES EXAMPLES
This journal includes short examples on how to use the new functionalities of version 0.4.0 of bifacial_radiance. The parts are:
<ol type="I">
<li> <a href='#functionality1'> Simulating Modules with Frames and Omegas </a> </li>
<li> <a href='#functionality2'> Improvements to irradiance sampling</a></li>
<ul>
<li> -Scanning full module (sensors on x)! </li>
<li> -Different points in the front and the back</li>
</ul>
<li> <a href='#functionality3'> Full row scanning.</a> </li>
</ol>
End of explanation
"""
module_type = 'test-module'
frameParams = {'frame_material' : 'Metal_Grey',
'frame_thickness' : 0.05,
'nSides_frame' : 4,
'frame_width' : 0.08}
omegaParams = {'omega_material': 'litesoil',
'x_omega1' : 0.4,
'mod_overlap' : 0.25,
'y_omega' : 1.5,
'x_omega3' : 0.25,
'omega_thickness' : 0.05,
'inverted' : False}
tubeParams = { 'visible': True,
'axisofrotation' : True,
'diameter' : 0.3
}
demo = bifacial_radiance.RadianceObj('tutorial_15', testfolder)
mymodule = demo.makeModule(module_type,x=2, y=1, xgap = 0.02, ygap = 0.15, zgap = 0.3,
numpanels = 2, tubeParams=tubeParams,
frameParams=frameParams, omegaParams=omegaParams)
"""
Explanation: <a id='functionality1'></a>
I. Simulating Frames and Omegas
The values for generating frames and omegas are described in the makeModule function, which is where they are introduced into the basic module unit. This diagram shows how they are measured.
End of explanation
"""
mymodule.addTorquetube(visible = True, axisofrotation = True, diameter = 0.3)
mymodule.addOmega(omega_material = 'litesoil', x_omega1 = 0.4, mod_overlap = 0.25,
y_omega = 1.5, x_omega3 = 0.25, omega_thickness = 0.05, inverted = False)
mymodule.addFrame(frame_material = 'Metal_Grey', frame_thickness = 0.05, nSides_frame = 4, frame_width = 0.08)
"""
Explanation: Alternatively, the parameters can be passed with an Object Oriented Approach as follows:
End of explanation
"""
demo.setGround(0.2)
epwfile = demo.getEPW(lat = 37.5, lon = -77.6)
metdata = demo.readWeatherFile(epwfile, coerce_year = 2021)
demo.gendaylit(4020)
sceneDict = {'tilt':0, 'pitch':3, 'clearance_height':3,'azimuth':90, 'nMods': 1, 'nRows': 1}
scene = demo.makeScene(mymodule,sceneDict)
demo.makeOct()
"""
Explanation: Let's add the rest of the scene and go until OCT, so it can be viewed with rvu:
End of explanation
"""
## Uncomment any of the ! lines below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window(s).
#!rvu -vp -7 0 3 -vd 1 0 0 Sim1.oct
#!rvu -vp 0 -5 3 -vd 0 1 0 Sim1.oct
"""
Explanation: To view the module from different angles, you can use the following rvu commands in your terminal:
rvu -vp -7 0 3 -vd 1 0 0 Sim1.oct
rvu -vp 0 -5 3 -vd 0 1 0 Sim1.oct
End of explanation
"""
mymodule = demo.makeModule(name='test-module',x=2, y=1)
sceneDict = {'tilt':0, 'pitch':6, 'clearance_height':3,'azimuth':180, 'nMods': 1, 'nRows': 1}
scene = demo.makeScene(mymodule,sceneDict)
octfile = demo.makeOct()
analysis = bifacial_radiance.AnalysisObj() # return an analysis object including the scan dimensions for back irradiance
"""
Explanation: <a id='functionality2'></a>
II. Improvements to irradiance sampling
The key ideas here are:
moduleAnalysis() returns two structured dictionaries that have the coordinates necessary for analysis to know where to sample. In the new version, different values can be given for sampling across the collector slope (y), for both the front and the back, by using a single value or an array in sensorsy.
Furthermore, scanning along the module's <b> x-direction </b> is now supported, by setting the variable sensorsx to a single value or an array.
When the sensors differ between the front and the back, instead of saving one .csv with results, two .csv files are saved, one labeled "_Front.csv" and the other "_Back.csv".
To know more, read the functions documentation.
We'll take advantage of Simulation 1 testfolder, Radiance Objects and sky, but let's make a simple module and scene and model it through from there.
End of explanation
"""
name='2222'
sensorsy_front = 2
sensorsy_back = 2
sensorsx_front = 2
sensorsx_back = 2
sensorsy = [sensorsy_front, sensorsy_back]
sensorsx = [sensorsx_front, sensorsx_back]
frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy = sensorsy, sensorsx=sensorsx)
frontDict, backDict = analysis.analysis(octfile = octfile, name = name, frontscan = frontscan,
backscan = backscan)
print('\n--> RESULTS for Front and Back are saved on the same file since the sensors match for front and back')
print('\n', bifacial_radiance.load.read1Result('results\irr_'+name+'.csv'))
"""
Explanation: Same sensors front and back, two sensors across x
End of explanation
"""
name='2412'
sensorsy_front = 2
sensorsy_back = 4
sensorsx_front = 1
sensorsx_back = 2
sensorsy = [sensorsy_front, sensorsy_back]
sensorsx = [sensorsx_front, sensorsx_back]
frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy=sensorsy, sensorsx=sensorsx)
frontDict, backDict = analysis.analysis(octfile = octfile, name = name, frontscan = frontscan,
backscan = backscan)
print('\n--> RESULTS for Front and Back are saved on SEPARATE file since the sensors do not match for front and back')
print('\nFRONT\n', bifacial_radiance.load.read1Result('results\irr_'+name+'_Front.csv'))
print('\nBACK\n', bifacial_radiance.load.read1Result('results\irr_'+name+'_Back.csv'))
"""
Explanation: Different sensors front and back, two sensors across x
End of explanation
"""
sceneDict = {'tilt':0, 'pitch':30, 'clearance_height':3,'azimuth':90, 'nMods': 3, 'nRows': 3}
scene = demo.makeScene(mymodule,sceneDict)
octfile = demo.makeOct()
"""
Explanation: <a id='functionality3'></a>
III. Making Analysis Function for Row
Let's explore how to easily analyze a row with the new function analyzeRow. As before, we are not repeating functions already called above, just re-running the necessary ones to show the changes.
End of explanation
"""
sensorsy_back=1
sensorsx_back=1
sensorsy_front=1
sensorsx_front=1
sensorsy = [sensorsy_front, sensorsy_back]
sensorsx = [sensorsx_front, sensorsx_back]
rowscan = analysis.analyzeRow(name = name, scene = scene, sensorsy=sensorsy, sensorsx = sensorsx,
rowWanted = 1, octfile = octfile)
"""
Explanation: The function requires knowing the number of modules in the row
End of explanation
"""
|
sbussmann/sleep-bit | notebooks/sbussmann_data-nba.ipynb | mit | import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import nba_py
sns.set_context('poster')
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
data_path = os.path.join(os.getcwd(), os.pardir, 'data', 'interim', 'sleep_data.csv')
df_sleep = pd.read_csv(data_path, index_col='shifted_datetime', parse_dates=True)
df_sleep.index += pd.Timedelta(hours=12)
sleep_day = df_sleep.resample('1D').sum().fillna(0)
from nba_py import league
gswlog = league.GameLog(player_or_team='T')
league_logs = gswlog.json['resultSets'][0]['rowSet']
columns = gswlog.json['resultSets'][0]['headers']
df_league = pd.DataFrame(league_logs, columns=columns)
df_league.columns
gsw_games = df_league[df_league['TEAM_ABBREVIATION'] == 'GSW']
len(gsw_games)
gsw_games.head()
gsw_dates = gsw_games['GAME_DATE']
toplot = sleep_day['minutesAsleep']/60.
data = []
data.append(
go.Scatter(
x=toplot.index,
y=toplot.values,
name='Hours Asleep'
)
)
shapes = []
for idate, gsw_date in enumerate(gsw_dates):
if idate == 0:
showlegend = True
else:
showlegend = False
trace0 = go.Scatter(
x=[gsw_date],
y=[toplot.dropna().min()],
mode='markers',
name='Golden State Warriors Game',
marker=dict(
color='salmon'
),
showlegend=showlegend
)
data.append(trace0)
layout = go.Layout(
title="Daily Sleep Total, 6pm to 6pm",
yaxis=dict(
title='Hours Asleep'
),
)
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='DailySleepTotal_GSWGames')
"""
Explanation: Summary
Do I sleep less on nights when the Warriors play?
End of explanation
"""
gsw_dates = pd.to_datetime(gsw_dates)
gswdatedf = pd.DataFrame(index=gsw_dates)
gswdatedf['game_status'] = 1
gswdatedf = gswdatedf.resample('1D').sum().fillna(0)
gswdatedf_next = gswdatedf.copy()
gswdatedf_next.index += pd.Timedelta(hours=24)
sleepgsw = sleep_day.join(gswdatedf_next, how='inner')
sleepgswyes = sleepgsw.groupby('game_status').mean()
sleepgswyes['minutesAsleep'] / 60
"""
Explanation: The NBA season starts at the end of October. I got my fitbit near the beginning of November, so there is a lot of overlap.
A simple test: sleep on the night of a Warriors game vs. all other nights.
Here, we have to be careful about the definition of a night of sleep. I follow Fitbit's convention and assert that hours of sleep on a given date correspond to falling asleep the night before and waking up the day of. So if I fall asleep on Tuesday August 8th at 11pm and wake up on Wednesday August 9th at 7am, then I got 8 hours of sleep on August 9th.
To answer my question about sleep on the night of a Warriors game, I need to compare sleep the day AFTER the Warriors game to all other nights. E.g., the Warriors had a game on April 8, 2017. The relevant night of sleep for that game is April 9, 2017. A quick and dirty way to tackle this is by adding 24 hours to the Warriors game dates.
End of explanation
"""
|
astroumd/GradMap | notebooks/Lectures2016/Lecture_2/UMD_Intro_Lecture2.ipynb | gpl-3.0 | {1,2,3,"bingo"}
type({1,2,3,"bingo"})
type({})
type(set())
set("spamIam")
"""
Explanation: <CENTER>
<H1>
University of Maryland GRADMAP <BR>
Winter Workshop Python Boot Camp <BR>
</H1>
</CENTER>
More Data Structures, Control Statements, <BR> Functions, and Modules
Sets
End of explanation
"""
a = set("sp"); b = set("am"); print a ; print b
c = set(["a","m"])
c == b
"p" in a
"ps" in a
q = set("spamIam")
a.issubset(q)
a | b
q - (a | b)
q & (a | b)
"""
Explanation: Sets have unique elements. They can be compared, differenced, unionized, etc.
End of explanation
"""
# this is pretty volatile...won't be the same
# order on all machines
for i in q & (a | b):
print i,
q.remove("a")
q.pop()
print q.pop()
print q.pop()
print q.pop()
# q.pop()
"""
Explanation: Like lists, we can use sets as (unordered) buckets
.pop() gives us a random element
End of explanation
"""
d = {"favorite cat": None, "favorite spam": "all"}
"""
Explanation:
Dictionaries
denoted with curly braces and colons
End of explanation
"""
print d["favorite cat"]
d[0] ## this is not a list and you don't have a key equal to 0
e = {"favorite cat": None, "favorite spam": "all", \
1: 'loneliest number'}
e[1] == 'loneliest number'
"""
Explanation: these are key: value, key: value, ...
End of explanation
"""
# number 1...you've seen this
d = {"favorite cat": None, "favorite spam": "all"}
# number 2
d = dict(one = 1, two=2,cat = 'dog') ; print d
# number 3 ... just start filling in items/keys
d = {} # empty dictionary
d['cat'] = 'dog'
d['one'] = 1
d['two'] = 2
d
# number 4... start with a list of tuples
mylist = [("cat","dog"), ("one",1),("two",2)]
print dict(mylist)
dict(mylist) == d
"""
Explanation: dictionaries are UNORDERED<sup>*</sup>.
You cannot assume that one key comes before or after another
<sup>*</sup> you can use a special type of ordered dict if you really need it:
http://docs.python.org/whatsnew/2.7.html#pep-372-adding-an-ordered-dictionary-to-collections
4 ways to make a Dictionary
End of explanation
"""
d = {"favorite cat": None, "favorite spam": "all"}
d = {'favorites': {'cat': None, 'spam': 'all'}, \
'least favorite': {'cat': 'all', 'spam': None}}
print d['least favorite']['cat']
"""
Explanation:
Dictionaries: they can be complicated (in a good way)
End of explanation
"""
phone_numbers = {'family': [('mom','642-2322'),('dad','534-2311')],\
'friends': [('Sylvia','652-2212')]}
for group_type in ['friends','family']:
print "Group " + group_type + ":"
for info in phone_numbers[group_type]:
print " ",info[0], info[1]
# this will return a list, but you dont know in what order!
phone_numbers.keys()
phone_numbers.values()
"""
Explanation: remember: the backslash (\) allows you to break a statement across lines. Not technically needed when defining a dictionary or list
End of explanation
"""
for group_type in phone_numbers.keys():
print "Group " + group_type + ":"
for info in phone_numbers[group_type]:
print " ",info[0], info[1]
"""
Explanation:
.keys() and .values() are called methods on dictionaries
End of explanation
"""
groups = phone_numbers.keys()
groups.sort()
for group_type in groups:
print "Group " + group_type + ":"
for info in phone_numbers[group_type]:
print " ",info[0], info[1]
"""
Explanation: we cannot ensure the ordering of the groups here
End of explanation
"""
for group_type, vals in phone_numbers.iteritems():
print "Group " + group_type + ":"
for info in vals:
print " ",info[0], info[1]
"""
Explanation: .iteritems() is a handy method,
returning key,value pairs with each iteration
End of explanation
"""
phone_numbers['co-workers']
phone_numbers.has_key('co-workers')
print phone_numbers.get('co-workers')
phone_numbers.get('friends') == phone_numbers['friends']
print phone_numbers.get('co-workers',"all alone")
"""
Explanation: Some examples of getting values:
End of explanation
"""
# add to the friends list
phone_numbers['friends'].append(("Jeremy","232-1121"))
print phone_numbers
## Sylvia's number changed
phone_numbers['friends'][0][1] = "532-1521" # careful: tuples are immutable, so this line raises a TypeError
phone_numbers['friends'][0] = ("Sylvia","232-1521");
print phone_numbers['friends']
## I lost all my friends preparing for this Python class
phone_numbers['friends'] = [] # sets this to an empty list
## remove the friends key altogether
print phone_numbers.pop('friends')
print phone_numbers
del phone_numbers['family']
print phone_numbers
"""
Explanation:
setting values
you can edit the values of keys and also .pop() & del to remove certain keys
End of explanation
"""
phone_numbers.update({"friends": [("Sylvia's friend, Dave", "532-1521")]})
print phone_numbers
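# Supplementary example (added for illustration): .update() merges another
# dictionary into this one, overwriting values for keys that already exist.
prefs = {"favorite cat": None, "favorite spam": "all"}
prefs.update({"favorite cat": "tabby", "favorite color": "green"})
print prefs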
"""
Explanation:
.update() method is very handy, like .append() for lists
End of explanation
"""
x = 1
print x
"""
Explanation:
Loops and branches in python
Python has the usual control flow statements:
- if, else, elif
- for loops
- while loops
- break, continue, pass
Indentation in Python defines where blocks begin and end.
End of explanation
"""
# You can mix indentations between different blocks ... but this is ugly and people will judge you
x = 1
if x > 0:
print "yo"
else:
print "dude"
# You can put everything on one line
print "yo" if x > 0 else "dude"
# Multiple cases
x = 1
if x < -10:
print "yo"
elif x > 10: # 'elif' is short for 'else if'
print "dude"
else:
print "sup"
for x in range(5):
print x**2
for x in ("all","we","wanna","do","is","eat","your","brains"):
print x
x = 0
while x < 5:
print pow(2,x)
x += 1 # don't forget to increment x!
# Multiple levels
for x in range(1,10):
if x % 2 == 0:
print str(x) + " is even."
else:
print str(x) + " is odd."
# Blocks cannot be empty
x = "fried goldfish"
if x == "spam for dinner":
print "I will destroy the universe"
else:
# Nothing here.
# Use a 'pass' statement, which indicates 'do nothing'
x = "fried goldfish"
if x == "spam for dinner":
print "I will destroy the universe"
else:
pass
# Use a 'break' statement to escape a loop
x = 0
while True:
print x**2
if x**2 >= 100:
break
x +=1
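# Use a 'continue' statement to skip the rest of the current iteration
# (supplementary example: 'continue' is mentioned above but not shown elsewhere)
for x in range(10):
    if x % 2 == 0:
        continue   # skip even numbers and move on to the next x
    print x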
"""
Explanation: IPython Notebook automatically converts tabs into spaces, but some programs do not. Be careful not to mix these up! Be consistent in your programming.
If you're working within the Python interpreter (not the IPython Notebook), you'll see this:
>>> x = 1
>>> if x > 0:
... print "yo"
... else:
... print "dude"
... print "ok"
...
yo
ok
End of explanation
"""
def addnums(x,y):
return x + y
addnums(2,3)
print addnums(0x1f,3.3)
print addnums("a","b")
print addnums("cat",23232)
"""
Explanation: What is a Function?
<UL>
<LI> A block of organized, reusable code that is used to perform a single, related action.
<LI> Provides better modularity for your application and a high degree of code reusing.
<LI> You can name a function anything you want as long as it:
<OL>
<LI> Contains only numbers, letters, underscore
<LI> Does not start with a number
<LI> Is not the same name as a built-in function (like print).
</OL>
</UL>
Basic Syntax of a Function
An Example
End of explanation
"""
def numop(x,y):
x *= 3.14
return x + y
x = 2
print numop(x, 8)
print x
def numop(x,y):
x *= 3.14
global a
a += 1
return x + y, a
a = 2
numop(1,1)
numop(1,1)
"""
Explanation: Scope of a Function
End of explanation
"""
def changeme_1( mylist ):
    mylist = [1,2,3,4]; # This assigns a new reference to the local name mylist
print "Values inside the function changeme_1: ", mylist
return
def changeme_2( mylist ):
mylist.append([1,2,3,4]);
print "Values inside the function changeme_2: ", mylist
return
mylist1 = [10,20,30];
changeme_1( mylist1 );
print "Values outside the function: ", mylist1
print
mylist2 = [10,20,30];
changeme_2( mylist2 );
print "Values outside the function: ", mylist2
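# Supplementary example (added for illustration): to keep a function from
# mutating your list, pass it a copy instead.
mylist3 = [10,20,30];
changeme_2( mylist3[:] );   # mylist3[:] makes a shallow copy
print "Values outside the function: ", mylist3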
"""
Explanation: Pass by reference vs value
End of explanation
"""
def numop1(x,y,multiplier=1.0,greetings="Thank you for your inquiry."):
""" numop1 -- this does a simple operation on two numbers.
We expect x,y are numbers and return x + y times the multiplier
multiplier is also a number (a float is preferred) and is optional.
It defaults to 1.0.
You can also specify a small greeting as a string. """
if greetings is not None:
print greetings
return (x + y)*multiplier
help(numop1)
numop1(1,1)
numop1(1,1,multiplier=-0.5,greetings=None)
"""
Explanation: Function Arguments
You can call a function by using the following types of formal arguments:
<UL>
<LI> Required arguments (arguments passed to a function in correct positional order. Here, the number of arguments in the function call should match exactly with the function definition)
<LI> Keyword arguments (identified by parameter names)
<LI> Default arguments (assume default values if values are not provided in the function call for those arguments)
<LI> Variable-length arguments (are not explicitly named in the function definition)
</UL>
Keyword Arguments
End of explanation
"""
def cheeseshop(kind, *arguments, **keywords):
print "-- Do you have any", kind, "?"
print "-- I'm sorry, we're all out of", kind
for arg in arguments:
print arg
print "-" * 40
keys = keywords.keys()
keys.sort()
for kw in keys:
print kw, ":", keywords[kw]
cheeseshop("Limburger",
"It's very runny, sir.",
"It's really very, VERY runny, sir.",
shopkeeper='Michael Palin',
client="John Cleese",
sketch="Cheese Shop Sketch")
"""
Explanation: Unspecified args and keywords
End of explanation
"""
import math
math.cos(0)
math.cos(math.pi)
math.sqrt(4)
from datetime import datetime
now = datetime.now()
print now.year, now.month, now.day
from math import acos as arccos
arccos(1)
"""
Explanation: What is a Module?
<UL>
<LI> A Python object with arbitrarily named attributes that you can bind and reference.
<LI> A file consisting of Python code.
<LI> Allows you to logically organize your Python code.
<LI> Makes the code easier to understand and use.
<LI> Can define functions, classes and variables.
<LI> Can also include runnable code.
</UL>
<B> Any file ending in .py is treated as a module. </B>
End of explanation
"""
|
myuuuuun/various | 応用統計/HW1/HW1.ipynb | mit | #-*- encoding: utf-8 -*-
'''
Ouyoutoukei HW1
'''
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import statsmodels.api as sm
np.set_printoptions(precision=3)
pd.set_option('display.precision', 4)
"""
Explanation: Applied Statistics HW1
Details: http://www.stat.t.u-tokyo.ac.jp/~takemura/ouyoutoukei/
End of explanation
"""
# Import the csv file
df = pd.read_csv( 'odakyu-mansion.csv' )
# Show summary statistics
print(df.describe())
"""
Explanation: Preparation
Import the data and display its summary statistics.
End of explanation
"""
# Sample size
data_len = df.shape[0]
# Turn the direction each unit faces into dummy variables
df['d_N'] = np.zeros(data_len, dtype=float)
df['d_E'] = np.zeros(data_len, dtype=float)
df['d_W'] = np.zeros(data_len, dtype=float)
df['d_S'] = np.zeros(data_len, dtype=float)
for i, row in df.iterrows():
for direction in ["N", "W", "S", "E"]:
if direction in str(row.muki):
df.loc[i, 'd_{0}'.format(direction)] = 1
# Show the first 10 rows
print(df.head(10))
"""
Explanation: The direction each unit faces is decomposed into east, west, south, and north dummy variables (0 or 1; a southeast-facing unit gets a 1 for both the south and east dummies).
End of explanation
"""
df = df.fillna(df.mean())
"""
Explanation: Missing values are replaced with the column means.
End of explanation
"""
# Also add a constant term
X = sm.add_constant(df[['time', 'bus', 'walk', 'area',
'bal', 'kosuu', 'floor', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])
# Ordinary least squares
model = sm.OLS(df.price, X)
results = model.fit()
# Show the results
print(results.summary())
"""
Explanation: Ordinary least squares
We drop, one at a time, the explanatory variables that contribute little to explaining the dependent variable; concretely, we remove variables whose p-value exceeds 0.05.
At the same time, we also take care of outliers.
OLS, step 1
Running OLS with all 13 explanatory variables gives:
End of explanation
"""
print(df.loc[161])
df = df.drop(161)
"""
Explanation: which is the result shown above.
Looking at the p-values, kosuu and floor appear to be almost unrelated to the price.
kosuu has one outlier (kosuu=2080), so we try removing it.
OLS, step 2
After dropping the outlier,
End of explanation
"""
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'bal',
'kosuu', 'floor', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
"""
Explanation: running OLS again gives:
End of explanation
"""
X = sm.add_constant(df[['time', 'bus', 'walk', 'area',
'bal', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
"""
Explanation: The p-values of kosuu and floor are still large, so removing them from the explanatory variables gives:
OLS, step 3
End of explanation
"""
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'tf', 'year', 'd_S']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
"""
Explanation: which gives the result above. Next, we also remove bal and the direction dummies other than the south-facing one from the explanatory variables.
OLS, step 4
End of explanation
"""
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'tf']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
"""
Explanation: We also remove the south-facing dummy d_S and the building age year, whose p-values are large, from the explanatory variables.
OLS, step 5
End of explanation
"""
X = sm.add_constant(df[['time', 'bus', 'walk', 'area']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
"""
Explanation: We remove tf, whose p-value is large.
OLS, step 6
End of explanation
"""
# 回帰に使った変数だけを抜き出す
new_df = df.loc[:, ['price', 'time', 'bus', 'walk', 'area']]
# 説明変数行列
exp_matrix = new_df.loc[:, ['time', 'bus', 'walk', 'area']]
# 回帰係数ベクトル
coefs = results.params
# 理論価格ベクトル
predicted = exp_matrix.dot(coefs[1:]) + coefs[0]
# 残差ベクトル
residuals = new_df.price - predicted
# 残差をplot
fig, ax = plt.subplots(figsize=(12, 8))
plt.plot(predicted, residuals, 'o', color='b', linewidth=1, label="residuals distribution")
plt.xlabel("predicted values")
plt.ylabel("residuals")
plt.show()
# 残差平均
print("residuals mean:", residuals.mean())
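# Supplementary sketch (not part of the original homework): a quick way to
# eyeball the normality assumption discussed below is a histogram of the same residuals.
fig, ax = plt.subplots(figsize=(8, 5))
ax.hist(residuals, bins=30)
ax.set_xlabel("residual")
ax.set_ylabel("frequency")
plt.show()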
"""
Explanation: Looking at the adjusted R^2 = 0.783 and the F-statistic p-value of 3.96e-59, the four variables "riding time from Shinjuku station", "bus time", "walking time", and "floor area" seem sufficient to explain housing prices.
Compared with OLS steps 1-6, the AIC and BIC are almost unchanged or slightly improved.
What remains is to examine the residuals and check whether the usual assumptions on the error term are satisfied.
Analysis of the residuals
The assumptions on the error term were:
the error term has mean 0
the error term has constant variance
the error terms are mutually independent
the error term is (at least approximately) normally distributed
the error term is uncorrelated with each explanatory variable
※ Since I do not know how to check every item rigorously, I only verify the ones I can.
First, we plot the points with the predicted values (prices) on the horizontal axis and the residuals on the vertical axis.
End of explanation
"""
print(new_df.loc[12] )
new_df = new_df.drop(12)
X = sm.add_constant(new_df[['time', 'bus', 'walk', 'area']])
model = sm.OLS(new_df.price, X)
results = model.fit()
print(results.summary())
# 説明変数行列
exp_matrix = new_df.loc[:, ['time', 'bus', 'walk', 'area']]
# 回帰係数ベクトル
coefs = results.params
# 理論価格ベクトル
predicted = exp_matrix.dot(coefs[1:]) + coefs[0]
# 残差ベクトル
residuals = new_df.price - predicted
# 残差をplot
fig, ax = plt.subplots(figsize=(12, 8))
plt.plot(predicted, residuals, 'o', color='b', linewidth=1, label="residuals distribution")
plt.xlabel("predicted values")
plt.ylabel("residuals")
plt.show()
# 残差平均
print("residuals mean:", residuals.mean())
"""
Explanation: The mean of the residuals is almost 0, and the plot shows the points concentrated around 0, so assumption 1 is satisfied.
However, a few outliers are visible on the right-hand side. We drop the single point in the upper right and run the regression once more.
OLS, step 7
End of explanation
"""
# 残差をplot
fig = plt.figure(figsize=(18, 10))
ax1 = plt.subplot(2, 2, 1)
plt.plot(exp_matrix['time'], residuals, 'o', color='b', linewidth=1, label="residuals - time")
plt.xlabel("time")
plt.ylabel("residuals")
plt.legend()
ax2 = plt.subplot(2, 2, 2, sharey=ax1)
plt.plot(exp_matrix['bus'], residuals, 'o', color='b', linewidth=1, label="residuals - bus")
plt.xlabel("bus")
plt.ylabel("residuals")
plt.legend()
ax3 = plt.subplot(2, 2, 3, sharey=ax1)
plt.plot(exp_matrix['walk'], residuals, 'o', color='b', linewidth=1, label="residuals - walk")
plt.xlabel("walk")
plt.ylabel("residuals")
plt.legend()
ax4 = plt.subplot(2, 2, 4, sharey=ax1)
plt.plot(exp_matrix['area'], residuals, 'o', color='b', linewidth=1, label="residuals - area")
plt.xlabel("area")
plt.ylabel("residuals")
plt.legend()
plt.show()
"""
Explanation: Compared with the result of OLS step 6, the spread of the residuals is now more even.
Next, we examine that spread by plotting the residuals on the vertical axis against the observed values of each explanatory variable on the horizontal axis.
End of explanation
"""
|
palandatarxcom/sklearn_tutorial_cn | notebooks/03.2-Regression-Forests.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# Use seaborn's default plot settings
import seaborn as sns; sns.set()
"""
Explanation: This notebook was compiled by Jake Vanderplas; the source code and license file are on GitHub. The Chinese translation was produced by Palan Data on the Palan big-data analysis platform, and its source is also on GitHub.
Supervised Learning In-Depth: Random Forests
We have already seen a powerful discriminative classifier, the support vector machine. Here we will look at another powerful algorithm: a non-parametric method called random forests.
End of explanation
"""
import fig_code
fig_code.plot_example_decision_tree()
"""
Explanation: Random Forests: Decision Trees
A random forest is an ensemble learner built on top of decision trees, so we start our introduction by discussing decision trees.
A decision tree is a very intuitive way to classify or label objects: you simply ask a series of questions and answer them, and the model refines itself according to the answers.
End of explanation
"""
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
"""
Explanation: Every question requires only a "yes" or "no" answer, which makes classification extremely efficient. The trick, however, is to ask the right questions. When training a decision tree classifier, the algorithm looks at the features to find which question carries the most information.
Creating a decision tree
Here is an example of a decision tree in sklearn. We start by defining some two-dimensional labeled data:
End of explanation
"""
from fig_code import visualize_tree, plot_tree_interactive
"""
Explanation: We have some helper functions to go along with sklearn:
End of explanation
"""
plot_tree_interactive(X, y);
"""
Explanation: Now, using IPython's interact (available in IPython 2.0+ and requiring a live kernel), we can look at how the decision tree splits the data:
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
"""
Explanation: Notice that as the depth increases, every region gets split successfully, except for those that already contain only a single class.
This is a very fast, non-parametric classification procedure that is extremely useful in practice.
Question: do you see the hidden drawback here?
Decision trees and overfitting
One property of decision trees is that the trained models overfit very easily. That is, the models are so flexible that they can end up learning the noise in the data rather than the data itself! For example, look at two decision tree models built on two subsets of the same dataset:
End of explanation
"""
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from IPython.html.widgets import interact
interact(fit_randomized_tree, random_state=[0, 100]);
"""
Explanation: The details of the two classification models are completely different! That is an intuitive picture of overfitting: when you use such a model to predict a new point, it is influenced more by the noise in the data than by the data itself.
Ensemble estimators: random forests
One possible way around overfitting is to use an ensemble method: build a combined estimator that averages many independent estimators, each of which is prone to overfitting. Somewhat surprisingly, the combined estimator works very well: it is more stable and more accurate than any of the individual estimators it is built from!
A random forest is one such ensemble method; here, "ensemble" means that the combined estimator is made up of many decision trees.
There are many ways and principles for randomly combining these decision trees, but as an example, let's look at a group of estimators fit on different subsets of the data. The following gives a better feel for this:
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
"""
Explanation: We can see that while the details change as the fit changes, the large-scale features stay the same! The random forest classifier does something similar, except that it combines all the models to produce the final result:
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');
xfit = np.linspace(0, 10, 1000)
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
"""
Explanation: By averaging over 100 randomized models, we end up with a model that fits our data much better!
(Note: above we randomized our models by sub-sampling the data. Random forests use a more sophisticated scheme; see the scikit-learn documentation for details.)
Example: a regression problem
Our discussion of random forests so far has been in the context of classification.
Random forests can also be used for regression (that is, predicting continuous rather than discrete values). The corresponding estimator is sklearn.ensemble.RandomForestRegressor.
Let's take a quick look at how it is used:
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
"""
Explanation: As you can see, this non-parametric random forest model is flexible enough to fit the multi-period data without us even having to specify a multi-period model!
Example: classifying digits with random forests
We have already come across the handwritten digits dataset. Let's now evaluate how support vector machine and random forest classifiers perform on it.
End of explanation
"""
# Set up the figure
fig = plt.figure(figsize=(6, 6))  # figure size is given in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# Plot the digits: each image is an 8x8 grid of pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
    # Label each image with its corresponding target value (its label)
ax.text(0, 7, str(digits.target[i]))
"""
Explanation: To refresh our memory of the handwritten digits dataset, let's first plot a few of the digits:
End of explanation
"""
from sklearn.cross_validation import train_test_split
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
"""
Explanation: We can quickly classify the digits with a decision tree:
End of explanation
"""
metrics.accuracy_score(ypred, ytest)
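# Supplementary sketch (not part of the original excerpt): the text above sets up a
# comparison with a random forest classifier, which could look roughly like this,
# reusing Xtrain, ytrain, Xtest, and ytest from the split above.
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=100, random_state=0)
rf_clf.fit(Xtrain, ytrain)
rf_pred = rf_clf.predict(Xtest)
print(metrics.accuracy_score(rf_pred, ytest))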
"""
Explanation: We can check the accuracy of this classifier:
End of explanation
"""
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
"""
Explanation: To get a better view of the classifier's performance, we plot the confusion matrix:
End of explanation
"""
|
anshbansal/anshbansal.github.io | udacity_data_science_notes/intro_machine_learning/lesson_03/lesson_03.ipynb | mit | from sklearn import tree
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
clf.predict([[2., 2.]])
from prep_terrain_data import makeTerrainData
features_train, labels_train, features_test, labels_test = makeTerrainData()
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features_train, labels_train)
show_picture()
show_accuracy()
clf = tree.DecisionTreeClassifier(min_samples_split = 50)
clf = clf.fit(features_train, labels_train)
show_picture()
show_accuracy()
"""
Explanation: Lesson 3 - Decision Trees
A decision tree lets you ask multiple linear questions, one after another.
End of explanation
"""
import math
-0.5 * math.log(0.5, 2) -0.5 * math.log(0.5, 2)
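# Supplementary sketch (added for illustration): a small helper that computes the
# entropy of any list of class labels, generalizing the hand calculation above.
# The example labels are made up.
def entropy(labels):
    total = float(len(labels))
    result = 0.0
    for label in set(labels):
        p = labels.count(label) / total
        result -= p * math.log(p, 2)
    return result

entropy(["slow", "slow", "fast", "fast"])   # 1.0, like the 50/50 split above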
"""
Explanation: Entropy
Entropy controls how a decision tree decides where to split the data.
A decision tree tries to split the data into regions that are as pure as possible; by doing this recursively, the tree is able to make its decisions.
End of explanation
"""
p_s = 2/3.0
p_f = 1 - p_s
entropy_steep = - p_s * math.log(p_s, 2) - p_f * math.log(p_f, 2)
entropy_flat = 0.0
entropy_children = (3/4.0) * entropy_steep + (1/4.0) * entropy_flat
entropy_gain = 1 - entropy_children
entropy_gain
"""
Explanation: Information Gain
information gain = entropy of parent - [weighted average] of entropy of children
The decision tree algorithm tries to maximize the information gain.
The parent node has an entropy of 1.0. We will try to make a decision boundary by splitting according to grade.
End of explanation
"""
|
kingb12/languagemodelRNN | report_notebooks/encdec_noing10_200_512_04drb.ipynb | mit | report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
"""
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
"""
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
"""
Explanation: Perplexity on Each Dataset
End of explanation
"""
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Loss vs. Epoch
End of explanation
"""
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
"""
Explanation: Perplexity vs. Epoch
End of explanation
"""
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
"""
Explanation: Generations
End of explanation
"""
def print_bleu(blue_struct):
print 'Overall Score: ', blue_struct['score'], '\n'
print '1-gram Score: ', blue_struct['components']['1']
print '2-gram Score: ', blue_struct['components']['2']
print '3-gram Score: ', blue_struct['components']['3']
print '4-gram Score: ', blue_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
"""
Explanation: BLEU Analysis
End of explanation
"""
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
"""
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores on the ground truth, while high scores can expose hyper-common generations.
End of explanation
"""
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
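# Supplementary sketch (not from the original experiment code): a minimal word-level
# Smith-Waterman scorer, to make the alignment analysis described below concrete.
# The match, mismatch, and gap scores here are illustrative assumptions, not the
# values used to produce the report numbers.
def smith_waterman_score(a_tokens, b_tokens, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a_tokens) + 1, len(b_tokens) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a_tokens[i-1] == b_tokens[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("the cat sat".split(), "a cat sat down".split()))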
"""
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/exams/interro_rapide_20_minutes_2015_09.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.e - Correction of the written quiz of 26 September 2015
tests, loops, functions
End of explanation
"""
tab = [1, 3]
for i in range(0, len(tab)):
print(tab[i] + tab[i+1])
"""
Explanation: Exercise 1
Q1
Why does the following program raise an error?
End of explanation
"""
tab = [1, 3]
for i in range(0, len(tab)):
print(i, i+1, len(tab))
print(tab[i] + tab[i+1])
"""
Explanation: We uncover the problem by adding some intermediate print statements:
End of explanation
"""
n = 1
if n = 1:
y = 0
else:
y = 1
"""
Explanation: At the last iteration, $i+1$ becomes equal to the length of the list tab, but the last valid index of a list is len(tab)-1.
Q2
Where is the syntax error?
End of explanation
"""
def somme_caracteres(mot):
s = 0
for c in mot :
s += ord(c) - ord("a") + 1
return s
somme_caracteres("elu")
"""
Explanation: The equality test is written ==.
Q3
We assign the value 1 to the letter a, 2 to b, and so on. Write a function that computes the sum of these values for a character string.
Example: elu $\rightarrow$ 5 + 12 + 21 = 38
End of explanation
"""
def somme_caracteres(mot):
return sum(ord(c) - ord("a") + 1 for c in mot)
somme_caracteres("elu")
"""
Explanation: It can be written more concisely:
End of explanation
"""
y = "a" * 3 + 1
z = 3 * "a" + 1
print(y,z)
"""
Explanation: Exercise 2
Q1
Cross out the lines that would raise an error at runtime and say why.
End of explanation
"""
l = []
for i in range(0, 10):
l.append([i])
print(l)
"""
Explanation: The first two lines are incorrect because they try to add a string to a number. The first operation, "a" * 3, is valid; in either order it gives "aaa", but 1 cannot be added to "aaa".
Q2
What is the value of l at the end of the program?
End of explanation
"""
l = []
for i in range(0, 10):
l.extend([i])
print(l)
"""
Explanation: Do not confuse the append and extend methods.
End of explanation
"""
def un_sur_deux(mot):
s = ""
for i,c in enumerate(mot):
if i % 2 == 0:
s += c
return s
un_sur_deux("python")
"""
Explanation: Q3
Write a function that takes a character string and removes every other letter from it.
End of explanation
"""
def un_sur_deux(mot):
return "".join( c for i,c in enumerate(mot) if i % 2 == 0 )
un_sur_deux("python")
"""
Explanation: Or, shorter still:
End of explanation
"""
|
jamesfolberth/NGC_STEM_camp_AWS | notebooks/data8_notebooks/lab04/lab04.ipynb | bsd-3-clause | # Run this cell to set up the notebook, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('lab04.ok')
"""
Explanation: Functions and Visualizations
In the past week, you've learned a lot about using tables to work with datasets. With your tools so far, you can:
Load a dataset from the web;
Work with (extract, add, drop, relabel) columns from the dataset;
Filter and sort it according to certain criteria;
Perform arithmetic on columns of numbers;
Group rows by columns of categories, counting the number of rows in each category;
Make a bar chart of the categories.
These tools are fairly powerful, but they're not quite enough for all the analysis and data we'll eventually be doing in this course. Today we'll learn a tool that dramatically expands this toolbox: the table method apply. We'll also see how to make histograms, which are like bar charts for numerical data.
End of explanation
"""
raw_compensation = Table.read_table('raw_compensation.csv')
raw_compensation
"""
Explanation: 1. Functions and CEO Incomes
In Which We Write Down a Recipe for Cake
Let's start with a real data analysis task. We'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data were compiled for a Los Angeles Times analysis here, and ultimately came from filings mandated by the SEC from all publicly-traded companies. Two companies have two CEOs, so there are 102 CEOs in the dataset.
We've copied the data in raw form from the LA Times page into a file called raw_compensation.csv. (The page notes that all dollar amounts are in millions of dollars.)
End of explanation
"""
...
"""
Explanation: Question 1. When we first loaded this dataset, we tried to compute the average of the CEOs' pay like this:
np.average(raw_compensation.column("Total Pay"))
Explain why that didn't work. Hint: Try looking at some of the values in the "Total Pay" column.
Write your answer here, replacing this text.
End of explanation
"""
mark_hurd_pay_string = ...
mark_hurd_pay_string
_ = tests.grade('q1_2')
"""
Explanation: Question 2. Extract the first value in the "Total Pay" column. It's Mark Hurd's pay in 2015, in millions of dollars. Call it mark_hurd_pay_string.
End of explanation
"""
mark_hurd_pay = ...
mark_hurd_pay
_ = tests.grade('q1_3')
"""
Explanation: Question 3. Convert mark_hurd_pay_string to a number of dollars. The string method strip will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of "100%".strip("%") is the string "100". You'll also need the function float, which converts a string that looks like a number to an actual number. Last, remember that the answer should be in dollars, not millions of dollars.
End of explanation
"""
def convert_pay_string_to_number(pay_string):
"""Converts a pay string like '$100 ' (in millions) to a number of dollars."""
return float(pay_string.strip("$"))
_ = tests.grade('q1_4')
"""
Explanation: To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times.
This is where functions come in. First, we'll define our own function that packages together the code we wrote to convert a pay string to a pay number. This has its own benefits. Later in this lab we'll see a bigger payoff: we can call that function on every pay string in the dataset at once.
Question 4. Below we've written code that defines a function that converts pay strings to pay numbers, just like your code above. But it has a small error, which you can correct without knowing what all the other stuff in the cell means. Correct the problem.
End of explanation
"""
convert_pay_string_to_number(mark_hurd_pay_string)
# We can also compute Safra Catz's pay in the same way:
convert_pay_string_to_number(raw_compensation.where("Name", are.equal_to("Safra A. Catz*")).column("Total Pay").item(0))
"""
Explanation: Running that cell doesn't convert any particular pay string.
Rather, think of it as defining a recipe for converting a pay string to a number. Writing down a recipe for cake doesn't give you a cake. You have to gather the ingredients and get a chef to execute the instructions in the recipe to get a cake. Similarly, no pay string is converted to a number until we call our function on a particular pay string (which tells Python, our lightning-fast chef, to execute it).
We can call our function just like we call the built-in functions we've seen. (Almost all of those functions are defined in this way, in fact!) It takes one argument, a string, and it returns a number.
End of explanation
"""
...
...
...
...
twenty_percent = ...
twenty_percent
_ = tests.grade('q2_1')
"""
Explanation: What have we gained? Well, without the function, we'd have to copy that 10**6 * float(pay_string.strip("$")) stuff each time we wanted to convert a pay string. Now we just call a function whose name says exactly what it's doing.
We'd still have to call the function 102 times to convert all the salaries, which we'll fix next.
But for now, let's write some more functions.
2. Defining functions
In Which We Write a Lot of Recipes
Let's write a very simple function that converts a proportion to a percentage by multiplying it by 100. For example, the value of to_percentage(.5) should be the number 50. (No percent sign.)
A function definition has a few parts.
def
It always starts with def (short for define):
def
Name
Next comes the name of the function. Let's call our function to_percentage.
def to_percentage
Signature
Next comes something called the signature of the function. This tells Python how many arguments your function should have, and what names you'll use to refer to those arguments in the function's code. to_percentage should take one argument, and we'll call that argument proportion since it should be a proportion.
def to_percentage(proportion)
We put a colon after the signature to tell Python it's over.
def to_percentage(proportion):
Documentation
Functions can do complicated things, so you should write an explanation of what your function does. For small functions, this is less important, but it's a good habit to learn from the start. Conventionally, Python functions are documented by writing a triple-quoted string:
def to_percentage(proportion):
"""Converts a proportion to a percentage."""
Body
Now we start writing code that runs when the function is called. This is called the body of the function. We can write anything we could write anywhere else. First let's give a name to the number we multiply a proportion by to get a percentage.
def to_percentage(proportion):
"""Converts a proportion to a percentage."""
factor = 100
return
The special instruction return in a function's body tells Python to make the value of the function call equal to whatever comes right after return. We want the value of to_percentage(.5) to be the proportion .5 times the factor 100, so we write:
def to_percentage(proportion):
"""Converts a proportion to a percentage."""
factor = 100
return proportion * factor
Question 1. Define to_percentage in the cell below. Call your function to convert the proportion .2 to a percentage. Name that percentage twenty_percent.
End of explanation
"""
a_proportion = 2**(.5) / 2
a_percentage = ...
a_percentage
_ = tests.grade('q2_2')
"""
Explanation: Like the built-in functions, you can use named values as arguments to your function.
Question 2. Use to_percentage again to convert the proportion named a_proportion (defined below) to a percentage called a_percentage.
Note: You don't need to define to_percentage again! Just like other named things, functions stick around after you define them.
End of explanation
"""
# You should see an error when you run this. (If you don't, you might
# have defined factor somewhere above.)
factor
"""
Explanation: Here's something important about functions: Each time a function is called, it creates its own "space" for names that's separate from the main space where you normally define names. (Exception: all the names from the main space get copied into it.) So even though you defined factor = 100 inside to_percentage above and then called to_percentage, you can't refer to factor anywhere except inside the body of to_percentage:
End of explanation
"""
def disemvowel(a_string):
...
...
# An example call to your function. (It's often helpful to run
# an example call from time to time while you're writing a function,
# to see how it currently works.)
disemvowel("Can you read this without vowels?")
_ = tests.grade('q2_3')
"""
Explanation: As we've seen with the built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too.
Question 3. Define a function called disemvowel. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a copy of that string, but with all the characters that are vowels removed. (In English, the vowels are the characters "a", "e", "i", "o", and "u".)
Hint: To remove all the "a"s from a string, you can use that_string.replace("a", ""). And you can call replace multiple times.
End of explanation
"""
def num_non_vowels(a_string):
"""The number of characters in a string, minus the vowels."""
...
_ = tests.grade('q2_4')
"""
Explanation: Calls on calls on calls
Just as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written.
This is like a recipe for cake telling you to follow another recipe to make the frosting, and another to make the sprinkles. This makes the cake recipe shorter and clearer, and it avoids having a bunch of duplicated frosting recipes. It's a foundation of productive programming.
For example, suppose you want to count the number of characters that aren't vowels in a piece of text. One way to do that is to remove all the vowels and count the size of the remaining string.
Question 4. Write a function called num_non_vowels. It should take a string as its argument and return a number. The number should be the number of characters in the argument string that aren't vowels.
Hint: Recall that the function len takes a string as its argument and returns the number of characters in it.
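Here is a supplementary example (not one of the lab questions, and the names are made up) of one function calling another:
def exclaim(message):
    return message.upper() + "!"
def exclaim_twice(message):
    return exclaim(message) + " " + exclaim(message)
exclaim_twice("functions can call functions")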
End of explanation
"""
movies_by_year = Table.read_table("movies_by_year.csv")
rank = 5
fifth_from_top_movie_year = movies_by_year.sort("Total Gross", descending=True).column("Year").item(rank-1)
print("Year number", rank, "for total gross movie sales was:", fifth_from_top_movie_year)
"""
Explanation: Functions can also encapsulate code that does things rather than just computing values. For example, if you call print inside a function, and then call that function, something will get printed.
The movies_by_year dataset in the textbook has information about movie sales in recent years. Suppose you'd like to display the year with the 5th-highest total gross movie sales, printed in a human-readable way. You might do this:
End of explanation
"""
def print_kth_top_movie_year(k):
# Our solution used 2 lines.
...
...
# Example calls to your function:
print_kth_top_movie_year(2)
print_kth_top_movie_year(3)
_ = tests.grade('q2_5')
"""
Explanation: After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function.
Question 5. Write a function called print_kth_top_movie_year. It should take a single argument, the rank of the year (like 2, 3, or 5 in the above examples). It should print out a message like the one above. It shouldn't have a return statement.
End of explanation
"""
our_name_for_max = max
our_name_for_max(2, 6)
"""
Explanation: 3. applying functions
In Which Python Bakes 102 Cakes
You'll get more practice writing functions, but let's move on.
Defining a function is a lot like giving a name to a value with =. In fact, a function is a value just like the number 1 or the text "the"!
For example, we can make a new name for the built-in function max if we want:
End of explanation
"""
max(2, 6)
"""
Explanation: The old name for max is still around:
End of explanation
"""
max
"""
Explanation: Try just writing max or our_name_for_max (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.
End of explanation
"""
make_array(max, np.average, are.equal_to)
"""
Explanation: Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. Here's a simple but not-so-practical example: we can make an array of functions.
End of explanation
"""
some_functions = ...
some_functions
_ = tests.grade('q3_1')
"""
Explanation: Question 1. Make an array containing any 3 other functions you've seen. Call it some_functions.
End of explanation
"""
make_array(max, np.average, are.equal_to).item(0)(4, -2, 7)
"""
Explanation: Working with functions as values can lead to some funny-looking code. For example, see if you can figure out why this works:
End of explanation
"""
raw_compensation.apply(convert_pay_string_to_number, "Total Pay")
"""
Explanation: Here's a simpler example that's actually useful: the table method apply.
apply calls a function many times, once on each element in a column of a table. It produces an array of the results. Here we use apply to convert every CEO's pay to a number, using the function you defined:
End of explanation
"""
compensation = raw_compensation.with_column(
"Total Pay ($)",
...
compensation
_ = tests.grade('q3_2')
"""
Explanation: Here's an illustration of what that did:
<img src="apply.png"/>
Note that we didn't write something like convert_pay_string_to_number() or convert_pay_string_to_number("Total Pay"). The job of apply is to call the function we give it, so instead of calling convert_pay_string_to_number ourselves, we just write its name as an argument to apply.
Question 2. Using apply, make a table that's a copy of raw_compensation with one more column called "Total Pay (\$)". It should be the result of applying convert_pay_string_to_number to the "Total Pay" column, as we did above. Call the new table compensation.
End of explanation
"""
average_total_pay = ...
average_total_pay
_ = tests.grade('q3_3')
"""
Explanation: Now that we have the pay in numbers, we can compute things about them.
Question 3. Compute the average total pay of the CEOs in the dataset.
End of explanation
"""
cash_proportion = ...
cash_proportion
_ = tests.grade('q3_4')
"""
Explanation: Question 4. Companies pay executives in a variety of ways: directly in cash; by granting stock or other "equity" in the company; or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)
End of explanation
"""
# For reference, our solution involved more than just this one line of code
...
with_previous_compensation = ...
with_previous_compensation
_ = tests.grade('q3_5')
"""
Explanation: Check out the "% Change" column in compensation. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says "(No previous year)". The values in this column are strings, not numbers, so like the "Total Pay" column, it's not usable without a bit of extra work.
Given your current pay and the percentage increase from the previous year, you can compute your previous year's pay. For example, if your pay is \$100 this year, and that's an increase of 50% from the previous year, then your previous year's pay was $\frac{\$100}{1 + \frac{50}{100}}$, or around \$66.66.
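As a quick supplementary check (not part of the question), you can evaluate that formula directly:
100 / (1 + 50 / 100.0)   # about 66.67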
Question 5. Create a new table called with_previous_compensation. It should be a copy of compensation, but with the "(No previous year)" CEOs filtered out, and with an extra column called "2014 Total Pay ($)". That column should have each CEO's pay in 2014.
Hint: This question takes several steps, but each one is still something you've seen before. Take it one step at a time, using as many lines as you need. You can print out your results after each step to make sure you're on the right track.
Hint 2: You'll need to define a function. You can do that just above your other code.
End of explanation
"""
average_pay_2014 = ...
average_pay_2014
_ = tests.grade('q3_6')
"""
Explanation: Question 6. What was the average pay of these CEOs in 2014? Does it make sense to compare this number to the number you computed in question 3?
End of explanation
"""
...
"""
Explanation: Question 7. A skeptical student asks:
"I already knew lots of ways to operate on each element of an array at once. For example, I can multiply each element of some_array by 100 by writing 100*some_array. What good is apply?"
How would you answer? Discuss with a neighbor.
4. Histograms
Earlier, we computed the average pay among the CEOs in our 102-CEO dataset. The average doesn't tell us everything about the amounts CEOs are paid, though. Maybe just a few CEOs make the bulk of the money, even among these 102.
We can use a histogram to display more information about a set of numbers. The table method hist takes a single argument, the name of a column of numbers. It produces a histogram of the numbers in that column.
Question 1. Make a histogram of the pay of the CEOs in compensation.
End of explanation
"""
num_ceos_more_than_30_million = ...
"""
Explanation: Question 2. Looking at the histogram, how many CEOs made more than \$30 million? (Answer the question by filling in your answer manually. You'll have to do a bit of arithmetic; feel free to use Python as a calculator.)
End of explanation
"""
num_ceos_more_than_30_million_2 = ...
num_ceos_more_than_30_million_2
_ = tests.grade('q4_3')
"""
Explanation: Question 3. Answer the same question with code. Hint: Use the table method where and the property num_rows.
End of explanation
"""
two_groups = make_array('treatment', 'control')
np.random.choice(two_groups)
"""
Explanation: Question 4. Do most CEOs make around the same amount, or are there some who make a lot more than the rest? Discuss with someone near you.
5. Randomness
Data scientists also have to be able to understand randomness. For example, they have to be able to assign individuals to treatment and control groups at random, and then try to say whether any observed differences in the outcomes of the two groups are simply due to the random assignment or genuinely due to the treatment.
To start off, we will use Python to make choices at random. In numpy there is a sub-module called random that contains many functions that involve random selection. One of these functions is called choice. It picks one item at random from an array, and it is equally likely to pick any of the items. The function call is np.random.choice(array_name), where array_name is the name of the array from which to make the choice.
Thus the following code evaluates to treatment with chance 50%, and control with chance 50%. Run the next code block several times and see what happens.
End of explanation
"""
np.random.choice(two_groups, 10)
"""
Explanation: The big difference between the code above and all the other code we have run thus far is that the code above doesn't always return the same value. It can return either treatment or control, and we don't know ahead of time which one it will pick. We can repeat the process by providing a second argument, the number of times to repeat the process. In the choice function we just used, we can add an optional second argument that tells the function how many times to make a random selection. Try it below:
End of explanation
"""
# replace ... with code that will run the 'choice' function 1000 times;
# the resulting array of choices will then have the name 'exp_results'
exp_results = ...
from collections import Counter
Counter(exp_results)
# the output from Counter tells you how many times 'treatment' and 'control' appear in the array
# produced by 'choice'; run this cell to see the output
# use the info provided by 'Counter' to print the percentage of times 'treatment' and 'control'
# were selected
print(...) # print percentage for 'treatment' here
print(...) # print percentage for 'control' here
"""
Explanation: If we wanted to determine whether the random choice made by the function random is really fair, we could make a random selection a bunch of times and then count how often each selection shows up. In the next few code blocks, write some code that calls the choice function on the two_groups array one thousand times. Then, print out the percentage of occurrences for each of treatment and control. A useful function called Counter will be helpful; look at the code comments to see how it works!
End of explanation
"""
3 > 1 + 1
"""
Explanation: A fundamental question about random events is whether or not they occur. For example:
Did an individual get assigned to the treatment group, or not?
Is a gambler going to win money, or not?
Has a poll made an accurate prediction, or not?
Once the event has occurred, you can answer "yes" or "no" to all these questions. In programming, it is conventional to do this by labeling statements as True or False. For example, if an individual did get assigned to the treatment group, then the statement, "The individual was assigned to the treatment group" would be True. If not, it would be False.
6. Booleans and Comparison
In Python, Boolean values, named for the logician George Boole, represent truth and take only two possible values: True and False. Whether problems involve randomness or not, Boolean values most often arise from comparison operators. Python includes a variety of operators that compare values. For example, 3 is larger than 1 + 1. Run the following cell.
End of explanation
"""
5 = 10/2
5 == 10/2
"""
Explanation: The value True indicates that the comparison is valid; Python has confirmed this simple fact about the relationship between 3 and 1+1. The full set of common comparison operators are listed below.
<img src="comparison_operators.png">
Notice the two equal signs == in the comparison to determine equality. This is necessary because Python already uses = to mean assignment to a name, as we have seen. It can't use the same symbol for a different purpose. Thus if you want to check whether 5 is equal to the 10/2, then you have to be careful: 5 = 10/2 returns an error message because Python assumes you are trying to assign the value of the expression 10/2 to a name that is the numeral 5. Instead, you must use 5 == 10/2, which evaluates to True. Run these blocks of code to see for yourself.
End of explanation
"""
1 < 1 + 1 < 3
"""
Explanation: An expression can contain multiple comparisons, and they all must hold in order for the whole expression to be True. For example, we can express that 1+1 is between 1 and 3 using the following expression.
End of explanation
"""
x = 12
y = 5
min(x, y) <= (x+y)/2 <= max(x, y)
"""
Explanation: The average of two numbers is always between the smaller number and the larger number. We express this relationship for the numbers x and y below. Try different values of x and y to confirm this relationship.
End of explanation
"""
'Dog' > 'Catastrophe' > 'Cat'
"""
Explanation: 7. Comparing Strings
Strings can also be compared, and their order is alphabetical. A shorter string is less than a longer string that begins with the shorter string.
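For example (a supplementary illustration):
'Cat' < 'Catastrophe'   # True, since 'Cat' is a prefix of 'Catastrophe'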
End of explanation
"""
np.random.choice(two_groups) == 'treatment'
"""
Explanation: Let's return to random selection. Recall the array two_groups which consists of just two elements, treatment and control. To see whether a randomly assigned individual went to the treatment group, you can use a comparison:
End of explanation
"""
def sign(x):
if x > 0:
return 'Positive'
sign(3)
"""
Explanation: As before, the random choice will not always be the same, so the result of the comparison won't always be the same either. It will depend on whether treatment or control was chosen. With any cell that involves random selection, it is a good idea to run the cell several times to get a sense of the variability in the result.
8. Conditional Statements
In many situations, actions and results depend on a specific set of conditions being satisfied. For example, individuals in randomized controlled trials receive the treatment if they have been assigned to the treatment group. A gambler makes money if she wins her bet.
In this section we will learn how to describe such situations using code. A conditional statement is a multi-line statement that allows Python to choose among different alternatives based on the truth value of an expression. While conditional statements can appear anywhere, they appear most often within the body of a function in order to express alternative behavior depending on argument values.
A conditional statement always begins with an if header, which is a single line followed by an indented body. The body is only executed if the expression directly following if (called the if expression) evaluates to a True value. If the if expression evaluates to a False value, then the body of the if is skipped.
Let us start by defining a function that returns the sign of a number.
End of explanation
"""
sign(-3)
"""
Explanation: This function returns the correct sign if the input is a positive number. But if the input is not a positive number, then the if expression evaluates to a False value, and so the return statement is skipped and the function call has no value. See what happens when you run the next block.
End of explanation
"""
def sign(x):
if x > 0:
return 'Positive'
elif x < 0:
return 'Negative'
"""
Explanation: So let us refine our function to return Negative if the input is a negative number. We can do this by adding an elif clause, where elif is Python's shorthand for the phrase "else, if".
End of explanation
"""
sign(-3)
"""
Explanation: Now sign returns the correct answer when the input is -3:
End of explanation
"""
def sign(x):
if x > 0:
return 'Positive'
elif x < 0:
return 'Negative'
elif x == 0:
return 'Neither positive nor negative'
sign(0)
"""
Explanation: What if the input is 0? To deal with this case, we can add another elif clause:
End of explanation
"""
def sign(x):
if x > 0:
return 'Positive'
elif x < 0:
return 'Negative'
else:
return 'Neither positive nor negative'
sign(0)
"""
Explanation: Run the previous code block for different inputs to our sign() function to make sure it does what we want it to.
Equivalently, we can replace the final elif clause with an else clause, whose body will be executed only if all the previous comparisons are False; that is, if the input value is equal to 0.
End of explanation
"""
def draw_card():
"""
Print out a random suit and numeric value representing a card from a standard 52-card deck.
"""
# pick a random number to determine the suit
suit_num = np.random.uniform(0,1) # this function returns a random decimal number
# between 0 and 1
### TODO: write an 'if' statement that prints out 'heart' if 0 < suit_num < 0.25,
### 'spade' if 0.25 < suit_num < 0.5,
### 'club' if 0.5 < suit_num < 0.75,
### 'diamond' if 0.75 < suit_num < 1
    # pick a random number to determine the value
val_num = np.random.uniform(0,13)
### TODO: write an if statement so that if 2 < val_num <= 12,
### you print out the floor of val_num
### (you can use the floor() function)
### TODO: write an 'if' statement that prints out the value of the card for the
    ###       non-numeric possibilities: 'A' for ace, 'J' for jack, 'Q' for queen, 'K'
### for king;
return
# test your function by running this block; do it multiple times and see what happens!
draw_card()
"""
Explanation: 9. The General Form
A conditional statement can also have multiple clauses with multiple bodies, and only one of those bodies can ever be executed. The general format of a multi-clause conditional statement appears below.
if <if expression>:
<if body>
elif <elif expression 0>:
<elif body 0>
elif <elif expression 1>:
<elif body 1>
...
else:
<else body>
There is always exactly one if clause, but there can be any number of elif clauses. Python will evaluate the if and elif expressions in the headers in order until one is found that is a True value, then execute the corresponding body. The else clause is optional. When an else header is provided, its else body is executed only if none of the header expressions of the previous clauses are true. The else clause must always come at the end (or not at all).
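As a supplementary illustration of this general form (the function name and cutoffs below are made up, and this is not one of the lab questions):
def size_description(n):
    if n < 10:
        return 'small'
    elif n < 100:
        return 'medium'
    elif n < 1000:
        return 'large'
    else:
        return 'huge'
size_description(250)   # 'large'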
10. Example: Pick a Card
We will now use conditional statements to define a function that we could use as part of a card game analysis application. Every time we run the function, we want it to print out a random card from a standard 52-card deck. Specifically, we should randomly choose a suit and a numeric value (1-13 for Ace-King) and print these values to the screen. Finish writing the function in code block below:
End of explanation
"""
np.random.choice(make_array('Heads', 'Tails'))
"""
Explanation: 11. Iteration
It is often the case in programming – especially when dealing with randomness – that we want to repeat a process multiple times. For example, to check whether np.random.choice does in fact pick at random, we might want to run the following cell many times to see if Heads occurs about 50% of the time.
End of explanation
"""
for i in np.arange(3):
print(i)
"""
Explanation: We might want to re-run code with slightly different input or other slightly different behavior. We could copy-paste the code multiple times, but that's tedious and prone to typos, and if we wanted to do it a thousand times or a million times, forget it.
A more automated solution is to use a for statement to loop over the contents of a sequence. This is called iteration. A for statement begins with the word for, followed by a name we want to give each item in the sequence, followed by the word in, and ending with an expression that evaluates to a sequence. The indented body of the for statement is executed once for each item in that sequence.
End of explanation
"""
i = np.arange(3).item(0)
print(i)
i = np.arange(3).item(1)
print(i)
i = np.arange(3).item(2)
print(i)
"""
Explanation: It is instructive to imagine code that exactly replicates a for statement without the for statement. (This is called unrolling the loop.) A for statement simply replicates the code inside it, but before each iteration, it assigns a new value from the given sequence to the name we chose. For example, here is an unrolled version of the loop above:
End of explanation
"""
coin = make_array('Heads', 'Tails')
for i in np.arange(5):
print(np.random.choice(make_array('Heads', 'Tails')))
"""
Explanation: Notice that the name i is arbitrary, just like any name we assign with =.
Here we use a for statement in a more realistic way: we print 5 random choices from an array.
End of explanation
"""
pets = make_array('Cat', 'Dog')
np.append(pets, 'Another Pet')
"""
Explanation: In this case, we simply perform exactly the same (random) action several times, so the code inside our for statement does not actually refer to i.
12. Augmenting Arrays
While the for statement above does simulate the results of five tosses of a coin, the results are simply printed and aren't in a form that we can use for computation. Thus a typical use of a for statement is to create an array of results, by augmenting it each time.
The append method in numpy helps us do this. The call np.append(array_name, value) evaluates to a new array that is array_name augmented by value. When you use append, keep in mind that all the entries of an array must have the same type.
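As a quick illustrative aside (not part of the original text): because every entry of an array must have the same type, appending a string to a numeric array converts all of the entries to strings.
np.append(make_array(1, 2), 'three')   # all three entries come back as strings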
End of explanation
"""
pets
"""
Explanation: This keeps the array pets unchanged:
End of explanation
"""
pets = np.append(pets, 'Another Pet')
pets
"""
Explanation: But often while using for loops it will be convenient to mutate an array – that is, change it – when augmenting it. This is done by assigning the augmented array to the same name as the original.
End of explanation
"""
coin = make_array('Heads', 'Tails')
tosses = make_array()
for i in np.arange(5):
tosses = np.append(tosses, np.random.choice(coin))
tosses
"""
Explanation: Example: Counting the Number of Heads
We can now simulate five tosses of a coin and place the results into an array. We will start by creating an empty array and then appending the result of each toss.
End of explanation
"""
coin = make_array('Heads', 'Tails')
tosses = make_array()
i = np.arange(5).item(0)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(1)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(2)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(3)
tosses = np.append(tosses, np.random.choice(coin))
i = np.arange(5).item(4)
tosses = np.append(tosses, np.random.choice(coin))
tosses
"""
Explanation: Let us rewrite the cell with the for statement unrolled:
End of explanation
"""
np.count_nonzero(tosses == 'Heads')
"""
Explanation: By capturing the results in an array we have given ourselves the ability to use array methods to do computations. For example, we can use np.count_nonzero to count the number of heads in the five tosses.
End of explanation
"""
tosses = make_array()
for i in np.arange(1000):
tosses = np.append(tosses, np.random.choice(coin))
np.count_nonzero(tosses == 'Heads')
"""
Explanation: Iteration is a powerful technique. For example, by running exactly the same code for 1000 tosses instead of 5, we can count the number of heads in 1000 tosses.
End of explanation
"""
np.random.choice(coin, 10)
"""
Explanation: Example: Number of Heads in 100 Tosses
It is natural to expect that in 100 tosses of a coin, there will be 50 heads, give or take a few.
But how many is "a few"? What's the chance of getting exactly 50 heads? Questions like these matter in data science not only because they are about interesting aspects of randomness, but also because they can be used in analyzing experiments where assignments to treatment and control groups are decided by the toss of a coin.
In this example we will simulate 10,000 repetitions of the following experiment:
Toss a coin 100 times and record the number of heads.
The histogram of our results will give us some insight into how many heads are likely.
As a preliminary, note that np.random.choice takes an optional second argument that specifies the number of choices to make. By default, the choices are made with replacement. Here is a simulation of 10 tosses of a coin:
End of explanation
"""
N = 10000
heads = make_array()
for i in np.arange(N):
tosses = np.random.choice(coin, 100)
heads = np.append(heads, np.count_nonzero(tosses == 'Heads'))
heads
"""
Explanation: Now let's study 100 tosses. We will start by creating an empty array called heads. Then, in each of the 10,000 repetitions, we will toss a coin 100 times, count the number of heads, and append it to heads.
End of explanation
"""
results = Table().with_columns(
'Repetition', np.arange(1, N+1),
'Number of Heads', heads
)
results
"""
Explanation: Let us collect the results in a table and draw a histogram.
End of explanation
"""
results.select('Number of Heads').hist(bins=np.arange(30.5, 69.6, 1))
"""
Explanation: Here is a histogram of the data, with bins of width 1 centered at each value of the number of heads.
End of explanation
"""
|
claudiuskerth/PhDthesis | Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb | mit | from ipyparallel import Client
cl = Client()
cl.ids
%%px --local
# run whole cell on all engines as well as in the local IPython session
import numpy as np
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
from glob import glob
import dill
import pandas as pd
# turn on floating point division by default, old behaviour via '//'
from __future__ import division
from itertools import repeat
def flatten(array):
import numpy as np
res = []
for el in array:
if isinstance(el, (list, tuple, np.ndarray)):
res.extend(flatten(el))
continue
res.append(el)
return list(res)
%matplotlib inline
import pylab
pylab.rcParams['figure.figsize'] = [10, 8]
pylab.rcParams['font.size'] = 12
%%px --local
# load spectrum
sfs2d = dadi.Spectrum.from_file("EryPar.unfolded.sfs.dadi")
sfs2d = sfs2d.transpose()
sfs2d.pop_ids = ['ery', 'par']
sfs2d = sfs2d.fold()
ns = sfs2d.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi.Plotting.plot_single_2d_sfs(sfs2d, vmin=1, cmap='jet')
pylab.savefig("2DSFS_folded.png")
# get number of segregating sites from SFS
sfs2d.S()
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Preparation" data-toc-modified-id="Preparation-1"><span class="toc-item-num">1 </span>Preparation</a></div><div class="lev1 toc-item"><a href="#Model-definition" data-toc-modified-id="Model-definition-2"><span class="toc-item-num">2 </span>Model definition</a></div><div class="lev1 toc-item"><a href="#LRT" data-toc-modified-id="LRT-3"><span class="toc-item-num">3 </span>LRT</a></div><div class="lev2 toc-item"><a href="#get-optimal-parameter-values" data-toc-modified-id="get-optimal-parameter-values-31"><span class="toc-item-num">3.1 </span>get optimal parameter values</a></div><div class="lev2 toc-item"><a href="#get-bootstrap-replicates" data-toc-modified-id="get-bootstrap-replicates-32"><span class="toc-item-num">3.2 </span>get bootstrap replicates</a></div><div class="lev2 toc-item"><a href="#calculate-adjustment-for-D" data-toc-modified-id="calculate-adjustment-for-D-33"><span class="toc-item-num">3.3 </span>calculate adjustment for D</a></div>
# Preparation
End of explanation
"""
def split_asym_mig_2epoch(params, ns, pts):
"""
params = (nu1_1,nu2_1,T1,nu1_2,nu2_2,T2,m1,m2)
ns = (n1,n2)
Split into two populations of specified size, with potentially asymmetric migration.
The split coincides with a stepwise size change in the daughter populations. Then,
have a second stepwise size change at some point in time after the split. This is
enforced to happen at the same time for both populations. Migration is assumed to
be the same during both epochs.
nu1_1: pop size ratio of pop 1 after split (with respect to Na)
nu2_1: pop size ratio of pop 2 after split (with respect to Na)
T1: Time from split to second size change (in units of 2*Na generations)
nu1_2: pop size ratio of pop 1 after second size change (with respect to Na)
nu2_2: pop size ratio of pop 2 after second size change (with respect to Na)
T2: time in past of second size change (in units of 2*Na generations)
m1: Migration rate from ery into par (in units of 2*Na ind per generation)
m2: Migration rate from par into ery (in units of 2*Na ind per generation)
n1,n2: Sample sizes of resulting Spectrum
pts: Number of grid points to use in integration.
"""
nu1_1,nu2_1,T1,nu1_2,nu2_2,T2,m1,m2 = params
xx = dadi.Numerics.default_grid(pts)
phi = dadi.PhiManip.phi_1D(xx)
# split
phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)
# divergence with potentially asymmetric migration for time T1
phi = dadi.Integration.two_pops(phi, xx, T1, nu1_1, nu2_1, m12=m2, m21=m1)
# divergence with potentially asymmetric migration and different pop size for time T2
phi = dadi.Integration.two_pops(phi, xx, T2, nu1_2, nu2_2, m12=m2, m21=m1)
fs = dadi.Spectrum.from_phi(phi, ns, (xx,xx))
return fs
cl[:].push(dict(split_asym_mig_2epoch=split_asym_mig_2epoch))
%%px --local
func_ex = dadi.Numerics.make_extrap_log_func(split_asym_mig_2epoch)
"""
Explanation: Model definition
End of explanation
"""
ar_split_asym_mig_2epoch = []
for filename in glob("OUT_2D_models/split_asym_mig_2epoch_[0-9]*dill"):
ar_split_asym_mig_2epoch.append(dill.load(open(filename)))
l = 2*8+1
returned = [flatten(out)[:l] for out in ar_split_asym_mig_2epoch]
df = pd.DataFrame(data=returned, \
columns=['ery_1_0','par_1_0','T1_0','ery_2_0','par_2_0','T2_0','m1_0','m2_0', 'Nery_1_opt','Npar_1_opt','T1_opt','Nery_2_opt','Npar_2_opt','T2_opt','m1_opt','m2_opt','-logL'])
df.sort_values(by='-logL', ascending=True).iloc[:10,8:17]
# optimal parameter values for complex model
popt_c = np.array(df.sort_values(by='-logL', ascending=True).iloc[0, 8:16]) # take the best (lowest -logL) parameter combination
popt_c
"""
Explanation: LRT
get optimal parameter values
End of explanation
"""
# optimal parameter values for simple model (1 epoch)
# note: Nery_2=Nery_1, Npar_2=Npar_1 and T2=0
popt_s = [1.24966921, 3.19164623, 1.42043464, 1.24966921, 3.19164623, 0.0, 0.08489757, 0.39827944]
"""
Explanation: This two-epoch model can be reduced to a one-epoch model by either setting $Nery_2 = Nery_1$ and $Npar_2 = Npar_1$ or by setting $T_2 = 0$.
End of explanation
"""
# load bootstrapped 2D SFS
all_boot = [dadi.Spectrum.from_file("../SFS/bootstrap/2DSFS/{0:03d}.unfolded.2dsfs.dadi".format(i)).fold() for i in range(200)]
"""
Explanation: get bootstrap replicates
End of explanation
"""
# calculate adjustment for D evaluating at the *simple* model parameterisation
# specifying only T2 as fixed
adj_s = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_s, sfs2d, nested_indices=[5], multinom=True)
adj_s
# calculate adjustment for D evaluating at the *complex* model parameterisation
# specifying only T2 as fixed
adj_c = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_c, sfs2d, nested_indices=[5], multinom=True)
adj_c
"""
Explanation: calculate adjustment for D
End of explanation
"""
# calculate adjustment for D evaluating at the *simple* model parameterisation
# treating Nery_2, Npar_2 and T2 as nested
adj_s = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_s, sfs2d, nested_indices=[3,4,5], multinom=True)
adj_s
# calculate adjustment for D evaluating at the *complex* model parameterisation
# treating Nery_2, Npar_2 and T2 as nested
adj_c = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_c, sfs2d, nested_indices=[3,4,5], multinom=True)
adj_c
"""
Explanation: From Coffman2016, suppl. mat.:
The two-epoch model can be marginalized down to the SNM model for an LRT by either setting η = 1 or T = 0. We found that the LRT adjustment performed well when treating both parameters as nested, so μ(θ) was evaluated with T = 0 and η = 1.
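As an illustrative sketch only (not from the original analysis), the adjusted D statistic and a p-value would typically be computed along these lines; the chi^2 mixture weights are placeholders, since the correct mixture depends on how many of the nested parameters sit on a boundary of parameter space:
model_c = func_ex(popt_c, ns, pts_l)
model_s = func_ex(popt_s, ns, pts_l)
ll_c = dadi.Inference.ll_multinom(model_c, sfs2d)
ll_s = dadi.Inference.ll_multinom(model_s, sfs2d)
D_adj = adj_c * 2 * (ll_c - ll_s)
p_val = dadi.Godambe.sum_chi2_ppf(D_adj, weights=(0, 0, 0, 1))  # placeholder weights
print('adjusted D = {0:.4f}, p = {1:.4g}'.format(D_adj, p_val))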
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/building_production_ml_systems/labs/4b_streaming_data_inference.ipynb | apache-2.0 | !pip install --user apache-beam[gcp]
"""
Explanation: Working with Streaming Data
Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data
Introduction
It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
Typically you will have the following:
- A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)
- A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)
- A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)
- A persistent store to keep the processed data (in our case this is BigQuery)
These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below.
Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below.
<img src='../assets/taxi_streaming_data.png' width='80%'>
In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with trips_last_5min added as an extra feature. This is our proxy for real-time traffic.
End of explanation
"""
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
"""
Explanation: Restart the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
End of explanation
"""
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
"""
Explanation: Re-train our model with trips_last_5min feature
In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.
Simulate Real Time Taxi Data
Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
Inspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
To execute the iot_devices.py script, launch a terminal and navigate to the training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs directory. Then run the following two commands.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
You will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.
Create a BigQuery table to collect the processed data
In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.
End of explanation
"""
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
"""
Explanation: Next, we create a table called traffic_realtime and set up the schema.
End of explanation
"""
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
"""
Explanation: Launch Streaming Dataflow Pipeline
Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it.
There are 5 transformations being applied:
- Read from PubSub
- Window the messages
- Count number of messages in the window
- Format the count for BigQuery
- Write results to BigQuery
TODO: Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the beam programming guide for guidance. To check your answer reference the solution.
For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds.
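One possible shape for that windowing transform (a sketch only; the step label is arbitrary and the exact style may differ from the provided solution):
python
from apache_beam.transforms import window
... | 'window' >> beam.WindowInto(window.SlidingWindows(size=5 * 60, period=15))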
In a new terminal, launch the dataflow pipeline using the command below. You can change the BUCKET variable, if necessary. Here it is assumed to be your PROJECT_ID.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID # CHANGE AS NECESSARY
python3 ./taxicab_traffic/streaming_count.py \
--input_topic taxi_rides \
--runner=DataflowRunner \
--project=$PROJECT_ID \
--temp_location=gs://$BUCKET/dataflow_streaming
Once you've submitted the command above you can examine the progress of that job in the Dataflow section of Cloud console.
Explore the data in the table
After a few moments, you should also see new data written to your BigQuery table as well.
Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
End of explanation
"""
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
"""
Explanation: Make predictions from the new data
In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the train.ipynb notebook.
The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
Exercise. Complete the code in the function below. Write a SQL query that will return the most recent entry in traffic_realtime and add it to the instance.
End of explanation
"""
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
"""
Explanation: The traffic_realtime table is updated in real time using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and changing over time.
End of explanation
"""
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
"""
Explanation: Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well.
Exercise. Complete the code below to call prediction on an instance incorporating realtime traffic info. You should
- use the function add_traffic_last_5min to add the most recent realtime traffic data to the prediction instance
- call prediction on your model for this realtime instance and save the result as a variable called response
- parse the json of response to print the predicted taxifare cost
End of explanation
"""
|
terencezl/scientific-python-walkabout | Astro Workshop Day.ipynb | mit | # First, make sure this works:
import astropy
# If this doesn't work, raise your hand!
"""
Explanation: Python + Astronomy
This course will be an introduction to Astropy, a maturing library for astronomy routines and tools in Python.
Astropy started as a combination of various common Python libraries (Pyfits, PyWCS, asciitables, and others) and is working towards providing a consistent API with capabilities for all astronomers. It is developed with extensive automated testing, long-term stable releases, extensive documentation, and a friendly community for contributions.
Note that this design differs from the IDL Astronomy User's Library, which is essentially a mishmash of routines.
These tutorials make some use of the examples at:
- The official Astropy Tutorials
- A Workshop Given at SciPy 2014
End of explanation
"""
# First we load the fits submodule from astropy:
from astropy.io import fits
# Then we load a fits file (here an image from the Schmidt telescope)
hdu_list = fits.open('http://data.astropy.org/tutorials/FITS-images/HorseHead.fits')
print(hdu_list)
"""
Explanation: Using FITS files in Python
FITS files are the commonly used data format in astronomy: they are essentially collections of "header data units," which can be images, tables, or some other type of data.
End of explanation
"""
print(hdu_list[0].data)
print(type(hdu_list[0].data))
print(hdu_list[0].header['FILTER'])
print(hdu_list[0].shape)
# We can also display the full header to get a better idea of what we are looking at
hdu_list[0].header
"""
Explanation: The FITS file contains two header data units, a Primary HDU and an ASCII table HDU (see NASA's Primer) for the different types and limitations.
We can use the hdu_list object like a list to obtain information about each HDU:
End of explanation
"""
# If using ipython notebook:
%matplotlib inline
# Load matplotlib
import matplotlib.pyplot as plt
# Load colormaps (the default is somewhat ugly)
from matplotlib import cm
# If *not* using ipython notebook:
# plt.ion()
plt.imshow(hdu_list[0].data, cmap=cm.gist_heat)
plt.colorbar()
"""
Explanation: Since this is an image, we could take a look at it with the matplotlib package:
End of explanation
"""
hdu_list[0].data /= 2
hdu_list[0].header['FAKE'] = 'New Header'
hdu_list[0].header['FILTER'] = 'Changed'
print(hdu_list[0].data)
hdu_list[0].header
"""
Explanation: We can manipulate the HDU's in any way that we want with the astropy.io.fits submodule:
End of explanation
"""
%pwd
hdu_list.writeto('new-horsehead.fits', clobber=True)
"""
Explanation: Let's write our new FITS file to our local computer. Running pwd will tell us what directory it is saved to.
End of explanation
"""
# First we load the ascii submodule:
from astropy.io import ascii
example_csv = ascii.read('http://samplecsvs.s3.amazonaws.com/Sacramentorealestatetransactions.csv')
print(example_csv)
# We can also read Astronomy-specific formats.
# For example, IPAC formatted files
example_ipac = ascii.read('http://exoplanetarchive.ipac.caltech.edu/docs/tblexamples/IPAC_ASCII_one_header.tbl')
print(example_ipac)
"""
Explanation: Astropy ASCII file reader
While a number of ASCII file readers exist (including numpy.genfromtxt, numpy.loadtxt, and pandas.read_*), Astropy includes readers for text file formats commonly used in astronomy.
These are read as an Astropy Table object, which is convertible to numpy arrays or pandas DataFrames. Tables can contain unit information, and there is ongoing work to incorporate uncertainties.
End of explanation
"""
from astropy import units as u
# SI, cgs, and other units are defined in Astropy:
u.m, u.angstrom, u.erg, u.Jy, u.solMass
# Units all have documentation and attributes
print(u.solMass.names)
print(u.solMass.physical_type)
"""
Explanation: Tables support many of the same indexing and slicing operations as numpy arrays, as well as some of the higher-level operations of pandas. See the Astropy tutorial for more examples.
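For instance, a few quick operations on the tables read above (an illustrative aside; to_pandas() assumes pandas is installed):
example_csv.colnames        # list of column names
example_csv[0:3]            # first three rows, still a Table
example_csv.to_pandas()     # convert to a pandas DataFrame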
Units and Quantities
A nice addition to Astropy is the ability to manipulate units used in astronomy. By convention, we import this functionality into the name u:
End of explanation
"""
u.m / u.second / u.second
u.pc / u.attosecond / u.fortnight
"""
Explanation: We can create composite units, such as units of acceleration:
End of explanation
"""
print(5*u.erg/u.second)
5*u.erg/u.second
import numpy as np
my_data = np.array([1,2,3,4,5,6]) * u.Hertz
print(my_data)
# Quantities (and their units) can be combined through algebraic manipulation:
new_data = (6.626e-34 * u.m**2 * u.kg / u.second) * my_data
print(new_data)
"""
Explanation: In addition to unit manipulation, Astropy has a concept of Quantities - numbers (or arrays) with units:
End of explanation
"""
print(new_data.cgs)
print(new_data.si)
print(new_data.decompose())
# We can use the to() method to convert to anything with the same physical_type
print(new_data.unit.physical_type)
print(new_data.to(u.joule))
print(new_data.to(u.eV))
# With the to() method, unit changes are relatively straightforward:
(420*u.parsec).to(u.AU)
"""
Explanation: Since the computer knows the physical types of each unit, it is able to make conversions between them. Let's use this to simplify my_data. The decompose method will try to use the most basic units, while the .si and .cgs will attempt simple representations with those two bases:
End of explanation
"""
from astropy.constants import M_earth, G, M_sun
(G * M_earth * M_sun / u.AU**2).to(u.N)
"""
Explanation: Astropy also includes constants in another submodule, astropy.constants. For example, the average magnitude of the gravitational force of the Earth on the Sun, in SI units, is:
End of explanation
"""
(450. * u.nm).to(u.GHz, equivalencies=u.spectral())
"""
Explanation: Astropy will even convert units that are not physically compatible, if you are explicit about how to do the conversion. For example, the relationship between wavelength and frequency of light is defined by the choice of the speed of light, allowing the conversion of one to the other:
End of explanation
"""
f_lambda = (1e-18 * u.erg / u.cm**2 / u.s / u.angstrom)
print(f_lambda.to(u.Jy, equivalencies=u.equivalencies.spectral_density(1*u.micron)))
print(f_lambda.to(u.Jy, equivalencies=u.equivalencies.spectral_density(299.79*u.THz)))
"""
Explanation: A very useful trick is that Astropy will even convert units that require extra information to do so. For example, flux density is usually defined as a density with respect to wavelength or frequency, with the two forms convertible via:
$$ \nu f_\nu = \lambda f_\lambda$$
To convert between the different definitions of flux density, we merely need to supply the wavelength or frequency used:
End of explanation
"""
# Let's import the main class used, SkyCoord, and create a couple SkyCoord objects:
from astropy.coordinates import SkyCoord
print(SkyCoord(-2*u.deg, 56*u.deg))
print(SkyCoord(1*u.hourangle, 5*u.degree))
print(SkyCoord('2h2m1s 9d9m9s'))
print(SkyCoord('-2.32d', '52.3d', frame='fk4'))
print(SkyCoord.from_name("M101"))
sc = SkyCoord('25d 35d')
# We can retrive the coordinates we used to create these objects:
print(sc.ra)
print(sc.dec)
# We can transform coordinates to different frames (i.e., coordinate systems)
print(sc.transform_to('fk4'))
print(sc.transform_to('galactic'))
# Seperations and position angles are calculatable from SkyCoord objects:
sc2 = SkyCoord('35d 25d', frame='galactic')
print(sc.separation(sc2))
# When we have a world coordinate system (e.g., from a FITS file header), we can convert to and from pixel coordinates:
from astropy.wcs import WCS
w = WCS(hdu_list[0].header)
print(sc.to_pixel(w))
print(SkyCoord.from_pixel(5, 5, w))
"""
Explanation: Celestial Coordinate Systems
What about units that coorrespond to locations?
While u.degree and u.arcsecond do exist, the essential coordinate manipulation is part of the astropy.coordinates submodule. Coordinate conversions, catalog conversions, and more are supported.
End of explanation
"""
import numpy as np
from astropy.modeling import models, fitting
np.random.seed(0)
x = np.linspace(-5., 5., 200)
y = 3 * np.exp(-0.5 * (x - 1.3)**2 / 0.8**2)
y += np.random.normal(0., 0.2, x.shape)
plt.plot(x, y, 'ko')
plt.xlabel('Position')
plt.ylabel('Flux')
# Fit the data using a Gaussian
model_object = models.Gaussian1D(amplitude=1., mean=0, stddev=1.)
fitter = fitting.LevMarLSQFitter()
g = fitter(model_object, x, y)
plt.plot(x, y, 'ko')
plt.plot(x, g(x), 'r-', lw=2)
plt.xlabel('Position')
plt.ylabel('Flux')
# Get information about the fitted model:
print(g)
"""
Explanation: Honorable Mentions in Astropy
These are some things that I'm not very familiar with, but I want to point out with a few quick examples.
votable
VOTables are an alternative format to FITS in use by virtual observatory projects. This one is difficult to prepare ahead of time, since these files are typically generated on the fly in response to search/database queries. astropy.io.votable handles these files.
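A minimal sketch of reading one (the file name here is hypothetical):
from astropy.io.votable import parse
votable = parse('my_query_result.xml')
table = votable.get_first_table().to_table()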
Modeling
The astropy.modeling submodule is concerned with the fitting of models to data. The goal is to make it easy to fit or represent your data using common models, such as broken power laws or other composite models.
For example, here is some synthetic data that is roughly Gaussian-like:
End of explanation
"""
# Load the 9-year WMAP Cosmology and get H_0
from astropy.cosmology import WMAP9 as cosmo
print(cosmo.H(0))
# find the age of the universe at a given redshift:
print(cosmo.age(1))
# Other cosmologies are avaliable
from astropy.cosmology import Planck13 as cosmo
print(cosmo.age(1))
# Build your own cosmology
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Ob0=0.05)
print(cosmo.age(1))
"""
Explanation: Cosmology
There is also some work for cosmology computations, specifically with different cosmologies.
For this, it is essential to load a Cosmology object. These are, by convention, named cosmo:
End of explanation
"""
|
rvperry/phys202-2015-work | assignments/assignment07/AlgorithmsEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Algorithms Exercise 2
Imports
End of explanation
"""
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    P = []
    for i in range(len(a)):
        if i == 0 and a[1] < a[0]:
            # left endpoint: a peak if it is larger than its only neighbor
            P.append(0)
        elif i == len(a) - 1 and a[i - 1] < a[i]:
            # right endpoint: a peak if it is larger than its only neighbor
            P.append(i)
        elif 0 < i < len(a) - 1 and a[i - 1] < a[i] and a[i + 1] < a[i]:
            # interior point: a peak if it is larger than both neighbors
            P.append(i)
    return np.array(P)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
"""
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
"""
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
from IPython.display import display
pi_digits = np.array([int(d) for d in pi_digits_str])  # convert the digit string to an integer array
peaks = find_peaks(pi_digits)                          # indices of local maxima (avoids shadowing sympy's pi)
gaps = np.diff(peaks)                                  # distances between consecutive maxima
plt.hist(gaps, range=(0, max(gaps)), bins=max(gaps))
plt.title('Local maxima in the digits of $\pi$')
plt.xlabel('Digits between maxima')
assert True # use this for grading the pi digits histogram
"""
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consequtive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
"""
|
sjobeek/robostats_mcl | mcl_demonstration.ipynb | mit | logdata = mcl.load_log('data/log/robotdata2.log.gz')
logdata['x_rel'] = logdata['x'] - logdata.ix[0,'x']
logdata['y_rel'] = logdata['y'] - logdata.ix[0,'y']
plt.plot(logdata['x_rel'], logdata['y_rel'])
plt.title('Relative Odometry (x, y) in m')
"""
Explanation: Monte Carlo localization
This notebook presents a demonstration of Erik Sjoberg's implementation of Monte Carlo localization (particle filter) on a dataset of 2d laser scans
Example laser scan data
Note the legs of a person which appear in the dataset; this dynamic obstacle will increase the difficulty of the localization.
<img src="data/robotmovie1.gif"/>
Corresponding relative odometry log data
End of explanation
"""
global_map = mcl.occupancy_map('data/map/wean.dat.gz')
mcl.draw_map_state(global_map, rotate=True)
"""
Explanation: Note the significant drift in the path according to the odometry data above, which should have returned to it's initial position
Map to localize within
End of explanation
"""
sensor = mcl.laser_sensor() # Defines sensor measurement model
particle_list = [mcl.robot_particle(global_map, sensor)
for _ in range(1000)]
mcl.draw_map_state(global_map, particle_list, rotate=True)
plt.show()
"""
Explanation: Initialize valid particles uniformly on map
Particles are initialized uniformly over the entire 8000cm x 8000cm area with random heading, but are re-sampled if they end up in a grid cell which is not clear with high confidence (map value > 0.8).
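The rejection-sampling idea is roughly the following sketch (prob_clear is a hypothetical helper standing in for the occupancy-grid lookup, not the actual mcl API):
while True:
    x, y = np.random.uniform(0, 8000, size=2)   # candidate position in cm
    theta = np.random.uniform(-np.pi, np.pi)    # random heading
    if prob_clear(global_map, x, y) > 0.8:      # keep only confidently-clear cells
        break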
End of explanation
"""
from tempfile import NamedTemporaryFile
VIDEO_TAG = """<video controls>
<source src="data:video/x-m4v;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, dpi=400, fps=10, extra_args=['-vcodec', 'libx264', '-pix_fmt', 'yuv420p'])
video = open(f.name, "rb").read()
anim._encoded_video = video.encode("base64")
return VIDEO_TAG.format(anim._encoded_video)
class ParticleMap(object):
def __init__(self, ax, global_map, particle_list, target_particles=300, draw_max=2000, resample_period=10):
self.ax = ax
self.draw_max = draw_max
self.global_map = global_map
self.particle_list = particle_list
mcl.draw_map_state(global_map, particle_list, ax=self.ax, draw_max=self.draw_max)
self.i = 1
self.target_particles = target_particles
self.resample_period = resample_period
def update(self, message):
if self.i % self.resample_period == 0:# Resample and plot state
self.particle_list = mcl.mcl_update(self.particle_list, message, resample=True,
target_particles=self.target_particles) # Update
plt.cla()
mcl.draw_map_state(self.global_map, self.particle_list, self.ax, draw_max=self.draw_max)
#print(pd.Series([p.weight for p in self.particle_list]).describe())
else: # Just update particle weights / locations - do not resample
self.particle_list = mcl.mcl_update(self.particle_list, message,
target_particles=self.target_particles) # Update
self.i += 1
import matplotlib.animation as animation
import matplotlib.animation as animation
np.random.seed(5)
wean_hall_map = mcl.occupancy_map('data/map/wean.dat')
logdata = mcl.load_log('data/log/robotdata5.log.gz')
logdata_scans = logdata.query('type > 0.1').values
#Initialize 100 particles uniformly in valid locations on the map
laser = mcl.laser_sensor(stdv_cm=100, uniform_weight=0.2)
particle_list = [mcl.robot_particle(wean_hall_map, laser, log_prob_descale=2000,
sigma_fwd_pct=0.2, sigma_theta_pct=0.1)
for _ in range(50000)]
fig, ax = plt.subplots(figsize=(16,9))
pmap = ParticleMap(ax, wean_hall_map, particle_list,
target_particles=300, draw_max=2000, resample_period=10)
# pass a generator in "emitter" to produce data for the update func
ani = animation.FuncAnimation(fig, pmap.update, logdata_scans, interval=50,
blit=False, repeat=False)
ani.save('./mcl_log5_50k_success.mp4', dpi=100, fps=10, extra_args=['-vcodec', 'libx264', '-pix_fmt', 'yuv420p'])
plt.close('all')
#plt.show()
#anim_to_html(ani)
plt.close('all')
mcl.mp4_to_html('./mcl_log5_50k_success.mp4')
"""
Explanation: Localization Program Execution
End of explanation
"""
logdata = mcl.load_log('data/log/robotdata1.log.gz')
logdata['x_rel'] = logdata['x'] - logdata.ix[0,'x']
logdata['y_rel'] = logdata['y'] - logdata.ix[0,'y']
plt.plot(logdata['x_rel'], logdata['y_rel'])
logdata['theta_rel'] = logdata['theta'] - logdata.ix[0,'theta']
logdata['xl_rel'] = logdata['xl'] - logdata.ix[0,'xl']
logdata['yl_rel'] = logdata['yl'] - logdata.ix[0,'yl']
logdata['thetal_rel'] = logdata['thetal'] - logdata.ix[0,'thetal']
logdata['dt'] = logdata['ts'].shift(-1) - logdata['ts']
logdata['dx'] = logdata['x'].shift(-1) - logdata['x']
logdata['dy'] = logdata['y'].shift(-1) - logdata['y']
logdata['dtheta'] = logdata['theta'].shift(-1) - logdata['theta']
logdata['dxl'] = logdata['xl'].shift(-1) - logdata['xl']
logdata['dyl'] = logdata['yl'].shift(-1) - logdata['yl']
logdata['dthetal'] = logdata['thetal'].shift(-1) - logdata['thetal']
"""
Explanation: View log data
End of explanation
"""
|
fionapigott/Data-Science-45min-Intros | language-processing-vocab/language_processing_vocab.ipynb | unlicense | # first, get some text:
import fileinput
try:
import ujson as json
except ImportError:
import json
documents = []
for line in fileinput.FileInput("example_tweets.json"):
documents.append(json.loads(line)["text"])
"""
Explanation: Introduction to Language Processing Concepts
Original tutorial by Brain Lehman, with updates by Fiona Pigott
The goal of this tutorial is to introduce a few basic vocabulary terms, ideas, and Python libraries for thinking about topic modeling, so that we have a good shared vocabulary for talking in more depth about processing language with Python later. We'll spend some time defining vocabulary for topic modeling and using basic topic modeling tools.
A big thank-you to the good people at the Stanford NLP group, for their informative and helpful online book: https://nlp.stanford.edu/IR-book/.
Definitions.
Document: a body of text (eg. tweet)
Tokenization: dividing a document into pieces (and maybe throwing away some characters); in English this often (but not necessarily) means words separated by spaces and punctuation.
Text corpus: the set of documents that contains the text for the analysis (eg. many tweets)
Stop words: words that occur so frequently, or have so little topical meaning, that they are excluded (e.g., "and")
Vectorize: Turn some documents into vectors
Vector corpus: the set of documents transformed such that each token is a tuple (token_id , doc_freq)
End of explanation
"""
print("One document: \"{}\"".format(documents[0]))
"""
Explanation: 1) Document
In the case of the text that we just imported, each entry in the list is a "document"--a single body of text, hopefully with some coherent meaning.
End of explanation
"""
from nltk.stem import porter
from nltk.tokenize import TweetTokenizer
# tokenize the documents
# find good information on tokenization:
# https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html
# find documentation on pre-made tokenizers and options here:
# http://www.nltk.org/api/nltk.tokenize.html
tknzr = TweetTokenizer(reduce_len = True)
# stem the documents
# find good information on stemming and lemmatization:
# https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html
# find documentation on available pre-implemented stemmers here:
# http://www.nltk.org/api/nltk.stem.html
stemmer = porter.PorterStemmer()
for doc in documents[0:10]:
tokenized = tknzr.tokenize(doc)
stemmed = [stemmer.stem(x) for x in tokenized]
print("Original document:\n{}\nTokenized result:\n{}\nStemmed result:\n{}\n".format(
doc, tokenized, stemmed))
"""
Explanation: 2) Tokenization
We split each document into smaller pieces ("tokens") in a process called tokenization. Tokens can be counted, and most importantly, compared between documents. There are potentially many different ways to tokenize text--splitting on spaces, removing punctuation, dividing the document into n-character pieces--anything that gives us tokens that we can, hopefully, effectively compare across documents and derive meaning from.
Related to tokenization are processes called stemming and lemmatiztion which can help when using tokens to model topics based on the meaning of a word. In the phrases "they run" and "he runs" (space separated tokens: ["they", "run"] and ["he", "runs"]) the words "run" and "runs" mean basically the same thing, but are two different tokens. Stemming and/or lemmatization help us compare tokens with the same meaning but different spelling/suffixes.
Lemmatization:
Uses a dictionary of words and their possible morphologies to map many different forms of a base word ("lemma") to a single lemma, comparable across documents. E.g.: "run", "ran", "runs", and "running" might all map to the lemma "run"
Stemming:
Uses a set of heuristic rules to try to approximate lemmatization, without knowing the words in advance. For the English language, a simple and effective stemming algorithm might simply be to remove an "s" from the ends of words, or an "ing" from the end of words. E.g.: "run", "runs", and "running" all map to "run," but "ran" (an irregularrly conjugated verb) would not.
Stemming is particularly interesting and applicable in social data, because while some words are decidedly not standard English, conventional rules of grammar still apply. A fan of the popular singer Justin Bieber might call herself a "belieber," while a group of fans call themselves "beliebers." You won't find "belieber" in any English lemmatization dictionary, but a good stemming algorithm will still map "belieber" and "beliebers" to the same token ("belieber", or even "belieb", if we remove the common suffix "er").
End of explanation
"""
# number of documents in the corpus
print("There are {} documents in the corpus.".format(len(documents)))
"""
Explanation: 3) Text corpus
The text corpus is a collection of all of the documents (Tweets) that we're interested in modeling. Topic modeling and/or clustering on a corpus tends to work best if that corpus has some similar themes--this will mean that some tokens overlap, and we can get signal out of when documents share (or do not share) tokens.
Modeling text tends to get much harder the more different, uncommon, and unrelated tokens appear in a text, especially when we are working with social data, where tokens don't necessarily appear in a dictionary. This difficulty (of having many, many unrelated tokens as dimensions in our model) is one example of the curse of dimensionality.
End of explanation
"""
from nltk.corpus import stopwords
stopset = set(stopwords.words('english'))
print("The English stop words list provided by NLTK: ")
print(stopset)
stopset.update(["twitter"]) # add token
stopset.remove("i") # remove token
print("\nAdd or remove stop words form the set: ")
print(stopset)
"""
Explanation: 4) Stop words:
Stop words are simply tokens that we've chosen to remove from the corpus, for any reason. In English, removing words like "and", "the", "a", "at", and "it" are common choices for stop words. Stop words can also be edited per project requirement, in case some words are too common in a particular dataset to be meaningful (another way to do stop word removal is to simply remove any word that appears in more than some fixed percentage of documents).
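For example, scikit-learn's vectorizers (used below) can apply the document-frequency version directly with the max_df parameter; the 0.5 cutoff here is just an illustration:
CountVectorizer(stop_words=list(stopset), max_df=0.5)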
End of explanation
"""
# we're going to use the vectorizer functions that scikit learn provides
# define the tokenizer that we want to use
# must be a callable function that takes a document and returns a list of tokens
tknzr = TweetTokenizer(reduce_len = True)
stemmer = porter.PorterStemmer()
def myTokenizer(doc):
return [stemmer.stem(x) for x in tknzr.tokenize(doc)]
# choose the stopword set that we want to use
stopset = set(stopwords.words('english'))
stopset.update(["http","https","twitter","amp"])
# vectorize
# we're using the scikit learn CountVectorizer function, which is very handy
# documentation here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(tokenizer = myTokenizer, stop_words = stopset)
vectorized_documents = vectorizer.fit_transform(documents)
vectorized_documents
import matplotlib.pyplot as plt
%matplotlib inline
_ = plt.hist(vectorized_documents.todense().sum(axis = 1))
_ = plt.title("Number of tokens per document")
_ = plt.xlabel("Number of tokens")
_ = plt.ylabel("Number of documents with x tokens")
from numpy import logspace, ceil, histogram, array
# get the token frequency
token_freq = sorted(vectorized_documents.todense().astype(bool).sum(axis = 0).tolist()[0], reverse = False)
# make a histogram with log scales
bins = array([ceil(x) for x in logspace(0, 3, 5)])
widths = (bins[1:] - bins[:-1])
hist = histogram(token_freq, bins=bins)
hist_norm = hist[0]/widths
# plot (notice that most tokens only appear in one document)
plt.bar(bins[:-1], hist_norm, widths)
plt.xscale('log')
plt.yscale('log')
_ = plt.title("Number of documents in which each token appears")
_ = plt.xlabel("Number of documents")
_ = plt.ylabel("Number of tokens")
"""
Explanation: 5) Vectorize:
Transform each document into a vector. There are several good choices that you can make about how to do this transformation, and I'll talk about each of them in a second.
In order to vectorize documents in a corpus (without any dimensional reduction around the vocabulary), think of each document as a row in a matrix, and each column as a word in the vocabulary of the entire corpus. In order to vectorize a corpus, we must read the entire corpus, assign one word to each column, and then turn each document into a row.
Example:
Documents: "I love cake", "I hate chocolate", "I love chocolate cake", "I love cake, but I hate chocolate cake"
Stopwords: Say, because the word "but" is a conjunction, we want to make it a stop word (not include it in our document vectors)
Vocabulary: "I" (column 1), "love" (column 2), "cake" (column 3), "hate" (column 4), "chocolate" (column 5)
\begin{equation}
\begin{matrix}
\text{"I love cake" } & =\
\text{"I hate chocolate" } & =\
\text{"I love chocolate cake" } & = \
\text{"I love cake, but I hate chocolate cake"} & =
\end{matrix}
\qquad
\begin{bmatrix}
1 & 1 & 1 & 0 & 0\
1 & 0 & 0 & 1 & 1\
1 & 1 & 1 & 0 & 1\
2 & 1 & 2 & 1 & 1
\end{bmatrix}
\end{equation}
Vectorization like this doesn't take word order into account (we call this property "bag of words"), and in the above example I am simply counting the frequency of each term in each document.
End of explanation
"""
# documentation on this sckit-learn function here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html
tfidf_vectorizer = TfidfVectorizer(tokenizer = myTokenizer, stop_words = stopset)
tfidf_vectorized_documents = tfidf_vectorizer.fit_transform(documents)
tfidf_vectorized_documents
# you can look at two vectors for the same document, from 2 different vectorizers:
tfidf_vectorized_documents[0].todense().tolist()[0]
vectorized_documents[0].todense().tolist()[0]
"""
Explanation: Bag of words
Taking all the words from a document, and sticking them in a bag. Order does not matter, which could cause a problem. "Alice loves cake" might have a different meaning than "Cake loves Alice."
Frequency
Counting the number of times a word appears in a document.
Tf-Idf (term frequency inverse document frequency):
A statistic that is intended to reflect how important a word is to a document in a collection or corpus. The Tf-Idf value increases proportionally to the number of times a word appears in the document and is inversely proportional to the frequency of the word in the corpus--this helps control words that are generally more common than others.
There are several different possibilities for computing the tf-idf statistic--choosing whether to normalize the vectors, choosing whether to use counts or the logarithm of counts, etc. I'm going to show how scikit-learn computed the tf-idf statistic by default, with more information available in the documentation of the sckit-learn TfidfVectorizer.
$tf(t)$ : Term Frequency, count of the number of times each term appears in the document.
$idf(d,t)$ : Inverse document frequency.
$df(d,t)$ : Document frequency, the count of the number of documents in which the term appears.
$$
tfidf(t) = tf(t) \times \Big(\log\big(\frac{1 + n}{1 + df(d, t)}\big) + 1\Big)
$$
We also then take the Euclidean ($l2$) norm of each document vector, so that long documents (documents with many non-stopword tokens) have the same norm as shorter documents.
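As a small worked check (an aside, not part of the original tutorial), we could feed the toy "cake" corpus from the example above through TfidfVectorizer and inspect the weights it assigns; the custom token_pattern keeps one-letter tokens like "i", which the default pattern drops:
toy_docs = ["I love cake", "I hate chocolate", "I love chocolate cake", "I love cake, but I hate chocolate cake"]
toy_vectorizer = TfidfVectorizer(token_pattern=r"(?u)\b\w+\b", stop_words=["but"])
toy_tfidf = toy_vectorizer.fit_transform(toy_docs)
print(toy_vectorizer.get_feature_names())   # get_feature_names_out() in newer scikit-learn
print(toy_tfidf.todense().round(3))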
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/c0c3ed4677febbe0a9a8fc4b6deea26c/plot_object_epochs.ipynb | bsd-3-clause | import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
"""
Explanation: The :class:Epochs <mne.Epochs> data structure: epoched data
:class:Epochs <mne.Epochs> objects are a way of representing continuous
data as a collection of time-locked trials, stored in an array of shape
(n_events, n_channels, n_times). They are useful for many statistical
methods in neuroscience, and make it easy to quickly overview what occurs
during a trial.
End of explanation
"""
data_path = mne.datasets.sample.data_path()
# Load a dataset that contains events
raw = mne.io.read_raw_fif(
op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
# If your raw object has a stim channel, you can construct an event array
# easily
events = mne.find_events(raw, stim_channel='STI 014')
# Show the number of events (number of rows)
print('Number of events:', len(events))
# Show all unique event codes (3rd column)
print('Unique event codes:', np.unique(events[:, 2]))
# Specify event codes of interest with descriptive labels.
# This dataset also has visual left (3) and right (4) events, but
# to save time and memory we'll just look at the auditory conditions
# for now.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
"""
Explanation: :class:Epochs <mne.Epochs> objects can be created in three ways:
1. From a :class:Raw <mne.io.Raw> object, along with event times
2. From an :class:Epochs <mne.Epochs> object that has been saved as a
.fif file
3. From scratch using :class:EpochsArray <mne.EpochsArray>. See
tut_creating_data_structures
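A minimal sketch of option 3 (an aside; all shapes and channel names here are made up):
sfreq = 100.  # Hz
info = mne.create_info(ch_names=['EEG 001', 'EEG 002'], sfreq=sfreq, ch_types=['eeg', 'eeg'])
data = np.random.randn(5, 2, int(sfreq))  # 5 epochs, 2 channels, 1 s each
epochs_from_scratch = mne.EpochsArray(data, info)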
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,
baseline=(None, 0), preload=True)
print(epochs)
"""
Explanation: Now, we can create an :class:mne.Epochs object with the events we've
extracted. Note that epochs constructed in this manner will not have their
data available until explicitly read into memory, which you can do with
:func:get_data <mne.Epochs.get_data>. Alternatively, you can use
preload=True.
Expose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event
onsets
End of explanation
"""
print(epochs.events[:3])
print(epochs.event_id)
"""
Explanation: Epochs behave similarly to :class:mne.io.Raw objects. They have an
:class:info <mne.Info> attribute that has all of the same
information, as well as a number of attributes unique to the events contained
within the object.
End of explanation
"""
print(epochs[1:5])
print(epochs['Auditory/Right'])
"""
Explanation: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>
object directly. Alternatively, if you have epoch names specified in
event_id then you may index with strings instead.
End of explanation
"""
print(epochs['Right'])
print(epochs['Right', 'Left'])
"""
Explanation: Note the '/'s in the event code labels. These separators allow tag-based
selection of epoch sets; every string separated by '/' can be entered, and
returns the subset of epochs matching any of the strings. E.g.,
End of explanation
"""
epochs_r = epochs['Right']
epochs_still_only_r = epochs_r[['Right', 'Left']]
print(epochs_still_only_r)
try:
epochs_still_only_r["Left"]
except KeyError:
print("Tag-based selection without any matches raises a KeyError!")
"""
Explanation: Note that MNE will not complain if you ask for tags not present in the
object, as long as it can find some match: the below example is parsed as
(inclusive) 'Right' OR 'Left'. However, if no match is found, an error is
returned.
End of explanation
"""
# These will be epochs objects
for i in range(3):
print(epochs[i])
# These will be arrays
for ep in epochs[:2]:
print(ep)
"""
Explanation: It is also possible to iterate through :class:Epochs <mne.Epochs> objects
in this way. Note that behavior is different if you iterate on Epochs
directly rather than indexing:
End of explanation
"""
epochs.drop([0], reason='User reason')
epochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)
print(epochs.drop_log)
epochs.plot_drop_log()
print('Selection from original events:\n%s' % epochs.selection)
print('Removed events (from numpy setdiff1d):\n%s'
% (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))
print('Removed events (from list comprehension -- should match!):\n%s'
% ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))
"""
Explanation: You can manually remove epochs from the Epochs object by using
:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat
thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
You can also inspect the reason why epochs were dropped by looking at the
list stored in epochs.drop_log or plot them with
:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices
from the original set of events are stored in epochs.selection.
End of explanation
"""
epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')
epochs.save(epochs_fname)
"""
Explanation: If you wish to save the epochs as a file, you can do it with
:func:mne.Epochs.save. To conform to MNE naming conventions, the
epochs file names should end with '-epo.fif'.
End of explanation
"""
epochs = mne.read_epochs(epochs_fname, preload=False)
"""
Explanation: Later on you can read the epochs with :func:mne.read_epochs. For reading
EEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use
preload=False to save memory, loading the epochs from disk on demand.
End of explanation
"""
ev_left = epochs['Auditory/Left'].average()
ev_right = epochs['Auditory/Right'].average()
f, axs = plt.subplots(3, 2, figsize=(10, 5))
_ = f.suptitle('Left / Right auditory', fontsize=20)
_ = ev_left.plot(axes=axs[:, 0], show=False, time_unit='s')
_ = ev_right.plot(axes=axs[:, 1], show=False, time_unit='s')
plt.tight_layout()
"""
Explanation: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. Instances
of Evoked are usually created by calling :func:mne.Epochs.average. For
creating Evoked from other data structures see :class:mne.EvokedArray and
tut_creating_data_structures.
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/model_optimization/guide/pruning/pruning_with_keras.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
! pip install -q tensorflow-model-optimization
import tempfile
import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
%load_ext tensorboard
"""
Explanation: Pruning in Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Welcome to an end-to-end example for magnitude-based weight pruning.
Other pages
For an introduction to what pruning is and to determine if you should use it (including what's supported), see the overview page.
To quickly find the APIs you need for your use case (beyond fully pruning a model with 80% sparsity), see the
comprehensive guide.
Summary
In this tutorial, you will:
Train a tf.keras model for MNIST from scratch.
Fine tune the model by applying the pruning API and see the accuracy.
Create 3x smaller TF and TFLite models from pruning.
Create a 10x smaller TFLite model from combining pruning and post-training quantization.
See the persistence of accuracy from TF to TFLite.
Setup
End of explanation
"""
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=4,
validation_split=0.1,
)
"""
Explanation: Train a model for MNIST without pruning
End of explanation
"""
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
"""
Explanation: Evaluate baseline test accuracy and save the model for later usage.
End of explanation
"""
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute end step to finish pruning after 2 epochs.
batch_size = 128
epochs = 2
validation_split = 0.1 # 10% of training set will be used for validation set.
num_images = train_images.shape[0] * (1 - validation_split)
end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs
# Define model for pruning.
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.80,
begin_step=0,
end_step=end_step)
}
model_for_pruning = prune_low_magnitude(model, **pruning_params)
# `prune_low_magnitude` requires a recompile.
model_for_pruning.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_for_pruning.summary()
"""
Explanation: Fine-tune pre-trained model with pruning
Define the model
You will apply pruning to the whole model and see this in the model summary.
In this example, you start the model with 50% sparsity (50% zeros in weights)
and end with 80% sparsity.
In the comprehensive guide, you can see how to prune some layers for model accuracy improvements.
End of explanation
"""
logdir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]
model_for_pruning.fit(train_images, train_labels,
batch_size=batch_size, epochs=epochs, validation_split=validation_split,
callbacks=callbacks)
"""
Explanation: Train and evaluate the model against baseline
Fine tune with pruning for two epochs.
tfmot.sparsity.keras.UpdatePruningStep is required during training, and tfmot.sparsity.keras.PruningSummaries provides logs for tracking progress and debugging.
End of explanation
"""
_, model_for_pruning_accuracy = model_for_pruning.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', model_for_pruning_accuracy)
"""
Explanation: For this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
End of explanation
"""
#docs_infra: no_execute
%tensorboard --logdir={logdir}
"""
Explanation: The logs show the progression of sparsity on a per-layer basis.
End of explanation
"""
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
_, pruned_keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model_for_export, pruned_keras_file, include_optimizer=False)
print('Saved pruned Keras model to:', pruned_keras_file)
"""
Explanation: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Create 3x smaller models from pruning
Both tfmot.sparsity.keras.strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression
benefits of pruning.
strip_pruning is necessary since it removes every tf.Variable that pruning only needs during training, which would otherwise add to model size during inference
Applying a standard compression algorithm is necessary since the serialized weight matrices are the same size as they were before pruning. However, pruning makes most of the weights zeros, which is
added redundancy that algorithms can utilize to further compress the model.
First, create a compressible model for TensorFlow.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
pruned_tflite_model = converter.convert()
_, pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_tflite_file, 'wb') as f:
f.write(pruned_tflite_model)
print('Saved pruned TFLite model to:', pruned_tflite_file)
"""
Explanation: Then, create a compressible model for TFLite.
End of explanation
"""
def get_gzipped_model_size(file):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
"""
Explanation: Define a helper function to actually compress the models via gzip and measure the zipped size.
End of explanation
"""
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned Keras model: %.2f bytes" % (get_gzipped_model_size(pruned_keras_file)))
print("Size of gzipped pruned TFlite model: %.2f bytes" % (get_gzipped_model_size(pruned_tflite_file)))
"""
Explanation: Compare and see that the models are 3x smaller from pruning.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_and_pruned_tflite_model = converter.convert()
_, quantized_and_pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_pruned_tflite_file, 'wb') as f:
f.write(quantized_and_pruned_tflite_model)
print('Saved quantized and pruned TFLite model to:', quantized_and_pruned_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_pruned_tflite_file)))
"""
Explanation: Create a 10x smaller model from combining pruning and quantization
You can apply post-training quantization to the pruned model for additional benefits.
End of explanation
"""
import numpy as np
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
  # Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
"""
Explanation: See persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TF Lite model on the test dataset.
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=quantized_and_pruned_tflite_model)
interpreter.allocate_tensors()
test_accuracy = evaluate_model(interpreter)
print('Pruned and quantized TFLite test_accuracy:', test_accuracy)
print('Pruned TF test accuracy:', model_for_pruning_accuracy)
"""
Explanation: You evaluate the pruned and quantized model and see that the accuracy from TensorFlow persists to the TFLite backend.
End of explanation
"""
|
olivierverdier/homogint | Demo.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from homogint import *
"""
Explanation: This is a demo of homogint, a simple Python library for integration on homogeneous spaces. The theoretical background is explained in the paper Integrators on homogeneous spaces, by Olivier Verdier and Hans Munthe-Kaas.
General imports
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
def plot_sphere(ax=None):
if ax is None:
ax = plt.gcf().add_subplot(111, projection='3d')
ax.autoscale(tight=True)
ax.set_axis_off()
ax.set_aspect("equal")
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100).reshape(-1,1)
x = np.cos(u) * np.sin(v)
y = np.sin(u) * np.sin(v)
z = np.cos(v)
ax.plot_wireframe(x, y, z, rstride=4, cstride=4, color='k', alpha=.1)
return ax
"""
Explanation: Plotting routine:
End of explanation
"""
rkmk4 = RungeKutta(RKMK4())
"""
Explanation: Set up the solver
We will use the fourth order Runge–Kutta–Munthe-Kaas method.
It can be described by the following transition functions (see §3 in the paper):
\begin{align}
\theta_{1,0} &= \frac{1}{2} F_{0} \\
\theta_{2,0} &= \frac{1}{2}F_1 - \frac{1}{8}[F_{0},F_1]\\
\theta_{3,0} &= F_2 \\
\theta_{4,0} &= \frac{1}{6} (F_0 + 2(F_1+F_2) + F_3) - \frac{1}{12} [F_0, F_3]
\end{align}
End of explanation
"""
def solve(vf,xs,stopping,action=None, maxit=10000):
"Simple solver with stopping condition. The list xs is modified **in place**."
for i in range(maxit):
if stopping(i,xs[-1]):
break
xs.append(rkmk4.step(vf, xs[-1], action=action))
"""
Explanation: We define a simple solver function that we use to solve the examples.
End of explanation
"""
def timedep_field(x):
"""
Example from Diffman manual.
"""
J = np.zeros([5,5])
t = x[-2]
J[0,1] = t
J[0,2] = -.4*np.cos(t)
J[1,2] = .1*t
J -= J.T
J[-2,-1] = 1.
return J
xs = [np.array([0.,0,1,0,1])]
dt = .02
solve(time_step(dt)(timedep_field),xs,lambda i,x:x[-2]>10)
axs = np.array(xs)
def plot2(axs):
plt.plot(axs[0,0],axs[0,1],'o')
plt.plot(axs[:,0], axs[:,1],marker='.')
plt.axis('equal')
plot2(axs)
fig = plt.figure(figsize=(15,10))
ax = plot_sphere()
tot = len(xs)
for i,s in enumerate([slice(0,tot//2,None), slice(tot//2,None,None)]):
ax.plot(axs[s,0],axs[s,1],axs[s,2],lw=2,marker='.',color=['black','blue'][i], alpha=[1.,0.2][i])
ax.view_init(50,-130)
# savefig('quad.pdf')
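# Added check (not in the original demo): the integrator should keep the
# solution on the unit sphere, so the norm of (x, y, z) should stay equal to 1.
norms = np.sqrt(np.sum(axs[:, :3]**2, axis=1))
print("max deviation from the unit sphere:", np.max(np.abs(norms - 1.)))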
"""
Explanation: Sphere: quadrature
Example from DiffMan.
We study the solution of the equation $x'(t) = ξ(t)x(t)$, where $x$ is on the sphere, and
\[
ξ(t) = \begin{bmatrix}0 & t & -0.4\cos(t) \\ -t & 0 & 0.1t \\ 0.4 \cos(t) & -0.1 t & 0\end{bmatrix} \in \mathsf{so}(3)
\]
This is equivalent to considering the problem
\[
x'(t) = ω(t) \times x(t)
\]
with $ω(t) = -(0.1t,0.4\cos(t),t)$.
We use a bit of trickery here, and use instead the autonomous vector field (in block notation):
\[
\zeta(x) = \begin{bmatrix} \xi(t) & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}
\]
This amounts to working with the group $\mathsf{SO(3) \times \mathbf{R}}$ instead.
End of explanation
"""
def so31_field(x):
t = x[3,3]
xi = np.zeros([5,5])
xi[0,1] = t
xi[0,2] = 1.
xi[1,2] = -t*t
xi -= xi.T
xi[-2,-1] = 1.
return xi
x0 = np.zeros([5,4])
x0[:3,:3] = np.identity(3)
x0[-1,-1] = 1.
xs = [x0]
dt = .01
solve(time_step(dt)(so31_field), xs, lambda i,x: x[-2,-1] > 5)
axs = np.array(xs)
"""
Explanation: $\mathsf{SO}(3)$: Quadrature
Example from DiffMan
The field is
\[
\xi(t) = \begin{bmatrix} 0 & t & 1 \\ -t & 0 & -t^2 \\ -1 & t^2 & 0\end{bmatrix}
\]
End of explanation
"""
fig = plt.figure(figsize=(15,10))
ax = plot_sphere()
for i in range(3):
ax.plot(axs[:,0,i],axs[:,1,i],axs[:,2,i],lw=2,marker='.')
ax.view_init(45,80)
plt.savefig('so3quad.svg', bbox_inches='tight', pad_inches=-1.5)
"""
Explanation: Plot the three unit vectors of the rotation matrix:
End of explanation
"""
def lorenz(x, sigma=10, beta=8./3., rho=28):
y = x[1]
A = np.array([[-beta, 0, y],
[0, -sigma, sigma],
[-y, rho, -1]])
vf = np.dot(A,x[:3])
xi = np.zeros([4,4])
xi[:-1,-1] = vf # translation only
return xi
xs = [np.array([25.,0,-20,1])]
solve(time_step(0.02)(lorenz), xs, lambda i,x: i > 20/.02)
axs = np.array(xs)
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(axs[:,0],axs[:,1],axs[:,2],marker='.')
ax.plot([axs[0,0]],[axs[0,1]],[axs[0,2]],marker='o')
#ax.view_init(90,120)
"""
Explanation: Flat space: Lorenz equation
Example from DiffMan
This is the equation
\[
(x,y,z)' = (-βx + yz, -σy + σz, -xy + ρy - z)
\]
with the values
\[
σ = 10 \qquad ρ = 28 \qquad β = 8/3
\]
The idea here is to use the translation group, so the infinitesimal vector field is
\[
\xi(x) = \begin{bmatrix} 0 & v(x) \\ 0 & 0\end{bmatrix}
\]
where $v(x)\in\mathbf{R}^3$ is the Lorenz vector field above.
End of explanation
"""
def iso_field(P):
sk = np.tril(P) - np.triu(P) # skew symmetric
return sk
"""
Explanation: Isospectral Manifold: Toda flow
An isospectral flow is an equation of the form
\[
P' = ξ(P)P - Pξ(P)
\]
where $P$ is symmetric and $ξ(P)$ is skew-symmetric.
We implement what is known as the Toda flow:
End of explanation
"""
#init = np.array([[-1.,1,0],[1,.5,1],[0,1,.5]])
rmat = np.random.randn(20,20)
init = rmat + rmat.T
plt.matshow(init)
plt.savefig('matinit.png', bbox_inches='tight', pad_inches=0)
from homogint.homogint import trans_adjoint
Ps = [init]
dt = .25
solve(time_step(dt)(iso_field), Ps, lambda i,x: i>30/dt, action=trans_adjoint)
"""
Explanation: Random symmetric matrix as initial condition.
End of explanation
"""
import numpy.linalg as nl
eigenvalues = [np.sort(nl.eigvals(P)) for P in Ps]
aeigenvalues = np.array(eigenvalues)
deig = aeigenvalues - aeigenvalues[0]
plt.plot(deig)
plt.title("eigenvalue drift")
from ipywidgets import interact
def view_matrix(i):
plt.matshow(Ps[i])
interact(view_matrix, i=(0,len(Ps)-1,1))
"""
Explanation: The flow does not change the eigenvalues (hence the name isospectral flow)
End of explanation
"""
plt.matshow(Ps[-1])
plt.savefig('matfinal.png', bbox_inches='tight', pad_inches=0)
"""
Explanation: The flow is a continuous version of the QR algorithm, so it almost converges towards a diagonal matrix (almost, because there are no shifts or deflations).
End of explanation
"""
def airy_field(x):
mat = np.zeros([4,4])
mat[0,1] = 1.
mat[1,0] = -x[-2]
mat[-2,-1] = 1 # time
return mat
"""
Explanation: Airy Equation
Example from Lie Group Method
We solve the equation
\[
x'' + tx = 0
\]
It is reformulated as
\[
(x,v)' = \begin{bmatrix} 0 & 1 \\ -t & 0\end{bmatrix} (x,v)
\]
End of explanation
"""
x0 = np.array([1.,0,0,1])
xs = [x0]
dt=.05
solve(time_step(dt)(airy_field), xs, stopping=lambda i,x: i>100/dt)
len(xs)
fig = plt.figure(figsize=(15,5))
axs = np.array(xs)
plt.plot(axs[:,-2],axs[:,0])
#plot(axs[1900:,-2],axs[1900:,0],marker='.')
"""
Explanation: We solve it with initial condition $x(0) = 1.$, $x'(0) = 0$.
End of explanation
"""
from scipy.special import airy
from scipy.linalg import solve as linsolve
# taken from the DiffMan examples
tstart = 0
m =np.array([airy(tstart)[0], airy(tstart)[2], -airy(tstart)[1], -airy(tstart)[3]]).reshape(2,-1)
c = linsolve(m, np.array([1.,0]))
# Computes the 'true' solution:
ts = np.linspace(90,100,1000)
def exact_airy(ts, c):
return c[0]*np.real(airy(-ts)[0]) + c[1]*np.real(airy(-ts)[2])
plt.figure(figsize=(15,5))
skip=1800
plt.plot(ts,exact_airy(ts,c),label="exact")
plt.plot(axs[skip:,-2],axs[skip:,0],marker='o',linestyle='',label="computed")
plt.legend()
plt.figure(figsize=(15,5))
error = axs[:,0] - exact_airy(axs[:,-2],c)
plt.plot(axs[1:,-2],np.log10(np.abs(error[1:])))
plt.title("$\log_{10}(error)$")
"""
Explanation: We can compute the exact solution using the airy function in scipy.special.
End of explanation
"""
D = np.diag([16.,8.,4.])
A = D
def oja_field(x):
proj = np.dot(x,x.T)
xi = commutator(A,proj)
return xi
"""
Explanation: Stiefel manifold: Oja Flow
Example from Geometric Numerical Integration, § IV.9.2
The Oja flow is given by
\[
Q' = (I - QQ^T)A Q
\]
for a given positive definite matrix $A$.
Using the connection formula
\[
\langle ω,δQ \rangle_Q = δQ Q^T - QδQ^T -QδQ^T Q Q^T
\]
we obtain the following vector field on the Lie algebra:
\[
ξ(Q) = AQQ^T-QQ^TA
\]
End of explanation
"""
def normalize(x):
nx = np.sqrt(np.sum(np.square(x)))
return x/nx
def rand_sphere_point():
u,v = np.random.rand(2)
phi = u*2*np.pi
theta = np.arccos(2*v-1)
sth = np.sin(theta)
return np.array([sth*np.cos(phi), sth*np.sin(phi), np.cos(theta)])
r1 = rand_sphere_point()
r1_ = rand_sphere_point()
r2 = normalize(np.cross(r1,r1_))
x0 = np.array([r1,r2]).T
"""
Explanation: We choose a random starting point. It amounts to choosing two orthonormal vectors, i.e. two orthogonal vectors of length one.
End of explanation
"""
print(np.allclose(np.dot(x0.T,x0), np.identity(2)))
"""
Explanation: Check that the chosen vectors are orthogonal:
End of explanation
"""
print(x0)
xs = [x0]
dt = .1
solve(time_step(dt)(oja_field), xs, lambda i,x: np.allclose(oja_field(x),0,atol=1e-7))
len(xs)
"""
Explanation: Starting value:
End of explanation
"""
xs[-1]
axs = np.array(xs)
fig = plt.figure(figsize=(15,10))
ax = plot_sphere()
ths = np.linspace(0,2*np.pi,200)
plt.plot(np.cos(ths), np.sin(ths), np.zeros_like(ths))
for i in range(2):
for j in [0,-1]:
ax.plot([axs[j,0,i]],[axs[j,1,i]],[axs[j,2,i]],lw=2,marker=['o','D'][j])
ax.plot([0.,axs[j,0,i]],[0,axs[j,1,i]],[0,axs[j,2,i]],color=['black','red'][j])
ax.plot(axs[:,0,i],axs[:,1,i],axs[:,2,i],marker='.')
ax.view_init(30,0)
plt.savefig('oja.pdf')
"""
Explanation: The flow converges towards an invariant subspace. Here it converges towards the subspace containing the two largest eigenvalues:
End of explanation
"""
for i in range(2):
plt.plot(np.log10(np.abs(axs[:,-1,i])),marker='.')
plt.title("log10 of the z coordinate")
"""
Explanation: Check the convergence towards the plane with largest eigenvalues.
End of explanation
"""
|
mcleonard/seekwell | seekwell.ipynb | mit | from seekwell import Database
"""
Explanation: SeekWell
SeekWell is a package for quickly and easily querying SQL databases in Python. It was made with data analysts in mind and plays well with Jupyter notebooks. This notebook is a little tutorial to get you started working with SQL databases in under 5 minutes.
SeekWell is a higher level library built on top of SQLAlchemy, an amazing library that does all the heavy lifting. SeekWell is designed to get out of your way and let you focus on retrieving the data you want from your database.
Query results are retrieved in a lazy manner. That is, they aren't returned until you ask for them. Records are cached once you get them, so you only ever run the query once. SeekWell also provides methods for inspecting the tables and columns in your database.
End of explanation
"""
db = Database('sqlite:///database.sqlite')
"""
Explanation: Connect to the database
SeekWell uses SQLAlchemy underneath to connect to literally any database you can throw at it. You just need the appropriate engine installed, for example, psycopg2 for PostgreSQL. Make sure to check out SQLAlchemy's documentation for connecting to databases.
In SeekWell, you just need to import the Database class and create a new Database object. The path required is defined by SQLAlchemy (link to documentation here).
Here I'll load a database of European soccer matches, teams, and players available from Kaggle Datasets.
End of explanation
"""
db.table_names
"""
Explanation: I've found database introspection to be really useful, that is, listing out tables and columns. When connected to a Database, you can get a list of tables.
End of explanation
"""
table = db['Team']
table
table.column_names
"""
Explanation: And you can get a table to inspect it.
End of explanation
"""
table.head()
"""
Explanation: To check out the data in a table, use the head method to print out the first few rows.
End of explanation
"""
print(table.head())
"""
Explanation: In a notebook, rows are printed out in an HTML table for nice viewing. In the terminal, rows are printed out as an ASCII table.
End of explanation
"""
db.schema_names
"""
Explanation: If you're using a database with schemas, you can get a list of the schema names with db.schema_names.
End of explanation
"""
records = db.query('SELECT * from Player limit 50')
"""
Explanation: Querying
There we go, now you're connected to the database and it's ready to be queried. Data analysts are all about getting data and working with it. Queries are run through the Database object's query method. It accepts a SQL statement as a string and returns a Records object.
End of explanation
"""
records
"""
Explanation: The data isn't returned immediately, only when you request it.
End of explanation
"""
records.fetch(10)
"""
Explanation: Use fetch to get the data. Calling fetch without any arguments will return all the rows. Passing in a number will return that many rows.
End of explanation
"""
records.rows
"""
Explanation: The data is cached in records.rows
End of explanation
"""
records[5:15]
records[-5:]
"""
Explanation: You can get rows using slices too.
End of explanation
"""
records.fetch()
"""
Explanation: Or get all the rows by calling fetch with no arguments...
End of explanation
"""
statement = """
SELECT Match.date,
Match.home_team_goal, Match.away_team_goal,
home_team.team_long_name AS home_team,
away_team.team_long_name AS away_team
FROM Match
JOIN Team AS home_team
ON Match.home_team_api_id=home_team.team_api_id
JOIN Team AS away_team
ON Match.away_team_api_id=away_team.team_api_id
WHERE home_team=:home_team
ORDER BY Match.date ASC
"""
records = db.query(statement, home_team='KRC Genk')
records.fetch()[:20]
"""
Explanation: Your statements can of course be as complex as you want. Using parameters in statements is possible using keyword arguments. This uses the SQLAlchemy text syntax, so read up on it here. Below is an example using :home_team as a parameter to filter for the desired home team in the statement.
End of explanation
"""
records.to_csv('KRC_Genk_games.csv')
df = records.to_pandas()
df
df = df.assign(point_diff=(df['home_team_goal'] - df['away_team_goal']))
df.groupby('away_team')['point_diff'].agg({'wins': lambda x: sum(x>0),
'losses': lambda x: sum(x<0),
'ties': lambda x: sum(x==0)})
"""
Explanation: Exporting
You can export your records as a CSV file or a Pandas DataFrame.
End of explanation
"""
|
ScienceStacks/CellBioControl | Analysis/chemotaxis.ipynb | mit | from IPython.display import Image, display
display(Image(filename='img/receptor_states.png'))
"""
Explanation: Background
Analysis of the Chemotaxis model described by Spiro et al., PNAS, 1999.
The model describes the receptor state along 3 dimensions:
- bound to a ligand
- phosphorylated
- degree of methylation (considers 2, 3, 4)
Key variables are:
- Y, Yp - concentrations of CheY and its phosphorylated form
- B, Bp - concentrations of CheB and its phosphorylated form
- L - ligand concentration
- f<state>, t<state> - "f" indicates the fraction of the concentration in the state, "t" is the total
<state> is a 3 character string, such as "TT2". T/F indicates the boolean value; the last is an integer.
Below is a figure from Spiro describing the state structure of receptors.
Issues
- yaxis labels are not showing
- Overshoot steady state YP
- Not getting correct time response for YP
End of explanation
"""
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import tellurium as te
from chemotaxis_model import ChemotaxisModel, StateAggregationFactory
from data_plotter import DataPlotter
model = ChemotaxisModel()
"""
Explanation: Summary
Initially, we consider a step response. Later, we repeat the same analysis for a ramp. These correspond to the analyses done by Spiro.
Most of the analysis is done using fractional concentrations (variables that begin with "f"). We begin by justifying this.
Next, we analyze the effects of a step response. "Perfect adaptation" is possible if the step is small enough.
Software Dependencies
The model and various utility functions are in the python modules chemotaxis_model and data_plotter.
End of explanation
"""
# This is the templated model
print model.getModel()
"""
Explanation: Antimony Model
End of explanation
"""
# Runs simulation and creates global variables used in analysis
def sim(elements=None,end=400, concentrations={}):
"""
Runs the simulation and creates global names
:param list elements: additions to model
:param int end: simulation end time
:param dict concentrations: key is variable, value is assignment
Output: global variables - plotter, result
"""
if elements is None:
elements = []
global plotter, result, model, rr
model = ChemotaxisModel()
for element in elements:
model.appendToModel(element)
rr = model.initialize()
for k,v in concentrations.items():
rr[k] = v
result = model.run(end=end)
plotter = DataPlotter(model)
# Export the XML
#f_sbml = "chemotaxis.xml"
# export current model state
#rr.exportToSBML(f_sbml)
# to export the initial state when the model
"""
Explanation: Common Codes Used in this Analysis
End of explanation
"""
# Spiro's plot for a step response with L=1mM
from IPython.display import Image, display
display(Image(filename='img/spiro_largestep.png'))
# The solid line is Yp
# The amount of ligand is 1000 times the amount of receptors
sim(elements=["at (time > 100): L = 1e-3"])
plotter.lines(["L", "Yp", "Bp"])
"""
Explanation: Analysis of Step Response
The goal here is to understand the dynamics of the receptor state for a step response.
From the foregoing, we established that it's sufficient to look at the fraction of receptors that are in the phosphorylated state (since this correlates with Yp). Now we want to understand what substates contribute to phosphorylated receptors.
Spiro shows a step response plot with L going from 0 to 11uM and "perfect adaptation" of Yp. We do not see this. Possibly, this is because he used a model with more methylation levels. Below is Spiro's step response curve.
Very Large Step - 1 mM (1,000 times ligand-receptor $K_d$)
End of explanation
"""
from IPython.display import Image, display
display(Image(filename='img/spiro_0.11.png'))
# The solid line is Yp;
sim(elements=["at (time > 10): L = 0.11e-6"], end=50)
plotter.lines(["Yp"])
"""
Explanation: Observations
- The steady state after the large step is about Yp=2.5 uM as opposed to Yp=6uM in Spiro.
Small Step - 0.11 uM (11% of ligand-receptor $K_d$)
End of explanation
"""
sim(elements=["at (time > 100): L = 1.1e-6"])
plotter.lines(["Yp"])
"""
Explanation: Observations
- Gain is correct. Steady state value of Yp = 6e-6
- Time constant of disturbance seems about right
- Magnitude of the disturbance is somewhat less than Spiro (which drops to about 5.5e-6)
Medium Size Step - 1.1 uM (110% of ligand-receptor $K_d$)
End of explanation
"""
sim(elements=["at (time > 100): L = 1.8e-6"])
plotter.lines(["Yp"])
"""
Explanation: Observations
- Gain is correct. Steady state value of Yp = 6e-6
- returns to steady state of 6uM
- Magnitude of the disturbance is larger than adding L=11uM, as expected.
Step Response - 1.8 uM
End of explanation
"""
sim(elements=["at (time > 100): L = 4e-6"])
plotter.lines(["Yp"])
"""
Explanation: Step Response - 4 uM (400% of ligand-receptor $K_d$)
End of explanation
"""
# System is flooded with more ligand than there are receptors. Much more ligand than receptors.
sim(elements=["at (time > 100): L = 11e-6"])
plotter.lines(["Yp"])
"""
Explanation: Step Response - 11 uM (1,100% of ligand-receptor $K_d$; about 138% of the total receptor concentration)
End of explanation
"""
# Breakdown the state phosphorylation state by methylation level
sim(elements=["at (time > 100): L = 11e-6"])
plotter.lines(["f_T__", "f_T_2", "f_T_3", "f_T_4"], yrange=[0,0.05])
"""
Explanation: Detailed Analysis of 11 uM Step
End of explanation
"""
# Analyze the methylation levels
sim(elements=["at (time > 100): L = 11e-6"])
plotter.lines(["f___2", "f___3", "f___4"], yrange=[0,1])
# Analyze the methylation levels for ligand bound receptors
sim(elements=["at (time > 100): L = 4.1e-6"])
plotter.lines(["fT___", "fT__2", "fT__3", "fT__4"], yrange=[0,0.5])
"""
Explanation: Observations
- Phosphorylation level of receptor does not recover after the Ligands are introduced.
- See a shift to higher methylation states
End of explanation
"""
# Breakdown the state phosphorylations by ligand bound
sim(elements=["at (time > 100): L = 11e-6"])
plotter.lines(["f_T__", "fFT__", "fTT__"], yrange=[0, 0.2])
"""
Explanation: Observations
- Large fraction of LT2 means reduced phosphorylation
Questions
- Why doesn't the LT2 methylate?
End of explanation
"""
from IPython.display import Image, display
display(Image(filename='img/spiro_ramp.png'))
# The solid line is Yp; the long dashed line is L
# Ramp analysis for the same conditions as Spiro
elements = ["at (time > 200): k0 = 0.09e-6", "at (time > 300): k0 = 0, L=3e-6"]
sim(elements=elements)
plotter.lines(["L", "Yp"], yrange=[0,7e-6])
"""
Explanation: Observations
- Most of the phosphorylation after the step is in receptors with the ligand bound
Ramp analysis
End of explanation
"""
# Gradually add 1.1uM of ligand
elements = ["at (time > 200): k0 = 0.011e-6", "at (time > 300): k0 = 0, L=0.15e-6"]
sim(elements=elements)
plotter.lines(["L", "Yp"])
# See how much ligand I can add and still get the same steady state response. Added a total of 1.8uM.
elements = ["at (time > 200): k0 = 0.018e-6", "at (time > 300): k0 = 0, L=0.27e-6"]
sim(elements=elements)
plotter.lines(["L", "Yp"])
# Pushing the ramp a bit longer.
elements = ["at (time > 200): k0 = 0.018e-6", "at (time > 400): k0 = 0, L=0.6e-6"]
sim(elements=elements, end=500)
plotter.lines(["L", "Yp"])
"""
Explanation: Observations
- Initial response is good, but then the steady state Yp is much lower than Spiro.
- Am I correctly interpreting how the experiment was done? That is, I see the amount of free L, not the total L in the system, which must consider TL as well.
End of explanation
"""
Kd = 1e-6 # 1 micromolar
TTOT = 8 # 8 micro molars
K = Kd/(TTOT*1e-6)
import numpy as np
def fBound(r, K=K):
"""
Using steady state analysis to compute the fraction of ligand bound to receptors
:param float r: ratio of total L to total T
:param float K: ratio of Kd to total T
:return float: fraction of ligand bound to receptors
"""
result = None
b = -(1 + r + K)
term1 = -b/2
term2 = np.sqrt(b**2-4*r)/2
result1 = term1 - term2
result2 = term1 + term2
if result1 <= 1.0 and result1 >= 0:
result = result1
    if result2 <= 1.0 and result2 >= 0:
        if result is not None:
            raise RuntimeError("Two valid solutions")
        else:
            result = result2  # the larger root is the valid one in this case
if result is None:
raise RuntimeError("No valid solution.")
return result
def evaluateEstimateError(L):
"""
Returns the error of expected fraction of ligand bound compared with the values
obtained from simulation.
"""
sim(elements=["at (time > 100): L = %s" % (L*1e-6)])
actual = model.getVariable("fT___")[-1] # Get the last (steady state) value
expected = fBound(L/TTOT)
return (expected -actual)/actual
evaluateEstimateError(4.1)
evaluateEstimateError(1.1)
evaluateEstimateError(0.1)
"""
Explanation: Ligand Binding Analysis
This section provides an analytical validation of one quantity in the simulation, the fraction of ligand bound receptors. The analysis is based on a simple equilibrium analysis.
- $K_d = 1uM$ is the dissociation constant for the reaction $TL \rightleftharpoons T + L$, where $T$ is the receptor and $L$ is the ligand.
- $r$ is the ratio $\frac{L_{TOT}}{T_{TOT}}$, where $L_{TOT}$ is the total amount of ligand and $T_{TOT}$ is the total number of receptors.
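As a sanity check on the quadratic solved in fBound above (added derivation, following directly from these definitions): write $f$ for the fraction of receptors with ligand bound. Mass conservation together with $K_d = [T][L]/[TL]$ gives $(1-f)(r-f) = K f$ with $K = K_d / T_{TOT}$, i.e.
\[
f^2 - (1 + r + K) f + r = 0,
\]
and fBound returns the root of this quadratic lying in $[0, 1]$.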
End of explanation
"""
|
tensorflow/lattice | docs/tutorials/shape_constraints_for_ethics.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice tensorflow_decision_forests seaborn
"""
Explanation: Shape Constraints for Ethics with Tensorflow Lattice
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints_for_ethics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This tutorial demonstrates how the TensorFlow Lattice (TFL) library can be used
to train models that behave responsibly, and do not violate certain
assumptions that are ethical or fair. In particular, we will focus on using monotonicity constraints to avoid unfair penalization of certain attributes. This tutorial includes demonstrations
of the experiments from the paper
Deontological Ethics By Monotonicity Shape Constraints
by Serena Wang and Maya Gupta, published at
AISTATS 2020.
We will use TFL canned estimators on public datasets, but note that
everything in this tutorial can also be done with models constructed from TFL
Keras layers.
Before proceeding, make sure your runtime has all required packages installed
(as imported in the code cells below).
Setup
Installing TF Lattice package:
End of explanation
"""
import tensorflow as tf
import tensorflow_lattice as tfl
import tensorflow_decision_forests as tfdf
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import sys
import tempfile
logging.disable(sys.maxsize)
"""
Explanation: Importing required packages:
End of explanation
"""
# List of learning rate hyperparameters to try.
# For a longer list of reasonable hyperparameters, try [0.001, 0.01, 0.1].
LEARNING_RATES = [0.01]
# Default number of training epochs and batch sizes.
NUM_EPOCHS = 1000
BATCH_SIZE = 1000
# Directory containing dataset files.
DATA_DIR = 'https://raw.githubusercontent.com/serenalwang/shape_constraints_for_ethics/master'
"""
Explanation: Default values used in this tutorial:
End of explanation
"""
# Load data file.
law_file_name = 'lsac.csv'
law_file_path = os.path.join(DATA_DIR, law_file_name)
raw_law_df = pd.read_csv(law_file_path, delimiter=',')
"""
Explanation: Case study #1: Law school admissions
In the first part of this tutorial, we will consider a case study using the Law
School Admissions dataset from the Law School Admissions Council (LSAC). We will
train a classifier to predict whether or not a student will pass the bar using
two features: the student's LSAT score and undergraduate GPA.
Suppose that the classifier’s score was used to guide law school admissions or
scholarships. According to merit-based social norms, we would expect that
students with higher GPA and higher LSAT score should receive a higher score
from the classifier. However, we will observe that it is easy for models to
violate these intuitive norms, and sometimes penalize people for having a higher
GPA or LSAT score.
To address this unfair penalization problem, we can impose monotonicity
constraints so that a model never penalizes higher GPA or higher LSAT score, all
else equal. In this tutorial, we will show how to impose those monotonicity
constraints using TFL.
Load Law School Data
End of explanation
"""
# Define label column name.
LAW_LABEL = 'pass_bar'
def preprocess_law_data(input_df):
# Drop rows with where the label or features of interest are missing.
output_df = input_df[~input_df[LAW_LABEL].isna() & ~input_df['ugpa'].isna() &
(input_df['ugpa'] > 0) & ~input_df['lsat'].isna()]
return output_df
law_df = preprocess_law_data(raw_law_df)
"""
Explanation: Preprocess dataset:
End of explanation
"""
def split_dataset(input_df, random_state=888):
"""Splits an input dataset into train, val, and test sets."""
train_df, test_val_df = train_test_split(
input_df, test_size=0.3, random_state=random_state)
val_df, test_df = train_test_split(
test_val_df, test_size=0.66, random_state=random_state)
return train_df, val_df, test_df
law_train_df, law_val_df, law_test_df = split_dataset(law_df)
"""
Explanation: Split data into train/validation/test sets
End of explanation
"""
def plot_dataset_contour(input_df, title):
plt.rcParams['font.family'] = ['serif']
g = sns.jointplot(
x='ugpa',
y='lsat',
data=input_df,
kind='kde',
xlim=[1.4, 4],
ylim=[0, 50])
g.plot_joint(plt.scatter, c='b', s=10, linewidth=1, marker='+')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('Undergraduate GPA', 'LSAT score', fontsize=14)
g.fig.suptitle(title, fontsize=14)
# Adust plot so that the title fits.
plt.subplots_adjust(top=0.9)
plt.show()
law_df_pos = law_df[law_df[LAW_LABEL] == 1]
plot_dataset_contour(
law_df_pos, title='Distribution of students that passed the bar')
law_df_neg = law_df[law_df[LAW_LABEL] == 0]
plot_dataset_contour(
law_df_neg, title='Distribution of students that failed the bar')
"""
Explanation: Visualize data distribution
First we will visualize the distribution of the data. We will plot the GPA and
LSAT scores for all students that passed the bar and also for all students that
did not pass the bar.
End of explanation
"""
def train_tfl_estimator(train_df, monotonicity, learning_rate, num_epochs,
batch_size, get_input_fn,
get_feature_columns_and_configs):
"""Trains a TFL calibrated linear estimator.
Args:
train_df: pandas dataframe containing training data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rate: learning rate of Adam optimizer for gradient descent.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
estimator: a trained TFL calibrated linear estimator.
"""
feature_columns, feature_configs = get_feature_columns_and_configs(
monotonicity)
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs, use_bias=False)
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=get_input_fn(input_df=train_df, num_epochs=1),
optimizer=tf.keras.optimizers.Adam(learning_rate))
estimator.train(
input_fn=get_input_fn(
input_df=train_df, num_epochs=num_epochs, batch_size=batch_size))
return estimator
def optimize_learning_rates(
train_df,
val_df,
test_df,
monotonicity,
learning_rates,
num_epochs,
batch_size,
get_input_fn,
get_feature_columns_and_configs,
):
"""Optimizes learning rates for TFL estimators.
Args:
train_df: pandas dataframe containing training data.
val_df: pandas dataframe containing validation data.
test_df: pandas dataframe containing test data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rates: list of learning rates to try.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
A single TFL estimator that achieved the best validation accuracy.
"""
estimators = []
train_accuracies = []
val_accuracies = []
test_accuracies = []
for lr in learning_rates:
estimator = train_tfl_estimator(
train_df=train_df,
monotonicity=monotonicity,
learning_rate=lr,
num_epochs=num_epochs,
batch_size=batch_size,
get_input_fn=get_input_fn,
get_feature_columns_and_configs=get_feature_columns_and_configs)
estimators.append(estimator)
train_acc = estimator.evaluate(
input_fn=get_input_fn(train_df, num_epochs=1))['accuracy']
val_acc = estimator.evaluate(
input_fn=get_input_fn(val_df, num_epochs=1))['accuracy']
test_acc = estimator.evaluate(
input_fn=get_input_fn(test_df, num_epochs=1))['accuracy']
print('accuracies for learning rate %f: train: %f, val: %f, test: %f' %
(lr, train_acc, val_acc, test_acc))
train_accuracies.append(train_acc)
val_accuracies.append(val_acc)
test_accuracies.append(test_acc)
max_index = val_accuracies.index(max(val_accuracies))
return estimators[max_index]
"""
Explanation: Train calibrated linear model to predict bar exam passage
Next, we will train a calibrated linear model from TFL to predict whether or
not a student will pass the bar. The two input features will be LSAT score and
undergraduate GPA, and the training label will be whether the student passed the
bar.
We will first train a calibrated linear model without any constraints. Then, we
will train a calibrated linear model with monotonicity constraints and observe
the difference in the model output and accuracy.
Helper functions for training a TFL calibrated linear estimator
These functions will be used for this law school case study, as well as the
credit default case study below.
End of explanation
"""
def get_input_fn_law(input_df, num_epochs, batch_size=None):
"""Gets TF input_fn for law school models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['ugpa', 'lsat']],
y=input_df['pass_bar'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_law(monotonicity):
"""Gets TFL feature configs for law school models."""
feature_columns = [
tf.feature_column.numeric_column('ugpa'),
tf.feature_column.numeric_column('lsat'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='ugpa',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='lsat',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
"""
Explanation: Helper functions for configuring law school dataset features
These helper functions are specific to the law school case study.
End of explanation
"""
def get_predicted_probabilities(estimator, input_df, get_input_fn):
if isinstance(estimator, tf.estimator.Estimator):
predictions = estimator.predict(
input_fn=get_input_fn(input_df=input_df, num_epochs=1))
return [prediction['probabilities'][1] for prediction in predictions]
else:
return estimator.predict(tfdf.keras.pd_dataframe_to_tf_dataset(input_df))
def plot_model_contour(estimator, input_df, num_keypoints=20):
x = np.linspace(min(input_df['ugpa']), max(input_df['ugpa']), num_keypoints)
y = np.linspace(min(input_df['lsat']), max(input_df['lsat']), num_keypoints)
x_grid, y_grid = np.meshgrid(x, y)
positions = np.vstack([x_grid.ravel(), y_grid.ravel()])
plot_df = pd.DataFrame(positions.T, columns=['ugpa', 'lsat'])
plot_df[LAW_LABEL] = np.ones(len(plot_df))
predictions = get_predicted_probabilities(
estimator=estimator, input_df=plot_df, get_input_fn=get_input_fn_law)
grid_predictions = np.reshape(predictions, x_grid.shape)
plt.rcParams['font.family'] = ['serif']
plt.contour(
x_grid,
y_grid,
grid_predictions,
colors=('k',),
levels=np.linspace(0, 1, 11))
plt.contourf(
x_grid,
y_grid,
grid_predictions,
cmap=plt.cm.bone,
levels=np.linspace(0, 1, 11)) # levels=np.linspace(0,1,8));
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Model score', fontsize=20)
cbar.ax.tick_params(labelsize=20)
plt.xlabel('Undergraduate GPA', fontsize=20)
plt.ylabel('LSAT score', fontsize=20)
"""
Explanation: Helper functions for visualization of trained model outputs
End of explanation
"""
nomon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(nomon_linear_estimator, input_df=law_df)
"""
Explanation: Train unconstrained (non-monotonic) calibrated linear model
End of explanation
"""
mon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(mon_linear_estimator, input_df=law_df)
"""
Explanation: Train monotonic calibrated linear model
End of explanation
"""
feature_names = ['ugpa', 'lsat']
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
hidden_units=[100, 100],
optimizer=tf.keras.optimizers.Adam(learning_rate=0.008),
activation_fn=tf.nn.relu)
dnn_estimator.train(
input_fn=get_input_fn_law(
law_train_df, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS))
dnn_train_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
dnn_val_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
dnn_test_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for DNN: train: %f, val: %f, test: %f' %
(dnn_train_acc, dnn_val_acc, dnn_test_acc))
plot_model_contour(dnn_estimator, input_df=law_df)
"""
Explanation: Train other unconstrained models
We demonstrated that TFL calibrated linear models could be trained to be
monotonic in both LSAT score and GPA without too big of a sacrifice in accuracy.
But, how does the calibrated linear model compare to other types of models, like
deep neural networks (DNNs) or gradient boosted trees (GBTs)? Do DNNs and GBTs
appear to have reasonably fair outputs? To address this question, we will next
train an unconstrained DNN and GBT. In fact, we will observe that the DNN and
GBT both easily violate monotonicity in LSAT score and undergraduate GPA.
Train an unconstrained Deep Neural Network (DNN) model
The architecture was previously optimized to achieve high validation accuracy.
End of explanation
"""
law_train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
law_train_df, label='pass_bar')
law_test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
law_test_df, label='pass_bar')
law_val_ds = tfdf.keras.pd_dataframe_to_tf_dataset(law_val_df, label='pass_bar')
tree_model = tfdf.keras.GradientBoostedTreesModel(
features=[tfdf.keras.FeatureUsage(name=name) for name in feature_names],
exclude_non_specified_features=True,
num_threads=1,
num_trees=20,
max_depth=4,
growing_strategy='BEST_FIRST_GLOBAL',
random_seed=42,
temp_directory=tempfile.mkdtemp(),
)
tree_model.compile(metrics=[tf.keras.metrics.BinaryAccuracy(name='accuracy')])
tree_model.fit(law_train_ds, validation_data=law_val_ds, verbose=0)
tree_train_acc = tree_model.evaluate(law_train_ds, verbose=0)[1]
tree_val_acc = tree_model.evaluate(law_val_ds, verbose=0)[1]
tree_test_acc = tree_model.evaluate(law_test_ds, verbose=0)[1]
print('accuracies for GBT: train: %f, val: %f, test: %f' %
(tree_train_acc, tree_val_acc, tree_test_acc))
plot_model_contour(tree_model, input_df=law_df)
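# Added sketch (not part of the original tutorial): roughly quantify how often
# the unconstrained models give a *lower* score to a higher LSAT with GPA held
# fixed, i.e. violate the monotonicity assumption discussed above.
def count_lsat_monotonicity_violations(estimator, num_gpa=10, num_lsat=50):
  violations, comparisons = 0, 0
  for ugpa in np.linspace(law_df['ugpa'].min(), law_df['ugpa'].max(), num_gpa):
    grid = pd.DataFrame({
        'ugpa': np.full(num_lsat, ugpa),
        'lsat': np.linspace(law_df['lsat'].min(), law_df['lsat'].max(), num_lsat),
    })
    grid[LAW_LABEL] = np.ones(len(grid))
    preds = np.ravel(
        get_predicted_probabilities(
            estimator=estimator, input_df=grid, get_input_fn=get_input_fn_law))
    diffs = np.diff(preds)
    violations += int(np.sum(diffs < 0))
    comparisons += len(diffs)
  return violations, comparisons

for name, est in [('DNN', dnn_estimator), ('GBT', tree_model)]:
  v, c = count_lsat_monotonicity_violations(est)
  print('%s: %d of %d adjacent LSAT pairs scored non-monotonically' % (name, v, c))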
"""
Explanation: Train an unconstrained Gradient Boosted Trees (GBT) model
The tree structure was previously optimized to achieve high validation accuracy.
End of explanation
"""
# Load data file.
credit_file_name = 'credit_default.csv'
credit_file_path = os.path.join(DATA_DIR, credit_file_name)
credit_df = pd.read_csv(credit_file_path, delimiter=',')
# Define label column name.
CREDIT_LABEL = 'default'
"""
Explanation: Case study #2: Credit Default
The second case study that we will consider in this tutorial is predicting an
individual's credit default probability. We will use the Default of Credit Card
Clients dataset from the UCI repository. This data was collected from 30,000
Taiwanese credit card users and contains a binary label of whether or not a user
defaulted on a payment in a time window. Features include marital status,
gender, education, and how long a user is behind on payment of their existing
bills, for each of the months of April-September 2005.
As we did with the first case study, we again illustrate using monotonicity
constraints to avoid unfair penalization: if the model were to be used to
determine a user’s credit score, it could feel unfair to many if they were
penalized for paying their bills sooner, all else equal. Thus, we apply a
monotonicity constraint that keeps the model from penalizing early payments.
Load Credit Default data
End of explanation
"""
credit_train_df, credit_val_df, credit_test_df = split_dataset(credit_df)
"""
Explanation: Split data into train/validation/test sets
End of explanation
"""
def get_agg_data(df, x_col, y_col, bins=11):
xbins = pd.cut(df[x_col], bins=bins)
data = df[[x_col, y_col]].groupby(xbins).agg(['mean', 'sem'])
return data
def plot_2d_means_credit(input_df, x_col, y_col, x_label, y_label):
plt.rcParams['font.family'] = ['serif']
_, ax = plt.subplots(nrows=1, ncols=1)
plt.setp(ax.spines.values(), color='black', linewidth=1)
ax.tick_params(
direction='in', length=6, width=1, top=False, right=False, labelsize=18)
df_single = get_agg_data(input_df[input_df['MARRIAGE'] == 1], x_col, y_col)
df_married = get_agg_data(input_df[input_df['MARRIAGE'] == 2], x_col, y_col)
ax.errorbar(
df_single[(x_col, 'mean')],
df_single[(y_col, 'mean')],
xerr=df_single[(x_col, 'sem')],
yerr=df_single[(y_col, 'sem')],
color='orange',
marker='s',
capsize=3,
capthick=1,
label='Single',
markersize=10,
linestyle='')
ax.errorbar(
df_married[(x_col, 'mean')],
df_married[(y_col, 'mean')],
xerr=df_married[(x_col, 'sem')],
yerr=df_married[(y_col, 'sem')],
color='b',
marker='^',
capsize=3,
capthick=1,
label='Married',
markersize=10,
linestyle='')
leg = ax.legend(loc='upper left', fontsize=18, frameon=True, numpoints=1)
ax.set_xlabel(x_label, fontsize=18)
ax.set_ylabel(y_label, fontsize=18)
ax.set_ylim(0, 1.1)
ax.set_xlim(-2, 8.5)
ax.patch.set_facecolor('white')
leg.get_frame().set_edgecolor('black')
leg.get_frame().set_facecolor('white')
leg.get_frame().set_linewidth(1)
plt.show()
plot_2d_means_credit(credit_train_df, 'PAY_0', 'default',
'Repayment Status (April)', 'Observed default rate')
"""
Explanation: Visualize data distribution
First we will visualize the distribution of the data. We will plot the mean and
standard error of the observed default rate for people with different marital
statuses and repayment statuses. The repayment status represents the number of
months a person is behind on paying back their loan (as of April 2005).
End of explanation
"""
def get_input_fn_credit(input_df, num_epochs, batch_size=None):
"""Gets TF input_fn for credit default models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['MARRIAGE', 'PAY_0']],
y=input_df['default'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_credit(monotonicity):
"""Gets TFL feature configs for credit default models."""
feature_columns = [
tf.feature_column.numeric_column('MARRIAGE'),
tf.feature_column.numeric_column('PAY_0'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='MARRIAGE',
lattice_size=2,
pwl_calibration_num_keypoints=3,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='PAY_0',
lattice_size=2,
pwl_calibration_num_keypoints=10,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
"""
Explanation: Train calibrated linear model to predict credit default rate
Next, we will train a calibrated linear model from TFL to predict whether or
not a person will default on a loan. The two input features will be the person's
marital status and how many months the person is behind on paying back their
loans in April (repayment status). The training label will be whether or not the
person defaulted on a loan.
We will first train a calibrated linear model without any constraints. Then, we
will train a calibrated linear model with monotonicity constraints and observe
the difference in the model output and accuracy.
Helper functions for configuring credit default dataset features
These helper functions are specific to the credit default case study.
End of explanation
"""
def plot_predictions_credit(input_df,
estimator,
x_col,
x_label='Repayment Status (April)',
y_label='Predicted default probability'):
predictions = get_predicted_probabilities(
estimator=estimator, input_df=input_df, get_input_fn=get_input_fn_credit)
new_df = input_df.copy()
new_df.loc[:, 'predictions'] = predictions
plot_2d_means_credit(new_df, x_col, 'predictions', x_label, y_label)
"""
Explanation: Helper functions for visualization of trained model outputs
End of explanation
"""
nomon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, nomon_linear_estimator, 'PAY_0')
"""
Explanation: Train unconstrained (non-monotonic) calibrated linear model
End of explanation
"""
mon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, mon_linear_estimator, 'PAY_0')
"""
Explanation: Train monotonic calibrated linear model
End of explanation
"""
|
lcharleux/numerical_analysis | doc/ODE/ODE.ipynb | gpl-2.0 | tmax = .2
t = np.linspace(0., tmax, 1000)
x0, y0 = 0., 0.
vx0, vy0 = 1., 1.
g = 10.
x = vx0 * t
y = -g * t**2/2. + vy0 * t
fig, ax = plt.subplots()
ax.set_aspect("equal")
plt.plot(x, y, label = "Exact solution")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
"""
Explanation: Ordinary differential equations (ODE)
Scope
Widely used in physics
Closed form solutions only in particular cases
Need for numerical solvers
Ordinary differential equations vs. partial differential equation
Ordinary differential equations (ODE)
Derivatives of the unknown function only with respect to a single variable, time $t$ for example.
Example: the 1D linear oscillator equation
$$
\dfrac{d^2x}{dt^2} + 2 \zeta \omega_0 \dfrac{dx}{dt} + \omega_0^2 x = 0
$$
Partial differential equations (PDE)
Derivatives of the unknown function with respect to several variables, time $t$ and space $(x, y, z)$ for example. Special techniques not introduced in this course need to be used, such as finite difference or finite elements.
Example : the heat equation
$$
\rho C_p \dfrac{\partial T}{\partial t} - k \Delta T + s = 0
$$
Introductory example
Point mass $P$ in free fall.
Required data:
gravity field $\vec g = (0, -g)$,
Mass $m$,
Initial position $P_0 = (0, 0)$
Initial velocity $\vec V_0 = (v_{x0}, v_{y0})$
Problem formulation:
$$
\left\lbrace \begin{align}
\ddot x & = 0\
\ddot y & = -g
\end{align}\right.
$$
Closed form solution
$$
\left\lbrace \begin{align}
x(t) &= v_{x0} t\
y(t) &= -g \frac{t^2}{2} + v_{y0}t
\end{align}\right.
$$
End of explanation
"""
dt = 0.02 # time step
X0 = np.array([0., 0., vx0, vy0])
nt = int(tmax/dt) # number of steps
ti = np.linspace(0., nt * dt, nt)
def derivate(X, t):
return np.array([X[2], X[3], 0., -g])
def Euler(func, X0, t):
dt = t[1] - t[0]
nt = len(t)
X = np.zeros([nt, len(X0)])
X[0] = X0
for i in range(nt-1):
X[i+1] = X[i] + func(X[i], t[i]) * dt
return X
%time X_euler = Euler(derivate, X0, ti)
x_euler, y_euler = X_euler[:,0], X_euler[:,1]
plt.figure()
plt.plot(x, y, label = "Exact solution")
plt.plot(x_euler, y_euler, "or", label = "Euler")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
"""
Explanation: Reformulation
Any ODE can be reformulated as a first-order system of equations. Let's assume that
$$
X = \begin{bmatrix}
X_0 \
X_1 \
X_2 \
X_3 \
\end{bmatrix}
=
\begin{bmatrix}
x \
y \
\dot x \
\dot y \
\end{bmatrix}
$$
As a consequence:
$$
\dot X = \begin{bmatrix}
\dot x \
\dot y \
\ddot x \
\ddot y \
\end{bmatrix}
$$
Then, the initialy second order equation can be reformulated as:
$$
\dot X = f(X, t) =
\begin{bmatrix}
X_2 \
X_3 \
0 \
-g \
\end{bmatrix}
$$
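As a concrete illustration (a sketch using the damped oscillator from the introduction; `omega0` and `zeta` are example values, not taken from this notebook), the same trick turns a second-order ODE into a first-order system ready for a numerical solver:
```python
import numpy as np

omega0, zeta = 2. * np.pi, 0.1   # assumed example parameters

def oscillator(X, t):
    # X = [x, dx/dt]  ->  dX/dt = [dx/dt, d2x/dt2]
    x, v = X
    return np.array([v, -2. * zeta * omega0 * v - omega0**2 * x])
```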
Generic problem
Solving $\dot X = f(X, t)$
Numerical integration of ODE
Generic formulation
$$
\dot X = f(X, t)
$$
approximate solution: need for error estimation
discrete time: $t_0$, $t_1$, $\ldots$
time step $dt = t_{i+1} - t_i$,
Euler method
Intuitive
Fast
Slow convergence
$$
X_{i+1} = X_i + f(X, t_i) dt
$$
End of explanation
"""
def RK4(func, X0, t):
dt = t[1] - t[0]
nt = len(t)
X = np.zeros([nt, len(X0)])
X[0] = X0
for i in range(nt-1):
k1 = func(X[i], t[i])
k2 = func(X[i] + dt/2. * k1, t[i] + dt/2.)
k3 = func(X[i] + dt/2. * k2, t[i] + dt/2.)
k4 = func(X[i] + dt * k3, t[i] + dt)
X[i+1] = X[i] + dt / 6. * (k1 + 2. * k2 + 2. * k3 + k4)
return X
%time X_rk4 = RK4(derivate, X0, ti)
x_rk4, y_rk4 = X_rk4[:,0], X_rk4[:,1]
plt.figure()
plt.plot(x, y, label = "Exact solution")
plt.plot(x_euler, y_euler, "or", label = "Euler")
plt.plot(x_rk4, y_rk4, "gs", label = "RK4")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
"""
Explanation: Runge Kutta 4
Wikipedia
Evolution of the Euler integrator with:
Multiple slope evaluation (4 here),
Well chosen weighting to match simple solutions.
$$
X_{i+1} = X_i + \dfrac{dt}{6}\left(k_1 + 2k_2 + 2k_3 + k_4 \right)
$$
With:
$k_1$ is the increment based on the slope at the beginning of the interval, using $ X $ (Euler's method);
$k_2$ is the increment based on the slope at the midpoint of the interval, using $ X + dt/2 \times k_1 $;
$k_3$ is again the increment based on the slope at the midpoint, but now using $ X + dt/2\times k_2 $;
$k_4$ is the increment based on the slope at the end of the interval, using $ X + dt \times k_3 $.
End of explanation
"""
from scipy import integrate
%time X_odeint = integrate.odeint(derivate, X0, ti)
x_odeint, y_odeint = X_odeint[:,0], X_odeint[:,1]
plt.figure()
plt.plot(x, y, label = "Exact solution")
plt.plot(x_euler, y_euler, "or", label = "Euler")
plt.plot(x_rk4, y_rk4, "gs", label = "RK4")
plt.plot(x_odeint, y_odeint, "mv", label = "ODEint")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
"""
Explanation: Using ODEint
http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.odeint.html
End of explanation
"""
|
mathemage/h2o-3 | h2o-py/demos/H2O_tutorial_breast_cancer_classification.ipynb | apache-2.0 | import h2o
# Start an H2O Cluster on your local machine
h2o.init()
"""
Explanation: H2O Tutorial: Breast Cancer Classification
Author: Erin LeDell
Contact: [email protected]
This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce, through a complete example, H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will include specific call-outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms.
Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai.
Install H2O in Python
Prerequisites
This tutorial assumes you have Python 2.7 installed. The h2o Python package has a few dependencies which can be installed using pip. The packages that are required are (which also have their own dependencies):
```bash
pip install requests
pip install tabulate
pip install scikit-learn
```
If you have any problems (for example, installing the scikit-learn package), check out this page for tips.
Install h2o
Once the dependencies are installed, you can install H2O. We will use the latest stable version of the h2o package, which is called "Tibshirani-3." The installation instructions are on the "Install in Python" tab on this page.
```bash
# The following command removes the H2O module for Python (if it already exists).
pip uninstall h2o

# Next, use pip to install this version of the H2O Python module.
pip install http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/3/Python/h2o-3.6.0.3-py2.py3-none-any.whl
```
Start up an H2O cluster
In a Python terminal, we can import the h2o package and start up an H2O cluster.
End of explanation
"""
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
"""
Explanation: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:
End of explanation
"""
csv_url = "https://h2o-public-test-data.s3.amazonaws.com/smalldata/wisc/wisc-diag-breast-cancer-shuffled.csv"
data = h2o.import_file(csv_url)
"""
Explanation: Download Data
The following code downloads a copy of the Wisconsin Diagnostic Breast Cancer dataset.
We can import the data directly into H2O using the Python API.
End of explanation
"""
data.shape
"""
Explanation: Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame:
End of explanation
"""
data.head()
"""
Explanation: Now let's take a look at the top of the frame:
End of explanation
"""
data.columns
"""
Explanation: The first two columns contain an ID and the response. The "diagnosis" column is the response. Let's take a look at the column names. The data contains derived features from the medical images of the tumors.
End of explanation
"""
columns = ["id", "diagnosis", "area_mean"]
data[columns].head()
"""
Explanation: To select a subset of the columns to look at, typical Pandas indexing applies:
End of explanation
"""
data['diagnosis']
"""
Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely:
End of explanation
"""
data['diagnosis'].unique()
data['diagnosis'].nlevels()
"""
Explanation: It looks like a binary response, but let's validate that assumption:
End of explanation
"""
data['diagnosis'].levels()
"""
Explanation: We can query the categorical "levels" as well ('B' and 'M' stand for "Benign" and "Malignant" diagnosis):
End of explanation
"""
data.isna()
data['diagnosis'].isna()
"""
Explanation: Since "diagnosis" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.
End of explanation
"""
data['diagnosis'].isna().sum()
"""
Explanation: The isna method doesn't directly answer the question, "Does the diagnosis column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a sum equal to 0.0. Let's take a look:
End of explanation
"""
data.isna().sum()
"""
Explanation: Great, no missing labels.
Out of curiosity, let's see if there is any missing data in this frame:
End of explanation
"""
# TO DO: Insert a bar chart or something showing the proportion of M to B in the response.
data['diagnosis'].table()
"""
Explanation: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically.
End of explanation
"""
n = data.shape[0] # Total number of training samples
data['diagnosis'].table()['Count']/n
"""
Explanation: Ok, the data is not exactly evenly distributed between the two classes -- there are almost twice as many Benign samples as there are Malignant samples. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
End of explanation
"""
y = 'diagnosis'
x = data.columns
del x[0:2]  # drop the 'id' and 'diagnosis' columns from the predictor list
x
"""
Explanation: Machine Learning in H2O
We will do a quick demo of the H2O software -- trying to predict malignant tumors using various machine learning algorithms.
Specify the predictor set and response
The response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis').
End of explanation
"""
train, test = data.split_frame(ratios=[0.75], seed=1)
train.shape
test.shape
"""
Explanation: Split H2O Frame into a train and test set
End of explanation
"""
# Import H2O GBM:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
"""
Explanation: Train and Test a GBM model
End of explanation
"""
model = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1)
"""
Explanation: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters.
End of explanation
"""
model.train(x=x, y=y, training_frame=train, validation_frame=test)
"""
Explanation: The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables.
End of explanation
"""
print(model)
"""
Explanation: Inspect Model
The type of results shown when you print a model is determined by the following:
- Model class of the estimator (e.g. GBM, RF, GLM, DL)
- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)
- The data you specify (e.g. training_frame only, training_frame and validation_frame, or training_frame and nfolds)
Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a validation_frame. Since this is a binary classification task, we are shown the relevant performance metrics, which include MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score.
The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF.
Lastly, for tree-based methods (GBM and RF), we also print variable importance.
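If you prefer to pull these pieces out programmatically rather than read the printed summary, the fitted model object exposes accessors for most of them (a quick sketch; exact argument names may differ between H2O versions, so treat these calls as illustrative):
```python
model.auc(valid=True)               # AUC on the validation frame
model.confusion_matrix(valid=True)  # confusion matrix at the max-F1 threshold
model.varimp()                      # variable importances
```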
End of explanation
"""
perf = model.model_performance(test)
perf.auc()
"""
Explanation: Model Performance on a Test Set
Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we passed the test set as the validation_frame in training, so we have technically already created test set predictions and performance.
However, when performing model selection over a variety of model parameters, it is common for users to break their dataset into three pieces: Training, Validation and Test.
After training a variety of models using different parameters (and evaluating them on a validation set), the user may choose a single model and then evaluate model performance on a separate test set. This is when the model_performance method, shown below, is most useful.
End of explanation
"""
cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1,
nfolds=5)
cvmodel.train(x=x, y=y, training_frame=data)
"""
Explanation: Cross-validated Performance
To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row.
Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument.
When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which we call data.
End of explanation
"""
ntrees_opt = [5,50,100]
max_depth_opt = [2,3,5]
learn_rate_opt = [0.1,0.2]
hyper_params = {'ntrees': ntrees_opt,
'max_depth': max_depth_opt,
'learn_rate': learn_rate_opt}
"""
Explanation: Grid Search
One way of evaluting models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:
- ntrees: Number of trees
- max_depth: Maximum depth of a tree
- learn_rate: Learning rate in the GBM
We will define a grid as follows:
End of explanation
"""
from h2o.grid.grid_search import H2OGridSearch
gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params)
"""
Explanation: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters:
End of explanation
"""
gs.train(x=x, y=y, training_frame=train, validation_frame=test)
"""
Explanation: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
End of explanation
"""
print(gs)
# print out the auc for all of the models
for g in gs:
print(g.model_id + " auc: " + str(g.auc()))
#TO DO: Compare grid search models
"""
Explanation: Compare Models
End of explanation
"""
|
agile-geoscience/gio | docs/userguide_src/_Gridding_a_bunch_of_xy_points.ipynb | apache-2.0 | from bruges.transform import CoordTransform
corner_ix = [[0, 0], [0, 3], [3, 0]]
corner_xy = [[5000, 6000],
[5000-23.176, 6000+71.329],
[5000+142.658, 6000+46.353]]
transform = CoordTransform(corner_ix, corner_xy)
for i in range(4):
for j in range(4):
print(transform([i, j]))
import pandas as pd
import xarray as xr
df = pd.read_csv('data.csv')
df = df.set_index(['iline', 'xline'])
da = xr.DataArray.from_series(df.z)
da.plot()
"""
Explanation: Gridding a bunch of x, y points
Often in earth science we'd like to represent some spatial data with a sampled, discretized, regular grid. You might call this a raster, a surface, a horizon, or just a grid. Or something else. Here's a very small one:
3 4 7 9
2 3 7 8
1 3 8
0 2 5 8
You can think of this as a map of heights (or depths, or temperatures, or anything). It has one missing value; we'll get to that.
Notice that there are no coordinates as such, just implicit 'row' and 'column' coordinates. So the value 4 is at (0, 1) for example, and the 5 is at (3, 2). That might seem a bit backwards: we'd normally give the 'x' coordinate first, then the 'y', but this is how matrices are indexed. People conventionally call these row, column indices i and j respectively. They will look like this:
j 0 1 2 3
i 0 3 4 7 9
1 2 3 7 8
2 1 3 NaN 8
3 0 2 5 8
We can represent a grid like this as a numpy array. Notice that we don't need to have every sample — one or more might be missing. In their place, we'd have NaNs, not-a-numbers, which are null values that won't be plotted and will generally be ignored.
Adding real-world coordinates
Very often, we'd like to refer to positions on the grid with real-world coordinates of some kind. For example, if this grid is a map, I might have UTMx and UTMy coordinates. Or, if it's a seismic horizon, I might have inline and crossline numbers as well as UTMx and UTMy. In that case, xarray is useful because I can label the rows and columns with these real-world coordinates:
100 101 102 103
306 3 4 7 9
304 2 3 7 8
302 1 3 NaN 8
300 0 2 5 8
Notice that the numbering of the vertical axis runs opposite to the i index we used before: it doesn't start at 0 or 1, and it changes in steps of 2. That all just depends on how the real-world coordinates were assigned. They are arbitrary.
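Because the labels travel with the array, lookups can use them directly. For example (assuming the DataArray da built from data.csv in the accompanying cell, and that the file holds the small example grid, so the value shown is only illustrative):
```python
# Select a single cell by its real-world (iline, xline) labels.
da.sel(iline=302, xline=101)   # -> 3.0 for the example grid

# Or pull out a whole inline as a 1D slice.
da.sel(iline=304)
```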
The data is the same, but now I can index into this grid using my real-world coordinates, which is much more convenient than having to know the (i, j) indices. But that's far from being the only useful thing about xarray. For example, imagine a text representation of this array. It might look something like this:
iline,xline,z
306,100,3
306,101,4
306,102,7
306,103,9
304,100,2
304,101,3
304,102,7
304,103,8
302,100,1
302,101,3
302,103,8
300,100,0
300,101,2
300,102,5
300,103,8
Notice that one position, (302, 102) is missing: that's the NaN. This is a bit of a headache to load into NumPy, because I can't just read the data elements and reshape — plus I have to throw away the coordinates. But with xarray, I can do this:
End of explanation
"""
from bruges.transform import CoordTransform
corner_ix = [[0, 0], [0, 3], [3, 0]]
corner_xy = [[5000, 6000],
[5000-23.176, 6000+71.329],
[5000+142.658, 6000+46.353]]
transform = CoordTransform(corner_ix, corner_xy)
for i in range(4):
for j in range(4):
print(transform([i, j]))
import numpy as np

# the small example grid from the text above, as a plain numpy array
# with one missing value represented by NaN
arr = np.array([[3., 4., 7., 9.],
                [2., 3., 7., 8.],
                [1., 3., np.nan, 8.],
                [0., 2., 5., 8.]])
arr
"""
Explanation: Adding (x, y) coordinates
Often I have UTMx and UTMy coordinates for the map — maybe just for one or more of its corner points, or maybe for every cell. If I have 3 corners, I can compute the (x, y) location of every cell, assuming the cell spacing is regular. If I have fewer than 3 corners, I will need to know the cell spacing in both directions, and the angle of one of the axes with respect to north.
I can attach (x, y) coordinates to the xarray, but there's a catch. Unless the grid is exactly aligned with north, I will need an (x, y) pair at every cell, because the rows won't line up (i.e. the x-coordinates of the second row cells will be different from the x-coordinates of the first row cells). No problem, xarray can handle this.
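A minimal sketch of what that can look like (assuming the CoordTransform transform from the accompanying cell maps (row, column) indices of da to (x, y); that index ordering is an assumption, not something established in this notebook):
```python
import numpy as np

ni, nj = da.shape
xy = np.array([[transform([i, j]) for j in range(nj)] for i in range(ni)])

da_xy = da.assign_coords(
    x=(('iline', 'xline'), xy[..., 0]),
    y=(('iline', 'xline'), xy[..., 1]),
)
```
Each cell then carries its own x and y value, so the rows no longer need to share coordinates.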
End of explanation
"""
|
streettraffic/streettraffic | streettraffic/research/multiple_routes_analysis/Multiple_routes_analysis.ipynb | mit | ## import system module
import json
import rethinkdb as r
import time
import datetime as dt
import asyncio
from shapely.geometry import Point, Polygon
import random
import pandas as pd
import os
import matplotlib.pyplot as plt
## import custom module
from streettraffic.server import TrafficServer
from streettraffic.predefined.cities import San_Francisco_polygon
settings = {
'app_id': 'F8aPRXcW3MmyUvQ8Z3J9', # this is where you put your App ID from here.com
'app_code' : 'IVp1_zoGHdLdz0GvD_Eqsw', # this is where you put your App Code from here.com
'map_tile_base_url': 'https://1.traffic.maps.cit.api.here.com/maptile/2.1/traffictile/newest/normal.day/',
'json_tile_base_url': 'https://traffic.cit.api.here.com/traffic/6.2/flow.json?'
}
## initialize traffic server
server = TrafficServer(settings)
"""
Explanation: Multiple Routes Analysis
In this section, we are trying to answer a very interesting question: within a city, do different routes experience the same traffic pattern?
First let's import our modules
End of explanation
"""
def get_random_point_in_polygon(poly):
(minx, miny, maxx, maxy) = poly.bounds
while True:
p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
if poly.contains(p):
return p
atlanta_polygon = Polygon([[33.658529, -84.471782], [33.667928, -84.351730], [33.883809, -84.347570], [33.855681, -84.469405]])
sample_points = []
for i in range(100):
point_in_poly = get_random_point_in_polygon(atlanta_polygon)
sample_points += [[point_in_poly.x, point_in_poly.y]]
print(server.traffic_data.format_list_points_for_display(sample_points))
"""
Explanation: Generate Random Routes
In order to accomplish this goal, we need to have a function that generates random points within some geospatial region. Hence the function get_random_point_in_polygon is created
End of explanation
"""
sample_route_count = 2
route_obj_collection = []
for i in range(sample_route_count):
point_in_poly1 = get_random_point_in_polygon(atlanta_polygon)
point_in_poly2 = get_random_point_in_polygon(atlanta_polygon)
route_obj_collection += [[
{
"lat": point_in_poly1.x,
"lng": point_in_poly1.y
},
{
"lat": point_in_poly2.x,
"lng": point_in_poly2.y
}
]]
route_obj_collection_json = json.dumps(route_obj_collection)
print(route_obj_collection_json)
"""
Explanation: Now we simply copy the text above and go to https://www.darrinward.com/lat-long/ for plotting. The result would look like the following picture.
Now that we know we can generate random points, let's generate random routes. With sample_route_count = 2, we create 2 random routes, each defined by a random origin and destination.
End of explanation
"""
# load the test.json
with open('test.json') as f:
route_traffic_pattern_collection = json.load(f)
# create a function that takes an overview_path and generate the distance
def overview_path_distance(overview_path):
"""
    Compute the total length (in meters) of an overview_path by summing the distances between consecutive points.
"""
distance = 0
for i in range(len(overview_path)-1):
point1 = overview_path[i]
point2 = overview_path[i+1]
distance += server.util.get_distance([point1['lat'], point1['lng']], [point2['lat'], point2['lng']])
return distance
# now we build the dataframe
df = pd.DataFrame(index = [json.dumps(item['origin_destination']) for item in route_traffic_pattern_collection])
df['distance (in meters)'] = [overview_path_distance(item['route']['routes'][0]['overview_path']) for item in route_traffic_pattern_collection]
for i in range(len(route_traffic_pattern_collection[0]['chartLabel'])):
df[route_traffic_pattern_collection[0]['chartLabel'][i]] = [item['chartData'][i] for item in route_traffic_pattern_collection]
df.sort_values(by='distance (in meters)')
df
# remove the 'distance (in meters)' column and then we can do analysis
del df['distance (in meters)']
df
# Now we can do all sorts of fun things with it.
# feel free to uncomment the following statements and see various possibilities
#print(df.mean(axis=1))
#print(df.std())
#print(df.median())
# for each route, give me the mean Jamming Factor of all the instant (2:00:00 PM, 2:30:00 PM, ..., 4:30:00 PM)
df.mean(axis=1)
"""
Explanation: Use the web UI
Copy the above result and paste it in the web UI at /#/Main/RouteLab
Then, select a time interval, date interval and click query data. You may go to the Rethinkdb Web UI to make sure your query is actually getting executed.
When it's done, simply click the COPY TO CLIPBOARD button to copy the result.
Then create a test.json file and store it in the same directory as this file.
Analyzing data
First load the json, and build an appropriate dataframe for it.
End of explanation
"""
## For reproducibility, we executed the following script and store
## route_obj_collection_json
# sample_route_count = 100
# route_obj_collection = []
# for i in range(sample_route_count):
# point_in_poly1 = get_random_point_in_polygon(atlanta_polygon)
# point_in_poly2 = get_random_point_in_polygon(atlanta_polygon)
# route_obj_collection += [[
# {
# "lat": point_in_poly1.x,
# "lng": point_in_poly1.y
# },
# {
# "lat": point_in_poly2.x,
# "lng": point_in_poly2.y
# }
# ]]
# route_obj_collection_json = json.dumps(route_obj_collection)
with open('route_obj_collection_json.json') as f:
route_obj_collection_json = json.load(f)
## after copying and pasting route_obj_collection_json into the WEB UI, getting the results and loading
## them into route_traffic_pattern_collection, we get this:
with open('route_traffic_pattern_collection.json') as f:
route_traffic_pattern_collection = json.load(f)
df = pd.DataFrame(index = [json.dumps(item['origin_destination']) for item in route_traffic_pattern_collection])
df['distance (in meters)'] = [overview_path_distance(item['route']['routes'][0]['overview_path']) for item in route_traffic_pattern_collection]
for i in range(len(route_traffic_pattern_collection[0]['chartLabel'])):
df[route_traffic_pattern_collection[0]['chartLabel'][i]] = [item['chartData'][i] for item in route_traffic_pattern_collection]
df2 = df.sort_values(by='distance (in meters)')
df2
# The following graph shows, on average, what the Jamming Factor is throughout 24 hours for those 20 routes.
df3 = df2[-20:]
del df3['distance (in meters)']
df3.mean(axis=1).plot()
plt.show()
# The following graph extracts the worst jamming factor of each route
df4 = df2[-20:]
del df4['distance (in meters)']
df4.max(axis=1).plot()
plt.show()
"""
Explanation: Be Bold and try 100 routes
End of explanation
"""
|
gnestor/jupyter-renderers | notebooks/nteract/pandas-to-geojson.ipynb | bsd-3-clause | import pandas as pd, requests, json
"""
Explanation: Convert a pandas dataframe to geojson for web-mapping
Author: Geoff Boeing
Original: pandas-to-geojson
End of explanation
"""
# API endpoint for city of Berkeley's 311 calls
endpoint_url = 'https://data.cityofberkeley.info/resource/k489-uv4i.json?$limit=20'
# fetch the URL and load the data
response = requests.get(endpoint_url)
data = response.json()
"""
Explanation: First download data from the city of Berkeley's API. You can use Socrata's $limit parameter to specify how many rows to grab (otherwise the default is 1,000 rows of data): https://dev.socrata.com/docs/paging.html
Example request: https://data.cityofberkeley.info/resource/k489-uv4i.json?$limit=5
End of explanation
"""
# turn the json data into a dataframe and see how many rows and what columns we have
df = pd.DataFrame(data)
print('We have {} rows'.format(len(df)))
str(df.columns.tolist())
# convert lat-long to floats and change address from ALL CAPS to regular capitalization
df['latitude'] = df['latitude'].astype(float)
df['longitude'] = df['longitude'].astype(float)
df['street_address'] = df['street_address'].str.title()
# we don't need all those columns - only keep useful ones
cols = ['issue_description', 'issue_type', 'latitude', 'longitude', 'street_address', 'ticket_status']
df_subset = df[cols]
# drop any rows that lack lat/long data
df_geo = df_subset.dropna(subset=['latitude', 'longitude'], axis=0, inplace=False)
print('We have {} geotagged rows'.format(len(df_geo)))
df_geo.tail()
# what is the distribution of issue types?
df_geo['issue_type'].value_counts()
"""
Explanation: Next, turn the json data into a dataframe and clean it up a bit: drop unnecessary columns and any rows that lack lat-long data. We want to make our json file as small as possible (prefer under 5 mb) so that it can be loaded over the Internet to anyone viewing your map, without taking forever to download a huge file.
End of explanation
"""
def df_to_geojson(df, properties, lat='latitude', lon='longitude'):
# create a new python dict to contain our geojson data, using geojson format
geojson = {'type':'FeatureCollection', 'features':[]}
# loop through each row in the dataframe and convert each row to geojson format
for _, row in df.iterrows():
# create a feature template to fill in
feature = {'type':'Feature',
'properties':{},
'geometry':{'type':'Point',
'coordinates':[]}}
# fill in the coordinates
feature['geometry']['coordinates'] = [row[lon],row[lat]]
# for each column, get the value and add it as a new feature property
for prop in properties:
feature['properties'][prop] = row[prop]
# add this feature (aka, converted dataframe row) to the list of features inside our dict
geojson['features'].append(feature)
return geojson
cols = ['street_address', 'issue_description', 'issue_type', 'ticket_status']
geojson = df_to_geojson(df_geo, cols)
"""
Explanation: Finally, convert each row in the dataframe to a geojson-formatted feature and save the result as a file. The format is pretty simple and you can see it here: http://geojson.org/
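If you also want to write the result to disk as a .geojson file once the geojson dict has been built (a small sketch; the filename is just an example):
```python
with open('berkeley_311.geojson', 'w') as f:
    json.dump(geojson, f, indent=2)
```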
End of explanation
"""
import IPython
IPython.display.display({'application/geo+json': geojson}, raw=True)
"""
Explanation: In nteract, we can display geojson directly with the built-in leaflet renderer.
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Miscellaneous/Topic Modelling using LDA.ipynb | mit | from sklearn.datasets import fetch_20newsgroups
dataset = fetch_20newsgroups(shuffle=True, random_state=1, remove=('headers', 'footers', 'quotes'))
documents = dataset.data
"""
Explanation: Topic Modelling using LDA
<hr>
Latent Dirichlet Allocation (LDA) is an algorithm used to discover the topics that are present in a corpus. Non-negative Matrix Factorization (NMF) can also be used to find topics in text. The mathematical basis underpinning NMF is quite different from LDA. However, if you experiment, NMF sometimes produces more meaningful topics for smaller datasets.
How do LDA and NMF work?
Both algorithms are able to return the documents that belong to a topic in a corpus and the words that belong to a topic. LDA is based on probabilistic graphical modeling while NMF relies on linear algebra. Both algorithms take as input a bag of words matrix (i.e., each document represented as a row, with each column containing the count of a word in that document). The aim of each algorithm is then to produce 2 smaller matrices: a document-to-topic matrix and a word-to-topic matrix that, when multiplied together, reproduce the bag of words matrix with the lowest error.
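As a rough sketch of that factorization idea (with made-up dimensions, not the dataset used below):
```python
import numpy as np
from sklearn.decomposition import NMF

V = np.random.rand(100, 1000)            # bag-of-words: 100 docs x 1000 terms
nmf = NMF(n_components=20, init='nndsvd', random_state=1)
W = nmf.fit_transform(V)                 # document-to-topic matrix (100 x 20)
H = nmf.components_                      # topic-to-word matrix    (20 x 1000)
print(np.linalg.norm(V - np.dot(W, H)))  # reconstruction error being minimised
```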
How many topics?
Well that is the question! Both NMF and LDA are not able to automatically determine the number of topics and this must be specified.
Dataset Preprocessing
Here we perform topic modelling on the 20 Newsgroups dataset. It is easy to interpret and load in Scikit Learn. The dataset is easy to interpret because the 20 Newsgroups are known and the generated topics can be compared to the known topics being discussed. Headers, footers and quotes are excluded from the dataset.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
no_features = 1000
# NMF is able to use tf-idf
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=no_features, stop_words='english')
tfidf = tfidf_vectorizer.fit_transform(documents)
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
# LDA can only use raw term counts for LDA because it is a probabilistic graphical model
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=no_features, stop_words='english')
tf = tf_vectorizer.fit_transform(documents)
tf_feature_names = tf_vectorizer.get_feature_names()
"""
Explanation: The creation of the bag of words matrix is very easy in Scikit Learn — all the heavy lifting is done by the feature extraction functionality provided for text datasets. The TfidfVectorizer applies a tf-idf weighting to the bag of words matrix that NMF will process. LDA, on the other hand, being a probabilistic graphical model (i.e. dealing with probabilities), only requires raw counts, so a CountVectorizer is used. Stop words are removed and the number of terms included in the bag of words matrix is restricted to the top 1000.
End of explanation
"""
from sklearn.decomposition import NMF, LatentDirichletAllocation
no_topics = 20
# Run NMF
nmf = NMF(n_components=no_topics, random_state=1, alpha=.1, l1_ratio=.5, init='nndsvd').fit(tfidf)
# Run LDA
lda = LatentDirichletAllocation(n_topics=no_topics, max_iter=5, learning_method='online', learning_offset=50.,random_state=0).fit(tf)
"""
Explanation: NMF and LDA with Scikit Learn
As mentioned previously the algorithms are not able to automatically determine the number of topics and this value must be set when running the algorithm. Comprehensive documentation on available parameters is available for both NMF and LDA. Initialising the W and H matrices in NMF with ‘nndsvd’ rather than random initialisation improves the time it takes for NMF to converge. LDA can also be set to run in either batch or online mode.
End of explanation
"""
def display_topics(model, feature_names, no_top_words):
for topic_idx, topic in enumerate(model.components_):
print ("Topic %d:" % (topic_idx))
print (" ".join([feature_names[i]
for i in topic.argsort()[:-no_top_words - 1:-1]]))
no_top_words = 10
display_topics(nmf, tfidf_feature_names, no_top_words)
"""
Explanation: Displaying and Evaluating Topics
The structure of the resulting matrices returned by both NMF and LDA is the same and the Scikit Learn interface to access the returned matrices is also the same. This is great and allows for a common Python method that is able to display the top words in a topic. Topics are not labeled by the algorithm — a numeric index is assigned.
End of explanation
"""
display_topics(lda, tf_feature_names, no_top_words)
"""
Explanation: This was using NMF
And now using LDA:
End of explanation
"""
|
michrawson/nyu_ml_lectures | notebooks/02.3 Unsupervised Learning - Transformations and Dimensionality Reduction.ipynb | cc0-1.0 | import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
print(X.shape)
"""
Explanation: Unsupervised Learning
Many instances of unsupervised learning, such as dimensionality reduction, manifold learning and feature extraction, find a new representation of the input data without any additional input.
<img src="figures/unsupervised_workflow.svg" width="100%">
The simplest example of this, which can barely be called learning, is rescaling the data to have zero mean and unit variance. This is a helpful preprocessing step for many machine learning models.
Applying such a preprocessing step has a very similar interface to the supervised learning algorithms we have seen so far.
Let's load the iris dataset and rescale it:
End of explanation
"""
print("mean : %s " % X.mean(axis=0))
print("standard deviation : %s " % X.std(axis=0))
"""
Explanation: The iris dataset is not "centered", that is, it has non-zero mean and the standard deviation is different for each component:
End of explanation
"""
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
"""
Explanation: To use a preprocessing method, we first import the estimator, here StandardScaler and instantiate it:
End of explanation
"""
scaler.fit(X)
"""
Explanation: As with the classification and regression algorithms, we call fit to learn the model from the data. As this is an unsupervised model, we only pass X, not y. This simply estimates mean and standard deviation.
End of explanation
"""
X_scaled = scaler.transform(X)
"""
Explanation: Now we can rescale our data by applying the transform (not predict) method:
End of explanation
"""
print(X_scaled.shape)
print("mean : %s " % X_scaled.mean(axis=0))
print("standard deviation : %s " % X_scaled.std(axis=0))
"""
Explanation: X_scaled has the same number of samples and features, but the mean was subtracted and all features were scaled to have unit standard deviation:
End of explanation
"""
rnd = np.random.RandomState(5)
X_ = rnd.normal(size=(300, 2))
X_blob = np.dot(X_, rnd.normal(size=(2, 2))) + rnd.normal(size=2)
y = X_[:, 0] > 0
plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30)
plt.xlabel("feature 1")
plt.ylabel("feature 2")
"""
Explanation: Principal Component Analysis
An unsupervised transformation that is somewhat more interesting is Principal Component Analysis (PCA).
It is a technique to reduce the dimensionality of the data, by creating a linear projection.
That is, we find new features to represent the data that are a linear combination of the old data (i.e. we rotate it).
The way PCA finds these new directions is by looking for the directions of maximum variance.
Usually only a few components that explain most of the variance in the data are kept. To illustrate what a rotation might look like, we first show it on two-dimensional data and keep both principal components.
We create a Gaussian blob that is rotated:
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA()
"""
Explanation: As always, we instantiate our PCA model. By default all directions are kept.
End of explanation
"""
pca.fit(X_blob)
"""
Explanation: Then we fit the PCA model with our data. As PCA is an unsupervised algorithm, there is no output y.
End of explanation
"""
X_pca = pca.transform(X_blob)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30)
plt.xlabel("first principal component")
plt.ylabel("second principal component")
"""
Explanation: Then we can transform the data, projected on the principal components:
End of explanation
"""
from figures.plot_digits_datasets import digits_plot
digits_plot()
"""
Explanation: On the left of the plot you can see the four points that were on the top right before. PCA found the first component to be along the diagonal, and the second to be perpendicular to it. As PCA finds a rotation, the principal components are always at right angles to each other.
Dimensionality Reduction for Visualization with PCA
Consider the digits dataset. It cannot be visualized in a single 2D plot, as it has 64 features. We are going to extract 2 dimensions to visualize it in, using the example from the sklearn examples here
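If you would rather build this projection by hand instead of using the canned digits_plot() helper used in this section, a minimal sketch looks like this:
```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()
X_proj = PCA(n_components=2).fit_transform(digits.data)

plt.scatter(X_proj[:, 0], X_proj[:, 1], c=digits.target)
plt.xlabel("first principal component")
plt.ylabel("second principal component")
```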
End of explanation
"""
from sklearn.datasets import make_s_curve
X, y = make_s_curve(n_samples=1000)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.view_init(10, -60)
"""
Explanation: Note that this projection was determined without any information about the
labels (represented by the colors): this is the sense in which the learning
is unsupervised. Nevertheless, we see that the projection gives us insight
into the distribution of the different digits in parameter space.
Manifold Learning
One weakness of PCA is that it cannot detect non-linear features. A set
of algorithms known as Manifold Learning have been developed to address
this deficiency. A canonical dataset used in Manifold learning is the
S-curve, which we briefly saw in an earlier section:
End of explanation
"""
X_pca = PCA(n_components=2).fit_transform(X)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
"""
Explanation: This is a 2-dimensional dataset embedded in three dimensions, but it is embedded
in such a way that PCA cannot discover the underlying data orientation:
End of explanation
"""
from sklearn.manifold import Isomap
iso = Isomap(n_neighbors=15, n_components=2)
X_iso = iso.fit_transform(X)
plt.scatter(X_iso[:, 0], X_iso[:, 1], c=y)
"""
Explanation: Manifold learning algorithms, however, available in the sklearn.manifold
submodule, are able to recover the underlying 2-dimensional manifold:
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits(5)
X = digits.data
# ...
"""
Explanation: Exercise
Compare the results of Isomap and PCA on a 5-class subset of the digits dataset (load_digits(5)).
Bonus: Also compare to TSNE, another popular manifold learning technique.
End of explanation
"""
|
tombstone/models | official/colab/fine_tuning_bert.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
!pip install -q tf-nightly
!pip install -q tf-models-nightly
"""
Explanation: Fine-tuning a BERT model
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/official_models/tutorials/fine_tune_bert.ipynb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/official/colab/fine_tuning_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/official/colab/fine_tuning_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/models/official/colab/fine_tuning_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this example, we will work through fine-tuning a BERT model using the tensorflow-models PIP package.
The pretrained BERT model this tutorial is based on is also available on TensorFlow Hub, to see how to use it refer to the Hub Appendix
Setup
Install the TensorFlow Model Garden pip package
tf-models-nightly is the nightly Model Garden package created daily automatically.
pip will install all models and dependencies automatically.
End of explanation
"""
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
# Load the required submodules
import official.nlp.optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
"""
Explanation: Imports
End of explanation
"""
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12"
tf.io.gfile.listdir(gs_folder_bert)
"""
Explanation: Resources
This directory contains the configuration, vocabulary, and a pre-trained checkpoint used in this tutorial:
End of explanation
"""
hub_url_bert = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2"
"""
Explanation: You can get a pre-trained BERT encoder from TensorFlow Hub here:
End of explanation
"""
glue, info = tfds.load('glue/mrpc', with_info=True,
# It's small, load the whole dataset
batch_size=-1)
list(glue.keys())
"""
Explanation: The data
For this example we used the GLUE MRPC dataset from TFDS.
This dataset is not set up so that it can be directly fed into the BERT model, so this section also handles the necessary preprocessing.
Get the dataset from TensorFlow Datasets
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
Number of labels: 2.
Size of training dataset: 3668.
Size of evaluation dataset: 408.
Maximum sequence length of training and evaluation dataset: 128.
End of explanation
"""
info.features
"""
Explanation: The info object describes the dataset and it's features:
End of explanation
"""
info.features['label'].names
"""
Explanation: The two classes are:
End of explanation
"""
glue_train = glue['train']
for key, value in glue_train.items():
print(f"{key:9s}: {value[0].numpy()}")
"""
Explanation: Here is one example from the training set:
End of explanation
"""
# Set up tokenizer to generate Tensorflow dataset
tokenizer = bert.tokenization.FullTokenizer(
vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
do_lower_case=True)
print("Vocab size:", len(tokenizer.vocab))
"""
Explanation: The BERT tokenizer
To fine tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as you used during training.
The BERT tokenizer used in this tutorial is written in pure Python (It's not built out of TensorFlow ops). So you can't just plug it into your model as a keras.layer like you can with preprocessing.TextVectorization.
The following code rebuilds the tokenizer that was used by the base model:
End of explanation
"""
tokens = tokenizer.tokenize("Hello TensorFlow!")
print(tokens)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
"""
Explanation: Tokenize a sentence:
End of explanation
"""
tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
"""
Explanation: Preprocess the data
The section manually preprocessed the dataset into the format expected by the model.
This dataset is small, so preprocessing can be done quickly and easily in memory. For larger datasets the tf_models library includes some tools for preprocessing and re-serializing a dataset. See Appendix: Re-encoding a large dataset for details.
Encode the sentences
The model expects its two inputs sentences to be concatenated together. This input is expected to start with a [CLS] "This is a classification problem" token, and each sentence should end with a [SEP] "Separator" token:
End of explanation
"""
def encode_sentence(s):
tokens = list(tokenizer.tokenize(s.numpy()))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
sentence1 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence1"]])
sentence2 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence2"]])
print("Sentence1 shape:", sentence1.shape.as_list())
print("Sentence2 shape:", sentence2.shape.as_list())
"""
Explanation: Start by encoding all the sentences while appending a [SEP] token, and packing them into ragged-tensors:
End of explanation
"""
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
_ = plt.pcolormesh(input_word_ids.to_tensor())
"""
Explanation: Now prepend a [CLS] token, and concatenate the ragged tensors to form a single input_word_ids tensor for each example. RaggedTensor.to_tensor() zero pads to the longest sequence.
End of explanation
"""
input_mask = tf.ones_like(input_word_ids).to_tensor()
plt.pcolormesh(input_mask)
"""
Explanation: Mask and input type
The model expects two additional inputs:
The input mask
The input type
The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids is not padding.
End of explanation
"""
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor()
plt.pcolormesh(input_type_ids)
"""
Explanation: The "input type" also has the same shape, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of.
End of explanation
"""
def encode_sentence(s, tokenizer):
tokens = list(tokenizer.tokenize(s))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
def bert_encode(glue_dict, tokenizer):
num_examples = len(glue_dict["sentence1"])
sentence1 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence1"])])
sentence2 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence2"])])
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
input_mask = tf.ones_like(input_word_ids).to_tensor()
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat(
[type_cls, type_s1, type_s2], axis=-1).to_tensor()
inputs = {
'input_word_ids': input_word_ids.to_tensor(),
'input_mask': input_mask,
'input_type_ids': input_type_ids}
return inputs
glue_train = bert_encode(glue['train'], tokenizer)
glue_train_labels = glue['train']['label']
glue_validation = bert_encode(glue['validation'], tokenizer)
glue_validation_labels = glue['validation']['label']
glue_test = bert_encode(glue['test'], tokenizer)
glue_test_labels = glue['test']['label']
"""
Explanation: Put it all together
Collect the above text parsing code into a single function, and apply it to each split of the glue/mrpc dataset.
End of explanation
"""
for key, value in glue_train.items():
print(f'{key:15s} shape: {value.shape}')
print(f'glue_train_labels shape: {glue_train_labels.shape}')
"""
Explanation: Each subset of the data has been converted to a dictionary of features, and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match:
End of explanation
"""
import json
bert_config_file = os.path.join(gs_folder_bert, "bert_config.json")
config_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read())
bert_config = bert.configs.BertConfig.from_dict(config_dict)
config_dict
"""
Explanation: The model
Build the model
The first step is to download the configuration for the pre-trained model.
End of explanation
"""
bert_classifier, bert_encoder = bert.bert_models.classifier_model(
bert_config, num_labels=2)
"""
Explanation: The config defines the core BERT Model, which is a Keras model to predict the outputs of num_classes from the inputs with maximum sequence length max_seq_length.
This function returns both the encoder and the classifier.
End of explanation
"""
tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48)
"""
Explanation: The classifier has three inputs and one output:
End of explanation
"""
glue_batch = {key: val[:10] for key, val in glue_train.items()}
bert_classifier(
glue_batch, training=True
).numpy()
"""
Explanation: Run it on a test batch of data: 10 examples from the training set. The output is the logits for the two classes:
End of explanation
"""
tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)
"""
Explanation: The TransformerEncoder in the center of the classifier above is the bert_encoder.
Inspecting the encoder, we see its stack of Transformer layers connected to those same three inputs:
End of explanation
"""
checkpoint = tf.train.Checkpoint(model=bert_encoder)
checkpoint.restore(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
"""
Explanation: Restore the encoder weights
When built, the encoder is randomly initialized. Restore the encoder's weights from the checkpoint:
End of explanation
"""
# Set up epochs and steps
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)
# creates an optimizer with learning rate schedule
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
"""
Explanation: Note: The pretrained TransformerEncoder is also available on TensorFlow Hub. See the Hub appendix for details.
Set up the optimizer
BERT adopts the Adam optimizer with weight decay (aka "AdamW").
It also employs a learning rate schedule that first warms up from 0 and then decays to 0.
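To get a feel for the shape of that schedule, here is a small stand-alone sketch. It plots a linear warm-up followed by a decay to zero using the step counts (num_train_steps, warmup_steps) computed when the optimizer is set up; the real schedule uses a polynomial decay, so treat this as an approximation of the shape rather than the exact curve:
```python
import numpy as np
import matplotlib.pyplot as plt

peak_lr = 2e-5
steps = np.arange(num_train_steps)
warmup = peak_lr * steps / max(1, warmup_steps)
decay = peak_lr * (num_train_steps - steps) / max(1, num_train_steps - warmup_steps)
lr = np.where(steps < warmup_steps, warmup, decay)

plt.plot(steps, lr)
plt.xlabel('Train step')
plt.ylabel('Learning rate')
```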
End of explanation
"""
type(optimizer)
"""
Explanation: This returns an AdamWeightDecay optimizer with the learning rate schedule set:
End of explanation
"""
metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
bert_classifier.compile(
optimizer=optimizer,
loss=loss,
metrics=metrics)
bert_classifier.fit(
glue_train, glue_train_labels,
validation_data=(glue_validation, glue_validation_labels),
batch_size=32,
epochs=epochs)
"""
Explanation: To see an example of how to customize the optimizer and it's schedule, see the Optimizer schedule appendix.
Train the model
The metric is accuracy and we use sparse categorical cross-entropy as loss.
End of explanation
"""
my_examples = bert_encode(
glue_dict = {
'sentence1':[
'The rain in Spain falls mainly on the plain.',
'Look I fine tuned BERT.'],
'sentence2':[
'It mostly rains on the flat lands of Spain.',
'Is it working? This does not match.']
},
tokenizer=tokenizer)
"""
Explanation: Now run the fine-tuned model on a custom example to see that it works.
Start by encoding some sentence pairs:
End of explanation
"""
result = bert_classifier(my_examples, training=False)
result = tf.argmax(result, axis=-1).numpy()  # per-example predicted class
result
np.array(info.features['label'].names)[result]
"""
Explanation: The model should report class 1 "match" for the first example and class 0 "no-match" for the second:
End of explanation
"""
export_dir='./saved_model'
tf.saved_model.save(bert_classifier, export_dir=export_dir)
reloaded = tf.saved_model.load(export_dir)
reloaded_result = reloaded([my_examples['input_word_ids'],
my_examples['input_mask'],
my_examples['input_type_ids']], training=False)
original_result = bert_classifier(my_examples, training=False)
# The results are (nearly) identical:
print(original_result.numpy())
print()
print(reloaded_result.numpy())
"""
Explanation: Save the model
Often the goal of training a model is to use it for something, so export the model and then restore it to be sure that it works.
End of explanation
"""
processor = nlp.data.classifier_data_lib.TfdsProcessor(
tfds_params="dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2",
process_text_fn=bert.tokenization.convert_to_unicode)
"""
Explanation: Appendix
<a id=re_encoding_tools></a>
Re-encoding a large dataset
In this tutorial you re-encoded the dataset in memory, for clarity.
This was only possible because glue/mrpc is a very small dataset. To deal with larger datasets, the tf_models library includes some tools for processing and re-encoding a dataset for efficient training.
The first step is to describe which features of the dataset should be transformed:
End of explanation
"""
# Set up output of training and evaluation Tensorflow dataset
train_data_output_path="./mrpc_train.tf_record"
eval_data_output_path="./mrpc_eval.tf_record"
max_seq_length = 128
batch_size = 32
eval_batch_size = 32
# Generate and save training data into a tf record file
input_meta_data = (
nlp.data.classifier_data_lib.generate_tf_record_from_data_file(
processor=processor,
data_dir=None, # It is `None` because data is from tfds, not local dir.
tokenizer=tokenizer,
train_data_output_path=train_data_output_path,
eval_data_output_path=eval_data_output_path,
max_seq_length=max_seq_length))
"""
Explanation: Then apply the transformation to generate new TFRecord files.
End of explanation
"""
training_dataset = bert.run_classifier.get_dataset_fn(
train_data_output_path,
max_seq_length,
batch_size,
is_training=True)()
evaluation_dataset = bert.run_classifier.get_dataset_fn(
eval_data_output_path,
max_seq_length,
eval_batch_size,
is_training=False)()
"""
Explanation: Finally create tf.data input pipelines from those TFRecord files:
End of explanation
"""
training_dataset.element_spec
"""
Explanation: The resulting tf.data.Datasets return (features, labels) pairs, as expected by keras.Model.fit:
End of explanation
"""
def create_classifier_dataset(file_path, seq_length, batch_size, is_training):
"""Creates input dataset from (tf)records files for train/eval."""
dataset = tf.data.TFRecordDataset(file_path)
if is_training:
dataset = dataset.shuffle(100)
dataset = dataset.repeat()
def decode_record(record):
name_to_features = {
'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64),
'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'label_ids': tf.io.FixedLenFeature([], tf.int64),
}
return tf.io.parse_single_example(record, name_to_features)
def _select_data_from_record(record):
x = {
'input_word_ids': record['input_ids'],
'input_mask': record['input_mask'],
'input_type_ids': record['segment_ids']
}
y = record['label_ids']
return (x, y)
dataset = dataset.map(decode_record,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(
_select_data_from_record,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(batch_size, drop_remainder=is_training)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
return dataset
# Set up batch sizes
batch_size = 32
eval_batch_size = 32
# Return Tensorflow dataset
training_dataset = create_classifier_dataset(
train_data_output_path,
input_meta_data['max_seq_length'],
batch_size,
is_training=True)
evaluation_dataset = create_classifier_dataset(
eval_data_output_path,
input_meta_data['max_seq_length'],
eval_batch_size,
is_training=False)
training_dataset.element_spec
"""
Explanation: Create tf.data.Dataset for training and evaluation
If you need to modify the data loading here is some code to get you started:
End of explanation
"""
# Note: 350MB download.
import tensorflow_hub as hub
hub_encoder = hub.KerasLayer(hub_url_bert, trainable=True)
print(f"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables")
"""
Explanation: <a id="hub_bert"></a>
TFModels BERT on TFHub
You can get the BERT model off the shelf from TFHub. It would not be hard to add a classification head on top of this hub.KerasLayer.
End of explanation
"""
result = hub_encoder(
inputs=[glue_train['input_word_ids'][:10],
glue_train['input_mask'][:10],
glue_train['input_type_ids'][:10],],
training=False,
)
print("Pooled output shape:", result[0].shape)
print("Sequence output shape:", result[1].shape)
"""
Explanation: Test run it on a batch of data:
End of explanation
"""
hub_classifier, hub_encoder = bert.bert_models.classifier_model(
# Caution: Most of `bert_config` is ignored if you pass a hub url.
bert_config=bert_config, hub_module_url=hub_url_bert, num_labels=2)
"""
Explanation: At this point it would be simple to add a classification head yourself.
The bert_models.classifier_model function can also build a classifier onto the encoder from TensorFlow Hub:
End of explanation
"""
tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64)
try:
tf.keras.utils.plot_model(hub_encoder, show_shapes=True, dpi=64)
assert False
except Exception as e:
print(f"{type(e).__name__}: {e}")
"""
Explanation: The one downside to loading this model from TFHub is that the structure of internal keras layers is not restored. So it's more difficult to inspect or modify the model. The TransformerEncoder model is now a single layer:
End of explanation
"""
transformer_config = config_dict.copy()
# You need to rename a few fields to make this work:
transformer_config['attention_dropout_rate'] = transformer_config.pop('attention_probs_dropout_prob')
transformer_config['activation'] = tf_utils.get_activation(transformer_config.pop('hidden_act'))
transformer_config['dropout_rate'] = transformer_config.pop('hidden_dropout_prob')
transformer_config['initializer'] = tf.keras.initializers.TruncatedNormal(
stddev=transformer_config.pop('initializer_range'))
transformer_config['max_sequence_length'] = transformer_config.pop('max_position_embeddings')
transformer_config['num_layers'] = transformer_config.pop('num_hidden_layers')
transformer_config
manual_encoder = nlp.modeling.networks.TransformerEncoder(**transformer_config)
"""
Explanation: <a id="model_builder_functions"></a>
Low level model building
If you need more control over the construction of the model, it's worth noting that the classifier_model function used earlier is really just a thin wrapper over the nlp.modeling.networks.TransformerEncoder and nlp.modeling.models.BertClassifier classes. Just remember that if you start modifying the architecture it may not be correct or possible to reload the pre-trained checkpoint, so you'll need to retrain from scratch.
Build the encoder:
End of explanation
"""
checkpoint = tf.train.Checkpoint(model=manual_encoder)
checkpoint.restore(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
"""
Explanation: Restore the weights:
End of explanation
"""
result = manual_encoder(my_examples, training=True)
print("Sequence output shape:", result[0].shape)
print("Pooled output shape:", result[1].shape)
"""
Explanation: Test run it:
End of explanation
"""
manual_classifier = nlp.modeling.models.BertClassifier(
    manual_encoder,
num_classes=2,
dropout_rate=transformer_config['dropout_rate'],
initializer=tf.keras.initializers.TruncatedNormal(
stddev=bert_config.initializer_range))
manual_classifier(my_examples, training=True).numpy()
"""
Explanation: Wrap it in a classifier:
End of explanation
"""
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
"""
Explanation: <a id="optiizer_schedule"></a>
Optimizers and schedules
The optimizer used to train the model was created using the nlp.optimization.create_optimizer function:
End of explanation
"""
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=2e-5,
decay_steps=num_train_steps,
end_learning_rate=0)
plt.plot([decay_schedule(n) for n in range(num_train_steps)])
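# Added sanity check (illustration only, not from the tutorial): the schedule starts
# at 2e-5 and decays linearly to zero by the final training step.
print(float(decay_schedule(0)), float(decay_schedule(num_train_steps)))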
"""
Explanation: That high level wrapper sets up the learning rate schedules and the optimizer.
The base learning rate schedule used here is a linear decay to zero over the training run:
End of explanation
"""
warmup_steps = num_train_steps * 0.1
warmup_schedule = nlp.optimization.WarmUp(
initial_learning_rate=2e-5,
decay_schedule_fn=decay_schedule,
warmup_steps=warmup_steps)
# The warmup overshoots, because it warms up to the `initial_learning_rate`
# following the original implementation. You can set
# `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the
# overshoot.
plt.plot([warmup_schedule(n) for n in range(num_train_steps)])
"""
Explanation: This, in turn, is wrapped in a WarmUp schedule that linearly increases the learning rate to the target value over the first 10% of training:
End of explanation
"""
optimizer = nlp.optimization.AdamWeightDecay(
learning_rate=warmup_schedule,
weight_decay_rate=0.01,
epsilon=1e-6,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
"""
Explanation: Then create the nlp.optimization.AdamWeightDecay using that schedule, configured for the BERT model:
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment07/AlgorithmsEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
"""
Explanation: Algorithms Exercise 2
Imports
End of explanation
"""
def find_peaks(a):
"""Find the indices of the local maxima in a sequence."""
peaks = []
for i in range(len(a)):
if i == 0 and a[i] > a[i+1]:
peaks.append(i)
elif i == (len(a)-1) and a[i] > a[i-1]:
peaks.append(i)
elif a[i] > a[i-1] and a[i] > a[i+1]:
peaks.append(i)
return np.array(peaks)
find_peaks([2,0,1,0,2,0,1])
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
"""
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
"""
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
a = pi_digits_str
b = np.array([int(a[i]) for i in range(len(a))])
plt.figure(figsize=(12,6))
plt.hist(np.diff(find_peaks(b)), bins=50)
plt.xlim(0,18)
plt.xlabel("Distance")
plt.ylabel("Occurences")
plt.title("Distance Between Consecutive Maxima for First 10,000 Decimals of Pi")
plt.show()
assert True # use this for grading the pi digits histogram
"""
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consequtive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
"""
|
cvxopt/chompack | doc/source/examples.ipynb | gpl-3.0 |
from cvxopt import matrix, spmatrix, sparse, normal, solvers, blas
import chompack as cp
import random
# Function for generating random sparse matrix
def sp_rand(m,n,a):
"""
Generates an m-by-n sparse 'd' matrix with round(a*m*n) nonzeros.
"""
if m == 0 or n == 0: return spmatrix([], [], [], (m,n))
nnz = min(max(0, int(round(a*m*n))), m*n)
nz = matrix(random.sample(range(m*n), nnz), tc='i')
return spmatrix(normal(nnz,1), nz%m, nz/m, (m,n))
# Generate random sparsity pattern and sparse SDP problem data
random.seed(1)
m, n = 50, 200
A = sp_rand(n,n,0.015) + spmatrix(1.0,range(n),range(n))
I = cp.tril(A)[:].I
N = len(I)/50 # each data matrix has 1/50 of total nonzeros in pattern
Ig = []; Jg = []
for j in range(m):
Ig += sorted(random.sample(I,N))
Jg += N*[j]
G = spmatrix(normal(len(Ig),1),Ig,Jg,(n**2,m))
h = G*normal(m,1) + spmatrix(1.0,range(n),range(n))[:]
c = normal(m,1)
dims = {'l':0, 'q':[], 's': [n]};
"""
Explanation: Examples
SDP conversion
This example demonstrates the SDP conversion method. We first generate a random sparse SDP:
End of explanation
"""
prob = (c, G, matrix(h), dims)
sol = solvers.conelp(*prob)
Z1 = matrix(sol['z'], (n,n))
"""
Explanation: The problem can be solved using CVXOPT's cone LP solver:
End of explanation
"""
prob2, blocks_to_sparse, symbs = cp.convert_conelp(*prob)
sol2 = solvers.conelp(*prob2)
"""
Explanation: An alternative is to convert the sparse SDP into a block-diagonal SDP using the conversion method and solve the converted problem using CVXOPT:
End of explanation
"""
# Map block-diagonal solution sol2['z'] to a sparse positive semidefinite completable matrix
blki,I,J,bn = blocks_to_sparse[0]
Z2 = spmatrix(sol2['z'][blki],I,J)
# Compute completion
symb = cp.symbolic(Z2, p=cp.maxcardsearch)
Z2c = cp.psdcompletion(cp.cspmatrix(symb)+Z2, reordered=False)
Y2 = cp.mrcompletion(cp.cspmatrix(symb)+Z2, reordered=False)
"""
Explanation: The solution to the original SDP can be found by mapping the block-diagonal solution to a sparse positive semidefinite completable matrix and computing a positive semidefinite completion:
End of explanation
"""
mf = cp.merge_size_fill(5,5)
prob3, blocks_to_sparse, symbs = cp.convert_conelp(*prob, coupling = 'full', merge_function = mf)
sol3 = solvers.conelp(*prob3)
"""
Explanation: The conversion can also be combined with clique-merging techniques in the symbolic factorization. This typically yields a block-diagonal SDP with fewer (but bigger) blocks than without clique-merging:
End of explanation
"""
# Map block-diagonal solution sol2['z'] to a sparse positive semidefinite completable matrix
blki,I,J,bn = blocks_to_sparse[0]
Z3 = spmatrix(sol3['z'][blki],I,J)
# Compute completion
symb = cp.symbolic(Z3, p=cp.maxcardsearch)
Z3c = cp.psdcompletion(cp.cspmatrix(symb)+Z3, reordered=False)
"""
Explanation: Finally, we recover the solution to the original SDP:
End of explanation
"""
from cvxopt import uniform, spmatrix, matrix
import chompack as cp
d = 2 # dimension
n = 100 # number of points (order of A)
delta = 0.15**2 # distance threshold
P = uniform(d,n) # generate n points with independent and uniformly distributed coordinates
Y = P.T*P # Gram matrix
# Compute true distances: At[i,j] = norm(P[:,i]-P[:,j])**2
# At = diag(Y)*ones(1,n) + ones(n,1)*diag(Y).T - 2*Y
At = Y[::n+1]*matrix(1.0,(1,n)) + matrix(1.0,(n,1))*Y[::n+1].T - 2*Y
# Generate matrix with "observable distances"
# A[i,j] = At[i,j] if At[i,j] <= delta
V,I,J = zip(*[(At[i,j],i,j) for j in range(n) for i in range(j,n) if At[i,j] <= delta])
A = spmatrix(V,I,J,(n,n))
"""
Explanation: Euclidean distance matrix completion
Suppose that $A$ is a partial EDM of order $n$ where the squared distance $A_{ij} = \| p_i - p_j \|2^2$ between two point $p_i$ and $p_j$ is known if $p_i$ and $p_j$ are sufficiently close. We will assume that $A{ij}$ is known if and only if
$$\| p_i - p_j \|_2^2 \leq \delta $$
where $\delta$ is a positive constant. Let us generate a random partial EDM based on points in $\mathbb{R}^2$:
End of explanation
"""
Ac,p = cp.maxchord(A)
"""
Explanation: The partial EDM $A$ may or may not be chordal. We can find a maximal chordal subgraph using the maxchord routine which returns a chordal matrix $A_{\mathrm{c}}$ and a perfect elimination order $p$. Note that if $A$ is chordal, then $A_{\mathrm{c}} = A$.
End of explanation
"""
from pylab import plot,xlim,ylim,gca
# Extract entries in Ac and entries dropped from A
IJc = zip(Ac.I,Ac.J)
tmp = A - Ac
IJd = [(i,j) for i,j,v in zip(tmp.I,tmp.J,tmp.V) if v > 0]
# Plot edges
for i,j in IJc:
if i > j: plot([P[0,i],P[0,j]],[P[1,i],P[1,j]],'k-')
for i,j in IJd:
if i > j: plot([P[0,i],P[0,j]],[P[1,i],P[1,j]],'r-')
# Plot points
plot(P[0,:].T,P[1,:].T, 'b.', ms=12)
xlim([0.,1.])
ylim([0.,1.])
gca().set_aspect('equal')
"""
Explanation: The points $p_i$ and the known distances can be visualized using Matplotlib:
End of explanation
"""
symb = cp.symbolic(Ac, p=p)
p = symb.p
"""
Explanation: The edges represent known distances. The red edges are edges that were removed to produce the maximal chordal subgraph, and the black edges are the edges of the chordal subgraph.
Next we compute a symbolic factorization of the chordal matrix $A_{\mathrm{c}}$ using the perfect elimination order $p$:
End of explanation
"""
X = cp.edmcompletion(cp.cspmatrix(symb)+Ac, reordered = False)
"""
Explanation: Now edmcompletion can be used to compute an EDM completion of the chordal matrix $A_{\mathrm{c}}$:
End of explanation
"""
import chompack as cp
from cvxopt import spmatrix, amd
L = [[0,2,3,4,14],[1,2,3],[2,3,4,14],[3,4,14],[4,8,14,15],[5,8,15],[6,7,8,14],[7,8,14],[8,14,15],[9,10,12,13,16],[10,12,13,16],[11,12,13,15,16],[12,13,15,16],[13,15,16],[14,15,16],[15,16],[16]]
I = []
J = []
for k,l in enumerate(L):
I.extend(l)
J.extend(len(l)*[k])
A = spmatrix(1.0,I,J,(17,17))
symb = cp.symbolic(A, p=amd.order)
"""
Explanation: Symbolic factorization
This example demonstrates the symbolic factorization. We start by generating a test problem and computing a symbolic factorization using the approximate minimum degree (AMD) ordering heuristic:
End of explanation
"""
from chompack.pybase.plot import sparsity_graph
sparsity_graph(symb, node_size=50, with_labels=False)
"""
Explanation: The sparsity graph can be visualized with the sparsity_graph routine if Matplotlib, NetworkX, and Graphviz are installed:
End of explanation
"""
from chompack.pybase.plot import spy
fig = spy(symb, reordered=True)
"""
Explanation: The sparsity_graph routine passes all optional keyword arguments to NetworkX to make it easy to customize the visualization.
It is also possible to visualize the sparsity pattern using the spy routine which requires the packages Matplotlib, Numpy, and Scipy:
End of explanation
"""
par = symb.parent()
snodes = symb.supernodes()
print "Id Parent id Supernode"
for k,sk in enumerate(snodes):
print "%2i %2i "%(k,par[k]), sk
"""
Explanation: The supernodes and the supernodal elimination tree can be extracted from the symbolic factorization as follows:
End of explanation
"""
from chompack.pybase.plot import etree_graph
etree_graph(symb, with_labels=True, arrows=False, node_size=500, node_color='w', node_shape='s', font_size=14)
"""
Explanation: The supernodal elimination tree can be visualized with the etree_graph routine if Matplotlib, NetworkX, and Graphviz are installed:
End of explanation
"""
|
robertoalotufo/ia898 | master/tutorial_convprop_3.ipynb | mit | # importando a função a ser utilizada nesse tutorial
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Propriedades-da-Convolução" data-toc-modified-id="Propriedades-da-Convolução-1"><span class="toc-item-num">1 </span>Propriedades da Convolução</a></div><div class="lev2 toc-item"><a href="#Translação-por-um-impulso" data-toc-modified-id="Translação-por-um-impulso-11"><span class="toc-item-num">1.1 </span>Translação por um impulso</a></div><div class="lev2 toc-item"><a href="#Resposta-ao-impulso" data-toc-modified-id="Resposta-ao-impulso-12"><span class="toc-item-num">1.2 </span>Resposta ao impulso</a></div><div class="lev2 toc-item"><a href="#Decomposição" data-toc-modified-id="Decomposição-13"><span class="toc-item-num">1.3 </span>Decomposição</a></div><div class="lev3 toc-item"><a href="#Visualizando-as-imagens:" data-toc-modified-id="Visualizando-as-imagens:-131"><span class="toc-item-num">1.3.1 </span>Visualizando as imagens:</a></div>
# Convolution Properties
Convolution has several properties that are useful both for a better understanding of how it works and for practical use. Three properties are illustrated here: translation by an impulse, impulse response, and decomposition of the convolution kernel.
End of explanation
"""
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
os.chdir('../data')
f = mpimg.imread('cameraman.tif')
h = np.zeros((20,60))
h[19,59] = 1
nb = ia.nbshow(3)
nb.nbshow(f,'entrada')
g = ia.conv(f,h)
nb.nbshow(g.astype(np.uint8),'entrada translada de (20,60)')
nb.nbshow()
"""
Explanation: Translation by an impulse
When the convolution kernel consists of a single value one with zeros everywhere else, the resulting image is the original image translated by the coordinates of the kernel's nonzero value. In the example below, the kernel has the value 1 at coordinate (19,59), so the resulting image is shifted 19 pixels down and 59 pixels to the right. Note that, since the images are treated as infinite with zero values outside the image rectangle, this translation enlarges the image rectangle and several zero-valued pixels become visible.
End of explanation
"""
import numpy as np
#gerando imagem com pulsos
# 1 pulso a cada 4 linhas e 4 colunas
f = np.zeros((4,4))
f[3,3]= 1
f = np.tile(f,(2,2))
print('Matriz com impulsos:\n',f)
#gerando filtro
h = np.array([ [1,2,3],[4,5,6],[7,8,9]])
print('\nNucleo do Filtro:\n',h)
g = ia.conv(f,h)
print('\nVisualização do núcleo após aplicar o fitro sobre a matriz com pulsos:\n',g)
import numpy as np
print('Aplicando a resposta ao impulso numa imagem real para ilustrar o seu comportamento')
#gerando imagem com pulsos
f = np.zeros((40,40))
f[20,20]= 1
f = np.tile(f,(10,10))
f = ia.normalize(f)
nb.nbshow(f, 'imagem original')
#gerando filtro - circulo de raio 50
r,c = np.indices( (40, 40) )
h = ((r-20)**2 + (c-20)**2 < 20**2)
h = ia.normalize(h)
nb.nbshow(h, 'nucleo')
g = ia.conv(f,h)
nb.nbshow(ia.normalize(g), 'resposta ao impulso')
nb.nbshow()
"""
Explanation: Impulse response
When the image consists of a single pixel with value 1, the result of the convolution is the convolution kernel itself. This property makes it possible to visualize the kernel: if some software provides a translation-invariant linear filter and you do not know its kernel, just apply it to an image with a single pixel equal to 1 and the result reveals the kernel. In the illustration below, an image with several impulses is created; after convolving it with an arbitrary filter, the kernel can be seen repeated at the location of each impulse.
End of explanation
"""
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import sys,os
os.chdir('../data')
f = mpimg.imread('cameraman.tif')
h1 = np.ones((1,10))
h2 = np.ones((10,1))
h = ia.conv(h1,h2)
print('Nucleo original h=\n',h)
print('\nTempo de processamento 10 x 10:')
%timeit ia.conv(f,h)
f2 = ia.conv(f,h1)
print('\nNucleo decomposto\nh1=\n',h1,'\nh2=\n',h2)
print('\nTempo de processamento 10 horizontal e 10 vertical:')
%timeit ia.conv(f,h1), ia.conv(f2,h2)
"""
Explanation: Decomposition
The associativity property of convolution is given by:
\begin{align}
f \ast h_{eq} = f \ast (h_1 \ast h_2) = (f \ast h_1) \ast h_2
\end{align}
If a kernel can be decomposed as the convolution of two simpler kernels, this property yields a computational gain when the convolution is applied with each simpler kernel in turn. The example below uses the kernel that sums the pixels over a square window of 10 pixels per side. Convolving directly with the 10 x 10 square requires 100 operations, while decomposing it into a row kernel and a column kernel of 10 pixels each requires 10 operations per convolution, 20 in total. Note the difference in processing time between the two cases.
End of explanation
"""
f1 = ia.conv(f,h)
f3= ia.conv(f2,h2)
nb.nbshow(ia.normalize(f1), 'filtragem pela soma na janela 10x10 (f1)')
nb.nbshow(ia.normalize(f3), 'filtragem pela soma na janela 10 horizontal e 10 vertical separadas (f3)')
nb.nbshow()
print('f1 é igual f3?\nMaxima diferença entre f1 e f3:', np.max(np.abs(f1-f3)) )
"""
Explanation: Note the large difference in execution time between the original kernel and the separated kernels.
Visualizing the images:
End of explanation
"""
|
lenovor/MNIST | svm.scikit/svc_rbf.scikit_benchmark.ipynb | mit | from __future__ import division
import os, time, math
import cPickle as pickle
#import multiprocessing
import matplotlib.pyplot as plt
import numpy as np
import csv
from print_imgs import print_imgs # my own function to print a grid of square images
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix
#from sklearn.externals import joblib
np.random.seed(seed=1009)
%matplotlib inline
#%qtconsole
"""
Explanation: MNIST digit recognition using SVC in scikit-learn
> Using optimal parameters, fit to BOTH original and deskewed data
End of explanation
"""
file_path = '../data/'
train_img_deskewed_filename = 'train-images_deskewed.csv'
train_img_original_filename = 'train-images.csv'
test_img_deskewed_filename = 't10k-images_deskewed.csv'
test_img_original_filename = 't10k-images.csv'
train_label_filename = 'train-labels.csv'
test_label_filename = 't10k-labels.csv'
"""
Explanation: Where's the data?
End of explanation
"""
portion = 1.0 # set to less than 1.0 for testing; set to 1.0 to use the entire dataset
"""
Explanation: How much of the data will we use?
End of explanation
"""
# read both trainX files
with open(file_path + train_img_original_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainXo = np.ascontiguousarray(data, dtype = np.float64)
with open(file_path + train_img_deskewed_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainXd = np.ascontiguousarray(data, dtype = np.float64)
# vertically concatenate the two files
trainX = np.vstack((trainXo, trainXd))
trainXo = None
trainXd = None
# scale trainX
scaler = StandardScaler()
scaler.fit(trainX) # find mean/std for trainX
trainX = scaler.transform(trainX) # scale trainX with trainX mean/std
# read trainY twice and vertically concatenate
with open(file_path + train_label_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
trainYo = np.ascontiguousarray(data, dtype = np.int8)
trainYd = np.ascontiguousarray(data, dtype = np.int8)
trainY = np.vstack((trainYo, trainYd)).ravel()
trainYo = None
trainYd = None
data = None
# shuffle trainX & trainY
trainX, trainY = shuffle(trainX, trainY, random_state=0)
# use less data if specified
if portion < 1.0:
trainX = trainX[:portion*trainX.shape[0]]
trainY = trainY[:portion*trainY.shape[0]].ravel()
print("trainX shape: {0}".format(trainX.shape))
print("trainY shape: {0}\n".format(trainY.shape))
print(trainX.flags)
"""
Explanation: Read the training images and labels, both original and deskewed
End of explanation
"""
# read testX
with open(file_path + test_img_deskewed_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
testX = np.ascontiguousarray(data, dtype = np.float64)
if portion < 1.0:
testX = testX[:portion*testX.shape[0]]
# scale testX
testX = scaler.transform(testX) # scale testX with trainX mean/std
# read testY
with open(file_path + test_label_filename,'r') as f:
data_iter = csv.reader(f, delimiter = ',')
data = [data for data in data_iter]
testY = np.ascontiguousarray(data, dtype = np.int8)
if portion < 1.0:
testY = testY[:portion*testY.shape[0]].ravel()
# shuffle testX, testY
testX, testY = shuffle(testX, testY, random_state=0)
print("testX shape: {0}".format(testX.shape))
print("testY shape: {0}".format(testY.shape))
"""
Explanation: Read the DESKEWED test images and labels
End of explanation
"""
print_imgs(images = trainX,
actual_labels = trainY.ravel(),
predicted_labels = trainY.ravel(),
starting_index = np.random.randint(0, high=trainY.shape[0]-36, size=1)[0],
size = 6)
"""
Explanation: Use the smaller, fewer images for testing
Print a sample
End of explanation
"""
# default parameters for SVC
# ==========================
default_svc_params = {}
default_svc_params['C'] = 1.0 # penalty
default_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C
# set to 'auto' for unbalanced classes
default_svc_params['gamma'] = 0.0 # Kernel coefficient for 'rbf', 'poly' and 'sigmoid'
default_svc_params['kernel'] = 'rbf' # 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable
# use of 'sigmoid' is discouraged
default_svc_params['shrinking'] = True # Whether to use the shrinking heuristic.
default_svc_params['probability'] = False # Whether to enable probability estimates.
default_svc_params['tol'] = 0.001 # Tolerance for stopping criterion.
default_svc_params['cache_size'] = 200 # size of the kernel cache (in MB).
default_svc_params['max_iter'] = -1 # limit on iterations within solver, or -1 for no limit.
default_svc_params['random_state'] = 1009
default_svc_params['verbose'] = False
default_svc_params['degree'] = 3 # 'poly' only
default_svc_params['coef0'] = 0.0 # 'poly' and 'sigmoid' only
# set the parameters for the classifier
# =====================================
svc_params = dict(default_svc_params)
svc_params['C'] = 25.595479226995359
svc_params['gamma'] = 0.00068664884500429981
svc_params['cache_size'] = 2000
# create the classifier itself
# ============================
svc_clf = SVC(**svc_params)
"""
Explanation: SVC Parameter Settings
End of explanation
"""
t0 = time.time()
svc_clf.fit(trainX, trainY.ravel())
print(svc_clf)
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
"""
Explanation: Fit the training data
End of explanation
"""
target_names = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
predicted_values = svc_clf.predict(testX)
y_true, y_pred = testY.ravel(), predicted_values
print(classification_report(y_true, y_pred, target_names=target_names))
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix',
cmap=plt.cm.Paired):
"""
given a confusion matrix (cm), make a nice plot
    see the scikit-learn documentation for the original done for the iris dataset
"""
plt.figure(figsize=(8, 6))
plt.imshow((cm/cm.sum(axis=1)), interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(testY)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
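# Added illustration (not part of the original benchmark): per-digit recall taken from
# the rows of the confusion matrix computed above.
per_class_recall = cm.diagonal() / cm.sum(axis=1)
for digit, digit_recall in zip(target_names, per_class_recall):
    print("digit {0}: recall = {1:.4f}".format(digit, digit_recall))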
"""
Explanation: Predict the test set and analyze the result
End of explanation
"""
t0 = time.time()
from sklearn.learning_curve import learning_curve
from sklearn.cross_validation import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
plt.figure(figsize=(8, 6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.tight_layout()
plt.legend(loc="best")
return plt
C_gamma = "C="+str(np.round(svc_params['C'],4))+", gamma="+str(np.round(svc_params['gamma'],6))
title = "Learning Curves (SVM, RBF, " + C_gamma + ")"
plot_learning_curve(estimator = svc_clf,
title = title,
X = trainX,
y = trainY.ravel(),
ylim = (0.85, 1.01),
cv = ShuffleSplit(n = trainX.shape[0],
n_iter = 5,
test_size = 0.2,
random_state=0),
n_jobs = 8)
plt.show()
print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
"""
Explanation: Learning Curves
see http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html
The score is the model accuracy
The red line shows how well the model fits the data it was trained on:
a high score indicates low bias ... the model does fit the training data
it's not unusual for the red line to start at 1.00 and decline slightly
a low score indicates the model does not fit the training data ... more predictor variables are ususally indicated, or a different model
The green line shows how well the model predicts the test data: if it's rising then it means more data to train on will produce better predictions
End of explanation
"""
|
nusdbsystem/incubator-singa | doc/en/docs/notebook/regression.ipynb | apache-2.0 | from __future__ import division
from __future__ import print_function
from builtins import range
from past.utils import old_div
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
Train a linear regression model
In this notebook, we are going to use the tensor module from PySINGA to train a linear regression model. We use this example to illustrate the usage of tensor of PySINGA. Please refer the documentation page to for more tensor functions provided by PySINGA.
End of explanation
"""
from singa import tensor
"""
Explanation: To import the tensor module of PySINGA, run
End of explanation
"""
a, b = 3, 2
f = lambda x: a * x + b
gx = np.linspace(0.,1,100)
gy = [f(x) for x in gx]
plt.plot(gx, gy, label='y=f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
"""
Explanation: The ground-truth
Our problem is to find a line that fits a set of 2-d data points.
We first plot the ground truth line,
End of explanation
"""
nb_points = 30
# generate training data
train_x = np.asarray(np.random.uniform(0., 1., nb_points), np.float32)
train_y = np.asarray(f(train_x) + np.random.rand(nb_points), np.float32)
plt.plot(train_x, train_y, 'bo', ms=7)
"""
Explanation: Generating the training data
Then we generate the training data points by adding a random error to sampling points from the ground truth line.
30 data points are generated.
End of explanation
"""
def plot(idx, x, y):
global gx, gy, axes
# print the ground truth line
axes[idx//5, idx%5].plot(gx, gy, label='y=f(x)')
# print the learned line
axes[idx//5, idx%5].plot(x, y, label='y=kx+b')
axes[idx//5, idx%5].legend(loc='best')
# set hyper-parameters
max_iter = 15
alpha = 0.05
# init parameters
k, b = 2.,0.
"""
Explanation: Training via SGD
Assuming that we know the training data points are sampled from a line, but we don't know the line slope and intercept. The training is then to learn the slop (k) and intercept (b) by minimizing the error, i.e. ||kx+b-y||^2.
1. we set the initial values of k and b (could be any values).
2. we iteratively update k and b by moving them in the direction of reducing the prediction error, i.e. in the gradient direction. For every iteration, we plot the learned line.
End of explanation
"""
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
y_ = x * k + b
err = y_ - y
loss = old_div(tensor.sum(err * err), nb_points)
print('loss at iter %d = %f' % (idx, loss))
da1 = old_div(tensor.sum(err * x), nb_points)
db1 = old_div(tensor.sum(err), nb_points)
# update the parameters
k -= da1 * alpha
b -= db1 * alpha
plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
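# Added mini-demo (not part of the original tutorial) of the numpy <-> SINGA tensor
# conversion helpers used above: a simple round trip through tensor.from_numpy/to_numpy.
np_ary = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
t = tensor.from_numpy(np_ary)
print(tensor.to_numpy(t))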
"""
Explanation: SINGA tensor module supports basic linear algebra operations, like + - * /, and advanced functions including axpy, gemm, gemv, and random function (e.g., Gaussian and Uniform).
SINGA Tensor instances could be created via tensor.Tensor() by specifying the shape, and optionally the device and data type. Note that every Tensor instance should be initialized (e.g., via set_value() or random functions) before reading data from it. You can also create Tensor instances from numpy arrays,
numpy array could be converted into SINGA tensor via tensor.from_numpy(np_ary)
SINGA tensor could be converted into numpy array via tensor.to_numpy(); Note that the tensor should be on the host device. tensor instances could be transferred from other devices to host device via to_host()
Users cannot read a single cell of the Tensor instance. To read a single cell, users need to convert the Tensor into a numpy array.
End of explanation
"""
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
y_ = x * k + b
err = y_ - y
loss = old_div(tensor.sum(err * err), nb_points)
print('loss at iter %d = %f' % (idx, loss))
da1 = old_div(tensor.sum(err * x), nb_points)
db1 = old_div(tensor.sum(err), nb_points)
# update the parameters
k -= da1 * alpha
b -= db1 * alpha
plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
"""
Explanation: We can see that the learned line is becoming closer to the ground truth line (in blue color).
Next: MLP example
End of explanation
"""
|
asurunis/CrisisMappingToolkit | ipython/CrisisMappingToolkitOverview.ipynb | apache-2.0 | import sys
import os
import ee
# This script assumes your authentification credentials are stored as operatoring system
# environment variables.
__MY_SERVICE_ACCOUNT = os.environ.get('MY_SERVICE_ACCOUNT')
__MY_PRIVATE_KEY_FILE = os.environ.get('MY_PRIVATE_KEY_FILE')
# Initialize the Earth Engine object, using your authentication credentials.
ee.Initialize()
"""
Explanation: Crisis Mapping Toolkit Documentation
This document provides a high level overview of how to use the Crisis Mapping Toolkit (CMT). The CMT is a set of tools built using Google's Earth Engine Python API so familiariaty with that API will be extremely useful when working with the CMT.
Installing Earth Engine
See instructions from Google here: https://docs.google.com/document/d/1tvkSGb-49YlSqW3AGknr7T_xoRB1KngCD3f2uiwOS3Q/edit
"Hello Crisis Mapping Toolkit"
Initialize Earth Engine
End of explanation
"""
# Make sure that Python can find the CMT source files
CMT_INSTALL_FOLDER = '/home/smcmich1/repo/earthEngine/CrisisMappingToolkit/'
sys.path.append(CMT_INSTALL_FOLDER)
import cmt.util.evaluation
from cmt.mapclient_qt import centerMap, addToMap
"""
Explanation: Load the Crisis Mapping Toolkit
End of explanation
"""
import cmt.domain
domainPath = os.path.join(CMT_INSTALL_FOLDER, 'config/domains/modis/kashmore_2010_8.xml')
kashmore_domain = cmt.domain.Domain(domainPath)
"""
Explanation: Load a domain
A domain is a geographic location associated with certain sensor images, global data sets, and other supporting files. A domain is described by a custom XML file and can easily be loaded in Python. Once the XML file is loaded all of the associated data can be easily accessed. Note that none of the images are stored locally; instead they have been uploaded to web storage locations where Earth Engine can access them.
End of explanation
"""
import cmt.util.gui_util
cmt.util.gui_util.visualizeDomain(kashmore_domain)
"""
Explanation: Display the domain
End of explanation
"""
from cmt.modis.flood_algorithms import *
# Select the algorithm to use and then call it
algorithm = DIFFERENCE
(alg, result) = detect_flood(kashmore_domain, algorithm)
# Get a color pre-associated with the algorithm, then draw it on the map
color = get_algorithm_color(algorithm)
addToMap(result.mask(result), {'min': 0, 'max': 1, 'opacity': 0.5, 'palette': '000000, ' + color}, alg, False)
"""
Explanation: A GUI should appear in a seperate window displaying the domain location. If the GUI does not appear, try restarting the IPython kernel and trying again. This is the default GUI used by the CMT. It is an enhanced version of the GUI provided with the Earth Engine Python API and behaves similarly to the Earth Engine online "playground" interface.
Basic GUI instructions:
You can move the view location by clicking and dragging.
You can zoom in and out using the mouse wheel.
Right clicking the view brings up a context menu with the following:
The lat/lon coordinate where you clicked.
The list of currently loaded image layers.
An opacity slider for each image layer.
The value for each image layer at the location you clicked.
A button which will save the current view as a geotiff file.
Call a classification algorithm
End of explanation
"""
precision, recall, eval_count, quality = cmt.util.evaluation.evaluate_approach(result, kashmore_domain.ground_truth, kashmore_domain.bounds, is_algorithm_fractional(algorithm))
print('For algorithm "%s", precision = %f and recall = %f' % (alg, precision, recall) )
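# Added toy illustration (not from the original overview) of how precision and recall
# relate to pixel counts. Suppose 80 of the 100 pixels classified as flooded are truly
# flooded, and the scene contains 160 truly flooded pixels in total:
toy_precision = 80.0 / 100.0   # 0.80
toy_recall    = 80.0 / 160.0   # 0.50
print('toy precision = %0.2f, toy recall = %0.2f' % (toy_precision, toy_recall))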
"""
Explanation: Classifier output
The algorithm output should have been added to the GUI as another image layer.
Each classifier algorithm evaluates each pixel as flooded(1) or dry (0). Some algorithms will return a probability of being flooded ranging from 0 to 1.
Evaluate classification results
End of explanation
"""
# Access a specific parameter listed in the domain file
kashmore_domain.algorithm_parameters['modis_diff_threshold']
# Call this function to get whatever digital elevation map is available.
dem = kashmore_domain.get_dem()
# All the sensors included in the domain are stored as a list
first_sensor = kashmore_domain.sensor_list[0]
# If you know the name of a sensor you can access it like this
modis_sensor = kashmore_domain.modis
# Then you can access individual sensor bands like this
one_band = modis_sensor.sur_refl_b03
# To get the EE image object containing all the bands, do this
all_bands = modis_sensor.image
# The sensor contains some other information,
# but only if the information is present in the XML files
first_band_name = modis_sensor.band_names[0]
first_band_resolution = modis_sensor.band_resolutions[first_band_name]
# Related domains have the same structure as the main domain
# and can be accessed like this
kashmore_domain.training_domain
kashmore_domain.unflooded_domain
"""
Explanation: Interpreting results
The two main scores for evaluating an algorithm are "precision" and "recall".
- Precision is a measure of how many false positives the algorithm has. It is calculated as: (number of pixels classified as flooded which are actually flooded) / (number of pixels classified as flooded)
- Recall is a measure of how sensitive to flooding the algorithm is. It is calculated as: (number of pixels classified as flooded which are actually flooded) / (total number of flooded pixels)
In order for these measurements to be computed the domain must have a ground truth file associated with it which labels each pixel as flooded or dry.
End of introduction
The documentation so far covers most of the code used to write a file such as the tool detect_flood_modis.py. The rest of the documentation covers different aspects of the CMT in more detail.
Supported Sensor Data
The Crisis Mapping Toolkit has so far been used with the following types of data:
- MODIS = 250m to 500m satellite imagery covering the globe daily.
- LANDSAT = 30m satellite imagery with global coverage but infrequent images.
- DEM = Earth Engine provides the SRTM90 and NED13 digital elevation maps.
- Skybox = Google owned RGBN imaging satellites.
- SAR = Cloud penetrating radar data. Several specific sources have been tested:
- UAVSAR
- Sentinel-1
- Terrasar-X
MODIS and LANDSAT data are the easiest types to work with because Earth Engine already has all of that data loaded and easily accessible. SAR data on the other hand can be difficult or expensive to get ahold of.
Most of the processing algorithms currently in CMT are for processing MODIS or SAR data and are split between the modis and radar folders. Some of the algorithms, such as the active contour, can also operate on other types of data.
Instructions for how to load your own data are located in the "Domains" section of this documentation.
Algorithm Overviews
The algorithms currently implemented by the CMT fall into these categories:
MODIS
- Simple algorithms = Basic thresholding and small decision tree algorithms.
- EE Classifiers = These algorithms are built around Earth Engine's classifier tool.
- DNNS = Variants of the DNNS algorithm (http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6307841)
- Adaboost = Uses multiple instances of the simple algorithms to build a more accurate composite classification.
- Misc algorithms = A few other algorithms outside those categories.
RADAR
- Learning = Algorithms built around Earth Engine's classifier tool.
- Matgen = An algorithm which attempts to detect water using find a global histogram split. (http://www.sciencedirect.com/science/article/pii/S1474706510002160)
- Martinis = Breaks up the region into sub-regions to try and obtain a more useful histogram to split (http://www.nat-hazards-earth-syst-sci.net/9/303/2009/nhess-9-303-2009.pdf)
- Active Contour = A "snake" algorithm for finding water boundaries.
Skybox
- The MODIS EE Classifiers can incorporate Skybox imagery to improve their results.
- The Active Contour algorithm can be used on Skybox data.
The Production GUI
In addition to the default GUI, the Crisis Mapping Toolkit has another GUI customized to perform a few useful operations. It is accessible by running the "flood_detection_wizard.py" tool. The main map portion of the production GUI is the same as in the default GUI but there are additional controls above and below the map window.
Why use the production GUI?
Easily search through MODIS and Landsat data. The production GUI lets you quickly change the date and then searches for the closest Landsat data.
Quickly perform basic MODIS flood detection. The controls at the bottom allow quick tuning of a simple flood detection algorithm on the currently displayed MODIS data.
Generate training data. You can use the production GUI to create labled training polygons to load into several of the classifier algorithms.
<br>
<img src="production_gui_screenshot.png">
<center> A screenshot of the Production GUI </center>
Top buttons from left to right
Date Selector Button = Choose the date of interest. MODIS data will be loaded from that date and LANDSAT data will be searched for nearby that date.
Set Processing Region = When clicked the current field of view in the map will be set as the region of interest. This region is used when searching for LANDSAT images and performing flood detection.
Load Images = Once the data and region have been set, press this button to search for MODIS and LANDSAT data. The data should be added to the main map display.
Detect Flood = Run a flood detection algorithm using the values currently set by the sliders at the bottom of the GUI. Flood detection results will be displayed in the main map display.
Load Maps Engine Image = Paste the full Earth Engine ID from an image loaded in Google Maps Engine, then select the associated sensor type and click "Ok". The image will now be displayed on the main map display. Currently only one image at a time is supported.
Open Class Trainer = Opens another window for generating training regions.
Clear Map Button = Click this to remove all images from the main map display.
How to load MODIS/LANDSAT data
Click the date select button and pick a date.
Pan and zoom to your region of interest and click "Set Processing Region".
Click "Load Images"
How to detect floods
Perform the three steps above to load MODIS and LANDSAT data.
Adjust the two sliders at the bottom to set the algorithm parameters.
Change Detection Threshold = Decrease this value to detect more pixels as flooded.
Water Mask Threshold = Increase this value to detect more pixels as flooded.
Click "Detect Flood"
How to generate training regions for classifiers
Load the imagery you want to look at while selecting regions, either MODIS/LANDSAT data or by clicking "Load ME image".
Click "Open Class Trainer"
Use the text editor box to enter the name of a region. Each name should contain either "Land" or "Water" to let the classifiers know how to use that region.
Press "Add New Class" to add the named region to the class list.
To select a class, click its name in the list. When a class is selected you cannot drag the map view around!
To unselect a class (so you can reposition the map) click "Deselect Class"
You can delete a selected class from the list by clicking "Delete Class"
To set the region for a selected class just click on locations in the main map view. The points you click will form a polygon which should be drawn in the main map view.
The main map view should keep updated with the polygon of the currently selected class but you may see some transient drawing artifacts.
Click "Save Class File" to write a json file storing the training data.
Click "Load Class File" to load an existing json class file.
Working With Domains
The Domain Concept
A Domain consists of a region, training information, and a list of descriptions of avialable sensor data. They can be easily loaded from XML files and the existing algorithms are all designed to take domain objects as input. MODIS and DEM data are almost always available in any domain. Instructions for creating a custom domain XML file are in the next section.
Anatomy of a Domain File
To use a custom domain generally requires three files:
- A sensor definition XML file. Only one of these is needed per sensor. It defines the bands, data characteristics, and possibly the data source.
- A test domain XML file. This defines the geographic region, algorithm parameters, training and truth information, dates, and other other source information.
- A training domain XML file. This is similar to the test domain file except that it will specify a different date or location to collect training data from.
For more detailed descriptions of all the possible contents of a domain file, check out the domain_example and sensor_example XML files and all of the real config files that are included with the Crisis Mapping Toolkit.
Code Examples
Here are some examples of code working with the Domain class in Python:
End of explanation
"""
|
DistrictDataLabs/yellowbrick | examples/rebeccabilbro/check_is_fitted.ipynb | apache-2.0 | X, y = load_occupancy(return_dataset=True).to_numpy()
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.20)
unfitted_model = LogisticRegression(solver='lbfgs')
fitted_model = unfitted_model.fit(X_train, y_train)
oz = ClassPredictionError(fitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ClassPredictionError(unfitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ClassificationReport(fitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ClassificationReport(unfitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ConfusionMatrix(fitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ConfusionMatrix(unfitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = PrecisionRecallCurve(fitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = PrecisionRecallCurve(unfitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(fitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ROCAUC(unfitted_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = DiscriminationThreshold(fitted_model)
oz.fit(X, y)
oz.show()
oz = DiscriminationThreshold(unfitted_model)
oz.fit(X, y)
oz.show()
"""
Explanation: Check if fitted on Classifiers
End of explanation
"""
viz = FeatureImportances(fitted_model)
viz.fit(X, y)
viz.show()
viz = FeatureImportances(unfitted_model)
viz.fit(X, y)
viz.show()
# NOTE: Not sure how to deal with Recursive Feature Elimination
"""
Explanation: Check if fitted on Feature Visualizers*
Just the ones that inherit from ModelVisualizer
End of explanation
"""
X, y = load_energy(return_dataset=True).to_numpy()
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.20)
unfitted_nonlinear_model = RandomForestRegressor(n_estimators=10)
fitted_nonlinear_model = unfitted_nonlinear_model.fit(X_train, y_train)
unfitted_linear_model = Lasso()
fitted_linear_model = unfitted_linear_model.fit(X_train, y_train)
oz = PredictionError(unfitted_linear_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = PredictionError(fitted_linear_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ResidualsPlot(unfitted_linear_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ResidualsPlot(fitted_linear_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ResidualsPlot(unfitted_nonlinear_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
oz = ResidualsPlot(fitted_nonlinear_model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.show()
unfitted_cv_model = LassoCV(alphas=[.01,1,10], cv=3)
fitted_cv_model = unfitted_cv_model.fit(X, y)
oz = AlphaSelection(unfitted_cv_model)
oz.fit(X, y)
oz.show()
oz = AlphaSelection(fitted_cv_model)
oz.fit(X, y)
oz.show()
"""
Explanation: Check if fitted on Regressors
End of explanation
"""
X, _ = load_credit(return_dataset=True).to_numpy()
unfitted_cluster_model = KMeans(6)
fitted_cluster_model = unfitted_cluster_model.fit(X)
# NOTE: Not sure how to deal with K-Elbow and prefitted models...
# visualizer = KElbowVisualizer(unfitted_cluster_model, k=(4,12))
# visualizer.fit(X)
# visualizer.show()
# visualizer = KElbowVisualizer(fitted_cluster_model, k=(4,12))
# visualizer.fit(X)
# visualizer.show()
# NOTE: Silhouette Scores doesn't have a quick method
visualizer = SilhouetteVisualizer(unfitted_cluster_model)
visualizer.fit(X)
visualizer.show()
visualizer = SilhouetteVisualizer(fitted_cluster_model)
visualizer.fit(X)
visualizer.show()
visualizer = InterclusterDistance(unfitted_cluster_model)
visualizer.fit(X)
visualizer.show()
visualizer = InterclusterDistance(fitted_cluster_model)
visualizer.fit(X)
visualizer.show()
"""
Explanation: Check if fitted on Clusterers
End of explanation
"""
|
QinetiQ-datascience/Docker-Data-Science | WooWeb-Presentation/Workspace/Widgets/Widget Events.ipynb | mit | from __future__ import print_function
"""
Explanation: Index - Back - Next
Widget Events
Special events
End of explanation
"""
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
"""
Explanation: The Button is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register function to be called when the button is clicked. The doc string of the on_click can be seen below.
End of explanation
"""
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
"""
Explanation: Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the on_click method, a button that prints a message when it has been clicked is shown below.
End of explanation
"""
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
"""
Explanation: on_submit
The Text widget also has a special on_submit event. The on_submit event fires when the user hits return.
End of explanation
"""
print(widgets.Widget.observe.__doc__)
"""
Explanation: Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the observe method of the widget can be used to register a callback. The doc string for observe can be seen below.
End of explanation
"""
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(change):
print(change['new'])
int_range.observe(on_value_change, names='value')
"""
Explanation: Signatures
Mentioned in the doc string, the callback registered must have the signature handler(change) where change is a dictionary holding the information about the change.
Using this method, an example of how to output an IntSlider's value as it is changed can be seen below.
End of explanation
"""
import traitlets
caption = widgets.Label(value='The values of slider1 and slider2 are synchronized')
sliders1, slider2 = widgets.IntSlider(description='Slider 1'),\
widgets.IntSlider(description='Slider 2')
l = traitlets.link((sliders1, 'value'), (slider2, 'value'))
display(caption, sliders1, slider2)
caption = widgets.Label(value='Changes in source values are reflected in target1')
source, target1 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1')
dl = traitlets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
"""
Explanation: Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
Linking traitlets attributes in the kernel
The first method is to use the link and dlink functions from the traitlets module. This only works if we are interacting with a live kernel.
End of explanation
"""
l.unlink()
dl.unlink()
"""
Explanation: The functions traitlets.link and traitlets.dlink return a Link or DLink object. The link can be broken by calling the unlink method.
End of explanation
"""
caption = widgets.Label(value='The slider value is nonnegative')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
"""
Explanation: Registering callbacks to trait changes in the kernel
Since attributes of widgets on the Python side are traitlets, you can register handlers to the change events whenever the model gets updates from the front-end.
The handler passed to observe will be called with one change argument. The change object holds at least a type key and a name key, corresponding respectively to the type of notification and the name of the attribute that triggered the notification.
Other keys may be passed depending on the value of type. In the case where type is change, we also have the following keys:
owner : the HasTraits instance
old : the old value of the modified trait attribute
new : the new value of the modified trait attribute
name : the name of the modified trait attribute.
End of explanation
"""
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
range1, range2 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
caption = widgets.Label(value='Changes in source_range values are reflected in target_range1')
source_range, target_range1 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
"""
Explanation: Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
Javascript links persist when embedding widgets in html web pages without a kernel.
End of explanation
"""
# l.unlink()
# dl.unlink()
"""
Explanation: The function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
End of explanation
"""
|
mcc-petrinets/formulas | spot/tests/python/gen.ipynb | mit | import spot
import spot.gen as sg
spot.setup()
from IPython.display import display
"""
Explanation: Formulas & Automata generators
The spot.gen package contains the functions used to generate the patterns produced by genltl and genaut.
End of explanation
"""
sg.ltl_pattern(sg.LTL_AND_GF, 3)
sg.ltl_pattern(sg.LTL_CCJ_BETA_PRIME, 4)
"""
Explanation: LTL patterns
Generation of LTL formulas is done via the ltl_pattern() function. This takes two arguments: a pattern id, and a pattern size (or index if the id refers to a list).
End of explanation
"""
for f in sg.ltl_patterns((sg.LTL_GH_R, 3), (sg.LTL_AND_FG, 1, 3), sg.LTL_EH_PATTERNS):
display(f)
"""
Explanation: To see the list of supported patterns, the easiest way is to look at the --help output of genltl. The above pattern for instance is attached to option --ccj-beta-prime. The name of the pattern identifier is the same using capital letters, underscores, and an LTL_ prefix. If a pattern has multiple aliased options in genltl, the first one is used for the identifier (e.g., genltl accepts both --dac-patterns and --spec-patterns as synonyms to denote the patterns of spot.gen.LTL_DAC_PATTERNS).
Multiple patterns can be generated using the ltl_patterns() function. Its arguments can be:
- pairs of the form (id, n): in this case the pattern id with size/index n is returned,
- triplets of the form (id, min, max): in this case the patterns are output for all n between min and max included,
an integer id: then this is equivalent to (id, 1, 10) if the pattern has no upper bound, or (id, 1, upper) if the pattern id has an upper bound upper. This is mostly used when the pattern id corresponds to a hard-coded list of formulas.
Here is an example showing these three types of arguments:
End of explanation
"""
display(sg.aut_pattern(sg.AUT_KS_NCA, 3).show('.a'),
sg.aut_pattern(sg.AUT_L_DSA, 3).show('.a'),
sg.aut_pattern(sg.AUT_L_NBA, 3).show('.a'))
"""
Explanation: Automata patterns
We currently have only a couple of generators of automata:
End of explanation
"""
for aut in sg.aut_patterns(sg.AUT_KS_NCA):
print(aut.num_states())
"""
Explanation: Multiple automata can be generated using the aut_patterns() function, which works similarly to ltl_patterns().
End of explanation
"""
|
drericstrong/Blog | 20170502_MarkovChainsInEquipmentConditionMonitoring.ipynb | agpl-3.0 | import random
import matplotlib.pyplot as plt
%matplotlib inline
# Since the Markov assumption requires that the future
# state only depends on the current state, we will keep
# track of the current state during each iteration.
# "0" is low, "1" is normal, and "2" is high
def MCDegradeSim(t_prob, d_per_state, d_thresh):
# As the number of states increase, the initial state
# below might make a difference, so be careful.
cur_state = 1
cur_deg = 0
res = []
deg = []
# "while True" is usually a bad idea, but I know that the while
# loop must terminate, because the accrued degradation is
# always positive
while True:
rn = random.random()
# Contrary to the previous blog post, this will be
# done with much more coding efficiency.
if rn<=t_prob[cur_state][0]:
cur_state = 0
cur_deg += d_per_state[0]
# Remember that it's the cumulative probability
elif rn<=(t_prob[cur_state][0] + t_prob[cur_state][1]):
cur_state = 1
cur_deg += d_per_state[1]
else:
cur_state = 2
cur_deg += d_per_state[2]
# Save the results to an array
res.append(cur_state)
deg.append(cur_deg)
# If the degradation is above the threshold, the
# simulation is done
if cur_deg>d_thresh:
break
return res
# Transition probability matrix, taken from the image above
tpm = [[0.8, 0.19, 0.01],[0.01, 0.98, 0.01],[0.01, 0.2, 0.79]]
# Don't cheat and look at this! This is the degradation
# accrued per state and the damage threshold
dps = [0.5, 0.1, 1.5]
deg_thresh = 100
# Run and plot the results
res = MCDegradeSim(tpm, dps, deg_thresh)
plt.plot(res)
plt.title('Transition Probability=' + str(tpm))
plt.xlabel('Iteration')
plt.ylabel('State');
"""
Explanation: As a review from the previous blog post, Markov Chains are a way to describe processes that have multiple states. For instance, a switch might be flipped to option A, B, or C. Each of these states has an associated probability of transitioning to any other state, and we can collect these probabilities in a "transition probability matrix". The Markov, or "memoryless", assumption predicts the future state based only on the value of the current state.
In this case, we will be investigating a Markov Chain with three possible states:
[Image in blog post]
Markov Chains are often useful to consider for my domain of expertise, equipment condition-monitoring. Over a given time history, equipment may be considered to operate under different conditions, which might cause damage to the equipment at different rates. For example, operating a generator under high load will accrue more damage than operating under the rated load. Based on the image above, assume that the states refer to the generator load: "low load", "normal load", and "high load".
The simulation from the previous blog post will be modified so that there are three possible states, not two. Also, we will keep track of the degradation of the equipment, under the assumption that the equipment fails when the degradation reaches a hard threshold. Note that there are several simplifying assumptions for this example which make it much less practical for real-life applications. Equipment typically fail under a distribution of accrued damage rather than a hard threshold. Furthermore, the amount of damage accrued while operating in a particular state is not likely to be constant. If I were developing this model for production, it would be much more extensive; this analysis is useful as an example of what might be accomplished with Markov Chain analysis alone.
End of explanation
"""
# The new transition probability matrix
tpm2 = [[0.9, 0.09, 0.01],[0.05, 0.90, 0.05],[0.01, 0.1, 0.89]]
res2 = MCDegradeSim(tpm2, dps, deg_thresh)
plt.plot(res2)
plt.title('Transition Probability=' + str(tpm2))
plt.xlabel('Iteration')
plt.ylabel('State');
"""
Explanation: Note that the probability of transitioning to a normal state is higher than for the other states, so the simulation spends much more time in the "normal" state. Think of state "2" and state "0" as stressful operating conditions which decrease the expected lifetime of the equipment. Compare the number of iterations above (~500) to the following example, where the equipment spends more time in a stressful condition.
[Image in blog post]
Let's modify the transition probability matrix accordingly:
End of explanation
"""
num_failures = 100
# Run the MCDegradeSim function for num_failures
res_array = []
for ii in range(num_failures):
res = MCDegradeSim(tpm, dps, deg_thresh)
res_array.append(res)
"""
Explanation: Based on the figure above, the equipment fails more quickly (~150 iterations) and also spends more time in the "low load" and "high load" states, since we increased the probabilities of staying/transitioning to those two states.
Assuming that you didn't already look at the damage accrued per state in the code above, we can actually use the data itself to estimate it. Let's imagine that we have 100 examples of equipment failures (in real life, this is a very big assumption):
End of explanation
"""
import numpy as np
# Keep track of the transitions from/to each state
trans_matrix = np.zeros((3,3))
# "hist" is the time history for a single equipment
for hist in res_array:
# Iterate over each state in the time history,
# and find the transitions between an old state
# and a new state.
    for ii in range(1, len(hist)):
        old_state = hist[ii-1]
        new_state = hist[ii]
        trans_matrix[old_state, new_state] += 1
# To translate the counts into probabilities, divide each row
# by the total number of transitions out of that state
trans_prob = trans_matrix / trans_matrix.sum(axis=1, keepdims=True)
print(trans_prob)
"""
Explanation: Now, let's count up all the transitions in the data, to see if we can obtain the original transition probability matrix (assuming that we didn't already know it).
End of explanation
"""
from sklearn.linear_model import LinearRegression as LR
# "X"- Keep track of the number of times a state occurs
# over an entire time history
x = np.zeros((num_failures,3))
for ii, hist in enumerate(res_array):
# Bincount will sum the number of times that a
# state occurs in the history
    x[ii, :] = np.bincount(hist, minlength=3)
# "Y" is always 100% at failure
y = 100*np.ones((num_failures,3))
# Now, perform linear regression on the above data
lr_model = LR(fit_intercept=False)
lr_model.fit(x, y)
print(lr_model.coef_[0])
"""
Explanation: As can be seen, the above code approximates the original transition probability matrix fairly well. (Compare the above matrix to the first figure in this blog post)
Next, let's estimate the accrued degradation per state using a simple multiple linear regression. The following code will sum the number of times that a state occurs over an entire history ("x" in y=mx), along with the degradation at failure (since we are assuming a hard degradation threshold, degradation is always equal to 100% at failure). There will be no intercept term, since we know that degradation always begins at 0 (although this requires another assumption, as well as a full data history per equipment).
End of explanation
"""
|
tommyogden/maxwellbloch | docs/examples/mbs-lambda-weak-pulse-cloud-atoms-with-coupling.ipynb | mit | mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"detuning": 0.0,
"detuning_positive": true,
"label": "probe",
"rabi_freq": 1.0e-3,
"rabi_freq_t_args":
{
"ampl": 1.0,
"centre": 0.0,
"fwhm": 1.0
},
"rabi_freq_t_func": "gaussian"
},
{
"coupled_levels": [[1, 2]],
"detuning": 0.0,
"detuning_positive": false,
"label": "coupling",
"rabi_freq": 5.0,
"rabi_freq_t_args":
{
"ampl": 1.0,
"fwhm": 0.2,
"on": -1.0,
"off": 9.0
},
"rabi_freq_t_func": "ramp_onoff"
}
],
"num_states": 3
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 120,
"z_min": -0.2,
"z_max": 1.2,
"z_steps": 70,
"z_steps_inner": 100,
"num_density_z_func": "gaussian",
"num_density_z_args": {
"ampl": 1.0,
"fwhm": 0.5,
"centre": 0.5
},
"interaction_strengths": [1.0e3, 1.0e3],
"savefile": "mbs-lambda-weak-pulse-cloud-atoms-some-coupling"
}
"""
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
"""
Explanation: Λ-Type Three-Level: Weak Pulse with Coupling in a Cloud — Pulse Compression
Define and Solve
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('darkgrid')
plt.plot(mbs.zlist,
mbs.num_density_z_func(mbs.zlist, mbs.num_density_z_args));
%time Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
"""
Explanation: Number Density Profile
In this case we've defined a non-square profile for the number density as a function of $z$ (num_density_z_func).
End of explanation
"""
import numpy as np
fig = plt.figure(1, figsize=(16, 12))
# Probe
ax = fig.add_subplot(211)
cmap_range = np.linspace(0.0, 1.0e-3, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Probe',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes, color='grey', fontsize=16)
plt.colorbar(cf)
# Coupling
ax = fig.add_subplot(212)
cmap_range = np.linspace(0.0, 8.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[1]/(2*np.pi)),
cmap_range, cmap=plt.cm.Greens)
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Coupling',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes, color='grey', fontsize=16)
plt.colorbar(cf)
# Both
for ax in fig.axes:
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.tight_layout();
"""
Explanation: Plot Output
End of explanation
"""
|
josealber84/deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
print(text)
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
return None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
return None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return None, None, None, None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
jakevdp/sklearn_tutorial | notebooks/05-Validation.ipynb | bsd-3-clause | from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
"""
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Validation and Model Selection
In this section, we'll look at model evaluation and the tuning of hyperparameters, which are parameters that define the model.
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
"""
Explanation: Validating Models
One of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.
Consider the digits example we've been looking at previously. How might we check how well our model fits the data?
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
"""
Explanation: Let's fit a K-neighbors classifier
End of explanation
"""
y_pred = knn.predict(X)
"""
Explanation: Now we'll use this classifier to predict labels for the data
End of explanation
"""
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
"""
Explanation: Finally, we can check how well our prediction did:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
"""
Explanation: It seems we have a perfect classifier!
Question: what's wrong with this?
Validation Sets
Above we made the mistake of testing our data on the same set of data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.
A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:
End of explanation
"""
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
"""
Explanation: Now we train on the training data, and validate on the test data:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
"""
Explanation: This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine:
End of explanation
"""
knn.score(X_test, y_test)
"""
Explanation: This can also be computed directly from the model.score method:
End of explanation
"""
for n_neighbors in [1, 5, 10, 20, 30]:
knn = KNeighborsClassifier(n_neighbors)
knn.fit(X_train, y_train)
print(n_neighbors, knn.score(X_test, y_test))
"""
Explanation: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:
End of explanation
"""
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
"""
Explanation: We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice:
End of explanation
"""
from sklearn.model_selection import cross_val_score
cv = cross_val_score(KNeighborsClassifier(1), X, y, cv=2)
cv.mean()
"""
Explanation: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:
End of explanation
"""
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
"""
Explanation: K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation:
End of explanation
"""
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
"""
Explanation: This gives us an even better idea of how well our model is doing.
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, Sometimes using a
more complicated model will give worse results. Also, Sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We'll create a simple nonlinear function that we'd like to fit
End of explanation
"""
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
    np.random.seed(random_seed)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
"""
Explanation: Now let's create a realization of this dataset:
End of explanation
"""
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
"""
Explanation: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
"""
Explanation: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this:
End of explanation
"""
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
"""
Explanation: Now we'll use this to fit a quadratic curve to the data.
End of explanation
"""
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14);
"""
Explanation: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
End of explanation
"""
from IPython.html.widgets import interact
def plot_fit(degree=1, Npts=50):
X, y = make_data(Npts, error=1)
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
model = PolynomialRegression(degree=degree)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.ylim(-4, 14)
plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))
interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
"""
Explanation: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
End of explanation
"""
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.model_selection import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7,
scoring=rms_error)
"""
Explanation: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset:
End of explanation
"""
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
"""
Explanation: Now let's plot the validation curves:
End of explanation
"""
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
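# Sanity check on this choice: pick the degree with the lowest cross-validated
# rms error from the validation curve computed above.
print(degree[np.argmin(val_test.mean(axis=1))])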
"""
Explanation: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model.
As the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model.
Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.
Here's our best-fit model according to the cross-validation:
End of explanation
"""
from sklearn.model_selection import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 120)
N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
X, y, train_sizes, cv=5,
scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
"""
Explanation: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
End of explanation
"""
plot_learning_curve(1)
"""
Explanation: Let's see what the learning curves look like for a linear model:
End of explanation
"""
plot_learning_curve(3)
"""
Explanation: This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting.
As you add more data points, the training error will never decrease, and the testing error will never increase (why do you think this is?)
It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
End of explanation
"""
plot_learning_curve(10)
"""
Explanation: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
End of explanation
"""
|
oditorium/blog | Modules/DataImport.ipynb | agpl-3.0 | #!wget https://www.dropbox.com/s//DataImport.py -O DataImport.py
import DataImport as di
#help('DataImport')
"""
Explanation: Data Import - Testing
Class definitions
module DataImport
We want to import data directly from the ECB data warehouse, so for example rather than going to the series we want to download the csv data. In fact, the ECB provides three different download format (two csv's, one generic and one for Excel) and one XML download.
There is also an sdmx query facility that allows more granular control over what data will be downloaded.
The URI's are as follows (most also allow https):
human readable series
~~~
http://sdw.ecb.europa.eu/quickview.do?SERIES_KEY=-key-
~~~
csv file (generic and Excel format)
~~~
http://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=csv
http://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=xls
~~~
sdmx file
~~~
http://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=sdmx
~~~
sdmx query and endpoint
~~~
http://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=sdmxQuery
http://sdw-ws.ecb.europa.eu/
~~~
End of explanation
"""
#!wget https://www.dropbox.com/s//PDataFrame.py -O PDataFrame.py
import PDataFrame as pdf
#help('PDataFrame')
"""
Explanation: module PDataFrame
that's a little side project that creates persistent data frames
End of explanation
"""
#pdf.PDataFrame.create('DataImport.csv', ('key', 'description'))
#bm = pdf.PDataFrame('DataImport.csv')
#bm.set('deposit', ('ILM.W.U2.C.L022.U2.EUR', 'current usage of the deposit facility'))
#bm.set('lending', ('ILM.M.U2.C.A05B.U2.EUR', 'current aggregate usage of major lending facilities'))
#bm.set('lending_marg', ('ILM.W.U2.C.A055.U2.EUR', 'current usage of the marginal lending facility'))
"""
Explanation: Testing
Bookmarks
that's a little side project, which is to create a file containing bookmarks for interesting series in the ECB database; it uses the PDataFrame class defined above.
Note that the following lines can generally be commented out: the whole idea here is that the bookmarks are kept in persistent storage (here the file DataImport.csv) so one only has to execute bm.set() once to add a new bookmark (provided the csv file is being moved around with this notebook)
End of explanation
"""
bm = pdf.PDataFrame('DataImport.csv')
bm._df
"""
Explanation: just to check what bookmarks we have defined...
End of explanation
"""
bm.get('deposit', 'key')
"""
Explanation: ...and how to get the values back
End of explanation
"""
ei = di.ECBDataImport()
deposit = ei.fetch(bm.get('deposit', 'key'), skip_end=10)
lending = ei.fetch(bm.get('lending', 'key'))
lending_marg = ei.fetch(bm.get('lending_marg', 'key'))
"""
Explanation: Data
we fetch three data series, the ECB deposit facility, the ECB lending facility, and the ECB marginal lending facility, using the fetch method, which takes the series key as a parameter (see below for an explanation of the skip_end parameter)
End of explanation
"""
deposit.keys()
deposit['descr']
"""
Explanation: the dataset returned contains a number of additional info items, for example a description
End of explanation
"""
unit = 1000000
dp = ei.data_table(deposit, ei.time_reformat1, unit)
le = ei.data_table(lending, ei.time_reformat2, unit, True)
lm = ei.data_table(lending_marg, ei.time_reformat1, unit)
diff = le[2](dp[0]) - dp[1]
"""
Explanation: The time information is in a funny format (e.g., "2008w21"). So we then reformat the data tables into something that can be plotted, i.e. a float. For this we have the static method data_table that takes the data and a reformatting function for the time. Normally it returns a 2-tuple, the first component being the time-tuple, the second component being the value-tuple
If desired, an interpolation function can additionally be returned as the third component. This is necessary if we want to do operations on series that are not based on the same time values. We see this in the last line below: le[2] is the interpolation function for the lending, and it is applied to dp[0] which are the time values for the deposit function. Now the two series are on the same basis and can hence be subtracted (note that in fetch() we needed the skip_end parameter, because the available deposit data series goes further than the available lending series, which makes the interpolation fail).
End of explanation
"""
ei.time_reformat1("2010w2")
ei.time_reformat2("2010mar")
"""
Explanation: The functions for converting time are implemented as static methods on the object. For the time being there are two of them
End of explanation
"""
from matplotlib.pyplot import plot
plot(le[0], le[1])
plot(lm[0], lm[1])
plot(dp[0], dp[1])
plot(dp[0], diff)
"""
Explanation: We now can plot the data series. Note that that would not have been that trivial to do in Excel because one of the data series is monthly, the other one is weekly
End of explanation
"""
|
AEW2015/PYNQ_PR_Overlay | Pynq-Z1/notebooks/examples/tracebuffer_spi.ipynb | bsd-3-clause | from pprint import pprint
from time import sleep
from pynq import PL
from pynq import Overlay
from pynq.drivers import Trace_Buffer
from pynq.iop import Pmod_OLED
from pynq.iop import PMODA
from pynq.iop import PMODB
from pynq.iop import ARDUINO
ol = Overlay("base.bit")
ol.download()
pprint(PL.ip_dict)
"""
Explanation: Trace Buffer - Tracing SPI Transactions
The Trace_Buffer class can monitor the waveform and transactions on PMODA, PMODB, and ARDUINO connectors.
This demo shows how to use this class to track SPI transactions. For this demo, users have to connect the Pmod OLED to PMODB.
Step 1: Overlay Management
Users have to import all the necessary classes. Make sure to use the right bitstream.
End of explanation
"""
oled = Pmod_OLED(PMODB)
"""
Explanation: Step 2: Instantiating OLED
Although this demo can also be done on PMODA, we use PMODB in this demo.
End of explanation
"""
tr_buf = Trace_Buffer(PMODB,"spi",samplerate=20000000)
# Start the trace buffer
tr_buf.start()
# Write characters
oled.write("1 2 3 4 5 6")
# Stop the trace buffer
tr_buf.stop()
"""
Explanation: Step 3: Tracking Transactions
Instantiate the trace buffer with the SPI protocol. The SPI clock is controlled by the 100MHz IO Processor (IOP). The SPI clock period is 16 times the IOP clock period, based on the settings of the IOP SPI controller. Hence we set the sample rate to 20MHz.
After starting the trace buffer DMA, also start to write some characters. Then stop the trace buffer DMA.
End of explanation
"""
# Configuration for PMODB
start = 20000
stop = 40000
tri_sel = [0x80000<<32,0x40000<<32,0x20000<<32,0x10000<<32]
tri_0 = [0x8<<32,0x4<<32,0x2<<32,0x1<<32]
tri_1 = [0x800<<32,0x400<<32,0x200<<32,0x100<<32]
mask = 0x0
# Parsing and decoding
tr_buf.parse("spi_trace.csv",
start,stop,mask,tri_sel,tri_0,tri_1)
tr_buf.set_metadata(['CLK','NC','MOSI','CS'])
tr_buf.decode("spi_trace.pd",
options=':wordsize=8:cpol=0:cpha=0')
"""
Explanation: Step 4: Parsing and Decoding Transactions
The trace buffer object is able to parse the transactions into a *.csv file (saved into the same folder as this script). The input arguments for the parsing method are:
* start : the starting sample number of the trace.
* stop : the stopping sample number of the trace.
* tri_sel: masks for tri-state selection bits.
* tri_0: masks for pins selected when the corresponding tri_sel = 0.
* tri_1: masks for pins selected when the corresponding tri_sel = 1.
* mask: mask for pins selected always.
For PMODA, the configuration of the masks can be:
* tri_sel = [0x80000,0x40000,0x20000,0x10000]
* tri_0 = [0x8,0x4,0x2,0x1]
* tri_1 = [0x800,0x400,0x200,0x100]
* mask = 0x0
Then the trace buffer object can also decode the transactions using the open-source sigrok decoders. The decoded file (*.pd) is saved into the same folder as this script.
Reference:
https://sigrok.org/wiki/Main_Page
End of explanation
"""
s0 = 10000
s1 = 15000
tr_buf.display(s0,s1)
"""
Explanation: Step 5: Displaying the Result
The final waveform and decoded transactions are shown using the open-source wavedrom library. The two input arguments (s0 and s1 ) indicate the starting and stopping location where the waveform is shown.
The valid range for s0 and s1 is: 0 < s0 < s1 < (stop-start), where start and stop are defined in the last step.
Reference:
https://www.npmjs.com/package/wavedrom
End of explanation
"""
|
pombredanne/gensim | docs/notebooks/Topics_and_Transformations.ipynb | lgpl-2.1 | import logging
import os.path
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Topics and Transformation
Don't forget to set
End of explanation
"""
from gensim import corpora, models, similarities
if (os.path.exists("/tmp/deerwester.dict")):
dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')
corpus = corpora.MmCorpus('/tmp/deerwester.mm')
print("Used files generated from first tutorial")
else:
print("Please run first tutorial to generate data set")
print (dictionary[0])
print (dictionary[1])
print (dictionary[2])
"""
Explanation: if you want to see logging events.
Transformation interface
In the previous tutorial on Corpora and Vector Spaces, we created a corpus of documents represented as a stream of vectors. To continue, let’s fire up gensim and use that corpus:
End of explanation
"""
tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
"""
Explanation: In this tutorial, I will show how to transform documents from one vector representation into another. This process serves two goals:
To bring out hidden structure in the corpus, discover relationships between words and use them to describe the documents in a new and (hopefully) more semantic way.
To make the document representation more compact. This both improves efficiency (the new representation consumes fewer resources) and efficacy (marginal data trends are ignored, noise-reduction).
Creating a transformation
The transformations are standard Python objects, typically initialized by means of a training corpus:
End of explanation
"""
doc_bow = [(0, 1), (1, 1)]
print(tfidf[doc_bow]) # step 2 -- use the model to transform vectors
"""
Explanation: We used our old corpus from tutorial 1 to initialize (train) the transformation model. Different transformations may require different initialization parameters; in case of TfIdf, the “training” consists simply of going through the supplied corpus once and computing document frequencies of all its features. Training other models, such as Latent Semantic Analysis or Latent Dirichlet Allocation, is much more involved and, consequently, takes much more time.
<B>Note</B>:
Transformations always convert between two specific vector spaces. The same vector space (= the same set of feature ids) must be used for training as well as for subsequent vector transformations. Failure to use the same input feature space, such as applying a different string preprocessing, using different feature ids, or using bag-of-words input vectors where TfIdf vectors are expected, will result in feature mismatch during transformation calls and consequently in either garbage output and/or runtime exceptions.
End of explanation
"""
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
print(doc)
"""
Explanation: Or to apply a transformation to a whole corpus:
End of explanation
"""
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) # initialize an LSI transformation
corpus_lsi = lsi[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi
"""
Explanation: In this particular case, we are transforming the same corpus that we used for training, but this is only incidental. Once the transformation model has been initialized, it can be used on any vectors (provided they come from the same vector space, of course), even if they were not used in the training corpus at all. This is achieved by a process called folding-in for LSA, by topic inference for LDA etc.
<b>Note:</b>
Calling model[corpus] only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling corpus_transformed = model[corpus], because that would mean storing the result in main memory, and that contradicts gensim’s objective of memory-independence. If you will be iterating over the transformed corpus_transformed multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that, as sketched in the snippet below.
Transformations can also be serialized, one on top of another, in a sort of chain:
End of explanation
"""
lsi.print_topics(2)
"""
Explanation: Here we transformed our Tf-Idf corpus via Latent Semantic Indexing into a latent 2-D space (2-D because we set num_topics=2). Now you’re probably wondering: what do these two latent dimensions stand for? Let’s inspect with models.LsiModel.print_topics():
End of explanation
"""
for doc in corpus_lsi: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
print(doc)
lsi.save('/tmp/model.lsi') # same for tfidf, lda, ...
lsi = models.LsiModel.load('/tmp/model.lsi')
"""
Explanation: (the topics are printed to log – see the note at the top of this page about activating logging)
It appears that according to LSI, “trees”, “graph” and “minors” are all related words (and contribute the most to the direction of the first topic), while the second topic practically concerns itself with all the other words. As expected, the first five documents are more strongly related to the second topic while the remaining four documents to the first topic:
End of explanation
"""
model = models.TfidfModel(corpus, normalize=True)
"""
Explanation: The next question might be: just how exactly similar are those documents to each other? Is there a way to formalize the similarity, so that for a given input document, we can order some other set of documents according to their similarity? Similarity queries are covered in the next tutorial.
Available transformations
Gensim implements several popular Vector Space Model algorithms:
Term Frequency * Inverse Document Frequency, Tf-Idf expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality, except that features which were rare in the training corpus will have their value increased. It therefore converts integer-valued vectors into real-valued ones, while leaving the number of dimensions intact. It can also optionally normalize the resulting vectors to (Euclidean) unit length.
End of explanation
"""
model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300)
"""
Explanation: Latent Semantic Indexing, LSI (or sometimes LSA) transforms documents from either bag-of-words or (preferably) TfIdf-weighted space into a latent space of a lower dimensionality. For the toy corpus above we used only 2 latent dimensions, but on real corpora, target dimensionality of 200–500 is recommended as a “golden standard” [1].
End of explanation
"""
model = models.RpModel(corpus_tfidf, num_topics=500)
"""
Explanation: LSI training is unique in that we can continue “training” at any point, simply by providing more training documents. This is done by incremental updates to the underlying model, in a process called online training. Because of this feature, the input document stream may even be infinite – just keep feeding LSI new documents as they arrive, while using the computed transformation model as read-only in the meanwhile!
<b>Example</b>
model.add_documents(another_tfidf_corpus) # now LSI has been trained on tfidf_corpus + another_tfidf_corpus
lsi_vec = model[tfidf_vec] # convert some new document into the LSI space, without affecting the model
model.add_documents(more_documents) # tfidf_corpus + another_tfidf_corpus + more_documents
lsi_vec = model[tfidf_vec]
See the [gensim.models.lsimodel](https://radimrehurek.com/gensim/models/lsimodel.html#module-gensim.models.lsimodel) documentation for details on how to make LSI gradually “forget” old observations in infinite streams. If you want to get dirty, there are also parameters you can tweak that affect speed vs. memory footprint vs. numerical precision of the LSI algorithm.
gensim uses a novel online incremental streamed distributed training algorithm (quite a mouthful!), which I published in [5]. gensim also executes a stochastic multi-pass algorithm from Halko et al. [4] internally, to accelerate in-core part of the computations. See also
[Experiments on the English Wikipedia](https://radimrehurek.com/gensim/wiki.html) for further speed-ups by distributing the computation across a cluster of computers.
Random Projections, RP aim to reduce vector space dimensionality. This is a very efficient (both memory- and CPU-friendly) approach to approximating TfIdf distances between documents, by throwing in a little randomness. Recommended target dimensionality is again in the hundreds/thousands, depending on your dataset.
End of explanation
"""
model = models.LdaModel(corpus, id2word=dictionary, num_topics=100)
"""
Explanation: Latent Dirichlet Allocation, LDA is yet another transformation from bag-of-words counts into a topic space of lower dimensionality. LDA is a probabilistic extension of LSA (also called multinomial PCA), so LDA’s topics can be interpreted as probability distributions over words. These distributions are, just like with LSA, inferred automatically from a training corpus. Documents are in turn interpreted as a (soft) mixture of these topics (again, just like with LSA).
End of explanation
"""
model = models.HdpModel(corpus, id2word=dictionary)
"""
Explanation: gensim uses a fast implementation of online LDA parameter estimation based on [2], modified to run in distributed mode on a cluster of computers.
Hierarchical Dirichlet Process, HDP is a non-parametric Bayesian method (note the missing number of requested topics):
End of explanation
"""
|
opencb/opencga | opencga-client/src/main/python/notebooks/user-training/pyopencga_clinical_queries.ipynb | apache-2.0 | ## Step 1. Import pyopencga dependencies
from pyopencga.opencga_config import ClientConfiguration # import configuration module
from pyopencga.opencga_client import OpencgaClient # import client module
from pprint import pprint
from IPython.display import JSON
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
## Step 2. User credentials
user = 'demouser'
####################################
## Step 3. Create the ClientConfiguration dict
host = 'http://bioinfo.hpc.cam.ac.uk/opencga-prod'
config_dict = {'rest': {
'host': host
}
}
## Step 4. Create the ClientConfiguration and OpenCGA client
config = ClientConfiguration(config_dict)
oc = OpencgaClient(config)
## Step 5. Login to OpenCGA using the OpenCGA client- add password when prompted
oc.login(user)
print('Logged successfully to {}, your token is: {} well done!'.format(host, oc.token))
"""
Explanation: Clinical Queries
Setup the Client and Login into pyopencga
Configuration and Credentials
Let's assume we already have pyopencga installed in our Python setup (all the steps are described in pyopencga_first_steps.ipynb).
You need to provide at least a host server URL in the standard configuration format for OpenCGA as a python dictionary or in a json file.
End of explanation
"""
# Define the study id
study = 'reanalysis:rd38'
# Define a clinicalCaseId
case_id = 'OPA-10044-1'
# Define a interpretationId
interpretation_id = 'OPA-10044-1__2'
"""
Explanation: Define some common variables
Here you can define some variables that will be used repeatedly over the notebook.
End of explanation
"""
## Query using the clinical search web service
cases_search = oc.clinical.search(study=study, include='id,type,proband,description,panels,interpretation', limit=5)
cases_search.print_results(title='Cases found for study {}'.format(study), fields='id,type,proband.id,panels.id,interpretation.id')
## Uncomment next line to display an interactive JSON viewer
# JSON(cases_search.get_results())
"""
Explanation: 1. Common Queries for Clinical Analysis
Retrieve cases in a study
The query below retrieves the cases in a study. For performance reasons, we have limited the number of results retrieved in the query.
You can change the parameter limit to control the number of cases you want to retrieve for the query.
You can also control the information you want to retrieve and print from the cases with the parameters include and fields.
End of explanation
"""
## Query using the clinical info web service
disorder_search = oc.clinical.search(study=study, include='id,type,proband', limit=5)
disorder_search.print_results(title='Disorders and phenotypes', fields='id,type,proband.id')
disorder_object = disorder_search.get_results()[0]['proband']
## Uncomment next line to display an interactive JSON viewer
# JSON(disorder_object)
"""
Explanation: Proband information: List of disorders and HPO terms from proband of a case
The proband field from a case contains all the information related to a proband, including phenotypes and disorders.
You can retrieve all the phenotypes and disorders of a proband from a case by inspecting the information at the proband level. We'll use the example case_id defined above:
End of explanation
"""
# Query using the clinical info web service
clinical_info = oc.clinical.info(clinical_analysis=case_id, study=study)
clinical_info.print_results(fields='id,interpretation.id,type,proband.id')
## Uncomment next line to display an interactive JSON viewer
# JSON(clinical_info.get_results()[0]['interpretation'])
"""
Explanation: Check the interpretation id of a case
You can find the interpretation id of a case. This is useful to perform subsequent queries for that interpretation.
Note that you can control the fields that are printed by the function print_results with the parameter fields. To see the whole clinical analysis object, you can use the interactive JSON viewer below.
End of explanation
"""
## Query using the clinical info_interpretation web service
interpretation_object = oc.clinical.info_interpretation(interpretations='OPA-12120-1__2', study=study).get_results()
## Uncomment next line to display an interactive JSON viewer
# JSON(interpretation_object)
"""
Explanation: Inspect the Interpretation object
Here you will retrieve much useful information from a case interpretation.
End of explanation
"""
## Query using the clinical info_interpretation web service
interpretation_stats = oc.clinical.info_interpretation(interpretations='OPA-12120-1__2', include='stats', study=study).get_results()[0]['stats']['primaryFindings']
## Uncomment next line to display an interactive JSON viewer
# JSON(interpretation_stats)
"""
Explanation: Check Reported pathogenic variants in a case interpretation and list the variant tier
Run the cell below to retrieve the interpretation stats, including the pathogenic variants reported in a case.
End of explanation
"""
## Query using the clinical info_interpretation web service
variant_annotation = oc.clinical.info_interpretation(interpretations='OPA-12120-1__2', include='primaryFindings.annotation', study=study).get_results()[0]['primaryFindings']
## Uncomment next line to display an interactive JSON viewer
# JSON(variant_annotation)
"""
Explanation: Retrieve the annotation for the reported variants
Run the cell below to retrieve the annotation for the variants obtained
End of explanation
"""
cases_search = oc.clinical.search(study=study, include='id,panels', limit= 5)
cases_search.print_results(title='Cases found for study {}'.format(study), fields='id,panels.id')
## Uncomment next line to display an interactive JSON viewer
# JSON(cases_search.get_results())
"""
Explanation: PanelApp panels applied in the original analysis
Obtain the list of genes that were in the panel at the time of the original analysis
End of explanation
"""
## Search the cases
cases_search = oc.clinical.search(study=study, limit=3)
## Uncomment next line to display an interactive JSON viewer
# JSON(cases_search.get_results())
"""
Explanation: 2. Use Case
Situation: I want to retrieve a case and check whether the case has a reported pathogenic variant, then retrieve the annotation information about those variants, if available.
Finally, I want to come up with the list of tier 1, 2 and 3 variants for the sample.
1. Search Cases in the study and select one random case.
First you need to perform the query of searching over all the cases in a study. Uncomment the second line to have a look at the JSON with all the cases in the study.
Note that this query can take time because there is plenty of information. It is recommended to restrict the search to a number of cases with the parameter limit as below:
End of explanation
"""
## Define an empty list to keep the case ids:
case_ids = []
## Iterate over the cases and retrieve the ids:
for case in oc.clinical.search(study=study, include='id').result_iterator():
case_ids.append(case['id'])
## Uncomment for printing the list with all the case ids
# print(case_ids)
## Select a random case from the list
import random
if case_ids != []:
print('There are {} cases in study {}'.format(len(case_ids), study))
selected_case = random.choice(case_ids)
print('Case selected for analysis is {}'.format(selected_case))
else:
print('There are no cases in the study', study)
"""
Explanation: Now you can select one random case id for the subsequent analysis
End of explanation
"""
## Query using the clinical info web service
interpretation_info = oc.clinical.info(clinical_analysis=selected_case, study=study)
interpretation_info.print_results(fields='id,interpretation.id,type,proband.id')
## Select interpretation object
interpretation_object = interpretation_info.get_results()[0]['interpretation']
## Select interpretation id
interpretation_id = interpretation_info.get_results()[0]['interpretation']['id']
## Uncomment next line to display an interactive JSON viewer
# JSON(interpretation_object)
print('The interpretation id for case {} is {}'.format(selected_case, interpretation_object['id'] ))
"""
Explanation: 2. Retrieve the interpretation id/s from the selected case
End of explanation
"""
## Query using the clinical info_interpretation web service
interpretation_stats = oc.clinical.info_interpretation(interpretations=interpretation_id, include='stats', study=study).get_results()[0]['stats']['primaryFindings']
## Uncomment next line to display an interactive JSON viewer
# JSON(interpretation_stats)
"""
Explanation: 3. Retrieve reported variants and the annotation, including tiering
Obtain the interpretation stats from the case
End of explanation
"""
## Query using the clinical info_interpretation web service
primary_findings = oc.clinical.info_interpretation(interpretations=interpretation_id, study=study).get_results()[0]['primaryFindings']
## Uncomment next line to display an interactive JSON viewer
# JSON(primary_findings)
"""
Explanation: Obtain the annotation of variants reported in an interpretation of a case as a JSON object
End of explanation
"""
## Perform the query
variants_reported = oc.clinical.info_interpretation(interpretations=interpretation_id, study=study)
## Define empty list to store the variants, genes and the tiering
variant_list = []
gene_id_list=[]
genename_list=[]
tier_list =[]
for variant in variants_reported.get_results()[0]['primaryFindings']:
variant_id = variant['id']
variant_list.append(variant_id)
gene_id = variant['evidences'][0]['genomicFeature']['id']
gene_id_list.append(gene_id)
gene_name = variant['evidences'][0]['genomicFeature']['geneName']
genename_list.append(gene_name)
tier = variant['evidences'][0]['classification']['tier']
tier_list.append(tier)
## Construct a Dataframe and return the first 5 rows
df = pd.DataFrame(data = {'variant_id':variant_list, 'gene_id':gene_id_list, 'gene_name':genename_list, 'tier': tier_list})
df.head()
"""
Explanation: Obtain tiering: variant ids, genes, and tier from a case interpretation
End of explanation
"""
|
fluxcapacitor/source.ml | jupyterhub.ml/notebooks/train_deploy/spark/spark_census/01_TrainModel.ipynb | apache-2.0 | import os
master = '--master local[1]'
#master = '--master spark://apachespark-master-2-1-0:7077'
conf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'
packages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'
jars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'
py_files = '--py-files /root/lib/jpmml.py'
os.environ['PYSPARK_SUBMIT_ARGS'] = master \
+ ' ' + conf \
+ ' ' + packages \
+ ' ' + jars \
+ ' ' + py_files \
+ ' ' + 'pyspark-shell'
print(os.environ['PYSPARK_SUBMIT_ARGS'])
"""
Explanation: Train Model
Configure Spark for Your Notebook
This example uses the local Spark Master --master local[1]
In production, you would use the PipelineIO Spark Master --master spark://apachespark-master-2-1-0:7077
End of explanation
"""
from pyspark.ml import Pipeline
from pyspark.ml.feature import RFormula
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark import SparkConf, SparkContext
from pyspark.sql.context import SQLContext
"""
Explanation: Import Spark Libraries
End of explanation
"""
from pyspark.sql import SparkSession
spark_session = SparkSession.builder.getOrCreate()
"""
Explanation: Create Spark Session
This may take a minute or two. Please be patient.
End of explanation
"""
df = spark_session.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("s3a://datapalooza/R/census.csv")
df.head()
print(df.count())
"""
Explanation: Read Data from Public S3 Bucket
AWS credentials are not needed.
We're asking Spark to infer the schema
The data has a header
Using bzip2 because it's a splittable compression file format
End of explanation
"""
formula = RFormula(formula = "income ~ .")
classifier = DecisionTreeClassifier()
pipeline = Pipeline(stages = [formula, classifier])
pipeline_model = pipeline.fit(df)
print(pipeline_model)
"""
Explanation: Create and Train Spark ML Pipeline
End of explanation
"""
from jpmml import toPMMLBytes
model = toPMMLBytes(spark_session, df, pipeline_model)
with open('model.spark', 'wb') as fh:
fh.write(model)
!ls -al model.spark
"""
Explanation: Export the Spark ML Pipeline
End of explanation
"""
|
MegaShow/college-programming | Homework/Principles of Artificial Neural Networks/Week 10 GAN 2/DL_WEEK10.ipynb | mit | import torch
torch.cuda.set_device(2)
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
from utils import initialize_weights
class DCGenerator(nn.Module):
def __init__(self, image_size=32, latent_dim=64, output_channel=1, class_num=3):
super(DCGenerator, self).__init__()
self.image_size = image_size
self.latent_dim = latent_dim
self.output_channel = output_channel
self.class_num = class_num
self.init_size = image_size // 8
# fc: Linear -> BN -> ReLU
self.fc = nn.Sequential(
nn.Linear(latent_dim + class_num, 512 * self.init_size ** 2),
nn.BatchNorm1d(512 * self.init_size ** 2),
nn.ReLU(inplace=True)
)
# deconv: ConvTranspose2d(4, 2, 1) -> BN -> ReLU ->
# ConvTranspose2d(4, 2, 1) -> BN -> ReLU ->
# ConvTranspose2d(4, 2, 1) -> Tanh
self.deconv = nn.Sequential(
nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(128, output_channel, 4, stride=2, padding=1),
nn.Tanh(),
)
initialize_weights(self)
def forward(self, z, labels):
"""
z : noise vector
labels : one-hot vector
"""
input_ = torch.cat((z, labels), dim=1)
out = self.fc(input_)
out = out.view(out.shape[0], 512, self.init_size, self.init_size)
img = self.deconv(out)
return img
class DCDiscriminator(nn.Module):
def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
super(DCDiscriminator, self).__init__()
self.image_size = image_size
self.input_channel = input_channel
self.class_num = class_num
self.fc_size = image_size // 8
# conv: Conv2d(3,2,1) -> LeakyReLU
# Conv2d(3,2,1) -> BN -> LeakyReLU
# Conv2d(3,2,1) -> BN -> LeakyReLU
self.conv = nn.Sequential(
nn.Conv2d(input_channel + class_num, 128, 3, 2, 1),
nn.LeakyReLU(0.2),
nn.Conv2d(128, 256, 3, 2, 1),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.2),
nn.Conv2d(256, 512, 3, 2, 1),
nn.BatchNorm2d(512),
nn.LeakyReLU(0.2),
)
# fc: Linear -> Sigmoid
self.fc = nn.Sequential(
nn.Linear(512 * self.fc_size * self.fc_size, 1),
)
if sigmoid:
self.fc.add_module('sigmoid', nn.Sigmoid())
initialize_weights(self)
def forward(self, img, labels):
"""
img : input image
labels : (batch_size, class_num, image_size, image_size)
the i-th channel is filled with 1, and others is filled with 0.
"""
input_ = torch.cat((img, labels), dim=1)
out = self.conv(input_)
out = out.view(out.shape[0], -1)
out = self.fc(out)
return out
"""
Explanation: Week10: GAN
Experiment Requirements and Basic Workflow
Requirements
Complete the content of the previous lab session and understand the principles and training procedure of GANs (Generative Adversarial Networks).
Building on the lecture material, understand the basic structure and main purpose of models such as CGAN and pix2pix.
Read the experiment content in the lab handbook, run and complete the experiment code as prompted, or briefly answer the questions. Keep the experiment results when submitting the assignment.
Workflow
CGAN
pix2pix
CGAN (Conditional GAN)
As seen in the previous session, a GAN can generate near-realistic images, but a plain GAN is too unconstrained to control. A CGAN (Conditional GAN) is a GAN with conditioning constraints: condition variables are introduced into the modelling of both the generator (G) and the discriminator (D). These condition variables can be based on many kinds of information, such as class labels or partial data for image inpainting. In the CGAN below, we use class labels as the condition variable for both G and D.
In the CGAN architecture below (similar to the DCGAN model shown in the previous session), the biggest difference from the earlier model is that class labels are added to the inputs of G and D. In G, the labels (represented as one-hot vectors; e.g. with 3 classes (0/1/2), the one-hot vector of class 2 is [0, 0, 1]) are fed together with the noise z into the first fully-connected layer. In D, the labels are fed into the convolutional layers together with the input image; each label is represented as a tensor of size (class_num, image_size, image_size) whose channel for the correct class is all ones and whose other channels are all zeros. A small sketch of this label encoding is included below.
End of explanation
"""
def load_mnist_data():
"""
    load mnist (0-9) dataset
"""
transform = torchvision.transforms.Compose([
        # transform to 1-channel gray image since we read images in RGB mode
transforms.Grayscale(1),
# resize image from 28 * 28 to 32 * 32
transforms.Resize(32),
transforms.ToTensor(),
# normalize with mean=0.5 std=0.5
transforms.Normalize(mean=(0.5, ),
std=(0.5, ))
])
train_dataset = torchvision.datasets.ImageFolder(root='./data/mnist', transform=transform)
return train_dataset
"""
Explanation: Dataset
We use the familiar MNIST handwritten-digit dataset to train our CGAN. As before, we provide a simplified version of the dataset to speed up training. Unlike last time, this dataset contains all 10 classes of handwritten digits (0 to 9), with 200 images per class and 2000 images in total. The images are again 28*28 single-channel grayscale images (we resize them to 32*32). Below is the code that loads the MNIST dataset.
End of explanation
"""
def denorm(x):
# denormalize
out = (x + 1) / 2
return out.clamp(0, 1)
from utils import show
"""
you can skip over the code in this cell
"""
# show mnist real data
train_dataset = load_mnist_data()
images = []
for j in range(5):
for i in range(10):
images.append(train_dataset[i * 200 + j][0])
show(torchvision.utils.make_grid(denorm(torch.stack(images)), nrow=10))
"""
Explanation: Next, let's look at the real handwritten-digit data for each class. (Just run these two cells of code; there is no need to understand them.)
End of explanation
"""
# class number
class_num = 10
# image size and channel
image_size=32
image_channel=1
# vecs: one-hot vectors of size(class_num, class_num)
# fills: vecs expand to size(class_num, class_num, image_size, image_size)
vecs = torch.eye(class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size)
print(vecs)
print(fills)
def train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, z_dim, class_num):
"""
train a GAN with model G and D in one epoch
Args:
trainloader: data loader to train
G: model Generator
D: model Discriminator
G_optimizer: optimizer of G(etc. Adam, SGD)
D_optimizer: optimizer of D(etc. Adam, SGD)
loss_func: Binary Cross Entropy(BCE) or MSE loss function
device: cpu or cuda device
z_dim: the dimension of random noise z
"""
# set train mode
D.train()
G.train()
D_total_loss = 0
G_total_loss = 0
for i, (x, y) in enumerate(trainloader):
x = x.to(device)
batch_size_ = x.size(0)
image_size = x.size(2)
# real label and fake label
real_label = torch.ones(batch_size_, 1).to(device)
fake_label = torch.zeros(batch_size_, 1).to(device)
# y_vec: (batch_size, class_num) one-hot vector, for example, [0,0,0,0,1,0,0,0,0,0] (label: 4)
y_vec = vecs[y.long()].to(device)
# y_fill: (batch_size, class_num, image_size, image_size)
# y_fill: the i-th channel is filled with 1, and others is filled with 0.
y_fill = fills[y.long()].to(device)
z = torch.rand(batch_size_, z_dim).to(device)
# update D network
# D optimizer zero grads
D_optimizer.zero_grad()
# D real loss from real images
d_real = D(x, y_fill)
d_real_loss = loss_func(d_real, real_label)
# D fake loss from fake images generated by G
g_z = G(z, y_vec)
d_fake = D(g_z, y_fill)
d_fake_loss = loss_func(d_fake, fake_label)
# D backward and step
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
D_optimizer.step()
# update G network
        # G optimizer zero grads
G_optimizer.zero_grad()
# G loss
g_z = G(z, y_vec)
d_fake = D(g_z, y_fill)
g_loss = loss_func(d_fake, real_label)
# G backward and step
g_loss.backward()
G_optimizer.step()
D_total_loss += d_loss.item()
G_total_loss += g_loss.item()
return D_total_loss / len(trainloader), G_total_loss / len(trainloader)
"""
Explanation: The training code is similar to before. The difference is that, according to the class, we generate y_vec (a one-hot vector, e.g. class 2 corresponds to [0,0,1,0,0,0,0,0,0,0]) and y_fill (y_vec expanded to size (class_num, image_size, image_size), where the channel of the correct class is all ones and the other channels are all zeros), and feed them into G and D respectively as condition variables. The rest of the training procedure is the same as for an ordinary GAN. We can first generate vecs and fills for every class label.
End of explanation
"""
def visualize_results(G, device, z_dim, class_num, class_result_size=5):
G.eval()
z = torch.rand(class_num * class_result_size, z_dim).to(device)
y = torch.LongTensor([i for i in range(class_num)] * class_result_size)
y_vec = vecs[y.long()].to(device)
g_z = G(z, y_vec)
show(torchvision.utils.make_grid(denorm(g_z.detach().cpu()), nrow=class_num))
def run_gan(trainloader, G, D, G_optimizer, D_optimizer, loss_func, n_epochs, device, latent_dim, class_num):
d_loss_hist = []
g_loss_hist = []
for epoch in range(n_epochs):
d_loss, g_loss = train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device,
latent_dim, class_num)
print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))
d_loss_hist.append(d_loss)
g_loss_hist.append(g_loss)
if epoch == 0 or (epoch + 1) % 10 == 0:
visualize_results(G, device, latent_dim, class_num)
return d_loss_hist, g_loss_hist
"""
Explanation: The code for visualize_results and run_gan is not described in detail again.
End of explanation
"""
# hyper params
# z dim
latent_dim = 100
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 120
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)
# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)
print(D)
print(G)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim, class_num)
from utils import loss_plot
loss_plot(d_loss_hist, g_loss_hist)
"""
Explanation: Now let's try training our CGAN.
End of explanation
"""
class DCDiscriminator1(nn.Module):
def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
super().__init__()
self.image_size = image_size
self.input_channel = input_channel
self.class_num = class_num
self.fc_size = image_size // 8
# model : img -> conv1_1
# labels -> conv1_2
# (img U labels) -> Conv2d(3,2,1) -> BN -> LeakyReLU
# Conv2d(3,2,1) -> BN -> LeakyReLU
self.conv1_1 = nn.Sequential(nn.Conv2d(input_channel, 64, 3, 2, 1),
nn.BatchNorm2d(64))
self.conv1_2 = nn.Sequential(nn.Conv2d(class_num, 64, 3, 2, 1),
nn.BatchNorm2d(64))
self.conv = nn.Sequential(
nn.LeakyReLU(0.2),
nn.Conv2d(128, 256, 3, 2, 1),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.2),
nn.Conv2d(256, 512, 3, 2, 1),
nn.BatchNorm2d(512),
nn.LeakyReLU(0.2),
)
# fc: Linear -> Sigmoid
self.fc = nn.Sequential(
nn.Linear(512 * self.fc_size * self.fc_size, 1),
)
if sigmoid:
self.fc.add_module('sigmoid', nn.Sigmoid())
initialize_weights(self)
def forward(self, img, labels):
"""
img : input image
labels : (batch_size, class_num, image_size, image_size)
the i-th channel is filled with 1, and others is filled with 0.
"""
"""
To Do
"""
input_img = self.conv1_1(img)
input_labels = self.conv1_2(labels)
input_ = torch.cat((input_img, input_labels), dim=1)
out = self.conv(input_)
out = out.view(out.shape[0], -1)
out = self.fc(out)
return out
# hyper params
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator1(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim, class_num)
loss_plot(d_loss_hist, g_loss_hist)
"""
Explanation: Assignment:
1. In D, the input image and the labels can be passed through two separate convolutional layers and then concatenated along dimension 1 (the channel dimension) before being sent through the rest of the network. The partial network structure has already been written in the discriminator class; complete the forward function to implement this, then train the CGAN again on the same dataset. Compared with the previous results, what is different?
End of explanation
"""
class DCDiscriminator2(nn.Module):
def __init__(self, image_size=32, input_channel=1, class_num=3, sigmoid=True):
super().__init__()
self.image_size = image_size
self.input_channel = input_channel
self.class_num = class_num
self.fc_size = image_size // 8
# model : img -> conv1
# labels -> maxpool
# (img U labels) -> Conv2d(3,2,1) -> BN -> LeakyReLU
# Conv2d(3,2,1) -> BN -> LeakyReLU
self.conv1 = nn.Sequential(nn.Conv2d(input_channel, 128, 3, 2, 1),
nn.BatchNorm2d(128))
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv = nn.Sequential(
nn.LeakyReLU(0.2),
nn.Conv2d(128 + class_num, 256, 3, 2, 1),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.2),
nn.Conv2d(256, 512, 3, 2, 1),
nn.BatchNorm2d(512),
nn.LeakyReLU(0.2),
)
# fc: Linear -> Sigmoid
self.fc = nn.Sequential(
nn.Linear(512 * self.fc_size * self.fc_size, 1),
)
if sigmoid:
self.fc.add_module('sigmoid', nn.Sigmoid())
initialize_weights(self)
def forward(self, img, labels):
"""
img : input image
labels : (batch_size, class_num, image_size, image_size)
the i-th channel is filled with 1, and others is filled with 0.
"""
"""
To Do
"""
input_img = self.conv1(img)
input_labels = self.maxpool(labels)
input_ = torch.cat((input_img, input_labels), dim=1)
out = self.conv(input_)
out = out.view(out.shape[0], -1)
out = self.fc(out)
return out
# hyper params
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator2(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim, class_num)
loss_plot(d_loss_hist, g_loss_hist)
"""
Explanation: Answer:
Comparing the loss curves of the two training runs, once the image and the labels each pass through their own convolution, G's loss stays within a stable range, whereas in the network without this extra convolution G's loss starts very low and then gradually rises. Judging from the loss curves, G changes more in the first run, so the second run yields a generator with better performance.
Comparing the output images also clearly shows that the results of the second run are better than those of the first.
2. In D, the input image can be passed through one convolutional layer and then concatenated along dimension 1 (the channel dimension) with the labels (whose spatial size matches the input image), before being sent through the rest of the network. The partial network structure has already been written in the discriminator class; complete the forward function to implement this, then train the CGAN again on the same dataset. Compared with the previous results, what is different?
End of explanation
"""
vecs = torch.randn(class_num, class_num)
fills = vecs.unsqueeze(2).unsqueeze(3).expand(class_num, class_num, image_size, image_size)
print(vecs)
print(fills)
# hyper params
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# G and D model
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel, class_num=class_num)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, class_num=class_num)
G.to(device)
D.to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim, class_num)
loss_plot(d_loss_hist, g_loss_hist)
"""
Explanation: Answer:
The generator's loss curve varies less than in the previous two runs, suggesting that the resulting G is stronger than the G obtained in those runs.
However, the final output images look worse to the eye than in the previous two runs, probably because G happened to perform poorly in the epoch chosen for the displayed output.
3. What if the class labels are not represented as one-hot vectors? Suppose we first generate a random vector for each class and then use that vector as the class label. Would this change the results? Try running the code below, compare with the previous results, and describe what is different.
End of explanation
"""
import os
import numpy as np
import math
import itertools
import time
import datetime
import sys
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision import datasets
import torch.nn as nn
import torch.nn.functional as F
import torch
"""
Explanation: Answer:
The results of this network are noticeably worse. Because the class labels are randomly generated, the fake images produced by the generator are easy for the discriminator to identify correctly. In the loss curves, G's loss rises quickly and does not settle within a fixed range, while D's loss also tends to decrease, which shows that this generator performs worse than the networks obtained in the previous three runs.
This is probably because the relationship between the generator's output and the class labels is highly random, so the discriminator finds it easier to judge the images as fake, while the generator receives weaker adjustments and therefore ends up weaker than in the previous three runs.
Image-to-image translation
Next we introduce pix2pix, a model that uses a CGAN for image-to-image translation.
End of explanation
"""
import glob
import random
import os
import numpy as np
from torch.utils.data import Dataset
from PIL import Image
import torchvision.transforms as transforms
class ImageDataset(Dataset):
def __init__(self, root, transforms_=None, mode="train"):
self.transform = transforms_
# read image
self.files = sorted(glob.glob(os.path.join(root, mode) + "/*.*"))
def __getitem__(self, index):
        # crop image: the left half is the ground-truth image, and the right half is its outline
img = Image.open(self.files[index % len(self.files)])
w, h = img.size
img_B = img.crop((0, 0, w / 2, h))
img_A = img.crop((w / 2, 0, w, h))
if np.random.random() < 0.5:
            # flip the image horizontally with 50% probability
img_A = Image.fromarray(np.array(img_A)[:, ::-1, :], "RGB")
img_B = Image.fromarray(np.array(img_B)[:, ::-1, :], "RGB")
img_A = self.transform(img_A)
img_B = self.transform(img_B)
return {"A": img_A, "B": img_B}
def __len__(self):
return len(self.files)
"""
Explanation: This experiment uses the Facades dataset. Because of how this dataset is laid out, each image consists of two parts: the left half is the ground truth and the right half is the outline. We therefore need to rewrite the dataset-reading class; the cell below is used to read the dataset. In the end, our model should be able to generate the building on the left from the outline on the right.
(You may skip reading this.) Below is the dataset code.
End of explanation
"""
import torch.nn as nn
import torch.nn.functional as F
import torch
##############################
# U-NET
##############################
class UNetDown(nn.Module):
def __init__(self, in_size, out_size, normalize=True, dropout=0.0):
super(UNetDown, self).__init__()
layers = [nn.Conv2d(in_size, out_size, 4, 2, 1, bias=False)]
if normalize:
            # when batch-size is 1, BN is replaced by instance normalization
layers.append(nn.InstanceNorm2d(out_size))
layers.append(nn.LeakyReLU(0.2))
if dropout:
layers.append(nn.Dropout(dropout))
self.model = nn.Sequential(*layers)
def forward(self, x):
return self.model(x)
class UNetUp(nn.Module):
def __init__(self, in_size, out_size, dropout=0.0):
super(UNetUp, self).__init__()
layers = [
nn.ConvTranspose2d(in_size, out_size, 4, 2, 1, bias=False),
            # when batch-size is 1, BN is replaced by instance normalization
nn.InstanceNorm2d(out_size),
nn.ReLU(inplace=True),
]
if dropout:
layers.append(nn.Dropout(dropout))
self.model = nn.Sequential(*layers)
def forward(self, x, skip_input):
x = self.model(x)
x = torch.cat((x, skip_input), 1)
return x
class GeneratorUNet(nn.Module):
def __init__(self, in_channels=3, out_channels=3):
super(GeneratorUNet, self).__init__()
self.down1 = UNetDown(in_channels, 64, normalize=False)
self.down2 = UNetDown(64, 128)
self.down3 = UNetDown(128, 256)
self.down4 = UNetDown(256, 256, dropout=0.5)
self.down5 = UNetDown(256, 256, dropout=0.5)
self.down6 = UNetDown(256, 256, normalize=False, dropout=0.5)
self.up1 = UNetUp(256, 256, dropout=0.5)
self.up2 = UNetUp(512, 256)
self.up3 = UNetUp(512, 256)
self.up4 = UNetUp(512, 128)
self.up5 = UNetUp(256, 64)
self.final = nn.Sequential(
nn.Upsample(scale_factor=2),
nn.ZeroPad2d((1, 0, 1, 0)),
nn.Conv2d(128, out_channels, 4, padding=1),
nn.Tanh(),
)
def forward(self, x):
# U-Net generator with skip connections from encoder to decoder
d1 = self.down1(x)# 32x32
d2 = self.down2(d1)#16x16
d3 = self.down3(d2)#8x8
d4 = self.down4(d3)#4x4
d5 = self.down5(d4)#2x2
d6 = self.down6(d5)#1x1
u1 = self.up1(d6, d5)#2x2
u2 = self.up2(u1, d4)#4x4
u3 = self.up3(u2, d3)#8x8
u4 = self.up4(u3, d2)#16x16
u5 = self.up5(u4, d1)#32x32
return self.final(u5)#64x64
##############################
# Discriminator
##############################
class Discriminator(nn.Module):
def __init__(self, in_channels=3):
super(Discriminator, self).__init__()
def discriminator_block(in_filters, out_filters, normalization=True):
"""Returns downsampling layers of each discriminator block"""
layers = [nn.Conv2d(in_filters, out_filters, 4, stride=2, padding=1)]
if normalization:
            # when batch-size is 1, BN is replaced by instance normalization
layers.append(nn.InstanceNorm2d(out_filters))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
self.model = nn.Sequential(
*discriminator_block(in_channels * 2, 64, normalization=False),#32x32
*discriminator_block(64, 128),#16x16
*discriminator_block(128, 256),#8x8
*discriminator_block(256, 256),#4x4
nn.ZeroPad2d((1, 0, 1, 0)),
nn.Conv2d(256, 1, 4, padding=1, bias=False)#4x4
)
def forward(self, img_A, img_B):
# Concatenate image and condition image by channels to produce input
img_input = torch.cat((img_A, img_B), 1)
return self.model(img_input)
"""
Explanation: The generator G is an encoder-decoder model that borrows the U-Net structure: layer i is concatenated with layer n-i, which works because the feature maps at layer i and layer n-i have the same spatial size.
The discriminator D in pix2pix is implemented as a patch discriminator (PatchGAN): no matter how large the generated image is, it is split into multiple fixed-size patches that are fed into D to be judged.
End of explanation
"""
from utils import show
def sample_images(dataloader, G, device):
"""Saves a generated sample from the validation set"""
imgs = next(iter(dataloader))
real_A = imgs["A"].to(device)
real_B = imgs["B"].to(device)
fake_B = G(real_A)
img_sample = torch.cat((real_A.data, fake_B.data, real_B.data), -2)
show(torchvision.utils.make_grid(img_sample.cpu().data, nrow=5, normalize=True))
"""
Explanation: (You may skip reading this.) The function below is used to output the outline image, the generated image, and the ground truth together for comparison.
End of explanation
"""
# hyper param
n_epochs = 200
batch_size = 2
lr = 0.0002
img_size = 64
channels = 3
device = torch.device('cuda:2')
betas = (0.5, 0.999)
# Loss weight of L1 pixel-wise loss between translated image and real image
lambda_pixel = 1
"""
Explanation: Next we define some hyperparameters, including lambda_pixel.
End of explanation
"""
from utils import weights_init_normal
# Loss functions
criterion_GAN = torch.nn.MSELoss().to(device)
criterion_pixelwise = torch.nn.L1Loss().to(device)
# Calculate output of image discriminator (PatchGAN)
patch = (1, img_size // 16, img_size // 16)
# Initialize generator and discriminator
G = GeneratorUNet().to(device)
D = Discriminator().to(device)
G.apply(weights_init_normal)
D.apply(weights_init_normal)
optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas)
# Configure dataloaders
transforms_ = transforms.Compose([
transforms.Resize((img_size, img_size), Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
dataloader = DataLoader(
ImageDataset("./data/facades", transforms_=transforms_),
batch_size=batch_size,
shuffle=True,
num_workers=8,
)
val_dataloader = DataLoader(
ImageDataset("./data/facades", transforms_=transforms_, mode="val"),
batch_size=10,
shuffle=True,
num_workers=1,
)
"""
Explanation: The pix2pix loss function consists of the CGAN loss plus an L1 loss, where the L1 loss is weighted by a coefficient lambda that balances the two terms.
Here we define the loss functions and the optimizers; MSE loss is used as the GAN loss (LSGAN).
End of explanation
"""
for epoch in range(n_epochs):
for i, batch in enumerate(dataloader):
# G:B -> A
real_A = batch["A"].to(device)
real_B = batch["B"].to(device)
# Adversarial ground truths
real_label = torch.ones((real_A.size(0), *patch)).to(device)
fake_label = torch.zeros((real_A.size(0), *patch)).to(device)
# ------------------
# Train Generators
# ------------------
optimizer_G.zero_grad()
# GAN loss
fake_B = G(real_A)
pred_fake = D(fake_B, real_A)
loss_GAN = criterion_GAN(pred_fake, real_label)
# Pixel-wise loss
loss_pixel = criterion_pixelwise(fake_B, real_B)
# Total loss
loss_G = loss_GAN + lambda_pixel * loss_pixel
loss_G.backward()
optimizer_G.step()
# ---------------------
# Train Discriminator
# ---------------------
optimizer_D.zero_grad()
# Real loss
pred_real = D(real_B, real_A)
loss_real = criterion_GAN(pred_real, real_label)
# Fake loss
pred_fake = D(fake_B.detach(), real_A)
loss_fake = criterion_GAN(pred_fake, fake_label)
# Total loss
loss_D = 0.5 * (loss_real + loss_fake)
loss_D.backward()
optimizer_D.step()
# Print log
print(
"\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f, pixel: %f, adv: %f]"
% (
epoch,
n_epochs,
i,
len(dataloader),
loss_D.item(),
loss_G.item(),
loss_pixel.item(),
loss_GAN.item(),
)
)
# If at sample interval save image
if epoch == 0 or (epoch + 1) % 5 == 0:
sample_images(val_dataloader, G, device)
"""
Explanation: Now we start training pix2pix. The training procedure is:
First train G: for each image A (outline), use G to generate fakeB (building), compute the L1 loss between fakeB and realB (ground truth), and at the same time pass (fakeB, A) to D and compute the MSE loss against label 1; these two losses together update G.
Then train D: compute the MSE loss on (fakeB, A) and (realB, A) (with labels 0 and 1 respectively) and update D.
End of explanation
"""
# Loss functions
criterion_pixelwise = torch.nn.L1Loss().to(device)
# Initialize generator and discriminator
G = GeneratorUNet().to(device)
D = Discriminator().to(device)
G.apply(weights_init_normal)
D.apply(weights_init_normal)
optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas)
for epoch in range(n_epochs):
for i, batch in enumerate(dataloader):
# G:B -> A
real_A = batch["A"].to(device)
real_B = batch["B"].to(device)
# ------------------
# Train Generators
# ------------------
optimizer_G.zero_grad()
# GAN loss
fake_B = G(real_A)
# Pixel-wise loss
loss_pixel = criterion_pixelwise(fake_B, real_B)
# Total loss
loss_G = loss_pixel
loss_G.backward()
optimizer_G.step()
# Print log
print(
"\r[Epoch %d/%d] [Batch %d/%d] [G loss: %f]"
% (
epoch,
n_epochs,
i,
len(dataloader),
loss_G.item()
)
)
# If at sample interval save image
if epoch == 0 or (epoch + 1) % 5 == 0:
sample_images(val_dataloader, G, device)
"""
Explanation: Assignment:
1. Train pix2pix using only the L1 loss. Describe how the results differ.
End of explanation
"""
# Loss functions
criterion_GAN = torch.nn.MSELoss().to(device)
# Initialize generator and discriminator
G = GeneratorUNet().to(device)
D = Discriminator().to(device)
G.apply(weights_init_normal)
D.apply(weights_init_normal)
optimizer_G = torch.optim.Adam(G.parameters(), lr=lr, betas=betas)
optimizer_D = torch.optim.Adam(D.parameters(), lr=lr, betas=betas)
for epoch in range(n_epochs):
for i, batch in enumerate(dataloader):
"""
To Do
"""
# G:B -> A
real_A = batch["A"].to(device)
real_B = batch["B"].to(device)
# Adversarial ground truths
real_label = torch.ones((real_A.size(0), *patch)).to(device)
fake_label = torch.zeros((real_A.size(0), *patch)).to(device)
# ------------------
# Train Generators
# ------------------
optimizer_G.zero_grad()
# GAN loss
fake_B = G(real_A)
pred_fake = D(fake_B, real_A)
loss_G = criterion_GAN(pred_fake, real_label)
loss_G.backward()
optimizer_G.step()
# ---------------------
# Train Discriminator
# ---------------------
optimizer_D.zero_grad()
# Real loss
pred_real = D(real_B, real_A)
loss_real = criterion_GAN(pred_real, real_label)
# Fake loss
pred_fake = D(fake_B.detach(), real_A)
loss_fake = criterion_GAN(pred_fake, fake_label)
# Total loss
loss_D = 0.5 * (loss_real + loss_fake)
loss_D.backward()
optimizer_D.step()
# Print log
print(
"\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
% (
epoch,
n_epochs,
i,
len(dataloader),
loss_D.item(),
loss_G.item()
)
)
# If at sample interval save image
if epoch == 0 or (epoch + 1) % 5 == 0:
sample_images(val_dataloader, G, device)
"""
Explanation: Answer:
When training with only the L1 loss, the network does not produce the colorful noise speckles seen at the start of the first training run. Within the first few iterations it quickly recovers the outline of the building, with much less noise than the first run. After more iterations, however, the images generated by the L1-only network are very blurry, and the results are much worse than those of the first run.
2. Train pix2pix using only the CGAN loss (fill in the corresponding code in the cell below and run it). Describe how the results differ.
End of explanation
"""
|
tlby/mxnet | example/recommenders/demo2-dssm.ipynb | apache-2.0 | import warnings
import mxnet as mx
from mxnet import gluon, np, npx, autograd, sym
import numpy as onp
from sklearn.random_projection import johnson_lindenstrauss_min_dim
# Define some constants
max_user = int(1e5)
title_vocab_size = int(3e4)
query_vocab_size = int(3e4)
num_samples = int(1e4)
hidden_units = 128
epsilon_proj = 0.25
ctx = mx.gpu() if mx.device.num_gpus() > 0 else mx.cpu()
"""
Explanation: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Content-based recommender using Deep Structured Semantic Model
An example of how to build a Deep Structured Semantic Model (DSSM) for incorporating complex content-based features into a recommender system. See Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. This example does not attempt to provide a datasource or train a model, but merely show how to structure a complex DSSM network.
End of explanation
"""
proj_dim = johnson_lindenstrauss_min_dim(num_samples, epsilon_proj)
print("To keep a distance disruption ~< {}% of our {} samples we need to randomly project to at least {} dimensions".format(epsilon_proj*100, num_samples, proj_dim))
class BagOfWordsRandomProjection(gluon.HybridBlock):
def __init__(self, vocab_size, output_dim, random_seed=54321, pad_index=0):
"""
:param int vocab_size: number of element in the vocabulary
:param int output_dim: projection dimension
:param int ramdon_seed: seed to use to guarantee same projection
:param int pad_index: index of the vocabulary used for padding sentences
"""
super(BagOfWordsRandomProjection, self).__init__()
self._vocab_size = vocab_size
self._output_dim = output_dim
proj = self._random_unit_vecs(vocab_size=vocab_size, output_dim=output_dim, random_seed=random_seed)
# we set the projection of the padding word to 0
proj[pad_index, :] = 0
self.proj = self.params.get_constant('proj', value=proj)
def _random_unit_vecs(self, vocab_size, output_dim, random_seed):
rs = onp.random.RandomState(seed=random_seed)
W = rs.normal(size=(vocab_size, output_dim))
Wlen = np.linalg.norm(W, axis=1)
W_unit = W / Wlen[:,None]
return W_unit
def forward(self, x, proj):
"""
:param nd or sym F:
:param nd.NDArray x: index of tokens
returns the sum of the projected embeddings of each token
"""
embedded = npx.embedding(x, proj, input_dim=self._vocab_size, output_dim=self._output_dim)
return embedded.sum(axis=1)
bowrp = BagOfWordsRandomProjection(1000, 20)
bowrp.initialize()
bowrp(mx.np.array([[10, 50, 100], [5, 10, 0]]))
"""
Explanation: Bag of words random projection
A previous version of this example contained a bag of word random projection example, it is kept here for reference but not used in the next example.
Random Projection is a dimension reduction technique that guarantees the disruption of the pair-wise distance between your original data point within a certain bound.
What is even more interesting is that the dimension to project onto to guarantee that bound does not depend on the original number of dimension but solely on the total number of datapoints.
You can see more explanation in this blog post
End of explanation
"""
bowrp(mx.np.array([[10, 50, 100, 0], [5, 10, 0, 0]]))
"""
Explanation: With padding:
End of explanation
"""
proj_dim = 128
class DSSMRecommenderNetwork(gluon.HybridBlock):
def __init__(self, query_vocab_size, proj_dim, max_user, title_vocab_size, hidden_units, random_seed=54321, p=0.5):
super(DSSMRecommenderNetwork, self).__init__()
# User/Query pipeline
self.user_embedding = gluon.nn.Embedding(max_user, proj_dim)
self.user_mlp = gluon.nn.Dense(hidden_units, activation="relu")
# Instead of bag of words, we use learned embeddings + stacked biLSTM average
self.query_text_embedding = gluon.nn.Embedding(query_vocab_size, proj_dim)
self.query_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True)
self.query_text_mlp = gluon.nn.Dense(hidden_units, activation="relu")
self.query_dropout = gluon.nn.Dropout(p)
self.query_mlp = gluon.nn.Dense(hidden_units, activation="relu")
# Item pipeline
# Instead of bag of words, we use learned embeddings + stacked biLSTM average
self.title_embedding = gluon.nn.Embedding(title_vocab_size, proj_dim)
self.title_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True)
self.title_mlp = gluon.nn.Dense(hidden_units, activation="relu")
# You could use vgg here for example
self.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=False).features
self.image_mlp = gluon.nn.Dense(hidden_units, activation="relu")
self.item_dropout = gluon.nn.Dropout(p)
self.item_mlp = gluon.nn.Dense(hidden_units, activation="relu")
def forward(self, user, query_text, title, image):
# Query
user = self.user_embedding(user)
user = self.user_mlp(user)
query_text = self.query_text_embedding(query_text)
query_text = self.query_lstm(query_text.transpose((1,0,2)))
# average the states
query_text = query_text.mean(axis=0)
query_text = self.query_text_mlp(query_text)
query = np.concatenate([user, query_text])
query = self.query_dropout(query)
query = self.query_mlp(query)
# Item
title_text = self.title_embedding(title)
title_text = self.title_lstm(title_text.transpose((1,0,2)))
# average the states
title_text = title_text.mean(axis=0)
title_text = self.title_mlp(title_text)
image = self.image_embedding(image)
image = self.image_mlp(image)
item = np.concatenate([title_text, image])
item = self.item_dropout(item)
item = self.item_mlp(item)
# Cosine Similarity
query = query.expand_dims(axis=2)
item = item.expand_dims(axis=2)
sim = npx.batch_dot(query, item, transpose_a=True) / np.expand_dims((np.norm(query, axis=1) * np.norm(item, axis=1) + 1e-9), axis=2)
return sim.squeeze(axis=2)
network = DSSMRecommenderNetwork(
query_vocab_size,
proj_dim,
max_user,
title_vocab_size,
hidden_units
)
network.initialize(mx.init.Xavier(), ctx)
# Load pre-trained resnet18 weights
with network.name_scope():
network.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=True, ctx=ctx).features
"""
Explanation: Content-based recommender / ranking system using DSSM
For example in the search result ranking problem:
You have users that have performed text-based searches. They were presented with results and selected one of them.
Results are composed of a title and an image.
Your positive examples will be the clicked items in the search results, and the negative examples are sampled from the non-clicked examples.
The network will jointly learn embeddings for the user and query text making up the "Query" and for the title and image making up the "Item", and learn how similar they are.
After training, you can index the embeddings for your items and do a kNN search with your query embeddings using cosine similarity to return ranked items; a small ranking sketch follows below.
End of explanation
"""
mx.viz.plot_network(network(
mx.sym.var('user'), mx.sym.var('query_text'), mx.sym.var('title'), mx.sym.var('image')),
shape={'user': (1,1), 'query_text': (1,30), 'title': (1,30), 'image': (1,3,224,224)},
node_attrs={"fixedsize":"False"})
"""
Explanation: It is quite hard to visualize the network since it is relatively complex, but you can see the two-pronged structure and the resnet18 branch.
End of explanation
"""
user = mx.np.array([[200], [100]], ctx)
query = mx.np.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text
title = mx.np.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text
image = mx.np.random.uniform(size=(2,3, 224,224), ctx=ctx) # Example of an encoded image
network.summary(user, query, title, image)
network(user, query, title, image)
"""
Explanation: We can print the summary of the network using dummy data. We can see it is already training on 32M parameters!
End of explanation
"""
|
ibm-cds-labs/pixiedust | notebook/GraphFrame with Pixiedust.ipynb | apache-2.0 | cloudantHost='dtaieb.cloudant.com'
cloudantUserName='weenesserliffircedinvers'
cloudantPassword='72a5c4f939a9e2578698029d2bb041d775d088b5'
airports = sqlContext.read.format("com.cloudant.spark").option("cloudant.host",cloudantHost)\
.option("cloudant.username",cloudantUserName).option("cloudant.password",cloudantPassword)\
.option("schemaSampleSize", "-1").load("flight-metadata")
airports.cache()
airports.registerTempTable("airports")
import pixiedust
# Display the airports data
display(airports)
flights = sqlContext.read.format("com.cloudant.spark").option("cloudant.host",cloudantHost)\
.option("cloudant.username",cloudantUserName).option("cloudant.password",cloudantPassword)\
.option("schemaSampleSize", "-1").load("pycon_flightpredict_training_set")
flights.cache()
flights.registerTempTable("training")
# Display the flights data
display(flights)
"""
Explanation: Load the airport and flight data from Cloudant
End of explanation
"""
from pyspark.sql import functions as f
from pyspark.sql.types import *
rdd = flights.rdd.flatMap(lambda s: [s.arrivalAirportFsCode, s.departureAirportFsCode]).distinct()\
.map(lambda row:[row])
vertices = airports.join(
sqlContext.createDataFrame(rdd, StructType([StructField("fs",StringType())])), "fs"
).dropDuplicates(["fs"]).withColumnRenamed("fs","id")
print(vertices.count())
edges = flights.withColumnRenamed("arrivalAirportFsCode","dst")\
.withColumnRenamed("departureAirportFsCode","src")\
.drop("departureWeather").drop("arrivalWeather").drop("pt_type").drop("_id").drop("_rev")
print(edges.count())
"""
Explanation: Build the vertices and edges dataframe from the data
End of explanation
"""
import pixiedust
if sc.version.startswith('1.6.'): # Spark 1.6
pixiedust.installPackage("graphframes:graphframes:0.5.0-spark1.6-s_2.11")
elif sc.version.startswith('2.'): # Spark 2.1, 2.0
pixiedust.installPackage("graphframes:graphframes:0.5.0-spark2.1-s_2.11")
pixiedust.installPackage("com.typesafe.scala-logging:scala-logging-api_2.11:2.1.2")
pixiedust.installPackage("com.typesafe.scala-logging:scala-logging-slf4j_2.11:2.1.2")
print("done")
"""
Explanation: Install GraphFrames package using PixieDust packageManager
The GraphFrames package to install depends on the environment.
Spark 1.6
graphframes:graphframes:0.5.0-spark1.6-s_2.11
Spark 2.x
graphframes:graphframes:0.5.0-spark2.1-s_2.11
In addition, recent versions of graphframes have dependencies on other packages which will need to also be installed:
com.typesafe.scala-logging:scala-logging-api_2.11:2.1.2
com.typesafe.scala-logging:scala-logging-slf4j_2.11:2.1.2
Note: After installing packages, the kernel will need to be restarted and all the previous cells re-run (including the install package cell).
End of explanation
"""
from graphframes import GraphFrame
g = GraphFrame(vertices, edges)
display(g)
"""
Explanation: Create the GraphFrame from the Vertices and Edges Dataframes
End of explanation
"""
from pyspark.sql.functions import *
degrees = g.degrees.sort(desc("degree"))
display( degrees )
"""
Explanation: Compute the degree for each vertex in the graph
The degree of a vertex is the number of edges incident to the vertex. In a directed graph, in-degree is the number of edges where the vertex is the destination and out-degree is the number of edges where the vertex is the source. With GraphFrames, the degrees, outDegrees and inDegrees properties return a DataFrame containing the id of the vertex and the number of edges; a short inDegrees/outDegrees sketch is included below. We then sort them in descending order.
End of explanation
"""
r = g.shortestPaths(landmarks=["BOS", "LAX"]).select("id", "distances")
display(r)
"""
Explanation: Compute a list of shortest paths for each vertex to a specified list of landmarks
For this we use the shortestPaths api, which returns a DataFrame containing the properties of each vertex plus an extra column called distances that contains the number of hops to each landmark.
In the following code, we use BOS and LAX as the landmarks.
End of explanation
"""
from pyspark.sql.functions import *
ranks = g.pageRank(resetProbability=0.20, maxIter=5)
rankedVertices = ranks.vertices.select("id","pagerank").orderBy(desc("pagerank"))
rankedEdges = ranks.edges.select("src", "dst", "weight").orderBy(desc("weight") )
ranks = GraphFrame(rankedVertices, rankedEdges)
display(ranks)
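# Small sketch: to look at just the top-ranked airports (rather than the whole graph),
# the rankedVertices frame computed above can be displayed directly
display(rankedVertices.limit(10))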
"""
Explanation: Compute the pageRank for each vertex in the graph
PageRank is a famous algorithm used by Google Search to rank vertices in a graph by order of importance. To compute pageRank, we'll use the pageRank api that returns a new graph in which the vertices have a new pagerank column representing the pagerank score for the vertex and the edges have a new weight column representing the edge weight that contributed to the pageRank score. We'll then display the vertex ids and associated pageranks sorted in descending order:
End of explanation
"""
paths = g.bfs(fromExpr="id='BOS'",toExpr="id = 'SFO'",edgeFilter="carrierFsCode='UA'", maxPathLength = 2)\
.drop("from").drop("to")
paths.cache()
display(paths)
"""
Explanation: Search routes between 2 airports with specific criteria
In this section, we want to find all the routes between Boston and San Francisco operated by United Airlines with at most 2 hops. To accomplish this, we use the bfs (Breadth First Search) api that returns a DataFrame containing the shortest path between matching vertices. For clarity we will only keep the edges when displaying the results
End of explanation
"""
from pyspark.sql.functions import *
h = GraphFrame(g.vertices, g.edges.select("src","dst")\
.groupBy("src","dst").agg(count("src").alias("count")))
query = h.find("(a)-[]->(b);(b)-[]->(c);!(a)-[]->(c)").drop("b")
query.cache()
display(query)
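# Optional sketch: the motif result keeps the struct columns a and c, so it can be
# narrowed down, e.g. to destinations reachable from BOS with one stop but no direct flight
display(query.filter("a.id = 'BOS'"))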
"""
Explanation: Find all airports that do not have direct flights between each other
In this section, we'll use a very powerful GraphFrames search feature that uses a pattern called a motif to find vertices. The pattern we'll use is "(a)-[]->(b);(b)-[]->(c);!(a)-[]->(c)", which searches for all vertices a, b and c such that there is an edge from a to b and an edge from b to c, but no direct edge from a to c.
Also, because the search is computationally expensive, we reduce the number of edges by grouping the flights that have the same src and dst.
End of explanation
"""
from pyspark.sql.functions import *
components = g.stronglyConnectedComponents(maxIter=10).select("id","component")\
.groupBy("component").agg(count("id").alias("count")).orderBy(desc("count"))
display(components)
"""
Explanation: Compute the strongly connected components for this graph
Strongly Connected Components are components for which each vertex is reachable from every other vertex. To compute them, we'll use the stronglyConnectedComponents api that returns a DataFrame containing all the vertices with the addition of a component column that holds the id of the component to which the vertex belongs. We then group all the rows by component and count the member vertices. This gives us a good idea of the component distribution in the graph
End of explanation
"""
from pyspark.sql.functions import *
communities = g.labelPropagation(maxIter=5).select("id", "label")\
.groupBy("label").agg(count("id").alias("count")).orderBy(desc("count"))
display(communities)
"""
Explanation: Detect communities in the graph using Label Propagation algorithm
Label Propagation algorithm is a popular algorithm for finding communities within a graph. It has the advantage of being computationally inexpensive and thus works well with large graphs. To compute the communities, we'll use the labelPropagation api that returns a DataFrame containing all the vertices with the addition of a label column that holds the id of the community to which the vertex belongs. Similar to the strongly connected components, we'll then group all the rows by label and count the member vertices.
End of explanation
"""
%%scala
import org.graphframes.lib.AggregateMessages
import org.apache.spark.sql.functions.{avg,desc,floor}
// For each airport, average the delays of the departing flights
val msgToSrc = AggregateMessages.edge("deltaDeparture")
val __agg = g.aggregateMessages
.sendToSrc(msgToSrc) // send each flight delay to source
.agg(floor(avg(AggregateMessages.msg)).as("averageDelays")) // average up all delays
.orderBy(desc("averageDelays"))
.limit(10)
__agg.cache()
__agg.show()
display(__agg)
"""
Explanation: Use AggregateMessages to compute the average flight delays by originating airport
The AggregateMessages api is not currently available in Python, so we use the PixieDust Scala bridge to call into the Scala API
Note: Notice that PixieDust is automatically rebinding the python GraphFrame variable g into a scala GraphFrame with same name
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/sandbox-3/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
h2oai/h2o-3 | h2o-py/demos/pdp_multiclass.ipynb | apache-2.0 | # Import the Iris Dataset and Build a GLM
import h2o
h2o.init()
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
# import the iris dataset:
# this dataset is used to classify the type of iris plant
# the original dataset can be found at https://archive.ics.uci.edu/ml/datasets/Iris
# iris = h2o.import_file("http://h2o-public-test-data.s3.amazonaws.com/smalldata/iris/iris_wheader.csv")
iris = h2o.import_file("../../smalldata/iris/iris_wheader.csv")
# convert response column to a factor
iris['class'] = iris['class'].asfactor()
# set the predictor names and the response column name
predictors = iris.col_names[:-1]
response = 'class'
# split into train and validation
train, valid = iris.split_frame(ratios = [.8], seed=1234)
# build model
model = H2OGeneralizedLinearEstimator(family = 'multinomial')
model.train(x = predictors, y = response, training_frame = train, validation_frame = valid)
"""
Explanation: Multinomial Partial Dependency plot
Authors Lauren DiPerna, Veronika Maurerova
Build a GLM with the Iris Dataset
End of explanation
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# hide progress bar
h2o.no_progress()
# specify the model to use:
model = model
# specify the dataframe to use
data_pdp = iris
# specify the feature of interest, available features include:
# ['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
# col = "sepal_len"
# col = 'sepal_wid'
col = 'petal_len'
# col = 'petal_wid'
# create a copy of the column of interest, so that values are preserved after each run
col_data = data_pdp[col]
"""
Explanation: Specify Feature of Interest
In the cell below, if you decide to use a different dataset, model, or features please update the following variables:
* model
* data_pdp
* col
End of explanation
"""
# get a list of the classes in your target
classes = h2o.as_list(data_pdp['class'].unique(), use_pandas=False,header=False)
classes = [class_val[0] for class_val in classes]
# create bins for the pdp plot
bins = data_pdp[col].quantile(prob=list(np.linspace(0.05,1,19)))[:,1].unique()
bins = bins.as_data_frame().values.tolist()
bins = [bin_val[0] for bin_val in bins]
bins.sort()
# Loop over each class and print the pdp for the given feature
for class_val in classes:
mean_responses = []
for bin_val in bins:
# warning this line modifies the dataset.
# when you rerun on a new column make sure to return
# all columns to their original values.
data_pdp[col] = bin_val
response = model.predict(data_pdp)
mean_response = response[:,class_val].mean()[0]
mean_responses.append(mean_response)
mean_responses
pdp_manual = pd.DataFrame({col: bins, 'mean_response':mean_responses},columns=[col,'mean_response'])
plt.plot(pdp_manual[col], pdp_manual.mean_response);
plt.xlabel(col);
plt.ylabel('mean_response');
plt.title('PDP for Class {0}'.format(class_val));
plt.show()
# reset col value to original value for future runs of this cell
data_pdp[col] = col_data
"""
Explanation: Generate a PDP per class manually
End of explanation
"""
# h2o multinomial PDP class setosa
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-setosa"])
# h2o multinomial PDP class versicolor
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-versicolor"])
# h2o multinomial PDP class virginica
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-virginica"])
# h2o multinomial PDP all classes
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=False, plot=True, targets=["Iris-setosa", "Iris-versicolor", "Iris-virginica"])
# h2o multinomial PDP all classes with stddev
data = model.partial_plot(data=iris, cols=["petal_len"], plot_stddev=True, plot=True, targets=["Iris-setosa", "Iris-versicolor", "Iris-virginica"])
"""
Explanation: Use target parameter and plot H2O multinomial PDP
End of explanation
"""
|
microsoft/dowhy | docs/source/example_notebooks/lalonde_pandas_api.ipynb | mit | import os, sys
sys.path.append(os.path.abspath("../../../"))
from rpy2.robjects import r as R
%load_ext rpy2.ipython
#%R install.packages("Matching")
%R library(Matching)
%R data(lalonde)
%R -o lalonde
lalonde.to_csv("lalonde.csv",index=False)
# the data is already loaded in the previous cell. we include the import
# here so you don't have to keep re-downloading it.
import pandas as pd
lalonde=pd.read_csv("lalonde.csv")
"""
Explanation: Lalonde Pandas API Example
by Adam Kelleher
We'll run through a quick example using the high-level Python API for the DoSampler. The DoSampler is different from most classic causal effect estimators. Instead of estimating statistics under interventions, it aims to provide the generality of Pearlian causal inference. In that context, the joint distribution of the variables under an intervention is the quantity of interest. It's hard to represent a joint distribution nonparametrically, so instead we provide a sample from that distribution, which we call a "do" sample.
Here, when you specify an outcome, that is the variable you're sampling under an intervention. We still have to do the usual process of making sure the quantity (the conditional interventional distribution of the outcome) is identifiable. We leverage the familiar components of the rest of the package to do that "under the hood". You'll notice some similarity in the kwargs for the DoSampler.
Getting the Data
First, download the data from the LaLonde example.
End of explanation
"""
import dowhy.api
"""
Explanation: The causal Namespace
We've created a "namespace" for pandas.DataFrames containing causal inference methods. You can access it here with lalonde.causal, where lalonde is our pandas.DataFrame, and causal contains all our new methods! These methods are magically loaded into your existing (and future) dataframes when you import dowhy.api.
End of explanation
"""
do_df = lalonde.causal.do(x='treat',
outcome='re78',
common_causes=['nodegr', 'black', 'hisp', 'age', 'educ', 'married'],
variable_types={'age': 'c', 'educ':'c', 'black': 'd', 'hisp': 'd',
'married': 'd', 'nodegr': 'd','re78': 'c', 'treat': 'b'},
proceed_when_unidentifiable=True)
"""
Explanation: Now that we have the causal namespace, lets give it a try!
The do Operation
The key feature here is the do method, which produces a new dataframe replacing the treatment variable with values specified, and the outcome with a sample from the interventional distribution of the outcome. If you don't specify a value for the treatment, it leaves the treatment untouched:
End of explanation
"""
lalonde.head()
do_df.head()
"""
Explanation: Notice you get the usual output and prompts about identifiability. This is all dowhy under the hood!
We now have an interventional sample in do_df. It looks very similar to the original dataframe. Compare them:
End of explanation
"""
(lalonde[lalonde['treat'] == 1].mean() - lalonde[lalonde['treat'] == 0].mean())['re78']
"""
Explanation: Treatment Effect Estimation
We could get a naive estimate of the treatment effect from the raw data by doing
End of explanation
"""
(do_df[do_df['treat'] == 1].mean() - do_df[do_df['treat'] == 0].mean())['re78']
"""
Explanation: We can do the same with our new sample from the interventional distribution to get a causal effect estimate
End of explanation
"""
import numpy as np
1.96*np.sqrt((do_df[do_df['treat'] == 1].var()/len(do_df[do_df['treat'] == 1])) +
(do_df[do_df['treat'] == 0].var()/len(do_df[do_df['treat'] == 0])))['re78']
"""
Explanation: We could get some rough error bars on the outcome using the normal approximation for a 95% confidence interval, like
End of explanation
"""
do_df['re78'].describe()
lalonde['re78'].describe()
"""
Explanation: but note that these DO NOT contain propensity score estimation error. For that, a bootstrapping procedure might be more appropriate.
This is just one statistic we can compute from the interventional distribution of 're78'. We can get all of the interventional moments as well, including functions of 're78'. We can leverage the full power of pandas, like
End of explanation
"""
%matplotlib inline
import seaborn as sns
sns.barplot(data=lalonde, x='treat', y='re78')
sns.barplot(data=do_df, x='treat', y='re78')
"""
Explanation: and even plot aggregations, like
End of explanation
"""
do_df = lalonde.causal.do(x={'treat': 1},
outcome='re78',
common_causes=['nodegr', 'black', 'hisp', 'age', 'educ', 'married'],
variable_types={'age': 'c', 'educ':'c', 'black': 'd', 'hisp': 'd',
'married': 'd', 'nodegr': 'd','re78': 'c', 'treat': 'b'},
proceed_when_unidentifiable=True)
do_df.head()
"""
Explanation: Specifying Interventions
You can find the distribution of the outcome under an intervention to set the value of the treatment.
End of explanation
"""
help(lalonde.causal.do)
"""
Explanation: This new dataframe gives the distribution of 're78' when 'treat' is set to 1.
For much more detail on how the do method works, check the docstring:
End of explanation
"""
|
science-of-imagination/nengo-buffer | Project/trained_mental_translation_testing.ipynb | gpl-3.0 | import nengo
import numpy as np
import cPickle
import matplotlib.pyplot as plt
from matplotlib import pylab
import matplotlib.animation as animation
"""
Explanation: Testing the trained weight matrices (not in an ensemble)
End of explanation
"""
#Weight matrices generated by the neural network after training
#Maps the label vectors to the neuron activity of the ensemble
label_weights = cPickle.load(open("label_weights1000.p", "rb"))
#Maps the activity of the neurons to the visual representation of the image
activity_to_img_weights = cPickle.load(open("activity_to_img_weights_translate1000.p", "rb"))
#Maps the activity of the neurons of an image to the activity of the neurons of the image translated (up, down, left or right)
translate_up_weights = cPickle.load(open("translate_up_weights1000.p", "rb"))
translate_down_weights = cPickle.load(open("translate_down_weights1000.p", "rb"))
translate_left_weights = cPickle.load(open("translate_left_weights1000.p", "rb"))
translate_right_weights = cPickle.load(open("translate_right_weights1000.p", "rb"))
#Create the pointers for the numbers
temp = np.diag([1]*10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE= temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN =temp[7]
EIGHT= temp[8]
NINE = temp[9]
labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]
#Visualize the one hot representation
print(ZERO)
print(ONE)
"""
Explanation: Load the weight matrices from the training
End of explanation
"""
#Change this to imagine different digits
imagine = ZERO
#Can also imagine combinations of numbers (ZERO + ONE)
#Label to activity
test_activity = np.dot(imagine,label_weights)
#Image decoded
test_output_img = np.dot(test_activity, activity_to_img_weights)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
plt.show()
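#A quick sketch of the combination case mentioned in the comment above:
#sum two one-hot vectors before mapping through the same weights
combo_activity = np.dot(ZERO + ONE, label_weights)
combo_img = np.dot(combo_activity, activity_to_img_weights)
plt.imshow(combo_img.reshape(28,28),cmap='gray')
plt.show()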
"""
Explanation: Visualize the digit from one hot representation through the activity weight matrix to the image representation
- Image is average digit from mnist dataset
End of explanation
"""
#Change this to visualize different digits
imagine = ONE
#How long the animation should go for
frames=5
#Make a list of the activations of the translated images and add the first frame
rot_seq = []
rot_seq.append(np.dot(imagine,label_weights)) #Map the label vector to the activity vector
test_output_img = np.dot(rot_seq[0], activity_to_img_weights) #Map the activity to the visual representation
#add the rest of the frames, using the previous frame to calculate the current frame
for i in range(1,frames):
rot_seq.append(np.dot(rot_seq[i-1],translate_left_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
for i in range(1,frames*2):
rot_seq.append(np.dot(rot_seq[frames+i-2],translate_down_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
#Animation of translation
fig = plt.figure()
def updatefig(i):
image_vector = np.dot(rot_seq[i], activity_to_img_weights) #map the activity to the image representation
im = pylab.imshow(np.reshape(image_vector,(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)
return im,
ani = animation.FuncAnimation(fig, updatefig, frames=len(rot_seq), interval=100, blit=True)
plt.show()
"""
Explanation: Visualize the translation of the image using the weight matrices that map activity to activity
- does not use the weight matrix used on the recurrent connection
End of explanation
"""
|
Open-Power-System-Data/national_generation_capacity | comparison_plot.ipynb | mit | import os.path
import math
import functions.plots as fp # predefined functions in extra file
import bokeh.plotting as plo
from bokeh.io import show, output_notebook
from bokeh.layouts import row, column
from bokeh.models import Panel, Tabs
from bokeh.models.widgets import RangeSlider, MultiSelect, Select
output_notebook()
"""
Explanation: <table style="width:100%">
<tr>
<td style="background-color:#EBF5FB; border: 1px solid #CFCFCF">
<b>National generation capacity: Check notebook</b>
<ul>
<li><a href="main.ipynb">Main notebook</a></li>
<li><a href="processing.ipynb">Processing notebook</a></li>
<li>Check notebook (this)</li>
</ul>
<br>This Notebook is part of the <a href="http://data.open-power-system-data.org/national_generation_capacity">National Generation Capacity Datapackage</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>.
</td>
</tr>
</table>
Table of Contents
1. Introductory notes
2. Script setup
3. Import of processed data
4. Visualisation of results for different energy source levels
4.1 Energy source level 1
4.1.1 Table
4.1.2 Bokeh chart
4.2 Energy source level 2
4.2.1 Table
4.2.2 Bokeh chart
4.3 Energy source level 3
4.3.1 Table
4.3.2 Bokeh chart
4.4 Technology level
4.4.1 Table
4.4.2 Bokeh chart
5. Comparison of total capacity for energy source levels
5.1 Calculation of total capacity for energy source levels
5.2 Identifcation of capacity differences for energy source levels
1. Introductory notes
The notebook extends the processing notebook to make visualisations.
2. Script setup
End of explanation
"""
data_file = 'national_generation_capacity_stacked.csv'
filepath = os.path.join('output', data_file)
data = fp.load_opsd_data(filepath)
data.head()
"""
Explanation: 3. Data import
End of explanation
"""
width = 1000
height = 500
def comparison_plot(doc):
# init of 5 plots for each energy level
sources = []
plots = []
for level in fp.energy_levels:
# init plots with the predefined function
s, p = fp.init_plot(data, level, size=(width, height))
sources.append(s)
plots.append(p)
# associate each plot with a tab of the interactive plot
panels= []
for p, level in zip(plots, fp.energy_levels):
panels.append(Panel(child=p, title=level))
tabs = Tabs(tabs=panels, tabs_location='below', active=2)
# Range slider for available years
oldest_year = min(data["year"])
newest_year = max(data["year"])
y_slider = RangeSlider(title="Years",
value=(2015,2016),
start=oldest_year,
end=newest_year,
step=1)
# Select field for sources
m_select = MultiSelect(title="Available Sources:",
value=fp.global_sources,
options=[(s,s) for s in fp.global_sources])
# Select button for countries
countries = list(data["country"].unique())
c_select = Select(title="Country", options=countries, value='FR')
# catch all widgets
wid = [c_select, y_slider, m_select]
rows = row(wid)
# update function for `on_change` trigger
def update(attrname, old, new):
y = y_slider.value
y_range = [x for x in range(y[0],y[1]+1)]
s_selected = m_select.value
co = c_select.value
# run update for each plot
for p, s, l in zip(plots, sources, fp.energy_levels):
df = fp.filter_data_set(data, co, y_range, s_selected, l)
source_data, x_axis = fp.prepare_data(df)
s.data = source_data
p.x_range.factors = x_axis
# associate `update` function with each widget to apply updates for each change
for w in wid:
w.on_change('value', update)
layout = column(rows, tabs)
doc.add_root(layout)
"""
Explanation: 4. Create interactive plot
Select a width and height that fit your Jupyter notebook settings.
End of explanation
"""
show(comparison_plot)
"""
Explanation: After the Bokeh plot is set up, a Bokeh server is started to make the plot interactive.
Possible options:
- Select a country from the dropdown menu in the top left
- Select a range of years from the range slider
- Multiselect the sources you want to compare in the top right
- Choose which "energy level" you want to investigate with the tabs below the plot
End of explanation
"""
|
befelix/SafeOpt | examples/1d_multiple_constraints_example.ipynb | mit | # Measurement noise
noise_var = 0.05 ** 2
noise_var2 = 1e-5
# Bounds on the inputs variable
bounds = [(-10., 10.)]
# Define Kernel
kernel = GPy.kern.RBF(input_dim=len(bounds), variance=2., lengthscale=1.0, ARD=True)
kernel2 = kernel.copy()
# set of parameters
parameter_set = safeopt.linearly_spaced_combinations(bounds, 1000)
# Initial safe point
x0 = np.zeros((1, len(bounds)))
# Generate function with safe initial point at x=0
def sample_safe_fun():
fun = safeopt.sample_gp_function(kernel, bounds, noise_var, 100)
while True:
fun2 = safeopt.sample_gp_function(kernel2, bounds, noise_var2, 100)
if fun2(0, noise=False) > 1:
break
def combined_fun(x, noise=True):
return np.hstack([fun(x, noise), fun2(x, noise)])
return combined_fun
"""
Explanation: Define a kernel and function
Here we define a kernel. The function is drawn at random from the GP and is corrupted by Gaussian noise.
End of explanation
"""
# Define the objective function
fun = sample_safe_fun()
# The statistical model of our objective function and safety constraint
y0 = fun(x0)
gp = GPy.models.GPRegression(x0, y0[:, 0, None], kernel, noise_var=noise_var)
gp2 = GPy.models.GPRegression(x0, y0[:, 1, None], kernel2, noise_var=noise_var2)
# The optimization routine
# opt = safeopt.SafeOptSwarm([gp, gp2], [-np.inf, 0.], bounds=bounds, threshold=0.2)
opt = safeopt.SafeOpt([gp, gp2], parameter_set, [-np.inf, 0.], lipschitz=None, threshold=0.1)
def plot():
# Plot the GP
opt.plot(100)
# Plot the true function
y = fun(parameter_set, noise=False)
for manager, true_y in zip(mpl._pylab_helpers.Gcf.get_all_fig_managers(), y.T):
figure = manager.canvas.figure
figure.gca().plot(parameter_set, true_y, color='C2', alpha=0.3)
plot()
# Obtain next query point
x_next = opt.optimize()
# Get a measurement from the real system
y_meas = fun(x_next)
# Add this to the GP model
opt.add_new_data_point(x_next, y_meas)
plot()
"""
Explanation: Interactive run of the algorithm
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/sdk/sdk_automl_tabular_regression_online_bq.ipynb | apache-2.0 | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex SDK: AutoML training tabular regression model for online prediction using BigQuery
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_regression_online_bq.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_regression_online_bq.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_regression_online_bq.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create tabular regression models and do online prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the GSOD dataset from BigQuery public datasets. In this version, you use only the fields year, month and day to predict the value of mean daily temperature (mean_temp).
Objective
In this tutorial, you create an AutoML tabular regression model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular regression model.
Location of BigQuery training data.
Now set the variable IMPORT_FILE to the location of the data table in BigQuery.
End of explanation
"""
dataset = aip.TabularDataset.create(
display_name="NOAA historical weather data" + "_" + TIMESTAMP,
bq_source=[IMPORT_FILE],
)
label_column = "mean_temp"
print(dataset.resource_name)
TRANSFORMATIONS = [
{"auto": {"column_name": "year"}},
{"auto": {"column_name": "month"}},
{"auto": {"column_name": "day"}},
]
label_column = "mean_temp"
"""
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the TabularDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
bq_source: Alternatively, import data items from a BigQuery table into the Dataset resource.
This operation may take several minutes.
End of explanation
"""
dag = aip.AutoMLTabularTrainingJob(
display_name="gsod_" + TIMESTAMP,
optimization_prediction_type="regression",
optimization_objective="minimize-rmse",
column_transformations=TRANSFORMATIONS,
)
print(dag)
"""
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTabularTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
optimization_prediction_type: The type task to train the model for.
classification: A tabular classification model.
regression: A tabular regression model.
column_transformations: (Optional): Transformations to apply to the input columns
optimization_objective: The optimization objective to minimize or maximize.
binary classification:
minimize-log-loss
maximize-au-roc
maximize-au-prc
maximize-precision-at-recall
maximize-recall-at-precision
multi-class classification:
minimize-log-loss
regression:
minimize-rmse
minimize-mae
minimize-rmsle
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
End of explanation
"""
model = dag.run(
dataset=dataset,
model_display_name="gsod_" + TIMESTAMP,
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column=label_column,
)
"""
Explanation: Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
target_column: The name of the column to train as the label.
budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).
disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
End of explanation
"""
# Get model resource ID
models = aip.Model.list(filter="display_name=gsod_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
"""
endpoint = model.deploy(machine_type="n1-standard-4")
"""
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
machine_type: The type of compute machine.
End of explanation
"""
INSTANCE = {"year": "1932", "month": "11", "day": "6"}
"""
Explanation: Send an online prediction request
Send an online prediction to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
"""
instances_list = [INSTANCE]
prediction = endpoint.predict(instances_list)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
value: The predicted value for each prediction.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""
endpoint.undeploy_all()
"""
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
"""
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
joelagnel/lisa | ipynb/tutorial/00_LisaInANutshell.ipynb | apache-2.0 | import logging
from conf import LisaLogging
LisaLogging.setup()
# Execute this cell to enable verbose SSH commands
logging.getLogger('ssh').setLevel(logging.DEBUG)
# Other python modules required by this notebook
import json
import os
"""
Explanation: Linux Interactive System Analysis DEMO
Get LISA and start the Notebook Server
Official repository on GitHub - ARM Software:<br>
https://github.com/ARM-software/lisa
Installation dependencies are listed in the main page of the repository:<br>
https://github.com/ARM-software/lisa#required-dependencies
Once cloned, source init_env to initialize the LISA Shell, which provides a convenient set of shell commands for easy access to many LISA related functions.
```shell
$ source init_env
```
To start the IPython Notebook Server required to use this Notebook, on a LISAShell run:
```shell
[LISAShell lisa] > lisa-ipython start
Starting IPython Notebooks...
Starting IPython Notebook server...
IP Address : http://127.0.0.1:8888/
Folder : /home/derkling/Code/lisa/ipynb
Logfile : /home/derkling/Code/lisa/ipynb/server.log
PYTHONPATH :
/home/derkling/Code/lisa/libs/bart
/home/derkling/Code/lisa/libs/trappy
/home/derkling/Code/lisa/libs/devlib
/home/derkling/Code/lisa/libs/wlgen
/home/derkling/Code/lisa/libs/utils
Notebook server task: [1] 24745
```
The main folder served by the server is:<br>
http://127.0.0.1:8888/
While the tutorial notebooks are accessible starting from this link:<br>
http://127.0.0.1:8888/notebooks/tutorial/00_LisaInANutshell.ipynb
What is an IPython Notebook?
Let's go through some examples!
Logging configuration and support modules import
End of explanation
"""
# Setup a target configuration
conf = {
# Target is localhost
"platform" : 'linux',
"board" : "juno",
# Login credentials
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
"0": 355, "1": 138, "2": 138, "3": 355, "4": 354, "5": 354
},
# FTrace events end buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
],
"buffsize" : 10240
},
# Where results are collected
"results_dir" : "LisaInANutshell",
# Devlib module required (or not required)
'modules' : [ "cpufreq", "cgroups" ],
#"exclude_modules" : [ "hwmon" ],
}
# Support to access the remote target
from env import TestEnv
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(conf)
target = te.target
print "DONE"
"""
Explanation: <br><br><br><br>
Advanced usage: get more confident with IPython notebooks and discover some hidden features<br>
notebooks/tutorial/01_IPythonNotebooksUsage.ipynb
<br><br><br><br>
Remote target connection and control
End of explanation
"""
# Enable Energy-Aware scheduler
target.execute("echo ENERGY_AWARE > /sys/kernel/debug/sched_features");
# Check which sched_feature are enabled
sched_features = target.read_value("/sys/kernel/debug/sched_features");
print "sched_features:"
print sched_features
# It's also possible to run a custom script
# my_script = target.get_installed()
# target.execute(my_script)
"""
Explanation: Commands execution on remote target
End of explanation
"""
target.cpufreq.set_all_governors('sched');
# Check which governor is enabled on each CPU
enabled_governors = target.cpufreq.get_all_governors()
print enabled_governors
"""
Explanation: Example of frameworks configuration on remote target
Configure CPUFreq governor to be "sched-freq"
End of explanation
"""
cpuset = target.cgroups.controller('cpuset')
# Configure a big partition
cpuset_bigs = cpuset.cgroup('/big')
cpuset_bigs.set(cpus=te.target.bl.bigs, mems=0)
# Configure a LITTLE partition
cpuset_littles = cpuset.cgroup('/LITTLE')
cpuset_littles.set(cpus=te.target.bl.littles, mems=0)
# Dump the configuration of each controller
cgroups = cpuset.list_all()
for cgname in cgroups:
cgroup = cpuset.cgroup(cgname)
attrs = cgroup.get()
cpus = attrs['cpus']
print '{}:{:<15} cpus: {}'.format(cpuset.kind, cgroup.name, cpus)
"""
Explanation: Create a big/LITTLE partition using CGroups::CPUSet
End of explanation
"""
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Periodic, Ramp
# Light workload
light = Periodic(
duty_cycle_pct = 10,
duration_s = 3,
period_ms = 32,
)
# Ramp workload
ramp = Ramp(
start_pct=10,
end_pct=60,
delta_pct=20,
time_s=0.5,
period_ms=16
)
# Heavy workload
heavy = Periodic(
duty_cycle_pct=60,
duration_s=3,
period_ms=16
)
# Composed workload
lrh_task = light + ramp + heavy
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'test', calibration=te.calibration())
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind = 'profile',
# 2. define the "profile" of each task
params = {
# 3. Composed task
'task_lrh': lrh_task.get(),
},
#loadref='big',
loadref='LITTLE',
run_dir=target.working_directory
);
# Inspect the JSON file used to run the application
with open('./test_00.json', 'r') as fh:
rtapp_json = json.load(fh)
logging.info('Generated RTApp JSON file:')
print json.dumps(rtapp_json, indent=4, sort_keys=True)
"""
Explanation: <br><br><br><br>
Advanced usage: exploring more APIs exposed by TestEnv and Devlib<br>
notebooks/tutorial/02_TestEnvUsage.ipynb
<br><br><br><br>
Using syntethic workloads
Generate an RTApp configuration
End of explanation
"""
def execute(te, wload, res_dir):
logging.info('# Setup FTrace')
te.ftrace.start()
logging.info('## Start energy sampling')
te.emeter.reset()
logging.info('### Start RTApp execution')
wload.run(out_dir=res_dir)
logging.info('## Read energy consumption: %s/energy.json', res_dir)
nrg_report = te.emeter.report(out_dir=res_dir)
logging.info('# Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(res_dir, 'trace.dat')
logging.info('# Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('# Save platform description: %s/platform.json', res_dir)
plt, plt_file = te.platform_dump(res_dir)
logging.info('# Report collected data:')
logging.info(' %s', res_dir)
!tree {res_dir}
return nrg_report, plt, plt_file, trace_file
nrg_report, plt, plt_file, trace_file = execute(te, rtapp, te.res_dir)
"""
Explanation: <br><br><br><br>
Advanced usage: using WlGen to create more complex RTApp configurations or run other banchmarks (e.g. hackbench)<br>
notebooks/tutorial/03_WlGenUsage.ipynb
<br><br><br><br>
Execution and Energy Sampling
End of explanation
"""
import pandas as pd
df = pd.DataFrame(list(nrg_report.channels.iteritems()),
columns=['Cluster', 'Energy'])
df = df.set_index('Cluster')
df
"""
Explanation: Example of energy collected data
End of explanation
"""
# Show the collected platform description
with open(os.path.join(te.res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print json.dumps(platform, indent=4)
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
"""
Explanation: Example of platform description
End of explanation
"""
# Let's look at the trace using kernelshark...
trace_file = te.res_dir + '/trace.dat'
!kernelshark {trace_file} 2>/dev/null
"""
Explanation: <br><br><br><br>
Advanced Workload Execution: using the Executor module to automate data collection for multiple tests<br>
notebooks/tutorial/04_ExecutorUsage.ipynb
<br><br><br><br>
Trace Visualization (the kernelshark way)
Using kernelshark
End of explanation
"""
# Support for FTrace events parsing and visualization
import trappy
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(trace_file)
"""
Explanation: Using the TRAPpy Trace Plotter
End of explanation
"""
# Load the LISA::Trace parsing module
from trace import Trace
# Define which event we are interested into
trace = Trace(te.platform, te.res_dir, [
"sched_switch",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
])
# Let's have a look at the set of events collected from the trace
ftrace = trace.ftrace
logging.info("List of events identified in the trace:")
for event in ftrace.class_definitions.keys():
logging.info(" %s", event)
# Trace events are converted into tables, let's have a look at one
# of such tables
df = trace.data_frame.trace_event('sched_load_avg_task')
df.head()
# Simple selection of events based on conditional values
#df[df.comm == 'task_lrh'].head()
# Simple selection of specific signals
#df[df.comm == 'task_lrh'][['util_avg']].head()
# Simple statistics reporting
#df[df.comm == 'task_lrh'][['util_avg']].describe()
"""
Explanation: Example of Trace Analysis
Generate DataFrames from Trace Events
End of explanation
"""
# Signals can be easily plot using the ILinePlotter
trappy.ILinePlot(
# FTrace object
ftrace,
# Signals to be plotted
signals=[
'sched_load_avg_cpu:util_avg',
'sched_load_avg_task:util_avg'
],
# # Generate one plot for each value of the specified column
# pivot='cpu',
# # Generate only plots which satisfy these filters
# filters={
# 'comm': ['task_lrh'],
# 'cpu' : [0,5]
# },
# Formatting style
per_line=2,
drawstyle='steps-post',
marker = '+'
).view()
"""
Explanation: <br><br><br><br>
Advanced DataFrame usage: filtering by columns/rows, merging tables, plotting data<br>
notebooks/tutorial/05_TrappyUsage.ipynb
<br><br><br><br>
Easily plot signals from DataFrames
End of explanation
"""
from bart.sched.SchedMultiAssert import SchedAssert
# Create an object to get/assert scheduling behaviors
sa = SchedAssert(ftrace, te.topology, execname='task_lrh')
"""
Explanation: Example of Behavioral Analysis
End of explanation
"""
# Check the residency of a task on the LITTLE cluster
print "Task residency [%] on LITTLE cluster:",\
sa.getResidency(
"cluster",
te.target.bl.littles,
percent=True
)
# Check on which CPU the task start its execution
print "Task initial CPU:",\
sa.getFirstCpu()
"""
Explanation: Get tasks behaviors
End of explanation
"""
import operator
# Define the time window where we want focus our assertions
start_s = sa.getStartTime()
little_residency_window = (start_s, start_s + 10)
# Define the expected task residency
EXPECTED_RESIDENCY_PCT=99
result = sa.assertResidency(
"cluster",
te.target.bl.littles,
EXPECTED_RESIDENCY_PCT,
operator.ge,
window=little_residency_window,
percent=True
)
print "Task running {} [%] of its time on LITTLE? {}"\
.format(EXPECTED_RESIDENCY_PCT, result)
result = sa.assertFirstCpu(te.target.bl.bigs)
print "Task starting on a big CPU? {}".format(result)
"""
Explanation: Check for expected behaviors
End of explanation
"""
# Focus on sched_switch events
df = ftrace.sched_switch.data_frame
# # Select only interesting columns
# df = df.ix[:,'next_comm':'prev_state']
# # Group sched_switch event by task switching into the CPU
# df = df.groupby('next_pid').describe(include=['object'])
# df = df.unstack()
# # Sort sched_switch events by number of time a task switch into the CPU
# df = df['next_comm'].sort_values(by=['count'], ascending=False)
df.head()
# # Get topmost task name and PID
# most_switching_pid = df.index[1]
# most_switching_task = df.values[1][2]
# task_name = "{}:{}".format(most_switching_pid, most_switching_task)
# # Print result
# logging.info("The most swithing task is: [%s]", task_name)
"""
Explanation: Examples of Data analysis
Which task is the most active switcher?
End of explanation
"""
# Focus on cpu_frequency events for CPU0
df = ftrace.cpu_frequency.data_frame
df = df[df.cpu == 0]
# # Compute the residency on each OPP before switching to the next one
# df.loc[:,'start'] = df.index
# df.loc[:,'delta'] = (df['start'] - df['start'].shift()).fillna(0).shift(-1)
# # Group by frequency and sum-up the deltas
# freq_residencies = df.groupby('frequency')['delta'].sum()
# logging.info("Residency time per OPP:")
# df = pd.DataFrame(freq_residencies)
df.head()
# # Compute the relative residency time
# tot = sum(freq_residencies)
# #df = df.apply(lambda delta : 100*delta/tot)
# for f in freq_residencies.index:
# logging.info("Freq %10dHz : %5.1f%%", f, 100*freq_residencies[f]/tot)
# Plot residency time
import matplotlib.pyplot as plt
# Enable generation of Notebook embedded plots
%matplotlib inline
fig, axes = plt.subplots(1, 1, figsize=(16, 5));
df.plot(kind='bar', ax=axes);
"""
Explanation: What are the relative residencies on different OPPs?
End of explanation
"""
from perf_analysis import PerfAnalysis
# Full analysis function
def analysis(t_min=None, t_max=None):
test_dir = te.res_dir
platform_json = '{}/platform.json'.format(test_dir)
trace_file = '{}/trace.dat'.format(test_dir)
# Load platform description data
with open(platform_json, 'r') as fh:
platform = json.load(fh)
# Load RTApp Performance data
pa = PerfAnalysis(test_dir)
logging.info("Loaded performance data for tasks: %s", pa.tasks())
# Load Trace data
#events = my_tests_conf['ftrace']['events']
events = [
"sched_switch",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"cpu_frequency",
"cpu_capacity",
]
trace = Trace(platform, test_dir, events, tasks=pa.tasks())
# Define time ranges for all the temporal plots
trace.setXTimeRange(t_min, t_max)
# Tasks performances plots
for task in pa.tasks():
pa.plotPerf(task)
# Tasks plots
trace.analysis.tasks.plotTasks()
# Cluster and CPUs plots
trace.analysis.frequency.plotClusterFrequencies()
analysis()
"""
Explanation: Example of Custom Plotting
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | community-content/pytorch_image_classification_single_gpu_with_vertex_sdk_and_torchserve/vertex_prediction_with_custom_torchserve_container.ipynb | apache-2.0 | PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"
content_name = "pt-img-cls-gpu-cust-cont-torchserve"
"""
Explanation: Vertex Prediction with Custom TorchServe Container
<table align="left">
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/pytorch_image_classification_single_gpu_with_vertex_sdk_and_torchserve/vertex_prediction_with_custom_torchserve_container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Setup
End of explanation
"""
gcs_output_uri_prefix = f"{BUCKET_NAME}/{content_name}"
! gsutil ls $gcs_output_uri_prefix
"""
Explanation: Training Artifact
End of explanation
"""
! curl -O https://raw.githubusercontent.com/alvarobartt/pytorch-model-serving/master/images/sample.jpg
! ls sample.jpg
%run convert_b64.py
! ls sample_b64.json
"""
Explanation: Vertex Prediction using Custom TorchServe Container
Test Sample Image
End of explanation
"""
! gsutil cp -r $gcs_output_uri_prefix/model ./model_server/
! ls ./model_server/model/
! cd model_server && torch-model-archiver \
--model-name antandbee \
--version 1.0 \
--serialized-file ./model/antandbee.pth \
--model-file ./model.py \
--handler ./handler.py \
--extra-files ./index_to_name.json \
-f
! ls model_server/antandbee.mar
"""
Explanation: Model Archive for TorchServe
End of explanation
"""
hostname = "gcr.io"
tag = "latest"
model_name = "antandbee"
image_name_serve = content_name + "-" + model_name
custom_container_image_uri_serve = f"{hostname}/{PROJECT_ID}/{image_name_serve}:{tag}"
! cd model_server && docker build -t $custom_container_image_uri_serve -f Dockerfile .
! rm -rf ./model_server/model/
! docker run \
--rm -it \
-d \
--name ts_antandbee \
-p 8080:8080 \
-p 8081:8081 \
$custom_container_image_uri_serve
! curl http://localhost:8080/ping
! curl http://127.0.0.1:8081/models/antandbee
! curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @sample_b64.json \
localhost:8080/predictions/antandbee
! docker stop ts_antandbee
! docker push $custom_container_image_uri_serve
! gcloud container images list --repository $hostname/$PROJECT_ID
"""
Explanation: Option: TorchServe Local Run
```
cd model_server
torchserve --model-store ./ \
--ts-config ./config.properties \
--models antandbee=antandbee.mar
curl http://localhost:8080/ping
curl http://127.0.0.1:8081/models/antandbee
curl -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @sample_b64.json \
http://localhost:8080/predictions/antandbee
torchserve --stop
! rm model_server/antandbee.mar
! rm -rf model_server/logs
```
Custom TorchServe Container
End of explanation
"""
! pip install -r requirements.txt
from google.cloud import aiplatform
aiplatform.init(
project=PROJECT_ID,
staging_bucket=BUCKET_NAME,
location=REGION,
)
"""
Explanation: Initialize Vertex SDK
End of explanation
"""
model_display_name = image_name_serve
model = aiplatform.Model.upload(
display_name=model_display_name,
serving_container_image_uri=custom_container_image_uri_serve,
serving_container_ports=[8080],
serving_container_predict_route=f"/predictions/{model_name}",
serving_container_health_route="/ping",
)
"""
Explanation: Create a Vertex Model with Custom TorchServe Container
End of explanation
"""
endpoint = model.deploy(
machine_type="n1-standard-4",
)
endpoint.resource_name
import base64
def convert_b64(input_file_name):
"""Open image and convert it to Base64"""
with open(input_file_name, "rb") as input_file:
jpeg_bytes = base64.b64encode(input_file.read()).decode("utf-8")
return jpeg_bytes
image_file_name = "./sample.jpg"
instance = {"data": {"b64": convert_b64(image_file_name)}}
prediction = endpoint.predict(instances=[instance])
print(prediction)
"""
Explanation: Create a Vertex Endpoint for Online Prediction
End of explanation
"""
! gsutil rm -rf $gcs_output_uri_prefix
! rm sample.jpg
! rm sample_b64.json
! rm model_server/antandbee.mar
"""
Explanation: Clean Up
End of explanation
"""
|
ChadFulton/statsmodels | examples/notebooks/statespace_seasonal.ipynb | bsd-3-clause | %matplotlib notebook
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
"""
Explanation: Seasonality in time series data
Consider the problem of modeling time series data with multiple seasonal components with different periodicities. Let us take the time series $y_t$ and decompose it explicitly to have a level component and two seasonal components.
$$
y_t = \mu_t + \gamma^{(1)}_t + \gamma^{(2)}_t
$$
where $\mu_t$ represents the trend or level, $\gamma^{(1)}_t$ represents a seasonal component with a relatively short period, and $\gamma^{(2)}_t$ represents another seasonal component of longer period. We will have a fixed intercept term for our level and consider both $\gamma^{(1)}_t$ and $\gamma^{(2)}_t$ to be stochastic so that the seasonal patterns can vary over time.
In this notebook, we will generate synthetic data conforming to this model and showcase modeling of the seasonal terms in a few different ways under the unobserved components modeling framework.
End of explanation
"""
# First we'll simulate the synthetic data
def simulate_seasonal_term(periodicity, total_cycles, noise_std=1.,
harmonics=None):
duration = periodicity * total_cycles
assert duration == int(duration)
duration = int(duration)
harmonics = harmonics if harmonics else int(np.floor(periodicity / 2))
lambda_p = 2 * np.pi / float(periodicity)
gamma_jt = noise_std * np.random.randn((harmonics))
gamma_star_jt = noise_std * np.random.randn((harmonics))
total_timesteps = 100 * duration # Pad for burn in
series = np.zeros(total_timesteps)
for t in range(total_timesteps):
gamma_jtp1 = np.zeros_like(gamma_jt)
gamma_star_jtp1 = np.zeros_like(gamma_star_jt)
for j in range(1, harmonics + 1):
cos_j = np.cos(lambda_p * j)
sin_j = np.sin(lambda_p * j)
gamma_jtp1[j - 1] = (gamma_jt[j - 1] * cos_j
+ gamma_star_jt[j - 1] * sin_j
+ noise_std * np.random.randn())
gamma_star_jtp1[j - 1] = (- gamma_jt[j - 1] * sin_j
+ gamma_star_jt[j - 1] * cos_j
+ noise_std * np.random.randn())
series[t] = np.sum(gamma_jtp1)
gamma_jt = gamma_jtp1
gamma_star_jt = gamma_star_jtp1
wanted_series = series[-duration:] # Discard burn in
return wanted_series
duration = 100 * 3
periodicities = [10, 100]
num_harmonics = [3, 2]
std = np.array([2, 3])
np.random.seed(8678309)
terms = []
for ix, _ in enumerate(periodicities):
s = simulate_seasonal_term(
periodicities[ix],
duration / periodicities[ix],
harmonics=num_harmonics[ix],
noise_std=std[ix])
terms.append(s)
terms.append(np.ones_like(terms[0]) * 10.)
series = pd.Series(np.sum(terms, axis=0))
df = pd.DataFrame(data={'total': series,
'10(3)': terms[0],
'100(2)': terms[1],
'level':terms[2]})
h1, = plt.plot(df['total'])
h2, = plt.plot(df['10(3)'])
h3, = plt.plot(df['100(2)'])
h4, = plt.plot(df['level'])
plt.legend(['total','10(3)','100(2)', 'level'])
plt.show()
"""
Explanation: Synthetic data creation
We will create data with multiple seasonal patterns by following equations (3.7) and (3.8) in Durbin and Koopman (2012). We will simulate 300 periods and two seasonal terms parameterized in the frequency domain having periods 10 and 100, respectively, and 3 and 2 harmonics, respectively. Further, the variances of their stochastic parts are 4 and 9, respectively.
End of explanation
"""
model = sm.tsa.UnobservedComponents(series.values,
level='fixed intercept',
freq_seasonal=[{'period': 10,
'harmonics': 3},
{'period': 100,
'harmonics': 2}])
res_f = model.fit(disp=False)
print(res_f.summary())
# The first state variable holds our estimate of the intercept
print("fixed intercept estimated as {0:.3f}".format(res_f.smoother_results.smoothed_state[0,-1:][0]))
res_f.plot_components()
plt.show()
model.ssm.transition[:, :, 0]
"""
Explanation: Unobserved components (frequency domain modeling)
The next method is an unobserved components model, where the trend is modeled as a fixed intercept and the seasonal components are modeled using trigonometric functions with primary periodicities of 10 and 100, respectively, and 3 and 2 harmonics, respectively. Note that this is the correct, generating model. The process for the time series can be written as:
$$
\begin{align}
y_t & = \mu_t + \gamma^{(1)}_t + \gamma^{(2)}_t + \epsilon_t\\
\mu_{t+1} & = \mu_t \\
\gamma^{(1)}_{t} &= \sum_{j=1}^3 \gamma^{(1)}_{j, t} \\
\gamma^{(2)}_{t} &= \sum_{j=1}^2 \gamma^{(2)}_{j, t}\\
\gamma^{(1)}_{j, t+1} &= \gamma^{(1)}_{j, t}\cos(\lambda_j) + \gamma^{*, (1)}_{j, t}\sin(\lambda_j) + \omega^{(1)}_{j,t}, ~j = 1, 2, 3\\
\gamma^{*, (1)}_{j, t+1} &= -\gamma^{(1)}_{j, t}\sin(\lambda_j) + \gamma^{*, (1)}_{j, t}\cos(\lambda_j) + \omega^{*, (1)}_{j, t}, ~j = 1, 2, 3\\
\gamma^{(2)}_{j, t+1} &= \gamma^{(2)}_{j, t}\cos(\lambda_j) + \gamma^{*, (2)}_{j, t}\sin(\lambda_j) + \omega^{(2)}_{j,t}, ~j = 1, 2\\
\gamma^{*, (2)}_{j, t+1} &= -\gamma^{(2)}_{j, t}\sin(\lambda_j) + \gamma^{*, (2)}_{j, t}\cos(\lambda_j) + \omega^{*, (2)}_{j, t}, ~j = 1, 2\\
\end{align}
$$
where $\epsilon_t$ is white noise, the $\omega^{(1)}_{j,t}$ are i.i.d. $N(0, \sigma^2_1)$, and the $\omega^{(2)}_{j,t}$ are i.i.d. $N(0, \sigma^2_2)$, where $\sigma_1 = 2$ and $\sigma_2 = 3$.
End of explanation
"""
model = sm.tsa.UnobservedComponents(series,
level='fixed intercept',
seasonal=10,
freq_seasonal=[{'period': 100,
'harmonics': 2}])
res_tf = model.fit()
print(res_tf.summary())
# The first state variable holds our estimate of the intercept
print("fixed intercept estimated as {0:.3f}".format(res_tf.smoother_results.smoothed_state[0,-1:][0]))
res_tf.plot_components()
plt.show()
"""
Explanation: Observe that the fitted variances are pretty close to the true variances of 4 and 9. Further, the individual seasonal components look pretty close to the true seasonal components. The smoothed level term is kind of close to the true level of 10. Finally, our diagnostics look solid; the test statistics are small enough to fail to reject our three tests.
Unobserved components (mixed time and frequency domain modeling)
The second method is an unobserved components model, where the trend is modeled as a fixed intercept and the seasonal components are modeled using 10 constants summing to 0 and trigonometric functions with a primary periodicity of 100 with 2 harmonics total. Note that this isn't the generating model, as it presupposes that there are more state errors for the shorter seasonal component than in reality. The process for the time series can be written as:
$$
\begin{align}
y_t & = \mu_t + \gamma^{(1)}_t + \gamma^{(2)}_t + \epsilon_t\\
\mu_{t+1} & = \mu_t \\
\gamma^{(1)}_{t + 1} &= - \sum_{j=1}^9 \gamma^{(1)}_{t + 1 - j} + \omega^{(1)}_t\\
\gamma^{(2)}_{j, t+1} &= \gamma^{(2)}_{j, t}\cos(\lambda_j) + \gamma^{*, (2)}_{j, t}\sin(\lambda_j) + \omega^{(2)}_{j,t}, ~j = 1, 2\\
\gamma^{*, (2)}_{j, t+1} &= -\gamma^{(2)}_{j, t}\sin(\lambda_j) + \gamma^{*, (2)}_{j, t}\cos(\lambda_j) + \omega^{*, (2)}_{j, t}, ~j = 1, 2\\
\end{align}
$$
where $\epsilon_t$ is white noise, the $\omega^{(1)}_{t}$ are i.i.d. $N(0, \sigma^2_1)$, and the $\omega^{(2)}_{j,t}$ are i.i.d. $N(0, \sigma^2_2)$.
End of explanation
"""
model = sm.tsa.UnobservedComponents(series,
level='fixed intercept',
freq_seasonal=[{'period': 100}])
res_lf = model.fit()
print(res_lf.summary())
# The first state variable holds our estimate of the intercept
print("fixed intercept estimated as {0:.3f}".format(res_lf.smoother_results.smoothed_state[0,-1:][0]))
res_lf.plot_components()
plt.show()
"""
Explanation: The plotted components look good. However, the estimated variance of the second seasonal term is inflated from reality. Additionally, we reject the Ljung-Box statistic, indicating we may have remaining autocorrelation after accounting for our components.
Unobserved components (lazy frequency domain modeling)
The third method is an unobserved components model with a fixed intercept and one seasonal component, which is modeled using trigonometric functions with primary periodicity 100 and 50 harmonics. Note that this isn't the generating model, as it presupposes that there are more harmonics than in reality. Because the variances are tied together, we are not able to drive the estimated covariance of the non-existent harmonics to 0. What is lazy about this model specification is that we have not bothered to specify the two different seasonal components and instead chosen to model them using a single component with enough harmonics to cover both. We will not be able to capture any differences in variances between the two true components. The process for the time series can be written as:
$$
\begin{align}
y_t & = \mu_t + \gamma^{(1)}_t + \epsilon_t\\
\mu_{t+1} &= \mu_t\\
\gamma^{(1)}_{t} &= \sum_{j=1}^{50}\gamma^{(1)}_{j, t}\\
\gamma^{(1)}_{j, t+1} &= \gamma^{(1)}_{j, t}\cos(\lambda_j) + \gamma^{*, (1)}_{j, t}\sin(\lambda_j) + \omega^{(1)}_{j,t}, ~j = 1, 2, \dots, 50\\
\gamma^{*, (1)}_{j, t+1} &= -\gamma^{(1)}_{j, t}\sin(\lambda_j) + \gamma^{*, (1)}_{j, t}\cos(\lambda_j) + \omega^{*, (1)}_{j, t}, ~j = 1, 2, \dots, 50\\
\end{align}
$$
where $\epsilon_t$ is white noise, $\omega^{(1)}_{t}$ are i.i.d. $N(0, \sigma^2_1)$.
End of explanation
"""
model = sm.tsa.UnobservedComponents(series,
level='fixed intercept',
seasonal=100)
res_lt = model.fit(disp=False)
print(res_lt.summary())
# The first state variable holds our estimate of the intercept
print("fixed intercept estimated as {0:.3f}".format(res_lt.smoother_results.smoothed_state[0,-1:][0]))
res_lt.plot_components()
plt.show()
"""
Explanation: Note that one of our diagnostic tests would be rejected at the .05 level.
Unobserved components (lazy time domain seasonal modeling)
The fourth method is an unobserved components model with a fixed intercept and a single seasonal component modeled using a time-domain seasonal model of 100 constants. The process for the time series can be written as:
$$
\begin{align}
y_t & = \mu_t + \gamma^{(1)}_t + \epsilon_t\\
\mu_{t+1} &= \mu_{t} \\
\gamma^{(1)}_{t + 1} &= - \sum_{j=1}^{99} \gamma^{(1)}_{t + 1 - j} + \omega^{(1)}_t\\
\end{align}
$$
where $\epsilon_t$ is white noise, $\omega^{(1)}_{t}$ are i.i.d. $N(0, \sigma^2_1)$.
End of explanation
"""
# Assign better names for our seasonal terms
true_seasonal_10_3 = terms[0]
true_seasonal_100_2 = terms[1]
true_sum = true_seasonal_10_3 + true_seasonal_100_2
time_s = np.s_[:50] # After this they basically agree
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
h1, = ax1.plot(series.index[time_s], res_f.freq_seasonal[0].filtered[time_s], label='Double Freq. Seas')
h2, = ax1.plot(series.index[time_s], res_tf.seasonal.filtered[time_s], label='Mixed Domain Seas')
h3, = ax1.plot(series.index[time_s], true_seasonal_10_3[time_s], label='True Seasonal 10(3)')
plt.legend([h1, h2, h3], ['Double Freq. Seasonal','Mixed Domain Seasonal','Truth'], loc=2)
plt.title('Seasonal 10(3) component')
plt.show()
time_s = np.s_[:50] # After this they basically agree
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
h21, = ax2.plot(series.index[time_s], res_f.freq_seasonal[1].filtered[time_s], label='Double Freq. Seas')
h22, = ax2.plot(series.index[time_s], res_tf.freq_seasonal[0].filtered[time_s], label='Mixed Domain Seas')
h23, = ax2.plot(series.index[time_s], true_seasonal_100_2[time_s], label='True Seasonal 100(2)')
plt.legend([h21, h22, h23], ['Double Freq. Seasonal','Mixed Domain Seasonal','Truth'], loc=2)
plt.title('Seasonal 100(2) component')
plt.show()
time_s = np.s_[:100]
fig3 = plt.figure()
ax3 = fig3.add_subplot(111)
h31, = ax3.plot(series.index[time_s], res_f.freq_seasonal[1].filtered[time_s] + res_f.freq_seasonal[0].filtered[time_s], label='Double Freq. Seas')
h32, = ax3.plot(series.index[time_s], res_tf.freq_seasonal[0].filtered[time_s] + res_tf.seasonal.filtered[time_s], label='Mixed Domain Seas')
h33, = ax3.plot(series.index[time_s], true_sum[time_s], label='True Seasonal 100(2)')
h34, = ax3.plot(series.index[time_s], res_lf.freq_seasonal[0].filtered[time_s], label='Lazy Freq. Seas')
h35, = ax3.plot(series.index[time_s], res_lt.seasonal.filtered[time_s], label='Lazy Time Seas')
plt.legend([h31, h32, h33, h34, h35], ['Double Freq. Seasonal','Mixed Domain Seasonal','Truth', 'Lazy Freq. Seas', 'Lazy Time Seas'], loc=1)
plt.title('Seasonal components combined')
plt.show()
"""
Explanation: The seasonal component itself looks good--it is the primary signal. The estimated variance of the seasonal term is very high ($>10^5$), leading to a lot of uncertainty in our one-step-ahead predictions and slow responsiveness to new data, as evidenced by large errors in one-step ahead predictions and observations. Finally, all three of our diagnostic tests were rejected.
Comparison of filtered estimates
The plots below show that explicitly modeling the individual components results in the filtered state being close to the true state within roughly half a period. The lazy models took longer (almost a full period) to do the same on the combined true state.
End of explanation
"""
|
heatseeknyc/data-science | src/bryan analyses/Hack for Heat #5.ipynb | mit | connection = psycopg2.connect('dbname= threeoneone user=threeoneoneadmin password=threeoneoneadmin')
cursor = connection.cursor()
cursor.execute('''SELECT createddate, closeddate, borough FROM service;''')
data = cursor.fetchall()
data = pd.DataFrame(data)
data.columns = ['createddate','closeddate','borough']
data = data.loc[data['createddate'].notnull()]
data = data.loc[data['closeddate'].notnull()]
data['timedelta'] = data['closeddate'] - data['createddate']
data['timedeltaint'] = [x.days for x in data['timedelta']]
data.head()
data.groupby(by='borough')['timedeltaint'].mean()
"""
Explanation: Hack for Heat #5: How long do complaints take to resolve?
In this post, we're going to see if we can graph how long it takes for complaints to get resolved.
End of explanation
"""
data.sort_values('timedeltaint').head()
data.sort_values('timedeltaint', ascending=False).head()
"""
Explanation: Oops! Looks like something's wrong. Let's try and find out:
End of explanation
"""
import datetime
today = datetime.date(2016,5,29)
janone = datetime.date(2010,1,1)
"""
Explanation: Ah. Well, as a first step, let's remove any values that are before Jan 1st 2010 or after today:
End of explanation
"""
subdata = data.loc[(data['closeddate'] > janone) & (data['closeddate'] < today)]
subdata = subdata.loc[subdata['closeddate'] > subdata['createddate']]
len(subdata)
subdata.sort_values('timedeltaint').head()
subdata.sort_values('timedeltaint',ascending = False).head()
"""
Explanation: Let's also remove any rows where the close date is before the created date:
End of explanation
"""
plotdata = list(subdata['timedeltaint'])
plt.figure(figsize=(12,10))
plt.hist(plotdata);
"""
Explanation: This looks a little bit more realistic, but let's also visualize the distribution:
End of explanation
"""
subdata.quantile([.025, .975])
quantcutdata = subdata.loc[(subdata['timedeltaint'] > 1) & (subdata['timedeltaint'] < 138) ]
len(quantcutdata)
plotdata = list(quantcutdata['timedeltaint'])
plt.figure(figsize=(12,10))
plt.hist(plotdata);
"""
Explanation: Okay, this still looks really wonky. Let's further subset the data, and see what happens when we remove the top and bottom 2.5%.
Pandas has a quantile function:
End of explanation
"""
subdata.groupby(by='borough').median()
subdata.groupby(by='borough').mean()
"""
Explanation: That looks a little better, but there might be other ways to filter out bad data.
End of explanation
"""
|