markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127)
---|---|---|---|---|---
## Anonymizing Personally Identifiable Information (PII)

There will be many cases where the data contains Personally Identifiable Information which we cannot disclose. In these cases, we will want our Tabular Models to replace the information within these fields with fake, simulated data that looks similar to the real data but does not contain any of the original values.

Let's load a new dataset that contains a PII field, the `student_placements_pii` demo, and try to generate synthetic versions of it that do not contain any of the PII fields.

**Note:** The `student_placements_pii` dataset is a modified version of the `student_placements` dataset with one new field, `address`, which contains PII information about the students. Notice that this additional `address` field has been simulated and does not correspond to data from the real users.
|
data_pii = load_tabular_demo('student_placements_pii')
data_pii.head()
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
If we use our tabular model on this new data, we will see how the synthetic data that it generates discloses the addresses of the real students:
|
model = TVAE(
primary_key='student_id',
)
model.fit(data_pii)
new_data_pii = model.sample(200)
new_data_pii.head()
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
More specifically, we can see how all the addresses that have been generated actually come from the original dataset:
|
new_data_pii.address.isin(data_pii.address).sum()
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
In order to solve this, we can pass an additional argument, `anonymize_fields`, to our model when we create the instance. This `anonymize_fields` argument needs to be a dictionary that contains:

- The name of the field that we want to anonymize.
- The category of the field that we want to use when we generate fake values for it.

The complete list of possible categories can be seen in the [Faker Providers](https://faker.readthedocs.io/en/master/providers.html) page, and it contains a huge list of concepts such as:

- name
- address
- country
- city
- ssn
- credit_card_number
- credit_card_expire
- credit_card_security_code
- email
- telephone
- ...

In this case, since the field is an address, we will pass a dictionary indicating the category `address`:
|
model = TVAE(
primary_key='student_id',
anonymize_fields={
'address': 'address'
}
)
model.fit(data_pii)
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
As a result, we can see how the real `address` values have been replaced by other fake addresses:
|
new_data_pii = model.sample(200)
new_data_pii.head()
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
Which means that none of the original addresses can be found in the sampled data:
|
data_pii.address.isin(new_data_pii.address).sum()
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
As we can see, in this case these modifications changed the obtained results slightly, but they did not introduce dramatic changes in performance.

## Conditional Sampling

As the name implies, conditional sampling allows us to sample from a conditional distribution using the `TVAE` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `sample_conditions` method as a list of `sdv.sampling.Condition` objects or to the `sample_remaining_columns` method as a dataframe. When specifying a `sdv.sampling.Condition` object, we can pass in the desired conditions as a dictionary, as well as specify the number of desired rows for that condition.
|
from sdv.sampling import Condition
condition = Condition({
'gender': 'M'
}, num_rows=5)
model.sample_conditions(conditions=[condition])
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
It's also possible to condition on multiple columns, such as `gender = 'M'` and `experience_years = 0`.
|
condition = Condition({
'gender': 'M',
'experience_years': 0
}, num_rows=5)
model.sample_conditions(conditions=[condition])
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
In the `sample_remaining_columns` method, `conditions` is passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, in the same order. Since the model already knows how many samples to generate, there is no need to pass the number of rows as a parameter. For example, if we want to generate three samples where `gender = 'M'` and three samples with `gender = 'F'`, we can do the following:
|
import pandas as pd
conditions = pd.DataFrame({
'gender': ['M', 'M', 'M', 'F', 'F', 'F'],
})
model.sample_remaining_columns(conditions)
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
`TVAE` also supports conditioning on continuous values, as long as the values are within the range seen during training. For example, if all the values of the dataset are between 0 and 1, `TVAE` will not be able to set this value to 1000.
|
condition = Condition({
'degree_perc': 70.0
}, num_rows=5)
model.sample_conditions(conditions=[condition])
|
_____no_output_____
|
MIT
|
tutorials/single_table_data/04_TVAE_Model.ipynb
|
HDI-Project/SDV
|
# Lesson 1 - Introduction

## Getting to know the Notebook

Two types of cells:

* Code cells
* Text cells

(Shift + Enter executes a cell.)

This is a **text** cell. It can be formatted with **images**, **HTML**, and **LaTeX**. For example, **LaTeX**:

$Y_t - Y_{t-1} = \rho Y_{t-1} - Y_{t-1} + \epsilon$

$\Delta Y_t = (\rho - 1) Y_{t-1} + \epsilon$

**Image**: (image not included)
|
# Header 1
## Section 1.1
### Sub-section 1.1.1
#### And we can continue
# this is a number
# comment
5
6+2
2 + 2
5 + 2
# Pay attention to the execution order!
5 / 2
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
**Note:** It has some differences from the standard implementation of Jupyter Notebook (e.g. shortcuts).

---

# Basic Types (part 1)

## Numbers
|
# integers
265
# Real (called float)
235.45
# Binary (called Boolean)
True, False
# complex
2 + 4j
# function(123123)
type(2 + 4j)
type(2), type(2.)
type(3/2)
3/2
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Operations

All arithmetic operators:

* +, -, *, /
* %, **, //
|
# % -> Modulus operator
print(13/5)
print(13%5)
# // -> Floor division
13//2
# ** -> exponent
3**3
# operators precedence
print( 2*2**2 )
print( (2*2)**2 )
|
8
16
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
**Note:** Don't use square brackets [ ] or curly brackets { } to group expressions; use parentheses ( ).

## Comparison Operators

==, !=, >, >=, <, <=
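For instance, parentheses group an expression, while square brackets build a list and curly brackets build a set:

```python
(2 + 3) * 4   # parentheses group the expression -> 20
[2 + 3] * 4   # square brackets build a list -> [5, 5, 5, 5]
{2 + 3}       # curly brackets build a set -> {5}
```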
|
# the result is always a boolean
2 == 3
1>2
int(True)
float(2)
123 >= 122.99
# comparing two objects
123 == "123", 123 != "123"
int("234")
# Remove int to raise error
123 <= int("234")
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Logical Operators

These always operate on booleans: and, or, not
|
not True
2 < 5 and (3 < 4)
not (2 > 5) or (3 < 4)
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
https://www.programiz.com/python-programming/precedence-associativity
|
# Evaluate this and note that, for this particular expression, precedence doesn't change the result
# True and True or False and not False
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Bitwise operators

* & - AND
* | - OR
* ^ - XOR
* ~ - NOT
* << - Left shift
* >> - Right shift

If you thought you could skip this class... https://medium.com/analytics-vidhya/python-for-geosciences-raster-bit-masks-explained-step-by-step-8620ed27141e

## Strings
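A short sketch of these operators in action (the sample values are arbitrary, chosen just for illustration):

```python
a = 0b1100  # 12
b = 0b1010  # 10
print(bin(a & b))   # 0b1000  - AND: bits set in both
print(bin(a | b))   # 0b1110  - OR: bits set in either
print(bin(a ^ b))   # 0b110   - XOR: bits set in exactly one
print(bin(a << 1))  # 0b11000 - left shift (multiplies by 2)
print(bin(a >> 2))  # 0b11    - right shift (floor-divides by 4)
print(~a)           # -13     - NOT (two's complement inversion)
```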
|
"Hello World!"
"5 + 2"
"Hello" + " world!"
"Hello" == "Hello!"
# check alphabetical order
"Jean" > "Albin"
3 == "3"
# Some operations are not defined
# "Hello" - "H"
"Hello" < str(3)
12/ 33333
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Variables
|
a = 23
b = 7.89
type(a), type(b)
a + b
s = "Hello world!"
print(s)
type(s)
a < b
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Lists

Up to now, everything could be done with a good calculator... now things will get better.

A list is ordered, accepts duplicates (unlike a set), and can contain different data types.
|
lst = [1, "Hello", 3.5, 4, ["innerList_item1", "innerList_item2"], 6]
lst
len(lst)
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Indexing/Slicing

Indexing and slicing are ways to refer to an individual item, or a subset of items, within a list.

Python indexing is zero-based.
|
# Examples of indexing
# Get the first item and the last item
lst[0], lst[-1]
lst[5]
# Get the second and penultimate items
lst[1], lst[-2]
# Examples of slicing
# NOTE: Slicing does not include the last index, so 0:3 returns the first 3
# elements
# [1, 10) -> 1.....9
# Syntax is: list[first_index:last_index (exclusive)]
lst[0:3]
lst[3:6]
list2 = lst[-2]
lst[-2][0]
# It can work with strings, as well
lst[-2][0][-5:]
lst[-2][0][:5]
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Accessing object members
|
type(lst)
# ctrl+space (autocompletion)
lst.index?
lst.index(4)
help(lst.append)
lst.append?
lst.append('last element')
lst
len(lst)
lst.index('Hello')
lst[-1] = 'last'
lst
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## String Members
|
s.replace('Hello', 'Hi')
s.lower()
s.swapcase()
'234'.isnumeric()
s.isnumeric?
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
We will now see how to control the execution flow of a program. There are important structures still missing, like **tuples**, **dictionaries**, **sets**, etc... We will come back to them afterwards.

# Flow control

## If-statement (if-then-else)

**Basic usage:**

if condition:
> flow if condition is satisfied

else:
> flow if condition is not satisfied

**Extended version:**

if condition:
> flow if condition is satisfied

elif condition2:
> flow if condition2 is satisfied

elif condition3:
> flow if condition3 is satisfied

else:
> flow if no condition is satisfied

The condition is always a boolean.
|
# indent
x = 18276748451
if x % 2 == 0:
print(x)
print('This number is even')
else:
print(x)
print('This number is odd')
x = input("Please, enter an integer:")
# The result of the input function is always a string.
# We have to convert it to an integer before proceeding.
x = int(x)
if x < 0:
print('Negative')
elif x > 0:
print('Positive')
else:
print('Zero')
print('finished')
|
Please, enter an integer:2
Positive
finished
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## While statement

while condition (is met):
> do something
|
# good to count
start = 1
end = 1000
while start <= end:
print(start)
start = start + 1
# combine flow control and loops (printing just the numbers divisible by 3)
i = 0
while i <= 100:
if i % 3 == 0:
print(i)
i = i + 1
# Create a list with the numbers divisible by 3 from 0 to 100
current_number = 0
lst = []
while current_number < 100:
if current_number%3 == 0:
lst.append(current_number)
current_number += 1
str(lst)
# Create a list with the first 10 odd numbers
current_number = 0
lst = []
while len(lst) < 10:
if current_number%2 != 0:
lst.append(current_number)
current_number += 1
lst
# Now we can iterate through a list (old-style)
# Calculate the square
i = 0
while i < len(lst):
print(lst[i]**2)
i += 1
|
0
9
36
81
144
225
324
441
576
729
900
1089
1296
1521
1764
2025
2304
2601
2916
3249
3600
3969
4356
4761
5184
5625
6084
6561
7056
7569
8100
8649
9216
9801
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## For statement

**Basic usage:**

for variable in "list" (iterable):
> do something
|
# iterate through the list (here, dividing each element by 2)
for anything in lst:
print(anything/2)
|
0.0
1.5
3.0
4.5
6.0
7.5
9.0
10.5
12.0
13.5
15.0
16.5
18.0
19.5
21.0
22.5
24.0
25.5
27.0
28.5
30.0
31.5
33.0
34.5
36.0
37.5
39.0
40.5
42.0
43.5
45.0
46.5
48.0
49.5
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
That's different from older (lower-level) languages like C, C++, Pascal, Fortran, etc. **Note: there is no explicit condition in Python's `for` statement.**
|
# range(start, end, step)
for i in range(10, 0, -2):
print(i)
|
10
8
6
4
2
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
## Exercise

We have the precipitation for one month and the corresponding days.
|
import random
random.randint?
# create the days and daily rain
random.seed(1)
daily_rain = []
day_of_month = []
for i in range(1, 32, 1):
day_of_month.append(i)
daily_rain.append(random.randint(0, 100))
str(day_of_month), str(daily_rain)
import matplotlib.pyplot as plt
plt.figure(figsize=(18, 9))
plt.bar(day_of_month, daily_rain)
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
Answer these questions:

* number of days with rain
* day of the maximum rain and day of the minimum rain
* total rain
* mean rain
* Challenge: order the days according to the rain precipitation, in descending order (from highest to lowest). Ex: [12, 7, ...]

One possible solution sketch is shown below.
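A minimal sketch of one possible solution, using only the loops and list operations covered in this lesson (`day_of_month` and `daily_rain` are the lists built above):

```python
# number of days with rain
rainy_days = 0
for rain in daily_rain:
    if rain > 0:
        rainy_days += 1
print('days with rain:', rainy_days)

# day of the maximum rain and day of the minimum rain
max_day = day_of_month[daily_rain.index(max(daily_rain))]
min_day = day_of_month[daily_rain.index(min(daily_rain))]
print('max rain on day', max_day, '- min rain on day', min_day)

# total and mean rain
total_rain = sum(daily_rain)
print('total:', total_rain, '- mean:', total_rain / len(daily_rain))

# Challenge: days ordered by precipitation, descending
pairs = sorted(zip(daily_rain, day_of_month), reverse=True)
print([day for rain, day in pairs])
```

## Extra - n-dimensional matrices as combination of lists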
|
# create a checkerboard
l1 = [0, 1, 0, 1, 0, 1, 0, 1]
l2 = [1, 0, 1, 0, 1, 0, 1, 0]
l3 = [0, 1, 0, 1, 0, 1, 0, 1]
l4 = [1, 0, 1, 0, 1, 0, 1, 0]
l5 = [0, 1, 0, 1, 0, 1, 0, 1]
l6 = [1, 0, 1, 0, 1, 0, 1, 0]
l7 = [0, 1, 0, 1, 0, 1, 0, 1]
l8 = [1, 0, 1, 0, 1, 0, 1, 0]
m = [l1, l2, l3, l4, l5, l6, l7, l8]
m
m[2][2]
type(m[2])
plt.imshow(m, cmap='hot')
size = 12
m = []
for i in range(size): # lines
line = []
for j in range(size): # columns
line.append(i%2 == j%2)
m.append(line)
plt.imshow(m, cmap='hot')
# build one row with the values 0..255 ('linha' is Portuguese for 'row')
linha = []
i = 0
while i < 256:
    linha.append(i)
    i = i + 1
str(linha)
# stack 256 identical rows to draw a horizontal gradient
m = []
i = 0
while i < 256:
    m.append(linha)
    i = i + 1
plt.imshow(m, cmap='hot')
|
_____no_output_____
|
MIT
|
Python4Scientists_Lesson1.ipynb
|
cordmaur/PythonForScientists
|
_*H2 ground state energy computation using Iterative QPE*_

This notebook demonstrates using Qiskit Chemistry to plot graphs of the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using the IQPE (Iterative Quantum Phase Estimation) algorithm. It is compared to the same energies as computed by the ExactEigensolver.

This notebook populates a dictionary, which is a programmatic representation of an input file, in order to drive the qiskit_chemistry stack. Such a dictionary can be manipulated programmatically, and this is indeed the case here, where we alter the molecule supplied to the driver in each loop.

This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
|
import numpy as np
import pylab
from qiskit import LegacySimulators
from qiskit_chemistry import QiskitChemistry
import time
# Input dictionary to configure Qiskit Chemistry for the chemistry problem.
qiskit_chemistry_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': '', 'basis': 'sto3g'},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {'name': ''},
'initial_state': {'name': 'HartreeFock'},
}
molecule = 'H .0 .0 -{0}; H .0 .0 {0}'
algorithms = [
{
'name': 'IQPE',
'num_iterations': 16,
'num_time_slices': 3000,
'expansion_mode': 'trotter',
'expansion_order': 1,
},
{
'name': 'ExactEigensolver'
}
]
backends = [
LegacySimulators.get_backend('qasm_simulator'),
None
]
start = 0.5  # Start distance
by = 0.5     # Total amount to increase the distance by across all steps
steps = 20   # Number of steps
energies = np.empty([len(algorithms), steps+1])
hf_energies = np.empty(steps+1)
distances = np.empty(steps+1)
import concurrent.futures
import multiprocessing as mp
import copy
def subroutine(j, i, qiskit_chemistry_dict, d, backend, algorithm):
    solver = QiskitChemistry()
    qiskit_chemistry_dict['PYSCF']['atom'] = molecule.format(d/2)
    qiskit_chemistry_dict['algorithm'] = algorithm
    result = solver.run(qiskit_chemistry_dict, backend=backend)
    return j, i, d, result['energy'], result['hf_energy']
start_time = time.time()
max_workers = max(4, mp.cpu_count())
with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
    futures = []
    for j in range(len(algorithms)):
        algorithm = algorithms[j]
        backend = backends[j]
        for i in range(steps+1):
            d = start + i*by/steps
            future = executor.submit(
                subroutine,
                j,  # carry the algorithm index so each result lands in the right row
                i,
                copy.deepcopy(qiskit_chemistry_dict),
                d,
                backend,
                algorithm
            )
            futures.append(future)
    for future in concurrent.futures.as_completed(futures):
        j, i, d, energy, hf_energy = future.result()
        energies[j][i] = energy
        hf_energies[i] = hf_energy
        distances[i] = d
print(' --- complete')
print('Distances: ', distances)
print('Energies:', energies)
print('Hartree-Fock energies:', hf_energies)
print("--- %s seconds ---" % (time.time() - start_time))
pylab.plot(distances, hf_energies, label='Hartree-Fock')
for j in range(len(algorithms)):
pylab.plot(distances, energies[j], label=algorithms[j]['name'])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('H2 Ground State Energy')
pylab.legend(loc='upper right')
pylab.show()
pylab.plot(distances, np.subtract(hf_energies, energies[1]), label='Hartree-Fock')
pylab.plot(distances, np.subtract(energies[0], energies[1]), label='IQPE')
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference from ExactEigensolver')
pylab.legend(loc='upper right')
pylab.show()
|
_____no_output_____
|
Apache-2.0
|
community/aqua/chemistry/h2_iqpe.ipynb
|
Chibikuri/qiskit-tutorials
|
# ML Pipeline Preparation

Follow the instructions below to help you create your ML pipeline.

### 1. Import libraries and load data from database.

- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
|
# import necessary libraries
import os
import pickle
import re
import sqlite3

import numpy as np
import pandas as pd
from sqlalchemy import create_engine
from scipy.stats import hmean
from scipy.stats.mstats import gmean

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize, RegexpTokenizer

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import (confusion_matrix, classification_report, make_scorer,
                             accuracy_score, f1_score, fbeta_score,
                             precision_recall_fscore_support)
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.tree import DecisionTreeClassifier

nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
import matplotlib.pyplot as plt
%matplotlib inline
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql("SELECT * FROM InsertTableName", engine)
df.head()
# View counts of the unique 'genre' values
genre_types = df.genre.value_counts()
genre_types
# check for attributes with missing values/elements
df.isnull().mean().head()
# preview the rows remaining after dropna() (note: not assigned, so df itself is unchanged)
df.dropna()
df.head()
# load data from database with 'X' as attributes for message column
X = df["message"]
# load data from database with 'Y' attributes for the last 36 columns
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 2. Write a tokenization function to process your text data
|
# Preprocess text by removing unwanted properties
def tokenize(text):
    '''
    input:
        text: raw message text
    output:
        clean_tokens: list of cleaned, lemmatized tokens
    '''
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# take out all punctuation while tokenizing
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(text)
# lemmatize as shown in the lesson
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 3. Build a machine learning pipeline

This machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
|
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# Visualize model parameters
pipeline.get_params()
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 4. Train pipeline

- Split data into train and test sets
- Train pipeline
|
# use sklearn split function to split dataset into train and 20% test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2)
# Train pipeline using RandomForest Classifier algorithm
pipeline.fit(X_train, y_train)
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 5. Test your model

Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's classification_report on each.
|
# Output result metrics of trained RandomForest Classifier algorithm
def evaluate_model(model, X_test, y_test):
    '''
    Input:
        model: trained classifier
        X_test: test features
        y_test: test response variables
    Output:
        None:
        Displays model precision, recall, f1-score, and support per category
    '''
y_pred = model.predict(X_test)
for item, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, item]))
# classification_report to display model precision, recall, f1-score, support
evaluate_model(pipeline, X_test, y_test)
|
related
precision recall f1-score support
0 0.65 0.38 0.48 1193
1 0.83 0.94 0.88 4016
2 0.50 0.43 0.46 35
avg / total 0.79 0.81 0.79 5244
request
precision recall f1-score support
0 0.89 0.98 0.93 4361
1 0.82 0.39 0.53 883
avg / total 0.88 0.88 0.87 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.72 0.88 0.79 3049
1 0.75 0.53 0.62 2195
avg / total 0.74 0.73 0.72 5244
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 4805
1 0.71 0.08 0.14 439
avg / total 0.90 0.92 0.89 5244
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 4984
1 0.60 0.07 0.12 260
avg / total 0.94 0.95 0.93 5244
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 5106
1 0.67 0.10 0.18 138
avg / total 0.97 0.98 0.97 5244
security
precision recall f1-score support
0 0.98 1.00 0.99 5151
1 0.25 0.01 0.02 93
avg / total 0.97 0.98 0.97 5244
military
precision recall f1-score support
0 0.97 1.00 0.98 5069
1 0.67 0.07 0.12 175
avg / total 0.96 0.97 0.95 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.95 1.00 0.97 4897
1 0.82 0.30 0.44 347
avg / total 0.94 0.95 0.94 5244
food
precision recall f1-score support
0 0.94 0.99 0.96 4655
1 0.83 0.46 0.59 589
avg / total 0.92 0.93 0.92 5244
shelter
precision recall f1-score support
0 0.93 0.99 0.96 4761
1 0.82 0.30 0.44 483
avg / total 0.92 0.93 0.91 5244
clothing
precision recall f1-score support
0 0.98 1.00 0.99 5150
1 1.00 0.05 0.10 94
avg / total 0.98 0.98 0.98 5244
money
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 0.75 0.05 0.10 111
avg / total 0.98 0.98 0.97 5244
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5181
1 0.75 0.05 0.09 63
avg / total 0.99 0.99 0.98 5244
refugees
precision recall f1-score support
0 0.97 1.00 0.99 5091
1 0.82 0.06 0.11 153
avg / total 0.97 0.97 0.96 5244
death
precision recall f1-score support
0 0.96 1.00 0.98 5021
1 0.77 0.11 0.19 223
avg / total 0.95 0.96 0.95 5244
other_aid
precision recall f1-score support
0 0.87 0.99 0.93 4531
1 0.54 0.04 0.07 713
avg / total 0.82 0.86 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 4907
1 0.00 0.00 0.00 337
avg / total 0.88 0.93 0.90 5244
transport
precision recall f1-score support
0 0.95 1.00 0.97 4977
1 0.61 0.06 0.12 267
avg / total 0.93 0.95 0.93 5244
buildings
precision recall f1-score support
0 0.95 1.00 0.97 4966
1 0.87 0.07 0.13 278
avg / total 0.95 0.95 0.93 5244
electricity
precision recall f1-score support
0 0.98 1.00 0.99 5138
1 0.83 0.09 0.17 106
avg / total 0.98 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 1.00 5209
1 0.00 0.00 0.00 35
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 5189
1 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5218
1 0.00 0.00 0.00 26
avg / total 0.99 1.00 0.99 5244
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 5185
1 0.00 0.00 0.00 59
avg / total 0.98 0.99 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 5011
1 0.25 0.00 0.01 233
avg / total 0.92 0.96 0.93 5244
weather_related
precision recall f1-score support
0 0.85 0.97 0.90 3801
1 0.85 0.53 0.66 1443
avg / total 0.85 0.85 0.83 5244
floods
precision recall f1-score support
0 0.93 1.00 0.96 4798
1 0.87 0.23 0.37 446
avg / total 0.93 0.93 0.91 5244
storm
precision recall f1-score support
0 0.94 0.99 0.96 4758
1 0.77 0.35 0.48 486
avg / total 0.92 0.93 0.92 5244
fire
precision recall f1-score support
0 0.99 1.00 0.99 5186
1 1.00 0.02 0.03 58
avg / total 0.99 0.99 0.98 5244
earthquake
precision recall f1-score support
0 0.96 0.99 0.98 4769
1 0.90 0.61 0.73 475
avg / total 0.96 0.96 0.95 5244
cold
precision recall f1-score support
0 0.98 1.00 0.99 5150
1 0.90 0.10 0.17 94
avg / total 0.98 0.98 0.98 5244
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 4958
1 0.46 0.04 0.08 286
avg / total 0.92 0.95 0.92 5244
direct_report
precision recall f1-score support
0 0.85 0.98 0.91 4197
1 0.78 0.30 0.43 1047
avg / total 0.83 0.84 0.81 5244
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 6. Improve your model

Use grid search to find better parameters.
|
parameters = {'clf__estimator__max_depth': [10, 50, None],
'clf__estimator__min_samples_leaf':[2, 5, 10]}
cv = GridSearchCV(pipeline, parameters)
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 7. Test your model

Show the accuracy, precision, and recall of the tuned model.

Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine-tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
|
# Train pipeline using the improved model
cv.fit(X_train, y_train)
# classification_report to display model precision, recall, f1-score, support
evaluate_model(cv, X_test, y_test)
cv.best_estimator_
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 8. Try improving your model further. Here are a few ideas:

* try other machine learning algorithms
* add other features besides the TF-IDF
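The second idea can be sketched with `FeatureUnion` (already imported above). This is only a sketch, and `MessageLengthExtractor` is a hypothetical helper written here for illustration, not part of the original notebook:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier

class MessageLengthExtractor(BaseEstimator, TransformerMixin):
    '''Hypothetical extra feature: the character length of each message.'''
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(text) for text in X]).reshape(-1, 1)

feature_pipeline = Pipeline([
    ('features', FeatureUnion([
        ('text', Pipeline([
            ('vect', CountVectorizer(tokenizer=tokenize)),
            ('tfidf', TfidfTransformer()),
        ])),
        ('length', MessageLengthExtractor()),
    ])),
    ('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# feature_pipeline.fit(X_train, y_train) trains it exactly like the pipelines above
```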
|
# Improve model using DecisionTree Classifier
new_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(DecisionTreeClassifier()))
])
# Train improved model
new_pipeline.fit(X_train, y_train)
# Run result metric score display function
evaluate_model(new_pipeline, X_test, y_test)
|
related
precision recall f1-score support
0 0.47 0.45 0.46 1193
1 0.84 0.85 0.84 4016
2 0.31 0.40 0.35 35
avg / total 0.75 0.75 0.75 5244
request
precision recall f1-score support
0 0.92 0.92 0.92 4361
1 0.60 0.61 0.60 883
avg / total 0.87 0.87 0.87 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.75 0.75 0.75 3049
1 0.65 0.65 0.65 2195
avg / total 0.71 0.71 0.71 5244
medical_help
precision recall f1-score support
0 0.94 0.95 0.94 4805
1 0.33 0.30 0.31 439
avg / total 0.89 0.89 0.89 5244
medical_products
precision recall f1-score support
0 0.97 0.97 0.97 4984
1 0.40 0.35 0.37 260
avg / total 0.94 0.94 0.94 5244
search_and_rescue
precision recall f1-score support
0 0.98 0.98 0.98 5106
1 0.22 0.20 0.21 138
avg / total 0.96 0.96 0.96 5244
security
precision recall f1-score support
0 0.98 0.99 0.98 5151
1 0.04 0.03 0.03 93
avg / total 0.97 0.97 0.97 5244
military
precision recall f1-score support
0 0.98 0.98 0.98 5069
1 0.39 0.37 0.38 175
avg / total 0.96 0.96 0.96 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.98 0.98 0.98 4897
1 0.67 0.67 0.67 347
avg / total 0.96 0.96 0.96 5244
food
precision recall f1-score support
0 0.96 0.96 0.96 4655
1 0.72 0.71 0.71 589
avg / total 0.94 0.94 0.94 5244
shelter
precision recall f1-score support
0 0.96 0.96 0.96 4761
1 0.62 0.59 0.61 483
avg / total 0.93 0.93 0.93 5244
clothing
precision recall f1-score support
0 0.99 1.00 0.99 5150
1 0.62 0.40 0.49 94
avg / total 0.98 0.98 0.98 5244
money
precision recall f1-score support
0 0.99 0.99 0.99 5133
1 0.40 0.38 0.39 111
avg / total 0.97 0.97 0.97 5244
missing_people
precision recall f1-score support
0 0.99 0.99 0.99 5181
1 0.27 0.21 0.23 63
avg / total 0.98 0.98 0.98 5244
refugees
precision recall f1-score support
0 0.98 0.98 0.98 5091
1 0.24 0.25 0.25 153
avg / total 0.96 0.95 0.96 5244
death
precision recall f1-score support
0 0.98 0.98 0.98 5021
1 0.49 0.53 0.51 223
avg / total 0.96 0.96 0.96 5244
other_aid
precision recall f1-score support
0 0.89 0.90 0.89 4531
1 0.29 0.27 0.28 713
avg / total 0.81 0.81 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 0.95 0.95 4907
1 0.18 0.16 0.17 337
avg / total 0.89 0.90 0.90 5244
transport
precision recall f1-score support
0 0.96 0.97 0.97 4977
1 0.36 0.29 0.32 267
avg / total 0.93 0.94 0.93 5244
buildings
precision recall f1-score support
0 0.97 0.97 0.97 4966
1 0.43 0.40 0.42 278
avg / total 0.94 0.94 0.94 5244
electricity
precision recall f1-score support
0 0.99 0.99 0.99 5138
1 0.39 0.31 0.35 106
avg / total 0.97 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 0.99 5209
1 0.05 0.03 0.04 35
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 0.99 0.99 5189
1 0.22 0.18 0.20 55
avg / total 0.98 0.98 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5218
1 0.00 0.00 0.00 26
avg / total 0.99 0.99 0.99 5244
aid_centers
precision recall f1-score support
0 0.99 0.99 0.99 5185
1 0.08 0.08 0.08 59
avg / total 0.98 0.98 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 0.97 0.96 5011
1 0.15 0.13 0.14 233
avg / total 0.92 0.93 0.93 5244
weather_related
precision recall f1-score support
0 0.89 0.91 0.90 3801
1 0.74 0.71 0.72 1443
avg / total 0.85 0.85 0.85 5244
floods
precision recall f1-score support
0 0.96 0.96 0.96 4798
1 0.59 0.54 0.57 446
avg / total 0.93 0.93 0.93 5244
storm
precision recall f1-score support
0 0.96 0.97 0.97 4758
1 0.66 0.65 0.65 486
avg / total 0.94 0.94 0.94 5244
fire
precision recall f1-score support
0 0.99 0.99 0.99 5186
1 0.31 0.29 0.30 58
avg / total 0.98 0.99 0.98 5244
earthquake
precision recall f1-score support
0 0.98 0.98 0.98 4769
1 0.80 0.78 0.79 475
avg / total 0.96 0.96 0.96 5244
cold
precision recall f1-score support
0 0.99 0.99 0.99 5150
1 0.34 0.38 0.36 94
avg / total 0.98 0.98 0.98 5244
other_weather
precision recall f1-score support
0 0.96 0.96 0.96 4958
1 0.26 0.22 0.24 286
avg / total 0.92 0.92 0.92 5244
direct_report
precision recall f1-score support
0 0.88 0.89 0.88 4197
1 0.54 0.50 0.52 1047
avg / total 0.81 0.81 0.81 5244
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
### 9. Export your model as a pickle file
|
# save the trained model to disk
trained_model_file = 'trained_model.sav'
with open(trained_model_file, 'wb') as f:
    pickle.dump(cv, f)
|
_____no_output_____
|
FTL
|
ML Pipeline Preparation.ipynb
|
Sanmilee/Disaster-Response-Pipeline
|
Total Cases and Mortality by Condition
|
import matplotlib.pyplot as plt
cv19_confirmed_cases = covid_pd[covid_pd['RESULTADO_LAB'] == YES]
pneumonia_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['NEUMONIA'] == YES]
diabetes_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['DIABETES'] == YES]
epoc_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['EPOC'] == YES]
asma_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['ASMA'] == YES]
inmusupr_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['INMUSUPR'] == YES]
hyper_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['HIPERTENSION'] == YES]
# others_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['OTRAS_COM'] == YES]
cardio_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['CARDIOVASCULAR'] == YES]
obesity_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['OBESIDAD'] == YES]
renal_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['RENAL_CRONICA'] == YES]
#
smoking_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['TABAQUISMO'] == YES]
TOTAL_POSITIVE_COV19_CASES = cv19_confirmed_cases.shape[0] # len(list(filter(lambda x: x, covid_pd['RESULTADO_LAB'] == YES)))
TOTAL_PNEUMONIA_CASES = pneumonia_confirmed_cases.shape[0]
print(TOTAL_POSITIVE_COV19_CASES)
def percentage_died(df):
part = who_died(df).shape[0]
whole = df.shape[0]
percentage = 100 * float(part)/float(whole)
return f'{int(percentage)}%'
def who_died(df):
return df[df['FECHA_DEF'] != '9999-99-99']
diseases_dfs = [
diabetes_confirmed_cases,
# pneumonia_confirmed_cases,
epoc_confirmed_cases,
asma_confirmed_cases,
inmusupr_confirmed_cases,
hyper_confirmed_cases,
cardio_confirmed_cases,
obesity_confirmed_cases,
renal_confirmed_cases,
smoking_confirmed_cases,
]
_ = lambda value: '{:,.2f}'.format(value).split('.')[0] if type(value) != str else value
cases_by_disease = pd.DataFrame.from_dict({
'Padecimiento': ['Diabetes',
# 'Neumonía',
'EPOC', 'Asma', 'Inmunosupresión', 'Hipertensión', 'Cardiovascular',
'Obesidad', 'Renal Crónica', 'Tabaquismo'],
'Positivos': [
diabetes_confirmed_cases.shape[0],
# pneumonia_confirmed_cases.shape[0],
epoc_confirmed_cases.shape[0],
asma_confirmed_cases.shape[0],
inmusupr_confirmed_cases.shape[0],
hyper_confirmed_cases.shape[0],
cardio_confirmed_cases.shape[0],
obesity_confirmed_cases.shape[0],
renal_confirmed_cases.shape[0],
smoking_confirmed_cases.shape[0],
],
'Muertes': [
who_died(diabetes_confirmed_cases).shape[0],
# who_died(pneumonia_confirmed_cases).shape[0],
who_died(epoc_confirmed_cases).shape[0],
who_died(asma_confirmed_cases).shape[0],
who_died(inmusupr_confirmed_cases).shape[0],
who_died(hyper_confirmed_cases).shape[0],
who_died(cardio_confirmed_cases).shape[0],
who_died(obesity_confirmed_cases).shape[0],
who_died(renal_confirmed_cases).shape[0],
who_died(smoking_confirmed_cases).shape[0],
],
'Porcentaje de Muerte': [
percentage_died(diabetes_confirmed_cases),
# percentage_died(pneumonia_confirmed_cases),
percentage_died(epoc_confirmed_cases),
percentage_died(asma_confirmed_cases),
percentage_died(inmusupr_confirmed_cases),
percentage_died(hyper_confirmed_cases),
percentage_died(cardio_confirmed_cases),
percentage_died(obesity_confirmed_cases),
percentage_died(renal_confirmed_cases),
percentage_died(smoking_confirmed_cases),
],
})
cases_by_disease = cases_by_disease.set_index('Padecimiento')
# cases_by_disease = cases_by_disease.astype({'Positivos': float, 'Muertes' : float})
cases_by_disease.applymap(_).to_csv(join(output_folder, 'table1.csv'))
cases_by_disease.applymap(_)
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter, StrMethodFormatter
cases_by_disease
ax = cases_by_disease.plot.bar(rot=0, figsize=(15,5))
plt.yticks(fontsize = 13)
plt.xlabel('Casos positivos y defunciones por padecimiento', fontsize = 18)
# add value label to each bar, displayng its height
for p in ax.patches:
ax.annotate(p.get_height(),
(p.get_x() + p.get_width()/2., p.get_height()),
ha = 'center', va = 'center', xytext = (0,7), textcoords = 'offset points', size=9)
ax.yaxis.set_major_formatter(StrMethodFormatter('{x:,}'))
plt.tight_layout()
# save Figure 7 as an image
plt.savefig(join(output_folder, 'figure1.png'))
from matplotlib_venn import venn3, venn3_circles
from matplotlib.pyplot import gca
major_diseases = [set(diabetes_confirmed_cases['ID_REGISTRO']),
set(hyper_confirmed_cases['ID_REGISTRO']),
set(obesity_confirmed_cases['ID_REGISTRO'])]
major_diseases_deaths = [set(who_died(diabetes_confirmed_cases)['ID_REGISTRO']),
set(who_died(hyper_confirmed_cases)['ID_REGISTRO']),
set(who_died(obesity_confirmed_cases)['ID_REGISTRO'])]
fig, axes = plt.subplots(1, 2, figsize=(15, 15))
venn3(major_diseases,
set_colors=('#3E64AF', '#3EAF5D', '#D74E3B'),
set_labels = ('Diabetes',
'Hipertensión',
'Obesidad',
),
alpha=0.75,
)
venn3_circles(major_diseases, lw=0.7)
plt.subplot(1, 2, 1)
venn3(major_diseases_deaths,
set_colors=('#3E64AF', '#3EAF5D', '#D74E3B'),
set_labels = ('Fallecimientos por \nDiabetes',
'Fallecimientos por \nHipertensión',
'Fallecimientos por \nObesidad'),
alpha=0.75)
venn3_circles(major_diseases_deaths, lw=0.7)
plt.tight_layout()
plt.savefig(join(output_folder, 'figure2.png'), bbox_inches='tight')
plt.show()
axes
fig, axes = plt.subplots(3, 3, figsize=(10, 10), dpi=100)
colors = ['tab:red', 'tab:blue', 'tab:green', 'tab:pink', 'tab:olive']
disease_title = [
'Diabetes',
'EPOC',
'Asma',
'Inmunosuprecion',
'Hipertension',
'Cardiovascular',
'Obesidad',
'Insuficiencia renal',
'Tabaquismo'
]
for i, (ax, df) in enumerate(zip(axes.flatten(), diseases_dfs)):
ax.hist(df['EDAD'], alpha=0.5, bins=100, density=True, stacked=True, label=disease_title[i], color=colors[ i % 4])
ax.set_xlabel("Edad")
ax.set_ylabel("Frecuencia")
ax.legend(loc='upper left', frameon=False)
# ax.set_title(disease_title[i])
ax.set_xlim(0, 90);
plt.suptitle('Afectacion de pacientes con enfermadad preexistente por edad ', y=1.05, size=16)
plt.tight_layout();
plt.savefig(join(output_folder, 'figure3.png'), bbox_inches='tight')
#diabetes_confirmed_cases
fig, axes = plt.subplots(3, 3, figsize=(10, 10), dpi=100)
diseases_dfs = [
    who_died(diabetes_confirmed_cases),
    # pneumonia excluded so the list lines up with the 9 disease_title labels
    who_died(epoc_confirmed_cases),
    who_died(asma_confirmed_cases),
    who_died(inmusupr_confirmed_cases),
    who_died(hyper_confirmed_cases),
    who_died(cardio_confirmed_cases),
    who_died(obesity_confirmed_cases),
    who_died(renal_confirmed_cases),
    who_died(smoking_confirmed_cases),
]
for i, (ax, df) in enumerate(zip(axes.flatten(), diseases_dfs)):
ax.hist(df['EDAD'], alpha=0.5, bins=100, density=True, stacked=True, label=disease_title[i], color=colors[ i % 4])
# ax.set_title(disease_title[i])
ax.set_xlabel("Edad")
ax.set_ylabel("Frecuencia")
ax.legend(loc='upper left', frameon=False)
ax.set_xlim(0, 90);
plt.suptitle('Afectacion de fallecidos con enfermadad preexistente por edad ', y=1.05, size=16)
plt.tight_layout();
plt.savefig(join(output_folder, 'figure4.png'), bbox_inches='tight')
|
_____no_output_____
|
MIT
|
001-000-general-overview/run.ipynb
|
devlabmexico/reporte-covid
|
# Computer Vision Nanodegree

## Project: Image Captioning

---

In this notebook, you will learn how to load and pre-process data from the [COCO dataset](http://cocodataset.org/#home). You will also design a CNN-RNN model for automatically generating image captions.

Note that **any amendments that you make to this notebook will not be graded**. However, you will use the instructions provided in **Step 3** and **Step 4** to implement your own CNN encoder and RNN decoder by making amendments to the **models.py** file provided as part of this project. Your **models.py** file **will be graded**.

Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Explore the Data Loader
- [Step 2](#step2): Use the Data Loader to Obtain Batches
- [Step 3](#step3): Experiment with the CNN Encoder
- [Step 4](#step4): Implement the RNN Decoder

## Step 1: Explore the Data Loader

We have already written a [data loader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) that you can use to load the COCO dataset in batches. In the code cell below, you will initialize the data loader by using the `get_loader` function in **data_loader.py**.

> For this project, you are not permitted to change the **data_loader.py** file, which must be used as-is.

The `get_loader` function takes as input a number of arguments that can be explored in **data_loader.py**. Take the time to explore these arguments now by opening **data_loader.py** in a new window. Most of the arguments must be left at their default values, and you are only allowed to amend the values of the arguments below:

1. **`transform`** - an [image transform](http://pytorch.org/docs/master/torchvision/transforms.html) specifying how to pre-process the images and convert them to PyTorch tensors before using them as input to the CNN encoder. For now, you are encouraged to keep the transform as provided in `transform_train`. You will have the opportunity later to choose your own image transform to pre-process the COCO images.
2. **`mode`** - one of `'train'` (loads the training data in batches) or `'test'` (for the test data). We will say that the data loader is in training or test mode, respectively. While following the instructions in this notebook, please keep the data loader in training mode by setting `mode='train'`.
3. **`batch_size`** - determines the batch size. When training the model, this is the number of image-caption pairs used to amend the model weights in each training step.
4. **`vocab_threshold`** - the total number of times that a word must appear in the training captions before it is used as part of the vocabulary. Words that have fewer than `vocab_threshold` occurrences in the training captions are considered unknown words.
5. **`vocab_from_file`** - a Boolean that decides whether to load the vocabulary from file.

We will describe the `vocab_threshold` and `vocab_from_file` arguments in more detail soon. For now, run the code cell below. Be patient - it may take a couple of minutes to run!
|
# install PixieDebugger - A Visual Python Debugger for Jupyter Notebooks
# https://medium.com/codait/the-visual-python-debugger-for-jupyter-notebooks-youve-always-wanted-761713babc62
# https://www.analyticsvidhya.com/blog/2018/07/pixie-debugger-python-debugging-tool-jupyter-notebooks-data-scientist-must-use/
!pip install pixiedust
# install other toolboxes
!pip install tqdm==4.14 # https://stackoverflow.com/questions/59109313/tqdm-tqdm-tqdmkeyerror-unknown-arguments-unit-divisor-1024
!pip install nltk
!pip install torch==1.2.0 torchvision==0.4.0
!pip install torchsummary
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
import nltk
nltk.download('punkt')
from data_loader import get_loader
import torch
print('PyTorch Version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
from torchvision import transforms
from torchsummary import summary
import pixiedust
# Define a transform to pre-process the training images.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Set the minimum word count threshold.
vocab_threshold = 5
# Specify the batch size.
batch_size = 64
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False)
|
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
PyTorch Version: 1.2.0
CUDA available: True
Pixiedust database opened successfully
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
When you ran the code cell above, the data loader was stored in the variable `data_loader`. You can access the corresponding dataset as `data_loader.dataset`. This dataset is an instance of the `CoCoDataset` class in **data_loader.py**. If you are unfamiliar with data loaders and datasets, you are encouraged to review [this PyTorch tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).

### Exploring the `__getitem__` Method

The `__getitem__` method in the `CoCoDataset` class determines how an image-caption pair is pre-processed before being incorporated into a batch. This is true for all `Dataset` classes in PyTorch; if this is unfamiliar to you, please review [the tutorial linked above](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).

When the data loader is in training mode, this method begins by first obtaining the filename (`path`) of a training image and its corresponding caption (`caption`).

#### Image Pre-Processing

Image pre-processing is relatively straightforward (from the `__getitem__` method in the `CoCoDataset` class):

```python
# Convert image to tensor and pre-process using transform
image = Image.open(os.path.join(self.img_folder, path)).convert('RGB')
image = self.transform(image)
```

After loading the image in the training folder with name `path`, the image is pre-processed using the same transform (`transform_train`) that was supplied when instantiating the data loader.

#### Caption Pre-Processing

The captions also need to be pre-processed and prepped for training. In this example, for generating captions, we are aiming to create a model that predicts the next token of a sentence from previous tokens, so we turn the caption associated with any image into a list of tokenized words, before casting it to a PyTorch tensor that we can use to train the network.

To understand in more detail how COCO captions are pre-processed, we'll first need to take a look at the `vocab` instance variable of the `CoCoDataset` class. The code snippet below is pulled from the `__init__` method of the `CoCoDataset` class:

```python
def __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word,
             end_word, unk_word, annotations_file, vocab_from_file, img_folder):
    ...
    self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word,
                            end_word, unk_word, annotations_file, vocab_from_file)
    ...
```

From the code snippet above, you can see that `data_loader.dataset.vocab` is an instance of the `Vocabulary` class from **vocabulary.py**. Take the time now to verify this for yourself by looking at the full code in **data_loader.py**.

We use this instance to pre-process the COCO captions (from the `__getitem__` method in the `CoCoDataset` class):

```python
# Convert caption to tensor of word ids.
tokens = nltk.tokenize.word_tokenize(str(caption).lower())   # line 1
caption = []                                                 # line 2
caption.append(self.vocab(self.vocab.start_word))            # line 3
caption.extend([self.vocab(token) for token in tokens])      # line 4
caption.append(self.vocab(self.vocab.end_word))              # line 5
caption = torch.Tensor(caption).long()                       # line 6
```

As you will see soon, this code converts any string-valued caption to a list of integers, before casting it to a PyTorch tensor. To see how this code works, we'll apply it to the sample caption in the next code cell.
|
sample_caption = 'A person doing a trick on a rail while riding a skateboard.'
|
_____no_output_____
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
In **`line 1`** of the code snippet, every letter in the caption is converted to lowercase, and the [`nltk.tokenize.word_tokenize`](http://www.nltk.org/) function is used to obtain a list of string-valued tokens. Run the next code cell to visualize the effect on `sample_caption`.
|
sample_tokens = nltk.tokenize.word_tokenize(str(sample_caption).lower())
print(sample_tokens)
|
['a', 'person', 'doing', 'a', 'trick', 'on', 'a', 'rail', 'while', 'riding', 'a', 'skateboard', '.']
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
In **`line 2`** and **`line 3`** we initialize an empty list and append an integer to mark the start of a caption. The [paper](https://arxiv.org/pdf/1411.4555.pdf) that you are encouraged to implement uses a special start word (and a special end word, which we'll examine below) to mark the beginning (and end) of a caption.

This special start word (`"<start>"`) is decided when instantiating the data loader and is passed as a parameter (`start_word`). You are **required** to keep this parameter at its default value (`start_word="<start>"`).

As you will see below, the integer `0` is always used to mark the start of a caption.
|
sample_caption = []
start_word = data_loader.dataset.vocab.start_word
print('Special start word:', start_word)
sample_caption.append(data_loader.dataset.vocab(start_word))
print(sample_caption)
|
Special start word: <start>
[0]
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
In **`line 4`**, we continue the list by adding integers that correspond to each of the tokens in the caption.
|
sample_caption.extend([data_loader.dataset.vocab(token) for token in sample_tokens])
print(sample_caption)
|
[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18]
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
In **`line 5`**, we append a final integer to mark the end of the caption. Identical to the case of the special start word (above), the special end word (`"<end>"`) is decided when instantiating the data loader and is passed as a parameter (`end_word`). You are **required** to keep this parameter at its default value (`end_word="<end>"`).

As you will see below, the integer `1` is always used to mark the end of a caption.
|
end_word = data_loader.dataset.vocab.end_word
print('Special end word:', end_word)
sample_caption.append(data_loader.dataset.vocab(end_word))
print(sample_caption)
|
Special end word: <end>
[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18, 1]
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
Finally, in **`line 6`**, we convert the list of integers to a PyTorch tensor and cast it to [long type](http://pytorch.org/docs/master/tensors.html#torch.Tensor.long). You can read more about the different types of PyTorch tensors on the [website](http://pytorch.org/docs/master/tensors.html).
|
sample_caption = torch.Tensor(sample_caption).long()
print(sample_caption)
|
tensor([ 0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3,
753, 18, 1])
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
And that's it! In summary, any caption is converted to a list of tokens, with _special_ start and end tokens marking the beginning and end of the sentence:

```
[<start>, 'a', 'person', 'doing', 'a', 'trick', 'on', 'a', 'rail', 'while', 'riding', 'a', 'skateboard', '.', <end>]
```

This list of tokens is then turned into a list of integers, where every distinct word in the vocabulary has an associated integer value:

```
[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18, 1]
```

Finally, this list is converted to a PyTorch tensor. All of the captions in the COCO dataset are pre-processed using this same procedure from **`lines 1-6`** described above.

As you saw, in order to convert a token to its corresponding integer, we call `data_loader.dataset.vocab` as a function. The details of how this call works can be explored in the `__call__` method in the `Vocabulary` class in **vocabulary.py**.

```python
def __call__(self, word):
    if not word in self.word2idx:
        return self.word2idx[self.unk_word]
    return self.word2idx[word]
```

The `word2idx` instance variable is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that is indexed by string-valued keys (mostly tokens obtained from training captions). For each key, the corresponding value is the integer that the token is mapped to in the pre-processing step.

Use the code cell below to view a subset of this dictionary.
|
# Preview the word2idx dictionary.
dict(list(data_loader.dataset.vocab.word2idx.items())[:10])
|
_____no_output_____
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
We also print the total number of keys.
|
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))
|
Total number of tokens in vocabulary: 8855
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
As you will see if you examine the code in **vocabulary.py**, the `word2idx` dictionary is created by looping over the captions in the training dataset. If a token appears no less than `vocab_threshold` times in the training set, then it is added as a key to the dictionary and assigned a corresponding unique integer. You will have the option later to amend the `vocab_threshold` argument when instantiating your data loader. Note that in general, **smaller** values for `vocab_threshold` yield a **larger** number of tokens in the vocabulary. You are encouraged to check this for yourself in the next code cell by decreasing the value of `vocab_threshold` before creating a new data loader.
|
# Modify the minimum word count threshold.
vocab_threshold = 4
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False)
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))
|
Total number of tokens in vocabulary: 9955
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
There are also a few special keys in the `word2idx` dictionary. You are already familiar with the special start word (`"<start>"`) and special end word (`"<end>"`). There is one more special token, corresponding to unknown words (`"<unk>"`). All tokens that don't appear anywhere in the `word2idx` dictionary are considered unknown words. In the pre-processing step, any unknown tokens are mapped to the integer `2`.
|
unk_word = data_loader.dataset.vocab.unk_word
print('Special unknown word:', unk_word)
print('All unknown words are mapped to this integer:', data_loader.dataset.vocab(unk_word))
|
Special unknown word: <unk>
All unknown words are mapped to this integer: 2
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
Check this for yourself below, by pre-processing the provided nonsense words that never appear in the training captions.
|
print(data_loader.dataset.vocab('jfkafejw'))
print(data_loader.dataset.vocab('ieowoqjf'))
|
2
2
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
The final thing to mention is the `vocab_from_file` argument that is supplied when creating a data loader. To understand this argument, note that when you create a new data loader, the vocabulary (`data_loader.dataset.vocab`) is saved as a [pickle](https://docs.python.org/3/library/pickle.html) file in the project folder, with filename `vocab.pkl`.If you are still tweaking the value of the `vocab_threshold` argument, you **must** set `vocab_from_file=False` to have your changes take effect. But once you are happy with the value that you have chosen for the `vocab_threshold` argument, you need only run the data loader *one more time* with your chosen `vocab_threshold` to save the new vocabulary to file. Then, you can henceforth set `vocab_from_file=True` to load the vocabulary from file and speed the instantiation of the data loader. Note that building the vocabulary from scratch is the most time-consuming part of instantiating the data loader, and so you are strongly encouraged to set `vocab_from_file=True` as soon as you are able.Note that if `vocab_from_file=True`, then any supplied argument for `vocab_threshold` when instantiating the data loader is completely ignored.
|
# Obtain the data loader (from file). Note that it runs much faster than before!
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_from_file=True)
|
Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
In the next section, you will learn how to use the data loader to obtain batches of training data. Step 2: Use the Data Loader to Obtain Batches The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption). In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10. Likewise, very short and very long captions are quite rare.
|
from collections import Counter
# Tally the total number of training captions with each length.
counter = Counter(data_loader.dataset.caption_lengths)
lengths = sorted(counter.items(), key=lambda pair: pair[1], reverse=True)
for value, count in lengths:
print('value: %2d --- count: %5d' % (value, count))
|
value: 10 --- count: 86334
value: 11 --- count: 79948
value: 9 --- count: 71934
value: 12 --- count: 57637
value: 13 --- count: 37645
value: 14 --- count: 22335
value: 8 --- count: 20771
value: 15 --- count: 12841
value: 16 --- count: 7729
value: 17 --- count: 4842
value: 18 --- count: 3104
value: 19 --- count: 2014
value: 7 --- count: 1597
value: 20 --- count: 1451
value: 21 --- count: 999
value: 22 --- count: 683
value: 23 --- count: 534
value: 24 --- count: 383
value: 25 --- count: 277
value: 26 --- count: 215
value: 27 --- count: 159
value: 28 --- count: 115
value: 29 --- count: 86
value: 30 --- count: 58
value: 31 --- count: 49
value: 32 --- count: 44
value: 34 --- count: 39
value: 37 --- count: 32
value: 33 --- count: 31
value: 35 --- count: 31
value: 36 --- count: 26
value: 38 --- count: 18
value: 39 --- count: 18
value: 43 --- count: 16
value: 44 --- count: 16
value: 48 --- count: 12
value: 45 --- count: 11
value: 42 --- count: 10
value: 40 --- count: 9
value: 49 --- count: 9
value: 46 --- count: 9
value: 47 --- count: 7
value: 50 --- count: 6
value: 51 --- count: 6
value: 41 --- count: 6
value: 52 --- count: 5
value: 54 --- count: 3
value: 56 --- count: 2
value: 6 --- count: 2
value: 53 --- count: 2
value: 55 --- count: 2
value: 57 --- count: 1
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
To generate batches of training data, we begin by sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance. Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`. These indices are supplied to the data loader, which is then used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.
|
import numpy as np
import torch.utils.data as data
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
print('selected caption length:', set(data_loader.dataset.caption_lengths[i] for i in indices))
print('batch size:', data_loader.dataset.batch_size)
print('sampled indices:', indices)
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
print('images.shape:', images.shape)
print('captions.shape:', captions.shape)
|
selected caption length: {11}
batch size: 64
sampled indices: [163258, 37144, 380255, 317957, 192582, 360740, 10195, 2809, 162865, 309252, 293693, 333283, 35401, 403582, 103488, 93114, 234377, 135463, 281449, 85137, 73144, 43331, 279550, 9538, 215758, 166348, 288499, 375568, 226201, 77114, 139807, 66138, 349567, 316866, 200844, 302747, 78815, 342849, 273002, 58477, 229691, 22617, 172296, 86417, 241012, 201450, 404151, 231331, 202059, 347401, 374039, 220502, 32122, 246526, 157367, 186080, 139093, 410879, 240537, 296696, 208667, 360735, 224908, 87710]
images.shape: torch.Size([64, 3, 224, 224])
captions.shape: torch.Size([64, 13])
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
Each time you run the code cell above, a different caption length is sampled, and a different batch of training data is returned. Run the code cell multiple times to check this out! You will train your model in the next notebook in this sequence (**2_Training.ipynb**). This code for generating training batches will be provided to you. > Before moving to the next notebook in the sequence (**2_Training.ipynb**), you are strongly encouraged to take the time to become very familiar with the code in **data_loader.py** and **vocabulary.py**. **Step 1** and **Step 2** of this notebook are designed to help facilitate a basic introduction and guide your understanding. However, our description is not exhaustive, and it is up to you (as part of the project) to learn how to best utilize these files to complete the project. __You should NOT amend any of the code in either *data_loader.py* or *vocabulary.py*.__ In the next steps, we focus on learning how to specify a CNN-RNN architecture in PyTorch, towards the goal of image captioning. Step 3: Experiment with the CNN Encoder Run the code cell below to import `EncoderCNN` and `DecoderRNN` from **model.py**.
|
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2
|
_____no_output_____
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
In the next code cell we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.
|
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
|
_____no_output_____
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
Run the code cell below to instantiate the CNN encoder in `encoder`. The pre-processed images from the batch in **Step 2** of this notebook are then passed through the encoder, and the output is stored in `features`.
|
from model import EncoderCNN
from torchsummary import summary  # provides the layer-by-layer summary printed below
# Specify the dimensionality of the image embedding.
embed_size = 256
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Initialize the encoder. (Optional: Add additional arguments if necessary.)
encoder = EncoderCNN(embed_size)
# Move the encoder to GPU if CUDA is available.
encoder.to(device)
# Move last batch of images (from Step 2) to GPU if CUDA is available.
images = images.to(device)
# Print encoder summary
summary(encoder, images.cpu().data.numpy().shape[1:])
# Pass the images through the encoder.
features = encoder(images)
print('type(features):', type(features))
print('features.shape:', features.shape)
# Check that your encoder satisfies some requirements of the project! :D
assert type(features)==torch.Tensor, "Encoder output needs to be a PyTorch Tensor."
assert (features.shape[0]==batch_size) & (features.shape[1]==embed_size), "The shape of the encoder output is incorrect."
|
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 112, 112] 9,408
BatchNorm2d-2 [-1, 64, 112, 112] 128
ReLU-3 [-1, 64, 112, 112] 0
MaxPool2d-4 [-1, 64, 56, 56] 0
Conv2d-5 [-1, 64, 56, 56] 4,096
BatchNorm2d-6 [-1, 64, 56, 56] 128
ReLU-7 [-1, 64, 56, 56] 0
Conv2d-8 [-1, 64, 56, 56] 36,864
BatchNorm2d-9 [-1, 64, 56, 56] 128
ReLU-10 [-1, 64, 56, 56] 0
Conv2d-11 [-1, 256, 56, 56] 16,384
BatchNorm2d-12 [-1, 256, 56, 56] 512
Conv2d-13 [-1, 256, 56, 56] 16,384
BatchNorm2d-14 [-1, 256, 56, 56] 512
ReLU-15 [-1, 256, 56, 56] 0
Bottleneck-16 [-1, 256, 56, 56] 0
Conv2d-17 [-1, 64, 56, 56] 16,384
BatchNorm2d-18 [-1, 64, 56, 56] 128
ReLU-19 [-1, 64, 56, 56] 0
Conv2d-20 [-1, 64, 56, 56] 36,864
BatchNorm2d-21 [-1, 64, 56, 56] 128
ReLU-22 [-1, 64, 56, 56] 0
Conv2d-23 [-1, 256, 56, 56] 16,384
BatchNorm2d-24 [-1, 256, 56, 56] 512
ReLU-25 [-1, 256, 56, 56] 0
Bottleneck-26 [-1, 256, 56, 56] 0
Conv2d-27 [-1, 64, 56, 56] 16,384
BatchNorm2d-28 [-1, 64, 56, 56] 128
ReLU-29 [-1, 64, 56, 56] 0
Conv2d-30 [-1, 64, 56, 56] 36,864
BatchNorm2d-31 [-1, 64, 56, 56] 128
ReLU-32 [-1, 64, 56, 56] 0
Conv2d-33 [-1, 256, 56, 56] 16,384
BatchNorm2d-34 [-1, 256, 56, 56] 512
ReLU-35 [-1, 256, 56, 56] 0
Bottleneck-36 [-1, 256, 56, 56] 0
Conv2d-37 [-1, 128, 56, 56] 32,768
BatchNorm2d-38 [-1, 128, 56, 56] 256
ReLU-39 [-1, 128, 56, 56] 0
Conv2d-40 [-1, 128, 28, 28] 147,456
BatchNorm2d-41 [-1, 128, 28, 28] 256
ReLU-42 [-1, 128, 28, 28] 0
Conv2d-43 [-1, 512, 28, 28] 65,536
BatchNorm2d-44 [-1, 512, 28, 28] 1,024
Conv2d-45 [-1, 512, 28, 28] 131,072
BatchNorm2d-46 [-1, 512, 28, 28] 1,024
ReLU-47 [-1, 512, 28, 28] 0
Bottleneck-48 [-1, 512, 28, 28] 0
Conv2d-49 [-1, 128, 28, 28] 65,536
BatchNorm2d-50 [-1, 128, 28, 28] 256
ReLU-51 [-1, 128, 28, 28] 0
Conv2d-52 [-1, 128, 28, 28] 147,456
BatchNorm2d-53 [-1, 128, 28, 28] 256
ReLU-54 [-1, 128, 28, 28] 0
Conv2d-55 [-1, 512, 28, 28] 65,536
BatchNorm2d-56 [-1, 512, 28, 28] 1,024
ReLU-57 [-1, 512, 28, 28] 0
Bottleneck-58 [-1, 512, 28, 28] 0
Conv2d-59 [-1, 128, 28, 28] 65,536
BatchNorm2d-60 [-1, 128, 28, 28] 256
ReLU-61 [-1, 128, 28, 28] 0
Conv2d-62 [-1, 128, 28, 28] 147,456
BatchNorm2d-63 [-1, 128, 28, 28] 256
ReLU-64 [-1, 128, 28, 28] 0
Conv2d-65 [-1, 512, 28, 28] 65,536
BatchNorm2d-66 [-1, 512, 28, 28] 1,024
ReLU-67 [-1, 512, 28, 28] 0
Bottleneck-68 [-1, 512, 28, 28] 0
Conv2d-69 [-1, 128, 28, 28] 65,536
BatchNorm2d-70 [-1, 128, 28, 28] 256
ReLU-71 [-1, 128, 28, 28] 0
Conv2d-72 [-1, 128, 28, 28] 147,456
BatchNorm2d-73 [-1, 128, 28, 28] 256
ReLU-74 [-1, 128, 28, 28] 0
Conv2d-75 [-1, 512, 28, 28] 65,536
BatchNorm2d-76 [-1, 512, 28, 28] 1,024
ReLU-77 [-1, 512, 28, 28] 0
Bottleneck-78 [-1, 512, 28, 28] 0
Conv2d-79 [-1, 256, 28, 28] 131,072
BatchNorm2d-80 [-1, 256, 28, 28] 512
ReLU-81 [-1, 256, 28, 28] 0
Conv2d-82 [-1, 256, 14, 14] 589,824
BatchNorm2d-83 [-1, 256, 14, 14] 512
ReLU-84 [-1, 256, 14, 14] 0
Conv2d-85 [-1, 1024, 14, 14] 262,144
BatchNorm2d-86 [-1, 1024, 14, 14] 2,048
Conv2d-87 [-1, 1024, 14, 14] 524,288
BatchNorm2d-88 [-1, 1024, 14, 14] 2,048
ReLU-89 [-1, 1024, 14, 14] 0
Bottleneck-90 [-1, 1024, 14, 14] 0
Conv2d-91 [-1, 256, 14, 14] 262,144
BatchNorm2d-92 [-1, 256, 14, 14] 512
ReLU-93 [-1, 256, 14, 14] 0
Conv2d-94 [-1, 256, 14, 14] 589,824
BatchNorm2d-95 [-1, 256, 14, 14] 512
ReLU-96 [-1, 256, 14, 14] 0
Conv2d-97 [-1, 1024, 14, 14] 262,144
BatchNorm2d-98 [-1, 1024, 14, 14] 2,048
ReLU-99 [-1, 1024, 14, 14] 0
Bottleneck-100 [-1, 1024, 14, 14] 0
Conv2d-101 [-1, 256, 14, 14] 262,144
BatchNorm2d-102 [-1, 256, 14, 14] 512
ReLU-103 [-1, 256, 14, 14] 0
Conv2d-104 [-1, 256, 14, 14] 589,824
BatchNorm2d-105 [-1, 256, 14, 14] 512
ReLU-106 [-1, 256, 14, 14] 0
Conv2d-107 [-1, 1024, 14, 14] 262,144
BatchNorm2d-108 [-1, 1024, 14, 14] 2,048
ReLU-109 [-1, 1024, 14, 14] 0
Bottleneck-110 [-1, 1024, 14, 14] 0
Conv2d-111 [-1, 256, 14, 14] 262,144
BatchNorm2d-112 [-1, 256, 14, 14] 512
ReLU-113 [-1, 256, 14, 14] 0
Conv2d-114 [-1, 256, 14, 14] 589,824
BatchNorm2d-115 [-1, 256, 14, 14] 512
ReLU-116 [-1, 256, 14, 14] 0
Conv2d-117 [-1, 1024, 14, 14] 262,144
BatchNorm2d-118 [-1, 1024, 14, 14] 2,048
ReLU-119 [-1, 1024, 14, 14] 0
Bottleneck-120 [-1, 1024, 14, 14] 0
Conv2d-121 [-1, 256, 14, 14] 262,144
BatchNorm2d-122 [-1, 256, 14, 14] 512
ReLU-123 [-1, 256, 14, 14] 0
Conv2d-124 [-1, 256, 14, 14] 589,824
BatchNorm2d-125 [-1, 256, 14, 14] 512
ReLU-126 [-1, 256, 14, 14] 0
Conv2d-127 [-1, 1024, 14, 14] 262,144
BatchNorm2d-128 [-1, 1024, 14, 14] 2,048
ReLU-129 [-1, 1024, 14, 14] 0
Bottleneck-130 [-1, 1024, 14, 14] 0
Conv2d-131 [-1, 256, 14, 14] 262,144
BatchNorm2d-132 [-1, 256, 14, 14] 512
ReLU-133 [-1, 256, 14, 14] 0
Conv2d-134 [-1, 256, 14, 14] 589,824
BatchNorm2d-135 [-1, 256, 14, 14] 512
ReLU-136 [-1, 256, 14, 14] 0
Conv2d-137 [-1, 1024, 14, 14] 262,144
BatchNorm2d-138 [-1, 1024, 14, 14] 2,048
ReLU-139 [-1, 1024, 14, 14] 0
Bottleneck-140 [-1, 1024, 14, 14] 0
Conv2d-141 [-1, 512, 14, 14] 524,288
BatchNorm2d-142 [-1, 512, 14, 14] 1,024
ReLU-143 [-1, 512, 14, 14] 0
Conv2d-144 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-145 [-1, 512, 7, 7] 1,024
ReLU-146 [-1, 512, 7, 7] 0
Conv2d-147 [-1, 2048, 7, 7] 1,048,576
BatchNorm2d-148 [-1, 2048, 7, 7] 4,096
Conv2d-149 [-1, 2048, 7, 7] 2,097,152
BatchNorm2d-150 [-1, 2048, 7, 7] 4,096
ReLU-151 [-1, 2048, 7, 7] 0
Bottleneck-152 [-1, 2048, 7, 7] 0
Conv2d-153 [-1, 512, 7, 7] 1,048,576
BatchNorm2d-154 [-1, 512, 7, 7] 1,024
ReLU-155 [-1, 512, 7, 7] 0
Conv2d-156 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-157 [-1, 512, 7, 7] 1,024
ReLU-158 [-1, 512, 7, 7] 0
Conv2d-159 [-1, 2048, 7, 7] 1,048,576
BatchNorm2d-160 [-1, 2048, 7, 7] 4,096
ReLU-161 [-1, 2048, 7, 7] 0
Bottleneck-162 [-1, 2048, 7, 7] 0
Conv2d-163 [-1, 512, 7, 7] 1,048,576
BatchNorm2d-164 [-1, 512, 7, 7] 1,024
ReLU-165 [-1, 512, 7, 7] 0
Conv2d-166 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-167 [-1, 512, 7, 7] 1,024
ReLU-168 [-1, 512, 7, 7] 0
Conv2d-169 [-1, 2048, 7, 7] 1,048,576
BatchNorm2d-170 [-1, 2048, 7, 7] 4,096
ReLU-171 [-1, 2048, 7, 7] 0
Bottleneck-172 [-1, 2048, 7, 7] 0
AvgPool2d-173 [-1, 2048, 1, 1] 0
Linear-174 [-1, 256] 524,544
================================================================
Total params: 24,032,576
Trainable params: 524,544
Non-trainable params: 23,508,032
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 286.55
Params size (MB): 91.68
Estimated Total Size (MB): 378.80
----------------------------------------------------------------
type(features): <class 'torch.Tensor'>
features.shape: torch.Size([64, 256])
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
The encoder that we provide to you uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding. You are welcome (and encouraged) to amend the encoder in **model.py**, to experiment with other architectures. In particular, consider using a [different pre-trained model architecture](http://pytorch.org/docs/master/torchvision/models.html). You may also like to [add batch normalization](http://pytorch.org/docs/master/nn.html#normalization-layers). > You are **not** required to change anything about the encoder. For this project, you **must** incorporate a pre-trained CNN into your encoder. Your `EncoderCNN` class must take `embed_size` as an input argument, which will also correspond to the dimensionality of the input to the RNN decoder that you will implement in Step 4. When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `embed_size`. If you decide to modify the `EncoderCNN` class, save **model.py** and re-execute the code cell above. If the code cell returns an assertion error, then please follow the instructions to modify your code before proceeding. The assert statements ensure that `features` is a PyTorch tensor with shape `[batch_size, embed_size]`. Step 4: Implement the RNN Decoder Before executing the next code cell, you must write `__init__` and `forward` methods in the `DecoderRNN` class in **model.py**. (Do **not** write the `sample` method yet - you will work with this method when you reach **3_Inference.ipynb**.) > The `__init__` and `forward` methods in the `DecoderRNN` class are the only things that you **need** to modify as part of this notebook. You will write more implementations in the notebooks that appear later in the sequence. Your decoder will be an instance of the `DecoderRNN` class and must accept as input: - the PyTorch tensor `features` containing the embedded image features (outputted in Step 3, when the last batch of images from Step 2 was passed through `encoder`), along with - a PyTorch tensor corresponding to the last batch of captions (`captions`) from Step 2. Note that the way we have written the data loader should simplify your code a bit. In particular, every training batch will contain pre-processed captions where all have the same length (`captions.shape[1]`), so **you do not need to worry about padding**. > While you are encouraged to implement the decoder described in [this paper](https://arxiv.org/pdf/1411.4555.pdf), you are welcome to implement any architecture of your choosing, as long as it uses at least one RNN layer, with hidden dimension `hidden_size`. Although you will test the decoder using the last batch that is currently stored in the notebook, your decoder should be written to accept an arbitrary batch (of embedded image features and pre-processed captions [where all captions have the same length]) as input. In the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary.
In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) loss function in PyTorch.
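Since the decoder implementation is left to you, here is a minimal sketch of a `DecoderRNN` that satisfies the requirements above. The exact architecture is an assumption on our part, chosen to match the layer shapes printed in the output below: the image feature is prepended as the first input step of the LSTM.

```python
import torch
import torch.nn as nn

class DecoderRNN(nn.Module):
    """Minimal sketch: embed captions, prepend the image feature, run an LSTM."""
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Drop the last caption token: it has no successor to predict, and dropping it
        # keeps the sequence length equal to captions.shape[1] once the feature is prepended.
        embeddings = self.embedding(captions[:, :-1])                    # [B, T-1, embed_size]
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)   # [B, T, embed_size]
        hiddens, _ = self.lstm(inputs)                                   # [B, T, hidden_size]
        return self.linear(hiddens)                                      # [B, T, vocab_size]
```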
|
from model import DecoderRNN
# Specify the number of features in the hidden state of the RNN decoder.
hidden_size = 512
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Store the size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the decoder.
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move the decoder to GPU if CUDA is available.
decoder.to(device)
# Move last batch of captions (from Step 1) to GPU if CUDA is available
captions = captions.to(device)
# Pass the encoder output and captions through the decoder.
print('features.shape:', features.shape)
print('captions.shape:', captions.shape)
print(decoder)
outputs = decoder(features, captions)
print('type(outputs):', type(outputs))
print('outputs.shape:', outputs.shape)
# Check that your decoder satisfies some requirements of the project! :D
assert type(outputs)==torch.Tensor, "Decoder output needs to be a PyTorch Tensor."
assert (outputs.shape[0]==batch_size) & (outputs.shape[1]==captions.shape[1]) & (outputs.shape[2]==vocab_size), "The shape of the decoder output is incorrect."
|
features.shape: torch.Size([64, 256])
captions.shape: torch.Size([64, 13])
DecoderRNN(
(embedding): Embedding(9955, 256)
(lstm): LSTM(256, 512, batch_first=True)
(linear): Linear(in_features=512, out_features=9955, bias=True)
)
type(outputs): <class 'torch.Tensor'>
outputs.shape: torch.Size([64, 13, 9955])
|
MIT
|
1_Preliminaries.ipynb
|
zhulingchen/CVND---Image-Captioning-Project
|
**Student BENREKIA Mohamed Ali (IASD 2021-2022)**
|
%matplotlib inline
import numpy as np
from scipy.linalg import norm
import matplotlib.pyplot as plt
import seaborn as sns
%load_ext autoreload
%autoreload 2
|
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Loading data
|
!wget https://raw.githubusercontent.com/nishitpatel01/predicting-age-of-abalone-using-regression/master/Abalone_data.csv
# Use this code to read from a CSV file.
import pandas as pd
U = pd.read_csv('/content/Abalone_data.csv')
U.shape
U.info()
U.head()
U.tail()
U.Sex=U.Sex.astype('category').cat.codes
U.head()
U.describe(include='all')
U.sample(10)
U.isnull().sum()
U.dtypes
U.hist(figsize=(10,15))
corr = U.corr()
corr
sns.heatmap(corr, annot=False)
# split train - validation
shuffle_df = U.sample(frac=1)
# Define a size for your train set
train_size = int(0.8 * len(U))
# Split your dataset
train_set = shuffle_df[:train_size]
valid_set = shuffle_df[train_size:]
#split feature target
x_train = train_set.drop("Rings",axis=1).to_numpy()
y_train = train_set["Rings"]
x_valid = valid_set.drop("Rings",axis=1)
y_valid = valid_set["Rings"]
#no need
mA = x_train.mean(axis=0)
sA = x_train.std(axis=0)
x_train = (x_train-mA)/sA
x_valid = (x_valid-mA)/sA
# no need
m = y_train.mean()
y_train = y_train-m
y_valid = y_valid-m
x_train.shape[1]
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Problem definition (Linear regression)
|
class RegPb(object):
'''
A class for regression problems with linear models.
Attributes:
X: Data matrix (features)
y: Data vector (labels)
n,d: Dimensions of X
loss: Loss function to be considered in the regression
'l2': Least-squares loss
lbda: Regularization parameter
'''
# Instantiate the class
def __init__(self, X, y,lbda=0,loss='l2'):
self.X = X
self.y = y
self.n, self.d = X.shape
self.loss = loss
self.lbda = lbda
# Objective value
def fun(self, w):
if self.loss=='l2':
return np.square(self.X.dot(w) - self.y).mean() + self.lbda * norm(w) ** 2
else:
return np.square(self.X.dot(w) - self.y).mean()
"""
# Partial objective value
def f_i(self, i, w):
if self.loss=='l2':
return norm(self.X[i].dot(w) - self.y[i]) ** 2 / (2.) + self.lbda * norm(w) ** 2
else:
return norm(self.X[i].dot(w) - self.y[i]) ** 2 / (2.)
"""
# Full gradient computation
def grad(self, w):
if self.loss=='l2':
return self.X.T.dot(self.X.dot(w) - self.y) * (2/self.n) + 2 * self.lbda * w
else:
return self.X.T.dot(self.X.dot(w) - self.y) * (2/self.n)
# Partial gradient
def grad_i(self,i,w):
x_i = self.X[i]
if self.loss=='l2':
return (2/self.n) * (x_i.dot(w) - self.y[i]) * x_i + 2 * self.lbda*w
else:
return (2/self.n) * (x_i.dot(w) - self.y[i]) * x_i
"""
# Lipschitz constant for the gradient
def lipgrad(self):
if self.loss=='l2':
L = norm(self.X, ord=2) ** 2 / self.n + self.lbda
"""
lda = 1. / x_train.shape[0] ** (0.5)
pblinreg = RegPb(x_train, y_train, lbda=lda, loss='l2')
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**PCA**
|
# use a new name so that we do not clobber the dataframe U loaded above
eigU, s, V = np.linalg.svd(x_train.T.dot(x_train))
eig_values, eig_vectors = s, eigU
explained_variance=(eig_values / np.sum(eig_values))*100
plt.figure(figsize=(8,4))
plt.bar(range(8), explained_variance, alpha=0.6)
plt.ylabel('Percentage of explained variance')
plt.xlabel('Dimensions')
# calculating our new axis
pc1 = x_train.dot(eig_vectors[:,0])
pc2 = x_train.dot(eig_vectors[:,1])
plt.plot(pc1, pc2, '.')
plt.axis('equal');
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Batch Gradient Descent
|
def batch_grad(w0,problem, stepchoice=0, lr= 0.01, n_iter=1000,verbose=False):
# objective history
objvals = []
# Number of samples
n = problem.n
# Initial value of current iterate
w = w0.copy()
nw = norm(w)
# Current objective
obj = problem.fun(w)
objvals.append(obj);
# Initialize iteration counter
k=0
# Plot initial quantities of interest
if verbose:
print("Gradient Descent")
print(' | '.join([name.center(8) for name in ["iter", "MSE_Loss"]]))
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
# Main loop
while (k < n_iter ):#and nw < 10**100
        # gradient calculation (full batch)
        gr = problem.grad(w)
        if stepchoice==0:
            w[:] = w - lr * gr
        elif stepchoice>0:
            # scaled step size (the original condition referenced an undefined batch size nb)
            sk = float(lr/stepchoice)
            w[:] = w - sk * gr
nw = norm(w) #Computing the norm to measure divergence
obj = problem.fun(w)
k += 1
# Plot quantities of interest at the end of every epoch only
objvals.append(obj)
if verbose:
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
# End of main loop
#################
# Plot quantities of interest for the last iterate (if needed)
if k % n_iter > 0:
objvals.append(obj)
if verbose:
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
# Outputs
w_output = w.copy()
return w_output, np.array(objvals)
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Different Learning rates**
|
nb_epochs = 100
n = pblinreg.n
d = pblinreg.d
w0 = np.zeros(d)
valsstep0 = [0.1,0.01,0.001,0.0001,0.00001]
nvals = len(valsstep0)
objs = np.zeros((nvals,nb_epochs+1))
for val in range(nvals):
w_temp, objs_temp = batch_grad(w0,pblinreg, lr=valsstep0[val], n_iter=nb_epochs)
objs[val] = objs_temp
epochs = range(1,102)
plt.figure(figsize=(7, 5))
for val in range(nvals):
plt.plot(epochs, objs[val], label="BG - "+str(valsstep0[val]), lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Accelerated Gradient Descent
|
def accelerated_grad(w0,problem,lr=0.001,method="nesterov",momentum=None,n_iter=100,verbose=False):
"""
A generic code for Nesterov's accelerated gradient method.
Inputs:
w0: Initial vector
problem: Problem structure
lr: Learning rate
method: Type of acceleration technique that is used
'nesterov': Accelerated gradient for convex functions (Nesterov)
momentum: Constant value for the momentum parameter (only used if method!='nesterov')
n_iter: Number of iterations
verbose: Boolean value indicating whether the outcome of every iteration should be displayed
Outputs:
z_output: Final iterate of the method
objvals: History of function values in z (output as a Numpy array of length n_iter+1)
"""
############
# Initial step: Compute and plot some initial quantities
# objective history
objvals = []
# Initial value of current and next iterates
w = w0.copy()
w_new = w0.copy()
z = w0.copy()
if method=='nesterov':
# Initialize parameter sequence
        tk = 1  # standard initialization t_1 = 1 (starting from 0 would give a spurious negative momentum on the second step)
tkp1 = 1
momentum = 0
# Initialize iteration counter
k=0
# Initial objective
obj = problem.fun(z)
objvals.append(obj);
# Plot the initial values if required
if verbose:
print("Accelerated Gradient/"+method)
print(' | '.join([name.center(8) for name in ["iter", "fval"]]))
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
#######################
# Main loop
while (k < n_iter):
# Perform the accelerated iteration
# Gradient step
g = problem.grad(z)
w_new[:] = z - lr * g
# Momentum step
z[:] = w_new + momentum*(w_new-w)
# Update sequence
w[:] = w_new[:]
# Adjusting the momentum parameter if needed
if method=='nesterov':
tkp1 = 0.5*(1+np.sqrt(1+4*(tk**2)))
momentum = (tk-1)/tkp1
tk = tkp1
# Compute and plot the new objective value and distance to the minimum
obj = problem.fun(z)
objvals.append(obj)
# Plot these values if required
if verbose:
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
# Increment the iteration counter
k += 1
# End loop
#######################
# Output
z_output = z.copy()
return z_output, np.array(objvals)
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**GD Vs NAGD**
|
nb_epochs = 100
n = pblinreg.n
d = pblinreg.d
w0 = np.zeros(d)
learning_rate = 0.01
w_g, obj_g = batch_grad(w0,pblinreg, lr=learning_rate, n_iter=nb_epochs)
w_n, obj_n = accelerated_grad(w0,pblinreg, lr=learning_rate, n_iter=nb_epochs)
epochs = range(1,102)
plt.figure(figsize=(7, 5))
plt.plot(epochs, obj_g, label="GD", lw=2)
plt.plot(epochs, obj_n, label="NAGD", lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Stochastic Gradient Descent
|
def stoch_grad(w0,problem, stepchoice=0, lr= 0.01, n_iter=1000,nb=1,average=0,scaling=0,with_replace=False,verbose=False):
"""
A code for gradient descent with various step choices.
Inputs:
w0: Initial vector
problem: Problem structure
problem.fun() returns the objective function, which is assumed to be a finite sum of functions
problem.n returns the number of components in the finite sum
problem.grad_i() returns the gradient of a single component f_i
stepchoice: Strategy for computing the stepsize
0: Constant step size equal to lr
1: Step size decreasing in lr/ stepchoice
lr: Learning rate
n_iter: Number of iterations, used as stopping criterion
nb: Number of components drawn per iteration/Batch size
1: Classical stochastic gradient algorithm (default value)
problem.n: Classical gradient descent (default value)
average: Indicates whether the method computes the average of the iterates
0: No averaging (default)
1: With averaging
scaling: Use a diagonal scaling
0: No scaling (default)
1: Average of magnitudes (RMSProp)
2: Normalization with magnitudes (Adagrad)
with_replace: Boolean indicating whether components are drawn with or without replacement
True: Components drawn with replacement
False: Components drawn without replacement (Default)
verbose: Boolean indicating whether information should be plot at every iteration (Default: False)
Outputs:
w_output: Final iterate of the method (or average if average=1)
objvals: History of function values (Numpy array of length n_iter at most)
"""
############
# Initial step: Compute and plot some initial quantities
# objective history
objvals = []
# iterates distance to the minimum history
normits = []
"""
# Lipschitz constant
L = problem.lipgrad()
"""
# Number of samples
n = problem.n
# Initial value of current iterate
w = w0.copy()
nw = norm(w)
# Average (if needed)
if average:
wavg=np.zeros(len(w))
#Scaling values
if scaling>0:
mu=1/(2 *(n ** (0.5)))
        v = np.zeros(problem.d)  # per-coordinate scaling accumulator (avoids relying on the global d)
beta = 0.8
# Initialize iteration counter
k=0
# Current objective
obj = problem.fun(w)
objvals.append(obj);
# Plot initial quantities of interest
if verbose:
print("Stochastic Gradient, batch size=",nb,"/",n)
print(' | '.join([name.center(8) for name in ["iter", "MSE_Loss"]]))
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
################
# Main loop
while (k < n_iter ):#and nw < 10**100
# Draw the batch indices
ik = np.random.choice(n,nb,replace=with_replace)# Batch gradient
# Stochastic gradient calculation
sg = np.zeros(d)
for j in range(nb):
gi = problem.grad_i(ik[j],w)
sg = sg + gi
sg = (1/nb)*sg
if scaling>0:
if scaling==1:
# RMSProp update
v = beta*v + (1-beta)*sg*sg
elif scaling==2:
# Adagrad update
v = v + sg*sg
sg = sg/(np.sqrt(v+mu))
if stepchoice==0:
w[:] = w - lr * sg
elif stepchoice>0:
if (k*nb*10) % n == 0:
sk = float(lr/stepchoice)
w[:] = w - sk * sg
nw = norm(w) #Computing the norm to measure divergence
if average:
# If average, compute the average of the iterates
wavg = k/(k+1) *wavg + w/(k+1)
obj = problem.fun(wavg)
else:
obj = problem.fun(w)
k += 1
# Plot quantities of interest at the end of every epoch only
if k % int(n/nb) == 0:
objvals.append(obj)
if verbose:
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
# End of main loop
#################
# Plot quantities of interest for the last iterate (if needed)
if (k*nb) % n > 0:
objvals.append(obj)
if verbose:
print(' | '.join([("%d" % k).rjust(8),("%.2e" % obj).rjust(8)]))
# Outputs
if average:
w_output = wavg.copy()
else:
w_output = w.copy()
return w_output, np.array(objvals)
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Constant Vs Decreasing LR**
|
nb_epochs = 60
n = pblinreg.n
d = pblinreg.d
w0 = np.zeros(d)
# Run a - GD with constant stepsize
w_a, obj_a = stoch_grad(w0,pblinreg, n_iter=nb_epochs,nb=n)
# Run b - Stochastic gradient with constant stepsize
# The version below may diverge, in which case the bound on norm(w) in the code would be triggered
w_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)
# Run Gradient descent with decreasing stepsize
w_c, obj_c = stoch_grad(w0,pblinreg, stepchoice=0.5, lr=0.2, n_iter=nb_epochs,nb=n)
# Run Stochastic gradient with decreasing stepsize
w_d, obj_d = stoch_grad(w0,pblinreg, stepchoice=0.5, lr=0.2, n_iter=nb_epochs*n,nb=1)
epochs = range(1,62)
plt.figure(figsize=(7, 5))
plt.plot(epochs, obj_a, label="GD - const-lbda", lw=2)
plt.plot(epochs, obj_b, label="SG - const-lbda", lw=2)
plt.plot(epochs, obj_c, label="GD - decr-lbda", lw=2)
plt.plot(epochs, obj_d, label="SG - decr-lbda", lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective MSE", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Different Constant LR**
|
nb_epochs = 60
n = pblinreg.n
d = pblinreg.d
w0 = np.zeros(d)
valsstep0 = [0.01,0.001,0.0001,0.00001]
nvals = len(valsstep0)
objs = np.zeros((nvals,nb_epochs+1))
for val in range(nvals):
w_temp, objs_temp = stoch_grad(w0,pblinreg, lr=valsstep0[val], n_iter=nb_epochs*n,nb=1)
objs[val] = objs_temp
plt.figure(figsize=(7, 5))
for val in range(nvals):
plt.plot(epochs, objs[val], label="SG - "+str(valsstep0[val]), lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Different decreasing LR**
|
nb_epochs = 60
n = pblinreg.n
nbset = 1
w0 = np.zeros(d)
decstep = [1,2,10,20,100]
nvals = len(decstep)
objs = np.zeros((nvals,nb_epochs+1))
for val in range(nvals):
_, objs[val] = stoch_grad(w0,pblinreg,stepchoice=decstep[val],lr=0.02, n_iter=nb_epochs*n,nb=1)
plt.figure(figsize=(7, 5))
for val in range(nvals):
plt.semilogy(epochs, objs[val], label="SG - "+str(decstep[val]), lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Different Batch size**
|
nb_epochs = 100
n = pblinreg.n
w0 = np.zeros(d)
# Stochastic gradient (batch size 1)
w_a, obj_a= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)
# Batch stochastic gradient (batch size n/100)
nbset=int(n/100)
w_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*100,nb=nbset)
# Batch stochastic gradient (batch size n/10)
nbset=int(n/10)
w_c, obj_c = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*10),nb=nbset)
# Batch stochastic gradient (batch size n/2)
nbset=int(n/2)
w_d, obj_d = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*2),nb=nbset)
# Gradient descent (batch size n, taken without replacement)
w_f, obj_f = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs),nb=n)
nbset=int(n/100)
w_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*100),nb=nbset,verbose=True)
print(len(obj_b))
epochs = range(1,102)
plt.figure(figsize=(7, 5))
plt.semilogy(epochs, obj_a, label="SG (batch=1)", lw=2)
plt.semilogy(epochs, obj_b, label="Batch SG - n/100", lw=2)
plt.semilogy(epochs, obj_c, label="Batch SG - n/10", lw=2)
plt.semilogy(epochs, obj_d, label="Batch SG - n/2", lw=2)
plt.semilogy(epochs, obj_f, label="GD", lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective", fontsize=14)
plt.legend()
plt.show()
plt.figure(figsize=(7, 5))
plt.plot(epochs, obj_a, label="SG (batch=1)", lw=2)
plt.plot(epochs, obj_b, label="Batch SG - n/100", lw=2)
plt.plot(epochs, obj_c, label="Batch SG - n/10", lw=2)
plt.plot(epochs, obj_d, label="Batch SG - n/2", lw=2)
plt.plot(epochs, obj_f, label="GD", lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs", fontsize=14)
plt.ylabel("Objective", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Other variants for SGD **batch with replacement**
|
#Batch with replacement for GD, SGD and Batch SGD
nb_epochs = 100
n = pblinreg.n
w0 = np.zeros(d)
nruns = 3
for i in range(nruns):
# Run standard stochastic gradient (batch size 1)
_, obj_a= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,with_replace=True)
# Batch stochastic gradient (batch size n/10)
nbset=int(n/2)
_, obj_b= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*n/nbset),nb=nbset,with_replace=True)
# Batch stochastic gradient (batch size n, with replacement)
nbset=n
_, obj_c=stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*n/nbset),nb=nbset,with_replace=True)
if i<nruns-1:
plt.semilogy(obj_a,color='orange',lw=2)
plt.semilogy(obj_b,color='green', lw=2)
plt.semilogy(obj_c,color='blue', lw=2)
plt.semilogy(obj_a,label="SG",color='orange',lw=2)
plt.semilogy(obj_b,label="batch n/2",color='green', lw=2)
plt.semilogy(obj_c,label="batch n",color='blue', lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs ", fontsize=14)
plt.ylabel("Objective ", fontsize=14)
plt.legend()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Averaging**
|
# Comparison of stochastic gradient with and without averaging
nb_epochs = 100
n = pblinreg.n
w0 = np.zeros(d)
# Run standard stochastic gradient without averaging
_, obj_a =stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)
# Run stochastic gradient with averaging
_, obj_b =stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=1)
# Plot the results
plt.figure(figsize=(7, 5))
plt.semilogy(obj_a,label='SG',color='orange',lw=2)
plt.semilogy(obj_b,label='SG+averaging',color='red', lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs (log scale)", fontsize=14)
plt.ylabel("Objective (log scale)", fontsize=14)
plt.legend()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
**Diagonal Scaling**
|
# Comparison of stochastic gradient with and without diagonal scaling
nb_epochs = 60
n = pblinreg.n
w0 = np.zeros(d)
# Stochastic gradient (batch size 1) without diagonal scaling
w_a, obj_a= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)
# Stochastic gradient (batch size 1) with RMSProp diagonal scaling
w_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=0,scaling=1)
# Stochastic gradient (batch size 1) with Adagrad diagonal scaling - Constant step size
w_c, obj_c = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=0,scaling=2)
# Stochastic gradient (batch size 1) with Adagrad diagonal scaling - Decreasing step size
# (stepchoice>0 selects the decreasing-step branch of stoch_grad; without it this run would duplicate the previous one)
w_d, obj_d = stoch_grad(w0,pblinreg, stepchoice=2, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=0,scaling=2)
# Plot the results - Comparison of stochastic gradient with and without diagonal scaling
# In terms of objective value (logarithmic scale)
plt.figure(figsize=(7, 5))
plt.semilogy(obj_a, label="SG", lw=2)
plt.semilogy(obj_b, label="SG/RMSProp", lw=2)
plt.semilogy(obj_c, label="SG/Adagrad (Cst)", lw=2)
plt.semilogy(obj_d, label="SG/Adagrad (Dec)", lw=2)
plt.title("Convergence plot", fontsize=16)
plt.xlabel("#epochs (log scale)", fontsize=14)
plt.ylabel("Objective (log scale)", fontsize=14)
plt.legend()
plt.show()
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Regression (Lasso with iterative soft thresholding) **Lasso regression with ISTA**
|
#Minimization function with l1 norm (Lasso regression)
def cost(w, X, y, lbda):
return np.square(X.dot(w) - y).mean() + lbda * norm(w,1)
def ista_solve( A, d, lbdaa ):
"""
Iterative soft-thresholding solves the minimization problem
Minimize |Ax-d|_2^2 + lambda*|x|_1 (Lasso regression)
"""
max_iter = 300
objvals = []
tol = 10**(-3)
tau = 1.5/np.linalg.norm(A,2)**2
n = A.shape[1]
    w = np.zeros(n)  # 1-D weight vector so that A @ w - d keeps the target's shape (a (n,1) array would broadcast to (m,m))
for j in range(max_iter):
z = w - tau*(A.T@(A@w-d))
w_old = w
w = np.sign(z) * np.maximum(np.abs(z)-tau*lbdaa, np.zeros(z.shape))
if j % 100 == 0:
obj = cost(w,A,d,lbdaa)
objvals.append(obj)
if np.linalg.norm(w - w_old) < tol:
break
return w, objvals
#we iterate over multiple values of lambda
lmbdas = [0.000001, 0.000002, 0.00001, 0.00002, 0.0001, 0.0002, 0.001, 0.002, 0.01, 0.02, 0.1, 0.2, 1, 2, 10, 20]
mse_list=[]
for lda in lmbdas:
    w_star, obj_x = ista_solve( x_train, y_train, lda)
mse_list.append(obj_x[-1])
x_range = range(1,len(lmbdas)+1)
plt.figure(figsize=(7, 5))
plt.plot(x_range,mse_list, label="Lasso-ISTA", lw=2)
plt.title("Best Lambda factor", fontsize=16)
plt.xlabel("Lambda", fontsize=14)
plt.xticks(x_range, lmbdas, rotation=40)
plt.ylabel("Objective Lasso reg", fontsize=14)
plt.legend()
plt.show()
w_star, obj_x = ista_solve( x_train, y_train, 0.00001)
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
Performance on Test set
|
#MSE on lasso-ISTA
cost(w_star, x_valid, y_valid, 0.00001)
# MSE on best sgd algo
cost(w_b, x_valid, y_valid, 0.00001)
|
_____no_output_____
|
MIT
|
Optim_Project.ipynb
|
iladan0/Abalone_Age_Prediction
|
The Monte Carlo Simulation of Radiation Transport We will discuss the essential physics and methods needed to do gamma quanta (photons with high enough energy) radiation transport using Monte Carlo methods. We will cover interaction processes and the basics of radiation passing through matter, as well as the Monte Carlo method and how it helps with radiation propagation. Glossary - $h$ Planck's constant - $\hbar$ reduced Planck's constant, $h/2\pi$ - $\omega$ photon circular frequency - $\hbar \omega$ photon energy - $\lambda$ photon wavelength - $\theta$ scattering angle, between incoming and outgoing photon - $\phi$ azimuthal angle - $c$ speed of light in vacuum - $m_e$ electron mass - $r_e$ classical electron radius - $N_A$ Avogadro constant, 6.02214076$\times$10$^{23}$ mol$^{-1}$ Basic physics We cover the typical energies and wavelengths at which photons behave like point-like particles when interacting with matter. Units The common unit for photon energy is the electron-volt (eV). This is the kinetic energy an electron acquires when it moves in an electric field (say, between the plates of a capacitor) with a potential difference of 1 Volt. This is a very small energy, equal to about $1.6\times10^{-19}$ Joules. Typical energies we are interested in are in the 1 keV to 100 MeV range. Spatial size and wavelength Photons are massless particles, and it is very easy to compute the photon "size", which is the photon wavelength:$$ \lambda = \frac{hc}{E_\gamma} = \frac{hc}{\hbar \omega} = \frac{2 \pi c}{\omega}$$where $\lambda$ is the wavelength, $h$ is Planck's constant, $c$ is the speed of light and $E_\gamma$ is the photon energy. For example, let's compute the wavelength of a photon with energy 1 eV.
|
h = 6.625e-34
c = 3e8
hw = 1.0 * 1.6e-19 # 1 eV expressed in Joules
λ = h*c/hw
print(f"Photon wavelength = {λ*1.0e9} nanometers")
|
Photon wavelength = 1242.1875 nanometers
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
Thus, for a 1 keV photon we get a wavelength of about 1.2 nm, and for a 1 MeV photon a wavelength of about $1.2\times10^{-3}$ nm. For comparison, typical atom sizes range from 0.1 nm (He) to 0.4 nm (Fr and other heavy elements). Therefore, for most interactions between photons and atoms in our energy range we can treat photons as particles, not waves. Basics of Monte Carlo methods The method was first introduced by the Comte de Buffon, as a needle-dropping experiment to calculate the value of $\pi$. Laplace extended Buffon's example by sampling in the square to calculate the value of $\pi$. It is a very general method of stochastic integration. It was successfully applied to particle (neutron, in that case) transport by Enrico Fermi. With the growing availability of computers its use has grown exponentially - finance, radiation therapy, machine learning, astrophysics, optimization, you name it. Let's try to calculate $\pi$ with the Laplace method, namely by sampling points uniformly in the square $[-1,1]^2$ and counting the fraction that falls inside the unit circle.
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
N = 1000 # number of points to sample
x = 2.0*np.random.random(N) - 1.0
y = 2.0*np.random.random(N) - 1.0
unitCircle = plt.Circle((0, 0), 1.0, color='r', fill=False)
fig, ax = plt.subplots(1, 1)
ax.plot(x, y, 'bo', label='Sampling in square')
ax.add_artist(unitCircle)
plt.axhline(0, color='grey')
plt.axvline(0, color='grey')
plt.title("Sampling in square")
plt.show()
r = np.sqrt(x*x + y*y)
#print(r)
pinside = r[r<=1.0]
Ninside = len(pinside)
print(4.0*Ninside/N)
|
3.08
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
The result should be close to $\pi$. Basic Photon Interactions with Atoms There are several interaction processes of photons with media. Compton Scattering Compton scattering is described by the Klein-Nishina formula, with the energy of the scattered photon directly tied to the incoming energy and the scattering angle$$\hbar \omega'=\frac{\hbar\omega}{1+\frac{\hbar \omega}{m_e c^2} (1 - \cos{\theta})}$$where the prime marks the particle after scattering. It is easy to see that for a backscattered photon ($\theta=\pi$, $\cos{\theta}=-1$) the energy of the scattered photon reaches its minimum, which means the scattered photon energy has the limits$$\frac{\hbar \omega }{1 + 2\hbar\omega/m_ec^2} \le \hbar\omega' \le \hbar\omega$$ The scattering cross-section (you can think of this as an unnormalized probability of scattering to a given energy) is$$\frac{d\sigma}{d\hbar\omega'} = \pi r_e^2 \frac{m_ec^2}{(\hbar\omega)^2} \left\lbrace \frac{\hbar\omega}{\hbar\omega'} + \frac{\hbar\omega'}{\hbar\omega} +\left ( \frac{m_ec^2}{\hbar\omega'} - \frac{m_ec^2}{\hbar\omega} \right )^2 - 2m_ec^2 \left ( \frac{1}{\hbar\omega'} - \frac{1}{\hbar\omega} \right ) \right\rbrace$$The full cross-section, where $x=2 \hbar\omega/m_e c^2$ is twice the relative photon energy, is$$\sigma=2\pi r_e^2\frac{1}{x}\left\lbrace \left ( 1 - \frac{4}{x} - \frac{8}{x^2} \right ) \log{(1+x)} +\frac{1}{2} + \frac{8}{x}-\frac{1}{2(1+x)^2} \right\rbrace$$Then we can divide the partial cross-section by the total cross-section and get the probability of the scattered photon energy for different incoming photons. Let's plot a few graphs. As one can see, a cross-section has the dimension of area. Cross-sections are very small, therefore they are measured in barns, one barn being $10^{-24}$ cm$^2$. For reference, let's also add the expression for the angular differential cross-section$$\frac{d\sigma}{d\Omega'} = \frac{1}{2} r_e^2 \left( \frac{\hbar\omega'}{\hbar\omega}\right)^2 \left(\frac{\hbar\omega}{\hbar\omega'} + \frac{\hbar\omega'}{\hbar\omega} - \sin^2{\theta}\right)$$ Let's move to more convenient units: energies will always be in MeV, and lengths for cross-sections in femtometers (1 fm = $10^{-15}$ m). A barn is 100 femtometers squared.
|
# useful constants
MeC2 = 0.511 # in MeV
Re = 2.82 # femtometers
# main functions to deal with cross-sections
def hw_prime(hw, cos_theta):
"""computes outgoing photon energy vs cosine of the scattered angle"""
hwp = hw/(1.0 + (1.0 - cos_theta)*hw/MeC2)
return hwp
def cosθ_from_hwp(hw, hwp):
return 1.0 - (MeC2/hwp - MeC2/hw)
def hwp_minimum(hw):
"""Computes minimum scattere energy in MeV given incoming photon energy hw"""
return hw/(1.0 + 2.0*hw/MeC2)
def total_cross_section(hw):
"""Klein-Nishina total cross-section, LDL p.358, eq (86.16)"""
if hw <= 0.0:
raise RuntimeError(f"Photon energy is negative: {hw}")
x = 2.0 * hw / MeC2
q = 1.0/x
z = (1.0 + x)
σ = 2.0*np.pi*Re*Re * q * ((1.0 - 4.0*q - 8.0*q*q)*np.log(z) + 0.5 + 8.0*q - 0.5/z/z)
return σ
def diff_cross_section_dhwp(hw, hwp):
"""Differential cross-section over outgoing photon energy"""
if hw <= 0.0:
raise RuntimeError(f"Photon energy is negative or zero: {hw}")
if hwp <= 0.0:
raise RuntimeError(f"Scattered photon energy is negative or zero: {hwp}")
if hwp < hwp_minimum(hw): # outgoing energy cannot be less than minimum allowed
return 0.0
ei = MeC2/hw
eo = MeC2/hwp
dσ_dhwp = np.pi*Re*Re * (ei/hw) * (ei/eo + eo/ei + (eo-ei)**2 - 2.0*(eo-ei))
return dσ_dhwp
def diff_cross_section_dOp(hw, θ):
"""Differential cross-section over outgoing photon differential angle"""
cst = np.cos(θ)
hwp = hw_prime(hw, cst)
rhw = hwp/hw
dσ_dOp = 0.5*np.pi*Re*Re * rhw*rhw*(rhw + 1.0/rhw - (1.0 - cst)*(1.0 + cst))
return dσ_dOp
def make_energyloss_curve(hw):
N = 101
hwm = hwp_minimum(hw)
hws = np.linspace(0.0, hw-hwm, N)
st = total_cross_section(hw)
sc = np.empty(101)
for k in range(0, len(hws)):
hwp = hw - hws[k]
sc[k] = diff_cross_section_dhwp(hw, hwp)/st
return hws, sc
q_p25, s_p25 = make_energyloss_curve(0.25)
q_p50, s_p50 = make_energyloss_curve(0.50)
q_1p0, s_1p0 = make_energyloss_curve(1.00)
fig, ax = plt.subplots(1, 1)
ax.plot(q_p25, s_p25, 'r-', lw=2, label='Scattering probability vs energy loss, 0.25MeV')
ax.plot(q_p50, s_p50, 'g-', lw=2, label='Scattering probability vs energy loss, 0.50MeV')
ax.plot(q_1p0, s_1p0, 'b-', lw=2, label='Scattering probability vs energy loss, 1.00MeV')
plt.title("Klein-Nishina")
plt.show()
def make_angular_curve(hw):
"""Helper function to make angular probability x,y arrays given incoming photon enenrgy, MeV"""
N = 181
theta_d = np.linspace(0.0, 180.0, N) # angles in degrees
theta_r = theta_d * np.pi / 180.0
st = total_cross_section(hw)
so = np.empty(N)
for k in range(0, len(so)):
so[k] = diff_cross_section_dOp(hw, theta_r[k]) * 2.0*np.pi / st
return theta_d, so
a_p25, s_p25 = make_angular_curve(0.25)
a_p50, s_p50 = make_angular_curve(0.50)
a_1p0, s_1p0 = make_angular_curve(1.00)
fig, ax = plt.subplots(1, 1)
ax.plot(a_p25, s_p25, 'r-', lw=2, label='Scattering angular probability, 0.25MeV')
ax.plot(a_p50, s_p50, 'g-', lw=2, label='Scattering angular probability, 0.50MeV')
ax.plot(a_1p0, s_1p0, 'b-', lw=2, label='Scattering angular probability, 1.00MeV')
plt.title("Klein-Nishina")
plt.show()
|
_____no_output_____
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
Cross-sections Microscopic and Macroscopic cross-sections We have learned about so-called microscopic cross-sections, which describe one photon scattering on one electron. They are very small, measured in barns ($10^{-24}$ cm$^2$). In real life photons interact with material objects measured in grams and kilograms. For that, we need the macroscopic cross-section. For the macroscopic cross-section, we have to multiply the microscopic one by $N$, the density of scatterers, as well as by the atomic number $Z$ (remember, we are scattering on electrons). For Compton scattering in water, we can write$$\Sigma = \rho Z \frac{N_A}{M} \sigma$$where $N_A$ is the Avogadro constant, $M$ is the molar mass (total mass of $N_A$ molecules) and $\rho$ is the density. Let's check the units. Suppose the density is in $g/cm^3$, the Avogadro constant in mol$^{-1}$ and the molar mass in $g/mol$. Then the macroscopic cross-section is measured in $cm^{-1}$ and provides the basis for the linear attenuation law$$P(x) = \exp{(-\Sigma x)}$$where one can see that the value under the exponent is dimensionless. NIST cross-sections database The National Institute of Standards and Technology provides a lot of precomputed cross-sections for elements and mixtures, for energies from 1 keV up to 10 GeV. Cross-sections can be found at the [XCOM database](https://www.nist.gov/pml/xcom-photon-cross-sections-database). One can pick elements, materials and mixtures and save them into a local file. It is worth mentioning that XCOM provides the data as$$\Sigma = Z \frac{N_A}{M}\sigma$$where the density is specifically excluded. This is called the mass attenuation coefficient and is measured in $cm^2/g$. Using such units has certain advantages, e.g. when you compute photon transport in media where the density can change (say, inside a nuclear reactor where, due to heating, the density of water goes from $\sim$1$\;g/cm^3$ to about 0.75$\;g/cm^3$), it allows you to keep the interaction physics separate from the density. Multiplying the mass attenuation coefficient by the density gives back the linear attenuation coefficient. Cross-sections for Water Let's read the water cross-sections and plot them.
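As a quick sanity check of these formulas, here is a minimal sketch of the conversion from a per-electron microscopic cross-section to a linear attenuation coefficient for water. Assumptions: water is counted as 10 electrons per 18 g/mol at a density of 1 g/cm$^3$, and the 0.211 barn per electron is an illustrative value for the Compton cross-section near 1 MeV.

```python
import numpy as np

N_A = 6.02214076e23   # Avogadro constant, 1/mol
rho = 1.0             # water density, g/cm^3 (assumption)
Z, M = 10.0, 18.0     # electrons per water molecule, molar mass in g/mol

def linear_attenuation(sigma_barn):
    """Macroscopic cross-section in 1/cm from a per-electron cross-section in barns."""
    sigma_cm2 = sigma_barn * 1.0e-24        # 1 barn = 1e-24 cm^2
    return rho * Z * (N_A / M) * sigma_cm2  # Sigma = rho * Z * N_A / M * sigma

Sigma = linear_attenuation(0.211)           # illustrative per-electron value near 1 MeV
x = 10.0                                    # thickness in cm
print(f"Sigma = {Sigma:.4f} 1/cm, transmission P(10 cm) = {np.exp(-Sigma*x):.4f}")
```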
|
lines = None
with open('H2o.data', "r") as f:
lines = f.readlines()
header_len = 3
lines = lines[header_len:41] # remove header, and limit energy to 10MeV
energy = np.empty(len(lines)) # energy scale
coh_xs = np.empty(len(lines)) # coherent cross-section
inc_xs = np.empty(len(lines)) # incoherent cross-section
pht_xs = np.empty(len(lines)) # photo-effect cross-section
npp_xs = np.empty(len(lines)) # nuclear pair production
epp_xs = np.empty(len(lines)) # electron pair production
for k in range(0, len(lines)):
s = lines[k].split('|')
energy[k] = float(s[0])
coh_xs[k] = float(s[1])
inc_xs[k] = float(s[2])
pht_xs[k] = float(s[3])
npp_xs[k] = float(s[4])
epp_xs[k] = float(s[5])
|
_____no_output_____
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
Now we will plot the photo-effect, coherent, incoherent, and total mass attenuation cross-sections together.
|
plt.xscale("log")
plt.yscale("log")
plt.plot(energy, coh_xs, 'g-', linewidth=2)
plt.plot(energy, inc_xs, 'r-', linewidth=2)
plt.plot(energy, pht_xs, 'b-', linewidth=2)
plt.plot(energy, pht_xs+coh_xs+inc_xs, 'o-', linewidth=2) # total cross-section
#plt.plot(energy, npp_xs, 'c-', linewidth=2)
#plt.plot(energy, epp_xs, 'm-', linewidth=2)
plt.show()
|
_____no_output_____
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
One can see that, for all practical purposes, considering only the photo-effect and Compton (aka incoherent) scattering is a good enough approximation. Compton Scattering Sampling We will use Khan's rejection method to sample Compton scattering.
|
def KhanComptonSampling(hw, rng):
"""Sample scattering energy after Compton interaction"""
α = 2.0*hw/MeC2 # double relative incoming photon energy
t = (α + 1.0)/(α + 9.0)
x = 0.0
while True:
y = 1.0 + α*rng.random()
if rng.random() < t:
if rng.random() < 4.0*(1.0 - 1.0/y)/y:
x = y
break
else:
y = (1.0 + α) / y
c = 2.0*y/α + 1.0
if rng.random() < 0.5*(c*c + 1.0/y):
x = y
break
return hw/x # scattered photon energy back
|
_____no_output_____
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
Let's test Compton sampling and compare it with microscopic differential cross-section
|
hw = 1.0 # MeV
hwm = hwp_minimum(hw)
Nt = 1000000
hwp = np.empty(Nt)
rng = np.random.default_rng(312345)
for k in range(0, len(hwp)):
hwp[k] = KhanComptonSampling(hw, rng)
|
_____no_output_____
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
OK, let's first check the minimum energy among the sampled values; it should be within the allowed range.
|
hwm_sampled = np.min(hwp)
print(f"Minimum allowed scattered energy: {hwm} vs actual sampled minimum {hwm_sampled}")
if hwm_sampled < hwm:
print("We have a problem with kinematics!")
count, bins, ignored = plt.hist(hwp, 20, density=True)
plt.show()
# plotting angular distribution
cosθ = cosθ_from_hwp(hw, hwp)
count, bins, ignored = plt.hist(cosθ, 20, density=True)
plt.show()
|
_____no_output_____
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
Monte Carlo photon transport code
|
# several helper functions and constants
X = 0
Y = 1
Z = 2
def isotropic_source(rng):
    cosθ = 2.0*rng.random() - 1.0 # uniform cosine of the polar angle
    sinθ = np.sqrt((1.0 - cosθ)*(1.0 + cosθ))
    φ = 2.0*np.pi*rng.random() # uniform azimuthal angle
return np.array((sinθ*np.cos(φ), sinθ*np.sin(φ), cosθ))
def find_energy_index(scale, hw):
return np.searchsorted(scale, hw, side='right') - 1
def calculate_xs(xs, scale, hw, idx):
q = (hw - scale[idx])/(scale[idx+1] - scale[idx])
return xs[idx]*(1.0 - q) + xs[idx+1]*q
def transform_cosines(wx, wy, wz, cosθ, φ):
"""https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/monte-carlo-methods-in-practice/monte-carlo-simulation"""
# print(wx, wy, wz, cosθ)
sinθ = np.sqrt((1.0 - cosθ)*(1.0 + cosθ))
cosφ = np.cos(φ)
sinφ = np.sin(φ)
if wz == 1.0:
return np.array((sinθ * cosφ, sinθ * sinφ, cosθ))
if wz == -1.0:
return np.array((sinθ * cosφ, -sinθ * sinφ, -cosθ))
denom = np.sqrt((1.0 - wz)*(1.0 + wz)) # denominator
wzcosφ = wz * cosφ
return np.array((wx * cosθ + sinθ * (wx * wzcosφ - wy * sinφ)/denom,
wy * cosθ + sinθ * (wy * wzcosφ + wx * sinφ)/denom,
wz * cosθ - denom * sinθ * cosφ))
def is_inside(pos):
"""Check is photon is inside world box"""
if pos[X] > 20.0:
return False
if pos[X] < -20.0:
return False
if pos[Y] > 20.0:
return False
if pos[Y] < -20.0:
return False
if pos[Z] > 20.0:
return False
if pos[Z] < -20.0:
return False
return True
# main MC loop
rng = np.random.default_rng(312345) # set RNG seed
Nt = 100 # number of trajectories
hw_src = 1.0 # initial energy, MeV
hw_max = energy[-1] # maximum energy in xs tables
pos_src = (0.0, 0.0, 0.0) # initial position
dir_src = (0.0, 0.0, 1.0) # initial direction
density = 1.0 # g/cm^3
for k in range(0, Nt): # loop over all trajectories
print(f"Particle # {k}")
# set energy, position and direction from source terms
hw = hw_src
gpos = np.array(pos_src, dtype=np.float64)
gdir = np.array(dir_src, dtype=np.float64) # could try isotropic source here
if hw < 0.0:
raise ValueError(f"Energy is negative: {hw}")
if hw > hw_max:
raise ValueError(f"Energy is too large: {hw}")
while True: # infinite loop over single trajectory till photon is absorbed or out of the box or out of energy range
idx = find_energy_index(energy, hw)
if idx < 0: # photon fell below 1keV energy threshold, kill it
break
phxs = calculate_xs(pht_xs, energy, hw, idx) # photo-effect cross-section
inxs = calculate_xs(inc_xs, energy, hw, idx) # incoherent, aka Compton cross-section
toxs = (phxs + inxs) # total cross-section
pathlength = - np.log(1.0 - rng.random()) # exponential distribution
pathlength /= (toxs*density) # path length now in cm, because we move from mass attenuation toxs to linear attenuation
#gpos = (gpos[X] + gdir[X]*pathlength, gpos[Y] + gdir[Y]*pathlength, gpos[Z] + gdir[Z]*pathlength) # move to the next interaction point
gpos = gpos + np.multiply(gdir, pathlength)
        if not is_inside(gpos): # check if we are still inside the volume of interest
            break # we're out, done with this trajectory
        p_abs = phxs/toxs # probability of absorption
        if rng.random() < p_abs: # sample absorption
            break # photo-effect, photon is gone
# compton scattering
hwp = KhanComptonSampling(hw, rng)
cosθ = cosθ_from_hwp(hw, hwp)
φ = 2.0*np.pi*rng.random() # uniform azimuth angle
gdir = transform_cosines(*gdir, cosθ, φ)
gdir = gdir/np.linalg.norm(gdir) # normalization
hw = hwp
# here we have new energy, new position and new direction
|
Particle # 0
Particle # 1
Particle # 2
...
Particle # 99
|
MIT
|
GammaTransport.ipynb
|
Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo
|
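The loop above only prints trajectory indices. A natural extension, sketched here as an assumption rather than part of the original notebook, wraps the same physics in a function that tallies how each photon history ends; it reuses the helpers above and the cross-section tables `energy`, `pht_xs`, `inc_xs` defined earlier:
|
def run_batch(Nt, hw_src, rng):
    """Transport Nt photons and tally how each trajectory ends"""
    tally = {'photoabsorbed': 0, 'escaped': 0, 'below_cutoff': 0}
    for _ in range(Nt):
        hw = hw_src
        gpos = np.zeros(3)
        gdir = np.array((0.0, 0.0, 1.0))
        while True:
            idx = find_energy_index(energy, hw)
            if idx < 0: # fell below the tabulated energy range
                tally['below_cutoff'] += 1
                break
            phxs = calculate_xs(pht_xs, energy, hw, idx)
            inxs = calculate_xs(inc_xs, energy, hw, idx)
            toxs = phxs + inxs
            gpos = gpos + gdir*(-np.log(1.0 - rng.random())/(toxs*density))
            if not is_inside(gpos):
                tally['escaped'] += 1
                break
            if rng.random() < phxs/toxs:
                tally['photoabsorbed'] += 1
                break
            hwp = KhanComptonSampling(hw, rng)
            gdir = transform_cosines(*gdir, cosθ_from_hwp(hw, hwp), 2.0*np.pi*rng.random())
            gdir = gdir/np.linalg.norm(gdir)
            hw = hwp
    return tally
print(run_batch(10000, 1.0, np.random.default_rng(42)))
|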
Random Signals*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* Auto-Power Spectral DensityThe (auto-) [power spectral density](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density) (PSD) is defined as the Fourier transformation of the [auto-correlation function](correlation_functions.ipynb) (ACF). DefinitionFor a continuous-amplitude, real-valued, wide-sense stationary (WSS) random signal $x[k]$, the PSD is given as\begin{equation}\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathcal{F}_* \{ \varphi_{xx}[\kappa] \},\end{equation}where $\mathcal{F}_* \{ \cdot \}$ denotes the [discrete-time Fourier transformation](https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform) (DTFT) and $\varphi_{xx}[\kappa]$ the ACF of $x[k]$. Note that the DTFT is performed with respect to $\kappa$. The ACF of a random signal of finite length $N$ can be expressed by way of a linear convolution\begin{equation}\varphi_{xx}[\kappa] = \frac{1}{N} \cdot x_N[k] * x_N[-k].\end{equation}Taking the DTFT of the left- and right-hand side results in\begin{equation}\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, X_N(\mathrm{e}^{-\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, | X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2.\end{equation}The last equality results from the definition of the magnitude and the symmetry of the DTFT for real-valued signals. The spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ quantifies the amplitude density of the signal $x_N[k]$. It can be concluded from the result above that the PSD quantifies the squared amplitude or power density of a random signal. This explains the term power spectral density. PropertiesThe properties of the PSD can be deduced from the properties of the ACF and the DTFT as:1. From the link between the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and the spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ derived above it can be concluded that the PSD is real valued $$\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \in \mathbb{R}$$2. From the even symmetry $\varphi_{xx}[\kappa] = \varphi_{xx}[-\kappa]$ of the ACF it follows that $$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \Phi_{xx}(\mathrm{e}^{\,-\mathrm{j}\, \Omega}) $$3. The PSD of an uncorrelated random signal is given as $$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \sigma_x^2 + \mu_x^2 \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) ,$$ which can be deduced from the [ACF of an uncorrelated signal](correlation_functions.ipynb#Properties).4. The quadratic mean of a random signal is given as $$ E\{ x[k]^2 \} = \varphi_{xx}[\kappa=0] = \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \,\mathrm{d} \Omega $$ The last relation can be found by expressing the ACF via the inverse DTFT of $\Phi_{xx}$ and considering that $\mathrm{e}^{\mathrm{j} \Omega \kappa} = 1$ when evaluating the integral for $\kappa=0$. Example - Power Spectral Density of a Speech SignalIn this example the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \,\Omega})$ of a speech signal of length $N$ is estimated by applying a discrete Fourier transformation (DFT) to its ACF. For a better interpretation of the PSD, the frequency axis $f = \frac{\Omega}{2 \pi} \cdot f_s$ has been chosen for illustration, where $f_s$ denotes the sampling frequency of the signal.
The speech signal is a recording of the vowel 'o' spoken by a German male speaker, loaded into the variable `x`. In Python the ACF is stored in a vector with indices $0, 1, \dots, 2N - 2$ corresponding to the lags $\kappa = (0, 1, \dots, 2N - 2)^\mathrm{T} - (N-1)$. When computing the discrete Fourier transform (DFT) of the ACF numerically by the fast Fourier transform (FFT), one has to take this shift into account, for instance by multiplying the DFT $\Phi_{xx}[\mu]$ by $\mathrm{e}^{\mathrm{j} \mu \frac{2 \pi}{2N - 1} (N-1)}$.
|
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
# read audio file
fs, x = wavfile.read('../data/vocal_o_8k.wav')
x = np.asarray(x, dtype=float)
N = len(x)
# compute ACF
acf = 1/N * np.correlate(x, x, mode='full')
# compute PSD
psd = np.fft.fft(acf)
psd = psd * np.exp(1j*np.arange(2*N-1)*2*np.pi*(N-1)/(2*N-1))
f = np.fft.fftfreq(2*N-1, d=1/fs)
# plot PSD
plt.figure(figsize=(10, 4))
plt.plot(f, np.real(psd))
plt.title('Estimated power spectral density')
plt.ylabel(r'$\hat{\Phi}_{xx}(e^{j \Omega})$')
plt.xlabel(r'$f / Hz$')
plt.axis([0, 500, 0, 1.1*max(np.abs(psd))])
plt.grid()
|
_____no_output_____
|
MIT
|
random_signals/power_spectral_densities.ipynb
|
TA1DB/digital-signal-processing-lecture
|
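A quick numerical check of property 4 above, added here as an illustration: after the phase correction, the mean of the PSD bins is a Riemann sum of the integral over $\Omega$, so it should match the quadratic mean of the signal:
|
# both values estimate E{x[k]^2}; they should agree closely
print(np.mean(x**2), np.real(np.sum(psd))/(2*N - 1))
|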
**Exercise*** What does the PSD tell you about the average spectral contents of a speech signal?Solution: The speech signal exhibits a harmonic structure with the dominant fundamental frequency $f_0 \approx 100$ Hz and a number of harmonics $f_n \approx n \cdot f_0$ for $n > 0$. This is due to the fact that vowels generate random signals which are, to a good approximation, periodic. To generate vowels, the sound produced by the periodically vibrating vocal folds is filtered by the resonance volumes and articulators above the voice box. The spectrum of periodic signals is a line spectrum. Cross-Power Spectral DensityThe cross-power spectral density is defined as the Fourier transformation of the [cross-correlation function](correlation_functions.ipynb#Cross-Correlation-Function) (CCF). DefinitionFor two continuous-amplitude, real-valued, wide-sense stationary (WSS) random signals $x[k]$ and $y[k]$, the cross-power spectral density is given as\begin{equation}\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mathcal{F}_* \{ \varphi_{xy}[\kappa] \},\end{equation}where $\varphi_{xy}[\kappa]$ denotes the CCF of $x[k]$ and $y[k]$. Note again that the DTFT is performed with respect to $\kappa$. The CCF of two random signals of finite length $N$ and $M$ can be expressed by way of a linear convolution\begin{equation}\varphi_{xy}[\kappa] = \frac{1}{N} \cdot x_N[k] * y_M[-k].\end{equation}Note that the chosen $\frac{1}{N}$-averaging convention corresponds to the length of signal $x$. If $N \neq M$, care should be taken in interpreting this normalization. In the case of $N=M$ the $\frac{1}{N}$-averaging yields a [biased estimator](https://en.wikipedia.org/wiki/Bias_of_an_estimator) of the CCF, which, for consistency, should be denoted $\hat{\varphi}_{xy,\mathrm{biased}}[\kappa]$.Taking the DTFT of the left- and right-hand side of the above cross-correlation results in\begin{equation}\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, Y_M(\mathrm{e}^{-\,\mathrm{j}\,\Omega}).\end{equation} Properties1. The symmetries of $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ can be derived from the symmetries of the CCF and the DTFT as $$ \underbrace {\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \Phi_{xy}^*(\mathrm{e}^{-\,\mathrm{j}\, \Omega})}_{\varphi_{xy}[\kappa] \in \mathbb{R}} = \underbrace {\Phi_{yx}(\mathrm{e}^{\,- \mathrm{j}\, \Omega}) = \Phi_{yx}^*(\mathrm{e}^{\,\mathrm{j}\, \Omega})}_{\varphi_{yx}[-\kappa] \in \mathbb{R}},$$ from which $|\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})| = |\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\, \Omega})|$ can be concluded.2. The cross PSD of two uncorrelated random signals is given as $$ \Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mu_x \mu_y \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) $$ which can be deduced from the CCF of an uncorrelated signal. Example - Cross-Power Spectral DensityThe following example estimates and plots the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of two random signals $x_N[k]$ and $y_M[k]$ of finite lengths $N = 64$ and $M = 512$.
|
N = 64 # length of x
M = 512 # length of y
# generate two uncorrelated random signals
np.random.seed(1)
x = 2 + np.random.normal(size=N)
y = 3 + np.random.normal(size=M)
N = len(x)
M = len(y)
# compute cross PSD via CCF
ccf = 1/N * np.correlate(x, y, mode='full') # biased CCF estimate, lags -(M-1) ... (N-1)
psd = np.fft.fft(ccf)
psd = psd * np.exp(1j*np.arange(N+M-1)*2*np.pi*(M-1)/(N+M-1)) # compensate the lag offset of (M-1) samples
psd = np.fft.fftshift(psd)
Om = 2*np.pi * np.arange(0, N+M-1) / (N+M-1)
Om = Om - np.pi
# plot results
plt.figure(figsize=(10, 4))
plt.stem(Om, np.abs(psd), basefmt='C0:', use_line_collection=True)
plt.title('Biased estimator of cross power spectral density')
plt.ylabel(r'$|\hat{\Phi}_{xy}(e^{j \Omega})|$')
plt.xlabel(r'$\Omega$')
plt.grid()
|
_____no_output_____
|
MIT
|
random_signals/power_spectral_densities.ipynb
|
TA1DB/digital-signal-processing-lecture
|
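A small check of property 2, added as an illustration: over the lags where the two signals fully overlap, the biased CCF estimate should be roughly constant and close to $\mu_x \mu_y$ (here $2 \cdot 3 = 6$):
|
ccf_valid = 1/N * np.correlate(y, x, mode='valid') # full-overlap lags only
print(np.mean(ccf_valid), np.mean(x)*np.mean(y))
|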
Otter-Grader TutorialThis notebook is part of the Otter-Grader tutorial. For more information about Otter, see our [documentation](https://otter-grader.rtfd.io).
|
import pandas as pd
import numpy as np
%matplotlib inline
import otter
grader = otter.Notebook()
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb
|
chrispyles/otter-grader
|
**Question 1:** Write a function `square` that returns the square of its argument.
|
def square(x):
return x**2
grader.check("q1")
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb
|
chrispyles/otter-grader
|
**Question 2:** Write an infinite generator of the Fibonacci sequence `fiberator` that is *not* recursive.
|
def fiberator():
yield 0
yield 1
while True:
yield 1
grader.check("q2")
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb
|
chrispyles/otter-grader
|
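Note that this notebook is a deliberately failing demo submission, so the generator above repeats 1 forever instead of producing the Fibonacci sequence. For reference only, a correct non-recursive generator could look like:
|
def fiberator_ok():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b
|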
**Question 3:** Create a DataFrame mirroring the table below and assign this to `data`. Then group by the `flavor` column and find the mean price for each flavor; assign this **series** to `price_by_flavor`.

| flavor | scoops | price |
|-----|-----|-----|
| chocolate | 1 | 2 |
| vanilla | 1 | 1.5 |
| chocolate | 2 | 3 |
| strawberry | 1 | 2 |
| strawberry | 3 | 4 |
| vanilla | 2 | 2 |
| mint | 1 | 4 |
| mint | 2 | 5 |
| chocolate | 3 | 5 |
|
data = pd.DataFrame({
"flavor": ["chocolate", "vanilla", "chocolate", "strawberry", "strawberry", "vanilla", "mint",
"mint", "chocolate"],
"scoops": [1, 1, 2, 1, 3, 2, 1, 2, 3],
"price": [2, 1.5, 3, 2, 4, 2, 4, 5, 5]
})
price_by_flavor = data.groupby("flavor").mean()["price"]
price_by_flavor
grader.check("q3")
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb
|
chrispyles/otter-grader
|
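As a sanity check, added for illustration, the group means can be verified by hand from the table: chocolate $(2+3+5)/3$, mint $(4+5)/2$, strawberry $(2+4)/2$, and vanilla $(1.5+2)/2$:
|
expected = pd.Series({"chocolate": 10/3, "mint": 4.5, "strawberry": 3.0, "vanilla": 1.75})
assert np.allclose(price_by_flavor.sort_index(), expected.sort_index())
|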
**Question 1.4:** Create a barplot of `price_by_flavor`.
|
price_by_flavor.plot.bar()
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb
|
chrispyles/otter-grader
|
**Question 1.5:** What do you notice about the bar plot? _Type your answer here, replacing this text._ The cell below allows you to run all checks again.
|
grader.check_all()
grader.export()
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb
|
chrispyles/otter-grader
|
Table of Contents
|
#!python
"""
Find the brightest pixel coordinate of a image.
@author: Bhishan Poudel
@date: Oct 27, 2017
@email: [email protected]
"""
# Imports
import time
import numpy as np
from astropy.io import fits
import subprocess
from scipy.ndimage import measurements
def brightest_coord():
with open('centroids_f8.txt','w') as fo:
for i in range(201):
pre = '/Users/poudel/Research/a01_data/original_data/HST_ACS_WFC_f814w/'
infile = '{}/sect23_f814w_gal{}.fits'.format(pre,i)
dat = fits.getdata(infile)
x,y = np.unravel_index(np.argmax(dat), dat.shape)
x,y = int(y+1) , int(x+1)
print("{} {}".format(x, y), file=fo)
def find_centroid():
with open('centroids_f8_scipy.txt','w') as fo:
for i in range(201):
pre = '/Users/poudel/Research/a01_data/original_data/HST_ACS_WFC_f814w/'
infile = '{}/sect23_f814w_gal{}.fits'.format(pre,i)
dat = fits.getdata(infile)
x,y = measurements.center_of_mass(dat)
x,y = int(y+1) , int(x+1)
print("{} {}".format(x, y), file=fo)
def main():
"""Run main function."""
    # brightest_coord()
# find_centroid()
# # checking
# i = 0
# pre = '/Users/poudel/Research/a01_data/original_data/HST_ACS_WFC_f814w/'
# infile = '{}/sect23_f814w_gal{}.fits'.format(pre,i)
# ds9 = '/Applications/ds9.app/Contents/MacOS/ds9'
# subprocess.call('{} {}'.format(ds9, infile), shell=True)
# when zooming we can see brightest pixel is at 296, 307 image coord.
if __name__ == "__main__":
import time, os
# Beginning time
program_begin_time = time.time()
begin_ctime = time.ctime()
# Run the main program
main()
# Print the time taken
program_end_time = time.time()
end_ctime = time.ctime()
seconds = program_end_time - program_begin_time
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
d, h = divmod(h, 24)
print("\n\nBegin time: ", begin_ctime)
print("End time: ", end_ctime, "\n")
print("Time taken: {0: .0f} days, {1: .0f} hours, \
{2: .0f} minutes, {3: f} seconds.".format(d, h, m, s))
print("\n")
!head -n 5 centroids_f8.txt
!head -n 5 centroids_f8_scipy.txt
def find_max_coord(dat):
print("dat = \n{}".format(dat))
maxpos = np.unravel_index(np.argmax(dat), dat.shape)
print("maxpos = {}".format(maxpos))
with open('example_data.txt','w') as fo:
data = """0.1 0.5
0.0 0.0
4.0 3.0
0.0 0.0
1.0 1.0
"""
fo.write(data)
dat = np.genfromtxt('example_data.txt')
find_max_coord(dat)
x,y = measurements.center_of_mass(dat)
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(dat) # 2-D data is rendered with the default colormap
plt.imshow(dat,cmap='gray', vmin=int(dat.min()), vmax=int(dat.max()))
# we can see brightest pixel is x=0 and y = 2
# or, if we count from 1, x = 1 and y =3
measurements.center_of_mass(dat)
x,y = measurements.center_of_mass(dat)
x,y = int(x), int(y)
x,y
dat
dat[2][0]
# Numpy index is dat[2][0]
# but image shows x=0 and y =2.
x,y = measurements.center_of_mass(dat)
x,y = int(y), int(x)
x,y
dat[2][0]
# Looking at mean
dat.mean(axis=0)
np.argmax(dat)
np.unravel_index(4,dat.shape)
|
_____no_output_____
|
MIT
|
Useful_Codes/find_centroid.ipynb
|
bhishanpdl/Research
|
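A small hypothetical helper, not part of the original script, captures the index convention explored above: numpy returns a 0-based (row, col) pair, while 1-based FITS/ds9 image coordinates are (x, y) = (col + 1, row + 1):
|
def to_image_coords(row, col):
    """Convert a 0-based numpy (row, col) index to 1-based image (x, y)"""
    return int(col) + 1, int(row) + 1
print(to_image_coords(*np.unravel_index(np.argmax(dat), dat.shape))) # (1, 3) for the example data
|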
Poland* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Poland.ipynb)
|
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Poland", weeks=5);
overview("Poland");
compare_plot("Poland", normalise=True);
# load the data
cases, deaths = get_country_data("Poland")
# get population of the region for future normalisation:
inhabitants = population("Poland")
print(f'Population of "Poland": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("display.max_rows", 1000)
# display the table
table
|
_____no_output_____
|
CC-BY-4.0
|
ipynb/Poland.ipynb
|
oscovida/oscovida.github.io
|
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Poland.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
|
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
|
_____no_output_____
|
CC-BY-4.0
|
ipynb/Poland.ipynb
|
oscovida/oscovida.github.io
|
Implementation of VGG16> In this notebook I have implemented VGG16 on the CIFAR10 dataset using PyTorch
|
#importing libraries
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
import torch.optim as optim
import tqdm
import matplotlib.pyplot as plt
from torchvision.datasets import CIFAR10
from torch.utils.data import random_split
from torch.utils.data.dataloader import DataLoader
|
_____no_output_____
|
MIT
|
VGG/VGG.ipynb
|
gowriaddepalli/papers
|
Load the data and do standard preprocessing steps, such as resizing and converting the images into tensors
|
transform = transforms.Compose([transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485,0.456,0.406],
std=[0.229,0.224,0.225])])
train_ds = CIFAR10(root='data/',train = True,download=True,transform = transform)
val_ds = CIFAR10(root='data/',train = False,download=True,transform = transform)
batch_size = 128
train_loader = DataLoader(train_ds,batch_size,shuffle=True,num_workers=4,pin_memory=True)
val_loader = DataLoader(val_ds,batch_size,num_workers=4,pin_memory=True)
|
Files already downloaded and verified
Files already downloaded and verified
|
MIT
|
VGG/VGG.ipynb
|
gowriaddepalli/papers
|
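As a quick sanity check, not part of the original notebook: CIFAR10 images are 32x32, so `Resize(224)` upsamples them to the 224x224 input the VGG classifier head expects:
|
images, labels = next(iter(train_loader))
print(images.shape, labels.shape) # expected: torch.Size([128, 3, 224, 224]) torch.Size([128])
|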
A custom utility class to print out the accuracy and losses during training and testing
|
def accuracy(outputs,labels):
_,preds = torch.max(outputs,dim=1)
return torch.tensor(torch.sum(preds==labels).item()/len(preds))
class ImageClassificationBase(nn.Module):
def training_step(self,batch):
images, labels = batch
out = self(images)
loss = F.cross_entropy(out,labels)
return loss
def validation_step(self,batch):
images, labels = batch
out = self(images)
loss = F.cross_entropy(out,labels)
acc = accuracy(out,labels)
return {'val_loss': loss.detach(),'val_acc': acc}
def validation_epoch_end(self,outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean()
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean()
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format(
epoch, result['train_loss'], result['val_loss'], result['val_acc']))
|
_____no_output_____
|
MIT
|
VGG/VGG.ipynb
|
gowriaddepalli/papers
|
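These hooks are only useful together with a training driver. The notebook presumably defines one later; a minimal sketch following the usual fit/evaluate pattern, written here under that assumption, could look like:
|
@torch.no_grad()
def evaluate(model, val_loader):
    model.eval()
    outputs = [model.validation_step(batch) for batch in val_loader]
    return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=optim.SGD):
    optimizer = opt_func(model.parameters(), lr)
    for epoch in range(epochs):
        model.train()
        train_losses = []
        for batch in train_loader:
            loss = model.training_step(batch)
            train_losses.append(loss)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        result = evaluate(model, val_loader)
        result['train_loss'] = torch.stack(train_losses).mean().item()
        model.epoch_end(epoch, result)
|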
Creating a network
|
VGG_types = {
'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
class VGG_net(ImageClassificationBase):
def __init__(self, in_channels=3, num_classes=1000):
super(VGG_net, self).__init__()
self.in_channels = in_channels
self.conv_layers = self.create_conv_layers(VGG_types['VGG16'])
self.fcs = nn.Sequential(
nn.Linear(512*7*7, 4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096, 4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096, num_classes)
)
def forward(self, x):
x = self.conv_layers(x)
x = x.reshape(x.shape[0], -1)
x = self.fcs(x)
return x
def create_conv_layers(self, architecture):
layers = []
in_channels = self.in_channels
for x in architecture:
if type(x) == int:
out_channels = x
layers += [nn.Conv2d(in_channels=in_channels,out_channels=out_channels,
kernel_size=(3,3), stride=(1,1), padding=(1,1)),
nn.BatchNorm2d(x),
nn.ReLU()]
in_channels = x
elif x == 'M':
layers += [nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))]
return nn.Sequential(*layers)
|
_____no_output_____
|
MIT
|
VGG/VGG.ipynb
|
gowriaddepalli/papers
|
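A short smoke test, added as an illustration: CIFAR10 has 10 classes, and after five 2x2 max-pools an input of 224x224 is reduced to 7x7, matching the 512*7*7 input of the classifier head:
|
model = VGG_net(in_channels=3, num_classes=10)
out = model(torch.randn(2, 3, 224, 224))
print(out.shape) # expected: torch.Size([2, 10])
|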