3. (3 pts) Generate a series of four 3D scatter plots at selected time points to visually convey what is going on. Arrange the plots in a single row from left to right. Make sure you indicate which time points you are showing.
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d

lim = 70
plt.figure(figsize=(12,3))
for (i, t) in enumerate([0, 100, 1000, 1999]):
    ax = plt.subplot(1, 4, i+1, projection='3d')
    x = positions[:,0,t]
    y = positions[:,1,t]
    z = positions[:,2,t]
    ax.scatter(x, y, z)
    plt.xlim([-lim, lim])
    plt.ylim([-lim, lim])
    ax.set_zlim([-lim, lim])
    plt.xlabel("x")
    plt.ylabel("y")
    ax.set_zlabel("z")
    plt.title(f"Time {t}");
_____no_output_____
Unlicense
homework/key-random_walks.ipynb
nishadalal120/NEU-365P-385L-Spring-2021
4. (3 pts) Draw the path of a single particle (your choice) across all time steps in a 3D plot.
ax = plt.subplot(1, 1, 1, projection='3d')
i = 10  # particle index
x = positions[i,0,:]
y = positions[i,1,:]
z = positions[i,2,:]
plt.plot(x, y, z)
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Particle {i}");
_____no_output_____
Unlicense
homework/key-random_walks.ipynb
nishadalal120/NEU-365P-385L-Spring-2021
5. (3 pts) Find the minimum, maximum, mean and variance for the jump distances of all particles throughout the entire simulation. Jump distance is the Euclidean distance moved on each time step $\sqrt{dx^2+dy^2+dz^2}$. *Hint: numpy makes this very simple.*
jumpsXYZForAllParticlesAndAllTimeSteps = positions[:,:,1:] - positions[:,:,:-1]
jumpDistancesForAllParticlesAndAllTimeSteps = np.sqrt(np.sum(jumpsXYZForAllParticlesAndAllTimeSteps**2, axis=1))
print(f"min = {jumpDistancesForAllParticlesAndAllTimeSteps.min()}")
print(f"max = {jumpDistancesForAllParticlesAndAllTimeSteps.max()}")
print(f"mean = {jumpDistancesForAllParticlesAndAllTimeSteps.mean()}")
print(f"var = {jumpDistancesForAllParticlesAndAllTimeSteps.var()}")
min = 0.0052364433932233926
max = 1.7230154410954457
mean = 0.9602742572616196
var = 0.07749699927626445
Unlicense
homework/key-random_walks.ipynb
nishadalal120/NEU-365P-385L-Spring-2021
6. (3 pts) Repeat the simulation, but this time confine the particles to a unit cell of dimension 10x10x10. Make it so that if a particle leaves one edge of the cell, it enters on the opposite edge (this is the sort of thing most molecular dynamics simulations do). Show plots as in 3 to visualize the simulation (note that most interesting stuff likely happens in the first 100 time steps).
for t in range(numTimeSteps-1):
    # 2 * [0 to 1] - 1 --> [-1 to 1]
    jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
    positions[:,:,t+1] = positions[:,:,t] + jumpsForAllParticles
    # check for out-of-bounds and wrap to opposite bound
    for i in range(numParticles):
        for j in range(3):
            if positions[i,j,t+1] < 0:
                positions[i,j,t+1] += 10
            elif positions[i,j,t+1] > 10:
                positions[i,j,t+1] -= 10

plt.figure(figsize=(12,3))
for (i, t) in enumerate([0, 3, 10, 1999]):
    ax = plt.subplot(1, 4, i+1, projection='3d')
    x = positions[:,0,t]
    y = positions[:,1,t]
    z = positions[:,2,t]
    ax.scatter(x, y, z)
    plt.xlim([0, 10])
    plt.ylim([0, 10])
    ax.set_zlim([0, 10])
    plt.xlabel("x")
    plt.ylabel("y")
    ax.set_zlabel("z")
    plt.title(f"Time {t}");
_____no_output_____
Unlicense
homework/key-random_walks.ipynb
nishadalal120/NEU-365P-385L-Spring-2021
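As a side note on the simulation above, the explicit double loop that wraps out-of-bounds coordinates can be replaced by a single vectorized operation. This is only a sketch, not part of the original answer key, and it assumes `positions`, `numParticles`, and `numTimeSteps` are defined as in the cell above.

```python
import numpy as np

cell_size = 10  # edge length of the periodic unit cell

for t in range(numTimeSteps - 1):
    jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
    # np.mod wraps every out-of-bounds coordinate back into [0, cell_size) in one step
    positions[:, :, t + 1] = np.mod(positions[:, :, t] + jumpsForAllParticles, cell_size)
```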
- Malicious domain name detection with n-grams
- Reference paper: https://www.researchgate.net/publication/330843380_Malicious_Domain_Names_Detection_Algorithm_Based_on_N_-Gram
import numpy as np
import pandas as pd
import tldextract
import matplotlib.pyplot as plt
import os
import re
import time
from scipy import sparse
%matplotlib inline
_____no_output_____
MIT
malicious_domain_detect.ipynb
aierwiki/ngram-detection
Load data - Load benign domains
df_benign_domain = pd.read_csv('top-1m.csv', index_col=0, header=None).reset_index(drop=True)
df_benign_domain.columns = ['domain']
df_benign_domain['label'] = 0
_____no_output_____
MIT
malicious_domain_detect.ipynb
aierwiki/ngram-detection
- Load malicious domains
df_malicious_domain = pd.read_csv('malicious-domain.csv', engine='python', header=None)
df_malicious_domain = df_malicious_domain[[1]]
df_malicious_domain.columns = ['domain']
df_malicious_domain = df_malicious_domain[df_malicious_domain['domain'] != '-']
df_malicious_domain['label'] = 1

df_domain = pd.concat([df_benign_domain, df_malicious_domain], axis=0)

def remove_tld(domain):
    ext = tldextract.extract(domain)
    if ext.subdomain != '':
        domain = ext.subdomain + '.' + ext.domain
    else:
        domain = ext.domain
    return domain

df_domain['domain'] = df_domain['domain'].map(lambda x: tldextract.extract(x).domain)
_____no_output_____
MIT
malicious_domain_detect.ipynb
aierwiki/ngram-detection
Extract n-gram features
from sklearn.feature_extraction.text import CountVectorizer

domain_list = df_domain[df_domain['label'] == 0]['domain'].values.tolist()
benign_text_str = '.'.join(domain_list)
benign_text = re.split(r'[.-]', benign_text_str)
benign_text = list(filter(lambda x: len(x) >= 3, benign_text))

def get_ngram_weight_dict(benign_text):
    cv = CountVectorizer(ngram_range=(3, 7), analyzer='char', max_features=100000)
    cv.fit(benign_text)
    feature_names = cv.get_feature_names()
    benign_text_vectors = cv.transform(benign_text)
    ngram_count = benign_text_vectors.sum(axis=0)
    window_sizes = np.array(list(map(lambda x: len(x), feature_names)))
    ngram_weights = np.multiply(np.log2(ngram_count), window_sizes)
    ngram_weights = sparse.csr_matrix(ngram_weights)
    feature_names = cv.get_feature_names()
    ngram_weights_dict = dict()
    for ngram, weight in zip(feature_names, ngram_weights.toarray()[0].tolist()):
        ngram_weights_dict[ngram] = weight
    return ngram_weights_dict

ngram_weights_dict = get_ngram_weight_dict(benign_text)
_____no_output_____
MIT
malicious_domain_detect.ipynb
aierwiki/ngram-detection
Compute domain reputation scores
def get_reputation_value(ngram_weights_dict, domain):
    if len(domain) < 3:
        return 1000
    domains = re.split(r'[.-]', domain)
    reputation = 0
    domain_len = 0
    for domain in domains:
        domain_len += len(domain)
        for window_size in range(3, 8):
            for i in range(len(domain) - window_size + 1):
                reputation += ngram_weights_dict.get(domain[i:i+window_size], 0)
    reputation = reputation / domain_len
    return reputation

get_reputation_value(ngram_weights_dict, 'google')
get_reputation_value(ngram_weights_dict, 'ta0ba0')
get_reputation_value(ngram_weights_dict, 'dskdjisuowerwdfskdfj000')

start = time.time()
df_domain['reputation'] = df_domain['domain'].map(lambda x: get_reputation_value(ngram_weights_dict, x))
end = time.time()
print('cost time : {}'.format(end - start))

df_domain[df_domain['label'] == 0]['reputation'].describe()
df_domain[df_domain['label'] == 1]['reputation'].describe()
_____no_output_____
MIT
malicious_domain_detect.ipynb
aierwiki/ngram-detection
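The cell above stops at the descriptive statistics of the reputation scores. As a hedged sketch of how the score could be turned into a prediction (not part of the original notebook; the threshold value below is arbitrary and would need tuning on the real score distributions):

```python
# Illustrative only: flag low-reputation domains as suspicious using an arbitrary cut-off
threshold = 50  # hypothetical value, chosen only for illustration
df_domain['predicted_malicious'] = (df_domain['reputation'] < threshold).astype(int)

# Rough agreement of the thresholded score with the labels
accuracy = (df_domain['predicted_malicious'] == df_domain['label']).mean()
print('accuracy: {:.3f}'.format(accuracy))
```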
Save the model file
import joblib

joblib.dump(ngram_weights_dict, 'ngram_weights_dict.m', compress=4)
_____no_output_____
MIT
malicious_domain_detect.ipynb
aierwiki/ngram-detection
Another attempt at MC Simulation on AHP/ANP

The ideas are the following:
1. There is a class MCAnp that has a sim() method that will simulate any Prioritizer
2. MCAnp also has a sim_fill() method that fills in the data needed for a single simulation

Import needed libs
import pandas as pd
import sys
import os
sys.path.insert(0, os.path.abspath("../"))
import numpy as np
from scipy.stats import triang
from copy import deepcopy
from pyanp.priority import pri_eigen
from pyanp.pairwise import Pairwise
from pyanp.ahptree import AHPTree, AHPTreeNode
from pyanp.direct import Direct
_____no_output_____
MIT
scrap/MCAnpResearch.ipynb
georg-cantor/pyanp
MCAnp class
def ascale_mscale(val:(float,int))->float: if val is None: return 0 elif val < 0: val = -val val += 1 val = 1.0/val return val else: return val+1 def mscale_ascale(val:(float,int))->float: if val == 0: return None elif val >= 1: return val - 1 else: val = 1/val val = val-1 return -val DEFAULT_DISTRIB = triang(c=0.5, loc=-1.5, scale=3.0) def avote_random(avote): """ Returns a random additive vote in the neighborhood of the additive vote avote according to the default disribution DEFAULT_DISTRIB """ if avote is None: return None raw_val = DEFAULT_DISTRIB.rvs(size=1)[0] return avote+raw_val def mvote_random(mvote): """ Returns a random multiplicative vote in the neighborhhod of the multiplicative vote mvote according to the default distribution DEFAULT_DISTRIB. This is handled by converting the multiplicative vote to an additive vote, calling avote_random() and converting the result back to an additive vote """ avote = mscale_ascale(mvote) rval_a = avote_random(avote) rval = ascale_mscale(rval_a) return rval def direct_random(direct, max_percent_chg=0.2)->float: """ Returns a random direct data value near the value `direct'. This function creates a random percent change, between -max_percent_chg and +max_percent_chg, and then changes the direct value by that factor, and returns it. """ pchg = np.random.uniform(low=-max_percent_chg, high=max_percent_chg) return direct * (1 + pchg) class MCAnp: def __init__(self): # Setup the random pairwise vote generator self.pwvote_random = mvote_random # Setup the random direct vote generator self.directvote_random = direct_random # Set the default user to use across the simulation # follows the standard from Pairwise class, i.e. it can be a list # of usernames, a single username, or None (which means total group average) self.username = None # What is the pairwise priority calculation? self.pwprioritycalc = pri_eigen def sim_fill(self, src, dest): """ Fills in data on a structure prior to doing the simulation calculations. This function calls sim_NAME_fill depending on the class of the src object. If the dest object is None, we create a dest object by calling deepcopy(). In either case, we always return the allocated dest object """ if dest is None: dest = deepcopy(src) # Which kind of src do we have if isinstance(src, np.ndarray): # We are simulating on a pairwise comparison matrix return self.sim_pwmat_fill(src, dest) elif isinstance(src, Pairwise): # We are simulating on a multi-user pairwise comparison object return self.sim_pw_fill(src, dest) elif isinstance(src, AHPTree): # We are simulating on an ahp tree object return self.sim_ahptree_fill(src, dest) elif isinstance(src, Direct): # We are simulating on an ahp direct data return self.sim_direct_fill(src, dest) else: raise ValueError("Src class is not handled, it is "+type(src).__name__) def sim_pwmat_fill(self, pwsrc:np.ndarray, pwdest:np.ndarray=None)->np.ndarray: """ Fills in a pairwise comparison matrix with noisy votes based on pwsrc If pwsrc is None, we create a new matrix, otherwise we fill in pwdest with noisy values based on pwsrc and the self.pwvote_random parameter. 
In either case, we return the resulting noisy matrix """ if pwdest is None: pwdest = deepcopy(pwsrc) size = len(pwsrc) for row in range(size): pwdest[row,row] = 1.0 for col in range(row+1, size): val = pwsrc[row,col] if val >= 1: nvote = self.pwvote_random(val) pwdest[row, col]=nvote pwdest[col, row]=1/nvote elif val!= 0: nvote = self.pwvote_random(1/val) pwdest[col, row] = nvote pwdest[row, col] = 1/nvote else: pwdest[row, col] = nvote pwdest[col, row] = nvote return pwdest def sim_pwmat(self, pwsrc:np.ndarray, pwdest:np.ndarray=None)->np.ndarray: """ creates a noisy pw comparison matrix from pwsrc, stores the matrix in pwdest (which is created if pwdest is None) calculates the resulting priority and returns that """ pwdest = self.sim_pwmat_fill(pwsrc, pwdest) rval = self.pwprioritycalc(pwdest) return rval def sim_pw(self, pwsrc:Pairwise, pwdest:Pairwise)->np.ndarray: """ Performs a simulation on a pairwise comparison matrix object and returns the resulting priorities """ pwdest = self.sim_pw_fill(pwsrc, pwdest) mat = pwdest.matrix(self.username) rval = self.pwprioritycalc(mat) return rval def sim_pw_fill(self, pwsrc:Pairwise, pwdest:Pairwise=None)->Pairwise: """ Fills in the pairwise comparison structure of pwdest with noisy pairwise data from pwsrc. If pwdest is None, we create one first, then fill in. In either case, we return the pwdest object with new noisy data in it. """ if pwdest is None: pwdest = deepcopy(pwsrc) for user in pwsrc.usernames(): srcmat = pwsrc.matrix(user) destmat = pwdest.matrix(user) self.sim_pwmat_fill(srcmat, destmat) return pwdest def sim_direct_fill(self, directsrc:Direct, directdest:Direct=None)->Direct: """ Fills in the direct data structure of directdest with noisy data from directsrc. If directdest is None, we create on as a deep copy of directsrc, then fill in. In either case, we return the directdest object with new noisy data in it. """ if directdest is None: directdest = deepcopy(directsrc) for altpos in range(len(directdest)): orig = directsrc[altpos] newvote = self.directvote_random(orig) directdest.data[altpos] = newvote return directdest def sim_direct(self, directsrc:Direct, directdest:Direct=None)->np.ndarray: """ Simulates for direct data """ directdest = self.sim_direct_fill(directsrc, directdest) return directdest.priority() def sim_ahptree_fill(self, ahpsrc:AHPTree, ahpdest:AHPTree)->AHPTree: """ Fills in the ahp tree structure of ahpdest with noisy data from ahpsrc. If ahpdest is None, we create one as a deepcopy of ahpsrc, then fill in. In either case, we return the ahpdest object with new noisy data in it. 
""" if ahpdest is None: ahpdest = deepcopy(ahpsrc) self.sim_ahptreenode_fill(ahpsrc.root, ahpdest.root) return ahpdest def sim_ahptreenode_fill(self, nodesrc:AHPTreeNode, nodedest:AHPTreeNode)->AHPTreeNode: """ Fills in data in an AHPTree """ #Okay, first we fill in for the alt_prioritizer if nodesrc.alt_prioritizer is not None: self.sim_fill(nodesrc.alt_prioritizer, nodedest.alt_prioritizer) #Now wefill in the child prioritizer if nodesrc.child_prioritizer is not None: self.sim_fill(nodesrc.child_prioritizer, nodedest.child_prioritizer) #Now for each child, fill in for childsrc, childdest in zip(nodesrc.children, nodedest.children): self.sim_ahptreenode_fill(childsrc, childdest) #We are done, return the dest return nodedest def sim_ahptree(self, ahpsrc:AHPTree, ahpdest:AHPTree)->np.ndarray: """ Perform the actual simulation """ ahpdest = self.sim_ahptree_fill(ahpsrc, ahpdest) return ahpdest.priority() mc = MCAnp() pw = np.array([ [1, 1/2, 3], [2, 1, 5], [1/3, 1/5, 1] ]) rpw= mc.sim_pwmat_fill(pw) rpw [mc.sim_pwmat(pw) for i in range(20)] pwobj = Pairwise(alts=['alt '+str(i) for i in range(3)]) pwobj.vote_matrix(user_name='u1', val=pw)
_____no_output_____
MIT
scrap/MCAnpResearch.ipynb
georg-cantor/pyanp
Checking that the deep copy is actually a deep copy

For some reason deepcopy was not copying the matrix, so I had to override __deepcopy__ in Pairwise
pwobj.matrix('u1')
rpwobj = pwobj.__deepcopy__()
a = rpwobj
b = pwobj
a.df
display(a.df.loc['u1', 'Matrix'])
display(b.df.loc['u1', 'Matrix'])
display(a.matrix('u1') is b.matrix('u1'))
display(a.matrix('u1') == b.matrix('u1'))
_____no_output_____
MIT
scrap/MCAnpResearch.ipynb
georg-cantor/pyanp
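For illustration, here is a minimal sketch of the kind of override the check above is verifying. It uses a simplified stand-in class, not the actual pyanp `Pairwise` implementation, and only shows the idea of forcing the per-user matrices to be copied rather than shared.

```python
from copy import deepcopy
import numpy as np

class SimplePairwise:
    """Toy stand-in for a pairwise-comparison container (not the pyanp class)."""
    def __init__(self):
        self.matrices = {}  # username -> np.ndarray

    def __deepcopy__(self, memo):
        # Explicitly copy each matrix so the copy shares no arrays with the original
        rval = SimplePairwise()
        rval.matrices = {user: np.array(mat, copy=True) for user, mat in self.matrices.items()}
        return rval

orig = SimplePairwise()
orig.matrices['u1'] = np.array([[1.0, 2.0], [0.5, 1.0]])
clone = deepcopy(orig)
print(clone.matrices['u1'] is orig.matrices['u1'])  # False -> truly a deep copy
```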
Now let's try to simulate
[mc.sim_pw(pwobj, rpwobj) for i in range(20)]
pwobj.matrix('u1')
_____no_output_____
MIT
scrap/MCAnpResearch.ipynb
georg-cantor/pyanp
Try to simulate a direct data
dd = Direct(alt_names=['a1', 'a2', 'a3'])
dd.data[0] = 0.5
dd.data[1] = 0.3
dd.data[2] = 0.2
rdd = mc.sim_direct_fill(dd)
rdd.data
_____no_output_____
MIT
scrap/MCAnpResearch.ipynb
georg-cantor/pyanp
Simulate an ahptree
alts = ['alt '+str(i) for i in range(3)]
tree = AHPTree(alt_names=alts)
kids = ['crit '+str(i) for i in range(4)]
for kid in kids:
    tree.add_child(kid)
    node = tree.get_node(kid)
    direct = node.alt_prioritizer
    s = 0
    for alt in alts:
        direct[alt] = np.random.uniform()
        s += direct[alt]
    if s != 0:
        for alt in alts:
            direct[alt] /= s

tree.priority()
mc.sim_ahptree(tree, None)
tree.priority()
_____no_output_____
MIT
scrap/MCAnpResearch.ipynb
georg-cantor/pyanp
Lab 5

Data: _European Union lesbian, gay, bisexual and transgender survey (2012)_

Link to the data [here](https://www.kaggle.com/ruslankl/european-union-lgbt-survey-2012).

Context

The FRA (Fundamental Rights Agency) ran an online survey to identify how lesbian, gay, bisexual and transgender (LGBT) people living in the European Union and Croatia experience the fulfilment of their fundamental rights. The evidence produced by the survey will support the development of more effective laws and policies to fight discrimination, violence and harassment, improving equal treatment across society. The need for an EU-wide survey of this kind became evident after the publication in 2009 of the FRA's first report on homophobia and discrimination on grounds of sexual orientation or gender identity, which highlighted the absence of comparable data. The European Commission asked FRA to collect comparable data across the EU on this topic. FRA organised the data collection as an online survey covering all EU Member States and Croatia. Respondents were people aged 18 or over who identify as lesbian, gay, bisexual or transgender, answering anonymously. The survey was available online from April to July 2012 in the 23 official EU languages (except Irish) plus Catalan, Croatian, Luxembourgish, Russian and Turkish. In total, 93,079 LGBT people completed the survey. FRA's in-house experts designed the survey, which was implemented by Gallup, one of the market leaders in large-scale surveys. In addition, civil society organisations such as ILGA-Europe (European Region of the International Lesbian, Gay, Bisexual, Trans and Intersex Association) and Transgender Europe (TGEU) provided advice on how best to reach LGBT people.

You can find more information about the survey methodology in the [__EU LGBT survey technical report. Methodology, online survey, questionnaire and sample__](https://fra.europa.eu/sites/default/files/eu-lgbt-survey-technical-report_en.pdf).

Content

The dataset consists of 5 .csv files representing 5 blocks of questions: daily life, discrimination, violence and harassment, rights awareness, and transgender-specific questions.

The schema of all tables is identical:
* `CountryCode` - name of the country
* `subset` - Lesbian, Gay, Bisexual women, Bisexual men or Transgender (for the Transgender Specific Questions table the value is only Transgender)
* `question_code` - unique code ID for the question
* `question_label` - full question text
* `answer` - answer given
* `percentage`
* `notes` - [0]: small sample size; [1]: NA due to small sample size; [2]: missing value

In today's lab we will only use the daily-life questions, available in the file `LGBT_Survey_DailyLife.csv` inside the `data` folder.
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

daily_life_raw = pd.read_csv(os.path.join("..", "data", "LGBT_Survey_DailyLife.csv"))
daily_life_raw.head()
daily_life_raw.info()
daily_life_raw.describe(include="all").T

questions = (
    daily_life_raw.loc[:, ["question_code", "question_label"]]
    .drop_duplicates()
    .set_index("question_code")
    .squeeze()
)

for idx, value in questions.items():
    print(f"Question code {idx}:\n\n{value}\n\n")
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Data preprocessing

Did you notice that the `percentage` column is not numeric? That is because of the records with notes `[1]`, so we will remove them.
daily_life_raw.notes.unique()

daily_life = (
    daily_life_raw.query("notes != ' [1] '")
    .astype({"percentage": "int"})
    .drop(columns=["question_label", "notes"])
    .rename(columns={"CountryCode": "country"})
)
daily_life.head()
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Exercise 1 (1 pt)

To which data type (nominal, ordinal, discrete, continuous) does each column of the `daily_life` DataFrame correspond?

Recommendation: look at the unique values of each column.
daily_life.dtypes # FREE STYLE #
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
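As a hint that goes slightly beyond the template cell above (this sketch is not part of the original lab), one way to inspect the columns before classifying them is shown below; it assumes the `daily_life` DataFrame built earlier.

```python
# Count distinct values per column to get a feel for each variable
print(daily_life.nunique())

# Look at the actual unique values of the non-numeric columns
for col in ["country", "subset", "question_code", "answer"]:
    print(col, "->", daily_life[col].unique()[:10])

# Summary statistics for the numeric column
print(daily_life["percentage"].describe())
```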
__Answer:__
* `country`: 
* `subset`: 
* `question_code`: 
* `answer`: 
* `percentage`: 

Exercise 2 (1 pt)

Create a new dataframe `df1` that only contains records from Belgium, for the question with code `b1_b`, and where the answer was _Very widespread_.

Now, create a vertical bar chart with matplotlib's `bar` function to show the percentage of answers for each group. The figure must be 10 x 6 and the bars must be green.
print(f"Question b1_b:\n\n{questions['b1_b']}") df1 = # FIX ME # df1 x = # FIX ME # y = # FIX ME # fig = plt.figure(# FIX ME #) plt# FIX ME # plt.show()
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Exercise 3 (1 pt)

Regarding the question with code `g5`, what is the average percentage for each answer value (note that the answers to this question are numeric)?
print(f"Question g5:\n\n{questions['g5']}")
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Create a DataFrame called `df2` such that:
1. It only contains records for the question with code `g5`.
2. The `answer` column is converted to `int`.
3. It is grouped by country and answer, computing the mean of the percentage column (use `agg`).
4. The index is reset.
df2 = (
    # FIX ME #
)
df2
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Create a DataFrame called `df2_mean` such that:
1. `df2` is grouped by answer and the mean of the percentage is computed.
2. The index is reset.
df2_mean = df2.# FIX ME #
df2_mean.head()
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Now, plot the following:
1. A figure with two columns, figure size 15 x 12, sharing the x axis and y axis. Use `plt.subplots`.
2. For the first _Axes_ (`ax1`), make a _scatter plot_ where the x axis holds the answer values of `df2` and the y axis holds the percentages of `df2`. Remember that in this case these are per-country averages, so there will be more than 10 points in the plot.
3. For the second _Axes_ (`ax2`), make a horizontal bar chart where the x axis holds the answer values of `df2_mean` and the y axis holds the percentages of `df2_mean`.
x = # FIX ME #
y = # FIX ME #
x_mean = # FIX ME #
y_mean = # FIX ME #

fig, (ax1, ax2) = plt.subplots(# FIX ME #)

ax1.# FIX ME #
ax1.grid(alpha=0.3)

ax2.# FIX ME #
ax2.grid(alpha=0.3)

fig.show()
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Exercise 4 (1 pt)

Regarding the same question `g5`, how are the percentages distributed on average for each country-group pair?

We will use the heatmap presented in class; to do so we need to process the data a bit to build the elements it needs.

Create a DataFrame called `df3` such that:
1. It only contains records for the question with code `g5`.
2. The `answer` column is converted to `int`.
3. It is grouped by country and subset, computing the mean of the percentage column (use `agg`).
4. The index is reset.
5. It is pivoted so that the index holds the countries, the columns hold the groups, and the values are the mean percentages.
6. Null values are filled with zero. Use `fillna`.
## Code from: # https://matplotlib.org/3.1.1/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py import numpy as np import matplotlib import matplotlib.pyplot as plt def heatmap(data, row_labels, col_labels, ax=None, cbar_kw={}, cbarlabel="", **kwargs): """ Create a heatmap from a numpy array and two lists of labels. Parameters ---------- data A 2D numpy array of shape (N, M). row_labels A list or array of length N with the labels for the rows. col_labels A list or array of length M with the labels for the columns. ax A `matplotlib.axes.Axes` instance to which the heatmap is plotted. If not provided, use current axes or create a new one. Optional. cbar_kw A dictionary with arguments to `matplotlib.Figure.colorbar`. Optional. cbarlabel The label for the colorbar. Optional. **kwargs All other arguments are forwarded to `imshow`. """ if not ax: ax = plt.gca() # Plot the heatmap im = ax.imshow(data, **kwargs) # Create colorbar cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw) cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom") # We want to show all ticks... ax.set_xticks(np.arange(data.shape[1])) ax.set_yticks(np.arange(data.shape[0])) # ... and label them with the respective list entries. ax.set_xticklabels(col_labels) ax.set_yticklabels(row_labels) # Let the horizontal axes labeling appear on top. ax.tick_params(top=True, bottom=False, labeltop=True, labelbottom=False) # Rotate the tick labels and set their alignment. plt.setp(ax.get_xticklabels(), rotation=-30, ha="right", rotation_mode="anchor") # Turn spines off and create white grid. for edge, spine in ax.spines.items(): spine.set_visible(False) ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True) ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True) ax.grid(which="minor", color="w", linestyle='-', linewidth=3) ax.tick_params(which="minor", bottom=False, left=False) return im, cbar def annotate_heatmap(im, data=None, valfmt="{x:.2f}", textcolors=["black", "white"], threshold=None, **textkw): """ A function to annotate a heatmap. Parameters ---------- im The AxesImage to be labeled. data Data used to annotate. If None, the image's data is used. Optional. valfmt The format of the annotations inside the heatmap. This should either use the string format method, e.g. "$ {x:.2f}", or be a `matplotlib.ticker.Formatter`. Optional. textcolors A list or array of two color specifications. The first is used for values below a threshold, the second for those above. Optional. threshold Value in data units according to which the colors from textcolors are applied. If None (the default) uses the middle of the colormap as separation. Optional. **kwargs All other arguments are forwarded to each call to `text` used to create the text labels. """ if not isinstance(data, (list, np.ndarray)): data = im.get_array() # Normalize the threshold to the images color range. if threshold is not None: threshold = im.norm(threshold) else: threshold = im.norm(data.max())/2. # Set default alignment to center, but allow it to be # overwritten by textkw. kw = dict(horizontalalignment="center", verticalalignment="center") kw.update(textkw) # Get the formatter in case a string is supplied if isinstance(valfmt, str): valfmt = matplotlib.ticker.StrMethodFormatter(valfmt) # Loop over the data and create a `Text` for each "pixel". # Change the text's color depending on the data. 
texts = [] for i in range(data.shape[0]): for j in range(data.shape[1]): kw.update(color=textcolors[int(im.norm(data[i, j]) > threshold)]) text = im.axes.text(j, i, valfmt(data[i, j], None), **kw) texts.append(text) return texts df3 = ( # FIX ME # ) df3.head()
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Finally, the ingredients for the heatmap are:
countries = df3.index.tolist()
subsets = df3.columns.tolist()
answers = df3.values
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
The heatmap must be built as follows:
* Figure size: 15 x 20
* cmap = "YlGn"
* cbarlabel = "Porcentaje promedio (%)"
* Annotation precision: float with two decimals.
fig, ax = plt.subplots(# FIX ME #)
im, cbar = heatmap(# FIX ME #)
texts = annotate_heatmap(# FIX ME #)
fig.tight_layout()
plt.show()
_____no_output_____
BSD-3-Clause
labs/lab05.ipynb
aoguedao/mat281_2020S2
Talktorial 1 Compound data acquisition (ChEMBL) Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin Paula Junge and Svetlana Leng Aim of this talktorialWe learn how to extract data from ChEMBL:* Find ligands which were tested on a certain target* Filter by available bioactivity data* Calculate pIC50 values* Merge dataframes and draw extracted molecules Learning goals Theory* ChEMBL database * ChEMBL web services * ChEMBL webresource client* Compound activity measures * IC50 * pIC50 Practical Goal: Get list of compounds with bioactivity data for a given target* Connect to ChEMBL database* Get target data (EGFR kinase)* Bioactivity data * Download and filter bioactivities * Clean and convert* Compound data * Get list of compounds * Prepare output data* Output * Draw molecules with highest pIC50 * Write output file References* ChEMBL bioactivity database (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/)* ChEMBL web services: Nucleic Acids Res. (2015), 43, 612-620 (https://academic.oup.com/nar/article/43/W1/W612/2467881) * ChEMBL webrescource client GitHub (https://github.com/chembl/chembl_webresource_client)* myChEMBL webservices version 2.x (https://github.com/chembl/mychembl/blob/master/ipython_notebooks/09_myChEMBL_web_services.ipynb)* ChEMBL web-interface (https://www.ebi.ac.uk/chembl/)* EBI-RDF platform (https://www.ncbi.nlm.nih.gov/pubmed/24413672)* IC50 and pIC50 (https://en.wikipedia.org/wiki/IC50)* UniProt website (https://www.uniprot.org/) _____________________________________________________________________________________________________________________ Theory ChEMBL database* Open large-scale bioactivity database* **Current data content (as of 10.2018):** * \>1.8 million distinct compound structures * \>15 million activity values from 1 million assays * Assays are mapped to ∼12 000 targets* **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ...* ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL web services](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/B5) ChEMBL web services* RESTful web service* ChEMBL web service version 2.x resource schema: [![ChEMBL web service schema](images/chembl_webservices_schema_diagram.jpg)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/)*Figure 1:* "ChEMBL web service schema diagram. The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`."Figure and description taken from: [Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881). 
ChEMBL webresource client* Python client library for accessing ChEMBL data* Handles interaction with the HTTPS protocol* Lazy evaluation of results -> reduced number of network requests Compound activity measures IC50 * [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50)* Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half[](https://commons.wikimedia.org/wiki/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png)*Figure 2:* Visual demonstration of how to derive an IC50 value: Arrange data with inhibition on vertical axis and log(concentration) on horizontal axis; then identify max and min inhibition; then the IC50 is the concentration at which the curve passes through the 50% inhibition level. pIC50* To facilitate the comparison of IC50 values, we define pIC50 values on a logarithmic scale, such that $ pIC_{50} = -log_{10}(IC_{50}) $ where $ IC_{50}$ is specified in units of M.* Higher pIC50 values indicate exponentially greater potency of the drug* pIC50 is given in terms of molar concentration (mol/L or M) * IC50 should be specified in M to convert to pIC50 * For nM: $pIC_{50} = -log_{10}(IC_{50}*10^{-9})= 9-log_{10}(IC_{50}) $ Besides, IC50 and pIC50, other bioactivity measures are used, such as the equilibrium constant [KI](https://en.wikipedia.org/wiki/Equilibrium_constant) and the half maximal effective concentration [EC50](https://en.wikipedia.org/wiki/EC50). PracticalIn the following, we want to download all molecules that have been tested against our target of interest, the EGFR kinase. Connect to ChEMBL database First, the ChEMBL webresource client as well as other python libraries are imported.
from chembl_webresource_client.new_client import new_client import pandas as pd import math from rdkit.Chem import PandasTools
/home/andrea/anaconda2/envs/cadd-py36/lib/python3.6/site-packages/grequests.py:21: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.contrib.pyopenssl (/home/andrea/anaconda2/envs/cadd-py36/lib/python3.6/site-packages/urllib3/contrib/pyopenssl.py)', 'urllib3.util (/home/andrea/anaconda2/envs/cadd-py36/lib/python3.6/site-packages/urllib3/util/__init__.py)']. curious_george.patch_all(thread=False, select=False)
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Create resource objects for API access.
targets = new_client.target compounds = new_client.molecule bioactivities = new_client.activity
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Target data
* Get UniProt-ID (http://www.uniprot.org/uniprot/P00533) of the target of interest (EGFR kinase) from UniProt website (https://www.uniprot.org/)
* Use UniProt-ID to get target information
* Select a different UniProt-ID if you are interested in another target
uniprot_id = 'P00533'

# Get target information from ChEMBL but restrict to specified values only
target_P00533 = targets.get(target_components__accession=uniprot_id) \
                       .only('target_chembl_id', 'organism', 'pref_name', 'target_type')
print(type(target_P00533))
pd.DataFrame.from_records(target_P00533)
<class 'chembl_webresource_client.query_set.QuerySet'>
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
After checking the entries, we select the first entry as our target of interest, `CHEMBL203`: it is a single protein and represents the human Epidermal growth factor receptor (EGFR, also named erbB1).
target = target_P00533[0]
target
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Save selected ChEMBL-ID.
chembl_id = target['target_chembl_id']
chembl_id
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Bioactivity data

Now, we want to query bioactivity data for the target of interest.

Download and filter bioactivities for the target

In this step, we download and filter the bioactivity data and only consider
* human proteins
* bioactivity type IC50
* exact measurements (relation '=')
* binding data (assay type 'B')
bioact = bioactivities.filter(target_chembl_id = chembl_id) \
                      .filter(type = 'IC50') \
                      .filter(relation = '=') \
                      .filter(assay_type = 'B') \
                      .only('activity_id','assay_chembl_id', 'assay_description', 'assay_type', \
                            'molecule_chembl_id', 'type', 'units', 'relation', 'value', \
                            'target_chembl_id', 'target_organism')
len(bioact), len(bioact[0]), type(bioact), type(bioact[0])
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
If you experience difficulties querying the ChEMBL database, we provide here a file containing the results for the query in the previous cell (11 April 2019). We do this using the Python package pickle, which serializes Python objects so they can be saved to a file and loaded into a program again later on. (Learn more about object serialization on [DataCamp](https://www.datacamp.com/community/tutorials/pickle-python-tutorial).)

You can load the "pickled" compounds by uncommenting and running the next cell.
#import pickle
#bioact = pickle.load(open("../data/T1/EGFR_compounds_from_chembl_query_20190411.p", "rb"))
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
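For completeness, a hedged sketch of how such a pickle file could be written in the first place (this cell is not part of the original talktorial; materializing the lazy query result with `list()` before dumping is an assumption):

```python
import pickle

# Serialize the queried bioactivities so the ChEMBL query does not have to be repeated
with open("../data/T1/EGFR_compounds_from_chembl_query_20190411.p", "wb") as f:
    pickle.dump(list(bioact), f)
```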
Clean and convert bioactivity data

The data is stored as a list of dictionaries.
bioact[0]
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Convert to pandas dataframe (this might take some minutes).
bioact_df = pd.DataFrame.from_records(bioact)
bioact_df.head(10)
bioact_df.shape
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Delete entries with missing values.
bioact_df = bioact_df.dropna(axis=0, how='any')
bioact_df.shape
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Delete duplicates: sometimes the same molecule (`molecule_chembl_id`) has been tested more than once; in this case, we only keep the first one.
bioact_df = bioact_df.drop_duplicates('molecule_chembl_id', keep='first')
bioact_df.shape
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
We would like to only keep bioactivity data measured in molar units. The following print statements will help us to see what units are contained and to control what is kept after dropping some rows.
print(bioact_df.units.unique())
bioact_df = bioact_df.drop(bioact_df.index[~bioact_df.units.str.contains('M')])
print(bioact_df.units.unique())
bioact_df.shape
['uM' 'nM' 'M' "10'1 ug/ml" 'ug ml-1' "10'-1microM" "10'1 uM" "10'-1 ug/ml" "10'-2 ug/ml" "10'2 uM" '/uM' "10'-6g/ml" 'mM' 'umol/L' 'nmol/L'] ['uM' 'nM' 'M' "10'-1microM" "10'1 uM" "10'2 uM" '/uM' 'mM']
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Since we deleted some rows but want to iterate over the index later, we reset the index to be continuous.
bioact_df = bioact_df.reset_index(drop=True)
bioact_df.head()
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
To allow further comparison of the IC50 values, we convert all units to nM. First, we write a helper function, which can be applied to the whole dataframe in the next step.
def convert_to_NM(unit, bioactivity):
    # c=0
    # for i, unit in enumerate(bioact_df.units):
    if unit != "nM":
        if unit == "pM":
            value = float(bioactivity)/1000
        elif unit == "10'-11M":
            value = float(bioactivity)/100
        elif unit == "10'-10M":
            value = float(bioactivity)/10
        elif unit == "10'-8M":
            value = float(bioactivity)*10
        elif unit == "10'-1microM" or unit == "10'-7M":
            value = float(bioactivity)*100
        elif unit == "uM" or unit == "/uM" or unit == "10'-6M":
            value = float(bioactivity)*1000
        elif unit == "10'1 uM":
            value = float(bioactivity)*10000
        elif unit == "10'2 uM":
            value = float(bioactivity)*100000
        elif unit == "mM":
            value = float(bioactivity)*1000000
        elif unit == "M":
            value = float(bioactivity)*1000000000
        else:
            print('unit not recognized...', unit)
        return value
    else:
        return bioactivity

bioactivity_nM = []
for i, row in bioact_df.iterrows():
    bioact_nM = convert_to_NM(row['units'], row['value'])
    bioactivity_nM.append(bioact_nM)

bioact_df['value'] = bioactivity_nM
bioact_df['units'] = 'nM'
bioact_df.head()
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
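The chain of `elif` branches in the helper above can also be expressed as a lookup table of conversion factors, which is easier to extend with new units. This is only a sketch of an alternative, not the talktorial's code; the factor values mirror the branches above.

```python
# Conversion factors to nM for each unit string encountered in the data
UNIT_TO_NM = {
    "pM": 1e-3, "10'-11M": 1e-2, "10'-10M": 1e-1, "nM": 1,
    "10'-8M": 1e1, "10'-1microM": 1e2, "10'-7M": 1e2,
    "uM": 1e3, "/uM": 1e3, "10'-6M": 1e3, "10'1 uM": 1e4,
    "10'2 uM": 1e5, "mM": 1e6, "M": 1e9,
}

def convert_to_nm_table(unit, bioactivity):
    factor = UNIT_TO_NM.get(unit)
    if factor is None:
        print('unit not recognized...', unit)
        return None
    return float(bioactivity) * factor
```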
Compound data

We have a data frame containing all molecules tested (with the respective measure) against EGFR. Now, we want to get the molecules that are stored behind the respective ChEMBL IDs.

Get list of compounds

Let's have a look at the compounds from ChEMBL we have defined bioactivity data for. First, we retrieve ChEMBL ID and structures for the compounds with desired bioactivity data.
cmpd_id_list = list(bioact_df['molecule_chembl_id'])
compound_list = compounds.filter(molecule_chembl_id__in = cmpd_id_list) \
                         .only('molecule_chembl_id','molecule_structures')
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Then, we convert the list to a pandas dataframe and delete duplicates (again, the pandas from_records function might take some time).
compound_df = pd.DataFrame.from_records(compound_list)
compound_df = compound_df.drop_duplicates('molecule_chembl_id', keep='first')
print(compound_df.shape)
print(bioact_df.shape)
compound_df.head()
(4780, 2) (4780, 11)
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
for i, cmpd in compound_df.iterrows():
    if compound_df.loc[i]['molecule_structures'] != None:
        compound_df.loc[i]['molecule_structures'] = cmpd['molecule_structures']['canonical_smiles']

print(compound_df.shape)
(4780, 2)
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Prepare output data

Merge values of interest in one dataframe on ChEMBL-IDs:
* ChEMBL-IDs
* SMILES
* units
* IC50
output_df = pd.merge(bioact_df[['molecule_chembl_id','units','value']], compound_df, on='molecule_chembl_id')
print(output_df.shape)
output_df.head()
(4780, 4)
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
For distinct column names, we rename IC50 and SMILES columns.
output_df = output_df.rename(columns={'molecule_structures':'smiles', 'value':'IC50'})
output_df.shape
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
If we do not have a SMILES representation of a compound, we cannot use it further in the following talktorials. Therefore, we delete compounds without an entry in the SMILES column.
output_df = output_df[~output_df['smiles'].isnull()]
print(output_df.shape)
output_df.head()
(4771, 4)
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
In the next cell, you see that the low IC50 values are difficult to read. Therefore, we prefer to convert the IC50 values to pIC50.
output_df = output_df.reset_index(drop=True)
ic50 = output_df.IC50.astype(float)
print(len(ic50))
print(ic50.head(10))

# Convert IC50 to pIC50 and add pIC50 column:
pIC50 = pd.Series()
i = 0
while i < len(output_df.IC50):
    value = 9 - math.log10(ic50[i])  # pIC50=-log10(IC50 mol/l) --> for nM: -log10(IC50*10**-9)= 9-log10(IC50)
    if value < 0:
        print("Negative pIC50 value at index"+str(i))
    pIC50.at[i] = value
    i += 1

output_df['pIC50'] = pIC50
output_df.head()
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
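To make the conversion concrete: an IC50 of 100 nM corresponds to pIC50 = 9 - log10(100) = 7. The loop above can also be written as one vectorized step; this is only a side-note sketch and assumes numpy is imported (the original notebook imports `math` but not `numpy`):

```python
import numpy as np

# Vectorized equivalent of the while loop above: pIC50 = 9 - log10(IC50 in nM)
output_df["pIC50"] = 9 - np.log10(output_df["IC50"].astype(float))
output_df.head()
```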
Collected bioactivity data for EGFR

Let's have a look at our collected data set.

Draw molecules

In the next steps, we add a molecule column to our dataframe and look at the structures of the molecules with the highest pIC50 values.
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol='smiles')
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Sort molecules by pIC50.
output_df.sort_values(by="pIC50", ascending=False, inplace=True)
output_df.reset_index(drop=True, inplace=True)
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Show the most active molecules = molecules with the highest pIC50 values.
output_df.drop("smiles", axis=1).head()
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Write output file

To use the data for the following talktorials, we save the data as a csv file. Note that it is advisable to drop the molecule column (it only contains an image of the molecules) when saving the data.
output_df.drop("ROMol", axis=1).to_csv("../data/T1/EGFR_compounds.csv")
_____no_output_____
CC-BY-4.0
talktorials/1_ChEMBL/T1_ChEMBL.ipynb
speleo3/TeachOpenCADD
Making ML Applications with Gradio

[Gradio](https://www.gradio.app/) is a python library that provides web interfaces for your models. This library is very high-level, making it the easiest for beginners to learn. Here we use a dataset called [EMNIST](https://pytorch.org/vision/stable/datasets.html#emnist), which extends the MNIST dataset (images of handwritten digits) with images of capital and lowercase letters, for a total of 62 classes.

Using Gradio, an interface is created at the bottom of this notebook that uses the model trained here to accept our drawings of letters or digits and predict their class.

Importing libraries and Installing Gradio using PIP

Google does not have Gradio automatically installed on their Google Colab machines, so it is necessary to install it on the specific machine you are using right now. If you choose another runtime machine, it is necessary to repeat this step.

**Also, please run this code with a GPU**
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Importing PyTorch
import torch
import torch.nn as nn

# Importing torchvision for dataset
import torchvision
import torchvision.transforms as transforms

# Installing gradio using PIP
!pip install gradio
Collecting gradio [?25l Downloading https://files.pythonhosted.org/packages/e4/c6/19d6941437fb56db775b00c0181af81e539c42369bc79c664001d2272ccb/gradio-2.0.5-py3-none-any.whl (1.6MB)  |████████████████████████████████| 1.6MB 5.2MB/s [?25hRequirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from gradio) (3.2.2) Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from gradio) (1.1.5) Collecting analytics-python Downloading https://files.pythonhosted.org/packages/30/81/2f447982f8d5dec5b56c10ca9ac53e5de2b2e9e2bdf7e091a05731f21379/analytics_python-1.3.1-py2.py3-none-any.whl Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gradio) (1.4.1) Collecting paramiko [?25l Downloading https://files.pythonhosted.org/packages/95/19/124e9287b43e6ff3ebb9cdea3e5e8e88475a873c05ccdf8b7e20d2c4201e/paramiko-2.7.2-py2.py3-none-any.whl (206kB)  |████████████████████████████████| 215kB 22.1MB/s [?25hCollecting pycryptodome [?25l Downloading https://files.pythonhosted.org/packages/ad/16/9627ab0493894a11c68e46000dbcc82f578c8ff06bc2980dcd016aea9bd3/pycryptodome-3.10.1-cp35-abi3-manylinux2010_x86_64.whl (1.9MB)  |████████████████████████████████| 1.9MB 22.7MB/s [?25hCollecting Flask-Cors>=3.0.8 Downloading https://files.pythonhosted.org/packages/db/84/901e700de86604b1c4ef4b57110d4e947c218b9997adf5d38fa7da493bce/Flask_Cors-3.0.10-py2.py3-none-any.whl Collecting flask-cachebuster Downloading https://files.pythonhosted.org/packages/74/47/f3e1fedfaad965c81c2f17234636d72f71450f1b4522ca26d2b7eb4a0a74/Flask-CacheBuster-1.0.0.tar.gz Requirement already satisfied: Flask>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from gradio) (1.1.4) Collecting markdown2 Downloading https://files.pythonhosted.org/packages/5d/be/3924cc1c0e12030b5225de2b4521f1dc729730773861475de26be64a0d2b/markdown2-2.4.0-py2.py3-none-any.whl Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from gradio) (1.19.5) Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gradio) (2.23.0) Collecting ffmpy Downloading https://files.pythonhosted.org/packages/bf/e2/947df4b3d666bfdd2b0c6355d215c45d2d40f929451cb29a8a2995b29788/ffmpy-0.3.0.tar.gz Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from gradio) (7.1.2) Collecting Flask-Login Downloading https://files.pythonhosted.org/packages/2b/83/ac5bf3279f969704fc1e63f050c50e10985e50fd340e6069ec7e09df5442/Flask_Login-0.5.0-py2.py3-none-any.whl Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->gradio) (1.3.1) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->gradio) (2.4.7) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->gradio) (0.10.0) Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->gradio) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->gradio) (2018.9) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from analytics-python->gradio) (1.15.0) Collecting backoff==1.10.0 Downloading https://files.pythonhosted.org/packages/f0/32/c5dd4f4b0746e9ec05ace2a5045c1fc375ae67ee94355344ad6c7005fd87/backoff-1.10.0-py2.py3-none-any.whl Collecting monotonic>=1.5 Downloading 
https://files.pythonhosted.org/packages/9a/67/7e8406a29b6c45be7af7740456f7f37025f0506ae2e05fb9009a53946860/monotonic-1.6-py2.py3-none-any.whl Collecting pynacl>=1.0.1 [?25l Downloading https://files.pythonhosted.org/packages/9d/57/2f5e6226a674b2bcb6db531e8b383079b678df5b10cdaa610d6cf20d77ba/PyNaCl-1.4.0-cp35-abi3-manylinux1_x86_64.whl (961kB)  |████████████████████████████████| 962kB 28.0MB/s [?25hCollecting cryptography>=2.5 [?25l Downloading https://files.pythonhosted.org/packages/b2/26/7af637e6a7e87258b963f1731c5982fb31cd507f0d90d91836e446955d02/cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2MB)  |████████████████████████████████| 3.2MB 37.5MB/s [?25hCollecting bcrypt>=3.1.3 [?25l Downloading https://files.pythonhosted.org/packages/26/70/6d218afbe4c73538053c1016dd631e8f25fffc10cd01f5c272d7acf3c03d/bcrypt-3.2.0-cp36-abi3-manylinux2010_x86_64.whl (63kB)  |████████████████████████████████| 71kB 9.2MB/s [?25hRequirement already satisfied: Werkzeug<2.0,>=0.15 in /usr/local/lib/python3.7/dist-packages (from Flask>=1.1.1->gradio) (1.0.1) Requirement already satisfied: itsdangerous<2.0,>=0.24 in /usr/local/lib/python3.7/dist-packages (from Flask>=1.1.1->gradio) (1.1.0) Requirement already satisfied: click<8.0,>=5.1 in /usr/local/lib/python3.7/dist-packages (from Flask>=1.1.1->gradio) (7.1.2) Requirement already satisfied: Jinja2<3.0,>=2.10.1 in /usr/local/lib/python3.7/dist-packages (from Flask>=1.1.1->gradio) (2.11.3) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gradio) (1.24.3) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gradio) (2.10) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gradio) (2021.5.30) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gradio) (3.0.4) Requirement already satisfied: cffi>=1.4.1 in /usr/local/lib/python3.7/dist-packages (from pynacl>=1.0.1->paramiko->gradio) (1.14.5) Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from Jinja2<3.0,>=2.10.1->Flask>=1.1.1->gradio) (2.0.1) Requirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.4.1->pynacl>=1.0.1->paramiko->gradio) (2.20) Building wheels for collected packages: flask-cachebuster, ffmpy Building wheel for flask-cachebuster (setup.py) ... [?25l[?25hdone Created wheel for flask-cachebuster: filename=Flask_CacheBuster-1.0.0-cp37-none-any.whl size=3372 sha256=2e68e88b4d90e766446a679a8b3c199673be350c949933099f14b55957e7b658 Stored in directory: /root/.cache/pip/wheels/9f/fc/a7/ab5712c3ace9a8f97276465cc2937316ab8063c1fea488ea77 Building wheel for ffmpy (setup.py) ... 
[?25l[?25hdone Created wheel for ffmpy: filename=ffmpy-0.3.0-cp37-none-any.whl size=4710 sha256=9da4ad5c3f5cf80dbda1d5ddde11d3b0b7388a9fc462dc27b8e4d3eba882ae2c Stored in directory: /root/.cache/pip/wheels/cc/ac/c4/bef572cb7e52bfca170046f567e64858632daf77e0f34e5a74 Successfully built flask-cachebuster ffmpy Installing collected packages: backoff, monotonic, analytics-python, pynacl, cryptography, bcrypt, paramiko, pycryptodome, Flask-Cors, flask-cachebuster, markdown2, ffmpy, Flask-Login, gradio Successfully installed Flask-Cors-3.0.10 Flask-Login-0.5.0 analytics-python-1.3.1 backoff-1.10.0 bcrypt-3.2.0 cryptography-3.4.7 ffmpy-0.3.0 flask-cachebuster-1.0.0 gradio-2.0.5 markdown2-2.4.0 monotonic-1.6 paramiko-2.7.2 pycryptodome-3.10.1 pynacl-1.4.0
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Downloading and Preparing EMNIST Dataset

**Note:** Even though the images in the EMNIST dataset are 28x28 just like the regular MNIST dataset, some transforms are still needed for EMNIST. Without them, the images are rotated 90° counter-clockwise and flipped vertically. To undo these two issues, we first rotate each image 90° counter-clockwise and then flip it horizontally.

Here is the image before processing:

Here is the image after processing:
# Getting Dataset
!mkdir EMNIST
root = '/content/EMNIST'

# Creating Transforms
transforms = transforms.Compose([
    # Rotating image 90 degrees counter-clockwise
    transforms.RandomRotation((-90,-90)),
    # Flipping images horizontally
    transforms.RandomHorizontalFlip(p=1),
    # Converting images to tensor
    transforms.ToTensor()
])

# Getting dataset
training_dataset = torchvision.datasets.EMNIST(root, split='byclass', train=True, download=True, transform=transforms)
test_dataset = torchvision.datasets.EMNIST(root, split='byclass', train=False, download=True, transform=transforms)

# Loading Dataset into dataloaders
batch_size = 2048
training_dataloader = torch.utils.data.DataLoader(training_dataset, batch_size=batch_size, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Getting shapes of dataset
print('Shape of the training dataset:', training_dataset.data.shape)
print('Shape of the test dataset:', test_dataset.data.shape)

# Getting reverted class_to_idx dictionary to get classes by idx
idx_to_class = {val: key for key, val in training_dataset.class_to_idx.items()}

# Plotting 5 images with classes
plt.figure(figsize=(10,2))
for i in range(5):
    plt.subplot(1,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(training_dataset[i][0].squeeze().numpy(), cmap=plt.cm.binary)
    plt.xlabel(idx_to_class[training_dataset[i][1]])
Downloading and extracting zip archive Downloading https://www.itl.nist.gov/iaui/vip/cs_links/EMNIST/gzip.zip to /content/EMNIST/EMNIST/raw/emnist.zip
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Building the Model
class Neural_Network(nn.Module):
    # Constructor
    def __init__(self, num_classes):
        super(Neural_Network, self).__init__()
        # Defining Fully-Connected Layers
        self.fc1 = nn.Linear(28*28, 392)  # 28*28 since each image is 28*28
        self.fc2 = nn.Linear(392, 196)
        self.fc3 = nn.Linear(196, 98)
        self.fc4 = nn.Linear(98, num_classes)
        # Activation function
        self.relu = nn.ReLU()

    def forward(self, x):
        # Need to flatten each image in the batch
        x = x.flatten(start_dim=1)
        # Input it into the Fully connected layers
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.relu(self.fc3(x))
        x = self.fc4(x)
        return x

# Getting number of classes
num_classes = len(idx_to_class)

model = Neural_Network(num_classes)
print(model)
Neural_Network( (fc1): Linear(in_features=784, out_features=392, bias=True) (fc2): Linear(in_features=392, out_features=196, bias=True) (fc3): Linear(in_features=196, out_features=98, bias=True) (fc4): Linear(in_features=98, out_features=62, bias=True) (relu): ReLU() )
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Defining Loss Function and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
_____no_output_____
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Moving model to GPU

If you have not changed the runtime type to a GPU, please do so now. This helps with the speed of training.
# Use GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"

# Moving model to use GPU
model.to(device)
_____no_output_____
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Training the Model
# Function that returns a torch tensor with predictions to compare with labels
def get_preds_from_logits(logits):
    # Using softmax to get an array that sums to 1, and then getting the index with the highest value
    return torch.nn.functional.softmax(logits, dim=1).argmax(dim=1)

epochs = 10
train_losses = []
train_accuracies = []

for epoch in range(1, epochs+1):
    train_loss = 0.0
    train_counts = 0

    ###################
    # train the model #
    ###################
    # Setting model to train mode
    model.train()
    for images, labels in training_dataloader:
        # Moving data to GPU if available
        images, labels = images.to(device), labels.to(device)

        # Setting all gradients to zero
        optimizer.zero_grad()

        # Calculate Output
        output = model(images)

        # Calculate Loss
        loss = criterion(output, labels)

        # Calculate Gradients
        loss.backward()

        # Perform Gradient Descent Step
        optimizer.step()

        # Saving loss
        train_loss += loss.item()

        # Get Predictions
        train_preds = get_preds_from_logits(output)

        # Saving number of right predictions for accuracy
        train_counts += train_preds.eq(labels).sum().item()

    # Averaging and Saving Losses
    train_loss /= len(training_dataset)
    train_losses.append(train_loss)

    # Getting accuracies and saving them
    train_acc = train_counts/len(training_dataset)
    train_accuracies.append(train_acc)

    print('Epoch: {} \tTraining Loss: {:.6f} \tTraining Accuracy: {:.2f}%'.format(epoch, train_loss, train_acc*100))

plt.plot(train_losses)
plt.xlabel('epoch')
plt.ylabel('Cross-Entropy Loss')
plt.title('Training Loss')
plt.show()

plt.plot(train_accuracies)
plt.xlabel('epoch')
plt.ylabel('Accuracy')
plt.title('Training Accuracy')
plt.show()
_____no_output_____
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Evaluating the model
Here we will display the test loss and accuracy and examples of images that were misclassified.
test_loss = 0.0 test_counts = 0 # Setting model to evaluation mode, no parameters will change model.eval() for images, labels in test_dataloader: # Moving to GPU if available images, labels = images.to(device), labels.to(device) # Calculate Output output = model(images) # Calculate Loss loss = criterion(output, labels) # Saving loss test_loss += loss.item() # Get Predictions test_preds = get_preds_from_logits(output) # Saving number of right predictions for accuracy test_counts += test_preds.eq(labels).sum().item() # Calculating test accuracy test_acc = test_counts/len(test_dataset) print('Test Loss: {:.6f} \tTest Accuracy: {:.2f}%'.format(test_loss, test_acc*100)) import torchvision.transforms as transforms # Have to another set of transforms to rotate and flip testing data test_transforms = transforms.Compose([ # Rotating image 90 degrees counter-clockwise transforms.RandomRotation((-90,-90)), # Flipping images horizontally transforms.RandomHorizontalFlip(p=1) ]) # Transforming the data and normalizing them test_images = test_transforms(test_dataset.data).to(device)/255 # Getting Predictions predictions = get_preds_from_logits(model(test_images)) # Getting Labels test_labels = test_dataset.targets.to(device) # Getting misclassified booleans correct_bools = test_labels.eq(predictions) misclassified_indices = [] for i in range(len(correct_bools)): if correct_bools[i] == False: misclassified_indices.append(i) # Plotting 5 misclassified images plt.figure(figsize=(10,2)) for i in range(5): idx = misclassified_indices[i] plt.subplot(1,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(test_images[idx].squeeze().cpu().numpy(), cmap=plt.cm.binary) true_label = idx_to_class[test_labels[idx].item()] pred_label = idx_to_class[predictions[idx].item()] plt.xlabel(f'True: {true_label}, Pred: {pred_label}')
_____no_output_____
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
How to use Gradio
There are three parts to using Gradio:
1. Define a function that takes input and returns your model's output.
2. Define what type of input the interface will use.
3. Define what type of output the interface will give.

The function `recognize_image` takes a 28x28 image that is not yet normalized and returns a dictionary with the keys being the classes and the values being the probabilities for that class.

The class [`gradio.inputs.Image`](https://www.gradio.app/docsi_image) is used as the input; it provides a window in the Gradio interface, and there are many customizations you can provide. These are some of the parameters:
1. shape - (width, height) shape to crop and resize image to; if None, matches input image size.
2. image_mode - "RGB" if color, or "L" if black and white.
3. invert_colors - whether to invert the image as a preprocessing step.
4. source - Source of image. "upload" creates a box where the user can drop an image file, "webcam" allows the user to take a snapshot from their webcam, "canvas" defaults to a white image that can be edited and drawn upon with tools.

The class [gradio.outputs.Label](https://www.gradio.app/docso_label) is used as the output, which provides probabilities to the interface for the purpose of displaying them. These are the parameters:
1. num_top_classes - number of most confident classes to show.
2. type - Type of value to be passed to component. "value" expects a single output label, "confidences" expects a dictionary mapping labels to confidence scores, "auto" detects return type.
3. label - component name in interface.

The interface class [gradio.Interface](https://www.gradio.app/docsinterface) is responsible for creating the interface that ties the inputs and outputs together. Its `.launch()` method launches the interface in this notebook after compiling. These are the parameters used in this interface:
1. fn - the function to wrap an interface around.
2. inputs - a single Gradio input component, or list of Gradio input components. Components can either be passed as instantiated objects, or referred to by their string shortcuts. The number of input components should match the number of parameters in fn.
3. outputs - a single Gradio output component, or list of Gradio output components. Components can either be passed as instantiated objects, or referred to by their string shortcuts. The number of output components should match the number of values returned by fn.
4. title - a title for the interface; if provided, appears above the input and output components.
5. description - a description for the interface; if provided, appears above the input and output components.
6. live - whether the interface should automatically reload on change.
7. interpretation - function that provides an interpretation explaining the prediction output. Pass "default" to use the built-in interpreter.

I encourage you to view the [documentation](https://www.gradio.app/docs) for the interface, inputs and outputs; you can find all the information you need there. It is helpful to refer to the documentation to understand other parameters that are not used in this lesson.
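To make those three parts concrete before the EMNIST-specific code in the next cell, here is a minimal, self-contained sketch of a Gradio interface (the `greet` function and its text input/output are made up purely for illustration; the function + input type + output type pattern is the same one used below):

```python
import gradio as gr

# 1. a function that takes the input and returns your model's output
def greet(name):
    return f"Hello {name}!"

# 2. and 3. declare the input and output types and wrap everything in an Interface
iface = gr.Interface(fn=greet, inputs="text", outputs="text")
# iface.launch()  # uncomment to launch the interface inside the notebook
```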
import gradio import gradio as gr # Function that returns a torch tensor with predictions to compare with labels def get_probs_from_logits(logits): # Using softmax to get probabilities from the logits return torch.nn.functional.softmax(logits, dim=1) # Function that takes the img drawn in the Gradio interface, then gives probabilities def recognize_image(img): # Normalizes inputted image and converts it to a tensor for the model img = torch.tensor(img/255, dtype=torch.float).unsqueeze(dim=0).to(device) # Getting output output = model(img) # Getting probabilites of the image probabilities = get_probs_from_logits(output).flatten() # Returns a dictionary with the key being the class and val being the probability probabilities_dict = {idx_to_class[i]:probabilities[i].item() for i in range(num_classes)} return probabilities_dict im = gradio.inputs.Image(shape=(28, 28), image_mode='L', invert_colors=True, source="canvas") title = "Number and Letter Classifier App" description = """This app is able to guess the number or letter you draw below. The ML model was trained on the EMNIST dataset, please use below!""" iface = gr.Interface(fn=recognize_image, inputs=im, outputs=gradio.outputs.Label(num_top_classes=5), title=title, description=description, live=True, interpretation="default") iface.launch()
Colab notebook detected. To show errors in colab notebook, set `debug=True` in `launch()` This share link will expire in 24 hours. If you need a permanent link, visit: https://gradio.app/introducing-hosted (NEW!) Running on External URL: https://27407.gradio.app Interface loading below...
MIT
machine_learning/lesson 4 - ML Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb
BreakoutMentors/Data-Science-and-Machine-Learning
Downloading the dataset from Kaggle
# installing the kaggle lib !pip install --upgrade kaggle !pip install plotly # to visualize missing data !pip install missingno # requesting the upload of the Kaggle authentication token # NOTE: the kaggle.json file needs to be downloaded from your personal Kaggle account. from google.colab import files uploaded = files.upload() for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format(name=fn, length=len(uploaded[fn]) )) # placing the kaggle.json file in its proper location and allowing it to be read and written !mkdir -p ~/.kaggle !mv kaggle.json ~/.kaggle !chmod 600 ~/.kaggle/kaggle.json !kaggle datasets download -d gpreda/covid-world-vaccination-progress !!unzip covid-world-vaccination-progress.zip -d data_folder
Downloading covid-world-vaccination-progress.zip to /content 0% 0.00/172k [00:00<?, ?B/s] 100% 172k/172k [00:00<00:00, 20.4MB/s]
MIT
src/projeto_1_ciencia_de_dados.ipynb
Cogitus/covid-vacine-progression-analysis
Code for the exploratory analysis itself
import pandas as pd import matplotlib.pyplot as plt import missingno as msno import plotly.graph_objects as go import matplotlib.ticker as ticker def gera_lista_vacinas(dataset): ''' Gera uma lista com todas as vacinas do dataset input: DataFrame dos dados output: lista de todas as vacinas ''' todas_vacinas = list(dataset.groupby(['vacinas']).count().index) conjunto_vacinas = set() for lista_vacinas in todas_vacinas: lista_vacinas = lista_vacinas.split(', ') for vacina in lista_vacinas: conjunto_vacinas.add(vacina) lista_vacinas = list(conjunto_vacinas) lista_vacinas.sort() return lista_vacinas def gera_lista_paises(dataset): ''' Gera a lista de países que estão em vacinação input: DataFrame dos dados output: lista de todos os países ''' return list(dataset.groupby(['pais']).count().index) def gera_dataframe_vacinas(dataset): ''' Gera um novo DataFrame em que as vacinas antes eram listadas na coluna 'vacinas' agora são listadas entre 10 colunas correspondentes à cada vacina, com 0's e 1's. Os 1's representam que vacina está sendo aplicada naquele país, os 0's que não! input: DataFrame dos dados output: DataFrame dos dados das vacinas categorizados ''' labels_vacinas = gera_lista_vacinas(dataset) # lista das vacinas entendidas como labels dataset_vacinas = dataset['vacinas'] array_temporario_vacinas = [] # inicia como uma lista vazia for linha_vacina in dataset_vacinas: sublista_vacinas = linha_vacina.split(', ') #lista de tamanho len(labels_vacinas) com 0's para elementos em sublista nova_linha = [int(vacina in sublista_vacinas) for vacina in labels_vacinas] array_temporario_vacinas.append(nova_linha) dataset_temporario_vacinas = pd.DataFrame(array_temporario_vacinas, columns=labels_vacinas) dataset.drop(columns=['vacinas'], axis=1, inplace=True) dataset = pd.concat([dataset, dataset_temporario_vacinas], axis=1) return dataset dataset = pd.read_csv(r'data_folder/country_vaccinations.csv') nome_colunas = ['pais', 'codigo_iso', 'data', 'total_vacinacoes', 'pessoas_vacinadas', 'pessoas_tot_vacinadas', 'vacinacoes_diarias_raw', 'vacinacoes_diarias', 'tot_vacinacoes_por_cent', 'pessoas_vacinadas_por_cent', 'pessoas_tot_vacinadas_por_cent', 'vacinacoes_diarias_por_milhao', 'vacinas', 'fonte_dados', 'website_fonte'] nome_colunas_antigo = list(dataset.columns) dataset.rename(columns=dict(zip(nome_colunas_antigo, nome_colunas)), inplace=True) dataset.head() # DATAFRAME COM AS INFOS DAS VACINAS freq_vacinas = dataset.groupby('pais').max() demais_colunas = [coluna for coluna in nome_colunas if coluna not in lista_vacinas and coluna not in ['pais', 'vacinas']] freq_vacinas.drop(columns=demais_colunas, axis=1, inplace=True) # para o bar plot vacinas x num_paises densidade_vacinas = pd.DataFrame(freq_vacinas.sum(), columns=['num_paises']) # BARPLOT DAS VACINAS fig_disposicao_vacinas = plt.figure(figsize = (20, 10)) plt.title('Número de países que utilizam as vacinas', fontsize=18) y_label = densidade_vacinas.index x_label = densidade_vacinas['num_paises'].values plt.bar(y_label, x_label) plt.grid() for i in range(len(x_label)): plt.annotate(str(x_label[i]), xy=(y_label[i], x_label[i]), ha='center', va='bottom', fontsize=14) plt.show() # dados faltantes de todo o banco de dados msno.matrix(dataset) # Vamos visualizar a distribuição de dados faltantes POR PAÍS from math import floor # caso dê problema, é possível que um novo país tenha sido adicionado! 
num_rows = 25 num_columns = 6 fig, axarr = plt.subplots(num_rows, num_columns, figsize=(24, 90)) lista_paises = gera_lista_paises(dataset) for pais in enumerate(lista_paises): # extraindo nome e numero do pais num_pais = pais[0] nome_pais = pais[1] # definindo coordenadas de onde no subplot será plotado x_plot = floor(num_pais/num_columns) y_plot = num_pais % num_columns axarr[x_plot][y_plot].set_title(nome_pais) msno.matrix(dataset[dataset['pais'] == nome_pais], ax=axarr[x_plot][y_plot], labels=False) dataset.describe()
_____no_output_____
MIT
src/projeto_1_ciencia_de_dados.ipynb
Cogitus/covid-vacine-progression-analysis
Code for creating the plots and maps
groupby_country = dataset.groupby(['pais']) listof_dataframe_countries = [] for idx, group in enumerate(groupby_country): listof_dataframe_countries.append(group) total_vac_top_countries = pd.DataFrame() # total_vacinacoes pessoas_vacinadas pessoas_tot_vacinadas for i in range(len(listof_dataframe_countries)): country_df = listof_dataframe_countries[i][1] filtered_df = country_df[country_df['total_vacinacoes'].notna()] latest_day_data = filtered_df.iloc[-1:] total_vac_top_countries = total_vac_top_countries.append(latest_day_data, ignore_index=True) total_vac_top_countries = total_vac_top_countries.sort_values(by=['total_vacinacoes'], ascending=False) fig, axes = plt.subplots(nrows=2, ncols=5) i = 0 j = 0 for pais in total_vac_top_countries.head(10).iterrows(): country = dataset[dataset['pais'] == pais[1]['pais']] filtered = country[country['total_vacinacoes'].notna()].reset_index() fig2 = filtered[['total_vacinacoes','pessoas_vacinadas','pessoas_tot_vacinadas']].plot(title=pais[1]['pais'], ax=axes[j][i], grid=True) fig2.yaxis.set_major_formatter(ticker.EngFormatter()) i+=1 if(i%5 == 0): j+=1 i=0 plt.show() fig, axes = plt.subplots(nrows=2, ncols=5) i = 0 j = 0 for pais in total_vac_top_countries.head(10).iterrows(): country = dataset[dataset['pais'] == pais[1]['pais']] filtered = country[country['tot_vacinacoes_por_cent'].notna()].reset_index() fig2 = filtered[['tot_vacinacoes_por_cent','pessoas_vacinadas_por_cent','pessoas_tot_vacinadas_por_cent']].plot(title=pais[1]['pais'], ax=axes[j][i], grid=True) fig2.yaxis.set_major_formatter(ticker.PercentFormatter()) fig2.set_ylim(0, 100) fig2.legend(('Total doses', 'Pessoas vacinadas', 'Pessoas imunizadas')) i+=1 if(i%5 == 0): j+=1 i=0 plt.show() for i in range(len(listof_dataframe_countries)): country_name = listof_dataframe_countries[i][0] if(country_name in ["United States", "Austria", "Brazil", "United Kingdom"]): country_df = listof_dataframe_countries[i][1] filtered_df = country_df[country_df['total_vacinacoes'].notna()] filtered_df[['total_vacinacoes','pessoas_vacinadas','pessoas_tot_vacinadas']].plot(title=country_name) plt.show() df = pd.DataFrame() for i in range(len(listof_dataframe_countries)): country_name = listof_dataframe_countries[i][0] country_df = listof_dataframe_countries[i][1] filtered_df = country_df[country_df['pessoas_vacinadas_por_cent'].notna()] latest_day_data = filtered_df.iloc[-1:] df = df.append(latest_day_data, ignore_index=True) df.to_csv('./pessoas_vacinadas_por_cent.csv') fig_pessoas_vacinadas = go.Figure(data=go.Choropleth( locations = df['codigo_iso'], z = df['pessoas_vacinadas_por_cent'], text = df['pais'], colorscale = 'YlGnBu', autocolorscale=False, marker_line_width=0.5, colorbar_title = '% pessoas<br>vacinadas', )) config = { 'modeBarButtonsToRemove': ['lasso2d','zoomInGeo','zoomOutGeo'] } fig_pessoas_vacinadas.update_layout( title_text='Covid-19 World Vaccination - Porcentagem de pessoas que tomaram pelo menos uma dose da vacina', geo=dict( showframe=False, showcoastlines=False, projection_type='equirectangular' ) ) fig_pessoas_vacinadas.data[0].update(zmin=0, zmax=60) fig_pessoas_vacinadas.show(config=config) df2 = pd.DataFrame() for i in range(len(listof_dataframe_countries)): country_name = listof_dataframe_countries[i][0] country_df = listof_dataframe_countries[i][1] filtered_df = country_df[country_df['total_vacinacoes'].notna()] latest_day_data = filtered_df.iloc[-1:] df2 = df2.append(latest_day_data, ignore_index=True) df2.to_csv('./total_vacinacoes.csv') fig_total_doses = 
go.Figure(data=go.Choropleth( locations = df2['codigo_iso'], z = df2['total_vacinacoes'], text = df2['pais'], colorscale = 'Blues', autocolorscale=False, marker_line_width=0.5, colorbar_title = 'Total<br>vacinas<br>(milhões)', )) config = { 'modeBarButtonsToRemove': ['lasso2d','zoomInGeo','zoomOutGeo'] } fig_total_doses.update_layout( title_text='Covid-19 World Vaccination - Total de doses aplicadas', geo=dict( showframe=False, showcoastlines=False, projection_type='equirectangular' ) ) fig_total_doses.show(config=config) df3 = pd.DataFrame() for i in range(len(listof_dataframe_countries)): country_name = listof_dataframe_countries[i][0] country_df = listof_dataframe_countries[i][1] filtered_df = country_df[country_df['vacinacoes_diarias_por_milhao'].notna()] latest_day_data = filtered_df.iloc[-1:] df3 = df3.append(latest_day_data, ignore_index=True) df3.to_csv('./vac_diarias_milhao.csv') fig_vac_diarias_milhao = go.Figure(data=go.Choropleth( locations = df3['codigo_iso'], z = df3['vacinacoes_diarias_por_milhao'], text = df3['pais'], colorscale = 'YlGnBu', autocolorscale=False, reversescale=False, marker_line_width=0.5, colorbar_title = 'vacinações<br>diárias<br>p/ milhão', )) config = { 'modeBarButtonsToRemove': ['lasso2d','zoomInGeo','zoomOutGeo'] } fig_vac_diarias_milhao.update_layout( title_text='Covid-19 World Vaccination - Vacinações diárias por milhão', geo=dict( showframe=False, showcoastlines=False, projection_type='equirectangular' ) ) fig_vac_diarias_milhao.data[0].update(zmin=500, zmax=15000) fig_vac_diarias_milhao.show(config=config)
_____no_output_____
MIT
src/projeto_1_ciencia_de_dados.ipynb
Cogitus/covid-vacine-progression-analysis
Backprop Core Example: Text Summarisation
Text summarisation takes a chunk of text, and extracts the key information.
# Set your API key to do inference on Backprop's platform # Leave as None to run locally api_key = None import backprop summarisation = backprop.Summarisation(api_key=api_key) # Change this up. input_text = """ Britain began its third COVID-19 lockdown on Tuesday with the government calling for one last major national effort to defeat the spread of a virus that has infected an estimated one in 50 citizens before mass vaccinations turn the tide. Finance minister Rishi Sunak announced a new package of business grants worth 4.6 billion pounds ($6.2 billion) to help keep people in jobs and firms afloat until measures are relaxed gradually, at the earliest from mid-February but likely later. Britain has been among the countries worst-hit by COVID-19, with the second highest death toll in Europe and an economy that suffered the sharpest contraction of any in the Group of Seven during the first wave of infections last spring. Prime Minister Boris Johnson said the latest data showed 2% of the population were currently infected - more than a million people in England. “When everybody looks at the position, people understand overwhelmingly that we have no choice,” he told a news conference. More than 1.3 million people in Britain have already received their first dose of a COVID-19 vaccination, but this is not enough to have an impact on transmission yet. Johnson announced the new lockdown late on Monday, saying the highly contagious new coronavirus variant first identified in Britain was spreading so fast the National Health Service risked being overwhelmed within 21 days. In England alone, some 27,000 people are in hospital with COVID, 40% more than during the first peak in April, with infection numbers expected to rise further after increased socialising during the Christmas period. Since the start of the pandemic, more than 75,000 people have died in the United Kingdom within 28 days of testing positive for coronavirus, according to official figures. The number of daily new infections passed 60,000 for the first time on Tuesday. A Savanta-ComRes poll taken just after Johnson’s address suggested four in five adults in England supported the lockdown. “I definitely think it was the right decision to make,” said Londoner Kaitlin Colucci, 28. “I just hope that everyone doesn’t struggle too much with having to be indoors again.” Downing Street said Johnson had cancelled a visit to India later this month to focus on the response to the virus, and Buckingham Palace called off its traditional summer garden parties this year. nder the new rules in England, schools are closed to most pupils, people should work from home if possible, and all hospitality and non-essential shops are closed. Semi-autonomous executives in Scotland, Wales and Northern Ireland have imposed similar measures. As infection rates soar across Europe, other countries are also clamping down on public life. Germany is set to extend its strict lockdown until the end of the month, and Italy will keep nationwide restrictions in place this weekend while relaxing curbs on weekdays. Sunak’s latest package of grants adds to the eye-watering 280 billion pounds in UK government support already announced for this financial year to stave off total economic collapse. The new lockdown is likely to cause the economy to shrink again, though not as much as during the first lockdown last spring. 
JP Morgan economist Allan Monks said he expected the economy to shrink by 2.5% in the first quarter of 2021 -- compared with almost 20% in the second quarter of 2020. To end the cycle of lockdowns, the government is pinning its hopes on vaccines. It aims to vaccinate all elderly care home residents and their carers, everyone over the age of 70, all frontline health and social care workers, and everyone who is clinically extremely vulnerable by mid-February. """ summary = summarisation(input_text) print(summary)
Britain begins its third COVID-19 lockdown. Finance minister Rishi Sunak announces a package of business grants. The government is pinning its hopes on vaccines.
Apache-2.0
examples/Summarisation.ipynb
lucky7323/backprop
Jacobian calculation
x = pdt.Var('x') y = pdt.Var('y') gm = pdt.Par('gm') a = pdt.Par('a') b = pdt.Par('b') t = pdt.Var('t') xdot = pdt.Fun(y, [y], 'xdot') ydot = pdt.Fun(-a*gm*gm - b*gm*gm*x -gm*gm*x*x*x -gm*x*x*y + gm*gm*x*x - gm*x*y, [x, y], 'ydot') F = pdt.Fun([xdot(y), ydot(x, y)], [x,y], 'F') jac = pdt.Fun(pdt.Diff(F, [x, y]), [t, x, y], 'Jacobian') jac.simplify() print(jac.eval(t=t, x=x, y=y))
[[0,1],[(((-b*gm*gm)-gm*gm*(x*x+x*2*x))-gm*(x*y+x*y)+gm*gm*2*x)-gm*y,(-gm*x*x)-gm*x]]
MIT
pydstools_implementation.ipynb
gnouveau/birdsynth
Simple model
icdict = {'x': 0, 'y': 0} pardict = { 'gm': 2 # g is γ in Boari 2015 } vardict = { 'x': xdot(y), 'y': ydot(x,y), } args = pdt.args() args.name = 'birdsynth' args.fnspecs = [jac, xdot, ydot] args.ics = icdict args.pars = pardict args.inputs = inputs args.tdata = [0, 1] args.varspecs = vardict ds = pdt.Generator.Vode_ODEsystem(args) ds.haveJacobian() traj = ds.compute('demo') plt.plot(traj.sample(dt=1/(44100*20))['x']) auxdict = {'Pi':(['t', 'x', 'a_'], 'if(t > 0, a_ * x - r * 1, 0)'), 'Pt':(['t', 'x', 'a_'], '(1 - r) * Pi(t - 0.5 * T, x, a_)') } icdict = {'x': 0, 'y': 0, 'o1':0, 'i1':0, 'i3':0} pardict = {'g': 2400, # g is γ in Boari 2015 'T': 0.2, 'r': 0.1, 'a_p': -540e6, 'b_p': -7800, 'c_p': 1.8e8, 'd_p': 1.2e-2, 'e_p': 7.2e-1, 'f_p': -0.83e-2, 'g_p': -5e2, 'h_p': 1e-4 } vardict = {'x': 'y', 'y': '-a*Pow(g, 2) - b * Pow(g, 2) * x - Pow(g, 2) * Pow(x, 3) - g * Pow(x, 2) * y + Pow(g, 2) * x * x' '- g * x * y', 'i1': 'o1', 'o1': 'a_p * i1 + b_p * o1 + c_p * i3 + d_p * Pt(t, x, a) + e_p * Pt(t, x, a)', 'i3': 'f_p * o1 + g_p * i3 + h_p * Pt(t, x, a)' } args = pdt.args() args.name = 'birdsynth' args.ics = icdict args.pars = pardict args.fnspecs = auxdict args.inputs = inputs args.tdata = [0, len(ab)/44100] args.varspecs = vardict ds = pdt.Generator.Vode_ODEsystem(args) traj = ds.compute('demo') pts = traj.sample(dt=1/(44100)) plt.plot(pts['t'], pts['x']) x = ds.variables['x'] y_0 = pdt.Var('-a*Pow(g, 2) - b * Pow(g, 2) * x - Pow(g, 2) * Pow(x, 3) - g * Pow(x, 2) * y + Pow(g, 2) * x * x' '- g * x * y', 'y_0') Pi(2)
_____no_output_____
MIT
pydstools_implementation.ipynb
gnouveau/birdsynth
05 Hyperparameters
import numpy as np from sklearn import datasets digits = datasets.load_digits() X = digits.data y = digits.target from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666) from sklearn.neighbors import KNeighborsClassifier knn_clf = KNeighborsClassifier(n_neighbors=3) knn_clf.fit(X_train, y_train) knn_clf.score(X_test, y_test)
_____no_output_____
Apache-2.0
04-kNN/05-Hyper-Parameters/05-Hyper-Parameters.ipynb
mtianyan/Mtianyan-Play-with-Machine-Learning-Algorithms
Finding the best k
best_score = 0.0 best_k = -1 for k in range(1, 11): knn_clf = KNeighborsClassifier(n_neighbors=k) knn_clf.fit(X_train, y_train) score = knn_clf.score(X_test, y_test) if score > best_score: best_k = k best_score = score print("best_k =", best_k) print("best_score =", best_score)
best_k = 4 best_score = 0.991666666667
Apache-2.0
04-kNN/05-Hyper-Parameters/05-Hyper-Parameters.ipynb
mtianyan/Mtianyan-Play-with-Machine-Learning-Algorithms
Consider distance or not?
best_score = 0.0 best_k = -1 best_method = "" for method in ["uniform", "distance"]: for k in range(1, 11): knn_clf = KNeighborsClassifier(n_neighbors=k, weights=method) knn_clf.fit(X_train, y_train) score = knn_clf.score(X_test, y_test) if score > best_score: best_k = k best_score = score best_method = method print("best_method =", best_method) print("best_k =", best_k) print("best_score =", best_score) sk_knn_clf = KNeighborsClassifier(n_neighbors=4, weights="distance", p=1) sk_knn_clf.fit(X_train, y_train) sk_knn_clf.score(X_test, y_test)
_____no_output_____
Apache-2.0
04-kNN/05-Hyper-Parameters/05-Hyper-Parameters.ipynb
mtianyan/Mtianyan-Play-with-Machine-Learning-Algorithms
Searching for the best p for the Minkowski distance
best_score = 0.0 best_k = -1 best_p = -1 for k in range(1, 11): for p in range(1, 6): knn_clf = KNeighborsClassifier(n_neighbors=k, weights="distance", p=p) knn_clf.fit(X_train, y_train) score = knn_clf.score(X_test, y_test) if score > best_score: best_k = k best_p = p best_score = score print("best_k =", best_k) print("best_p =", best_p) print("best_score =", best_score)
best_k = 3 best_p = 2 best_score = 0.988888888889
Apache-2.0
04-kNN/05-Hyper-Parameters/05-Hyper-Parameters.ipynb
mtianyan/Mtianyan-Play-with-Machine-Learning-Algorithms
Bayesian Hierarchical Modeling
This jupyter notebook accompanies the Bayesian Hierarchical Modeling lecture(s) delivered by Stephen Feeney as part of David Hogg's [Computational Data Analysis class](http://dwh.gg/FlatironCDA). As part of the lecture(s) you will be asked to complete a number of tasks, some of which will involve direct coding into the notebook; these sections are marked by task.

This notebook requires numpy, matplotlib, scipy, [corner](https://github.com/sfeeney/bhm_lecture.git), [pystan](https://pystan.readthedocs.io/en/latest/getting_started.html) and pickle to run (the last two are required solely for the final task).

The model we're going to be inferring is below.

We start with imports...
from __future__ import print_function # make sure everything we need is installed if running on Google Colab def is_colab(): try: cfg = get_ipython().config if cfg['IPKernelApp']['kernel_class'] == 'google.colab._kernel.Kernel': return True else: return False except NameError: return False if is_colab(): !pip install --quiet numpy matplotlib scipy corner pystan import numpy as np import numpy.random as npr import matplotlib.pyplot as mp %matplotlib inline
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
... and immediately move to... Task 2
In which I ask you to write a Python function to generate a simulated Cepheid sample using the period-luminosity relation $m_{ij} = \mu_i + M^* + s\,\log p_{ij} + \epsilon(\sigma_{\rm int})$. For simplicity, assume Gaussian priors on everything, Gaussian intrinsic scatter and Gaussian measurement uncertainties. Assume only the first host has a distance modulus estimate.
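Spelled out, the generative model implied by the task (and by the `simulate` function in the next cell) is, in my shorthand for the prior means and widths (matching `abs_bar`/`abs_sig`, `s_bar`/`s_sig`, `mu_bar`/`mu_sig` in the code):

$$M^* \sim N(\bar{M}, \sigma_M^2), \qquad s \sim N(\bar{s}, \sigma_s^2), \qquad \mu_i \sim N(\bar{\mu}, \sigma_\mu^2),$$

$$m_{ij} \sim N(\mu_i + M^* + s\,\log p_{ij},\ \sigma_{\rm int}^2), \qquad \hat{m}_{ij} \sim N(m_{ij}, \sigma_{\hat{m}}^2), \qquad \hat{\mu}_1 \sim N(\mu_1, \sigma_{\hat{\mu}}^2),$$

with only the first host's distance modulus observed.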
# setup n_gal = 2 n_star = 200 n_samples = 50000 # PL relation parameters abs_bar = -26.0 # mean of standard absolute magnitude prior abs_sig = 4.0 # std dev of standard absolute magnitude prior s_bar = -1.0 # mean of slope prior s_sig = 1.0 # std dev of slope prior mu_bar = 30.0 # mean of distance modulus prior mu_sig = 5.0 # std dev of distance modulus prior m_sig_int = 0.05 # intrinsic scatter, assumed known # uncertainties mu_hat_sig = 0.01 # distance modulus measurement uncertainty m_hat_sig = 0.02 # apparent magnitude measurement uncertainty def simulate(n_gal, n_star, abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, m_sig_int, m_hat_sig): # draw CPL parameters from Gaussian prior with means abs_bar and s_bar and standard deviations # abs_sig and s_sig #abs_true = abs_bar #s_true = s_bar abs_true = abs_bar + npr.randn() * abs_sig s_true = s_bar + npr.randn() * s_sig # draw n_gal distance moduli from Gaussian prior with mean mu_bar and standard deviation mu_sig # i've chosen to sort here so the closest galaxy is the one with the measured distance modulus mu_true = np.sort(mu_bar + npr.randn(n_gal) * mu_sig) # measure ONLY ONE galaxy's distance modulus noisily. the noise here is assumed Gaussian with # zero mean and standard deviation mu_hat_sig mu_hat = mu_true[0] + npr.randn() * mu_hat_sig # draw log periods. these are assumed to be perfectly observed in this model, so they # are simply a set of pre-specified numbers. i have chosen to generate new values with # each simulation, drawn such that log-periods are uniformly drawn in the range 1-2 (i.e., # 10 to 100 days). you can have these for free! lp_true = 1.0 + npr.rand(n_gal, n_star) # draw true apparent magnitudes. these are distributed around the Cepheid period-luminosity # relation with Gaussian intrinsic scatter (mean 0, standard deviation m_sig_int) m_true = np.zeros((n_gal, n_star)) for i in range(n_gal): m_true[i, :] = mu_true[i] + abs_true + s_true * lp_true[i, :] + npr.randn(n_star) * m_sig_int # measure the apparent magnitudes noisily, all with the same measurement uncertainty m_hat_sig m_hat = m_true + npr.randn(n_gal, n_star) * m_hat_sig # return! return (abs_true, s_true, mu_true, lp_true, m_true, mu_hat, m_hat)
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Let's check that the simulation generates something sane. A simple test that the magnitude measurement errors are correctly generated.
# simulate abs_true, s_true, mu_true, lp_true, m_true, mu_hat, m_hat = \ simulate(n_gal, n_star, abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, m_sig_int, m_hat_sig) # plot difference between true and observed apparent magnitudes. this should be the # noise, which is Gaussian distributed with mean zero and std dev m_hat_sig outs = mp.hist((m_true - m_hat).flatten()) dm_grid = np.linspace(np.min(outs[1]), np.max(outs[1])) mp.plot(dm_grid, np.exp(-0.5 * (dm_grid/m_hat_sig) ** 2) * np.max(outs[0])) mp.xlabel(r'$m_{ij} - \hat{m}_{ij}$') mp.ylabel(r'$N \left(m_{ij} - \hat{m}_{ij}\right)$')
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
And another test that the intrinsic scatter is added as expected.
# plot difference between true apparent magnitudes and expected apparent # magnitude given a perfect (i.e., intrinsic-scatter-free) period-luminosity # relation. this should be the intrinsic scatter, which is Gaussian- # distributed with mean zero and std dev m_sig_int eps = np.zeros((n_gal, n_star)) for i in range(n_gal): eps[i, :] = mu_true[i] + abs_true + s_true * lp_true[i, :] - m_true[i, :] outs = mp.hist(eps.flatten()) dm_grid = np.linspace(np.min(outs[1]), np.max(outs[1])) mp.plot(dm_grid, np.exp(-0.5 * (dm_grid/m_sig_int) ** 2) * np.max(outs[0])) mp.xlabel(r'$m_{ij} - \hat{m}_{ij}$') mp.ylabel(r'$N \left(m_{ij} - \hat{m}_{ij}\right)$')
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Generalized Least Squares Demo
Coding up the [GLS estimator](https://en.wikipedia.org/wiki/Generalized_least_squares) is a little involved, so I've done it for you below. Note that, rather unhelpfully, I've done so in a different order than in the notes. When I get a chance I will re-write. For now, you can simply evaluate the cells and bask in the glory of the fastest inference you will ever do!
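For reference, the estimator coded up below is the standard GLS solution: writing the stacked observations as a data vector $d$ with design matrix $A$ and noise covariance $C$ (`data`, `design` and `cov_inv` $= C^{-1}$ in the code),

$$\hat{\theta} = \left(A^{\rm T} C^{-1} A\right)^{-1} A^{\rm T} C^{-1} d, \qquad {\rm cov}\left(\hat{\theta}\right) = \left(A^{\rm T} C^{-1} A\right)^{-1}.$$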
def gls_fit(n_gal, n_star, mu_hat, mu_hat_sig, m_hat, m_sig_int, m_hat_sig, \ lp_true, priors=None): # setup # n_obs is one anchor constraint and one magnitude per Cepheid. # n_par is one mu per Cepheid host and 2 CPL params. if priors # are used, we add on n_gal + 2 observations: one prior constraint # on each host distance modulus and CPL parameter n_obs = n_gal * n_star + 1 n_par = n_gal + 2 if priors is not None: n_obs += n_gal + 2 data = np.zeros(n_obs) design = np.zeros((n_obs, n_par)) cov_inv = np.zeros((n_obs, n_obs)) # anchor data[0] = mu_hat design[0, 0] = 1.0 cov_inv[0, 0] = 1.0 / mu_hat_sig ** 2 # Cepheids k = 1 for i in range(0, n_gal): for j in range(0, n_star): data[k] = m_hat[i, j] design[k, i] = 1.0 design[k, n_gal] = 1.0 design[k, n_gal + 1] = lp_true[i, j] cov_inv[k, k] = 1.0 / (m_hat_sig ** 2 + m_sig_int ** 2) k += 1 # and, finally, priors if desired if priors is not None: abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig = priors for i in range(n_gal): data[k] = mu_bar design[k, i] = 1.0 cov_inv[k, k] = 1.0 / mu_sig ** 2 k += 1 data[k] = abs_bar design[k, n_gal] = 1.0 cov_inv[k, k] = 1.0 / abs_sig ** 2 k += 1 data[k] = s_bar design[k, n_gal + 1] = 1.0 cov_inv[k, k] = 1.0 / s_sig ** 2 k += 1 # fit and return destci = np.dot(design.transpose(), cov_inv) pars_cov = np.linalg.inv(np.dot(destci, design)) pars = np.dot(np.dot(pars_cov, destci), data) res = data - np.dot(design, pars) dof = n_obs - n_par chisq_dof = np.dot(res.transpose(), np.dot(cov_inv, res)) return pars, pars_cov, chisq_dof gls_pars, gls_pars_cov, gls_chisq = gls_fit(n_gal, n_star, mu_hat, mu_hat_sig, m_hat, \ m_sig_int, m_hat_sig, lp_true, \ priors=[abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig])
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
In order to plot the outputs of the GLS fit we could draw a large number of samples from the resulting multivariate Gaussian posterior and pass them to something like [`corner`](https://corner.readthedocs.io/en/latest/); however, as we have analytic results we might as well use those directly. I've coded up something totally hacky here in order to do so. Information on how to draw confidence ellipses can be found in [Dan Coe's note](https://arxiv.org/pdf/0906.4123.pdf).
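For completeness, the ellipse construction the plotting function uses is the standard one (this is just a restatement of what the code does): eigen-decomposing the 2×2 parameter covariance as $C = V \Lambda V^{\rm T}$, the contours are ellipses with semi-axes $\sqrt{\lambda_1}$ and $\sqrt{\lambda_2}$ along the eigenvectors, rotated by $\theta = {\rm arctan2}\left(v_{1,y}, v_{1,x}\right)$; the code draws that ellipse and one twice its size.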
# this is a hacky function designed to transform the analytic GLS outputs # into a corner.py style triangle plot, containing 1D and 2D marginalized # posteriors import scipy.stats as sps import matplotlib.patches as mpp def schmorner(par_mean, par_cov, par_true, par_label): # setup par_std = np.sqrt(np.diag(par_cov)) x_min = par_mean[0] - 3.5 * par_std[0] x_max = par_mean[0] + 3.5 * par_std[0] y_min = par_mean[1] - 3.5 * par_std[1] y_max = par_mean[1] + 3.5 * par_std[1] fig, axes = mp.subplots(2, 2) # 1D marge x = np.linspace(x_min, x_max, 100) axes[0, 0].plot(x, sps.norm.pdf(x, par_mean[0], par_std[0]), 'k') axes[0, 0].axvline(par_true[0]) axes[1, 0].axvline(par_true[0]) axes[0, 0].set_xticklabels([]) axes[0, 0].set_yticklabels([]) axes[0, 0].set_xlim(x_min, x_max) axes[0, 0].set_title(par_label[0]) axes[0, 0].set_title(par_label[0] + r'$=' + '{:6.2f}'.format(par_mean[0]) + \ r'\pm' + '{:4.2f}'.format(par_std[0]) + r'$') y = np.linspace(y_min, y_max, 100) axes[1, 1].plot(y, sps.norm.pdf(y, par_mean[1], par_std[1]), 'k') axes[1, 0].axhline(par_true[1]) axes[1, 1].axvline(par_true[1]) axes[1, 1].tick_params(labelleft=False) axes[1, 1].set_xlim(y_min, y_max) for tick in axes[1, 1].get_xticklabels(): tick.set_rotation(45) axes[1, 1].set_title(par_label[1] + r'$=' + '{:5.2f}'.format(par_mean[1]) + \ r'\pm' + '{:4.2f}'.format(par_std[1]) + r'$') # 2D marge vals, vecs = np.linalg.eig(par_cov) theta = np.degrees(np.arctan2(*vecs[::-1, 0])) w, h = 2 * np.sqrt(vals) ell = mpp.Ellipse(xy=par_mean, width=w, height=h, angle=theta, color='k') ell.set_facecolor("none") axes[1, 0].add_artist(ell) ell = mpp.Ellipse(xy=par_mean, width=2*w, height=2*h, angle=theta, color='k') ell.set_facecolor("none") axes[1, 0].add_artist(ell) axes[1, 0].set_xlim(x_min, x_max) axes[1, 0].set_ylim(y_min, y_max) for tick in axes[1, 0].get_xticklabels(): tick.set_rotation(45) for tick in axes[1, 0].get_yticklabels(): tick.set_rotation(45) axes[1, 0].set_xlabel(par_label[0]) axes[1, 0].set_ylabel(par_label[1]) fig.delaxes(axes[0, 1]) fig.subplots_adjust(hspace=0, wspace=0) test = schmorner(gls_pars[n_gal:], gls_pars_cov[n_gal:, n_gal:], \ [abs_true, s_true], [r'$M$', r'$s$']) # #lazy = npr.multivariate_normal(gls_pars[n_gal:], gls_pars_cov[n_gal:, n_gal:], n_samples) #fig = corner.corner(samples.T, labels=[r"$M$", r"$s$"], # show_titles=True, truths=[abs_bar, s_bar])
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Task 3B
Below I've written the majority of a Gibbs sampler to infer the hyper-parameters of the Cepheid PL relation from our simulated sample. One component is missing: drawing from the conditional distribution of the standard absolute magnitude, $M^*$. Please fill it in, using the results of whiteboard/paper Task 3A.
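For reference, the conditional in question follows from multiplying the Gaussian prior on $M^*$ by the $n_{\rm gal} n_{\rm star}$ Gaussian period-luminosity terms and completing the square (the standard product-of-Gaussians result that the completed sampler below uses):

$$\bar{M}_{\rm PL} = \frac{1}{n_{\rm gal} n_{\rm star}} \sum_{i,j} \left(m_{ij} - \mu_i - s\,\log p_{ij}\right), \qquad \sigma_{\rm PL} = \frac{\sigma_{\rm int}}{\sqrt{n_{\rm gal} n_{\rm star}}},$$

$$M^* | \cdot \sim N\!\left(\frac{\sigma_M^2\, \bar{M}_{\rm PL} + \sigma_{\rm PL}^2\, \bar{M}}{\sigma_M^2 + \sigma_{\rm PL}^2},\ \frac{\sigma_M^2\, \sigma_{\rm PL}^2}{\sigma_M^2 + \sigma_{\rm PL}^2}\right),$$

where $\bar{M}$ and $\sigma_M$ are the prior mean and width on $M^*$.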
def gibbs_sample(n_samples, n_gal, n_star, abs_bar, abs_sig, \ s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, \ m_sig_int, m_hat_sig, mu_hat, lp_true, m_hat): # storage abs_samples = np.zeros(n_samples) s_samples = np.zeros(n_samples) mu_samples = np.zeros((n_gal, n_samples)) m_samples = np.zeros((n_gal, n_star, n_samples)) # initialize sampler abs_samples[0] = abs_bar + npr.randn() * abs_sig s_samples[0] = s_bar + npr.randn() * s_sig mu_samples[:, 0] = mu_bar + npr.randn(n_gal) * mu_bar for i in range(n_gal): m_samples[i, :, 0] = mu_samples[i, 0] + abs_samples[0] + s_samples[0] * lp_true[i, :] # sample! for k in range(1, n_samples): # sample abs mag abs_sig_pl = m_sig_int / np.sqrt(n_gal * n_star) abs_bar_pl = 0.0 for j in range(n_gal): abs_bar_pl += np.sum(m_samples[j, :, k - 1] - mu_samples[j, k - 1] - s_samples[k - 1] * lp_true[j, :]) abs_bar_pl /= (n_gal * n_star) abs_std = np.sqrt((abs_sig * abs_sig_pl) ** 2 / (abs_sig ** 2 + abs_sig_pl ** 2)) abs_mean = (abs_sig ** 2 * abs_bar_pl + abs_sig_pl ** 2 * abs_bar) / \ (abs_sig ** 2 + abs_sig_pl ** 2) abs_samples[k] = abs_mean + npr.randn() * abs_std # sample slope s_sig_pl = m_sig_int / np.sqrt(np.sum(lp_true ** 2)) s_bar_pl = 0.0 for j in range(n_gal): s_bar_pl += np.sum((m_samples[j, :, k - 1] - mu_samples[j, k - 1] - abs_samples[k]) * lp_true[j, :]) s_bar_pl /= np.sum(lp_true ** 2) s_std = np.sqrt((s_sig * s_sig_pl) ** 2 / (s_sig ** 2 + s_sig_pl ** 2)) s_mean = (s_sig ** 2 * s_bar_pl + s_sig_pl ** 2 * s_bar) / \ (s_sig ** 2 + s_sig_pl ** 2) s_samples[k] = s_mean + npr.randn() * s_std # sample apparent magnitudes for j in range(n_gal): m_mean_pl = mu_samples[j, k - 1] + abs_samples[k] + s_samples[k] * lp_true[j, :] m_std = np.sqrt(m_sig_int ** 2 * m_hat_sig ** 2 / (m_sig_int ** 2 + m_hat_sig ** 2)) m_mean = (m_sig_int ** 2 * m_hat[j, :] + m_hat_sig ** 2 * m_mean_pl) / (m_sig_int ** 2 + m_hat_sig ** 2) m_samples[j, :, k] = m_mean + npr.randn(n_star) * m_std # sample distance moduli mu_sig_pl = m_sig_int / np.sqrt(n_star) mu_bar_pl = np.mean(m_samples[0, :, k] - abs_samples[k] - s_samples[k] * lp_true[0, :]) mu_var = 1.0 / (1.0 / mu_sig ** 2 + 1.0 / mu_hat_sig ** 2 + 1.0 / mu_sig_pl ** 2) mu_mean = (mu_bar / mu_sig ** 2 + mu_hat / mu_hat_sig ** 2 + mu_bar_pl / mu_sig_pl ** 2) * mu_var mu_samples[0, k] = mu_mean + npr.randn() * np.sqrt(mu_var) for j in range(1, n_gal): mu_sig_pl = m_sig_int / np.sqrt(n_star) mu_bar_pl = np.mean(m_samples[j, :, k] - abs_samples[k] - s_samples[k] * lp_true[j, :]) mu_std = (mu_sig * mu_sig_pl) ** 2 / (mu_sig ** 2 + mu_sig_pl ** 2) mu_mean = (mu_sig ** 2 * mu_bar_pl + mu_sig_pl ** 2 * mu_bar) / \ (mu_sig ** 2 + mu_sig_pl ** 2) mu_samples[j, k] = mu_mean + npr.randn() * mu_std return (abs_samples, s_samples, mu_samples, m_samples)
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Now let's sample, setting aside the first half of the samples as warmup.
all_samples = gibbs_sample(n_samples, n_gal, n_star, abs_bar, abs_sig, \ s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, \ m_sig_int, m_hat_sig, mu_hat, lp_true, m_hat) n_warmup = int(n_samples / 2) g_samples = [samples[n_warmup:] for samples in all_samples]
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Let's make sure that the absolute magnitude is being inferred as expected. First, generate a trace plot of the absolute magnitude samples (the first entry in `g_samples`), overlaying the ground truth. Then print out the mean and standard deviation of the marginalized absolute magnitude posterior. Recall that marginalizing is as simple as throwing away the samples of all other parameters.
mp.plot(g_samples[0]) mp.axhline(abs_true) mp.xlabel('sample') mp.ylabel(r'$M^*$') print('Truth {:6.2f}; inferred {:6.2f} +/- {:4.2f}'.format(abs_true, np.mean(g_samples[0]), np.std(g_samples[0])))
Truth -30.95; inferred -30.97 +/- 0.02
MIT
bhms.ipynb
sfeeney/bhm_lecture
Now let's generate some marginalized parameter posteriors (by simply discarding all samples of the latent parameters) using DFM's [`corner`](https://corner.readthedocs.io/en/latest/) package. Note the near identical nature of this plot to the `schmorner` plot we generated above.
import corner samples = np.stack((g_samples[0], g_samples[1])) fig = corner.corner(samples.T, labels=[r"$M^*$", r"$s$"], show_titles=True, truths=[abs_true, s_true])
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Task 4
The final task is to write a [Stan model](https://pystan.readthedocs.io/en/latest/getting_started.html) to infer the parameters of the period-luminosity relation. I've coded up the other two blocks required (`data` and `parameters`), so all that is required is for you to write the joint posterior (factorized into its individual components) in Stan's sampling-statement-based syntax. Essentially all you need are Gaussian sampling statements (`abs_true ~ normal(abs_bar, abs_sig);`) and for loops (`for(i in 1: n_gal){...}`).

When you evaluate this cell, Stan will translate your model into `c++` code and compile it. We will then pickle the compiled model so you can re-use it rapidly without recompiling. To do so, please set `recompile = False` in the notebook.
import sys import pystan as ps import pickle stan_code = """ data { int<lower=0> n_gal; int<lower=0> n_star; real mu_hat; real mu_hat_sig; real m_hat[n_gal, n_star]; real m_hat_sig; real m_sig_int; real lp_true[n_gal, n_star]; real abs_bar; real abs_sig; real s_bar; real s_sig; real mu_bar; real mu_sig; } parameters { real mu_true[n_gal]; real m_true[n_gal, n_star]; real abs_true; real s_true; } model { // priors abs_true ~ normal(abs_bar, abs_sig); s_true ~ normal(s_bar, s_sig); mu_true ~ normal(mu_bar, mu_sig); // whatevers for(i in 1: n_gal){ for(j in 1: n_star){ m_true[i, j] ~ normal(mu_true[i] + abs_true + s_true * lp_true[i, j], m_sig_int); } } // likelihoods mu_hat ~ normal(mu_true[1], mu_hat_sig); for(i in 1: n_gal){ for(j in 1: n_star){ m_hat[i, j] ~ normal(m_true[i, j], m_hat_sig); } } } """ n_samples_stan = 5000 recompile = True pkl_fname = 'bhms_stan_model_v{:d}p{:d}p{:d}.pkl'.format(sys.version_info[0], \ sys.version_info[1], \ sys.version_info[2]) if recompile: stan_model = ps.StanModel(model_code=stan_code) with open(pkl_fname, 'wb') as f: pickle.dump(stan_model, f) else: try: with open(pkl_fname, 'rb') as f: stan_model = pickle.load(f) except EnvironmentError: print('ERROR: pickled Stan model (' + pkl_fname + ') not found. ' + \ 'Please set recompile = True')
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_ab41b39e55c2f57c74acf30e86ea4ea5 NOW.
MIT
bhms.ipynb
sfeeney/bhm_lecture
Now let's sample...
stan_data = {'n_gal': n_gal, 'n_star': n_star, 'mu_hat': mu_hat, 'mu_hat_sig': mu_hat_sig, \ 'm_hat': m_hat, 'm_hat_sig': m_hat_sig, 'm_sig_int': m_sig_int, 'lp_true': lp_true, \ 'abs_bar': abs_bar, 'abs_sig': abs_sig, 's_bar': s_bar, 's_sig': s_sig, \ 'mu_bar': mu_bar, 'mu_sig': mu_sig} fit = stan_model.sampling(data=stan_data, iter=n_samples_stan, chains=4)
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
... print out Stan's posterior summary (note this is for _all_ parameters)...
samples = fit.extract(permuted=True) print(fit)
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
... and plot the marginalized posterior of the PL parameters, as with the Gibbs sampler.
c_samples = np.stack((samples['abs_true'], samples['s_true'])) fig = corner.corner(c_samples.T, labels=[r"$M^*$", r"$s$"], show_titles=True, truths=[abs_true, s_true])
_____no_output_____
MIT
bhms.ipynb
sfeeney/bhm_lecture
Detecting Loops in Linked Lists
In this notebook, you'll implement a function that detects if a loop exists in a linked list. The way we'll do this is by having two pointers, called "runners", moving through the list at different rates. Typically we have a "slow" runner which moves at one node per step and a "fast" runner that moves at two nodes per step.

If a loop exists in the list, the fast runner will eventually end up behind the slow runner once it wraps around to the beginning of the loop. Eventually it will catch up to the slow runner and both runners will be pointing to the same node at the same time. If this happens then you know there is a loop in the linked list. Below is an example where we have a slow runner (the green arrow) and a fast runner (the red arrow).
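As a minimal illustration of the runner update described above (the function name `has_loop` and the bare-head-node signature are mine; the exercise below asks for the same idea wrapped around the `LinkedList` class):

```python
def has_loop(head):
    # two runners advancing at different speeds can only ever meet if the list loops back on itself
    slow, fast = head, head
    while fast and fast.next:
        slow = slow.next         # slow runner: one node per step
        fast = fast.next.next    # fast runner: two nodes per step
        if slow is fast:         # the runners coincide only inside a loop
            return True
    return False
```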
class Node: def __init__(self, value): self.value = value self.next = None class LinkedList: def __init__(self, init_list=None): self.head = None if init_list: for value in init_list: self.append(value) def append(self, value): if self.head is None: self.head = Node(value) return # Move to the tail (the last node) node = self.head while node.next: node = node.next node.next = Node(value) return def __iter__(self): node = self.head while node: yield node.value node = node.next def __repr__(self): return str([i for i in self]) list_with_loop = LinkedList([2, -1, 3, 0, 5]) # Creating a loop where the last node points back to the second node loop_start = list_with_loop.head.next node = list_with_loop.head while node.next: node = node.next node.next = loop_start # You will encounter an infinite loop # Click on stop # Then right click on `clear output` for i in list_with_loop: print(i)
_____no_output_____
MIT
2/1-3/linked_lists/Detecting Loops.ipynb
ZacksAmber/Udacity-Data-Structure-Algorithms
Write the function definition here
**Exercise:** Given a linked list, implement a function `iscircular` that returns `True` if a loop exists in the list and `False` otherwise.
def iscircular(linked_list): """ Determine whether the Linked List is circular or not Args: linked_list(obj): Linked List to be checked Returns: bool: Return True if the linked list is circular, return False otherwise """ # TODO: Write function to check if linked list is circular if linked_list is None: return False slow, fast = linked_list.head, linked_list.head while fast and fast.next: slow, fast = slow.next, fast.next.next if slow == fast: return True return False
_____no_output_____
MIT
2/1-3/linked_lists/Detecting Loops.ipynb
ZacksAmber/Udacity-Data-Structure-Algorithms
Let's test your function
iscircular(list_with_loop) # Test Cases # Create another circular linked list small_loop = LinkedList([0]) small_loop.head.next = small_loop.head print ("Pass" if iscircular(list_with_loop) else "Fail") # Pass print ("Pass" if iscircular(LinkedList([-4, 7, 2, 5, -1])) else "Fail") # Fail print ("Pass" if iscircular(LinkedList([1])) else "Fail") # Fail print ("Pass" if iscircular(small_loop) else "Fail") # Pass print ("Pass" if iscircular(LinkedList([])) else "Fail") # Fail
Pass Fail Fail Pass Fail
MIT
2/1-3/linked_lists/Detecting Loops.ipynb
ZacksAmber/Udacity-Data-Structure-Algorithms
Normalize
> Data normalization methods.
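A minimal usage sketch of the `simple_normalize` helper defined in the next cell (the toy interaction frame is made up for illustration; column names follow the function's defaults):

```python
import pandas as pd

# hypothetical per-user ratings
data = pd.DataFrame({'USERID': [1, 1, 1, 2, 2],
                     'RATING': [1.0, 3.0, 5.0, 2.0, 4.0]})

# scale each user's ratings to [0, 1]; use method='zscore' for per-user standardisation instead
normalized = simple_normalize(data.copy(), method='minmax')
print(normalized)
```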
#hide from nbdev.showdoc import * #export def simple_normalize(data, method='minmax', target_column='RATING'): zscore = lambda x: (x - x.mean()) / x.std() minmax = lambda x: (x - x.min()) / (x.max() - x.min()) if method=='minmax': norm = data.groupby('USERID')[target_column].transform(minmax) elif method=='zscore': norm = data.groupby('USERID')[target_column].transform(zscore) data.loc[:,target_column] = norm return data #hide !pip install -q watermark %reload_ext watermark %watermark -a "Sparsh A." -m -iv -u -t -d
Author: Sparsh A. Last updated: 2021-12-18 08:35:26 Compiler : GCC 7.5.0 OS : Linux Release : 5.4.104+ Machine : x86_64 Processor : x86_64 CPU cores : 2 Architecture: 64bit IPython: 5.5.0
Apache-2.0
nbs/utils/utils.normalize.ipynb
sparsh-ai/recohut
Copyright 2020 The Google AI Language Team Authors
Licensed under the Apache License, Version 2.0 (the "License");
# Copyright 2019 The Google AI Language Team Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
notebooks/sqa_predictions.ipynb
aniket371/tapas
Running a Tapas fine-tuned checkpoint
---
This notebook shows how to load and make predictions with a TAPAS model, which was introduced in the paper: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349).

Clone and install the repository
First, let's install the code.
! pip install tapas-table-parsing
Collecting tapas-table-parsing
  Downloading tapas_table_parsing-0.0.1.dev0-py3-none-any.whl (195 kB)
[pip dependency download and progress-bar output trimmed: frozendict==1.2, pandas~=1.0.0, tensorflow-probability==0.10.1, nltk~=3.5, scikit-learn~=0.22.1, kaggle<1.5.8, tf-models-official~=2.2.0, tensorflow~=2.2.0, apache-beam[gcp]==2.20.0, tf-slim~=1.1.0 and their transitive dependencies]
py-cpuinfo>=3.3.0 Downloading py-cpuinfo-8.0.0.tar.gz (99 kB)  |████████████████████████████████| 99 kB 9.3 MB/s [?25hRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (7.1.2) Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.7/dist-packages (from tf-models-official~=2.2.0->tapas-table-parsing) (1.12.11) Collecting dataclasses Downloading dataclasses-0.6-py3-none-any.whl (14 kB) Collecting tensorflow-addons Downloading tensorflow_addons-0.16.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)  |████████████████████████████████| 1.1 MB 37.8 MB/s [?25hCollecting google-api-python-client>=1.6.7 Downloading google_api_python_client-2.42.0-py2.py3-none-any.whl (8.3 MB)  |████████████████████████████████| 8.3 MB 64.3 MB/s [?25h Downloading google_api_python_client-2.41.0-py2.py3-none-any.whl (8.3 MB)  |████████████████████████████████| 8.3 MB 21.1 MB/s [?25h Downloading google_api_python_client-2.40.0-py2.py3-none-any.whl (8.2 MB)  |████████████████████████████████| 8.2 MB 44.2 MB/s [?25h Downloading google_api_python_client-2.39.0-py2.py3-none-any.whl (8.2 MB)  |████████████████████████████████| 8.2 MB 38.0 MB/s [?25h Downloading google_api_python_client-2.38.0-py2.py3-none-any.whl (8.2 MB)  |████████████████████████████████| 8.2 MB 41.1 MB/s [?25h Downloading google_api_python_client-2.37.0-py2.py3-none-any.whl (8.1 MB)  |████████████████████████████████| 8.1 MB 30.0 MB/s [?25h Downloading google_api_python_client-2.36.0-py2.py3-none-any.whl (8.0 MB)  |████████████████████████████████| 8.0 MB 50.5 MB/s [?25h Downloading google_api_python_client-2.35.0-py2.py3-none-any.whl (8.0 MB)  |████████████████████████████████| 8.0 MB 35.1 MB/s [?25h Downloading google_api_python_client-2.34.0-py2.py3-none-any.whl (7.9 MB)  |████████████████████████████████| 7.9 MB 50.6 MB/s [?25h Downloading google_api_python_client-2.33.0-py2.py3-none-any.whl (7.9 MB)  |████████████████████████████████| 7.9 MB 26.7 MB/s [?25h Downloading google_api_python_client-2.32.0-py2.py3-none-any.whl (7.8 MB)  |████████████████████████████████| 7.8 MB 50.7 MB/s [?25h Downloading google_api_python_client-2.31.0-py2.py3-none-any.whl (7.8 MB)  |████████████████████████████████| 7.8 MB 31.8 MB/s [?25h Downloading google_api_python_client-2.30.0-py2.py3-none-any.whl (7.8 MB)  |████████████████████████████████| 7.8 MB 25.6 MB/s [?25h Downloading google_api_python_client-2.29.0-py2.py3-none-any.whl (7.7 MB)  |████████████████████████████████| 7.7 MB 45.6 MB/s [?25h Downloading google_api_python_client-2.28.0-py2.py3-none-any.whl (7.7 MB)  |████████████████████████████████| 7.7 MB 32.2 MB/s [?25h Downloading google_api_python_client-2.27.0-py2.py3-none-any.whl (7.7 MB)  |████████████████████████████████| 7.7 MB 37.2 MB/s [?25h Downloading google_api_python_client-2.26.1-py2.py3-none-any.whl (7.6 MB)  |████████████████████████████████| 7.6 MB 33.4 MB/s [?25h Downloading google_api_python_client-2.26.0-py2.py3-none-any.whl (7.6 MB)  |████████████████████████████████| 7.6 MB 19.4 MB/s [?25h Downloading google_api_python_client-2.25.0-py2.py3-none-any.whl (7.5 MB)  |████████████████████████████████| 7.5 MB 41.9 MB/s [?25h Downloading google_api_python_client-2.24.0-py2.py3-none-any.whl (7.5 MB)  |████████████████████████████████| 7.5 MB 9.5 MB/s [?25h Downloading google_api_python_client-2.23.0-py2.py3-none-any.whl (7.5 MB)  |████████████████████████████████| 7.5 MB 11.0 MB/s [?25h Downloading 
google_api_python_client-2.22.0-py2.py3-none-any.whl (7.5 MB)  |████████████████████████████████| 7.5 MB 8.1 MB/s [?25h Downloading google_api_python_client-2.21.0-py2.py3-none-any.whl (7.5 MB)  |████████████████████████████████| 7.5 MB 37.5 MB/s [?25h Downloading google_api_python_client-2.20.0-py2.py3-none-any.whl (7.4 MB)  |████████████████████████████████| 7.4 MB 29.5 MB/s [?25h Downloading google_api_python_client-2.19.1-py2.py3-none-any.whl (7.4 MB)  |████████████████████████████████| 7.4 MB 6.7 MB/s [?25h Downloading google_api_python_client-2.19.0-py2.py3-none-any.whl (7.4 MB)  |████████████████████████████████| 7.4 MB 31.1 MB/s [?25h Downloading google_api_python_client-2.18.0-py2.py3-none-any.whl (7.4 MB)  |████████████████████████████████| 7.4 MB 30.6 MB/s [?25h Downloading google_api_python_client-2.17.0-py2.py3-none-any.whl (7.3 MB)  |████████████████████████████████| 7.3 MB 21.9 MB/s [?25h Downloading google_api_python_client-2.16.0-py2.py3-none-any.whl (7.3 MB)  |████████████████████████████████| 7.3 MB 27.5 MB/s [?25h Downloading google_api_python_client-2.15.0-py2.py3-none-any.whl (7.2 MB)  |████████████████████████████████| 7.2 MB 19.3 MB/s [?25h Downloading google_api_python_client-2.14.1-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 25.4 MB/s [?25h Downloading google_api_python_client-2.14.0-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 22.6 MB/s [?25h Downloading google_api_python_client-2.13.0-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 16.9 MB/s [?25h Downloading google_api_python_client-2.12.0-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 22.3 MB/s [?25h Downloading google_api_python_client-2.11.0-py2.py3-none-any.whl (7.0 MB)  |████████████████████████████████| 7.0 MB 5.1 MB/s [?25h Downloading google_api_python_client-2.10.0-py2.py3-none-any.whl (7.0 MB)  |████████████████████████████████| 7.0 MB 26.8 MB/s [?25h Downloading google_api_python_client-2.9.0-py2.py3-none-any.whl (7.0 MB)  |████████████████████████████████| 7.0 MB 19.7 MB/s [?25h Downloading google_api_python_client-2.8.0-py2.py3-none-any.whl (7.0 MB)  |████████████████████████████████| 7.0 MB 24.8 MB/s [?25h Downloading google_api_python_client-2.7.0-py2.py3-none-any.whl (7.3 MB)  |████████████████████████████████| 7.3 MB 19.0 MB/s [?25h Downloading google_api_python_client-2.6.0-py2.py3-none-any.whl (7.2 MB)  |████████████████████████████████| 7.2 MB 34.9 MB/s [?25h Downloading google_api_python_client-2.5.0-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 17.7 MB/s [?25h Downloading google_api_python_client-2.4.0-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 31.2 MB/s [?25h Downloading google_api_python_client-2.3.0-py2.py3-none-any.whl (7.1 MB)  |████████████████████████████████| 7.1 MB 37.4 MB/s [?25h Downloading google_api_python_client-2.2.0-py2.py3-none-any.whl (7.0 MB)  |████████████████████████████████| 7.0 MB 6.2 MB/s [?25h Downloading google_api_python_client-2.1.0-py2.py3-none-any.whl (6.6 MB)  |████████████████████████████████| 6.6 MB 14.4 MB/s [?25h Downloading google_api_python_client-2.0.2-py2.py3-none-any.whl (6.5 MB)  |████████████████████████████████| 6.5 MB 37.7 MB/s [?25h Downloading google_api_python_client-1.12.10-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 119 kB/s [?25h Downloading google_api_python_client-1.12.8-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 785 
bytes/s [?25h Downloading google_api_python_client-1.12.7-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 29 kB/s [?25h Downloading google_api_python_client-1.12.6-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 27 kB/s [?25h Downloading google_api_python_client-1.12.5-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 7.5 MB/s [?25h Downloading google_api_python_client-1.12.4-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 7.1 MB/s [?25h Downloading google_api_python_client-1.12.3-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 5.8 MB/s [?25h Downloading google_api_python_client-1.12.2-py2.py3-none-any.whl (61 kB)  |████████████████████████████████| 61 kB 8.4 MB/s [?25hRequirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official~=2.2.0->tapas-table-parsing) (0.0.4) Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official~=2.2.0->tapas-table-parsing) (3.0.1) Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.2.1->tf-models-official~=2.2.0->tapas-table-parsing) (0.1.6) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->tf-models-official~=2.2.0->tapas-table-parsing) (1.4.0) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->tf-models-official~=2.2.0->tapas-table-parsing) (0.11.0) Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.7/dist-packages (from python-slugify->kaggle<1.5.8->tapas-table-parsing) (1.3) Requirement already satisfied: typeguard>=2.7 in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons->tf-models-official~=2.2.0->tapas-table-parsing) (2.7.1) Requirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (5.4.0) Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (1.7.0) Requirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (2.3) Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-datasets->tf-models-official~=2.2.0->tapas-table-parsing) (21.4.0) Building wheels for collected packages: frozendict, avro-python3, dill, google-apitools, grpc-google-iam-v1, httplib2, kaggle, oauth2client, py-cpuinfo Building wheel for frozendict (setup.py) ... [?25l[?25hdone Created wheel for frozendict: filename=frozendict-1.2-py3-none-any.whl size=3166 sha256=bcbf7ecdf36cf16604986862f798d3cfd039a27a02d608e86236c97dac08c3ae Stored in directory: /root/.cache/pip/wheels/68/17/69/ac196dd181e620bba5fae5488e4fd6366a7316dce13cf88776 Building wheel for avro-python3 (setup.py) ... [?25l[?25hdone Created wheel for avro-python3: filename=avro_python3-1.9.2.1-py3-none-any.whl size=43513 sha256=8d5079abbdcb60a53a8929f07491be91b469e9ca1e0b266eb0f868e065147dae Stored in directory: /root/.cache/pip/wheels/bc/49/5f/fdb5b9d85055c478213e0158ac122b596816149a02d82e0ab1 Building wheel for dill (setup.py) ... 
[?25l[?25hdone Created wheel for dill: filename=dill-0.3.1.1-py3-none-any.whl size=78544 sha256=5847f08d96cd5f1809473da81ce0e7cea3badef87bbcc5e8f88f7136bb233a9c Stored in directory: /root/.cache/pip/wheels/a4/61/fd/c57e374e580aa78a45ed78d5859b3a44436af17e22ca53284f Building wheel for google-apitools (setup.py) ... [?25l[?25hdone Created wheel for google-apitools: filename=google_apitools-0.5.28-py3-none-any.whl size=130109 sha256=3a5edaac514084485549d713c595af9a20d638189d5ab3dfa0107675ce6c2937 Stored in directory: /root/.cache/pip/wheels/34/3b/69/ecd8e6ae89d9d71102a58962c29faa7a9467ba45f99f205920 Building wheel for grpc-google-iam-v1 (setup.py) ... [?25l[?25hdone Created wheel for grpc-google-iam-v1: filename=grpc_google_iam_v1-0.12.3-py3-none-any.whl size=18515 sha256=c6a155cf0d184085c4d718e1e3c19356ba32b9f7c29d3884f6de71f8d14a6387 Stored in directory: /root/.cache/pip/wheels/b9/ee/67/2e444183030cb8d31ce8b34cee34a7afdbd3ba5959ea846380 Building wheel for httplib2 (setup.py) ... [?25l[?25hdone Created wheel for httplib2: filename=httplib2-0.12.0-py3-none-any.whl size=93465 sha256=e1e718e4ceca2290ca872bd2434defee84953402a8baad2e2b183a115bb6b901 Stored in directory: /root/.cache/pip/wheels/0d/e7/b6/0dd30343ceca921cfbd91f355041bd9c69e0f40b49f25b7b8a Building wheel for kaggle (setup.py) ... [?25l[?25hdone Created wheel for kaggle: filename=kaggle-1.5.6-py3-none-any.whl size=72858 sha256=f911a59bdadc590e7f089c41c23f24c49e3d2d586bbefe73925d026b7989d7fc Stored in directory: /root/.cache/pip/wheels/aa/e7/e7/eb3c3d514c33294d77ddd5a856bdd58dc9c1fabbed59a02a2b Building wheel for oauth2client (setup.py) ... [?25l[?25hdone Created wheel for oauth2client: filename=oauth2client-3.0.0-py3-none-any.whl size=106375 sha256=a0b226e54e128315e6205fc9380270bca443fc3e1bac6e135e51a4cd24bb3622 Stored in directory: /root/.cache/pip/wheels/86/73/7a/3b3f76a2142176605ff38fbca574327962c71e25a43197a4c1 Building wheel for py-cpuinfo (setup.py) ... 
[?25l[?25hdone Created wheel for py-cpuinfo: filename=py_cpuinfo-8.0.0-py3-none-any.whl size=22257 sha256=113156ecdb59f6181b20ec42ed7d3373f3b3be32ec31144ee5cb7efd4caca5f3 Stored in directory: /root/.cache/pip/wheels/d2/f1/1f/041add21dc9c4220157f1bd2bd6afe1f1a49524c3396b94401 Successfully built frozendict avro-python3 dill google-apitools grpc-google-iam-v1 httplib2 kaggle oauth2client py-cpuinfo Installing collected packages: typing-extensions, cachetools, pbr, numpy, httplib2, grpcio-gcp, tensorflow-estimator, tensorboard, pymongo, pyarrow, oauth2client, mock, hdfs, h5py, grpc-google-iam-v1, gast, fasteners, fastavro, dill, avro-python3, typing, tensorflow-model-optimization, tensorflow-addons, tensorflow, sentencepiece, regex, py-cpuinfo, pandas, opencv-python-headless, mlperf-compliance, kaggle, google-cloud-vision, google-cloud-videointelligence, google-cloud-spanner, google-cloud-pubsub, google-cloud-language, google-cloud-dlp, google-cloud-datastore, google-cloud-bigtable, google-apitools, google-api-python-client, dataclasses, apache-beam, tf-slim, tf-models-official, tensorflow-probability, scikit-learn, nltk, frozendict, tapas-table-parsing Attempting uninstall: typing-extensions Found existing installation: typing-extensions 3.10.0.2 Uninstalling typing-extensions-3.10.0.2: Successfully uninstalled typing-extensions-3.10.0.2 Attempting uninstall: cachetools Found existing installation: cachetools 4.2.4 Uninstalling cachetools-4.2.4: Successfully uninstalled cachetools-4.2.4 Attempting uninstall: numpy Found existing installation: numpy 1.21.5 Uninstalling numpy-1.21.5: Successfully uninstalled numpy-1.21.5 Attempting uninstall: httplib2 Found existing installation: httplib2 0.17.4 Uninstalling httplib2-0.17.4: Successfully uninstalled httplib2-0.17.4 Attempting uninstall: tensorflow-estimator Found existing installation: tensorflow-estimator 2.8.0 Uninstalling tensorflow-estimator-2.8.0: Successfully uninstalled tensorflow-estimator-2.8.0 Attempting uninstall: tensorboard Found existing installation: tensorboard 2.8.0 Uninstalling tensorboard-2.8.0: Successfully uninstalled tensorboard-2.8.0 Attempting uninstall: pymongo Found existing installation: pymongo 4.0.2 Uninstalling pymongo-4.0.2: Successfully uninstalled pymongo-4.0.2 Attempting uninstall: pyarrow Found existing installation: pyarrow 6.0.1 Uninstalling pyarrow-6.0.1: Successfully uninstalled pyarrow-6.0.1 Attempting uninstall: oauth2client Found existing installation: oauth2client 4.1.3 Uninstalling oauth2client-4.1.3: Successfully uninstalled oauth2client-4.1.3 Attempting uninstall: h5py Found existing installation: h5py 3.1.0 Uninstalling h5py-3.1.0: Successfully uninstalled h5py-3.1.0 Attempting uninstall: gast Found existing installation: gast 0.5.3 Uninstalling gast-0.5.3: Successfully uninstalled gast-0.5.3 Attempting uninstall: dill Found existing installation: dill 0.3.4 Uninstalling dill-0.3.4: Successfully uninstalled dill-0.3.4 Attempting uninstall: tensorflow Found existing installation: tensorflow 2.8.0 Uninstalling tensorflow-2.8.0: Successfully uninstalled tensorflow-2.8.0 Attempting uninstall: regex Found existing installation: regex 2019.12.20 Uninstalling regex-2019.12.20: Successfully uninstalled regex-2019.12.20 Attempting uninstall: pandas Found existing installation: pandas 1.3.5 Uninstalling pandas-1.3.5: Successfully uninstalled pandas-1.3.5 Attempting uninstall: kaggle Found existing installation: kaggle 1.5.12 Uninstalling kaggle-1.5.12: Successfully uninstalled kaggle-1.5.12 Attempting 
uninstall: google-cloud-language Found existing installation: google-cloud-language 1.2.0 Uninstalling google-cloud-language-1.2.0: Successfully uninstalled google-cloud-language-1.2.0 Attempting uninstall: google-cloud-datastore Found existing installation: google-cloud-datastore 1.8.0 Uninstalling google-cloud-datastore-1.8.0: Successfully uninstalled google-cloud-datastore-1.8.0 Attempting uninstall: google-api-python-client Found existing installation: google-api-python-client 1.12.11 Uninstalling google-api-python-client-1.12.11: Successfully uninstalled google-api-python-client-1.12.11 Attempting uninstall: tensorflow-probability Found existing installation: tensorflow-probability 0.16.0 Uninstalling tensorflow-probability-0.16.0: Successfully uninstalled tensorflow-probability-0.16.0 Attempting uninstall: scikit-learn Found existing installation: scikit-learn 1.0.2 Uninstalling scikit-learn-1.0.2: Successfully uninstalled scikit-learn-1.0.2 Attempting uninstall: nltk Found existing installation: nltk 3.2.5 Uninstalling nltk-3.2.5: Successfully uninstalled nltk-3.2.5 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. yellowbrick 1.4 requires scikit-learn>=1.0.0, but you have scikit-learn 0.22.2.post1 which is incompatible. tables 3.7.0 requires numpy>=1.19.0, but you have numpy 1.18.5 which is incompatible. pymc3 3.11.4 requires cachetools>=4.2.1, but you have cachetools 3.1.1 which is incompatible. pydrive 1.3.1 requires oauth2client>=4.0.0, but you have oauth2client 3.0.0 which is incompatible. multiprocess 0.70.12.2 requires dill>=0.3.4, but you have dill 0.3.1.1 which is incompatible. jaxlib 0.3.2+cuda11.cudnn805 requires numpy>=1.19, but you have numpy 1.18.5 which is incompatible. jax 0.3.4 requires numpy>=1.19, but you have numpy 1.18.5 which is incompatible. imbalanced-learn 0.8.1 requires scikit-learn>=0.24, but you have scikit-learn 0.22.2.post1 which is incompatible. google-colab 1.0.0 requires pandas>=1.1.0; python_version >= "3.0", but you have pandas 1.0.5 which is incompatible. datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible. albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible. Successfully installed apache-beam-2.20.0 avro-python3-1.9.2.1 cachetools-3.1.1 dataclasses-0.6 dill-0.3.1.1 fastavro-0.21.24 fasteners-0.17.3 frozendict-1.2 gast-0.3.3 google-api-python-client-1.12.2 google-apitools-0.5.28 google-cloud-bigtable-1.0.0 google-cloud-datastore-1.7.4 google-cloud-dlp-0.13.0 google-cloud-language-1.3.0 google-cloud-pubsub-1.0.2 google-cloud-spanner-1.13.0 google-cloud-videointelligence-1.13.0 google-cloud-vision-0.42.0 grpc-google-iam-v1-0.12.3 grpcio-gcp-0.2.2 h5py-2.10.0 hdfs-2.7.0 httplib2-0.12.0 kaggle-1.5.6 mlperf-compliance-0.0.10 mock-2.0.0 nltk-3.7 numpy-1.18.5 oauth2client-3.0.0 opencv-python-headless-4.5.5.64 pandas-1.0.5 pbr-5.8.1 py-cpuinfo-8.0.0 pyarrow-0.16.0 pymongo-3.12.3 regex-2022.3.15 scikit-learn-0.22.2.post1 sentencepiece-0.1.96 tapas-table-parsing-0.0.1.dev0 tensorboard-2.2.2 tensorflow-2.2.3 tensorflow-addons-0.16.1 tensorflow-estimator-2.2.0 tensorflow-model-optimization-0.7.2 tensorflow-probability-0.10.1 tf-models-official-2.2.2 tf-slim-1.1.0 typing-3.7.4.1 typing-extensions-3.7.4.3
Apache-2.0
notebooks/sqa_predictions.ipynb
aniket371/tapas
Fetch models from Google Storage Next we can get a pretrained checkpoint from Google Storage. For the sake of speed, this is a base-sized model trained on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). Note that the best results in the paper were obtained with a large model, with 24 layers instead of 12.
! gsutil cp gs://tapas_models/2020_04_21/tapas_sqa_base.zip . && unzip tapas_sqa_base.zip
Copying gs://tapas_models/2020_04_21/tapas_sqa_base.zip... | [1 files][ 1.0 GiB/ 1.0 GiB] 51.4 MiB/s Operation completed over 1 objects/1.0 GiB. Archive: tapas_sqa_base.zip replace tapas_sqa_base/model.ckpt.data-00000-of-00001? [y]es, [n]o, [A]ll, [N]one, [r]ename: y inflating: tapas_sqa_base/model.ckpt.data-00000-of-00001 inflating: tapas_sqa_base/model.ckpt.index inflating: tapas_sqa_base/README.txt inflating: tapas_sqa_base/vocab.txt inflating: tapas_sqa_base/bert_config.json inflating: tapas_sqa_base/model.ckpt.meta
Apache-2.0
notebooks/sqa_predictions.ipynb
aniket371/tapas
Imports
import tensorflow.compat.v1 as tf import os import shutil import csv import pandas as pd import IPython tf.get_logger().setLevel('ERROR') from tapas.utils import tf_example_utils from tapas.protos import interaction_pb2 from tapas.utils import number_annotation_utils from tapas.scripts import prediction_utils
_____no_output_____
Apache-2.0
notebooks/sqa_predictions.ipynb
aniket371/tapas
Load checkpoint for prediction Here's the prediction code, which creates an `interaction_pb2.Interaction` protobuf object (the data structure we use to store examples) and then calls the prediction script.
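As a quick illustration of that data structure, here is a minimal sketch mirroring the conversion code below; the question text and table contents are purely illustrative, and only fields used by the conversion code are shown:

from tapas.protos import interaction_pb2

# Build an interaction holding one question and a two-column, one-row table.
interaction = interaction_pb2.Interaction()

# Questions: each gets its original text and an id of the form "<example>-0_<position>".
question = interaction.questions.add()
question.original_text = "How many doctors are there?"  # illustrative query
question.id = "0-0_0"

# Table: header cells go into `columns`, data cells into `rows`.
for header in ["Doctor_ID", "Doctor_Name"]:
    interaction.table.columns.add().text = header
row = interaction.table.rows.add()
for cell in ["1", "ABCD"]:
    row.cells.add().text = cell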
os.makedirs('results/sqa/tf_examples', exist_ok=True) os.makedirs('results/sqa/model', exist_ok=True) with open('results/sqa/model/checkpoint', 'w') as f: f.write('model_checkpoint_path: "model.ckpt-0"') for suffix in ['.data-00000-of-00001', '.index', '.meta']: shutil.copyfile(f'tapas_sqa_base/model.ckpt{suffix}', f'results/sqa/model/model.ckpt-0{suffix}') max_seq_length = 512 vocab_file = "tapas_sqa_base/vocab.txt" config = tf_example_utils.ClassifierConversionConfig( vocab_file=vocab_file, max_seq_length=max_seq_length, max_column_id=max_seq_length, max_row_id=max_seq_length, strip_column_names=False, add_aggregation_candidates=False, ) converter = tf_example_utils.ToClassifierTensorflowExample(config) def convert_interactions_to_examples(tables_and_queries): """Calls Tapas converter to convert interaction to example.""" for idx, (table, queries) in enumerate(tables_and_queries): interaction = interaction_pb2.Interaction() for position, query in enumerate(queries): question = interaction.questions.add() question.original_text = query question.id = f"{idx}-0_{position}" for header in table[0]: interaction.table.columns.add().text = header for line in table[1:]: row = interaction.table.rows.add() for cell in line: row.cells.add().text = cell number_annotation_utils.add_numeric_values(interaction) for i in range(len(interaction.questions)): try: yield converter.convert(interaction, i) except ValueError as e: print(f"Can't convert interaction: {interaction.id} error: {e}") def write_tf_example(filename, examples): with tf.io.TFRecordWriter(filename) as writer: for example in examples: writer.write(example.SerializeToString()) def predict(table_data, queries): table = [list(map(lambda s: s.strip(), row.split("|"))) for row in table_data.split("\n") if row.strip()] examples = convert_interactions_to_examples([(table, queries)]) write_tf_example("results/sqa/tf_examples/test.tfrecord", examples) write_tf_example("results/sqa/tf_examples/random-split-1-dev.tfrecord", []) ! python -m tapas.run_task_main \ --task="SQA" \ --output_dir="results" \ --noloop_predict \ --test_batch_size={len(queries)} \ --tapas_verbosity="ERROR" \ --compression_type= \ --init_checkpoint="tapas_sqa_base/model.ckpt" \ --bert_config_file="tapas_sqa_base/bert_config.json" \ --mode="predict" 2> error results_path = "results/sqa/model/test_sequence.tsv" all_coordinates = [] df = pd.DataFrame(table[1:], columns=table[0]) display(IPython.display.HTML(df.to_html(index=False))) print() with open(results_path) as csvfile: reader = csv.DictReader(csvfile, delimiter='\t') for row in reader: coordinates = prediction_utils.parse_coordinates(row["answer_coordinates"]) all_coordinates.append(coordinates) answers = ', '.join([table[row + 1][col] for row, col in coordinates]) position = int(row['position']) print(">", queries[position]) print(answers) return all_coordinates
_____no_output_____
Apache-2.0
notebooks/sqa_predictions.ipynb
aniket371/tapas
Predict
# Example nu-1000-0 result = predict(""" Doctor_ID|Doctor_Name|Department|opd_day|Morning_time|Evening_time 1|ABCD|Nephrology|Monday|9|5 2|ABC|Opthomology|Tuesday|9|6 3|DEF|Nephrology|Wednesday|9|6 4|GHI|Gynaecology|Thursday|9|6 5|JKL|Orthopeadics|Friday|9|6 6|MNO|Cardiology|Saturday|9|6 7|PQR|Dentistry|Sunday|9|5 8|STU|Epidemology|Monday|9|6 9|WVX|ENT|Tuesday|9|5 10|GILOY|Genetics|Wednesday|9|6 11|Rajeev|Neurology|Wednesday|10|4:30 12|Makan|Immunology|Tuesday|9|4:30 13|Arora|Paediatrics|Sunday|11|4:30 14|Piyush|Radiology|Monday|11:20|2 15|Roha|Gynaecology|Wednesday|9:20|2 16|Bohra|Dentistry|Thursday|11|2 17|Rajeev Khan|Virology|Tuesday|10|2 18|Arnab|Pharmocology|Sunday|10|2 19|Muskan|ENT|Friday|10|2 20|pamela|Epidemology|Monday|10|2 21|Rohit|Radiology|Tuesday|10|2 22|Aniket|Cardiology|Saturday|10|2 23|Darbar|Genetics|Saturday|10|2 24|Suyash|Neurology|Friday|10|2 25|Abhishek|Immunology|Wednesday|10|2 26|Yogesh|Immunology|Saturday|10|2 27|Kunal|Paediatrics|Monday|10|2 28|Vimal|Pharmocology|Friday|10|2 29|Kalyan|Virology|Tuesday|10|2 30|DSS|Nephrology|Thursday|10|2 """, ["How many doctors are there in Immunology department?", "of these, which doctor is available on Saturday?"])
_____no_output_____
Apache-2.0
notebooks/sqa_predictions.ipynb
aniket371/tapas
Tutorial Part 6: Going Deeper On Molecular Featurizations One of the most important steps of doing machine learning on molecular data is transforming this data into a form amenable to the application of learning algorithms. This process is broadly called "featurization" and involves turning a molecule into a vector or tensor of some sort. There are a number of different ways of doing such transformations, and the choice of featurization is often dependent on the problem at hand. In this tutorial, we explore the different featurization methods available for molecules. These featurization methods include: 1. `ConvMolFeaturizer`, 2. `WeaveFeaturizer`, 3. `CircularFingerprints`, 4. `RDKitDescriptors`, 5. `BPSymmetryFunction`, 6. `CoulombMatrix`, 7. `CoulombMatrixEig`, 8. `AdjacencyFingerprints` Colab This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb) Setup To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
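As a concrete illustration of the list above, here is a minimal sketch of applying one of these featurizers. It assumes the DeepChem/RDKit environment installed by the setup cell below has finished installing, and it uses DeepChem's `CircularFingerprint` class on a couple of illustrative SMILES strings (the molecules and fingerprint size are just examples):

import deepchem as dc
from rdkit import Chem

# Two illustrative molecules: ethanol and benzene.
smiles = ["CCO", "c1ccccc1"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Featurize each molecule into a fixed-length circular (ECFP-style) fingerprint.
featurizer = dc.feat.CircularFingerprint(size=1024)
features = featurizer.featurize(mols)
print(features.shape)  # (2, 1024): one fingerprint vector per molecule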
!wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh !chmod +x Anaconda3-2019.10-Linux-x86_64.sh !bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local !conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0 import sys sys.path.append('/usr/local/lib/python3.7/site-packages/')
--2020-03-07 01:06:34-- https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh Resolving repo.anaconda.com (repo.anaconda.com)... 104.16.130.3, 104.16.131.3, 2606:4700::6810:8303, ... Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.130.3|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 530308481 (506M) [application/x-sh] Saving to: ‘Anaconda3-2019.10-Linux-x86_64.sh’ Anaconda3-2019.10-L 100%[===================>] 505.74M 105MB/s in 5.1s 2020-03-07 01:06:39 (99.5 MB/s) - ‘Anaconda3-2019.10-Linux-x86_64.sh’ saved [530308481/530308481] PREFIX=/usr/local Unpacking payload ... Collecting package metadata (current_repodata.json): - \ | / - \ | done Solving environment: - \ | / - \ | / - \ | / - \ | / - \ | done ## Package Plan ## environment location: /usr/local added / updated specs: - _ipyw_jlab_nb_ext_conf==0.1.0=py37_0 - _libgcc_mutex==0.1=main - alabaster==0.7.12=py37_0 - anaconda-client==1.7.2=py37_0 - anaconda-navigator==1.9.7=py37_0 - anaconda-project==0.8.3=py_0 - anaconda==2019.10=py37_0 - asn1crypto==1.0.1=py37_0 - astroid==2.3.1=py37_0 - astropy==3.2.2=py37h7b6447c_0 - atomicwrites==1.3.0=py37_1 - attrs==19.2.0=py_0 - babel==2.7.0=py_0 - backcall==0.1.0=py37_0 - backports.functools_lru_cache==1.5=py_2 - backports.os==0.1.1=py37_0 - backports.shutil_get_terminal_size==1.0.0=py37_2 - backports.tempfile==1.0=py_1 - backports.weakref==1.0.post1=py_1 - backports==1.0=py_2 - beautifulsoup4==4.8.0=py37_0 - bitarray==1.0.1=py37h7b6447c_0 - bkcharts==0.2=py37_0 - blas==1.0=mkl - bleach==3.1.0=py37_0 - blosc==1.16.3=hd408876_0 - bokeh==1.3.4=py37_0 - boto==2.49.0=py37_0 - bottleneck==1.2.1=py37h035aef0_1 - bzip2==1.0.8=h7b6447c_0 - ca-certificates==2019.8.28=0 - cairo==1.14.12=h8948797_3 - certifi==2019.9.11=py37_0 - cffi==1.12.3=py37h2e261b9_0 - chardet==3.0.4=py37_1003 - click==7.0=py37_0 - cloudpickle==1.2.2=py_0 - clyent==1.2.2=py37_1 - colorama==0.4.1=py37_0 - conda-build==3.18.9=py37_3 - conda-env==2.6.0=1 - conda-package-handling==1.6.0=py37h7b6447c_0 - conda-verify==3.4.2=py_1 - conda==4.7.12=py37_0 - contextlib2==0.6.0=py_0 - cryptography==2.7=py37h1ba5d50_0 - curl==7.65.3=hbc83047_0 - cycler==0.10.0=py37_0 - cython==0.29.13=py37he6710b0_0 - cytoolz==0.10.0=py37h7b6447c_0 - dask-core==2.5.2=py_0 - dask==2.5.2=py_0 - dbus==1.13.6=h746ee38_0 - decorator==4.4.0=py37_1 - defusedxml==0.6.0=py_0 - distributed==2.5.2=py_0 - docutils==0.15.2=py37_0 - entrypoints==0.3=py37_0 - et_xmlfile==1.0.1=py37_0 - expat==2.2.6=he6710b0_0 - fastcache==1.1.0=py37h7b6447c_0 - filelock==3.0.12=py_0 - flask==1.1.1=py_0 - fontconfig==2.13.0=h9420a91_0 - freetype==2.9.1=h8a8886c_1 - fribidi==1.0.5=h7b6447c_0 - fsspec==0.5.2=py_0 - future==0.17.1=py37_0 - get_terminal_size==1.0.0=haa9412d_0 - gevent==1.4.0=py37h7b6447c_0 - glib==2.56.2=hd408876_0 - glob2==0.7=py_0 - gmp==6.1.2=h6c8ec71_1 - gmpy2==2.0.8=py37h10f8cd9_2 - graphite2==1.3.13=h23475e2_0 - greenlet==0.4.15=py37h7b6447c_0 - gst-plugins-base==1.14.0=hbbd80ab_1 - gstreamer==1.14.0=hb453b48_1 - h5py==2.9.0=py37h7918eee_0 - harfbuzz==1.8.8=hffaf4a1_0 - hdf5==1.10.4=hb1b8bf9_0 - heapdict==1.0.1=py_0 - html5lib==1.0.1=py37_0 - icu==58.2=h9c2bf20_1 - idna==2.8=py37_0 - imageio==2.6.0=py37_0 - imagesize==1.1.0=py37_0 - importlib_metadata==0.23=py37_0 - intel-openmp==2019.4=243 - ipykernel==5.1.2=py37h39e3cac_0 - ipython==7.8.0=py37h39e3cac_0 - ipython_genutils==0.2.0=py37_0 - ipywidgets==7.5.1=py_0 - isort==4.3.21=py37_0 - itsdangerous==1.1.0=py37_0 - jbig==2.1=hdba287a_0 - jdcal==1.4.1=py_0 - jedi==0.15.1=py37_0 
- jeepney==0.4.1=py_0 - jinja2==2.10.3=py_0 - joblib==0.13.2=py37_0 - jpeg==9b=h024ee3a_2 - json5==0.8.5=py_0 - jsonschema==3.0.2=py37_0 - jupyter==1.0.0=py37_7 - jupyter_client==5.3.3=py37_1 - jupyter_console==6.0.0=py37_0 - jupyter_core==4.5.0=py_0 - jupyterlab==1.1.4=pyhf63ae98_0 - jupyterlab_server==1.0.6=py_0 - keyring==18.0.0=py37_0 - kiwisolver==1.1.0=py37he6710b0_0 - krb5==1.16.1=h173b8e3_7 - lazy-object-proxy==1.4.2=py37h7b6447c_0 - libarchive==3.3.3=h5d8350f_5 - libcurl==7.65.3=h20c2e04_0 - libedit==3.1.20181209=hc058e9b_0 - libffi==3.2.1=hd88cf55_4 - libgcc-ng==9.1.0=hdf63c60_0 - libgfortran-ng==7.3.0=hdf63c60_0 - liblief==0.9.0=h7725739_2 - libpng==1.6.37=hbc83047_0 - libsodium==1.0.16=h1bed415_0 - libssh2==1.8.2=h1ba5d50_0 - libstdcxx-ng==9.1.0=hdf63c60_0 - libtiff==4.0.10=h2733197_2 - libtool==2.4.6=h7b6447c_5 - libuuid==1.0.3=h1bed415_2 - libxcb==1.13=h1bed415_1 - libxml2==2.9.9=hea5a465_1 - libxslt==1.1.33=h7d1a2b0_0 - llvmlite==0.29.0=py37hd408876_0 - locket==0.2.0=py37_1 - lxml==4.4.1=py37hefd8a0e_0 - lz4-c==1.8.1.2=h14c3975_0 - lzo==2.10=h49e0be7_2 - markupsafe==1.1.1=py37h7b6447c_0 - matplotlib==3.1.1=py37h5429711_0 - mccabe==0.6.1=py37_1 - mistune==0.8.4=py37h7b6447c_0 - mkl-service==2.3.0=py37he904b0f_0 - mkl==2019.4=243 - mkl_fft==1.0.14=py37ha843d7b_0 - mkl_random==1.1.0=py37hd6b4f25_0 - mock==3.0.5=py37_0 - more-itertools==7.2.0=py37_0 - mpc==1.1.0=h10f8cd9_1 - mpfr==4.0.1=hdf1c602_3 - mpmath==1.1.0=py37_0 - msgpack-python==0.6.1=py37hfd86e86_1 - multipledispatch==0.6.0=py37_0 - navigator-updater==0.2.1=py37_0 - nbconvert==5.6.0=py37_1 - nbformat==4.4.0=py37_0 - ncurses==6.1=he6710b0_1 - networkx==2.3=py_0 - nltk==3.4.5=py37_0 - nose==1.3.7=py37_2 - notebook==6.0.1=py37_0 - numba==0.45.1=py37h962f231_0 - numexpr==2.7.0=py37h9e4a6bb_0 - numpy-base==1.17.2=py37hde5b4d6_0 - numpy==1.17.2=py37haad9e8e_0 - numpydoc==0.9.1=py_0 - olefile==0.46=py37_0 - openpyxl==3.0.0=py_0 - openssl==1.1.1d=h7b6447c_2 - packaging==19.2=py_0 - pandas==0.25.1=py37he6710b0_0 - pandoc==2.2.3.2=0 - pandocfilters==1.4.2=py37_1 - pango==1.42.4=h049681c_0 - parso==0.5.1=py_0 - partd==1.0.0=py_0 - patchelf==0.9=he6710b0_3 - path.py==12.0.1=py_0 - pathlib2==2.3.5=py37_0 - patsy==0.5.1=py37_0 - pcre==8.43=he6710b0_0 - pep8==1.7.1=py37_0 - pexpect==4.7.0=py37_0 - pickleshare==0.7.5=py37_0 - pillow==6.2.0=py37h34e0f95_0 - pip==19.2.3=py37_0 - pixman==0.38.0=h7b6447c_0 - pkginfo==1.5.0.1=py37_0 - pluggy==0.13.0=py37_0 - ply==3.11=py37_0 - prometheus_client==0.7.1=py_0 - prompt_toolkit==2.0.10=py_0 - psutil==5.6.3=py37h7b6447c_0 - ptyprocess==0.6.0=py37_0 - py-lief==0.9.0=py37h7725739_2 - py==1.8.0=py37_0 - pycodestyle==2.5.0=py37_0 - pycosat==0.6.3=py37h14c3975_0 - pycparser==2.19=py37_0 - pycrypto==2.6.1=py37h14c3975_9 - pycurl==7.43.0.3=py37h1ba5d50_0 - pyflakes==2.1.1=py37_0 - pygments==2.4.2=py_0 - pylint==2.4.2=py37_0 - pyodbc==4.0.27=py37he6710b0_0 - pyopenssl==19.0.0=py37_0 - pyparsing==2.4.2=py_0 - pyqt==5.9.2=py37h05f1152_2 - pyrsistent==0.15.4=py37h7b6447c_0 - pysocks==1.7.1=py37_0 - pytables==3.5.2=py37h71ec239_1 - pytest-arraydiff==0.3=py37h39e3cac_0 - pytest-astropy==0.5.0=py37_0 - pytest-doctestplus==0.4.0=py_0 - pytest-openfiles==0.4.0=py_0 - pytest-remotedata==0.3.2=py37_0 - pytest==5.2.1=py37_0 - python-dateutil==2.8.0=py37_0 - python-libarchive-c==2.8=py37_13 - python==3.7.4=h265db76_1 - pytz==2019.3=py_0 - pywavelets==1.0.3=py37hdd07704_1 - pyyaml==5.1.2=py37h7b6447c_0 - pyzmq==18.1.0=py37he6710b0_0 - qt==5.9.7=h5867ecd_1 - qtawesome==0.6.0=py_0 - qtconsole==4.5.5=py_0 - 
qtpy==1.9.0=py_0 - readline==7.0=h7b6447c_5 - requests==2.22.0=py37_0 - ripgrep==0.10.0=hc07d326_0 - rope==0.14.0=py_0 - ruamel_yaml==0.15.46=py37h14c3975_0 - scikit-image==0.15.0=py37he6710b0_0 - scikit-learn==0.21.3=py37hd81dba3_0 - scipy==1.3.1=py37h7c811a0_0 - seaborn==0.9.0=py37_0 - secretstorage==3.1.1=py37_0 - send2trash==1.5.0=py37_0 - setuptools==41.4.0=py37_0 - simplegeneric==0.8.1=py37_2 - singledispatch==3.4.0.3=py37_0 - sip==4.19.8=py37hf484d3e_0 - six==1.12.0=py37_0 - snappy==1.1.7=hbae5bb6_3 - snowballstemmer==2.0.0=py_0 - sortedcollections==1.1.2=py37_0 - sortedcontainers==2.1.0=py37_0 - soupsieve==1.9.3=py37_0 - sphinx==2.2.0=py_0 - sphinxcontrib-applehelp==1.0.1=py_0 - sphinxcontrib-devhelp==1.0.1=py_0 - sphinxcontrib-htmlhelp==1.0.2=py_0 - sphinxcontrib-jsmath==1.0.1=py_0 - sphinxcontrib-qthelp==1.0.2=py_0 - sphinxcontrib-serializinghtml==1.1.3=py_0 - sphinxcontrib-websupport==1.1.2=py_0 - sphinxcontrib==1.0=py37_1 - spyder-kernels==0.5.2=py37_0 - spyder==3.3.6=py37_0 - sqlalchemy==1.3.9=py37h7b6447c_0 - sqlite==3.30.0=h7b6447c_0 - statsmodels==0.10.1=py37hdd07704_0 - sympy==1.4=py37_0 - tbb==2019.4=hfd86e86_0 - tblib==1.4.0=py_0 - terminado==0.8.2=py37_0 - testpath==0.4.2=py37_0 - tk==8.6.8=hbc83047_0 - toolz==0.10.0=py_0 - tornado==6.0.3=py37h7b6447c_0 - tqdm==4.36.1=py_0 - traitlets==4.3.3=py37_0 - unicodecsv==0.14.1=py37_0 - unixodbc==2.3.7=h14c3975_0 - urllib3==1.24.2=py37_0 - wcwidth==0.1.7=py37_0 - webencodings==0.5.1=py37_1 - werkzeug==0.16.0=py_0 - wheel==0.33.6=py37_0 - widgetsnbextension==3.5.1=py37_0 - wrapt==1.11.2=py37h7b6447c_0 - wurlitzer==1.0.3=py37_0 - xlrd==1.2.0=py37_0 - xlsxwriter==1.2.1=py_0 - xlwt==1.3.0=py37_0 - xz==5.2.4=h14c3975_4 - yaml==0.1.7=had09818_2 - zeromq==4.3.1=he6710b0_3 - zict==1.0.0=py_0 - zipp==0.6.0=py_0 - zlib==1.2.11=h7b6447c_3 - zstd==1.3.7=h0b5b093_0 The following NEW packages will be INSTALLED: _ipyw_jlab_nb_ext~ pkgs/main/linux-64::_ipyw_jlab_nb_ext_conf-0.1.0-py37_0 _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main alabaster pkgs/main/linux-64::alabaster-0.7.12-py37_0 anaconda pkgs/main/linux-64::anaconda-2019.10-py37_0 anaconda-client pkgs/main/linux-64::anaconda-client-1.7.2-py37_0 anaconda-navigator pkgs/main/linux-64::anaconda-navigator-1.9.7-py37_0 anaconda-project pkgs/main/noarch::anaconda-project-0.8.3-py_0 asn1crypto pkgs/main/linux-64::asn1crypto-1.0.1-py37_0 astroid pkgs/main/linux-64::astroid-2.3.1-py37_0 astropy pkgs/main/linux-64::astropy-3.2.2-py37h7b6447c_0 atomicwrites pkgs/main/linux-64::atomicwrites-1.3.0-py37_1 attrs pkgs/main/noarch::attrs-19.2.0-py_0 babel pkgs/main/noarch::babel-2.7.0-py_0 backcall pkgs/main/linux-64::backcall-0.1.0-py37_0 backports pkgs/main/noarch::backports-1.0-py_2 backports.functoo~ pkgs/main/noarch::backports.functools_lru_cache-1.5-py_2 backports.os pkgs/main/linux-64::backports.os-0.1.1-py37_0 backports.shutil_~ pkgs/main/linux-64::backports.shutil_get_terminal_size-1.0.0-py37_2 backports.tempfile pkgs/main/noarch::backports.tempfile-1.0-py_1 backports.weakref pkgs/main/noarch::backports.weakref-1.0.post1-py_1 beautifulsoup4 pkgs/main/linux-64::beautifulsoup4-4.8.0-py37_0 bitarray pkgs/main/linux-64::bitarray-1.0.1-py37h7b6447c_0 bkcharts pkgs/main/linux-64::bkcharts-0.2-py37_0 blas pkgs/main/linux-64::blas-1.0-mkl bleach pkgs/main/linux-64::bleach-3.1.0-py37_0 blosc pkgs/main/linux-64::blosc-1.16.3-hd408876_0 bokeh pkgs/main/linux-64::bokeh-1.3.4-py37_0 boto pkgs/main/linux-64::boto-2.49.0-py37_0 bottleneck pkgs/main/linux-64::bottleneck-1.2.1-py37h035aef0_1 bzip2 
pkgs/main/linux-64::bzip2-1.0.8-h7b6447c_0 ca-certificates pkgs/main/linux-64::ca-certificates-2019.8.28-0 cairo pkgs/main/linux-64::cairo-1.14.12-h8948797_3 certifi pkgs/main/linux-64::certifi-2019.9.11-py37_0 cffi pkgs/main/linux-64::cffi-1.12.3-py37h2e261b9_0 chardet pkgs/main/linux-64::chardet-3.0.4-py37_1003 click pkgs/main/linux-64::click-7.0-py37_0 cloudpickle pkgs/main/noarch::cloudpickle-1.2.2-py_0 clyent pkgs/main/linux-64::clyent-1.2.2-py37_1 colorama pkgs/main/linux-64::colorama-0.4.1-py37_0 conda pkgs/main/linux-64::conda-4.7.12-py37_0 conda-build pkgs/main/linux-64::conda-build-3.18.9-py37_3 conda-env pkgs/main/linux-64::conda-env-2.6.0-1 conda-package-han~ pkgs/main/linux-64::conda-package-handling-1.6.0-py37h7b6447c_0 conda-verify pkgs/main/noarch::conda-verify-3.4.2-py_1 contextlib2 pkgs/main/noarch::contextlib2-0.6.0-py_0 cryptography pkgs/main/linux-64::cryptography-2.7-py37h1ba5d50_0 curl pkgs/main/linux-64::curl-7.65.3-hbc83047_0 cycler pkgs/main/linux-64::cycler-0.10.0-py37_0 cython pkgs/main/linux-64::cython-0.29.13-py37he6710b0_0 cytoolz pkgs/main/linux-64::cytoolz-0.10.0-py37h7b6447c_0 dask pkgs/main/noarch::dask-2.5.2-py_0 dask-core pkgs/main/noarch::dask-core-2.5.2-py_0 dbus pkgs/main/linux-64::dbus-1.13.6-h746ee38_0 decorator pkgs/main/linux-64::decorator-4.4.0-py37_1 defusedxml pkgs/main/noarch::defusedxml-0.6.0-py_0 distributed pkgs/main/noarch::distributed-2.5.2-py_0 docutils pkgs/main/linux-64::docutils-0.15.2-py37_0 entrypoints pkgs/main/linux-64::entrypoints-0.3-py37_0 et_xmlfile pkgs/main/linux-64::et_xmlfile-1.0.1-py37_0 expat pkgs/main/linux-64::expat-2.2.6-he6710b0_0 fastcache pkgs/main/linux-64::fastcache-1.1.0-py37h7b6447c_0 filelock pkgs/main/noarch::filelock-3.0.12-py_0 flask pkgs/main/noarch::flask-1.1.1-py_0 fontconfig pkgs/main/linux-64::fontconfig-2.13.0-h9420a91_0 freetype pkgs/main/linux-64::freetype-2.9.1-h8a8886c_1 fribidi pkgs/main/linux-64::fribidi-1.0.5-h7b6447c_0 fsspec pkgs/main/noarch::fsspec-0.5.2-py_0 future pkgs/main/linux-64::future-0.17.1-py37_0 get_terminal_size pkgs/main/linux-64::get_terminal_size-1.0.0-haa9412d_0 gevent pkgs/main/linux-64::gevent-1.4.0-py37h7b6447c_0 glib pkgs/main/linux-64::glib-2.56.2-hd408876_0 glob2 pkgs/main/noarch::glob2-0.7-py_0 gmp pkgs/main/linux-64::gmp-6.1.2-h6c8ec71_1 gmpy2 pkgs/main/linux-64::gmpy2-2.0.8-py37h10f8cd9_2 graphite2 pkgs/main/linux-64::graphite2-1.3.13-h23475e2_0 greenlet pkgs/main/linux-64::greenlet-0.4.15-py37h7b6447c_0 gst-plugins-base pkgs/main/linux-64::gst-plugins-base-1.14.0-hbbd80ab_1 gstreamer pkgs/main/linux-64::gstreamer-1.14.0-hb453b48_1 h5py pkgs/main/linux-64::h5py-2.9.0-py37h7918eee_0 harfbuzz pkgs/main/linux-64::harfbuzz-1.8.8-hffaf4a1_0 hdf5 pkgs/main/linux-64::hdf5-1.10.4-hb1b8bf9_0 heapdict pkgs/main/noarch::heapdict-1.0.1-py_0 html5lib pkgs/main/linux-64::html5lib-1.0.1-py37_0 icu pkgs/main/linux-64::icu-58.2-h9c2bf20_1 idna pkgs/main/linux-64::idna-2.8-py37_0 imageio pkgs/main/linux-64::imageio-2.6.0-py37_0 imagesize pkgs/main/linux-64::imagesize-1.1.0-py37_0 importlib_metadata pkgs/main/linux-64::importlib_metadata-0.23-py37_0 intel-openmp pkgs/main/linux-64::intel-openmp-2019.4-243 ipykernel pkgs/main/linux-64::ipykernel-5.1.2-py37h39e3cac_0 ipython pkgs/main/linux-64::ipython-7.8.0-py37h39e3cac_0 ipython_genutils pkgs/main/linux-64::ipython_genutils-0.2.0-py37_0 ipywidgets pkgs/main/noarch::ipywidgets-7.5.1-py_0 isort pkgs/main/linux-64::isort-4.3.21-py37_0 itsdangerous pkgs/main/linux-64::itsdangerous-1.1.0-py37_0 jbig pkgs/main/linux-64::jbig-2.1-hdba287a_0 
jdcal pkgs/main/noarch::jdcal-1.4.1-py_0 jedi pkgs/main/linux-64::jedi-0.15.1-py37_0 jeepney pkgs/main/noarch::jeepney-0.4.1-py_0 jinja2 pkgs/main/noarch::jinja2-2.10.3-py_0 joblib pkgs/main/linux-64::joblib-0.13.2-py37_0 jpeg pkgs/main/linux-64::jpeg-9b-h024ee3a_2 json5 pkgs/main/noarch::json5-0.8.5-py_0 jsonschema pkgs/main/linux-64::jsonschema-3.0.2-py37_0 jupyter pkgs/main/linux-64::jupyter-1.0.0-py37_7 jupyter_client pkgs/main/linux-64::jupyter_client-5.3.3-py37_1 jupyter_console pkgs/main/linux-64::jupyter_console-6.0.0-py37_0 jupyter_core pkgs/main/noarch::jupyter_core-4.5.0-py_0 jupyterlab pkgs/main/noarch::jupyterlab-1.1.4-pyhf63ae98_0 jupyterlab_server pkgs/main/noarch::jupyterlab_server-1.0.6-py_0 keyring pkgs/main/linux-64::keyring-18.0.0-py37_0 kiwisolver pkgs/main/linux-64::kiwisolver-1.1.0-py37he6710b0_0 krb5 pkgs/main/linux-64::krb5-1.16.1-h173b8e3_7 lazy-object-proxy pkgs/main/linux-64::lazy-object-proxy-1.4.2-py37h7b6447c_0 libarchive pkgs/main/linux-64::libarchive-3.3.3-h5d8350f_5 libcurl pkgs/main/linux-64::libcurl-7.65.3-h20c2e04_0 libedit pkgs/main/linux-64::libedit-3.1.20181209-hc058e9b_0 libffi pkgs/main/linux-64::libffi-3.2.1-hd88cf55_4 libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0 libgfortran-ng pkgs/main/linux-64::libgfortran-ng-7.3.0-hdf63c60_0 liblief pkgs/main/linux-64::liblief-0.9.0-h7725739_2 libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0 libsodium pkgs/main/linux-64::libsodium-1.0.16-h1bed415_0 libssh2 pkgs/main/linux-64::libssh2-1.8.2-h1ba5d50_0 libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0 libtiff pkgs/main/linux-64::libtiff-4.0.10-h2733197_2 libtool pkgs/main/linux-64::libtool-2.4.6-h7b6447c_5 libuuid pkgs/main/linux-64::libuuid-1.0.3-h1bed415_2 libxcb pkgs/main/linux-64::libxcb-1.13-h1bed415_1 libxml2 pkgs/main/linux-64::libxml2-2.9.9-hea5a465_1 libxslt pkgs/main/linux-64::libxslt-1.1.33-h7d1a2b0_0 llvmlite pkgs/main/linux-64::llvmlite-0.29.0-py37hd408876_0 locket pkgs/main/linux-64::locket-0.2.0-py37_1 lxml pkgs/main/linux-64::lxml-4.4.1-py37hefd8a0e_0 lz4-c pkgs/main/linux-64::lz4-c-1.8.1.2-h14c3975_0 lzo pkgs/main/linux-64::lzo-2.10-h49e0be7_2 markupsafe pkgs/main/linux-64::markupsafe-1.1.1-py37h7b6447c_0 matplotlib pkgs/main/linux-64::matplotlib-3.1.1-py37h5429711_0 mccabe pkgs/main/linux-64::mccabe-0.6.1-py37_1 mistune pkgs/main/linux-64::mistune-0.8.4-py37h7b6447c_0 mkl pkgs/main/linux-64::mkl-2019.4-243 mkl-service pkgs/main/linux-64::mkl-service-2.3.0-py37he904b0f_0 mkl_fft pkgs/main/linux-64::mkl_fft-1.0.14-py37ha843d7b_0 mkl_random pkgs/main/linux-64::mkl_random-1.1.0-py37hd6b4f25_0 mock pkgs/main/linux-64::mock-3.0.5-py37_0 more-itertools pkgs/main/linux-64::more-itertools-7.2.0-py37_0 mpc pkgs/main/linux-64::mpc-1.1.0-h10f8cd9_1 mpfr pkgs/main/linux-64::mpfr-4.0.1-hdf1c602_3 mpmath pkgs/main/linux-64::mpmath-1.1.0-py37_0 msgpack-python pkgs/main/linux-64::msgpack-python-0.6.1-py37hfd86e86_1 multipledispatch pkgs/main/linux-64::multipledispatch-0.6.0-py37_0 navigator-updater pkgs/main/linux-64::navigator-updater-0.2.1-py37_0 nbconvert pkgs/main/linux-64::nbconvert-5.6.0-py37_1 nbformat pkgs/main/linux-64::nbformat-4.4.0-py37_0 ncurses pkgs/main/linux-64::ncurses-6.1-he6710b0_1 networkx pkgs/main/noarch::networkx-2.3-py_0 nltk pkgs/main/linux-64::nltk-3.4.5-py37_0 nose pkgs/main/linux-64::nose-1.3.7-py37_2 notebook pkgs/main/linux-64::notebook-6.0.1-py37_0 numba pkgs/main/linux-64::numba-0.45.1-py37h962f231_0 numexpr pkgs/main/linux-64::numexpr-2.7.0-py37h9e4a6bb_0 numpy 
pkgs/main/linux-64::numpy-1.17.2-py37haad9e8e_0 numpy-base pkgs/main/linux-64::numpy-base-1.17.2-py37hde5b4d6_0 numpydoc pkgs/main/noarch::numpydoc-0.9.1-py_0 olefile pkgs/main/linux-64::olefile-0.46-py37_0 openpyxl pkgs/main/noarch::openpyxl-3.0.0-py_0 openssl pkgs/main/linux-64::openssl-1.1.1d-h7b6447c_2 packaging pkgs/main/noarch::packaging-19.2-py_0 pandas pkgs/main/linux-64::pandas-0.25.1-py37he6710b0_0 pandoc pkgs/main/linux-64::pandoc-2.2.3.2-0 pandocfilters pkgs/main/linux-64::pandocfilters-1.4.2-py37_1 pango pkgs/main/linux-64::pango-1.42.4-h049681c_0 parso pkgs/main/noarch::parso-0.5.1-py_0 partd pkgs/main/noarch::partd-1.0.0-py_0 patchelf pkgs/main/linux-64::patchelf-0.9-he6710b0_3 path.py pkgs/main/noarch::path.py-12.0.1-py_0 pathlib2 pkgs/main/linux-64::pathlib2-2.3.5-py37_0 patsy pkgs/main/linux-64::patsy-0.5.1-py37_0 pcre pkgs/main/linux-64::pcre-8.43-he6710b0_0 pep8 pkgs/main/linux-64::pep8-1.7.1-py37_0 pexpect pkgs/main/linux-64::pexpect-4.7.0-py37_0 pickleshare pkgs/main/linux-64::pickleshare-0.7.5-py37_0 pillow pkgs/main/linux-64::pillow-6.2.0-py37h34e0f95_0 pip pkgs/main/linux-64::pip-19.2.3-py37_0 pixman pkgs/main/linux-64::pixman-0.38.0-h7b6447c_0 pkginfo pkgs/main/linux-64::pkginfo-1.5.0.1-py37_0 pluggy pkgs/main/linux-64::pluggy-0.13.0-py37_0 ply pkgs/main/linux-64::ply-3.11-py37_0 prometheus_client pkgs/main/noarch::prometheus_client-0.7.1-py_0 prompt_toolkit pkgs/main/noarch::prompt_toolkit-2.0.10-py_0 psutil pkgs/main/linux-64::psutil-5.6.3-py37h7b6447c_0 ptyprocess pkgs/main/linux-64::ptyprocess-0.6.0-py37_0 py pkgs/main/linux-64::py-1.8.0-py37_0 py-lief pkgs/main/linux-64::py-lief-0.9.0-py37h7725739_2 pycodestyle pkgs/main/linux-64::pycodestyle-2.5.0-py37_0 pycosat pkgs/main/linux-64::pycosat-0.6.3-py37h14c3975_0 pycparser pkgs/main/linux-64::pycparser-2.19-py37_0 pycrypto pkgs/main/linux-64::pycrypto-2.6.1-py37h14c3975_9 pycurl pkgs/main/linux-64::pycurl-7.43.0.3-py37h1ba5d50_0 pyflakes pkgs/main/linux-64::pyflakes-2.1.1-py37_0 pygments pkgs/main/noarch::pygments-2.4.2-py_0 pylint pkgs/main/linux-64::pylint-2.4.2-py37_0 pyodbc pkgs/main/linux-64::pyodbc-4.0.27-py37he6710b0_0 pyopenssl pkgs/main/linux-64::pyopenssl-19.0.0-py37_0 pyparsing pkgs/main/noarch::pyparsing-2.4.2-py_0 pyqt pkgs/main/linux-64::pyqt-5.9.2-py37h05f1152_2 pyrsistent pkgs/main/linux-64::pyrsistent-0.15.4-py37h7b6447c_0 pysocks pkgs/main/linux-64::pysocks-1.7.1-py37_0 pytables pkgs/main/linux-64::pytables-3.5.2-py37h71ec239_1 pytest pkgs/main/linux-64::pytest-5.2.1-py37_0 pytest-arraydiff pkgs/main/linux-64::pytest-arraydiff-0.3-py37h39e3cac_0 pytest-astropy pkgs/main/linux-64::pytest-astropy-0.5.0-py37_0 pytest-doctestplus pkgs/main/noarch::pytest-doctestplus-0.4.0-py_0 pytest-openfiles pkgs/main/noarch::pytest-openfiles-0.4.0-py_0 pytest-remotedata pkgs/main/linux-64::pytest-remotedata-0.3.2-py37_0 python pkgs/main/linux-64::python-3.7.4-h265db76_1 python-dateutil pkgs/main/linux-64::python-dateutil-2.8.0-py37_0 python-libarchive~ pkgs/main/linux-64::python-libarchive-c-2.8-py37_13 pytz pkgs/main/noarch::pytz-2019.3-py_0 pywavelets pkgs/main/linux-64::pywavelets-1.0.3-py37hdd07704_1 pyyaml pkgs/main/linux-64::pyyaml-5.1.2-py37h7b6447c_0 pyzmq pkgs/main/linux-64::pyzmq-18.1.0-py37he6710b0_0 qt pkgs/main/linux-64::qt-5.9.7-h5867ecd_1 qtawesome pkgs/main/noarch::qtawesome-0.6.0-py_0 qtconsole pkgs/main/noarch::qtconsole-4.5.5-py_0 qtpy pkgs/main/noarch::qtpy-1.9.0-py_0 readline pkgs/main/linux-64::readline-7.0-h7b6447c_5 requests pkgs/main/linux-64::requests-2.22.0-py37_0 ripgrep 
pkgs/main/linux-64::ripgrep-0.10.0-hc07d326_0 rope pkgs/main/noarch::rope-0.14.0-py_0 ruamel_yaml pkgs/main/linux-64::ruamel_yaml-0.15.46-py37h14c3975_0 scikit-image pkgs/main/linux-64::scikit-image-0.15.0-py37he6710b0_0 scikit-learn pkgs/main/linux-64::scikit-learn-0.21.3-py37hd81dba3_0 scipy pkgs/main/linux-64::scipy-1.3.1-py37h7c811a0_0 seaborn pkgs/main/linux-64::seaborn-0.9.0-py37_0 secretstorage pkgs/main/linux-64::secretstorage-3.1.1-py37_0 send2trash pkgs/main/linux-64::send2trash-1.5.0-py37_0 setuptools pkgs/main/linux-64::setuptools-41.4.0-py37_0 simplegeneric pkgs/main/linux-64::simplegeneric-0.8.1-py37_2 singledispatch pkgs/main/linux-64::singledispatch-3.4.0.3-py37_0 sip pkgs/main/linux-64::sip-4.19.8-py37hf484d3e_0 six pkgs/main/linux-64::six-1.12.0-py37_0 snappy pkgs/main/linux-64::snappy-1.1.7-hbae5bb6_3 snowballstemmer pkgs/main/noarch::snowballstemmer-2.0.0-py_0 sortedcollections pkgs/main/linux-64::sortedcollections-1.1.2-py37_0 sortedcontainers pkgs/main/linux-64::sortedcontainers-2.1.0-py37_0 soupsieve pkgs/main/linux-64::soupsieve-1.9.3-py37_0 sphinx pkgs/main/noarch::sphinx-2.2.0-py_0 sphinxcontrib pkgs/main/linux-64::sphinxcontrib-1.0-py37_1 sphinxcontrib-app~ pkgs/main/noarch::sphinxcontrib-applehelp-1.0.1-py_0 sphinxcontrib-dev~ pkgs/main/noarch::sphinxcontrib-devhelp-1.0.1-py_0 sphinxcontrib-htm~ pkgs/main/noarch::sphinxcontrib-htmlhelp-1.0.2-py_0 sphinxcontrib-jsm~ pkgs/main/noarch::sphinxcontrib-jsmath-1.0.1-py_0 sphinxcontrib-qth~ pkgs/main/noarch::sphinxcontrib-qthelp-1.0.2-py_0 sphinxcontrib-ser~ pkgs/main/noarch::sphinxcontrib-serializinghtml-1.1.3-py_0 sphinxcontrib-web~ pkgs/main/noarch::sphinxcontrib-websupport-1.1.2-py_0 spyder pkgs/main/linux-64::spyder-3.3.6-py37_0 spyder-kernels pkgs/main/linux-64::spyder-kernels-0.5.2-py37_0 sqlalchemy pkgs/main/linux-64::sqlalchemy-1.3.9-py37h7b6447c_0 sqlite pkgs/main/linux-64::sqlite-3.30.0-h7b6447c_0 statsmodels pkgs/main/linux-64::statsmodels-0.10.1-py37hdd07704_0 sympy pkgs/main/linux-64::sympy-1.4-py37_0 tbb pkgs/main/linux-64::tbb-2019.4-hfd86e86_0 tblib pkgs/main/noarch::tblib-1.4.0-py_0 terminado pkgs/main/linux-64::terminado-0.8.2-py37_0 testpath pkgs/main/linux-64::testpath-0.4.2-py37_0 tk pkgs/main/linux-64::tk-8.6.8-hbc83047_0 toolz pkgs/main/noarch::toolz-0.10.0-py_0 tornado pkgs/main/linux-64::tornado-6.0.3-py37h7b6447c_0 tqdm pkgs/main/noarch::tqdm-4.36.1-py_0 traitlets pkgs/main/linux-64::traitlets-4.3.3-py37_0 unicodecsv pkgs/main/linux-64::unicodecsv-0.14.1-py37_0 unixodbc pkgs/main/linux-64::unixodbc-2.3.7-h14c3975_0 urllib3 pkgs/main/linux-64::urllib3-1.24.2-py37_0 wcwidth pkgs/main/linux-64::wcwidth-0.1.7-py37_0 webencodings pkgs/main/linux-64::webencodings-0.5.1-py37_1 werkzeug pkgs/main/noarch::werkzeug-0.16.0-py_0 wheel pkgs/main/linux-64::wheel-0.33.6-py37_0 widgetsnbextension pkgs/main/linux-64::widgetsnbextension-3.5.1-py37_0 wrapt pkgs/main/linux-64::wrapt-1.11.2-py37h7b6447c_0 wurlitzer pkgs/main/linux-64::wurlitzer-1.0.3-py37_0 xlrd pkgs/main/linux-64::xlrd-1.2.0-py37_0 xlsxwriter pkgs/main/noarch::xlsxwriter-1.2.1-py_0 xlwt pkgs/main/linux-64::xlwt-1.3.0-py37_0 xz pkgs/main/linux-64::xz-5.2.4-h14c3975_4 yaml pkgs/main/linux-64::yaml-0.1.7-had09818_2 zeromq pkgs/main/linux-64::zeromq-4.3.1-he6710b0_3 zict pkgs/main/noarch::zict-1.0.0-py_0 zipp pkgs/main/noarch::zipp-0.6.0-py_0 zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3 zstd pkgs/main/linux-64::zstd-1.3.7-h0b5b093_0 Preparing transaction: done Executing transaction: done
installation finished. WARNING: You currently have a PYTHONPATH environment variable set. This may cause unexpected behavior when running the Python interpreter in Anaconda3. For best results, please verify that your PYTHONPATH only points to directories of packages that are compatible with the Python interpreter in Anaconda3: /usr/local Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.7.12 latest version: 4.8.2 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: /usr/local added / updated specs: - deepchem-gpu=2.3.0 The following packages will be downloaded: package | build ---------------------------|----------------- _py-xgboost-mutex-2.0 | cpu_0 8 KB conda-forge _tflow_select-2.1.0 | gpu 2 KB absl-py-0.9.0 | py37_0 162 KB conda-forge astor-0.7.1 | py_0 22 KB conda-forge c-ares-1.15.0 | h516909a_1001 100 KB conda-forge certifi-2019.9.11 | py37_0 147 KB conda-forge conda-4.8.2 | py37_0 3.0 MB conda-forge cudatoolkit-10.1.243 | h6bb024c_0 347.4 MB cudnn-7.6.5 | cuda10.1_0 179.9 MB cupti-10.1.168 | 0 1.4 MB deepchem-gpu-2.3.0 | py37_0 2.1 MB deepchem fftw3f-3.3.4 | 2 1.2 MB omnia gast-0.3.3 | py_0 12 KB conda-forge google-pasta-0.1.8 | py_0 42 KB conda-forge grpcio-1.23.0 | py37he9ae1f9_0 1.1 MB conda-forge keras-applications-1.0.8 | py_1 30 KB conda-forge keras-preprocessing-1.1.0 | py_0 33 KB conda-forge libboost-1.67.0 | h46d08c1_4 13.0 MB libprotobuf-3.11.4 | h8b12597_0 4.8 MB conda-forge libxgboost-0.90 | he1b5a44_4 2.4 MB conda-forge markdown-3.2.1 | py_0 61 KB conda-forge mdtraj-1.9.3 | py37h00575c5_0 1.9 MB conda-forge openmm-7.4.1 |py37_cuda101_rc_1 11.9 MB omnia pdbfixer-1.6 | py37_0 190 KB omnia protobuf-3.11.4 | py37he1b5a44_0 699 KB conda-forge py-boost-1.67.0 | py37h04863e7_4 278 KB py-xgboost-0.90 | py37_4 73 KB conda-forge rdkit-2019.09.3.0 | py37hc20afe1_1 23.7 MB rdkit simdna-0.4.2 | py_0 627 KB deepchem tensorboard-1.14.0 | py37_0 3.2 MB conda-forge tensorflow-1.14.0 |gpu_py37h74c33d7_0 4 KB tensorflow-base-1.14.0 |gpu_py37he45bfe2_0 146.3 MB tensorflow-estimator-1.14.0| py37h5ca1d4c_0 645 KB conda-forge tensorflow-gpu-1.14.0 | h0d30ee6_0 3 KB termcolor-1.1.0 | py_2 6 KB conda-forge xgboost-0.90 | py37he1b5a44_4 11 KB conda-forge ------------------------------------------------------------ Total: 746.5 MB The following NEW packages will be INSTALLED: _py-xgboost-mutex conda-forge/linux-64::_py-xgboost-mutex-2.0-cpu_0 _tflow_select pkgs/main/linux-64::_tflow_select-2.1.0-gpu absl-py conda-forge/linux-64::absl-py-0.9.0-py37_0 astor conda-forge/noarch::astor-0.7.1-py_0 c-ares conda-forge/linux-64::c-ares-1.15.0-h516909a_1001 cudatoolkit pkgs/main/linux-64::cudatoolkit-10.1.243-h6bb024c_0 cudnn pkgs/main/linux-64::cudnn-7.6.5-cuda10.1_0 cupti pkgs/main/linux-64::cupti-10.1.168-0 deepchem-gpu deepchem/linux-64::deepchem-gpu-2.3.0-py37_0 fftw3f omnia/linux-64::fftw3f-3.3.4-2 gast conda-forge/noarch::gast-0.3.3-py_0 google-pasta conda-forge/noarch::google-pasta-0.1.8-py_0 grpcio conda-forge/linux-64::grpcio-1.23.0-py37he9ae1f9_0 keras-applications
conda-forge/noarch::keras-applications-1.0.8-py_1 keras-preprocessi~ conda-forge/noarch::keras-preprocessing-1.1.0-py_0 libboost pkgs/main/linux-64::libboost-1.67.0-h46d08c1_4 libprotobuf conda-forge/linux-64::libprotobuf-3.11.4-h8b12597_0 libxgboost conda-forge/linux-64::libxgboost-0.90-he1b5a44_4 markdown conda-forge/noarch::markdown-3.2.1-py_0 mdtraj conda-forge/linux-64::mdtraj-1.9.3-py37h00575c5_0 openmm omnia/linux-64::openmm-7.4.1-py37_cuda101_rc_1 pdbfixer omnia/linux-64::pdbfixer-1.6-py37_0 protobuf conda-forge/linux-64::protobuf-3.11.4-py37he1b5a44_0 py-boost pkgs/main/linux-64::py-boost-1.67.0-py37h04863e7_4 py-xgboost conda-forge/linux-64::py-xgboost-0.90-py37_4 rdkit rdkit/linux-64::rdkit-2019.09.3.0-py37hc20afe1_1 simdna deepchem/noarch::simdna-0.4.2-py_0 tensorboard conda-forge/linux-64::tensorboard-1.14.0-py37_0 tensorflow pkgs/main/linux-64::tensorflow-1.14.0-gpu_py37h74c33d7_0 tensorflow-base pkgs/main/linux-64::tensorflow-base-1.14.0-gpu_py37he45bfe2_0 tensorflow-estima~ conda-forge/linux-64::tensorflow-estimator-1.14.0-py37h5ca1d4c_0 tensorflow-gpu pkgs/main/linux-64::tensorflow-gpu-1.14.0-h0d30ee6_0 termcolor conda-forge/noarch::termcolor-1.1.0-py_2 xgboost conda-forge/linux-64::xgboost-0.90-py37he1b5a44_4 The following packages will be UPDATED: conda pkgs/main::conda-4.7.12-py37_0 --> conda-forge::conda-4.8.2-py37_0 The following packages will be SUPERSEDED by a higher-priority channel: certifi pkgs/main --> conda-forge Downloading and Extracting Packages keras-applications-1 | 30 KB | : 100% 1.0/1 [00:00<00:00, 8.82it/s] libboost-1.67.0 | 13.0 MB | : 100% 1.0/1 [00:01<00:00, 1.85s/it] absl-py-0.9.0 | 162 KB | : 100% 1.0/1 [00:00<00:00, 11.13it/s] libxgboost-0.90 | 2.4 MB | : 100% 1.0/1 [00:00<00:00, 2.16it/s] cupti-10.1.168 | 1.4 MB | : 100% 1.0/1 [00:00<00:00, 7.39it/s] termcolor-1.1.0 | 6 KB | : 100% 1.0/1 [00:00<00:00, 22.33it/s] tensorflow-base-1.14 | 146.3 MB | : 100% 1.0/1 [00:14<00:00, 14.12s/it] tensorboard-1.14.0 | 3.2 MB | : 100% 1.0/1 [00:00<00:00, 1.87it/s] cudnn-7.6.5 | 179.9 MB | : 100% 1.0/1 [00:10<00:00, 10.91s/it] conda-4.8.2 | 3.0 MB | : 100% 1.0/1 [00:00<00:00, 1.22it/s] py-boost-1.67.0 | 278 KB | : 100% 1.0/1 [00:00<00:00, 8.26it/s] py-xgboost-0.90 | 73 KB | : 100% 1.0/1 [00:00<00:00, 18.94it/s] tensorflow-gpu-1.14. | 3 KB | : 100% 1.0/1 [00:00<00:00, 9.85it/s] mdtraj-1.9.3 | 1.9 MB | : 100% 1.0/1 [00:00<00:00, 2.17it/s] rdkit-2019.09.3.0 | 23.7 MB | : 100% 1.0/1 [00:05<00:00, 76.64s/it] deepchem-gpu-2.3.0 | 2.1 MB | : 100% 1.0/1 [00:00<00:00, 50.91s/it] grpcio-1.23.0 | 1.1 MB | : 100% 1.0/1 [00:00<00:00, 4.14it/s] _py-xgboost-mutex-2. 
| 8 KB | : 100% 1.0/1 [00:00<00:00, 27.43it/s] libprotobuf-3.11.4 | 4.8 MB | : 100% 1.0/1 [00:01<00:00, 1.08s/it] keras-preprocessing- | 33 KB | : 100% 1.0/1 [00:00<00:00, 22.50it/s] markdown-3.2.1 | 61 KB | : 100% 1.0/1 [00:00<00:00, 20.73it/s] google-pasta-0.1.8 | 42 KB | : 100% 1.0/1 [00:00<00:00, 11.05it/s] protobuf-3.11.4 | 699 KB | : 100% 1.0/1 [00:00<00:00, 4.10it/s] _tflow_select-2.1.0 | 2 KB | : 100% 1.0/1 [00:00<00:00, 10.36it/s] simdna-0.4.2 | 627 KB | : 100% 1.0/1 [00:00<00:00, 2.80it/s] c-ares-1.15.0 | 100 KB | : 100% 1.0/1 [00:00<00:00, 13.50it/s] gast-0.3.3 | 12 KB | : 100% 1.0/1 [00:00<00:00, 20.80it/s] certifi-2019.9.11 | 147 KB | : 100% 1.0/1 [00:00<00:00, 7.10it/s] fftw3f-3.3.4 | 1.2 MB | : 100% 1.0/1 [00:00<00:00, 12.56s/it] openmm-7.4.1 | 11.9 MB | : 100% 1.0/1 [00:03<00:00, 108.64s/it] tensorflow-1.14.0 | 4 KB | : 100% 1.0/1 [00:00<00:00, 10.64it/s] tensorflow-estimator | 645 KB | : 100% 1.0/1 [00:00<00:00, 4.16it/s] astor-0.7.1 | 22 KB | : 100% 1.0/1 [00:00<00:00, 26.30it/s] xgboost-0.90 | 11 KB | : 100% 1.0/1 [00:00<00:00, 32.86it/s] cudatoolkit-10.1.243 | 347.4 MB | : 100% 1.0/1 [00:19<00:00, 19.76s/it] pdbfixer-1.6 | 190 KB | : 100% 1.0/1 [00:00<00:00, 1.50it/s] Preparing transaction: done Verifying transaction: done Executing transaction: done
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
Let's start with some basic imports
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals

import numpy as np

from rdkit import Chem

from deepchem.feat import ConvMolFeaturizer, WeaveFeaturizer, CircularFingerprint
from deepchem.feat import AdjacencyFingerprint, RDKitDescriptors
from deepchem.feat import BPSymmetryFunctionInput, CoulombMatrix, CoulombMatrixEig
from deepchem.utils import conformers
/usr/local/lib/python3.6/dist-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+. warnings.warn(msg, category=FutureWarning)
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
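Before moving on, it is worth double-checking that the freshly installed stack is the one actually being imported. A minimal sanity check, assuming only the usual `__version__` attributes exposed by these packages (the exact version strings are whatever the conda transaction above actually installed):
import deepchem
import rdkit
import tensorflow as tf

# Confirm the installed versions match the conda log above
# (deepchem-gpu 2.3.0, rdkit 2019.09.3.0, tensorflow 1.14.0).
print("DeepChem:", deepchem.__version__)
print("RDKit:", rdkit.__version__)
print("TensorFlow:", tf.__version__)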
We use `propane` ($CH_3CH_2CH_3$) as a running example throughout this tutorial. Many of the featurization methods use conformers of the molecules. A conformer can be generated using the `ConformerGenerator` class in `deepchem.utils.conformers`. RDKitDescriptors: `RDKitDescriptors` featurizes a molecule by computing descriptor values for a specified set of descriptors. Intrinsic to the featurizer is a set of allowed descriptors, which can be accessed using `RDKitDescriptors.allowedDescriptors`. The featurizer uses the descriptors in `rdkit.Chem.Descriptors.descList`, checks whether they are in the list of allowed descriptors, and computes the descriptor value for the molecule.
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
_____no_output_____
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
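Since `RDKitDescriptors` draws on `rdkit.Chem.Descriptors.descList`, it can help to peek at that registry directly before featurizing. A short sketch, assuming only that `descList` is RDKit's usual list of `(name, function)` pairs:
from rdkit.Chem import Descriptors

# descList holds (descriptor_name, descriptor_function) pairs.
print("RDKit exposes", len(Descriptors.descList), "descriptors")

# Evaluate the first few directly on the propane molecule built above.
for name, fn in Descriptors.descList[:5]:
    print(name, fn(example_mol))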
Let's check the list of allowed descriptors. As you will see shortly, there's a wide range of chemical properties that RDKit computes for us.
# Print every descriptor this featurizer is allowed to compute.
for descriptor in RDKitDescriptors.allowedDescriptors:
    print(descriptor)

# Compute the descriptor values for the propane molecule.
rdkit_desc = RDKitDescriptors()
features = rdkit_desc._featurize(example_mol)
print('The number of descriptors present are: ', len(features))
The number of descriptors present are: 111
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
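Note that the cell above calls the private `_featurize` helper on a single RDKit mol. The public entry point is the batch-oriented `featurize` method, which returns a 2-D array with one row per molecule. A hedged sketch, assuming the DeepChem 2.3 `Featurizer.featurize` signature that accepts a list of RDKit mol objects (newer releases also accept SMILES strings directly):
# Featurize a small batch of molecules through the public API.
mols = [Chem.MolFromSmiles(s) for s in ["CCC", "CCO", "c1ccccc1"]]
batch_features = rdkit_desc.featurize(mols)

# One row per molecule, one column per allowed RDKit descriptor.
print(batch_features.shape)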
BPSymmetryFunction: `Behler-Parrinello Symmetry function` or `BPSymmetryFunction` featurizes a molecule by computing the atomic number and coordinates for each atom in the molecule. The features can be used as input for symmetry functions, like `RadialSymmetry`, `DistanceMatrix` and `DistanceCutoff`. More details on these symmetry functions can be found in [this paper](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.98.146401). These functions can be found in `deepchem.feat.coulomb_matrices`. The featurizer takes `max_atoms` as an argument. As input, it takes in a conformer of the molecule and computes: 1. the coordinates of every atom in the molecule (in Bohr units), and 2. the atomic numbers for all atoms. These features are concatenated and padded with zeros to account for the different numbers of atoms across molecules.
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)

engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
_____no_output_____
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
Let's now take a look at the actual featurized matrix that comes out.
# Featurize the propane conformer, padding the output up to max_atoms rows.
bp_sym = BPSymmetryFunctionInput(max_atoms=20)
features = bp_sym._featurize(mol=example_mol)
features
_____no_output_____
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
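Since the notebook output is not captured here, the shape of `features` is worth checking by hand. A sketch under the assumptions stated above: the array is padded to `max_atoms` rows, with the atomic number in the first column and the Bohr-unit x, y, z coordinates in the remaining three (treat the exact column layout as an assumption to verify against your DeepChem version):
# Expect a (max_atoms, 4) array: atomic number followed by x, y, z.
print(features.shape)

# Rows with a non-zero atomic number are real atoms; the rest is zero padding.
real_atoms = features[features[:, 0] > 0]
print("Real atoms:", len(real_atoms), "of", features.shape[0])
print(real_atoms)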