| markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127) |
---|---|---|---|---|---|
Cleaning up a service instance

*Back to [table of contents](Table-of-Contents)*

To clean all data on the service instance, you can run the following snippet. The code is self-contained and does not require you to execute any of the cells above. However, you will need to have the `key.json` containing a service key in place.

You will need to set `CLEANUP_EVERYTHING = True` below to execute the cleanup.

**NOTE: This will delete all data on the service instance!**
|
CLEANUP_EVERYTHING = False

def cleanup_everything():
    import logging
    import sys
    logging.basicConfig(level=logging.INFO, stream=sys.stdout)

    import json
    import os

    if not os.path.exists("key.json"):
        msg = "key.json is not found. Please follow instructions above to create a service key of"
        msg += " Data Attribute Recommendation. Then, upload it into the same directory where"
        msg += " this notebook is saved."
        print(msg)
        raise ValueError(msg)

    with open("key.json") as file_handle:
        key = file_handle.read()
        SERVICE_KEY = json.loads(key)

    from sap.aibus.dar.client.model_manager_client import ModelManagerClient
    model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY)

    for deployment in model_manager.read_deployment_collection()["deployments"]:
        model_manager.delete_deployment_by_id(deployment["id"])

    for model in model_manager.read_model_collection()["models"]:
        model_manager.delete_model_by_name(model["name"])

    for job in model_manager.read_job_collection()["jobs"]:
        model_manager.delete_job_by_id(job["id"])

    from sap.aibus.dar.client.data_manager_client import DataManagerClient
    data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY)

    for dataset in data_manager.read_dataset_collection()["datasets"]:
        data_manager.delete_dataset_by_id(dataset["id"])

    for dataset_schema in data_manager.read_dataset_schema_collection()["datasetSchemas"]:
        data_manager.delete_dataset_schema_by_id(dataset_schema["id"])

    print("Cleanup done!")

if CLEANUP_EVERYTHING:
    print("Cleaning up all resources in this service instance.")
    cleanup_everything()
else:
    print("Not cleaning up. Set 'CLEANUP_EVERYTHING = True' above and run again.")
|
_____no_output_____
|
Apache-2.0
|
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb
|
SAP-samples/teched2020-INT260
|
Summary

This interactive notebook aims to demonstrate the relationships between the physico-chemical properties of vegetation and the solar spectrum. To do so we will use simulation models, in particular radiative transfer models, both at the level of an individual leaf and at the level of the vegetation canopy.

Instructions

Read all of the text carefully and follow its instructions.

Once you have read each section of text, run the code cell that follows it (marked as `In []`) by pressing the `Run` icon or by pressing ALT + ENTER on the keyboard. A graphical interface will appear with which you can carry out the assigned tasks.

As an example, run the following cell to import all the libraries needed for the notebook to work correctly. Once it has run, a thank-you message should appear.
|
%matplotlib inline
from ipywidgets import interactive, fixed
from IPython.display import display
from functions import prosail_and_spectra as fn
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Spectrum of a leaf

The spectral properties of a leaf (its transmittance, its reflectance and its absorptance) depend on its pigment concentration, its water content, its specific weight and the internal structure of its tissues.

We will use the ProspectD model, a simplification of reality that simulates the spectrum from the concentration of chlorophylls (`Cab`), carotenoids (`Car`) and anthocyanins (`Ant`), as well as the weight of water per unit area (`Cw`) and the weight of the remaining dry matter (`Cm`), which bundles together celluloses, lignins (the main contributors to leaf biomass) and other protein components. It also includes a semi-empirical parameter representing the other pigments responsible for the colour of senescent and diseased leaves. In addition, in order to simulate leaves with different cell structures, it includes a final parameter (`Nf`) that emulates the leaf's different layers and cell tissues (assumed to have identical spectral properties).

> If you want to know more about the ProspectD model, see this [publication](./lecturas_adicionales/ProspectD_model.pdf).
>
> If you want more details about the calculation and the model code, click [here](https://github.com/hectornieto/pypro4sail/blob/b111891e0a2c01b8b3fa5ff41790687d31297e5f/pypro4sail/prospect.py#L46).

Run the next cell and you will see a typical leaf spectrum. The plot shows the reflectance (on the y axis), the transmittance (on the secondary y axis, with inverted values) and the absorptance (as the space between the reflectance and transmittance curves), with $\rho + \tau + \alpha = 1$.

Pay attention to how, and in which regions, the spectrum changes depending on the parameter you modify.

* Vary the chlorophyll.
* Vary the water content.
* Vary the dry matter.
* Vary the brown pigments from a value of 0 (healthy leaf) to higher values (diseased or dry leaf).
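As a complement to the widget below (whose code lives in `functions.prosail_and_spectra`), here is a minimal, self-contained sketch of the energy balance just described. The reflectance and transmittance arrays are made-up placeholders rather than ProspectD output; the point is only how the absorptance follows from $\rho + \tau + \alpha = 1$ and how the plot bounds it between the reflectance and the inverted transmittance curves.

```python
import numpy as np
import matplotlib.pyplot as plt

wls = np.arange(400, 2501)  # wavelengths in nm
# Placeholder spectra (in the notebook these would come from ProspectD)
rho = 0.10 + 0.30 * np.exp(-((wls - 900.0) / 600.0) ** 2)   # leaf reflectance
tau = 0.10 + 0.25 * np.exp(-((wls - 1100.0) / 700.0) ** 2)  # leaf transmittance
absorptance = 1.0 - rho - tau                               # closure: rho + tau + alpha = 1
assert np.allclose(rho + tau + absorptance, 1.0)

plt.plot(wls, rho, label=r"$\rho$ (reflectance)")
plt.plot(wls, 1.0 - tau, label=r"$1-\tau$ (transmittance, inverted axis)")
plt.fill_between(wls, rho, 1.0 - tau, alpha=0.2, label=r"$\alpha$ (absorptance)")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
```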
|
w_rho_leaf = interactive(fn.update_prospect_spectrum, N_leaf=fn.w_nleaf, Cab=fn.w_cab,
Car=fn.w_car, Ant=fn.w_ant, Cbrown=fn.w_cbrown, Cw=fn.w_cw, Cm=fn.w_cm)
display(w_rho_leaf)
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Note the following:

* The chlorophyll concentration `Cab` mainly affects the visible (RGB) and *red edge* (R-E) regions, with more absorption in the red and blue and more reflection in the green. This is why most leaves look green.
* The water content `Cw` mainly affects absorption in the shortwave infrared (SWIR), with absorption maxima around 1460 and 2100 nm.
* The dry matter `Cm` mainly affects absorption in the near infrared (NIR).
* Other pigments affect the visible spectrum to a lesser extent. For example, the anthocyanins `Ant`, which usually appear during senescence, shift the green reflection peak towards the red, above all when the chlorophyll concentration decreases at the same time.
* The `N` parameter affects the ratio between reflectance and transmittance. The more *layers* a leaf has, the more multiple-scattering events occur and the more it reflects.

> You can also see this phenomenon in the double- or triple-glazed windows used as insulation, for example in shop fronts. Unless you stand right in front of and close to the shop window, it looks more like a mirror than a window.

Spectrum of the soil

The spectrum of the canopy, or of the vegetated surface, depends not only on the spectrum and properties of the leaves but also on the structure of the canopy itself and on the soil. In particular, in open or sparse canopies, such as during the early phenological stages, the spectral behaviour of the soil can strongly influence the spectral signal captured by remote sensing instruments.

The soil spectrum depends on several factors, such as its mineral composition, organic matter, texture and density, as well as its surface moisture. Run the next cell and look at the different spectral features of different soil types.
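The regions listed above are also the basis of the usual spectral indices. The following sketch (plain NumPy, with a made-up spectrum standing in for the simulated ones) shows how one might condense those regions into two normalized indices: NDVI, driven by the red/NIR contrast that chlorophyll and canopy structure create, and a NIR–SWIR water index sensitive to `Cw`.

```python
import numpy as np

def band(wls, rho, center, width=10):
    """Mean reflectance in a simple rectangular band around `center` (nm)."""
    mask = (wls >= center - width) & (wls <= center + width)
    return float(np.mean(rho[mask]))

def ndvi(wls, rho):
    red, nir = band(wls, rho, 670), band(wls, rho, 800)
    return (nir - red) / (nir + red)

def nir_swir_water_index(wls, rho):
    nir, swir = band(wls, rho, 860), band(wls, rho, 1640)
    return (nir - swir) / (nir + swir)

# Example with a made-up canopy-like spectrum
wls = np.arange(400, 2501)
rho = 0.05 + 0.40 * (wls > 720) * np.exp(-((wls - 1000.0) / 800.0) ** 2)
print("NDVI:", round(ndvi(wls, rho), 3),
      "  NIR-SWIR water index:", round(nir_swir_water_index(wls, rho), 3))
```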
|
w_rho_soil = interactive(fn.update_soil_spectrum, soil_name=fn.w_soil)
display(w_rho_soil)
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Note how different a soil spectrum can be compared with that of a leaf. This is key when classifying cover types with remote sensing, as well as when quantifying the vigour/density of a crop.

Note that more saline (`aridisol.salorthid`) or gypsic (`aridisol.gypsiorthd`) soils have a higher reflectance, above all in the visible (RGB). In other words, they are whiter than other soils.

Spectrum of the canopy

Finally, by integrating the spectral signature of a leaf and of the underlying soil we can obtain the spectrum of a vegetation canopy. The spectrum of the vegetated surface also depends on the structure of the canopy, mainly on the amount of leaves per unit ground area (defined as the Leaf Area Index) and on how those leaves are oriented with respect to the vertical. In addition, since the incident and reflected light interacts with both the volume of leaves and the soil, the positions of the sun and of the sensor influence the spectral signal we obtain.

For this part we will combine the ProspectD transfer model, to simulate the spectrum of a leaf, with another transfer model at canopy level (4SAIL). The latter treats the vegetated surface as a horizontally and vertically homogeneous layer, so caution is recommended when applying it to heterogeneous tree canopies.

> If you want to know more about the 4SAIL model, see this [publication](./lecturas_adicionales/4SAIL_model.pdf).
>
> If you want more details about the calculation and the model code, click [here](https://github.com/hectornieto/pypro4sail/blob/b111891e0a2c01b8b3fa5ff41790687d31297e5f/pypro4sail/four_sail.py#L245).

Run the next cell and see how the [leaf](Espectro-de-una-hoja) and [soil](Espectro-del-suelo) spectra generated previously are integrated to obtain a spectrum of the vegetated surface.

> You can modify the leaf and soil spectra, and this plot will update automatically.
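The actual canopy computation is done by 4SAIL inside `pypro4sail`. Purely to illustrate why both the LAI and the soil matter, here is a deliberately simplified sketch (not 4SAIL) that mixes a vegetation-like spectrum and a soil spectrum weighted by a Beer–Lambert gap fraction; all spectra are placeholders.

```python
# Deliberately simplified illustration, NOT the 4SAIL model: the canopy signal is
# approximated as a mix of a vegetation-like spectrum and the soil spectrum,
# weighted by the fraction of ground covered by leaves. The gap fraction follows
# a Beer-Lambert law with a nadir extinction coefficient of about 0.5 for a
# spherical leaf angle distribution. `rho_leaf` and `rho_soil` are placeholders.
import numpy as np

def simple_canopy_reflectance(rho_leaf, rho_soil, lai, k=0.5):
    gap_fraction = np.exp(-k * lai)   # probability of seeing the soil
    fcover = 1.0 - gap_fraction       # fraction of vegetation seen
    return fcover * rho_leaf + gap_fraction * rho_soil

wls = np.arange(400, 2501)
rho_leaf = 0.05 + 0.45 * (wls > 720)           # crude "green vegetation" spectrum
rho_soil = 0.15 + 0.10 * (wls - 400) / 2100.0  # crude bright-soil spectrum

for lai in [0.0, 0.5, 2.0, 4.0]:
    rho_canopy = simple_canopy_reflectance(rho_leaf, rho_soil, lai)
    print(f"LAI={lai:3.1f}  red(670nm)={rho_canopy[270]:.3f}  NIR(800nm)={rho_canopy[400]:.3f}")
```

With LAI = 0 the result is just the soil spectrum; as LAI increases, the red reflectance drops and the NIR reflectance rises, which is exactly the behaviour described in the next paragraph.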
|
w_rho_canopy = interactive(fn.update_4sail_spectrum,
lai=fn.w_lai, hotspot=fn.w_hotspot, leaf_angle=fn.w_leaf_angle,
sza=fn.w_sza, vza=fn.w_vza, psi=fn.w_psi, skyl=fn.w_skyl,
leaf_spectrum=fixed(w_rho_leaf), soil_spectrum=fixed(w_rho_soil))
display(w_rho_canopy)
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Remember from the [net radiation practical](./ES_radiacion_neta.ipynb) that a vegetated surface has certain anisotropic properties, which means that it reflects differently depending on the illumination and viewing geometry. See how the spectrum changes as you vary the view zenith angle (VZA), the sun zenith angle (SZA) and the relative azimuth angle (PSI) between the sun and the observer.

Vary the LAI and set it to zero (no vegetation). Check that the resulting spectrum is simply the soil spectrum. Now increase the LAI slightly and you will see how the spectrum changes, with the reflectance decreasing in the red and blue (due to leaf chlorophyll) and increasing in the *red edge* and the NIR.

Also remember from the [net radiation practical](./ES_radiacion_neta.ipynb) the effect of the angular arrangement of the leaves. With a nadir view (VZA=0), vary the typical leaf angle (`Leaf Angle`) from a predominantly horizontal value (0º) to a predominantly vertical one (90º).

Sensitivity of the parameters

In this task you can see how the spectral behaviour of vegetation changes as its physico-chemical parameters vary, as well as its sensitivity to the observation and illumination conditions.

To do this we will carry out a sensitivity analysis, varying a single parameter at a time while the rest of the parameters remain constant. You can change the individual values of the remaining parameters (the previous plots will also be updated). Then select which parameter you want to analyse and the maximum and minimum values of the range you want it to span.
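A sketch of what such a one-at-a-time sensitivity analysis does under the hood is shown below. The function `simulate_canopy_spectrum` is a hypothetical stand-in for the PROSAIL call hidden in `fn.prosail_sensitivity`; only the sweep logic is the point.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis: one parameter is swept
# over a range while all others stay at a fixed baseline.
import numpy as np
import matplotlib.pyplot as plt

baseline = {"Cab": 40.0, "Cw": 0.02, "Cm": 0.005, "lai": 2.0, "leaf_angle": 57.0}

def simulate_canopy_spectrum(params):
    """Hypothetical placeholder returning (wavelengths, reflectance)."""
    wls = np.arange(400, 2501)
    rho = 0.05 + 0.004 * params["lai"] * (wls > 720) - 0.0005 * params["Cab"] * (wls < 700)
    return wls, np.clip(rho, 0.0, 1.0)

def sweep(var, values, baseline):
    """Vary one parameter at a time, keeping the others fixed at the baseline."""
    for value in values:
        params = dict(baseline, **{var: value})
        wls, rho = simulate_canopy_spectrum(params)
        plt.plot(wls, rho, label=f"{var}={value:g}")
    plt.xlabel("Wavelength (nm)"); plt.ylabel("Reflectance"); plt.legend(); plt.show()

sweep("lai", np.linspace(0, 4, 5), baseline)
```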
|
w_sensitivity = interactive(fn.prosail_sensitivity,
N_leaf=fn.w_nleaf, Cab=fn.w_cab, Car=fn.w_car, Ant=fn.w_ant, Cbrown=fn.w_cbrown,
Cw=fn.w_cw, Cm=fn.w_cm, lai=fn.w_lai, hotspot=fn.w_hotspot, leaf_angle=fn.w_leaf_angle,
sza=fn.w_sza, vza=fn.w_vza, psi=fn.w_psi, skyl=fn.w_skyl,
soil_name=fn.w_soil, var=fn.w_param, value_range=fn.w_range)
display(w_sensitivity)
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Start with the sensitivity of the spectrum to the chlorophyll concentration. You will see that the variations occur mainly in the green and the red. Note also that in the *red edge*, the transition zone between the red and the NIR, a "shift" of the signal occurs; this phenomenon is key and is the reason why newer sensors (Sentinel, new UAV cameras) include this region to help estimate chlorophyll and hence photosynthetic activity.

Evaluate the spectral sensitivity to the other pigments (`Car` or `Ant`). You will see that the spectral response to these pigments is weaker, which means they are harder to estimate from remote sensing. In contrast, the spectral variation with the brown pigments is quite strong; as a reminder, these pigments represent the colour changes that occur in diseased and dead leaves.

> This implies that it is relatively feasible to detect and quantify health problems in vegetation.

Now look at the sensitivity to LAI when its range is small (e.g. from 0 to 2). You will see that the spectrum changes significantly as the LAI increases. Now look at the sensitivity when the LAI spans higher values (e.g. from 2 to 4): the variation in the spectrum is much smaller. It is usually said that at high LAI values the spectrum tends to "saturate", so the signal becomes less sensitive.

> It is easier to estimate LAI with a smaller margin of error in crops with low leaf density or in early phenological stages than in very dense crops or vegetation.

Now keep the LAI fixed at a high value (e.g. 3) and vary the view zenith angle between 0º (nadir) and an oblique view (e.g. 35º). You will see that, even with a high LAI, which as we have just seen is less sensitive, larger spectral variations appear when the viewing geometry changes.

> Thanks to the anisotropy of vegetation, spectral variations with respect to the viewing and illumination geometry can help to retrieve LAI under high-density conditions.

Now look at the leaf specific weight, i.e. the amount of dry matter (`Cm`). You will see that, depending on the specific weight of the leaf, important variations occur in the NIR and SWIR.

> Leaf biomass can be computed as the product of `LAI` and `Cm`, so it is feasible to estimate the leaf biomass of a crop. This information can be useful, for example, to estimate the final yield of some crops, such as cereals.

The `hotspot` parameter is a semi-empirical parameter related to the size of the leaves relative to the height of the canopy. It affects how leaves shade other leaves within the canopy, so its strongest effect is seen when the observer (sensor) is in exactly the same position as the sun. To see this, set similar values for VZA and SZA, and set the relative azimuth angle PSI to 0º. Now vary the hotspot. With the observer looking at the illuminated side of the vegetation, the relative size of the leaves plays an important role, since the larger the leaves, the larger the directly illuminated volume of the crown.

The signal of a sensor

So far we have looked at the detailed spectral behaviour of vegetation.
However, the sensors on board satellites, aircraft and drones do not measure the whole spectrum continuously; instead they sample that spectrum around a set of specific bands, strategically chosen to try to capture the most relevant biophysical features.

The way a specific sensor integrates the spectrum in order to deliver the information in its different bands is called the spectral response function. Each sensor, and each of its bands, has its own spectral response function.

In this task we will look at the spectral responses of the sensors we will use most often: Landsat, Sentinel-2 and Sentinel-3. We will also look at the spectral behaviour of a typical camera used on drones.

We start from the simulations generated previously. Select the sensor you want to simulate to see how each of the sensors would "see" those same spectra.
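The following sketch illustrates the idea of a spectral response function: the reflectance a band reports is the continuous spectrum weighted by the band's response and normalised by its integral. The Gaussian responses and the band centres/widths below are rough, illustration-only stand-ins, not the official Landsat or Sentinel response functions.

```python
import numpy as np

def gaussian_srf(wls, center, fwhm):
    """Gaussian stand-in for a band's spectral response function."""
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    return np.exp(-0.5 * ((wls - center) / sigma) ** 2)

def band_reflectance(wls, rho, srf):
    """Spectrum weighted by the SRF, normalised by the SRF integral."""
    return np.trapz(rho * srf, wls) / np.trapz(srf, wls)

wls = np.arange(400, 2501, 1.0)
rho = 0.05 + 0.45 * (wls > 720)  # placeholder canopy-like spectrum

# Roughly Sentinel-2-like band centres and widths, for illustration only
bands = {"B4 (red)": (665, 30), "B5 (red edge)": (705, 15), "B8 (NIR)": (842, 115)}
for name, (center, fwhm) in bands.items():
    srf = gaussian_srf(wls, center, fwhm)
    print(f"{name}: {band_reflectance(wls, rho, srf):.3f}")
```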
|
w_rho_sensor = interactive(fn.sensor_sensitivity,
sensor=fn.w_sensor, spectra=fixed(w_sensitivity))
display(w_rho_sensor)
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Run a sensitivity analysis for chlorophyll again and compare the spectral response that Landsat, Sentinel-2 and a UAV camera would give.

Retrieval of vegetation parameters

So far we have seen how the surface spectrum varies with respect to the different biophysical parameters.

However, our final objective is the opposite: starting from a spectrum, or from a set of spectral bands, to estimate one or more biophysical variables of interest. In particular, for the purpose of computing evapotranspiration and water use efficiency, we may want to estimate the leaf area index and/or the fraction of absorbed PAR, as well as chlorophylls or other pigments.

One of the typical methods is to develop empirical relationships between the bands (or between vegetation indices) and data sampled in the field. This can give the most reliable answer for our study plot, but as you have seen above the spectral signal depends on many other factors, which can mean that a relationship calibrated with a few local samples cannot be extrapolated or applied to other crops or regions.

Another alternative is to build synthetic databases from simulations. That is what we are going to do in this task.

We are going to run 5000 simulations, varying the parameter values randomly within a range of values that could be expected in our study areas. For example, if you work with perennial crops you may want to keep an LAI range with minimum values noticeably above zero, whereas if you work with annual crops the value 0 is needed to reflect the development of the crop from planting through emergence and maturity. Since there is a large number of parameters and it is very likely that we do not know the plausible range for most crops, do not worry: leave the default values and focus on the parameters you are most confident about.

You can also choose one or several soil types, depending on the soils of your site.

> You could even upload a typical soil spectrum from your area to the folder [./input/soil_spectral_library](./input/soil_spectral_library). Just make sure the text file has two columns, the first with the wavelengths from 400 to 2500 and the second with the corresponding reflectance. To refresh the soil spectral library you would also need to re-run the [first cell](Instrucciones).

Finally, select the sensor for which you want to generate the signal.

When you have configured your simulation setup, click the `Generar Simulaciones` button. The system will take a while, but after a few minutes it will return a series of plots.

> You may receive a warning message; do not worry, in principle everything should work normally.
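A sketch of the synthetic-database idea is given below, with a hypothetical `simulate_bands` function standing in for the PROSAIL-plus-sensor simulation that the `Generar Simulaciones` button runs. Random parameter draws build a look-up table, and a regressor trained on it can then retrieve, for example, LAI from observed band reflectances.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_sim = 5000

# Uniform draws within plausible ranges (adjust to your crop/site)
params = {
    "Cab": rng.uniform(20, 80, n_sim),
    "Cw": rng.uniform(0.005, 0.03, n_sim),
    "lai": rng.uniform(0, 4, n_sim),
    "leaf_angle": rng.uniform(30, 70, n_sim),
}

def simulate_bands(p, i):
    """Hypothetical, crude placeholder returning [red, red-edge, NIR] reflectances."""
    red = 0.25 * np.exp(-0.5 * p["lai"][i]) * np.exp(-0.01 * p["Cab"][i])
    nir = 0.15 + 0.08 * p["lai"][i]
    return np.array([red, 0.5 * (red + nir), nir])

X = np.vstack([simulate_bands(params, i) for i in range(n_sim)])  # synthetic database
y = params["lai"]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("Retrieved LAI for bands [0.05, 0.20, 0.40]:",
      model.predict(np.array([[0.05, 0.20, 0.40]]))[0])
```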
|
w_rho_sensor = interactive(fn.build_random_simulations, {"manual": True, "manual_name": "Generar simulaciones"},
n_sim=fixed(5000), n_leaf_range=fn.w_range_nleaf,
cab_range=fn.w_range_cab, car_range=fn.w_range_car,
ant_range=fn.w_range_ant, cbrown_range=fn.w_range_cbrown,
cw_range=fn.w_range_cw, cm_range=fn.w_range_cm,
lai_range=fn.w_range_lai, hotspot_range=fn.w_range_hotspot,
leaf_angle_range=fn.w_range_leaf_angle,
sza=fn.w_sza, vza=fn.w_vza, psi=fn.w_psi,
skyl=fn.w_skyl, soil_names=fn.w_soils, sensor=fn.w_sensor)
display(w_rho_sensor)
|
_____no_output_____
|
CC0-1.0
|
ES_espectro_vegetacion.ipynb
|
hectornieto/Curso-WUE
|
Detecting sound sources in YouTube videos

First load all dependencies and set work and data paths
|
# set plotting parameters
%matplotlib inline
import matplotlib.pyplot as plt
# change notebook settings for wider screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# For embedding YouTube videos in Ipython Notebook
from IPython.display import YouTubeVideo
# and setting the time of the video in seconds
from datetime import timedelta
import numpy as np
import os
import sys
import urllib.request
import pandas as pd
sys.path.append(os.path.join('src', 'audioset_demos'))
# from __future__ import print_function  # unnecessary on Python 3 (and __future__ imports must precede other statements)
# signal processing library
from scipy import signal
from scipy.io import wavfile
import wave
import six
import tensorflow as tf
import h5py
# Audio IO and fast plotting
import pyaudio
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph as pg
# Multiprocessing and threading
import multiprocessing
# Dependencies for creating deep VGGish embeddings
from src.audioset_demos import vggish_input
import vggish_input
import vggish_params
import vggish_postprocess
import vggish_slim
pca_params = 'vggish_pca_params.npz'
model_checkpoint = 'vggish_model.ckpt'
# Our YouTube video downloader based on youtube-dl module
from src.audioset_demos import download_youtube_wav as dl_yt
# third-party sounds processing and visualization library
import librosa
import librosa.display
# Set user
usr = 'maxvo'
MAXINT16 = np.iinfo(np.int16).max
print(MAXINT16)
FOCUS_CLASSES_ID = [0, 137, 62, 63, 500, 37]
#FOCUS_CLASSES_ID = [0, 137, 37, 40, 62, 63, 203, 208, 359, 412, 500]
class_labels = pd.read_csv(os.path.join('src', 'audioset_demos', 'class_labels_indices.csv'))
CLASS_NAMES = class_labels.loc[:, 'display_name'].tolist()
FOCUS_CLASS_NAME_FRAME = class_labels.loc[FOCUS_CLASSES_ID, 'display_name']
FOCUS_CLASS_NAME = FOCUS_CLASS_NAME_FRAME.tolist()
print("Chosen classes for experiments:")
print(FOCUS_CLASS_NAME_FRAME)
# Set current working directory
src_dir = os.getcwd()
# Set raw wav-file data directories for placing downloaded audio
raw_dir = os.path.join(src_dir, 'data', 'audioset_demos', 'raw')
short_raw_dir = os.path.join(src_dir, 'data', 'audioset_demos', 'short_raw')
if not os.path.exists(short_raw_dir):
    os.makedirs(short_raw_dir)
if not os.path.exists(raw_dir):
    os.makedirs(raw_dir)
audioset_data_path = os.path.join('data', 'audioset_demos', 'audioset', 'packed_features')
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Download model parameters and PCA embedding
|
if not os.path.isfile(os.path.join('src', 'audioset_demos', 'vggish_model.ckpt')):
    urllib.request.urlretrieve(
        "https://storage.googleapis.com/audioset/vggish_model.ckpt",
        filename=os.path.join('src', 'audioset_demos', 'vggish_model.ckpt')
    )
if not os.path.isfile(os.path.join('src', 'audioset_demos', 'vggish_pca_params.npz')):
    urllib.request.urlretrieve(
        "https://storage.googleapis.com/audioset/vggish_pca_params.npz",
        filename=os.path.join('src', 'audioset_demos', 'vggish_pca_params.npz')
    )
if not os.path.isfile(os.path.join('data', 'audioset_demos', 'features.tar.gz')):
    urllib.request.urlretrieve(
        "https://storage.googleapis.com/eu_audioset/youtube_corpus/v1/features/features.tar.gz",
        filename=os.path.join('data', 'audioset_demos', 'features.tar.gz')
    )
# The download is a gzipped tar archive, so unpack it with tarfile instead of
# writing the decompressed tar stream to a single flat file. This assumes the
# archive contains the packed_features/ directory referenced by
# audioset_data_path below.
import tarfile
with tarfile.open(os.path.join('data', 'audioset_demos', 'features.tar.gz'), 'r:gz') as tar:
    tar.extractall(path=os.path.join('data', 'audioset_demos', 'audioset'))
def save_data(hdf5_path, x, video_id_list, y=None):
    with h5py.File(hdf5_path, 'w') as hf:
        hf.create_dataset('x', data=x)
        hf.create_dataset('y', data=y)
        hf.create_dataset('video_id_list', data=video_id_list, dtype='S11')

def load_data(hdf5_path):
    with h5py.File(hdf5_path, 'r') as hf:
        x = hf['x'][:]
        if hf['y'] is not None:
            y = hf['y'][:]
        else:
            y = hf['y']
        video_id_list = hf['video_id_list'][:].tolist()
    return x, y, video_id_list

def time_str_to_sec(time_str='00:00:00'):
    time_str_list = time_str.split(':')
    seconds = int(
        timedelta(
            hours=int(time_str_list[0]),
            minutes=int(time_str_list[1]),
            seconds=int(time_str_list[2])
        ).total_seconds()
    )
    return seconds
class miniRecorder:
    def __init__(self, seconds=4, sampling_rate=16000):
        self.FORMAT = pyaudio.paInt16 #paFloat32 #paInt16
        self.CHANNELS = 1 # Must be Mono
        self.RATE = sampling_rate # sampling rate (Hz), 22050 was used for this application
        self.FRAMESIZE = 4200 # buffer size, number of data points to read at a time
        self.RECORD_SECONDS = seconds + 1 # how long should the recording (approx) be
        self.NOFRAMES = int((self.RATE * self.RECORD_SECONDS) / self.FRAMESIZE) # number of frames needed

    def record(self):
        # instantiate pyaudio
        p = pyaudio.PyAudio()
        # open stream
        stream = p.open(format=self.FORMAT,
                        channels=self.CHANNELS,
                        rate=self.RATE,
                        input=True,
                        frames_per_buffer=self.FRAMESIZE)
        # discard the first part of the recording
        discard = stream.read(self.FRAMESIZE)
        print('Recording...')
        data = stream.read(self.NOFRAMES * self.FRAMESIZE)
        decoded = np.frombuffer(data, dtype=np.int16) #np.float32)
        print('Finished...')
        stream.stop_stream()
        stream.close()
        p.terminate()
        # Remove first second to avoid "click" sound from starting recording
        self.sound_clip = decoded[self.RATE:]
class Worker(QtCore.QRunnable):
    '''
    Worker thread

    Inherits from QRunnable to handle worker thread setup, signals and wrap-up.

    :param callback: The function callback to run on this worker thread. Supplied args and
                     kwargs will be passed through to the runner.
    :type callback: function
    :param args: Arguments to pass to the callback function
    :param kwargs: Keywords to pass to the callback function
    '''
    def __init__(self, fn, *args, **kwargs):
        super(Worker, self).__init__()
        # Store constructor arguments (re-used for processing)
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    @QtCore.pyqtSlot()
    def run(self):
        '''
        Initialise the runner function with passed args, kwargs.
        '''
        self.fn(*self.args, **self.kwargs)
class AudioFile:
    def __init__(self, file, chunk):
        """ Init audio stream """
        self.chunk = chunk
        self.data = ''
        self.wf = wave.open(file, 'rb')
        self.p = pyaudio.PyAudio()
        self.stream = self.p.open(
            format=self.p.get_format_from_width(self.wf.getsampwidth()),
            channels=self.wf.getnchannels(),
            rate=self.wf.getframerate(),
            output=True
        )

    def play(self):
        """ Play entire file """
        self.data = self.wf.readframes(self.chunk)
        while self.data:
            self.stream.write(self.data)
            self.data = self.wf.readframes(self.chunk)
        self.close()

    def close(self):
        """ Graceful shutdown """
        self.stream.close()
        self.p.terminate()

    def read(self, chunk, exception_on_overflow=False):
        return self.data
class App(QtGui.QMainWindow):
def __init__(self,
predictor,
n_top_classes=10,
plot_classes=FOCUS_CLASSES_ID,
parent=None):
super(App, self).__init__(parent)
### Predictor model ###
self.predictor = predictor
self.n_classes = predictor.n_classes
self.n_top_classes = n_top_classes
self.plot_classes = plot_classes
self.n_plot_classes = len(self.plot_classes)
### Start/stop control variable
self.continue_recording = False
self._timerId = None
### Settings ###
self.rate = 16000 # sampling rate
self.chunk = 1000 # reading chunk sizes,
#self.rate = 22050 # sampling rate
#self.chunk = 2450 # reading chunk sizes, make it a divisor of sampling rate
#self.rate = 44100 # sampling rate
#self.chunk = 882 # reading chunk sizes, make it a divisor of sampling rate
self.nperseg = 400 # samples pr segment for spectrogram, scipy default is 256
# self.nperseg = 490 # samples pr segment for spectrogram, scipy default is 256
self.noverlap = 0 # overlap between spectrogram windows, scipt default is nperseg // 8
self.tape_length = 20 # length of running tape
self.plot_length = 10 * self.rate
self.samples_passed = 0
self.pred_length = 10
self.pred_samples = self.rate * self.pred_length
self.start_tape() # initialize the tape
self.eps = np.finfo(float).eps
# Interval between predictions in number of samples
self.pred_intv = (self.tape_length // 4) * self.rate
self.pred_step = 10 * self.chunk
self.full_tape = False
#### Create Gui Elements ###########
self.mainbox = QtGui.QWidget()
self.setCentralWidget(self.mainbox)
self.mainbox.setLayout(QtGui.QVBoxLayout())
self.canvas = pg.GraphicsLayoutWidget()
self.mainbox.layout().addWidget(self.canvas)
self.label = QtGui.QLabel()
self.mainbox.layout().addWidget(self.label)
# Thread pool for prediction worker coordination
self.threadpool = QtCore.QThreadPool()
# self.threadpool_plot = QtCore.QThreadPool()
print("Multithreading with maximum %d threads" % self.threadpool.maxThreadCount())
# Play, record and predict button in toolbar
'''
self.playTimer = QtCore.QTimer()
self.playTimer.setInterval(500)
self.playTimer.timeout.connect(self.playTick)
self.toolbar = self.addToolBar("Play")
self.playScansAction = QtGui.QAction(QtGui.QIcon("control_play_blue.png"), "play scans", self)
self.playScansAction.triggered.connect(self.playScansPressed)
self.playScansAction.setCheckable(True)
self.toolbar.addAction(self.playScansAction)
'''
# Buttons and user input
btn_brow_1 = QtGui.QPushButton('Start/Stop Recording', self)
btn_brow_1.setGeometry(300, 15, 250, 25)
#btn_brow_4.clicked.connect(support.main(fname_points, self.fname_stl_indir, self.fname_stl_outdir))
# Action: Start or stop recording
btn_brow_1.clicked.connect(lambda: self.press_record())
btn_brow_2 = QtGui.QPushButton('Predict', self)
btn_brow_2.setGeometry(20, 15, 250, 25)
# Action: predict on present tape roll
btn_brow_2.clicked.connect(
lambda: self.start_predictions(
sound_clip=self.tape,
full_tape=False
)
)
self.le1 = QtGui.QLineEdit(self)
self.le1.setGeometry(600, 15, 250, 21)
self.yt_video_id = str(self.le1.text())
self.statusBar().showMessage("Ready")
# self.toolbar = self.addToolBar('Exit')
# self.toolbar.addAction(exitAction)
self.setGeometry(300, 300, 1400, 1200)
self.setWindowTitle('Live Audio Event Detector')
# self.show()
# line plot
self.plot = self.canvas.addPlot()
self.p1 = self.plot.plot(pen='r')
self.plot.setXRange(0, self.plot_length)
self.plot.setYRange(-0.5, 0.5)
self.plot.hideAxis('left')
self.plot.hideAxis('bottom')
self.canvas.nextRow()
# spectrogram
self.view = self.canvas.addViewBox()
self.view.setAspectLocked(False)
self.view.setRange(QtCore.QRectF(0,0, self.spec.shape[1], 100))
# image plot
self.img = pg.ImageItem() #(border='w')
self.view.addItem(self.img)
# bipolar colormap
pos = np.array([0., 1., 0.5, 0.25, 0.75])
color = np.array([[0,255,255,255], [255,255,0,255], [0,0,0,255], (0, 0, 255, 255), (255, 0, 0, 255)], dtype=np.ubyte)
cmap = pg.ColorMap(pos, color)
lut = cmap.getLookupTable(0.0, 1.0, 256)
self.img.setLookupTable(lut)
self.img.setLevels([-15, -5])
self.canvas.nextRow()
# create bar chart
#self.view2 = self.canvas.addViewBox()
# dummy data
#self.x = np.arange(self.n_top_classes)
#self.y1 = np.linspace(0, self.n_classes, num=self.n_top_classes)
#self.bg1 = pg.BarGraphItem(x=self.x, height=self.y1, width=0.6, brush='r')
#self.view2.addItem(self.bg1)
# Prediction line plot
self.plot2 = self.canvas.addPlot()
self.plot2.addLegend()
self.plot_list = [None]*self.n_plot_classes
for i in range(self.n_plot_classes):
self.plot_list[i] = self.plot2.plot(
pen=pg.intColor(i),
name=CLASS_NAMES[self.plot_classes[i]]
)
self.plot2.setXRange(0, self.plot_length)
self.plot2.setYRange(0.0, 1.0)
self.plot2.hideAxis('left')
self.plot2.hideAxis('bottom')
# self.canvas.nextRow()
#### Start #####################
# self.p = pyaudio.PyAudio()
# self.start_stream()
# self._update()
def playScansPressed(self):
if self.playScansAction.isChecked():
self.playTimer.start()
else:
self.playTimer.stop()
def playTick(self):
self._update()
def start_stream(self):
if not self.yt_video_id:
self.stream = self.p.open(
format=pyaudio.paFloat32,
channels=1,
rate=self.rate,
input=True,
frames_per_buffer=self.chunk
)
else:
self.stream = AudioFile(self.yt_video_id, self.chunk)
self.stream.play()
def close_stream(self):
self.stream.stop_stream()
self.stream.close()
self.p.terminate()
# self.exit_pool()
def read_stream(self):
self.raw = self.stream.read(self.chunk, exception_on_overflow=False)
data = np.frombuffer(self.raw, dtype=np.float32)
return self.raw, data
def start_tape(self):
self.tape = np.zeros(self.tape_length * self.rate)
# empty spectrogram tape
self.f, self.t, self.Sxx = signal.spectrogram(
self.tape[-self.plot_length:],
self.rate,
nperseg=self.nperseg,
noverlap=self.noverlap,
detrend=False,
return_onesided=True,
mode='magnitude'
)
self.spec = np.zeros(self.Sxx.shape)
self.pred = np.zeros((self.n_plot_classes, self.plot_length))
def tape_add(self):
if self.continue_recording:
raw, audio = self.read_stream()
self.tape[:-self.chunk] = self.tape[self.chunk:]
self.tape[-self.chunk:] = audio
self.samples_passed += self.chunk
# spectrogram on whole tape
# self.f, self.t, self.Sxx = signal.spectrogram(self.tape, self.rate)
# self.spec = self.Sxx
# spectrogram on last added part of tape
self.f, self.t, self.Sxx = signal.spectrogram(self.tape[-self.chunk:],
self.rate,
nperseg=self.nperseg,
noverlap=self.noverlap)
spec_chunk = self.Sxx.shape[1]
self.spec[:, :-spec_chunk] = self.spec[:, spec_chunk:]
# Extend spectrogram after converting to dB scale
self.spec[:, -spec_chunk:] = np.log10(abs(self.Sxx) + self.eps)
self.pred[:, :-self.chunk] = self.pred[:, self.chunk:]
'''
if (self.samples_passed % self.pred_intv) == 0:
sound_clip = self.tape # (MAXINT16 * self.tape).astype('int16') / 32768.0
if self.full_tape:
# predictions on full tape
pred_chunk = self.predictor.predict(
sound_clip=sound_clip[-self.pred_intv:],
sample_rate=self.rate
)[0][self.plot_classes]
self.pred[:, -self.pred_intv:] = np.asarray(
(self.pred_intv) * [pred_chunk]).transpose()
else:
# prediction, on some snip of the last part of the signal
# 1 s seems to be the shortest time frame with reliable predictions
self.start_predictions(sound_clip)
'''
def start_predictions(self, sound_clip=None, full_tape=False):
#self.samples_passed_at_predict = self.samples_passed
if sound_clip is None:
sound_clip = self.tape
if full_tape:
worker = Worker(self.provide_prediction, *(), **{
"sound_clip": sound_clip,
"pred_start": -self.pred_samples,
"pred_stop": None,
"pred_step": self.pred_samples
}
)
self.threadpool.start(worker)
else:
for chunk in range(0, self.pred_intv, self.pred_step):
pred_start = - self.pred_intv - self.pred_samples + chunk
pred_stop = - self.pred_intv + chunk
worker = Worker(self.provide_prediction, *(), **{
"sound_clip": sound_clip,
"pred_start": pred_start,
"pred_stop": pred_stop,
"pred_step": self.pred_step
}
)
self.threadpool.start(worker)
def provide_prediction(self, sound_clip, pred_start, pred_stop, pred_step):
#samples_passed_since_predict = self.samples_passed - self.samples_passed_at_predict
#pred_stop -= samples_passed_since_predict
pred_chunk = self.predictor.predict(
sound_clip=sound_clip[pred_start:pred_stop],
sample_rate=self.rate
)[0][self.plot_classes]
#samples_passed_since_predict = self.samples_passed - self.samples_passed_at_predict - samples_passed_since_predict
#pred_stop -= samples_passed_since_predict
if pred_stop is not None:
pred_stop_step = pred_stop - pred_step
else:
pred_stop_step = None
self.pred[:, pred_stop_step:pred_stop] = np.asarray(
(pred_step) * [pred_chunk]
).transpose()
def exit_pool(self):
"""
Exit all QRunnables and delete QThreadPool
"""
# When trying to quit, the application takes a long time to stop
self.threadpool.globalInstance().waitForDone()
self.threadpool.deleteLater()
sys.exit(0)
def press_record(self):
self.yt_video_id = str(self.le1.text())
# Switch between continue recording or stopping it
# Start or avoid starting recording dependent on last press
if self.continue_recording:
self.continue_recording = False
#if self._timerId is not None:
# self.killTimer(self._timerId)
self.close_stream()
else:
self.continue_recording = True
self.p = pyaudio.PyAudio()
self.start_stream()
self._update()
def _update(self):
try:
if self.continue_recording:
self.tape_add()
# self.img.setImage(self.spec.T)
#kwargs = {
# "image": self.spec.T,
# "autoLevels": False,
#
#worker = Worker(self.img.setImage, *(), **kwargs)
#self.threadpool_plot.start(worker)
self.img.setImage(self.spec.T, autoLevels=False)
#worker = Worker(
# self.p1.setData,
# *(),
# **{'y': self.tape[-self.plot_length:]}
#)
#self.threadpool_plot.start(worker)
self.p1.setData(self.tape[-self.plot_length:])
#pred_var = np.var(self.pred, axis=-1)
#pred_mean = np.mean(self.pred, axis=-1)
#class_cand = np.where( (pred_mean > 0.001)*(pred_var > 0.01) )
# n_classes_incl = min(self.n_top_classes, class_cand[0].shape[0])
# print(n_classes_incl)
for i in range(self.n_plot_classes):
#worker = Worker(
# self.plot_list[i].setData,
# *(),
# **{'y': self.pred[i,:]}
#)
#self.threadpool_plot.start(worker)
self.plot_list[i].setData(self.pred[i,:]) # self.plot_classes[i],:])
#self.bg1.setOpts(
# height=self.y1
#)
#self.bg1.setOpts(
# height=np.sort(
# self.pred[:, -1]
# )[-1:-(self.n_top_classes+1):-1]
#)
#print(np.max(self.tape), np.min(self.tape))
# self.label.setText('Class: {0:0.3f}'.format(self.pred[-1]))
QtCore.QTimer.singleShot(1, self._update)
except KeyboardInterrupt:
self.close_stream()
from AudioSetClassifier import AudioSetClassifier
# model_type='decision_level_single_attention',
# balance_type='balance_in_batch',
# at_iteration=50000
#ASC = AudioSetClassifier(
# model_type='decision_level_max_pooling', #single_attention',
# balance_type='balance_in_batch',
# iters=50000
#)
ASC = AudioSetClassifier()
app=0 #This is the solution
app = QtGui.QApplication(sys.argv)
MainApp = App(predictor=ASC)
MainApp.show()
sys.exit(app.exec_())
minirec = miniRecorder(seconds=10, sampling_rate=16000)
minirec.record()
minirec_pred = ASC.predict(sound_clip=minirec.sound_clip / 32768.0, sample_rate=16000)
print(minirec_pred[:,[0, 37, 62, 63]])
max_prob_classes = np.argsort(minirec_pred, axis=-1)[:, ::-1]
max_prob = np.sort(minirec_pred, axis=-1)[:,::-1]
print(max_prob.shape)
example = pd.DataFrame(class_labels['display_name'][max_prob_classes[0,:10]])
example.loc[:, 'prob'] = pd.Series(max_prob[0, :10], index=example.index)
print(example)
example.plot.bar(x='display_name', y='prob', rot=90)
plt.show()
print()
|
(1, 527)
display_name prob
0 Speech 0.865663
506 Inside, small room 0.050520
1 Male speech, man speaking 0.047573
5 Narration, monologue 0.047426
46 Snort 0.043561
482 Ping 0.025956
354 Door 0.017503
458 Arrow 0.016863
438 Chop 0.014797
387 Writing 0.014618
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Parameters for how to plot audio
|
# Sample rate
# this has to be at least twice of max frequency which we've entered
# but you can play around with different sample rates and see how this
# affects the results;
# since we generated this audio, the sample rate is the bitrate
sample_rate = vggish_params.SAMPLE_RATE
# size of audio FFT window relative to sample_rate
n_window = 1024
# overlap between adjacent FFT windows
n_overlap = 360
# number of mel frequency bands to generate
n_mels = 64
# max duration of short video clips
duration = 10
# note frequencies https://pages.mtu.edu/~suits/notefreqs.html
freq1 = 512.
freq2 = 1024.
# fmin and fmax for librosa filters in Hz - used for visualization purposes only
fmax = max(freq1, freq2)*8 + 1000.
fmin = 0.
# stylistic change to the notebook
fontsize = 14
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = fontsize
plt.rcParams['axes.labelsize'] = fontsize
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = fontsize
plt.rcParams['xtick.labelsize'] = fontsize
plt.rcParams['ytick.labelsize'] = fontsize
plt.rcParams['legend.fontsize'] = fontsize
plt.rcParams['figure.titlesize'] = fontsize
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Choosing video IDs and start times before download
|
video_ids = [
'BaW_jenozKc',
'E6sS2d-NeTE',
'xV0eTva6SKQ',
'2Szah76TMgo',
'g38kRk6YAA0',
'OkkkPAE9KvE',
'N1zUp9aPFG4'
]
video_start_time_str = [
'00:00:00',
'00:00:10',
'00:00:05',
'00:00:02',
'00:03:10',
'00:00:10',
'00:00:06'
]
video_start_time = list(map(time_str_to_sec, video_start_time_str))
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Download, save and cut video audio
|
video_titles = []
maxv = np.iinfo(np.int16).max
for i, vid in enumerate(video_ids):
# Download and store video under data/raw/
video_title = dl_yt.download_youtube_wav(
video_id=vid,
raw_dir=raw_dir,
short_raw_dir=short_raw_dir,
start_sec=video_start_time[i],
duration=duration,
sample_rate=sample_rate
)
video_titles += [video_title]
print()
'''
audio_path = os.path.join(raw_dir, vid) + '.wav'
short_audio_path = os.path.join(short_raw_dir, vid) + '.wav'
# Load and downsample audio to 16000
# audio is a 1D time series of the sound
# can also use (audio, fs) = soundfile.read(audio_path)
(audio, fs) = librosa.load(
audio_path,
sr = sample_rate,
offset = video_start_time[i],
duration = duration
)
# Store downsampled 10sec clip under data/short_raw/
wavfile.write(
filename=short_audio_path,
rate=sample_rate,
data=(audio * maxv).astype(np.int16)
)
'''
# Usage example for pyaudio
i = 6
a = AudioFile(
os.path.join(short_raw_dir, video_ids[i]) + '.wav',
chunk = 1000
)
a.play()
a.close()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Retrieve VGGish PCA embeddings
|
video_vggish_emb = []
# Restore VGGish model trained on YouTube8M dataset
# Retrieve PCA-embeddings of bottleneck features
with tf.Graph().as_default(), tf.Session() as sess:
# Define the model in inference mode, load the checkpoint, and
# locate input and output tensors.
vggish_slim.define_vggish_slim(training=False)
vggish_slim.load_vggish_slim_checkpoint(sess, model_checkpoint)
features_tensor = sess.graph.get_tensor_by_name(
vggish_params.INPUT_TENSOR_NAME)
embedding_tensor = sess.graph.get_tensor_by_name(
vggish_params.OUTPUT_TENSOR_NAME)
for i, vid in enumerate(video_ids):
audio_path = os.path.join(short_raw_dir, vid) + '.wav'
examples_batch = vggish_input.wavfile_to_examples(audio_path)
print(examples_batch.shape)
# Prepare a postprocessor to munge the model embeddings.
pproc = vggish_postprocess.Postprocessor(pca_params)
# Run inference and postprocessing.
[embedding_batch] = sess.run([embedding_tensor],
feed_dict={features_tensor: examples_batch})
print(embedding_batch.shape)
postprocessed_batch = pproc.postprocess(embedding_batch)
print(postprocessed_batch.shape)
video_vggish_emb.extend([postprocessed_batch])
print(len(video_vggish_emb))
|
INFO:tensorflow:Restoring parameters from vggish_model.ckpt
(10, 96, 64)
(10, 128)
(10, 128)
(10, 96, 64)
(10, 128)
(10, 128)
(10, 96, 64)
(10, 128)
(10, 128)
(10, 96, 64)
(10, 128)
(10, 128)
(10, 96, 64)
(10, 128)
(10, 128)
(10, 96, 64)
(10, 128)
(10, 128)
(10, 96, 64)
(10, 128)
(10, 128)
7
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Plot audio, transformations and embeddings

Function for visualising audio
|
def plot_audio(audio, emb):
audio_sec = audio.shape[0]/sample_rate
# Make a new figure
plt.figure(figsize=(18, 16), dpi= 60, facecolor='w', edgecolor='k')
plt.subplot(511)
# Display the spectrogram on a mel scale
librosa.display.waveplot(audio, int(sample_rate), max_sr = int(sample_rate))
plt.title('Raw audio waveform @ %d Hz' % sample_rate, fontsize = fontsize)
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
# Define filters and windows
melW =librosa.filters.mel(sr=sample_rate, n_fft=n_window, n_mels=n_mels, fmin=fmin, fmax=fmax)
ham_win = np.hamming(n_window)
# Compute fft to spectrogram
[f, t, x] = signal.spectral.spectrogram(
x=audio,
window=ham_win,
nperseg=n_window,
noverlap=n_overlap,
detrend=False,
return_onesided=True,
mode='magnitude')
# Apply filters and log transformation
x_filtered = np.dot(x.T, melW.T)
x_logmel = np.log(x_filtered + 1e-8)
x_logmel = x_logmel.astype(np.float32)
# Display frequency power spectrogram
plt.subplot(512)
x_coords = np.linspace(0, audio_sec, x.shape[0])
librosa.display.specshow(
x.T,
sr=sample_rate,
x_axis='time',
y_axis='hz',
x_coords=x_coords
)
plt.xlabel("Time (s)")
plt.title("FFT spectrogram (dB)", fontsize = fontsize)
# optional colorbar plot
plt.colorbar(format='%+02.0f dB')
# Display log-mel freq. power spectrogram
plt.subplot(513)
x_coords = np.linspace(0, audio_sec, x_logmel.shape[0])
librosa.display.specshow(
x_logmel.T,
sr=sample_rate,
x_axis='time',
y_axis='mel',
x_coords=x_coords
)
plt.xlabel("Time (s)")
plt.title("Mel power spectrogram used in DCASE 2017 (dB)", fontsize = fontsize)
# optional colorbar plot
plt.colorbar(format='%+02.0f dB')
# Display embeddings
plt.subplot(514)
x_coords = np.linspace(0, audio_sec, emb.shape[0])
librosa.display.specshow(
emb.T,
sr=sample_rate,
x_axis='time',
y_axis=None,
x_coords=x_coords
)
plt.xlabel("Time (s)")
plt.colorbar()
plt.subplot(515)
plt.scatter(
x=emb[:, 0],
y=emb[:, 1],
)
plt.xlabel("PC_1")
plt.ylabel("PC_2")
# Make the figure layout compact
plt.tight_layout()
plt.show()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Visualise all clips of audio chosen
|
for i, vid in enumerate(video_ids):
print("\nAnalyzing audio from video with title:\n", video_titles[i])
audio_path = os.path.join(short_raw_dir, vid) + '.wav'
# audio is a 1D time series of the sound
# can also use (audio, fs) = soundfile.read(audio_path)
(audio, fs) = librosa.load(
audio_path,
sr = sample_rate,
)
plot_audio(audio, video_vggish_emb[i])
start=int(
timedelta(
hours=0,
minutes=0,
seconds=video_start_time[i]
).total_seconds()
)
YouTubeVideo(vid, start=start, autoplay=0, theme="light", color="red")
print()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Visualise one clip of audio and embed YouTube video for comparison
|
i = 4
vid = video_ids[i]
audio_path = os.path.join(raw_dir, vid) + '.wav'
# audio is a 1D time series of the sound
# can also use (audio, fs) = soundfile.read(audio_path)
(audio, fs) = librosa.load(
audio_path,
sr = sample_rate,
offset = video_start_time[i],
duration = duration
)
plot_audio(audio, video_vggish_emb[i])
start=int(
timedelta(
hours=0,
minutes=0,
seconds=video_start_time[i]
).total_seconds()
)
YouTubeVideo(
vid,
start=start,
end=start+duration,
autoplay=0,
theme="light",
color="red"
)
# Plot emb with scatter
# Check first couple of PCs,
# for both train and test data, to see if the test is lacking variance
|
/usr/local/anaconda3/envs/audioset_tensorflow/lib/python3.6/site-packages/librosa/filters.py:284: UserWarning: Empty filters detected in mel frequency basis. Some channels will produce empty responses. Try increasing your sampling rate (and fmax) or reducing n_mels.
warnings.warn('Empty filters detected in mel frequency basis. '
/usr/local/anaconda3/envs/audioset_tensorflow/lib/python3.6/site-packages/matplotlib/font_manager.py:1328: UserWarning: findfont: Font family ['serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Evaluate trained audio detection model
|
import audio_event_detection_model as AEDM
import utilities
from sklearn import metrics
model = AEDM.CRNN_audio_event_detector()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Evaluating model on audio downloaded
|
(x_user_inp, y_user_inp) = utilities.transform_data(
np.array(video_vggish_emb)
)
predictions = model.predict(
x=x_user_inp
)
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Evaluating model on training data
|
(x_tr, y_tr, vid_tr) = load_data(os.path.join(audioset_data_path, 'bal_train.h5'))
(x_tr, y_tr) = utilities.transform_data(x_tr, y_tr)
pred_tr = model.predict(x=x_tr)
print(pred_tr.max())
print(metrics.accuracy_score(y_tr, (pred_tr > 0.5).astype(np.float32)))
print(metrics.roc_auc_score(y_tr, pred_tr))
print(np.mean(metrics.roc_auc_score(y_tr, pred_tr, average=None)))
stats = utilities.calculate_stats(pred_tr, y_tr)
mAUC = np.mean([stat['auc'] for stat in stats])
max_prob_classes = np.argsort(predictions, axis=-1)[:, ::-1]
max_prob = np.sort(predictions, axis=-1)[:,::-1]
print(mAUC)
print(max_prob.max())
print(max_prob[:,:10])
print(predictions.shape)
print(max_prob_classes[:,:10])
from numpy import genfromtxt
import pandas as pd
class_labels = pd.read_csv('class_labels_indices.csv')
print(class_labels['display_name'][max_prob_classes[5,:10]])
for i, vid in enumerate(video_ids):  # note: enumerate(video_ids[0]) would iterate over the characters of a single ID
    print(video_titles[i])
    print()
    example = pd.DataFrame(class_labels['display_name'][max_prob_classes[i,:10]])
    example.loc[:, 'prob'] = pd.Series(max_prob[i, :10], index=example.index)
    print(example)
    example.plot.bar(x='display_name', y='prob', rot=90)
    plt.show()
    print()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Investigating model predictions on downloaded audio clips
|
i = 0
vid = video_ids[i]
print(video_titles[i])
print()
YouTubeVideo(
vid,
start=start,
end=start+duration,
autoplay=0,
theme="light",
color="red"
)
example = pd.DataFrame(class_labels['display_name'][max_prob_classes[i,:10]])
example.loc[:, 'prob'] = pd.Series(max_prob[i, :10], index=example.index)
print(example)
example.plot.bar(x='display_name', y='prob', rot=90)
plt.show()
print()
#eval_metrics = model.evaluate(x=x_tr, y=y_tr)
#for i, metric_name in enumerate(model.metrics_names):
# print("{}: {:1.4f}".format(metric_name, eval_metrics[i]))
#qtapp = App(model)
from AudioSetClassifier import AudioSetClassifier
import time
ASC = AudioSetClassifier()
sound_clip = os.path.join(short_raw_dir, video_ids[1]) + '.wav'
t0 = time.time()
test_pred = ASC.predict(sound_clip=sound_clip)
t1 = time.time()
print('Time spent on 1 forward pass prediction:', t1-t0)
print(test_pred.shape)
for i, vid in enumerate(video_ids):
print(video_titles[i])
print()
sound_clip = os.path.join(short_raw_dir, vid) + '.wav'
predictions = ASC.predict(sound_clip=sound_clip)
max_prob_classes = np.argsort(predictions, axis=-1)[:, ::-1]
max_prob = np.sort(predictions, axis=-1)[:,::-1]
print(max_prob.shape)
example = pd.DataFrame(class_labels['display_name'][max_prob_classes[0,:10]])
example.loc[:, 'prob'] = pd.Series(max_prob[0, :10], index=example.index)
print(example)
example.plot.bar(x='display_name', y='prob', rot=90)
plt.show()
print()
import sys
app=0 #This is the solution
app = QtGui.QApplication(sys.argv)
MainApp = App(predictor=ASC)
MainApp.show()
sys.exit(app.exec_())
#from PyQt4 import QtGui, QtCore
class SimpleWindow(QtGui.QWidget):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
self.setGeometry(300, 300, 200, 80)
self.setWindowTitle('Hello World')
quit = QtGui.QPushButton('Close', self)
quit.setGeometry(10, 10, 60, 35)
self.connect(quit, QtCore.SIGNAL('clicked()'),
self, QtCore.SLOT('close()'))
if __name__ == '__main__':
app = QtCore.QCoreApplication.instance()
if app is None:
app = QtGui.QApplication([])
sw = SimpleWindow()
sw.show()
try:
from IPython.lib.guisupport import start_event_loop_qt4
start_event_loop_qt4(app)
except ImportError:
app.exec_()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
1. Understand attention
2. Understand filters
3. Understand multi-label, hierarchical, knowledge graphs
4. Understand class imbalance
5. CCA on VGGish vs. ResNet audioset emb. to check if there's a linear connection (see the sketch after this list)
6. Train linear layer to convert VGGish emb. to ResNet-50 emb.

Plot in GUI:
1. Exclude all non-active classes
2. Draw class names on curves going up
3. Remove histogram
4. Make faster
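For items 5 and 6 of the list above, a minimal sketch could look like the following. The embedding matrices are random placeholders standing in for paired VGGish and ResNet-style embeddings of the same clips (the real dimensionalities may differ); CCA checks for a linear connection and an ordinary least-squares map plays the role of the linear conversion layer.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_clips = 500
vggish_emb = rng.normal(size=(n_clips, 128))  # placeholder VGGish embeddings
# Placeholder "ResNet" embeddings with a partly linear dependence on the VGGish ones
resnet_emb = (vggish_emb @ rng.normal(size=(128, 2048))) * 0.1 + rng.normal(size=(n_clips, 2048))

# Canonical correlations of the first few component pairs
cca = CCA(n_components=5).fit(vggish_emb, resnet_emb)
u, v = cca.transform(vggish_emb, resnet_emb)
corrs = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(5)]
print("First canonical correlations:", np.round(corrs, 3))

# Linear map (ordinary least squares) from VGGish to ResNet embeddings
lin_map = LinearRegression().fit(vggish_emb, resnet_emb)
print("R^2 of the linear map on the training set:", round(lin_map.score(vggish_emb, resnet_emb), 3))
```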
|
video_vggish_emb = []
test_wav_path = os.path.join(src_dir, 'data', 'wav_file')
wav_files = os.listdir(test_wav_path)
example_names = []
# Restore VGGish model trained on YouTube8M dataset
# Retrieve PCA-embeddings of bottleneck features
with tf.Graph().as_default(), tf.Session() as sess:
# Define the model in inference mode, load the checkpoint, and
# locate input and output tensors.
vggish_slim.define_vggish_slim(training=False)
vggish_slim.load_vggish_slim_checkpoint(sess, model_checkpoint)
features_tensor = sess.graph.get_tensor_by_name(
vggish_params.INPUT_TENSOR_NAME)
embedding_tensor = sess.graph.get_tensor_by_name(
vggish_params.OUTPUT_TENSOR_NAME)
# Prepare a postprocessor to munge the model embeddings.
pproc = vggish_postprocess.Postprocessor(pca_params)
for i, vid in enumerate(wav_files):
audio_path = os.path.join(test_wav_path, vid)
print(vid)
examples_batch = vggish_input.wavfile_to_examples(audio_path)
print(examples_batch.shape)
# Run inference and postprocessing.
[embedding_batch] = sess.run([embedding_tensor],
feed_dict={features_tensor: examples_batch})
print(embedding_batch.shape)
postprocessed_batch = pproc.postprocess(embedding_batch)
batch_shape = postprocessed_batch.shape
print(batch_shape)
if batch_shape[0] > 10:
postprocessed_batch = postprocessed_batch[:10]
elif batch_shape[0] < 10:
zero_pad = np.zeros((10, 128))
zero_pad[:batch_shape[0]] = postprocessed_batch
postprocessed_batch = zero_pad
print(postprocessed_batch.shape)
if postprocessed_batch.shape[0] == 10:
video_vggish_emb.extend([postprocessed_batch])
example_names.extend([vid])
print(len(video_vggish_emb))
import audio_event_detection_model as AEDM
import utilities
model = AEDM.CRNN_audio_event_detector()
(x_user_inp, y_user_inp) = utilities.transform_data(
np.array(video_vggish_emb)
)
predictions_AEDM = model.predict(
x=x_user_inp
)
predictions_ASC = np.zeros([len(wav_files), 527])
for i, vid in enumerate(wav_files):
audio_path = os.path.join(test_wav_path, vid)
predictions_ASC[i] = ASC.predict(sound_clip=audio_path)
qkong_res = '''2018Q1Q10Q17Q12Q59Q512440Q-5889Q.fea_lab ['Speech'] [0.8013877]
12_4_train ambience.fea_lab ['Vehicle', 'Rail transport', 'Train', 'Railroad car, train wagon'] [0.38702238, 0.6618184, 0.7742054, 0.5886036]
19_3_forest winter.fea_lab ['Animal'] [0.16109303]
2018Q1Q10Q17Q58Q49Q512348Q-5732Q.fea_lab ['Speech'] [0.78335935]
15_1_whistle.fea_lab ['Whistling'] [0.34013063] ['music']
2018Q1Q10Q13Q52Q8Q512440Q-5889Q.fea_lab ['Speech'] [0.7389336]
09_2_my guitar.fea_lab ['Music', 'Musical instrument', 'Plucked string instrument', 'Guitar'] [0.84308875, 0.48860216, 0.43791085, 0.47915566]
2018Q1Q10Q13Q29Q46Q512440Q-5889Q.fea_lab ['Vehicle'] [0.18344605]
05_2_DFA.fea_lab ['Music', 'Musical instrument', 'Plucked string instrument', 'Guitar'] [0.93665695, 0.57123834, 0.53891456, 0.63112855]
'''
q_kong_res = {
'2018Q1Q10Q17Q12Q59Q512440Q-5889Q.wav': (['Speech'], [0.8013877]),
'12_4_train ambience.wav': (['Vehicle', 'Rail transport', 'Train', 'Railroad car, train wagon'], [0.38702238, 0.6618184, 0.7742054, 0.5886036]),
'19_3_forest winter.wav': (['Animal'], [0.16109303]),
'2018Q1Q10Q17Q58Q49Q512348Q-5732Q.wav': (['Speech'], [0.78335935]),
'15_1_whistle.wav': (['Whistling'], [0.34013063], ['music']),
'2018Q1Q10Q13Q52Q8Q512440Q-5889Q.wav': (['Speech'], [0.7389336]),
'09_2_my guitar.wav': (['Music', 'Musical instrument', 'Plucked string instrument', 'Guitar'], [0.84308875, 0.48860216, 0.43791085, 0.47915566]),
'2018Q1Q10Q13Q29Q46Q512440Q-5889Q.wav': (['Vehicle'], [0.18344605]),
'05_2_DFA.wav': (['Music', 'Musical instrument', 'Plucked string instrument', 'Guitar'], [0.93665695, 0.57123834, 0.53891456, 0.63112855])
}
#test_examples_res = qkong_res.split('\n')
#print(test_examples_res)
#rint()
#split_fun = lambda x: x.split(' [')
#test_examples_res = list(map(split_fun, test_examples_res))#
# print(test_examples_res)
max_prob_classes_AEDM = np.argsort(predictions_AEDM, axis=-1)[:, ::-1]
max_prob_AEDM = np.sort(predictions_AEDM, axis=-1)[:,::-1]
max_prob_classes_ASC = np.argsort(predictions_ASC, axis=-1)[:, ::-1]
max_prob_ASC = np.sort(predictions_ASC, axis=-1)[:,::-1]
for i in range(len(wav_files)):
print(wav_files[i])
print(max_prob_classes_AEDM[i,:10])
print(max_prob_AEDM[i,:10])
print()
print(max_prob_classes_ASC[i,:10])
print(max_prob_ASC[i,:10])
print()
print()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
2018Q1Q10Q17Q12Q59Q512440Q-5889Q.wav
2018Q1Q10Q13Q52Q8Q512440Q-5889Q.wav
2018Q1Q10Q13Q29Q46Q512440Q-5889Q.wav
2018Q1Q10Q17Q58Q49Q512348Q-5732Q.wav
|
for i, vid in enumerate(example_names):
print(vid)
print()
example = pd.DataFrame(class_labels['display_name'][max_prob_classes_AEDM[i,:10]])
example.loc[:, 'top_10_AEDM_pred'] = pd.Series(max_prob_AEDM[i, :10], index=example.index)
example.loc[:, 'index_ASC'] = pd.Series(max_prob_classes_ASC[i,:10], index=example.index)
example.loc[:, 'display_name_ASC'] = pd.Series(
class_labels['display_name'][max_prob_classes_ASC[i,:10]],
index=example.index_ASC
)
example.loc[:, 'top_10_ASC_pred'] = pd.Series(max_prob_ASC[i, :10], index=example.index)
print(example)
example.plot.bar(x='display_name', y=['top_10_AEDM_pred', 'top_10_ASC_pred'] , rot=90)
plt.show()
print()
ex_lab = q_kong_res[vid][0]
ex_pred = q_kong_res[vid][1]
example = pd.DataFrame(class_labels[class_labels['display_name'].isin(ex_lab)])
example.loc[:, 'AEDM_pred'] = pd.Series(
predictions_AEDM[i, example.index.tolist()],
index=example.index
)
example.loc[:, 'ASC_pred'] = pd.Series(
predictions_ASC[i, example.index.tolist()],
index=example.index
)
example.loc[:, 'qkong_pred'] = pd.Series(
ex_pred,
index=example.index
)
print(example)
print()
example.plot.bar(x='display_name', y=['AEDM_pred', 'ASC_pred', 'qkong_pred'], rot=90)
plt.show()
|
_____no_output_____
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Audio set data collection pipeline
Download, cut and convert the audio of the listed URLs
|
colnames = '# YTID, start_seconds, end_seconds, positive_labels'.split(', ')
print(colnames)
bal_train_csv = pd.read_csv('balanced_train_segments.csv', sep=', ', header=2) #usecols=colnames)
bal_train_csv.rename(columns={colnames[0]: colnames[0][-4:]}, inplace=True)
print(bal_train_csv.columns.values)
print(bal_train_csv.loc[:10, colnames[3]])
print(bal_train_csv.YTID.tolist()[:10])
bal_train_csv['pos_lab_list'] = bal_train_csv.positive_labels.apply(lambda x: x[1:-1].split(','))
colnames.append('pos_lab_list')
print('Pos_lab_list')
print(bal_train_csv.loc[:10, 'pos_lab_list'])
sample_rate = 16000
audioset_short_raw_dir = os.path.join(src_dir, 'data', 'audioset_short_raw')
if not os.path.exists(audioset_short_raw_dir):
os.makedirs(audioset_short_raw_dir)
audioset_raw_dir = os.path.join(src_dir, 'data', 'audioset_raw')
if not os.path.exists(audioset_raw_dir):
os.makedirs(audioset_raw_dir)
audioset_embed_path = os.path.join(src_dir, 'data', 'audioset_embed')
if not os.path.exists(audioset_embed_path):
os.makedirs(audioset_embed_path)
audioset_video_titles = []
audioset_video_ids = bal_train_csv.YTID.tolist()
audioset_video_ids_bin = bal_train_csv.YTID.astype('|S11').tolist()
video_start_time = bal_train_csv.start_seconds.tolist()
video_end_time = bal_train_csv.end_seconds.tolist()
# Provide class dictionary for conversion from mid to either index [0] or display_name [1]
class_dict = class_labels.set_index('mid').T.to_dict('list')
print(class_dict['/m/09x0r'])
print(
list(
map(
lambda x: class_dict[x][0],
bal_train_csv.loc[0, 'pos_lab_list']
)
)
)
bal_train_csv['pos_lab_ind_list'] = bal_train_csv.pos_lab_list.apply(
lambda x: [class_dict[y][0] for y in x]
)
class_vec = np.zeros([1, 527])
class_vec[:, bal_train_csv.loc[0, 'pos_lab_ind_list']] = 1
print(class_vec)
print(bal_train_csv.dtypes)
#print(bal_train_csv.loc[:10, colnames[4]])
video_ids_incl = []
video_ids_incl_bin = []
video_ids_excl = []
vggish_embeds = []
labels = []
print(video_ids_incl)
video_ids_incl = video_ids_incl[:-1]
print(video_ids_incl)
video_ids_checked = video_ids_incl + video_ids_excl
video_ids = [vid for vid in audioset_video_ids if vid not in video_ids_checked]
for i, vid in enumerate(video_ids):
print('{}.'.format(i))
# Download and store video under data/audioset_short_raw/
if (vid + '.wav') not in os.listdir(audioset_short_raw_dir):
video_title = dl_yt.download_youtube_wav(
video_id=vid,
raw_dir=None,
short_raw_dir=audioset_short_raw_dir,
start_sec=video_start_time[i],
duration=video_end_time[i]-video_start_time[i],
sample_rate=sample_rate
)
audioset_video_titles += [video_title]
wav_available = video_title is not None
else:
print(vid, 'already downloaded, so we skip this download.')
wav_available = True
if wav_available:
video_ids_incl += [vid]
video_ids_incl_bin += [audioset_video_ids_bin[i]]
vggish_embeds.extend(
ASC.embed(
os.path.join(
audioset_short_raw_dir,
vid
) + '.wav'
)
)
class_vec = np.zeros([1, 527])
class_vec[:, bal_train_csv.loc[i, 'pos_lab_ind_list']] = 1
labels.extend(class_vec)
else:
video_ids_excl += [vid]
print()
jobs = []
for i, vid in enumerate(video_ids):
# Download and store video under data/audioset_short_raw/
if (vid + '.wav') not in os.listdir(audioset_short_raw_dir):
args = (
vid,
None,
audioset_short_raw_dir,
video_start_time[i],
video_end_time[i]-video_start_time[i],
sample_rate
)
process = multiprocessing.Process(
target=dl_yt.download_youtube_wav,
args=args
)
jobs.append(process)
# Start the download processes
for j in jobs:
j.start()
# Ensure all of the processes have finished
for j in jobs:
j.join()
save_data(
hdf5_path=os.path.join(audioset_embed_path, 'bal_train.h5'),
x=np.array(vggish_embeds),
video_id_list=np.array(video_ids_incl_bin),
y=np.array(labels)
)
x, y, vid_list = load_data(os.path.join(audioset_embed_path, 'bal_train.h5'))
print(vid_list)
x_train, y_train, video_id_train = load_data(os.path.join(audioset_embed_path, 'bal_train.h5'))
print(video_id_train)
x_train, y_train, video_id_train = load_data(
os.path.join(
'data',
'audioset',
'packed_features',
'bal_train.h5'
)
)
print(video_id_train[:100])
from retrieve_audioset import retrieve_embeddings
retrieve_embeddings(
data_path=os.path.join('data', 'audioset')
)
|
Loading VGGish base model:
|
Apache-2.0
|
notebooks/experiments/Sound Demo 3 - Multi-label classifier pretrained on audioset.ipynb
|
fronovics/AI_playground
|
Gradient Descent Algorithm Implementation
* Tutorial: https://towardsai.net/p/data-science/gradient-descent-algorithm-for-machine-learning-python-tutorial-ml-9ded189ec556
* Github: https://github.com/towardsai/tutorials/tree/master/gradient_descent_tutorial
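
For reference, the cost and update rule implemented by the `calculate_RSS` and `gradientDescent` functions below can be written (in my own notation, not the tutorial's) for the linear model $h_\theta(x) = \theta_0 + \theta_1 x$ as:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2, \qquad \theta_j \leftarrow \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$$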
|
#Download the dataset
!wget https://raw.githubusercontent.com/towardsai/tutorials/master/gradient_descent_tutorial/data.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
column_names = ['Population', 'Profit']
df = pd.read_csv('data.txt', header=None, names=column_names)
df.head()
df.insert(0, 'Theta0', 1)
cols = df.shape[1]
X = df.iloc[:,0:cols-1]
Y = df.iloc[:,cols-1:cols]
theta = np.matrix(np.array([0]*X.shape[1]))
X = np.matrix(X.values)
Y = np.matrix(Y.values)
def calculate_RSS(X, y, theta):
    # Residual sum of squares cost: J(theta) = sum((X * theta.T - y)^2) / (2m)
    inner = np.power(((X * theta.T) - y), 2)
    return np.sum(inner) / (2 * len(X))

def gradientDescent(X, Y, theta, alpha, iters):
    # Batch gradient descent: update all parameters each iteration and track the cost
    t = np.matrix(np.zeros(theta.shape))
    parameters = int(theta.ravel().shape[1])
    cost = np.zeros(iters)
    for i in range(iters):
        error = (X * theta.T) - Y
        for j in range(parameters):
            # Gradient of the cost with respect to theta_j
            term = np.multiply(error, X[:,j])
            t[0,j] = theta[0,j] - ((alpha / len(X)) * np.sum(term))
        theta = t
        cost[i] = calculate_RSS(X, Y, theta)
    return theta, cost
df.plot(kind='scatter', x='Population', y='Profit', figsize=(12,8))
|
_____no_output_____
|
MIT
|
gradient_descent_tutorial/gradient_descent_tutorial.ipynb
|
fimoziq/tutorials
|
**Error before applying Gradient Descent**
|
error = calculate_RSS(X, Y, theta)
error
|
_____no_output_____
|
MIT
|
gradient_descent_tutorial/gradient_descent_tutorial.ipynb
|
fimoziq/tutorials
|
**Apply Gradient Descent**
|
g, cost = gradientDescent(X, Y, theta, 0.01, 1000)
g
|
_____no_output_____
|
MIT
|
gradient_descent_tutorial/gradient_descent_tutorial.ipynb
|
fimoziq/tutorials
|
**Error after Applying Gradient Descent**
|
error = calculate_RSS(X, Y, g)
error
x = np.linspace(df.Population.min(), df.Population.max(), 100)
f = g[0, 0] + (g[0, 1] * x)
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(x, f, 'r', label='Prediction')
ax.scatter(df.Population, df.Profit, label='Traning Data')
ax.legend(loc=2)
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size')
|
_____no_output_____
|
MIT
|
gradient_descent_tutorial/gradient_descent_tutorial.ipynb
|
fimoziq/tutorials
|
If you wish to set which cores to use:

    affinity_mask = {4, 5, 7}
    affinity_mask = {6, 7, 9}
    affinity_mask = {0, 1, 3}
    affinity_mask = {2, 3, 5}
    affinity_mask = {0, 2, 4, 6}
    pid = 0
    os.sched_setaffinity(pid, affinity_mask)
    print("CPU affinity mask is modified to %s for process id 0" % affinity_mask)

DEFAULT 'CarRacing-v3' environment values

Continuous action = (steering_angle, throttle, brake)

    ACT = [[0, 0, 0], [-0.4, 0, 0], [0.4, 0, 0], [0, 0.6, 0], [0, 0, 0.8]]

Discrete actions: center steering and no gas/brake, steer left, steer right, accel, brake --> actually a good choice, because car_dynamics softens the action's diff for gas and steering.

REWARDS
* reward given each step: step taken, distance to centerline, normalized speed [0-1], normalized steer angle [0-1]
* reward given on new tile touched: %proportional of advance, %advance/steps_taken
* reward given at episode end: all tiles touched (track finished), patience or off-road exceeded, out of bounds, max_steps exceeded
* reward for obstacles: obstacle hit (each step), obstacle collided (episode end)

    GYM_REWARD  = [ -0.1, 0.0, 0.0, 0.0, 10.0, 0.0,   0,  -0, -100,  -0, -0,   -0 ]
    STD_REWARD  = [ -0.1, 0.0, 0.0, 0.0,  1.0, 0.0, 100, -20, -100, -50, -0,   -0 ]
    CONT_REWARD = [-0.11, 0.1, 0.0, 0.0,  1.0, 0.0, 100, -20, -100, -50, -5, -100 ]

See docu for RETURN computation details.

DEFAULT Environment Parameters (not related to RL Algorithm!)
* game_color = 1            State (frame) color option: 0 = RGB, 1 = Grayscale, 2 = Green only
* indicators = True         show or not bottom Info Panel
* frames_per_state = 4      stacked (rolling history) Frames on each state [1-inf], latest observation always on first Frame
* skip_frames = 3           number of consecutive Frames to skip between history saves [0-4]
* discre = ACT              Action discretization function, format [[steer0, throttle0, brake0], [steer1, ...], ...]. None for continuous
* use_track = 1             number of times to use the same Track, [1-100]. More than 20 high risk of overfitting!!
* episodes_per_track = 1    number of evenly distributed starting points on each track [1-20]. Every time you call reset(), the env automatically starts at the next point
* tr_complexity = 12        generated Track geometric Complexity, [6-20]
* tr_width = 45             relative Track Width, [30-50]
* patience = 2.0            max time in secs without Progress, [0.5-20]
* off_track = 1.0           max time in secs Driving on Grass, [0.0-5]
* f_reward = CONT_REWARD    Reward Function coefficients, refer to Docu for details
* num_obstacles = 5         Obstacle objects placed on track [0-10]
* end_on_contact = False    Stop Episode on contact with obstacle, not recommended for starting-phase of training
* obst_location = 0         array pre-setting obstacle Location, in %track. Negative value means track's left-hand side. 0 for random location
* oily_patch = False        use all obstacles as Low-friction road (oily patch)
* verbose = 2
|
## Choose one agent, see Docu for description
#agent='CarRacing-v0'
#agent='CarRacing-v1'
agent='CarRacing-v3'
# Stop training when the model reaches the reward threshold
callback_on_best = StopTrainingOnRewardThreshold(reward_threshold = 170, verbose=1)
seed = 2000
## SIMULATION param
## Changing these makes world models incompatible!!
game_color = 2
indicators = True
fpst = 4
skip = 3
actions = [[0, 0, 0], [-0.4, 0, 0], [0.4, 0, 0], [0, 0.6, 0], [0, 0, 0.8]] #this is ACT
obst_loc = [6, -12, 25, -50, 75, -37, 62, -87, 95, -29] #track percentage, negative for obstacle to the left-hand side
## Loading drive_pretained model
import pickle
root = 'ppo_cnn_gym-mod_'
file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}'.format(game_color,fpst,skip,indicators,len(actions))
model = PPO2.load(file)
## This model param
use = 6 # number of times to use same track [1,100]
ept = 10 # different starting points on same track [1,20]
patience = 1.0
track_complexity = 12
#REWARD2 = [-0.05, 0.1, 0.0, 0.0, 2.0, 0.0, 100, -20, -100, -50, -5, -100]
if agent=='CarRacing-v3':
env = gym.make(agent, seed=seed,
game_color=game_color,
indicators=indicators,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions, #passing custom actions
use_track = use,
episodes_per_track = ept,
tr_complexity = track_complexity,
tr_width = 45,
patience = patience,
off_track = patience,
end_on_contact = True, #learning to avoid obstacles the-hard-way
oily_patch = False,
num_obstacles = 5, #some obstacles
obst_location = obst_loc, #passing fixed obstacle location
# f_reward = REWARD2, #passing a custom reward function
verbose = 2 )
else:
env = gym.make(agent)
env = DummyVecEnv([lambda: env])
## Training on obstacles
model.set_env(env)
batch_size = 256
updates = 700
model.learn(total_timesteps = updates*batch_size, log_interval=1) #, callback=eval_callback)
#Save last updated model
file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}__u{:d}_e{:d}_p{}_bs{:d}'.format(
game_color,fpst,skip,indicators,len(actions),use,ept,patience,batch_size)
model.save(file, cloudpickle=True)
param_list=model.get_parameter_list()
env.close()
## This model param #2
use = 6 # number of times to use same track [1,100]
ept = 10 # different starting points on same track [1,20]
patience = 1.0
track_complexity = 12
#REWARD2 = [-0.05, 0.1, 0.0, 0.0, 2.0, 0.0, 100, -20, -100, -50, -5, -100]
seed = 25000
if agent=='CarRacing-v3':
env2 = gym.make(agent, seed=seed,
game_color=game_color,
indicators=indicators,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions, #passing custom actions
use_track = use,
episodes_per_track = ept,
tr_complexity = track_complexity,
tr_width = 45,
patience = patience,
off_track = patience,
end_on_contact = False, # CHANGED
oily_patch = False,
num_obstacles = 5, #some obstacles
obst_location = 0, #using random obstacle location
# f_reward = REWARD2, #passing a custom reward function
verbose = 3 )
else:
env2 = gym.make(agent)
env2 = DummyVecEnv([lambda: env2])
## Training on obstacles
model.set_env(env2)
#batch_size = 384
updates = 1500
## Separate evaluation env
test_freq = 100 #policy updates until evaluation
test_episodes_per_track = 5 #number of starting points on test_track
eval_log = './evals/'
env_test = gym.make(agent, seed=int(3.14*seed),
game_color=game_color,
indicators=indicators,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions, #passing custom actions
use_track = 1, #change test track after 1 ept round
episodes_per_track = test_episodes_per_track,
tr_complexity = 12, #test on a medium complexity track
tr_width = 45,
patience = 2.0,
off_track = 2.0,
end_on_contact = False,
oily_patch = False,
num_obstacles = 5,
obst_location = obst_loc) #passing fixed obstacle location
env_test = DummyVecEnv([lambda: env_test])
eval_callback = EvalCallback(env_test, callback_on_new_best=callback_on_best, #None,
n_eval_episodes=test_episodes_per_track*3, eval_freq=test_freq*batch_size,
best_model_save_path=eval_log, log_path=eval_log, deterministic=True,
render = False)
model.learn(total_timesteps = updates*batch_size, log_interval=1, callback=eval_callback)
#Save last updated model
#file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}__u{:d}_e{:d}_p{}_bs{:d}'.format(
# game_color,fpst,skip,indicators,len(actions),use,ept,patience,batch_size)
model.save(file+'_II', cloudpickle=True)
param_list=model.get_parameter_list()
env2.close()
env_test.close()
## Enjoy last trained policy
if agent=='CarRacing-v3': #create an independent test environment, almost everything in std/random definition
env3 = gym.make(agent, seed=None,
game_color=game_color,
indicators = True,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions,
use_track = 2,
episodes_per_track = 1,
patience = 5.0,
off_track = 3.0 )
else:
env3 = gym.make(agent)
env3 = DummyVecEnv([lambda: env3])
obs = env3.reset()
print(obs.shape)
done = False
pasos = 0
_states=None
while not done: # and pasos<1500:
action, _states = model.predict(obs, deterministic=True)
obs, reward, done, info = env3.step(action)
env3.render()
pasos+=1
env3.close()
print()
print(reward, done, pasos) #, info)
## Enjoy best eval_policy
obs = env3.reset()
print(obs.shape)
## Load bestmodel from eval
#if not isinstance(model_test, PPO2):
model_test = PPO2.load(eval_log+'best_model', env3)
done = False
pasos = 0
_states=None
while not done: # and pasos<1500:
action, _states = model_test.predict(obs, deterministic=True)
obs, reward, done, info = env3.step(action)
env3.render()
pasos+=1
env3.close()
print()
print(reward, done, pasos)
print(action, _states)
model_test.save(file+'_evalbest', cloudpickle=True)
env2.close()
env3.close()
env_test.close()
print(action, _states)
obs.shape
|
_____no_output_____
|
MIT
|
examples/Train_ppo_cnn+eval_contact-(pretrained).ipynb
|
pleslabay/CarRacing-mod
|
Connect to Chicago Data Portal API - Business Licenses Data
|
#Import dependencies
import pandas as pd
import requests
import json
# Chicago Data Portal API app token
from config2 import API_chi_key
# Build API URL
# API calls = 8000 (based on zipcode and issued search results)
# Filters: 'application type' Issued
target_URL = f"https://data.cityofchicago.org/resource/xqx5-8hwx.json?$$app_token={API_chi_key}&$limit=8000&application_type=ISSUE&zip_code="
# Create list of zipcodes we are examining based
# on three different businesses of interest
zipcodes = ["60610","60607","60606","60661",
"60614","60622","60647","60654"]
# Create a request to get json data on business licences
responses = []
for zipcode in zipcodes:
license_response = requests.get(target_URL + zipcode).json()
responses.append(license_response)
len(responses)
# Create separate variables for the 8 responses for zipcodes
# Data loaded in nested groups based on zipcodes, so
# needed to make them separate
zip_60610 = responses[0]
zip_60607 = responses[1]
zip_60606 = responses[2]
zip_60661 = responses[3]
zip_60614 = responses[4]
zip_60622 = responses[5]
zip_60647 = responses[6]
zip_60654 = responses[7]
# Read zipcode_responses_busi.json files into pd DF
zip_60610_data = pd.DataFrame(zip_60610)
# Create list of the json object variables
# excluding zip_60610 bc that will start as a DF
zip_data = [zip_60607, zip_60606, zip_60661, zip_60614,
zip_60622, zip_60647, zip_60654]
# Create a new DF to save compiled business data into
all_7_zipcodes = zip_60610_data
# Append json objects to all_7_zipcode DF
# Print length of all_7_zipcode to check adding correctly
for zipcodes_df in zip_data:
all_7_zipcodes = all_7_zipcodes.append(zipcodes_df)
len(all_7_zipcodes)
# Get list of headers of all_7_zipcodes
list(all_7_zipcodes)
# Select certain columns to show
core_info_busi_licences = all_7_zipcodes[['legal_name', 'doing_business_as_name',
'zip_code', 'license_description',
'business_activity', 'application_type',
'license_start_date', 'latitude', 'longitude']]
# Get an idea of the number of null values in each column
core_info_busi_licences.isna().sum()
# Add separate column for just the start year
# Will use later when selecting year businesses were created
core_info_busi_licences['start_year'] = core_info_busi_licences['license_start_date']
# Edit 'start_year' to just include year from date information
core_info_busi_licences['start_year'] = core_info_busi_licences['start_year'].str[0:4]
# Explore what kinds of businesses are missing "latitude" and "longitude"
# Also, the 'business_activity' licenses have null values (limited Business Licences?)
core_info_busi_licences[core_info_busi_licences.isnull().any(axis=1)]
# Get rid of NaN values in 'latitude' and 'license_start_date'
core_info_busi_licences.dropna(subset=['latitude'], inplace=True)
core_info_busi_licences.dropna(subset=['license_start_date'], inplace=True)
core_info_busi_licences['application_type'].unique()
# Cast 'start_year' column as an integer
core_info_busi_licences['start_year'] = core_info_busi_licences['start_year'].astype('int64')
# Confirm that NaN values for 'latitude' and 'license_start_date'
# were dropped
core_info_busi_licences.isna().sum()
# Record number of businesses licenses pulled
len(core_info_busi_licences)
|
_____no_output_____
|
CNRI-Python
|
API_Chi_Busi_Licences.ipynb
|
oimartin/Real_Tech_Influence
|
Connect to MySQL database
|
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy import create_engine
from config2 import mysql_password
# Declare a Base using `automap_base()`
Base = automap_base()
# Create engine connected to the local MySQL `real_tech_db` database
# (sqlite alternative kept here, commented out)
# engine = create_engine("sqlite://", echo=False)
engine = create_engine(f'mysql://root:{mysql_password}@localhost:3306/real_tech_db')
# Copy 'core_info_busi_licenses' db to MySql database
core_info_busi_licences.to_sql('business_licenses',
con=engine,
if_exists='replace',
index_label=True)
|
_____no_output_____
|
CNRI-Python
|
API_Chi_Busi_Licences.ipynb
|
oimartin/Real_Tech_Influence
|
Tutorial 1: Bayes with a binary hidden state

**Week 3, Day 1: Bayesian Decisions**

**By Neuromatch Academy**

__Content creators:__ [insert your name here]

__Content reviewers:__

Tutorial Objectives

This is the first in a series of two core tutorials on Bayesian statistics. In these tutorials, we will explore the fundamental concepts of the Bayesian approach from two perspectives. This tutorial works through an example of Bayesian inference and decision making using a binary hidden state. The second main tutorial extends these concepts to a continuous hidden state. Over the next days, each of these basic ideas will be extended -- first through time, as we consider what happens when we infer a hidden state using multiple observations and when the hidden state changes across time. On the third day, we will introduce how to use inference and decisions to select actions for optimal control. For this tutorial, you will be introduced to our binary state fishing problem!

This notebook will introduce the fundamental building blocks for Bayesian statistics:
1. How do we use probability distributions to represent hidden states?
2. How does marginalization work and how can we use it?
3. How do we combine new information with our prior knowledge?
4. How do we combine the possible loss (or gain) for making a decision with our probabilistic knowledge?
|
#@title Video 1: Introduction to Bayesian Statistics
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='JiEIn9QsrFg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
Setup

Please execute the cells below to initialize the notebook environment.
|
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import patches
from matplotlib import transforms
from matplotlib import gridspec
from scipy.optimize import fsolve
from collections import namedtuple
#@title Figure Settings
import ipywidgets as widgets # interactive display
from ipywidgets import GridspecLayout
from IPython.display import clear_output
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
import warnings
warnings.filterwarnings("ignore")
# @title Plotting Functions
def plot_joint_probs(P, ):
assert np.all(P >= 0), "probabilities should be >= 0"
# normalize if not
P = P / np.sum(P)
marginal_y = np.sum(P,axis=1)
marginal_x = np.sum(P,axis=0)
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
spacing = 0.005
# start with a square Figure
fig = plt.figure(figsize=(5, 5))
joint_prob = [left, bottom, width, height]
rect_histx = [left, bottom + height + spacing, width, 0.2]
rect_histy = [left + width + spacing, bottom, 0.2, height]
rect_x_cmap = plt.cm.Blues
rect_y_cmap = plt.cm.Reds
# Show joint probs and marginals
ax = fig.add_axes(joint_prob)
ax_x = fig.add_axes(rect_histx, sharex=ax)
ax_y = fig.add_axes(rect_histy, sharey=ax)
# Show joint probs and marginals
ax.matshow(P,vmin=0., vmax=1., cmap='Greys')
ax_x.bar(0, marginal_x[0], facecolor=rect_x_cmap(marginal_x[0]))
ax_x.bar(1, marginal_x[1], facecolor=rect_x_cmap(marginal_x[1]))
ax_y.barh(0, marginal_y[0], facecolor=rect_y_cmap(marginal_y[0]))
ax_y.barh(1, marginal_y[1], facecolor=rect_y_cmap(marginal_y[1]))
# set limits
ax_x.set_ylim([0,1])
ax_y.set_xlim([0,1])
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{P[i,j]:.2f}"
ax.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = marginal_x[i]
c = f"{v:.2f}"
ax_x.text(i, v +0.1, c, va='center', ha='center', color='black')
v = marginal_y[i]
c = f"{v:.2f}"
ax_y.text(v+0.2, i, c, va='center', ha='center', color='black')
# set up labels
ax.xaxis.tick_bottom()
ax.yaxis.tick_left()
ax.set_xticks([0,1])
ax.set_yticks([0,1])
ax.set_xticklabels(['Silver','Gold'])
ax.set_yticklabels(['Small', 'Large'])
ax.set_xlabel('color')
ax.set_ylabel('size')
ax_x.axis('off')
ax_y.axis('off')
return fig
# test
# P = np.random.rand(2,2)
# P = np.asarray([[0.9, 0.8], [0.4, 0.1]])
# P = P / np.sum(P)
# fig = plot_joint_probs(P)
# plt.show(fig)
# plt.close(fig)
# fig = plot_prior_likelihood(0.5, 0.3)
# plt.show(fig)
# plt.close(fig)
def plot_prior_likelihood_posterior(prior, likelihood, posterior):
# definitions for the axes
left, width = 0.05, 0.3
bottom, height = 0.05, 0.9
padding = 0.1
small_width = 0.1
left_space = left + small_width + padding
added_space = padding + width
fig = plt.figure(figsize=(10, 4))
rect_prior = [left, bottom, small_width, height]
rect_likelihood = [left_space , bottom , width, height]
rect_posterior = [left_space + added_space, bottom , width, height]
ax_prior = fig.add_axes(rect_prior)
ax_likelihood = fig.add_axes(rect_likelihood, sharey=ax_prior)
ax_posterior = fig.add_axes(rect_posterior, sharey = ax_prior)
rect_colormap = plt.cm.Blues
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = rect_colormap(prior[0, 0]))
ax_prior.barh(1, prior[1], facecolor = rect_colormap(prior[1, 0]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
ax_posterior.matshow(posterior, vmin=0., vmax=1., cmap='Greens')
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m (right) | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Posterior p(s | m)')
ax_posterior.xaxis.set_ticks_position('bottom')
ax_posterior.spines['left'].set_visible(False)
ax_posterior.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{posterior[i,j]:.2f}"
ax_posterior.text(j,i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i, 0]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
def plot_prior_likelihood(ps, p_a_s1, p_a_s0, measurement):
likelihood = np.asarray([[p_a_s1, 1-p_a_s1],[p_a_s0, 1-p_a_s0]])
assert 0.0 <= ps <= 1.0
prior = np.asarray([ps, 1 - ps])
if measurement:
posterior = likelihood[:, 0] * prior
else:
posterior = (likelihood[:, 1] * prior).reshape(-1)
posterior /= np.sum(posterior)
# definitions for the axes
left, width = 0.05, 0.3
bottom, height = 0.05, 0.9
padding = 0.1
small_width = 0.22
left_space = left + small_width + padding
small_padding = 0.05
fig = plt.figure(figsize=(10, 4))
rect_prior = [left, bottom, small_width, height]
rect_likelihood = [left_space , bottom , width, height]
rect_posterior = [left_space + width + small_padding, bottom , small_width, height]
ax_prior = fig.add_axes(rect_prior)
ax_likelihood = fig.add_axes(rect_likelihood, sharey=ax_prior)
ax_posterior = fig.add_axes(rect_posterior, sharey=ax_prior)
prior_colormap = plt.cm.Blues
posterior_colormap = plt.cm.Greens
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = prior_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = prior_colormap(prior[1]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
# ax_posterior.matshow(posterior, vmin=0., vmax=1., cmap='')
ax_posterior.barh(0, posterior[0], facecolor = posterior_colormap(posterior[0]))
ax_posterior.barh(1, posterior[1], facecolor = posterior_colormap(posterior[1]))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xlim = [0, 1], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Posterior p(s | m)")
ax_posterior.axis('off')
# ax_posterior.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
# yticks = [0, 1], yticklabels = ['left', 'right'],
# ylabel = 'state (s)', xlabel = 'measurement (m)',
# title = 'Posterior p(s | m)')
# ax_posterior.xaxis.set_ticks_position('bottom')
# ax_posterior.spines['left'].set_visible(False)
# ax_posterior.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{posterior[i,j]:.2f}"
# ax_posterior.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = posterior[i]
c = f"{v:.2f}"
ax_posterior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
return fig
# fig = plot_prior_likelihood(0.5, 0.3)
# plt.show(fig)
# plt.close(fig)
from matplotlib import colors
def plot_utility(ps):
prior = np.asarray([ps, 1 - ps])
utility = np.array([[2, -3], [-2, 1]])
expected = prior @ utility
# definitions for the axes
left, width = 0.05, 0.16
bottom, height = 0.05, 0.9
padding = 0.04
small_width = 0.1
left_space = left + small_width + padding
added_space = padding + width
fig = plt.figure(figsize=(17, 3))
rect_prior = [left, bottom, small_width, height]
rect_utility = [left + added_space , bottom , width, height]
rect_expected = [left + 2* added_space, bottom , width, height]
ax_prior = fig.add_axes(rect_prior)
ax_utility = fig.add_axes(rect_utility, sharey=ax_prior)
ax_expected = fig.add_axes(rect_expected)
rect_colormap = plt.cm.Blues
# Data of plots
ax_prior.barh(0, prior[0], facecolor = rect_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = rect_colormap(prior[1]))
ax_utility.matshow(utility, cmap='cool')
norm = colors.Normalize(vmin=-3, vmax=3)
ax_expected.bar(0, expected[0], facecolor = rect_colormap(norm(expected[0])))
ax_expected.bar(1, expected[1], facecolor = rect_colormap(norm(expected[1])))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Probability of state")
ax_prior.axis('off')
# Utility plot details
ax_utility.set(xticks = [0, 1], xticklabels = ['left', 'right'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'action (a)',
title = 'Utility')
ax_utility.xaxis.set_ticks_position('bottom')
ax_utility.spines['left'].set_visible(False)
ax_utility.spines['bottom'].set_visible(False)
# Expected utility plot details
ax_expected.set(title = 'Expected utility', ylim = [-3, 3],
xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)',
yticks = [])
ax_expected.xaxis.set_ticks_position('bottom')
ax_expected.spines['left'].set_visible(False)
ax_expected.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i,j in zip(x.flatten(), y.flatten()):
c = f"{utility[i,j]:.2f}"
ax_utility.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i in ind:
v = expected[i]
c = f"{v:.2f}"
ax_expected.text(i, 2.5, c, va='center', ha='center', color='black')
return fig
def plot_prior_likelihood_utility(ps, p_a_s1, p_a_s0,measurement):
assert 0.0 <= ps <= 1.0
assert 0.0 <= p_a_s1 <= 1.0
assert 0.0 <= p_a_s0 <= 1.0
prior = np.asarray([ps, 1 - ps])
likelihood = np.asarray([[p_a_s1, 1-p_a_s1],[p_a_s0, 1-p_a_s0]])
utility = np.array([[2.0, -3.0], [-2.0, 1.0]])
# expected = np.zeros_like(utility)
if measurement:
posterior = likelihood[:, 0] * prior
else:
posterior = (likelihood[:, 1] * prior).reshape(-1)
posterior /= np.sum(posterior)
# expected[:, 0] = utility[:, 0] * posterior
# expected[:, 1] = utility[:, 1] * posterior
expected = posterior @ utility
# definitions for the axes
left, width = 0.05, 0.15
bottom, height = 0.05, 0.9
padding = 0.05
small_width = 0.1
large_padding = 0.07
left_space = left + small_width + large_padding
fig = plt.figure(figsize=(17, 4))
rect_prior = [left, bottom+0.05, small_width, height-0.1]
rect_likelihood = [left_space, bottom , width, height]
rect_posterior = [left_space + padding + width - 0.02, bottom+0.05 , small_width, height-0.1]
rect_utility = [left_space + padding + width + padding + small_width, bottom , width, height]
rect_expected = [left_space + padding + width + padding + small_width + padding + width, bottom+0.05 , width, height-0.1]
ax_likelihood = fig.add_axes(rect_likelihood)
ax_prior = fig.add_axes(rect_prior, sharey=ax_likelihood)
ax_posterior = fig.add_axes(rect_posterior, sharey=ax_likelihood)
ax_utility = fig.add_axes(rect_utility, sharey=ax_posterior)
ax_expected = fig.add_axes(rect_expected)
prior_colormap = plt.cm.Blues
posterior_colormap = plt.cm.Greens
expected_colormap = plt.cm.Wistia
# Show posterior probs and marginals
ax_prior.barh(0, prior[0], facecolor = prior_colormap(prior[0]))
ax_prior.barh(1, prior[1], facecolor = prior_colormap(prior[1]))
ax_likelihood.matshow(likelihood, vmin=0., vmax=1., cmap='Reds')
ax_posterior.barh(0, posterior[0], facecolor = posterior_colormap(posterior[0]))
ax_posterior.barh(1, posterior[1], facecolor = posterior_colormap(posterior[1]))
ax_utility.matshow(utility, vmin=0., vmax=1., cmap='cool')
# ax_expected.matshow(expected, vmin=0., vmax=1., cmap='Wistia')
ax_expected.bar(0, expected[0], facecolor = expected_colormap(expected[0]))
ax_expected.bar(1, expected[1], facecolor = expected_colormap(expected[1]))
# Probabilities plot details
ax_prior.set(xlim = [1, 0], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Prior p(s)")
ax_prior.axis('off')
# Likelihood plot details
ax_likelihood.set(xticks = [0, 1], xticklabels = ['fish', 'no fish'],
yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', xlabel = 'measurement (m)',
title = 'Likelihood p(m | s)')
ax_likelihood.xaxis.set_ticks_position('bottom')
ax_likelihood.spines['left'].set_visible(False)
ax_likelihood.spines['bottom'].set_visible(False)
# Posterior plot details
ax_posterior.set(xlim = [0, 1], yticks = [0, 1], yticklabels = ['left', 'right'],
ylabel = 'state (s)', title = "Posterior p(s | m)")
ax_posterior.axis('off')
# Utility plot details
ax_utility.set(xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)',
title = 'Utility')
ax_utility.xaxis.set_ticks_position('bottom')
ax_utility.spines['left'].set_visible(False)
ax_utility.spines['bottom'].set_visible(False)
# Expected Utility plot details
ax_expected.set(ylim = [-2, 2], xticks = [0, 1], xticklabels = ['left', 'right'],
xlabel = 'action (a)', title = 'Expected utility', yticks=[])
# ax_expected.axis('off')
ax_expected.spines['left'].set_visible(False)
# ax_expected.set(xticks = [0, 1], xticklabels = ['left', 'right'],
# xlabel = 'action (a)',
# title = 'Expected utility')
# ax_expected.xaxis.set_ticks_position('bottom')
# ax_expected.spines['left'].set_visible(False)
# ax_expected.spines['bottom'].set_visible(False)
# show values
ind = np.arange(2)
x,y = np.meshgrid(ind,ind)
for i in ind:
v = posterior[i]
c = f"{v:.2f}"
ax_posterior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{likelihood[i,j]:.2f}"
ax_likelihood.text(j,i, c, va='center', ha='center', color='black')
for i,j in zip(x.flatten(), y.flatten()):
c = f"{utility[i,j]:.2f}"
ax_utility.text(j,i, c, va='center', ha='center', color='black')
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{expected[i,j]:.2f}"
# ax_expected.text(j,i, c, va='center', ha='center', color='black')
for i in ind:
v = prior[i]
c = f"{v:.2f}"
ax_prior.text(v+0.2, i, c, va='center', ha='center', color='black')
for i in ind:
v = expected[i]
c = f"{v:.2f}"
ax_expected.text(i, v, c, va='center', ha='center', color='black')
# # show values
# ind = np.arange(2)
# x,y = np.meshgrid(ind,ind)
# for i,j in zip(x.flatten(), y.flatten()):
# c = f"{P[i,j]:.2f}"
# ax.text(j,i, c, va='center', ha='center', color='white')
# for i in ind:
# v = marginal_x[i]
# c = f"{v:.2f}"
# ax_x.text(i, v +0.2, c, va='center', ha='center', color='black')
# v = marginal_y[i]
# c = f"{v:.2f}"
# ax_y.text(v+0.2, i, c, va='center', ha='center', color='black')
return fig
# @title Helper Functions
def compute_marginal(px, py, cor):
# calculate 2x2 joint probabilities given marginals p(x=1), p(y=1) and correlation
p11 = px*py + cor*np.sqrt(px*py*(1-px)*(1-py))
p01 = px - p11
p10 = py - p11
p00 = 1.0 - p11 - p01 - p10
return np.asarray([[p00, p01], [p10, p11]])
# test
# print(compute_marginal(0.4, 0.6, -0.8))
def compute_cor_range(px,py):
# Calculate the allowed range of correlation values given marginals p(x=1) and p(y=1)
def p11(corr):
return px*py + corr*np.sqrt(px*py*(1-px)*(1-py))
def p01(corr):
return px - p11(corr)
def p10(corr):
return py - p11(corr)
def p00(corr):
return 1.0 - p11(corr) - p01(corr) - p10(corr)
Cmax = min(fsolve(p01, 0.0), fsolve(p10, 0.0))
Cmin = max(fsolve(p11, 0.0), fsolve(p00, 0.0))
return Cmin, Cmax
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
--- Section 1: Gone Fishin'
|
#@title Video 2: Gone Fishin'
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='McALsTzb494', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
You were just introduced to the **binary hidden state problem** we are going to explore. You need to decide which side to fish on. We know fish like to school together. On different days the school of fish is either on the left or right side, but we don’t know which is the case today. We will represent our knowledge probabilistically, asking how to make a decision (where to decide the fish are, or where to fish) and what to expect in terms of gains or losses. In the next two sections we will consider just the probability of where the fish might be and what you gain or lose by choosing where to fish.

Remember, you can either think of yourself as a scientist conducting an experiment or as a brain trying to make a decision. The Bayesian approach is the same!

--- Section 2: Deciding where to fish
|
#@title Video 3: Utility
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='xvIVZrqF_5s', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
You know the probability that the school of fish is on the left side of the dock today, $P(s = left)$. You also know the probability that it is on the right, $P(s = right)$, because these two probabilities must add up to 1. You need to decide where to fish. It may seem obvious - you could just fish on the side where the probability of the fish being there is higher! Unfortunately, decisions and actions are always a little more complicated. Deciding where to fish may be influenced by more than just the probability of the school of fish being there, as we saw with the potential issues of submarines and sunburn. We quantify these factors numerically using **utility**, which describes the consequences of your actions: how much value you gain (or, if negative, lose) given the state of the world ($s$) and the action you take ($a$). In our example, our utility can be summarized as:

| Utility: U(s,a) | a = left | a = right |
| ----------------- |----------|----------|
| s = left | 2 | -3 |
| s = right | -2 | 1 |

To use utility to choose an action, we calculate the **expected utility** of that action by weighing these utilities with the probability of that state occurring. This allows us to choose actions by taking probabilities of events into account: we don't care if the outcome of an action-state pair is a loss if the probability of that state is very low. We can formalize this as:

$$\text{Expected utility of action a} = \sum_{s}U(s,a)P(s) $$

In other words, the expected utility of an action $a$ is the sum over possible states of the utility of that action and state times the probability of that state.

Interactive Demo 2: Exploring the decision

Let's start to get a sense of how all this works. Take a look at the interactive demo below. You can change the probability that the school of fish is on the left side ($p(s = left)$) using the slider. You will see the utility matrix and the corresponding expected utility of each action.

First, make sure you understand how the expected utility of each action is being computed from the probabilities and the utility values. In the initial state, the probability of the fish being on the left is 0.9 and on the right is 0.1. The expected utility of the action of fishing on the left is then $U(s = left,a = left)p(s = left) + U(s = right,a = left)p(s = right) = 2(0.9) + (-2)(0.1) = 1.6$.

For each of these scenarios, think and discuss first. Then use the demo to try out each and see if your action would have been correct (that is, if the expected value of that action is the highest).

1. You just arrived at the dock for the first time and have no sense of where the fish might be. So you guess that the probability of the school being on the left side is 0.5 (so the probability on the right side is also 0.5). Which side would you choose to fish on given our utility values?
2. You think that the probability of the school being on the left side is very low (0.1) and correspondingly high on the right side (0.9). Which side would you choose to fish on given our utility values?
3. What would you choose if the probability of the school being on the left side is slightly lower than on the right side (0.4 vs 0.6)?
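
The demo computes these expected utilities as a simple matrix product of the prior over states and the utility matrix. Purely as an illustration (this snippet is not part of the original notebook), the worked example above for $p(s = left) = 0.9$ can be reproduced as:

```python
import numpy as np

# rows: state s in {left, right}; columns: action a in {left, right}
utility = np.array([[2, -3],
                    [-2, 1]])
prior = np.array([0.9, 0.1])        # p(s = left), p(s = right)

expected_utility = prior @ utility  # one expected utility per action
print(expected_utility)             # [ 1.6 -2.6] -> fishing on the left is better here
```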
|
# @markdown Execute this cell to use the widget
ps_widget = widgets.FloatSlider(0.9, description='p(s = left)', min=0.0, max=1.0, step=0.01)
@widgets.interact(
ps = ps_widget,
)
def make_utility_plot(ps):
fig = plot_utility(ps)
plt.show(fig)
plt.close(fig)
return None
# to_remove explanation
# 1) With equal probabilities, the expected utility is higher on the left side,
# since that is the side without submarines, so you would choose to fish there.
# 2) If the probability that the fish is on the right side is high, you would
# choose to fish there. The high probability of fish being on the right far outweighs
# the slightly higher utilities from fishing on the left (as you are unlikely to gain these)
# 3) If the probability that the fish is on the right side is just slightly higher
#. than on the left, you would choose the left side as the expected utility is still
#. higher on the left. Note that in this situation, you are not simply choosing the
#. side with the higher probability - the utility really matters here for the decision
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
In this section, you have seen that both the utility of various state and action pairs and our knowledge of the probability of each state affect your decision. Importantly, we want our knowledge of the probability of each state to be as accurate as possible! So how do we know these probabilities? We may have prior knowledge from years of fishing at the same dock. Over those years, we may have learned that the fish are more likely to be on the left side, for example. We want to make sure this knowledge is as accurate as possible though. To do this, we want to collect more data, or take some more measurements! For the next few sections, we will focus on making our knowledge of the probability as accurate as possible, before coming back to using utility to make decisions.

--- Section 3: Likelihood of the fish being on either side
|
#@title Video 4: Likelihood
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='l4m0JzMWGio', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
First, we'll think about what it means to take a measurement (also often called an observation or just data) and what it tells you about what the hidden state may be. Specifically, we'll be looking at the **likelihood**, which is the probability of your measurement ($m$) given the hidden state ($s$): $P(m | s)$. Remember that in this case, the hidden state is which side of the dock the school of fish is on.

We will watch someone fish (for, let's say, 10 minutes) and our measurement is whether they catch a fish or not. We know something about what catching a fish means for the likelihood of the fish being on one side or the other.

Think! 3: Guessing the location of the fish

Let's say we go to a different dock from the one in the video. Here, there are different probabilities of catching fish given the state of the world. In this case, if they fish on the side of the dock where the fish are, they have a 70% chance of catching a fish. Otherwise, they catch a fish with only 20% probability. The fisherperson is fishing on the left side.

1) Figure out each of the following:
- probability of catching a fish given that the school of fish is on the left side, $P(m = catch\text{ } fish | s = left )$
- probability of not catching a fish given that the school of fish is on the left side, $P(m = no \text{ } fish | s = left)$
- probability of catching a fish given that the school of fish is on the right side, $P(m = catch \text{ } fish | s = right)$
- probability of not catching a fish given that the school of fish is on the right side, $P(m = no \text{ } fish | s = right)$

2) If the fisherperson catches a fish, which side would you guess the school is on? Why?

3) If the fisherperson does not catch a fish, which side would you guess the school is on? Why?
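
Once you have worked through the questions by hand, the short sketch below (an illustration only, not part of the original notebook) encodes this dock's likelihood table and reads off the maximum-likelihood state for each possible measurement:

```python
import numpy as np

states = ['left', 'right']
# rows: state s in {left, right}; columns: measurement m in {catch fish, no fish}
likelihood = np.array([[0.7, 0.3],   # p(m | s = left)
                       [0.2, 0.8]])  # p(m | s = right)

for m, label in enumerate(['catch fish', 'no fish']):
    s_hat = states[np.argmax(likelihood[:, m])]
    print(f"m = {label}: maximum-likelihood state is s = {s_hat}")
```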
|
#to_remove explanation
# 1) The fisherperson is on the left side so:
# - P(m = catch fish | s = left) = 0.7 because they have a 70% chance of catching
# a fish when on the same side as the school
# - P(m = no fish | s = left) = 0.3 because the probability of catching a fish
# and not catching a fish for a given state must add up to 1 as these
# are the only options: 1 - 0.7 = 0.3
# - P(m = catch fish | s = right) = 0.2
# - P(m = no fish | s = right) = 0.8
# 2) If the fisherperson catches a fish, you would guess the school of fish is on the
# left side. This is because the probability of catching a fish given that the
# school is on the left side (0.7) is higher than the probability given that
# the school is on the right side (0.2).
# 3) If the fisherperson does not catch a fish, you would guess the school of fish is on the
# right side. This is because the probability of not catching a fish given that the
# school is on the right side (0.8) is higher than the probability given that
# the school is on the left side (0.3).
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
In the prior exercise, you guessed where the school of fish was based on the measurement you took (watching someone fish). You did this by choosing the state (side of school) that maximized the probability of the measurement. In other words, you estimated the state by maximizing the likelihood, i.e., picking the state with the highest probability of the measurement given the state, $P(m|s)$. This is called maximum likelihood estimation (MLE) and you've encountered it before during this course, in W1D3!

What if you had been going to this river for years and you knew that the fish were almost always on the left side? This would probably affect how you make your estimate - you would rely less on the single new measurement and more on your prior knowledge. This is the idea behind Bayesian inference, as we will see later in this tutorial!

--- Section 4: Correlation and marginalization
|
#@title Video 5: Correlation and marginalization
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='vsDjtWi-BVo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
In this section, we are going to take a step back for a bit and think more generally about the amount of information shared between two random variables. We want to know how much information you gain when you observe one variable (take a measurement) if you know something about another. We will see that the fundamental concept is the same if we think about two attributes, for example the size and color of the fish, or the prior information and the likelihood.

Math Exercise 4: Computing marginal likelihoods

To understand the information between two variables, let's first consider the size and color of the fish.

| P(X, Y) | Y = silver | Y = gold |
| ----------------- |----------|----------|
| X = small | 0.4 | 0.2 |
| X = large | 0.1 | 0.3 |

The table above shows us the **joint probabilities**: the probability of both specific attributes occurring together. For example, the probability of a fish being small and silver, $P(X = small, Y = silver)$, is 0.4.

We want to know the probability of a fish being small regardless of color. Since the fish are either silver or gold, this would be the probability of a fish being small and silver plus the probability of a fish being small and gold. This is an example of marginalizing, or summing out, the variable we are not interested in across the rows or columns. In math speak: $P(X = small) = \sum_y{P(X = small, Y)}$. This gives us a **marginal probability**, a probability of a variable outcome (in this case size), regardless of the other variables (in this case color).

Please complete the following math problems to further practice thinking through probabilities:

1. Calculate the probability of a fish being silver.
2. Calculate the probability of a fish being small, large, silver, or gold.
3. Calculate the probability of a fish being small OR gold. (Hint: $P(A\ \textrm{or}\ B) = P(A) + P(B) - P(A\ \textrm{and}\ B)$)
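
If you would like to check your answers numerically, the short sketch below (an illustration only, not part of the original notebook) performs the same marginalization with numpy:

```python
import numpy as np

# rows: X (size) in {small, large}; columns: Y (color) in {silver, gold}
P = np.array([[0.4, 0.2],
              [0.1, 0.3]])

p_size = P.sum(axis=1)   # marginal over color: [P(small), P(large)]
p_color = P.sum(axis=0)  # marginal over size:  [P(silver), P(gold)]

# P(small or gold) = P(small) + P(gold) - P(small and gold)
p_small_or_gold = p_size[0] + p_color[1] - P[0, 1]
print(p_size, p_color, p_small_or_gold)  # the last value should be 0.9
```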
|
# to_remove explanation
# 1) The probability of a fish being silver is the joint probability of it being
#. small and silver plus the joint probability of it being large and silver:
#
#. P(Y = silver) = P(X = small, Y = silver) + P(X = large, Y = silver)
#. = 0.4 + 0.1
#. = 0.5
# 2) This is all the possibilities as in this scenario, our fish can only be small
#. or large, silver or gold. So the probability is 1 - the fish has to be at
#. least one of these.
#. 3) First we compute the marginal probabilities
#. P(X = small) = P(X = small, Y = silver) + P(X = small, Y = gold) = 0.6
#. P(Y = gold) = P(X = small, Y = gold) + P(X = large, Y = gold) = 0.5
#. We already know the joint probability: P(X = small, Y = gold) = 0.2
#. We can now use the given formula:
#. P( X = small or Y = gold) = P(X = small) + P(Y = gold) - P(X = small, Y = gold)
#. = 0.6 + 0.5 - 0.2
#. = 0.9
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
Think! 4: Covarying probability distributions

The relationship between the marginal probabilities and the joint probabilities is determined by the correlation between the two random variables - a normalized measure of how much the variables covary. We can also think of this as gaining some information about one of the variables when we observe a measurement from the other. We will think about this more formally in Tutorial 2. Here, we want to think about how the correlation between size and color of these fish changes how much information we gain about one attribute based on the other. See Bonus Section 1 for the formula for correlation.

Use the widget below and answer the following questions:

1. When the correlation is zero, $\rho = 0$, what does the distribution of size tell you about color?
2. Set $\rho$ to something small. As you change the probability of golden fish, what happens to the ratio of size probabilities? Set $\rho$ larger (can be negative). Can you explain the pattern of changes in the probabilities of size as you change the probability of golden fish?
3. Set the probability of golden fish and of large fish to around 65%. As the correlation goes towards 1, how often will you see silver large fish?
4. What is increasing the (absolute) correlation telling you about how likely you are to see one of the properties if you see a fish with the other?
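
As a quick numerical companion to question 3 (an illustration only, not part of the original notebook), the `compute_marginal` helper defined in the Helper Functions cell above builds the joint table from the two marginals and a correlation value:

```python
# Joint tables for p(color=golden) = 0.65 and p(size=large) = 0.65
print(compute_marginal(px=0.65, py=0.65, cor=0.0))  # independent case
print(compute_marginal(px=0.65, py=0.65, cor=0.9))  # strongly correlated case
# As the correlation approaches 1, the off-diagonal entries
# (e.g. large & silver) shrink towards zero.
```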
|
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
gs = GridspecLayout(2,2)
cor_widget = widgets.FloatSlider(0.0, description='ρ', min=-1, max=1, step=0.01)
px_widget = widgets.FloatSlider(0.5, description='p(color=golden)', min=0.01, max=0.99, step=0.01, style=style)
py_widget = widgets.FloatSlider(0.5, description='p(size=large)', min=0.01, max=0.99, step=0.01, style=style)
gs[0,0] = cor_widget
gs[0,1] = px_widget
gs[1,0] = py_widget
@widgets.interact(
px=px_widget,
py=py_widget,
cor=cor_widget,
)
def make_corr_plot(px, py, cor):
Cmin, Cmax = compute_cor_range(px, py) #allow correlation values
cor_widget.min, cor_widget.max = Cmin+0.01, Cmax-0.01
if cor_widget.value > Cmax:
cor_widget.value = Cmax
if cor_widget.value < Cmin:
cor_widget.value = Cmin
cor = cor_widget.value
P = compute_marginal(px,py,cor)
# print(P)
fig = plot_joint_probs(P)
plt.show(fig)
plt.close(fig)
return None
# gs[1,1] = make_corr_plot()
# to_remove explanation
#' 1. When the correlation is zero, the two properties are completely independent.
#' This means you don't gain any information about one variable from observing the other.
#' Importantly, the marginal distribution of one variable is therefore independent of the other.
#' 2. The correlation controls the distribution of probability across the joint probabilty table.
#' The higher the correlation, the more the probabilities are restricted by the fact that both rows
#' and columns need to sum to one! While the marginal probabilities show the relative weighting, the
#' absolute probabilities for one quality will become more dependent on the other as the correlation
#' goes to 1 or -1.
#' 3. The correlation will control how much probability mass is located on the diagonals. As the
#' correlation goes to 1 (or -1), the probability of seeing one of the two pairings has to go
#' towards zero!
#' 4. If we think about what information we gain by observing one quality, the intuition from (3.) tells
#' us that we know more (have more information) about the other quality as a function of the correlation.
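# --- Added sketch (not from the original notebook): building a 2x2 joint table ---
# A minimal illustration of how a joint probability table for two binary variables could be
# constructed from their marginals and a correlation value. This is only an assumption about
# what the tutorial's `compute_marginal` helper might do internally, not that helper's actual code.
import numpy as np

def joint_from_marginals(px, py, rho):
    """Return a 2x2 joint table (rows: Y=0/1, cols: X=0/1) from marginals px, py and correlation rho."""
    # For Bernoulli variables, cov(X, Y) = rho * sqrt(px*(1-px) * py*(1-py))
    cov = rho * np.sqrt(px * (1 - px) * py * (1 - py))
    p11 = px * py + cov  # P(X=1, Y=1)
    return np.array([[1 - px - py + p11, px - p11],
                     [py - p11, p11]])

# Usage: entries must stay non-negative, which is why the widget above restricts the range of rho.
print(joint_from_marginals(px=0.6, py=0.5, rho=0.2))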
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
We have just seen how two random variables can be more or less independent. The more correlated, the less independent, and the more shared information. We also learned that we can marginalize to determine the marginal likelihood of a hidden state or to find the marginal probability distribution of two random variables. We are now going to complete our journey towards being fully Bayesian! --- Section 5: Bayes' Rule and the Posterior Marginalization is going to be used to combine our prior knowledge, which we call the **prior**, and our new information from a measurement, the **likelihood**. Only in this case, the information we gain about the hidden state we are interested in, where the fish are, is based on the relationship between the probabilities of the measurement and our prior. We can now calculate the full posterior distribution for the hidden state ($s$) using Bayes' Rule. As we've seen, the posterior is proportional to the prior times the likelihood. This means that the posterior probability of the hidden state ($s$) given a measurement ($m$) is proportional to the likelihood of the measurement given the state times the prior probability of that state (the marginal likelihood):$$ P(s | m) \propto P(m | s) P(s) $$We say proportional to instead of equal because we need to normalize to produce a full probability distribution:$$ P(s | m) = \frac{P(m | s) P(s)}{P(m)} $$Normalizing by this $P(m)$ means that our posterior is a complete probability distribution that sums or integrates to 1 appropriately. We can now use this new, complete probability distribution for any future inference or decisions we like! In fact, as we will see tomorrow, we can use it as a new prior! Finally, we often call this probability distribution our beliefs over the hidden states, to emphasize that it is our subjective knowledge about the hidden state.For many complicated cases, like those we might be using to model behavioral or brain inferences, the normalization term can be intractable or extremely complex to calculate. We can be careful to choose probability distributions where we can analytically calculate the posterior probability or where numerical approximation is reliable. Better yet, we sometimes don't need to bother with this normalization! The normalization term, $P(m)$, is the probability of the measurement. This does not depend on state, so it is essentially a constant we can often ignore. We can compare the unnormalized posterior distribution values for different states because how they relate to each other is unchanged when divided by the same constant. We will see how to do this to compare evidence for different hypotheses tomorrow. (It's also used to compare the likelihood of models fit using maximum likelihood estimation, as you did in W1D5.)In this relatively simple example, we can compute the marginal probability $P(m)$ easily by using:$$P(m) = \sum_s P(m | s) P(s)$$We can then normalize so that we deal with the full posterior distribution. Math Exercise 5: Calculating a posterior probabilityOur prior is $p(s = left) = 0.3$ and $p(s = right) = 0.7$. In the video, we learned that the chance of catching a fish given they fish on the same side as the school was 50%. Otherwise, it was 10%. We observe a person fishing on the left side. Our likelihood is: | Likelihood: p(m \| s) | m = catch fish | m = no fish || ----------------- |----------|----------|| s = left | 0.5 | 0.5 || s = right | 0.1 | 0.9 |Calculate the posterior probability (on paper) that:
1. The school is on the left if the fisherperson catches a fish: $p(s = left | m = catch fish)$ (hint: normalize by computing $p(m = catch fish)$)2. The school is on the right if the fisherperson does not catch a fish: $p(s = right | m = no fish)$
|
# to_remove explanation
# 1. Using Bayes rule, we know that P(s = left | m = catch fish) = P(m = catch fish | s = left)P(s = left) / P(m = catch fish)
#. Let's first compute P(m = catch fish):
#. P(m = catch fish) = P(m = catch fish | s = left)P(s = left) + P(m = catch fish | s = right)P(s = right)
# = 0.5 * 0.3 + .1*.7
# = 0.22
#. Now we can plug in all parts of Bayes rule:
# P(s = left | m = catch fish) = P(m = catch fish | s = left)P(s = left) / P(m = catch fish)
# = 0.5*0.3/0.22
# = 0.68
# 2. Using Bayes rule, we know that P(s = right | m = no fish) = P(m = no fish | s = right)P(s = right) / P(m = no fish)
#. Let's first compute P(m = no fish):
#. P(m = no fish) = P(m = no fish | s = left)P(s = left) + P(m = no fish | s = right)P(s = right)
# = 0.5 * 0.3 + .9*.7
# = 0.78
#. Now we can plug in all parts of Bayes rule:
# P(s = right | m = no fish) = P(m = no fish | s = right)P(s = right) / P(m = no fish)
# = 0.9*0.7/0.78
# = 0.81
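# --- Added numeric check (not part of the original solution) ---
# A few lines of Python verifying the hand calculation above; the numbers come straight
# from the prior and likelihood stated in the exercise.
import numpy as np

prior = np.array([0.3, 0.7])                 # [p(s = left), p(s = right)]
p_fish_given_s = np.array([0.5, 0.1])        # [p(catch | left), p(catch | right)]
p_nofish_given_s = np.array([0.5, 0.9])      # [p(no fish | left), p(no fish | right)]

p_fish = np.sum(p_fish_given_s * prior)      # marginal p(m = catch fish) = 0.22
p_nofish = np.sum(p_nofish_given_s * prior)  # marginal p(m = no fish) = 0.78
print(p_fish_given_s[0] * prior[0] / p_fish)      # p(s = left | catch fish) ~ 0.68
print(p_nofish_given_s[1] * prior[1] / p_nofish)  # p(s = right | no fish) ~ 0.81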
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
Coding Exercise 5: Computing PosteriorsLet's implement our above math to be able to compute posteriors for different priors and likelihoods. As before, our prior is $p(s = left) = 0.3$ and $p(s = right) = 0.7$. In the video, we learned that the chance of catching a fish given they fish on the same side as the school was 50%. Otherwise, it was 10%. We observe a person fishing on the left side. Our likelihood is: | Likelihood: p(m \| s) | m = catch fish | m = no fish || ----------------- |----------|----------|| s = left | 0.5 | 0.5 || s = right | 0.1 | 0.9 |We want our full posterior to take the same 2 by 2 form. Make sure the outputs match your math answers!
|
def compute_posterior(likelihood, prior):
""" Use Bayes' Rule to compute posterior from likelihood and prior
Args:
likelihood (ndarray): i x j array with likelihood probabilities where i is
number of state options, j is number of measurement options
prior (ndarray): i x 1 array with prior probability of each state
Returns:
ndarray: i x j array with posterior probabilities where i is
number of state options, j is number of measurement options
"""
#################################################
## TODO for students ##
# Fill out function and remove
raise NotImplementedError("Student exercise: implement compute_posterior")
#################################################
# Compute unnormalized posterior (likelihood times prior)
posterior = ... # first row is s = left, second row is s = right
# Compute p(m)
p_m = np.sum(posterior, axis = 0)
# Normalize posterior (divide elements by p_m)
posterior /= ...
return posterior
# Make prior
prior = np.array([0.3, 0.7]).reshape((2, 1)) # first row is s = left, second row is s = right
# Make likelihood
likelihood = np.array([[0.5, 0.5], [0.1, 0.9]]) # first row is s = left, second row is s = right
# Compute posterior
posterior = compute_posterior(likelihood, prior)
# Visualize
with plt.xkcd():
plot_prior_likelihood_posterior(prior, likelihood, posterior)
# to_remove solution
def compute_posterior(likelihood, prior):
""" Use Bayes' Rule to compute posterior from likelihood and prior
Args:
likelihood (ndarray): i x j array with likelihood probabilities where i is
number of state options, j is number of measurement options
prior (ndarray): i x 1 array with prior probability of each state
Returns:
ndarray: i x j array with posterior probabilities where i is
number of state options, j is number of measurement options
"""
# Compute unnormalized posterior (likelihood times prior)
posterior = likelihood * prior # first row is s = left, second row is s = right
# Compute p(m)
p_m = np.sum(posterior, axis = 0)
# Normalize posterior (divide elements by p_m)
posterior /= p_m
return posterior
# Make prior
prior = np.array([0.3, 0.7]).reshape((2, 1)) # first row is s = left, second row is s = right
# Make likelihood
likelihood = np.array([[0.5, 0.5], [0.1, 0.9]]) # first row is s = left, second row is s = right
# Compute posterior
posterior = compute_posterior(likelihood, prior)
# Visualize
with plt.xkcd():
plot_prior_likelihood_posterior(prior, likelihood, posterior)
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
Interactive Demo 5: What affects the posterior?Now that we can understand the implementation of *Bayes rule*, let's vary the parameters of the prior and likelihood to see how changing the prior and likelihood affect the posterior. In the demo below, you can change the prior by playing with the slider for $p( s = left)$. You can also change the likelihood by changing the probability of catching a fish given that the school is on the left and the probability of catching a fish given that the school is on the right. The fisherperson you are observing is fishing on the left. 1. Keeping the likelihood constant, when does the prior have the strongest influence over the posterior? Meaning, when does the posterior look most like the prior no matter whether a fish was caught or not?2. Keeping the likelihood constant, when does the prior exert the weakest influence? Meaning, when does the posterior look least like the prior and depend most on whether a fish was caught or not?3. Set the prior probability of the state = left to 0.6 and play with the likelihood. When does the likelihood exert the most influence over the posterior?
|
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
ps_widget = widgets.FloatSlider(0.3, description='p(s = left)',
min=0.01, max=0.99, step=0.01)
p_a_s1_widget = widgets.FloatSlider(0.5, description='p(fish | s = left)',
min=0.01, max=0.99, step=0.01, style=style)
p_a_s0_widget = widgets.FloatSlider(0.1, description='p(fish | s = right)',
min=0.01, max=0.99, step=0.01, style=style)
observed_widget = widgets.Checkbox(value=False, description='Observed fish (m)',
disabled=False, indent=False, layout={'width': 'max-content'})
@widgets.interact(
ps=ps_widget,
p_a_s1=p_a_s1_widget,
p_a_s0=p_a_s0_widget,
m_right=observed_widget
)
def make_prior_likelihood_plot(ps,p_a_s1,p_a_s0,m_right):
fig = plot_prior_likelihood(ps,p_a_s1,p_a_s0,m_right)
plt.show(fig)
plt.close(fig)
return None
# to_remove explanation
# 1). The prior exerts a strong influence over the posterior when it is very informative: when
#. the probability of the school being on one side is much higher than on the other. If the prior that the fish are
#. on the left side is very high (like 0.9), the posterior probability of the state being left is
#. high regardless of the measurement.
# 2). The prior does not exert a strong influence when it is not informative: when the probabilities
#. of the school being on the left vs right are similar (both are 0.5 for example). In this case,
#. the posterior is more driven by the collected data (the measurement) and more closely resembles
#. the likelihood.
#. 3) Similarly to the prior, the likelihood exerts the most influence when it is informative: when catching
#. a fish tells you a lot about which state is likely. For example, if the probability of catching
#. a fish when the school is on the left is 0 (p(fish | s = left) = 0) and the probability of
#. catching a fish when the school is on the right is 1, the
#. prior does not affect the posterior at all. The measurement tells you the hidden state completely.
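# --- Added sketch (for illustration, not from the original notebook) ---
# Makes points 1 and 2 concrete using the compute_posterior function defined above:
# with a near-flat prior the posterior follows the likelihood; with a strong prior it barely moves.
likelihood = np.array([[0.5, 0.5], [0.1, 0.9]])  # rows: s = left / right, cols: m = catch / no fish
for p_left in [0.5, 0.9]:
    prior = np.array([p_left, 1 - p_left]).reshape((2, 1))
    print(f"p(s = left) = {p_left}:")
    print(compute_posterior(likelihood, prior))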
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
Section 6: Making Bayesian fishing decisionsWe will explore how to consider the expected utility of an action based on our belief (the posterior distribution) about where we think the fish are. Now we have all the components of a Bayesian decision: our prior information, the likelihood given a measurement, the posterior distribution (belief) and our utility (the gains and losses). This allows us to consider the relationship between the true value of the hidden state, $s$, and what we *expect* to get if we take action, $a$, based on our belief!Let's use the following widget to think about the relationship between these probability distributions and the utility function. Think! 6: What is more important, the probabilities or the utilities?We are now going to put everything we've learned together to gain some intuitions for how each of the elements that goes into a Bayesian decision comes together. Remember, the common assumption in neuroscience, psychology, economics, ecology, etc. is that we (humans and animals) are trying to maximize our expected utility.1. Can you find a situation where the expected utility is the same for both actions?2. What is more important for determining the expected utility: the prior or a new measurement (the likelihood)?3. Why is this a normative model?4. Can you think of ways in which this model would need to be extended to describe human or animal behavior?
|
# @markdown Execute this cell to enable the widget
style = {'description_width': 'initial'}
ps_widget = widgets.FloatSlider(0.3, description='p(s)',
min=0.01, max=0.99, step=0.01)
p_a_s1_widget = widgets.FloatSlider(0.5, description='p(fish | s = left)',
min=0.01, max=0.99, step=0.01, style=style)
p_a_s0_widget = widgets.FloatSlider(0.1, description='p(fish | s = right)',
min=0.01, max=0.99, step=0.01, style=style)
observed_widget = widgets.Checkbox(value=False, description='Observed fish (m)',
disabled=False, indent=False, layout={'width': 'max-content'})
@widgets.interact(
ps=ps_widget,
p_a_s1=p_a_s1_widget,
p_a_s0=p_a_s0_widget,
m_right=observed_widget
)
def make_prior_likelihood_utility_plot(ps, p_a_s1, p_a_s0,m_right):
fig = plot_prior_likelihood_utility(ps, p_a_s1, p_a_s0,m_right)
plt.show(fig)
plt.close(fig)
return None
# to_remove explanation
#' 1. There are actually many (infinite) combinations that can produce the same
#. expected utility for both actions: but the posterior probabilities will always
# have to balance out the differences in the utility function. So, what is
# important is that for a given utility function, there will be some 'point
# of indifference'
#' 2. What matters is the relative information: if the prior is close to 50/50,
# then the likelihood has more influence; if the likelihood is 50/50 given a
# measurement (the measurement is uninformative), the prior is more important.
# But the critical insight from Bayes Rule and the Bayesian approach is that what
# matters is the relative information you gain from a measurement, and that
# you can use all of this information for your decision.
#' 3. The model gives us a very precise way to think about how we *should* combine
# information and how we *should* act, GIVEN some assumption about our goals.
# In this case, if we assume we are trying to maximize expected utility--we can
# state what an animal or subject should do.
#' 4. There are lots of possible extensions. Humans may not always try to maximize
# utility; humans and animals might not be able to calculate or represent probability
# distributions exactly; the utility function might be more complicated; etc.
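# --- Added sketch (for illustration, not from the original notebook) ---
# The expected utility of an action is the posterior-weighted average of its utilities:
#     E[U(a)] = sum_s p(s | m) * U(s, a)
# The posterior and utility values below are made-up placeholders, not the ones used by the widget.
posterior = np.array([0.3, 0.7])      # [p(s = left | m), p(s = right | m)]
utility = np.array([[ 2.0, -3.0],     # U(s = left,  a = fish left), U(s = left,  a = fish right)
                    [-3.0,  2.0]])    # U(s = right, a = fish left), U(s = right, a = fish right)
expected_utility = posterior @ utility  # one expected utility value per action
print(expected_utility)                 # the utility-maximizing choice is the action with the larger value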
|
_____no_output_____
|
CC-BY-4.0
|
tutorials/W3D1_BayesianDecisions/W3D1_Tutorial1.ipynb
|
bgalbraith/course-content
|
Dependencies
|
# !pip install --quiet efficientnet
!pip install --quiet image-classifiers
import warnings, json, re, glob, math
from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import optimizers, layers, metrics, losses, Model
# import efficientnet.tfkeras as efn
from classification_models.tfkeras import Classifiers
import tensorflow_addons as tfa
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
TPU configuration
|
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
|
REPLICAS: 1
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Model parameters
|
config = {
"HEIGHT": 256,
"WIDTH": 256,
"CHANNELS": 3,
"BATCH_SIZE": 64,
"EPOCHS": 25,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"TTA_STEPS": 25,
"BASE_MODEL": 'seresnet18',
"BASE_MODEL_WEIGHTS": 'imagenet',
"DATASET_PATH": 'melanoma-256x256'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Load data
|
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
k_fold = pd.read_csv(database_base_path + 'train.csv')
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = KaggleDatasets().get_gcs_path(config['DATASET_PATH'])
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')
|
Train samples: 33126
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Augmentations
|
def data_augment(image, label):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_spatial2 = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_rotate = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_pixel = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
### Spatial-level transforms
if p_spatial >= .2: # flips
image['input_image'] = tf.image.random_flip_left_right(image['input_image'])
image['input_image'] = tf.image.random_flip_up_down(image['input_image'])
if p_spatial >= .7:
image['input_image'] = tf.image.transpose(image['input_image'])
if p_rotate >= .8: # rotate 270º
image['input_image'] = tf.image.rot90(image['input_image'], k=3)
elif p_rotate >= .6: # rotate 180º
image['input_image'] = tf.image.rot90(image['input_image'], k=2)
elif p_rotate >= .4: # rotate 90º
image['input_image'] = tf.image.rot90(image['input_image'], k=1)
if p_spatial2 >= .6:
if p_spatial2 >= .9:
image['input_image'] = transform_rotation(image['input_image'], config['HEIGHT'], 180.)
elif p_spatial2 >= .8:
image['input_image'] = transform_zoom(image['input_image'], config['HEIGHT'], 8., 8.)
elif p_spatial2 >= .7:
image['input_image'] = transform_shift(image['input_image'], config['HEIGHT'], 8., 8.)
else:
image['input_image'] = transform_shear(image['input_image'], config['HEIGHT'], 2.)
if p_crop >= .6: # crops
if p_crop >= .8:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.8), int(config['WIDTH']*.8), config['CHANNELS']])
elif p_crop >= .7:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.9), int(config['WIDTH']*.9), config['CHANNELS']])
else:
image['input_image'] = tf.image.central_crop(image['input_image'], central_fraction=.8)
image['input_image'] = tf.image.resize(image['input_image'], size=[config['HEIGHT'], config['WIDTH']])
if p_pixel >= .6: # Pixel-level transforms
if p_pixel >= .9:
image['input_image'] = tf.image.random_hue(image['input_image'], 0.01)
elif p_pixel >= .8:
image['input_image'] = tf.image.random_saturation(image['input_image'], 0.7, 1.3)
elif p_pixel >= .7:
image['input_image'] = tf.image.random_contrast(image['input_image'], 0.8, 1.2)
else:
image['input_image'] = tf.image.random_brightness(image['input_image'], 0.1)
return image, label
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Auxiliary functions
|
# Datasets utility functions
def read_labeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label # returns a dataset of (image, data, label)
def read_labeled_tfrecord_eval(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label, image_name # returns a dataset of (image, data, label, image_name)
def load_dataset(filenames, ordered=False, buffer_size=-1):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label)
def load_dataset_eval(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_labeled_tfrecord_eval, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label, image_name)
def get_training_dataset(filenames, batch_size, buffer_size=-1):
dataset = load_dataset(filenames, ordered=False, buffer_size=buffer_size)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=True) # slighly faster with fixed tensor sizes
dataset = dataset.prefetch(buffer_size) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(filenames, ordered=True, repeated=False, batch_size=32, buffer_size=-1):
dataset = load_dataset(filenames, ordered=ordered, buffer_size=buffer_size)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=repeated)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_eval_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_eval(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Test function
def read_unlabeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_tabular': data}, image_name # returns a dataset of (image, data, image_name)
def load_dataset_test(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_unlabeled_tfrecord, num_parallel_calls=buffer_size)
# returns a dataset of (image, data, label, image_name) pairs if labeled=True or (image, data, image_name) pairs if labeled=False
return dataset
def get_test_dataset(filenames, batch_size=32, buffer_size=-1, tta=False):
dataset = load_dataset_test(filenames, buffer_size=buffer_size)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Advanced augmentations
def transform_rotation(image, height, rotation):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly rotated
DIM = height
XDIM = DIM%2 #fix for size 331
rotation = rotation * tf.random.normal([1],dtype='float32')
# CONVERT DEGREES TO RADIANS
rotation = math.pi * rotation / 180.
# ROTATION MATRIX
c1 = tf.math.cos(rotation)
s1 = tf.math.sin(rotation)
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
rotation_matrix = tf.reshape( tf.concat([c1,s1,zero, -s1,c1,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(rotation_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_shear(image, height, shear):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly sheared
DIM = height
XDIM = DIM%2 #fix for size 331
shear = shear * tf.random.normal([1],dtype='float32')
shear = math.pi * shear / 180.
# SHEAR MATRIX
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
c2 = tf.math.cos(shear)
s2 = tf.math.sin(shear)
shear_matrix = tf.reshape( tf.concat([one,s2,zero, zero,c2,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(shear_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_shift(image, height, h_shift, w_shift):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly shifted
DIM = height
XDIM = DIM%2 #fix for size 331
height_shift = h_shift * tf.random.normal([1],dtype='float32')
width_shift = w_shift * tf.random.normal([1],dtype='float32')
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
# SHIFT MATRIX
shift_matrix = tf.reshape( tf.concat([one,zero,height_shift, zero,one,width_shift, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(shift_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_zoom(image, height, h_zoom, w_zoom):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly zoomed
DIM = height
XDIM = DIM%2 #fix for size 331
height_zoom = 1.0 + tf.random.normal([1],dtype='float32')/h_zoom
width_zoom = 1.0 + tf.random.normal([1],dtype='float32')/w_zoom
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
# ZOOM MATRIX
zoom_matrix = tf.reshape( tf.concat([one/height_zoom,zero,zero, zero,one/width_zoom,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(zoom_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Learning rate scheduler
|
lr_min = 1e-6
# lr_start = 0
lr_max = config['LEARNING_RATE']
steps_per_epoch = 24844 // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * steps_per_epoch
warmup_steps = steps_per_epoch * 5
# hold_max_steps = 0
# step_decay = .8
# step_size = steps_per_epoch * 1
# rng = [i for i in range(0, total_steps, 32)]
# y = [step_schedule_with_warmup(tf.cast(x, tf.float32), step_size=step_size,
# warmup_steps=warmup_steps, hold_max_steps=hold_max_steps,
# lr_start=lr_start, lr_max=lr_max, step_decay=step_decay) for x in rng]
# sns.set(style="whitegrid")
# fig, ax = plt.subplots(figsize=(20, 6))
# plt.plot(rng, y)
# print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Model
|
# Initial bias
pos = len(k_fold[k_fold['target'] == 1])
neg = len(k_fold[k_fold['target'] == 0])
initial_bias = np.log([pos/neg])
print('Bias')
print(pos)
print(neg)
print(initial_bias)
# class weights
total = len(k_fold)
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Class weight')
print(class_weight)
def model_fn(input_shape):
input_image = L.Input(shape=input_shape, name='input_image')
BaseModel, preprocess_input = Classifiers.get(config['BASE_MODEL'])
base_model = BaseModel(input_shape=input_shape,
weights=config['BASE_MODEL_WEIGHTS'],
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid', name='output',
bias_initializer=tf.keras.initializers.Constant(initial_bias))(x)
model = Model(inputs=input_image, outputs=output)
return model
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Training
|
# Evaluation
eval_dataset = get_eval_dataset(TRAINING_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
image_names = next(iter(eval_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(TRAINING_FILENAMES)))).numpy().astype('U')
image_data = eval_dataset.map(lambda data, label, image_name: data)
# Resample dataframe
k_fold = k_fold[k_fold['image_name'].isin(image_names)]
# Test
NUM_TEST_IMAGES = len(test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
test_preds_last = np.zeros((NUM_TEST_IMAGES, 1))
test_dataset = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, tta=True)
image_names_test = next(iter(test_dataset.unbatch().map(lambda data, image_name: image_name).batch(NUM_TEST_IMAGES))).numpy().astype('U')
test_image_data = test_dataset.map(lambda data, image_name: data)
history_list = []
k_fold_best = k_fold.copy()
kfold = KFold(config['N_FOLDS'], shuffle=True, random_state=SEED)
for n_fold, (trn_idx, val_idx) in enumerate(kfold.split(TRAINING_FILENAMES)):
if n_fold < config['N_USED_FOLDS']:
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
### Data
train_filenames = np.array(TRAINING_FILENAMES)[trn_idx]
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
steps_per_epoch = count_data_items(train_filenames) // config['BATCH_SIZE']
# Train model
model_path = f'model_fold_{n_fold}.h5'
es = EarlyStopping(monitor='val_auc', mode='max', patience=config['ES_PATIENCE'],
restore_best_weights=False, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_auc', mode='max',
save_best_only=True, save_weights_only=True)
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
optimizer = tfa.optimizers.RectifiedAdam(lr=lr_max,
total_steps=total_steps,
warmup_proportion=(warmup_steps / total_steps),
min_lr=lr_min)
model.compile(optimizer, loss=losses.BinaryCrossentropy(label_smoothing=0.05),
metrics=[metrics.AUC()])
history = model.fit(get_training_dataset(train_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
validation_data=get_validation_dataset(valid_filenames, ordered=True, repeated=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
epochs=config['EPOCHS'],
steps_per_epoch=steps_per_epoch,
callbacks=[checkpoint, es],
class_weight=class_weight,
verbose=2).history
# save last epoch weights
model.save_weights('last_' + model_path)
history_list.append(history)
# Get validation IDs
valid_dataset = get_eval_dataset(valid_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
valid_image_names = next(iter(valid_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(valid_filenames)))).numpy().astype('U')
k_fold[f'fold_{n_fold}'] = k_fold.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
k_fold_best[f'fold_{n_fold}'] = k_fold_best.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
##### Last model #####
print('Last model evaluation...')
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold[f'pred_fold_{n_fold}'] = k_fold.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print(f'Last model inference (TTA {config["TTA_STEPS"]} steps)...')
for step in range(config['TTA_STEPS']):
test_preds_last += model.predict(test_image_data)
##### Best model #####
print('Best model evaluation...')
model.load_weights(model_path)
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold_best[f'pred_fold_{n_fold}'] = k_fold_best.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print(f'Best model inference (TTA {config["TTA_STEPS"]} steps)...')
for step in range(config['TTA_STEPS']):
test_preds += model.predict(test_image_data)
# normalize preds
test_preds /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
test_preds_last /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
name_preds = dict(zip(image_names_test, test_preds.reshape(NUM_TEST_IMAGES)))
name_preds_last = dict(zip(image_names_test, test_preds_last.reshape(NUM_TEST_IMAGES)))
test['target'] = test.apply(lambda x: name_preds[x['image_name']], axis=1)
test['target_last'] = test.apply(lambda x: name_preds_last[x['image_name']], axis=1)
|
FOLD: 1
Downloading data from https://github.com/qubvel/classification_models/releases/download/0.0.1/seresnet18_imagenet_1000_no_top.h5
45359104/45351256 [==============================] - 4s 0us/step
Epoch 1/25
408/408 - 147s - loss: 0.7536 - auc: 0.7313 - val_loss: 0.1771 - val_auc: 0.5180
Epoch 2/25
408/408 - 144s - loss: 0.5032 - auc: 0.8513 - val_loss: 0.1900 - val_auc: 0.7332
Epoch 3/25
408/408 - 145s - loss: 0.5233 - auc: 0.8501 - val_loss: 0.3951 - val_auc: 0.8278
Epoch 4/25
408/408 - 146s - loss: 0.5291 - auc: 0.8503 - val_loss: 0.5892 - val_auc: 0.8244
Epoch 5/25
408/408 - 146s - loss: 0.5110 - auc: 0.8528 - val_loss: 0.3490 - val_auc: 0.8255
Epoch 6/25
408/408 - 139s - loss: 0.4682 - auc: 0.8814 - val_loss: 0.6515 - val_auc: 0.8535
Epoch 7/25
408/408 - 142s - loss: 0.4610 - auc: 0.8793 - val_loss: 0.3841 - val_auc: 0.8673
Epoch 8/25
408/408 - 141s - loss: 0.4458 - auc: 0.8894 - val_loss: 0.3039 - val_auc: 0.8412
Epoch 9/25
408/408 - 141s - loss: 0.4535 - auc: 0.8867 - val_loss: 0.5310 - val_auc: 0.8819
Epoch 10/25
408/408 - 142s - loss: 0.4385 - auc: 0.8951 - val_loss: 0.6995 - val_auc: 0.8433
Epoch 11/25
408/408 - 141s - loss: 0.4285 - auc: 0.8992 - val_loss: 0.3153 - val_auc: 0.8760
Epoch 12/25
408/408 - 139s - loss: 0.4083 - auc: 0.9143 - val_loss: 0.5465 - val_auc: 0.8659
Epoch 13/25
408/408 - 142s - loss: 0.4043 - auc: 0.9146 - val_loss: 0.2805 - val_auc: 0.8740
Epoch 14/25
408/408 - 141s - loss: 0.3886 - auc: 0.9253 - val_loss: 0.4161 - val_auc: 0.8632
Epoch 15/25
408/408 - 143s - loss: 0.3700 - auc: 0.9338 - val_loss: 0.4457 - val_auc: 0.8666
Epoch 16/25
408/408 - 141s - loss: 0.3674 - auc: 0.9356 - val_loss: 0.3412 - val_auc: 0.8839
Epoch 17/25
408/408 - 140s - loss: 0.3375 - auc: 0.9480 - val_loss: 0.3924 - val_auc: 0.8703
Epoch 18/25
408/408 - 142s - loss: 0.3237 - auc: 0.9544 - val_loss: 0.2758 - val_auc: 0.8552
Epoch 19/25
408/408 - 143s - loss: 0.3311 - auc: 0.9535 - val_loss: 0.3325 - val_auc: 0.8610
Epoch 20/25
408/408 - 140s - loss: 0.2902 - auc: 0.9680 - val_loss: 0.2184 - val_auc: 0.8552
Epoch 21/25
408/408 - 145s - loss: 0.2846 - auc: 0.9694 - val_loss: 0.2595 - val_auc: 0.8624
Epoch 22/25
408/408 - 143s - loss: 0.2552 - auc: 0.9785 - val_loss: 0.2535 - val_auc: 0.8693
Epoch 23/25
408/408 - 145s - loss: 0.2528 - auc: 0.9791 - val_loss: 0.2828 - val_auc: 0.8782
Epoch 24/25
408/408 - 143s - loss: 0.2380 - auc: 0.9839 - val_loss: 0.2587 - val_auc: 0.8695
Epoch 25/25
408/408 - 140s - loss: 0.2331 - auc: 0.9850 - val_loss: 0.2575 - val_auc: 0.8700
Last model evaluation...
Last model inference (TTA 25 steps)...
Best model evaluation...
Best model inference (TTA 25 steps)...
FOLD: 2
Epoch 1/25
408/408 - 145s - loss: 1.4535 - auc: 0.6980 - val_loss: 0.1884 - val_auc: 0.4463
Epoch 2/25
408/408 - 143s - loss: 0.5506 - auc: 0.8422 - val_loss: 0.1974 - val_auc: 0.7122
Epoch 3/25
408/408 - 143s - loss: 0.5113 - auc: 0.8634 - val_loss: 0.2947 - val_auc: 0.8525
Epoch 4/25
408/408 - 144s - loss: 0.5339 - auc: 0.8479 - val_loss: 0.6366 - val_auc: 0.7912
Epoch 5/25
408/408 - 139s - loss: 0.5006 - auc: 0.8572 - val_loss: 0.7557 - val_auc: 0.8531
Epoch 6/25
408/408 - 140s - loss: 0.4760 - auc: 0.8725 - val_loss: 0.2821 - val_auc: 0.8140
Epoch 7/25
408/408 - 146s - loss: 0.4729 - auc: 0.8781 - val_loss: 0.4087 - val_auc: 0.8631
Epoch 8/25
408/408 - 139s - loss: 0.4344 - auc: 0.9005 - val_loss: 0.3831 - val_auc: 0.8518
Epoch 9/25
408/408 - 138s - loss: 0.4148 - auc: 0.9081 - val_loss: 0.3328 - val_auc: 0.8447
Epoch 10/25
408/408 - 139s - loss: 0.4234 - auc: 0.9058 - val_loss: 0.2685 - val_auc: 0.8798
Epoch 11/25
408/408 - 142s - loss: 0.4121 - auc: 0.9129 - val_loss: 0.3698 - val_auc: 0.8625
Epoch 12/25
408/408 - 139s - loss: 0.4006 - auc: 0.9194 - val_loss: 0.3287 - val_auc: 0.8915
Epoch 13/25
408/408 - 139s - loss: 0.3891 - auc: 0.9254 - val_loss: 0.6094 - val_auc: 0.8477
Epoch 14/25
408/408 - 141s - loss: 0.3815 - auc: 0.9255 - val_loss: 0.3927 - val_auc: 0.8242
Epoch 15/25
408/408 - 139s - loss: 0.3694 - auc: 0.9361 - val_loss: 0.3596 - val_auc: 0.8811
Epoch 16/25
408/408 - 139s - loss: 0.3507 - auc: 0.9424 - val_loss: 0.3919 - val_auc: 0.8972
Epoch 17/25
408/408 - 139s - loss: 0.3349 - auc: 0.9512 - val_loss: 0.3917 - val_auc: 0.8796
Epoch 18/25
408/408 - 138s - loss: 0.3244 - auc: 0.9568 - val_loss: 0.4386 - val_auc: 0.8936
Epoch 19/25
408/408 - 140s - loss: 0.3032 - auc: 0.9635 - val_loss: 0.2599 - val_auc: 0.8978
Epoch 20/25
408/408 - 139s - loss: 0.2978 - auc: 0.9653 - val_loss: 0.2680 - val_auc: 0.8872
Epoch 21/25
408/408 - 141s - loss: 0.2585 - auc: 0.9775 - val_loss: 0.3013 - val_auc: 0.8902
Epoch 22/25
408/408 - 139s - loss: 0.2553 - auc: 0.9792 - val_loss: 0.2838 - val_auc: 0.9049
Epoch 23/25
408/408 - 141s - loss: 0.2520 - auc: 0.9792 - val_loss: 0.2890 - val_auc: 0.9038
Epoch 24/25
408/408 - 140s - loss: 0.2374 - auc: 0.9841 - val_loss: 0.2680 - val_auc: 0.9005
Epoch 25/25
408/408 - 139s - loss: 0.2374 - auc: 0.9843 - val_loss: 0.2697 - val_auc: 0.9005
Last model evaluation...
Last model inference (TTA 25 steps)...
Best model evaluation...
Best model inference (TTA 25 steps)...
FOLD: 3
Epoch 1/25
408/408 - 149s - loss: 1.1720 - auc: 0.7271 - val_loss: 0.1744 - val_auc: 0.6360
Epoch 2/25
408/408 - 149s - loss: 0.5507 - auc: 0.8441 - val_loss: 0.1939 - val_auc: 0.6844
Epoch 3/25
408/408 - 148s - loss: 0.5058 - auc: 0.8589 - val_loss: 0.3885 - val_auc: 0.8607
Epoch 4/25
408/408 - 150s - loss: 0.5285 - auc: 0.8505 - val_loss: 0.4079 - val_auc: 0.8370
Epoch 5/25
408/408 - 149s - loss: 0.4942 - auc: 0.8637 - val_loss: 0.5155 - val_auc: 0.8279
Epoch 6/25
408/408 - 150s - loss: 0.4860 - auc: 0.8649 - val_loss: 0.5217 - val_auc: 0.8420
Epoch 7/25
408/408 - 145s - loss: 0.4461 - auc: 0.8891 - val_loss: 0.5477 - val_auc: 0.7847
Epoch 8/25
408/408 - 148s - loss: 0.4492 - auc: 0.8907 - val_loss: 0.4068 - val_auc: 0.8478
Epoch 9/25
408/408 - 145s - loss: 0.4383 - auc: 0.8924 - val_loss: 0.3346 - val_auc: 0.8607
Epoch 10/25
408/408 - 148s - loss: 0.4140 - auc: 0.9097 - val_loss: 0.4308 - val_auc: 0.8583
Epoch 11/25
408/408 - 143s - loss: 0.4326 - auc: 0.9001 - val_loss: 0.3784 - val_auc: 0.8515
Epoch 12/25
408/408 - 145s - loss: 0.4062 - auc: 0.9162 - val_loss: 0.2820 - val_auc: 0.8529
Epoch 13/25
408/408 - 152s - loss: 0.3969 - auc: 0.9224 - val_loss: 0.5535 - val_auc: 0.8801
Epoch 14/25
408/408 - 147s - loss: 0.3705 - auc: 0.9303 - val_loss: 0.3018 - val_auc: 0.8591
Epoch 15/25
408/408 - 150s - loss: 0.3705 - auc: 0.9347 - val_loss: 0.4215 - val_auc: 0.8524
Epoch 16/25
408/408 - 143s - loss: 0.3468 - auc: 0.9420 - val_loss: 0.3081 - val_auc: 0.8652
Epoch 17/25
408/408 - 150s - loss: 0.3313 - auc: 0.9511 - val_loss: 0.2978 - val_auc: 0.8725
Epoch 18/25
408/408 - 146s - loss: 0.3187 - auc: 0.9567 - val_loss: 0.2542 - val_auc: 0.8754
Epoch 19/25
408/408 - 150s - loss: 0.2957 - auc: 0.9648 - val_loss: 0.2455 - val_auc: 0.8672
Epoch 20/25
408/408 - 145s - loss: 0.2855 - auc: 0.9697 - val_loss: 0.2919 - val_auc: 0.8630
Epoch 21/25
408/408 - 148s - loss: 0.2806 - auc: 0.9714 - val_loss: 0.2798 - val_auc: 0.8683
Epoch 22/25
408/408 - 145s - loss: 0.2517 - auc: 0.9785 - val_loss: 0.3469 - val_auc: 0.8822
Epoch 23/25
408/408 - 148s - loss: 0.2355 - auc: 0.9841 - val_loss: 0.3256 - val_auc: 0.8883
Epoch 24/25
408/408 - 147s - loss: 0.2316 - auc: 0.9851 - val_loss: 0.2705 - val_auc: 0.8842
Epoch 25/25
408/408 - 143s - loss: 0.2232 - auc: 0.9873 - val_loss: 0.2706 - val_auc: 0.8848
Last model evaluation...
Last model inference (TTA 25 steps)...
Best model evaluation...
Best model inference (TTA 25 steps)...
FOLD: 4
Epoch 1/25
408/408 - 150s - loss: 1.3047 - auc: 0.7255 - val_loss: 0.2129 - val_auc: 0.3756
Epoch 2/25
408/408 - 149s - loss: 0.5269 - auc: 0.8519 - val_loss: 0.2027 - val_auc: 0.6503
Epoch 3/25
408/408 - 148s - loss: 0.5088 - auc: 0.8602 - val_loss: 0.3177 - val_auc: 0.8657
Epoch 4/25
408/408 - 146s - loss: 0.4965 - auc: 0.8637 - val_loss: 0.3331 - val_auc: 0.8419
Epoch 5/25
408/408 - 148s - loss: 0.5099 - auc: 0.8562 - val_loss: 0.3309 - val_auc: 0.8765
Epoch 6/25
408/408 - 140s - loss: 0.4817 - auc: 0.8662 - val_loss: 1.0217 - val_auc: 0.8450
Epoch 7/25
408/408 - 143s - loss: 0.4676 - auc: 0.8770 - val_loss: 0.3531 - val_auc: 0.8796
Epoch 8/25
408/408 - 141s - loss: 0.4693 - auc: 0.8751 - val_loss: 0.5131 - val_auc: 0.9037
Epoch 9/25
408/408 - 142s - loss: 0.4275 - auc: 0.8990 - val_loss: 0.3979 - val_auc: 0.8893
Epoch 10/25
408/408 - 142s - loss: 0.4374 - auc: 0.8914 - val_loss: 0.8738 - val_auc: 0.8437
Epoch 11/25
408/408 - 142s - loss: 0.4223 - auc: 0.9046 - val_loss: 0.3543 - val_auc: 0.8808
Epoch 12/25
408/408 - 142s - loss: 0.4182 - auc: 0.9067 - val_loss: 0.3024 - val_auc: 0.8862
Epoch 13/25
408/408 - 144s - loss: 0.4118 - auc: 0.9144 - val_loss: 1.0908 - val_auc: 0.8471
Epoch 14/25
408/408 - 143s - loss: 0.4006 - auc: 0.9190 - val_loss: 0.4312 - val_auc: 0.8784
Epoch 15/25
408/408 - 142s - loss: 0.3657 - auc: 0.9344 - val_loss: 0.2889 - val_auc: 0.8589
Epoch 16/25
408/408 - 142s - loss: 0.3656 - auc: 0.9353 - val_loss: 0.4130 - val_auc: 0.8836
Epoch 17/25
408/408 - 142s - loss: 0.3435 - auc: 0.9453 - val_loss: 0.3969 - val_auc: 0.8719
Epoch 18/25
408/408 - 142s - loss: 0.3292 - auc: 0.9503 - val_loss: 0.5234 - val_auc: 0.8915
Epoch 00018: early stopping
Last model evaluation...
Last model inference (TTA 25 steps)...
Best model evaluation...
Best model inference (TTA 25 steps)...
FOLD: 5
Epoch 1/25
408/408 - 138s - loss: 1.1062 - auc: 0.7312 - val_loss: 0.1750 - val_auc: 0.4583
Epoch 2/25
408/408 - 138s - loss: 0.5368 - auc: 0.8487 - val_loss: 0.2057 - val_auc: 0.5969
Epoch 3/25
408/408 - 141s - loss: 0.5105 - auc: 0.8574 - val_loss: 0.5717 - val_auc: 0.8518
Epoch 4/25
408/408 - 140s - loss: 0.5071 - auc: 0.8597 - val_loss: 0.4976 - val_auc: 0.8366
Epoch 5/25
408/408 - 146s - loss: 0.5105 - auc: 0.8546 - val_loss: 0.3420 - val_auc: 0.8437
Epoch 6/25
408/408 - 140s - loss: 0.4721 - auc: 0.8772 - val_loss: 0.3763 - val_auc: 0.8288
Epoch 7/25
408/408 - 141s - loss: 0.4629 - auc: 0.8822 - val_loss: 0.4596 - val_auc: 0.8585
Epoch 8/25
408/408 - 143s - loss: 0.4467 - auc: 0.8933 - val_loss: 0.5264 - val_auc: 0.8700
Epoch 9/25
408/408 - 142s - loss: 0.4328 - auc: 0.8978 - val_loss: 0.4177 - val_auc: 0.8681
Epoch 10/25
408/408 - 142s - loss: 0.4295 - auc: 0.9040 - val_loss: 0.2681 - val_auc: 0.8794
Epoch 11/25
408/408 - 143s - loss: 0.4001 - auc: 0.9158 - val_loss: 0.3823 - val_auc: 0.8509
Epoch 12/25
408/408 - 141s - loss: 0.4058 - auc: 0.9154 - val_loss: 0.3573 - val_auc: 0.8603
Epoch 13/25
408/408 - 141s - loss: 0.3936 - auc: 0.9226 - val_loss: 0.4060 - val_auc: 0.8714
Epoch 14/25
408/408 - 142s - loss: 0.3788 - auc: 0.9292 - val_loss: 0.5670 - val_auc: 0.8686
Epoch 15/25
408/408 - 141s - loss: 0.3713 - auc: 0.9336 - val_loss: 0.5811 - val_auc: 0.8503
Epoch 16/25
408/408 - 143s - loss: 0.3567 - auc: 0.9418 - val_loss: 0.2765 - val_auc: 0.8795
Epoch 17/25
408/408 - 142s - loss: 0.3378 - auc: 0.9480 - val_loss: 0.5983 - val_auc: 0.8812
Epoch 18/25
408/408 - 141s - loss: 0.3140 - auc: 0.9601 - val_loss: 0.2750 - val_auc: 0.8484
Epoch 19/25
408/408 - 142s - loss: 0.3175 - auc: 0.9590 - val_loss: 0.3299 - val_auc: 0.8727
Epoch 20/25
408/408 - 141s - loss: 0.2859 - auc: 0.9698 - val_loss: 0.3239 - val_auc: 0.8892
Epoch 21/25
408/408 - 143s - loss: 0.2695 - auc: 0.9745 - val_loss: 0.2528 - val_auc: 0.8865
Epoch 22/25
408/408 - 144s - loss: 0.2526 - auc: 0.9795 - val_loss: 0.2476 - val_auc: 0.8705
Epoch 23/25
408/408 - 141s - loss: 0.2444 - auc: 0.9823 - val_loss: 0.2859 - val_auc: 0.8829
Epoch 24/25
408/408 - 143s - loss: 0.2326 - auc: 0.9857 - val_loss: 0.2601 - val_auc: 0.8804
Epoch 25/25
408/408 - 140s - loss: 0.2301 - auc: 0.9853 - val_loss: 0.2612 - val_auc: 0.8816
Last model evaluation...
Last model inference (TTA 25 steps)...
Best model evaluation...
Best model inference (TTA 25 steps)...
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Model loss graph
|
for n_fold in range(config['N_USED_FOLDS']):
print(f'Fold: {n_fold + 1}')
plot_metrics(history_list[n_fold])
|
Fold: 1
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Model loss graph aggregated
|
plot_metrics_agg(history_list, config['N_USED_FOLDS'])
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Model evaluation (best)
|
display(evaluate_model(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Model evaluation (last)
|
display(evaluate_model(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Confusion matrix
|
for n_fold in range(config['N_USED_FOLDS']):
n_fold += 1
pred_col = f'pred_fold_{n_fold}'
train_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'train']
valid_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'validation']
print(f'Fold: {n_fold}')
plot_confusion_matrix(train_set['target'], np.round(train_set[pred_col]),
valid_set['target'], np.round(valid_set[pred_col]))
|
Fold: 1
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Visualize predictions
|
k_fold['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
k_fold['pred'] += k_fold[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Label/prediction distribution')
print(f"Train positive labels: {len(k_fold[k_fold['target'] > .5])}")
print(f"Train positive predictions: {len(k_fold[k_fold['pred'] > .5])}")
print(f"Train positive correct predictions: {len(k_fold[(k_fold['target'] > .5) & (k_fold['pred'] > .5)])}")
print('Top 10 samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
|
Label/prediction distribution
Train positive labels: 581
Train positive predictions: 2647
Train positive correct predictions: 578
Top 10 samples
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Visualize test predictions
|
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print(f"Test predictions (last) {len(test[test['target_last'] > .5])}|{len(test[test['target_last'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
print('Top 10 positive samples (last)')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target_last > .5').head(10))
|
Test predictions 1506|9476
Test predictions (last) 1172|9810
Top 10 samples
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
Test set predictions
|
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
submission['target_last'] = test['target_last']
submission['target_blend'] = (test['target'] * .5) + (test['target_last'] * .5)
display(submission.head(10))
display(submission.describe())
### BEST ###
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
### LAST ###
submission_last = submission[['image_name', 'target_last']]
submission_last.columns = ['image_name', 'target']
submission_last.to_csv('submission_last.csv', index=False)
### BLEND ###
submission_blend = submission[['image_name', 'target_blend']]
submission_blend.columns = ['image_name', 'target']
submission_blend.to_csv('submission_blend.csv', index=False)
|
_____no_output_____
|
MIT
|
Model backlog/Train/64-melanoma-5fold-seresnet18-radam.ipynb
|
dimitreOliveira/melanoma-classification
|
CTA data analysis with Gammapy Introduction**This notebook shows an example of how to make a sky image and spectrum for simulated CTA data with Gammapy.**The dataset we will use is three observation runs on the Galactic center. This is a tiny (and thus quick to process, play with, and learn from) subset of the simulated CTA dataset that was produced for the first data challenge in August 2017. SetupAs usual, we'll start with some setup ...
|
%matplotlib inline
import matplotlib.pyplot as plt
!gammapy info --no-envvar --no-system
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.convolution import Gaussian2DKernel
from regions import CircleSkyRegion
from gammapy.modeling import Fit
from gammapy.data import DataStore
from gammapy.datasets import (
Datasets,
FluxPointsDataset,
SpectrumDataset,
MapDataset,
)
from gammapy.modeling.models import (
PowerLawSpectralModel,
SkyModel,
GaussianSpatialModel,
)
from gammapy.maps import MapAxis, WcsNDMap, WcsGeom, RegionGeom
from gammapy.makers import (
MapDatasetMaker,
SafeMaskMaker,
SpectrumDatasetMaker,
ReflectedRegionsBackgroundMaker,
)
from gammapy.estimators import TSMapEstimator, FluxPointsEstimator
from gammapy.estimators.utils import find_peaks
from gammapy.visualization import plot_spectrum_datasets_off_regions
# Configure the logger, so that the spectral analysis
# isn't so chatty about what it's doing.
import logging
logging.basicConfig()
log = logging.getLogger("gammapy.spectrum")
log.setLevel(logging.ERROR)
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Select observationsA Gammapy analysis usually starts by creating a `~gammapy.data.DataStore` and selecting observations.This is shown in detail in the other notebook; here we just pick three observations near the Galactic center.
|
data_store = DataStore.from_dir("$GAMMAPY_DATA/cta-1dc/index/gps")
# Just as a reminder: this is how to select observations
# from astropy.coordinates import SkyCoord
# table = data_store.obs_table
# pos_obs = SkyCoord(table['GLON_PNT'], table['GLAT_PNT'], frame='galactic', unit='deg')
# pos_target = SkyCoord(0, 0, frame='galactic', unit='deg')
# offset = pos_target.separation(pos_obs).deg
# mask = (1 < offset) & (offset < 2)
# table = table[mask]
# table.show_in_browser(jsviewer=True)
obs_id = [110380, 111140, 111159]
observations = data_store.get_observations(obs_id)
obs_cols = ["OBS_ID", "GLON_PNT", "GLAT_PNT", "LIVETIME"]
data_store.obs_table.select_obs_id(obs_id)[obs_cols]
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Make sky images Define map geometrySelect the target position and define an ON region for the spectral analysis
|
axis = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0), npix=(500, 400), binsz=0.02, frame="galactic", axes=[axis]
)
geom
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Compute imagesThe exclusion mask defined below is not needed for the sky images themselves, but it is used later in this notebook for the reflected-regions background estimation in the spectrum section.
|
target_position = SkyCoord(0, 0, unit="deg", frame="galactic")
on_radius = 0.2 * u.deg
on_region = CircleSkyRegion(center=target_position, radius=on_radius)
exclusion_mask = geom.to_image().region_mask([on_region], inside=False)
exclusion_mask = WcsNDMap(geom.to_image(), exclusion_mask)
exclusion_mask.plot();
%%time
stacked = MapDataset.create(geom=geom)
stacked.edisp = None
maker = MapDatasetMaker(selection=["counts", "background", "exposure", "psf"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=2.5 * u.deg)
for obs in observations:
cutout = stacked.cutout(obs.pointing_radec, width="5 deg")
dataset = maker.run(cutout, obs)
dataset = maker_safe_mask.run(dataset, obs)
stacked.stack(dataset)
# The maps are cubes, with an energy axis.
# Let's also make some images:
dataset_image = stacked.to_image()
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Show imagesLet's have a quick look at the images we computed ...
|
dataset_image.counts.smooth(2).plot(vmax=5);
dataset_image.background.plot(vmax=5);
dataset_image.excess.smooth(3).plot(vmax=2);
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Source DetectionUse the class `~gammapy.estimators.TSMapEstimator` and function `gammapy.estimators.utils.find_peaks` to detect sources on the images. We search for 0.05 deg sigma Gaussian sources in the dataset, matching the spatial model defined below.
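As a rough aside (standard likelihood-ratio reasoning, not a Gammapy-specific detail): at each position the test statistic compares the fit with and without a source, TS = 2 * (ln L_src+bkg - ln L_bkg), and for a single free parameter sqrt(TS) is approximately the detection significance in sigma by Wilks' theorem. This is why the peak search below is run on the sqrt_ts map with a threshold of 5.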
|
spatial_model = GaussianSpatialModel(sigma="0.05 deg")
spectral_model = PowerLawSpectralModel(index=2)
model = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)
ts_image_estimator = TSMapEstimator(
model,
kernel_width="0.5 deg",
selection_optional=[],
downsampling_factor=2,
sum_over_energy_groups=False,
energy_edges=[0.1, 10] * u.TeV,
)
%%time
images_ts = ts_image_estimator.run(stacked)
sources = find_peaks(
images_ts["sqrt_ts"],
threshold=5,
min_distance="0.2 deg",
)
sources
source_pos = SkyCoord(sources["ra"], sources["dec"])
source_pos
# Plot sources on top of significance sky image
images_ts["sqrt_ts"].plot(add_cbar=True)
plt.gca().scatter(
source_pos.ra.deg,
source_pos.dec.deg,
transform=plt.gca().get_transform("icrs"),
color="none",
edgecolor="white",
marker="o",
s=200,
lw=1.5,
);
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Spatial analysisSee other notebooks for how to run a 3D cube or 2D image based analysis. SpectrumWe'll run a spectral analysis using the classical reflected-regions background estimation method, and using the on-off (often called WSTAT) likelihood function.
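Schematically (a simplified sketch, not the exact Gammapy implementation): in each energy bin the ON counts are modelled as n_on ~ Poisson(mu_sig + alpha * mu_bkg) and the OFF counts as n_off ~ Poisson(mu_bkg), where alpha is the ON/OFF acceptance ratio. WSTAT is the fit statistic obtained by analytically profiling the unknown background level mu_bkg out of this likelihood, so only the source parameters remain free in the fit.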
|
energy_axis = MapAxis.from_energy_bounds(0.1, 40, 40, unit="TeV", name="energy")
energy_axis_true = MapAxis.from_energy_bounds(
0.05, 100, 200, unit="TeV", name="energy_true"
)
geom = RegionGeom.create(region=on_region, axes=[energy_axis])
dataset_empty = SpectrumDataset.create(
geom=geom, energy_axis_true=energy_axis_true
)
dataset_maker = SpectrumDatasetMaker(
containment_correction=False, selection=["counts", "exposure", "edisp"]
)
bkg_maker = ReflectedRegionsBackgroundMaker(exclusion_mask=exclusion_mask)
safe_mask_masker = SafeMaskMaker(methods=["aeff-max"], aeff_percent=10)
%%time
datasets = Datasets()
for observation in observations:
dataset = dataset_maker.run(
dataset_empty.copy(name=f"obs-{observation.obs_id}"), observation
)
dataset_on_off = bkg_maker.run(dataset, observation)
dataset_on_off = safe_mask_masker.run(dataset_on_off, observation)
datasets.append(dataset_on_off)
plt.figure(figsize=(8, 8))
_, ax, _ = dataset_image.counts.smooth("0.03 deg").plot(vmax=8)
on_region.to_pixel(ax.wcs).plot(ax=ax, edgecolor="white")
plot_spectrum_datasets_off_regions(datasets, ax=ax)
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Model fitThe next step is to fit a spectral model, using all data (i.e. a "global" fit, using all energies).
|
%%time
spectral_model = PowerLawSpectralModel(
index=2, amplitude=1e-11 * u.Unit("cm-2 s-1 TeV-1"), reference=1 * u.TeV
)
model = SkyModel(spectral_model=spectral_model, name="source-gc")
datasets.models = model
fit = Fit(datasets)
result = fit.run()
print(result)
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Spectral pointsFinally, let's compute spectral points. The method used is to first choose an energy binning, and then to do a 1-dim likelihood fit / profile to compute the flux and flux error.
|
# Flux points are computed on stacked observation
stacked_dataset = datasets.stack_reduce(name="stacked")
print(stacked_dataset)
energy_edges = MapAxis.from_energy_bounds("1 TeV", "30 TeV", nbin=5).edges
stacked_dataset.models = model
fpe = FluxPointsEstimator(energy_edges=energy_edges, source="source-gc")
flux_points = fpe.run(datasets=[stacked_dataset])
flux_points.table_formatted
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
PlotLet's plot the spectral model and points. You could do it directly, but for convenience we bundle the model and the flux points in a `FluxPointDataset`:
|
flux_points_dataset = FluxPointsDataset(data=flux_points, models=model)
flux_points_dataset.plot_fit();
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Exercises* Re-run the analysis above, varying some analysis parameters, e.g. * Select a few other observations * Change the energy band for the map * Change the spectral model for the fit * Change the energy binning for the spectral points* Change the target. Make a sky image and spectrum for your favourite source. * If you don't know any, the Crab nebula is the "hello world!" analysis of gamma-ray astronomy.
|
# print('hello world')
# SkyCoord.from_name('crab')
|
_____no_output_____
|
BSD-3-Clause
|
docs/tutorials/cta_data_analysis.ipynb
|
Jaleleddine/gammapy
|
Do you want to know which fires broke out after 15 September 2019?
|
mes = australia_1[(australia_1["acq_date"]>= "2019-09-15")]
mes.head()
mes.describe()
map_sett = folium.Map([-25.274398,133.775136], zoom_start=4)
lat_3 = mes["latitude"].values.tolist()
long_3 = mes["longitude"].values.tolist()
australia_cluster_3 = MarkerCluster().add_to(map_sett)
for lat_3,long_3 in zip(lat_3,long_3):
folium.Marker([lat_3,long_3]).add_to(australia_cluster_3)
map_sett
|
_____no_output_____
|
BSD-3-Clause
|
courses/08_Plotly_Bokeh/Fire_Australia19.ipynb
|
visiont3lab/data-visualization
|
Play with Folium
|
# Coordinates of interest: 44.4807035, 11.3712528
import folium
m1 = folium.Map(location=[44.48, 11.37], tiles='openstreetmap', zoom_start=18)
m1.save('map1.html')  # folium maps are saved as interactive HTML files
m1
# Note: the original cell called m3.save("filename.png"), but m3 was never defined and
# folium cannot write PNG directly; exporting an image requires a browser screenshot
# (e.g. via selenium).
|
_____no_output_____
|
BSD-3-Clause
|
courses/08_Plotly_Bokeh/Fire_Australia19.ipynb
|
visiont3lab/data-visualization
|
from google.colab import drive
drive.mount('/content/drive')
|
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
|
CombineInator (parent class)
|
class CombineInator:
def __init__(self):
self.source = ""
def translate_model(self, source):
if source == "en":
tokenizer_trs = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-trk")
model_trs = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-trk")
pipe_trs = "translation_en_to_trk"
elif source == "tr":
tokenizer_trs = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tr-en")
model_trs = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-tr-en")
pipe_trs = "translation_tr_to_en"
return model_trs, tokenizer_trs, pipe_trs
def translate(self, pipe, model, tokenizer, response):
translator = pipeline(pipe, model=model, tokenizer=tokenizer)
# translate the obtained sentences into the target language:
trans = translator(response)[0]["translation_text"]
return trans
|
_____no_output_____
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
WikiWebScraper (child)
|
import requests
import re
from bs4 import BeautifulSoup
from tqdm import tqdm
from os.path import exists, basename, splitext
class WikiWebScraper(CombineInator):
def __init__(self):
self.__HEADERS_PARAM = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"}
def category_scraping_interface(self, CATEGORY_QUERY, LIMIT, SAVE_PATH, PAGE_PER_SAVE, REMOVE_NUMBERS, JUST_TITLE_ANALYSIS, TEXT_INTO_SENTENCES_PARAM):
"""
Kategorik verilerin ayıklanma işlemleri bu fonksiyonda yönetilir.
:param CATEGORY_QUERY: Ayıklanacak kategori sorgusu.
:type CATEGORY_QUERY: str
:param SAVE_PATH: Ayıklanan verinin kaydedileceği yol.
:type SAVE_PATH: str
:param LIMIT: Ayıklanması istenen veri limiti. Verilmediği taktirde tüm verileri çeker.
:type LIMIT: int
:param PAGE_PER_SAVE: Belirlenen aralıkla ayıklanan kategorik verinin kaydedilmesini sağlar.
:type PAGE_PER_SAVE: int
:param TEXT_INTO_SENTENCES_PARAM: Ayıklanan verilerin cümleler halinde mi, yoksa metin halinde mi kaydedileceğini belirler.
:type TEXT_INTO_SENTENCES_PARAM: bool
:param REMOVE_NUMBERS: Ayıklanan verilerden rakamların silinip silinmemesini belirler.
:type REMOVE_NUMBERS: bool
:param JUST_TITLE_ANALYSIS: Sadece sayfaların başlık bilgilerinin toplanmasını sağlar.
:type JUST_TITLE_ANALYSIS: bool
"""
sub_list = []
page_list = []
text_list = []
page_list, sub_list = self.first_variable(CATEGORY_QUERY, (LIMIT - len(text_list)))
fv = True
if page_list and sub_list is not None:
with tqdm(total=LIMIT, desc="Sayfa taranıyor.") as pbar:
while len(page_list) < LIMIT:
if fv is True:
pbar.update(len(page_list))
fv = False
temp_soup = ""
if len(sub_list) == 0:
break
temp_soup = self.sub_scraper(sub_list[0])
if (temp_soup == False):
break
del sub_list[0]
sub_list = sub_list + self.sub_category_scraper(temp_soup)
temp_page_scraper = self.page_scraper(temp_soup, (LIMIT - len(page_list)))
if temp_page_scraper is not None:
for i in temp_page_scraper:
if i not in page_list:
page_list.append(i)
pbar.update(1)
if len(sub_list) == 0:
sub_list = sub_list + self.sub_category_scraper(temp_soup)
temp_range = 0
loop_counter = 0
if JUST_TITLE_ANALYSIS is False:
for i in range(PAGE_PER_SAVE, len(page_list)+PAGE_PER_SAVE, PAGE_PER_SAVE):
if loop_counter == (len(page_list) // PAGE_PER_SAVE):
PATH = SAVE_PATH + "/" + CATEGORY_QUERY + "_" + str(temp_range) + " - " + str(len(page_list)) + ".txt"
temp_text_list = self.text_into_sentences(self.text_scraper(page_list[temp_range:i], (len(page_list) % PAGE_PER_SAVE)), REMOVE_NUMBERS,TEXT_INTO_SENTENCES_PARAM)
else:
PATH = SAVE_PATH + "/" + CATEGORY_QUERY + "_" + str(temp_range) + " - " + str(i) + ".txt"
temp_text_list = self.text_into_sentences(self.text_scraper(page_list[temp_range:i], PAGE_PER_SAVE), REMOVE_NUMBERS, TEXT_INTO_SENTENCES_PARAM)
text_list += temp_text_list
self.save_to_csv(PATH, temp_text_list)
temp_range = i
loop_counter += 1
print("\n\n"+str(len(page_list)) + " adet sayfa bulundu ve içerisinden " + str(len(text_list)) + " satır farklı metin ayrıştırıldı.")
return text_list
else:
PATH = SAVE_PATH + "/" + CATEGORY_QUERY + "_" + str(len(page_list)) + "_page_links" + ".txt"
self.save_to_csv(PATH, page_list, JUST_TITLE_ANALYSIS)
print("\n\n"+str(len(page_list)) + " adet sayfa bulundu ve sayfaların adresleri \"" + PATH + "\" konumunda kaydedildi.")
return page_list
else:
print("Aranan kategori bulunamadı.")
def categorical_scraper(self, CATEGORY_QUERY, save_path, LIMIT=-1, page_per_save=10000, text_into_sentences_param=True, remove_numbers=False, just_title_analysis=False):
"""
Wikipedia üzerinden kategorik olarak veri çekmek için kullanılır.
:param CATEGORY_QUERY: Ayıklanacak kategori sorgusu.
:type CATEGORY_QUERY: str
:param save_path: Ayıklanan verinin kaydedileceği yol.
:type save_path: str
:param LIMIT: Ayıklanması istenen veri limiti. Verilmediği taktirde tüm verileri çeker.
:type LIMIT: int
:param page_per_save: Belirlenen aralıkla ayıklanan kategorik verinin kaydedilmesini sağlar.
:type page_per_save: int
:param text_into_sentences_param: Ayıklanan verilerin cümleler halinde mi, yoksa metin halinde mi kaydedileceğini belirler.
:type text_into_sentences_param: bool
:param remove_numbers: Ayıklanan verilerden rakamların silinip silinmemesini belirler.
:type remove_numbers: bool
:param just_title_analysis: Sadece sayfaların başlık bilgilerinin toplanmasını sağlar.
:type just_title_analysis: bool
"""
if LIMIT == -1:
LIMIT = 9999999
CATEGORY_QUERY = CATEGORY_QUERY.replace(" ","_")
return_list = self.category_scraping_interface(CATEGORY_QUERY, LIMIT, save_path, page_per_save, remove_numbers, just_title_analysis, text_into_sentences_param)
if return_list is None:
return []
else:
return return_list
def text_scraper_from_pagelist(self, page_list_path, save_path, page_per_save=10000, remove_numbers=False, text_into_sentences_param=True, RANGE=None):
"""
Wikipedia üzerinden kategorik olarak veri çekmek için kullanılır.
:param page_list_path: Toplanan sayfaların başlık bilgilerinin çıkartılmasını sağlar
:type page_list_path: str
:param save_path: Ayıklanan verinin kaydedileceği yol.
:type save_path: str
:param page_per_save: Belirlenen aralıkla ayıklanan kategorik verinin kaydedilmesini sağlar.
:type page_per_save: int
:param text_into_sentences_param: Ayıklanan verilerin cümleler halinde mi, yoksa metin halinde mi kaydedileceğini belirler.
:type text_into_sentences_param: bool
:param remove_numbers: Ayıklanan verilerden rakamların silinip silinmemesini belirler.
:type remove_numbers: bool
:param RANGE: Ayıklnacak verilerin aralığını belirler. "RANGE = [500,1000]" şeklinde kullanılır. Verilmediği zaman tüm veri ayıklanır.
:type RANGE: list
"""
page_list = []
text_list = []
with open(page_list_path, 'r') as f:
page_list = [line.strip() for line in f]
if RANGE is not None:
page_list = page_list[RANGE[0]:RANGE[1]]
temp_range = 0
loop_counter = 0
for i in range(page_per_save, len(page_list)+page_per_save, page_per_save):
if loop_counter == (len(page_list) // page_per_save):
PATH = save_path + "/" + "scraped_page" + "_" + str(temp_range) + " - " + str(len(page_list)) + ".txt"
temp_text_list = self.text_into_sentences(self.text_scraper(page_list[temp_range:i], (len(page_list) % page_per_save), True), remove_numbers, text_into_sentences_param)
else:
PATH = save_path + "/" + "scraped_page" + "_" + str(temp_range) + " - " + str(i) + ".txt"
temp_text_list = self.text_into_sentences(self.text_scraper(page_list[temp_range:i], page_per_save, True), remove_numbers, text_into_sentences_param)
text_list += temp_text_list
self.save_to_csv(PATH, temp_text_list)
temp_range = i
loop_counter += 1
print("\n\"" + page_list_path + "\" konumundaki " + str(len(page_list)) + " adet sayfa içerisinden " + str(len(text_list)) + " satır metin ayrıştırıldı.")
return text_list
def page_scraper(self, page_soup, LIMIT):
"""
Gönderilen wikipedia SOUP objesinin içerisindeki kategorik içerik sayfaları döndürür.
:param page_soup: Wikipedia kategori sayfasının SOUP objesidir.
:param LIMIT: Ayıklanacaj sayfa limitini belirler.
:type LIMIT: int
"""
page_list = []
try:
pages = page_soup.find("div", attrs={"id": "mw-pages"}).find_all("a")
for page in pages[1:]:
if len(page_list) == LIMIT:
break
else:
page_list.append([page.text, page["href"]])
return page_list
except:
pass
def sub_category_scraper(self, sub_soup):
"""
Gönderilen wikipedia SOUP objesinin içerisindeki alt kategorileri döndürür.
:param sub_soup: Alt kategori sayfasının SOUP objesidir.
"""
sub_list = []
try:
sub_categories = sub_soup.find_all("div", attrs={"class": "CategoryTreeItem"})
for sub in sub_categories[1:]:
sub_list.append([sub.a.text, sub.a["href"]])
return sub_list
except:
print("Aranan kategori için yeterli sayfa bulunamadı.")
def sub_scraper(self, sub):
"""
Fonksiyona gelen wikipedia kategori/alt kategorisinin SOUP objesini döndürür.
:param sub: Alt kategori sayfasının linkini içerir.
"""
try:
req = requests.get("https://tr.wikipedia.org" + str(sub[1]), headers=self.__HEADERS_PARAM)
soup = BeautifulSoup(req.content, "lxml")
return soup
except:
print("\nAlt kategori kalmadı")
return False
def text_scraper(self, page_list, LIMIT, IS_FROM_TXT=False):
"""
Önceden ayıklanmış sayfa listesini içerisindeki sayfaları ayıklayarak içerisindeki metin listesini döndürür.
:param page_list: Sayfa listesini içerir.
:parama LIMIT: Ayıklanacaj sayfa limitini belirler.
:type LIMIT: int
:param IS_FROM_TXT: Ayıklanacak sayfanın listeleden mi olup olmadığını kontrol eder.
"""
text_list = []
with tqdm(total=LIMIT, desc="Sayfa Ayrıştırılıyor") as pbar:
for page in page_list:
if len(text_list) == LIMIT:
break
if IS_FROM_TXT is False:
req = requests.get("https://tr.wikipedia.org" + str(page[1]), headers=self.__HEADERS_PARAM)
else:
req = requests.get("https://tr.wikipedia.org" + str(page), headers=self.__HEADERS_PARAM)
soup = BeautifulSoup(req.content, "lxml")
page_text = soup.find_all("p")
temp_text = ""
for i in page_text[1:]:
temp_text = temp_text + i.text
text_list.append(temp_text)
pbar.update(1)
return text_list
def first_variable(self, CATEGORY_QUERY, LIMIT):
"""
Sorguda verilen kategorinin doğruluğunu kontrol eder ve eğer sorgu doğru ise ilk değerleri ayıklar.
:param CATEGORY_QUERY: Ayıklanacak kategori sorgusu.
:type CATEGORY_QUERY: str
:param LIMIT: Ayıklanması istenen veri limiti. Verilmediği taktirde tüm verileri çeker.
:type LIMIT: int
"""
first_req = requests.get("https://tr.wikipedia.org/wiki/Kategori:" + CATEGORY_QUERY, headers=self.__HEADERS_PARAM)
first_soup = BeautifulSoup(first_req.content, "lxml")
page_list = self.page_scraper(first_soup, LIMIT)
sub_list = self.sub_category_scraper(first_soup)
return page_list, sub_list
def text_into_sentences(self, texts, remove_numbers, text_into_sentences_param):
"""
Metin verilerini cümlelerine ayıklar.
:param texts: Düzlenecek metin verisi.
:param remove_numbers: Sayıların temizlenip temizlenmeyeceğini kontrol eder.
:param text_into_sentences_param: Metinlerin cümlelere çevrilip çevrilmeyeceğini kontrol eder.
"""
flatlist = []
sent_list = []
texts = self.sentence_cleaning(texts, remove_numbers)
if text_into_sentences_param is True:
for line in texts:
temp_line = re.split(r'(?<![IVX0-9]\S)(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s', line)
for i in temp_line:
if len(i.split(" ")) > 3:
sent_list.append(i)
else:
sent_list = texts
flatlist = list(dict.fromkeys(self.flat(sent_list, flatlist)))
return flatlist
def flat(self, sl,fl):
"""
Metinler, cümlelerine ayırıldıktan sonra listenin düzlenmesine yarar.
:param sl: Yollanan listle.
:param fl: Düzlemem liste.
"""
for e in sl:
if type(e) == list:
self.flat(e, fl)
elif len(e.split(" "))>3:
fl.append(e)
return fl
def sentence_cleaning(self, sentences, remove_numbers):
"""
Ayıklanan wikipedia verilerinin temizlenmesi bu fonksiyonda gerçekleşir.
:param sentences: Temizlenmek için gelen veri seti.
:param remove_numbers: Sayıların temizlenip temizlenmeyeceğini kontrol eder.
"""
return_list = []
if remove_numbers is False:
removing_func = '[^[a-zA-ZğüışöçĞÜIİŞÖÇ0-9.,!:;`?%&\-\'" ]'
else:
removing_func = '[^[a-zA-ZğüışöçĞÜIİŞÖÇ.,!:;`?%&\-\'" ]'
for input_text in sentences:
try:
input_text = re.sub(r'(\[.*?\])', '', input_text)
input_text = re.sub(r'(\(.*?\))', '', input_text)
input_text = re.sub(r'(\{.*?\})', '', input_text)
input_text = re.sub(removing_func, '', input_text)
input_text = re.sub("(=+(\s|.)*)", "", input_text)
input_text = re.sub("(\s{2,})", "", input_text)
input_text = input_text.replace("''", "")
input_text = input_text.replace("\n", "")
return_list.append(input_text)
except:
pass
return return_list
def save_to_csv(self, PATH, data, is_just_title_analysis=False):
"""
Verilerin 'csv' formatında kaydedilmesini bu fonksiyonda gerçekleşir.
:param PATH: Kaydedilecek yol.
:param data: Kaydedilecek veri.
:param is_just_title_analysis: Sadece analiz yapılıp yapılmadığını kontrol eder.
"""
if is_just_title_analysis is False:
with open(PATH, "w") as output:
for i in data:
output.write(i+"\n")
else:
temp_data = []
for i in data:
temp_data.append(i[1])
with open(PATH, "w") as output:
for i in temp_data:
output.write(i+"\n")
|
_____no_output_____
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
Example usage
|
library = WikiWebScraper()
PATH = "/content/"
library.categorical_scraper("savaş", PATH, 20, text_into_sentences_param=False)
|
Sayfa taranıyor.: 100%|██████████| 20/20 [00:00<00:00, 52.99it/s]
Sayfa Ayrıştırılıyor: 100%|██████████| 20/20 [00:04<00:00, 4.68it/s]
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
speechModule (child)
|
!pip install transformers
!pip install simpletransformers
from os import path
from IPython.display import Audio
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM, Wav2Vec2Processor, Wav2Vec2ForCTC
import librosa
import torch
class speechModule(CombineInator):
def __init__(self):
self.SAMPLING_RATE = 16_000
self.git_repo_url = 'https://github.com/CorentinJ/Real-Time-Voice-Cloning.git'
self.project_name = splitext(basename(self.git_repo_url))[0]
def get_repo(self):
"""
Metinin sese çevrilmesi sırasında kullanılacak ses klonlama kütüphanesini çeker.
"""
if not exists(self.project_name):
# clone and install
!git clone -q --recursive {self.git_repo_url}
# install dependencies
!cd {self.project_name} && pip install -q -r requirements.txt
!pip install -q gdown
!apt-get install -qq libportaudio2
!pip install -q https://github.com/tugstugi/dl-colab-notebooks/archive/colab_utils.zip
# download pretrained model
!cd {self.project_name} && wget https://github.com/blue-fish/Real-Time-Voice-Cloning/releases/download/v1.0/pretrained.zip && unzip -o pretrained.zip
from sys import path as syspath
syspath.append(self.project_name)
def wav2vec_model(self, source):
"""
Sesin metne çevrilmesi sırasında kullanılacak ilgili dile göre wav2vec modelini belirler.
:param source: ses dosyası dili ("tr" / "en")
:type source: str
"""
processor = None
model = None
if source == "en":
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h")
elif source =="tr":
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-turkish")
return model, processor
def speech2text(self, audio_file, model, processor, language):
"""
Girdi olarak verilen sesi metne çevirir.
:param audio_file: ses dosyasının yer aldığı dizin
type audio_file: str
:param model: sesin metne çevrilmesi esnasında kullanılacak huggingface kütüphanesinden çekilen model
:param processor: sesin metne çevrilmesi esnasında kullanılacak huggingface kütüphanesinden çekilen istemci
:param language: girdi olarak verilen ses doyasının dili ("tr" / "en")
:type language: str
"""
#load any audio file of your choice
speech, rate = librosa.load(audio_file, sr=self.SAMPLING_RATE)
input_values = processor(speech, sampling_rate=self.SAMPLING_RATE, return_tensors = 'pt').input_values
#Store logits (non-normalized predictions)
logits = model(input_values).logits
#Store predicted id's
predicted_ids = torch.argmax(logits, dim =-1)
#decode the audio to generate text
response = processor.decode(predicted_ids[0]).lower()
if language == "en":
response = ">>tur<< " + response
return response
def text2speech(self, audio, translation):
"""
Metini sese çevirir.
:param audio: klonlanacak ses doyasının yer aldığı dizin
:type audio: str
:param translation: çevirisi yapılmış metin
:type translation: str
"""
from numpy import pad as pad
from synthesizer.inference import Synthesizer
from encoder import inference as encoder
from vocoder import inference as vocoder
from pathlib import Path
encoder.load_model(self.project_name / Path("encoder/saved_models/pretrained.pt"))
synthesizer = Synthesizer(self.project_name / Path("synthesizer/saved_models/pretrained/pretrained.pt"))
vocoder.load_model(self.project_name / Path("vocoder/saved_models/pretrained/pretrained.pt"))
embedding = encoder.embed_utterance(encoder.preprocess_wav(audio, self.SAMPLING_RATE))
specs = synthesizer.synthesize_spectrograms([translation], [embedding])
generated_wav = vocoder.infer_waveform(specs[0])
generated_wav = pad(generated_wav, (0, self.SAMPLING_RATE), mode="constant")
return Audio(generated_wav, rate=self.SAMPLING_RATE, autoplay=True)
def speech2text2trans2speech(self, filename:str, source_lang:str, output_type:str = "text"):
"""
Aldığı ses dosyasını text'e dönüştürüp, hedeflenen dile çeviren ve çevirdiği metni ses olarak
döndüren fonksiyon.
:param filename: Ses dosyasının adı
:type filename: str
:param lang: Ses dosyası dili ("en"/"tr")
:type lang: str
"""
output_types = ["text", "speech"]
source_languages = ["en", "tr"]
if source_lang not in source_languages:
print("Kaynak dil olarak yalnızca 'en' ve 'tr' parametreleri kullanılabilir.")
return None
if output_type not in output_types:
print("Çıkış türü için yalnızca 'text' ve 'speech' parametreleri desteklenmektedir.")
return None
if source_lang == "en" and output_type=="speech":
print("Üzgünüz, text2speech modülümüzde Türkçe dil desteği bulunmamaktadır.\n")
return None
model_trs, tokenizer_trs, pipe_trs = CombineInator.translate_model(self, source_lang)
model_s2t, processor_s2t = self.wav2vec_model(source_lang)
input_text = self.speech2text(filename, model_s2t, processor_s2t, source_lang)
print(input_text)
translation = CombineInator.translate(self, pipe_trs, model_trs, tokenizer_trs, input_text)
if output_type == "text":
return translation
else:
print("\n" + translation + "\n")
return self.text2speech(filename, translation)
|
_____no_output_____
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
Example usage
|
filename = "_path_to_wav_file" # path to the wav file must be provided here
speechM = speechModule()
speechM.get_repo()
speechM.speech2text2trans2speech(filename, "tr", "speech")
|
_____no_output_____
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
Lxmert (child)
|
!git clone https://github.com/hila-chefer/Transformer-MM-Explainability
import os
os.chdir(f'./Transformer-MM-Explainability')
!pip install -r requirements.txt
%cd Transformer-MM-Explainability
from lxmert.lxmert.src.modeling_frcnn import GeneralizedRCNN
import lxmert.lxmert.src.vqa_utils as utils
from lxmert.lxmert.src.processing_image import Preprocess
from transformers import LxmertTokenizer
from lxmert.lxmert.src.huggingface_lxmert import LxmertForQuestionAnswering
from lxmert.lxmert.src.lxmert_lrp import LxmertForQuestionAnswering as LxmertForQuestionAnsweringLRP
from tqdm import tqdm
from lxmert.lxmert.src.ExplanationGenerator import GeneratorOurs, GeneratorBaselines, GeneratorOursAblationNoAggregation
import random
import numpy as np
import cv2
import torch
import matplotlib.pyplot as plt
from PIL import Image
import torchvision.transforms as transforms
from captum.attr import visualization
import requests
class Lxmert(CombineInator):
def __init__(self):
self.OBJ_URL = "https://raw.githubusercontent.com/airsplay/py-bottom-up-attention/master/demo/data/genome/1600-400-20/objects_vocab.txt"
self.ATTR_URL = "https://raw.githubusercontent.com/airsplay/py-bottom-up-attention/master/demo/data/genome/1600-400-20/attributes_vocab.txt"
self.VQA_URL = "https://raw.githubusercontent.com/airsplay/lxmert/master/data/vqa/trainval_label2ans.json"
self.model_lrp = self.ModelUsage()
self.lrp = GeneratorOurs(self.model_lrp)
self.baselines = GeneratorBaselines(self.model_lrp)
self.vqa_answers = utils.get_data(self.VQA_URL)
class ModelUsage:
"""
Model kullanımı için sınıf yapısı
"""
def __init__(self, use_lrp=True):
self.VQA_URL = "https://raw.githubusercontent.com/airsplay/lxmert/master/data/vqa/trainval_label2ans.json"
self.vqa_answers = utils.get_data(self.VQA_URL)
# load models and model components
self.frcnn_cfg = utils.Config.from_pretrained("unc-nlp/frcnn-vg-finetuned")
self.frcnn_cfg.MODEL.DEVICE = "cuda"
self.frcnn = GeneralizedRCNN.from_pretrained("unc-nlp/frcnn-vg-finetuned", config=self.frcnn_cfg)
self.image_preprocess = Preprocess(self.frcnn_cfg)
self.lxmert_tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
if use_lrp:
self.lxmert_vqa = LxmertForQuestionAnsweringLRP.from_pretrained("unc-nlp/lxmert-vqa-uncased").to("cuda")
else:
self.lxmert_vqa = LxmertForQuestionAnswering.from_pretrained("unc-nlp/lxmert-vqa-uncased").to("cuda")
self.lxmert_vqa.eval()
self.model = self.lxmert_vqa
# self.vqa_dataset = vqa_data.VQADataset(splits="valid")
def forward(self, item):
PATH, question = item
self.image_file_path = PATH
# run frcnn
images, sizes, scales_yx = self.image_preprocess(PATH)
output_dict = self.frcnn(
images,
sizes,
scales_yx=scales_yx,
padding="max_detections",
max_detections= self.frcnn_cfg.max_detections,
return_tensors="pt"
)
inputs = self.lxmert_tokenizer(
question,
truncation=True,
return_token_type_ids=True,
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt"
)
self.question_tokens = self.lxmert_tokenizer.convert_ids_to_tokens(inputs.input_ids.flatten())
self.text_len = len(self.question_tokens)
# Very important that the boxes are normalized
normalized_boxes = output_dict.get("normalized_boxes")
features = output_dict.get("roi_features")
self.image_boxes_len = features.shape[1]
self.bboxes = output_dict.get("boxes")
self.output = self.lxmert_vqa(
input_ids=inputs.input_ids.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda"),
visual_feats=features.to("cuda"),
visual_pos=normalized_boxes.to("cuda"),
token_type_ids=inputs.token_type_ids.to("cuda"),
return_dict=True,
output_attentions=False,
)
return self.output
def ceviri(self, text: str, lang_src='tr'):
"""
Aldığı metni istenilen dile çeviren fonksiyon.
:param text: Orjinal metin
:type text: str
:param lang_src: Metin dosyasının dili (kaynak dili)
:type lang_src: str
:param lang_tgt: Çevrilecek dil (hedef dil)
:param lang_tgt: str
:return: translated text
"""
if lang_src == "en":
text = ">>tur<< " + text
model, tokenizer, pipeline = CombineInator.translate_model(self, lang_src)
return (CombineInator.translate(self, pipeline, model, tokenizer, text))
def save_image_vis(self, image_file_path, bbox_scores):
"""
Resim üzerinde sorunun cevabını çizer ve kaydeder.
:param image_file_path: imgenin yer aldığı dizin
:type image_file_path: str
:param bbox_scores: tespit edilen nesnelerin skorlarını içeren tensor
:type bbox_scores: tensor
"""
_, top_bboxes_indices = bbox_scores.topk(k=1, dim=-1)
img = cv2.imread(image_file_path)
mask = torch.zeros(img.shape[0], img.shape[1])
for index in range(len(bbox_scores)):
[x, y, w, h] = self.model_lrp.bboxes[0][index]
curr_score_tensor = mask[int(y):int(h), int(x):int(w)]
new_score_tensor = torch.ones_like(curr_score_tensor)*bbox_scores[index].item()
mask[int(y):int(h), int(x):int(w)] = torch.max(new_score_tensor,mask[int(y):int(h), int(x):int(w)])
mask = (mask - mask.min()) / (mask.max() - mask.min())
mask = mask.unsqueeze_(-1)
mask = mask.expand(img.shape)
img = img * mask.cpu().data.numpy()
cv2.imwrite('lxmert/lxmert/experiments/paper/new.jpg', img)
def get_image_and_question(self, img_path:str, soru:str):
"""
Input olarak verilen imge ve soruyu döndürür.
:param img_path: Soru sorulacak imgenin path bilgisi
:type img_path: str
:param soru: Resim özelinde modele sorulacak olan Türkçe soru
:type soru: str
:return: image_scores, text_scores
"""
ing_soru = self.ceviri(soru, "tr")
R_t_t, R_t_i = self.lrp.generate_ours((img_path, ing_soru), use_lrp=False, normalize_self_attention=True, method_name="ours")
return R_t_i[0], R_t_t[0]
def resim_uzerinden_soru_cevap(self, PATH:str, turkce_soru:str):
"""
Verilen girdi imgesi üzerinden yine veriler sorular ile sorgulama yapılabilmesini
sağlar.
PATH: imgenin path bilgisi
turkce_soru: Resimde cevabı aranacak soru
"""
#Eğer sorgulanacak resim local'de yok ve internet üzerinden bir resim ise:
if PATH.startswith("http"):
im = Image.open(requests.get(PATH, stream=True).raw)
im.save('lxmert/lxmert/experiments/paper/online_image.jpg', 'JPEG')
PATH = 'lxmert/lxmert/experiments/paper/online_image.jpg'
image_scores, text_scores = self.get_image_and_question(PATH, turkce_soru)
self.save_image_vis(PATH, image_scores)
orig_image = Image.open(self.model_lrp.image_file_path)
fig, axs = plt.subplots(ncols=2, figsize=(20, 5))
axs[0].imshow(orig_image);
axs[0].axis('off');
axs[0].set_title('original');
masked_image = Image.open('lxmert/lxmert/experiments/paper/new.jpg')
axs[1].imshow(masked_image);
axs[1].axis('off');
axs[1].set_title('masked');
text_scores = (text_scores - text_scores.min()) / (text_scores.max() - text_scores.min())
vis_data_records = [visualization.VisualizationDataRecord(text_scores,0,0,0,0,0,self.model_lrp.question_tokens,1)]
visualization.visualize_text(vis_data_records)
cevap = self.ceviri(self.vqa_answers[self.model_lrp.output.question_answering_score.argmax()], lang_src='en')
print("ANSWER:", cevap)
|
_____no_output_____
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
Example usage
|
lxmert = Lxmert()
PATH = '_path_to_jpg_' # path to the jpg file must be provided here
turkce_soru = 'Resimde neler var'  # Turkish question: "What is in the picture?"
lxmert.resim_uzerinden_soru_cevap(PATH, turkce_soru)
|
loading configuration file cache
loading weights file https://cdn.huggingface.co/unc-nlp/frcnn-vg-finetuned/pytorch_model.bin from cache at /root/.cache/torch/transformers/57f6df6abe353be2773f2700159c65615babf39ab5b48114d2b49267672ae10f.77b59256a4cf8343ae0f923246a81489fc8d82f98d082edc2d2037c977c0d9d0
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
Web Interface
|
!pip install flask-ngrok
from flask import Flask, redirect, url_for, render_template, request, flash
from flask_ngrok import run_with_ngrok
# The paths of the folders inside the web_dependencies directory must be provided here.
template_folder = '_path_to_templates_folder_'
static_folder = '_path_to_static_folder_'
app = Flask(__name__, template_folder=template_folder, static_folder=static_folder)
run_with_ngrok(app) # Start ngrok when app is run
@app.route("/", methods=['GET', 'POST'])
def home():
if request.method == 'POST':
konu = request.form["topic"]
library.categorical_scraper(konu, PATH, 20, text_into_sentences_param=False)
return render_template("index.html")
if __name__ == "__main__":
#app.debug = True
app.run()
|
* Serving Flask app "__main__" (lazy loading)
* Environment: production
[31m WARNING: This is a development server. Do not use it in a production deployment.[0m
[2m Use a production WSGI server instead.[0m
* Debug mode: off
|
MIT
|
Combineinator_Library.ipynb
|
combineinator/combine-inator-acikhack2021
|
Reflect Tables into SQLAlchemy ORM
|
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurements = Base.classes.measurement
Stations = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
|
_____no_output_____
|
ADSL
|
climate_starter.ipynb
|
gracesco/HuefnerSQLAlchemyChallenge
|
Exploratory Climate Analysis
|
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt

# Design a query to retrieve the last 12 months of precipitation data and plot the results
CY_precipitation = session.query(Measurements.date).filter(Measurements.date >= "2016-08-23").order_by(Measurements.date).all()
# # Calculate the date 1 year ago from the last data point in the database
LY_precipitation = session.query(Measurements.date).filter(Measurements.date).order_by(Measurements.date.desc()).first()
last_date = dt.date(2017,8,23) - dt.timedelta(days=365)
last_date
# # # Perform a query to retrieve the data and precipitation scores
last_year = session.query(Measurements.prcp, Measurements.date).order_by(Measurements.date.desc())
# # # Save the query results as a Pandas DataFrame and set the index to the date column, Sort the dataframe by date
date = []
precipitation = []
last_year_df = last_year.filter(Measurements.date >="2016-08-23")
for precip in last_year_df:
date.append(precip.date)
precipitation.append(precip.prcp)
LY_df = pd.DataFrame({
"Date": date,
"Precipitation": precipitation
})
LY_df.set_index("Date", inplace=True)
LY_df = LY_df.sort_index(ascending=True)
# # Use Pandas Plotting with Matplotlib to plot the data
LY_graph = LY_df.plot(figsize = (20,10), rot=90, title= "Hawaii Precipitation Data 8/23/16-8/23/17")
LY_graph.set_ylabel("Precipitation (in)")
plt.savefig("Images/PrecipitationAugtoAug.png")
# Use Pandas to calcualte the summary statistics for the precipitation data
LY_df.describe()
# Design a query to show how many stations are available in this dataset?
station_count = session.query(Measurements.station, func.count(Measurements.station)).\
group_by(Measurements.station).all()
print("There are " + str(len(station_count)) + " stations in this dataset.")
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
active_stations = session.query(Measurements.station, func.count(Measurements.station)).\
group_by(Measurements.station).\
order_by(func.count(Measurements.station).desc()).all()
active_stations
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
most_active = session.query(Measurements.station).group_by(Measurements.station).order_by(func.count(Measurements.station).desc()).first()
most_active_info = session.query(Measurements.station, func.min(Measurements.tobs), func.max(Measurements.tobs), func.avg(Measurements.tobs)).\
filter(Measurements.station == most_active[0]).all()
print("The lowest temperature at station " + most_active_info[0][0] + " is " + str(most_active_info[0][1]) + " degrees.")
print("The highest temperature at station " + most_active_info[0][0] + " is " + str(most_active_info[0][2]) + " degrees.")
print("The average temperature at station " + most_active_info[0][0] + " is " + str(round(most_active_info[0][3], 2)) + " degrees.")
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
tobs_query = session.query(Measurements.station).group_by(Measurements.station).order_by(func.count(Measurements.id).desc()).first()
tobs = session.query(Measurements.station, Measurements.tobs).filter(Measurements.station == tobs_query[0]).filter(Measurements.date > last_date).all()
station_df = pd.DataFrame(tobs, columns = ['station', 'tobs'])
station_df.hist(column='tobs', bins=12)
plt.ylabel('Frequency')
plt.show()
|
_____no_output_____
|
ADSL
|
climate_starter.ipynb
|
gracesco/HuefnerSQLAlchemyChallenge
|
Import dataset
|
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

bd = pd.read_csv('creditcard.csv')
bd.head()
|
_____no_output_____
|
Unlicense
|
Credit card fraud .ipynb
|
Boutayna98/Credit-Card-Fraud-Detection
|
Exploring dataset
|
bd.info()
|
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 284807 entries, 0 to 284806
Data columns (total 31 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time 284807 non-null float64
1 V1 284807 non-null float64
2 V2 284807 non-null float64
3 V3 284807 non-null float64
4 V4 284807 non-null float64
5 V5 284807 non-null float64
6 V6 284807 non-null float64
7 V7 284807 non-null float64
8 V8 284807 non-null float64
9 V9 284807 non-null float64
10 V10 284807 non-null float64
11 V11 284807 non-null float64
12 V12 284807 non-null float64
13 V13 284807 non-null float64
14 V14 284807 non-null float64
15 V15 284807 non-null float64
16 V16 284807 non-null float64
17 V17 284807 non-null float64
18 V18 284807 non-null float64
19 V19 284807 non-null float64
20 V20 284807 non-null float64
21 V21 284807 non-null float64
22 V22 284807 non-null float64
23 V23 284807 non-null float64
24 V24 284807 non-null float64
25 V25 284807 non-null float64
26 V26 284807 non-null float64
27 V27 284807 non-null float64
28 V28 284807 non-null float64
29 Amount 284807 non-null float64
30 Class 284807 non-null int64
dtypes: float64(30), int64(1)
memory usage: 67.4 MB
|
Unlicense
|
Credit card fraud .ipynb
|
Boutayna98/Credit-Card-Fraud-Detection
|
Pre processing
|
sc = StandardScaler()
amount = bd['Amount'].values
bd['Amount'] = sc.fit_transform(amount.reshape(-1, 1))
bd.drop(['Time'], axis=1, inplace=True)
bd.shape
bd.drop_duplicates(inplace=True)
bd.shape
|
_____no_output_____
|
Unlicense
|
Credit card fraud .ipynb
|
Boutayna98/Credit-Card-Fraud-Detection
|
Modelling
|
X = bd.drop('Class', axis = 1).values
y = bd['Class'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1)
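# Note (added aside): credit-card fraud data is typically highly imbalanced,
# which is why the F1 score is reported below in addition to accuracy.
# A quick, optional check of the class balance in this dataset:
print(bd["Class"].value_counts(normalize=True))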
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, export_graphviz, export
DT = DecisionTreeClassifier(max_depth = 4, criterion = 'entropy')
DT.fit(X_train, y_train)
dt_yhat = DT.predict(X_test)
print('Accuracy score of the Decision Tree model is {}'.format(accuracy_score(y_test,dt_yhat)))
print('F1 score of the Decision Tree model is {}'.format(f1_score(y_test,dt_yhat)))
confusion_matrix(y_test, dt_yhat, labels = [0, 1])
|
_____no_output_____
|
Unlicense
|
Credit card fraud .ipynb
|
Boutayna98/Credit-Card-Fraud-Detection
|
Exploratory data analysis
|
# import libraries
import numpy as np
import pandas as pd
import plotly.graph_objects as go
import matplotlib.pyplot as plt
pd.set_option('display.max_rows', 500)
%matplotlib inline
# load the processed data
main_df = pd.read_csv('../data/processed/COVID_small_table_confirmed.csv', sep=';')
main_df.head()
main_df.info()
# GEt countries list
country_list = main_df.columns[1:]
country_list
fig = go.Figure()
# define plot for individual trace
for country in country_list:
fig.add_trace(go.Scatter(x = main_df.date,
y = main_df[country],
mode = 'markers+lines',
name = country))
# overall layout properties
fig.update_layout( width = 1200,
height = 800,
title = 'Confirmed cases for specific countries',
xaxis_title = 'Date',
yaxis_title = 'No. of confirmed cases')
fig.update_yaxes(type='log')
# Dash board visualization
import dash
import dash_core_components as dcc
import dash_html_components as html
app = dash.Dash()
app.layout = html.Div([
dcc.Graph(figure=fig, id='main_window_slope')
])
app.run_server(debug=True, use_reloader=False) # Turn off reloader if inside Jupyter
|
Running on http://127.0.0.1:8050/
Running on http://127.0.0.1:8050/
Debugger PIN: 839-733-624
Debugger PIN: 839-733-624
* Serving Flask app "__main__" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
|
FTL
|
notebooks/Data_EDA.ipynb
|
Prudhvi-Kumar-Kakani/Data-Science-CRISP-DM--Covid-19
|
# Fragment: sum the elements of an n x n 2-D array (a2d), row by row
for r in range(n):
sumaRenglon = 0
for c in range(n):
sumaRenglon += a2d.get_item(r, c)
total += a2d.get_item(r, c)
def ejemplo1( n ):
c = n + 1
d = c * n
e = n * n
total = c + e - d
print(f"total={ total }")
ejemplo1( 99999 )
def ejemplo2( n ):
contador = 0
for i in range( n ) :
for j in range( n ) :
contador += 1
return contador
ejemplo2( 100 )
def ejemplo3( n ): # n=4
x = n * 2 # x = 8
y = 0 # y = 0
for m in range( 100 ): #3
y = x - n # y = 4
return y
ejemplo3(1000000000)
def ejemplo4( n ):
x = 3 * 3.1416 + n
y = x + 3 * 3 - n
z = x + y
return z
ejemplo4(9)
def ejemplo5( x ):
n = 10
for j in range( 0 , x , 1 ):
n = j + n
return n
ejemplo5(1000000)
from time import time
def ejemplo6( n ):
start_time = time()
data=[[[1 for x in range(n)] for x in range(n)]
for x in range(n)]
suma = 0
for d in range(n):
for r in range(n):
for c in range(n):
suma += data[d][r][c]
elapsed_time = time() - start_time
print("Tiempo transcurrido: %0.10f segundos." % elapsed_time)
return suma
ejemplo6( 500 )
def ejemplo7( n ):
count = 0
for i in range( n ) :
for j in range( 25 ) :
for k in range( n ):
count += 1
return count
def ejemplo7_2( n ):
count = 1
for i in range( n ) :
for j in range( 25 ) :
for k in range( n ):
count += 1
for k in range( n ):
count += 1
return count # 1 + 25n^2 +25n^2
ejemplo7_2(3)
def ejemplo8( numeros ): # numeros is a list (an array in C)
total = 0
for index in range(len(numeros)):
total = numeros[index]
return total
numeros = [1, 2, 3, 4, 5] # sample list; the original cell never defined it
ejemplo8(numeros)
def ejemplo9( n ):
contador = 0
basura = 0
for i in range( n ) :
contador += 1
for j in range( n ) :
contador += 1
basura = basura + contador
return contador
print(ejemplo9( 5 ))
#3+2n
def ejemplo10( n ):
count = 0
for i in range( n ) :
for j in range( i+1 ) :
count += 1
return count
def ejemplo10( n ):
count = 0
for i in range( n ) :
for j in range( i ) :
count += 1
return count
print(ejemplo10(5))
"""
n= 3
000
n00 <-- aqui empieza el for interno
nn0 <--- aqui termina el for interno
nnn
n = 4
0000
n000 <-- aqui empieza el for interno
nn00
nnn0 <--- aqui termina el for interno
nnnn
n =5
00000
n0000 <-- aqui empieza el for interno
nn000
nnn00
nnnn0 <--- aqui termina el for interno
nnnnn
"""
def ejemplo11( n ):
count = 0
i = n
while i > 1 :
count += 1
i = i // 2
return count
print(ejemplo11(16))
# T(n) = 2 + (2 Log 2 n)
def ejemplo12( n ):
contador = 0
for x in range(n):
contador += ejemplo11(x)
return contador
def ejemplo12_bis( n=5 ):
contador = 0
contador = contador + ejemplo11(0) # 0
contador = contador + ejemplo11(1) # 0
contador = contador + ejemplo11(2) # 1
contador = contador + ejemplo11(3) # 1
contador = contador + ejemplo11(4) # 2
return contador
ejemplo12_bis( 5 )
def ejemplo13( x ):
bandera = x
contador = 0
while( bandera >= 10):
print(f" x = { bandera } ")
bandera /= 10
contador = contador + 1
print(contador)
# T(x) = log10 x +1
ejemplo13( 1000 )
def ejemplo14( n ):
y = n
z = n
contador = 0
while y >= 3: #3
y /= 3 # 1
contador += 1 # cont =3
while z >= 3: #27
z /= 3
contador += 1
return contador
|
_____no_output_____
|
MIT
|
21octubre.ipynb
|
humbertoguell/daa2020_1
|
|
Naive Bayes Classifiers Author : Sanjoy Biswas Topic : Naive Bayes Classifiers : Spam/Ham Email Detection Email : [email protected] Naive Bayes is a classification technique based on Bayes' Theorem with an assumption of independence among predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or on the other features, each of them contributes independently to the probability that the fruit is an apple, which is why the method is called 'naive'. A Naive Bayes model is easy to build and particularly useful for very large data sets; along with its simplicity, it is known to perform competitively with far more sophisticated classification methods. Bayes' theorem gives the posterior probability P(c|x) in terms of P(c), P(x) and P(x|c): P(c|x) = P(x|c) * P(c) / P(x). Here P(c|x) is the posterior probability of class c (target) given predictor x (attributes), P(c) is the prior probability of the class, P(x|c) is the likelihood (the probability of the predictor given the class), and P(x) is the prior probability of the predictor. Import Libraries
|
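# Tiny worked illustration of Bayes' rule (made-up numbers, not from this dataset):
# if 30% of all emails are spam and the word 'free' appears in 60% of spam but only
# 5% of ham, then P(spam | 'free') = 0.6*0.3 / (0.6*0.3 + 0.05*0.7) = 0.18/0.215 ≈ 0.84.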
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
|
_____no_output_____
|
MIT
|
ML Algorithms/Naive Bayes Classifiers/Naive Bayes Classifiers.ipynb
|
jrderek/Data-science-master-resources
|
Import Dataset
|
df = pd.read_csv(r'F:\ML Algorithms By Me\Naive Bayes Classifiers\emails.csv')
df.head()
df.isnull().sum()
|
_____no_output_____
|
MIT
|
ML Algorithms/Naive Bayes Classifiers/Naive Bayes Classifiers.ipynb
|
jrderek/Data-science-master-resources
|
Separate Dependent & Independent Value
|
x = df.text.values
y = df.spam.values
|
_____no_output_____
|
MIT
|
ML Algorithms/Naive Bayes Classifiers/Naive Bayes Classifiers.ipynb
|
jrderek/Data-science-master-resources
|
Split Train and Test Dataset
|
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest = train_test_split(x,y,test_size=0.3)
|
_____no_output_____
|
MIT
|
ML Algorithms/Naive Bayes Classifiers/Naive Bayes Classifiers.ipynb
|
jrderek/Data-science-master-resources
|
Data Preprocessing
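To make the bag-of-words representation concrete, here is a minimal sketch of what CountVectorizer produces on a toy corpus (hypothetical sentences, independent of the email data used below):
from sklearn.feature_extraction.text import CountVectorizer
toy = ["free money now", "meeting at noon", "free meeting"]
toy_cv = CountVectorizer()
toy_matrix = toy_cv.fit_transform(toy)  # sparse document-term matrix
print(toy_cv.get_feature_names_out())   # vocabulary learned from the toy corpus
print(toy_matrix.toarray())             # word counts per document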
|
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
x_train = cv.fit_transform(xtrain)
x_train.toarray()
|
_____no_output_____
|
MIT
|
ML Algorithms/Naive Bayes Classifiers/Naive Bayes Classifiers.ipynb
|
jrderek/Data-science-master-resources
|
Apply Naive Bayes Classifiers Algorithm
|
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB()
model.fit(x_train,ytrain)
x_test = cv.transform(xtest)  # use transform (not fit_transform) so the test set shares the training vocabulary
x_test.toarray()
model.score(x_train,ytrain)
|
_____no_output_____
|
MIT
|
ML Algorithms/Naive Bayes Classifiers/Naive Bayes Classifiers.ipynb
|
jrderek/Data-science-master-resources
|
Framing models
|
import lettertask
import patches
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from tqdm import tqdm
import lazytools_sflippl as lazytools
import plotnine as gg
import pandas as pd
cbm = lettertask.data.CompositionalBinaryModel(
width=[5, 5],
change_probability=[0.05, 0.5],
samples=10000,
seed=1001
)
cts = patches.data.Contrastive1DTimeSeries(cbm.to_array(), seed=202)
|
_____no_output_____
|
MIT
|
notebooks/03-framing-models.ipynb
|
sflippl/patches
|
Base-reconstructive model
|
class BaRec(nn.Module):
def __init__(self, latent_features, input_features=None, timesteps=None,
data=None, bias=True):
super().__init__()
if data:
input_features = input_features or data.n_vars
timesteps = timesteps or data.n_timesteps
elif input_features is None or timesteps is None:
raise ValueError('You must either provide data or both input '
'features and timesteps.')
self.latent_features = latent_features
self.input_features = input_features
self.timesteps = timesteps
self.encoder = nn.Linear(input_features, latent_features, bias=bias)
self.predictor = nn.Linear(latent_features, timesteps, bias=bias)
self.decoder = nn.Conv1d(latent_features, input_features, 1, bias=bias)
def forward(self, x):
code = self.encoder(x['current_values'])
prediction = self.predictor(code)
decoded = self.decoder(prediction).transpose(1, 2)
return decoded
barec = BaRec(1, data=cts)
optimizer = optim.Adam(barec.parameters())
criterion = nn.MSELoss()
data = cts[0]
prediction = barec(data)
print(data['future_values'].shape)
print(prediction.shape)
ideal = np.array([[1,0],[0,1]], dtype=np.float32).repeat(5,1)/np.sqrt(5)
ideal
barec = BaRec(1, data=cts, bias=False)
optimizer = optim.Adam(barec.parameters())
criterion = nn.MSELoss()
loss_traj = []
angles = []
running_loss = 0
for epoch in tqdm(range(10)):
for i, data in enumerate(cts):
if i<len(cts):
optimizer.zero_grad()
prediction = barec(data)
loss = criterion(prediction, data['future_values'])
loss.backward()
optimizer.step()
running_loss += loss
if i % 50 == 49:
loss_traj.append(running_loss.detach().numpy()/50)
running_loss = 0
est = next(barec.parameters()).detach().numpy()
angles.append(np.matmul(ideal, est.T)/np.sqrt(np.matmul(est, est.T)))
(gg.ggplot(
lazytools.array_to_dataframe(
np.array(loss_traj)  # loss_traj already holds numpy values (see the training loop above)
),
gg.aes(x='dim0', y='array')
) +
gg.geom_smooth(method='loess'))
(gg.ggplot(
lazytools.array_to_dataframe(
np.concatenate(angles, axis=1)
),
gg.aes(x='dim1', y='array', color='dim0', group='dim0')
) +
gg.geom_line())
np.save('angles.npy', np.concatenate(angles, axis=1))
|
_____no_output_____
|
MIT
|
notebooks/03-framing-models.ipynb
|
sflippl/patches
|