Unnamed: 0 | text_prompt | code_prompt
---|---|---|
6,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas-Jupyter labor
2019. március 26.
Név (neptun)
Step1: A MovieLens adatsorral fogunk dolgozni, de először le kell töltenünk. http
Step2: Kicsomagoljuk
Step3: Adat betöltése és normalizálása
A pd.read_table függvény táblázatos adatok betöltésére alkalmas. Több tucat paraméterrel rendelkezik, de csak egy kötelező paramétere van
Step4: Ez még elég rosszul néz ki. Hogyan tudnánk javítani?
1. Rossz szeparátort használt a függvény (tab az alapértelmezett). A fájlban | a szeparátor. Ezt a sep paraméterrel tudjuk megadni.
1. A fájl első sora került az oszlopnevek helyére. Az oszlopok valódi nevei a README fájlból derülnek ki, amit kézzel megadhatjuk a read_table-nek a names paraméterben.
1. A read_table automatikusan generált egy id-t minden sornak, azonban az adatfájlban a filmek már rendelkeznek egy egyedi azonosítóval (movie_id), használjuk ezt a DataFrame indexeként (index_col paraméter). Célszerű szóköz nélküli, kisbetűs oszlopneveket használni, mert akkor attribútumként is elérjük őket (df.release_date).
Step5: Két oszlop is van, amik dátumot jelölnek
Step6: Még mindig nem tökéletes, hiszen a filmek címei után szerepel az évszám zárójelben, ami egyrészt redundáns, másrészt zaj. Tüntessük el!
A szokásos str műveletek egy része elérhető Series objektumokra is (minden elemre végrehajtja). A függvényeket az str névtérben találjuk.
Step7: Egy reguláris kifejezéssel eltüntetjük a két zárójel közti részt, majd eltávolítjuk az ott maradt whitespace-eket (a strip függvény a stringek elejéről és végéről is eltávolítja).
Végül adjuk értékül a régi title oszlopnak a kezdő és záró whitespace-ektől megfosztott változatát.
Step8: A video_release_date mező az első néhány sorban csak érvénytelen mezőket tartalmaz. Vajon igaz ez az egész DataFrame-re? Listázzuk ki azokat a mezőket, ahol nem NaT a video_release_date értéke, vagyis érvénytelen dátum.
Step9: Nincs ilyen mező, ezért elhagyhatjuk az oszlopot.
Step10: Van egy unknown oszlop, ettől is szabaduljunk meg!
Step11: Adatok felszínes vizsgálata
Nézzük meg, hogy milyen információkat tudhatunk könnyedén meg a DataFrame-ről.
A describe függvény oszloponként szolgáltat alapvető információkat
Step12: Átlag, szórás, variancia stb.
Egyenként is lekérdezhetőek
Step13: Egyszerű lekérdezések
Melyik filmek jelentek meg 1956-ban?
Step14: Melyik filmek jelentek meg a 80-as években?
Step15: 107 film jelent meg a 80-as években, ezt már nem praktikus kiírni. Nézzük meg csak az első 3-at.
Step16: A megjelenési év legyen külön oszlop
Többször fogjuk még használni a megjelenési évet, ezért praktikus külön év oszlopot létrehozni.
A DateTime mezőhöz használható metódusok és attribútumok a dt névtérben vannak, így tudjuk minden oszlopra egyszerre meghívni. Az eredményt egy új oszlopban tároljuk.
Step17: Mikor jelentek meg a Die Hard filmek?
Step18: Sajnos csak teljes egyezésre tudunk így szűrni.
A szöveges mezőkre a pandas nyújt egy csomó műveletet, amik az str névtérben vannak (ahogy a dátum mezőkre a dt-ben voltak).
Step19: A Die Hard 4 és 5 hiányzik. Kilógnának az adatsorból? Nézzük meg még egyszer, hogy mikori filmek szerepelnek.
Step20: A Die Hard 4 és 5 2007-ben, illetve 2013-ban jelentek meg, ezért nem szerepelnek az adatban.
Melyik filmek tartoznak egyszerre az akció és romantikus kategóriába?
Step21: Melyik filmek tartoznak az akció VAGY a romantikus kategóriába?
Itt a Boole vagyra gondolunk.
Step22: 1. feladat
Step23: Q1.2. Létezik-e gyerekeknek szóló thriller? Keress egy példát rá és térj vissza a film címével.
Step24: Q1.3. Hány filmnek hosszabb a címe, mint 30 karakter?
Step25: Q1.4. Mi a legrégebbi és a legújabb film címe?
A megjelenésnek nem csak éve van!
Step26: Q1.5. Melyik a legújabb sci-fi?
Step27: Csoportosítás és vizualizáció
Hány filmet adtak ki évente?
A kérdést két lépésben tudjuk megválaszolni
Step28: Vonaldiagram az alapértelmezett, de oszlopdiagramként informatívabb lenne.
Step29: Látszik, hogy a 80-as évek végén nőtt meg a kiadott filmek száma, kicsit közelítsünk rá. Ehhez először szűrni fogjuk az 1985 utáni filmeket, majd csoportosítva ábrázolni.
Step30: Groupby tetszőleges feltétel szerint
Nem csak egy kategóriaértékű oszlop szerint csoportosíthatunk, hanem tetszőleges kifejezés szerint. Ezt kihasználva fogunk évtizedenként csoportosítani. A groupby-nak bármilyen kifejezést megadhatunk, ami diszkrét értékekre képezi le a sorokat, tehát véges sok csoport egyikébe helyezi (mint egy hash függvény).
Az évtizedet úgy kaphatjuk meg, ha az évet 10-zel osztjuk és csak az egészrészt tartjuk meg, hiszen 1983/10 és 1984/10 egészrésze ugyanúgy 198. Használjuk a Python egészosztás operátorát (//).
Step31: 2. feladat
Step32: Ábrázold.
Step33: Gyerekfilmet vagy krimit adnak ki többet évtizedenként?
Step34: A 90-es években több filmet adtak ki, mint előtte összesen, nézzük meg azt az évtizedet közelebbről!
Q2.2. Mennyivel adtak ki több gyerekfilmet, mint krimit évente a 90-es években? Ábrázold.
Először a szűrt groupby-t készítsd el az imént létrehozott d DataFrame-ből.
Step35: Ábrázold.
Step36: Q2.3. Ábrázold a kiadási napok (hónap napjai) eloszlását egy tortadiagramon!
Tortadiagramot a plot függvény kind="pie" argumentumával tudsz készíteni.
A tortadiagramhoz érdemes megváltoztatni a diagram képarányát, amit a plot függvény figsize paraméterének megadásával tehetsz meg. figsize=(10,10). Százalékokat az autopct="%.0lf%%" opcióval lehet a diagramra írni.
A tortadiagramot szebbé teheted másik colormap választásával
Step37: Ábrázold.
Step38: Q2.4. Hagyományos lexikont szeretnénk készíteni a filmekből. Melyik kezdőbetű hányszor fordul elő a filmek címében? Ábrázold tortadiagramon.
Csoportosítsd a filmeket kezdőbetű szerint.
Step39: Ábrázold.
Step40: *Q2.5. Írj függvényt, ami több oszlop mentén csoportosít és visszaadja a legnagyobb csoportot.
Tipp
Step41: Több DataFrame kezelése, pd.merge
Az adathalmaz lényegi része a 100000 értékelés, amit az u.data fájlból tudunk beolvasni. A README-ből kiolvashatjuk a fájl oszlopait.
Step42: A timestamp oszlop Unix timestampeket tartalmaz, konvertáljuk DateTime-má.
Step43: Merge a film táblával
Mivel már több DataFrame-mel dolgozunk, érdemes a filmeket tartalmazó táblának beszédesebb nevet adni.
Step44: Felülírjuk a ratings táblát
Step45: Hány értékelés érkezett a film megjelenése előtt?
Step46: Hogy oszlik meg ez a szám a filmek között?
Step47: 3. feladat
Step48: Hisztogram készítése az egyes értékelésekről
Hisztogram készítésére (melyik érték hányszor szerepelt), a hist függvény áll rendelkezésünkre
Step49: Q3.2. Ábrázold hisztogramon az 1960 előtti krimik értékeléseit!
Step50: Ábrázold.
Step51: Q3.3. Mi az értékelések átlaga évtizedenként (film megjelenési éve)?
Figyelj arra, hogy csak annyi adat szerepeljen az összesítésben, amennyit a feladat kér. Az indexek legyenek az évtizedek kezdőévei.
Step52: Q3.4. Az értékelésekhez tartozik egy timestamp. Mi az értékelések átlaga a hét napjaira lebontva?
Tehát melyik napon jószívűbbek az emberek?
Tipp
Step53: Q3.5. Melyik hónapban mennyi a kalandfilmek (adventure) értékeléseinek szórása?
Vigyázat, a szórás és a variancia nem azonos!
Step54: 4. feladat
Step55: Q4.2. Merge-öld a ratings táblát a users táblával. Őrizd meg az összes oszlopot.
Step56: Q4.3. Korcsoportonként hány értékelést adtak le? 10 évet veszünk egy korcsoportnak, tehát 10-19, 20-29 stb. Ábrázold oszlopdiagramon.
Step57: Ábrázold.
Step58: Q4.4. A nap melyik órájában értékelnek a programozók, illetve a marketingesek? Ábrázold két tortadiagramon.
Tipp
Step59: Ábrázold tortadiagramon a marketingesek és a programozók értékelési óráit.
Először a marketingesek
Step60: majd a programozók
Step61: Q4.5. Készíts hisztogramot az értékelési kedvről! Hány user adott le N értékelést?
Segítség
Step62: Q4.6. (Szorgalmi) Milyen volt a nemek eloszlása a romantikus filmet, illetve az akciófilmeket értékelők között? Készíts két tortadiagramot!
Step63: Q4.7. (Szorgalmi) Jóval több férfi adott le értékelést. Hogy alakulnak ezek az arányok, ha normálunk az összes értékelésre jellemző nemek arányával?
Step64: Q4.8. (**Szorgalmi) A nap melyik órájában melyik szakma értékel legtöbbször és hányszor értékelnek?
Példa válasz
Step65: 5. feladat
Step66: Q5.2. Futtasd le a KNN-t az X mátrixon!
Ehhez bele kell nézned a NearestNeighbors dokumentációjába.
Az indexeket az indices változóban tárold.
A legközelebbi szomszédok számát a K változóban tárold. Először állítsd 4-re a K-t, később kísérletezhetsz más értékekkel is. A többi paramétert ne módosítsd, különben a tesztek nem biztos, hogy működnek.
Step67: Az indices változó tartalmazza az indexeket, ebből készítsünk DataFrame-et
Step68: Értelmezzük a táblázatot!
Az index oszlop (első oszlop) azt mondja meg, hogy az X mátrix hányadik sorához tartozó szomszédok találhatók meg a sorban. A 0-3. nevű oszlopok a legközelebbi szomszédokat adják meg. Legtöbb film esetén saját maga a legközelebbi szomszédja, hiszen 0 a távolságuk, azonban nincs mindenhol így. Mit gondolsz, miért?
A táblázat indexe 0-val kezdődik, de a movies táblában a movie_id 1-től indul.
Q5.3. Állítsd át az ind DataFrame indexét úgy, hogy 1-től indexeljen! Az összes mezőt is növeld meg eggyel!
Segítség
Step69: Q5.4. Keresd meg az indexekhez tartozó filmcímeket!
Az indices táblázatban filmcímek helyett indexek vannak, ami nem túl felhasználóbarát. A movies DataFrame tartalmazza a filmeket indexekkel együtt, ezzel kell merge-ölni K alkalommal.
Pl. az első sorban megjelenő 422-es index az Aladdin and the King of Thieves film indexe. Kerüljön ez a cím az index helyére a merge után.
Segítség
Step70: Q5.5. Jelenjen meg a táblázatban az a film is, aminek a szomszédjai a sorban vannak! A címeken kívül más oszlopa ne legyen a táblázatnak!
Most olyanok a soraink hogy
Step71: Q5.6. (Szorgalmi) Készíts függvényt, ami egy filmcímrészletet vesz át és megkeresi azokat a filmeket, amikben szerepel.
A függvény visszatérési értéke legyen egy DataFrame, amely a hasonló filmeket tartalmazza (akkor is, ha 1 vagy 0 hasonló film van). A most_similar táblázat szintén a függvény paramétere. | Python Code:
import pandas as pd # konvenció szerint pd aliast használunk
%matplotlib inline
import matplotlib
import numpy as np
# tegyük szebbé a grafikonokat
matplotlib.style.use('ggplot')
matplotlib.pyplot.rcParams['figure.figsize'] = (15, 3)
matplotlib.pyplot.rcParams['font.family'] = 'sans-serif'
Explanation: Pandas-Jupyter labor
2019. március 26.
Név (neptun):
YOUR ANSWER HERE
A labor célja egy rövid bevezetőt adni a manapság népszerű "data science" Python eszközeibe.
A labor feladatai előtt mindenképp meg kell ismerkedni a Python nyelv alapjaival. A laborhoz tartozó rövid magyar Python bevezetőt itt találod.
A pandashoz egy rövid magyar bevezető itt.
A labort összeállította: Ács Judit
A labor menete
A labor teljes beadandó anyaga ez a notebook. A kérdések sorszámozva vannak és Q-val kezdődnek: Q1.1-Q5.5-ig. Néhány szorgalmi kérdés is van, ezekkel plusz pontot lehet szerezni a maximálison felül. Minden kiskérdés 2 pontot ér, a 25 kérdés összesen 50 pont. A jegyek a következőképpen alakulnak:
| pontszám | jegy |
| ---- | ----|
| 40+ | 5 |
| 30+ | 4 |
| 20+ | 3 |
| 10+ | 2 |
| 9- | 1 |
A feladatokhoz tartoznak tesztek, amiket a megoldás elkészítése előtt érdemes elolvasni.
FIGYELEM! A vizualizációt nem tartalmazó feladatok javítása automatikusan történik. A notebookban szereplő tesztek a visszatérési értékek típusát ellenőrzik, a válaszok helyességét rejtett tesztek ellenőrzik. A teszteket tartalmazó cellákat nem lehet módosítani. A kitöltendő helyek YOUR CODE HERE commenttel vannak jelölve (az exception dobását értelemszerűen törölni kell).
Beadás
Amennyiben az órán nem sikerült befejezned, otthon folytathatod a munkát és a labor hetének végéig (vasárnap éjfél) feltöltheted. Az Anaconda Python disztribúció tartalmazza a laboranyaghoz szükséges összes csomagot, beleértve a jupytert.
Mielőtt feltöltöd, győződj meg róla, hogy újraindított kernellel exception nélkül lefut és azokat az eredményeket adja, amiket vártál. Ezt a Kernel->Restart & Run All opcióval teheted meg. Ha nem minden feladatot oldasz meg, akkor a NotImplementedError-ok miatt nem fog végigfutni, de kézzel le tudod futtatni a cellákat (Shift+Enter lefuttatja és a következőre lép).
Ügyelj arra, hogy ne maradjanak hosszú táblázatok kiírva. Helyette használd a head függvényt.
A laborhoz külön jegyzőkönyvet nem kell készíteni, ezt a notebookot kell vasárnap éjfélig az AUT portálra feltölteni NEPTUN.ipynb néven. A fájlrendszerben pandas_labor.ipynb néven megtalálod a notebookot abban a könyvtárban, ahonnan indítottad a jupytert.
Az AUT portálra .zip-be csomagolva tudod feltölteni.
End of explanation
import os
data_dir = os.getenv("MOVIELENS")
if data_dir is None:
data_dir = ""
ml_path = os.path.join(data_dir, "ml.zip")
if not os.path.exists(ml_path):
print("Download data")
import urllib
u = urllib.request.URLopener()
u.retrieve("http://files.grouplens.org/datasets/movielens/ml-100k.zip", ml_path)
print("Data downloaded")
Explanation: A MovieLens adatsorral fogunk dolgozni, de először le kell töltenünk. http://grouplens.org/datasets/movielens/
Csak akkor töltjük le a fájlt, ha még nem létezik.
End of explanation
unzip_path = os.path.join(data_dir, "ml-100k")
if not os.path.exists(unzip_path):
print("Extracting data")
from zipfile import ZipFile
with ZipFile(ml_path) as myzip:
myzip.extractall(data_dir)
print("Data extraction done")
data_dir = unzip_path
Explanation: Kicsomagoljuk:
End of explanation
# df = pd.read_table("ml-100k/u.item") # UnicodeDecodeErrort kapunk, mert rossz dekódert használ
df = pd.read_table(os.path.join(data_dir, "u.item"), encoding="latin1")
df.head()
Explanation: Adat betöltése és normalizálása
A pd.read_table függvény táblázatos adatok betöltésére alkalmas. Több tucat paraméterrel rendelkezik, de csak egy kötelező paramétere van: a fájl, amit beolvasunk.
A karakterkódolást is meg kell adnunk, mert a fájl nem az alapértelmezett (utf-8) kódolást használja, hanem az ISO-8859-1-et, vagy köznéven a latin1-et.
End of explanation
column_names = [
"movie_id", "title", "release_date", "video_release_date", "imdb_url", "unknown", "action", "adventure", "animation",
"children", "comedy", "crime", "documentary", "drama", "fantasy", "film_noir", "horror", "musical", "mystery",
"romance", "sci_fi", "thriller", "war", "western"]
df = pd.read_table(
os.path.join(data_dir, "u.item"), sep="|",
names=column_names, encoding="latin1", index_col='movie_id')
df.head()
Explanation: Ez még elég rosszul néz ki. Hogyan tudnánk javítani?
1. Rossz szeparátort használt a függvény (tab az alapértelmezett). A fájlban | a szeparátor. Ezt a sep paraméterrel tudjuk megadni.
1. A fájl első sora került az oszlopnevek helyére. Az oszlopok valódi nevei a README fájlból derülnek ki, amit kézzel megadhatjuk a read_table-nek a names paraméterben.
1. A read_table automatikusan generált egy id-t minden sornak, azonban az adatfájlban a filmek már rendelkeznek egy egyedi azonosítóval (movie_id), használjuk ezt a DataFrame indexeként (index_col paraméter). Célszerű szóköz nélküli, kisbetűs oszlopneveket használni, mert akkor attribútumként is elérjük őket (df.release_date).
End of explanation
df = pd.read_table(os.path.join(data_dir, "u.item"), sep="|",
names=column_names, encoding="latin1",
parse_dates=[2,3], index_col='movie_id')
df.head()
Explanation: Két oszlop is van, amik dátumot jelölnek: release_date, video_release_date. A pandas parszolni tudja a dátumokat többféle népszerű formátumban, ehhez csak a parse_dates paraméterben kell megadnunk a dátumot tartalmazó oszlopokat. Figyeljük meg, hogy ahol nincs dátum, az érték a NaN (not a number)-ről NaT-ra (not a time) változik.
End of explanation
df.title.str
Explanation: Még mindig nem tökéletes, hiszen a filmek címei után szerepel az évszám zárójelben, ami egyrészt redundáns, másrészt zaj. Tüntessük el!
A szokásos str műveletek egy része elérhető Series objektumokra is (minden elemre végrehajtja). A függvényeket az str névtérben találjuk.
End of explanation
df.title = df.title.str.replace(r'\(.*\)', '').str.strip()
df.head()
Explanation: Egy reguláris kifejezéssel eltüntetjük a két zárójel közti részt, majd eltávolítjuk az ott maradt whitespace-eket (a strip függvény a stringek elejéről és végéről is eltávolítja).
Végül adjuk értékül a régi title oszlopnak a kezdő és záró whitespace-ektől megfosztott változatát.
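Egy rövid, illusztratív példa a fenti két lépésre egy mintacímen (a konkrét cím csak szemléltetés):
~~~
pd.Series(['Toy Story (1995)']).str.replace(r'\(.*\)', '').str.strip()
# eredmény: 'Toy Story'
~~~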
End of explanation
df[df.video_release_date.notnull()]
Explanation: A video_release_date mező az első néhány sorban csak érvénytelen mezőket tartalmaz. Vajon igaz ez az egész DataFrame-re? Listázzuk ki azokat a mezőket, ahol nem NaT a video_release_date értéke, vagyis érvénytelen dátum.
End of explanation
df = df.drop('video_release_date', axis=1)
df.head()
Explanation: Nincs ilyen mező, ezért elhagyhatjuk az oszlopot.
End of explanation
df = df.drop('unknown', axis=1)
Explanation: Van egy unknown oszlop, ettől is szabaduljunk meg!
End of explanation
df.describe()
Explanation: Adatok felszínes vizsgálata
Nézzük meg, hogy milyen információkat tudhatunk könnyedén meg a DataFrame-ről.
A describe függvény oszloponként szolgáltat alapvető információkat: darabszám, átlag, szórás stb.
Mivel a legtöbb mező bináris, most nem tudunk meg sok hasznos információt a mezőkről.
End of explanation
df.quantile(.9).head()
Explanation: Átlag, szórás, variancia stb.
Egyenként is lekérdezhetőek:
count()
átlag: mean()
szórás: std()
variancia: var()
50% kvantilis: quantile(.5)
min, max
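Néhány rövid, illusztratív példa (feltéve, hogy a df a fent betöltött filmes tábla):
~~~
df.action.mean()       # az akciófilmek aránya
df.release_date.min()  # a legkorábbi megjelenési dátum
df.quantile(.5)        # 50%-os kvantilis oszloponként
~~~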
End of explanation
df[df.release_date.dt.year == 1956]
Explanation: Egyszerű lekérdezések
Melyik filmek jelentek meg 1956-ban?
End of explanation
d = df[(df.release_date.dt.year >= 1980) & (df.release_date.dt.year < 1990)]
len(d)
Explanation: Melyik filmek jelentek meg a 80-as években?
End of explanation
d.head(3)
Explanation: 107 film jelent meg a 80-as években, ezt már nem praktikus kiírni. Nézzük meg csak az első 3-at.
End of explanation
df['year'] = df.release_date.dt.year
Explanation: A megjelenési év legyen külön oszlop
Többször fogjuk még használni a megjelenési évet, ezért praktikus külön év oszlopot létrehozni.
A DateTime mezőhöz használható metódusok és attribútumok a dt névtérben vannak, így tudjuk minden oszlopra egyszerre meghívni. Az eredményt egy új oszlopban tároljuk.
End of explanation
df[df.title == 'Die Hard']
Explanation: Mikor jelentek meg a Die Hard filmek?
End of explanation
df[df.title.str.contains('Die Hard')]
Explanation: Sajnos csak teljes egyezésre tudunk így szűrni.
A szöveges mezőkre a pandas nyújt egy csomó műveletet, amik az str névtérben vannak (ahogy a dátum mezőkre a dt-ben voltak).
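Néhány további, illusztratív példa az str névtérből:
~~~
df.title.str.lower().head()           # kisbetűsített címek
df.title.str.startswith('Die').sum()  # hány cím kezdődik a 'Die' szóval
~~~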
End of explanation
df.release_date.describe()
Explanation: A Die Hard 4 és 5 hiányzik. Kilógnának az adatsorból? Nézzük meg még egyszer, hogy mikori filmek szerepelnek.
End of explanation
d = df[(df.action==1) & (df.romance==1)]
print(len(d))
d.head()
Explanation: A Die Hard 4 és 5 2007-ben, illetve 2013-ban jelentek meg, ezért nem szerepelnek az adatban.
Melyik filmek tartoznak egyszerre az akció és romantikus kategóriába?
End of explanation
d = df[(df.action==1) | (df.romance==1)]
print(len(d))
d.head()
Explanation: Melyik filmek tartoznak az akció VAGY a romantikus kategóriába?
Itt a Boole vagyra gondolunk.
End of explanation
def count_movies_before_1985(df):
# YOUR CODE HERE
raise NotImplementedError()
def count_movies_after_1984(df):
# YOUR CODE HERE
raise NotImplementedError()
before = count_movies_before_1985(df)
print(before)
assert type(before) == int
after = count_movies_after_1984(df)
print(after)
assert type(after) == int
Explanation: 1. feladat: egyszerű lekérdezések
Q1.1. Hány akciófilm jelent meg 1985 előtt, illetve 1985-ben vagy később?
End of explanation
def child_thriller(df):
# YOUR CODE HERE
raise NotImplementedError()
title = child_thriller(df)
assert type(title) == str
Explanation: Q1.2. Létezik-e gyerekeknek szóló thriller? Keress egy példát rá és térj vissza a film címével.
End of explanation
def long_titles(df):
# YOUR CODE HERE
raise NotImplementedError()
title_cnt = long_titles(df)
assert type(title_cnt) == int
Explanation: Q1.3. Hány filmnek hosszabb a címe, mint 30 karakter?
End of explanation
def oldest_movie(df):
# YOUR CODE HERE
raise NotImplementedError()
def newest_movie(df):
# YOUR CODE HERE
raise NotImplementedError()
oldest = oldest_movie(df)
newest = newest_movie(df)
assert type(oldest) == str
assert type(newest) == str
Explanation: Q1.4. Mi a legrégebbi és a legújabb film címe?
A megjelenésnek nem csak éve van!
End of explanation
def newest_scifi(df):
# YOUR CODE HERE
raise NotImplementedError()
newest = newest_scifi(df)
assert type(newest) == str
Explanation: Q1.5. Melyik a legújabb sci-fi?
End of explanation
df.groupby('year').size().plot()
Explanation: Csoportosítás és vizualizáció
Hány filmet adtak ki évente?
A kérdést két lépésben tudjuk megválaszolni:
csoportosítás évenként
összesítés 1-1 évre
End of explanation
df.groupby('year').size().plot(kind='bar')
Explanation: Vonaldiagram az alapértelmezett, de oszlopdiagramként informatívabb lenne.
End of explanation
d = df[df.year > 1985]
d.groupby('year').size().plot(kind='bar')
# df[df.year > 1985].groupby('year').size().plot(kind='bar') # vagy egy sorban
Explanation: Látszik, hogy a 80-as évek végén nőtt meg a kiadott filmek száma, kicsit közelítsünk rá. Ehhez először szűrni fogjuk az 1985 utáni filmeket, majd csoportosítva ábrázolni.
End of explanation
d = df.groupby(df.year // 10 * 10)
d.groups.keys() # létrejött csoportok listázása
d.size().plot(kind='bar')
Explanation: Groupby tetszőleges feltétel szerint
Nem csak egy kategóriaértékű oszlop szerint csoportosíthatunk, hanem tetszőleges kifejezés szerint. Ezt kihasználva fogunk évtizedenként csoportosítani. A groupby-nak bármilyen kifejezést megadhatunk, ami diszkrét értékekre képezi le a sorokat, tehát véges sok csoport egyikébe helyezi (mint egy hash függvény).
Az évtizedet úgy kaphatjuk meg, ha az évet 10-zel osztjuk és csak az egészrészt tartjuk meg, hiszen 1983/10 és 1984/10 egészrésze ugyanúgy 198. Használjuk a Python egészosztás operátorát (//).
End of explanation
def comedy_by_year(df):
# YOUR CODE HERE
raise NotImplementedError()
c = comedy_by_year(df)
assert type(c) == pd.core.groupby.DataFrameGroupBy
Explanation: 2. feladat: csoportosítás és vizualizáció
Q2.1. Csoportosítsd vígjátékokat (comedy) évenként. Ábrázold oszlopdiagramon hány vígjátékot adtak ki évente.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold.
End of explanation
col1 = 'children'
col2 = 'crime'
d = df[['year', col1, col2]].copy()
d['diff'] = d[col1] - d[col2]
d.groupby(d.year // 10 * 10).sum()
d.groupby(d.year // 10 * 10).sum().plot(y='diff', kind='bar')
Explanation: Gyerekfilmet vagy krimit adnak ki többet évtizedenként?
End of explanation
def groupby_nineties(d):
# YOUR CODE HERE
raise NotImplementedError()
nineties = groupby_nineties(d)
assert type(nineties) == pd.core.groupby.DataFrameGroupBy
# a diff oszlop szerepel
assert 'diff' in nineties.sum()
Explanation: A 90-es években több filmet adtak ki, mint előtte összesen, nézzük meg azt az évtizedet közelebbről!
Q2.2. Mennyivel adtak ki több gyerekfilmet, mint krimit évente a 90-es években? Ábrázold.
Először a szűrt groupby-t készítsd el az imént létrehozott d DataFrame-ből.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold.
End of explanation
def groupby_release_day(df):
# YOUR CODE HERE
raise NotImplementedError()
by_day = groupby_release_day(df)
assert type(by_day) == pd.core.groupby.DataFrameGroupBy
# legfeljebb 31 napos egy hónap
assert len(by_day) < 32
# nehogy a hét napjai szerint csoportosítsunk
assert len(by_day) > 7
Explanation: Q2.3. Ábrázold a kiadási napok (hónap napjai) eloszlását egy tortadiagramon!
Tortadiagramot a plot függvény kind="pie" argumentumával tudsz készíteni.
A tortadiagramhoz érdemes megváltoztatni a diagram képarányát, amit a plot függvény figsize paraméterének megadásával tehetsz meg. figsize=(10,10). Százalékokat az autopct="%.0lf%%" opcióval lehet a diagramra írni.
A tortadiagramot szebbé teheted másik colormap választásával: dokumentáció és a colormapek listája.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold.
End of explanation
def groupby_initial_letter(df):
# YOUR CODE HERE
raise NotImplementedError()
initial = groupby_initial_letter(df)
assert type(initial) == pd.core.groupby.DataFrameGroupBy
Explanation: Q2.4. Hagyományos lexikont szeretnénk készíteni a filmekből. Melyik kezdőbetű hányszor fordul elő a filmek címében? Ábrázold tortadiagramon.
Csoportosítsd a filmeket kezdőbetű szerint.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold.
End of explanation
def get_largest_group(df, groupby_columns):
# YOUR CODE HERE
raise NotImplementedError()
genres = ["drama"]
drama_largest = get_largest_group(df, genres)
assert type(drama_largest) == pd.DataFrame
assert len(drama_largest) == 957
genres = ["drama", "comedy"]
both_largest = get_largest_group(df, genres)
# a csoportban minden film comedy es drama cimkeje azonos
assert both_largest[["comedy", "drama"]].nunique().loc["comedy"] == 1
assert both_largest[["comedy", "drama"]].nunique().loc["drama"] == 1
print(both_largest.shape)
Explanation: *Q2.5. Írj függvényt, ami több oszlop mentén csoportosít és visszaadja a legnagyobb csoportot.
Tipp: a GroupBy objektum get_group függvénye visszaad egy csoportot.
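Rövid illusztráció a get_group működésére (nem a feladat megoldása; a korábban létrehozott year oszlop szerinti csoportosítást feltételezi):
~~~
by_year = df.groupby('year')
by_year.get_group(1995).head()  # az 1995-ös csoport első néhány sora
~~~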
End of explanation
cols = ['user', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table(os.path.join(data_dir, "u.data"), names=cols)
ratings.head()
Explanation: Több DataFrame kezelése, pd.merge
Az adathalmaz lényegi része a 100000 értékelés, amit az u.data fájlból tudunk beolvasni. A README-ből kiolvashatjuk a fájl oszlopait.
End of explanation
ratings['timestamp'] = pd.to_datetime(ratings.timestamp, unit='s')
ratings.head()
Explanation: A timestamp oszlop Unix timestampeket tartalmaz, konvertáljuk DateTime-má.
End of explanation
movies = df
Explanation: Merge a film táblával
Mivel már több DataFrame-mel dolgozunk, érdemes a filmeket tartalmazó táblának beszédesebb nevet adni.
End of explanation
ratings = pd.merge(ratings, movies, left_on='movie_id', right_index=True)
ratings.head()
Explanation: Felülírjuk a ratings táblát:
End of explanation
len(ratings[ratings.timestamp <= ratings.release_date])
Explanation: Hány értékelés érkezett a film megjelenése előtt?
End of explanation
ratings[ratings.timestamp <= ratings.release_date].title.value_counts()
Explanation: Hogy oszlik meg ez a szám a filmek között?
End of explanation
def count_greater_than_4(ratings):
# YOUR CODE HERE
raise NotImplementedError()
greater = count_greater_than_4(ratings)
assert type(greater) == int
assert greater != 1160 # titles are NOT UNIQUE
Explanation: 3. feladat: merge
Q3.1. Hány film kapott legalább egyszer 4 fölötti értékelést?
VIGYÁZAT! A filmek címe nem feltétlenül egyedi.
End of explanation
ratings.hist('rating', bins=5)
Explanation: Hisztogram készítése az egyes értékelésekről
Hisztogram készítésére (melyik érték hányszor szerepelt), a hist függvény áll rendelkezésünkre:
End of explanation
def filter_old_crime_movies(ratings):
# YOUR CODE HERE
raise NotImplementedError()
old_crime_movies = filter_old_crime_movies(ratings)
assert type(old_crime_movies) == pd.DataFrame
Explanation: Q3.2. Ábrázold hisztogramon az 1960 előtti krimik értékeléseit!
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold.
End of explanation
def rating_mean_by_decade(ratings):
# YOUR CODE HERE
raise NotImplementedError()
decade_mean = rating_mean_by_decade(ratings)
# csak az ertekeles oszlop atlaga erdekel minket, nem az egesz DataFrame-e
assert not type(decade_mean) == pd.DataFrame
assert type(decade_mean) == pd.Series
assert 1920 in decade_mean.index
assert 1921 not in decade_mean.index
Explanation: Q3.3. Mi az értékelések átlaga évtizedenként (film megjelenési éve)?
Figyelj arra, hogy csak annyi adat szerepeljen az összesítésben, amennyit a feladat kér. Az indexek legyenek az évtizedek kezdőévei.
End of explanation
def rating_mean_by_weekday(ratings):
# YOUR CODE HERE
raise NotImplementedError()
weekday_mean = rating_mean_by_weekday(ratings)
assert type(weekday_mean) == pd.Series
assert type(weekday_mean) != pd.DataFrame # csak egy oszlop kell
Explanation: Q3.4. Az értékelésekhez tartozik egy timestamp. Mi az értékelések átlaga a hét napjaira lebontva?
Tehát melyik napon jószívűbbek az emberek?
Tipp: érdemes körbenézni a dátummezőkhöz tartozó dt névtérben.
End of explanation
def adventure_monthly_std(ratings):
# YOUR CODE HERE
raise NotImplementedError()
adventure = adventure_monthly_std(ratings)
assert type(adventure) == pd.Series
assert type(adventure) != pd.DataFrame
# legfeljebb 12 különböző hónapban érkezhettek értékelések
assert len(adventure) <= 12
Explanation: Q3.5. Melyik hónapban mennyi a kalandfilmek (adventure) értékeléseinek szórása?
Vigyázat, a szórás és a variancia nem azonos!
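Gyors, illusztratív ellenőrzés: a szórás a variancia négyzetgyöke.
~~~
df.year.std() ** 2, df.year.var()  # a két érték megegyezik
~~~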
End of explanation
# users = ...
# YOUR CODE HERE
raise NotImplementedError()
assert type(users) == pd.DataFrame
# user_id starts from 1
assert 0 not in users.index
Explanation: 4. feladat: Users DataFrame
Q4.1 Olvasd be a u.user fájlt egy users nevű DataFrame-be!
Segítségképpen az oszlopok: user_id, age, gender, occupation, zip. A user_id oszlop legyen a DataFrame indexe.
End of explanation
# ratings = ratings.merge...
# YOUR CODE HERE
raise NotImplementedError()
assert type(ratings) == pd.DataFrame
assert ratings.shape == (100000, 30)
Explanation: Q4.2. Merge-öld a ratings táblát a users táblával. Őrizd meg az összes oszlopot.
End of explanation
def by_age_group(ratings):
# YOUR CODE HERE
raise NotImplementedError()
r = by_age_group(ratings)
assert type(r) == pd.Series
assert 20 in r
Explanation: Q4.3. Korcsoportonként hány értékelést adtak le? 10 évet veszünk egy korcsoportnak, tehát 10-19, 20-29 stb. Ábrázold oszlopdiagramon.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold.
End of explanation
def occupation_cnt_by_hour(ratings, occupation):
# YOUR CODE HERE
raise NotImplementedError()
marketing = occupation_cnt_by_hour(ratings, "marketing")
assert type(marketing) == pd.Series
# 24 órás egy nap
assert len(marketing) < 25
Explanation: Q4.4. A nap melyik órájában értékelnek a programozók, illetve a marketingesek? Ábrázold két tortadiagramon.
Tipp:
használd az értékelések táblából származó timestamp mezőt,
használhatsz két külön cellát a megoldáshoz,
gondold át hány szeletes lesz a tortadiagram.
Készíts egy függvényt, ami egy adott szakma képviselőinek óránkénti értékelésszámát adja vissza.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Ábrázold tortadiagramon a marketingesek és a programozók értékelési óráit.
Először a marketingesek:
End of explanation
programmer = occupation_cnt_by_hour(ratings, "programmer")
# YOUR CODE HERE
raise NotImplementedError()
Explanation: majd a programozók:
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q4.5. Készíts hisztogramot az értékelési kedvről! Hány user adott le N értékelést?
Segítség:
Az adatból hiányoznak a 20 értékelésnél kevesebbet leadó felhasználók, ami a hisztogramról könnyen leolvasható, ha jól ábrázoltad.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q4.6. (Szorgalmi) Milyen volt a nemek eloszlása a romantikus filmet, illetve az akciófilmeket értékelők között? Készíts két tortadiagramot!
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q4.7. (Szorgalmi) Jóval több férfi adott le értékelést. Hogy alakulnak ezek az arányok, ha normálunk az összes értékelésre jellemző nemek arányával?
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q4.8. (**Szorgalmi) A nap melyik órájában melyik szakma értékel legtöbbször és hányszor értékelnek?
Példa válasz:
0-1 óra között a mérnökök értékelnek legtöbbször, 2134-szer.
1-2 óra között az oktatók (educator) értékelnek legtöbbször, 1879-szer.
Táblázatos formában elég megválaszolni.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert type(X) == np.ndarray
Explanation: 5. feladat: K legközelebbi szomszéd
Ebben a feladatban a műfajok alapján fogjuk megkeresni minden filmhez a hozzá leghasonlóbb K filmet. Az eljárás neve k-nearest neighbor (KNN). A scikit-learn tartalmaz több KNN implementációt is, mi most a ball_tree-t fogjuk használni. Az osztály dokumentációja itt található: http://scikit-learn.org/stable/modules/neighbors.html
Q5.1. Nyerd ki a movies adattáblából a műfaji címkéket mátrixként!
A DataFrame values attribútumával kérhetünk le mátrixként az értékeket (oszlopnév, index stb. nélkül). Most csak a műfajokat tartalmazó oszlopokat kell megtartani. Vigyázz, az utolsó oszlop az évet tartalmazza!
A mátrix neve legyen X.
End of explanation
from sklearn.neighbors import NearestNeighbors
def run_knn(X, K):
# YOUR CODE HERE
raise NotImplementedError()
K = 4
indices = run_knn(X, K)
assert type(indices) == np.ndarray
# K legközelebbi szomszédot keresünk
assert indices.shape[1] == K
Explanation: Q5.2. Futtasd le a KNN-t az X mátrixon!
Ehhez bele kell nézned a NearestNeighbors dokumentációjába.
Az indexeket az indices változóban tárold.
A legközelebbi szomszédok számát a K változóban tárold. Először állítsd 4-re a K-t, később kísérletezhetsz más értékekkel is. A többi paramétert ne módosítsd, különben a tesztek nem biztos, hogy működnek.
End of explanation
ind = pd.DataFrame(indices)
ind.head()
Explanation: Az indices változó tartalmazza az indexeket, ebből készítsünk DataFrame-et:
End of explanation
def increment_table(df):
# YOUR CODE HERE
raise NotImplementedError()
indices = increment_table(ind)
assert indices.shape[1] == 4
assert indices.index[0] == 1
assert indices.index[-1] == len(indices)
Explanation: Értelmezzük a táblázatot!
Az index oszlop (első oszlop) azt mondja meg, hogy az X mátrix hányadik sorához tartozó szomszédok találhatók meg a sorban. A 0-3. nevű oszlopok a legközelebbi szomszédokat adják meg. Legtöbb film esetén saját maga a legközelebbi szomszédja, hiszen 0 a távolságuk, azonban nincs mindenhol így. Mit gondolsz, miért?
A táblázat indexe 0-val kezdődik, de a movies táblában a movie_id 1-től indul.
Q5.3. Állítsd át az ind DataFrame indexét úgy, hogy 1-től indexeljen! Az összes mezőt is növeld meg eggyel!
Segítség: a Pandas alapok Vektoros műveletvégzés részét érdemes megnézni.
End of explanation
def find_neighbor_titles(movies, indices):
# YOUR CODE HERE
raise NotImplementedError()
neighbors = find_neighbor_titles(movies, indices)
assert type(neighbors) == pd.DataFrame
assert neighbors.shape[1] == K
Explanation: Q5.4. Keresd meg az indexekhez tartozó filmcímeket!
Az indices táblázatban filmcímek helyett indexek vannak, ami nem túl felhasználóbarát. A movies DataFrame tartalmazza a filmeket indexekkel együtt, ezzel kell merge-ölni K alkalommal.
Pl. az első sorban megjelenő 422-es index az Aladdin and the King of Thieves film indexe. Kerüljön ez a cím az index helyére a merge után.
Segítség:
1. az indices tábla oszlopainak nevei most nem stringek, hanem integerek,
1. az oszlopokat át tudod nevezni a rename metódussal.
~~~
df = df.rename(columns={'regi': 'uj', 'masik regi': 'masik uj'})
~~~
A filmcímeken kívül minden oszlopot el lehet dobni.
Tipp: érdemes belenézni a kapott táblázatba, hogy reálisak-e az adatok. Pl. a Toy Story szomszédai szintén rajzfilmek lesznek.
End of explanation
def recover_titles(movies, neighbors):
# YOUR CODE HERE
raise NotImplementedError()
most_similar = recover_titles(movies, neighbors)
assert type(most_similar) == pd.DataFrame
assert "Toy Story" in most_similar.index
Explanation: Q5.5. Jelenjen meg a táblázatban az a film is, aminek a szomszédjai a sorban vannak! A címeken kívül más oszlopa ne legyen a táblázatnak!
Most olyanok a soraink hogy:
Jelenleg a táblázat indexe a filmek azonosítója, tehát egy szám:
| | nearest1 | nearest 2 |
| ------- | ----- | ----- |
| 1 | hasonló film címe 1 | hasonló film címe 2 |
Számok helyett a filmcím legyen az index.
| | nearest1 | nearest 2 |
| ------- | ----- | ----- |
| Filmcím | hasonló film címe 1 | hasonló film címe 2 |
Sok filmnek saját maga a legközelebbi szomszédja.
End of explanation
def recommend_similar_movies(most_similar, title):
# YOUR CODE HERE
raise NotImplementedError()
die_hard = recommend_similar_movies(most_similar, "Die Hard")
assert type(die_hard) == pd.DataFrame
# there are more than one Die Hard movies
assert len(die_hard) > 1
asdf_movies = recommend_similar_movies(most_similar, "asdf")
assert type(asdf_movies) == pd.DataFrame
Explanation: Q5.6. (Szorgalmi) Készíts függvényt, ami egy filmcímrészletet vesz át és megkeresi azokat a filmeket, amikben szerepel.
A függvény visszatérési értéke legyen egy DataFrame, amely a hasonló filmeket tartalmazza (akkor is, ha 1 vagy 0 hasonló film van). A most_similar táblázat szintén a függvény paramétere.
End of explanation |
6,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stopword Removal from Media Unit & Annotation
In this tutorial, we will show how dimensionality reduction can be applied over both the media units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to highlight words or phrases in a text that identify or refer to people in a video. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here
Step1: Notice the diverse behavior of the crowd workers. While most annotated each word individually, the worker on row 5 annotated chunks of the sentence together in one word phrase. Also, when no answer was picked by the worker, the value in the cell is NaN.
A basic pre-processing configuration
Our basic pre-processing configuration attempts to normalize the different ways of performing the crowd annotations.
We set remove_empty_rows = False to keep the empty rows from the crowd. This configuration option will set all empty cell values to correspond to a NONE token in the annotation vector.
We build the annotation vector to have one component for each word in the sentence. To do this, we break up multiple-word annotations into a list of single words in the processJudgments call
Step2: Now we can pre-process the data and run the CrowdTruth metrics
Step3: Removing stopwords from Media Units and Annotations
A more complex dimensionality reduction technique involves removing the stopwords from both the media units and the crowd annotations. Stopwords (i.e. words that are very common in the English language) do not usually contain much useful information. Also, the behavior of the crowds w.r.t them is inconsistent - some workers omit them, some annotate them.
The first step is to build a function that removes stopwords from strings. We will use the stopwords corpus in the nltk package to get the list of words. We want to build a function that can be reused for both the text in the media units and in the annotations column. Also, we need to be careful about omitting punctuation.
The function remove_stop_words does all of these things
Step4: In the new configuration class ConfigDimRed, we apply the function we just built to both the column that contains the media unit text (inputColumns[2]), and the column containing the crowd annotations (outputColumns[0])
Step5: Now we can pre-process the data and run the CrowdTruth metrics
Step6: Effect on CrowdTruth metrics
Finally, we can compare the effect of the stopword removal on the CrowdTruth sentence quality score.
Step7: The red line in the plot runs through the diagonal. All sentences above the line have a higher sentence quality score when the stopwords were removed.
The plot shows that removing the stopwords improved the quality for a majority of the sentences. Surprisingly though, some sentences decreased in quality. This effect can be understood when plotting the worker quality scores.
Step8: The quality of the majority of workers has also increased in the configuration where we removed the stopwords. However, because of the inter-linked nature of the CrowdTruth quality metrics, the annotations of these workers now have a greater weight when calculating the sentence quality score. So the stopword removal process had the effect of removing some of the noise in the annotations and therefore increasing the quality scores, but also of amplifying the true ambiguity in the sentences. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/person-video-highlight.csv")
test_data["taggedinsubtitles"][0:30]
Explanation: Stopword Removal from Media Unit & Annotation
In this tutorial, we will show how dimensionality reduction can be applied over both the media units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to highlight words or phrases in a text that identify or refer to people in a video. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is how the task looked like to the workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. The answers from the crowd are stored in the taggedinsubtitles column.
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
class Config(DefaultConfig):
inputColumns = ["ctunitid", "videolocation", "subtitles"]
outputColumns = ["taggedinsubtitles"]
open_ended_task = True
annotation_separator = ","
remove_empty_rows = False
def processJudgments(self, judgments):
# build annotation vector just from words
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(' ',self.annotation_separator))
# normalize vector elements
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace('[',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(']',''))
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace('"',''))
return judgments
Explanation: Notice the diverse behavior of the crowd workers. While most annotated each word individually, the worker on row 5 annotated chunks of the sentence together in one word phrase. Also, when no answer was picked by the worker, the value in the cell is NaN.
A basic pre-processing configuration
Our basic pre-processing configuration attempts to normalize the different ways of performing the crowd annotations.
We set remove_empty_rows = False to keep the empty rows from the crowd. This configuration option will set all empty cell values to correspond to a NONE token in the annotation vector.
We build the annotation vector to have one component for each word in the sentence. To do this, we break up multiple-word annotations into a list of single words in the processJudgments call:
judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply(
lambda x: str(x).replace(' ',self.annotation_separator))
The final configuration class Config is this:
End of explanation
data_with_stopwords, config_with_stopwords = crowdtruth.load(
file = "../data/person-video-highlight.csv",
config = Config()
)
processed_results_with_stopwords = crowdtruth.run(
data_with_stopwords,
config_with_stopwords
)
Explanation: Now we can pre-process the data and run the CrowdTruth metrics:
End of explanation
import nltk
from nltk.corpus import stopwords
import string
stopword_set = set(stopwords.words('english'))
stopword_set.update(['s'])
def remove_stop_words(words_string, sep):
'''
words_string: string containing all words
sep: separator character for the words in words_string
'''
words_list = words_string.replace("'", sep).split(sep)
corrected_words_list = ""
for word in words_list:
if word.translate(None, string.punctuation) not in stopword_set:
if corrected_words_list != "":
corrected_words_list += sep
corrected_words_list += word
return corrected_words_list
Explanation: Removing stopwords from Media Units and Annotations
A more complex dimensionality reduction technique involves removing the stopwords from both the media units and the crowd annotations. Stopwords (i.e. words that are very common in the English language) do not usually contain much useful information. Also, the behavior of the crowds w.r.t them is inconsistent - some workers omit them, some annotate them.
The first step is to build a function that removes stopwords from strings. We will use the stopwords corpus in the nltk package to get the list of words. We want to build a function that can be reused for both the text in the media units and in the annotations column. Also, we need to be careful about omitting punctuation.
The function remove_stop_words does all of these things:
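A quick, illustrative sanity check on a made-up phrase (the exact output depends on the nltk stopword list). Note that word.translate(None, string.punctuation) above is Python 2 syntax; on Python 3 the equivalent is word.translate(str.maketrans('', '', string.punctuation)).
~~~
remove_stop_words("some of the wine on this page", " ")
# expected: 'wine page'
~~~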
End of explanation
import pandas as pd
class ConfigDimRed(Config):
def processJudgments(self, judgments):
judgments = Config.processJudgments(self, judgments)
# remove stopwords from input sentence
for idx in range(len(judgments[self.inputColumns[2]])):
judgments.at[idx, self.inputColumns[2]] = remove_stop_words(
judgments[self.inputColumns[2]][idx], " ")
for idx in range(len(judgments[self.outputColumns[0]])):
judgments.at[idx, self.outputColumns[0]] = remove_stop_words(
judgments[self.outputColumns[0]][idx], self.annotation_separator)
if judgments[self.outputColumns[0]][idx] == "":
judgments.at[idx, self.outputColumns[0]] = self.none_token
return judgments
Explanation: In the new configuration class ConfigDimRed, we apply the function we just built to both the column that contains the media unit text (inputColumns[2]), and the column containing the crowd annotations (outputColumns[0]):
End of explanation
data_without_stopwords, config_without_stopwords = crowdtruth.load(
file = "../data/person-video-highlight.csv",
config = ConfigDimRed()
)
processed_results_without_stopwords = crowdtruth.run(
data_without_stopwords,
config_without_stopwords
)
Explanation: Now we can pre-process the data and run the CrowdTruth metrics:
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
processed_results_with_stopwords["units"]["uqs"],
processed_results_without_stopwords["units"]["uqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Sentence Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
Explanation: Effect on CrowdTruth metrics
Finally, we can compare the effect of the stopword removal on the CrowdTruth sentence quality score.
End of explanation
plt.scatter(
processed_results_with_stopwords["workers"]["wqs"],
processed_results_without_stopwords["workers"]["wqs"],
)
plt.plot([0, 0.6], [0, 0.6], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("with stopwords")
plt.ylabel("without stopwords")
Explanation: The red line in the plot runs through the diagonal. All sentences above the line have a higher sentence quality score when the stopwords were removed.
The plot shows that removing the stopwords improved the quality for a majority of the sentences. Surprisingly though, some sentences decreased in quality. This effect can be understood when plotting the worker quality scores.
End of explanation
processed_results_with_stopwords["units"].to_csv("../data/results/openextr-persvid-units.csv")
processed_results_with_stopwords["workers"].to_csv("../data/results/openextr-persvid-workers.csv")
processed_results_without_stopwords["units"].to_csv("../data/results/openextr-persvid-dimred-units.csv")
processed_results_without_stopwords["workers"].to_csv("../data/results/openextr-persvid-dimred-workers.csv")
Explanation: The quality of the majority of workers has also increased in the configuration where we removed the stopwords. However, because of the inter-linked nature of the CrowdTruth quality metrics, the annotations of these workers now have a greater weight when calculating the sentence quality score. So the stopword removal process had the effect of removing some of the noise in the annotations and therefore increasing the quality scores, but also of amplifying the true ambiguity in the sentences.
End of explanation |
6,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Taken in part from the course Creative applications of deep learning with tensorflow
Regression to a noisy sine wave
L1 minimization with SGD
Linear regression iterations
Regression by a cubic polynomial
Non linear activation
Simple network with a non linear activation
Going deeper
Step1: <a name="regression-1d"></a>
Regression to a sine wave
Creating the dataset
Step2: <a name="L1-SGD"></a>
The training procedure
Step3: <a name="regression-1d-sine"></a>
Linear regression
Step4: <a name="cubic-regression"></a>
Cubic polynomial regression
Step5: <a name="non-linear-activation"></a>
Non linear activation
Step6: <a name="net-with-non-linear-activation"></a>
Simple network with tanh non linear activation
Step7: <a name="going-deeper"></a>
Going deeper | Python Code:
# imports
%matplotlib inline
# %pylab osx
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
plt.style.use('ggplot')
Explanation: Taken in part from the course Creative applications of deep learning with tensorflow
Regression to a noisy sine wave
L1 minimization with SGD
Linear regression iterations
Regression by a cubic polynomial
Non linear activation
Simple network with a non linear activation
Going deeper
End of explanation
#---------------------------------------------
# Create the data set: Sine wave with noise
#--------------------------------------------
n_observations = 1000
xs = np.linspace(-3, 3, n_observations)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)
#plt.scatter(xs, ys, alpha=0.15, marker='+')
Explanation: <a name="regression-1d"></a>
Regression to a sine wave
Creating the dataset
End of explanation
# L1 cost function
def distance(p1, p2):
return tf.abs(p1 - p2)
def train(X, Y, Y_pred, n_iterations=100, batch_size=200, learning_rate=0.02):
    cost = tf.reduce_mean(distance(Y_pred, Y))  # cost ==> mean L1 distance over all samples
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Plot the true data distribution
fig, ax = plt.subplots(1, 1)
ax.scatter(xs, ys, alpha=0.15, marker='+')
ax.set_xlim([-4, 4])
ax.set_ylim([-2, 2])
with tf.Session() as sess:
# Here we tell tensorflow that we want to initialize all
# the variables in the graph so we can use them
# This will set `W` and `b` to their initial random normal value.
sess.run(tf.initialize_all_variables())
# We now run a loop over epochs
prev_training_cost = 0.0
for it_i in range(n_iterations):
idxs = np.random.permutation(range(len(xs)))
n_batches = len(idxs) // batch_size
for batch_i in range(n_batches):
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
sess.run(optimizer, feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})
training_cost = sess.run(cost, feed_dict={X: xs, Y: ys})
if it_i % 10 == 0:
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
ax.plot(xs, ys_pred, 'k', alpha=it_i / float(n_iterations))
# getting the values as numpy array
# w = sess.run(W, feed_dict={X: xs, Y: ys})
# b = sess.run(B, feed_dict={X: xs, Y: ys})
print ' iteration: {:3} Cost: {} '.format(it_i,training_cost)
fig.show()
plt.draw()
Explanation: <a name="L1-SGD"></a>
The training procedure: L1 minimization with batch stochastic GD
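The L1 (absolute) cost is what makes the fit robust to the noise added above; as a quick experiment, redefining distance with a squared error before calling train() turns the cost into a mean squared error instead. A minimal sketch:
~~~
def distance(p1, p2):
    return tf.pow(p1 - p2, 2)  # L2 (squared) distance instead of L1
~~~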
End of explanation
# Reset default graph
tf.reset_default_graph()
# Declare variables
# placeholders to hold input data
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
# Variables for internal weights
W = tf.Variable(tf.random_normal([1], dtype=tf.float32, stddev=0.1), name='weight')
B = tf.Variable(tf.constant([1], dtype=tf.float32), name='bias')
Y_pred = X * W + B
train(X,Y,Y_pred)
Explanation: <a name="regression-1d-sine"></a>
Linear regression
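As a quick sanity check (assuming the xs and ys arrays created earlier), numpy can fit a line in closed form; note that np.polyfit minimizes squared error rather than the L1 cost used by train():
~~~
w_np, b_np = np.polyfit(xs, ys, 1)
w_np, b_np  # slope and intercept of the least-squares line
~~~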
End of explanation
# Reset default graph
tf.reset_default_graph()
# Declare variables
# placeholders to hold input data
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
# Variables for internal weights
B = tf.Variable(tf.constant([1], dtype=tf.float32), name='bias')
Y_pred = tf.Variable(tf.random_normal([1]), name='bias')
for pow_i in range(0, 4):
# Instantiate weight for each monomial
W = tf.Variable(
tf.random_normal([1], stddev=0.1), name='weight_%d' % pow_i)
Y_pred = tf.add(tf.mul(tf.pow(X, pow_i), W), Y_pred)
train(X,Y,Y_pred)
Explanation: <a name="cubic-regression"></a>
Cubic polynomial regression
End of explanation
# Reset default graph
tf.reset_default_graph()
sess = tf.InteractiveSession()
x = np.linspace(-6,6,1000)
plt.plot(x, tf.nn.tanh(x).eval(), label='tanh')
plt.plot(x, tf.nn.sigmoid(x).eval(), label='sigmoid')
plt.plot(x, tf.nn.relu(x).eval(), label='relu')
plt.legend(loc='lower right')
plt.xlim([-6, 6])
plt.ylim([-2, 2])
plt.xlabel('Input')
plt.ylabel('Output')
plt.grid('on')
Explanation: <a name="non-linear-activation"></a>
Non linear activation
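Why the non-linearity matters: without it, stacking linear layers collapses into a single linear map (biases omitted for brevity), so an extra layer adds no expressive power. A tiny numpy illustration with arbitrary shapes:
~~~
W1, W2 = np.random.randn(1, 3), np.random.randn(3, 1)
x_demo = np.linspace(-3, 3, 5).reshape(-1, 1)
np.allclose(x_demo.dot(W1).dot(W2), x_demo.dot(W1.dot(W2)))  # True
~~~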
End of explanation
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
#---------------------------------------------------------------------------
# Create inner layer of 10 neurons followed by non-linear activation
# W : 1 x n_neurons matrix
# b : n_neurons vector
#
# X : n_vals Input vector of values
# Y : n_vals Input noisy values of sin(X)
#
# For a single input value x_i we do:
# h_k(x_i) = f(x_i * w_k + b_k) , k=1,...,n_neurons
# In vector notation
# (1)  h = f(x_i * W + b)
# where f is the non-linear activation function (tanh here).
# (2) y_pred_i = h_1 + h_2 + .... + h_n_neurons
#
# From single input/output value to a vector
# -------------------------------------------
# Consider now a vector of input values: X = (x_1,...,x_n_vals)^T
# Let us expand this vector to a column matrix XM = tf.expand_dims(X, 1)
# Then, instead of the single vector in (1), we have a matrix of n_vals x n_neurons entries
#     H = f(matmul(XM, W) + b)
# which we sum, row by row, as in (2) to get the prediction vector Y_PRED
#---------------------------------------------------------------------------
n_neurons = 10
W = tf.Variable(tf.random_normal([1, n_neurons]), name='W')
b = tf.Variable(tf.constant(0, dtype=tf.float32, shape=[n_neurons]), name='b')
h = tf.nn.tanh(tf.matmul(tf.expand_dims(X, 1), W) + b, name='h')
Y_pred = tf.reduce_sum(h, 1)
#with tf.Session() as sess:
# sess.run(tf.initialize_all_variables())
# print 'Shape of H 111 x 10',sess.run(h, feed_dict={X: xs[0:111], Y: ys[0:111]}).shape
# And retrain w/ our new Y_pred
train(X, Y, Y_pred)
Explanation: <a name="net-with-non-linear-activation"></a>
Simple network with tanh non linear activation
End of explanation
# Define a single hidden layer with activation function
# Creating variables with scopes help in further debugging
def linear(X, n_input, n_output, activation=None, scope=None):
with tf.variable_scope(scope or "linear"):
# Create/return variable with a given scope
W = tf.get_variable(
name='W',
shape=[n_input, n_output],
initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
b = tf.get_variable(
name='b',
shape=[n_output],
initializer=tf.constant_initializer())
h = tf.matmul(X, W) + b
if activation is not None:
h = activation(h)
return h
# first clear the graph
from tensorflow.python.framework import ops
ops.reset_default_graph()
# let's get the current graph
g = tf.get_default_graph()
# See the names of any operations in the graph
print 'Empty graph: ',[op.name for op in tf.get_default_graph().get_operations()]
# let's create a new network
X = tf.placeholder(tf.float32, name='X')
h = linear(X, 1, 10, scope='layer1')
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
    print 'Shape of h:', sess.run(h, feed_dict={X: np.expand_dims(xs[:111], 1)}).shape
# See the names of any operations in the graph
print 'Graph of first layer with 2 input and 10 neurons: '
for op in [op.name for op in tf.get_default_graph().get_operations()]:
print ' ',op
ops.reset_default_graph()
g = tf.get_default_graph()
n_observations = 1000
xs = np.linspace(-3, 3, n_observations)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)
xs = xs.reshape(n_observations,1)
ys = ys.reshape(n_observations,1)
X = tf.placeholder(tf.float32, shape=[None,1],name='X')
Y = tf.placeholder(tf.float32, shape=[None,1],name='Y')
n_inputs = 1
n_neurons = 10
h1 = linear(X, n_inputs, n_neurons, activation=tf.nn.tanh, scope='layer1')
h2 = linear(h1, n_neurons, n_neurons, activation=tf.nn.tanh, scope='layer2')
Y_pred = linear(h2, n_neurons, 1, scope='layer3')
#Y_pred = linear(h2, 10, 1, scope='layer3')
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
print 'Dimension of prediction vector is: ',sess.run(Y_pred, feed_dict={X: xs[0:111]}).shape
# And retrain w/ our new Y_pred
train(X, Y, Y_pred,n_iterations=500, batch_size=30, learning_rate=0.055)
Explanation: <a name="going-deeper"></a>
Going deeper
End of explanation |
6,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sessionize
The MADlib sessionize function performs time-oriented session reconstruction on a data set comprising a sequence of events. A defined period of inactivity indicates the end of one session and beginning of the next session.
Step1: The data set describes shopper behavior on a notional web site that sells beer and wine. A beacon fires an event to a log file when the shopper visits different pages on the site
Step2: Sessionize the table by each user_id
Step3: Now let's say we want to see 3 minute sessions by a group of users with a certain range of user IDs. To do this, we need to sessionize the table based on a partition expression. Also, we want to persist a table output with a reduced set of columns in the table. | Python Code:
%load_ext sql
# %sql postgresql://[email protected]:55000/madlib
%sql postgresql://fmcquillan@localhost:5432/madlib
%sql select madlib.version();
Explanation: Sessionize
The MADlib sessionize function performs time-oriented session reconstruction on a data set comprising a sequence of events. A defined period of inactivity indicates the end of one session and beginning of the next session.
End of explanation
%%sql
DROP TABLE IF EXISTS eventlog CASCADE; -- Use CASCADE because views created below depend on this table
CREATE TABLE eventlog (event_timestamp TIMESTAMP,
user_id INT,
page TEXT,
revenue FLOAT);
INSERT INTO eventlog VALUES
('04/15/2015 02:19:00', 101331, 'CHECKOUT', 16),
('04/15/2015 02:17:00', 202201, 'WINE', 0),
('04/15/2015 03:18:00', 202201, 'BEER', 0),
('04/15/2015 01:03:00', 100821, 'LANDING', 0),
('04/15/2015 01:04:00', 100821, 'WINE', 0),
('04/15/2015 01:05:00', 100821, 'CHECKOUT', 39),
('04/15/2015 02:06:00', 100821, 'WINE', 0),
('04/15/2015 02:09:00', 100821, 'WINE', 0),
('04/15/2015 02:15:00', 101331, 'LANDING', 0),
('04/15/2015 02:16:00', 101331, 'WINE', 0),
('04/15/2015 02:17:00', 101331, 'HELP', 0),
('04/15/2015 02:18:00', 101331, 'WINE', 0),
('04/15/2015 02:29:00', 201881, 'LANDING', 0),
('04/15/2015 02:30:00', 201881, 'BEER', 0),
('04/15/2015 01:05:00', 202201, 'LANDING', 0),
('04/15/2015 01:06:00', 202201, 'HELP', 0),
('04/15/2015 01:09:00', 202201, 'LANDING', 0),
('04/15/2015 02:15:00', 202201, 'WINE', 0),
('04/15/2015 02:16:00', 202201, 'BEER', 0),
('04/15/2015 03:19:00', 202201, 'WINE', 0),
('04/15/2015 03:22:00', 202201, 'CHECKOUT', 21);
SELECT * FROM eventlog ORDER BY event_timestamp;
Explanation: The data set describes shopper behavior on a notional web site that sells beer and wine. A beacon fires an event to a log file when the shopper visits different pages on the site: landing page, beer selection page, wine selection page, and checkout. Each user is identified by a user id, and every time a page is visited, the page and time stamp are logged.
Create the data table:
End of explanation
%%sql
DROP VIEW IF EXISTS sessionize_output_view;
SELECT madlib.sessionize(
'eventlog', -- Name of input table
'sessionize_output_view', -- View to store sessionize results
'user_id', -- Partition input table by user id
'event_timestamp', -- Time column used to compute sessions
'0:30:0' -- Time out used to define a session (30 minutes)
);
SELECT * FROM sessionize_output_view ORDER BY user_id, event_timestamp;
Explanation: Sessionize the table by each user_id:
End of explanation
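To build intuition for what the 30 minute time-out does, here is a rough pandas equivalent (my own sketch, not how MADlib implements it): a new session starts whenever the gap since the user's previous event exceeds the time-out.
import pandas as pd
# A few rows of the eventlog above for user 100821, kept in memory for illustration
events = pd.DataFrame({
    'user_id': [100821, 100821, 100821, 100821],
    'event_timestamp': pd.to_datetime(['2015-04-15 01:03', '2015-04-15 01:04',
                                       '2015-04-15 01:05', '2015-04-15 02:06']),
})
events = events.sort_values(['user_id', 'event_timestamp'])
gap = events.groupby('user_id')['event_timestamp'].diff()
new_session = gap.isna() | (gap > pd.Timedelta(minutes=30))
events['session_id'] = new_session.groupby(events['user_id']).cumsum()
print(events)  # the first three events form session 1, the 02:06 event starts session 2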
%%sql
DROP TABLE IF EXISTS sessionize_output_table;
SELECT madlib.sessionize(
'eventlog', -- Name of input table
'sessionize_output_table', -- Table to store sessionize results
'user_id < 200000', -- Partition input table by subset of users
'event_timestamp', -- Order partitions in input table by time
'180', -- Use 180 second time out to define sessions
-- Note that this is the same as '0:03:0'
'event_timestamp, user_id, user_id < 200000 AS "Department-A1"', -- Select only user_id and event_timestamp columns, along with the session id as output
'f' -- create a table
);
SELECT * FROM sessionize_output_table WHERE "Department-A1"='TRUE' ORDER BY event_timestamp;
Explanation: Now let's say we want to see 3 minute sessions by a group of users with a certain range of user IDs. To do this, we need to sessionize the table based on a partition expression. Also, we want to persist a table output with a reduced set of columns in the table.
End of explanation |
6,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: Cool, no new nor deleted files.
Now let's set up a dataset that, for each table, links both the old and the new file together.
Step3: Let's make sure the structure hasn't changed
Step4: OK no columns have changed.
Now let's see for each file if there are more or less rows.
Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, competence, and liens_rome_referentiels, so let's see more precisely.
Step6: Alright, so the only change seems to be 15 new jobs added. Let's take a look (only showing interesting fields)
Step7: Those are indeed new jobs. Some are related to COVID-19 sneaking in.
OK, let's look at the changes in items
Step8: As anticipated it is a very minor change (hard to see it visually)
Step9: The new ones seem legit to me and related to the new jobs.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
Step10: So in addition to the added items, there are a few fixes. Let's have a look at them | Python Code:
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '343'
NEW_VERSION = '344'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
Explanation: Author: Pascal, [email protected]
Date: 2020-10-14
ROME update from v343 to v344
In October 2020 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires to have the two versions of the ROME in your data/rome/csv folder which happens only just before we switch to v344. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
def read_csv(filename):
try:
return pd.read_csv(filename)
except pd.errors.ParserError:
display(f'While parsing: {filename}')
raise
rome_data = [VersionedDataset(
basename=path.basename(f),
old=read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
Explanation: Cool, no new nor deleted files.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
Explanation: Let's make sure the structure hasn't changed:
End of explanation
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
Explanation: OK no columns have changed.
Now let's see for each file if there are more or less rows.
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, competence, and liens_rome_referentiels, so let's see more precisely.
End of explanation
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
Explanation: Alright, so the only change seems to be 15 new jobs added. Let's take a look (only showing interesting fields):
End of explanation
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
Explanation: Those are indeed new jobs. Some are related to COVID-19 sneaking in.
OK, let's look at the changes in items:
End of explanation
items.new[items.new.code_ogr.isin(new_items)].head()
Explanation: As anticipated it is a very minor change (hard to see it visually): 9 new ones have been created. Let's have a look at them.
End of explanation
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
Explanation: The new ones seem legit to me and related to the new jobs.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
End of explanation
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(5)
Explanation: So in addition to the added items, there are a few fixes. Let's have a look at them:
End of explanation |
6,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical Thinking in Python (Part 1)
Exploratory Data Analysis
Step1: Bee swarm plot
Step2: Empirical cumulative distribution function (ECDF)
Step3: Summary Statistics
mean - the average value; it is sensitive to outliers
$$ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i $$
median - the middle value of the sorted data; it is not affected by outliers
Step4: To represent percentiles and check for outliers, we use a box plot.
In a box plot, the box spans the 25th to the 75th percentile, with the median (50th percentile) marked inside it.
The whiskers usually extend to 1.5x the interquartile range (the 25-75 box).
Any points beyond the whiskers are flagged as outliers.
Step5: Variance - the mean squared distance of the data points from their mean.
If $\bar{x}$ is the mean,
$$ variance = \frac{1}{n}{\sum_{i=1}^{n}(x_i - \bar{x})^2} $$
Standard deviation - the square root of the variance
$$ std =\sqrt{\frac{1}{n}{\sum_{i=1}^{n}(x_i - \bar{x})^2}} $$
A higher std value indicates more spread-out (diverse) data.
Step6: Covariance - how two variables are related to each other,
i.e. a measure of how two quantities vary together.
$$ cov = \frac{1}{n}{\sum{(x-\bar{x})}{(y-\bar{y})}} $$
Pearson correlation coefficient -
$$ \rho = \frac{\frac{1}{n}{\sum{(x-\bar{x})}{(y-\bar{y})}}}{std(x) std(y)} $$ | Python Code:
# import
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
%matplotlib inline
# reading excel file
fh = pd.ExcelFile("dataset/EAVS.xlsx")
fh
print(fh.sheet_names)
data = fh.parse("SectionC")
data.head()
## Loading the IRIS dataset
irisds = load_iris()
# selecting the data field only
data = irisds['data']
# saving into csv file
col = irisds['feature_names']
print(len(data))
# sample data
print(col)
data[:5]
# plotting the sepal length ( sns style)
sepal = data[:,0]
sns.set()
plt.hist(sepal)
plt.xlabel("sepal length")
plt.ylabel("Count")
# by default matplotlib creates 10 bins, we can customize it 2 ways
# no of bins
# plotting the sepal length ( sns style)
sepal = data[:,0]
sns.set()
plt.hist(sepal, bins=6)
plt.xlabel("sepal length")
plt.ylabel("Count")
# bins details
# plotting the sepal length ( sns style)
sepal = data[:,0]
sns.set()
plt.hist(sepal, bins=[ x for x in range(0, 10)])
plt.xlabel("sepal length")
plt.ylabel("Count")
iris = pd.DataFrame(data, columns=col)
iris['species'] = ['Setosa' if x==0 else 'Versicolour' if x==1 else 'Virginica' for x in irisds['target']]
iris.head()
Explanation: Statistical Thinking in Python (Part 1)
Exploratory Data Analysis
End of explanation
sns.set()
sns.swarmplot(x="species", y='sepal length (cm)', data=iris)
plt.xlabel("species")
plt.ylabel("sepal length (cm)")
Explanation: Bee swarm plot
End of explanation
def ecdf(data):
n = len(data)
x = np.sort(data)
y = np.arange(1, n+1)/n
return x,y
# plotting
x, y = ecdf(iris[iris['species']=='Setosa']['sepal length (cm)'])
plt.plot(x, y, marker='.', linestyle='none')
plt.xlabel("sepal length (cm)")
plt.xlabel("ecdf")
plt.margins(0.02)
plt.legend(('Setosa',), loc='upper left')
# plotting all species
x, y = ecdf(iris[iris['species']=='Setosa']['sepal length (cm)'])
plt.plot(x, y, marker='.', linestyle='none')
x, y = ecdf(iris[iris['species']=='Versicolour']['sepal length (cm)'])
plt.plot(x, y, marker='.', linestyle='none')
x, y = ecdf(iris[iris['species']=='Virginica']['sepal length (cm)'])
plt.plot(x, y, marker='.', linestyle='none')
plt.xlabel("sepal length (cm)")
plt.xlabel("ecdf")
plt.margins(0.02)
plt.legend(('Setosa','Versicolour', 'Virginica'), loc='upper left')
Explanation: Empirical cumulative distribution function (ECDF)
End of explanation
# checking out what is avg and median sepal length (cm) for Virginica species
print("Mean : ",np.mean(iris[iris['species']=='Virginica']['sepal length (cm)']))
print("Median : ",np.median(iris[iris['species']=='Virginica']['sepal length (cm)']))
# creating variables
virgin = iris[iris['species']=='Virginica']
setosa = iris[iris['species']=='Setosa']
versi = iris[iris['species']=='Versicolour']
# getting percentile 25, 50, 75
print("25, 50, 70 percentile", np.percentile(virgin['sepal length (cm)'], [25, 50, 75]))
Explanation: Summary Statistics
mean - the average value; it is sensitive to outliers
$$ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i $$
median - the middle value of the sorted data; it is not affected by outliers
End of explanation
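To make the outlier point concrete, here is a small illustrative check (my own addition with made-up numbers): a single extreme value shifts the mean noticeably but leaves the median untouched.
toy = np.array([4.9, 5.0, 5.1, 5.2, 5.3])
toy_with_outlier = np.append(toy, 50.0)   # add one extreme value
print("mean   :", np.mean(toy), "->", np.mean(toy_with_outlier))
print("median :", np.median(toy), "->", np.median(toy_with_outlier))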
#sns.boxplot(x='species', y='sepal length (cm)', data=virgin)
#sns.boxplot(x='species', y='sepal length (cm)', data=setosa)
#sns.boxplot(x='species', y='sepal length (cm)', data=versi)
sns.boxplot(x='species', y='sepal length (cm)', data=iris)
plt.margins(0.2)
Explanation: To represent percentiles and check for outliers, we use a box plot.
In a box plot, the box spans the 25th to the 75th percentile, with the median (50th percentile) marked inside it.
The whiskers usually extend to 1.5x the interquartile range (the 25-75 box).
Any points beyond the whiskers are flagged as outliers.
End of explanation
var = np.var(versi['sepal length (cm)'])
var
std1 = np.sqrt(var)
std2 = np.std(versi['sepal length (cm)'])
print(std1, std2)
Explanation: Variance - the mean squared distance of the data points from their mean.
If $\bar{x}$ is the mean,
$$ variance = \frac{1}{n}{\sum_{i=1}^{n}(x_i - \bar{x})^2} $$
Standard deviation - the square root of the variance
$$ std =\sqrt{\frac{1}{n}{\sum_{i=1}^{n}(x_i - \bar{x})^2}} $$
A higher std value indicates more spread-out (diverse) data.
End of explanation
# plotting the scatter plot
plt.plot(versi['sepal length (cm)'], versi['sepal width (cm)'], marker='.',linestyle='none')
plt.xlabel('sepal length (cm)')
plt.ylabel("sepal width (cm)")
plt.margins(0.2)
plt.scatter(versi['sepal length (cm)' ], versi['sepal width (cm)'])
plt.xlabel('sepal length (cm)')
plt.ylabel("sepal width (cm)")
plt.margins(0.2)
## Calculating the covariance
cov_mat = np.cov(versi['sepal length (cm)'], versi['sepal width (cm)'])
print(cov_mat)
## Calculating the correlation coefficient
coef = np.corrcoef(versi['sepal length (cm)'], versi['sepal width (cm)'])
print(coef)
print(coef[0,1])
# let's calculate the correlation coefficient by hand
std_len = np.std(versi['sepal length (cm)'])
std_wid = np.std(versi['sepal width (cm)'])
print(std_len, std_wid)
coeff = cov_mat/(std_len*std_wid)
coeff
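# Added remark: this manual value will not exactly match np.corrcoef above, because
# np.cov normalizes by N-1 by default while np.std uses N (ddof=0).
# Using the population covariance (bias=True) makes the two agree:
coeff_pop = np.cov(versi['sepal length (cm)'], versi['sepal width (cm)'], bias=True)/(std_len*std_wid)
print(coeff_pop[0, 1])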
Explanation: Covariance - how two variables are related to each other,
i.e. a measure of how two quantities vary together.
$$ cov = \frac{1}{n}{\sum{(x-\bar{x})}{(y-\bar{y})}} $$
Pearson correlation coefficient -
$$ \rho = \frac{\frac{1}{n}{\sum{(x-\bar{x})}{(y-\bar{y})}}}{std(x) std(y)} $$
End of explanation |
6,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maximizing the profit of an oil company
This tutorial includes everything you need to set up the decision optimization engines and build mathematical programming models.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>
Step1: If CPLEX is not installed, you can install CPLEX Community edition.
Step2: Step 2
Step3: Step 3
Step5: Use basic HTML and a stylesheet to format the data.
Step6: Let's display the data we just prepared.
Step7: Step 4
Step8: Define the decision variables
For each combination of oil and gas, we have to decide the quantity of oil to use to produce a gasoline. A decision variable will be needed to represent that amount.
A matrix of continuous variables, indexed by the set of oils and the set of gasolines needs to be created.
Step9: We also have to decide how much should be spent in advertising for each type of gasoline. To do so, we will create a list of continuous variables, indexed by the gasolines.
Step10: Express the business constraints
The business constraints are the following
Step11: Maximum capacity
For each type of oil, the total quantity used in all types of gasolines must not exceed the maximum capacity for this oil.
Step12: Octane and Lead levels
For each gasoline type, the octane level must be above a minimum level, and the lead level must be below a maximum level.
Step13: Maximum total production
The total production must not exceed the maximum (here 14000).
Step14: Express the objective
The objective or goal is to maximize profit, which is made from sales of the final products minus total costs. The costs consist of the purchase cost of the crude oils, production costs, and inventory costs.
The model maximizes the net revenue, that is revenue minus oil cost and production cost, to which we subtract the total advertising cost.
To define business objective, let's define a few KPIs
Step15: Solve with Decision Optimization
If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.
We display the objective and KPI values after the solve by calling the method report() on the model.
Step16: Step 5
Step17: Let's display some KPIs in pie charts using the Python package matplotlib.
Step18: Production
Step19: We see that the most produced gasoline type is by far regular.
Now, let's plot the breakdown of oil blend quantities per gasoline type.
We are using a multiple bar chart diagram, displaying all blend values for each couple of oil and gasoline type.
Step20: Notice the missing bar for (crude2, diesel) which is expected since blend[crude2, diesel] is zero in the solution.
We can check the solution value of blends for crude2 and diesel, remembering that crude2 has offset 1 and diesel has offset 2.
Note how the decision variable is automatically converted to a float here. This would raise an exception if called before submitting a solve, as no solution value would be present. | Python Code:
import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
Explanation: Maximizing the profit of an oil company
This tutorial includes everything you need to set up the decision optimization engines and build mathematical programming models.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Import the library
Step 2: Model the data
Step 3: Prepare the data
Step 4: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the objective
Solve with Decision Optimization
Step 5: Investigate the solution and run an example analysis
Summary
Describe the business problem
An oil company manufactures different types of gasoline and diesel. Each type of gasoline is produced by blending different types of crude oils that must be purchased. The company must decide how much crude oil to buy in order to maximize its profit while respecting processing capacities and quality levels as well as satisfying customer demand.
Blending problems are a typical industry application of Linear Programming (LP). LP represents real life problems mathematically using an objective function to represent the goal that is to be minimized or maximized, together with a set of linear constraints which define the conditions to be satisfied and the limitations of the real life problem. The function and constraints are expressed in terms of decision variables and the solution, obtained from optimization engines such as IBM® ILOG® CPLEX®, provides the best values for these variables so that the objective function is optimized.
The oil-blending problem consists of calculating different blends of gasoline according to specific quality criteria.
Three types of gasoline are manufactured: super, regular, and diesel.
Each type of gasoline is produced by blending three types of crude oil: crude1, crude2, and crude3.
The gasoline must satisfy some quality criteria with respect to their lead content and their octane ratings, thus constraining the possible blendings.
The company must also satisfy its customer demand, which is 3,000 barrels a day of super, 2,000 of regular, and 1,000 of diesel.
The company can purchase 5,000 barrels of each type of crude oil per day and can process at most 14,000 barrels a day.
In addition, the company has the option of advertising a gasoline, in which case the demand for this type of gasoline increases by ten barrels for every dollar spent.
Finally, it costs four dollars to transform a barrel of oil into a barrel of gasoline.
How decision optimization can help
Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
For example:
Automate complex decisions and trade-offs to better manage limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Import the library
Run the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming (docplex.mp) and Constraint Programming (docplex.cp).
End of explanation
try:
import cplex
except:
raise Exception('Please install CPLEX. See https://pypi.org/project/cplex/')
Explanation: If CPLEX is not installed, you can install CPLEX Community edition.
End of explanation
import numpy as np
gas_names = ["super", "regular", "diesel"]
gas_data = np.array([[3000, 70, 10, 1], [2000, 60, 8, 2], [1000, 50, 6, 1]])
oil_names = ["crude1", "crude2", "crude3"]
oil_data = np.array([[5000, 45, 12, 0.5], [5000, 35, 6, 2], [5000, 25, 8, 3]])
nb_gas = len(gas_names)
nb_oils = len(oil_names)
range_gas = range(nb_gas)
range_oil = range(nb_oils)
print("Number of gasoline types = {0}".format(nb_gas))
print("Number of crude types = {0}".format(nb_oils))
# global data
production_cost = 4
production_max = 14000
# each $1 spent on advertising increases demand by 10.
advert_return = 10
Explanation: Step 2: Model the data
For each type of crude oil, there are capacities of what can be bought, the buying price, the octane level, and the lead level.
For each type of gasoline or diesel, there is customer demand, selling prices, and octane and lead levels.
There is a maximum level of production imposed by the factory's limit as well as a fixed production cost.
There are inventory costs for each type of final product and blending proportions. All of these have actual values in the model.
The maginal production cost and maximum production are assumed to be identical for all oil types.
Input data comes as NumPy arrays with two dimensions. NumPy is the fundamental package for scientific computing with Python.
The first dimension of the NumPy array is the number of gasoline types;
and for each gasoline type, we have a NumPy array containing capacity, price, octane and lead level, in that order.
End of explanation
import pandas as pd
gaspd = pd.DataFrame([(gas_names[i],int(gas_data[i][0]),int(gas_data[i][1]),int(gas_data[i][2]),int(gas_data[i][3]))
for i in range_gas])
oilpd = pd.DataFrame([(oil_names[i],int(oil_data[i][0]),int(oil_data[i][1]),int(oil_data[i][2]),oil_data[i][3])
for i in range_oil])
gaspd.columns = ['name','demand','price','octane','lead']
oilpd.columns= ['name','capacity','price','octane','lead']
Explanation: Step 3: Prepare the data
Pandas is another Python library that we use to store data. pandas contains data structures and data analysis tools for the Python programming language.
End of explanation
CSS = """
body {
margin: 0;
font-family: Helvetica;
}
table.dataframe {
border-collapse: collapse;
border: none;
}
table.dataframe tr {
border: none;
}
table.dataframe td, table.dataframe th {
margin: 0;
border: 1px solid white;
padding-left: 0.25em;
padding-right: 0.25em;
}
table.dataframe th:not(:empty) {
background-color: #fec;
text-align: left;
font-weight: normal;
}
table.dataframe tr:nth-child(2) th:empty {
border-left: none;
border-right: 1px dashed #888;
}
table.dataframe td {
border: 2px solid #ccf;
background-color: #f4f4ff;
}
table.dataframe thead th:first-child {
display: none;
}
table.dataframe tbody th {
display: none;
}
"""
from IPython.core.display import HTML
HTML('<style>{}</style>'.format(CSS))
Explanation: Use basic HTML and a stylesheet to format the data.
End of explanation
from IPython.display import display
print("Gas data:")
display(gaspd)
print("Oil data:")
display(oilpd)
Explanation: Let's display the data we just prepared.
End of explanation
from docplex.mp.model import Model
mdl = Model(name="oil_blending")
Explanation: Step 4: Set up the prescriptive model
Create the DOcplex model
A model is needed to store all the variables and constraints needed to formulate the business problem and submit the problem to the solve service.
End of explanation
blends = mdl.continuous_var_matrix(keys1=nb_oils, keys2=nb_gas, lb=0)
Explanation: Define the decision variables
For each combination of oil and gas, we have to decide the quantity of oil to use to produce a gasoline. A decision variable will be needed to represent that amount.
A matrix of continuous variables, indexed by the set of oils and the set of gasolines needs to be created.
End of explanation
adverts = mdl.continuous_var_list(nb_gas, lb=0)
Explanation: We also have to decide how much should be spent in advertising for each type of gasoline. To do so, we will create a list of continuous variables, indexed by the gasolines.
End of explanation
# gasoline demand is numpy array field #0
mdl.add_constraints(mdl.sum(blends[o, g] for o in range(nb_oils)) == gas_data[g][0] + advert_return * adverts[g]
for g in range(nb_gas))
mdl.print_information()
Explanation: Express the business constraints
The business constraints are the following:
The demand for each gasoline type must be satisfied. The total demand includes the initial demand as stored in the data, plus a variable demand caused by the advertising. This increase is assumed to be proportional to the advertising cost.
The capacity constraint on each oil type must also be satisfied.
For each gasoline type, the octane level must be above a minimum level, and the lead level must be below a maximum level.
Demand
For each gasoline type, the total quantity produced must equal the raw demand plus the demand increase created by the advertising.
End of explanation
mdl.add_constraints(mdl.sum(blends[o,g] for g in range_gas) <= oil_data[o][0]
for o in range_oil)
mdl.print_information()
Explanation: Maximum capacity
For each type of oil, the total quantity used in all types of gasolines must not exceed the maximum capacity for this oil.
End of explanation
# minimum octane level
# octane is numpy array field #2
mdl.add_constraints(mdl.sum(blends[o,g]*(oil_data[o][2] - gas_data[g][2]) for o in range_oil) >= 0
for g in range_gas)
# maximum lead level
# lead level is numpy array field #3
mdl.add_constraints(mdl.sum(blends[o,g]*(oil_data[o][3] - gas_data[g][3]) for o in range_oil) <= 0
for g in range_gas)
mdl.print_information()
Explanation: Octane and Lead levels
For each gasoline type, the octane level must be above a minimum level, and the lead level must be below a maximum level.
End of explanation
# -- maximum global production
mdl.add_constraint(mdl.sum(blends) <= production_max)
mdl.print_information()
Explanation: Maximum total production
The total production must not exceed the maximum (here 14000).
End of explanation
# KPIs
total_advert_cost = mdl.sum(adverts)
mdl.add_kpi(total_advert_cost, "Total advertising cost")
total_oil_cost = mdl.sum(blends[o,g] * oil_data[o][1] for o in range_oil for g in range_gas)
mdl.add_kpi(total_oil_cost, "Total Oil cost")
total_production_cost = production_cost * mdl.sum(blends)
mdl.add_kpi(total_production_cost, "Total production cost")
total_revenue = mdl.sum(blends[o,g] * gas_data[g][1] for g in range(nb_gas) for o in range(nb_oils))
mdl.add_kpi(total_revenue, "Total revenue")
# finally the objective
mdl.maximize(total_revenue - total_oil_cost - total_production_cost - total_advert_cost)
Explanation: Express the objective
The objective or goal is to maximize profit, which is made from sales of the final products minus total costs. The costs consist of the purchase cost of the crude oils, production costs, and inventory costs.
The model maximizes the net revenue, that is revenue minus oil cost and production cost, to which we subtract the total advertising cost.
To define the business objective, let's define a few KPIs:
Total advertising cost
Total Oil cost
Total production cost
Total revenue
End of explanation
assert mdl.solve(), "Solve failed"
mdl.report()
Explanation: Solve with Decision Optimization
If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.
We display the objective and KPI values after the solve by calling the method report() on the model.
End of explanation
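Beyond the aggregated KPIs, the individual decisions can be inspected directly. The loop below is a small sketch added for illustration; it only reuses the blends and adverts variables defined above.
for o in range_oil:
    for g in range_gas:
        q = blends[o, g].solution_value
        if q > 1e-6:
            print("blend {:8.1f} barrels of {} into {}".format(q, oil_names[o], gas_names[g]))
for g in range_gas:
    print("advertising budget for {}: {:.1f}".format(gas_names[g], adverts[g].solution_value))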
all_kpis = [(kp.name, kp.compute()) for kp in mdl.iter_kpis()]
kpis_bd = pd.DataFrame(all_kpis, columns=['kpi', 'value'])
blend_values = [ [ blends[o,g].solution_value for g in range_gas] for o in range_oil]
total_gas_prods = [sum(blend_values[o][g] for o in range_oil) for g in range_gas]
prods = list(zip(gas_names, total_gas_prods))
prods_bd = pd.DataFrame(prods)
Explanation: Step 5: Investigate the solution and then run an example analysis
Displaying the solution
First, get the KPIs values and store them in a pandas DataFrame.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def display_pie(pie_values, pie_labels, colors=None,title=''):
plt.axis("equal")
plt.pie(pie_values, labels=pie_labels, colors=colors, autopct="%1.1f%%")
plt.title(title)
plt.show()
display_pie( [kpnv[1] for kpnv in all_kpis], [kpnv[0] for kpnv in all_kpis],title='KPIs: Revenue - Oil Cost - Production Cost')
Explanation: Let's display some KPIs in pie charts using the Python package matplotlib.
End of explanation
display_pie(total_gas_prods, gas_names, colors=["green", "goldenrod", "lightGreen"],title='Gasoline Total Production')
Explanation: Production
End of explanation
sblends = [(gas_names[n], oil_names[o], round(blends[o,n].solution_value)) for n in range_gas for o in range_oil]
blends_bd = pd.DataFrame(sblends)
f, barplot = plt.subplots(1, figsize=(16,5))
bar_width = 0.1
offset = 0.12
rho = 0.7
# position of left-bar boundaries
bar_l = [o for o in range_oil]
mbar_w = 3*bar_width+2*max(0, offset-bar_width)
tick_pos = [b*rho + mbar_w/2.0 for b in bar_l]
colors = ['olive', 'lightgreen', 'cadetblue']
for i in range_oil:
barplot.bar([b*rho + (i*offset) for b in bar_l],
blend_values[i], width=bar_width, color=colors[i], label=oil_names[i])
plt.xticks(tick_pos, gas_names)
barplot.set_xlabel("gasolines")
barplot.set_ylabel("blend")
plt.legend(loc="upper right")
plt.title('Blend Repartition\n')
# Set a buffer around the edge
plt.xlim([0, max(tick_pos)+mbar_w +0.5])
plt.show()
Explanation: We see that the most produced gasoline type is by far regular.
Now, let's plot the breakdown of oil blend quantities per gasoline type.
We are using a multiple bar chart diagram, displaying all blend values for each couple of oil and gasoline type.
End of explanation
print("* value of blend[crude2, diesel] is %g" % blends[1,2])
Explanation: Notice the missing bar for (crude2, diesel) which is expected since blend[crude2, diesel] is zero in the solution.
We can check the solution value of blends for crude2 and diesel, remembering that crude2 has offset 1 and diesel has offset 2.
Note how the decision variable is automatically converted to a float here. This would raise an exception if called before submitting a solve, as no solution value would be present.
End of explanation |
6,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
Go back to the Index
</center>
Chapter 1
Step1: We start by importing several classes from the skymap module and setting a few constants that we will use in this example.
Step3: At it's core, skymap is just a wrapper around basemap. The core skymap class skymap.Skymap is just a subclass of basemap.Basemap. It keeps all the core functionality (and most of the quirks) of basemap.Basemap, but adds a few celestially oriented features.
Following the basemap.Basemap convention, the default projection for a skymap.Skymap is "Cylindrical Equidistant" ('cyl'), which you may commonly think of as Cartesian. Creating a basic map of the sky is as easy as creating a instance of the skymap.Skymap class. Once you have a Skymap, you can call any of the methods inherited from basemap API (see here for details).
Step4: The example above is not very impressive in the 'cyl' projection, but the power of basemap means that you can "$\sim$seamlessly" switch between projections. | Python Code:
# Basic notebook imports
%matplotlib inline
import matplotlib
import pylab as plt
import numpy as np
import healpy as hp
Explanation: <center>
Go back to the Index
</center>
Chapter 1: Skymap Base Class
In this chapter we introduce the skymap.Skymap base class and some of its features.
End of explanation
# Import skymap and some of its basic map classes
import skymap
Explanation: We start by importing several classes from the skymap module and setting a few constants that we will use in this example.
End of explanation
smap = skymap.Skymap()
def skymap_test(smap):
"""Some simple test cases."""
plt.gca()
# Draw some scatter points
smap.scatter([0,45,-30],[0,-45,-30],latlon=True)
# Draw a tissot (projected circle)
smap.tissot(-60,30,10,100,facecolor='none',edgecolor='b',lw=2)
# Draw a color mesh image (careful, basemap is quirky)
x = y = np.arange(30,60)
xx,yy = np.meshgrid(x,y)
z = xx*yy
smap.pcolormesh(xx,yy,data=z,cmap='gray_r',latlon=True)
skymap_test(smap)
Explanation: At its core, skymap is just a wrapper around basemap. The core skymap class skymap.Skymap is just a subclass of basemap.Basemap. It keeps all the core functionality (and most of the quirks) of basemap.Basemap, but adds a few celestially oriented features.
Following the basemap.Basemap convention, the default projection for a skymap.Skymap is "Cylindrical Equidistant" ('cyl'), which you may commonly think of as Cartesian. Creating a basic map of the sky is as easy as creating an instance of the skymap.Skymap class. Once you have a Skymap, you can call any of the methods inherited from the basemap API (see here for details).
End of explanation
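Because Skymap inherits from basemap.Basemap, the usual Basemap drawing helpers are available as well. A small illustrative sketch (my own addition) using the smap instance created above:
# Draw a graticule with inherited Basemap methods
smap.drawparallels(np.arange(-90, 91, 30), labels=[1, 0, 0, 0])
smap.drawmeridians(np.arange(0, 360, 60), labels=[0, 0, 0, 1])
plt.title('Default cylindrical projection with a graticule')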
fig,axes = plt.subplots(2,2,figsize=(14,8))
# A nice projection for plotting the visible sky
plt.sca(axes[0,0])
smap = skymap.Skymap(projection='ortho',lon_0=0, lat_0=0)
skymap_test(smap)
plt.title('Orthographic')
# A common equal area all-sky projection
plt.sca(axes[1,0])
smap = skymap.Skymap(projection='hammer',lon_0=0, lat_0=0)
skymap_test(smap)
plt.title("Hammer-Aitoff")
# Something wacky that I've never used
plt.sca(axes[0,1])
smap = skymap.Skymap(projection='sinu',lon_0=0, lat_0=0)
skymap_test(smap)
plt.title("Sinusoidal")
# My favorite projection for DES
plt.sca(axes[1,1])
smap = skymap.Skymap(projection='mbtfpq',lon_0=0, lat_0=0)
skymap_test(smap)
plt.title("McBryde-Thomas Flat Polar Quartic")
Explanation: The example above is not very impressive in the 'cyl' projection, but the power of basemap means that you can "$\sim$seamlessly" switch between projections.
End of explanation |
6,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarking your code
Step1: Using magic functions of Jupyter and timeit
https
Step3: Exercises
What is the fastest way to download 100 pages from index.hu?
How to calculate the factors of 1000 random integers effectively using factorize_naive function below? | Python Code:
def fun():
max(range(1000))
Explanation: Benchmarking your code
End of explanation
%%timeit
fun()
%%time
fun()
Explanation: Using magic functions of Jupyter and timeit
https://docs.python.org/3.5/library/timeit.html
https://ipython.org/ipython-doc/3/interactive/magics.html#magic-time
End of explanation
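Outside of IPython, the standard timeit module gives the same kind of measurement programmatically. A minimal sketch (my own addition; the trial counts are arbitrary):
import timeit
# Run fun() 1000 times per trial, repeat 5 trials, report the best per-call time
best = min(timeit.repeat(fun, number=1000, repeat=5)) / 1000
print("best per-call time: {:.6f} s".format(best))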
import requests
def get_page(url):
response = requests.request(url=url, method="GET")
return response
get_page("http://index.hu")
def factorize_naive(n):
"""A naive factorization method. Take integer 'n', return list of
factors."""
if n < 2:
return []
factors = []
p = 2
while True:
if n == 1:
return factors
r = n % p
if r == 0:
factors.append(p)
n = n // p
elif p * p >= n:
factors.append(n)
return factors
elif p > 2:
# Advance in steps of 2 over odd numbers
p += 2
else:
# If p == 2, get to 3
p += 1
assert False, "unreachable"
Explanation: Exercises
What is the fastest way to download 100 pages from index.hu?
How to calculate the factors of 1000 random integers effectively using factorize_naive function below?
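One possible approach to the second exercise, sketched under the assumption that factorize_naive is defined as above (multiprocessing start-up details vary between platforms and notebooks):
import random
from multiprocessing import Pool
numbers = [random.randint(2, 10**9) for _ in range(1000)]
if __name__ == '__main__':
    with Pool() as pool:
        all_factors = pool.map(factorize_naive, numbers)
    print(all_factors[:3])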
End of explanation |
6,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython.parallel
To start the cluster, you can use the notebook GUI or the command line: $ ipcluster start
Step1: Check a number of cores
Step2: Simple parallel summation
First the input array is initialized and distributed over the cluster
Step3: Parallel sum
Each engine computes the sum of its subset and sends the result back to the controller | Python Code:
from IPython import parallel
c=parallel.Client()
dview=c.direct_view()
dview.block=True
Explanation: IPython.parallel
To start the cluster, you can use the notebook GUI or the command line: $ ipcluster start
End of explanation
c.ids
Explanation: Check a number of cores
End of explanation
import numpy as np
x=np.arange(100)
dview.scatter('x',x)
print c[0]['x']
print c[1]['x']
print c[-1]['x']
Explanation: Simple parallel summation
First the input array is initialized and distributed over the cluster
End of explanation
dview.execute('import numpy as np; y=np.sum(x)')
ys=dview.gather('y')
total=np.sum(ys)
print total
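# Added note: the same total can be computed without explicit execute/gather, e.g. by
# mapping np.sum over per-engine chunks (a sketch reusing the client c and view dview above):
# total = sum(dview.map_sync(np.sum, np.array_split(x, len(c.ids))))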
Explanation: Parallel sum
Each engine computes the sum of its subset and sends the result back to the controller
End of explanation |
6,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
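A sketch of a valid entry for an ENUM property such as 4.5, assuming purely for illustration a model coupled through OASIS3-MCT (one of the choices listed above):
# Illustrative only -- use the choice that matches your model
DOC.set_value("OASIS3-MCT")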
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
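For a BOOLEAN property such as 5.2, the valid choices above are the bare values True and False rather than quoted strings; an illustrative entry (the particular choice is arbitrary):
# Illustrative only -- note the unquoted boolean
DOC.set_value(False)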
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
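Because 12.1 has cardinality 1.N, more than one of the listed choices may apply; a sketch assuming set_value is simply called once per selected choice, with "C" picked purely for illustration:
# Illustrative only -- presumably one set_value call per selected provision
DOC.set_value("C")
# further choices, if any, would presumably be recorded with additional set_value calls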
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
6,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression model—sound exposure level
This notebook explores and models the data collected from recordings of the natural acoustic environment over the urban-rural gradient near Innsbruck, Austria. The models are implemented as Bayesian models with the PyMC3 probabilistic programming library.
References
Step1: Plot settings
Step2: Variable definitions
Step3: Load data
Step4: sort data by site and then by visit
Step5: transform variables (mean center)
Step6: create sites variable for PyMC3 models
Step7: Model 0 - empty model
$$
\begin{align}
y_{ts} \sim \mathcal{N}(\alpha_s + \epsilon_t, \sigma_y^2) \\
\alpha_s \sim \mathcal{N}(M + \epsilon_s, \sigma_\alpha^2)
\end{align}
$$
Step8: Model 1—time and site predictors
$$
\begin{align}
\text{level 1} \\
y_{ts} \sim \mathcal{N}(\alpha_s + \beta_s T_t, \sigma_y^2) \\
\text{level 2} \\
\alpha_s \sim \mathcal{N}(\gamma_\alpha + \gamma_{\alpha s} L_s, \sigma_\alpha^2) \\
\beta_s \sim \mathcal{N}(\gamma_\beta + \gamma_{\beta s} L_s, \sigma_\beta^2)
\end{align}
$$
Step9: Model 2—environmental predictors
$$
\begin{align}
\text{level 1} \\
y_{ts} \sim \mathcal{N}(\alpha_s + \beta_s T_t, \sigma_y^2) \\
\text{level 2} \\
\alpha_s \sim \mathcal{N}(\gamma_\alpha + \gamma_{\alpha s} L_s, \sigma_\alpha^2) \\
\beta_s \sim \mathcal{N}(\gamma_\beta + \gamma_{\beta s} L_s, \sigma_\beta^2)
\end{align}
$$ | Python Code:
import warnings
warnings.filterwarnings('ignore')
import pandas
import numpy
from os import path
%matplotlib inline
from matplotlib import pyplot
from matplotlib.patches import Rectangle
import seaborn
from pymc3 import glm, Model, NUTS, sample, stats, \
forestplot, traceplot, plot_posterior, summary, \
Normal, Uniform, Deterministic, StudentT
from pymc3.backends import SQLite
Explanation: Regression model—sound exposure level
This notebook explores and models the data collected from recordings of the natural acoustic environment over the urban-rural gradient near Innsbruck, Austria. The models are implemented as Bayesian models with the PyMC3 probabilistic programming library.
References:<br />
https://github.com/fonnesbeck/multilevel_modeling<br />
Gelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models (1st ed.). Cambridge University Press.
Import statements
End of explanation
from matplotlib import rcParams
rcParams['font.sans-serif']
rcParams['font.sans-serif'] = ['Helvetica',
'Arial',
'Bitstream Vera Sans',
'DejaVu Sans',
'Lucida Grande',
'Verdana',
'Geneva',
'Lucid',
'Avant Garde',
'sans-serif']
Explanation: Plot settings
End of explanation
data_filepath = "/Users/Jake/OneDrive/Documents/alpine soundscapes/data/dataset.csv"
trace_output_path = "/Users/Jake/OneDrive/Documents/alpine soundscapes/data/model traces/sel"
seaborn_blue = seaborn.color_palette()[0]
Explanation: Variable definitions
End of explanation
data = pandas.read_csv(data_filepath)
data = data.loc[data.site<=30]
Explanation: Load data
End of explanation
data_sorted = data.sort_values(by=['site', 'sound']).reset_index(drop=True)
Explanation: sort data by site and then by visit
End of explanation
column_list = ['sel', 'sel_anthrophony', 'sel_biophony', 'biophony', 'week',
'building_50m', 'pavement_50m', 'forest_50m', 'field_50m',
'building_100m', 'pavement_100m', 'forest_100m', 'field_100m',
'building_200m', 'pavement_200m', 'forest_200m', 'field_200m',
'building_500m', 'pavement_500m', 'forest_500m', 'field_500m',
'd2n_50m', 'd2n_100m', 'd2n_200m', 'd2n_500m',
'temperature', 'wind_speed', 'pressure', 'bus_stop',
'construction', 'crossing', 'cycleway', 'elevator', 'escape', 'footway',
'living_street', 'motorway', 'motorway_link', 'path', 'pedestrian',
'platform', 'primary_road', 'primary_link', 'proposed', 'residential',
'rest_area', 'secondary', 'secondary_link', 'service', 'services',
'steps', 'tertiary', 'tertiary_link', 'track', 'unclassified', 'combo']
data_centered = data_sorted.copy()
for column in column_list:
data_centered[column] = data_sorted[column] - data_sorted[column].mean()
Explanation: transform variables (mean center)
End of explanation
sites = numpy.copy(data_sorted.site.values) - 1
Explanation: create sites variable for PyMC3 models
End of explanation
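As a quick sanity check (a minimal sketch), the zero-based sites index should line up with the number of site-level parameters the models below allocate via len(set(sites)):
# sites should run 0..n_sites-1 and match the number of distinct sites
assert sites.min() == 0
assert len(set(sites)) == sites.max() + 1
print(len(set(sites)), "sites")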
with Model() as model0:
# Priors
mu_grand = Normal('mu_grand', mu=0., tau=0.0001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
# Random intercepts
a = Normal('a', mu=mu_grand, tau=tau_a, shape=len(set(sites)))
# Model error
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# Expected value
y_hat = a[sites]
# Data likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# sample model
backend = SQLite(path.join(trace_output_path, "model0.sqlite"))
model0_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
fig, ax = pyplot.subplots()
# organize results
model0_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()}).set_index('site')
model0_data['forest_200m'] = data.groupby('site')['forest_200m'].mean()
model0_data['quantiles'] = [stats.quantiles(model0_samples.a[:5000, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model0_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
ax.plot([x, x], [row['quantiles'][2.5], row['quantiles'][97.5]], color='black', linewidth=0.5)
ax.plot([x, x], [row['quantiles'][25], row['quantiles'][75]], color='black', linewidth=1)
ax.scatter([x], [row['quantiles'][50]], color='black', marker='o')
# format plot
l1 = ax.set_xlim([0, 100])
xl = ax.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax.set_ylabel("SEL (difference from grand mean)")
fig, ax = pyplot.subplots()
# organize results
model0_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()}).set_index('site')
model0_data['d2n_200m'] = data.groupby('site')['d2n_200m'].mean()
model0_data['quantiles'] = [stats.quantiles(model0_samples.a[:5000, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model0_data.sort_values(by='d2n_200m').iterrows():
x = row['d2n_200m']
ax.plot([x, x], [row['quantiles'][2.5], row['quantiles'][97.5]], color='black', linewidth=0.5)
ax.plot([x, x], [row['quantiles'][25], row['quantiles'][75]], color='black', linewidth=1)
ax.scatter([x], [row['quantiles'][50]], color='black', marker='o')
# format plot
l1 = ax.set_xlim([0, 100])
xl = ax.set_xlabel("d2n within 200 meters (percent area)")
yl = ax.set_ylabel("SEL (difference from grand mean)")
Explanation: Model 0 - empty model
$$
\begin{align}
y_{ts} \sim \mathcal{N}(\alpha_s + \epsilon_t, \sigma_y^2) \\
\alpha_s \sim \mathcal{N}(M + \epsilon_s, \sigma_\alpha^2)
\end{align}
$$
End of explanation
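A minimal convergence check for Model 0 using the diagnostics already imported above; depending on the PyMC3 version the keyword may be varnames or var_names:
# Quick look at the Model 0 hyperparameters (trace mixing and posterior summaries)
traceplot(model0_samples, varnames=['mu_grand', 'sigma_a', 'sigma_y'])
summary(model0_samples, varnames=['mu_grand', 'sigma_a', 'sigma_y'])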
site_predictors = [
# 'building_50m', 'pavement_50m', 'forest_50m', 'field_50m',
# 'building_100m', 'pavement_100m', 'forest_100m', 'field_100m',
# 'building_200m', 'pavement_200m', 'forest_200m', 'field_200m',
# 'building_500m', 'pavement_500m', 'forest_500m', 'field_500m',
'd2n_50m', 'd2n_100m', 'd2n_200m', 'd2n_500m',
]
for predictor in site_predictors:
with Model() as model_1:
# intercept
g_a = Normal('g_a', mu=0, tau=0.001)
g_as = Normal('g_as', mu=0, tau=0.001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
mu_a = g_a + (g_as * data_centered.groupby('site')[predictor].mean())
a = Normal('a', mu=mu_a, tau=tau_a, shape=len(set(sites)))
# slope
g_b = Normal('g_b', mu=0, tau=0.001)
g_bs = Normal('g_bs', mu=0, tau=0.001)
sigma_b = Uniform('sigma_b', lower=0, upper=100)
tau_b = sigma_b**-2
mu_b = g_b + (g_bs * data_centered.groupby('site')[predictor].mean())
b = Normal('b', mu=mu_b, tau=tau_b, shape=len(set(sites)))
# model error (data-level)
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# expected values
y_hat = a[sites] + (b[sites] * data_centered.week)
# likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# simulated
#y_sim = Normal('y_sim', mu=y_hat, tau=tau_y, shape=y_hat.tag.test_value.shape)
# sample model
backend = SQLite(path.join(trace_output_path, "model1_{}.sqlite".format(predictor)))
model_1_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
fig = pyplot.figure()
fig.set_figwidth(6.85)
fig.set_figheight(6.85/2)
ax_a = pyplot.subplot2grid((1, 2), (0, 0), rowspan=1, colspan=1)
ax_b = pyplot.subplot2grid((1, 2), (0, 1), rowspan=1, colspan=1, sharex=ax_a)
fig.subplots_adjust(left=0, bottom=0, right=1, top=1)
# organize results
model_1_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()})
model_1_data['forest_200m'] = data_sorted.forest_200m.unique()
model_1_data['quantiles_a'] = [stats.quantiles(model_1_samples['a'][:5000][:, i]) for i in range(len(set(sites)))]
model_1_data['quantiles_b'] = [stats.quantiles(model_1_samples['b'][:5000][:, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model_1_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
#ax_a.plot([x, x], [row['quantiles_a'][2.5], row['quantiles_a'][97.5]], color='black', linewidth=0.5)
ax_a.plot([x, x], [row['quantiles_a'][25], row['quantiles_a'][75]], color='black', linewidth=1)
ax_a.scatter([x], [row['quantiles_a'][50]], color='black', marker='o')
# format plot
l1 = ax_a.set_xlim([0, 100])
xl = ax_a.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax_a.set_ylabel("sel (decibel difference from grand mean)")
# plot quantiles
for i, row in model_1_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
#ax_b.plot([x, x], [row['quantiles_b'][2.5], row['quantiles_b'][97.5]], color='black', linewidth=0.5)
ax_b.plot([x, x], [row['quantiles_b'][25], row['quantiles_b'][75]], color='black', linewidth=1)
ax_b.scatter([x], [row['quantiles_b'][50]], color='black', marker='o')
# format plot
l1 = ax_b.set_xlim([0, 100])
l2 = ax_b.set_ylim((-2, 2))
xl = ax_b.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax_b.set_ylabel("rate of change of sel (dB/week)")
fig = pyplot.figure()
fig.set_figwidth(6.85)
fig.set_figheight(6.85/2)
ax_a = pyplot.subplot2grid((1, 2), (0, 0), rowspan=1, colspan=1)
ax_b = pyplot.subplot2grid((1, 2), (0, 1), rowspan=1, colspan=1, sharex=ax_a)
fig.subplots_adjust(left=0, bottom=0, right=1, top=1)
# organize results
model_1_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()})
model_1_data['d2n_500m'] = data_sorted.d2n_500m.unique()
model_1_data['quantiles_a'] = [stats.quantiles(model_1_samples['a'][:5000][:, i]) for i in range(len(set(sites)))]
model_1_data['quantiles_b'] = [stats.quantiles(model_1_samples['b'][:5000][:, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model_1_data.sort_values(by='d2n_500m').iterrows():
x = row['d2n_500m']
#ax_a.plot([x, x], [row['quantiles_a'][2.5], row['quantiles_a'][97.5]], color='black', linewidth=0.5)
ax_a.plot([x, x], [row['quantiles_a'][25], row['quantiles_a'][75]], color='black', linewidth=1)
ax_a.scatter([x], [row['quantiles_a'][50]], color='black', marker='o')
# format plot
l1 = ax_a.set_xlim([0, 0.6])
xl = ax_a.set_xlabel("d2n within 500 meters (percent area)")
yl = ax_a.set_ylabel("sel (decibel difference from grand mean)")
# plot quantiles
for i, row in model_1_data.sort_values(by='d2n_500m').iterrows():
x = row['d2n_500m']
#ax_b.plot([x, x], [row['quantiles_b'][2.5], row['quantiles_b'][97.5]], color='black', linewidth=0.5)
ax_b.plot([x, x], [row['quantiles_b'][25], row['quantiles_b'][75]], color='black', linewidth=1)
ax_b.scatter([x], [row['quantiles_b'][50]], color='black', marker='o')
# format plot
l1 = ax_b.set_xlim([0, 0.6])
l2 = ax_b.set_ylim((-2, 2))
xl = ax_b.set_xlabel("d2n within 500 meters (percent area)")
yl = ax_b.set_ylabel("rate of change of sel (dB/week)")
Explanation: Model 1—time and site predictors
$$
\begin{align}
\text{level 1} \\
y_{ts} \sim \mathcal{N}(\alpha_s + \beta_s T_t, \sigma_y^2) \\
\text{level 2} \\
\alpha_s \sim \mathcal{N}(\gamma_\alpha + \gamma_{\alpha s} L_s, \sigma_\alpha^2) \\
\beta_s \sim \mathcal{N}(\gamma_\beta + \gamma_{\beta s} L_s, \sigma_\beta^2)
\end{align}
$$
End of explanation
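The group-level coefficients carry the question of whether the site predictor shifts the intercepts (g_as) and the weekly trends (g_bs); a minimal sketch for the trace of the last predictor fitted in the loop, where again the keyword may be varnames or var_names depending on the PyMC3 version:
# Posterior summaries and interval plot for the group-level (site predictor) effects
summary(model_1_samples, varnames=['g_a', 'g_as', 'g_b', 'g_bs'])
forestplot(model_1_samples, varnames=['g_as', 'g_bs'])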
measurement_predictors = [
'temperature', 'wind_speed', 'precipitation', 'pressure',
]
for predictor in measurement_predictors:
with Model() as model2a:
# intercept
g_a = Normal('g_a', mu=0, tau=0.001)
g_as = Normal('g_as', mu=0, tau=0.001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
mu_a = g_a + (g_as * data_centered.groupby('site')['forest_200m'].mean())
a = Normal('a', mu=mu_a, tau=tau_a, shape=len(set(sites)))
# time slope
g_b = Normal('g_b', mu=0, tau=0.001)
g_bs = Normal('g_bs', mu=0, tau=0.001)
sigma_b = Uniform('sigma_b', lower=0, upper=100)
tau_b = sigma_b**-2
mu_b = g_b + (g_bs * data_centered.groupby('site')['forest_200m'].mean())
b = Normal('b', mu=mu_b, tau=tau_b, shape=len(set(sites)))
# temp slope
#g_c = Normal('g_c', mu=0, tau=0.001)
#g_cs = Normal('g_cs', mu=0, tau=0.001)
#sigma_c = Uniform('sigma_c', lower=0, upper=100)
#tau_c = sigma_c**-2
#mu_c = g_c + (g_cs * data_centered.groupby('site')['forest_200m'].mean())
#c = Normal('c', mu=mu_c, tau=tau_c, shape=len(set(sites)))
c = Uniform('c', lower=-100, upper=100)
# model error (data-level)
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# expected values
y_hat = a[sites] + (b[sites] * data_centered.week) + (c * data_centered[predictor])
# likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# simulated
#y_sim = Normal('y_sim', mu=y_hat, tau=tau_y, shape=y_hat.tag.test_value.shape)
# sample model
backend = SQLite(path.join(trace_output_path, "model2a_{0}.sqlite".format(predictor)))
model_2_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
measurement_predictors = [
'temperature', 'wind_speed', 'precipitation', 'pressure',
]
for predictor in measurement_predictors:
with Model() as model2b:
# intercept
g_a = Normal('g_a', mu=0, tau=0.001)
g_as = Normal('g_as', mu=0, tau=0.001)
sigma_a = Uniform('sigma_a', lower=0, upper=100)
tau_a = sigma_a**-2
mu_a = g_a + (g_as * data_centered.groupby('site')['forest_200m'].mean())
a = Normal('a', mu=mu_a, tau=tau_a, shape=len(set(sites)))
# time slope
g_b = Normal('g_b', mu=0, tau=0.001)
g_bs = Normal('g_bs', mu=0, tau=0.001)
sigma_b = Uniform('sigma_b', lower=0, upper=100)
tau_b = sigma_b**-2
mu_b = g_b + (g_bs * data_centered.groupby('site')['forest_200m'].mean())
b = Normal('b', mu=mu_b, tau=tau_b, shape=len(set(sites)))
# predictor slope
g_c = Normal('g_c', mu=0, tau=0.001)
g_cs = Normal('g_cs', mu=0, tau=0.001)
sigma_c = Uniform('sigma_c', lower=0, upper=100)
tau_c = sigma_c**-2
mu_c = g_c + (g_cs * data_centered.groupby('site')['forest_200m'].mean())
c = Normal('c', mu=mu_c, tau=tau_c, shape=len(set(sites)))
# model error (data-level)
sigma_y = Uniform('sigma_y', lower=0, upper=100)
tau_y = sigma_y**-2
# expected values
y_hat = a[sites] + (b[sites] * data_centered.week) + (c[sites] * data_centered[predictor])
# likelihood
y_like = Normal('y_like', mu=y_hat, tau=tau_y, observed=data_centered.sel)
# simulated
#y_sim = Normal('y_sim', mu=y_hat, tau=tau_y, shape=y_hat.tag.test_value.shape)
# sample model
backend = SQLite(path.join(trace_output_path, "model2b_{0}.sqlite".format(predictor)))
model_2_samples = sample(draws=10000, step=NUTS(), random_seed=1, trace=backend)
fig, ax = pyplot.subplots()
# organize results
model_2_data = pandas.DataFrame({'site': data_sorted.site.unique(),
'site_name': data_sorted.site_name.unique()})
model_2_data['forest_200m'] = data_sorted.forest_200m.unique()
model_2_data['quantiles'] = [stats.quantiles(model_2_samples['c'][:1000][:, i]) for i in range(len(set(sites)))]
# plot quantiles
for i, row in model_2_data.sort_values(by='forest_200m').iterrows():
x = row['forest_200m']
ax.plot([x, x], [row['quantiles'][2.5], row['quantiles'][97.5]], color='black', linewidth=0.5)
ax.plot([x, x], [row['quantiles'][25], row['quantiles'][75]], color='black', linewidth=1)
ax.scatter([x], [row['quantiles'][50]], color='black', marker='o')
# format plot
l1 = ax.set_xlim([0, 100])
xl = ax.set_xlabel("forest land cover within 200 meters (percent area)")
yl = ax.set_ylabel("percent biophony (difference from grand mean)")
Explanation: Model 2—environmental predictors
$$
\begin{align}
\text{level 1} \\
y_{ts} \sim \mathcal{N}(\alpha_s + \beta_s T_t, \sigma_y^2) \\
\text{level 2} \\
\alpha_s \sim \mathcal{N}(\gamma_\alpha + \gamma_{\alpha s} L_s, \sigma_\alpha^2) \\
\beta_s \sim \mathcal{N}(\gamma_\beta + \gamma_{\beta s} L_s, \sigma_\beta^2)
\end{align}
$$
End of explanation |
6,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Everyone's favorite nerdy comic, XKCD, ranked colors by best tasting. I thought I would use the WTB dataset to compare and see if the data agrees.
Step1: Let's add a color column.
Step2: Now group by the new color column, get the mean, and sort the values high to low.
Step3: There we have it. Blue is the best tasting color.
But brown is awfully close. I wonder how the ranges compare. Let's take a look at a histogram. | Python Code:
# Import libraries
import numpy as np
import pandas as pd
# Import the data
import WTBLoad
wtb = WTBLoad.load_frame()
pink = ["watermelon", "cranberry"]
red = ["cherry","apple","raspberry","strawberry", "rose hips", "hibiscus",'rhubarb', "red wine"]
blue = ["blueberry","juniper berries"]
green = ["green tea","mint","lemon grass",'cucumber','basil']
white = ["pear", "elderflower", "ginger", "coconut","piña colada","vanilla","white wine"]
brown = [ "chai", "chicory", "coriander", "cardamom", "seeds of paradise", "cinnamon", "chocolate", "peanut butter", "hazelnut","pecan","bacon","bourbon","whiskey","coffee","oak","rye","maple"]
orange = ["apricot", "peach", "grapefruit","orange peel", "pumpkin","sweet potato"]
yellow = ["chamomile","lemon peel"]
purple = [ "plum", "lavender", "port","blackberry"]
black = [ "anise", 'peppercorn', 'lemon pepper', "smoke"]
additionsColors = {"pink": pink,"red": red,"blue": blue,"green": green,"white": white,"brown": brown,"orange": orange,"yellow": yellow,"purple": purple, "black": black}
# Great. Now we have a mapping from color to addition, but we really need it the other way around.
additionToColor = {}
for color in additionsColors:
for addition in additionsColors[color]:
additionToColor[addition] = color
print(additionToColor['watermelon'])
Explanation: Everyone's favorite nerdy comic, XKCD, ranked colors by best tasting. I thought I would use the WTB dataset to compare and see if the data agrees.
End of explanation
def addcolor(addition):
return additionToColor[addition]
wtb['color'] = np.vectorize(addcolor)(wtb['addition'])
Explanation: Let's add a color column.
End of explanation
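As a small editor's aside (not part of the original notebook), the same mapping can be done without numpy by using pandas' Series.map with the dictionary, which is usually the more idiomatic route:
# equivalent, more idiomatic alternative to np.vectorize above
wtb['color'] = wtb['addition'].map(additionToColor)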
wtb.groupby(by='color').mean().sort_values('vote',ascending=False)
Explanation: Now group by the new color column, get the mean, and sort the values high to low.
End of explanation
%matplotlib inline
wtb.groupby(by='color').boxplot(subplots=False,rot=45)
Explanation: There we have it. Blue is the best tasting color.
But brown is awfully close. I wonder how the ranges compare. Let's take a look at a box plot.
End of explanation |
6,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Universal Sentence Encoder
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: For more detailed information about installing Tensorflow, see https
Step3: Semantic textual similarity task example
The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of their encodings.
Step4: Visualizing similarity
Here we show the similarity as a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentences i and j.
Step5: Evaluation
Step7: Evaluating sentence embeddings | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
%%capture
!pip3 install seaborn
Explanation: Universal Sentence Encoder
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
<td> <a href="https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder%2F4%20OR%20google%2Funiversal-sentence-encoder-large%2F5"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub models</a> </td>
</table>
This notebook shows how to access the Universal Sentence Encoder and use it for sentence similarity and sentence classification tasks.
The Universal Sentence Encoder makes getting sentence-level embeddings as easy as it has historically been to look up embeddings for individual words. The sentence embeddings can then be used not only to compute sentence-level semantic similarity, but also to improve performance on downstream classification tasks with less supervised training data.
Setup
This section sets up the environment for access to the Universal Sentence Encoder on TF Hub and provides examples of applying the encoder to words, sentences, and paragraphs.
End of explanation
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
model = hub.load(module_url)
print ("module %s loaded" % module_url)
def embed(input):
return model(input)
#@title Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
message_embeddings = embed(messages)
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
Explanation: For more detailed information about installing Tensorflow, please visit https://www.tensorflow.org/install/.
End of explanation
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(messages_):
message_embeddings_ = embed(messages_)
plot_similarity(messages_, message_embeddings_, 90)
Explanation: Semantic textual similarity task example
The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of their encodings.
End of explanation
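Editor's sketch (not in the original colab): the claim above can be checked directly for a single pair of sentences — their similarity is just the inner product of the two embeddings.
# compute the inner product of two sentence embeddings directly
pair_embeddings = embed(["How old are you?", "what is your age?"])
print(np.inner(pair_embeddings[0], pair_embeddings[1]))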
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
run_and_plot(messages)
Explanation: Visualizing similarity
Here we show the similarity as a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentences i and j.
End of explanation
import pandas
import scipy
import math
import csv
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = pandas.read_table(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"),
error_bad_lines=False,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
sts_test = pandas.read_table(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"),
error_bad_lines=False,
quoting=csv.QUOTE_NONE,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
# cleanup some NaN values in sts_dev
sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]]
Explanation: Evaluation: STS (Semantic Textual Similarity) Benchmark
The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs, and Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements.
Download data
End of explanation
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
def run_sts_benchmark(batch):
sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi
"""Returns the similarity scores"""
return scores
dev_scores = sts_data['sim'].tolist()
scores = []
for batch in np.array_split(sts_data, 10):
scores.extend(run_sts_benchmark(batch))
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
Explanation: Evaluate sentence embeddings
End of explanation |
6,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
numbers on a plane
Numbers can be a lot more interesting than just a value if you're just willing to shift your perspective a bit.
integers
When we are dealing with integers we are dealing with all the whole numbers, zero and all the negative whole numbers. In math this set of numbers is often denoted with the symbol $\mathbb{Z}$. This is a countable infinite set and even though the numbers are a bit basic we can try to get some more insight into the structure of numbers.
squares
If we take a number and multiply it with itself we get a square number. These are called square because we can easily plot them as squares in a plot.
Step1: However, what happens when we have a non-square number such as $5$? We can't easily plot this as two equal lengths, we'll have to turn it into a rectangle of $1 \times 5$ or $5 \times 1$. | Python Code:
import matplotlib.pyplot as plt
# NOTE: `pu` (used below for axis setup) is a small plotting-utilities helper
# defined earlier in the original notebook and is assumed to be available here.
def plot_rect(ax, p, fmt='b'):
x, y = p
ax.plot([0, x], [y, y], fmt) # horizontal line
ax.plot([x, x], [0, y], fmt) # vertical line
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 4), ylim=(-1, 4))
for x in [1,2,3]: plot_rect(axes, (x, x))
Explanation: numbers on a plane
Numbers can be a lot more interesting than just a value if you're just willing to shift your perspective a bit.
integers
When we are dealing with integers we are dealing with all the whole numbers, zero and all the negative whole numbers. In math this set of numbers is often denoted with the symbol $\mathbb{Z}$. This is a countable infinite set and even though the numbers are a bit basic we can try to get some more insight into the structure of numbers.
squares
If we take a number and multiply it with itself we get a square number. These are called square because we can easily plot them as squares in a plot.
End of explanation
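Editor's trivial aside, just to make the definition above concrete in code:
print([n * n for n in range(1, 6)])  # the first few squares: [1, 4, 9, 16, 25]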
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 6), ylim=(-1, 6))
for x, y in [(1, 5), (5, 1)]:
plot_rect(axes, (x, y))
Explanation: However, what happens when we have a non-square number such as $5$? We can't easily plot this as two equal lengths, we'll have to turn it into a rectangle of $1 \times 5$ or $5 \times 1$.
End of explanation |
6,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC. All Rights Reserved.
Step1: RLDS
Step2: Import Modules
Step3: Load dataset
We can load an RLDS dataset using TFDS. See the available datasets in the TFDS catalog and look for those with an episodic structure.
For example
Step4: The content of the dataset complies with the format described in the RLDS README
Step5: Basic dataset transformations
RLDS datasets can be manipulated with tf.data.Dataset functions, but the RLDS library provides building blocks to perform more complex transformations that prepare the data to be consumed by an algorithm.
In the following sections, we show how to use some of the standard tf.data functions and some of the RLDS transformations.
If you are not familiar with the tf.data.Dataset API, we recommend you to
take a look first at the tf.data documentation here. Methods such as map, flat_map, batch and filter are worth knowing about. RLDS provides a number of helper functions built with the use of tf.data pipelines, but you will still need to use tf.data directly to glue them together.
See this colab for performance tips when building tf.data.Dataset pipelines for RLDS datasets.
tf.data operations
These are a couple of examples of how to use standard tf.data operations with RLDS datasets.
This first example, uses take and skip to skip one episode and to take the next 5. The result is a dataset of 5 episodes.
Step6: This other example converts a dataset of episodes into a flat dataset of steps using flat_map.
The result, steps_dataset, is a sequence of all episodes' steps from the original dataset.
Step7: Zeros like
RLDS offers functions to create empty steps with the same shape and dtype of the original step (and datasets that contain only one of these zeros-like steps).
Step8: Alignment of the step fields
RLDS retrieves the steps with the current observation, the action applied to this observation, and the reward obtained after applying this action. However, algorithms may consume them with a different alignment. For example, an algorithm may need a step with the next observation instead of the current one. The RLDS library supports an alignment transformation to shift steps fields. Here is an example.
Step9: Conditional truncation
This operation allows truncating a dataset after a condition (that is defined by the user) has been met.
Step10: Concatenate
The RLDS library provides a custom concatenate function that supports concatenating datasets even if the dataset elements do not contain the same fields. The only condition is that the elements of the datasets are dictionaries (like the steps or the episodes).
In this example, we just add a zeros-like step at the end of each episode.
Step11: When the elements of the two datasets don't contain the same fields, the new dataset contains the union of the fields and it adds the extra fields with a zero-like value.
For example, let's change the example above to indicate that the empty step we added is not a real step, but only padding. To do so, instead of concatenating a zeros-like step, we concatenate a dataset with one step that contains only the following
Step12: If we only want to pad episodes that end in a terminal state, we can use concat_if_terminal.
Step13: Batch dataset with overlap
RLDS provides a flexible batching method that allows to configure batches as sliding windows, allowing, for example, to create batches that overlap.
Step14: Statistics
The RLDS library includes optimized helpers to calculate statistics across the dataset. To avoid making assumptions on the data, many of them receive as an extra parameter a function to select the data of the step that we want to use in the stats.
Sum of step fields
sum_nested_steps allows users to sum data across all the steps in the dataset. It gets an extra parameter, get_data, that enables the user to define which data of the step is going to be added. In this example, we take the action and reward of all steps.
Step15: Total reward per episode
sum_dataset can be used to efficiently sum the values of one or multiple fields of a step. For example, to calculate the total reward per episode.
Step16: Episode length
episode_length computes the number of steps in a given dataset.
Step17: Mean and std
mean_and_std computes the mean and standard deviation across all the steps for any field (or nested field) of the step.
Besides the dataset, users need to provide a function get_data that returns
two values
Step18: To customize this function (for example, when we change the alignment, or when we want only the stats of certain fields), we can provide our own implementation of get_data.
Step19: RL examples
This section includes examples of more complex operations based on the RLDS library.
Filter episodes based on average reward
In this example, we first add the average reward to each episode, and then filter out episodes with an average reward higher than 5.
Step20: Truncate episodes to a given length
Truncates all episodes to a maximum length of 20
Step21: Convert Steps into SARS Transitions
This example illustrates how to convert this dataset from an Episode of Steps to an Episode of Transitions.
Step22: Convert Steps into N-Step Transitions
Similar to the example above, but using N-step transitions.
Step23: Change alignment from SAR to ARS
The RLDS step contains the current observation, the action applied and the reward obtained. In ARS alignment, a step contains the action applied, the reward, and observation obtained after applying the action.
This example shows how to transform the RLDS dataset to ARS format.
Step24: Conditional truncation
This example illustrates how to truncate an episode after a condition.
Step25: Normalization
Use the calculation of the mean and the std to apply normalization to the observations. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC. All Rights Reserved.
End of explanation
!pip install rlds[tensorflow]
!pip install tfds-nightly
Explanation: RLDS: Tutorial
This colab provides an overview of how RLDS can be used to load and manipulate datasets.
If you're looking for more complex examples, see the RLDS examples Notebook in Google Colab. Or this one for performance tips.
<table class="tfo-notebook-buttons" align="left">
<td>
<a href="https://colab.research.google.com/github/google-research/rlds/blob/main/rlds/examples/rlds_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Run In Google Colab"/></a>
</td>
</table>
Install module
End of explanation
from typing import Any, Dict, Union, NamedTuple
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
import rlds
Explanation: Import Modules
End of explanation
dataset_name = 'd4rl_mujoco_walker2d' # @param { isTemplate: true}
num_episodes_to_load = 10 # @param { isTemplate: true}
dataset = tfds.load(dataset_name, split = f'train[:{num_episodes_to_load}]')
Explanation: Load dataset
We can load an RLDS dataset using TFDS. See the available datasets in the TFDS catalog and look for those with an episodic structure.
For example:
* D4RL Datasets
* RL Unplugged Datasets
* RLDS Datasets
End of explanation
print(dataset.element_spec)
Explanation: The content of the dataset complies with the format described in the RLDS README: an outer tf.data.Dataset of episodes, each of them containing episode metadata and a tf.data.Dataset of steps.
For example, let's inspect the first episode of the dataset that we have just loaded.
End of explanation
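Editor's sketch (not from the original colab): beyond the spec printed above, the first episode and its first step can be pulled out directly by iterating the nested datasets.
# grab the first episode and the first step of that episode
first_episode = next(iter(dataset))
first_step = next(iter(first_episode[rlds.STEPS]))
print(list(first_step.keys()))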
shortened_dataset = dataset.skip(1).take(5)
Explanation: Basic dataset transformations
RLDS datasets can be manipulated with tf.data.Dataset functions, but the RLDS library provides building blocks to perform more complex transformations that prepare the data to be consumed by an algorithm.
In the following sections, we show how to use some of the standard tf.data functions and some of the RLDS transformations.
If you are not familiar with the tf.data.Dataset API, we recommend you to
take a look first at the tf.data documentation here. Methods such as map, flat_map, batch and filter are worth knowing about. RLDS provides a number of helper functions built with the use of tf.data pipelines, but you will still need to use tf.data directly to glue them together.
See this colab for performance tips when building tf.data.Dataset pipelines for RLDS datasets.
tf.data operations
These are a couple of examples of how to use standard tf.data operations with RLDS datasets.
This first example, uses take and skip to skip one episode and to take the next 5. The result is a dataset of 5 episodes.
End of explanation
dataset_steps = dataset.flat_map(lambda episode: episode[rlds.STEPS])
print(dataset_steps.element_spec)
Explanation: This other example converts a dataset of episodes into a flat dataset of steps using flat_map.
The result, steps_dataset, is a sequence of all episodes' steps from the original dataset.
End of explanation
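Editor's follow-up sketch: after flat_map the cardinality is unknown, so counting the individual steps can be done with a reduce over the flattened dataset.
# count the steps in the flattened dataset
total_steps = dataset_steps.reduce(0, lambda count, _: count + 1)
print(int(total_steps))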
# Creates a step with all fields initialized to zeros.
zero_step = rlds.transformations.zeros_from_spec(
dataset.element_spec[rlds.STEPS].element_spec)
# Creates an episode with steps and the episode metadata initialized to zeros.
zero_episode = rlds.build_episode(
steps=rlds.transformations.zero_dataset_like(
dataset.element_spec[rlds.STEPS]),
metadata=rlds.transformations.zeros_from_spec({
k: dataset.element_spec[k]
for k in dataset.element_spec.keys()
if k != rlds.STEPS
}))
zero_episode
Explanation: Zeros like
RLDS offers functions to create empty steps with the same shape and dtype of the original step (and datasets that contain only one of these zeros-like steps).
End of explanation
# Uses `shift_keys` to shift observations 2 steps backwards in an episode.
def shift_episode(episode):
episode[rlds.STEPS] = rlds.transformations.alignment.shift_keys(
episode[rlds.STEPS], [rlds.OBSERVATION], -2)
return episode
# Shifts observations 2 steps backwards in all episodes.
shifted_dataset = dataset.map(shift_episode)
Explanation: Alignment of the step fields
RLDS retrieves the steps with the current observation, the action applied to this observation, and the reward obtained after applying this action. However, algorithms may consume them with a different alignment. For example, an algorithm may need a step with the next observation instead of the current one. The RLDS library supports an alignment transformation to shift steps fields. Here is an example.
End of explanation
# Defines a condition function.
def condition(step):
return step[rlds.REWARD] > 5.
# Truncates dataset after the first step with a reward higher than 5.
truncated_dataset = dataset.map(
lambda episode: rlds.transformations.truncate_after_condition(
episode[rlds.STEPS], condition))
Explanation: Conditional truncation
This operation allows truncating a dataset after a condition (that is defined by the user) has been met.
End of explanation
def concatenate_episode(episode):
episode[rlds.STEPS] = rlds.transformations.concatenate(
episode[rlds.STEPS],
rlds.transformations.zero_dataset_like(
dataset.element_spec[rlds.STEPS]))
return episode
# Concatenates the existing dataset with a zeros-like dataset.
zero_concatenate_dataset = dataset.map(concatenate_episode)
Explanation: Concatenate
The RLDS library provides a custom concatenate function that supports concatenating datasets even if the dataset elements do not contain the same fields. The only condition is that the elements of the datasets are dictionaries (like the steps or the episodes).
In this example, we just add a zeros-like step at the end of each episode.
End of explanation
def concatenate_episode(episode):
step_with_padding = tf.data.Dataset.from_tensors({'is_padding': [True]})
episode[rlds.STEPS] = rlds.transformations.concatenate(
episode[rlds.STEPS], step_with_padding)
return episode
# Adds field `is_padding` in the existing dataset.
zero_concatenate_dataset = dataset.map(concatenate_episode)
Explanation: When the elements of the two datasets don't contain the same fields, the new dataset contains the union of the fields and it adds the extra fields with a zero-like value.
For example, let's change the example above to indicate that the empty step we added is not a real step, but only padding. To do so, instead of concatenating a zeros-like step, we concatenate a dataset with one step that contains only the following: {'is_padding'=True}.
In the output dataset, all the real steps will contain the new key ('is_padding') with a default value (False). The new last step will contain all the step keys with a zero value, and 'is_padding'=True.
End of explanation
# Builds a dataset with a zeros-like step.
def make_extra_step(_):
return rlds.transformations.zero_dataset_like(
dataset.element_spec[rlds.STEPS])
def concatenate_episode(episode):
episode[rlds.STEPS] = rlds.transformations.concat_if_terminal(
episode[rlds.STEPS], make_extra_step)
return episode
# Concatenates the existing dataset with a zeros-like dataset if the existing
# dataset ends with a terminal step.
condition_concatenate_dataset = dataset.map(concatenate_episode)
Explanation: If we only want to pad episodes that end in a terminal state, we can use concat_if_terminal.
End of explanation
batch_size = 2
shift = 3
batched_dataset = rlds.transformations.batch(dataset, batch_size, shift)
Explanation: Batch dataset with overlap
RLDS provides a flexible batching method that allows to configure batches as sliding windows, allowing, for example, to create batches that overlap.
End of explanation
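Editor's toy illustration (separate from the RLDS helper above, using plain tf.data) of what the size/shift arguments mean: with size=2 and shift=1 consecutive windows overlap by one element.
toy = tf.data.Dataset.range(6)
for window in toy.window(size=2, shift=1, drop_remainder=True).flat_map(lambda w: w.batch(2)):
  print(window.numpy())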
@tf.function
def get_data(step):
# Extracts reward and action from the step (sets them to zeros if this is the
# last step).
if step[rlds.IS_LAST]:
return {k: tf.nest.map_structure(tf.zeros_like, step[k]) for k in [rlds.REWARD, rlds.ACTION]}
else:
return {k: step[k] for k in [rlds.REWARD, rlds.ACTION]}
# Calculates sum across reward and action fields.
sum = rlds.transformations.sum_nested_steps(dataset, get_data)
print('sum rewards: ', sum[rlds.REWARD].numpy())
print('sum actions: ', sum[rlds.ACTION].numpy())
Explanation: Statistics
The RLDS library includes optimized helpers to calculate statistics across the dataset. To avoid making assumptions on the data, many of them receive as an extra parameter a function to select the data of the step that we want to use in the stats.
Sum of step fields
sum_nested_steps allows users to sum data across all the steps in the dataset. It gets an extra parameter, get_data, that enables the user to define which data of the step is going to be added. In this example, we take the action and reward of all steps.
End of explanation
@tf.function
def data_to_sum(step):
# This assumes that the reward is valid in all steps.
return step[rlds.REWARD]
@tf.function
def add_total_reward(episode):
total = rlds.transformations.sum_dataset(episode[rlds.STEPS], data_to_sum)
return {
**episode,
'total_reward': total
}
ds_with_total_reward = dataset.map(add_total_reward)
for e in ds_with_total_reward:
print(e['total_reward'])
Explanation: Total reward per episode
sum_dataset can be used to efficiently sum the values of one or multiple fields of a step. For example, to calculate the total reward per episode.
End of explanation
# Calculates lengths for each episode.
lengths = dataset.map(
lambda episode: rlds.transformations.episode_length(episode[rlds.STEPS]))
Explanation: Episode length
episode_length computes the number of steps in a given dataset.
End of explanation
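Editor's follow-up: the result above is itself a dataset, so materializing it shows the per-episode lengths as plain integers.
print([int(n) for n in lengths.as_numpy_iterator()])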
# Calculates mean and std of reward, observation and action across the full dataset of episodes.
mean, std = rlds.transformations.mean_and_std(dataset, rlds.transformations.sar_fields_mask)
print('mean[REWARD]: ', mean[rlds.REWARD].numpy())
print('std[REWARD]:' , std[rlds.REWARD])
print('mean[OBSERVATION]: ', mean[rlds.OBSERVATION].numpy())
print('std[OBSERVATION]:' , std[rlds.OBSERVATION])
print('mean[ACTION]: ', mean[rlds.ACTION].numpy())
print('std[ACTION]:' , std[rlds.ACTION])
Explanation: Mean and std
mean_and_std computes the mean and standard deviation across all the steps for any field (or nested field) of the step.
Besides the dataset, users need to provide a function get_data that returns
two values:
the data in this step (for example one of the fields of the step)
if the data is valid (for example, in the last step in SAR alignment, the action and the reward are usually undefined).
When the data in the observation and action is numeric, and the user hasn't changed the alignment, it is possible to use a predefined get_data: sar_fields_mask, as in the following example (note that it may not work with all the datasets).
End of explanation
def get_data(step):
# Obtains the desired data from the step.
data = {rlds.REWARD: step[rlds.REWARD]}
# Discards the data of the last step.
if step[rlds.IS_LAST]:
mask = {
rlds.REWARD: False,
}
else:
mask = {
rlds.REWARD: True,
}
return data, mask
# Calculates mean and std of the reward across the full dataset of episodes.
mean, std = rlds.transformations.mean_and_std(dataset, get_data)
print('mean: ', mean[rlds.REWARD].numpy())
print('std:' , std[rlds.REWARD])
Explanation: To customize this function (for example, when we change the alignment, or when we want only the stats of certain fields), we can provide our own implementation of get_data.
End of explanation
# Defines a function to calculate mean of rewards per episode.
def reduction_fn(episode):
@tf.function
def data_to_sum(step):
# Sets the reward of the last step to 0
if step[rlds.IS_LAST]:
return {rlds.REWARD: tf.zeros_like(step[rlds.REWARD])}
else:
return {rlds.REWARD: step[rlds.REWARD]}
total_reward = rlds.transformations.nested_ops.sum_dataset(
episode[rlds.STEPS], data_to_sum)
count = rlds.transformations.episode_length(episode[rlds.STEPS])
avg = tf.cast(total_reward[rlds.REWARD], tf.float32) / (tf.cast(
count, tf.float32)-1)
episode['avg_reward'] = avg
return episode
# Calculates average reward per episode.
dataset_with_avg_reward = dataset.map(reduction_fn)
print('mean rewards: ', list(dataset_with_avg_reward.map(lambda e: e['avg_reward']).as_numpy_iterator()))
# Filters the episodes with an average reward higher than 5
filtered_dataset = dataset_with_avg_reward.filter(lambda episode: episode['avg_reward']<=5)
print('filtered mean rewards: ', list(filtered_dataset.map(lambda e: e['avg_reward']).as_numpy_iterator()))
Explanation: RL examples
This section includes examples of more complex operations based on the RLDS library.
Filter episodes based on average reward
In this example, we first add the average reward to each episode, and then filter out episodes with an average reward higher than 5.
End of explanation
# Sets the maximum length for each episode.
max_episode_length = 20
def truncate_steps(steps: tf.data.Dataset) -> tf.data.Dataset:
return steps.take(max_episode_length)
truncated_dataset = rlds.transformations.apply_nested_steps(dataset,
truncate_steps)
Explanation: Truncate episodes to a given length
Truncates all episodes to a maximum length of 20
End of explanation
def steps_to_transition(step):
new_step = {k: step[k][0] for k in step.keys()}
new_step['next_observation'] = step[rlds.OBSERVATION][1]
new_step['next_is_terminal'] = step[rlds.IS_TERMINAL][1]
new_step['next_is_last'] = step[rlds.IS_LAST][1]
return new_step
def episode_steps_to_transition(episode: Dict[str, Any]) -> tf.data.Dataset:
episode[rlds.STEPS] = rlds.transformations.batch(
episode[rlds.STEPS], size=2, shift=1, drop_remainder=True).map(
steps_to_transition)
return episode
dataset_transitions = dataset.map(episode_steps_to_transition)
# Prints the first episode and the step spec.
first_episode = next(iter(dataset_transitions))
print(f'first_episode: {first_episode}\n')
print(f'steps spec: {first_episode[rlds.STEPS].element_spec}\n')
Explanation: Convert Steps into SARS Transitions
This example illustrates how to convert this dataset from an Episode of Steps to an Episode of Transitions.
End of explanation
def steps_to_nstep_transition(step):
new_step = {k: step[k][0] for k in step.keys()}
new_step[rlds.REWARD] = tf.experimental.numpy.sum(step[rlds.REWARD])
new_step['next_observation'] = step[rlds.OBSERVATION][-1]
new_step['next_is_terminal'] = step[rlds.IS_TERMINAL][-1]
new_step['next_is_last'] = step[rlds.IS_LAST][-1]
return new_step
def episode_steps_to_nstep_transition(episode: Dict[str, Any], n) -> tf.data.Dataset:
episode[rlds.STEPS] = rlds.transformations.batch(
episode[rlds.STEPS], size=n, shift=1, drop_remainder=True).map(
steps_to_nstep_transition)
return episode
dataset_transitions = dataset.map(lambda e: episode_steps_to_nstep_transition(e, 4))
# Prints the first episode and the step spec.
first_episode = next(iter(dataset_transitions))
print(f'first_episode: {first_episode}\n')
print(f'steps spec: {first_episode[rlds.STEPS].element_spec}\n')
Explanation: Convert Steps into N-Step Transitions
Similar to the example above, but using N-step transitions.
End of explanation
# Defines a function to transform SAR to ARS.
def sar_to_ars(episode):
steps = episode[rlds.STEPS]
steps = rlds.transformations.concatenate(
rlds.transformations.zero_dataset_like(steps), steps)
steps = rlds.transformations.shift_keys(
steps, [rlds.OBSERVATION, rlds.IS_FIRST, rlds.IS_TERMINAL],
shift=-1)
episode[rlds.STEPS] = steps.map(
lambda s: rlds.transformations.add_alignment_to_step(
s, rlds.transformations.AlignmentType.ARS))
return episode
ars_dataset = dataset.map(sar_to_ars)
# Prints the first episode and the step spec.
first_episode = next(iter(ars_dataset))
print(f'first_episode: {first_episode}\n')
print(f'steps spec: {first_episode[rlds.STEPS].element_spec}\n')
Explanation: Change alignment from SAR to ARS
The RLDS step contains the current observation, the action applied and the reward obtained. In ARS alignment, a step contains the action applied, the reward, and observation obtained after applying the action.
This example shows how to transform the RLDS dataset to ARS format.
End of explanation
# Defines the name of the field that we are going to look for in the steps to check the condition after which we want to truncate.
terminal_tag = 'reward'
# Defines a function for setting the terminal step.
def set_terminal(step):
has_termination_tag = tf.not_equal(step[terminal_tag],
tf.zeros_like(step[terminal_tag]))
step[rlds.IS_TERMINAL] = tf.math.logical_or(step[rlds.IS_TERMINAL],
has_termination_tag)
return step
# Defines a function for cutting episodes after a terminal step.
def cut_single_episode(episode: Dict[str, Any]) -> Dict[str, Any]:
steps = episode[rlds.STEPS]
steps = rlds.transformations.truncate_after_condition(
steps, lambda step: tf.not_equal(step[terminal_tag],
tf.zeros_like(step[terminal_tag])))
steps = steps.map(set_terminal)
episode[rlds.STEPS] = steps
return episode
dataset.map(cut_single_episode)
Explanation: Conditional truncation
This example illustrates how to truncate an episode after a condition.
End of explanation
mean, std = rlds.transformations.mean_and_std(dataset, rlds.transformations.sar_fields_mask)
shift = -mean[rlds.OBSERVATION].numpy()
scale = 1.0 / np.maximum(std[rlds.OBSERVATION], 1e-3)
def normalize_observations(step):
step[rlds.OBSERVATION] = (step[rlds.OBSERVATION]+shift) * scale
return step
normalized_dataset = rlds.transformations.map_nested_steps(dataset, normalize_observations)
Explanation: Normalization
Use the calculation of the mean and the std to apply normalization to the observations.
End of explanation |
6,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accessing Data
As pandas is built on Python, any means available in Python can be used to retrieve data from outside sources. This makes the range of data that can be accessed essentially unlimited, including text files, Excel spreadsheets, web sites and services, databases, and cloud-based services.
Setting up the Python notebook
Step1: CSV & Text/Tabular Format
Step2: Reading a CSV file into DataFrame
Step3: The date field is now the index; however, because of this it is no longer column data. If you want to use the date as a column, you will need to create a new column and assign the index labels to that column.
Step4: Specifying column names
Step5: Saving DataFrame to a CSV file
Step6: It was necessary to tell the method that the index label should be saved with a column name of date using index_label=date. Otherwise, the index does not have a name added to the first row of the file, which makes it difficult to read back properly.
Step7: General Field-Delimited Data
Step8: Handling noise rows in a dataset
Sometimes, data in a field-delimited file may contain erroneous headers and footers. Examples can be company information at the top, such as invoice numbers, addresses and summary footers. Sometimes data is stored on every other line. These situations will cause errors when pandas tries to open the files. To handle these scenarios, some useful parameters can be used.
Step9: Another common situation is where a file has content at the end of the file which should be ignored to prevent an error, such as the following
Step10: Reading and Writing data in Excel Format
pandas supports reading data in Excel 2003 and newer formats using the pd.read_excel() function or via the ExcelFile class.
Step11: To write more than one DataFrame to a single Excel file and each DataFrame object on a separate worksheet use the ExcelWriter object along with the with keyword.
Step12: Reading and Writing JSON files
pandas can read and write data stored in JSON format.
Step13: Notice two slight differences here caused by the reading / writing of data from JSON. First, the columns have been reordered alphabetically. Second, the index of the DataFrame, although containing content, is sorted as a string.
Reading HTML data from the web
Underneath the covers, pandas makes use of the LXML, Html5Lib and BeautifulSoup4 packages.
Step14: Reading and Writing HDF5 format files
HDF5 is a data model, library and file format to store and manage data. It is commonly used in scientific computing environments. It supports an unlimited variety of data types and is designed for flexible and efficient I/O and for high volume and complex data.
HDF5 is portable and extensible allowing applications to evolve in their use of HDF5. HDF5 technology suite includes tools and applications to manage, manipulate, view and analyse data in HDF5 format.
HDF5 is
Step15: Accessing data on the web and in the cloud
pandas makes it extremely easy to read data from the web and the cloud. All of the pandas functions we have examined so far can also be given an HTTP URL, FTP address or S3 address instead of a local file path.
Step16: Reading and writing from/to SQL databases
pandas can read data from any SQL database that supports Python data adapters that respect the Python DB-API. Reading is performed by using the pandas.io.sql.read_sql() function and writing to SQL databases using the .to_sql() method of DataFrame.
Step17: As these functions take a connection object, which can be any Python DB-API compatible data adapter, you can more or less work with any supported database data by simply creating an appropriate connection object. The code at pandas level should remain the same for any supported database.
Reading data from remote data services
pandas has direct support for various web-based data source classes in the pandas.io.data namespace. The primary class of interest is pandas.io.data.DataReader, which is implemented to read data from various supported sources and return it to the application directly as DataFrame.
Currently, support exists for the following sources via the DataReader class
Step18: Reading from Federal Reserve Economic Data
Step19: Accessing Kenneth French's Data
Kenneth R French is a professor of finance at the Tuck School of Business at Dartmouth College. He has created an extensive library of economic data, which is available for download over the Web.
Step20: Reading from the World Bank
World Bank datasets are identified using indicators, a text code that represents each dataset. A full list of indicators can be retrieved using the pandas_datareader.get_indicators() function.
Step21: We can do some interesting things with this data. The example we will look at, determines which country has the lowest life expectancy for each year. To do this, we first need to pivot this data, so that the index is the country name and the year is the column. | Python Code:
# import pandas and numpy
import numpy as np
import pandas as pd
# set some pandas options for controlling output
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns',10)
pd.set_option('display.max_rows',10)
Explanation: Accessing Data
As pandas is built on Python, any means available in Python can be used to retrieve data from outside sources. This makes the range of data that can be accessed essentially unlimited, including text files, Excel spreadsheets, web sites and services, databases, and cloud-based services.
Setting up the Python notebook
End of explanation
# view the first five lines of data/msft.csv
! head -n 5 ../../data/msft.csv # OS/Linux
# !type ..\..\data\msft.csv # on windows
Explanation: CSV & Text/Tabular Format
End of explanation
# read in msft.csv into a DataFrame
msft = pd.read_csv("../../data/msft.csv")
msft.head()
# specifying the index column
msft = pd.read_csv("../../data/msft.csv", index_col=0)
msft.head()
Explanation: Reading a CSV file into DataFrame
End of explanation
# examine the types of the columns in the DataFrame
msft.dtypes
# to force type of columns, use the dtypes parameter
# following forces the column to be float64
msft = pd.read_csv("../../data/msft.csv", dtype={'Volume': np.float64})
msft.dtypes
Explanation: The date field is now the index; however, because of this it is no longer column data. If you want to use the date as a column, you will need to create a new column and assign the index labels to that column.
End of explanation
# specify a new set of names for the columns
# all lower case, remove space in Adj Close
# also, header = 0 skips the header row
df = pd.read_csv("../../data/msft.csv",header=0,names=['open','high','low','close','volume','adjclose'])
df.head()
# read in data only in the Date and close columns,
# use Date as the index
df2 = pd.read_csv("../../data/msft.csv",usecols=['Date','Close'],index_col=['Date'])
df2.head()
Explanation: Specifying column names
End of explanation
# save df2 to a new csv file
# also specify naming the index as date
df2.to_csv("../../data/msft_modified.csv",index_label='date')
Explanation: Saving DataFrame to a CSV file
End of explanation
# view the start of the file just saved
!head ../../data/msft_modified.csv
Explanation: It was necessary to tell the method that the index label should be saved with a column name of date using index_label=date. Otherwise, the index does not have a name added to the first row of the file, which makes it difficult to read back properly.
End of explanation
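Editor's sketch: reading the file back with the saved 'date' column as the index closes the round trip described above (df_back is just an illustrative name).
# read the saved file back, using the 'date' column as the index
df_back = pd.read_csv("../../data/msft_modified.csv", index_col='date')
df_back.head(2)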
# use read_table with sep=',' to read a csv
df=pd.read_table("../../data/msft.csv",sep=',')
df.head()
# save as pipe delimited
df.to_csv("../../data/msft_piped.txt",sep='|')
# check if it worked
!head -n 5 ../../data/msft_piped.txt
Explanation: General Field-Delimited Data
End of explanation
# messy file
!head ../../data/msft2.csv # Linux
# read, but skip rows 0,2 and 3
df = pd.read_csv("../../data/msft2.csv",skiprows=[0,2,3])
df
Explanation: Handling noise rows in a dataset
Sometimes, data in a field-delimited file may contain erroneous headers and footers. Examples can be company information at the top, such as invoice numbers, addresses and summary footers. Sometimes data is stored on every other line. These situations will cause errors when pandas tries to open the files. To handle these scenarios, some useful parameters can be used.
End of explanation
# another messy file with mess at the end
!cat ../../data/msft_with_footer.csv # osx / Linux
# skip only two lines at the end
# engine parameter to force python implementation rather than default c implementation
df = pd.read_csv("../../data/msft_with_footer.csv",skipfooter=2,engine='python')
df
# only process the first three rows
pd.read_csv("../../data/msft.csv",nrows=3)
# skip 100 lines, then only process the next five
pd.read_csv("../../data/msft.csv", skiprows=100, nrows=5, header=0,names=['open','high','low','close','vol','adjclose'])
Explanation: Another common situation is where a file has content at the end of the file which should be ignored to prevent an error, such as the following:
End of explanation
# read excel file
# only reads first sheet
df = pd.read_excel("../../data/stocks.xlsx")
df.head()
# read from the appl worksheet
aapl = pd.read_excel("../../data/stocks.xlsx", sheetname='aapl')
aapl.head()
# save to excel file in worksheet sheet1
df.to_excel("../../data/stocks2.xlsx")
# write making the worksheet name MSFT
df.to_excel("../../data/stocks_msft.xlsx", sheet_name='MSFT')
Explanation: Reading and Writing data in Excel Format
pandas supports reading data in Excel 2003 and newer formats using the pd.read_excel() function or via the ExcelFile class.
End of explanation
from pandas import ExcelWriter
with ExcelWriter("../../data/all_stocks.xls") as writer:
aapl.to_excel(writer,sheet_name='AAPL')
df.to_excel(writer,sheet_name='MSFT')
# write to xlsx
df.to_excel("../../data/msft2.xlsx")
Explanation: To write more than one DataFrame to a single Excel file and each DataFrame object on a separate worksheet use the ExcelWriter object along with the with keyword.
End of explanation
# write the excel data to a JSON file
df.head().to_json("../../data/stocks.json")
!cat ../../data/stocks.json
# read data in from JSON
df_from_json = pd.read_json("../../data/stocks.json")
df_from_json.head(5)
Explanation: Reading and Writing JSON files
pandas can read and write data stored in JSON format.
End of explanation
# url to read
url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
# read it
banks = pd.read_html(url)
# examine a subset of the first table read
banks[0][0:5].ix[:,0:4]
# write to html
# read the stock data
df=pd.read_excel("../../data/stocks.xlsx")
# write first 2 rows to HTML
df.head(2).to_html("../../data/stocks.html")
# check
!head -n 28 ../../data/stocks.html
Explanation: Notice two slight differences here caused by the reading / writing of data from JSON. First, the columns have been reordered alphabetically. Second, the index of the DataFrame, although containing content, is sorted as a string.
Reading HTML data from the web
Underneath the covers, pandas makes use of the LXML, Html5Lib and BeautifulSoup4 packages.
End of explanation
# seed for replication
np.random.seed(123456)
# create a DataFrame of dates and random numbers in three columns
df = pd.DataFrame(np.random.randn(8,3),index=pd.date_range('1/1/2000', periods=8), columns=['A','B','C'])
# create HDF5 store
store = pd.HDFStore('../../data/store.h5')
store['df'] = df # persisting happened here
store
# read in data from HDF5
store = pd.HDFStore("../../data/store.h5")
df = store['df']
df
# this changes the DataFrame, but did not persist
df.ix[0].A = 1
# to persist the change, assign the dataframe to the
# HDF5 store object
store['df'] = df
# it is now persisted
# the following loads the store and
# shows the first two rows, demonstrating
# the persisting was done
pd.HDFStore("../../data/store.h5")['df'].head(2)
Explanation: Reading and Writing HDF5 format files
HDF5 is a data model, library and file format to store and manage data. It is commonly used in scientific computing environments. It supports an unlimited variety of data types and is designed for flexible and efficient I/O and for high volume and complex data.
HDF5 is portable and extensible allowing applications to evolve in their use of HDF5. HDF5 technology suite includes tools and applications to manage, manipulate, view and analyse data in HDF5 format.
HDF5 is:
A Versatile data model that can represent very complex data objects and wide variety of metadata
A completely portable file format with no limit on the number or size of data objects in a collection
A Software library that runs on range of computational platforms from laptops to massively parallel processing systems and implements high level API with C,C++,Fortran and Java interfaces
A rich set of integrated performance features that allows for access time and storage space optimizations.
Tools and applications to manage, manipulate, view and analyze the data in collection
HDF5Store is a hierarchical dictionary like object that reads and writes pandas objects to the HDF5 format.
End of explanation
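Editor's note: the same round trip can also be done with the DataFrame-level helpers to_hdf()/read_hdf() instead of an explicit HDFStore object (the store is closed first so the file handle is not held open twice).
# close the explicit store, then use the convenience functions
store.close()
df.to_hdf("../../data/store.h5", key='df')
pd.read_hdf("../../data/store.h5", key='df').head(2)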
# read csv directly from Yahoo! Finance from a URL
df = pd.read_csv("https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/AirPassengers.csv")
df[:5]
Explanation: Accessing data on the web and in the cloud
pandas makes it extremely easy to read data from the web and the cloud. All of the pandas functions we have examined so far can also be given an HTTP URL, FTP address or S3 address instead of a local file path.
End of explanation
# reference SQLITE
import sqlite3
# read in the stock data from csv
msft = pd.read_csv("../../data/msft.csv")
msft['Symbol'] = "MSFT"
aapl = pd.read_csv("../../data/aapl.csv")
aapl['Symbol'] = 'AAPL'
# create connection
connection = sqlite3.connect("../../data/stocks.sqlite")
# .to_sql() will create sql to store the DataFrame
# in the specified table. if_exists specifies
# what to do if the table already exists
msft.to_sql("STOCK DATA", connection, if_exists="replace")
aapl.to_sql("STOCK DATA", connection, if_exists="append")
# commit the sql and close the connection
connection.commit()
connection.close()
# read data
# connect to the database file
connection = sqlite3.connect("../../data/stocks.sqlite")
# query all records in STOCK_DATA
# returns a DataFrame
# index_col specifies which column to make the DataFrame index
stocks = pd.io.sql.read_sql("SELECT * FROM STOCK_DATA;", connection, index_col="index")
# close the connection
connection.close()
# report the head of the data received
stocks.head()
# open the connection
connection = sqlite3.connect("../../data/stocks.sqlite")
# construct the query string
query = "SELECT * FROM STOCK_DATA WHERE Volume > 29200100 AND Symbol='MSFT';"
# execute and close connection
items = pd.io.sql.read_sql(query,connection,index_col='index')
connection.close()
items
Explanation: Reading and writing from/to SQL databases
pandas can read data from any SQL database that supports Python data adapters that respect the Python DB-API. Reading is performed by using the pandas.io.sql.read_sql() function and writing to SQL databases using the .to_sql() method of DataFrame.
End of explanation
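Editor's sketch: when filtering on a value, it is safer to let the DB-API driver substitute parameters than to format them into the SQL string; read_sql() forwards params to the underlying cursor (the variable name is illustrative).
# parameterized query against the same SQLite file
connection = sqlite3.connect("../../data/stocks.sqlite")
msft_only = pd.io.sql.read_sql("SELECT * FROM STOCK_DATA WHERE Symbol=?;",
                               connection, params=('MSFT',), index_col='index')
connection.close()
msft_only.head()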
import pandas_datareader.data as web
import datetime
# start and end dates
start = datetime.datetime(2012,1,1)
end = datetime.datetime(2014,1,27)
# read the MSFT stock data from Yahoo!
yahoo = web.DataReader('MSFT','yahoo',start,end)
yahoo.head()
# read from google
google = web.DataReader('MSFT','google',start,end)
google.head()
# specify we want all yahoo options data for AAPL
# this can take a little time...
from pandas_datareader.data import Options
aapl = Options('AAPL','yahoo')
# read all the data
data = aapl.get_all_data()
# examine the first six rows and four columns
data.iloc[0:6,0:4]
# get all puts at strike price of $80 (first four columns only)
data.loc[(80, slice(None),'put'),:].iloc[0:5,0:4]
data.loc[(80,slice('20150117','20150417'),'put'),:].iloc[:,0:4]
# msft calls expiring on 2015-01-05
expiry = datetime.date(2015, 1, 5)
msft_calls = Options('MSFT','yahoo').get_call_data(expiry=expiry)
msft_calls.iloc[0:5,0:5]
# msft calls expiring on 2015-01-17
expiry = datetime.date(2015,1,17)
aapl_calls = aapl.get_call_data(expiry=expiry)
aapl_calls.iloc[0:5,0:4]
Explanation: As these functions take a connection object, which can be any Python DB-API compatible data adapter, you can more or less work with any supported database data by simply creating an appropriate connection object. The code at pandas level should remain the same for any supported database.
Reading data from remote data services
pandas has direct support for various web-based data source classes in the pandas.io.data namespace. The primary class of interest is pandas.io.data.DataReader, which is implemented to read data from various supported sources and return it to the application directly as DataFrame.
Currently, support exists for the following sources via the DataReader class:
* Daily historical prices stock from either Yahoo! and Google Finance
* Yahoo! Options
* Federal Reserve Economic Data Library
* Kenneth French's Data Library
* The World Bank
Reading Stock Data from Yahoo! and Google Finance
End of explanation
gdp = web.DataReader("GDP","fred",datetime.date(2012,1,1),datetime.date(2014,1,27))
gdp
# get compensation of employees: Wages and Salaries
web.DataReader("A576RC1A027NBEA","fred",datetime.date(1929,1,1),datetime.date(2013,1,1))
Explanation: Reading from Federal Reserve Economic Data
End of explanation
# read from Kenneth French fama global factors data set
factors = web.DataReader("Global_Factors","famafrench")
factors
Explanation: Accessing Kenneth French's Data
Kenneth R French is a professor of finance at the Tuck School of Business at Dartmouth College. He has created an extensive library of economic data, which is available for download over the Web.
End of explanation
from pandas_datareader import wb
all_indicators = wb.get_indicators()
# examine some of the indicators
all_indicators.ix[:,0:1]
# search of life expectancy indicators
le_indicators = wb.search("life expectancy")
le_indicators.iloc[:3,:2]
# get countries and show the 3 digit code and name
countries = wb.get_countries()
# show a subset of the country data
countries.iloc[0:10].ix[:,['name','capitalcity','iso2c']]
# get life expectancy at birth for all countries from 1980 to 2014
le_data_all = wb.download(indicator="SP.DYN.LE00.IN", start='1980',end='2014')
le_data_all
# only US, CAN and MEX are returned by default
le_data_all.index.levels[0]
# retrieve life expectancy at birth for all countries
# from 1980 to 2014
le_data_all = wb.download(indicator="SP.DYN.LE00.IN",country=countries['iso2c'],start='1980',end='2012')
le_data_all
Explanation: Reading from the World Bank
World Bank datasets are identified using indicators, a text code that represents each dataset. A full list of indicators can be retrieved using the pandas_datareader.get_indicators() function.
End of explanation
# le_data_all.pivot(index='country',columns='year')
le_data = le_data_all.reset_index().pivot(index='country',columns='year')
# examine pivoted data
le_data.iloc[:,0:3]
# ask what is the name of the country for each year
# with the least life expectancy
country_with_least_expectancy = le_data.idxmin(axis=0)
country_with_least_expectancy
# and what is the minimum life expectancy for each year
expectancy_for_least_country = le_data.min(axis=0)
expectancy_for_least_country
# this merges the two frames together and gives us
# year, country and expectancy where the minimum exists
least = pd.DataFrame(data={'Country':country_with_least_expectancy.values,
'Expectancy':expectancy_for_least_country.values},
index= country_with_least_expectancy.index.levels[1])
least
Explanation: We can do some interesting things with this data. The example we will look at, determines which country has the lowest life expectancy for each year. To do this, we first need to pivot this data, so that the index is the country name and the year is the column.
End of explanation |
6,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Finding Patterns in Text
Step1: 2 Compiling Expressions
Step2: 3 Multiple Matches
Step4: 4 Repetition
Step5: When processing a repetition instruction, re will usually consume as much of the input as possible while matching the pattern. This so-called greedy behavior may result in fewer individual matches, or the matches may include more of the input text than intended. Greediness can be turned off by following the repetition instruction with ?.
Step6: 5 character Sets
Step7: 5 Escape Codes
code | Meaning
-- | --
\d | a digit
\D | a non-digit
\s | whitespace(tab,space, newline, etc)
\S | non-whitespace
\w | alphanumeric
\W | non-alphanumeric
Step8: 6 Anchoring
code | Meaning
-- | --
^ | start of string, or line
$ | end of string, or line
\A | start of string
\Z | end of string
\b | empty string at beginning or end of a word
\B | empty string not at beginning or end of word
Step9: 7 Constraining the Search
Step10: 8 Dissecting Matches with groups
Step11: 8 Search Options
8.1 Case-insensitive match
Step12: 8.1 Input with multiline
Step13: 9 Unicode
Step14: 10 Verbose Expression Syntax
Step15: 11 Modifying Strings with Patterns
Step16: 12 Splitting with patterns | Python Code:
import re
pattern = 'this'
text = 'Does this text match the pattern'
match = re.search(pattern, text)
s = match.start()
e = match.end()
print('Found "{}" \n in "{}" from {} to {} ("{}")'.format(match.re.pattern,match.string, s, e, text[s:e]))
Explanation: 1 Finding Patterns in Text
End of explanation
import re
regexes = [
re.compile(p)
for p in ['this', 'that']
]
text = 'Does this text match the pattern'
print('Text: {!r}\n'.format(text))
for regex in regexes:
print('Seeking "{}"->'.format(regex.pattern), end= '')
if regex.search(text):
print('match!')
else:
print('no match')
Explanation: 2 Compiling Expressions
End of explanation
import re
text = 'abbaaabbbbaaaaa'
pattern = 'ab'
for match in re.findall(pattern, text):
print('Found {!r}'.format(match))
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
print('Found at {:d}:{:d}'.format(s,e))
Explanation: 3 Multiple Matches
End of explanation
import re
def test_patterns(text, patterns):
    """Given source text and a list of patterns, look for
    matches for each pattern within the text and print
    them to stdout.
    """
# Look for each pattern in the text and print the results
for pattern, desc in patterns:
print("'{}' ({})\n".format(pattern, desc))
print(" '{}'".format(text))
for match in re.finditer(pattern, text):
s = match.start()
e = match.end()
substr = text[s:e]
n_backslashes = text[:s].count('\\')
prefix = '.' * (s + n_backslashes)
print(" {}'{}'".format(prefix, substr))
print()
return
test_patterns('abbaabbba',
[
('ab*','a followed by zero or more b'),
('ab+','a followed by one or more b'),
('ab?', 'a followed by zero or one b'),
('ab{3}','a followed by three b'),
('ab{2,3}', 'a followed by two or three b')
]
)
Explanation: 4 Repetition
End of explanation
test_patterns('abbaabbba',
[
('ab*?','a followed by zero or more b'),
('ab+?','a followed by one or more b'),
('ab??', 'a followed by zero or one b'),
('ab{3}?','a followed by three b'),
('ab{2,3}?', 'a followed by two or three b')
]
)
Explanation: When processing a repetition instruction, re will usually consume as much of the input as possible while matching the pattern. This so-called greedy behavior may result in fewer individual matches, or the matches may include more of the input text than intended. Greediness can be turned off by following the repetition instruction with ?.
End of explanation
test_patterns(
'abbaabbba',
[
('[ab]', 'either a or b'),
('a[ab]+', 'a followed by one or more a or b'),
('a[ab]+?', 'a followed by one or more a or b, not greedy'),
]
)
test_patterns(
'This is some text -- with punctuation',
[
('[^-. ]+', 'sequence without -, ., or space')
]
)
test_patterns(
'This is some text -- with punctuation',
[
('[a-z]+', 'sequence of lowercase letters'),
('[A-Z]+', 'sequence of uppercase letters'),
('[a-zA-Z]+', 'sequence of letters of either case'),
('[A-Z][a-z]+', 'one uppercase followed by lowercase')
]
)
test_patterns(
'abbaabbba',
[
('a.', 'a followed by any one character'),
('b.', 'b followed by any one character'),
('a.*b', 'a followed by anything, end in b'),
('a.*?b', 'a followed by anything, end in b, not greedy')
]
)
Explanation: 5 character Sets
End of explanation
test_patterns(
'A prime #1 example!',
[
(r'\d+', 'sequence of digits'),
(r'\D+', 'sequence of non-digits'),
(r'\s+', 'sequence of whitespace'),
(r'\S+', 'sequence of non-whitespace'),
(r'\w+', 'alphanumeric characters'),
(r'\W+', 'non-alphanumeric')
]
)
Explanation: 5 Escape Codes
code | Meaning
-- | --
\d | a digit
\D | a non-digit
\s | whitespace(tab,space, newline, etc)
\S | non-whitespace
\w | alphanumeric
\W | non-alphanumeric
End of explanation
test_patterns(
'This is some text -- with punctuation.',
[(r'^\w+', 'word at start of string'),
(r'\A\w+', 'word at start of string'),
(r'\w+\S*$', 'word near end of string'),
(r'\w+\S*\Z', 'word near end of string'),
(r'\w*t\w*', 'word containing t'),
(r'\bt\w+', 't at start of word'),
(r'\w+t\b', 't at end of word'),
(r'\Bt\B', 't, not start or end of word')],
)
Explanation: 6 Anchoring
code | Meaning
-- | --
^ | start of string, or line
$ | end of string, or line
\A | start of string
\Z | end of string
\b | empty string at beginning or end of a word
\B | empty string not at beginning or end of word
End of explanation
import re
text = 'This is some text -- with punctuation.'
pattern = 'is'
print('Text :',text)
print('pattern:', pattern)
m = re.match(pattern, text)
print('Match', m)
s = re.search(pattern ,text)
print('Search', s)
Explanation: 7 Constraining the Search
End of explanation
test_patterns(
'abbaaabbbbaaaaa',
[
('a(ab)', 'a followed by literal ab'),
('a(a*b*)','a followed by 0-n a and 0-b b'),
('a(ab)*', 'a followed by 0-n ab'),
('a(ab)+', 'a followed by 1-n ab')
]
)
import re
text = 'This is some text -- with punctuation'
print(text)
print()
patterns = [
(r'^(\w+)', 'word at start of string'),
(r'(\w+)\S*$', 'word at end, with optional punctuation'),
(r'(\bt\w+)\W+(\w+)', 'word starting with t, another word'),
(r'(\w+t)\b', 'word ending with t')
]
for pattern, desc in patterns:
regex = re.compile(pattern)
match = regex.search(text)
print("'{}' ({})\n".format(pattern, desc))
print(' ', match.groups())
print()
import re
text = 'This is some text -- with punctuation'
print(text)
print()
patterns = [
r'(?P<first_word>\w+)',
r'(?P<last_word>\w+)\S*$',
r'(?P<t_word>\bt\w+)\W+(?P<other_word>\w+)',
r'(?P<ends_with_t>\w+t)\b'
]
for pattern in patterns:
regex = re.compile(pattern)
match = regex.search(text)
print("'{}'".format(pattern))
print(' ', match.groups())
print(' ', match.groupdict())
print()
Explanation: 8 Dissecting Matches with groups
End of explanation
import re
text = 'This is some text -- with punctuation.'
pattern = r'\bT\w+'
with_case = re.compile(pattern)
without_case = re.compile(pattern, re.IGNORECASE)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('Case-sensitive:')
for match in with_case.findall(text):
print(' {!r}'.format(match))
print('Case-insensitive:')
for match in without_case.findall(text):
print(' {!r}'.format(match))
Explanation: 8 Search Options
8.1 Case-insensitive match
End of explanation
import re
text = 'This is some text -- with punctuation.\nA second line.'
pattern = r'(^\w+)|(\w+\S*$)'
single_line = re.compile(pattern)
multiline = re.compile(pattern, re.MULTILINE)
print('Text:\n {!r}'.format(text))
print('Pattern:\n {}'.format(pattern))
print('Single Line :')
for match in single_line.findall(text):
print(' {!r}'.format(match))
print('Multline :')
for match in multiline.findall(text):
print(' {!r}'.format(match))
Explanation: 8.1 Input with multiline
End of explanation
import re
text = u'Français złoty Österreich 中国矿业大学'
pattern = r'\w+'
ascii_pattern = re.compile(pattern, re.ASCII)
unicode_pattern = re.compile(pattern)
print('Text :', text)
print('Pattern :', pattern)
print('ASCII :', list(ascii_pattern.findall(text)))
print('Unicode :', list(unicode_pattern.findall(text)))
Explanation: 9 Unicode
End of explanation
import re
address = re.compile(
'''
[\w\d.+-]+ # username
@
([\w\d.]+\.)+ # domain name prefix
(com|org|edu) # TODO: support more top-level domains
''',
re.VERBOSE)
candidates = [
u'[email protected]',
u'[email protected]',
u'[email protected]',
u'[email protected]',
]
for candidate in candidates:
match = address.search(candidate)
print('{:<30} {}'.format(
candidate, 'Matches' if match else 'No match'),
)
Explanation: 10 Verbose Expression Syntax
End of explanation
import re
bold = re.compile(r'\*{2}(.*?)\*{2}')
text = 'Make this **bold**. This **too**.'
print('Text:', text)
print('Bold:', bold.sub(r'<b>\1</b>', text))
import re
bold = re.compile(r'\*{2}(?P<bold_text>.*?)\*{2}')
text = 'Make this **bold**. This **too**.'
print('Text:', text)
print('Bold:', bold.sub(r'<b>\g<bold_text></b>', text))
Explanation: 11 Modifying Strings with Patterns
End of explanation
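Beyond string templates, the replacement passed to sub() can also be a callable that receives the match object, and subn() reports how many substitutions were made. The block below is an extra illustrative sketch, not part of the original walkthrough; the shout() helper is made up for the example.
import re

bold = re.compile(r'\*{2}(.*?)\*{2}')
text = 'Make this **bold**. This **too**.'

# replacement as a function of the match object
def shout(match):
    return '<b>' + match.group(1).upper() + '</b>'

print('Callable :', bold.sub(shout, text))
# subn() also returns the number of replacements performed
print('subn     :', bold.subn(shout, text))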
import re
text = '''Paragraph one
on two lines.
Paragraph two.
Paragraph three.'''
print('With findall:')
for num, para in enumerate(re.findall(r'(.+?)(\n{2,}|$)',
text,
flags=re.DOTALL)):
print(num, repr(para))
print()
print()
print('With split:')
for num, para in enumerate(re.split(r'\n{2,}', text)):
print(num, repr(para))
print()
Explanation: 12 Splitting with patterns
End of explanation |
6,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collecting and Using Data in Python
Laila A. Wahedi
Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy<br>
Follow along
Step1: Other Useful Packages (not used today)
ggplot
Step2: Look at those zip codes!
Clean Zip Code
We don't need the latitude and longitude
Create two variables by splitting the zip code variable
Step3: Rearrange The Data
Step4: Lost Columns! Fips summed!
Group by
Step5: Aside on Copying
Multiple variables can point to the same data in Python. Saves memory
If you set one variable equal to another, then change the first variable, the second changes.
Causes warnings in Pandas all the time.
Solution
Step6: Rearrange The Data
Step7: Rename Columns, Subset Data
Step8: Save Your Data
No saving your workspace like in R or STATA
Save specific variables, models, or results using Pickle
wb
Step9: Scraping
How the Internet Works
Code is stored on servers
Web addresses point to the location of that code
Going to an address or clicking a button sends requests to the server for data,
The server returns the requested content
Your web browser interprets the code to render the web page
<img src='Internet.png'>
Scraping
Step10: Requests from Python
Use requests package
Requested json format
Returns list of dictionaries
Look at the returned keys
Step11: View Returned Data
Step12: Ethics
Check the websites terms of use
Don't hit too hard
Step13: Collect Our Data
Python helps us automate repetitive tasks. Don't download each datapoint you want separately
Get a list of zip codes we want
take a subset to demo, so it doesn't take too long and so we don't all hit too hard from the same ip
Request the data for those zipcodes on a day in 2015 (you pick, fire season July-Oct)
Be sure to sleep between requests
Store that data as you go into a dictionary
Key
Step14: Scraping
Step15: Use Find Feature to Narrow Your Search
Find the unique div we identified
Remember the underscore
Step16: Back To Our Data
If it's still running, go ahead and stop it by pushing the square at the top of the notebook
Step17: Subset down to the data we have
Step18: Create a dataframe from the new AQI data
Step19: Combine The Data
https
Step20: Look At The Data
Step21: Look At The Data
Step22: Look at particulates
There is a lot of missingness in 2015
Try other variables, such as comparing children and adults
Step23: Scatter Plot
Try some other combinations
Our data look clustered, but we'll ignore that for now
Step24: Run a regression
Step25: Clustering Algorithm
Learn more about clustering here
Step26: Look At Clusters
Our data are very closely clustered, OLS was probably not appropriate. | Python Code:
import pandas as pd
import numpy as np
import pickle
import statsmodels.api as sm
from sklearn import cluster
import matplotlib.pyplot as plt
%matplotlib inline
from bs4 import BeautifulSoup as bs
import requests
import time
# from ggplot import *
Explanation: Collecting and Using Data in Python
Laila A. Wahedi
Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy<br>
Follow along: Wahedi.us, Current Presentation
Agenda for today:
More on manipulating data
Scrape data
Merge data into a data frame
Run a basic model on the data
Packages to Import For Today
Should all be included with your Anaconda Python Distribution
Raise your hand for help if you have trouble
Our plots will use matplotlib, similar to plotting in matlab
%matplotlib inline tells Jupyter Notebooks to display your plots
from allows you to import part of a package
End of explanation
asthma_data = pd.read_csv('asthma-emergency-department-visit-rates-by-zip-code.csv')
asthma_data.head()
Explanation: Other Useful Packages (not used today)
ggplot: the familiar ggplot2 you know and love from R
seaborn: Makes your plots prettier
plotly: makes interactive visualizations, similar to shiny
gensim: package for doing natural language processing
scipy: used with numpy to do math. Generates random numbers from distributions, does matrix operations, etc.
Data Manipulation
Download the .csv file at: <br>
https://data.chhs.ca.gov/dataset/asthma-emergency-department-visit-rates-by-zip-code
OR: https://tinyurl.com/y79jbxlk
Move it to the same directory as your notebook
End of explanation
asthma_data[['zip','coordinates']] = asthma_data.loc[:,'ZIP code'].str.split(
pat='\n',expand=True)
asthma_data.drop('ZIP code', axis=1,inplace=True)
asthma_data.head(2)
Explanation: Look at those zip codes!
Clean Zip Code
We don't need the latitude and longitude
Create two variables by splitting the zip code variable:
index the data frame to the zip code variable
split it in two: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html
assign it to another two variables
Remember: can't run this cell twice without starting over
End of explanation
asthma_grouped = asthma_data.groupby(by=['Year','zip']).sum()
asthma_grouped.head(4)
Explanation: Rearrange The Data: Group By
Make child and adult separate columns rather than rows.
Must specify how to aggregate the columns <br>
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html
End of explanation
asthma_grouped.drop('County Fips code',axis=1,inplace=True)
temp_grp = asthma_data.groupby(by=['Year','zip']).first()
asthma_grouped[['fips','county','coordinates']]=temp_grp.loc[:,['County Fips code',
'County',
'coordinates']]
asthma_grouped.loc[:,'Number of Visits']=asthma_grouped.loc[:,'Number of Visits']/2
asthma_grouped.head(2)
Explanation: Lost Columns! Fips summed!
Group by: Cleaning Up
Lost columns you can't sum
took sum of fips
Must add these back in
Works because temp table has same index
End of explanation
A = [5]
B = A
A.append(6)
print(B)
import copy
A = [5]
B = A.copy()
A.append(6)
print(B)
asthma_grouped[['fips','county','coordinates']]=temp_grp.loc[:,['County Fips code',
'County',
'coordinates']].copy()
Explanation: Aside on Copying
Multiple variables can point to the same data in Python. Saves memory
If you set one variable equal to another, then change the first variable, the second changes.
Causes warnings in Pandas all the time.
Solution:
Use proper slicing-- .loc[] --for the right hand side
Use copy
End of explanation
asthma_unstacked = asthma_data.pivot_table(index = ['Year',
'zip',
'County',
'coordinates',
'County Fips code'],
columns = 'Age Group',
values = 'Number of Visits')
asthma_unstacked.reset_index(drop=False,inplace=True)
asthma_unstacked.head(2)
Explanation: Rearrange The Data: Pivot
Use pivot and melt to move from row identifiers to column identifiers and back <br>
https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-melt
Tell computer what to do with every cell:
Index: Stays the same
Columns: The column containing the new column labels
Values: The column containing values to insert
<img src='pivot.png'>
Rearrange The Data: Pivot
Tell computer what to do with every cell:
Index: Stays the same
Columns: The column containing the new column labels
Values: The column containing values to insert
End of explanation
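Since melt() is mentioned above as the inverse of pivot, here is a small sketch (not from the original slides) of melting the pivoted frame back to long format; the id_vars mirror the index columns used in the pivot_table call.
# Sketch: reverse the pivot with melt()
asthma_long = pd.melt(asthma_unstacked,
                      id_vars=['Year', 'zip', 'County', 'coordinates', 'County Fips code'],
                      var_name='Age Group',
                      value_name='Number of Visits')
asthma_long.head(2)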
asthma_unstacked.rename(columns={
'zip':'Zip',
'coordinates':'Coordinates',
'County Fips code':'Fips',
'Adults (18+)':'Adults',
'All Ages':'Incidents',
'Children (0-17)': 'Children'
},
inplace=True)
asthma_2015 = asthma_unstacked.loc[asthma_unstacked.Year==2015,:]
asthma_2015.head(2)
Explanation: Rename Columns, Subset Data
End of explanation
pickle.dump(asthma_unstacked,open('asthma_unstacked.p','wb'))
asthma_unstacked.to_csv('asthma_unstacked.csv')
asthma_unstacked = pickle.load(open('asthma_unstacked.p','rb'))
Explanation: Save Your Data
No saving your workspace like in R or STATA
Save specific variables, models, or results using Pickle
wb: write binary. Tells computer to save the file
rb: read binary. Tells computer to read the file
If you mix them up, you may write over your data and lose it
Write your data to a text file to read later
End of explanation
base_url = "http://www.airnowapi.org/aq/observation/zipCode/historical/"
attributes = ["format=application/json",
"zipCode=20007",
"date=2017-09-05T00-0000",
"distance=25",
"API_KEY=39DC3727-09BD-48C4-BBD8-XXXXXXXXXXXX"
]
post_url = '&'.join(attributes)
print(post_url)
Explanation: Scraping
How the Internet Works
Code is stored on servers
Web addresses point to the location of that code
Going to an address or clicking a button sends requests to the server for data,
The server returns the requested content
Your web browser interprets the code to render the web page
<img src='Internet.png'>
Scraping:
Collect the website code by emulating the process:
Can haz cheezburger?
<img src='burger.png'>
Extract the useful information from the scraped code:
Where's the beef?
<img src='beef.png'>
API
Application Programming Interface
The set of rules that govern communication between two pieces of code
Code requires clear expected inputs and outputs
APIs define required inputs to get the outputs in a format you can expect.
Easier than scraping a website because gives you exactly what you ask for
<img src = "beef_direct.png">
API Keys
APIs often require identification
Go to https://docs.airnowapi.org
Register and get a key
Log in to the site
Select web services
DO NOT SHARE YOUR KEY
It will get stolen and used for malicious activity
Requests to a Server
<div style="float: left;width:50%">
<h3> GET</h3>
<ul><li>Requests data from the server</li>
<li> Encoded into the URL</li></ul>
<img src = 'get.png'>
</div>
<div style="float: left;width:50%">
<h3>POST</h3>
<ul><li>Submits data to be processed by the server</li>
<li>For example, filter the data</li>
<li>Can attach additional data not directly in the url</li></ul>
<img src = 'post.png'>
</div>
Using an API
<img src = 'api.png'>
Requests encoded in the URL
Parsing a URL
<font color="blue">http://www.airnowapi.org/aq/observation/zipCode/historical/</font><font color="red">?</font><br><font color="green">format</font>=<font color="purple">application/json</font><font color="orange">&<br></font><font color="green">zipCode</font>=<font color="purple">20007</font><font color="orange">&</font><br><font color="green">date</font>=<font color="purple">2017-09-05T00-0000</font><font color="orange">&</font><br><font color="green">distance</font>=<font color="purple">25</font><font color="orange">&</font><br><font color="green">API_KEY</font>=<font color="purple">D9AA91E7-070D-4221-867CC-XXXXXXXXXXX</font>
The base URL or endpoint is:<br>
<font color="blue">http://www.airnowapi.org/aq/observation/zipCode/historical/</font>
<font color="red">?</font> tells us that this is a query.
<font color="orange">&</font> separates name, value pairs within the request.
Five <font color="green"><strong>name</strong></font>, <font color="purple"><strong>value</strong></font> pairs POSTED
format, zipCode, date, distance, API_KEY
Request from Python
prepare the url
List of attributes
Join them with "&" to form a string
End of explanation
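To make the GET/POST distinction above concrete, here is a hedged sketch using the requests package; httpbin.org is only a public echo service standing in for a real API, and the parameter names are arbitrary.
# Sketch: GET encodes name,value pairs in the URL; POST sends them in the body
params = {'zipCode': '20007', 'distance': '25'}
get_response = requests.get('https://httpbin.org/get', params=params)
post_response = requests.post('https://httpbin.org/post', data=params)
print(get_response.url)              # ...?zipCode=20007&distance=25
print(post_response.json()['form'])  # the server saw the pairs in the request body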
ingredients=requests.get(base_url, post_url)
ingredients = ingredients.json()
print(ingredients[0])
Explanation: Requests from Python
Use requests package
Requested json format
Returns list of dictionaries
Look at the returned keys
End of explanation
for item in ingredients:
AQIType = item['ParameterName']
City=item['ReportingArea']
AQIValue=item['AQI']
print("For Location ", City, " the AQI for ", AQIType, "is ", AQIValue)
Explanation: View Returned Data:
Each list gives a different parameter for zip code and date we searched
End of explanation
time.sleep(1)
Explanation: Ethics
Check the websites terms of use
Don't hit too hard:
Insert pauses in your code to act more like a human
Scraping can look like an attack
Server will block you without pauses
APIs often have rate limits
Use the time package to pause for a second between hits
End of explanation
base_url = "http://www.airnowapi.org/aq/observation/zipCode/historical/"
zips = asthma_2015.Zip.unique()
zips = zips[:450]
date ="date=2015-09-01T00-0000"
api_key = "API_KEY=39DC3727-09BD-48C4-BBD8-XXXXXXXXXXXX"
return_format = "format=application/json"
zip_str = "zipCode="
post_url = "&".join([date,api_key,return_format,zip_str])
data_dict = {}
for zipcode in zips:
time.sleep(1)
zip_post = post_url + str(zipcode)
ingredients = requests.get(base_url, zip_post)
ingredients = ingredients.json()
zip_data = {}
for data_point in ingredients:
AQIType = data_point['ParameterName']
AQIVal = data_point['AQI']
zip_data[AQIType] = AQIVal
data_dict[zipcode]= zip_data
Explanation: Collect Our Data
Python helps us automate repetitive tasks. Don't download each datapoint you want separately
Get a list of zip codes we want
take a subset to demo, so it doesn't take too long and so we don't all hit too hard from the same ip
Request the data for those zipcodes on a day in 2015 (you pick, fire season July-Oct)
Be sure to sleep between requests
Store that data as you go into a dictionary
Key: zip code
Value: Dictionary of the air quality parameters and their value
End of explanation
ingredients = requests.get("https://en.wikipedia.org/wiki/Data_science")
soup = bs(ingredients.text)
print(soup.body.p)
Explanation: Scraping: Parsing HTML
What about when you don't have an API that returns dictionaries?
HTML is a markup language that displays data (text, images, etc)
Puts content within nested tags to tell your browser how to display it
<Section_tag>
  <tag> Content </tag>
  <tag> Content </tag>
< /Section_tag>
<Section_tag>
  <tag> <font color="red">Beef</font> </tag>
< /Section_tag>
Find the tags that identify the content you want:
First paragraph of wikipedia article:
https://en.wikipedia.org/wiki/Data_science
Inspect the webpage:
Windows: ctrl+shift+i
Mac: ctrl+alt+i
<img src = "wikipedia_scrape.png">
Parsing HTML with Beautiful Soup
Beautiful Soup takes the raw html and parses the tags so you can search through them.
text attribute returns raw html text from requests
Ignore the warning, default parser is fine
We know it's the first paragraph tag in the body tag, so:
Can find first tag of a type using <strong>.</strong>
But it's not usually that easy...
End of explanation
parser_div = soup.find("div", class_="mw-parser-output")
wiki_content = parser_div.find_all('p')
print(wiki_content[0])
print('*****************************************')
print(wiki_content[0].text)
Explanation: Use Find Feature to Narrow Your Search
Find the unique div we identified
Remember the underscore: "class_"
Find the p tag within the resulting html
Use an index to return just the first paragraph tag
Use the text attribute to ignore all the formatting and link tags
Next: Use a for loop and scrape the first paragraph from a bunch of wikipedia articles
Learn More: http://web.stanford.edu/~zlotnick/TextAsData/Web_Scraping_with_Beautiful_Soup.html
End of explanation
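As a follow-up to the "Next:" suggestion above, this is one possible sketch of looping over a few articles; the article titles are arbitrary examples and the parsing mirrors the mw-parser-output approach shown earlier.
# Sketch: scrape the first paragraph of several Wikipedia articles
titles = ['Data_science', 'Machine_learning', 'Statistics']
first_paragraphs = {}
for title in titles:
    time.sleep(1)                                   # be polite
    page = requests.get('https://en.wikipedia.org/wiki/' + title)
    soup = bs(page.text)
    parser_div = soup.find('div', class_='mw-parser-output')
    paragraphs = parser_div.find_all('p')
    first_paragraphs[title] = paragraphs[0].text
    print(title, ':', first_paragraphs[title][:80])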
pickle.dump(data_dict,open('AQI_data_raw.p','wb'))
Explanation: Back To Our Data
If it's still running, go ahead and stop it by pushing the square at the top of the notebook:
<img src="interrupt.png">
Save what you collected, don't want to hit them twice!
End of explanation
collected = list(data_dict.keys())
asthma_2015_sub = asthma_2015.loc[asthma_2015.Zip.isin(collected),:]
Explanation: Subset down to the data we have:
use the isin() method to include only those zip codes we've already collected
End of explanation
aqi_data = pd.DataFrame.from_dict(data_dict, orient='index')
aqi_data.reset_index(drop=False,inplace=True)
aqi_data.rename(columns={'index':'Zip'},inplace=True)
aqi_data.head()
Explanation: Create a dataframe from the new AQI data
End of explanation
asthma_aqi = asthma_2015_sub.merge(aqi_data,how='outer',on='Zip')
asthma_aqi.head(2)
Explanation: Combine The Data
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html
* Types of merges:
* Left: Use only rows from the dataframe you are merging into
* Right: use only rows from the dataframe you are inserting, (the one in the parentheses)
* Inner: Use only rows that match between both
* Outer: Use all rows, even if they only appear in one of the dataframes
* On: The variables you want to compare
* Specify right_on and left_on if they have different names
End of explanation
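The different `how` options above are easiest to see on tiny throwaway frames; the sketch below is illustrative only and the zip codes are made up.
# Toy illustration of merge types (not part of the asthma data)
left = pd.DataFrame({'Zip': ['90001', '90002', '90003'], 'Incidents': [10, 20, 30]})
right = pd.DataFrame({'Zip': ['90002', '90003', '90004'], 'OZONE': [40, 40, 55]})
print(left.merge(right, how='inner', on='Zip'))  # only zips present in both
print(left.merge(right, how='left', on='Zip'))   # all zips from left
print(left.merge(right, how='outer', on='Zip'))  # every zip from either frame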
asthma_aqi.Incidents.plot.hist(20)
Explanation: Look At The Data: Histogram
20 bins
End of explanation
asthma_aqi.loc[:,['Incidents','OZONE']].plot.density()
Explanation: Look At The Data: Smoothed Distribution
End of explanation
asthma_aqi.loc[:,['PM2.5','PM10']].plot.hist()
Explanation: Look at particulates
There is a lot of missingness in 2015
Try other variables, such as comparing children and adults
End of explanation
asthma_aqi.plot.scatter('OZONE','PM2.5')
Explanation: Scatter Plot
Try some other combinations
Our data look clustered, but we'll ignore that for now
End of explanation
y =asthma_aqi.loc[:,'Incidents']
x =asthma_aqi.loc[:,['OZONE','PM2.5']]
x['c'] = 1
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
pickle.dump([results,ols_model1],open('ols_model_results.p','wb'))
Explanation: Run a regression:
Note: statsmodels supports equation format like R <br>
http://www.statsmodels.org/dev/example_formulas.html
End of explanation
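For the R-style formula interface mentioned in the note above, a hedged sketch might look like this; Q() quotes the "PM2.5" column name because it contains a dot, and the specification is assumed to match the array-based model fit earlier.
# Sketch: same regression with the statsmodels formula API
import statsmodels.formula.api as smf
formula_model = smf.ols('Incidents ~ OZONE + Q("PM2.5")', data=asthma_aqi)
print(formula_model.fit().summary())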
model_df = asthma_aqi.loc[:,['OZONE','PM2.5','Incidents',]]
model_df.dropna(axis=0,inplace=True)
model_df = (model_df - model_df.mean()) / (model_df.max() - model_df.min())
asthma_air_clusters=cluster.KMeans(n_clusters = 3)
asthma_air_clusters.fit(model_df)
model_df['clusters3']=asthma_air_clusters.labels_
Explanation: Clustering Algorithm
Learn more about clustering here: <br>
http://scikit-learn.org/stable/modules/clustering.html
Use sklearn, a package for data mining and machine learning
Drop rows with missing values first
Standardize the data so they're all on the same scale
End of explanation
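The standardization above is done by hand; an alternative sketch (an assumption, not the presenter's code) uses scikit-learn's StandardScaler, which gives each column zero mean and unit variance before clustering.
# Sketch: scale with StandardScaler instead of the manual rescaling
from sklearn.preprocessing import StandardScaler
scaled = StandardScaler().fit_transform(
    asthma_aqi.loc[:, ['OZONE', 'PM2.5', 'Incidents']].dropna(axis=0))
asthma_air_clusters_std = cluster.KMeans(n_clusters=3)
asthma_air_clusters_std.fit(scaled)
print(asthma_air_clusters_std.labels_[:10])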
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
labels = asthma_air_clusters.labels_
ax.scatter(model_df.loc[:, 'PM2.5'], model_df.loc[:, 'OZONE'], model_df.loc[:, 'Incidents'],
c=labels.astype(np.float), edgecolor='k')
ax.set_xlabel('Particulates')
ax.set_ylabel('Ozone')
ax.set_zlabel('Incidents')
Explanation: Look At Clusters
Our data are very closely clustered, OLS was probably not appropriate.
End of explanation |
6,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Algebra Tutorial
Based on Chapter 4 for Data Science from Scratch Book by Joel Grus with code from https
Step1: Simple vector operations
Vectors can be thought of a as representation of a single point in multi-dimensional space.
Vectors are objects that can be added together to form new vectors or multiplied by scalars to form new vectors.
If we have the height, weight, and age data for a large number of people then we can treat the data as a 3-d vector using [height, weight, age].
Step4: Vector addition requires that the dimensionality of the vectors is the same other wise it fails.
So if we add v + w then each element v(x) + v(x) with x=0,1,...N-1 must have a value in both the vectors for the addition to be successful.
Lists are not really vectors so we need to create them our selves.
Functions for working with vectors
Step8: this isn't right if you don't -- from __future_ import division
Step10: Demonstration of show 2d vectors on a graph
Step13: Functions for working with matrices
Definition of terms for matrix
https
Step14: Using matrix manipulation | Python Code:
# resources for the rest of the page
from __future__ import division # want 3 / 2 == 1.5
import re, math, random # regexes, math functions, random numbers
import matplotlib.pyplot as plt # pyplot
from collections import defaultdict, Counter
from functools import partial
Explanation: Linear Algebra Tutorial
Based on Chapter 4 of Data Science from Scratch by Joel Grus, with code from https://github.com/joelgrus/data-science-from-scratch
End of explanation
john = [72, #inches
195, #pound
32] #years
mary = [53, #inches
105, #pounds
28] #years
Explanation: Simple vector operations
Vectors can be thought of as a representation of a single point in multi-dimensional space.
Vectors are objects that can be added together to form new vectors or multiplied by scalars to form new vectors.
If we have the height, weight, and age data for a large number of people then we can treat the data as a 3-d vector using [height, weight, age].
End of explanation
def vector_add(v, w):
    """adds two vectors componentwise"""
return [v_i + w_i for v_i, w_i in zip(v,w)]
def vector_subtract(v, w):
    """subtracts two vectors componentwise"""
return [v_i - w_i for v_i, w_i in zip(v,w)]
def vector_sum(vectors):
return reduce(vector_add, vectors)
def scalar_multiply(c, v):
return [c * v_i for v_i in v]
x=[1,2]
y=[2,1]
vector_add(x,y)
vector_subtract(x,y)
vlist=[x,y,x,y]
vector_sum(vlist)
scalar_multiply(10,x)
Explanation: Vector addition requires that the dimensionality of the vectors is the same, otherwise it fails.
So if we add v + w, then each element v(x) + w(x) with x=0,1,...,N-1 must have a value in both vectors for the addition to succeed.
Lists are not really vectors, so we need to build these operations ourselves.
Functions for working with vectors
End of explanation
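One detail worth noting about the dimensionality point above: zip() silently stops at the shorter list, so vector_add never complains about mismatched lengths. The variant below is only a sketch (not from the book) that fails loudly instead.
def vector_add_checked(v, w):
    """like vector_add, but raises instead of silently truncating
    to the shorter vector when the dimensions differ"""
    if len(v) != len(w):
        raise ValueError("vectors must have the same length")
    return [v_i + w_i for v_i, w_i in zip(v, w)]

vector_add_checked([1, 2], [2, 1])        # [3, 3]
# vector_add_checked([1, 2], [1, 2, 3])   # would raise ValueError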
def vector_mean(vectors):
    """compute the vector whose i-th element is the mean of the
    i-th elements of the input vectors"""
n = len(vectors)
return scalar_multiply(1/n, vector_sum(vectors))
vector_mean(vlist)
def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
dot(x,x)
def sum_of_squares(v):
    """v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
sum_of_squares(x)
def magnitude(v):
return math.sqrt(sum_of_squares(v))
magnitude(x)
def squared_distance(v, w):
return sum_of_squares(vector_subtract(v, w))
squared_distance(x,y)
def distance(v, w):
return math.sqrt(squared_distance(v, w))
distance(x,x)
Explanation: this isn't right if you don't -- from __future__ import division
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def plot_2D_vectors(tupleof2dvectors,xlim_in,ylim_in):
    """plot_2D_vectors: pass in the 2d vectors and the x and y limits"""
fig, ax = plt.subplots(figsize=(10, 8))
# Set the axes through the origin
for spine in ['left', 'bottom']:
ax.spines[spine].set_position('zero')
for spine in ['right', 'top']:
ax.spines[spine].set_color('none')
vecs = tupleof2dvectors
ax.set(xlim=xlim_in, ylim=ylim_in)
ax.grid()
for v in vecs:
ax.annotate('', xy=v, xytext=(0, 0),
arrowprops=dict(facecolor='blue',
shrink=0,
alpha=0.7,
width=0.5))
ax.text(1.1 * v[0], 1.1 * v[1], str(v))
x=[-1,4]
y=[2,4]
z=[1,-3]
vt=(x,y,z)
plot_2D_vectors(vt,(-5,5),(-5,5))
plt.show()
Explanation: Demonstration of show 2d vectors on a graph
End of explanation
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] for A_i in A]
def make_matrix(num_rows, num_cols, entry_fn):
    """returns a num_rows x num_cols matrix
    whose (i,j)-th entry is entry_fn(i, j)"""
return [[entry_fn(i, j) for j in range(num_cols)]
for i in range(num_rows)]
def is_diagonal(i, j):
    """1's on the 'diagonal', 0's everywhere else"""
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
print "identity_matrix =" + str(identity_matrix)
Explanation: Functions for working with matrices
Definition of terms for matrix
https://en.wikipedia.org/wiki/Matrix_(mathematics)
A matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns.
End of explanation
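Given the definition above, matrix multiplication can be expressed with the helpers already defined (shape, get_row, get_column, make_matrix and dot). The function below is a sketch added for illustration, not part of the excerpted chapter.
def matrix_multiply(A, B):
    """(n x k) times (k x m) is (n x m); entry (i, j) is dot(row i of A, column j of B)"""
    n, k = shape(A)
    k2, m = shape(B)
    if k != k2:
        raise ArithmeticError("cannot multiply matrices with incompatible shapes")
    return make_matrix(n, m, lambda i, j: dot(get_row(A, i), get_column(B, j)))

matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])   # [[19, 22], [43, 50]]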
friendships = [[0, 1, 1, 0, 0, 0, 0, 0, 0, 0], # user 0
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0], # user 1
[1, 1, 0, 1, 0, 0, 0, 0, 0, 0], # user 2
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0], # user 3
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0], # user 4
[0, 0, 0, 0, 1, 0, 1, 1, 0, 0], # user 5
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 6
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 7
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1], # user 8
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] # user 9
identity_matrix_10 = make_matrix(10, 10, is_diagonal)
def matrix_add(A, B):
if shape(A) != shape(B):
raise ArithmeticError("cannot add matrices with different shapes")
num_rows, num_cols = shape(A)
def entry_fn(i, j): return A[i][j] + B[i][j]
return make_matrix(num_rows, num_cols, entry_fn)
matrix_add(friendships,identity_matrix_10)
def make_graph_dot_product_as_vector_projection(plt):
v = [2, 1]
w = [math.sqrt(.25), math.sqrt(.75)]
c = dot(v, w)
vonw = scalar_multiply(c, w)
o = [0,0]
fig, ax = plt.subplots(figsize=(10, 8))
plt.arrow(0, 0, v[0], v[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("v", v, xytext=[v[0] + 0.1, v[1]])
plt.arrow(0 ,0, w[0], w[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("w", w, xytext=[w[0] - 0.1, w[1]])
plt.arrow(0, 0, vonw[0], vonw[1], length_includes_head=True)
plt.annotate(u"(v•w)w", vonw, xytext=[vonw[0] - 0.1, vonw[1] + 0.1])
plt.arrow(v[0], v[1], vonw[0] - v[0], vonw[1] - v[1],
linestyle='dotted', length_includes_head=True)
plt.scatter(*zip(v,w,o),marker='.')
plt.axis('equal')
plt.show()
v = [2, 1]
w = [math.sqrt(.25), math.sqrt(.75)]
c = dot(v, w)
print "Scalar multiplication of V=[2,1] * scalar "+ str(c) + " is " + str(scalar_multiply(c, w))
make_graph_dot_product_as_vector_projection(plt)
Explanation: Using matrix manipulation
End of explanation |
6,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Geocoding no Geopandas
O Geocoding é o processo de transformar um endereço em coordenadas geográficas (formato numérico). Em contrapartida a geocodificação reversa transforma coordenadas em um endereço.
Utilizando o geopandas, podemos fazer operação de geocoding através da função geocode(), que recebe uma lista de endereços (string) e retorna um GeoDataFrame contendo o resultado em objetos Point na coluna geometry.
Nós geocodificaremos os endereços armazenados em um arquivo de texto chamado roubos.csv, que é uma pequena amostra com apenas 5 tuplas contendo informações de eventos de roubos que aconteceram na cidade de Fortaleza.
vamos carregar os dados utilizando pandas com a função read_csv() e mostrá-los.
Step1: Perceba que apesar de possuírmos informação de endereço, não temos coordenadas dos eventos, o que dificulta qualquer tipo de análise. Para obtermos as coordenadas vamos fazer o geocoding dos endereços.
Mas antes, vamos unir todas as informações de endereço em uma coluna só chamada de endereco.
Step2: Agora vamos transformar os endereços em coordenadas usando geocode() com a ferramente de busca de dados Nominatim que realiza consultas no OpenStreetMap.
Antes será necessário instalar a biblioteca geopy com o pip, para isso utilize o comando
Step3: Como resultado, temos um GeoDataFrame que contém nosso endereço e uma coluna 'geometry' contendo objeto Point que podemos usar para exportar os endereços para um Shapefile por exemplo.
Como os indices das duas tabelas são iguais, podemos unir facilmente.
Step4: Notas sobre a ferramenta Nominatim
Nominatim funciona relativamente bem se você tiver endereços bem definidos e bem conhecidos, como os que usamos neste tutorial. No entanto, em alguns casos, talvez você não tenha endereços bem definidos e você pode ter, por exemplo, apenas o nome de um shopping ou uma lanchonete. Nesses casos, a Nominatim pode não fornecer resultados tão bons e, porém você pode utilizar outras APIs como o Google Geocoding API (V3).
2. Operações entre geometrias
Descobrir se um certo ponto está localizado dentro ou fora de uma área,
ou descobrir se uma linha cruza com outra linha ou polígono são
operações geoespaciais fundamentais que são frequentemente usadas, e selecionar
dados baseados na localização. Tais consultas espaciais são uma das primeiras etapas do fluxo de trabalho ao fazer análise espacial.
2.1 Como verificar se o ponto está dentro de um polígono?
Computacionalmente, detectar se um ponto está dentro de um polígono é mais comumente feito utilizando uma fórmula específica chamada algoritmo Ray Casting. Em vez disso, podemos tomar
vantagem dos predicados binários de Shapely
que podem avaliar as relações topológicas com os objetos.
Existem basicamente duas maneiras de conduzir essa consulta com o Shapely
Step5: Vamos verificar se esses pontos estão dentro do polígono
Step6: Então podemos ver que o primeiro ponto parece estar dentro do polígono e o segundo não.
Na verdade, o primeiro ponto é perto do centro do polígono, como nós
podemos ver se compararmos a localização do ponto com o centróide do polígono
Step7: 2.2 Interseção
Outra operação geoespacial típica é ver se uma geometria
intercepta ou toca
outra geometria. A diferença entre esses dois é que
Step8: Vamos ver se eles se interceptam
Step9: Eles também tocam um ao outro?
Step10: Sim, as duas operações são verdade e podemos ver isso plotando os dois objetos juntos.
Step11: 2.3 Ponto dentro de polygon usando o geopandas
Uma das estratégias adotadas pela Secretaria da Segurança Pública e Defesa Social (SSPDS) para o aperfeiçoamento de trabalhos policiais, periciais e bombeirísticos em território cearense é a delimitação do Estado em Áreas Integradas de Segurança (AIS).
A cidade de fortaleza por si só é dividida em cerca de 10 áreas integradas de segurança (AIS). Vamos carregar estas divisões administrativas e visualizar elas.
Step12: Agora vamos mostrar somente as fronteiras das AIS e os nosso eventos de crimes.
Mas antes bora transformar os nosso dados de roubo em um GeoDataFrame.
Step13: Agora sim, vamos mostrar as fronteiras de cada AIS juntamente com os eventos de roubo.
Step14: Relembrando o endereço dos nosso dados, dois roubos aconteceram na avenida bezerra de menezes próximos ao north shopping. Sabendo que a AIS que contém o shopping é a de número 6, vamos selecionar somente os eventos de roubo dentro da AIS 6.
Primeiro vamos separar somente a geometria da AIS 6. Antes vamos visualizar os dados e verificar qual coluna pode nos ajudar nessa tarefa.
Step15: Existem duas colunas que podem nos ajudar a filtrar a AIS desejada, a coluna AIS e a coluna NM_AIS. Vamos utilizar a primeira por ser necessário utilizar apenas o número.
Step16: Agora podemos utilizar a função within() para selecionar apenas os eventos que aconteceram dentro da AIS 6.
Step17: Vamos ver os nosso dados em um mapa utilizando a o módulo Folium | Python Code:
# Import necessary modules
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point
# Filepath
fp = r"data/roubos.csv"
# Read the data
data = pd.read_csv(fp, sep=',')
data
Explanation: 1. Geocoding in Geopandas
Geocoding is the process of turning an address into geographic coordinates (numeric format). Reverse geocoding, in contrast, turns coordinates into an address.
Using geopandas, we can geocode through the geocode() function, which takes a list of addresses (strings) and returns a GeoDataFrame holding the results as Point objects in the geometry column.
We will geocode the addresses stored in a text file called roubos.csv, a small sample of only 5 rows describing robbery events that took place in the city of Fortaleza.
Let's load the data using pandas with the read_csv() function and display it.
End of explanation
data['endereco'] = data['logradouro'] + ', ' + data['localNumero'].apply(str)
data.head()
Explanation: Note that although we have address information, we do not have coordinates for the events, which makes any kind of analysis difficult. To obtain the coordinates we will geocode the addresses.
But first, let's combine all the address information into a single column called endereco.
End of explanation
# Import the geocoding tool
from geopandas.tools import geocode
# Geocode addresses with Nominatim backend
geo = geocode(data['endereco'], provider = 'nominatim', user_agent ='carlos')
geo
Explanation: Now let's turn the addresses into coordinates using geocode() with the Nominatim search tool, which queries OpenStreetMap.
First you will need to install the geopy library with pip; to do so, use the command: pip install geopy
End of explanation
data['geometry'] = geo['geometry']
data.head()
Explanation: As a result, we get a GeoDataFrame that contains our address and a 'geometry' column holding Point objects, which we could use, for example, to export the addresses to a Shapefile.
Since the indexes of the two tables are the same, we can join them easily.
End of explanation
from shapely.geometry import Point, Polygon
# Create Point objects
p1 = Point(24.952242, 60.1696017)
p2 = Point(24.976567, 60.1612500)
# Create a Polygon
coords = [(24.950899, 60.169158), (24.953492, 60.169158), (24.953510, 60.170104), (24.950958, 60.169990)]
poly = Polygon(coords)
# Let's check what we have
print(p1)
print(p2)
print(poly)
Explanation: Notes on the Nominatim tool
Nominatim works reasonably well when you have well-defined, well-known addresses such as the ones used in this tutorial. In some cases, however, you may not have well-defined addresses and might only have, say, the name of a shopping mall or a diner. In those cases Nominatim may not give such good results, and you may want to use other APIs such as the Google Geocoding API (V3).
2. Operations between geometries
Finding out whether a certain point lies inside or outside an area,
or whether a line crosses another line or polygon, are
fundamental geospatial operations that are frequently used, for example to select
data based on location. Such spatial queries are one of the first steps of the workflow when doing spatial analysis.
2.1 How to check whether a point is inside a polygon?
Computationally, detecting whether a point is inside a polygon is most commonly done with a specific formula called the Ray Casting algorithm. Instead of that, we can take
advantage of Shapely's binary predicates,
which can evaluate topological relationships between objects.
There are basically two ways of running this query with Shapely:
using a function called
.within()
which checks whether a point is inside a polygon
using a function called
.contains()
which checks whether a polygon contains a point
Note: although we are talking here about a Point inside a Polygon, it is also possible to check whether a LineString or Polygon lies inside another Polygon.
Let's first create a polygon from a list of coordinate tuples and a
pair of Point objects
End of explanation
# Check if p1 is within the polygon using the within function
print(p1.within(poly))
# Check if p2 is within the polygon
print(p2.within(poly))
Explanation: Let's check whether these points are inside the polygon
End of explanation
# Our point
print(p1)
# The centroid
print(poly.centroid)
Explanation: So we can see that the first point appears to be inside the polygon and the second one does not.
In fact, the first point is close to the center of the polygon, as we
can see if we compare the point's location with the polygon's centroid:
End of explanation
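Since the explanation above also mentions .contains(), here is a small sketch of the same check written from the polygon's point of view; it simply mirrors the within() results already printed.
# contains() is the complement of within()
print(poly.contains(p1))   # True  - p1 is inside the polygon
print(poly.contains(p2))   # False - p2 is not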
from shapely.geometry import LineString, MultiLineString
# Create two lines
line_a = LineString([(0, 0), (1, 1)])
line_b = LineString([(1, 1), (0, 2)])
Explanation: 2.2 Intersection
Another typical geospatial operation is to see whether a geometry
intersects or touches
another geometry. The difference between the two is that:
If the objects intersect, the boundary and interior of one object need
to intersect those of the other object.
If one object touches the other, it is only necessary for their boundaries to share (at least) one single point, while their interiors do not intersect.
Let's try this out.
Let's create two LineStrings
End of explanation
line_a.intersects(line_b)
Explanation: Let's see whether they intersect
End of explanation
line_a.touches(line_b)
Explanation: Do they also touch each other?
End of explanation
# Create a MultiLineString from line_a and line_b
multi_line = MultiLineString([line_a, line_b])
multi_line
Explanation: Yes, both operations are true, and we can see this by plotting the two objects together.
End of explanation
ais_filep = 'data/ais.shp'
ais_gdf = gpd.read_file(ais_filep)
ais_gdf.crs
ais_gdf.head()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,1, figsize=(15,8))
ais_gdf.plot(ax=ax)
plt.show()
Explanation: 2.3 Point in polygon using geopandas
One of the strategies adopted by the Secretaria da Segurança Pública e Defesa Social (SSPDS) to improve police, forensic and fire-service work across the state of Ceará is to divide the state into Integrated Security Areas (AIS).
The city of Fortaleza itself is divided into about 10 integrated security areas (AIS). Let's load these administrative divisions and visualize them.
End of explanation
data_gdf = gpd.GeoDataFrame(data)
data_gdf.crs = ais_gdf.crs
data_gdf.head()
Explanation: Now let's show only the AIS boundaries together with our crime events.
But first, let's turn our robbery data into a GeoDataFrame.
End of explanation
fig, ax = plt.subplots(1,1, figsize=(15,8))
for idx, ais in ais_gdf.iterrows():
ax.plot(*ais['geometry'].exterior.xy, color='black')
data_gdf.plot(ax=ax, color='red')
plt.show()
Explanation: Now let's plot the boundary of each AIS together with the robbery events.
End of explanation
ais_gdf
Explanation: Recalling the addresses in our data, two robberies happened on Avenida Bezerra de Menezes close to the North Shopping mall. Knowing that the AIS containing the mall is number 6, let's select only the robbery events inside AIS 6.
First let's isolate the geometry of AIS 6. Before that, let's look at the data and check which column can help us with this task.
End of explanation
ais6 = ais_gdf[ais_gdf['AIS'] == 6]
ais6.plot()
plt.show()
ais6_geometry = ais6.iloc[0].geometry
ais6_geometry
type(ais6)
Explanation: There are two columns that can help us filter the desired AIS, the AIS column and the NM_AIS column. We will use the first one, since it only requires the number.
End of explanation
mask = data_gdf.within(ais6.geometry[0])
mask
data_gdf_ais6 = data_gdf[mask]
data_gdf_ais6
Explanation: Now we can use the within() function to select only the events that happened inside AIS 6.
End of explanation
import folium
map_fortal = folium.Map(location=[data_gdf_ais6.loc[0, 'geometry'].y,
data_gdf_ais6.loc[0, 'geometry'].x],
zoom_start = 14)
folium.Marker([data_gdf_ais6.loc[0, 'geometry'].y,
data_gdf_ais6.loc[0, 'geometry'].x]).add_to(map_fortal)
folium.Marker([data_gdf_ais6.loc[1, 'geometry'].y,
data_gdf_ais6.loc[1, 'geometry'].x]).add_to(map_fortal)
border_layer = folium.features.GeoJson(ais6_geometry,
style_function=lambda feature: {
'color': 'red',
'weight' : 2,
'fillOpacity' : 0.2,
'opacity': 1,
}).add_to(map_fortal)
map_fortal
Explanation: Let's view our data on a map using the Folium module: conda install -c conda-forge folium
End of explanation |
6,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
%matplotlib
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
#print idx
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
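Because the actual implementation lives in the external file named above, here is a standalone sketch of what a two-loop Euclidean distance computation looks like; it is an illustration of the idea, not the assignment's reference solution, and the small slices keep it fast.
# Sketch: naive two-loop L2 distances between small subsets
def two_loop_l2(X_te, X_tr):
    n_te = X_te.shape[0]
    n_tr = X_tr.shape[0]
    dists = np.zeros((n_te, n_tr))
    for i in range(n_te):
        for j in range(n_tr):
            diff = X_te[i] - X_tr[j]
            dists[i, j] = np.sqrt(np.sum(diff * diff))
    return dists

print two_loop_l2(X_test[:3], X_train[:5]).shape   # (3, 5)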
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
- some test image is similar to every image in the training dataset, in contrast some test image is not.
- some train image is similar to each test image, in contrast some train image is not.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
#two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
#print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
#no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
#print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
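The fully vectorized version referred to above usually relies on expanding the squared distance, ||a - b||^2 = ||a||^2 - 2 a·b + ||b||^2, and letting broadcasting fill the whole matrix at once. The function below is only a sketch of that trick on small slices, not the assignment's reference code.
# Sketch: no-loop L2 distances via the quadratic expansion
def no_loop_l2(X_te, X_tr):
    test_sq = np.sum(X_te ** 2, axis=1).reshape(-1, 1)   # column of ||a||^2
    train_sq = np.sum(X_tr ** 2, axis=1)                 # row of ||b||^2
    cross = X_te.dot(X_tr.T)                             # a.b for every pair
    return np.sqrt(test_sq - 2 * cross + train_sq)

print no_loop_l2(X_test[:3], X_train[:5]).shape   # (3, 5)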
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
num_example_each_fold = X_train.shape[0] / num_folds
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
print "num_example_each_fold is {}".format(num_example_each_fold)
print "X_train_folds:", [data.shape for data in X_train_folds]
print "y_train_folds:", [data.shape for data in y_train_folds]
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
def run_knn(X_train, y_train, X_test, y_test, k):
#print X_train.shape,y_train.shape,X_test.shape,y_test.shape
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
dists = classifier.compute_distances_no_loops(X_test)
y_test_pred = classifier.predict_labels(dists, k=k)
num_correct = np.sum(y_test_pred == y_test)
num_test = X_test.shape[0]
accuracy = float(num_correct) / num_test
return accuracy
def run_knn_on_nfolds_with_k(X_train_folds,y_train_folds,num_folds,k):
acc_list = []
for ind_fold in range(num_folds):
X_test = X_train_folds[ind_fold]
y_test = y_train_folds[ind_fold]
X_train = np.vstack(X_train_folds[0:ind_fold]+X_train_folds[ind_fold+1:])
y_train = np.hstack(y_train_folds[0:ind_fold]+y_train_folds[ind_fold+1:])
#print "y_train_folds:", [data.shape for data in y_train_folds]
#print np.vstack(y_train_folds[0:ind_fold]+y_train_folds[ind_fold:]).shape
#print X_train.shape,y_train.shape,X_test.shape,y_test.shape
acc = run_knn(X_train,y_train,X_test,y_test,k)
acc_list.append(acc)
return acc_list
for k in k_choices:
print "run knn on {} folds data with k: {}".format(num_folds,k)
k_to_accuracies[k] = run_knn_on_nfolds_with_k(X_train_folds, y_train_folds, num_folds, k)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
print "the best mean from k {}".format(np.argmax(accuracies_mean))
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
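# Aside (sketch, not in the original assignment): the best k can also be taken
# directly from k_to_accuracies instead of being read off the plot above.
best_k_from_cv = max(k_to_accuracies, key=lambda k: np.mean(k_to_accuracies[k]))
print 'k with the highest mean cross-validation accuracy: %d' % best_k_from_cv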
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 4
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
6,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We show a CT scan and overlay the PET scan
Step1: Zoom
Zoom in by clicking the magnifying icon, or keep the alt/option key pressed. After zooming in, the higher resolution version of the cutout will be displayed.
Multivolume rendering
Since version 0.5, ipyvolume supports multivolume rendering, so we can render two volumetric datasets at the same time. | Python Code:
# imports used in this example
import numpy as np
import ipyvolume as ipv
from matplotlib import cm
full_scan = {k: v.swapaxes(0, 1)[::-1] for k,v in np.load('petct.npz').items()}
print(list(full_scan.keys()))
table_ct = cm.gray_r(np.linspace(0, 1, 255))
table_ct[:50, 3] = 0 # make the lower values transparent
table_ct[50:, 3] = np.linspace(0, 0.05, table_ct[50:].shape[0])
tf_ct = ipv.TransferFunction(rgba=table_ct)
ct_vol = ipv.quickvolshow(full_scan['ct_data'],
tf=tf_ct, lighting=False,
data_min=-1000, data_max=1000)
ct_vol
Explanation: We show a CT scan and overlay the PET scan
End of explanation
table_pet = cm.hot(np.linspace(0, 1, 255))
table_pet[:50, 3] = 0 # make the lower values transparent
table_pet[50:, 3] = np.linspace(0, 1, table_pet[50:].shape[0])
tf_pet = ipv.TransferFunction(rgba=table_pet)
pet_vol = ipv.volshow(full_scan['pet_data'],
tf=tf_pet,
data_min=0,
data_max=10)
pet_vol.rendering_method='MAX_INTENSITY'
table_lab = np.array([
[0,0,0,0],
[0,1,0,1]
])
tf_lab = ipv.TransferFunction(rgba=table_lab)
lab_vol = ipv.volshow(full_scan['label_data']>0,
tf=tf_lab,
data_min=0,
data_max=1)
Explanation: Zoom
Zoom in by clicking the magnifying icon, or keep the alt/option key pressed. After zooming in, the higher resolution version of the cutout will be displayed.
Multivolume rendering
Since version 0.5, ipyvolume supports multivolume rendering, so we can render two volumetric datasets at the same time.
End of explanation |
6,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remap MEG channel types
In this example, MEG data are remapped from one channel type to another.
This is useful to
Step1: First, let's remap gradiometers to magnetometers, and plot
the original and remapped topomaps of the magnetometers.
Step2: Now, we remap magnetometers to gradiometers, and plot
the original and remapped topomaps of the gradiometers | Python Code:
# Author: Mainak Jas <[email protected]>
# License: BSD-3-Clause
import mne
from mne.datasets import sample
print(__doc__)
# read the evoked
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fname = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
Explanation: Remap MEG channel types
In this example, MEG data are remapped from one channel type to another.
This is useful to:
- visualize combined magnetometers and gradiometers as magnetometers
or gradiometers.
- run statistics from both magnetometers and gradiometers while
working with a single type of channels.
End of explanation
# go from grad + mag to mag and plot original mag
virt_evoked = evoked.as_type('mag')
evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')
# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='mag', time_unit='s',
title='mag (interpolated from mag + grad)')
Explanation: First, let's remap gradiometers to magnetometers, and plot
the original and remapped topomaps of the magnetometers.
End of explanation
# go from grad + mag to grad and plot original grad
virt_evoked = evoked.as_type('grad')
evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')
# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='grad', time_unit='s',
title='grad (interpolated from mag + grad)')
Explanation: Now, we remap magnetometers to gradiometers, and plot
the original and remapped topomaps of the gradiometers
End of explanation |
6,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Create some Data
Step2: Visualize Data
Step3: Creating the Clusters | Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
K Means Clustering with Python
This notebook is just a code reference for the video lecture and reading.
Method Used
K Means Clustering is an unsupervised learning algorithm that tries to cluster data based on their similarity. Unsupervised learning means that there is no outcome to be predicted, and the algorithm just tries to find patterns in the data. In k means clustering, we have to specify the number of clusters we want the data to be grouped into. The algorithm randomly assigns each observation to a cluster, and finds the centroid of each cluster. Then, the algorithm iterates through two steps:
Reassign data points to the cluster whose centroid is closest. Calculate the new centroid of each cluster. These two steps are repeated until the within-cluster variation cannot be reduced any further. The within-cluster variation is calculated as the sum of the Euclidean distance between the data points and their respective cluster centroids.
Import Libraries
End of explanation
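# A minimal NumPy sketch of the two-step iteration described above (purely
# illustrative -- the notebook itself uses scikit-learn's KMeans below; the sketch
# assumes X is an (n_samples, n_features) array and ignores empty clusters).
import numpy as np

def lloyd_step(X, centroids):
    # step 1: assign every point to the cluster whose centroid is closest
    distances = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = distances.argmin(axis=1)
    # step 2: recompute each centroid as the mean of the points assigned to it
    new_centroids = np.array([X[labels == k].mean(axis=0)
                              for k in range(centroids.shape[0])])
    return labels, new_centroids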
from sklearn.datasets import make_blobs
# Create Data
data = make_blobs(n_samples=200, n_features=2,
centers=4, cluster_std=1.8,random_state=101)
Explanation: Create some Data
End of explanation
plt.scatter(data[0][:,0],data[0][:,1],c=data[1],cmap='rainbow')
Explanation: Visualize Data
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(data[0])
kmeans.cluster_centers_
kmeans.labels_
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True,figsize=(10,6))
ax1.set_title('K Means')
ax1.scatter(data[0][:,0],data[0][:,1],c=kmeans.labels_,cmap='rainbow')
ax2.set_title("Original")
ax2.scatter(data[0][:,0],data[0][:,1],c=data[1],cmap='rainbow')
Explanation: Creating the Clusters
End of explanation |
6,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Mandelbrot set
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Now you'll define a function to actually display the image once you have iteration counts.
Step4: Session and variable initialization
For playing around like this, an interactive session is often used, but a regular session would work as well.
Step5: It's handy that you can freely mix NumPy and TensorFlow.
Step6: Now you define and initialize TensorFlow tensors.
Step7: TensorFlow requires that you explicitly initialize variables before using them.
Step8: Defining and running the computation
Now you specify more of the computation...
Step9: ... and run it for a couple hundred steps
Step10: Let's see what you've got. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
# Import libraries for simulation
import tensorflow.compat.v1 as tf
import numpy as np
# Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
Explanation: Mandelbrot set
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/mandelbrot.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/mandelbrot.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
Visualizing the Mandelbrot set doesn't have anything to do with machine learning, but it makes for a fun example of how one can use TensorFlow for general mathematics. This is actually a pretty naive implementation of the visualization, but it makes the point. (We may end up providing a more elaborate implementation down the line to produce more truly beautiful images.)
Basic setup
You'll need a few imports to get started.
End of explanation
def DisplayFractal(a, fmt='jpeg'):
"""Display an array of iteration counts as a
colorful picture of a fractal."""
a_cyclic = (6.28*a/20.0).reshape(list(a.shape)+[1])
img = np.concatenate([10+20*np.cos(a_cyclic),
30+50*np.sin(a_cyclic),
155-80*np.cos(a_cyclic)], 2)
img[a==a.max()] = 0
a = img
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
Explanation: Now you'll define a function to actually display the image once you have iteration counts.
End of explanation
sess = tf.InteractiveSession()
Explanation: Session and variable initialization
For playing around like this, an interactive session is often used, but a regular session would work as well.
End of explanation
# Use NumPy to create a 2D array of complex numbers
Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005]
Z = X+1j*Y
Explanation: It's handy that you can freely mix NumPy and TensorFlow.
End of explanation
xs = tf.constant(Z.astype(np.complex64))
zs = tf.Variable(xs)
ns = tf.Variable(tf.zeros_like(xs, tf.float32))
Explanation: Now you define and initialize TensorFlow tensors.
End of explanation
tf.global_variables_initializer().run()
Explanation: TensorFlow requires that you explicitly initialize variables before using them.
End of explanation
# Compute the new values of z: z^2 + x
zs_ = zs*zs + xs
# Have we diverged with this new value?
not_diverged = tf.abs(zs_) < 4
# Operation to update the zs and the iteration count.
#
# Note: We keep computing zs after they diverge! This
# is very wasteful! There are better, if a little
# less simple, ways to do this.
#
step = tf.group(
zs.assign(zs_),
ns.assign_add(tf.cast(not_diverged, tf.float32))
)
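# Aside (hypothetical sketch, not used below): the note above says there are better
# ways that avoid updating already-diverged values. One option is to freeze them
# using the divergence mask, along the lines of:
# keep = tf.cast(tf.cast(not_diverged, tf.float32), tf.complex64)
# step_frozen = tf.group(
#     zs.assign(zs_ * keep + zs * (1 - keep)),
#     ns.assign_add(tf.cast(not_diverged, tf.float32)))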
Explanation: Defining and running the computation
Now you specify more of the computation...
End of explanation
for i in range(200): step.run()
Explanation: ... and run it for a couple hundred steps
End of explanation
DisplayFractal(ns.eval())
Explanation: Let's see what you've got.
End of explanation |
6,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
==============================================
Compute effect-matched-spatial filtering (EMS)
==============================================
This example computes the EMS to reconstruct the time course of the
experimental effect as described in [1]_.
This technique is used to create spatial filters based on the difference
between two conditions. By projecting the trial onto the corresponding spatial
filters, surrogate single trials are created in which multi-sensor activity is
reduced to one time series which exposes experimental effects, if present.
We will first plot a trials x times image of the single trials and order the
trials by condition. A second plot shows the average time series for each
condition. Finally a topographic plot is created which exhibits the temporal
evolution of the spatial filters.
References
.. [1] Aaron Schurger, Sebastien Marti, and Stanislas Dehaene, "Reducing
multi-sensor data to a single time course that reveals experimental
effects", BMC Neuroscience 2013, 14
Step1: Note that a similar transformation can be applied with compute_ems
However, this function replicates Schurger et al's original paper, and thus
applies the normalization outside a leave-one-out cross-validation, which we
recommend not to do. | Python Code:
# Author: Denis Engemann <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import EMS, compute_ems
from sklearn.model_selection import StratifiedKFold
print(__doc__)
data_path = sample.data_path()
# Preprocess the data
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_ids = {'AudL': 1, 'VisL': 3}
# Read data and create epochs
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(0.5, 45, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_ids, tmin=-0.2, tmax=0.5, picks=picks,
baseline=None, reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
epochs.drop_bad()
epochs.pick_types(meg='grad')
# Setup the data to use it a scikit-learn way:
X = epochs.get_data() # The MEG data
y = epochs.events[:, 2] # The conditions indices
n_epochs, n_channels, n_times = X.shape
# Initialize EMS transformer
ems = EMS()
# Initialize the variables of interest
X_transform = np.zeros((n_epochs, n_times)) # Data after EMS transformation
filters = list() # Spatial filters at each time point
# In the original paper, the cross-validation is a leave-one-out. However,
# we recommend using a Stratified KFold, because leave-one-out tends
# to overfit and cannot be used to estimate the variance of the
# prediction within a given fold.
for train, test in StratifiedKFold().split(X, y):
# In the original paper, the z-scoring is applied outside the CV.
# However, we recommend to apply this preprocessing inside the CV.
# Note that such scaling should be done separately for each channel type if the
# data contains multiple channel types.
X_scaled = X / np.std(X[train])
# Fit and store the spatial filters
ems.fit(X_scaled[train], y[train])
# Store filters for future plotting
filters.append(ems.filters_)
# Generate the transformed data
X_transform[test] = ems.transform(X_scaled[test])
# Average the spatial filters across folds
filters = np.mean(filters, axis=0)
# Plot individual trials
plt.figure()
plt.title('single trial surrogates')
plt.imshow(X_transform[y.argsort()], origin='lower', aspect='auto',
extent=[epochs.times[0], epochs.times[-1], 1, len(X_transform)],
cmap='RdBu_r')
plt.xlabel('Time (ms)')
plt.ylabel('Trials (reordered by condition)')
# Plot average response
plt.figure()
plt.title('Average EMS signal')
mappings = [(key, value) for key, value in event_ids.items()]
for key, value in mappings:
ems_ave = X_transform[y == value]
plt.plot(epochs.times, ems_ave.mean(0), label=key)
plt.xlabel('Time (ms)')
plt.ylabel('a.u.')
plt.legend(loc='best')
plt.show()
# Visualize spatial filters across time
evoked = EvokedArray(filters, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap()
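# Aside (hypothetical sketch of the per-channel-type note inside the CV loop above):
# with mixed channel types (this example keeps only gradiometers), the scaling could
# be done separately per type, along the lines of:
# for ch_type in ('mag', 'grad'):
#     picks_t = mne.pick_types(epochs.info, meg=ch_type)
#     X_scaled[:, picks_t] = X[:, picks_t] / np.std(X[train][:, picks_t])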
Explanation: ==============================================
Compute effect-matched-spatial filtering (EMS)
==============================================
This example computes the EMS to reconstruct the time course of the
experimental effect as described in [1]_.
This technique is used to create spatial filters based on the difference
between two conditions. By projecting the trial onto the corresponding spatial
filters, surrogate single trials are created in which multi-sensor activity is
reduced to one time series which exposes experimental effects, if present.
We will first plot a trials x times image of the single trials and order the
trials by condition. A second plot shows the average time series for each
condition. Finally a topographic plot is created which exhibits the temporal
evolution of the spatial filters.
References
.. [1] Aaron Schurger, Sebastien Marti, and Stanislas Dehaene, "Reducing
multi-sensor data to a single time course that reveals experimental
effects", BMC Neuroscience 2013, 14:122.
End of explanation
epochs.equalize_event_counts(event_ids)
X_transform, filters, classes = compute_ems(epochs)
Explanation: Note that a similar transformation can be applied with compute_ems
However, this function replicates Schurger et al's original paper, and thus
applies the normalization outside a leave-one-out cross-validation, which we
recommend not to do.
End of explanation |
6,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Loading MNIST data
This little helper function loads the MNIST data available here.
Step2: Definition of the layers
So let us define the layers for the convolutional net. In general, layers are assembled in a list. Each element of the list is a tuple -- first a Lasagne layer, next a dictionary containing the arguments of the layer. We will explain the layer definitions in a moment, but in general, you should look them up in the Lasagne documentation.
Nolearn allows you to skip Lasagne's incoming keyword, which specifies how the layers are connected. Instead, nolearn will automatically assume that layers are connected in the order they appear in the list.
Note
Step3: Definition of the neural network
Now we initialize nolearn's neural net itself. We will explain each argument shortly
Step4: Training the neural network
To train the net, we call its fit method with our X and y data, as we would with any scikit learn classifier.
Step5: As we set the verbosity to 1, nolearn will print some useful information for us
Step6: Visualizing the network architecture
First we may be interested in simply visualizing the architecture. When using an IPython/Jupyter notebook, this is achieved best by calling the draw_to_notebook function, passing the net as the first argument.
Step7: If we have accidentally made an error during the construction of the architecture, you should be able to spot it easily now.
Train and validation loss progress
With nolearn's visualization tools, it is possible to get some further insights into the working of the CNN. Below, we will simply plot the log loss of the training and validation data over each epoch
Step8: This kind of visualization can be helpful in determining whether we want to continue training or not. For instance, here we see that both loss functions are still decreasing and that more training will pay off. This graph can also help determine if we are overfitting
Step9: As can be seen above, in our case, the results are not too interesting. If the weights just look like noise, we might have to do something (e.g. use more filters so that each can specialize better).
Visualizing the layers' activities
To see through the "eyes" of the net, we can plot the activities produced by different layers. The plot_conv_activity function is made for that. The first argument, again, is a layer, the second argument an image in the bc01 format (which is why we use X[0
Step10: Here we can see that depending on the learned filters, the neural net represents the image in different ways, which is what we should expect. If, e.g., some images were completely black, that could indicate that the corresponding filters have not learned anything useful. When you find yourself in such a situation, training longer or initializing the weights differently might do the trick.
Plot occlusion images
A possibility to check if the net, for instance, overfits or learns important features is to occlude part of the image. Then we can check whether the net still makes correct predictions. The idea behind that is the following
Step11: Here we see which parts of the number are most important for correct classification. We see that the critical parts are all directly above the numbers, so this seems to work out. For more complex images with different objects in the scene, this function should be more useful, though.
Salience plot
Similarly to plotting the occlusion images, we may also backpropagate the error onto the image parts to see which ones matter to the net. The idea here is similar but the outcome differs, as a quick comparison shows. The advantage of using the gradient is that the computation is much quicker but the critical parts are more distributed across the image, making interpretation more difficult.
Step12: Finding a good architecture
This section tries to help you go deep with your convolutional neural net.
There is more than one way to go deep with CNNs. A possibility is to try a residual net architecture, which won several tasks of the 2015 imagenet competition. Here we will try instead a more "traditional" approach using blocks of convolutional layers separated by pooling layers. If we want to increase the number of convolutional layers, we cannot simply do so at will. It is important that the layers have a sufficiently high learning capacity while they should cover approximately 100% of the incoming image (Xudong Cao, 2015).
The usual approach is to try to go deep with convolutional layers. If you chain too many convolutional layers, though, the learning capacity of the layers falls too low. At this point, you have to add a max pooling layer. Use too many max pooling layers, and your image coverage grows larger than the image, which is clearly pointless. Striking the right balance while maximizing the depth of your layer is the final goal.
It is generally a good idea to use small filter sizes for your convolutional layers, generally <b>3x3</b>. The reason is that this allows you to cover the same receptive field of the image with fewer parameters than a larger filter size would require. Moreover, deeper stacks of convolutional layers are more expressive (see here for more).
Step13: A shallow net
Let us try out a simple architecture and see how we fare.
Step14: To see information about the capacity and coverage of each layer, we need to set the verbosity of the net to a value of 2 and then initialize the net. We next pass the initialized net to PrintLayerInfo to see some useful information. By the way, we could also just call the fit method of the net to get the same outcome, but since we don't want to fit just now, we proceed as shown below.
Step15: This net is fine. The capacity never falls below 1/6, which would be 16.7%, and the coverage of the image never exceeds 100%. However, with only 4 convolutional layers, this net is not very deep and will probably not achieve the best possible results.
What we also see is the role of max pooling. If we look at 'maxpool2d1', after this layer, the capacity of the net is increased. Max pooling thus helps to increase capacity should it dip too low. However, max pooling also significantly increases the coverage of the image. So if we use max pooling too often, the coverage will quickly exceed 100% and we cannot go sufficiently deep.
Too little maxpooling
Now let us try an architecture that uses a lot of convolutional layers but only one maxpooling layer.
Step16: Here we have a very deep net but we have a problem
Step17: This net uses too much maxpooling for too small an image. The later layers, colored in cyan, would cover more than 100% of the image. So this network is clearly also suboptimal.
A good compromise
Now let us have a look at a reasonably deep architecture that satisfies the criteria we set out to meet
Step18: With 10 convolutional layers, this network is rather deep, given the small image size. Yet the learning capacity is always sufficiently large and never is more than 100% of the image covered. This could just be a good solution. Maybe you would like to give this architecture a spin?
Note 1 | Python Code:
import os
import matplotlib.pyplot as plt
%pylab inline
import numpy as np
from lasagne.layers import DenseLayer
from lasagne.layers import InputLayer
from lasagne.layers import DropoutLayer
from lasagne.layers import Conv2DLayer
from lasagne.layers import MaxPool2DLayer
from lasagne.nonlinearities import softmax
from lasagne.updates import adam
from lasagne.layers import get_all_params
from nolearn.lasagne import NeuralNet
from nolearn.lasagne import TrainSplit
from nolearn.lasagne import objective
Explanation: Tutorial: Training convolutional neural networks with nolearn
Author: Benjamin Bossan
Note: This notebook was updated on April 4, 2016, to reflect recent changes in nolearn.
This tutorial's goal is to teach you how to use nolearn to train convolutional neural networks (CNNs). The nolearn documentation can be found here. We assume that you have some general knowledge about machine learning in general or neural nets specifically, but want to learn more about convolutional neural networks and nolearn.
We will cover several points in this notebook.
How to load image data such that we can use it for our purpose. For this tutorial, we will use the MNIST data set, which consists of images of the numbers from 0 to 9.
How to properly define layers of the net. A good choice of layers, i.e. a good network architecture, is most important to get nice results out of a neural net.
The definition of the neural network itself. Here we define important hyper-parameters.
Next we will see how visualizations may help us to further refine the network.
Finally, we will show you how nolearn can help us find better architectures for our neural network.
Imports
End of explanation
def load_mnist(path):
X = []
y = []
with open(path, 'rb') as f:
next(f) # skip header
for line in f:
yi, xi = line.split(',', 1)
y.append(yi)
X.append(xi.split(','))
# Theano works with fp32 precision
X = np.array(X).astype(np.float32)
y = np.array(y).astype(np.int32)
# apply some very simple normalization to the data
X -= X.mean()
X /= X.std()
# For convolutional layers, the default shape of data is bc01,
# i.e. batch size x color channels x image dimension 1 x image dimension 2.
# Therefore, we reshape the X data to -1, 1, 28, 28.
X = X.reshape(
-1, # number of samples, -1 makes it so that this number is determined automatically
1, # 1 color channel, since images are only black and white
28, # first image dimension (vertical)
28, # second image dimension (horizontal)
)
return X, y
# here you should enter the path to your MNIST data
path = os.path.join(os.path.expanduser('~'), 'data/mnist/train.csv')
X, y = load_mnist(path)
figs, axes = plt.subplots(4, 4, figsize=(6, 6))
for i in range(4):
for j in range(4):
axes[i, j].imshow(-X[i + 4 * j].reshape(28, 28), cmap='gray', interpolation='none')
axes[i, j].set_xticks([])
axes[i, j].set_yticks([])
axes[i, j].set_title("Label: {}".format(y[i + 4 * j]))
axes[i, j].axis('off')
Explanation: Loading MNIST data
This little helper function loads the MNIST data available here.
End of explanation
layers0 = [
# layer dealing with the input data
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
# first stage of our convolutional layers
(Conv2DLayer, {'num_filters': 96, 'filter_size': 5}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(MaxPool2DLayer, {'pool_size': 2}),
# second stage of our convolutional layers
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(MaxPool2DLayer, {'pool_size': 2}),
# two dense layers with dropout
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
# the output layer
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
Explanation: Definition of the layers
So let us define the layers for the convolutional net. In general, layers are assembled in a list. Each element of the list is a tuple -- first a Lasagne layer, next a dictionary containing the arguments of the layer. We will explain the layer definitions in a moment, but in general, you should look them up in the Lasagne documentation.
Nolearn allows you to skip Lasagne's incoming keyword, which specifies how the layers are connected. Instead, nolearn will automatically assume that layers are connected in the order they appear in the list.
Note: Of course you can manually set the incoming parameter if your neural net's layers are connected differently. To do so, you have to give the corresponding layer a name (e.g. 'name': 'my layer') and use that name as a reference ('incoming': 'my layer').
The layers we use are the following:
InputLayer: We have to specify the shape of the data. For image data, it is batch size x color channels x image dimension 1 x image dimension 2 (aka bc01). Here you should generally just leave the batch size as None, so that it is taken care off automatically. The other dimensions are given by X.
Conv2DLayer: The most important keywords are num_filters and filter_size. The former indicates the number of channels -- the more you choose, the more different filters can be learned by the CNN. Generally, the first convolutional layers will learn simple features, such as edges, while deeper layers can learn more abstract features. Therefore, you should increase the number of filters the deeper you go. The filter_size is the size of the filter/kernel. The current consensus is to always use 3x3 filters, as these allow to cover the same number of image pixels with fewer parameters than larger filters do.
MaxPool2DLayer: This layer performs max pooling and hopefully provides translation invariance. We need to indicate the region over which it pools, with 2x2 being the default of most users.
DenseLayer: This is your vanilla fully-connected layer; you should indicate the number of 'neurons' with the num_units argument. The very last layer is assumed to be the output layer. We thus set the number of units to be the number of classes, 10, and choose softmax as the output nonlinearity, as we are dealing with a classification task.
DropoutLayer: Dropout is a common technique to regularize neural networks. It is almost always a good idea to include dropout between your dense layers.
Apart from these arguments, the Lasagne layers have very reasonable defaults concerning weight initialization, nonlinearities (rectified linear units), etc.
End of explanation
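# Aside (sketch based on the note above about names and the 'incoming' keyword):
# layers can also be wired explicitly by reference instead of relying on the implicit
# chaining; this list is only for illustration and is not used anywhere below.
layers_named = [
    (InputLayer, {'name': 'input', 'shape': (None, 1, 28, 28)}),
    (Conv2DLayer, {'name': 'conv1', 'incoming': 'input',
                   'num_filters': 32, 'filter_size': 3}),
    (DenseLayer, {'name': 'output', 'incoming': 'conv1',
                  'num_units': 10, 'nonlinearity': softmax}),
]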
net0 = NeuralNet(
layers=layers0,
max_epochs=10,
update=adam,
update_learning_rate=0.0002,
objective_l2=0.0025,
train_split=TrainSplit(eval_size=0.25),
verbose=1,
)
Explanation: Definition of the neural network
Now we initialize nolearn's neural net itself. We will explain each argument shortly:
* The most important argument is the layers argument, which should be the list of layers defined above.
* max_epochs is simply the number of epochs the net learns with each call to fit (an 'epoch' is a full training cycle using all training data).
* As update, we choose adam, which for many problems is a good first choice as updateing rule.
* The objective of our net will be the regularization_objective we just defined.
* To change the magnitude of L2 regularization (see here), we set the objective_l2 parameter. The NeuralNetwork class will then automatically pass this value when calling the objective. Usually, moderate L2 regularization is applied, whereas L1 regularization is less frequently used.
* For 'adam', a small learning rate is best, so we set it with the update_learning_rate argument (nolearn will automatically interpret this argument to mean the learning_rate argument of the update parameter, i.e. adam in our case).
* The NeuralNet will hold out some of the training data for validation if we set the eval_size of the TrainSplit to a number greater than 0. This will allow us to monitor how well the net generalizes to yet unseen data. By setting this argument to 1/4, we tell the net to hold out 25% of the samples for validation.
* Finally, we set verbose to 1, which will result in the net giving us some useful information.
End of explanation
net0.fit(X, y)
Explanation: Training the neural network
To train the net, we call its fit method with our X and y data, as we would with any scikit learn classifier.
End of explanation
from nolearn.lasagne.visualize import draw_to_notebook
from nolearn.lasagne.visualize import plot_loss
from nolearn.lasagne.visualize import plot_conv_weights
from nolearn.lasagne.visualize import plot_conv_activity
from nolearn.lasagne.visualize import plot_occlusion
from nolearn.lasagne.visualize import plot_saliency
Explanation: As we set the verbosity to 1, nolearn will print some useful information for us:
First of all, some general information about the net and its layers is printed. Then, during training, the progress will be printed after each epoch.
The train loss is the loss/cost that the net tries to minimize. For this example, this is the log loss (cross entropy).
The valid loss is the loss for the hold out validation set. You should expect this value to indicate how well your model generalizes to yet unseen data.
train/val is simply the ratio of train loss to valid loss. If this value is very low, i.e. if the train loss is much better than your valid loss, it means that the net has probably overfitted the train data.
When we are dealing with a classification task, the accuracy score of the validation set, valid acc, is also printed.
dur is simply the duration it took to process the given epoch.
In addition to this, nolearn will color the as of yet best train and valid loss, so that it is easy to spot whether the net makes progress.
Visualizations
Diagnosing what's wrong with your neural network if the results are unsatisfying can sometimes be difficult, something closer to an art than a science. But with nolearn's visualization tools, we should be able to get some insights that help us diagnose if something is wrong.
End of explanation
draw_to_notebook(net0)
Explanation: Visualizing the network architecture
First we may be interested in simply visualizing the architecture. When using an IPython/Jupyter notebook, this is achieved best by calling the draw_to_notebook function, passing the net as the first argument.
End of explanation
plot_loss(net0)
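# Aside (sketch): the quantities plotted here are also stored on the fitted net and
# can be read programmatically, e.g. (key names as used by nolearn's plot_loss):
# train_losses = [row['train_loss'] for row in net0.train_history_]
# valid_losses = [row['valid_loss'] for row in net0.train_history_]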
Explanation: If we have accidentally made an error during the construction of the architecture, you should be able to spot it easily now.
Train and validation loss progress
With nolearn's visualization tools, it is possible to get some further insights into the working of the CNN. Below, we will simply plot the log loss of the training and validation data over each epoch:
End of explanation
plot_conv_weights(net0.layers_[1], figsize=(4, 4))
Explanation: This kind of visualization can be helpful in determining whether we want to continue training or not. For instance, here we see that both loss functions are still decreasing and that more training will pay off. This graph can also help determine if we are overfitting: If the train loss is much lower than the validation loss, we should probably do something to regularize the net.
Visualizing layer weights
We can further have a look at the weights learned by the net. The first argument of the function should be the layer we want to visualize. The layers can be accessed through the layers_ attribute and then by name (e.g. 'conv2dcc1') or by index, as below. (Obviously, visualizing the weights only makes sense for convolutional layers.)
End of explanation
x = X[0:1]
plot_conv_activity(net0.layers_[1], x)
Explanation: As can be seen above, in our case, the results are not too interesting. If the weights just look like noise, we might have to do something (e.g. use more filters so that each can specialize better).
Visualizing the layers' activities
To see through the "eyes" of the net, we can plot the activities produced by different layers. The plot_conv_activity function is made for that. The first argument, again, is a layer, the second argument an image in the bc01 format (which is why we use X[0:1] instead of just X[0]).
End of explanation
plot_occlusion(net0, X[:5], y[:5])
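# Aside (illustrative, not from the tutorial): the same idea can be tried by hand by
# zeroing out a single patch and comparing predicted probabilities; plot_occlusion
# above simply does this systematically over the image. The patch chosen is arbitrary:
# occluded = X[:1].copy()
# occluded[:, :, 10:18, 10:18] = 0
# probs_original = net0.predict_proba(X[:1])
# probs_occluded = net0.predict_proba(occluded)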
Explanation: Here we can see that depending on the learned filters, the neural net represents the image in different ways, which is what we should expect. If, e.g., some images were completely black, that could indicate that the corresponding filters have not learned anything useful. When you find yourself in such a situation, training longer or initializing the weights differently might do the trick.
Plot occlusion images
A possibility to check if the net, for instance, overfits or learns important features is to occlude part of the image. Then we can check whether the net still makes correct predictions. The idea behind that is the following: If the most critical part of an image is something like the head of a person, that is probably right. If it is instead a random part of the background, the net probably overfits (see here for more).
With the plot_occlusion function, we can check this. The approach is to occlude parts of the image and check how strongly this affects the power of our net to predict the correct label. The first argument to the function is the neural net, the second the X data, the third the y data. Be warned that this function can be quite slow for larger images.
End of explanation
plot_saliency(net0, X[:5]);
Explanation: Here we see which parts of the number are most important for correct classification. We see that the critical parts are all directly above the numbers, so this seems to work out. For more complex images with different objects in the scene, this function should be more useful, though.
Salience plot
Similarly to plotting the occlusion images, we may also backpropagate the error onto the image parts to see which ones matter to the net. The idea here is similar but the outcome differs, as a quick comparison shows. The advantage of using the gradient is that the computation is much quicker but the critical parts are more distributed across the image, making interpretation more difficult.
End of explanation
from nolearn.lasagne import PrintLayerInfo
Explanation: Finding a good architecture
This section tries to help you go deep with your convolutional neural net.
There is more than one way to go deep with CNNs. A possibility is to try a residual net architecture, which won several tasks of the 2015 imagenet competition. Here we will try instead a more "traditional" approach using blocks of convolutional layers separated by pooling layers. If we want to increase the number of convolutional layers, we cannot simply do so at will. It is important that the layers have a sufficiently high learning capacity while they should cover approximately 100% of the incoming image (Xudong Cao, 2015).
The usual approach is to try to go deep with convolutional layers. If you chain too many convolutional layers, though, the learning capacity of the layers falls too low. At this point, you have to add a max pooling layer. Use too many max pooling layers, and your image coverage grows larger than the image, which is clearly pointless. Striking the right balance while maximizing the depth of your layer is the final goal.
It is generally a good idea to use small filter sizes for your convolutional layers, generally <b>3x3</b>. The reason is that this allows you to cover the same receptive field of the image with fewer parameters than a larger filter size would require. Moreover, deeper stacks of convolutional layers are more expressive (see here for more).
End of explanation
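# Aside (rough sketch of the kind of bookkeeping behind the "coverage" numbers that
# PrintLayerInfo reports below; this is not nolearn's actual implementation and it
# assumes stride-1 convolutions and stride-2 pooling):
def receptive_field(layer_spec):
    rf, jump = 1, 1
    for kind, size in layer_spec:
        rf += (size - 1) * jump  # each layer widens the field by (size - 1) * jump
        if kind == 'pool':
            jump *= size         # pooling increases the step between output pixels
    return rf

# e.g. two 3x3 convs, one 2x2 pool and another 3x3 conv see a 10x10 input patch:
# receptive_field([('conv', 3), ('conv', 3), ('pool', 2), ('conv', 3)]) == 10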
layers1 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net1 = NeuralNet(
layers=layers1,
update_learning_rate=0.01,
verbose=2,
)
Explanation: A shallow net
Let us try out a simple architecture and see how we fare.
End of explanation
net1.initialize()
layer_info = PrintLayerInfo()
layer_info(net1)
Explanation: To see information about the capacity and coverage of each layer, we need to set the verbosity of the net to a value of 2 and then initialize the net. We next pass the initialized net to PrintLayerInfo to see some useful information. By the way, we could also just call the fit method of the net to get the same outcome, but since we don't want to fit just now, we proceed as shown below.
End of explanation
layers2 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net2 = NeuralNet(
layers=layers2,
update_learning_rate=0.01,
verbose=2,
)
net2.initialize()
layer_info(net2)
Explanation: This net is fine. The capacity never falls below 1/6, which would be 16.7%, and the coverage of the image never exceeds 100%. However, with only 4 convolutional layers, this net is not very deep and will probably not achieve the best possible results.
What we also see is the role of max pooling. If we look at 'maxpool2d1', after this layer, the capacity of the net is increased. Max pooling thus helps to increase capacity should it dip too low. However, max pooling also significantly increases the coverage of the image. So if we use max pooling too often, the coverage will quickly exceed 100% and we cannot go sufficiently deep.
Too little maxpooling
Now let us try an architecture that uses a lot of convolutional layers but only one maxpooling layer.
End of explanation
layers3 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net3 = NeuralNet(
layers=layers3,
update_learning_rate=0.01,
verbose=2,
)
net3.initialize()
layer_info(net3)
Explanation: Here we have a very deep net but we have a problem: The lack of max pooling layers means that the capacity of the net dips below 16.7%. The corresponding layers are shown in magenta. We need to find a better solution.
Too much maxpooling
Here is an architecture with too much maxpooling. For illustrative purposes, we set the pad parameter to 1; without it, the image size would shrink below 0, at which point the code will raise an error.
End of explanation
layers4 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net4 = NeuralNet(
layers=layers4,
update_learning_rate=0.01,
verbose=2,
)
net4.initialize()
layer_info(net4)
Explanation: This net uses too much maxpooling for too small an image. The later layers, colored in cyan, would cover more than 100% of the image. So this network is clearly also suboptimal.
A good compromise
Now let us have a look at a reasonably deep architecture that satisfies the criteria we set out to meet:
End of explanation
net4.verbose = 3
layer_info(net4)
Explanation: With 10 convolutional layers, this network is rather deep, given the small image size. Yet the learning capacity is always sufficiently large and never is more than 100% of the image covered. This could just be a good solution. Maybe you would like to give this architecture a spin?
Note 1: The MNIST images typically don't cover the whole of the 28x28 image size. Therefore, an image coverage of less than 100% is probably very acceptable. For other image data sets such as CIFAR or ImageNet, it is recommended to cover the whole image.
Note 2: This analysis does not tell us how many feature maps (i.e. number of filters per convolutional layer) to use. Here we have to experiment with different values. Larger values mean that the network should learn more types of features but also increase the risk of overfitting (and may exceed the available memory). In general though, deeper layers (those farther down) are supposed to learn more complex features and should thus have more feature maps.
Even more information
It is possible to get more information by increasing the verbosity level beyond 2.
End of explanation |
6,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scaling analysis of Nexa on Wall Street
Here I will present a scaling analysis of Nexa on Wall Street with regard to the number of clusters in the sensor space and the number of clusters in the data space.
Load the libraries
Step1: Load database, letters and main parameters
Step2: Prediction loop
Step3: Plot the result
Step4: Plot sample axis
Step5: Now with seaborn | Python Code:
import numpy as np
import h5py
from sklearn import svm, cross_validation
Explanation: Scaling analysis of Nexa on Wall Street
Here I will present a scaling analysis of Nexa on Wall Street with regard to the number of clusters in the sensor space and the number of clusters in the data space.
Load the libraries
End of explanation
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
Explanation: Load database, letters and main parameters
End of explanation
N = 5000 # Amount of data
delay = 5
Nembedding = 3
# Quantities to scale
time_clustering_collection = np.arange(5, 55, 5)
spatial_clustering_collection = np.arange(3, 11, 1)
N_time_clusters = time_clustering_collection.size
N_spatial_clusters = spatial_clustering_collection.size
accuracy = np.zeros((N_time_clusters, N_spatial_clusters))
for spatial_index, Nspatial_clusters in enumerate(spatial_clustering_collection):
for time_index, Ntime_clusters in enumerate(time_clustering_collection):
run_name = '/low-resolution'
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
# Now we load the time and the code vectors
code_vectors = nexa['code-vectors']
code_vectors_distance = nexa['code-vectors-distance']
code_vectors_softmax = nexa['code-vectors-softmax']
code_vectors_winner = nexa['code-vectors-winner']
# Make prediction with scikit-learn
X = code_vectors_winner[:(N - delay)]
y = letters_sequence[delay:N]
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf = svm.SVC(C=1.0, cache_size=200, kernel='linear')
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100.0
print(parameters_string)
print('SVM score', score)
accuracy[time_index, spatial_index] = score
Explanation: Prediction loop
End of explanation
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
colormap = 'Blues'
origin = 'lower'
interpolation = 'none'
gs = gridspec.GridSpec(2, 2)
fig = plt.figure(figsize=(12, 9))
ax = plt.subplot(gs[:, :])
im = ax.imshow(accuracy, origin=origin, interpolation=interpolation, aspect='auto',
extent=[0, Ntime_clusters, 0, Nspatial_clusters], vmin=0, vmax=100,
cmap=colormap)
fig.colorbar(im)
ax.set_xlabel('Data Clusters')
ax.set_ylabel('Spatio Temporal Clusters')
ax.set_title('Accuracy as a function of Nexa parameters')
Explanation: Plot the result
End of explanation
import seaborn as sns
value1 = 0
value2 = 3
value3 = 7
print(accuracy.shape)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(time_clustering_collection, accuracy[:, value1],'o-', lw=2, markersize=10, label='Nst_clusters='+ str(value1 + 3))
ax.plot(time_clustering_collection, accuracy[:, value2],'o-', lw=2, markersize=10, label='Nst_clusters='+ str(value2 + 3))
ax.plot(time_clustering_collection, accuracy[:, value3],'o-', lw=2, markersize=10, label='Nst_clusters='+ str(value3 + 3))
ax.set_xlim(-5, 55)
ax.set_ylim(0, 110)
ax.set_title('Sample curves from the matrix as a function of data clusters')
ax.legend()
value1 = 1
value2 = 3
value3 = 7
print(accuracy.shape)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(spatial_clustering_collection, accuracy[value1, :],'o-', lw=2, markersize=10, label='Ndata_clusters='+ str((value1*5)+5))
ax.plot(spatial_clustering_collection, accuracy[value2, :],'o-', lw=2, markersize=10, label='Ndata_clusters='+ str((value2*5)+5))
ax.plot(spatial_clustering_collection, accuracy[value3, :],'o-', lw=2, markersize=10, label='Ndata_clusters='+ str((value3*5)+5))
ax.set_xlim(0, 10)
ax.set_ylim(0, 110)
ax.set_title('Sample curves from the matrix as a function of number of spatio temporal clusters')
ax.legend()
Explanation: Plot sample axis
End of explanation
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
sns.set(rc={'image.cmap': 'inferno'})
%matplotlib inline
gs = gridspec.GridSpec(2, 2)
fig = plt.figure(figsize=(12, 9))
ax = plt.subplot(gs[:, :])
im = ax.imshow(accuracy, origin='lower', interpolation='none', aspect='auto',
extent=[0, Ntime_clusters, 0, Nspatial_clusters], vmin=0, vmax=100)
fig.colorbar(im)
ax.set_xlabel('Data Clusters')
ax.set_ylabel('Spatio Temporal Clusters')
ax.set_title('Accuracy as a function of Nexa parameters')
Explanation: Now with seaborn
End of explanation |
6,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Fit Imputer
Step3: Apply Imputer
Step4: View Data | Python Code:
import pandas as pd
import numpy as np
from sklearn.preprocessing import Imputer
Explanation: Title: Impute Missing Values With Means
Slug: impute_missing_values_with_means
Summary: Impute Missing Values With Means.
Date: 2016-11-28 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Mean imputation replaces missing values with the mean value of that feature/variable. Mean imputation is one of the most 'naive' imputation methods because unlike more complex methods like k-nearest neighbors imputation, it does not use the information we have about an observation to estimate a value for it.
Preliminaries
End of explanation
# Create an empty dataset
df = pd.DataFrame()
# Create two variables called x0 and x1. Make the first value of x1 a missing value
df['x0'] = [0.3051,0.4949,0.6974,0.3769,0.2231,0.341,0.4436,0.5897,0.6308,0.5]
df['x1'] = [np.nan,0.2654,0.2615,0.5846,0.4615,0.8308,0.4962,0.3269,0.5346,0.6731]
# View the dataset
df
Explanation: Create Data
End of explanation
# Create an imputer object that looks for 'Nan' values, then replaces them with the mean value of the feature by columns (axis=0)
mean_imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
# Train the imputor on the df dataset
mean_imputer = mean_imputer.fit(df)
Explanation: Fit Imputer
End of explanation
# Apply the imputer to the df dataset
imputed_df = mean_imputer.transform(df.values)
Explanation: Apply Imputer
End of explanation
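# Equivalent one-step sketch: fit_transform combines the fit and transform calls above.
# imputed_df = mean_imputer.fit_transform(df.values)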
# View the data
imputed_df
Explanation: View Data
End of explanation |
6,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
5. Imaging
Previous
Step1: Import section specific modules
Step2: 5.5 The Break Down of the Small Angle Approximation and the W-Term
Up to this point we used a resampling step and the Fast Fourier Transform to move between the image and visibility domains. Recall that we used the following simplified Fourier relationship to justify this synthesis process
Step3: Figure
Step4: Figure
Step5: Figure
Step6: Figure
Step7: Figure
Step8: Figure
Step9: Figure
Step10: Figure
Step11: Figure | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
5. Imaging
Previous: 5.4 Imaging weights
Next: 5.5 References and further reading
Import standard modules:
End of explanation
from IPython.display import Image
from mpl_toolkits.mplot3d import Axes3D
import track_simulator
from astropy.io import fits
import aplpy
#Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
NO_ANTENNA = 4
NO_BASELINES = NO_ANTENNA * (NO_ANTENNA - 1) / 2 + NO_ANTENNA
CENTRE_CHANNEL = 1e9 / 299792458 #Wavelength of 1 GHz
#Create a perfectly planar array with both a perfectly East-West baseline and 2 2D baselines
ENU_2d = np.array([[5,0,0],
[-5,0,0],
[10,0,0],
[0,23,0]]);
ENU_ew = np.array([[5,0,0],
[-5,0,0],
[10,0,0],
[0,0,0]]);
ARRAY_LATITUDE = 30 #Equator->North
ARRAY_LONGITUDE = 0 #Greenwitch->East, prime -> local meridian
fig = plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121)
ax.set_title("2D Array")
ax.plot(ENU_2d[:,0],ENU_2d[:,1],"ko")
ax.set_xlabel("East")
ax.set_ylabel("North")
ax.set_xlim(-30,30)
ax.set_ylim(-30,30)
ax=fig.add_subplot(122)
ax.set_title("East-west array")
ax.plot(ENU_ew[:,0],ENU_ew[:,1],"ko")
ax.set_xlabel("East")
ax.set_ylabel("North")
ax.set_xlim(-30,30)
ax.set_ylim(-30,30)
plt.show()
Explanation: 5.5 The Break Down of the Small Angle Approximation and the W-Term
Up to this point we used a resampling step and the Fast Fourier Transform to move between the image and visibility domains. Recall that we used the following simplified Fourier relationship to justify this synthesis process:
\begin{equation}
\begin{split}
V(u,v) &= \int_\text{sky}{I(l,m)e^{-2\pi i/\lambda(\vec{b}\cdot(\vec{s}-\vec{s}_0))}}dS\
&= \int\int{I(l,m)e^{-2\pi i/\lambda(ul+vm+w(\sqrt{1-l^2-m^2}-1))}}\frac{dldm}{\sqrt{1-l^2-m^2}}\
&\approx\int\int{I(l,m)e^{-2\pi i/\lambda(ul+vm)}}dldm\
\end{split}
\end{equation}
The last approximation to the model is just a Fourier transform by definition and is the one used when we were imaging up to this point. However, the more accurate version that relates the measurement to the brightness distribution along the celestial sphere is not the classical Fourier transform. The approximation is only valid when $n - 1 = \sqrt{1-l^2-m^2} - 1 \ll 1$ (ie. images of small regions of the sky) and/or $w \approx 0$ (the array is coplanar). Here $(n-1)$ is the projection height difference between the planar approximation tangent to the celestial sphere and a source's true position on the sphere, see the illustration below.
<img src="figures/orthogonal_projection_difference.png" alt="Smiley face" width="512">
Figure: The direction cosines (here $l$ is plotted against $n$) lie along the unit celestial sphere. $n$ is given by $n=\sqrt{1-l^2-m^2}$. If the projection pole (tangent point of the image) is at the same point as the phase reference centre, $n_0 = 1$. The total error between the orthogonal (SIN) projection of the source onto the tangent image plane and the source position on the celestial sphere is given as $\epsilon=(n-n_0)=(\sqrt{1-l^2-m^2} - 1)$.
Under the assumptions of a narrow field of view and coplanar measurements it is valid to use the FFT to construct a planar approximation to the sky. This section discusses the problem of wide-field imaging using non-coplanar baselines that arises when these assumptions are broken.
5.5.1 Coplanar Sampling
Consider the following two hypothetical arrays: a perfectly flat array that only has baselines along the east-west direction, and a second perfectly flat two-dimensional array with some baselines in a non-east-west direction.
End of explanation
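# Quick numeric check of the projection error eps = sqrt(1 - l^2 - m^2) - 1 discussed
# above (illustrative values only: a source offset of roughly 1 degree from the phase centre).
l_off = np.sin(np.deg2rad(1.0))
m_off = 0.0
eps = np.sqrt(1.0 - l_off**2 - m_off**2) - 1.0
print(eps)  # small, but it gets multiplied by w (in wavelengths) in the phase term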
DECLINATION = 0
T_OBS = 12
T_INT = 1/60.0
uw_2hr_2d = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_2d,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
uv_2hr_ew = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_ew,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
fig = plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121)
ax.set_title("2D Array")
ax.plot(uw_2hr_2d[:,0],uw_2hr_2d[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
ax=fig.add_subplot(122)
ax.set_title("East-west Array")
ax.plot(uv_2hr_ew[:,0],uv_2hr_ew[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
plt.show()
Explanation: Figure: ENU coordinates for two hypothetical flat arrays: a 2D array and an east-west array
The two-dimensional interferometer has two major advantages over its one-dimensional east-west counterpart:
1. Improved u,v coverage at lower declinations, as plotted below.
2. Recall that the interferometer response is maximum when the phase-reference centre is orthogonal to the baseline. At lower observation angles it is desirable to have baseline components that are not aligned from east-to-west.
End of explanation
DECLINATION = 45
T_INT = 1/60.0
T_OBS = 12
uvw_2d = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_2d,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
uvw_ew = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_ew,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
fig=plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121,projection='3d')
ax.set_title("2D Array")
ax.view_init(elev=10, azim=160)
ax.plot(uvw_2d[:,0],uvw_2d[:,1],uvw_2d[:,2],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_zlabel("w")
ax=fig.add_subplot(122,projection='3d')
ax.set_title("East-west array")
ax.view_init(elev=10, azim=160)
ax.plot(uvw_ew[:,0],uvw_ew[:,1],uvw_ew[:,2],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_zlabel("w")
plt.show()
fig = plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121)
ax.set_title("2D Array")
ax.plot(uvw_2d[:,0],uvw_2d[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax=fig.add_subplot(122)
ax.set_title("East-west array")
ax.plot(uvw_ew[:,0],uvw_ew[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
plt.show()
Explanation: Figure: u,v coverage at declination $\delta=0$ for both a 2D and east-west array
The one drawback to using these two-dimensional layouts is that the measurements taken over the duration of the observation do not remain coplanar, even though the array layout is perfectly flat. The uvw tracks and their projections are plotted in 3-space below to illustrate this. This is opposed to the tracks created by the east-west interferometer which all remain in the same plane parallel to the Earth's equator. Alas, if an observation is sufficiently short, called a snapshot observation, then the rotation of the Earth is short enough to approximate the measurements of a two-dimensional interferometer as coplanar.
End of explanation
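# Sanity-check sketch of the statement above: for the east-west array the simulated
# w values should be a fixed linear combination of u and v (i.e. the tracks stay coplanar).
A = np.vstack([uvw_ew[:, 0], uvw_ew[:, 1]]).T
alpha_beta = np.linalg.lstsq(A, uvw_ew[:, 2])[0]
print(alpha_beta, np.max(np.abs(uvw_ew[:, 2] - A.dot(alpha_beta))))  # residual should be ~0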
Image(filename="figures/tilted_interferometer.png")
Explanation: Figure: $u,v,w$ tracks and their projections onto ($u,v,w=0$) for a 2D and east-west interferometer
When the measurement domain is sampled along a single plane, as is true for the east-west interferometer, then all $w$ can be written as the same linear combination of u and v: $w = \alpha{u}+\beta{v}$. Although this introduces a slight distortion of the u,v coordinates in the Fourier relationship between the sky and the measurements, the distorted relationship remains a valid two-dimensional Fourier transform. It can be stated as:
\begin{equation}
\begin{split}
V(u,v,w) &= \int\int{I(l,m)e^{-2\pi i/\lambda(ul' + vm')}\frac{dldm}{\sqrt{1-l^2-m^2}}}\
l' &= l + \alpha(\sqrt{1-l^2-m^2} - 1)\
m' &= m + \beta(\sqrt{1-l^2-m^2} - 1)\
\end{split}
\end{equation}
5.5.2 Non-coplanar Sampling
The same can not be said for two-dimensional arrays. There is no fixed relationship between $w$ and $u,v$. Instead the relationship depends both on the time-variant zenithal and parallactic angles, and the $u,v$ coverage only remains co-planar for instantaneous observations, provided the array layout is approximately flat.
Neglecting the $w(n-1)$ term by synthesizing wide-field images with two-dimensional arrays, using a planar approximation, introduces a direction-dependent error in the measurement. This phase error depends on the height-difference between antennae, as is illustrated by the tilted interferometer below.
End of explanation
Image(filename="figures/vla_uncorrected.png")
Explanation: Figure: As the two figures show the projection of the source vector onto the two baselines are different for the coplanar and tilted interferometers. The phase for signals taken by co-planar interferometer baselines along some line of sight, $\vec{s}$, is given as $\phi = \frac{-2\pi i}{\lambda}(ul + vm)$, as opposed to tilted baselines that measure this same phase as $\phi_\text{tilt} = \frac{-2\pi i}{\lambda}{[ul + vm + w(n-1)]}$. The signal propagation delay is worse on the longest baselines and along the direction of sources far away from the phase centre of the interferometer.
It is important to realize that this phase term is purely geometric in origin; inserting a delay to correct $\Delta{w}$ for non-coplanar measurements only serves to correct the error in a single line of sight. In other words only the phase centre of the interferometer is changed by such a correction.
The small angle approximation $\sqrt{1+x} \approx 1+ \frac{x}{2}$ gives some intuition on how this phase error effects the brightness of sources away from the phase centre. It can be shown that:
\begin{equation}
V(u,v,w) \approx {\int\int{I(l,m)(e^{2\pi i/\lambda wl^2/2}e^{2\pi i /\lambda wm^2/2})e^{-2\pi i /\lambda(ul+vm)}\frac{dldm}{n}}}
\end{equation}
Since $w$ can be rewritten as a complex relationship of $u,v$ and time-variant elevation and azimuth angles we expect to see a time- and baseline-variant shift in source position. This relative position shift also grows roughly quadratically with the source offsets in l and m. The images below show how sources are smeared over large areas during long observations first on the data captured with the JVLA and then on a simulated MeerKAT observation.
End of explanation
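# Rough scale of the w(n-1) phase error described above, with w expressed in wavelengths
# (illustrative numbers: a 1 degree offset from the phase centre and w = 1000 wavelengths).
l_off, m_off = np.sin(np.deg2rad(1.0)), 0.0
w_lambda = 1000.0
phase_err = 2 * np.pi * w_lambda * (np.sqrt(1.0 - l_off**2 - m_off**2) - 1.0)
print(phase_err)  # about -0.96 rad, i.e. far from negligible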
Image(filename="figures/vla_wproj.png")
Explanation: Figure: Uncorrected image of the 8 hour observation of the supernova reminant G55.7+3.4 on the JVLA in D-configuration. Notice the eliptical smearing around the point source.
End of explanation
gc1 = aplpy.FITSFigure('../data/fits/wterm/MeerKAT_6h60s_dec-30_10MHz_10chans_uniform_n3000_w0-image.fits')
cpx = gc1.pixel2world(256, 256)
gc1.recenter(cpx[0], cpx[1], radius=0.2)
gc1.show_colorscale(vmin=-0.2, vmax=1., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('MeerKAT Observation (Not Corrected)')
gc1.add_colorbar()
Explanation: Figure: W-projection image of the 8 hour observation of the supernova remnant G55.7+3.4 on the JVLA in D-configuration.
End of explanation
gc1 = aplpy.FITSFigure('../data/fits/wterm/MeerKAT_6h60s_dec-30_10MHz_10chans_uniform_n3000-image.fits')
cpx = gc1.pixel2world(256, 256)
gc1.recenter(cpx[0], cpx[1], radius=0.2)
gc1.show_colorscale(vmin=-0.2, vmax=1., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('MeerKAT Observation (W-Corrected)')
gc1.add_colorbar()
Explanation: Figure: A quadrant of an image, not w-projection corrected, from a MeerKAT simulated observation, the phase centre is at the top right corner. Sources further from the phase centre have a larger amount of smearing compared to closer in.
End of explanation
Image(filename="figures/coplanar-faceting.png")
Explanation: Figure: A quadrant of an image, w-projection corrected, from a MeerKAT simulated observation, the phase centre is at the top right corner. Sources far from the phase centre remain point-like when the correction is accounted for as compared to the un-corrected image above.
5.5.3 Correcting Non-coplanar Baseline Effects
There are various ways the delay error introduced when discarding the $w(n-1)$ term during resampling and 2D Fast Fourier Transform can be corrected for, these include:
Full 3D transform: Similar to the 2D Direct Fourier Transform the Fourier transform can be computed for every element in a cube of $l,m,n$ values. The sky lies along the unit sphere within this cube. See Perley's discussion in <cite data-cite='taylor1999synthesis'>Synthesis Imaging in Radio Astronomy II</cite> ⤴ for a full derivation of this usually computationally and memory prohibitive technique.
Snapshot imaging: As alluded to earlier the visibility measurements taken during very short observations are co-planar, assuming the physical array lies on a flat plane. During each observation the $l,m$ coordnates are slightly distorted and the images have to be interpolated to the same coordinates before the images can be averaged into a single map of the sky.
Facet imaging: In facet imaging the goal is to drive the $(n-1)$ factor down to 0; satisfying the narrow-field assumption that makes the 2D Fourier inversion valid. There are a few ways in which the sky can be split into smaller images, but the classical faceting approach is to tile the celestial sphere with many small tangent images, approximating the sky by a polyhedron.
The algorithm behind tangent (polyhedron) facet imaging is simple to implement. First the sky is recentred at the image centres $l_i,m_i$ of each of the narrow-field facets, by phase rotating the measured visibilities. Each of the facet-images is then rotated to be tangent to the sky sphere. As the Fourier transform preserves rotations, the facets can be tilted by rotating the u,v coordinates of the measurements to the tracks that would have been produced if the interferometer was pointing at $\alpha_i,\delta_i$, instead of the original phase tracking centre. Let $(l_\Delta,m_\Delta,n_\Delta) = (l_i-l_0,m_i-m_0,n_i-n_0)$, then:
\begin{equation}
\begin{split}
V(u,v,w)&\approx\int{\int{B(l-l_i,m-m_i,n-n_i)e^{-2{\pi}i[u(l-l_i)+v(m-m_i)+w(n-n_i)]}\frac{dldm}{n}}}\
&\approx\int{\int{B(l-l_i,m-m_i,n-n_i)e^{-2{\pi}i[u(l-l_0-l_\Delta)+v(m-m_0-m_\Delta)+w(n-n_0-n_\Delta)]}\frac{dldm}{n}}}\
&\approx\left[\int{\int{B(l-l_0,m-m_0,n-n_0)e^{-2{\pi}i[u(l-l_0)+v(m-m_0)+w(n-n_0)]}\frac{dldm}{n}}}\right]e^{2{\pi}i[ul_\Delta + vm_\Delta + wn_\Delta]}\
\end{split}
\end{equation}
Note that if only the phase rotation is performed without rotating the facet geometry the effective field of view of individual facets that are far away from the phase centre will decrease. In order to keep the projection error at the edge of all the facets comparable this means that the facets closer to the edge of the field must be significantly smaller, increasing the computational demands of such an approach. Instead if the facets are rotated to form a polyhedron around the celestial sphere the facets can all be the same size. A simple visual proof of this is given by the following two cartoons:
End of explanation
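# Sketch of the facet phase-rotation factor from the equation above, evaluated on the
# simulated uvw_2d tracks for an illustrative facet offset (l_d and m_d are assumed values).
l_d, m_d = 0.01, 0.0
n_d = np.sqrt(1.0 - l_d**2 - m_d**2) - 1.0
facet_phase = np.exp(2.0j * np.pi * (uvw_2d[:, 0] * l_d + uvw_2d[:, 1] * m_d + uvw_2d[:, 2] * n_d))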
Image(filename="figures/non-coplanar-faceting.png")
Explanation: Figure: Only phase steering the visibilities to new phase centres without tilting the u,v,w coordinates to
correspond to the new phase tracking centre significantly reduces the achievable field of view. Here instead
each facet is parallel to the original tangent plane. As the new centre is taken further away from the original
phase tracking centre the effective facet size must be shrunk down to achieve a comparable projection error
at the edge of the synthesized facets.
End of explanation |
6,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gini coefficient
Gini coefficient is a measure of statistical dispersion. For the Kaggle competition, the normalized Gini coefficient is used as a measure of comparing how much the ordering of the model prediction matches the actual output. The magnitudes of the prediction do matter, but not in the same way they do in regular regressions.
The Gini coefficient is calculated as follows. As an example, let's say we have the following target values and model output (predictions)
Step1: In the above example, the prediction output is not perfect, because we have the 4 and 8 switched. Regardless, we first sort the output from the largest to the smallest and calculate the sort ordering
Step2: Next, we sort the target values using this sorting order. Since the predicted order was incorrect, the target values are not going to be sorted by largest to the smallest.
Step3: Then we look at the cumulative sum, and divide by the total sum to get the proportion of the cumulative sum.
Step4: Let's plot cumsum_target_ratio
Step5: cumsum_target_ratio was plotted in green, whereas The line for $y = x$ was also plotted in blue. This line represents the random model prediction. If we had a large array of numbers, sorted it randomly and looked at the cumulative sum from the left, we would expect the cumulative sum to be about 10% of the total when we look at the number 10% from the left. In general, we would expect $x$ % of the cumulative sum total for the array element that is $x$ % from the left of the array.
Finally, the Gini coefficient is determined to be the "area" between the green and the blue lines
Step6: For convenience, we collect the above in a function.
Step7: Note that we can also calculate the Gini coefficient of two same vectors. In this case, it returns the maximum value that can be achievable by any sorting of the same set of numbers
Step8: Finally, the normalized Gini coefficient is defined as the ratio of Gini coefficient between the target and the prediction with respect to the maximum value achievable from the target values themselves
Step9: The normalized Gini coefficient has the maximum of 1, when the ordering is correct.
Step10: The model prediction is considered better the closer it is to 1. It appears that this number can become negative, though, if the prediction is very bad (the opposite ordering, for example)
Step11: This measure is insensitve to the magnitudes
Step12: However, because we are sorting from the largest to the smallest number (and looking at the ratio of the largest to the total), it is more important to predict the samples with large numbers.
To wit, here're two sets of predictions | Python Code:
from numpy import *              # the notebook relies on a pylab-style namespace
import matplotlib.pyplot as plt  # (array, argsort, cumsum, linspace, r_, c_, plt, ...)
target=array([1,4,8,5])
output=array([1,8,4,5])
Explanation: Gini coefficient
Gini coefficient is a measure of statistical dispersion. For the Kaggle competition, the normalized Gini coefficient is used as a measure of comparing how much the ordering of the model prediction matches the actual output. The magnitudes of the prediction do matter, but not in the same way they do in regular regressions.
The Gini coefficient is calculated as follows. As an example, let's say we have the following target values and model output (predictions):
End of explanation
sort_index=argsort(-output) # Because we want to sort from largest to smallest
print(sort_index)
Explanation: In the above example, the prediction output is not perfect, because we have the 4 and 8 switched. Regardless, we first sort the output from the largest to the smallest and calculate the sort ordering:
End of explanation
sorted_target=target[sort_index]
print(sorted_target)
Explanation: Next, we sort the target values using this sorting order. Since the predicted order was incorrect, the target values are not going to be sorted by largest to the smallest.
End of explanation
cumsum_target=cumsum(sorted_target)
print(cumsum_target)
cumsum_target_ratio=cumsum_target / asarray(target.sum(), dtype=float) # Convert to float type
print(cumsum_target_ratio)
Explanation: Then we look at the cumulative sum, and divide by the total sum to get the proportion of the cumulative sum.
End of explanation
xs=linspace(0, 1, len(cumsum_target_ratio) + 1)
plt.plot(xs, c_[xs, r_[0, cumsum_target_ratio]])
plt.gca().set_aspect('equal')
plt.gca().set_xlabel(r'% from left of array')
plt.gca().set_ylabel(r'% cumsum')
Explanation: Let's plot cumsum_target_ratio:
End of explanation
gini_coeff=(r_[0, cumsum_target_ratio] - xs).sum()
print(gini_coeff)
Explanation: cumsum_target_ratio was plotted in green, while the line for $y = x$ was plotted in blue. This line represents the random model prediction. If we had a large array of numbers, sorted it randomly and looked at the cumulative sum from the left, we would expect the cumulative sum to be about 10% of the total when we look at the number 10% from the left. In general, we would expect $x$ % of the cumulative sum total for the array element that is $x$ % from the left of the array.
Finally, the Gini coefficient is determined to be the "area" between the green and the blue lines: green values minus the blue line values. (This can also be negative in some places, as we see above; hence the quotation marks.)
End of explanation
def gini_coeff(target, output):
sort_index=argsort(-output) # Because we want to sort from largest to smallest
sorted_target=target[sort_index]
cumsum_target=cumsum(sorted_target)
cumsum_target_ratio=cumsum_target / asarray(target.sum(), dtype=float) # Convert to float type
xs = linspace(0, 1, len(cumsum_target_ratio) + 1)
return (r_[0, cumsum_target_ratio] - xs).sum()
print(gini_coeff(target, output))
Explanation: For convenience, we collect the above in a function.
End of explanation
print(gini_coeff(target, target))
Explanation: Note that we can also calculate the Gini coefficient of two same vectors. In this case, it returns the maximum value that can be achievable by any sorting of the same set of numbers:
End of explanation
def normalized_gini(target, output):
return gini_coeff(target, output) / gini_coeff(target, target)
print(normalized_gini(target, output))
Explanation: Finally, the normalized Gini coefficient is defined as the ratio of Gini coefficient between the target and the prediction with respect to the maximum value achievable from the target values themselves:
End of explanation
print(normalized_gini(target, target))
Explanation: The normalized Gini coefficient has the maximum of 1, when the ordering is correct.
End of explanation
target=array([1,4,8,5])
output2=array([5,8,4,1])
print(normalized_gini(target, output2))
Explanation: The model prediction is considered better the closer it is to 1. It appears that this number can become negative, though, if the prediction is very bad (the opposite ordering, for example):
End of explanation
target=array([1,4,8,5])
output3=array([10,80,40,50])
output4=array([0,3,1,2])
print(normalized_gini(target, output3))
print(normalized_gini(target, output4))
Explanation: This measure is insensitive to the magnitudes:
End of explanation
target_large=array([1,2,1,2,1,2,1,2,1,2,9])
output_small=array([2,1,2,1,2,1,2,1,2,1,8]) # All 1, 2 s are wrong, but got the largest number right
output_large=array([1,2,1,2,1,2,1,2,1,6,2]) # Got most 1, 2 s right, but missed the largest number
print('output_small RMSE: %f' % sqrt((target_large-output_small)**2).mean())
print('output_large RMSE: %f' % sqrt((target_large-output_large)**2).mean())
print('output_small normalized Gini: %f' % normalized_gini(target_large, output_small))
print('output_large normalized Gini: %f' % normalized_gini(target_large, output_large))
Explanation: However, because we are sorting from the largest to the smallest number (and looking at the ratio of the largest to the total), it is more important to predict the samples with large numbers.
To wit, here're two sets of predictions:
End of explanation |
6,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's see if we can get Aaron's delay network to recognize two different patterns. First, let's create the patterns. For this simple test, we'll just use a 1Hz sine wave and a 0.5Hz sine wave for the two patterns.
Step1: Now let's create a network that represents a rolling window in time (Aaron's "delay network"). The process determines what sort of pattern the network will be optimized for -- here we just go with white noise of a maximum of 3Hz. theta determines how big the rolling window is -- here we use 0.5 seconds.
Step2: Now we need to create the training data for decoding out of the rolling window. Our patterns are larger than the rolling window, so to create our training data we will take our patterns, shift them, and cut them down to the right size. In order to then give that to nengo, we also need to project from the window's space to the internal representation space (using the inv_basis).
The target array is the desired output value for each of the slices of the pattern in eval_points. We'll use 1 for pattern1 and -1 for pattern2.
Step3: Now we can create a connection optimized to do this decoding
Step4: Let's try feeding in those two patterns and see what the response is
Step5: It successfully detects the two frequencies, outputting 1 for the 1Hz pattern (pattern1) and -1 for the 0.5Hz (pattern2)!
Note that it has never observed a transition between frequencies, so it's somewhat reasonable that it's confused at the transitions. This could be fixed by adding more training data that includes such transitions.
Now let's try intermediate frequencies that it has never seen before | Python Code:
# (this excerpt assumes numpy as np, matplotlib.pyplot as plt, nengo and nengolib are already imported)
s_pattern = 2000 # number of data points in the pattern
t = np.arange(s_pattern)*0.001 # time points for the elements in the pattern
pattern1 = np.sin(t*np.pi*2)
pattern2 = np.sin(0.5*t*np.pi*2)
plt.plot(t, pattern1, label='pattern1')
plt.plot(t, pattern2, label='pattern2')
plt.legend(loc='best')
plt.show()
Explanation: Let's see if we can get Aaron's delay network to recognize two different patterns. First, let's create the patterns. For this simple test, we'll just use a 1Hz sine wave and a 0.5Hz sine wave for the two patterns.
End of explanation
net = nengo.Network()
with net:
process = nengo.processes.WhiteSignal(period=100., high=3., y0=0)
rw = nengolib.networks.RollingWindow(theta=0.5, n_neurons=3000, process=process, neuron_type=nengo.LIFRate())
Explanation: Now let's create a network that represents a rolling window in time (Aaron's "delay network"). The process determines what sort of pattern the network will be optimized for -- here we just go with white noise of a maximum of 3Hz. theta determines how big the rolling window is -- here we use 0.5 seconds.
End of explanation
s_window = 500
t_window = np.linspace(0, 1, s_window)
inv_basis = rw.inverse_basis(t_window)
eval_points = []
target = []
for i in range(s_pattern):
eval_points.append(np.dot(inv_basis, np.roll(pattern1, i)[:s_window]))
target.append([1])
eval_points.append(np.dot(inv_basis, np.roll(pattern2, i)[:s_window]))
target.append([-1])
Explanation: Now we need to create the training data for decoding out of the rolling window. Our patterns are larger than the rolling window, so to create our training data we will take our patterns, shift them, and cut them down to the right size. In order to then give that to nengo, we also need to project from the window's space to the internal representation space (using the inv_basis).
The target array is the desired output value for each of the slices of the pattern in eval_points. We'll use 1 for pattern1 and -1 for pattern2.
End of explanation
with net:
result = nengo.Node(None, size_in=1)
nengo.Connection(rw.state, result,
eval_points=eval_points, scale_eval_points=False,
function=target, synapse=0.1)
Explanation: Now we can create a connection optimized to do this decoding
End of explanation
model = nengo.Network()
model.networks.append(net)
with model:
freqs = [1, 0.5]
def stim_func(t):
freq = freqs[int(t/5) % len(freqs)]
return np.sin(t*2*np.pi*freq)
stim = nengo.Node(stim_func)
nengo.Connection(stim, rw.input, synapse=None)
p_result = nengo.Probe(result)
p_stim = nengo.Probe(stim)
sim = nengo.Simulator(model)
sim.run(10)
plt.plot(sim.trange(), sim.data[p_stim], label='input')
plt.plot(sim.trange(), sim.data[p_result], label='output')
plt.legend(loc='best')
Explanation: Let's try feeding in those two patterns and see what the response is
End of explanation
model = nengo.Network()
model.networks.append(net)
with model:
freqs = [1, 0.5, 0.75, 0.875, 0.625]
def stim_func(t):
freq = freqs[int(t/5) % len(freqs)]
return np.sin(t*2*np.pi*freq)
stim = nengo.Node(stim_func)
nengo.Connection(stim, rw.input, synapse=None)
p_result = nengo.Probe(result)
p_stim = nengo.Probe(stim)
sim = nengo.Simulator(model)
sim.run(25)
plt.plot(sim.trange(), sim.data[p_stim], label='input')
plt.plot(sim.trange(), sim.data[p_result], label='output')
plt.legend(loc='best')
Explanation: It successfully detects the two frequencies, outputting 1 for the 1Hz pattern (pattern1) and -1 for the 0.5Hz (pattern2)!
Note that it has never observed a transition between frequencies, so it's somewhat reasonable that it's confused at the transitions. This could be fixed by adding more training data that includes such transitions.
Now let's try intermediate frequencies that it has never seen before:
End of explanation |
6,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
simulating amplicon fragments for genomes in non-singleton OTUs
Setting variables
Step1: Init
Step2: gradient params
Step3: Get GC distribution info
Step4: Combining info table with OTU tabel | Python Code:
import os
workDir = '/var/seq_data/ncbi_db/genome/Jan2016/ampFragsGC/'
ampFragFile = '/var/seq_data/ncbi_db/genome/Jan2016/ampFrags_KDE.pkl'
otuFile = '/var/seq_data/ncbi_db/genome/Jan2016/rnammer_aln/otusn_map_nonSingle.txt'
Explanation: Goal
simulating amplicon fragments for genomes in non-singleton OTUs
Setting variables
End of explanation
import dill
import numpy as np
import pandas as pd
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
Explanation: Init
End of explanation
# max 13C shift
max_13C_shift_in_BD = 0.036
# min BD (that we care about)
min_GC = 13.5
min_BD = min_GC/100.0 * 0.098 + 1.66
# max BD (that we care about)
max_GC = 80
max_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C
max_BD = max_BD + max_13C_shift_in_BD
## BD range of values
BD_vals = np.arange(min_BD, max_BD, 0.001)
Explanation: gradient params
End of explanation
infoFile = os.path.splitext(ampFragFile)[0] + '_info.txt'
infoFile = os.path.join(workDir, os.path.split(infoFile)[1])
!SIPSim KDE_info -s $ampFragFile > $infoFile
!wc -l $infoFile
!head -n 4 $infoFile
%%R -i infoFile
df.info = read.delim(infoFile, sep='\t') %>%
mutate(genus_ID = gsub('_.+', '', taxon_ID),
species_ID = gsub('^([^_]+_[^_]+).+', '\\1', taxon_ID))
df.info %>% head(n=3)
Explanation: Get GC distribution info
End of explanation
%%R -i otuFile
df.OTU = read.delim(otuFile, sep='\t', header=FALSE) %>%
mutate(genome_ID = gsub('\\.fna', '', V13)) %>%
select(genome_ID, V2) %>%
rename('OTU_ID' = V2)
df.info.j = inner_join(df.info, df.OTU, c('taxon_ID' = 'genome_ID'))
df.OTU = NULL
df.info.j %>% head(n=3)
%%R
df.info.j.f1 = df.info.j %>%
filter(KDE_ID == 1) %>%
distinct(taxon_ID, OTU_ID) %>%
group_by(OTU_ID) %>%
mutate(n_taxa = n()) %>%
ungroup() %>%
filter(n_taxa > 1)
df.info.j.f1 %>% nrow %>% print
df.info.j.f1 %>% head(n=3) %>% as.data.frame
%%R -h 4000 -w 800
df.info.j.f1$taxon_ID = reorder(df.info.j.f1$taxon_ID, df.info.j.f1$genus_ID)
df.info.j.f1$OTU_ID = reorder(df.info.j.f1$OTU_ID, -df.info.j.f1$n_taxa)
ggplot(df.info.j.f1, aes(x=taxon_ID, y=median,
ymin=percentile_25, ymax=percentile_75,
color=species_ID)) +
geom_linerange() +
geom_point(size=1) +
facet_wrap(~ OTU_ID, scales='free_x', ncol=8) +
theme_bw() +
theme(
axis.text.x = element_blank(),
legend.position='none'
)
Explanation: Combining info table with OTU tabel
End of explanation |
6,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img align="right" src="../img/square_240.png" />
Exercise
Step1: 3. Program
The code is structured in three parts
Step2: Solution
Step3: The trajectory is | Python Code:
import packages.initialization
import pioneer3dx as p3dx
p3dx.init()
Explanation: <img align="right" src="../img/square_240.png" />
Exercise: Square Test.
You are going to make a program for describing a square trajectory with the robot.
Instead of starting to code from scratch, you are going to reuse the code that you developed for the distance and turning exercises.
1. Starting position
For a better visual understanding of the task, it is recommended that the robot starts at the center of the room.
You can easily relocate the robot there by simply restarting the simulation.
2. Initialization
After restarting the simulation, the robot needs to be initialized.
End of explanation
def forward():
# copy and paste your code here
...
def turn():
# copy and paste your code here
...
print('Pose of the robot at the start')
p3dx.pose()
for _ in range(4):
forward()
turn()
print('Pose of the robot at the end')
p3dx.pose()
Explanation: 3. Program
The code is structured in three parts:
1. The first part is a function for moving forward: you must copy and paste the code inside the body of the function template, in the following cell.
2. The second part is a similar function for turning.
3. Finally, the third part is the main code, consisting of a loop that calls the previous functions four times. The code also displays the pose of the robot (position and orientation) before and after the motion.
End of explanation
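# Differential-drive relations used by the solution below:
#   distance travelled      = wheel_angle * r
#   in-place rotation angle = 2 * r * wheel_angle / L   (wheels spinning in opposite directions)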
def forward():
target = 2.0 # target distance
r = 0.1953 / 2 # wheel radius
initialEncoder = p3dx.rightEncoder
distance = 0
while distance < target:
p3dx.move(2.5,2.5)
angle = p3dx.rightEncoder - initialEncoder
distance = angle * r
p3dx.move(0,0)
def turn():
target = 3.1416/2 # target angle in radians
r = 0.1953 / 2 # wheel radius
L = 0.33 # axis length
initialEncoder = p3dx.leftEncoder
robotAngle = 0
while robotAngle < target:
p3dx.move(1.0,-1.0)
wheelAngle = p3dx.leftEncoder - initialEncoder
robotAngle = 2 * r * wheelAngle / L
p3dx.move(0,0)
print('Pose of the robot at the start')
p3dx.pose()
for _ in range(4):
forward()
turn()
print('Pose of the robot at the end')
p3dx.pose()
Explanation: Solution
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
x, y = p3dx.trajectory()
plt.plot(x,y)
Explanation: The trajectory is:
End of explanation |
6,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TB Model
We pick the following parameters
Step1: d Wave
Instantiation
Step2: Modification
Step3: MC Driver
Instantiation
Step4: Modification | Python Code:
Tc_mf = meV_to_K(0.5*250)
print meV_to_K(pi/2.0)
print 1.0/0.89
print cst.physical_constants["Boltzmann constant"]
print '$T_c^{MF} = $', Tc_mf, "K"
T_KT = meV_to_K(0.1*250)
print r"$T_{KT} = $", T_KT, "K"
Explanation: TB Model
We pick the following parameters:
+ hopping constant $ t= 250$ meV
+ $\Delta = 1.0 t$ so that $T_c^{MF} = 0.5 t$, and so that $\xi_0 \simeq a_0$
+ $g = -0.25$, unitless, so as to match the article's formalism, not the thesis'
+ $J = \dfrac{0.1 t}{0.89}$ so as to set $T_{KT} = 0.1 t$.
This means that we have the following physical properties
End of explanation
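# meV_to_K / K_to_meV come from elsewhere in the project; a hypothetical minimal sketch of
# the conversion they appear to perform (E = k_B * T, i.e. 1 meV ~ 11.6 K):
# def meV_to_K(energy_meV):
#     return energy_meV * 11.6045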
T_CST = 0.25
BCS_PARAMS = {"width":4, "chem_potential": 0.0,
"hopping_constant": T_CST, "J_constant": 0.1 * T_CST / 0.89,
"g_constant": 0.25, "delta": 1.0 * T_CST, "use_assaad": True,
"uniform_phase": True, "temperature": 100}
MY_DWAVE_MODEL = DWaveModel(BCS_PARAMS)
print MY_DWAVE_MODEL
Explanation: d Wave
Instantiation
End of explanation
BCS_PARAMS = {"width":20, "use_assaad": True,
"uniform_phase": True, "temperature": 1.75*145.0}
MY_DWAVE_MODEL.set_params(BCS_PARAMS)
print MY_DWAVE_MODEL
print "temp: ", K_to_meV(MY_DWAVE_MODEL.temperature), "meV"
Explanation: Modification
End of explanation
BCS_PARAMS = {"width":20, "use_assaad": True,
"uniform_phase": False, "temperature": 1.75*145.0}
MY_DWAVE_MODEL.set_params(BCS_PARAMS)
print MY_DWAVE_MODEL._uniform_phase
MC_Params = {"seed": 222315, "intervals": 100,
"target_snapshots": 15, "observable_list":["correlation_length"]}
MY_DRIVER = MCMCDriver(MY_DWAVE_MODEL, MC_Params)
Explanation: MC Driver
Instantiation
End of explanation
MC_PARAMS_MP = {"intervals": BCS_PARAMS["width"]**2 / 2,
"target_snapshots": 25,
"algorithm":"metropolis"}
MC_PARAMS_CLUSTER = {"intervals": 5,
"target_snapshots": 25,
"algorithm":"cluster"}
MY_DRIVER.set_params(MC_PARAMS_CLUSTER)
print MY_DWAVE_MODEL._uniform_phase
print MY_DRIVER
print MY_DRIVER.params
#MY_DRIVER.mc_object.set_params({"temperature": 2.0 * 145.0})
#MY_DRIVER.thermalize(20000)
MY_DRIVER.mc_object.set_params({"temperature": 1.05 * 290.0 * 1.05 / 1.12})
MY_DRIVER.thermalize(50)
MY_DRIVER.execute()
result = MY_DRIVER.result
data = result.observable_results["correlation_length"]
print data["length_values"]
print data["correlation_values"]
print result
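# The fitting helpers used below (func_short, func, func_power, ...) are defined elsewhere
# in the notebook. A hypothetical sketch consistent with how they are plotted afterwards
# (names and exact forms are assumptions, shown only for readability):
# def func_short(x, a, b):
#     return a * np.exp(-b * x)        # plain exponential decay, 1/b ~ correlation length
# def func(x, a, b, c):
#     return a * np.exp(-b * x) + c    # exponential decay plus a constant offset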
x_data = np.sqrt(data["length_values"])
y_data = data["correlation_values"]
fig, ax = plt.subplots(figsize = (10, 8), dpi=100, frameon=False)
ymin = 0.0
ymax = 1.0
ax.plot(x_data, y_data)
ax.set_ylim([ymin, ymax])
popt, pcov = curve_fit(func_short, x_data[1:], y_data[1:])
print popt
ax.plot(x_data, func_short(x_data, 2.0*popt[0], 2.0*popt[1]))
print "corr length:", 1.0/popt[1]
fig, ax = plt.subplots(figsize = (10, 8), dpi=100, frameon=False)
ax.plot(x_data[1:], np.log(y_data[1:]))
ax.plot(x_data[1:], np.log(popt[0]) - popt[1] * x_data[1:])
fig, ax = plt.subplots(figsize = (10, 8), dpi=100, frameon=False)
plt.imshow(MY_DRIVER.mc_object.xy_lattice, cmap=plt.cm.hot, interpolation='none')
plt.colorbar()
#Cf http://matplotlib.org/examples/pylab_examples/quiver_demo.html
fig, ax = plt.subplots(figsize = (10, 8), dpi=100, frameon=False)
plt.quiver(np.cos(MY_DRIVER.mc_object.xy_lattice), np.sin(MY_DRIVER.mc_object.xy_lattice))
MY_DRIVER.mc_object.make_wolff_step(np.pi/2.0)
dimension = MY_DRIVER.mc_object.xy_lattice.shape[0]
cluster = np.reshape(MY_DRIVER.mc_object.cluster, (dimension, dimension))
plt.imshow(cluster, cmap=plt.cm.hot, interpolation='none')
plt.colorbar()
print pi
print 5.65226755763 / (2.0 * pi), 3.77251040313 / (2.0 * pi)
neigh = MY_DRIVER.mc_object.lattice.get_neighbors()
results = pickle.load( open( "data_new.txt", "rb" ) )
data = results[0].observable_results["correlation_length"]
datas = {}
temps =np.array([])
for elem in results:
temps = np.append(temps, elem.bcs_params['temperature'])
temps = np.unique(temps)
for temp in temps:
datas[temp] = np.array([elem for elem in results if elem.bcs_params['temperature']==temp])
x_datas = {}
y_datas = {}
for temp in temps:
x_datas[temp] = np.sqrt(datas[temp][0].observable_results["correlation_length"]["length_values"])
y_datas[temp] = np.zeros((x_datas[temp].size))
total_sum = 0
for elem in datas[temp]:
y_datas[temp] +=\
elem.observable_results["correlation_length"]["correlation_values"]
y_datas[temp] /= datas[temp].size
fig, ax = plt.subplots(figsize = (14, 12), dpi=100, frameon=False)
corr_lens = {}
chosen_fun = func_power
for temp in temps[:1]:
x_data = x_datas[temp]
y_data = y_datas[temp]
print temp
ax.plot(x_data, y_data, label=str(temp))
#popt, pcov = curve_fit(func, x_data[0:], y_data[0:])
popt, pcov = curve_fit(chosen_fun, x_data[1:], y_data[1:])
print "temp: ", temp, "params: ", popt, r"$\eta$: ", -popt[1]
corr_lens[temp] = 1.0/popt[1]
ax.plot(x_data, chosen_fun(x_data, popt[0], popt[1]))
chosen_fun = func_full
for temp in temps[1:]:
x_data = x_datas[temp]
y_data = y_datas[temp]
print temp
ax.plot(x_data, y_data, label=str(temp))
#popt, pcov = curve_fit(func, x_data[0:], y_data[0:])
popt, pcov = curve_fit(chosen_fun, x_data[1:], y_data[1:])
print "temp: ", temp, "params: ", popt, "length: ", 1.0/popt[1], 1.0/popt[-1]
corr_lens[temp] = 1.0/popt[1]
ax.plot(x_data, chosen_fun(x_data, popt[0], popt[1], popt[2]))
ax.legend()
plt.savefig("transition.pdf")
fig, ax = plt.subplots(figsize = (14, 12), dpi=100, frameon=False)
x_es = np.sort(np.array(corr_lens.keys()))
y_es = np.array([corr_lens[elem] for elem in x_es])
ax.plot(x_es[1:], y_es[1:])
ax.grid(True)
plt.savefig("corr_length.pdf")
popt, pcov = curve_fit(func_exponent, x_es[1:], y_es[1:])
nu = popt[1]
print "nu", popt[1]
fig, ax = plt.subplots(figsize = (14, 12), dpi=100, frameon=False)
ax.plot(x_es[1:], y_es[1:])
ax.plot(x_es[1:-1], func_exponent(x_es[1:-1], popt[0], popt[1]))
ax.grid(True)
fig, ax = plt.subplots(figsize = (10, 8), dpi=100, frameon=False)
temp = 275.0
x_data = x_datas[temp]
y_data = y_datas[temp]
print x_data.shape
print y_data.shape
print x_es.shape
print data["length_values"].shape
l_values = y_data
popt, pcov = curve_fit(func, x_data, y_data)
ax.plot(np.sqrt(data["length_values"]), np.log(l_values - popt[2]))
ax.plot(np.sqrt(data["length_values"]), np.log(popt[0]) - popt[1] * np.sqrt(data["length_values"]))
results = pickle.load( open( "dos_alltemps.txt", "rb" ) )
datas = {}
temps =np.array([])
for elem in results:
temps = np.append(temps, elem.bcs_params['temperature'])
temps = np.unique(temps)
for temp in temps:
datas[temp] = np.array([elem for elem in results if elem.bcs_params['temperature']==temp])
print temps
print datas[275.0][0]
x_datas = {}
y_datas = {}
for temp in temps:
x_datas[temp] = datas[temp][0].observable_results["DOS"]["omega_mesh"]
y_datas[temp] = np.zeros((x_datas[temp].size))
total_sum = 0
for elem in datas[temp]:
y_datas[temp] +=\
elem.observable_results["DOS"]["DOS_values"]
y_datas[temp] /= datas[temp].size
fig, ax = plt.subplots(figsize = (8, 14), dpi=100, frameon=False)
#for i in range(len(temps)):
selected_temps = [0,1, 2, 3, 4, 6, 8]
for i in range(len(selected_temps)):
temp = temps[selected_temps[i]]
x_data = x_datas[temp]
y_data = y_datas[temp] + i * 0.7
ax.plot(x_data, y_data, label=(r'T={:3.2f}$T_{{KT}}$').format(temp/T_KT))
ax.legend()
plt.savefig("all_dos.pdf")
Explanation: Modification
End of explanation |
6,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model16
Step1: Right, now, you can use those module.
GMM
Classifying questions
features
Step3: B. Modeling
Select model
Step4: n_iter=10 | Python Code:
from utils import load_buzz, select, write_result
from features import featurize, get_pos
from containers import Questions, Users, Categories
Explanation: Model16: Extract common functions
Now we know what kind of common functions we need, so I have turned the functions we used into separate module files. You can find them in the howto directory.
utils.py
features.py
containers.py
I think we already know what they are, so just import them and use them. By the way, if you want to modify them, just open those files and edit them.
You can also find the commit that extracts these common functions on GitHub. Here is the commit message, which explains the change.
refactor: extract some common functions and make them modules
In python, a file can be a module. So, we can extract some common
functions from implementation in IPython and make them as a
or some files. After this, we can get a or some modules and we can
import them into our source code.
https://docs.python.org/2/tutorial/modules.html
End of explanation
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
def plot_gmm(X, models, n_components, covariance_type='diag', n_iter=100,
figsize=(10, 20), suptitle=None, xlabel=None, ylabel=None):
color_iter = ['r', 'g', 'b', 'c', 'm', 'y', 'k', 'gray', 'pink', 'lime']
plt.figure(figsize=figsize)
plt.suptitle(suptitle, fontsize=20)
for i, model in enumerate(models):
mm = getattr(mixture, model)(n_components=n_components,
covariance_type=covariance_type,
n_iter=n_iter)
        mm.fit(X)                 # fit the data passed in, not the global X_pos_qid
        Y = mm.predict(X)
        plt.subplot(len(models), 1, 1 + i)
        for j, color in enumerate(color_iter):   # j avoids shadowing the subplot index i
            plt.scatter(X[Y == j, 0], X[Y == j, 1], .7, color=color)
plt.title(model, fontsize=15)
plt.xlabel(xlabel, fontsize=12)
plt.ylabel(ylabel, fontsize=12)
plt.grid()
plt.show()
users = Users(load_buzz())
questions = Questions(load_buzz())
X_pos_uid = users.select(['ave_pos_uid', 'acc_ratio_uid'])
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid'])
plot_gmm(X_pos_uid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying users',
xlabel='abs(position)',
ylabel='accuracy ratio')
plot_gmm(X_pos_qid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying questions',
xlabel='abs(position)',
ylabel='accuracy ratio')
# Question category
n_components = 8
gmm = mixture.DPGMM(n_components=n_components, covariance_type='diag', n_iter=10**10)
gmm.fit(X_pos_qid)
pred_cat_qid = gmm.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 8
gmm = mixture.DPGMM(n_components=n_components, covariance_type='diag', n_iter=10**10)
gmm.fit(X_pos_uid)
pred_cat_uid = gmm.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', [str(x) for x in pred_cat_uid])
questions.sub_append('cat_qid', [str(x) for x in pred_cat_qid])
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
print(users[1])
print(questions[1])
Explanation: Right, now you can use those modules.
GMM
Classifying questions
features: avg_pos, accuracy rate
End of explanation
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
X_train = select(X_train, regression_keys)
categories = Categories(load_buzz())
for item in X_train:
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
X_train
import nltk
def extract_entities(text, all=True, verbose=False):
count = 0
for sent in nltk.sent_tokenize(text):
for chunk in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent))):
if all:
if verbose: print(chunk)
if type(chunk) is nltk.tree.Tree:
count += 1
if verbose: print(chunk.label(), ' '.join(c[0] for c in chunk.leaves()))
elif chunk[1] == 'CD':
count += 1
if verbose: print('CD', chunk[0])
return count
from collections import defaultdict
ne_count = defaultdict(int)
for key in questions:
ne_count[key] = extract_entities(questions[key]['question'], all=False, verbose=False)
import pickle
with open('ne_count01.pkl', 'wb') as f:
pickle.dump(ne_count, f)
def transform(X):
for index, item in enumerate(X):
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
item['cat_uid'] = users[uid]['cat_uid']
else:
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
item['cat_qid'] = questions[qid]['cat_qid']
item['ne_count'] = ne_count[qid]
else:
acc = questions.select(['acc_ratio_qid'])
item['acc_ratio_qid'] = sum(acc) / float(len(acc))
item['cat_qid'] = most_pred_cat_qid
item['uid'] = str(uid)
item['qid'] = str(qid)
transform(X_train)
X_train[1]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
import multiprocessing
from sklearn import linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
import math
from numpy import abs, sqrt
regressor_names = "ElasticNetCV"   # whitespace-separated list of sklearn.linear_model regressor names
#for l1 in [0.5, 0.2, 0.7, 0.9]:
for l1 in [0.5]:
print ("=== ElasticNetCV RMSE", "with", l1)
for regressor in regressor_names.split():
scores = cross_val_score(getattr(linear_model, regressor)(n_jobs=3, normalize=True, l1_ratio = l1),
X_train_dict_vec, y_train,
cv=2,
scoring='mean_squared_error'
)
print (regressor, sqrt(abs(scores)).mean())
Explanation: B. Modeling
Select model
End of explanation
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['avg_pos'])
X_train = select(X_train, regression_keys)
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
X_test = select(X_test, regression_keys)
transform(X_train)
transform(X_test)
for item in X_train:
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
for item in X_test:
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
X_train[1]
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
for l1_ratio in [0.72, 0.7]:
print('=== l1_ratio:', l1_ratio)
regressor = linear_model.ElasticNetCV(n_jobs=3, normalize=True, l1_ratio=l1_ratio)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
write_result(load_buzz()['test'], predictions, file_name=str(l1_ratio)+'guess.csv')
Explanation: n_iter=10: 78.9121215405
n_iter=100 take1: 78.9251743166
n_iter=100 take2: 78.9268663663
Training and testing model
End of explanation |
6,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Uniquely Identifying Particles With Hashes
In many cases, one can just identify particles by their position in the particle array, e.g. using sim.particles[5]. However, in cases where particles might get reordered in the particle array finding a particle might be difficult. This is why we added a hash attribute to particles.
In REBOUND particles might get rearranged when a tree code is used for the gravity or collision routine, when particles merge, when a particle leaves the simulation box, or when you manually remove or add particles. In general, therefore, the user should not assume that particles stay at the same index or in the same location in memory. The reliable way to access particles is to assign them hashes and to access particles through them.
Note
Step1: We can now not only access the Earth particle with
Step2: but also with
Step3: We can access particles with negative indices like a list. We can get the last particle with
Step4: Details
We can also access particles through their hash directly. However, to differentiate from passing an integer index, we have to first cast the hash to the underlying C datatype. We can do this through the rebound.hash function
Step5: which corresponds to particles[0] as it should. sim.particles[999] would try to access index 999, which doesn't exist in the simulation, and REBOUND would raise an AttributeError.
When we above set the hash to a string, REBOUND converted this to an unsigned integer using the same rebound.hash function
Step6: The hash attribute always returns the appropriate unsigned integer ctypes type. (Depending on your computer architecture, ctypes.c_uint32 can be an alias for another ctypes type).
So we could also access the earth with
Step7: The numeric hashes could be useful in cases where you have a lot of particles you don't want to assign individual names, but you still need to keep track of them individually as they get rearranged
Step8: Possible Pitfalls
The user is responsible for making sure the hashes are unique. If two particles share the same hash, you could get either one when you access them using their hash (in most cases the first hit in the particles array). Two random strings used for hashes have a $\sim 10^{-9}$ chance of clashing. The most common case is setting a hash to 0
Step9: Here we expected to get back the first particle, but instead got the last one. This is because we didn't assign a hash to the last particle and it got automatically set to 0. If we give hashes to all the particles in the simulation, then there's no clash
Step10: Due to details of the ctypes library, comparing two ctypes.c_uint32 instances for equality fails
Step11: You have to compare the value | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1., hash=999)
sim.add(a=0.4, hash="mercury")
sim.add(a=1., hash="earth")
sim.add(a=5., hash="jupiter")
Explanation: Uniquely Identifying Particles With Hashes
In many cases, one can just identify particles by their position in the particle array, e.g. using sim.particles[5]. However, in cases where particles might get reordered in the particle array finding a particle might be difficult. This is why we added a hash attribute to particles.
In REBOUND particles might get rearranged when a tree code is used for the gravity or collision routine, when particles merge, when a particle leaves the simulation box, or when you manually remove or add particles. In general, therefore, the user should not assume that particles stay at the same index or in the same location in memory. The reliable way to access particles is to assign them hashes and to access particles through them.
Note: When you don't assign particles a hash, they automatically get set to 0. The user is responsible for making sure hashes are unique, so if you set up particles without a hash and later set a particle's hash to 0, you don't know which one you'll get back when you access hash 0. See Possible Pitfalls below.
In this example, we show the basic usage of the hash attribute, which is an unsigned integer.
End of explanation
sim.particles[2]
Explanation: We can now not only access the Earth particle with:
End of explanation
sim.particles["earth"]
Explanation: but also with
End of explanation
sim.particles[-1]
Explanation: We can access particles with negative indices like a list. We can get the last particle with
End of explanation
from rebound import hash as h
sim.particles[h(999)]
Explanation: Details
We can also access particles through their hash directly. However, to differentiate from passing an integer index, we have to first cast the hash to the underlying C datatype. We can do this through the rebound.hash function:
End of explanation
h("earth")
sim.particles[2].hash
Explanation: which corresponds to particles[0] as it should. sim.particles[999] would try to access index 999, which doesn't exist in the simulation, and REBOUND would raise an AttributeError.
When we above set the hash to a string, REBOUND converted this to an unsigned integer using the same rebound.hash function:
End of explanation
sim.particles[h(1424801690)]
Explanation: The hash attribute always returns the appropriate unsigned integer ctypes type. (Depending on your computer architecture, ctypes.c_uint32 can be an alias for another ctypes type).
So we could also access the earth with:
End of explanation
for i in range(1,100):
sim.add(m=0., a=i, hash=i)
sim.particles[99].a
sim.particles[h(99)].a
Explanation: The numeric hashes could be useful in cases where you have a lot of particles you don't want to assign individual names, but you still need to keep track of them individually as they get rearranged:
End of explanation
sim = rebound.Simulation()
sim.add(m=1., hash=0)
sim.add(a=1., hash="earth")
sim.add(a=5.)
sim.particles[h(0)]
Explanation: Possible Pitfalls
The user is responsible for making sure the hashes are unique. If two particles share the same hash, you could get either one when you access them using their hash (in most cases the first hit in the particles array). Two random strings used for hashes have a $\sim 10^{-9}$ chance of clashing. The most common case is setting a hash to 0:
End of explanation
sim = rebound.Simulation()
sim.add(m=1., hash=0)
sim.add(a=1., hash="earth")
sim.add(a=5., hash="jupiter")
sim.particles[h(0)]
Explanation: Here we expected to get back the first particle, but instead got the last one. This is because we didn't assign a hash to the last particle and it got automatically set to 0. If we give hashes to all the particles in the simulation, then there's no clash:
End of explanation
h(32) == h(32)
Explanation: Due to details of the ctypes library, comparing two ctypes.c_uint32 instances for equality fails:
End of explanation
h(32).value == h(32).value
Explanation: You have to compare the value
End of explanation |
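As a small illustrative sketch (reusing the simulation from above; the variable name earth is ours), checking which particle a lookup returned amounts to comparing the .value of the hashes:
earth = sim.particles["earth"]
earth.hash.value == h("earth").value  # True when the lookup returned the particle we expect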
6,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model hyperparameter tuning
Once a machine learning model has been built, its predictive performance is improved further through model optimization steps such as hyperparameter optimization.
Scikit-Learn tools for model hyperparameter tuning
Scikit-Learn provides the following model optimization tools.
validation_curve
optimization of a single hyperparameter
GridSearchCV
optimization of multiple hyperparameters over a grid
ParameterGrid
a grid for multiple-parameter optimization
Example use of validation_curve
The validation_curve function takes the name and range of the parameter to optimize and the performance metric through the param_name, param_range and scoring arguments, and computes the metric for every value in the parameter range.
Step1: Example use of GridSearchCV
Unlike the validation_curve function, the GridSearchCV class acts as a model wrapper. Calling the fit method on the class object automatically creates multiple internal models via grid search, runs them all, and finds the optimal parameters. The generated internal models and their results are stored in the following attributes.
grid_scores_
performance results for every parameter combination in param_grid. Each element is a tuple made up of the following items.
parameters
Step2: Example use of ParameterGrid
Sometimes you need to run a grid search in a way other than with the GridSearchCV provided by scikit-learn. In that case, ParameterGrid is the command that builds the search grid from combinations of parameters. ParameterGrid acts as an iterator over the search space. | Python Code:
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.learning_curve import validation_curve
digits = load_digits()
X, y = digits.data, digits.target
param_range = np.logspace(-6, -1, 10)
%%time
train_scores, test_scores = \
validation_curve(SVC(), X, y,
param_name="gamma", param_range=param_range,
cv=10, scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel("$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
plt.semilogx(param_range, train_scores_mean, label="Training score", color="r")
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2, color="r")
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score", color="g")
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2, color="g")
plt.legend(loc="best")
plt.show()
Explanation: Model hyperparameter tuning
Once a machine learning model has been built, its predictive performance is improved further through model optimization steps such as hyperparameter optimization.
Scikit-Learn tools for model hyperparameter tuning
Scikit-Learn provides the following model optimization tools.
validation_curve
optimization of a single hyperparameter
GridSearchCV
optimization of multiple hyperparameters over a grid
ParameterGrid
a grid for multiple-parameter optimization
Example use of validation_curve
The validation_curve function takes the name and range of the parameter to optimize and the performance metric through the param_name, param_range and scoring arguments, and computes the metric for every value in the parameter range.
End of explanation
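As a quick, informal follow-up (a sketch reusing the arrays computed above), the gamma value with the highest mean cross-validation score can be read off directly:
best_gamma = param_range[np.argmax(test_scores_mean)]
print(best_gamma)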
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [
{'clf__C': param_range, 'clf__kernel': ['linear']},
{'clf__C': param_range, 'clf__gamma': param_range, 'clf__kernel': ['rbf']}]
gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=10, n_jobs=1)
%time gs = gs.fit(X, y)
print(gs.best_score_)
print(gs.best_params_)
gs.grid_scores_
Explanation: Example use of GridSearchCV
Unlike the validation_curve function, the GridSearchCV class acts as a model wrapper. Calling the fit method on the class object automatically creates multiple internal models via grid search, runs them all, and finds the optimal parameters. The generated internal models and their results are stored in the following attributes.
grid_scores_
performance results for every parameter combination in param_grid. Each element is a tuple made up of the following items.
parameters: the parameters that were used
mean_validation_score: the mean of the cross-validation scores
cv_validation_scores: all cross-validation scores
best_score_
the best score
best_params_
the parameters that produced the best score
best_estimator_
the model with the parameters that produced the best score
End of explanation
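The fitted search object can also hand back a ready-to-use model. A minimal sketch (assuming the gs object fitted above; with the default refit=True, best_estimator_ is already trained on the full data):
best_clf = gs.best_estimator_
print(best_clf.predict(X[:5]))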
from sklearn.grid_search import ParameterGrid
param_grid = {'a': [1, 2], 'b': [True, False]}
list(ParameterGrid(param_grid))
param_grid = [{'kernel': ['linear']}, {'kernel': ['rbf'], 'gamma': [1, 10]}]
list(ParameterGrid(param_grid))
Explanation: Example use of ParameterGrid
Sometimes you need to run a grid search in a way other than with the GridSearchCV provided by scikit-learn. In that case, ParameterGrid is the command that builds the search grid from combinations of parameters. ParameterGrid acts as an iterator over the search space.
End of explanation |
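A hand-rolled search over such a grid might look like the following rough sketch (the SVC model, the grid values and the use of the training score instead of cross-validation are illustrative simplifications, reusing X and y from above):
best_score, best_params = -1.0, None
for params in ParameterGrid({'C': [0.1, 1.0, 10.0], 'gamma': [0.001, 0.01]}):
    model = SVC(**params).fit(X, y)   # fit one candidate model
    score = model.score(X, y)         # simplified: training accuracy, no CV
    if score > best_score:
        best_score, best_params = score, params
print(best_score, best_params)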
6,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with TensorFlow
Learning Objectives
1. Practice defining and performing basic operations on constant Tensors
1. Use Tensorflow's automatic differentiation capability
1. Learn how to train a linear regression from scratch with TensorFlow
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with Python built-in lists and NumPy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model performance.
Step1: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variable values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x, use one of the following methods
Step2: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does
Step3: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
Step4: You can convert a native TF tensor to a NumPy array using .numpy()
Step5: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function
Step6: Let's also create a test dataset to evaluate our models
Step7: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set
Step8: Using mean squared error, our loss is
Step9: This value for the MSE loss above gives us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model
Step10: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of a tf.GradientTape instance, which will record gradient information
Step11: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Step12: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set
Step13: This is indeed much better!
Bonus
Try modelling a non-linear function such as | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print(tf.__version__)
Explanation: Getting started with TensorFlow
Learning Objectives
1. Practice defining and performing basic operations on constant Tensors
1. Use Tensorflow's automatic differentiation capability
1. Learn how to train a linear regression from scratch with TensorFlow
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with Python built-in lists and NumPy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model performance.
End of explanation
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
x.assign(45.8) # TODO 1
x
x.assign_add(4) # TODO 2
x
x.assign_sub(3) # TODO 3
x
Explanation: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variable values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x, use one of the following methods:
x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
End of explanation
a = tf.constant([5, 3, 8]) # TODO 1
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b
print("c:", c)
print("d:", d)
a = tf.constant([5, 3, 8]) # TODO 2
b = tf.constant([3, -1, 2])
c = tf.multiply(a, b)
d = a * b
print("c:", c)
print("d:", d)
# tf.math.exp expects floats so we need to explicitly give the type
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
print("b:", b)
Explanation: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does:
tf.add allows us to add the components of a tensor
tf.multiply allows us to multiply the components of a tensor
tf.subtract allows us to subtract the components of a tensor
tf.math.* contains the usual math operations to be applied on the components of a tensor
and many more...
Most of the standard arithmetic operations (tf.add, tf.subtract, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.)
End of explanation
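For completeness, a short sketch of the subtraction variants mentioned above (mirroring the addition and multiplication cells; the tensors are re-declared here for clarity):
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.subtract(a, b)
d = a - b
print("c:", c)
print("d:", d)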
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py) # TODO 1
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np) # TODO 2
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf) # TODO 3
Explanation: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
End of explanation
a_tf.numpy()
Explanation: You can convert a native TF tensor to a NumPy array using .numpy()
End of explanation
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
Explanation: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
Explanation: Let's also create a test dataset to evaluate our models:
End of explanation
y_mean = Y.numpy().mean()
def predict_mean(X):
    y_hat = [y_mean] * len(X)
    return y_hat
Y_hat = predict_mean(X_test)
Explanation: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
End of explanation
errors = (Y_hat - Y)**2
loss = tf.reduce_mean(errors)
loss.numpy()
Explanation: Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
End of explanation
def loss_mse(X, Y, w0, w1):
    Y_hat = w0 * X + w1
    errors = (Y_hat - Y)**2
    return tf.reduce_mean(errors)
Explanation: This value for the MSE loss above gives us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
End of explanation
# TODO 1
def compute_gradients(X, Y, w0, w1):
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, w0, w1)
    return tape.gradient(loss, [w0, w1])
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
Explanation: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of tf.GradientTape instance which will reccord gradient information:
python
with tf.GradientTape() as tape:
loss = # computation
This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable:
python
gradients = tape.gradient(loss, [w0, w1])
We illustrate this procedure by computing the loss gradients with respect to the model weights:
End of explanation
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
    dw0, dw1 = compute_gradients(X, Y, w0, w1)
    w0.assign_sub(dw0 * LEARNING_RATE)
    w1.assign_sub(dw1 * LEARNING_RATE)
    if step % 100 == 0:
        loss = loss_mse(X, Y, w0, w1)
        print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
Explanation: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
End of explanation
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
Explanation: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set:
End of explanation
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
    f1 = tf.ones_like(X)  # Bias.
    f2 = X
    f3 = tf.square(X)
    f4 = tf.sqrt(X)
    f5 = tf.exp(X)
    return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
    return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
    Y_hat = predict(X, W)
    errors = (Y_hat - Y)**2
    return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, W)  # use the feature matrix passed in, not the global Xf
    return tape.gradient(loss, W)
# TODO 2
STEPS = 2000
LEARNING_RATE = .02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
    dW = compute_gradients(Xf, Y, W)  # pass the engineered features explicitly
    W.assign_sub(dW * LEARNING_RATE)
    if step % 100 == 0:
        loss = loss_mse(Xf, Y, W)
        steps.append(step)
        losses.append(loss)
        plt.clf()
        plt.plot(steps, losses)
print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
plt.figure()
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
plt.legend()
Explanation: This is indeed much better!
Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
End of explanation |
6,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering and resampling data
Some artifacts are restricted to certain frequencies and can therefore
be fixed by filtering. An artifact that typically affects only some
frequencies is due to the power line.
Power-line noise is a noise created by the electrical network.
It is composed of sharp peaks at 50Hz (or 60Hz depending on your
geographical location). Some peaks may also be present at the harmonic
frequencies, i.e. the integer multiples of
the power-line frequency, e.g. 100Hz, 150Hz, ... (or 120Hz, 180Hz, ...).
This tutorial covers some basics of how to filter data in MNE-Python.
For more in-depth information about filter design in general and in
MNE-Python in particular, check out tut_background_filtering.
Step1: Removing power-line noise with notch filtering
Removing power-line noise can be done with a Notch filter, directly on the
Raw object, specifying an array of frequency to be cut off
Step2: Removing power-line noise with low-pass filtering
If you're only interested in low frequencies, below the peaks of power-line
noise you can simply low pass filter the data.
Step3: High-pass filtering to remove slow drifts
To remove slow drifts, you can high pass.
<div class="alert alert-danger"><h4>Warning</h4><p>In several applications such as event-related potential (ERP)
and event-related field (ERF) analysis, high-pass filters with
cutoff frequencies greater than 0.1 Hz are usually considered
problematic since they significantly change the shape of the
resulting averaged waveform (see examples in
`tut_filtering_hp_problems`). In such applications, apply
high-pass filters with caution.</p></div>
Step4: To do the low-pass and high-pass filtering in one step you can do
a so-called band-pass filter by running the following
Step5: Downsampling and decimation
When performing experiments where timing is critical, a signal with a high
sampling rate is desired. However, having a signal with a much higher
sampling rate than necessary needlessly consumes memory and slows down
computations operating on the data. To avoid that, you can downsample
your time series. Since downsampling raw data reduces the timing precision
of events, it is recommended only for use in procedures that do not require
optimal precision, e.g. computing EOG or ECG projectors on long recordings.
<div class="alert alert-info"><h4>Note</h4><p>A *downsampling* operation performs a low-pass (to prevent
aliasing) followed by *decimation*, which selects every
$N^{th}$ sample from the signal. See
| Python Code:
import numpy as np
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog_proj.fif'
tmin, tmax = 0, 20 # use the first 20s of data
# Setup for reading the raw data (save memory by cropping the raw data
# before loading it)
raw = mne.io.read_raw_fif(raw_fname)
raw.crop(tmin, tmax).load_data()
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # bads + 2 more
fmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
# Pick a subset of channels (here for speed reason)
selection = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads', selection=selection)
# Let's first check out all channel types
raw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)
Explanation: Filtering and resampling data
Some artifacts are restricted to certain frequencies and can therefore
be fixed by filtering. An artifact that typically affects only some
frequencies is due to the power line.
Power-line noise is a noise created by the electrical network.
It is composed of sharp peaks at 50Hz (or 60Hz depending on your
geographical location). Some peaks may also be present at the harmonic
frequencies, i.e. the integer multiples of
the power-line frequency, e.g. 100Hz, 150Hz, ... (or 120Hz, 180Hz, ...).
This tutorial covers some basics of how to filter data in MNE-Python.
For more in-depth information about filter design in general and in
MNE-Python in particular, check out tut_background_filtering.
End of explanation
raw.notch_filter(np.arange(60, 241, 60), picks=picks, filter_length='auto',
phase='zero')
raw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)
Explanation: Removing power-line noise with notch filtering
Removing power-line noise can be done with a Notch filter, directly on the
Raw object, specifying an array of frequency to be cut off:
End of explanation
# low pass filtering below 50 Hz
raw.filter(None, 50., fir_design='firwin')
raw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)
Explanation: Removing power-line noise with low-pass filtering
If you're only interested in low frequencies, below the peaks of power-line
noise you can simply low pass filter the data.
End of explanation
raw.filter(1., None, fir_design='firwin')
raw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)
Explanation: High-pass filtering to remove slow drifts
To remove slow drifts, you can high pass.
<div class="alert alert-danger"><h4>Warning</h4><p>In several applications such as event-related potential (ERP)
and event-related field (ERF) analysis, high-pass filters with
cutoff frequencies greater than 0.1 Hz are usually considered
problematic since they significantly change the shape of the
resulting averaged waveform (see examples in
`tut_filtering_hp_problems`). In such applications, apply
high-pass filters with caution.</p></div>
End of explanation
# band-pass filtering in the range 1 Hz - 50 Hz
raw.filter(1, 50., fir_design='firwin')
Explanation: To do the low-pass and high-pass filtering in one step you can do
a so-called band-pass filter by running the following:
End of explanation
raw.resample(100, npad="auto") # set sampling frequency to 100Hz
raw.plot_psd(area_mode='range', tmax=10.0, picks=picks)
Explanation: Downsampling and decimation
When performing experiments where timing is critical, a signal with a high
sampling rate is desired. However, having a signal with a much higher
sampling rate than necessary needlessly consumes memory and slows down
computations operating on the data. To avoid that, you can downsample
your time series. Since downsampling raw data reduces the timing precision
of events, it is recommended only for use in procedures that do not require
optimal precision, e.g. computing EOG or ECG projectors on long recordings.
<div class="alert alert-info"><h4>Note</h4><p>A *downsampling* operation performs a low-pass (to prevent
aliasing) followed by *decimation*, which selects every
$N^{th}$ sample from the signal. See
:func:`scipy.signal.resample` and
:func:`scipy.signal.resample_poly` for examples.</p></div>
Data resampling can be done with resample methods.
End of explanation |
6,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Read Data
Preprocessed and tokenized questions.
Step4: Pretrained word vector database.
Step5: Build Features
Step6: Save features | Python Code:
from pygoose import *
from gensim.models.wrappers.fasttext import FastText
Explanation: Feature: Word Mover's Distance
Based on the pre-trained word embeddings, we'll compute the Word Mover's Distance between each tokenized question pair.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'wmd'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_test.pickle')
tokens = tokens_train + tokens_test
Explanation: Read Data
Preprocessed and tokenized questions.
End of explanation
embedding_model = FastText.load_word2vec_format(project.aux_dir + 'fasttext_vocab.vec')
Explanation: Pretrained word vector database.
End of explanation
def wmd(pair):
    return embedding_model.wmdistance(pair[0], pair[1])
wmds = kg.jobs.map_batch_parallel(
tokens,
item_mapper=wmd,
batch_size=1000,
)
wmds = np.array(wmds).reshape(-1, 1)
X_train = wmds[:len(tokens_train)]
X_test = wmds[len(tokens_train):]
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
Explanation: Build Features
End of explanation
feature_names = [
'wmd',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
6,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this blog post, I want to show you how you can visualize the contributions of developers to your code base over time. I came across the Stream Graph visualization and it looks like it would fit quite nicely for this purpose. Fortunately, there is already a D3 template by William Turman for this that I use in this blog post
Step1: In this example, I'm using the rather big repository of IntelliJ which is written in Java.
We import the existing Git log file with file statistics that was generated using
bash
git log --numstat --pretty=format
Step2: The logfile contains the added and deleted lines of code for each file in each commit of an author.
This file has over 1M entries.
Step3: The repository itself has over 200k individual commits
Step4: and almost 400 different contributors (Note
Step5: We mold the raw data to get a nice list of all committed files including additions and deletions by each author. You can find details about this approach in Reading a Git repo's commit history with Pandas efficiently.
Step6: We further do some basic filtering (because we only want the Java source code) and some data type conversions. Additionally, we calculate a new column that holds the number of modifications (added and deleted lines of source code).
Step7: The next step is optional and is just some basic data cleaning. It filters out any nonsense commits that were caused by wrong timestamp configurations of some committers.
Step8: Summarizing the data
In this section, we group the data to achieve a meaningful visualization with Stream Graphs. We do this by grouping all relevant data of the commits by author and quarters (TIME_FREQUENCY is set to Q = quarterly). We reset the index because we don't need it in the following.
Step9: We also do some primitive outlier treatment by limiting the number of modifications to lower than the 99% quantile of the whole data.
Step10: Next, we pivot the DataFrame to get the modifications for each author over time.
Step11: Ugly visualization
At this point, we could already plot the data with the built-in plot function of Pandas.
Step12: But it doesn't look good at all
Step13: To combine the new index with our existing DataFrame, we have to reindex the existing DataFrame and transform the data format.
Step14: Then we adjust the column names and ordering to the given CSV format for the D3 template and export the data into a CSV file.
Step15: Because we use a template in this example, we simply copy it and replace the CSV filename variable with the filename from above.
Step16: And that's it!
Result 1 | Python Code:
PROJECT = "intellij-community"
SOURCE_CODE_FILE_EXTENSION = ".java"
TIME_FREQUENCY = "Q" # how should data be grouped? 'Q' means quarterly
FILENAME_PREFIX = "vis/interactive_streamgraph/"
FILENAME_SUFFIX = "_" + PROJECT + "_" + TIME_FREQUENCY
Explanation: Introduction
In this blog post, I want to show you how you can visualize the contributions of developers to your code base over time. I came across the Stream Graph visualization and it looks like it would fit quite nicely for this purpose. Fortunately, there is already a D3 template by William Turman for this that I use in this blog post:
So let's prototype some visualizations!
Getting the data
At the beginning, we declare some general variables for easy access and to reuse them for other repositories easily.
End of explanation
import pandas as pd
logfile = "../../{}/git_numstat.log".format(PROJECT)
git_log = pd.read_csv(
logfile,
sep="\t",
header=None,
names=[
'additions',
'deletions',
'filename',
'sha',
'timestamp',
'author'])
git_log.head()
Explanation: In this example, I'm using the rather big repository of IntelliJ which is written in Java.
We import the existing Git log file with file statistics that was generated using
bash
git log --numstat --pretty=format:"%x09%x09%x09%h%x09%at%x09%aN" > git_numstat.log
End of explanation
len(git_log)
Explanation: The logfile contains the added and deleted lines of code for each file in each commit of an author.
This file has over 1M entries.
End of explanation
git_log['sha'].count()
Explanation: The repository itself has over 200k individual commits
End of explanation
git_log['author'].value_counts().size
Explanation: and almost 400 different contributors (Note: I created a separate .mailmap file locally to avoid multiple author names for the same person)
End of explanation
commits = git_log[['additions', 'deletions', 'filename']]\
.join(git_log[['sha', 'timestamp', 'author']]\
.fillna(method='ffill'))\
.dropna()
commits.head()
Explanation: We mold the raw data to get a nice list of all committed files including additions and deletions by each author. You can find details about this approach in Reading a Git repo's commit history with Pandas efficiently.
End of explanation
commits = commits[commits['filename'].str.endswith(SOURCE_CODE_FILE_EXTENSION)]
commits['additions'] = pd.to_numeric(commits['additions'], errors='coerce').dropna()
commits['deletions'] = pd.to_numeric(commits['deletions'], errors='coerce').dropna()
commits['timestamp'] = pd.to_datetime(commits['timestamp'], unit="s")
commits = commits.set_index(commits['timestamp'])
commits['modifications'] = commits['additions'] + commits['deletions']
commits.head()
Explanation: We further do some basic filtering (because we only want the Java source code) and some data type conversions. Additionally, we calculate a new column that holds the number of modifications (added and deleted lines of source code).
End of explanation
commits = commits[commits['timestamp'] <= 'today']
initial_commit_date = commits[-1:]['timestamp'].values[0]
commits = commits[commits['timestamp'] >= initial_commit_date]
commits.head()
Explanation: The next step is optional and is just some basic data cleaning. It filters out any nonsense commits that were caused by wrong timestamp configurations of some committers.
End of explanation
modifications_over_time = commits[['author', 'timestamp', 'modifications']].groupby(
[commits['author'],
pd.Grouper(freq=TIME_FREQUENCY)]).sum().reset_index()
modifications_over_time.head()
Explanation: Summarizing the data
In this section, we group the data to achieve a meaningful visualization with Stream Graphs. We do this by grouping all relevant data of the commits by author and quarters (TIME_FREQUENCY is set to Q = quarterly). We reset the index because we don't need it in the following.
End of explanation
modifications_over_time['modifications_norm'] = modifications_over_time['modifications'].clip_upper(
modifications_over_time['modifications'].quantile(0.99))
modifications_over_time[['modifications', 'modifications_norm']].max()
Explanation: We also do some primitive outlier treatment by limiting the number of modifications to lower than the 99% quantile of the whole data.
End of explanation
modifications_per_authors_over_time = modifications_over_time.reset_index().pivot_table(
index=modifications_over_time['timestamp'],
columns=modifications_over_time['author'],
values='modifications_norm')
modifications_per_authors_over_time.head()
Explanation: Next, we pivot the DataFrame to get the modifications for each author over time.
End of explanation
%matplotlib inline
modifications_per_authors_over_time.plot(kind='area', legend=None, figsize=(12,4))
Explanation: Ugly visualization
At this point, we could already plot the data with the built-in plot function of Pandas.
End of explanation
time_range = pd.DatetimeIndex(
start=modifications_per_authors_over_time.index.min(),
end=modifications_per_authors_over_time.index.max(),
freq=TIME_FREQUENCY)
time_range
Explanation: But it doesn't look good at all :-/
Let's bend the data in a way so that it fits into the Stream Graph D3 template!
Treat missing data
The D3.js template that we are using needs a continuous series of timestamp data for each author. We are filling the existing modifications_per_authors_over_time DataFrame with the missing values. That means to add all quarters for all authors by introducing a new time_range index.
End of explanation
full_history = pd.DataFrame(
modifications_per_authors_over_time.reindex(time_range).fillna(0).unstack().reset_index()
)
full_history.head()
Explanation: To combine the new index with our existing DataFrame, we have to reindex the existing DataFrame and transform the data format.
End of explanation
full_history.columns = ["key", "date", "value"]
full_history = full_history.reindex(columns=["key", "value", "date"])
full_history.to_csv(FILENAME_PREFIX + "modifications" + FILENAME_SUFFIX + ".csv", index=False)
full_history.head()
Explanation: Then we adjust the column names and ordering to the given CSV format for the D3 template and export the data into a CSV file.
End of explanation
with open("vis/interactive_streamgraph_template.html", "r") as template:
    content = template.read()
content = content.replace("${FILENAME}", "modifications" + FILENAME_SUFFIX + ".csv")
with open(FILENAME_PREFIX + "modifications" + FILENAME_SUFFIX + ".html", "w") as output_file:
    output_file.write(content)
Explanation: Because we use a template in this example, we simply copy it and replace the CSV filename variable with the filename from above.
End of explanation
full_history_committers = full_history.copy()
full_history_committers['value'] = full_history_committers['value'].apply(lambda x: min(x,1))
full_history_committers.to_csv(FILENAME_PREFIX + "committer" + FILENAME_SUFFIX + ".csv", index=False)
with open("vis/interactive_streamgraph_template.html", "r") as template:
    content = template.read()
content = content.replace("${FILENAME}", "committer" + FILENAME_SUFFIX + ".csv")
with open(FILENAME_PREFIX + "committer" + FILENAME_SUFFIX + ".html", "w") as output_file:
    output_file.write(content)
full_history_committers.head()
Explanation: And that's it!
Result 1: Modifications over time
Here is the Stream Graph for the IntelliJ Community GitHub Project:
The visualization is also interactive. You can hover over one color to see a committer's "modifications trail".
You can find the interactive Stream Graph here (but beware, it loads 0.5 MB of data. This has to be improved in the future).
Result 2: Committers over time
Another interesting visualization with Stream Graphs is the number of committers over time. With this, you can see what the developer fluctuation in your project looked like.
To achieve this, we set the number 1 for each committer that has contributed code in a quarter. Because we are <strike>lazy</strike> efficient, we reuse the value column for that.
End of explanation |
6,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
linear algebra
Most of these notes correspond to the video lectures by Professor Gilbert Strang of MIT.
the geometry of linear equations
The fundamental problem of linear algebra is to solve a system of linear equations. We'll start with the case of $n$ equations and $n$ unknowns.
Here are two lines that cross somewhere in a plane; we're looking for the point where they cross.
$
\left\{
\begin{aligned}
2x - y &= 0 \\
-x + 2y &= 3
\end{aligned}
\right.
$
As a preview (and because it's so easy to do) we'll quickly skip to the matrix form and write down
Step1: We are interested in the value of $x$ where $2x = \frac{1}{2}x + 1\frac{1}{2}$ we can solve this pretty easily by saying that $2x - \frac{1}{2}x = 1\frac{1}{2}$ which simplifies to $1\frac{1}{2}x = 1\frac{1}{2}$. Now we can divide both sides by $1\frac{1}{2}$ and we'll end up with $x = 1$.
column picture
Let's take a look at the columns of the matrix form.
$x\begin{bmatrix}2 \ -1\end{bmatrix} + y\begin{bmatrix}-1 \ 2\end{bmatrix} = \begin{bmatrix}0 \ 3\end{bmatrix}$
The equation above is asking us to somehow combine the two vectors in the right amounts so we'll end up with a vector $\begin{bmatrix}0 \ 3\end{bmatrix}$.
We need to find the right linear combination of $x$ and $y$. We'll start by plotting the vectors.
Step2: From the row picture earlier we already know that the right combination is $x = 1$ and $y = 2$ so
Step3: Let's do a 3D example.
$
\left\{
\begin{aligned}
2x - &y &= 0 \\
-x + 2&y - z &= -1 \\
-3&y + 4z &= 4
\end{aligned}
\right.
$
We're in three dimensions with unknowns $x$, $y$ and $z$.
The matrix $A$ is $\begin{bmatrix}2 & -1 & 0\-1 & 2 & -1\0 & -3 & 4\end{bmatrix}$
And our right hand side $b$ is the vector $\begin{bmatrix}0 \ -1 \ 4\end{bmatrix}$
Looking at the row picture, when dealing with a $2 \times 2$ problem each row is a line in two dimensions. Each row in a $3 \times 3$ problem gives us a plane in three dimensions.
If we look at the column picture we get
Step4: Let's change the right hand side to something different so that we have
Step5: Now we'll look for the matrix $E_{32}$ which will fix the $A_{32}$ position. We need a matrix $E_{32}$ such that
Step6: Finally we can say
Step7: If we place the permutation matrix $P$ on the left we are doing row operations so we'll exchange the rows
Step8: However if we place the permutation matrix $P$ on the right side then we are doing column operations and end up exchanging the columns.
Step9: This also shows that we cannot just change the order of matrices when multiplying them without changing the result.
inverses
Now let's combine $E_{32}$ and $E_{21}$. We could just multiply them but there is a better way to do it. Let's think about it in a different way, instead of going from $A$ to $U$ how can we get from $U$ to $A$? For this we'll involve the concept of the inverse of a matrix.
Let's start with $E_{21}$ and figure out how we can undo the operation. What we need is a matrix $E_{21}^{-1}$ so that when we multiply that with $E_{21}$ we get back the identity matrix | Python Code:
f1 = lambda x: 2*x
f2 = lambda x: (1/2*x) + 1 + (1/2)
x = np.linspace(0, 3, 100)
plt.plot(x, f1(x), label=r'$y = 2x$')
plt.plot(x, f2(x), label=r'$y = \frac{1}{2}x + 1\frac{1}{2}$')
plt.legend(loc=4)
Explanation: linear algebra
Most of these notes correspond to the video lectures by Professor Gilbert Strang of MIT.
the geometry of linear equations
The fundamental problem of linear algebra is to solve a system of linear equations. We'll start with the case of $n$ equations and $n$ unknowns.
Here are two lines that cross somewhere in a plane; we're looking for the point where they cross.
$
\left\{
\begin{aligned}
2x - y &= 0 \\
-x + 2y &= 3
\end{aligned}
\right.
$
As a preview (and because it's so easy to do) we'll quickly skip to the matrix form and write down:
$
\begin{bmatrix}2 & -1 \ -1 & 2\end{bmatrix}
\begin{bmatrix}x \ y\end{bmatrix} =
\begin{bmatrix}0 \ 3\end{bmatrix}
$
Where $A = \begin{bmatrix}2 & -1 \ -1 & 2\end{bmatrix}$ $x = \begin{bmatrix}x \ y\end{bmatrix}$ and $b = \begin{bmatrix}0 \ 3\end{bmatrix}$.
We'll end up with $Ax = b$.
row picture
Let's start by plotting the equations.
$
\left\{
\begin{aligned}
2x - y &= 0 \implies y = 2x \\
-x + 2y &= 3 \implies 2y = x + 3 \implies y = \frac{1}{2}x + 1\frac{1}{2}
\end{aligned}
\right.
$
End of explanation
ax = plt.axes()
ax.set_xlim(-2, 3)
ax.set_ylim(-2, 4)
ax.arrow(0, 0, 2, -1, head_width=0.1, fc='g', ec='g', label='foo')
ax.arrow(0, 0, -1, 2, head_width=0.1, fc='b', ec='b')
ax.arrow(0, 0, 0, 3, head_width=0.1, fc='k', ec='k')
Explanation: We are interested in the value of $x$ where $2x = \frac{1}{2}x + 1\frac{1}{2}$ we can solve this pretty easily by saying that $2x - \frac{1}{2}x = 1\frac{1}{2}$ which simplifies to $1\frac{1}{2}x = 1\frac{1}{2}$. Now we can divide both sides by $1\frac{1}{2}$ and we'll end up with $x = 1$.
column picture
Let's take a look at the columns of the matrix form.
$x\begin{bmatrix}2 \ -1\end{bmatrix} + y\begin{bmatrix}-1 \ 2\end{bmatrix} = \begin{bmatrix}0 \ 3\end{bmatrix}$
The equation above is asking us to somehow combine the two vectors in the right amounts so we'll end up with a vector $\begin{bmatrix}0 \ 3\end{bmatrix}$.
We need to find the right linear combination of $x$ and $y$. We'll start by plotting the vectors.
End of explanation
ax = plt.axes()
ax.set_xlim(-0.5, 2.5)
ax.set_ylim(-2, 4)
ax.arrow(0, 0, 2, -1, head_width=0.0, fc='g', ec='g', label='foo')
ax.arrow(2, -1, -1, 2, head_width=0.1, fc='b', ec='b')
ax.arrow(0, 0, 0, 3, head_width=0.1, fc='k', ec='k')
Explanation: From the row picture earlier we already know that the right combination is $x = 1$ and $y = 2$ so:
$1\begin{bmatrix}2 \ -1\end{bmatrix} + 2\begin{bmatrix}-1 \ 2\end{bmatrix} = \begin{bmatrix}0 \ 3\end{bmatrix}$
We can plot and show how it works as well.
End of explanation
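As a quick numerical cross-check (a small sketch; the variable names A_2d and b_2d are ours), NumPy confirms the combination x = 1, y = 2:
A_2d = np.array([[2, -1], [-1, 2]])
b_2d = np.array([0, 3])
np.linalg.solve(A_2d, b_2d)  # expected: array([1., 2.])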
vec3 = lambda x,y,z: np.array((x,y,z))
A, B, C = vec3(2, -1, 0), vec3(-1, 2, -3), vec3(0, -1, 4)
x, y, z = 0, 0, 1
(x*A) + (y*B) + (z*C)
Explanation: Let's do a 3D example.
$
\left\{
\begin{aligned}
2x - &y &= 0 \\
-x + 2&y - z &= -1 \\
-3&y + 4z &= 4
\end{aligned}
\right.
$
We're in three dimensions with unknowns $x$, $y$ and $z$.
The matrix $A$ is $\begin{bmatrix}2 & -1 & 0\-1 & 2 & -1\0 & -3 & 4\end{bmatrix}$
And our right hand side $b$ is the vector $\begin{bmatrix}0 \ -1 \ 4\end{bmatrix}$
Looking at the row picture, when dealing with a $2 \times 2$ problem each row is a line in two dimensions. Each row in a $3 \times 3$ problem gives us a plane in three dimensions.
If we look at the column picture we get:
$x\begin{bmatrix}2\-1\0\end{bmatrix} + y\begin{bmatrix}-1\2\-3\end{bmatrix} + z\begin{bmatrix}0\-1\4\end{bmatrix} = \begin{bmatrix}0\-1\4\end{bmatrix}$
And we can already see that $x = 0$, $y = 0$ and $z = 1$. Of course we won't always be able to see it so easily though.
End of explanation
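The same kind of check works for the 3D system (again a sketch with our own variable names):
A_3d = np.array([[2, -1, 0], [-1, 2, -1], [0, -3, 4]])
b_3d = np.array([0, -1, 4])
np.linalg.solve(A_3d, b_3d)  # expected: array([0., 0., 1.])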
A = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
E_21 = np.array([[1, 0, 0], [-3, 1, 0], [0, 0, 1]])
np.dot(E_21, A)
Explanation: Let's change the right hand side to something different so that we have:
$x\begin{bmatrix}2\-1\0\end{bmatrix} + y\begin{bmatrix}-1\2\-3\end{bmatrix} + z\begin{bmatrix}0\-1\4\end{bmatrix} = \begin{bmatrix}1\1\-3\end{bmatrix}$
In this case we made up $b$ by taking the sum of the first two columns of $A$:
$b = \begin{bmatrix}2\-1\0\end{bmatrix} + \begin{bmatrix}-1\2\-3\end{bmatrix} = \begin{bmatrix}1\1\-3\end{bmatrix}$
And $x = 1$, $y = 1$ and $z = 0$. Which makes sense since $b$ is the sum of the $x$ and $y$ components.
matrix form
Now we can ask the question, can we solve $Ax = b$ for every $b$? Do the linear combinations of the columns fill three (or $n$) dimensional space? It depends on $A$. In the case of matrix $A$ above yes, because it's a nonsingular matrix and invertible matrix.
If all the vectors that make up $A$ are in the same plane we cannot compute $Ax = b$ for every $b$. We can compute $b$ for all the points that are in the plane but all those outside are unreachable. The matrix would be singular and not invertible.
matrix times vector
The basic equation we're dealing with is $Ax = b$ where $A$ is some kind of matrix that represents an operation and $V$ is a vector.
We can multiply them as columns. This basically takes the components of vector $x$ as scalars for the column vectors in matrix $A$.
$\begin{bmatrix}2 & 5\1 & 3\end{bmatrix}\begin{bmatrix}1\2\end{bmatrix} = 1\begin{bmatrix}2\1\end{bmatrix} + 2\begin{bmatrix}5\3\end{bmatrix} = \begin{bmatrix}12\7\end{bmatrix}$
You can also do it by doing it a row at a time which is also known as the dot product:
$
\begin{bmatrix}(2 \cdot 1) + (5 \cdot 2) \(1 \cdot 1) + (3 \cdot 2)\end{bmatrix} = \begin{bmatrix}12\7\end{bmatrix}
$
We can also say that $Ax$ is a combination of vector $x$ and the columns of matrix $A$.
elimination with matrices
Below is a system of equations that we will use as an example.
$
\left\{
\begin{aligned}
x + 2&y + z = 2 \\
3x + 8&y + z = 12 \\
4&y + z = 2
\end{aligned}
\right.
$
With these equations we can already write down $A$ and $b$ as well:
$A = \begin{bmatrix}1 & 2 & 1\3 & 8 & 1\0 & 4 & 1\end{bmatrix}$ and $b = \begin{bmatrix}2\12\2\end{bmatrix}$
Our system to solve is $Ax = b$
The first step of elimination will be to multiply the first equation with the right multiplier and then substract it from the second equation. Our purpose is to eliminate the $x$ part of equation two.
We'll start at the top left at $A_{11}$ of the matrix, this is the first pivot and we're looking for our multiplier. In order to get rid of the $3$ in the second row we'er gonna multiply the first row (the first equation) with $3$ and then subtract that from the second row (the second equation):
$\begin{bmatrix}3 & 8 & 1\end{bmatrix} - 3 \cdot \begin{bmatrix}1 & 2 & 1\end{bmatrix} = \begin{bmatrix}0 & 2 & -2\end{bmatrix}$
Our first row will not change (it's the pivot row) but now we end up with:
$\begin{bmatrix}1 & 2 & 1\0 & 2 & -2\0 & 4 & 1\end{bmatrix}$
But what about the right side? Well, that gets carried along (actually matlab will finish with the left side before taking care of the right side) so we'll fill that in later.
So we finished taking care of $A_{21}$ and the next step is actually to finish the column and take care of $A_{31}$ but since we already have a $0$ there we can skip it.
The next step will be to take care of the second pivot $A_{22}$. If we look at $A_{32}$ we see that the multiplier is $\frac{A_{32}}{A_{22}} = \frac{4}{2} = 2$. So we repeat the process by multiplying our pivot row with that value and then substracting the result from the third row:
$\begin{bmatrix}0 & 4 & 1\end{bmatrix} - 2 \cdot \begin{bmatrix}0 & 2 & -2\end{bmatrix} = \begin{bmatrix}0 & 0 & 5\end{bmatrix}$
We'll end up with:
$\begin{bmatrix}1 & 2 & 1\0 & 2 & -2\0 & 0 & 5\end{bmatrix}$
We found our final pivot $A_{33}$ with value $5$.
Also note, pivots cannot be zero. If we end up with a zero value in the pivot position we can try to exchange rows if there's a non-zero value below it.
back substitution
Let's create an augmented matrix $A_{aug}$ with $b$ tacked on.
$A_{aug} = \begin{bmatrix}1 & 2 & 1 & 2\3 & 8 & 1 & 12\0 & 4 & 1 & 2\end{bmatrix}$
During the first step we subtracted 3 times the first equation from the second equation:
$\begin{bmatrix}1 & 2 & 1 & 2\0 & 2 & -2 & 6\0 & 4 & 1 & 2\end{bmatrix}$
During the second step we subtracted 2 times the second equation from the third equation:
$\begin{bmatrix}1 & 2 & 1 & 2\0 & 2 & -2 & 6\0 & 0 & 5 & -10\end{bmatrix}$
Now in the matrix above, $U$ is what happens to $A$ and $c$ is what happens to $b$:
$U = \begin{bmatrix}1 & 2 & 1\0 & 2 & -2\0 & 0 & 5\end{bmatrix}$ and $c = \begin{bmatrix}2\6\-10\end{bmatrix}$
Writing $Ux = c$ as equations we get:
$
\left{
\begin{aligned}
x + 2y + z &= 2 \
2y + -2z &= 6 \
5z &= -10
\end{aligned}
\right.
$
In order to solve this we start with $z$. We can immediately see that the correct value is $-2$:
$5z = -10 \implies z = \frac{-10}{2} = -2$
Now that we know $z$ we can go back one row up and plug in our value. We get:
$2y + (-2 \cdot -2) = 6 \implies y = \frac{6 - 4}{2} = 1$
And finally now that we know $y$ we can go back up once more and calculate the first row:
$x + (2 \cdot 1) + -2 = 2 \implies x = 2 - 2 + 2 = 2$
Back substitution is solving the equations in reverse order because the system is triangular.
matrices
What we would like to do now is to express the elimination steps as matrices. Remember when we write something such as:
$
\begin{bmatrix}
A_{11} & A_{12} & A_{13} \
A_{21} & A_{22} & A_{23} \
A_{31} & A_{32} & A_{33}
\end{bmatrix}
\begin{bmatrix}x_1 \x_2 \x_3\end{bmatrix}
$
Then the result will be a combination of the columns of matrix $A$ and the scalars in vector $x$ so we get:
$
x_1\begin{bmatrix}A_{11}\A_{21}\A_{31}\end{bmatrix} + x_2\begin{bmatrix}A_{12}\A_{22}\A_{32}\end{bmatrix} +
x_3\begin{bmatrix}A_{13}\A_{23}\A_{33}\end{bmatrix}
$
A matrix times a column vector will result in a column. However, when we write:
$
\begin{bmatrix}x_1 & x_2 & x_3\end{bmatrix}
\begin{bmatrix}
A_{11} & A_{12} & A_{13} \
A_{21} & A_{22} & A_{23} \
A_{31} & A_{32} & A_{33}
\end{bmatrix}
$
Then, we're multiplying a matrix with a row vector and the result will be a row:
$
x_1\begin{bmatrix}A_{11}&A_{12}&A_{13}\end{bmatrix} +
x_2\begin{bmatrix}A_{21}&A_{22}&A_{23}\end{bmatrix} +
x_3\begin{bmatrix}A_{31}&A_{32}&A_{33}\end{bmatrix}
$
So now we'll look for the matrix that represents the first elimination step. We need a matrix that subtracts three times row one from row two and leaves the other rows the same. In other words, we need a matrix $E_{21}$ (the elimination matrix for position $A_{21}$) so that:
$E_{21}\begin{bmatrix}1&2&1\3&8&1\0&4&1\end{bmatrix} = \begin{bmatrix}1&2&1\0&2&-2\0&4&1\end{bmatrix}$
We know the first row will not change. This means that for the first row we want one of the first row and none of the others: $\begin{bmatrix}1&0&0\end{bmatrix}$.
The last row is easy as well, we want one of the last row and zero of the others: $\begin{bmatrix}0&0&1\end{bmatrix}$.
Now finally the center row: $\begin{bmatrix}-3&1&0\end{bmatrix}$ because we want to subtract three times the first row (that's where the $-3$ comes from) and just $1$ time the second row. We end up with the following matrix:
$E_{21} = \begin{bmatrix}1&0&0\-3&1&0\0&0&1\end{bmatrix}$
End of explanation
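Before moving on, a quick sanity check (a sketch reusing the matrix A defined above): solving the system numerically should reproduce the back-substitution result $x = 2$, $y = 1$, $z = -2$.
b = np.array([2, 12, 2])
np.linalg.solve(A, b)  # expected: array([ 2.,  1., -2.])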
E_32 = np.array([[1, 0, 0], [0, 1, 0], [0, -2, 1]])
np.dot(E_32, np.dot(E_21, A))
Explanation: Now we'll look for the matrix $E_{32}$ which will fix the $A_{32}$ position. We need a matrix $E_{32}$ such that:
$E_{32}\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix} = \begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}$
We already know the first and second row will not change so we only need to look for the third row. We know we want one of the last row and $-2$ times the second row so we can write down:
$E_{32} = \begin{bmatrix}1&0&0\\0&1&0\\0&-2&1\end{bmatrix}$
End of explanation
M = np.array([[1,2],[3,4]])
M
Explanation: Finally we can say: $E_{32}(E_{21}A) = U$ and what we want is one matrix that combines $E_{32}$ and $E_{21}$.
As long as we keep the matrices in order we can move the parenthesis: $(E_{32}E_{21})A = U$ so that we end up with a single matrix $E = E_{32}E_{21}$.
This is made possible due to the law of associativity.
permutation matrix
We didn't need it in this case but there's another elemental matrix called the permutation matrix which we can use to exchange rows. For example, if we wanted to exchange rows one and two of a matrix we could do this with a permutation matrix $P$ so that:
$P\begin{bmatrix}a&b\c&d\end{bmatrix} = \begin{bmatrix}c&d\a&b\end{bmatrix}$ where $P = \begin{bmatrix}0&1\1&0\end{bmatrix}$
End of explanation
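As a brief sketch of that single combined matrix (reusing E_21, E_32 and A from above; the name E is ours):
E = np.dot(E_32, E_21)
np.dot(E, A)  # reproduces U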
P = np.array([[0,1],[1,0]])
np.dot(P, M)
Explanation: If we place the permutation matrix $P$ on the left we are doing row operations so we'll exchange the rows:
End of explanation
np.dot(M, P)
Explanation: However if we place the permutation matrix $P$ on the right side then we are doing column operations and end up exchanging the columns.
End of explanation
E_inv = np.array([[1, 0, 0], [3, 1, 0], [0, 0, 1]])
np.dot(E_inv, E_21)
Explanation: This also shows that we cannot just change the order of matrices when multiplying them without changing the result.
inverses
Now let's combine $E_{32}$ and $E_{21}$. We could just multiply them but there is a better way to do it. Let's think about it in a different way, instead of going from $A$ to $U$ how can we get from $U$ to $A$? For this we'll involve the concept of the inverse of a matrix.
Let's start with $E_{21}$ and figure out how we can undo the operation. What we need is a matrix $E_{21}^{-1}$ so that when we multiply that with $E_{21}$ we get back the identity matrix:
$E_{21}^{-1}E_{21} = \begin{bmatrix}1&0&0\0&1&0\0&0&1\end{bmatrix}$
We end up with:
$\begin{bmatrix}1&0&0\3&1&0\0&0&1\end{bmatrix}\begin{bmatrix}1&0&0\-3&1&0\0&0&1\end{bmatrix} = \begin{bmatrix}1&0&0\0&1&0\0&0&1\end{bmatrix}$
End of explanation |
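As a final cross-check (sketch), NumPy's inverse of E_21 should match the E_inv matrix used above:
np.linalg.inv(E_21)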
6,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1"><a href="#Function-Optimization"><span class="toc-item-num">1 </span>Function Optimization</a></div><div class="lev2"><a href="#scipy.optimize.fsolve"><span class="toc-item-num">1.1 </span>scipy.optimize.fsolve</a></div><div class="lev1"><a href="#Conclusion"><span class="toc-item-num">2 </span>Conclusion</a></div>
# Function Optimization
Problem
Step1: scipy.optimize.fsolve | Python Code:
from IPython.display import display
import pandas as pd
# data
data = pd.DataFrame([
[10, 300],
[20, 200],
[30, 100],
[40, 400]
], columns=['QTY', 'UNIT.V'],
index=['A', 'B', 'C', 'D'])
display(data)
def gain(unit_v, qty):
    return unit_v*qty*0.1
data['GAIN'] = data.apply(lambda v: gain(v['UNIT.V'], v['QTY']), axis=1)
display(data)
def gain_1_2(var_qty, unit_v):
    return unit_v*var_qty*0.12
Explanation: Table of Contents
<p><div class="lev1"><a href="#Function-Optimization"><span class="toc-item-num">1 </span>Function Optimization</a></div><div class="lev2"><a href="#scipy.optimize.fsolve"><span class="toc-item-num">1.1 </span>scipy.optimize.fsolve</a></div><div class="lev1"><a href="#Conclusion"><span class="toc-item-num">2 </span>Conclusion</a></div>
# Function Optimization
Problem:
We have an order with the quantity of each product and its unit value.
We know that we earn 10% on each product sold; what should the quantities be if we instead earn 12% per product but want to keep the same totals?
|PRODUCT|QTY|UNIT.V|GAIN|QTY_X|
|:-:|:-:|:----:|:--------:|:--------:|
|A|10 | 300 |300 | ?|
|B|20 | 200 |400 | ?|
|C|30 | 100 |300 | ?|
|D|40 | 400 |1600| ?|
End of explanation
from scipy.optimize import fsolve
help(fsolve)
# fsolve finds roots, so we solve gain_1_2(qty_x, unit_v) - target_gain = 0.
# Example for product A (UNIT.V = 300, 10% gain = 300):
fsolve(
    lambda qty_x: gain_1_2(qty_x, 300) - 300, [1]
)
Explanation: scipy.optimize.fsolve
End of explanation |
6,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
8. Calibration
Previous
Step1: Import section specific modules
Step2: 8.1 Calibration as a Least Squares Problem <a id='cal
Step3: We first need to set the hour angle range of our observation and the declination of our field center.
Step4: Our hour angle range is from -6h to 6h, and our declination is set to $60^{\circ}$.
As the earth rotates the antennas trace out $uv$-tracks (ellipses) as shown in the code fragment below, where the red tracks are due to baseline $pq$ and blue tracks are due to baseline $qp$. We can construct these $uv$-tracks by using Eq. 8.1 ⤵<!--\ref{cal
Step5: We can also pack the $uv$-coverage into a 2D-matrix. We denote the rows of this matrix with $p$ and the columns with $q$. The $pq$-th entry denotes the $uv$-track associated with baseline $pq$. The reason for packing the visibilities into a 2D structure will become apparent in Sec. 8.1.2 ⤵<!--\ref{cal
Step6: 8.1.2. Unpolarized Calibration <a id='cal
Step7: We now use create_vis_mat to create an example $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$. Note that
there are two sources in our sky model.
Step8: We now plot the baseline entries of $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$.
Step9: The images above contain the real part of the corrupted (green) and uncorrupted (blue)
visibilities as a function of timeslots for baseline 01, 02 and 12 respectively.
8.1.4 Levenberg-Marquardt (create_G_LM) <a id='cal
Step10: We are now able to define a wrapper function create_G_LM that in turn calls optimize.leastsq.
The wrapper function translates the calibration problem into a format that optimize.leastsq
can interpret. The input of create_G_LM is $\boldsymbol{\mathcal{D}}$ and $\boldsymbol{\mathcal{M}}$, while the output is $\mathbf{g}$ and $\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$.
Step11: We may now calibrate $\boldsymbol{\mathcal{D}}$ by using create_G_LM.
Step12: The above function works by vectorizing the real and imaginary part of $\boldsymbol{\mathcal{D}}$ and
storing the result in $\mathbf{d}$. The vector $\mathbf{m}$ is generated in a similar manner.
The error vector $\mathbf{r}$ is calculated by err_func. We initialize $\breve{\mathbf{g}}$ with
$\breve{\mathbf{g}}_0=[\mathbf{1},\mathbf{0}]$. We can then call
optimize.leastsq(self.err_func, g_0, args=(d, m)).
We can now calculate $\mathbf{g} = \breve{\mathbf{g}}_U+\imath\breve{\mathbf{g}}_L$ and
$\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$. This is repeated for each observational time-slot.
8.1.5 Corrected Visibilities <a id='cal
Step13: We plot the corrected visibilities below. Note that the model and corrected visibilities align well, implying that calibration was successfull. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
8. Calibration
Previous: 8. Calibration
Next: 8.2 1GC calibration
Import standard modules:
End of explanation
from scipy import optimize
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
lam = 3e8/1.4e9 #observational wavelength
print "lam = ",lam
b = np.array([100,200,300])/lam
print "b [wavelengths] = ",b
plt.plot(np.array([0,100,200]),np.array([0,0,0]),'ro')
plt.xlim([-250,250])
plt.ylim([-100,100])
plt.xlabel("East-West [m]", fontsize=18)
plt.ylabel("North-South [m]", fontsize=18)
plt.title("ENU-coordinates of three element interferometer.", fontsize=18)
plt.show()
Explanation: 8.1 Calibration as a Least Squares Problem <a id='cal:sec:cal_ls'></a> <!--\label{cal:sec:cal_ls}-->
In this section we discuss the procedure that is generally used in practice to perform calibration. We will use the unpolarized RIME in this section instead of the full-polarized RIME (see $\S$ 7 ➞). It provides us with a much simpler framework with which we can grasp the basics of calibration. Moreover, we assume for the sake of simplicity that the observed data are only corrupted by the instrument's antenna gains. This assumption results in a idealised treatment as there are many other factors that do in fact corrupt radio interferometric data (see $\S$ 7 ➞).
The unpolarized RIME is given by the following:
<p class=conclusion>
<font size=4> <b>Unpolarized RIME</b></font>
<br>
\begin{equation}
d_{pq}(t) = g_p(t) g_q^*(t) \tilde{d}_{pq}(t) + \epsilon_{pq}(t),
\end{equation}
</p>
where $d_{pq}(t)$ and $\tilde{d}_{pq}(t)$ denote the corrupted observed and uncorrupted visibility at time $t$ associated with baseline $pq$. Moreover, the factors $g_p$ and $g_q$
denote the complex gains of antennas $p$ and $q$. The term $\epsilon_{pq}$ is a zero mean (Gaussian)
noise term, representing thermal noise.
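As a toy numeric illustration of this measurement equation (all numbers below are made up for illustration):
python
import numpy as np
g_p, g_q = 1.2 + 1.3j, 1.1 - 1.5j                       # illustrative complex antenna gains
d_true = 1.0 + 0.0j                                      # uncorrupted visibility for baseline pq
eps = 0.01*(np.random.randn() + 1j*np.random.randn())    # zero mean noise term
d_obs = g_p*np.conj(g_q)*d_true + eps                    # corrupted, observed visibility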
We begin this section by generating the $uv$-tracks of a fictitious instrument in $\S$ 8.1.1 ⤵<!--\ref{cal:sec:uv}-->. In $\S$ 8.1.2 ⤵<!--\ref{cal:sec:RIME_un}--> we phrase the calibration problem (for the antenna gains) as a least squares minimization problem. Then in $\S$ 8.1.3 ⤵<!--\ref{cal:sec:sim}--> we simulate "realistic" visibility data for the $uv$-tracks by including gain errors and adding noise to the resulting visibilities (similar to adding noise to a simple sinusoid as seen in $\S$ 2.11 ➞). We then vectorize the problem in $\S$ 8.1.4 ⤵<!--\ref{cal:sec:LM}-->, enabling us to use the built in scipy Levenberg-Marquardt algorithm to calibrate the data produced in $\S$ 8.1.3 ⤵<!--\ref{cal:sec:sim}-->. We implement the aforementioned steps via a wrapper ipython function called create_G_LM. We finish $\S$ 8.1.4 ⤵<!--\ref{cal:sec:LM}--> by using create_G_LM to estimate the antenna gains corrupting the simulated data we produced in $\S$ 8.1.3 ⤵<!--\ref{cal:sec:sim}-->. The estimated antenna gains are then used to correct the corrupted data in $\S$ 8.1.5 ⤵<!--\ref{cal:sec:cor}-->.
8.1.1 Creating $uv$-Tracks: East-West Interferometer <a id='cal:sec:uv'></a> <!--\label{cal:sec:uv}-->
We know from $\S$ 4.4.1.B.3 ➞ that when we work with an east-west interferometer things simplify to a large degree. Firstly: $XYZ = [0~|\mathbf{b}|~0]^T$, where $|\mathbf{b}|$ is the baseline length.
Moreover, we have that:
<p class=conclusion>
<font size=4> <b>$uv$-Coverage of an EW-array (8.1)</b></font>
<br>
\begin{eqnarray}
\\
u &=&| \mathbf{b}|\cos H\\
v &=& |\mathbf{b}|\sin H \sin \delta,
\end{eqnarray}
</p>
<a id='cal:eq:uv_cov'></a> <!--\label{cal:eq:uv_cov}-->
where $H$ is the hour angle of the field center and $\delta$ its declination. In this section we will be plotting the $uv$-coverage of a three element east-west interferometer.
The ENU layout of a simple interferometer is given below. Note that $|\mathbf{b}|$ is measured in wavelengths.
Now consider an array made up of three antennas situated at 0, 100, 200 meters east of
the array center as shown in the code fragment below.
End of explanation
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
delta = 60*(np.pi/180) #Declination in radians
Explanation: We first need to set the hour angle range of our observation and the declination of our field center.
End of explanation
u = np.zeros((len(b),len(H)))
v = np.zeros((len(b),len(H)))
for k in xrange(len(b)):
u[k,:] = b[k]*np.cos(H)
v[k,:] = b[k]*np.sin(H)*np.sin(delta)
plt.plot(u[k,:],v[k,:],"r")
plt.plot(-u[k,:],-v[k,:],"b")
plt.xlabel("$u$ [rad$^{-1}$]", fontsize=18)
plt.ylabel("$v$ [rad$^{-1}$]", fontsize=18)
plt.title("$uv$-Coverage of three element interferometer", fontsize=18)
plt.show()
Explanation: Our hour angle range is from -6h to 6h, and our declination is set to $60^{\circ}$.
As the earth rotates the antennas trace out $uv$-tracks (ellipses) as shown in the code fragment below, where the red tracks are due to baseline $pq$ and blue tracks are due to baseline $qp$. We can construct these $uv$-tracks by using Eq. 8.1 ⤵<!--\ref{cal:eq:uv_cov}-->.
End of explanation
u_m = np.zeros((len(b),len(b),len(H)))
v_m = np.zeros((len(b),len(b),len(H)))
u_m[0,1,:] = u[0,:] #the first two entries denote p and q and the third index denotes time
u_m[1,2,:] = u[1,:]
u_m[0,2,:] = u[2,:]
v_m[0,1,:] = v[0,:]
v_m[1,2,:] = v[1,:]
v_m[0,2,:] = v[2,:]
Explanation: We can also pack the $uv$-coverage into a 2D-matrix. We denote the rows of this matrix with $p$ and the columns with $q$. The $pq$-th entry denotes the $uv$-track associated with baseline $pq$. The reason for packing the visibilities into a 2D structure will become apparent in Sec. 8.1.2 ⤵<!--\ref{cal:sec:RIME_un}-->.
End of explanation
'''Creates the observed visibilities
point_sources - skymodel of point sources - (amplitude, l, m)
u_m - the u coordinates of observation (packed in a 2D structure)
v_m - the v coordinates of observation (packed in a 2D structure)
g - the antenna gain error vector
sig - the noise
'''
def create_vis_mat(point_sources,u_m,v_m,g,sig):
D = np.zeros(u.shape)
G = np.diag(g)
#Step 1: Create Model Visibility Matrix
for k in xrange(len(point_sources)): #for each point source
l_0 = point_sources[k,1]
m_0 = point_sources[k,2]
D = D + point_sources[k,0]*np.exp(-2*np.pi*1j*(u_m*l_0+v_m*m_0))
for t in xrange(D.shape[2]): #for each time-step
#Step 2: Corrupting the Visibilities
D[:,:,t] = np.dot(G,D[:,:,t])
D[:,:,t] = np.dot(D[:,:,t],G.conj())
#Step 3: Adding Noise
D[:,:,t] = D[:,:,t] + sig*np.random.randn(u_m.shape[0],u_m.shape[1]) \
+ sig*np.random.randn(u_m.shape[0],u_m.shape[1])*1j
return D
Explanation: 8.1.2. Unpolarized Calibration <a id='cal:sec:RIME_un'></a> <!--\label{cal:sec:RIME_un}-->
As explained in $\S$ 7.2 ➞ the RIME assumes that our observed signal is polarized. For the sake
of simplicity, however, we will now introduce the calibration problem with the underlying assumption that the observed signal is unpolarized. Unpolarized calibration is achieved by solving the following minimization problem:
<p class=conclusion>
<font size=4> <b>Unpolarized Calibration</b></font>
<br>
\begin{equation}
\min_{\boldsymbol{\mathcal{G}}} \left \| \boldsymbol{\mathcal{D}} - \boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H \right \|,
\end{equation}
</p>
where
* $\boldsymbol{\mathcal{D}}$ is the observed visibility matrix. Each entry, which we denote by $d_{pq}$, represents the visibility measured by the baseline formed by antennas $p$ and $q$.
* $\boldsymbol{\mathcal{M}}$ is the model visibility matrix. The entry $m_{pq}$ of $\boldsymbol{\mathcal{M}}$ denotes a true or model visibility which was created with the calibration sky model and a $uv$-point on the $uv$-track associated with baseline $pq$.
* $\boldsymbol{\mathcal{G}} = \textrm{diag}(\mathbf{g})$ is the antenna gain matrix, where $\mathbf{g}=[g_1,g_2,\cdots,g_N]^T$ denotes the antenna gain vector. The operator $\textrm{diag}(\cdot)$ forms a diagonal matrix from a vector by putting the elements of the vector on the main diagonal. The vector $\mathbf{g}$ represents the instrumental response of the antennas, i.e. the complex antenna gains. These antenna gains are chosen in such a way that they minimize the difference between the observed and model visibilities.
* $\boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H$ is the predicted visibility matrix. This matrix contains the model visibilities after the antenna gains have been applied to them.
The superscript $(\cdot)^H$ denotes the Hermitian or conjugate transpose and $\left \| \cdot \right \|$ denotes the norm used. Most calibration algorithms use the Frobenius norm for matrices and the 2-norm or Euclidean norm for vectors, thus treating calibration as a least squares problem.<br><br>
<div class=warn>
<b>Warning:</b> Do not get confused with the polarized and unpolarized RIME. We use
the notation $\mathbf{V}_{pq}\in\mathbb{C}^{2\times 2}$ to denote the observed correlation matrix corresponding to the antenna feeds $XX,YY,XY$ and $YX$ of antenna $p$ and $q$. We use the notation $\boldsymbol{\mathcal{D}}\in\mathbb{C}^{N\times N}$ to denote the unpolarized observed visibility matrix which contains the observed scalar visibilities of all the antenna pairs.
</div>
<br><br>
<div class=advice>
<b>Advice:</b> The unpolarized calibration equation above is equivalent to the following more familiar form: $\min_{\boldsymbol{g}}\sum_{pq}\left|d_{pq}-g_pg_q^*m_{pq}\right|^2$.
</div>
<br>
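As a small numerical sanity check that the matrix form and the per-baseline sum in the Advice note agree (the gains and visibilities below are made-up illustrative values):
python
import numpy as np
N = 3
g = np.array([1.2+1.3j, 1.1-1.5j, -1.3+0.7j])
M_ex = np.random.randn(N,N) + 1j*np.random.randn(N,N)
D_ex = np.random.randn(N,N) + 1j*np.random.randn(N,N)
G = np.diag(g)
frob = np.linalg.norm(D_ex - G.dot(M_ex).dot(G.conj().T))**2
summed = sum(abs(D_ex[p,q] - g[p]*np.conj(g[q])*M_ex[p,q])**2 for p in range(N) for q in range(N))
print(np.allclose(frob, summed))   # True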
8.1.3. Creating an Unpolarized Visibility Matrix (create_vis_mat) <a id='cal:sec:sim'></a> <!--\label{cal:sec:sim}-->
In this section we present a function that allows us to create the observed visibility matrix $\boldsymbol{\mathcal{D}}$ and
the model visibility matrix $\boldsymbol{\mathcal{M}}$. The function employs three separate
steps to produce a visibility matrix, namely:
We first take the Fourier transform of the skymodel and then sample the result using the
sampling function (i.e. $uv$-coverage). The sky model can only consist of point sources. Mathematically we may represent our sky model as $I(l,m) = \sum_k A_k\delta(l-l_k,m-m_k)$, where $A_k$ denotes the flux of the $k$-th source and $(l_k,m_k)$ denotes the direction cosine position vector that is associated with the $k$-th source. We then have that
$V(u,v) = \mathscr{F}\{I(l,m)\} = \sum_k A_k e^{-2\pi \imath (l_k u + m_k v)}$, where $\mathscr{F}\{\cdot\}$ denotes the Fourier transform of its operand. This result stems from the fact that the Fourier transform of a delta function is a complex exponential. If we now apply the sampling function we finally obtain $V_{pq}(u_{pq},v_{pq}) = \sum_k A_k e^{-2\pi \imath (l_k u_{pq} + m_k v_{pq})}$. We now use $V_{pq}$ to construct a 2D model visibility matrix. The skymodel is passed to the function via the variable point_sources. The sampling function is passed in via u_m and v_m.
We then corrupt the visibilities with the antenna gains that were passed into the function via g. We use g to construct $\boldsymbol{\mathcal{G}}$. We corrupt our visibilites by multiplying by $\boldsymbol{\mathcal{G}}$ on the left of the model visibility matrix and on the right by $\boldsymbol{\mathcal{G}}^H$.
The last step is to add some noise to our visibilities. The standard deviation of the noise is passed in via sig.
It should now be obvious how we can use the same function to produce both $\boldsymbol{\mathcal{M}}$ and
$\boldsymbol{\mathcal{D}}$. In the case of $\boldsymbol{\mathcal{M}}$, we do not corrupt our visibilities, nor add any noise. See the function create_vis_mat below.
End of explanation
point_sources = np.array([(1,0,0),(0.5,(1*np.pi)/180,(0*np.pi)/180)]) #l and m are measured in radians
g = np.array([1.2+1.3j,1.1-1.5j,-1.3+0.7j])
sig = 0.1
D = create_vis_mat(point_sources,u_m,v_m,g,sig) #we corrupt our data and we add noise
M = create_vis_mat(point_sources,u_m,v_m,np.array([1,1,1]),0) #no corruption and no noise
Explanation: We now use create_vis_mat to create an example $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$. Note that
there are two sources in our sky model.
End of explanation
fig = plt.figure()
timeslots = np.cumsum(np.ones((len(M[0,1,:]),)))
#We only plot the real part of visibilities
#Plotting Baseline 01
ax = plt.subplot("311")
ax.set_title("$m_{01}$ (blue) and $d_{01}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,1,:].real)
ax.plot(timeslots,D[0,1,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 02
ax = plt.subplot("312")
ax.set_title("$m_{02}$ (blue) and $d_{02}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,2,:].real)
ax.plot(timeslots,D[0,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 12
ax = plt.subplot("313")
ax.set_title("$m_{12}$ (blue) and $d_{12}$ (green)", fontsize=18)
ax.plot(timeslots,M[1,2,:].real)
ax.plot(timeslots,D[1,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
plt.tight_layout()
plt.show()
Explanation: We now plot the baseline entries of $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{\mathcal{D}}$.
End of explanation
'''Unpolarized direction independent calibration entails finding the G that minimizes ||D-GMG^H||.
This function evaluates D-GMG^H.
g is a vector containing the real and imaginary components of the antenna gains.
d is a vector containing a vectorized D (observed visibilities), real and imaginary.
m is a vector containing a vectorized M (predicted), real and imaginary.
r is a vector containing the residuals.
'''
def err_func(g,d,m):
Nm = len(d)/2
N = len(g)/2
G = np.diag(g[0:N]+1j*g[N:])
D = np.reshape(d[0:Nm],(N,N))+np.reshape(d[Nm:],(N,N))*1j #matrization
M = np.reshape(m[0:Nm],(N,N))+np.reshape(m[Nm:],(N,N))*1j
T = np.dot(G,M)
T = np.dot(T,G.conj())
R = D - T
r_r = np.ravel(R.real) #vectorization
r_i = np.ravel(R.imag)
r = np.hstack([r_r,r_i])
return r
Explanation: The images above contain the real part of the corrupted (green) and uncorrupted (blue)
visibilities as a function of timeslots for baseline 01, 02 and 12 respectively.
8.1.4 Levenberg-Marquardt (create_G_LM) <a id='cal:sec:LM'></a> <!--\label{cal:sec:LM}-->
We are now ready to use least squares to calibrate $\boldsymbol{\mathcal{D}}$ (see <cite data-cite='Yatawatta2012'>GPU accelerated nonlinear optimization in radio interferometric calibration</cite> ⤴).
We first present a brief review of least squares minimization. Suppose we wish to fit a model $\mathbf{f}\left( \mathbf{m},\breve{\mathbf{g}}\right)$, where $\mathbf{m}$ and $\breve{\mathbf{g}}$ denote
the model input values and a vector of unknown variables respectively, to some data $\left\{\mathbf{d}_{i},\mathbf{m}_{i}\right\}$. The vector of unknown variables $\breve{\mathbf{g}}$ parametrizes the model. A standard method for determining which parameter vector $\breve{\mathbf{g}}$ best fits the data is to minimize the sum of the squared residuals. This technique is referred to as least squares minimization. The residual vector is denoted by $\mathbf{r}(\mathbf{m},\mathbf{d},\breve{\mathbf{g}}) = \mathbf{d} - \mathbf{f}\left( \mathbf{m},\breve{\mathbf{g}}\right)$. The objective function (the function we wish to minimize) associated with least squares is: $\sum_i \mathbf{r}_i^2$. The function optimize.leastsq is scipy's built-in least squares solver and employs the Levenberg-Marquardt algorithm in the background. The Levenberg-Marquardt algorithm is discussed in more detail in $\S$ 2.11 ➞. To use optimize.leastsq one needs a function, here called err_func, that calculates the residual vector $\mathbf{r}$. An initial guess of the parameter vector $\breve{\mathbf{g}}$ is also required.
For calibration the above variables become:
<p class=conclusion>
<font size=4> <b>Vectorizing</b></font>
<br>
<br>
• $\mathbf{d} = [\textrm{vec}(\Re\{\boldsymbol{\mathcal{D}}\}),\textrm{vec}(\Im\{\boldsymbol{\mathcal{D}}\})]$ <br><br>
• $\mathbf{m} = [\textrm{vec}(\Re\{\boldsymbol{\mathcal{M}}\}),\textrm{vec}(\Im\{\boldsymbol{\mathcal{M}}\})]$ <br><br>
• $\breve{\mathbf{g}} = [\Re\{\mathbf{g}\},\Im\{\mathbf{g}\}]$ <br><br>
• $\mathbf{f}\left(\mathbf{m},\breve{\mathbf{g}}\right) = [\textrm{vec}(\Re\{\boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H\}),\textrm{vec}(\Im\{\boldsymbol{\mathcal{G}}\boldsymbol{\mathcal{M}}\boldsymbol{\mathcal{G}}^H\})]$, where
$\boldsymbol{\mathcal{M}} = \textrm{vec}^{-1}(\mathbf{m}_U)+\imath\textrm{vec}^{-1}(\mathbf{m}_L)$ and $\boldsymbol{\mathcal{G}} = \textrm{diag}(\breve{\mathbf{g}}_U)+\imath\textrm{diag}(\breve{\mathbf{g}}_L)$
</p>
In the above bullets $\textrm{vec}(\cdot)$, $\textrm{vec}^{-1}(\cdot)$, $(\cdot)_U$,
and $(\cdot)_L$ denote vectorization, matrization, the upper half of
a vector and the lower half of a vector respectively. Moreover, $\Re\{\cdot\}$ and $\Im\{\cdot\}$ denote the real and imaginary part of their operands.
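As a minimal, generic illustration of the optimize.leastsq calling convention used below (a toy straight-line fit that is unrelated to the calibration data; all values are illustrative):
python
import numpy as np
from scipy import optimize
x_toy = np.linspace(0, 1, 50)
y_toy = 2.0*x_toy + 1.0 + 0.05*np.random.randn(50)   # noisy line with slope 2 and intercept 1
def toy_residuals(params, x, y):
    a, b = params
    return y - (a*x + b)                             # the residual vector r
params_fit = optimize.leastsq(toy_residuals, [0.0, 0.0], args=(x_toy, y_toy))[0]
print(params_fit)                                    # approximately [2.0, 1.0]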
The first thing we need to define in order to perform calibration by using optimize.leastsq is the function err_func, which we do below.
End of explanation
'''This function finds argmin G ||D-GMG^H|| using Levenberg-Marquardt. It uses scipy's optimize.leastsq to perform
the actual minimization.
D is your observed visibilities matrix.
M is your predicted visibilities.
g the antenna gains.
G = gg^H.'''
def create_G_LM(D,M):
N = D.shape[0] #number of antennas
temp =np.ones((D.shape[0],D.shape[1]) ,dtype=complex)
G = np.zeros(D.shape,dtype=complex)
g = np.zeros((D.shape[0],D.shape[2]),dtype=complex)
for t in xrange(D.shape[2]): #perform calibration per time-slot
g_0 = np.ones((2*N,)) # first antenna gain guess
g_0[N:] = 0
        d_r = np.ravel(D[:,:,t].real) #vectorization of observed + separating real and imag
d_i = np.ravel(D[:,:,t].imag)
d = np.hstack([d_r,d_i])
        m_r = np.ravel(M[:,:,t].real) #vectorization of model + separating real and imag
m_i = np.ravel(M[:,:,t].imag)
m = np.hstack([m_r,m_i])
g_lstsqr_temp = optimize.leastsq(err_func, g_0, args=(d, m))
g_lstsqr = g_lstsqr_temp[0]
G_m = np.dot(np.diag(g_lstsqr[0:N]+1j*g_lstsqr[N:]),temp)
G_m = np.dot(G_m,np.diag((g_lstsqr[0:N]+1j*g_lstsqr[N:]).conj()))
g[:,t] = g_lstsqr[0:N]+1j*g_lstsqr[N:] #creating antenna gain vector
G[:,:,t] = G_m
return g,G
Explanation: We are now able to define a wrapper function create_G_LM that in turn calls optimize.leastsq.
The wrapper function translates the calibration problem into a format that optimize.leastsq
can interpret. The input of create_G_LM is $\boldsymbol{\mathcal{D}}$ and $\boldsymbol{\mathcal{M}}$, while the output is $\mathbf{g}$ and $\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$.
End of explanation
glm,Glm = create_G_LM(D,M)
Explanation: We may now calibrate $\boldsymbol{\mathcal{D}}$ by using create_G_LM.
End of explanation
R_c = Glm**(-1)*D
Explanation: The above function works by vectorizing the real and imaginary part of $\boldsymbol{\mathcal{D}}$ and
storing the result in $\mathbf{d}$. The vector $\mathbf{m}$ is generated in a similar manner.
The error vector $\mathbf{r}$ is calculated by err_func. We initialize $\breve{\mathbf{g}}$ with
$\breve{\mathbf{g}}_0=[\mathbf{1},\mathbf{0}]$. We can then call
optimize.leastsq(err_func, g_0, args=(d, m)).
We can now calculate $\mathbf{g} = \breve{\mathbf{g}}_U+\imath\breve{\mathbf{g}}_L$ and
$\boldsymbol{\mathscr{G}}=\mathbf{g}\mathbf{g}^H$. This is repeated for each observational time-slot.
8.1.5 Corrected Visibilities <a id='cal:sec:cor'></a> <!--\label{cal:sec:cor}-->
Before imaging, we have to correct our observed visibilities by removing the effect that the antenna gains had on the observed visibilities. This can be accomplished by using
<p class=conclusion>
<font size=4> <b>Correcting Visibilities</b></font>
<br>
\begin{equation}
\boldsymbol{\mathcal{D}}^\mathrm{(c)} = \boldsymbol{\mathcal{G}}^{-1}\boldsymbol{\mathcal{D}}\boldsymbol{\mathcal{G}}^{-H} = \boldsymbol{\mathscr{G}}^{\odot-1}\odot\boldsymbol{\mathcal{D}},
\end{equation}
</p>
<br>
where
$\boldsymbol{\mathcal{D}}^\mathrm{(c)}$ is the corrected visibility matrix.
$\boldsymbol{\mathscr{G}}^{\odot-1}$ denotes the visibility calibration matrix, which is computed by taking the Hadamard (element-wise) inverse of $\boldsymbol{\mathscr{G}}$.
The superscript $(\cdot)^{-1}$ denotes matrix inversion, while $(\cdot)^{-H}$ denotes the inverse of the Hermitian transpose. The operator $\odot$ denotes the Hadamard product.
We calculate the corrected visibilities below.<br><br>
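A quick numerical check that the matrix-inverse form and the Hadamard form above agree for gains of this diagonal type (illustrative values):
python
import numpy as np
g = np.array([1.2+1.3j, 1.1-1.5j, -1.3+0.7j])         # illustrative antenna gains
D_ex = np.random.randn(3,3) + 1j*np.random.randn(3,3)
G = np.diag(g)
G_cal = np.outer(g, g.conj())                          # the g g^H matrix
left = np.linalg.inv(G).dot(D_ex).dot(np.linalg.inv(G.conj().T))
right = (1.0/G_cal)*D_ex                               # Hadamard inverse, applied element-wise
print(np.allclose(left, right))                        # True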
<div class=advice>
<b>Advice:</b> The matrix and vector operations (like $\odot$) and operators used in this section are discussed in more detail in [$\S$ 2.10 ➞](../2_Mathematical_Groundwork/2_10_linear_algebra.ipynb)
</div>
End of explanation
fig = plt.figure()
timeslots = np.cumsum(np.ones((len(M[0,1,:]),)))
#We only plot the real part of visibilities
#Plotting Baseline 01
ax = plt.subplot("311")
ax.set_title("$m_{01}$ (blue) and $d_{01}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,1,:].real)
ax.plot(timeslots,R_c[0,1,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 02
ax = plt.subplot("312")
ax.set_title("$m_{02}$ (blue) and $d_{02}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[0,2,:].real)
ax.plot(timeslots,R_c[0,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
#Plotting Baseline 12
ax = plt.subplot("313")
ax.set_title("$m_{12}$ (blue) and $d_{12}^{(c)}$ (green)", fontsize=18)
ax.plot(timeslots,M[1,2,:].real)
ax.plot(timeslots,R_c[1,2,:].real)
ax.set_xlabel("Timeslot", fontsize=18)
ax.set_ylabel("Jy", fontsize=18)
ax.set_xlim([1,len(M[0,1,:])])
y_t = ax.get_yticks()
y_t = y_t[::2]
ax.set_yticks(y_t)
plt.tight_layout()
plt.show()
Explanation: We plot the corrected visibilities below. Note that the model and corrected visibilities align well, implying that calibration was successful.
End of explanation |
6,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python API for Table Display
In addition to APIs for creating and formatting BeakerX's interactive table widget, the Python runtime configures pandas to display tables with the interactive widget instead of static HTML.
Step1: Display mode
Step2: Display mode
Step3: Recognized Formats
Step4: Set index to DataFrame
Step5: Update cell | Python Code:
import pandas as pd
from beakerx import *
pd.read_csv('../resources/data/interest-rates.csv')
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table.setAlignmentProviderForColumn('m3', TableDisplayAlignmentProvider.CENTER_ALIGNMENT)
table.setRendererForColumn("y10", TableDisplayCellRenderer.getDataBarsRenderer(False))
table.setRendererForType(ColumnType.Double, TableDisplayCellRenderer.getDataBarsRenderer(True))
table
df = pd.read_csv('../resources/data/interest-rates.csv')
df['time'] = df['time'].str.slice(0,19).astype('datetime64[ns]')
table = TableDisplay(df)
table.setStringFormatForTimes(TimeUnit.DAYS)
table.setStringFormatForType(ColumnType.Double, TableDisplayStringFormat.getDecimalFormat(4,6))
table.setStringFormatForColumn("m3", TableDisplayStringFormat.getDecimalFormat(0, 0))
table
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table
#freeze a column
table.setColumnFrozen("y1", True)
#freeze a column to the right
table.setColumnFrozenRight("y10", True)
#hide a column
table.setColumnVisible("y30", False)
table.setColumnOrder(["m3", "y1", "y5", "time", "y2"])
table
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table.addCellHighlighter(TableDisplayCellHighlighter.getHeatmapHighlighter("m3", TableDisplayCellHighlighter.FULL_ROW))
table
Explanation: Python API for Table Display
In addition to APIs for creating and formatting BeakerX's interactive table widget, the Python runtime configures pandas to display tables with the interactive widget instead of static HTML.
End of explanation
beakerx.pandas_display_default()
pd.read_csv('../resources/data/interest-rates.csv')
Explanation: Display mode: Pandas default
End of explanation
beakerx.pandas_display_table()
pd.read_csv('../resources/data/interest-rates.csv')
Explanation: Display mode: TableDisplay Widget
End of explanation
TableDisplay([{'y1':4, 'm3':2, 'z2':1}, {'m3':4, 'z2':2}])
TableDisplay({"x" : 1, "y" : 2})
mapList4 = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapList4)
#set what happens on a double click
display.setDoubleClickAction(lambda row, column, tabledisplay: tabledisplay.values[row].__setitem__(column, sum(tabledisplay.values[row])))
#add a context menu item
display.addContextMenuItem("negate", lambda row, column, tabledisplay: tabledisplay.values[row].__setitem__(column, -1 * tabledisplay.values[row][column]))
display
mapList4 = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapList4)
#set what happens on a double click
display.setDoubleClickAction("runDoubleClick")
display
print("runDoubleClick fired")
Explanation: Recognized Formats
End of explanation
df = pd.read_csv('../resources/data/interest-rates.csv')
df.set_index(['m3'])
df = pd.read_csv('../resources/data/interest-rates.csv')
df.index = df['time']
df
Explanation: Set index to DataFrame
End of explanation
dataToUpdate = [
{'a':1, 'b':2, 'c':3},
{'a':4, 'b':5, 'c':6},
{'a':7, 'b':8, 'c':9}
]
tableToUpdate = TableDisplay(dataToUpdate)
tableToUpdate
tableToUpdate.values[0][0] = 99
tableToUpdate.sendModel()
tableToUpdate.updateCell(2,"c",121)
tableToUpdate.sendModel()
Explanation: Update cell
End of explanation |
6,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithmic Complexity
Notes by J. S. Oishi
Step1: How long will my code take to run?
Today, we will be concerned solely with time complexity.
Formally, we want to know $T(d)$, where $d$ is any given dataset and $T(d)$ gives the total run time.
Let's begin with a simple problem.
How many instructions does the following bit of code take? Go ahead and assume that you can ignore the machinery in the if and for statements.
Step2: Go ahead and work it out with your neighbors.
Step3: The answer
Each time the function is called, we have
python
n = len(x)
mini = x[0]
that's two instructions (again, ignoring how much goes into the len call).
Then, the for loop body requires either one or two instructions. You always have the comparison x[i] < mini, but you may or may not have the assignment mini = x[i].
Exercise
Compute the number of instructions for this input data
python
x = [4, 3, 2, 1]
and
python
y = [1, 3, 2, 4]
$N_{inst}(x) = 9$
$N_{inst}(y) = 6$
As usual, pessimism is the most useful view
The answer to "how long does this take" is...it depends on the input.
Since we would like to know how long an algorithm will take before we run it, let's examine the worst case scenario.
This allows us to looking for from $T(d)$ to $f(n)$, where $n \equiv \mathrm{size}(d)$ is the size of the dataset.
For our simple problem,
$$f(n) = 2 + 4n$$
Asymptotics
Let's look at a pair of cubic polynomials,
$$ f(n) = f_0 + f_1 n + f_2 n^2 + f_3 n^3 $$
$$ g(n) = g_0 + g_1 n + g_2 n^2 + g_3 n^3 $$
Step4: Clearly, we can drop the lower order terms and the coefficients $f_3$ and $g_3$.
We call this
$$O(n^3)$$,
and we say our algorithm is "$n^3$", meaning no worse than $n^3$.
Of course this is exactly the same notation and meaning as when we do a series expansion in any other calculation,
$$ e^{x} = 1 + x + \frac{x^2}{2} + O(x^3), x\to 0$$.
An example
Let's take the force calculation for an N-body simulation. We recall we can write this as
$$\ddot{\mathbf{r}}_i = -G\sum_{i \ne j} \frac{m_j \mathbf{r}_{ij}}{r_{ij}^3},$$
for each particle $i$. This is fairly easy to analyze.
Calculate the complexity with your neighbors
Some Code
This is a very simple implementation of a force calculator that only calculates the $x$ component (for unit masses!).
Step5: Test it!
Theory is all well and good, but let's do a numerical experiment.
Step6: Plot the results...
Step7: Several Common Asymptotics
Step8: But...
Consider the problem
Let's talk about solving PDES | Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
Explanation: Algorithmic Complexity
Notes by J. S. Oishi
End of explanation
def mini(x):
n = len(x)
mini = x[0]
for i in range(n):
if x[i] < mini:
mini= x[i]
return mini
Explanation: How long will my code take to run?
Today, we will be concerned solely with time complexity.
Formally, we want to know $T(d)$, where $d$ is any given dataset and $T(d)$ gives the total run time.
Let's begin with a simple problem.
How many instructions does the following bit of code take? Go ahead and assume that you can ignore the machinery in the if and for statements.
End of explanation
x = np.random.rand(1000)
print(mini(x))
print(x.min())
Explanation: Go ahead and work it out with your neighbors.
End of explanation
n = np.linspace(0,1000,10000)
f0 = 2; f1 = 1; f2 = 10; f3 = 2
g0 = 0; g1 = 10; g2 = 1; g3 = 1
f = f0 + f1*n + f2*n**2 + f3*n**3
g = g0 + g1*n + g2*n**2 + g3*n**3
plt.figure()
plt.plot(n, f, label='f')
plt.plot(n, g, label='g')
plt.xlim(0,2)
plt.ylim(0,20)
plt.legend()
Explanation: The answer
Each time the function is called, we have
python
n = len(x)
mini = x[0]
that's two instructions (again, ignoring how much goes into the len call).
Then, the for loop body requires either one or two instructions. You always have the comparison x[i] < mini, but you may or may not have the assignment mini = x[i].
Exercise
Compute the number of instructions for this input data
python
x = [4, 3, 2, 1]
and
python
y = [1, 3, 2, 4]
$N_{inst}(x) = 9$
$N_{inst}(y) = 6$
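A quick way to check these counts is to instrument the loop directly, using the same counting convention as above (two setup instructions, one comparison per iteration, one extra instruction per executed assignment):
python
def count_instructions(x):
    count = 2            # n = len(x); mini = x[0]
    mini = x[0]
    for xi in x:
        count += 1       # the comparison x[i] < mini
        if xi < mini:
            count += 1   # the assignment mini = x[i]
            mini = xi
    return count
count_instructions([4, 3, 2, 1]), count_instructions([1, 3, 2, 4])   # (9, 6)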
As usual, pessimism is the most useful view
The answer to "how long does this take" is...it depends on the input.
Since we would like to know how long an algorithm will take before we run it, let's examine the worst case scenario.
This allows us to move from $T(d)$ to $f(n)$, where $n \equiv \mathrm{size}(d)$ is the size of the dataset.
For our simple problem,
$$f(n) = 2 + 2n$$
Asymptotics
Let's look at a pair of cubic polynomials,
$$ f(n) = f_0 + f_1 n + f_2 n^2 + f_3 n^3 $$
$$ g(n) = g_0 + g_1 n + g_2 n^2 + g_3 n^3 $$
End of explanation
def f_x(particles):
    # O(n^2) pairwise force (x-component only, unit masses), following the formula above
    G = 1
    a_x = np.zeros(len(particles))   # start from zero so the -= accumulation is well defined
    for i, p_i in enumerate(particles):
        for j, p_j in enumerate(particles):
            if j != i:
                dx, dy, dz = p_i.x - p_j.x, p_i.y - p_j.y, p_i.z - p_j.z
                a_x[i] -= G*dx/(dx**2 + dy**2 + dz**2)**1.5
    return a_x
class Particle:
def __init__(self, r, v):
self.r = r
self.v = v
@property
def x(self):
return self.r[0]
@property
def y(self):
return self.r[1]
@property
def z(self):
return self.r[2]
Explanation: Clearly, we can drop the lower order terms and the coefficients $f_3$ and $g_3$.
We call this
$$O(n^3)$$,
and we say our algorithm is "$n^3$", meaning no worse than $n^3$.
Of course this is exactly the same notation and meaning as when we do a series expansion in any other calculation,
$$ e^{x} = 1 + x + \frac{x^2}{2} + O(x^3), x\to 0$$.
An example
Let's take the force calculation for an N-body simulation. We recall we can write this as
$$\ddot{\mathbf{r}}_i = -G\sum_{i \ne j} \frac{m_j \mathbf{r}_{ij}}{r_{ij}^3},$$
for each particle $i$. This is fairly easy to analyze.
Calculate the complexity with your neighbors
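For reference: the sum means that for each of the $n$ particles we visit the other $n-1$, so the force kernel executes $n(n-1)\approx n^2$ times and the algorithm is $O(n^2)$, which is the scaling the timing experiment below should reveal.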
Some Code
This is a very simple implementation of a force calculator that only calculates the $x$ component (for unit masses!).
End of explanation
nn = np.array([10, 100, 300, 1000])
nnn = np.linspace(nn[0],nn.max(),10000)
p1 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[0])]
p2 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[1])]
p3 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[2])]
p4 = [Particle(np.random.rand(3),(0,0,0)) for i in range(nn[3])]
t1 = %timeit -o f_x(p1)
t2 = %timeit -o f_x(p2)
t3 = %timeit -o f_x(p3)
t4 = %timeit -o f_x(p4)
times = np.array([t1.average, t2.average, t3.average, t4.average])
Explanation: Test it!
Theory is all well and good, but let's do a numerical experiment.
End of explanation
plt.figure()
plt.loglog(nn,times,'x', label='data')
plt.loglog(nnn,times[0]*(nnn/nnn[0])**2, label=r'$O(n^2)$')
plt.legend();plt.xlabel('data size');plt.ylabel('run time (s)')
Explanation: Plot the results...
End of explanation
plt.figure()
plt.loglog(nn,times,'x', label='data')
plt.loglog(nnn,t1.average*(nnn/nnn[0])**3, label=r'$O(n^3)$')
plt.loglog(nnn,times[0]*(nnn/nnn[0])**2, label=r'$O(n^2)$')
plt.loglog(nnn,times[0]*(nnn/nnn[0]), label=r'$O(n)$')
plt.loglog(nnn,t1.average*(nnn/nnn[0])*np.log(nnn/nnn[0]), label=r'$O(n \log(n))$')
plt.legend()
plt.xlabel('data size')
plt.ylabel('run time (s)')
Explanation: Several Common Asymptotics
End of explanation
plt.figure()
plt.loglog(nnn,times[0]*(nnn/nnn[0]), label=r'$O(n)$ finite difference')
plt.loglog(nnn,t1.average*(nnn/nnn[0])*np.log(nnn/nnn[0]), label=r'$O(n \log(n))$ spectral')
plt.legend()
plt.xlabel('data size')
plt.ylabel('run time (s)')
Explanation: But...
Consider the problem
Let's talk about solving PDES:
$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} = -\frac{\nabla p}{\rho} + \nu \nabla^2 \mathbf{u} $$
Let's focus on two ways of calculating gradients.
Finite Difference
$$\frac{\partial u}{\partial x} \simeq \frac{u_{i+1} - u_{i-1}}{2\Delta x}$$
Spectral
$$\frac{\partial u}{\partial x} \simeq \sum_{j = 0}^{N} i k_j f_j \exp(i k_j x)$$
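A minimal numpy sketch contrasting the two derivatives (a periodic sine wave is assumed so that both methods apply cleanly; the resolution is illustrative):
python
import numpy as np
n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
u = np.sin(3*x)
dx = x[1] - x[0]
dudx_fd = (np.roll(u, -1) - np.roll(u, 1))/(2*dx)            # central finite difference: O(n)
k = 2*np.pi*np.fft.fftfreq(n, d=dx)                          # angular wavenumbers k_j
dudx_sp = np.real(np.fft.ifft(1j*k*np.fft.fft(u)))           # spectral derivative: O(n log n)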
Scaling
End of explanation |
6,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="top"></a>
<div style="width
Step1: Let's Import Some Data through NOAA
Step2: Turn list of urls into one large, combined (concatenated) dataset based on time
Step3: Take a peak to ensure everything was read successfully and understand the dataset that you have
Step4: Take another peak
Step5: Write out data for processing | Python Code:
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import netCDF4 as nc
from mpl_toolkits.basemap import Basemap
from datetime import datetime
from dask.diagnostics import ProgressBar
%matplotlib inline
from dask.distributed import Client
import xarray as xr
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://cdn.miami.edu/_assets-common/images/system/um-logo-gray-bg.png" alt="Miami Logo" style="height: 98px;">
</div>
<h1>Lunch Byte 4/19/2019</h1>
By Kayla Besong
<br>
<br>
<br>Introduction to Xarray and Dask to upload and process data from NOAA for ProcessData_XR.ipynb
<br>use to compare to GettingData_XR.ipynb
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
End of explanation
%%time
heights = [] # empty list to collect the height-file URLs
temps = []
date_range = np.arange(1995,2001,1) # years of interest; np.arange excludes the stop value, so this gives 1995-2000 inclusive
for i in date_range:
url_h = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/hgt.%s.nc' % i # string substitution: %s is replaced by the year i we are looping through
url_t = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.%s.nc' % i
print(url_h, url_t)
heights.append(url_h) # append
temps.append(url_t)
Explanation: Let's Import Some Data through NOAA
End of explanation
%%time
concat_h = xr.open_mfdataset(heights) # aligns the lat, lon, level values of all the datasets and concatenates them along the time dimension
%%time
concat_t = xr.open_mfdataset(temps)
Explanation: Turn list of urls into one large, combined (concatenated) dataset based on time
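Note that open_mfdataset already returns lazy, dask-backed arrays; if finer control over memory is needed, the chunk sizes can be set explicitly (the chunk size below is only illustrative):
python
concat_h = xr.open_mfdataset(heights, chunks={'time': 100})   # 100 time steps per dask chunk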
End of explanation
concat_h, concat_t
%%time
concat_h = concat_h.sel(lat = slice(90,0), level = 500).resample(time = '24H').mean(dim = 'time')
%%time
concat_t = concat_t.sel(lat = slice(90,0), level = 925).resample(time = '24H').mean(dim = 'time')
Explanation: Take a peek to ensure everything was read successfully and understand the dataset that you have
End of explanation
concat_h, concat_t
Explanation: Take another peek
End of explanation
%%time
concat_h.to_netcdf('heights_9520.nc')
%%time
concat_t.to_netcdf('temps_9520.nc')
Explanation: Write out data for processing
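Because the arrays are dask-backed, the write is what triggers the actual computation; the ProgressBar imported at the top can be used to watch it, for example:
python
with ProgressBar():
    concat_h.to_netcdf('heights_9520.nc')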
End of explanation |
6,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1] and the associated
brainstorm site <http
Step1: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
use_precomputed = False running time of this script can be several
minutes even on a fast computer.
Step2: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of
Step3: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in the memory.
Step4: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition
Step5: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
Step6: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
Step7: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
Step8: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
Step9: We also lowpass filter the data at 100 Hz to remove the hf components.
Step10: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
Step11: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
Step12: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
Step13: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
Step14: We only use first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer same as in the original
epochs collection. Investigation of the event timings reveals that first
epoch from the second run corresponds to index 182.
Step15: The averages for each conditions are computed.
Step16: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all
line artifacts (and high frequency information). Normally this would be done
to raw data (with
Step17: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
Step18: Show activations as topography figures.
Step19: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
Step20: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
Step21: The transformation is read from a file. More information about coregistering
the data, see ch_interactive_analysis or
Step22: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information
Step23: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
Step24: Deviant condition.
Step25: Difference. | Python Code:
# Authors: Mainak Jas <[email protected]>
# Eric Larson <[email protected]>
# Jaakko Leppakangas <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
Explanation: Brainstorm auditory tutorial dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see [1] and the associated
brainstorm site <http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker <http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300>_.
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
use_precomputed = True
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change this to False. With
use_precomputed = False running time of this script can be several
minutes even on a fast computer.
End of explanation
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)
Explanation: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded in the memory.
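When a preloaded copy is needed later, the data can be pulled into memory explicitly (as is done further below before computing the EOG projection):
python
raw.load_data()   # loads the raw data into memory in place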
End of explanation
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,
ecg=True)
Explanation: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz).
In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.annotations = annotations
del onsets, durations, descriptions
Explanation: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
raw.plot(block=True)
Explanation: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
if not use_precomputed:
meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)
raw.plot_psd(tmax=np.inf, picks=meg_picks)
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks=meg_picks)
Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
End of explanation
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
Explanation: We also lowpass filter the data at 100 Hz to remove the hf components.
End of explanation
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
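For scale, the mag=4e-12 threshold used below corresponds to 4000 fT peak-to-peak, and eog=250e-6 corresponds to 250 µV peak-to-peak.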
End of explanation
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
Explanation: The event timing is adjusted by comparing the trigger times with the detected
sound onsets on channel UADC001-4408.
End of explanation
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False,
proj=True)
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs, picks
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer the same as in the original
epochs collection. Investigation of the event timings reveals that the first
epoch from the second run corresponds to index 182.
End of explanation
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
Explanation: The averages for each condition are computed.
End of explanation
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
Explanation: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all
line artifacts (and high frequency information). Normally this would be done
to raw data (with :func:mne.io.Raw.filter), but to reduce memory
consumption of this tutorial, we do it at evoked stage. (At the raw stage,
you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
Explanation: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area with clicking and holding the left mouse button.
End of explanation
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
Explanation: Show activations as topography figures.
End of explanation
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
Explanation: The transformation is read from a file. For more information about coregistering
the data, see ch_interactive_analysis or
:func:mne.gui.coregistration.
End of explanation
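As a hedged aside (not part of the original pipeline), the transformation file could also be produced interactively with the coregistration GUI, assuming its GUI dependencies are installed:
# launch the interactive coregistration tool to create a -trans.fif file
mne.gui.coregistration(subject=subject, subjects_dir=subjects_dir)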
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False at the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
create_bem_model, :func:mne.bem.make_watershed_bem.
End of explanation
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
Explanation: The sources are computed using the dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
Explanation: Deviant condition.
End of explanation
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
Explanation: Difference.
End of explanation |
6,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Calculate Pearson's Correlation Coefficient
There are a number of equivalent ways to express Pearson's correlation coefficient (also called Pearson's r). Here is one.
$$r={\frac {1}{n-1}}\sum_{i=1}^{n}\left({\frac {x_{i}-{\bar {x}}}{s_{x}}}\right)\left({\frac {y_{i}-{\bar {y}}}{s_{y}}}\right)$$
where $s_{x}$ and $s_{y}$ are the sample standard deviation for $x$ and $y$, and $\left({\frac {x_{i}-{\bar {x}}}{s_{x}}}\right)$ is the standard score for $x$ and $y$. | Python Code:
import statistics as stats
Explanation: Title: Pearson's Correlation Coefficient
Slug: pearsons_correlation_coefficient
Summary: Pearson's Correlation Coefficient in Python.
Date: 2016-02-08 12:00
Category: Statistics
Tags: Basics
Authors: Chris Albon
Based on this StackOverflow answer by cbare.
Preliminaries
End of explanation
x = [1,2,3,4,5,6,7,8,9]
y = [2,1,2,4.5,7,6.5,6,9,9.5]
Explanation: Create Data
End of explanation
# Create a function
def pearson(x,y):
# Create n, the number of observations in the data
n = len(x)
# Create lists to store the standard scores
standard_score_x = []
standard_score_y = []
# Calculate the mean of x
mean_x = stats.mean(x)
# Calculate the standard deviation of x
standard_deviation_x = stats.stdev(x)
# Calculate the mean of y
mean_y = stats.mean(y)
# Calculate the standard deviation of y
standard_deviation_y = stats.stdev(y)
# For each observation in x
for observation in x:
# Calculate the standard score of x
standard_score_x.append((observation - mean_x)/standard_deviation_x)
# For each observation in y
for observation in y:
# Calculate the standard score of y
standard_score_y.append((observation - mean_y)/standard_deviation_y)
# Multiply the standard scores together, sum them, then divide by n-1, and return that value
return (sum([i*j for i,j in zip(standard_score_x, standard_score_y)]))/(n-1)
# Show Pearson's Correlation Coefficient
pearson(x,y)
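As a quick cross-check (an addition to the original recipe, assuming NumPy is available in your environment), the same quantity can be obtained from a library routine:
import numpy as np
# Pearson's r computed by NumPy, for comparison with the hand-rolled function above
np.corrcoef(x, y)[0, 1]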
Explanation: Calculate Pearson's Correlation Coefficient
There are a number of equivalent ways to express Pearson's correlation coefficient (also called Pearson's r). Here is one.
$$r={\frac {1}{n-1}}\sum_{i=1}^{n}\left({\frac {x_{i}-{\bar {x}}}{s_{x}}}\right)\left({\frac {y_{i}-{\bar {y}}}{s_{y}}}\right)$$
where $s_{x}$ and $s_{y}$ are the sample standard deviations of $x$ and $y$, and $\left({\frac {x_{i}-{\bar {x}}}{s_{x}}}\right)$ is the standard score of $x_{i}$ (and analogously for $y_{i}$).
End of explanation |
6,551 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
plot a distribution plot using the seaborn library in Python
| Python Code::
import seaborn as sns
sns.distplot(dataset[columns], kde=False, bins=30, color='red', hist_kws=dict(edgecolor="k", linewidth=1))
|
6,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's take a look at the original images that you took
Step1: This image is not science-ready yet...
Dark image
Step2: Why is this?
Another interesting feature of CCD cameras is that the chips do not respond equally to the same light intensity. For example if you illuminate the camera with uniform light (this is called flat image).
Step3: Let's create a better image!
Step4: Compare to the original!
Step5: Reduce the rest of images (in principle we should take a different bias image for each filter) because the CCD has different sensitivity at different wavelengths
Step6: An example from SDSS
Step7: If you want to know more about Jupyter | Python Code:
science_image_path_g = 'data/seo_m66_g-band_180s_apagul_1.fits' #Type the path to your image
sci_g = fits.open(science_image_path_g)
sci_im_g = fits.open(science_image_path_g)[0].data
plt.imshow(sci_im_g,cmap='gray', vmax=1800, norm=matplotlib.colors.LogNorm())
plt.colorbar()
Explanation: Let's take a look at the original images that you took
End of explanation
dark_image_path='data/dark.fits' #Type the path to your dark image
drk_im = fits.open(dark_image_path)[0].data
plt.imshow(drk_im,cmap='gray', vmax=2000)
plt.colorbar()
bias_image_path = 'data/bias.fits' #Type the path to your bias image
bias_image = fits.open(bias_image_path)[0].data
plt.imshow(bias_image, cmap='gray')
plt.colorbar()
plt.hist(drk_im.flatten());
plt.yscale('log')
plt.xlabel('Output counts')
plt.ylabel('Number of pixels')
Explanation: This image is not science-ready yet...
Dark image: If you take a shot with the shutter closed (i.e., no light/photons incoming in the camera) you still have a non-zero image.
End of explanation
flat_image_path = 'data/FLAT_g-band_2016-10-06_bin1_id5908.fits' #Type the path to your flat image here
flat_image = fits.open(flat_image_path)[0].data
#You can try cmap='hot' or cmap='jet' to see how it changes
plt.imshow(flat_image, cmap='gray')
plt.colorbar()
plt.hist(flat_image.flatten())
def reduce_image(sci_im,drk_im,flat_im, bias_im, filter_dark=True):
from scipy.stats import mode
drk_im = drk_im - bias_im
#First part: we "zero" the dark frame by subtracting the bias image
#The next part is optional and averages the dark image in a 10 pixel radius
#to get rid of salt/pepper noise
if(filter_dark):
selem = disk(10) #We are going to perform averages in 10 pixel radius disks
selem2 = disk(4)
drk_im = rank.mean(drk_im, selem=selem) #We perform an average to remove salt-pepper noise
flat_im = rank.mean(flat_im, selem=selem2)
#Second part: Make every part have the same sensitivity
#flat_im = (flat_im - drk_im)/mode(flat_im-drk_im,axis=None)[0] #most common pixel value will equal 1
flat_im = (flat_im - drk_im)/np.median(flat_im-drk_im)
#Lower than 1 where the CCD is less sensitive and more than 1 where it's more sensitive
sci_im = (sci_im -drk_im)/flat_im
#Error image
return sci_im
Explanation: Why is this?
Another interesting feature of CCD cameras is that the chips do not respond equally to the same light intensity. For example if you illuminate the camera with uniform light (this is called flat image).
End of explanation
new_sci_image_g = reduce_image(sci_im_g,drk_im,flat_image,bias_image, filter_dark=False)
plt.imshow(new_sci_image_g, cmap='gray', vmax=4000, vmin=50, norm=matplotlib.colors.LogNorm())
plt.colorbar()
Explanation: Let's create a better image!
End of explanation
fig, ax = plt.subplots(nrows=1,ncols=3,figsize=(10,8))
ax[0].imshow(sci_im_g,cmap='gray',vmax=1800, norm=matplotlib.colors.LogNorm())
ax[0].set_title('Before reduction')
ax[1].imshow(new_sci_image_g,cmap='gray',vmax=2000, vmin=50, norm=matplotlib.colors.LogNorm())
ax[1].set_title('After reduction')
ax[2].imshow(sci_im_g-new_sci_image_g,cmap='gray', vmax=1050, vmin=1000)
ax[2].set_title('Difference')
science_image_path_r = 'data/seo_m66_r_180s_apagul_1.fits'
sci_im_r = fits.open(science_image_path_r)[0].data
science_image_path_i = 'data/seo_m66_i-band_180s_apagul_1.fits'
sci_im_i = fits.open(science_image_path_i)[0].data
flat_r = fits.open('data/FLAT_r-band_2016-10-06_bin1_id5906.fits')[0].data
flat_i = fits.open('data/FLAT_i-band_2016-10-06_bin1_id5907.fits')[0].data
Explanation: Compare to the original!
End of explanation
new_sci_image_r = reduce_image(sci_im_r,drk_im,flat_r,bias_image)
new_sci_image_i = reduce_image(sci_im_i,drk_im,flat_i,bias_image)
Explanation: Reduce the rest of images (in principle we should take a different bias image for each filter) because the CCD has different sensitivity at different wavelengths
End of explanation
# Read in the three images downloaded from here:
# g: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-g-001737-5-0039.fits.bz2
# r: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-r-001737-5-0039.fits.bz2
# i: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-i-001737-5-0039.fits.bz2
g = fits.open('data/frame-g-001737-5-0039.fits.bz2')[0]
r = fits.open('data/frame-r-001737-5-0039.fits.bz2')[0]
i = fits.open('data/frame-i-001737-5-0039.fits.bz2')[0]
# remap r and i onto g
r_new, r_mask = reproject_interp(r, g.header)
i_new, i_mask = reproject_interp(i, g.header)
# zero out the unmapped values
i_new[np.logical_not(i_mask)] = 0
r_new[np.logical_not(r_mask)] = 0
# red=i, green=r, blue=g
# make a file with the default scaling
rgb_default = make_lupton_rgb(i_new, r_new, g.data, filename="ngc6976-default.jpeg")
# this scaling is very similar to the one used in Lupton et al. (2004)
rgb = make_lupton_rgb(i_new, r_new, g.data, Q=10, stretch=0.5, filename="ngc6976.jpeg")
plt.imshow(rgb)
Explanation: An example from SDSS:
End of explanation
positions = [(550., 600.), (450., 500.)] #Change it and include the position of an object in your image
apertures = CircularAperture(positions, r=20.)
phot_table = aperture_photometry(new_sci_image_g, apertures)
print(phot_table)
Explanation: If you want to know more about Jupyter:
https://github.com/fjaviersanchez/JupyterTutorial/blob/master/TutorialJupyter.ipynb
Aperture photometry
Astronomers use the magnitude scale to characterize the brightness of an object. With the magnitude scale you quantify the brightness of an object by comparing it with other objects. Astronomers have agreed to use "Vega" as the zero magnitude point (like the freezing point of water is the zero-point for the Celsius temperature scale). The magnitude scale goes "backwards" in the sense that brighter objects have smaller magnitudes. For example the Sun has magnitude -27, the full Moon -13, and Venus -5.
How can we measure magnitudes from an image?
A first approach is to use an object whose magnitude we know, called a "standard", and refer the rest of the objects in the image to it.
But what do you use to count the total brightness of an object?
Use the brightest pixel?
Add the brightness in a certain radius?
Count only the pixels which belong to each object?
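However you sum the light, the comparison with a standard relies on the usual relation between magnitudes and fluxes (added here for reference):
$$m_{\mathrm{obj}} - m_{\mathrm{std}} = -2.5\,\log_{10}\left(\frac{F_{\mathrm{obj}}}{F_{\mathrm{std}}}\right)$$
so once aperture sums are measured for both your target and a standard star of known magnitude, the target's magnitude follows directly.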
End of explanation |
6,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--TITLE
Step1: Step 2 - Define Model
To illustrate the effect of augmentation, we'll just add a couple of simple transformations to the model from Tutorial 1.
Step2: Step 3 - Train and Evaluate
And now we'll start the training! | Python Code:
#$HIDE_INPUT$
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducability
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
Explanation: <!--TITLE:Data Augmentation-->
Introduction
Now that you've learned the fundamentals of convolutional classifiers, you're ready to move on to more advanced topics.
In this lesson, you'll learn a trick that can give a boost to your image classifiers: it's called data augmentation.
The Usefulness of Fake Data
The best way to improve the performance of a machine learning model is to train it on more data. The more examples the model has to learn from, the better it will be able to recognize which differences in images matter and which do not. More data helps the model to generalize better.
One easy way of getting more data is to use the data you already have. If we can transform the images in our dataset in ways that preserve the class, we can teach our classifier to ignore those kinds of transformations. For instance, whether a car is facing left or right in a photo doesn't change the fact that it is a Car and not a Truck. So, if we augment our training data with flipped images, our classifier will learn that "left or right" is a difference it should ignore.
And that's the whole idea behind data augmentation: add in some extra fake data that looks reasonably like the real data and your classifier will improve.
Using Data Augmentation
Typically, many kinds of transformation are used when augmenting a dataset. These might include rotating the image, adjusting the color or contrast, warping the image, or many other things, usually applied in combination. Here is a sample of the different ways a single image might be transformed.
<figure>
<img src="https://i.imgur.com/UaOm0ms.png" width=400, alt="Sixteen transformations of a single image of a car.">
</figure>
Data augmentation is usually done online, meaning, as the images are being fed into the network for training. Recall that training is usually done on mini-batches of data. This is what a batch of 16 images might look like when data augmentation is used.
<figure>
<img src="https://i.imgur.com/MFviYoE.png" width=400, alt="A batch of 16 images with various random transformations applied.">
</figure>
Each time an image is used during training, a new random transformation is applied. This way, the model is always seeing something a little different than what it's seen before. This extra variance in the training data is what helps the model on new data.
It's important to remember though that not every transformation will be useful on a given problem. Most importantly, whatever transformations you use should not mix up the classes. If you were training a digit recognizer, for instance, rotating images would mix up '9's and '6's. In the end, the best approach for finding good augmentations is the same as with most ML problems: try it and see!
Example - Training with Data Augmentation
Keras lets you augment your data in two ways. The first way is to include it in the data pipeline with a function like ImageDataGenerator. The second way is to include it in the model definition by using Keras's preprocessing layers. This is the approach that we'll take. The primary advantage for us is that the image transformations will be computed on the GPU instead of the CPU, potentially speeding up training.
In this exercise, we'll learn how to improve the classifier from Lesson 1 through data augmentation. This next hidden cell sets up the data pipeline.
End of explanation
from tensorflow import keras
from tensorflow.keras import layers
# these are a new feature in TF 2.2
from tensorflow.keras.layers.experimental import preprocessing
pretrained_base = tf.keras.models.load_model(
'../input/cv-course-models/cv-course-models/vgg16-pretrained-base',
)
pretrained_base.trainable = False
model = keras.Sequential([
# Preprocessing
preprocessing.RandomFlip('horizontal'), # flip left-to-right
preprocessing.RandomContrast(0.5), # contrast change by up to 50%
# Base
pretrained_base,
# Head
layers.Flatten(),
layers.Dense(6, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
Explanation: Step 2 - Define Model
To illustrate the effect of augmentation, we'll just add a couple of simple transformations to the model from Tutorial 1.
End of explanation
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=30,
verbose=0,
)
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
Explanation: Step 3 - Train and Evaluate
And now we'll start the training!
End of explanation |
6,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Digital Text Analysis
Present-day society is flooded with digital texts
Step1: Here we define a string of text by enclosing it with quotations marks and assigning it to a variable or container called text. The fact that Python still sees this piece of text as a continguous string of characters becomes evident when we ask Python to print out the length of text, using the len() function
Step2: One could say that characters are the 'atoms' or smallest meaningful units in computational text processing. Just as computer images use pixels as their fundamental building blocks, all digital text processing applications start from raw characters and it are these characters that are physically stored on your machines in bits and bytes.
DIY
Define a string containing your own name in the code block below and print its length. Insert a number of whitespaces at the end of your name (i.e. tab the space bar a couple of times)
Step3: Many people find it more intuitive to consider texts as a strings of words, rather than plain characters, because words correspond to more concrete entities. In Python, we can easily turn our original 'string' into a list of words
Step4: Using the split() method, we split our original sentence into a word list along instances of whitespace characters. Note that, in technical terms, the variable type of text is different from that of the newly created variable words
Step5: Likewise, they evidently differ in length
Step6: By using 'indexing' (with square brackets), we can now access individual words in our word list. Check out the following print statements
Step7: DIY
Try out the indexes [0] and [-1] for the example with words. Which words are being printed when you use these indices? Do this makes sense? What is peculiar about the way Python 'counts'?
Note that words is a so-called list variable in technical terms, but that the individual elements of words are still plain strings
Step8: Tokenization
In the previous paragraph, we have adopted an extremely crude definition of a 'word', namely as a string of characters that doesn't contain any whitespace. There are of course many problems that arise if we use such a naive definition. Can you think of some?
In computer science, and computational linguistics in particular, people have come up with much smarter ways to divide texts into words. This process is called tokenization, which refers to the fact that this process divides strings of characters into a list of more meaningful tokens. One interesting package which we can use for this, is nltk (the Natural Language Toolkit), a package which has been specifically designed to deal with language problems. First, we have to import it, since it isn't part of the standard library of Python
Step9: We can now use apply nltk's functionality, for instance, its (default) tokenizer for English
Step10: Note how the function word_tokenize() neatly splits off punctuation! Many improvements nevertheless remain. To collapse the difference between uppercase and lowercase variables, for instance, we could first lowercase the original input string
Step11: Many applications will not be very interested in punctuation marks, so can we can try to remove these as well. The isalpha() method allows you to determine whether a string only contains alphabetic characters
Step12: Functions like isalpha() return something that is called a 'boolean' value, a kind of variable that can only take two values, i.e. True or False. Such values are useful because you can use them to test whether some condition is true or not. For example, if isalpha() evaluates to False for a word, we can have Python ignore such a word.
DIY
Using some more complicated Python syntax (a so-called 'list generator'), it is very easy to filter out non-alphabetic strings. In the example below, I inserted a logical thinking error on purpose
Step14: Counting words
Once we have come up with a good way to split texts into individual tokens, it is time to start thinking about how we can represent texts via these tokens. One popular approach to this problem is called the bag-of-words model (BOW)
Step15: We obtain a list of 148 tokens. Counting how often each individual token occurs in this 'document' is trivial, using the Counter object which we can import from Python's collection module
Step16: Let us have a look at the three most frequent items in the text
Step17: Obviously, the most common items in such a frequency list are typically small, grammatical words that are very frequent throughout all the texts in a language. Let us add a small visualisation of this information. We can use a barplot to show the top-frequency items in a more pleasing manner. In the following block, we use the matplotlib package for this purpose, which is a popular graphics package in Python. To make sure that it shows up neatly in our notebook, first execute this cell
Step18: And then execute the following blocks -- and try to understand the intuition behind them
Step19: DIY
Can you try to increase the number of words plotted? And change the color used for the bars to blue? And the width of the bars plotted?
The Bag of Words Model
We are almost there
Step20: This code makes use of a so called for-loop
Step21: Note that these three lists can be neatly zipped together, so that the third item in authors corresponds to the third item in titles (remember
Step22: To have a peak at the content of this novel, we can now 'stack' indices as follows. Using the first square brackets ([2]) we select the third novel in the list, i.e. Sense and Sensibility by Jane Austen. Then, using a second index ([
Step23: After loading these documents, we can now represent them using a bag of words model. To this end, we will use a library called scikit-learn, or sklearn in shorthand, which is increasingly popular in text analysis nowadays. As you will see below, we import its CountVectorizer object below, and we apply it to our corpus, specifying that we would like to extract a maximum of 10 features from the texts. The means that we will only consider the frequencies of 10 words (to keep our model small enough to be workable for now).
Step24: The code block above creates a matrix which has a 9x10 shape
Step25: As you can see, the max_features argument which we used above restricts the model to the n words which are most frequent throughout our texts. These are typically smallish function words. Funnily, sklearn uses its own tokenizer, and this default tokenizer ignores certain words that are surprisingly enough absent in the vocabulary list we just inspected. Can you figure which words? Why are they absent?
Luckily, sklearn is flexible enough to allow us to use our own tokenizer. To use the nltk tokenizer for instance, we can simply pass it as an argument when we create the CountVectorizer. (Note that, depending on the speed of your machine, the following code block might actually take a while to execute, because the tokenizer now has to process entire novels, instead of a single sentence. Sit tight!)
Step26: Finally, let us visually inspect the BOW model which we have converted. To this end, we make use of pandas, a Python-package that is nowadays often used to work with all sorts of data tables in Python. In the code block below, we create a new table or 'DataFrame' and add the correct row and column names
Step27: After creating this DataFrame, it becomes very easy to retrieve specific information from our corpus. What is the frequency, for instance, of the word 'the' in each text?
Step28: Or the frequency of 'and' in 'Emma'?
Step29: Text analysis
Distance metrics
Now that we have converted our small dummy corpus to a bag-of-words matrix, we can finally start actually analyzing it! One very common technique to visualize texts is to render a cluster diagram or dendrogram. Such a tree-like visualization (an example will follow shortly) can be used to obtain a rough-and-ready first idea of the (dis)similarities between the texts in our corpus. Texts that cluster together under a similar branch in the resulting diagram, can be argued to be stylistically closer to each other, than texts which occupy completely different places in the tree. Texts by the same authors, for instance, will often form thight clades in the tree, because they are written in a similar style.
However, when comparing texts, we should be aware of the fact that documents can strongly vary in length. The bag-of-words model which we created above does not take into account that some texts might be longer than others, because it simply uses absolute frequencies, which will be much higher in the case of longer documents. Before comparing texts on the basis of word frequencies, it therefore makes sense to apply some sort of normalization. One very common type of normalization is to use relative, instead of absolute word frequencies
Step30: Now, we can efficiently 'scale' or normalize the matrix using these sums
Step31: If we inspect our new frequency table, we can see that the values are now neatly in the 0-1 range
Step32: Moreover, if we now print the sum of the word frequencies for each of our nine texts, we see that the relative values sum to 1
Step33: That looks great. Let us now build a model with a more serious vocabulary size (=300) for the actual cluster analysis
Step34: Clustering algorithms are based on essentially based on the distances between texts
Step35: The function pdist() ('pairwise distances') is a function which we can use to calculate the distance between each pair of texts in our corpus. Using the squareform() function, we will eventually obtain a 9x9 matrix, the structure of which is conceptually easy to understand
Step36: As is clear from the shape info, we have obtained a 9 by 9 matrix, which holds the distance between each pair of texts. Note that the distance from a text to itself is of course zero (cf. diagonal cells)
Step37: Additionally, we can observe that the distance from text A to text B, is equal to the distance from B to A
Step38: We can visualize this distance matrix as a square heatmap, where darker cells indicate a larger distance between texts. Again, we use the matplotlib package to achieve this
Step39: As you can see, the little squares representing texts by the same author already show a tendency to invite lower distance scores. But how are these distances exactly calculated?
Each text in our distance matrix is represented as a row, consisting of 100 numbers. Such a list of numbers is also called a document vector, which is why the document modeling process described above is sometimes also called vectorization (cf. CountVectorizer). In digital text analysis, documents are compared by applying standard metrics from geometry to these documents vectors containing word frequencies. Let us have a closer look at one popular, but intuitively simple distance metric, the Manhattan city block distance. The formula behind this metric is very simple (don't be afraid of the mathematical notation; it won't bite)
Step40: Can you calculate the manhattan distance between a and b by hand? Compare the result you obtain to this line of code
Step41: This is an example of one popular distance metric which is currently used a lot in digital text analysis. Alternatives (which might ring a bell from math classes in high school) include the Euclidean distance or cosine distance. Our dm distance matrix from above can be created with any of these option, by specifying the correct metric when calling pdist(). Try out some of them!
Step42: Cluster trees
Now that we have learned how to calculate the pairwise distances between texts, we are very close to the dendrogram that I promised you a while back. To be able to visualize a dendrogram, we must first figure out the (branch) linkages in the tree, because we have to determine which texts are most similar to each etc. Our clustering procedure therefore starts by merging (or 'linking') the most similar texts in the corpus into a new node; only at a later stage in the tree, these new nodes of very similar texts will be joined together with nodes representing other texts. We perform this - fairly abstract - step on our distance matrix as follows
Step43: We are now ready to draw the actual dendrogram, which we do in the following code block. Note that we annotate the outer leaf nodes in our tree (i.e. the actual texts) using the labels argument. With the orientation argument, we make sure that our dendrogram can be easily read from left to right
Step44: Using the authors as labels is of course also a good idea
Step45: As we can see, Jane Austen's novels form a tight and distinctive cloud; an author like Thackeray is apparantly more difficult to tell apart. The actual distance between nodes is hinted at on the horizontal length of the branches (i.e. the values on the x-axis in this plot). Note that in this code block too we can easily switch to, for instance, the Euclidean distance. Does the code block below produce better results?
Step46: Exercise
The code repository also contains a larger folder of novels, called victorian_large. Use the code block below to copy and paste code snippets from above, which you can slightly adapt to do the following things
Step47: Topic modeling
Up until now, we have been working with fairly small, dummy-size corpora to introduce you to some standard methods for text analysis in Python. When working with real-world data, however, we are often confronted with much larger and noisier datasets, sometimes even datasets that are too large to read or inspect manually. To deal with such huge datasets, researchers in fields such as computer science have come up with a number of techniques that allow us to nevertheless get a grasp of the kind of texts that are contained in a document collection, as well as their content.
For this part of the tutorial, I have included a set of +3,000 documents under the folder 'data/newsgroups'. The so-called "20 newsgroups dataset" is a very famous dataset in computational linguistics (see this website )
Step48: As you can see, we are dealing with 3,551 documents. Have a look at some of the documents and try to find out what they are about. Vary the index used to select a random document and print out its first 1000 characters or so
Step49: You might already get a sense of the kind of topics that are being discussed. Also, you will notice that these are rather noisy data, which is challenging for humans to process manually. In the last part of this tutorial we will use a technique called topic modelling. This technique will automatically determine a number of topics or semantic word clusters that seem to be important in a document collection. The nice thing about topic modelling is that is a largely unsupervised technique, meaning that it does not need prior information about the document collection or the language it uses. It will simply inspect which words often co-occur in documents and are therefore more likely to be semantically related.
After fitting a topic model to a document collection, we can use it to inspect which topics have been detected. Additionally, we can use the model to infer to which extent these topics are present in new documents. Interestingly, the model does not assume that texts are always about a single topic; rather, it assumes that documents contain a mixture of different topics. A text about the transfer of a football player, for instance, might contain 80% of a 'sports' topic, 15% of a 'finance'-related topic, and 5% of a topic about 'Spanish lifestyle' etc. For topic modelling too, we first need to convert our corpus to a numerical format (i.e. 'vectorize' it as we did above). Luckily, we already know how to do that
Step50: Note that we make use of a couple of additional bells and whistles that ship with sklearn's CountVectorizer. Can you figure out what they mean (hint
Step51: We are now ready to start modelling the topics in this text collection. For this we make use of a popular technique called Latent Dirichlet Allocation or LDA, which is also included in the sklearn library. In the code block below, you can safely ignore most of the settings which we use when we initialize the model, but you should pay attention to the n_topics and max_iter parameter. The former controls how many topics we will extract from the document collection (this is one of few parameters which the model, sadly, does not learn itself). We start with a fairly small number of topics, but if you want a more finegrained analysis of your corpus you can always increase this parameter. The max_iter setting, finally, controls how long we let the model 'think'
Step52: After the model has (finally!) been fitted, we can now inspect our topics. We do this by finding out which items in our vocabulary have the highest score for each topic. The topics are available as lda.components_ after the model has been fitted.
Step53: Can you make sense of these topics? Which are the main thematic categories that you can discern?
DIY
Try to run the algorithm with more topics and allow more iterations (but don't exaggerate!)
Step54: As you can see, we obtain another sort of document matrix, where the number of columns corresponds to the number of topics we extracted. Let us now find out whether this representation yields anything useful. It is difficult to visualize 3,000+ documents all at once, so in the code block below, I select a smaller subset of 30 documents (and the corresponding filenames), using the random module.
Step55: We can now use our clustering algorithm from above in an exactly parallel way. Go on and try it (because of the random aspect of the previous code block, it possible that you obtain a different random selection). | Python Code:
text = 'It is a truth, universally acknowledged.'
Explanation: Digital Text Analysis
Present-day society is flooded with digital texts: never before, humankind has produced more text than now. To efficiently cope with the vast amounts of text that are published nowadays, industry and academia alike increasingly turn to automated techniques for text analysis. Spam filtering or machine translation are but two relevant examples of popular applications for automated text analysis.
In this workshop, we will work our way through a newbee tutorial which showcases a number of ways how you can use computer programming to analyze digital texts. We will use Python, an intuitive, yet powerful programming language which is increasingly popular among people working in text analysis. While the scope of this workshop is too limited to present all possibilities, I hope this workshop will give you some idea of Python's current possibilities for text analysis. Below, we will have a look at three introductory topics: (1) building a text representation using a bag-of-words model, (2) visualizing texts via cluster trees and (3) text mining via topic modelling. We might not make it up to the third and final part of the tutorial, but if you're interested feel free to give this part a go at home.
Because we will of course not have the time to explain all details about coding in Python, I will try to give you an idea of what it is like to use Python to analyse texts via 'distant reading'. I do not expect you will be able to understand everything little detail about the code after today, but I trust you will nevertheless experience that Python is an easy-to-read language. Hopefully, this tutorial will wet your appetite! Below you will find code blocks (in lightgrey) which you should be able to execute in your browser, by clicking Shift + Enter. It is important that you execute each of these code blocks when progressing through the chapter. Let's get started!
Text representation: the bag-of-words model
Basic string processing
Computers cannot intuitively process or 'read' texts, as humans can. Even today, a computer is still nothing more than an incredibly large and fast abacus, which is only able to count and process information if it is represented in a numerical format. For computers, texts are nothing more than a long series or 'string' of characters. In Python, we can create a variable collecting a piece of text as follows:
End of explanation
print(len(text))
Explanation: Here we define a string of text by enclosing it with quotation marks and assigning it to a variable or container called text. The fact that Python still sees this piece of text as a contiguous string of characters becomes evident when we ask Python to print out the length of text, using the len() function:
End of explanation
# your code goes here
Explanation: One could say that characters are the 'atoms' or smallest meaningful units in computational text processing. Just as computer images use pixels as their fundamental building blocks, all digital text processing applications start from raw characters, and it is these characters that are physically stored on your machines in bits and bytes.
DIY
Define a string containing your own name in the code block below and print its length. Insert a number of whitespaces at the end of your name (i.e. tab the space bar a couple of times): does this change the length of the string?
End of explanation
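One possible answer to the DIY above, with a made-up name (any string will do); note that trailing whitespace counts towards the length:
name = 'Ada Lovelace   '
# whitespace characters are characters too, so they are included in the length
print(len(name))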
words = text.split()
print(words)
Explanation: Many people find it more intuitive to consider texts as strings of words, rather than plain characters, because words correspond to more concrete entities. In Python, we can easily turn our original 'string' into a list of words:
End of explanation
print(type(text))
print(type(words))
Explanation: Using the split() method, we split our original sentence into a word list along instances of whitespace characters. Note that, in technical terms, the variable type of text is different from that of the newly created variable words:
End of explanation
print(len(text))
print(len(words))
Explanation: Likewise, they evidently differ in length:
End of explanation
print(words[3])
print(words[5])
Explanation: By using 'indexing' (with square brackets), we can now access individual words in our word list. Check out the following print statements:
End of explanation
print(type(words[3]))
Explanation: DIY
Try out the indexes [0] and [-1] for the example with words. Which words are being printed when you use these indices? Does this make sense? What is peculiar about the way Python 'counts'?
Note that words is a so-called list variable in technical terms, but that the individual elements of words are still plain strings:
End of explanation
import nltk
Explanation: Tokenization
In the previous paragraph, we have adopted an extremely crude definition of a 'word', namely as a string of characters that doesn't contain any whitespace. There are of course many problems that arise if we use such a naive definition. Can you think of some?
In computer science, and computational linguistics in particular, people have come up with much smarter ways to divide texts into words. This process is called tokenization, which refers to the fact that this process divides strings of characters into a list of more meaningful tokens. One interesting package which we can use for this, is nltk (the Natural Language Toolkit), a package which has been specifically designed to deal with language problems. First, we have to import it, since it isn't part of the standard library of Python:
End of explanation
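A practical note added here: nltk's default word tokenizer depends on pretrained tokenizer models that may have to be downloaded once; whether this step is needed depends on your local installation.
# one-time download of the 'punkt' tokenizer models used by nltk.word_tokenize
nltk.download('punkt')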
tokens = nltk.word_tokenize(text)
print(tokens)
Explanation: We can now apply nltk's functionality, for instance its (default) tokenizer for English:
End of explanation
lower_str = text.lower()
lower_tokens = nltk.word_tokenize(lower_str)
print(lower_tokens)
Explanation: Note how the function word_tokenize() neatly splits off punctuation! Many improvements nevertheless remain. To collapse the difference between uppercase and lowercase variants, for instance, we could first lowercase the original input string:
End of explanation
print(lower_tokens[1].isalpha())
print(lower_tokens[-1].isalpha())
Explanation: Many applications will not be very interested in punctuation marks, so we can try to remove these as well. The isalpha() method allows you to determine whether a string only contains alphabetic characters:
End of explanation
clean_tokens = [w for w in lower_tokens if not w.isalpha()]
print(clean_tokens)
Explanation: Functions like isalpha() return something that is called a 'boolean' value, a kind of variable that can only take two values, i.e. True or False. Such values are useful because you can use them to test whether some condition is true or not. For example, if isalpha() evaluates to False for a word, we can have Python ignore such a word.
DIY
Using some more complicated Python syntax (a so-called 'list comprehension'), it is very easy to filter out non-alphabetic strings. In the example below, I inserted a logical thinking error on purpose: can you adapt the line below and make it output the correct result? You will note that Python is really a super-intuitive programming language, because it almost reads like plain English.
End of explanation
text = It is a truth universally acknowledged, that a single man
in possession of a good fortune, must be in want of a wife. However
little known the feelings or views of such a man may be on his first
entering a neighbourhood, this truth is so well fixed in the minds
of the surrounding families, that he is considered the rightful
property of some one or other of their daughters. "My dear Mr. Bennet,"
said his lady to him one day, "have you heard that Netherfield Park is
let at last?" Mr. Bennet replied that he had not. "But it is," returned
she; "for Mrs. Long has just been here, and she told me all about it."
Mr. Bennet made no answer. "Do you not want to know who has taken it?"
cried his wife impatiently. "_You_ want to tell me, and I have no
objection to hearing it." This was invitation enough.
lower_str = text.lower()
lower_tokens = nltk.word_tokenize(lower_str)
clean_tokens = [w for w in lower_tokens if w.isalpha()]
print('Word count:', len(clean_tokens))
Explanation: Counting words
Once we have come up with a good way to split texts into individual tokens, it is time to start thinking about how we can represent texts via these tokens. One popular approach to this problem is called the bag-of-words model (BOW): this is a very old (and slightly naive) strategy for text representation, but is still surprisingly popular. Many spam filters, for instance, will still rely on a bag-of-words model when deciding which email messages will show up in your Junk folder.
The intuition behind this model is very simple: to represent a document, we consider it a 'bag', containing tokens in no particular order. We then characterize a particular text by counting how often each term occurs in it. Counting how often each word occurs in a list of tokens like the example from above is child's play in Python. For this purpose, we copy some of the code from the previous section and apply it to a larger paragraph:
End of explanation
from collections import Counter
bow = Counter(clean_tokens)
print(bow)
Explanation: We obtain a list of 148 tokens. Counting how often each individual token occurs in this 'document' is trivial, using the Counter object which we can import from Python's collections module:
End of explanation
print(bow.most_common(3))
Explanation: Let us have a look at the three most frequent items in the text:
End of explanation
%matplotlib inline
Explanation: Obviously, the most common items in such a frequency list are typically small, grammatical words that are very frequent throughout all the texts in a language. Let us add a small visualisation of this information. We can use a barplot to show the top-frequency items in a more pleasing manner. In the following block, we use the matplotlib package for this purpose, which is a popular graphics package in Python. To make sure that it shows up neatly in our notebook, first execute this cell:
End of explanation
# first, we extract the counts:
nb_words = 8
wrds, cnts = zip(*bow.most_common(nb_words))
print(wrds)
print(cnts)
# now the plotting part:
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots()
bar_width = 0.5
idxs = np.arange(nb_words)
#print(idxs)
ax.bar(idxs, cnts, bar_width, color='blue', align='center')
plt.xticks(idxs, wrds)
plt.show()
Explanation: And then execute the following blocks -- and try to understand the intuition behind them:
End of explanation
# we import some modules which we need
import glob
import os
# we create three empty lists
authors, titles, texts = [], [], []
# we loop over the filenames under the directory:
for filename in sorted(glob.glob('data/victorian_small/*.txt')):
# we open a file and read the contents from it:
with open(filename, 'r') as f:
text = f.read()
# we derive the title and author from the filename
author, title = os.path.basename(filename).replace('.txt', '').split('_')
# we add to the lists:
authors.append(author)
titles.append(title)
texts.append(text)
Explanation: DIY
Can you try to increase the number of words plotted? And change the color used for the bars to blue? And the width of the bars plotted?
The Bag of Words Model
We are almost there: we now know how to split documents into tokens and how we can count (and even visualize!) the frequencies of these items. Now it is only a small step towards a 'real' bag of words model. If we represent a collection of texts under a bag of words model, what we really would like to end up with is a frequency table, which has a row for each document, and a column for all the tokens that occur in the collection, which is also called the vocabulary of the corpus. Each cell is then filled with the frequency of each vocabulary item, so that the final matrix will look like a standard, two-dimensional table which you all know from spreadsheet applications.
While creating such a matrix yourself isn't too difficult in Python, here we will rely on an external package, which makes it really simple to efficiently create such matrices. The zipped folder which you downloaded for this course contains a small corpus of novels by a number of famous Victorian novelists. Under data/victorian_small, for instance, you will find a number of files; the filenames indicate the author and (abbreviated) title of the novel contained in that file (e.g. Austen_Pride.txt). In the block below, I prepared some code to load these files from your hard drive into Python, which you can execute now:
End of explanation
print(authors)
print(titles)
Explanation: This code makes use of a so called for-loop: after retrieving the list of relevant file names, we load the content of each file and add it to a list called texts, using the append() function. Additionally, we also create lists in which we store the authors and titles of the novels:
End of explanation
print('Title:', titles[2], '- by:', authors[2])
Explanation: Note that these three lists can be neatly zipped together, so that the third item in authors corresponds to the third item in titles (remember: Python starts counting at zero!):
End of explanation
print(texts[2][:300])
Explanation: To have a peek at the content of this novel, we can now 'stack' indices as follows. Using the first square brackets ([2]) we select the third novel in the list, i.e. Sense and Sensibility by Jane Austen. Then, using a second index ([:300]), we print the first 300 characters in that novel.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(max_features=10)
BOW = vec.fit_transform(texts).toarray()
print(BOW.shape)
Explanation: After loading these documents, we can now represent them using a bag of words model. To this end, we will use a library called scikit-learn, or sklearn in shorthand, which is increasingly popular in text analysis nowadays. As you will see below, we import its CountVectorizer object and apply it to our corpus, specifying that we would like to extract a maximum of 10 features from the texts. This means that we will only consider the frequencies of 10 words (to keep our model small enough to be workable for now).
End of explanation
print(vec.get_feature_names())
Explanation: The code block above creates a matrix which has a 9x10 shape: this means that the resulting matrix has 9 rows and 10 columns. Can you figure out where these numbers come from?
To find out which words are included, we can inspect the newly created vec object as follows:
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(max_features=10, tokenizer=nltk.word_tokenize)
BOW = vec.fit_transform(texts).toarray()
print(vec.get_feature_names())
Explanation: As you can see, the max_features argument which we used above restricts the model to the n words which are most frequent throughout our texts. These are typically smallish function words. Funnily, sklearn uses its own tokenizer, and this default tokenizer ignores certain words that are surprisingly enough absent in the vocabulary list we just inspected. Can you figure which words? Why are they absent?
Luckily, sklearn is flexible enough to allow us to use our own tokenizer. To use the nltk tokenizer for instance, we can simply pass it as an argument when we create the CountVectorizer. (Note that, depending on the speed of your machine, the following code block might actually take a while to execute, because the tokenizer now has to process entire novels, instead of a single sentence. Sit tight!)
End of explanation
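A small compatibility note added here: in recent scikit-learn releases (which version you have locally is an assumption), get_feature_names() has been renamed, so the equivalent call would be:
# newer scikit-learn versions expose the vocabulary via get_feature_names_out()
print(vec.get_feature_names_out())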
import pandas as pd # conventional shorthand!
df = pd.DataFrame(BOW, columns=vec.get_feature_names(), index=titles)
print(df)
Explanation: Finally, let us visually inspect the BOW model which we have converted. To this end, we make use of pandas, a Python-package that is nowadays often used to work with all sorts of data tables in Python. In the code block below, we create a new table or 'DataFrame' and add the correct row and column names:
End of explanation
print(df['the'])
Explanation: After creating this DataFrame, it becomes very easy to retrieve specific information from our corpus. What is the frequency, for instance, of the word 'the' in each text?
End of explanation
print(df['of']['Emma'])
Explanation: Or the frequency of 'of' in 'Emma'?
End of explanation
totals = BOW.sum(axis=1, keepdims=True)
print(totals)
Explanation: Text analysis
Distance metrics
Now that we have converted our small dummy corpus to a bag-of-words matrix, we can finally start actually analyzing it! One very common technique to visualize texts is to render a cluster diagram or dendrogram. Such a tree-like visualization (an example will follow shortly) can be used to obtain a rough-and-ready first idea of the (dis)similarities between the texts in our corpus. Texts that cluster together under a similar branch in the resulting diagram can be argued to be stylistically closer to each other than texts which occupy completely different places in the tree. Texts by the same authors, for instance, will often form tight clades in the tree, because they are written in a similar style.
However, when comparing texts, we should be aware of the fact that documents can strongly vary in length. The bag-of-words model which we created above does not take into account that some texts might be longer than others, because it simply uses absolute frequencies, which will be much higher in the case of longer documents. Before comparing texts on the basis of word frequencies, it therefore makes sense to apply some sort of normalization. One very common type of normalization is to use relative, instead of absolute word frequencies: that means that we have to divide the original, absolute frequencies in a document by the total number of words in that document. Remember that we are dealing with a 9x10 matrix at this stage:
Each of the 9 document rows which we obtain should now be normalized by dividing each word count by the total number of words which we recorded for that document. First, we therefore need to calculate the row-wise sum in our table.
End of explanation
BOW = BOW / totals
print(BOW.shape)
Explanation: Now, we can efficiently 'scale' or normalize the matrix using these sums:
End of explanation
print(BOW)
Explanation: If we inspect our new frequency table, we can see that the values are now neatly in the 0-1 range:
End of explanation
print(BOW.sum(axis=1))
Explanation: Moreover, if we now print the sum of the word frequencies for each of our nine texts, we see that the relative values sum to 1:
End of explanation
vec = CountVectorizer(max_features=300, tokenizer=nltk.word_tokenize)
BOW = vec.fit_transform(texts).toarray()
BOW = BOW / BOW.sum(axis=1, keepdims=True)
Explanation: That looks great. Let us now build a model with a more serious vocabulary size (=300) for the actual cluster analysis:
End of explanation
from scipy.spatial.distance import pdist, squareform
Explanation: Clustering algorithms are essentially based on the distances between texts: they typically start by calculating the distance between each pair of texts in a corpus, so that they know for each text how (dis)similar it is to any other text. Only after these pairwise distances have been calculated can we have the clustering algorithm start building a tree representation, in which similar texts are joined together and merged into new nodes. To create a distance matrix, we use a number of functions from scipy (Scientific Python), a commonly used package for scientific applications.
End of explanation
dm = squareform(pdist(BOW))
print(dm.shape)
Explanation: The function pdist() ('pairwise distances') is a function which we can use to calculate the distance between each pair of texts in our corpus. Using the squareform() function, we will eventually obtain a 9x9 matrix, the structure of which is conceptually easy to understand: this square distance matrix (named dm) will hold for each of our 9 texts the distance to each other text in the corpus. Naturally, the diagonal in this matrix are all-zeroes (since the distance from a text to itself will be zero). We create this distance matrix as follows:
End of explanation
print(dm[3][3])
print(dm[8][8])
Explanation: As is clear from the shape info, we have obtained a 9 by 9 matrix, which holds the distance between each pair of texts. Note that the distance from a text to itself is of course zero (cf. diagonal cells):
End of explanation
print(dm[2][3])
print(dm[3][2])
Explanation: Additionally, we can observe that the distance from text A to text B, is equal to the distance from B to A:
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
heatmap = ax.pcolor(dm, cmap=plt.cm.Blues)
ax.set_xticks(np.arange(dm.shape[0])+0.5, minor=False)
ax.set_yticks(np.arange(dm.shape[1])+0.5, minor=False)
ax.set_xticklabels(titles, minor=False, rotation=90)
ax.set_yticklabels(authors, minor=False)
plt.show()
Explanation: We can visualize this distance matrix as a square heatmap, where darker cells indicate a larger distance between texts. Again, we use the matplotlib package to achieve this:
End of explanation
a = [2, 5, 1, 6, 7]
b = [4, 5, 1, 7, 3]
Explanation: As you can see, the little squares representing texts by the same author already show a tendency towards lower distance scores. But how exactly are these distances calculated?
Each text in our frequency table is represented as a row of 300 numbers (one for each word in the vocabulary we extracted above). Such a list of numbers is also called a document vector, which is why the document modeling process described above is sometimes also called vectorization (cf. CountVectorizer). In digital text analysis, documents are compared by applying standard metrics from geometry to these document vectors containing word frequencies. Let us have a closer look at one popular, but intuitively simple distance metric, the Manhattan city block distance. The formula behind this metric is very simple (don't be afraid of the mathematical notation; it won't bite):
$$manhattan(x, y) = \sum_{i=1}^{n} \left| x_i - y_i \right|$$
What this formula expresses is that, to calculate the distance between two documents, we loop over each word column in both texts and take the absolute difference between the values for that word in the two texts. Afterwards, we simply sum all of these absolute differences.
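As a small sketch in plain Python (independent of the exercise below), the formula translates almost literally into code:
def manhattan_distance(x, y):
    # sum of absolute differences, position by position
    return sum(abs(xi - yi) for xi, yi in zip(x, y))
print(manhattan_distance([1, 0, 2], [0, 3, 1]))  # |1-0| + |0-3| + |2-1| = 5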
DIY
Consider the following two dummy vectors:
End of explanation
from scipy.spatial.distance import cityblock as manhattan
print(manhattan(a, b))
Explanation: Can you calculate the manhattan distance between a and b by hand? Compare the result you obtain to this line of code:
End of explanation
dm = squareform(pdist(BOW, 'cosine'))  # the metric goes to pdist(); try 'euclidean', 'cityblock', etc.
fig, ax = plt.subplots()
heatmap = ax.pcolor(dm, cmap=plt.cm.Reds)
ax.set_xticks(np.arange(dm.shape[0])+0.5, minor=False)
ax.set_yticks(np.arange(dm.shape[1])+0.5, minor=False)
ax.set_xticklabels(titles, minor=False, rotation=90)
ax.set_yticklabels(authors, minor=False)
plt.show()
Explanation: This is an example of one popular distance metric which is currently used a lot in digital text analysis. Alternatives (which might ring a bell from math classes in high school) include the Euclidean distance or the cosine distance. Our dm distance matrix from above can be created with any of these options, by specifying the desired metric when calling pdist(). Try out some of them!
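A quick way to experiment (just a sketch, reusing the BOW matrix and the scipy imports from above) is to loop over a few metric names and compare the resulting distance between the first two texts:
for metric in ('cityblock', 'euclidean', 'cosine'):
    dm_metric = squareform(pdist(BOW, metric))
    print(metric, dm_metric[0][1])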
End of explanation
from scipy.cluster.hierarchy import linkage
linkage_object = linkage(dm)
Explanation: Cluster trees
Now that we have learned how to calculate the pairwise distances between texts, we are very close to the dendrogram that I promised you a while back. To be able to visualize a dendrogram, we must first figure out the (branch) linkages in the tree, because we have to determine which texts are most similar to each other and should be merged first. Our clustering procedure therefore starts by merging (or 'linking') the most similar texts in the corpus into a new node; only at a later stage in the tree will these new nodes of very similar texts be joined together with nodes representing other texts. We perform this - fairly abstract - step on our distance matrix as follows:
End of explanation
from scipy.cluster.hierarchy import dendrogram
d = dendrogram(Z=linkage_object, labels=titles, orientation='right')
Explanation: We are now ready to draw the actual dendrogram, which we do in the following code block. Note that we annotate the outer leaf nodes in our tree (i.e. the actual texts) using the labels argument. With the orientation argument, we make sure that our dendrogram can be easily read from left to right:
End of explanation
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
Explanation: Using the authors as labels is of course also a good idea:
End of explanation
dm = squareform(pdist(BOW, 'euclidean'))
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
Explanation: As we can see, Jane Austen's novels form a tight and distinctive cloud; an author like Thackeray is apparently more difficult to tell apart. The actual distance between nodes is hinted at by the horizontal length of the branches (i.e. the values on the x-axis in this plot). Note that in this code block too we can easily switch to, for instance, the Euclidean distance. Does the code block below produce better results?
End of explanation
# exercise code goes here
Explanation: Exercise
The code repository also contains a larger folder of novels, called victorian_large. Use the code block below to copy and paste code snippets from above, which you can slightly adapt to do the following things:
1. Read in the texts, producing 3 lists of texts, authors and titles. How many texts did you load (use the len() function)?
2. Select the text for David (Copperfield) by Charles Dickens and find out how often the word "and" is used in this text. Hint: use the nltk tokenizer and the Counter object from the collections module. Make sure no punctuation is included in your counts.
3. Vectorize the text using the CountVectorizer using the 250 most frequent words.
4. Normalize the resulting document matrix and draw a heatmap using blue colors.
5. Draw a cluster diagram and experiment with the distance metrics: which distance metric produces the 'best' result from the point of view of authorship clustering?
End of explanation
import os
documents, names = [], []
for filename in sorted(os.listdir('data/newsgroups')):
try:
with open('data/newsgroups/'+filename, 'r') as f:
text = f.read()
documents.append(text)
names.append(filename)
except:
continue
print(len(documents))
Explanation: Topic modeling
Up until now, we have been working with fairly small, dummy-size corpora to introduce you to some standard methods for text analysis in Python. When working with real-world data, however, we are often confronted with much larger and noisier datasets, sometimes even datasets that are too large to read or inspect manually. To deal with such huge datasets, researchers in fields such as computer science have come up with a number of techniques that allow us to nevertheless get a grasp of the kind of texts that are contained in a document collection, as well as their content.
For this part of the tutorial, I have included a set of 3,000+ documents under the folder 'data/newsgroups'. The so-called "20 newsgroups dataset" is a very famous dataset in computational linguistics (see this website): it refers to a collection of approximately 20,000 newsgroup documents, divided into 20 categories, each corresponding to a different topic. The topics are very diverse and range from science to politics. I have subsampled a number of these categories in the repository for this tutorial, but I won't tell you which... The idea is that we will use topic modelling so that you can find out for yourself which topics are discussed in this dataset! First, we start by loading the documents, using code that is very similar to the text loading code we used above:
End of explanation
print(documents[3041][:1000])
Explanation: As you can see, we are dealing with 3,551 documents. Have a look at some of the documents and try to find out what they are about. Vary the index used to select a random document and print out its first 1000 characters or so:
End of explanation
vec = CountVectorizer(max_df=0.95, min_df=5, max_features=2000, stop_words='english')
BOW = vec.fit_transform(documents)
print(BOW.shape)
Explanation: You might already get a sense of the kind of topics that are being discussed. Also, you will notice that these are rather noisy data, which is challenging for humans to process manually. In the last part of this tutorial we will use a technique called topic modelling. This technique will automatically determine a number of topics or semantic word clusters that seem to be important in a document collection. The nice thing about topic modelling is that it is a largely unsupervised technique, meaning that it does not need prior information about the document collection or the language it uses. It will simply inspect which words often co-occur in documents and are therefore more likely to be semantically related.
After fitting a topic model to a document collection, we can use it to inspect which topics have been detected. Additionally, we can use the model to infer to which extent these topics are present in new documents. Interestingly, the model does not assume that texts are always about a single topic; rather, it assumes that documents contain a mixture of different topics. A text about the transfer of a football player, for instance, might contain 80% of a 'sports' topic, 15% of a 'finance'-related topic, and 5% of a topic about 'Spanish lifestyle' etc. For topic modelling too, we first need to convert our corpus to a numerical format (i.e. 'vectorize' it as we did above). Luckily, we already know how to do that:
End of explanation
print(vec.get_feature_names())
Explanation: Note that we make use of a couple of additional bells and whistles that ship with sklearn's CountVectorizer. Can you figure out what they mean (hint: df here stands for document frequency)? In topic modelling we are not interested in the type of high-frequency grammatical words that we have used up until now. Such words are typically called function words in Information Retrieval and they are mostly ignored completely in topic modelling. Have a look at the 2,000 features extracted (cf. max_features above): are these indeed content words?
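To get a feel for these two parameters, here is a small exploratory sketch (reusing the documents list loaded above; the exact counts are not important): it reports how large the vocabulary becomes under a few different min_df/max_df settings.
for min_df, max_df in [(1, 1.0), (5, 0.95), (20, 0.5)]:
    v = CountVectorizer(min_df=min_df, max_df=max_df, stop_words='english')
    v.fit(documents)
    print(min_df, max_df, '->', len(v.get_feature_names()), 'features kept')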
End of explanation
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=50,
max_iter=10,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(BOW)
Explanation: We are now ready to start modelling the topics in this text collection. For this we make use of a popular technique called Latent Dirichlet Allocation or LDA, which is also included in the sklearn library. In the code block below, you can safely ignore most of the settings which we use when we initialize the model, but you should pay attention to the n_topics and max_iter parameters. The former controls how many topics we will extract from the document collection (this is one of the few parameters which the model, sadly, does not learn itself). We start with a fairly small number of topics, but if you want a more fine-grained analysis of your corpus you can always increase this parameter. The max_iter setting, finally, controls how long we let the model 'think': the more iterations we allow, the better the model will get, but because LDA is, as you will see, fairly computationally intensive, it makes sense to start with a relatively low number in this respect. You can now execute the following code block -- you will see that this code might take several minutes to complete.
End of explanation
feature_names = vec.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print('Topic', topic_idx, '> ', end='')
print(' '.join([feature_names[i] for i in topic.argsort()[:-12 - 1:-1]]))
Explanation: After the model has (finally!) been fitted, we can now inspect our topics. We do this by finding out which items in our vocabulary have the highest score for each topic. The topics are available as lda.components_ after the model has been fitted.
End of explanation
topic_repr = lda.transform(BOW)
print(topic_repr.shape)
Explanation: Can you make sense of these topics? Which are the main thematic categories that you can discern?
DIY
Try to run the algorithm with more topics and allow more iterations (but don't exaggerate!): do the results get more interpretable?
Now that we have built a topic model, we can use it to represent our corpus. Instead of representing each document as a vector containing word frequencies, we represent it as a vector containing topic scores. To achieve this, we can simply call the transform() function on the bag-of-words representation of our documents:
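For instance (a minimal sketch building on the topic_repr matrix computed above), you can look up which single topic dominates a given document by taking the argmax over its topic scores:
doc_idx = 0
dominant = topic_repr[doc_idx].argmax()
print(names[doc_idx], 'is dominated by topic', dominant)
print('its full topic distribution:', topic_repr[doc_idx])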
End of explanation
comb = list(zip(names, topic_repr))
import random
random.seed(10000)
random.shuffle(comb)
comb = comb[:30]
subset_names, subset_topic_repr = zip(*comb)
Explanation: As you can see, we obtain another sort of document matrix, where the number of columns corresponds to the number of topics we extracted. Let us now find out whether this representation yields anything useful. It is difficult to visualize 3,000+ documents all at once, so in the code block below, I select a smaller subset of 30 documents (and the corresponding filenames), using the random module.
End of explanation
dm = squareform(pdist(subset_topic_repr, 'cosine'))  # the metric goes to pdist(); try 'euclidean', 'cityblock', etc.
linkage_object = linkage(dm, method='ward')
fig_size = plt.rcParams["figure.figsize"]
plt.rcParams["figure.figsize"] = [15, 9]
d = dendrogram(Z=linkage_object, labels=subset_names, orientation='right')
Explanation: We can now use our clustering algorithm from above in an exactly parallel way. Go on and try it (because of the random aspect of the previous code block, it is possible that you will obtain a different random selection).
End of explanation |
6,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supernova Example
Introduction
In this toy example we compare ABC and MCMC as methods of estimating cosmological parameters from supernovae data. The following model describes the distance modulus as a function of redshift
Step1: The Data
First, we need to provide a dataset. The aim is to generate values of the distance modulus, $\mu_{i}$, with fixed "true" parameters $\Omega_{m}$ and $w_{0}$
$$\Omega_{m} = 0.3$$
$$w_{0} = -1.0$$
SNANA is used to generate ~400 supernova light curves which are then fit with the SALT-II light curve fitter. The data span the redshift interval $z \in [0.5,1.0]$ and are binned into 20 redshift bins.
Step2: Adding noise to the data
To add artificial noise to the data we use a skewed normal distribution. The standard normal distribution has a probability distribution function given by
$$ \phi(x) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{x^{2}}{2}}$$
and a cumulative distribution function
Step3: From this distribution, the noisy data is generated. At each $z_{i}$ a random number is drawn from the above distribution and added to $\mu_{i}$.
Step4: A comparison of the data before and after noise is added is shown below
Step5: To demonstrate the non-Gaussian distribution of the data at each $z_{i}$, we focus on the data at $z_{1}=0.5$. The distribution of a large random sample at this redshift is shown below. Each value in this sample is generated by adding a randomly drawn number from the skewed normal distribution to $\mu_{1}$. The value of $\mu_{1}$ before noise is added is shown in red. As we can see the data is now a skewed distribution around the expected mean.
Step6: ABC
ABC is used to estimate the posterior distribution of the unknown parameters $\Omega_{m}$ and $w_{0}$ from the data. To use ABC we need to specify
Step7: Parameters needed for ABC are specified
Step8: The prior distributions of $\Omega_{m}$ and $w_{0}$ are chosen as follows
Step9: Finally, we need a simulation function. This must be able to simulate the data at every point in parameter space. At each $z_{i}$ the simulation uses $\mu_{model}(z_{i};\Omega_{m},w_{0})$ given by
$$\mu_{i}^{model}(z_{i};\Omega_{m},w_{0}) \propto 5\log_{10}\left(\frac{c(1+z)}{h_{0}}\int_{0}^{z}\frac{dz'}{E(z')}\right)$$
to produce a value of distance modulus. To account for noise in the data we then add a number randomly drawn from a skewed normal distribution.
Step10: The ABC sampler is run for 20 iterations. At each iteration the current estimate of the unknown parameters is shown. | Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
import numpy as np
from scipy.stats import skewnorm
import math
import astroabc
from distance_calc import DistanceCalc
from bin_data import *
Explanation: Supernova Example
Introduction
In this toy example we compare ABC and MCMC as methods of estimating cosmological parameters from supernovae data. The following model describes the distance modulus as a function of redshift:
$$\mu_{i}^{model}(z_{i};\Omega_{m},w_{0}) \propto 5\log_{10}\left(\frac{c(1+z)}{h_{0}}\int_{0}^{z}\frac{dz'}{E(z')}\right)$$
$$E(z) = \sqrt{\Omega_{m} (1+z)^{3} + (1-\Omega_{m})e^{3\int_{0}^{z} d\ln(1+z')[1+w(z')]}}$$
Both ABC and MCMC use this model to estimate the posterior distributions of matter density $\Omega_{m}$ and the dark energy equation of state $w_{0}$.
To demonstrate the difference between ABC and MCMC, we first generate a dataset containing artificial noise such that the distribution of the data is non-Gaussian.
* To use MCMC we are required to specify a likelihood function $\mathcal{L}(D|\Omega_{m}, w_{0})$. By making the standard assumption of a Gaussian likelihood, we are (incorrectly) assuming that the data is Gaussian distributed. Under this assumption we expect biased results from MCMC.
* To use ABC, we must be able to simulate the data at every point in parameter space. The simulation can naturally include non-Gaussian noise (in many physical examples it is easier to include noise in a simulation than to account for it analytically).
End of explanation
zbins,avmu_bin,averr_bin,mu_in_bin_new,mu_in_bin_new = read_data()
Explanation: The Data
First, we need to provide a dataset. The aim is to generate values of the distance modulus, $\mu_{i}$, with fixed "true" parameters $\Omega_{m}$ and $w_{0}$
$$\Omega_{m} = 0.3$$
$$w_{0} = -1.0$$
SNANA is used to generate ~400 supernova light curves which are then fit with the SALT-II light curve fitter. The data span the redshift interval $z \in [0.5,1.0]$ and are binned into 20 redshift bins.
End of explanation
e = -0.1 #location
w = 0.3 #scale
a = 5.0 #skew
plt.figure(figsize=(17,8))
plt.hist(skewnorm.rvs(a, loc=e, scale=w, size=10000),normed=True,bins=20,color='#593686')
plt.title("Distribution of a random sample",fontsize=17);
Explanation: Adding noise to the data
To add artificial noise to the data we use a skewed normal distribution. The standard normal distribution has a probability distribution function given by
$$ \phi(x) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{x^{2}}{2}}$$
and a cumulative distribution function:
$$\Phi(x) = \frac{1}{2} [1 + erf(\frac{x}{\sqrt{2}})] $$
The skewed normal distribution $f(x)$ with parameter $\alpha$ is given by
$$f(x) = 2\phi(x)\Phi(\alpha x)$$
Using this probability distribution function, we can draw a random sample from the skewed normal distribution.
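As a quick sanity check (purely illustrative, not part of the original analysis), the formula above can be written out with scipy's standard normal pdf and cdf and compared against scipy.stats.skewnorm directly:
from scipy.stats import norm
def skew_normal_pdf(x, alpha):
    # f(x) = 2 * phi(x) * Phi(alpha * x)
    return 2.0 * norm.pdf(x) * norm.cdf(alpha * x)
print(skew_normal_pdf(1.2, 5.0))
print(skewnorm.pdf(1.2, 5.0))  # agrees for the standard (loc=0, scale=1) case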
End of explanation
data = np.zeros(len(zbins))
for i in range(len(zbins)):
data[i] = avmu_bin[i] + skewnorm.rvs(a, loc=e, scale=w, size=1)
Explanation: From this distribution, the noisy data is generated. At each $z_{i}$ a random number is drawn from the above distribution and added to $\mu_{i}$.
End of explanation
plt.figure(figsize=(17,8))
plt.errorbar(zbins,avmu_bin,averr_bin,marker="o",linestyle="None",label="without noise",color='#593686')
plt.scatter(zbins,data,color='r',label="with noise")
plt.legend(loc="upper left",prop={'size':17});
plt.xlabel("$z$",fontsize=20)
plt.ylabel("$\mu(z)$",fontsize=20)
plt.title("Data before and after noise is added",fontsize=17);
Explanation: A comparison of the data before and after noise is added is shown below:
End of explanation
z = 0
distribution = np.zeros(10000)
for j in range(10000):
distribution[j] = avmu_bin[z] + skewnorm.rvs(a, loc=e, scale=w, size=1)
plt.figure(figsize=(17,8))
plt.title("Distribution of the data at redshift z=0.5",fontsize=17);
plt.hist(distribution,bins=20,color='#593686',normed=True)
plt.plot((avmu_bin[z], avmu_bin[z]), (0, 2.5), 'r-', label="True $\mu$ at $z = 0.5$");
plt.legend(prop={'size':16});
Explanation: To demonstrate the non-Gaussian distribution of the data at each $z_{i}$, we focus on the data at $z_{1}=0.5$. The distribution of a large random sample at this redshift is shown below. Each value in this sample is generated by adding a randomly drawn number from the skewed normal distribution to $\mu_{1}$. The value of $\mu_{1}$ before noise is added is shown in red. As we can see the data is now a skewed distribution around the expected mean.
End of explanation
def my_dist(d,x):
if x[0]==None:
return float('Inf')
else:
return np.sum(((x-d)/averr_bin)**2)
Explanation: ABC
ABC is used to estimate the posterior distribution of the unknown parameters $\Omega_{m}$ and $w_{0}$ from the data. To use ABC we need to specify:
* A metric $\rho$
* Prior distributions of $\Omega_{m}$ and $w_{0}$
* A simulation function
For more information on each of these, please see the Introduction page or the simple Gaussian demo
In this example the metric $\rho$ is defined to be
$$\rho(\mu,\mu_{sim}(z)) = \sum_{i} \frac{(\mu_{i} - \mu_{sim}(z_{i}))^{2}}{\sigma_{i}^{2}}$$
where $\sigma_{i}$ is the error on the data point $\mu_{i}$.
End of explanation
nparam = 2
npart = 100 #number of particles/walkers
niter = 20 #number of iterations
tlevels = [500.0,0.005] #maximum,minimum tolerance
prop={'tol_type':'exp',"verbose":1,'adapt_t':True,
'threshold':75,'pert_kernel':2,'variance_method':0,
'dist_type': 'user','dfunc':my_dist, 'restart':"restart_test.txt", \
'outfile':"abc_pmc_output_"+str(nparam)+"param.txt",'mpi':False,
'mp':True,'num_proc':2, 'from_restart':False}
Explanation: Parameters needed for ABC are specified:
End of explanation
priorname = ["normal","normal"]
hyperp = [[0.3,0.5], [-1.0,0.5]]
prior = list(zip(priorname,hyperp))
Explanation: The prior distributions of $\Omega_{m}$ and $w_{0}$ are chosen as follows:
* For $\Omega_{m}$ we use a normal distribution with mean $0.3$ and standard deviation $0.5$.
* For $w_{0}$ we use a normal distribution with mean $-1.0$ and standard deviation $0.5$.
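As a small illustrative sketch (not required by astroabc itself), you can draw a few samples from these priors with scipy to see the parameter ranges they cover:
from scipy.stats import norm
print(norm.rvs(loc=0.3, scale=0.5, size=5))   # draws of Omega_m
print(norm.rvs(loc=-1.0, scale=0.5, size=5))  # draws of w_0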
End of explanation
def ABCsimulation(param): #param = [om, w0]
if param[0] < 0.0 or param[0] > 1.0:
return [None]*len(zbins)
else:
model_1_class = DistanceCalc(param[0],0,1-param[0],0,[param[1],0],0.7) #om,ok,ol,wmodel,de_params,h0
data_abc = np.zeros(len(zbins))
for i in range(len(zbins)):
data_abc[i] = model_1_class.mu(zbins[i]) + skewnorm.rvs(a, loc=e, scale=w, size=1)
return data_abc
Explanation: Finally, we need a simulation function. This must be able to simulate the data at every point in parameter space. At each $z_{i}$ the simulation uses $\mu_{model}(z_{i};\Omega_{m},w_{0})$ given by
$$\mu_{i}^{model}(z_{i};\Omega_{m},w_{0}) \propto 5\log_{10}\left(\frac{c(1+z)}{h_{0}}\int_{0}^{z}\frac{dz'}{E(z')}\right)$$
to produce a value of distance modulus. To account for noise in the data we then add a number randomly drawn from a skewed normal distribution.
End of explanation
sampler = astroabc.ABC_class(nparam,npart,data,tlevels,niter,prior,**prop)
sampler.sample(ABCsimulation)
Explanation: The ABC sampler is run for 20 iterations. At each iteration the current estimate of the unknown parameters is shown.
End of explanation |
6,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of pyesgf download usage
Obtain MyProxy credentials to allow downloading files
Step1: Now download a file using the ESGF wget script extracted from the server
Step2: … and the files will be downloaded to a temporary directory
Step3: If you are doing batch searching and things are running slow, you might be able to achieve a considerable speed up by sending the following argument to the search call
Step4: This cuts out an extra call that typically takes 2 seconds to return a response. Note that it may mean some of the functionality is affected (such as being able to view the available facets and access the hit count) so use this feature with care.
You can also dictate how the search batches up its requests with | Python Code:
from pyesgf.logon import LogonManager
lm = LogonManager()
lm.logoff()
lm.is_logged_on()
myproxy_host = 'esgf-data.dkrz.de'
lm.logon(username=None, password=None, hostname=myproxy_host)
lm.is_logged_on()
Explanation: Examples of pyesgf download usage
Obtain MyProxy credentials to allow downloading files:
End of explanation
from pyesgf.search import SearchConnection
conn = SearchConnection('https://esgf-data.dkrz.de/esg-search', distrib=False)
ctx = conn.new_context(project='obs4MIPs', institute='FUB-DWD')
ds = ctx.search()[0]
import tempfile
fc = ds.file_context()
wget_script_content = fc.get_download_script()
script_path = tempfile.mkstemp(suffix='.sh', prefix='download-')[1]
with open(script_path, "w") as writer:
writer.write(wget_script_content)
import os, subprocess
os.chmod(script_path, 0o750)
download_dir = os.path.dirname(script_path)
subprocess.check_output("{}".format(script_path), cwd=download_dir)
Explanation: Now download a file using the ESGF wget script extracted from the server:
End of explanation
print(download_dir)
Explanation: … and the files will be downloaded to a temporary directory:
End of explanation
ctx.search(ignore_facet_check=True)
Explanation: If you are doing batch searching and things are running slow, you might be able to achieve a considerable speed up by sending the following argument to the search call:
End of explanation
ctx.search(batch_size=250)
Explanation: This cuts out an extra call that typically takes 2 seconds to return a response. Note that it may mean some of the functionality is affected (such as being able to view the available facets and access the hit count) so use this feature with care.
You can also dictate how the search batches up its requests with:
End of explanation |
6,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python Workshop Part 1
Welcome again!
We want to thank the many people that have made this workshop possible.
First, the generosity of our sponsors has provided facilities for the workshop, food and refreshments, and travel assistance for our guest speakers. Please give a hand to our fabulous sponsors
Step1: Special type of division (floor)
Step2: Horizontal line spacing does not matter in Python in an individual statement. In a multiline program it does, you typically indent 4 spaces.
Step3: Parens and order of operations follow typical mathematics conventions in Python. _ in IPython signifies the previous result.
Step4: A trip to PyCon
Step5: Using type() to find the datatype
Let's use a function. You'll be hearing more about functions later today. For now let's say that a function is like a Personal Assistant. You ask for a job to be done, and if you give the assistant the correct instructions, he will do the tasks asked.
One handy thing that our Python Personal Assistant, aka Funky Function, can do is tell us a variable's current data type.
Step6: Questions?
String data type
Step7: Concatenating strings
Step8: Tip - arrow up to save time typing
Step9: Quotes (single, double, triple)
Step10: Displaying versus printing in IPython
Step11: Questions? Quick review
Step12: Make choices
Step13: One of two actions
Step14: One of three things
Step15: Exiting the interpreter is a good thing to know.
To exit, type exit() or press CNTL-D.
Practice problems - Codeacademy practice
Strings and choices
http | Python Code:
2 + 2
1.4 + 2.25
4 - 2
2 * 3
4 / 2
0.5/2
Explanation: Introduction to Python Workshop Part 1
Welcome again!
We want to thank the many people that have made this workshop possible.
First, the generosity of our sponsors has provided facilities for the workshop, food and refreshments, and travel assistance for our guest speakers. Please give a hand to our fabulous sponsors:
- Ansir Innovation Center Twitter: @AnsirSD
- Python Software Foundation Twitter: @ThePSF
We are committed to offering a positive and productive workshop for you. We are proud to be an OpenHatch (@openhatch) affiliated event. OpenHatch is a non-profit that helps people become contributors to free and open source software. OpenHatch is a friendly community and can help you find a suitable project if you are interested in contributing.
While we are thanking the PSF and OpenHatch, I would like to thank Jessica McKellar, PSF Director and OpenHatch board member (@jessicamckellar), for sharing her materials from her Intro to Python workshop as well as providing encouragement and support to us.
If anyone wishes to tweet their appreciation, please do so.
A programming community outreach workshop, brought to you by the generous volunteers and leadership from:
PyLadies San Diego
San Diego Python User Group
Inland Empire Pyladies
Inland Empire Python
Thanks to David and Kendall of SDPUG, Juliet of PyLadies SD, and John of Inland Empire Python for their support.
Introduction of who is here to teach and help you today.
- Audrey
- Danny
- Rise
- Trey
- Alain
- Micah
- Jim
- Others that are helping on day of event
- Carol
Please take a moment to share 2-4 sentences about yourself.
We are all volunteers. If you enjoy this workshop and decide to continue with Python programming, we encourage you to volunteer at the next Intro to Python Workshop.
We are also very thankful that you have chosen to spend the better part of your Saturday sharing Python with us. It's a language that we have fun using to make all sorts of projects and a thoughtful, accepting, considerate, and fun community that we are glad to take part in.
And now...let's get going with some Python development!
The Game Plan
Did you do your Python setup homework?
Recap: Setting Up Python
You should have completed the Python 3.4 setup instructions on your own:
http://nbviewer.ipython.org/github/pythonsd/intro-to-python/blob/master/part-0.ipynb
If not, please do it now.
Stuck? Raise Your Hand / Place Yellow Sticky Note
Tried but ran into errors? Raise your hand or place a yellow sticky note on your screen to show that you need help. A volunteer will come over to help you.
Don't be shy. Setup problems are common. It's particularly important for everyone who's stuck to get help right now before we move on, to ensure that you get the most out of this workshop.
Done With Setup?
Volunteers will be walking around to check everyone's setup. Have your setup checked.
When a volunteer comes over, open up a command prompt and type this, to show that you are ready:
$ python3 --version
Python 3.4.1
Once you've had your setup checked:
* While waiting, introduce yourself to at least 3 of the people sitting near you. Tell them your name, something about yourself, and why you decided to come to today's workshop.
Setup Complete!
After completing these steps, you should:
* Have Python installed
* Be able to enter and exit Python
Let's get started with the interactive lecture
Python as a Calculator
From your command prompt, type python3 to enter IDLE.
Some regular math operations.
End of explanation
3 // 2
15.5 // 2
Explanation: Special type of division (floor)
End of explanation
2 + 2
2+2
Explanation: Horizontal line spacing does not matter in Python in an individual statement. In a multiline program it does, you typically indent 4 spaces.
End of explanation
(1 + 3) * 4
x = 4
x * 3
_ + 8
Explanation: Parens and order of operations follow typical mathematics conventions in Python. _ in IPython signifies the previous result.
End of explanation
jeans = 5
shoes = 2
socks = 12
shirts = 1
items_packed = jeans + shoes + socks + shirts
items_packed
print(items_packed)
Explanation: A trip to PyCon
End of explanation
type(shirts)
type(0.99)
Explanation: Using type() to find the datatype
Let's use a function. You'll be hearing more about functions later today. For now let's say that a function is like a Personal Assistant. You ask for a job to be done, and if you give the assistant the correct instructions, he will do the tasks asked.
One handy thing that our Python Personal Assistant, aka Funky Function, can do is tell us a variable's current data type.
End of explanation
"Hello"
"Python, I'm your #1 fan!"
type("Hello")
Explanation: Questions?
String data type
End of explanation
name = "Carol"
2 + 2
"Carol" + "Willing"
"Carol " + "Willing"
"Carol" + " " + "Willing"
name = "Carol"
"My name is " + name
Explanation: Concatenating strings
End of explanation
"Hello" + 1
"Hello" + "1"
type(1)
type("1")
"Hello" + str(1)
len("Hello")
len(name)
"The length of my name is " + str(len(name))
"Hello"
'Hello'
Explanation: Tip - arrow up to save time typing
End of explanation
"Python, I'm your #1 fan!"
"A" * 40
h = "Happy"
b = "Birthday"
(h + b) * 10
Explanation: Quotes (single, double, triple)
End of explanation
"Hello"
print("Hello")
Explanation: Displaying versus printing in IPython
End of explanation
3 ** 3
type(1)
type(1.0)
type("1")
Explanation: Questions? Quick review
End of explanation
True
False
type(True)
type(False)
0 == 0
0 == 1
0 != 1
"a" == "A"
1 > 0
2 >= 3
-1<0
.5 <= 1
"H" in "Hello"
"x" in "Hello"
"a" not in "abcde"
type(True)
type("True")
type(true)
x = 4
x == 4
if 6 > 5:
print("Six is greater than 5")
if 0 > 2:
print("Zero is greater")
if "banana" in "bananarama":
print("I miss the 80s")
Explanation: Make choices
End of explanation
sister = 15
brother = 12
if sister > brother:
print("Sister is older")
else:
print("Brother is older")
1 > 0 and 1 < 2
1 < 2 and "x" in "abc"
"a" in "abc" and "b" in "abc" and "c" in "abc"
"a" in "hello" or "e" in "hello"
temp = 32
if temp > 60 and temp < 75:
print("Nice and cozy")
else:
print("Too extreme for me")
Explanation: One of two actions
End of explanation
sister = 15
brother = 15
if sister > brother:
print("Sister is older")
elif sister == brother:
print("Twinsies!!")
else:
print("Brother is older")
Explanation: One of three things
End of explanation
from IPython.display import YouTubeVideo
# a tutorial about Python at PyCon 2014 in Montreal, Canada by Jessica McKellar
# Credit: William Stein.
YouTubeVideo('MirG-vJOg04')
from IPython.display import IFrame
# Pull in the tutorial prep information from OpenHatch wiki
IFrame('http://bit.ly/intro-setup', width='100%', height=350)
Explanation: Exiting the interpreter is a good thing to know.
To exit, type exit() or press CNTL-D.
Practice problems - Codeacademy practice
Strings and choices
http://bit.ly/py-practice
End of explanation |
6,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Networks for Classification
In this project, you'll be working with one of the most well-known machine learning datasets - the Iris Data Set hosted at the UCI Machine Learning Repository. Our goal is to train a network to identify a species of iris based on the flower's sepal length, sepal width, petal length, and petal width.
The dataset contains 50 data points for each of the three species, Iris setosa, Iris versicolour, and Iris Virginica for a total of 150 data points.
Data Description
We can load the data into a pandas dataframe as follows
Step1: We can see the number of records in each column to ensure all of our datapoints are complete
Step2: And we can see the data type for each column like so
Step3: Visualization
In machine learning problems, it can be helpful to try and visualize the data where possible in order to get a feel for the problem. The seaborn library has some great tools for this.
Caution
Step4: Or we can use pairplot to do this for all combinations of features!
Step5: From these plots we can see that Iris setosa is linearly separable from the others in all feature pairs. This could prove useful for the design of our network classifier.
Now that we've loaded our data and we know how it's structured, it's up to you to create a neural network classifier! I've given you some code to branch off of below. Good luck! | Python Code:
import pandas as pd
iris = pd.read_csv('data/iris.csv')
# Display the first few rows of the dataframe
iris.head()
Explanation: Neural Networks for Classification
In this project, you'll be working with one of the most well-known machine learning datasets - the Iris Data Set hosted at the UCI Machine Learning Repository. Our goal is to train a network to identify a species of iris based on the flower's sepal length, sepal width, petal length, and petal width.
The dataset contains 50 data points for each of the three species, Iris setosa, Iris versicolour, and Iris Virginica for a total of 150 data points.
Data Description
We can load the data into a pandas dataframe as follows:
End of explanation
iris.count()
Explanation: We can see the number of records in each column to ensure all of our datapoints are complete:
End of explanation
iris.dtypes
Explanation: And we can see the data type for each column like so:
End of explanation
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
sns.FacetGrid(iris, hue="Species", size=6) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
.add_legend()
Explanation: Visualization
In machine learning problems, it can be helpful to try and visualize the data where possible in order to get a feel for the problem. The seaborn library has some great tools for this.
Caution: You may not have seaborn installed on your machine. If this is the case, use the pip installer from your shell (Mac OSX/Linux): pip install seaborn. If you're on Windows, you won't be able to install scipy using pip. You'll have to use conda to install the package or manually download and install a wheel yourself.
We can visualize the relationship between two features and the target classes using seaborn's FacetGrid:
End of explanation
sns.pairplot(iris.drop("Id", axis=1), hue="Species", size=3)
Explanation: Or we can use pairplot to do this for all combinations of features!
End of explanation
%matplotlib inline
# This cell can be run independently of the ones above it.
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
# Path for saving model data
model_path = 'tmp/model.ckpt'
# Hyperparameters
learn_rate = .5
batch_size = 10
epochs = 50
# Load the data into dataframes
# There is NO OVERLAP between the training and testing data
# Take a minute to remember why this should be the case!
iris_train = pd.read_csv('data/iris_train.csv', dtype={'Species': 'category'})
iris_test = pd.read_csv('data/iris_test.csv', dtype={'Species': 'category'})
test_features = iris_test.as_matrix()[:,:4]
test_targets = pd.get_dummies(iris_test.Species).as_matrix()
# Create placeholder for the input tensor (input layer):
# Our input has four features so our shape will be (none, 4)
# A variable number of rows and four feature columns.
x = tf.placeholder(tf.float32, [None, 4])
# Outputs will have 3 columns since there are three categories
# This placeholder is for our targets (correct categories)
# It will be fed with one-hot vectors from the data
y_ = tf.placeholder(tf.float32, [None, 3])
# The baseline model will consist of a single softmax layer with
# weights W and bias b
# Because these values will be calculated and recalculated
# on the fly, we'll declare variables for them.
# We use a normal distribution to initialize our matrix with small random values
W = tf.Variable(tf.truncated_normal([4, 3], stddev=0.1))
# And an initial value of zero for the bias.
b = tf.Variable(tf.zeros([3]))
# We define our simple model here
y = tf.nn.softmax(tf.matmul(x, W) + b)
#=================================================================
# And our cost function here (make sure only one is uncommented!)|
#=================================================================
# Mean Squared Error
cost = tf.reduce_mean(tf.squared_difference(y_, y))
# Cross-Entropy
#cost = tf.reduce_mean(
# tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
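# Added remark: tf.nn.softmax_cross_entropy_with_logits expects *unscaled* logits.
# If you switch to this cost, define y as tf.matmul(x, W) + b (drop the extra
# tf.nn.softmax above); otherwise softmax ends up being applied twice.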
#
#=================================================================
# Gradient descent step
train_step = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)
# Start a TensorFlow session
with tf.Session() as sess:
# Initialize all of the Variables
sess.run(tf.global_variables_initializer())
# Operation for saving all variables
saver = tf.train.Saver()
# Training loop
for epoch in range(epochs):
avg_cost = 0.
num_batches = int(iris_train.shape[0]/batch_size)
for _ in range(num_batches):
# Randomly select <batch_size> samples from the set (with replacement)
batch = iris_train.sample(n=batch_size)
# Capture the x and y_ data
batch_features = batch.as_matrix()[:,:4]
# get_dummies turns our categorical data into one-hot vectors
batch_targets = pd.get_dummies(batch.Species).as_matrix()
# Run the training step using batch_features and batch_targets
# as x and y_, respectively and capture the cost at each step
_, c = sess.run([train_step, cost], feed_dict={x:batch_features, y_:batch_targets})
# Calculate the average cost for the epoch
avg_cost += c/num_batches
# Print epoch results
print("Epoch %04d cost: %s" % (epoch + 1, "{:.4f}".format(avg_cost)))
# If our model's most likely classification is equal to the one-hot index
# add True to our correct_prediction tensor
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
# Cast the boolean variables as floats and take the mean.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Calculate the percentage of correct answers using the test data
score = sess.run(accuracy, feed_dict={x: test_features, y_: test_targets}) * 100
print("\nThe model correctly identified %s of the test data." % "{:.2f}%".format(score))
# Save the model data
save_path = saver.save(sess, model_path)
print("\nModel data saved to %s" % model_path)
Explanation: From these plots we can see that Iris setosa is linearly separable from the others in all feature pairs. This could prove useful for the design of our network classifier.
Now that we've loaded our data and we know how it's structured, it's up to you to create a neural network classifier! I've given you some code to branch off of below. Good luck!
End of explanation |
6,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We can find the number of decision nodes in the dBG by counting unique hashes...
Step1: We'll make a new column for total degree, for convenience.
Step2: Let's start with the overal degree distribution during the entire construction process.
Step3: So most decision nodes in this dataset have degree 3. Note that a few have degree 2; these are forks without handles.
k27_df.hash.nunique(), k35_df.hash.nunique()
Explanation: We can find the number of decision nodes in the dBG by counting unique hashes...
End of explanation
k35_df['degree'] = k35_df['l_degree'] + k35_df['r_degree']
k27_df['degree'] = k27_df['l_degree'] + k27_df['r_degree']
Explanation: We'll make a new column for total degree, for convenience.
End of explanation
figsize(18,10)
fig, ax_mat = subplots(ncols=3, nrows=2)
top = ax_mat[0]
sns.distplot(k35_df.degree, kde=False, ax=top[0], bins=8)
sns.distplot(k35_df.l_degree, kde=False, ax=top[1], bins=5)
sns.distplot(k35_df.r_degree, kde=False, ax=top[2], bins=5)
bottom = ax_mat[1]
sns.distplot(k27_df.degree, kde=False, ax=bottom[0], bins=8)
sns.distplot(k27_df.l_degree, kde=False, ax=bottom[1], bins=5)
sns.distplot(k27_df.r_degree, kde=False, ax=bottom[2], bins=5)
Explanation: Let's start with the overal degree distribution during the entire construction process.
End of explanation
figsize(12,8)
sns.distplot(k35_df.position, kde=False, label='K=35', bins=15)
sns.distplot(k27_df.position, kde=False, label='K=27', bins=15)
legend()
melted_df = k35_df.melt(id_vars=['hash', 'position'], value_vars=['l_degree', 'r_degree'], )
melted_df.head()
figsize(18,8)
sns.violinplot('position', 'value', 'variable', melted_df)
k35_dnodes_per_read = k35_df.groupby('read_n').count().\
reset_index()[['read_n', 'hash']].rename({'hash': 'n_dnodes'}, axis='columns')
k27_dnodes_per_read = k27_df.groupby('read_n').count().\
reset_index()[['read_n', 'hash']].rename({'hash': 'n_dnodes'}, axis='columns')
ax = k35_dnodes_per_read.rolling(1000, min_periods=10, on='read_n').mean().plot(x='read_n',
y='n_dnodes',
label='k = 35')
ax = k27_dnodes_per_read.rolling(1000, min_periods=10, on='read_n').mean().plot(x='read_n',
y='n_dnodes',
label='k = 27',
ax=ax)
ax.xaxis.set_major_formatter(mpl.ticker.StrMethodFormatter("{x:,}"))
Explanation: So most decision nodes in this dataset have degree 3. Note that a few have degree 2; these are forks without handles.
End of explanation |
6,560 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a csv file without headers which I'm importing into python using pandas. The last column is the target class, while the rest of the columns are pixel values for images. How can I go ahead and split this dataset into a training set and a testing set (80/20)? | Problem:
import numpy as np
import pandas as pd
dataset = load_data()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.2,
random_state=42) |
6,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Coding With SHARPpy
Written by
Step1: All of the SHARPpy routines (parcel lifting, composite indices, etc.) reside within the SHARPTAB module.
SHARPTAB contains 6 modules
Step2: Step 3
Step3: In SHARPpy, Profile objects have quality control checks built into them to alert the user to bad data and in order to prevent the program from crashing on computational routines. For example, upon construction of the Profile object, the SHARPpy will check for unrealistic values (i.e. dewpoint or temperature below absolute zero, negative wind speeds) and incorrect ordering of the height and pressure arrays. Height arrays must be increasing with array index, and pressure arrays must be decreasing with array index. Repeat values are not allowed.
If the user wishes to avoid these checks, set the "strictQC" flag to False when constructing an object.
Because Python is an interpreted language, it can be quite slow for certain processes. When working with soundings in SHARPpy, we recommend the profiles contain a maximum of 200-500 points. High resolution radiosonde profiles (i.e. 1 second profiles) contain thousands of points and some of the SHARPpy functions that involve lifting parcels (i.e. parcelx) may take a long time to run. To filter your data to make it easier for SHARPpy to work with, you can use a sounding filter such as the one found here
Step4: SHARPpy Profile objects keep track of the height grid the profile lies on. Within the profile object, the height grid is assumed to be in meters above mean sea level.
In the example data provided, the profile can be converted to and from AGL from MSL
Step5: Showing derived profiles
Step6: Lifting Parcels
Step7: Once your parcel attributes are computed by params.parcelx(), you can extract information about the parcel such as CAPE, CIN, LFC height, LCL height, EL height, etc.
Step9: Other Parcel Object Attributes
Step10: Calculating Kinematic Variables
Step11: Calculating variables based off of the effective inflow layer
Step12: Putting it all together into one plot
Step13: List of functions in each module | Python Code:
%matplotlib inline
spc_file = open('14061619.OAX', 'r').read()
Explanation: Basic Coding With SHARPpy
Written by: Greg Blumberg (OU/CIMMS)
This IPython Notebook tutorial is meant to teach the user how to directly interact with the SHARPpy libraries using the Python interpreter. This tutorial will cover reading files into the Profile object, plotting the data using Matplotlib, and computing various indices from the data. It is also a reference to the different functions and variables SHARPpy has available to the user.
In order to work with SHARPpy, you need to perform 3 steps before you can begin running routines such as CAPE/CIN on the data.
Step 1: Read in the data to work with.
1.) The Pilger, NE tornado proximity sounding from 19 UTC within the tutorial/ directory is an example of the SPC sounding file format that can be read in by the GUI. Here we'll read it in manually.
End of explanation
import sharppy.sharptab as tab
Explanation: All of the SHARPpy routines (parcel lifting, composite indices, etc.) reside within the SHARPTAB module.
SHARPTAB contains 6 modules:
params, winds, thermo, utils, interp, fire, constants, watch_type
Each module has different functions:
interp - interpolates different variables (temperature, dewpoint, wind, etc.) to a specified pressure
winds - functions used to compute different wind-related variables (shear, helicity, mean winds, storm relative vectors)
thermo - temperature unit conversions, theta-e, theta, wetbulb, lifting functions
utils - wind speed unit conversions, wind speed and direction to u and v conversions, QC
params - computation of different parameters, indices, etc. from the Profile object
fire - fire weather indices
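For example (a short sketch of my own, only to give a flavour of these modules: it assumes a Profile object named prof has already been built as in Step 3, and that interp exposes temp() and hght() helpers taking a profile and a pressure level):
t500 = tab.interp.temp(prof, 500.) # temperature (C) interpolated to 500 mb
z500 = tab.interp.hght(prof, 500.) # height (m MSL) of the 500 mb level
print "500 mb temperature:", t500, "C at", z500, "m MSL"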
Step 2: Load in the SHARPTAB module.
End of explanation
import numpy as np
from StringIO import StringIO
def parseSPC(spc_file):
## read in the file
data = np.array([l.strip() for l in spc_file.split('\n')])
## necessary index points
title_idx = np.where( data == '%TITLE%')[0][0]
start_idx = np.where( data == '%RAW%' )[0] + 1
finish_idx = np.where( data == '%END%')[0]
## create the plot title
data_header = data[title_idx + 1].split()
location = data_header[0]
time = data_header[1][:11]
## put it all together for StringIO
full_data = '\n'.join(data[start_idx : finish_idx][:])
sound_data = StringIO( full_data )
## read the data into arrays
p, h, T, Td, wdir, wspd = np.genfromtxt( sound_data, delimiter=',', comments="%", unpack=True )
return p, h, T, Td, wdir, wspd
pres, hght, tmpc, dwpc, wdir, wspd = parseSPC(spc_file)
prof = tab.profile.create_profile(profile='default', pres=pres, hght=hght, tmpc=tmpc, \
dwpc=dwpc, wspd=wspd, wdir=wdir, missing=-9999, strictQC=True)
Explanation: Step 3: Making a Profile object.
Before running any analysis routines on the data, we have to create a Profile object first. A Profile object describes the vertical thermodynamic and kinematic profiles and is the key object that all SHARPpy routines need to run. Any data source can be passed into a Profile object (i.e. radiosonde, RASS, satellite sounding retrievals, etc.) as long as it has these profiles:
temperature (C)
dewpoint (C)
height (meters above mean sea level)
pressure (millibars)
wind speed (kts)
wind direction (degrees)
or (optional)
- zonal wind component U (kts)
- meridional wind component V (kts)
For example, after reading in the data in the example above, a Profile object can be created. Since this file uses the value -9999 to indicate missing values, we need to tell SHARPpy to ignore these values in its calculations by including the missing field to be -9999. In addition, we tell SHARPpy we want to create a default BasicProfile object. Telling SHARPpy to create a "convective" profile object will generate a Profile object with all of the indices computed in the SHARPpy GUI. If you are only wanting to compute a few indices, you probably don't want to do that.
End of explanation
import matplotlib.pyplot as plt
plt.plot(prof.tmpc, prof.hght, 'r-')
plt.plot(prof.dwpc, prof.hght, 'g-')
#plt.barbs(40*np.ones(len(prof.hght)), prof.hght, prof.u, prof.v)
plt.xlabel("Temperature [C]")
plt.ylabel("Height [m above MSL]")
plt.grid()
plt.show()
Explanation: In SHARPpy, Profile objects have quality control checks built into them to alert the user to bad data and in order to prevent the program from crashing on computational routines. For example, upon construction of the Profile object, the SHARPpy will check for unrealistic values (i.e. dewpoint or temperature below absolute zero, negative wind speeds) and incorrect ordering of the height and pressure arrays. Height arrays must be increasing with array index, and pressure arrays must be decreasing with array index. Repeat values are not allowed.
If the user wishes to avoid these checks, set the "strictQC" flag to False when constructing an object.
Because Python is an interpreted language, it can be quite slow for certain processes. When working with soundings in SHARPpy, we recommend the profiles contain a maximum of 200-500 points. High resolution radiosonde profiles (i.e. 1 second profiles) contain thousands of points and some of the SHARPpy functions that involve lifting parcels (i.e. parcelx) may take a long time to run. To filter your data to make it easier for SHARPpy to work with, you can use a sounding filter such as the one found here:
https://github.com/tsupinie/SoundingFilter
Working with the data:
Once you have a Profile object, you can begin running analysis routines and plotting the data. The following sections show different examples of how to do this.
Plotting the data:
End of explanation
msl_hght = prof.hght[prof.sfc] # Grab the surface height value
print "SURFACE HEIGHT (m MSL):",msl_hght
agl_hght = tab.interp.to_agl(prof, msl_hght) # Converts to AGL
print "SURFACE HEIGHT (m AGL):", agl_hght
msl_hght = tab.interp.to_msl(prof, agl_hght) # Converts to MSL
print "SURFACE HEIGHT (m MSL):",msl_hght
Explanation: SHARPpy Profile objects keep track of the height grid the profile lies on. Within the profile object, the height grid is assumed to be in meters above mean sea level.
In the example data provided, the profile can be converted to and from AGL from MSL:
End of explanation
plt.plot(tab.thermo.ktoc(prof.thetae), prof.hght, 'r-', label='Theta-E')
plt.plot(prof.wetbulb, prof.hght, 'c-', label='Wetbulb')
plt.xlabel("Temperature [C]")
plt.ylabel("Height [m above MSL]")
plt.legend()
plt.grid()
plt.show()
Explanation: Showing derived profiles:
By default, Profile objects also create derived profiles such as Theta-E and Wet-Bulb when they are constructed. These profiles are accessible to the user too.
End of explanation
sfcpcl = tab.params.parcelx( prof, flag=1 ) # Surface Parcel
fcstpcl = tab.params.parcelx( prof, flag=2 ) # Forecast Parcel
mupcl = tab.params.parcelx( prof, flag=3 ) # Most-Unstable Parcel
mlpcl = tab.params.parcelx( prof, flag=4 ) # 100 mb Mean Layer Parcel
Explanation: Lifting Parcels:
In SHARPpy, parcels are lifted via the params.parcelx() routine. The parcelx() routine takes in the arguments of a Profile object and a flag to indicate what type of parcel you would like to be lifted. Additional arguments can allow for custom/user defined parcels to be passed to the parcelx() routine, however most users will likely be using only the Most-Unstable, Surface, 100 mb Mean Layer, and Forecast parcels.
The parcelx() routine by default utilizes the virtual temperature correction to compute variables such as CAPE and CIN. If the dewpoint profile contains missing data, parcelx() will disregard using the virtual temperature correction.
End of explanation
print "Most-Unstable CAPE:", mupcl.bplus # J/kg
print "Most-Unstable CIN:", mupcl.bminus # J/kg
print "Most-Unstable LCL:", mupcl.lclhght # meters AGL
print "Most-Unstable LFC:", mupcl.lfchght # meters AGL
print "Most-Unstable EL:", mupcl.elhght # meters AGL
print "Most-Unstable LI:", mupcl.li5 # C
Explanation: Once your parcel attributes are computed by params.parcelx(), you can extract information about the parcel such as CAPE, CIN, LFC height, LCL height, EL height, etc.
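For example (a small sketch that reuses only the bplus attribute already shown above), the four parcels can be compared side by side in a loop:
for name, pcl in [("Surface", sfcpcl), ("Forecast", fcstpcl), ("Most-Unstable", mupcl), ("Mean Layer", mlpcl)]:
    print name, "CAPE:", pcl.bplus, "J/kg"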
End of explanation
# This serves as an intensive exercise of matplotlib's transforms
# and custom projection API. This example produces a so-called
# SkewT-logP diagram, which is a common plot in meteorology for
# displaying vertical profiles of temperature. As far as matplotlib is
# concerned, the complexity comes from having X and Y axes that are
# not orthogonal. This is handled by including a skew component to the
# basic Axes transforms. Additional complexity comes in handling the
# fact that the upper and lower X-axes have different data ranges, which
# necessitates a bunch of custom classes for ticks,spines, and the axis
# to handle this.
from matplotlib.axes import Axes
import matplotlib.transforms as transforms
import matplotlib.axis as maxis
import matplotlib.spines as mspines
import matplotlib.path as mpath
from matplotlib.projections import register_projection
# The sole purpose of this class is to look at the upper, lower, or total
# interval as appropriate and see what parts of the tick to draw, if any.
class SkewXTick(maxis.XTick):
def draw(self, renderer):
if not self.get_visible(): return
renderer.open_group(self.__name__)
lower_interval = self.axes.xaxis.lower_interval
upper_interval = self.axes.xaxis.upper_interval
if self.gridOn and transforms.interval_contains(
self.axes.xaxis.get_view_interval(), self.get_loc()):
self.gridline.draw(renderer)
if transforms.interval_contains(lower_interval, self.get_loc()):
if self.tick1On:
self.tick1line.draw(renderer)
if self.label1On:
self.label1.draw(renderer)
if transforms.interval_contains(upper_interval, self.get_loc()):
if self.tick2On:
self.tick2line.draw(renderer)
if self.label2On:
self.label2.draw(renderer)
renderer.close_group(self.__name__)
# This class exists to provide two separate sets of intervals to the tick,
# as well as create instances of the custom tick
class SkewXAxis(maxis.XAxis):
def __init__(self, *args, **kwargs):
maxis.XAxis.__init__(self, *args, **kwargs)
self.upper_interval = 0.0, 1.0
def _get_tick(self, major):
return SkewXTick(self.axes, 0, '', major=major)
@property
def lower_interval(self):
return self.axes.viewLim.intervalx
def get_view_interval(self):
return self.upper_interval[0], self.axes.viewLim.intervalx[1]
# This class exists to calculate the separate data range of the
# upper X-axis and draw the spine there. It also provides this range
# to the X-axis artist for ticking and gridlines
class SkewSpine(mspines.Spine):
def _adjust_location(self):
trans = self.axes.transDataToAxes.inverted()
if self.spine_type == 'top':
yloc = 1.0
else:
yloc = 0.0
left = trans.transform_point((0.0, yloc))[0]
right = trans.transform_point((1.0, yloc))[0]
pts = self._path.vertices
pts[0, 0] = left
pts[1, 0] = right
self.axis.upper_interval = (left, right)
# This class handles registration of the skew-xaxes as a projection as well
# as setting up the appropriate transformations. It also overrides standard
# spines and axes instances as appropriate.
class SkewXAxes(Axes):
# The projection must specify a name. This will be used by the
# user to select the projection, i.e. ``subplot(111,
# projection='skewx')``.
name = 'skewx'
def _init_axis(self):
#Taken from Axes and modified to use our modified X-axis
self.xaxis = SkewXAxis(self)
self.spines['top'].register_axis(self.xaxis)
self.spines['bottom'].register_axis(self.xaxis)
self.yaxis = maxis.YAxis(self)
self.spines['left'].register_axis(self.yaxis)
self.spines['right'].register_axis(self.yaxis)
def _gen_axes_spines(self):
spines = {'top':SkewSpine.linear_spine(self, 'top'),
'bottom':mspines.Spine.linear_spine(self, 'bottom'),
'left':mspines.Spine.linear_spine(self, 'left'),
'right':mspines.Spine.linear_spine(self, 'right')}
return spines
def _set_lim_and_transforms(self):
"""This is called once when the plot is created to set up all the
transforms for the data, text and grids."""
rot = 30
#Get the standard transform setup from the Axes base class
Axes._set_lim_and_transforms(self)
# Need to put the skew in the middle, after the scale and limits,
# but before the transAxes. This way, the skew is done in Axes
# coordinates thus performing the transform around the proper origin
# We keep the pre-transAxes transform around for other users, like the
# spines for finding bounds
self.transDataToAxes = self.transScale + (self.transLimits +
transforms.Affine2D().skew_deg(rot, 0))
# Create the full transform from Data to Pixels
self.transData = self.transDataToAxes + self.transAxes
# Blended transforms like this need to have the skewing applied using
# both axes, in axes coords like before.
self._xaxis_transform = (transforms.blended_transform_factory(
self.transScale + self.transLimits,
transforms.IdentityTransform()) +
transforms.Affine2D().skew_deg(rot, 0)) + self.transAxes
# Now register the projection with matplotlib so the user can select
# it.
register_projection(SkewXAxes)
pcl = mupcl
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(6.5875, 6.2125))
ax = fig.add_subplot(111, projection='skewx')
ax.grid(True)
pmax = 1000
pmin = 10
dp = -10
presvals = np.arange(int(pmax), int(pmin)+dp, dp)
# plot the moist-adiabats
for t in np.arange(-10,45,5):
tw = []
for p in presvals:
tw.append(tab.thermo.wetlift(1000., t, p))
ax.semilogy(tw, presvals, 'k-', alpha=.2)
def thetas(theta, presvals):
return ((theta + tab.thermo.ZEROCNK) / (np.power((1000. / presvals),tab.thermo.ROCP))) - tab.thermo.ZEROCNK
# plot the dry adiabats
for t in np.arange(-50,110,10):
ax.semilogy(thetas(t, presvals), presvals, 'r-', alpha=.2)
plt.title(' OAX 140616/1900 (Observed)', fontsize=14, loc='left')
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
ax.semilogy(prof.tmpc, prof.pres, 'r', lw=2)
ax.semilogy(prof.dwpc, prof.pres, 'g', lw=2)
ax.semilogy(pcl.ttrace, pcl.ptrace, 'k-.', lw=2)
# An example of a slanted line at constant X
l = ax.axvline(0, color='b', linestyle='--')
l = ax.axvline(-20, color='b', linestyle='--')
# Disables the log-formatting that comes with semilogy
ax.yaxis.set_major_formatter(plt.ScalarFormatter())
ax.set_yticks(np.linspace(100,1000,10))
ax.set_ylim(1050,100)
ax.xaxis.set_major_locator(plt.MultipleLocator(10))
ax.set_xlim(-50,50)
plt.show()
Explanation: Other Parcel Object Attributes:
Here is a list of the attributes and their units contained in each parcel object (pcl):
pcl.pres - Parcel beginning pressure (mb)
pcl.tmpc - Parcel beginning temperature (C)
pcl.dwpc - Parcel beginning dewpoint (C)
pcl.ptrace - Parcel trace pressure (mb)
pcl.ttrace - Parcel trace temperature (C)
pcl.blayer - Pressure of the bottom of the layer the parcel is lifted (mb)
pcl.tlayer - Pressure of the top of the layer the parcel is lifted (mb)
pcl.lclpres - Parcel LCL (lifted condensation level) pressure (mb)
pcl.lclhght - Parcel LCL height (m AGL)
pcl.lfcpres - Parcel LFC (level of free convection) pressure (mb)
pcl.lfchght - Parcel LFC height (m AGL)
pcl.elpres - Parcel EL (equilibrium level) pressure (mb)
pcl.elhght - Parcel EL height (m AGL)
pcl.mplpres - Maximum Parcel Level (mb)
pcl.mplhght - Maximum Parcel Level (m AGL)
pcl.bplus - Parcel CAPE (J/kg)
pcl.bminus - Parcel CIN (J/kg)
pcl.bfzl - Parcel CAPE up to freezing level (J/kg)
pcl.b3km - Parcel CAPE up to 3 km (J/kg)
pcl.b6km - Parcel CAPE up to 6 km (J/kg)
pcl.p0c - Pressure value at 0 C (mb)
pcl.pm10c - Pressure value at -10 C (mb)
pcl.pm20c - Pressure value at -20 C (mb)
pcl.pm30c - Pressure value at -30 C (mb)
pcl.hght0c - Height value at 0 C (m AGL)
pcl.hghtm10c - Height value at -10 C (m AGL)
pcl.hghtm20c - Height value at -20 C (m AGL)
pcl.hghtm30c - Height value at -30 C (m AGL)
pcl.wm10c - Wet bulb velocity at -10 C
pcl.wm20c - Wet bulb velocity at -20 C
pcl.wm30c - Wet bulb velocity at -30 C
pcl.li5 - Lifted Index at 500 mb (C)
pcl.li3 - Lifted Index at 300 mb (C)
pcl.brnshear - Bulk Richardson Number Shear
pcl.brnu - Bulk Richardson Number U (kts)
pcl.brnv - Bulk Richardson Number V (kts)
pcl.brn - Bulk Richardson Number (unitless)
pcl.limax - Maximum Lifted Index (C)
pcl.limaxpres - Pressure at Maximum Lifted Index (mb)
pcl.cap - Cap Strength (C)
pcl.cappres - Cap strength pressure (mb)
pcl.bmin - Buoyancy minimum in profile (C)
pcl.bminpres - Buoyancy minimum pressure (mb)
Adding a Parcel Trace and plotting Moist and Dry Adiabats:
End of explanation
sfc = prof.pres[prof.sfc]
p3km = tab.interp.pres(prof, tab.interp.to_msl(prof, 3000.))
p6km = tab.interp.pres(prof, tab.interp.to_msl(prof, 6000.))
p1km = tab.interp.pres(prof, tab.interp.to_msl(prof, 1000.))
mean_3km = tab.winds.mean_wind(prof, pbot=sfc, ptop=p3km)
sfc_6km_shear = tab.winds.wind_shear(prof, pbot=sfc, ptop=p6km)
sfc_3km_shear = tab.winds.wind_shear(prof, pbot=sfc, ptop=p3km)
sfc_1km_shear = tab.winds.wind_shear(prof, pbot=sfc, ptop=p1km)
print("0-3 km Pressure-Weighted Mean Wind (kt):", tab.utils.comp2vec(mean_3km[0], mean_3km[1])[1])
print("0-6 km Shear (kt):", tab.utils.comp2vec(sfc_6km_shear[0], sfc_6km_shear[1])[1])
srwind = tab.params.bunkers_storm_motion(prof)
print("Bunker's Storm Motion (right-mover) [deg,kts]:", tab.utils.comp2vec(srwind[0], srwind[1]))
print("Bunker's Storm Motion (left-mover) [deg,kts]:", tab.utils.comp2vec(srwind[2], srwind[3]))
srh3km = tab.winds.helicity(prof, 0, 3000., stu = srwind[0], stv = srwind[1])
srh1km = tab.winds.helicity(prof, 0, 1000., stu = srwind[0], stv = srwind[1])
print("0-3 km Storm Relative Helicity [m2/s2]:", srh3km[0])
Explanation: Calculating Kinematic Variables:
SHARPpy also allows the user to compute kinematic variables such as shear, mean-winds, and storm relative helicity. SHARPpy will also compute storm motion vectors based off of the work by Stephen Corfidi and Matthew Bunkers. Below is some example code to compute the following:
1.) 0-3 km Pressure-Weighted Mean Wind
2.) 0-6 km Shear (kts)
3.) Bunker's Storm Motion (right-mover) (Bunkers et al. 2014 version)
4.) Bunker's Storm Motion (left-mover) (Bunkers et al. 2014 version)
5.) 0-3 km Storm Relative Helicity
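Two related quantities are computed in the code above but never printed; as a small addition they can be displayed with the same helpers:
print("0-1 km Shear (kt):", tab.utils.comp2vec(sfc_1km_shear[0], sfc_1km_shear[1])[1])
print("0-1 km Storm Relative Helicity [m2/s2]:", srh1km[0])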
End of explanation
stp_fixed = tab.params.stp_fixed(sfcpcl.bplus, sfcpcl.lclhght, srh1km[0], tab.utils.comp2vec(sfc_6km_shear[0], sfc_6km_shear[1])[1])
ship = tab.params.ship(prof)
eff_inflow = tab.params.effective_inflow_layer(prof)
ebot_hght = tab.interp.to_agl(prof, tab.interp.hght(prof, eff_inflow[0]))
etop_hght = tab.interp.to_agl(prof, tab.interp.hght(prof, eff_inflow[1]))
print("Effective Inflow Layer Bottom Height (m AGL):", ebot_hght)
print("Effective Inflow Layer Top Height (m AGL):", etop_hght)
effective_srh = tab.winds.helicity(prof, ebot_hght, etop_hght, stu = srwind[0], stv = srwind[1])
print("Effective Inflow Layer SRH (m2/s2):", effective_srh[0])
ebwd = tab.winds.wind_shear(prof, pbot=eff_inflow[0], ptop=eff_inflow[1])
ebwspd = tab.utils.mag( ebwd[0], ebwd[1] )
print("Effective Bulk Wind Difference:", ebwspd)
scp = tab.params.scp(mupcl.bplus, effective_srh[0], ebwspd)
stp_cin = tab.params.stp_cin(mlpcl.bplus, effective_srh[0], ebwspd, mlpcl.lclhght, mlpcl.bminus)
print("Supercell Composite Parameter:", scp)
print("Significant Tornado Parameter (w/CIN):", stp_cin)
print("Significant Tornado Parameter (fixed):", stp_fixed)
Explanation: Calculating variables based off of the effective inflow layer:
The effective inflow layer concept is used to obtain the layer of buoyant parcels that feed a storm's inflow. Here are a few examples of how to compute variables that require the effective inflow layer in order to calculate them:
End of explanation
indices = {'SBCAPE': [int(sfcpcl.bplus), 'J/kg'],\
'SBCIN': [int(sfcpcl.bminus), 'J/kg'],\
'SBLCL': [int(sfcpcl.lclhght), 'm AGL'],\
'SBLFC': [int(sfcpcl.lfchght), 'm AGL'],\
'SBEL': [int(sfcpcl.elhght), 'm AGL'],\
'SBLI': [int(sfcpcl.li5), 'C'],\
'MLCAPE': [int(mlpcl.bplus), 'J/kg'],\
'MLCIN': [int(mlpcl.bminus), 'J/kg'],\
'MLLCL': [int(mlpcl.lclhght), 'm AGL'],\
'MLLFC': [int(mlpcl.lfchght), 'm AGL'],\
'MLEL': [int(mlpcl.elhght), 'm AGL'],\
'MLLI': [int(mlpcl.li5), 'C'],\
'MUCAPE': [int(mupcl.bplus), 'J/kg'],\
'MUCIN': [int(mupcl.bminus), 'J/kg'],\
'MULCL': [int(mupcl.lclhght), 'm AGL'],\
'MULFC': [int(mupcl.lfchght), 'm AGL'],\
'MUEL': [int(mupcl.elhght), 'm AGL'],\
'MULI': [int(mupcl.li5), 'C'],\
'0-1 km SRH': [int(srh1km[0]), 'm2/s2'],\
'0-1 km Shear': [int(tab.utils.comp2vec(sfc_1km_shear[0], sfc_1km_shear[1])[1]), 'kts'],\
'0-3 km SRH': [int(srh3km[0]), 'm2/s2'],\
'Eff. SRH': [int(effective_srh[0]), 'm2/s2'],\
'EBWD': [int(ebwspd), 'kts'],\
'PWV': [round(tab.params.precip_water(prof), 2), 'inch'],\
'K-index': [int(tab.params.k_index(prof)), ''],\
'STP(fix)': [round(stp_fixed, 1), ''],\
'SHIP': [round(ship, 1), ''],\
'SCP': [round(scp, 1), ''],\
'STP(cin)': [round(stp_cin, 1), '']}
# Set the parcel trace to be plotted as the Most-Unstable parcel.
pcl = mupcl
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(6.5875, 6.2125))
ax = fig.add_subplot(111, projection='skewx')
ax.grid(True)
pmax = 1000
pmin = 10
dp = -10
presvals = np.arange(int(pmax), int(pmin)+dp, dp)
# plot the moist-adiabats
for t in np.arange(-10,45,5):
tw = []
for p in presvals:
tw.append(tab.thermo.wetlift(1000., t, p))
ax.semilogy(tw, presvals, 'k-', alpha=.2)
def thetas(theta, presvals):
return ((theta + tab.thermo.ZEROCNK) / (np.power((1000. / presvals),tab.thermo.ROCP))) - tab.thermo.ZEROCNK
# plot the dry adiabats
for t in np.arange(-50,110,10):
ax.semilogy(thetas(t, presvals), presvals, 'r-', alpha=.2)
plt.title(' OAX 140616/1900 (Observed)', fontsize=12, loc='left')
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
ax.semilogy(prof.tmpc, prof.pres, 'r', lw=2) # Plot the temperature profile
ax.semilogy(prof.wetbulb, prof.pres, 'c-') # Plot the wetbulb profile
ax.semilogy(prof.dwpc, prof.pres, 'g', lw=2) # plot the dewpoint profile
ax.semilogy(pcl.ttrace, pcl.ptrace, 'k-.', lw=2) # plot the parcel trace
# An example of a slanted line at constant X
l = ax.axvline(0, color='b', linestyle='--')
l = ax.axvline(-20, color='b', linestyle='--')
# Plot the effective inflow layer using blue horizontal lines
ax.axhline(eff_inflow[0], color='b')
ax.axhline(eff_inflow[1], color='b')
#plt.barbs(10*np.ones(len(prof.pres)), prof.pres, prof.u, prof.v)
# Disables the log-formatting that comes with semilogy
ax.yaxis.set_major_formatter(plt.ScalarFormatter())
ax.set_yticks(np.linspace(100,1000,10))
ax.set_ylim(1050,100)
ax.xaxis.set_major_locator(plt.MultipleLocator(10))
ax.set_xlim(-50,50)
# List the indices within the indices dictionary on the side of the plot.
string = ''
for key in sorted(indices.keys()):
string = string + key + ': ' + str(indices[key][0]) + ' ' + indices[key][1] + '\n'
plt.text(1.02, 1, string, verticalalignment='top', transform=plt.gca().transAxes)
# Draw the hodograph on the Skew-T.
# TAS 2015-4-16: hodograph doesn't plot for some reason ...
ax2 = plt.axes([.625,.625,.25,.25])
below_12km = np.where(tab.interp.to_agl(prof, prof.hght) < 12000)[0]
u_prof = prof.u[below_12km]
v_prof = prof.v[below_12km]
ax2.plot(u_prof[~u_prof.mask], v_prof[~u_prof.mask], 'k-', lw=2)
ax2.get_xaxis().set_visible(False)
ax2.get_yaxis().set_visible(False)
for i in range(10,90,10):
# Draw the range rings around the hodograph.
circle = plt.Circle((0,0),i,color='k',alpha=.3, fill=False)
ax2.add_artist(circle)
ax2.plot(srwind[0], srwind[1], 'ro') # Plot Bunker's Storm motion right mover as a red dot
ax2.plot(srwind[2], srwind[3], 'bo') # Plot Bunker's Storm motion left mover as a blue dot
ax2.set_xlim(-60,60)
ax2.set_ylim(-60,60)
ax2.axhline(y=0, color='k')
ax2.axvline(x=0, color='k')
plt.show()
Explanation: Putting it all together into one plot:
End of explanation
print("Functions within params.py:")
for key in tab.params.__all__:
    print("\ttab.params." + key + "()")
print("\nFunctions within winds.py:")
for key in tab.winds.__all__:
    print("\ttab.winds." + key + "()")
print("\nFunctions within thermo.py:")
for key in tab.thermo.__all__:
    print("\ttab.thermo." + key + "()")
print("\nFunctions within interp.py:")
for key in tab.interp.__all__:
    print("\ttab.interp." + key + "()")
print("\nFunctions within utils.py:")
for key in tab.utils.__all__:
    print("\ttab.utils." + key + "()")
Explanation: List of functions in each module:
This tutorial cannot cover all of the functions in SHARPpy. Below is a list of all of the functions accessible through SHARPTAB. In order to learn more about the function in this IPython Notebook, open up a new "In[]:" field and type in the path to the function (for example):
tab.params.dcape()
Documentation should appear below the cursor describing the function itself, the function's arguments, its output values, and any references to meteorological literature the function was based on.
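Alternatively (a plain-Python suggestion, not specific to SHARPpy), the built-in help() function prints the same docstring in any interpreter:
help(tab.params.k_index)  # k_index is used earlier in this notebook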
End of explanation |
6,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lists
<img src="img/aboutDataStructures.png">
Step1: Unpacking
Step2: List Comprehensions
Step3: A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses. | Python Code:
list1 = ['apple', 'banana', 'orange']
list1
list2 = [7, 11, 13, 17, 19]
list2
list3 = ['text', 23, 66, -1, [0, 1]]
list3
empty = []
empty
list1[0]
list1[-1]
list1[-2]
'orange' in list1
'pineapple' in list1
0 in list3
0 in list3[-1]
None in empty
66 in list3
len(list2)
len(list3)
del list2[2]
list2
new_list = list1 + list2
new_list
new_list * 2
[new_list] * 3
list2
list2.append(23)
list2
#list.insert(index, obj)
list2.insert(0, 5)
list2
list2.insert(6, 29)
list2
list2.insert(len(list2), 101)
list2
list2.insert(-1, 999) #it doesn't insert in the last position! :/
list2
list2.remove(999) #remove the item 999
list2
list2.remove(list2[-1]) #here removed the last one!
list2
l1 = [1, 2]
l2 = [3, 4]
l1.extend(l2)
l1
l2
list_unsorted = [10, 3, 7, 12, 1, 20]
list_unsorted.sort()
list_unsorted
list_unsorted.sort(reverse = True) # L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*
list_unsorted
l1
l2
l2 = l1.copy()
l2
l2 == l1
id(l1)
id(l2)
# It's better use copy()! Look:
l1 = ['a', 'b', 'c']
l2 = l1
l1
l2
l2.append('d')
l2
l1
l1 == l2
id(l1)
id(l2) # same id! watch it.
# Slicing
list10 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
list10[4:] #took the 4th and goes on
list10[4:6] #list[start:end] #item starts through end-1
list10[-2:] # the last 2 items in the array
list10[:-2] # everthing except the last 2 items
# sliceable[start:stop:step]
list10[::2] # os pares (pula de 2 em 2)
list10[1::2] # os ímpares
# and if I wanna know the index position of an item?
list10
list10.index(6)
list20 = [1, 2, 1, 3, 1, 4, 1, 5]
list20.count(1) # it counts the number of times the "1" appears in the list
list20.count(4)
list20.count(10)
list20.pop()
list20
list20.sort(reverse=True)
list20
max(list20)
sum(list20)
courses = ['History', 'Math', 'Physics', 'CompSci']
for index, course in enumerate(courses):
print(index, course)
course_str = ', '.join(courses)
print(course_str)
new_list = course_str.split(', ')
new_list
cs_courses = ['History', 'Math', 'Physics', 'CompSci']
art_courses = ['History', 'Math', 'Art', 'Design']
list(set(cs_courses) & set(art_courses))
set(cs_courses).intersection(art_courses)
set(cs_courses).difference(art_courses) # there's only in cs_courses
set(art_courses).difference(cs_courses) #there's only in art_courses
set(cs_courses).union(art_courses) # the two lists together
Explanation: Lists
<img src="img/aboutDataStructures.png">
End of explanation
x = ['Patton', 'Zorn', 'Hancock']
a, b, c = x
a
b, c
Explanation: Unpacking
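As a small addition not in the original cell, Python 3 also supports extended unpacking with a starred target:
first, *rest = x
first
rest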
End of explanation
# Examples from https://docs.python.org/3/tutorial/datastructures.html
squares = []
for x in range(10):
squares.append(x ** 2)
squares
squares2 = [x ** 2 for x in range(10)]
squares2
new_range = [i * i for i in range(5) if i % 2 == 0]
new_range
Explanation: List Comprehensions
End of explanation
combs = []
for x in [1, 2, 3]:
for y in [3, 1, 5]:
if x != y:
combs.append((x, y))
combs
# and in list comprehension format:
combs2 = [(x, y) for x in [1, 2, 3] for y in [3, 1, 5] if x != y]
combs2
combs == combs2
vec = [-4, -2, 0, 2, 4]
[x*2 for x in vec]
# exclude negative numbers:
[x for x in vec if x >= 0]
# absolute in all numbers:
[abs(x) for x in vec]
freshfruit = [' banana ', 'passion fruit ']
[weapon.strip() for weapon in freshfruit] # strip() removes the white spaces at the start and end, including spaces, tabs, newlines and carriage returns
# list of 2-tuples like (number, square)
[(y, y**2) for y in range(6)]
# flatten a list using listcomp with two 'for'
vec = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
[num for elem in vec for num in elem]
# in another words:
vec = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
new_list = []
for elem in vec:
for num in elem:
new_list.append(num)
new_list
from math import pi
[str(round(pi, i)) for i in range(1, 6)]
# https://en.wikipedia.org/wiki/List_comprehension
s = {v for v in 'ABCDABCD' if v not in 'CB'}
print(s)
s
type(s)
s = {key: val for key, val in enumerate('ABCD') if val not in 'CB'}
s
# regular list comprehension
a = [(x, y) for x in range(1, 6) for y in range(3, 6)]
a
# in another words:
b = []
for x in range(1, 6):
for y in range(3, 6):
b.append((x,y))
b
a == b
# parallel/zipped list comprehension
c = [x for x in zip(range(1, 6), range(3, 6))]
c
# http://www.pythonforbeginners.com/basics/list-comprehensions-in-python
listOfWords = ['this', 'is', 'a', 'list', 'of', 'words']
[word[0] for word in listOfWords]
# or...
listOfWords = ["this","is","a","list","of","words"]
items = [word[0] for word in listOfWords]
items
[x.lower() for x in ['A', 'B', 'C']]
[x.upper() for x in ['a', 'b', 'c']]
string = 'Hello 12345 World'
numbers = [x for x in string if x.isdigit()]
numbers
another_string = 'Hello 12345 World'
just_text = [x for x in another_string if x.isalpha()]
just_text
just_text = [just_text[:5], just_text[5:]]
just_text
part1 = ''.join(just_text[0])
part1
part2 = ''.join(just_text[1])
part2
# finally...
just_text = [part1, part2]
just_text
Explanation: A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses.
End of explanation |
6,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The seasonal cycle of surface temperature
Look at the observed seasonal cycle in the NCEP reanalysis data.
Read in the necessary data from the online server courtesy of the NOAA Physical Sciences Laboratory.
The catalog is here
Step1: Make two maps
Step2: Make a contour plot of the zonal mean temperature as a function of time of year
Step3: Exploring the amplitude of the seasonal cycle with an EBM
We are looking at the 1D (zonally averaged) energy balance model with diffusive heat transport. The equation is
$C \frac{\partial T(\phi,t)}{\partial t} = \big(1-\alpha\big) Q(\phi,t) - \Big(A+B T(\phi,t) \Big) +
\frac{K}{\cos\phi} \frac{\partial}{\partial \phi} \bigg( \cos\phi \frac{\partial T}{\partial \phi} \bigg)$
and the code in climlab.EBM_seasonal solves this equation numerically.
One handy feature of climlab process code
Step4: All models should have the same annual mean temperature
Step5: There is no automatic function in climlab.EBM to keep track of minimum and maximum temperatures (though we might add that in the future!)
Instead we'll step through one year "by hand" and save all the temperatures.
Step6: Make a figure to compare the observed zonal mean seasonal temperature cycle to what we get from the EBM with different heat capacities
Step7: Which one looks more realistic? Depends a bit on where you look. But overall, the observed seasonal cycle matches the 10 meter case best. The effective heat capacity governing the seasonal cycle of the zonal mean temperature is closer to 10 meters of water than to either 2 or 50 meters.
Making an animation of the EBM solutions
Step8: The seasonal cycle for a planet with 90º obliquity
The EBM code uses our familiar insolation.py code to calculate insolation, and therefore it's easy to set up a model with different orbital parameters. Here is an example with very different orbital parameters
Step9: Repeat the same procedure to calculate and store temperature throughout one year, after letting the models run out to equilibrium.
Step10: And plot the seasonal temperature cycle same as we did above
Step11: Note that the temperature range is much larger than for the Earth-like case above (but same contour interval, 10 degC).
Why is the temperature so uniform in the north-south direction with 50 meters of water?
To see the reason, let's plot the annual mean insolation at 90º obliquity, alongside the present-day annual mean insolation | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import climlab
from climlab import constants as const
import cartopy.crs as ccrs # use cartopy to make some maps
ncep_url = "http://psl.noaa.gov/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
ncep_Ts = xr.open_dataset(ncep_url + "surface_gauss/skt.sfc.mon.1981-2010.ltm.nc", decode_times=False)
lat_ncep = ncep_Ts.lat; lon_ncep = ncep_Ts.lon
Ts_ncep = ncep_Ts.skt
print( Ts_ncep.shape)
Explanation: The seasonal cycle of surface temperature
Look at the observed seasonal cycle in the NCEP reanalysis data.
Read in the necessary data from the online server courtesy of the NOAA Physical Sciences Laboratory.
The catalog is here: https://psl.noaa.gov/thredds/catalog/Datasets/ncep.reanalysis.derived/catalog.html
End of explanation
maxTs = Ts_ncep.max(dim='time')
minTs = Ts_ncep.min(dim='time')
meanTs = Ts_ncep.mean(dim='time')
fig = plt.figure( figsize=(16,6) )
ax1 = fig.add_subplot(1,2,1, projection=ccrs.Robinson())
cax1 = ax1.pcolormesh(lon_ncep, lat_ncep, meanTs, cmap=plt.cm.seismic , transform=ccrs.PlateCarree())
cbar1 = plt.colorbar(cax1)
ax1.set_title('Annual mean surface temperature ($^\circ$C)', fontsize=14 )
ax2 = fig.add_subplot(1,2,2, projection=ccrs.Robinson())
cax2 = ax2.pcolormesh(lon_ncep, lat_ncep, maxTs - minTs, transform=ccrs.PlateCarree() )
cbar2 = plt.colorbar(cax2)
ax2.set_title('Seasonal temperature range ($^\circ$C)', fontsize=14)
for ax in [ax1,ax2]:
#ax.contour( lon_cesm, lat_cesm, topo.variables['LANDFRAC'][:], [0.5], colors='k');
#ax.set_xlabel('Longitude', fontsize=14 ); ax.set_ylabel('Latitude', fontsize=14 )
ax.coastlines()
Explanation: Make two maps: one of annual mean surface temperature, another of the seasonal range (max minus min).
End of explanation
Tmax = 65; Tmin = -Tmax; delT = 10
clevels = np.arange(Tmin,Tmax+delT,delT)
fig_zonobs, ax = plt.subplots( figsize=(10,6) )
cax = ax.contourf(np.arange(12)+0.5, lat_ncep,
Ts_ncep.mean(dim='lon').transpose(), levels=clevels,
cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax)
ax.set_xlabel('Month', fontsize=16)
ax.set_ylabel('Latitude', fontsize=16 )
cbar = plt.colorbar(cax)
ax.set_title('Zonal mean surface temperature (degC)', fontsize=20)
Explanation: Make a contour plot of the zonal mean temperature as a function of time of year
End of explanation
model1 = climlab.EBM_seasonal()
model1.integrate_years(1, verbose=True)
water_depths = np.array([2., 10., 50.])
num_depths = water_depths.size
Tann = np.empty( [model1.lat.size, num_depths] )
models = []
for n in range(num_depths):
models.append(climlab.EBM_seasonal(water_depth=water_depths[n]))
models[n].integrate_years(20., verbose=False )
models[n].integrate_years(1., verbose=False)
Tann[:,n] = np.squeeze(models[n].timeave['Ts'])
Explanation: Exploring the amplitude of the seasonal cycle with an EBM
We are looking at the 1D (zonally averaged) energy balance model with diffusive heat transport. The equation is
$C \frac{\partial T(\phi,t)}{\partial t} = \big(1-\alpha\big) Q(\phi,t) - \Big(A+B T(\phi,t) \Big) +
\frac{K}{\cos\phi} \frac{\partial}{\partial \phi} \bigg( \cos\phi \frac{\partial T}{\partial \phi} \bigg)$
and the code in climlab.EBM_seasonal solves this equation numerically.
One handy feature of climlab process code: the function integrate_years() automatically calculates the time averaged temperature. So if we run it for exactly one year, we get the annual mean temperature saved in the field T_timeave.
We will look at the seasonal cycle of temperature in three different models with different heat capacities (which we express through an equivalent depth of water in meters):
End of explanation
lat = model1.lat
plt.plot(lat, Tann)
plt.xlim(-90,90)
plt.xlabel('Latitude')
plt.ylabel('Temperature (degC)')
plt.title('Annual mean temperature in the EBM')
plt.legend( water_depths.astype(str) )
plt.show()
Explanation: All models should have the same annual mean temperature:
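As a quick added consistency check using plain NumPy, the area-weighted global mean of the annual mean temperature should come out nearly identical for all three water depths:
weights = np.cos(np.deg2rad(lat))
for n in range(num_depths):
    print(water_depths[n], 'm water depth, global mean (degC):', np.average(Tann[:,n], weights=weights))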
End of explanation
num_steps_per_year = int(model1.time['num_steps_per_year'])
Tyear = np.empty((lat.size, num_steps_per_year, num_depths))
for n in range(num_depths):
for m in range(num_steps_per_year):
models[n].step_forward()
Tyear[:,m,n] = np.squeeze(models[n].Ts)
Explanation: There is no automatic function in climlab.EBM to keep track of minimum and maximum temperatures (though we might add that in the future!)
Instead we'll step through one year "by hand" and save all the temperatures.
End of explanation
fig = plt.figure( figsize=(16,10) )
ax = fig.add_subplot(2,num_depths,2)
cax = ax.contourf(np.arange(12)+0.5, lat_ncep,
Ts_ncep.mean(dim='lon').transpose(),
levels=clevels, cmap=plt.cm.seismic,
vmin=Tmin, vmax=Tmax)
ax.set_xlabel('Month')
ax.set_ylabel('Latitude')
cbar = plt.colorbar(cax)
ax.set_title('Zonal mean surface temperature - observed (degC)', fontsize=20)
for n in range(num_depths):
ax = fig.add_subplot(2,num_depths,num_depths+n+1)
cax = ax.contourf(4*np.arange(num_steps_per_year),
lat, Tyear[:,:,n], levels=clevels,
cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax)
cbar1 = plt.colorbar(cax)
ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 )
ax.set_xlabel('Days of year', fontsize=14 )
ax.set_ylabel('Latitude', fontsize=14 )
Explanation: Make a figure to compare the observed zonal mean seasonal temperature cycle to what we get from the EBM with different heat capacities:
End of explanation
def initial_figure(models):
fig, axes = plt.subplots(1,len(models), figsize=(15,4))
lines = []
for n in range(len(models)):
ax = axes[n]
c1 = 'b'
Tsline = ax.plot(lat, models[n].Ts, c1)[0]
ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 )
ax.set_xlabel('Latitude', fontsize=14 )
if n == 0:
ax.set_ylabel('Temperature', fontsize=14, color=c1 )
ax.set_xlim([-90,90])
ax.set_ylim([-60,60])
for tl in ax.get_yticklabels():
tl.set_color(c1)
ax.grid()
c2 = 'r'
ax2 = ax.twinx()
Qline = ax2.plot(lat, models[n].insolation, c2)[0]
if n == 2:
ax2.set_ylabel('Insolation (W m$^{-2}$)', color=c2, fontsize=14)
for tl in ax2.get_yticklabels():
tl.set_color(c2)
ax2.set_xlim([-90,90])
ax2.set_ylim([0,600])
lines.append([Tsline, Qline])
return fig, axes, lines
def animate(step, models, lines):
for n, ebm in enumerate(models):
ebm.step_forward()
# The rest of this is just updating the plot
lines[n][0].set_ydata(ebm.Ts)
lines[n][1].set_ydata(ebm.insolation)
return lines
# Plot initial data
fig, axes, lines = initial_figure(models)
# Some imports needed to make and display animations
from IPython.display import HTML
from matplotlib import animation
num_steps = int(models[0].time['num_steps_per_year'])
ani = animation.FuncAnimation(fig, animate,
frames=num_steps,
interval=80,
fargs=(models, lines),
)
HTML(ani.to_html5_video())
Explanation: Which one looks more realistic? Depends a bit on where you look. But overall, the observed seasonal cycle matches the 10 meter case best. The effective heat capacity governing the seasonal cycle of the zonal mean temperature is closer to 10 meters of water than to either 2 or 50 meters.
Making an animation of the EBM solutions
End of explanation
orb_highobl = {'ecc':0., 'obliquity':90., 'long_peri':0.}
print(orb_highobl)
model_highobl = climlab.EBM_seasonal(orb=orb_highobl)
print(model_highobl.param['orb'])
Explanation: The seasonal cycle for a planet with 90º obliquity
The EBM code uses our familiar insolation.py code to calculate insolation, and therefore it's easy to set up a model with different orbital parameters. Here is an example with very different orbital parameters: 90º obliquity. We looked at the distribution of insolation by latitude and season for this type of planet in the last homework.
End of explanation
Tann_highobl = np.empty( [lat.size, num_depths] )
models_highobl = []
for n in range(num_depths):
models_highobl.append(climlab.EBM_seasonal(water_depth=water_depths[n], orb=orb_highobl))
models_highobl[n].integrate_years(40., verbose=False )
models_highobl[n].integrate_years(1.)
Tann_highobl[:,n] = np.squeeze(models_highobl[n].timeave['Ts'])
Tyear_highobl = np.empty([lat.size, num_steps_per_year, num_depths])
for n in range(num_depths):
for m in range(num_steps_per_year):
models_highobl[n].step_forward()
Tyear_highobl[:,m,n] = np.squeeze(models_highobl[n].Ts)
Explanation: Repeat the same procedure to calculate and store temperature throughout one year, after letting the models run out to equilibrium.
End of explanation
fig = plt.figure( figsize=(16,5) )
Tmax_highobl = 125; Tmin_highobl = -Tmax_highobl; delT_highobl = 10
clevels_highobl = np.arange(Tmin_highobl, Tmax_highobl+delT_highobl, delT_highobl)
for n in range(num_depths):
ax = fig.add_subplot(1,num_depths,n+1)
cax = ax.contourf( 4*np.arange(num_steps_per_year), lat, Tyear_highobl[:,:,n],
levels=clevels_highobl, cmap=plt.cm.seismic, vmin=Tmin_highobl, vmax=Tmax_highobl )
cbar1 = plt.colorbar(cax)
ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 )
ax.set_xlabel('Days of year', fontsize=14 )
ax.set_ylabel('Latitude', fontsize=14 )
Explanation: And plot the seasonal temperature cycle same as we did above:
End of explanation
lat2 = np.linspace(-90, 90, 181)
days = np.linspace(1.,50.)/50 * const.days_per_year
Q_present = climlab.solar.insolation.daily_insolation( lat2, days )
Q_highobl = climlab.solar.insolation.daily_insolation( lat2, days, orb_highobl )
Q_present_ann = np.mean( Q_present, axis=1 )
Q_highobl_ann = np.mean( Q_highobl, axis=1 )
fig, ax = plt.subplots()
ax.plot( lat2, Q_present_ann, label='Earth' )
ax.plot( lat2, Q_highobl_ann, label='90deg obliquity' )
ax.grid()
ax.legend(loc='lower center')
ax.set_xlabel('Latitude', fontsize=14 )
ax.set_ylabel('W m$^{-2}$', fontsize=14 )
ax.set_title('Annual mean insolation for two different obliquities', fontsize=16)
Explanation: Note that the temperature range is much larger than for the Earth-like case above (but same contour interval, 10 degC).
Why is the temperature so uniform in the north-south direction with 50 meters of water?
To see the reason, let's plot the annual mean insolation at 90º obliquity, alongside the present-day annual mean insolation:
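It is also worth checking (an added aside) that the global, annual mean insolation is nearly the same for both obliquities, since obliquity mainly redistributes sunlight with latitude and season:
weights = np.cos(np.deg2rad(lat2))
print('Global annual mean insolation, present-day (W m-2):', np.average(Q_present_ann, weights=weights))
print('Global annual mean insolation, 90 deg obliquity (W m-2):', np.average(Q_highobl_ann, weights=weights))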
End of explanation |
6,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../../images/qiskit-heading.gif" alt="Note
Step1: Step one
Step2: Let us assume that qubits qr[0] and qr[1] belong to Alice and Bob respectively.
In classical bits cr[0] and cr[1] Alice and Bob store their measurement results, and classical bits cr[2] and cr[3] are used by Eve to store her measurement results of Alice's and Bob's qubits.
Now Charlie creates a singlet state
Step3: Qubits qr[0] and qr[1] are now entangled.
After creating a singlet state, Charlie sends qubit qr[0] to Alice and qubit qr[1] to Bob.
Step two
Step4: Suppose Alice and Bob want to generate a secret key using $N$ singlet states prepared by Charlie.
Step5: The participants must choose the directions onto which they will measure the spin projections of their qubits.
To do this, Alice and Bob create the strings $b$ and $b^{'}$ with randomly generated elements.
Step6: Now we combine Charlie's device and Alice's and Bob's detectors into one circuit (singlet + Alice's measurement + Bob's measurement).
Step7: Let us look at the name of one of the prepared circuits.
Step8: It tells us about the number of the singlet state received from Charlie, and the measurements applied by Alice and Bob.
In the circuits list we have stored $N$ (numberOfSinglets) circuits similar to those shown in the figure below.
The idea is to model every act of the creation of the singlet state, the distribution of its qubits among the participants and the measurement of the spin projection onto the chosen direction in the E91 protocol by executing each circuit from the circuits list with one shot.
Step three
Step9: Look at the output of the execution of the first circuit.
Step10: It consists of four digits.
Recall that Alice and Bob store the results of the measurement in classical bits cr[0] and cr[1] (two digits on the right).
Since we model the secret key generation process without the presence of an eavesdropper, the classical bits cr[2] and cr[3] are always 0.
Also note that the output is the Python dictionary, in which the keys are the obtained results, and the values are the counts.
Alice and Bob record the results of their measurements as bits of the strings $a$ and $a^{'}$.
To simulate this process we need to use regular expressions module re.
First, we compile the search patterns.
Step11: Using these patterns, we can find particular results in the outputs and fill the strings $a$ and $a^{'}$ with the results of Alice's and Bob's measurements.
Step12: Step four
Step13: The keys $k$ and $k'$ are now stored in the aliceKey and bobKey lists, respectively.
The remaining results which were not used to create the keys can now be revealed.
It is important for Alice and Bob to have the same keys, i.e. strings $k$ and $k^{'}$ must be equal.
Let us compare the bits of strings $k$ and $k^{'}$ and find out how many mismatches there are in the keys.
Step14: Note that since the strings $k$ and $k^{'}$ are secret, Alice and Bob have no information about mismatches in the bits of their keys.
To find out the number of errors, the participants can perform a random sampling test.
Alice randomly selects $\delta$ bits of her secret key and tells Bob which bits she selected.
Then Alice and Bob compare the values of these check bits.
For large enough $\delta$ the number of errors in the check bits will be close to the number of errors in the remaining bits.
Step five
Step15: Output
Now let us print all the interesting values.
Step16: Finaly, Alice and Bob have the secret keys $k$ and $k^{'}$ (aliceKey and bobKey)!
Now they can use the one-time pad technique to encrypt and decrypt messages.
Since we simulate the E91 protocol without the presence of Eve, the CHSH correlation value should be close to $-2\sqrt{2} \approx -2.828$.
In addition, there should be no mismatching bits in the keys of Alice and Bob.
Note also that there are 9 possible combinations of measurements that can be performed by Alice and Bob, but only 2 of them give the results using which the secret keys can be created.
Thus, the ratio of the length of the keys to the number of singlets $N$ should be close to $2/9$.
Simulation of eavesdropping
Suppose some third party wants to interfere in the communication session of Alice and Bob and obtain a secret key.
The eavesdropper can use the intercept-resend attacks
Step17: Like Alice and Bob, Eve must choose the directions onto which she will measure the spin projections of the qubits.
In our simulation, the eavesdropper randomly chooses one of the observables $W \otimes W$ or $Z \otimes Z$ to measure.
Step18: Like we did before, now we create the circuits with singlet states and detectors of Eve, Alice and Bob.
Step19: Now we execute all the prepared circuits on the simulator.
Step20: Let us look at the name of the first circuit and the output after it is executed.
Step21: We can see onto which directions Eve, Alice and Bob measured the spin projections and the results obtained.
Recall that the bits cr[2] and cr[3] (two digits on the left) are used by Eve to store the results of her measurements.
To extract Eve's results from the outputs, we need to compile new search patterns.
Step22: Now Eve, Alice and Bob record the results of their measurements.
Step23: As before, Alice, Bob and Eve create the secret keys using the results obtained after measuring the observables $W \otimes W$ and $Z \otimes Z$.
Step24: To find out the number of mismatching bits in the keys of Alice, Bob and Eve we compare the lists aliceKey, bobKey and eveKeys.
Step25: It is also good to know what percentage of the keys is known to Eve.
Step26: Using the chsh_corr function defined above we calculate the CHSH correlation value.
Step27: And now we print all the results. | Python Code:
# useful additional packages
import numpy as np
import random
# regular expressions module
import re
# importing the QISKit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, Aer
# import basic plot tools
from qiskit.tools.visualization import circuit_drawer, plot_histogram
Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
E91 quantum key distribution protocol
Contributors
Andrey Kardashin
Introduction
Suppose that Alice wants to send a message to Bob.
In order to protect the information in the message from the eavesdropper Eve, it must be encrypted.
Encryption is the process of encoding the plaintext into ciphertext.
The strength of encryption, that is, the property to resist decryption, is determined by its algorithm.
Any encryption algorithm is based on the use of a key.
In order to generate the ciphertext, the one-time pad technique is usually used.
The idea of this technique is to apply the exclusive or (XOR) $\oplus$ operation to bits of the plaintext and bits of the key to obtain the ciphertext.
Thus, if $m=(m_1 \ldots m_n)$, $c=(c_1 \ldots c_n)$ and $k=(k_1 \ldots k_n)$ are binary strings of plaintext, ciphertext and key respectively, then the encryption is defined as $c_i=m_i \oplus k_i$, and decryption as $m_i=c_i \oplus k_i$.
The one-time pad method is proved to be absolutely secure.
Thus, if Eve intercepted the ciphertext $c$, she will not get any information from the message $m$ until she has the key $k$.
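As a short illustration (not part of the original text), the encryption and decryption rules above can be written in a few lines of Python:
message = [1, 0, 1, 1, 0]                        # plaintext bits m
key     = [0, 1, 1, 0, 1]                        # secret key bits k
cipher  = [m ^ k for m, k in zip(message, key)]  # c_i = m_i XOR k_i
decoded = [c ^ k for c, k in zip(cipher, key)]   # m_i = c_i XOR k_i
print(cipher)              # [1, 1, 0, 1, 1]
print(decoded == message)  # True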
The main problem of modern cryptographic systems is the distribution among the participants of the communication session of a secret key, possession of which should not be available to third parties.
The rapidly developing methods of quantum key distribution can solve this problem regardless of the capabilities of the eavesdropper.
In this tutorial, we show how Alice and Bob can generate a secret key using the E91 quantum key distribution protocol.
Quantum entanglement
The E91 protocol developed by Artur Ekert in 1991 is based on the use of entangled states and Bell's theorem (see Entanglement Revisited QISKit tutorial).
It is known that two electrons A and B can be prepared in such a state that they can not be considered separately from each other.
One of these states is the singlet state
$$\lvert\psi_s\rangle =
\frac{1}{\sqrt{2}}(\lvert0\rangle_A\otimes\lvert1\rangle_B - \lvert1\rangle_A\otimes\lvert0\rangle_B) =
\frac{1}{\sqrt{2}}(\lvert01\rangle - \lvert10\rangle),$$
where the vectors $\lvert 0 \rangle$ and $\lvert 1 \rangle$ describe the states of each electron with the [spin](https://en.wikipedia.org/wiki/Spin_(physics%29) projection along the positive and negative direction of the z axis.
The observable of the projection of the spin onto the direction $\vec{n}=(n_x, n_y, n_z)$ is given by
$$\vec{n} \cdot \vec{\sigma} =
n_x X + n_y Y + n_z Z,$$
where $\vec{\sigma} = (X, Y, Z)$ and $X, Y, Z$ are the Pauli matrices.
For two qubits A and B, the observable $(\vec{a} \cdot \vec{\sigma})_A \otimes (\vec{b} \cdot \vec{\sigma})_B$ describes the joint measurement of the spin projections onto the directions $\vec{a}$ and $\vec{b}$.
It can be shown that the expectation value of this observable in the singlet state is
$$\langle (\vec{a} \cdot \vec{\sigma})A \otimes (\vec{b} \cdot \vec{\sigma})_B \rangle{\psi_s} =
-\vec{a} \cdot \vec{b}. \qquad\qquad (1)$$
Here we see an interesting fact: if Alice and Bob measure the spin projections of electrons A and B onto the same direction, they will obtain the opposite results.
Thus, if Alice got the result $\pm 1$, then Bob definitely will get the result $\mp 1$, i.e. the results will be perfectly anticorrelated.
CHSH inequality
In the framework of classical physics, it is impossible to create a correlation inherent in the singlet state $\lvert\psi_s\rangle$.
Indeed, let us measure the observables $X$, $Z$ for qubit A and observables $W = \frac{1}{\sqrt{2}} (X + Z)$, $V = \frac{1}{\sqrt{2}} (-X + Z)$ for qubit B.
Performing joint measurements of these observables, the following expectation values can be obtained:
\begin{eqnarray}
\langle X \otimes W \rangle_{\psi_s} &= -\frac{1}{\sqrt{2}}, \quad
\langle X \otimes V \rangle_{\psi_s} &= \frac{1}{\sqrt{2}}, \qquad\qquad (2) \
\langle Z \otimes W \rangle_{\psi_s} &= -\frac{1}{\sqrt{2}}, \quad
\langle Z \otimes V \rangle_{\psi_s} &= -\frac{1}{\sqrt{2}}.
\end{eqnarray}
Now we can costruct the Clauser-Horne-Shimony-Holt (CHSH) correlation value:
$$C =
\langle X\otimes W \rangle - \langle X \otimes V \rangle + \langle Z \otimes W \rangle + \langle Z \otimes V \rangle =
-2 \sqrt{2}. \qquad\qquad (3)$$
The local hidden variable theory which was developed in particular to explain the quantum correlations gives that $\lvert C \rvert \leqslant 2$.
But Bell's theorem states that "no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."
Thus, the violation of the CHSH inequality (i.e. $C = -2 \sqrt{2}$ for the singlet state), which is a generalized form of Bell's inequality, can serve as an indicator of quantum entanglement.
This fact finds its application in the E91 protocol.
The protocol
To implement the E91 quantum key distribution protocol, there must be a source of qubits prepared in the singlet state.
It does not matter to whom this source belongs: to Alice, to Bob, to some trusted third-party Charlie or even to Eve.
The steps of the E91 protocol are following.
Charlie, the owner of the singlet state preparation device, creates $N$ entangled states $\lvert\psi_s\rangle$ and sends qubits A to Alice and qubits B to Bob via the quantum channel.
Participants Alice and Bob generate strings $b=(b_1 \ldots b_N)$ and $b^{'}=(b_1^{'} \ldots b_N^{'})$, where $b_i, b^{'}_j = 1, 2, 3$.
Depending on the elements of these strings, Alice and Bob measure the spin projections of their qubits along the following directions:
\begin{align}
b_i = 1: \quad \vec{a}_1 &= (1,0,0) \quad (X \text{ observable}) &
b_j^{'} = 1: \quad \vec{b}_1 &= \left(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right) \quad (W \text{ observable})
\
b_i = 2: \quad \vec{a}_2 &= \left(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right) \quad (W \text{ observable}) &
b_j^{'} = 2: \quad \vec{b}_2 &= (0,0,1) \quad ( \text{Z observable})
\
b_i = 3: \quad \vec{a}_3 &= (0,0,1) \quad (Z \text{ observable}) &
b_j^{'} = 3: \quad \vec{b}_3 &= \left(-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\right) \quad (V \text{ observable})
\end{align}
<img src="images/vectors.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="center">
We can describe this process as a measurement of the observables $(\vec{a}_i \cdot \vec{\sigma})_A \otimes (\vec{b}_j \cdot \vec{\sigma})_B$ for each singlet state created by Charlie.
Alice and Bob record the results of their measurements as elements of strings $a=(a_1 \ldots a_N)$ and $a^{'} =(a_1^{'} \ldots a_N^{'})$ respectively, where $a_i, a^{'}_j = \pm 1$.
Using the classical channel, participants compare their strings $b=(b_1 \ldots b_N)$ and $b^{'}=(b_1^{'} \ldots b_N^{'})$.
In other words, Alice and Bob tell each other which measurements they have performed during the step 2.
If Alice and Bob have measured the spin projections of the $m$-th entangled pair of qubits onto the same direction (i.e. $\vec{a}_2/\vec{b}_1$ or $\vec{a}_3/\vec{b}_2$ for Alice's and Bob's qubit respectively), then they are sure that they obtained opposite results, i.e. $a_m = - a_m^{'}$ (see Eq. (1)).
Thus, for the $l$-th bit of the key strings $k=(k_1 \ldots k_n),k^{'}=(k_1^{'} \ldots k_n^{'})$ Alice and Bob can write $k_l = a_m, k_l^{'} = -a_m^{'}$.
Using the results obtained after measuring the spin projections onto the $\vec{a}_1/\vec{b}_1$, $\vec{a}_1/\vec{b}_3$, $\vec{a}_3/\vec{b}_1$ and $\vec{a}_3/\vec{b}_3$ directions (observables $(2)$), Alice and Bob calculate the CHSH correlation value $(3)$.
If $C = -2\sqrt{2}$, then Alice and Bob can be sure that the states they had been receiving from Charlie were entangled indeed.
This fact tells the participants that there was no interference in the quantum channel.
Simulation
In this section we simulate the E91 quantum key distribution protocol without the presence of an eavesdropper.
End of explanation
# Creating registers
qr = QuantumRegister(2, name="qr")
cr = ClassicalRegister(4, name="cr")
Explanation: Step one: creating the singlets
In the first step Alice and Bob receive their qubits of the singlet states $\lvert\psi_s\rangle$ created by Charlie.
For our simulation, we need registers with two quantum bits and four classical bits.
End of explanation
singlet = QuantumCircuit(qr, cr, name='singlet')
singlet.x(qr[0])
singlet.x(qr[1])
singlet.h(qr[0])
singlet.cx(qr[0],qr[1])
Explanation: Let us assume that qubits qr[0] and qr[1] belong to Alice and Bob respectively.
In classical bits cr[0] and cr[1] Alice and Bob store their measurement results, and classical bits cr[2] and cr[3] are used by Eve to store her measurement results of Alice's and Bob's qubits.
Now Charlie creates a singlet state:
End of explanation
## Alice's measurement circuits
# measure the spin projection of Alice's qubit onto the a_1 direction (X basis)
measureA1 = QuantumCircuit(qr, cr, name='measureA1')
measureA1.h(qr[0])
measureA1.measure(qr[0],cr[0])
# measure the spin projection of Alice's qubit onto the a_2 direction (W basis)
measureA2 = QuantumCircuit(qr, cr, name='measureA2')
measureA2.s(qr[0])
measureA2.h(qr[0])
measureA2.t(qr[0])
measureA2.h(qr[0])
measureA2.measure(qr[0],cr[0])
# measure the spin projection of Alice's qubit onto the a_3 direction (standard Z basis)
measureA3 = QuantumCircuit(qr, cr, name='measureA3')
measureA3.measure(qr[0],cr[0])
## Bob's measurement circuits
# measure the spin projection of Bob's qubit onto the b_1 direction (W basis)
measureB1 = QuantumCircuit(qr, cr, name='measureB1')
measureB1.s(qr[1])
measureB1.h(qr[1])
measureB1.t(qr[1])
measureB1.h(qr[1])
measureB1.measure(qr[1],cr[1])
# measure the spin projection of Bob's qubit onto the b_2 direction (standard Z basis)
measureB2 = QuantumCircuit(qr, cr, name='measureB2')
measureB2.measure(qr[1],cr[1])
# measure the spin projection of Bob's qubit onto the b_3 direction (V basis)
measureB3 = QuantumCircuit(qr, cr, name='measureB3')
measureB3.s(qr[1])
measureB3.h(qr[1])
measureB3.tdg(qr[1])
measureB3.h(qr[1])
measureB3.measure(qr[1],cr[1])
## Lists of measurement circuits
aliceMeasurements = [measureA1, measureA2, measureA3]
bobMeasurements = [measureB1, measureB2, measureB3]
Explanation: Qubits qr[0] and qr[1] are now entangled.
After creating a singlet state, Charlie sends qubit qr[0] to Alice and qubit qr[1] to Bob.
Step two: measuring
First let us prepare the measurements which will be used by Alice and Bob.
We define $A(\vec{a}_i) = \vec{a}_i \cdot \vec{\sigma}$ and $B(\vec{b}_j) = \vec{b}_j \cdot \vec{\sigma}$ as the spin projection observables used by Alice and Bob for their measurements.
To perform these measurements, the standard basis $Z$ must be rotated to the proper basis when it is needed (see Superposition and Entanglement and Bell Tests user guides).
Here we define the notation of possible measurements of Alice and Bob:
Blocks on the left side can be considered as detectors used by the participants to measure $X, W, Z$ and $V$ observables.
Now we prepare the corresponding circuits.
End of explanation
# Define the number of singlets N
numberOfSinglets = 500
Explanation: Suppose Alice and Bob want to generate a secret key using $N$ singlet states prepared by Charlie.
End of explanation
aliceMeasurementChoices = [random.randint(1, 3) for i in range(numberOfSinglets)] # string b of Alice
bobMeasurementChoices = [random.randint(1, 3) for i in range(numberOfSinglets)] # string b' of Bob
Explanation: The participants must choose the directions onto which they will measure the spin projections of their qubits.
To do this, Alice and Bob create the strings $b$ and $b^{'}$ with randomly generated elements.
End of explanation
circuits = [] # the list in which the created circuits will be stored
for i in range(numberOfSinglets):
# create the name of the i-th circuit depending on Alice's and Bob's measurement choices
circuitName = str(i) + ':A' + str(aliceMeasurementChoices[i]) + '_B' + str(bobMeasurementChoices[i])
# create the joint measurement circuit
# add Alice's and Bob's measurement circuits to the singlet state circuit
# singlet state circuit # measurement circuit of Alice # measurement circuit of Bob
circuitName = singlet + aliceMeasurements[aliceMeasurementChoices[i]-1] + bobMeasurements[bobMeasurementChoices[i]-1]
# add the created circuit to the circuits list
circuits.append(circuitName)
Explanation: Now we combine Charlie's device and Alice's and Bob's detectors into one circuit (singlet + Alice's measurement + Bob's measurement).
End of explanation
print(circuits[0].name)
Explanation: Let us look at the name of one of the prepared circuits.
End of explanation
backend=Aer.get_backend('qasm_simulator')
result = execute(circuits, backend=backend, shots=1).result()
print(result)
Explanation: It tells us about the number of the singlet state received from Charlie, and the measurements applied by Alice and Bob.
In the circuits list we have stored $N$ (numberOfSinglets) circuits similar to those shown in the figure below.
The idea is to model every act of the creation of the singlet state, the distribution of its qubits among the participants and the measurement of the spin projection onto the chosen direction in the E91 protocol by executing each circuit from the circuits list with one shot.
Step three: recording the results
First let us execute the circuits on the simulator.
End of explanation
result.get_counts(circuits[0])
plot_histogram(result.get_counts(circuits[0]))
Explanation: Look at the output of the execution of the first circuit.
End of explanation
abPatterns = [
re.compile('..00$'), # search for the '..00' output (Alice obtained -1 and Bob obtained -1)
re.compile('..01$'), # search for the '..01' output
re.compile('..10$'), # search for the '..10' output (Alice obtained -1 and Bob obtained 1)
re.compile('..11$') # search for the '..11' output
]
Explanation: It consists of four digits.
Recall that Alice and Bob store the results of the measurement in classical bits cr[0] and cr[1] (two digits on the right).
Since we model the secret key generation process without the presence of an eavesdropper, the classical bits cr[2] and cr[3] are always 0.
Also note that the output is the Python dictionary, in which the keys are the obtained results, and the values are the counts.
Alice and Bob record the results of their measurements as bits of the strings $a$ and $a^{'}$.
To simulate this process we need to use regular expressions module re.
First, we compile the search patterns.
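For instance (an added check), the third pattern matches any output whose two rightmost bits are '10':
print(bool(abPatterns[2].search('0010')))  # True: Alice obtained -1, Bob obtained 1
print(bool(abPatterns[2].search('0001')))  # False: rightmost bits are '01'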
End of explanation
aliceResults = [] # Alice's results (string a)
bobResults = [] # Bob's results (string a')
for i in range(numberOfSinglets):
res = list(result.get_counts(circuits[i]).keys())[0] # extract the key from the dict and transform it to str; execution result of the i-th circuit
if abPatterns[0].search(res): # check if the key is '..00' (if the measurement results are -1,-1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(-1) # Bob got the result -1
if abPatterns[1].search(res):
aliceResults.append(1)
bobResults.append(-1)
if abPatterns[2].search(res): # check if the key is '..10' (if the measurement results are -1,1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(1) # Bob got the result 1
if abPatterns[3].search(res):
aliceResults.append(1)
bobResults.append(1)
Explanation: Using these patterns, we can find particular results in the outputs and fill strings the $a$ and $a^{'}$ with the results of Alice's and Bob's measurements.
End of explanation
aliceKey = [] # Alice's key string k
bobKey = [] # Bob's key string k'
# comparing the strings with measurement choices
for i in range(numberOfSinglets):
# if Alice and Bob have measured the spin projections onto the a_2/b_1 or a_3/b_2 directions
if (aliceMeasurementChoices[i] == 2 and bobMeasurementChoices[i] == 1) or (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 2):
aliceKey.append(aliceResults[i]) # record the i-th result obtained by Alice as the bit of the secret key k
bobKey.append(- bobResults[i]) # record the i-th result obtained by Bob, multiplied by -1, as the bit of the secret key k'
keyLength = len(aliceKey) # length of the secret key
Explanation: Step four: revealing the bases
In the previos step we have stored the measurement results of Alice and Bob in the aliceResults and bobResults lists (strings $a$ and $a^{'}$).
Now the participants compare their strings $b$ and $b^{'}$ via the public classical channel.
If Alice and Bob have measured the spin projections of their qubits of the i-th singlet onto the same direction, then Alice records her result $a_i$ as the bit of the string $k$, and Bob records his result multiplied by $-1$, i.e. $-a^{'}_i$, as the bit of the string $k^{'}$ (see Eq. (1)).
End of explanation
abKeyMismatches = 0 # number of mismatching bits in Alice's and Bob's keys
for j in range(keyLength):
if aliceKey[j] != bobKey[j]:
abKeyMismatches += 1
Explanation: The keys $k$ and $k'$ are now stored in the aliceKey and bobKey lists, respectively.
The remaining results which were not used to create the keys can now be revealed.
It is important for Alice and Bob to have the same keys, i.e. strings $k$ and $k^{'}$ must be equal.
Let us compare the bits of the strings $k$ and $k^{'}$ and find out how many mismatches there are between the keys.
End of explanation
# function that calculates CHSH correlation value
def chsh_corr(result):
# lists with the counts of measurement results
# each element represents the number of (-1,-1), (-1,1), (1,-1) and (1,1) results respectively
countA1B1 = [0, 0, 0, 0] # XW observable
countA1B3 = [0, 0, 0, 0] # XV observable
countA3B1 = [0, 0, 0, 0] # ZW observable
countA3B3 = [0, 0, 0, 0] # ZV observable
for i in range(numberOfSinglets):
res = list(result.get_counts(circuits[i]).keys())[0]
# if the spins of the qubits of the i-th singlet were projected onto the a_1/b_1 directions
if (aliceMeasurementChoices[i] == 1 and bobMeasurementChoices[i] == 1):
for j in range(4):
if abPatterns[j].search(res):
countA1B1[j] += 1
if (aliceMeasurementChoices[i] == 1 and bobMeasurementChoices[i] == 3):
for j in range(4):
if abPatterns[j].search(res):
countA1B3[j] += 1
if (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 1):
for j in range(4):
if abPatterns[j].search(res):
countA3B1[j] += 1
# if the spins of the qubits of the i-th singlet were projected onto the a_3/b_3 directions
if (aliceMeasurementChoices[i] == 3 and bobMeasurementChoices[i] == 3):
for j in range(4):
if abPatterns[j].search(res):
countA3B3[j] += 1
# number of the results obtained from the measurements in a particular basis
total11 = sum(countA1B1)
total13 = sum(countA1B3)
total31 = sum(countA3B1)
total33 = sum(countA3B3)
# expectation values of XW, XV, ZW and ZV observables (2)
expect11 = (countA1B1[0] - countA1B1[1] - countA1B1[2] + countA1B1[3])/total11 # -1/sqrt(2)
expect13 = (countA1B3[0] - countA1B3[1] - countA1B3[2] + countA1B3[3])/total13 # 1/sqrt(2)
expect31 = (countA3B1[0] - countA3B1[1] - countA3B1[2] + countA3B1[3])/total31 # -1/sqrt(2)
expect33 = (countA3B3[0] - countA3B3[1] - countA3B3[2] + countA3B3[3])/total33 # -1/sqrt(2)
corr = expect11 - expect13 + expect31 + expect33 # calculate the CHSH correlation value (3)
return corr
Explanation: Note that since the strings $k$ and $k^{'}$ are secret, Alice and Bob have no information about mismatches in the bits of their keys.
To find out the number of errors, the participants can perform a random sampling test.
Alice randomly selects $\delta$ bits of her secret key and tells Bob which bits she selected.
Then Alice and Bob compare the values of these check bits.
For large enough $\delta$ the number of errors in the check bits will be close to the number of errors in the remaining bits.
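As an illustrative sketch only (not part of the original notebook code), such a sampling test could be simulated on the aliceKey and bobKey lists above, where delta is a hypothetical number of check bits and the random module is assumed to be imported:
checkIndices = random.sample(range(keyLength), delta) # delta check positions chosen at random
checkErrors = sum(1 for idx in checkIndices if aliceKey[idx] != bobKey[idx]) # mismatching check bits
estimatedErrorRate = checkErrors/delta # estimate of the error rate in the remaining bits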
Step five: CHSH correlation value test
Alice and Bob want to be sure that there was no interference in the communication session.
To do that, they calculate the CHSH correlation value $(3)$ using the results obtained after the measurements of spin projections onto the $\vec{a}_1/\vec{b}_1$, $\vec{a}_1/\vec{b}_3$, $\vec{a}_3/\vec{b}_1$ and $\vec{a}_3/\vec{b}_3$ directions.
Recall that it is equivalent to the measurement of the observables $X \otimes W$, $X \otimes V$, $Z \otimes W$ and $Z \otimes V$ respectively.
According to the Born-von Neumann statistical postulate, the expectation value of the observable $E = \sum_j e_j \lvert e_j \rangle \langle e_j \rvert$ in the state $\lvert \psi \rangle$ is given by
$$\langle E \rangle_\psi =
\mathrm{Tr}\, \lvert\psi\rangle \langle\psi\rvert \, E = \\
\mathrm{Tr}\, \lvert\psi\rangle \langle\psi\rvert \sum_j e_j \lvert e_j \rangle \langle e_j \rvert =
\sum_j \langle\psi\rvert (e_j \lvert e_j \rangle \langle e_j \rvert) \lvert\psi\rangle =
\sum_j e_j \left|\langle\psi\rvert e_j \rangle \right|^2 = \\
\sum_j e_j \mathrm{P}_\psi (E \models e_j),$$
where $\lvert e_j \rangle$ is the eigenvector of $E$ with the corresponding eigenvalue $e_j$, and $\mathrm{P}_\psi (E \models e_j)$ is the probability of obtaining the result $e_j$ after measuring the observable $E$ in the state $\lvert \psi \rangle$.
A similar expression can be written for the joint measurement of the observables $A$ and $B$:
$$\langle A \otimes B \rangle_\psi =
\sum_{j,k} a_j b_k \mathrm{P}_\psi (A \models a_j, B \models b_k) =
\sum_{j,k} a_j b_k \mathrm{P}_\psi (a_j, b_k). \qquad\qquad (4)$$
Note that if $A$ and $B$ are the spin projection observables, then the corresponding eigenvalues are $a_j, b_k = \pm 1$.
Thus, for the observables $A(\vec{a}_i)$ and $B(\vec{b}_j)$ and singlet state $\lvert\psi\rangle_s$ we can rewrite $(4)$ as
$$\langle A(\vec{a}_i) \otimes B(\vec{b}_j) \rangle =
\mathrm{P}(-1,-1) - \mathrm{P}(1,-1) - \mathrm{P}(-1,1) + \mathrm{P}(1,1). \qquad\qquad (5)$$
In our experiments, the probabilities on the right side can be calculated as follows:
$$\mathrm{P}(a_j, b_k) = \frac{n_{a_j, b_k}(A \otimes B)}{N(A \otimes B)}, \qquad\qquad (6)$$
where the numerator is the number of results $a_j, b_k$ obtained after measuring the observable $A \otimes B$, and the denominator is the total number of measurements of the observable $A \otimes B$.
Since Alice and Bob revealed their strings $b$ and $b^{'}$, they know what measurements they performed and what results they have obtained.
With this data, participants calculate the expectation values $(2)$ using $(5)$ and $(6)$.
End of explanation
corr = chsh_corr(result) # CHSH correlation value
# CHSH inequality test
print('CHSH correlation value: ' + str(round(corr, 3)))
# Keys
print('Length of the key: ' + str(keyLength))
print('Number of mismatching bits: ' + str(abKeyMismatches) + '\n')
Explanation: Output
Now let us print all the interesting values.
End of explanation
# measurement of the spin projection of Alice's qubit onto the a_2 direction (W basis)
measureEA2 = QuantumCircuit(qr, cr, name='measureEA2')
measureEA2.s(qr[0])
measureEA2.h(qr[0])
measureEA2.t(qr[0])
measureEA2.h(qr[0])
measureEA2.measure(qr[0],cr[2])
# measurement of the spin projection of Alice's qubit onto the a_3 direction (standard Z basis)
measureEA3 = QuantumCircuit(qr, cr, name='measureEA3')
measureEA3.measure(qr[0],cr[2])
# measurement of the spin projection of Bob's qubit onto the b_1 direction (W basis)
measureEB1 = QuantumCircuit(qr, cr, name='measureEB1')
measureEB1.s(qr[1])
measureEB1.h(qr[1])
measureEB1.t(qr[1])
measureEB1.h(qr[1])
measureEB1.measure(qr[1],cr[3])
# measurement of the spin projection of Bob's qubit onto the b_2 direction (standard Z measurement)
measureEB2 = QuantumCircuit(qr, cr, name='measureEB2')
measureEB2.measure(qr[1],cr[3])
# lists of measurement circuits
eveMeasurements = [measureEA2, measureEA3, measureEB1, measureEB2]
Explanation: Finally, Alice and Bob have the secret keys $k$ and $k^{'}$ (aliceKey and bobKey)!
Now they can use the one-time pad technique to encrypt and decrypt messages.
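A minimal sketch of that idea (illustrative only, with the ±1 key bits mapped to 0/1 first):
keyBits = [(1 - b)//2 for b in aliceKey] # map +1 -> 0 and -1 -> 1 (an arbitrary convention)
messageBits = [1, 0, 1] # a toy message, assumed not longer than the key
cipherBits = [m ^ k for m, k in zip(messageBits, keyBits)] # encrypt by XOR with the key bits
recoveredBits = [c ^ k for c, k in zip(cipherBits, keyBits)] # XOR with the same key decrypts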
Since we simulate the E91 protocol without the presence of Eve, the CHSH correlation value should be close to $-2\sqrt{2} \approx -2.828$.
In addition, there should be no mismatching bits in the keys of Alice and Bob.
Note also that there are 9 possible combinations of measurements that Alice and Bob can perform, but only 2 of them give results that can be used to create the secret keys.
Thus, the ratio of the length of the keys to the number of singlets $N$ should be close to $2/9$.
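This expectation can be checked directly, for example:
print('Key rate: ' + str(round(keyLength/numberOfSinglets, 3)) + '; expected about ' + str(round(2/9, 3)))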
Simulation of eavesdropping
Suppose some third party wants to interfere in the communication session of Alice and Bob and obtain a secret key.
The eavesdropper can use the intercept-resend attacks: Eve intercepts one or both of the entangled qubits prepared by Charlie, measures the spin projections of these qubits, prepares new ones depending on the results obtained ($\lvert 01 \rangle$ or $\lvert 10 \rangle$) and sends them to Alice and Bob.
A schematic representation of this process is shown in the figure below.
Here $E(\vec{n}_A) = \vec{n}_A \cdot \vec{\sigma}$ and $E(\vec{n}_B) = \vec{n}_B \cdot \vec{\sigma}$ are the observables of the spin projections of Alice's and Bob's qubits onto the directions $\vec{n}_A$ and $\vec{n}_B$.
It would be wise for Eve to choose these directions to be $\vec{n}_A = \vec{a}_2,\vec{a}_3$ and $\vec{n}_B = \vec{b}_1,\vec{b}_2$, since the results obtained from other measurements cannot be used to create a secret key.
Let us prepare the circuits for Eve's measurements.
End of explanation
# list of Eve's measurement choices
# the first and the second elements of each row represent the measurement of Alice's and Bob's qubits by Eve respectively
eveMeasurementChoices = []
for j in range(numberOfSinglets):
if random.uniform(0, 1) <= 0.5: # in 50% of cases perform the WW measurement
eveMeasurementChoices.append([0, 2])
else: # in 50% of cases perform the ZZ measurement
eveMeasurementChoices.append([1, 3])
Explanation: Like Alice and Bob, Eve must choose the directions onto which she will measure the spin projections of the qubits.
In our simulation, the eavesdropper randomly chooses one of the observables $W \otimes W$ or $Z \otimes Z$ to measure.
End of explanation
circuits = [] # the list in which the created circuits will be stored
for j in range(numberOfSinglets):
# create the name of the j-th circuit depending on Alice's, Bob's and Eve's choices of measurement
circuitName = str(j) + ':A' + str(aliceMeasurementChoices[j]) + '_B' + str(bobMeasurementChoices[j] + 2) + '_E' + str(eveMeasurementChoices[j][0]) + str(eveMeasurementChoices[j][1] - 1)
# create the joint measurement circuit by composing:
# singlet state circuit + Eve's measurement of Alice's qubit + Eve's measurement of Bob's qubit + Alice's measurement circuit + Bob's measurement circuit
jointCircuit = singlet + eveMeasurements[eveMeasurementChoices[j][0]-1] + eveMeasurements[eveMeasurementChoices[j][1]-1] + aliceMeasurements[aliceMeasurementChoices[j]-1] + bobMeasurements[bobMeasurementChoices[j]-1]
# attach the descriptive name to the composed circuit so it can be printed later
jointCircuit.name = circuitName
# add the created circuit to the circuits list
circuits.append(jointCircuit)
Explanation: Like we did before, now we create the circuits with singlet states and detectors of Eve, Alice and Bob.
End of explanation
backend=Aer.get_backend('qasm_simulator')
result = execute(circuits, backend=backend, shots=1).result()
print(result)
Explanation: Now we execute all the prepared circuits on the simulator.
End of explanation
print(str(circuits[0].name) + '\t' + str(result.get_counts(circuits[0])))
plot_histogram(result.get_counts(circuits[0]))
Explanation: Let us look at the name of the first circuit and the output after it is executed.
End of explanation
ePatterns = [
re.compile('00..$'), # search for the '00..' result (Eve obtained the results -1 and -1 for Alice's and Bob's qubits)
re.compile('01..$'), # search for the '01..' result (Eve obtained the results 1 and -1 for Alice's and Bob's qubits)
re.compile('10..$'),
re.compile('11..$')
]
Explanation: We can see onto which directions Eve, Alice and Bob measured the spin projections and the results obtained.
Recall that the bits cr[2] and cr[3] (two digits on the left) are used by Eve to store the results of her measurements.
To extract Eve's results from the outputs, we need to compile new search patterns.
End of explanation
aliceResults = [] # Alice's results (string a)
bobResults = [] # Bob's results (string a')
# list of Eve's measurement results
# the elements in the 1-st column are the results obtained from the measurements of Alice's qubits
# the elements in the 2-nd column are the results obtained from the measurements of Bob's qubits
eveResults = []
# recording the measurement results
for j in range(numberOfSinglets):
res = list(result.get_counts(circuits[j]).keys())[0] # extract a key from the dict and transform it to str
# Alice and Bob
if abPatterns[0].search(res): # check if the key is '..00' (if the measurement results are -1,-1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(-1) # Bob got the result -1
if abPatterns[1].search(res):
aliceResults.append(1)
bobResults.append(-1)
if abPatterns[2].search(res): # check if the key is '..10' (if the measurement results are -1,1)
aliceResults.append(-1) # Alice got the result -1
bobResults.append(1) # Bob got the result 1
if abPatterns[3].search(res):
aliceResults.append(1)
bobResults.append(1)
# Eve
if ePatterns[0].search(res): # check if the key is '00..'
eveResults.append([-1, -1]) # results of the measurement of Alice's and Bob's qubits are -1,-1
if ePatterns[1].search(res):
eveResults.append([1, -1])
if ePatterns[2].search(res):
eveResults.append([-1, 1])
if ePatterns[3].search(res):
eveResults.append([1, 1])
Explanation: Now Eve, Alice and Bob record the results of their measurements.
End of explanation
aliceKey = [] # Alice's key string k
bobKey = [] # Bob's key string k'
eveKeys = [] # Eve's keys; the 1-st column is the key of Alice, and the 2-nd is the key of Bob
# comparing the strings with measurement choices (b and b')
for j in range(numberOfSinglets):
# if Alice and Bob measured the spin projections onto the a_2/b_1 or a_3/b_2 directions
if (aliceMeasurementChoices[j] == 2 and bobMeasurementChoices[j] == 1) or (aliceMeasurementChoices[j] == 3 and bobMeasurementChoices[j] == 2):
aliceKey.append(aliceResults[j]) # record the j-th result obtained by Alice as a bit of the secret key k
bobKey.append(-bobResults[j]) # record the j-th result obtained by Bob, multiplied by -1, as a bit of the secret key k'
eveKeys.append([eveResults[j][0], -eveResults[j][1]]) # record the j-th bits of Eve's keys
keyLength = len(aliceKey) # length of the secret key
Explanation: As before, Alice, Bob and Eve create the secret keys using the results obtained after measuring the observables $W \otimes W$ and $Z \otimes Z$.
End of explanation
abKeyMismatches = 0 # number of mismatching bits in the keys of Alice and Bob
eaKeyMismatches = 0 # number of mismatching bits in the keys of Eve and Alice
ebKeyMismatches = 0 # number of mismatching bits in the keys of Eve and Bob
for j in range(keyLength):
if aliceKey[j] != bobKey[j]:
abKeyMismatches += 1
if eveKeys[j][0] != aliceKey[j]:
eaKeyMismatches += 1
if eveKeys[j][1] != bobKey[j]:
ebKeyMismatches += 1
Explanation: To find out the number of mismatching bits in the keys of Alice, Bob and Eve we compare the lists aliceKey, bobKey and eveKeys.
End of explanation
eaKnowledge = (keyLength - eaKeyMismatches)/keyLength # Eve's knowledge of Alice's key
ebKnowledge = (keyLength - ebKeyMismatches)/keyLength # Eve's knowledge of Bob's key
Explanation: It is also good to know what percentage of the keys is known to Eve.
End of explanation
corr = chsh_corr(result)
Explanation: Using the chsh_corr function defined above we calculate the CHSH correlation value.
End of explanation
# CHSH inequality test
print('CHSH correlation value: ' + str(round(corr, 3)) + '\n')
# Keys
print('Length of the key: ' + str(keyLength))
print('Number of mismatching bits: ' + str(abKeyMismatches) + '\n')
print('Eve\'s knowledge of Alice\'s key: ' + str(round(eaKnowledge * 100, 2)) + ' %')
print('Eve\'s knowledge of Bob\'s key: ' + str(round(ebKnowledge * 100, 2)) + ' %')
Explanation: And now we print all the results.
End of explanation |
6,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial em Tensorflow
Step1: Vamos usar um dataset bem simples
Step2: Antes de montar o modelo vamos definir todos os Hyper parametros
Step3: Graph e Session são duas classes centrais no tensorflow.
Nós montamos as operações na classe Graph (o grafo de computação) e executamos essas operações dentro de uma Session.
Sempre existe um grafo default.
Quando usamos tf.Graph.as_default sobrescrevemos o grafo default pelo grafo definido no contexto.
Um modo interativo de se rodar um grafo é por meio da tf.InteractiveSession()
Vamos definir a regressão linear no grafo default
Step4: Como temos poucos dados (42 observações) podemos treinar o modelo passando por cada uma das observações uma a uma.
Step5: Treinado o modelo, temos os novos valores para $w$ e $b$.
Assim podemos calcular o $R^2$ e plotar a reta resultante
Step16: O código acima pode ser melhorado.
Podemos encapsular os hyper parametros numa classe. Assim como o modelo de regressão linear.
Step18: Nesse modelo definimos dois tipos de função de erro. Uma delas é chamada de Huber loss.
Relembrando a função
Step19: Tensorboard é uma ótima ferramenta de visualização.
Podemos ver o grafo de computação e ver certas metrícas ao longo do treinamento | Python Code:
import numpy as np
import tensorflow as tf
import pandas as pd
import util
%matplotlib inline
Explanation: Tutorial em Tensorflow: Regressão Linear
Nesse tutorial vamos montar um modelo de regressão linear usando a biblioteca Tensorflow.
End of explanation
# Podemos olhar o começo dessa tabela
df = pd.read_excel('data/fire_theft.xls')
df.head()
# E também podemos ver algumas estatísticas descritivas básicas
df.describe()
#transformando o dataset numa matrix
data = df.as_matrix()
data = data.astype('float32')
Explanation: Vamos usar um dataset bem simples: Fire and Theft in Chicago
As obervações são pares $(X,Y)$ em que
$X =$ incêncios por 1000 moradías
$Y =$ roubos por 1000 habitantes
referentes a cidade de Chicago.
End of explanation
num_samples = data.shape[0]
learning_rate=0.001
num_epochs=101
show_epoch=10
Explanation: Antes de montar o modelo vamos definir todos os Hyper parametros
End of explanation
session = tf.InteractiveSession()
# criando os placeholders para o par (X, Y)
tf_number_fire = tf.placeholder(tf.float32, shape=[], name="X")
tf_number_theft = tf.placeholder(tf.float32, shape=[], name="Y")
# definindo os pesos do modelo. Ambos são inicializados com 0.
tf_weight = tf.get_variable("w", dtype=tf.float32, initializer=0.)
tf_bias = tf.get_variable("b", dtype=tf.float32, initializer=0.)
# criando a predição do modelo: prediction = w*x +b
tf_prediction = (tf_weight * tf_number_fire) + tf_bias
# Definindo a função de custo como
# o erro quadrático médio: (preiction -Y)^2
tf_loss = tf.square(tf_prediction - tf_number_theft)
#Definindo o otimizador para fazer o SGD
tf_opt = tf.train.GradientDescentOptimizer(learning_rate)
tf_optimizer = tf_opt.minimize(tf_loss)
Explanation: Graph e Session são duas classes centrais no tensorflow.
Nós montamos as operações na classe Graph (o grafo de computação) e executamos essas operações dentro de uma Session.
Sempre existe um grafo default.
Quando usamos tf.Graph.as_default sobrescrevemos o grafo default pelo grafo definido no contexto.
Um modo interativo de se rodar um grafo é por meio da tf.InteractiveSession()
Vamos definir a regressão linear no grafo default
End of explanation
print('Start training\n')
session.run(tf.global_variables_initializer())
step = 0
for i in range(num_epochs):
total_loss = 0
for x, y in data:
feed_dict = {tf_number_fire: x,
tf_number_theft: y}
_,loss,w,b = session.run([tf_optimizer,tf_loss, tf_weight, tf_bias], feed_dict=feed_dict)
total_loss += loss
if i % show_epoch == 0:
print("\nEpoch {0}: {1}".format(i, total_loss/num_samples))
Explanation: Como temos poucos dados (42 observações) podemos treinar o modelo passando por cada uma das observações uma a uma.
End of explanation
r2 = util.r_squared(data,w,b)
util.plot_line(data, w, b, "Linear Regression with MSE", r2)
Explanation: Treinado o modelo, temos os novos valores para $w$ e $b$.
Assim podemos calcular o $R^2$ e plotar a reta resultante
End of explanation
class Config():
Class to hold all model hyperparams.
:type learning_rate: float
:type delta: float
:type huber: boolean
:type num_epochs: int
:type show_epoch: int
:type log_path: None or str
def __init__(self,
learning_rate=0.001,
delta=1.0,
huber=False,
num_epochs=101,
show_epoch=10,
log_path=None):
self.learning_rate = learning_rate
self.delta = delta
self.huber = huber
self.num_epochs = num_epochs
self.show_epoch = show_epoch
if log_path is None:
self.log_path = util.get_log_path()
else:
self.log_path = log_path
class LinearRegression:
Class for the linear regression model
:type config: Config
def __init__(self, config):
self.learning_rate = config.learning_rate
self.delta = config.delta
self.huber = config.huber
self.log_path = config.log_path
self.build_graph()
def create_placeholders(self):
Method for creating placeholders for input X (number of fire)
and label Y (number of theft).
self.number_fire = tf.placeholder(tf.float32, shape=[], name="X")
self.number_theft = tf.placeholder(tf.float32, shape=[], name="Y")
def create_variables(self):
Method for creating weight and bias variables.
with tf.name_scope("Weights"):
self.weight = tf.get_variable("w", dtype=tf.float32, initializer=0.)
self.bias = tf.get_variable("b", dtype=tf.float32, initializer=0.)
def create_summaries(self):
Method to create the histogram summaries for all variables
tf.summary.histogram('weights_summ', self.weight)
tf.summary.histogram('bias_summ', self.bias)
def create_prediction(self):
Method for creating the linear regression prediction.
with tf.name_scope("linear-model"):
self.prediction = (self.number_fire * self.weight) + self.bias
def create_MSE_loss(self):
Method for creating the mean square error loss function.
with tf.name_scope("loss"):
self.loss = tf.square(self.prediction - self.number_theft)
tf.summary.scalar("loss", self.loss)
def create_Huber_loss(self):
Method for creating the Huber loss function.
with tf.name_scope("loss"):
residual = tf.abs(self.prediction - self.number_theft)
condition = tf.less(residual, self.delta)
small_residual = 0.5 * tf.square(residual)
large_residual = self.delta * residual - 0.5 * tf.square(self.delta)
self.loss = tf.where(condition, small_residual, large_residual)
tf.summary.scalar("loss", self.loss)
def create_optimizer(self):
Method to create the optimizer of the graph
with tf.name_scope("optimizer"):
opt = tf.train.GradientDescentOptimizer(self.learning_rate)
self.optimizer = opt.minimize(self.loss)
def build_graph(self):
Method to build the computation graph in tensorflow
self.graph = tf.Graph()
with self.graph.as_default():
self.create_placeholders()
self.create_variables()
self.create_summaries()
self.create_prediction()
if self.huber:
self.create_Huber_loss()
else:
self.create_MSE_loss()
self.create_optimizer()
Explanation: The code above can be improved.
We can encapsulate the hyperparameters in a class, and likewise the linear regression model.
End of explanation
def run_training(model, config, data, verbose=True):
Function to train the linear regression model
:type model: LinearRegression
:type config: Config
:type data: np array
:type verbose: boolean
:rtype total_loss: float
:rtype w: float
:rtype b: float
num_samples = data.shape[0]
num_epochs = config.num_epochs
show_epoch = config.show_epoch
log_path = model.log_path
with tf.Session(graph=model.graph) as sess:
if verbose:
print('Start training\n')
# functions to write the tensorboard logs
summary_writer = tf.summary.FileWriter(log_path,sess.graph)
all_summaries = tf.summary.merge_all()
# initializing variables
tf.global_variables_initializer().run()
step = 0
for i in range(num_epochs): # run num_epochs epochs
total_loss = 0
for x, y in data:
step += 1
feed_dict = {model.number_fire: x,
model.number_theft: y}
_,loss,summary,w,b = sess.run([model.optimizer, # run optimizer to perform minimization
model.loss,
all_summaries,
model.weight,
model.bias], feed_dict=feed_dict)
#writing the log
summary_writer.add_summary(summary,step)
summary_writer.flush()
total_loss += loss
if i % show_epoch == 0:
print("\nEpoch {0}: {1}".format(i, total_loss/num_samples))
if verbose:
print("\n========= For TensorBoard visualization type ===========")
print("\ntensorboard --logdir={}\n".format(log_path))
return total_loss,w,b
my_config = Config()
my_model = LinearRegression(my_config)
l,w,b = run_training(my_model, my_config, data)
Explanation: In this model we define two types of loss function. One of them is called the Huber loss.
Recalling the function:
$L_{\delta}(y,f(x)) = \frac{1}{2}(y-f(x))^{2}$ if $|y-f(x)|\leq \delta$
$L_{\delta}(y,f(x)) = \delta|y-f(x)| -\frac{1}{2}\delta^{2}$ otherwise
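A tiny plain-Python version of this piecewise definition, for illustration only (the model itself uses the TensorFlow implementation in create_Huber_loss):
def huber_sketch(y, y_pred, delta=1.0):
    residual = abs(y - y_pred)
    if residual <= delta:
        return 0.5 * residual ** 2 # quadratic branch for small residuals
    return delta * residual - 0.5 * delta ** 2 # linear branch for large residuals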
End of explanation
# !tensorboard --logdir=
Explanation: Tensorboard is a great visualization tool.
We can view the computation graph and track certain metrics during training
End of explanation |
6,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code Testing and CI
Version 0.1
The notebook contains problems about code testing and continuous integration.
E Tollerud (STScI)
Problem 1
Step1: 1b
Step2: 1d
Step3: 1e
Step4: 1f
Step5: 1g
Step6: This should yield a report, which you can use to decide if you need to add more tests to acheive complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
Problem 2
Step7: Solution (one of many...)
Step8: 2b
This test has an intentional bug... but depending how you right the test you might not catch it... Use unit tests to find it! (and then fix it...)
Step9: Solution (one of many...)
Step11: 2c
There are (at least) two significant bugs in this code (one fairly apparent, one much more subtle). Try to catch them both, and write a regression test that covers those cases once you've found them.
One note about this function
Step12: Solution (one of many...)
Step14: 2d
Hint
Step15: Solution (one of many...)
Step16: Problem 3
Step17: 3b
Step18: Be sure to commit and push this to github before proceeding | Python Code:
!conda install pytest pytest-cov
Explanation: Code Testing and CI
Version 0.1
The notebook contains problems about code testing and continuous integration.
E Tollerud (STScI)
Problem 1: Set up py.test in your repo
In this problem we'll aim to get the py.test testing framework up and running in the code repository you set up in the last set of problems. We can then use it to collect and run tests of the code.
1a: Ensure py.test is installed
Of course py.test must actually be installed before you can use it. The commands below should work for the Anaconda Python Distribution, but if you have some other Python installation you'll want to install pytest (and its coverage plugin) as directed in the install instructions for py.test.
End of explanation
!mkdir #complete
!touch #complete
%%file <yourpackage>/tests/test_something.py
def test_something_func():
assert #complete
Explanation: 1b: Ensure your repo has code suitable for unit tests
Depending on what your code actually does, you might need to modify it to actually perform something testable. For example, if all it does is print something, you might find it difficult to write an effective unit test. Try adding a function that actually performs some operation and returns something different depending on various inputs. That tends to be the easiest function to unit-test: one with a clear "right" answer in certain situations.
Also be sure you have cded to the root of the repo for pytest to operate correctly.
1c: Add a test file with a test function
The test must be part of the package and follow the convention that the file and the function begin with test to get picked up by the test collection machinery. Inside the test function, you'll need some code that fails if the test condition fails. The easiest way to do this is with an assert statement, which raises an error if its first argument is False.
Hint: remember that to be a valid python package, a directory must have an __init__.py
End of explanation
from <yourpackage>.tests import test_something
test_something.test_something_func()
Explanation: 1d: Run the test directly
While this is not how you'd ordinarily run the tests, it's instructive to first try to execute the test directly, without using any fancy test framework. If your test function just runs, all is good. If you get an exception, the test failed (which in this case might be good).
Hint: you may need to use reload or just re-start your notebook kernel to get the cell below to recognize the changes.
End of explanation
!py.test
Explanation: 1e: Run the tests with py.test
Once you have an example test, you can try invoking py.test, which is how you should run the tests in the future. This should yield a report that shows a dot for each test. If all you see are dots, the tests ran sucessfully. But if there's a failure, you'll see the error, and the traceback showing where the error happened.
End of explanation
!py.test
Explanation: 1f: Make the test fail (or succeed...)
If your test failed when you ran it, you should now try to fix the test (or the code...) to make it work. Try running
(Modify your test to fail if it succeeded before, or vice versa)
End of explanation
!py.test --cov=<yourproject> tests/ #complete
Explanation: 1g: Check coverage
The coverage plugin we installed will let you check which lines of your code are actually run by the testing suite.
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
# `math` here is for *scalar* math... normally you'd use numpy but this makes it a bit simpler to debug
import math
inf = float('inf') # this is a quick-and-easy way to get the "infinity" value
def function_a(angle=180):
anglerad = math.radians(angle)
return math.sin(anglerad/2)/math.sin(anglerad)
Explanation: This should yield a report, which you can use to decide if you need to add more tests to achieve complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
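For example, pytest-cov can print the missing line numbers directly (adjust the package name to yours):
!py.test --cov=<yourproject> --cov-report=term-missing tests/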
Problem 2: Implement some unit tests
The sub-problems below each contain different unit testing complications. Place the code from the snippets in your repository (either using an editor or the %%file trick), and write tests to ensure the correctness of the functions. Try to achieve 100% coverage for all of them (especially to catch some hidden bugs!).
Also, note that some of these examples are not really practical - that is, you wouldn't want to do this in real code because there's better ways to do it. But because of that, they are good examples of where something can go subtly wrong... and therefore where you want to make tests!
2a
When you have a function with a default, it's wise to test both the with-default call (function_a()), and when you give a value (function_a(1.2))
Hint: Beware of numbers that come close to 0... write your tests to accommodate floating-point errors!
End of explanation
def test_default_bad():
# this will fail, although it *seems* like it should succeed... the sin function has rounding errors
assert function_a() == inf
def test_default_good():
assert function_a() > 1e10
def test_otherval_bad():
# again it seems like it should succed, but rounding errors make it fail
assert function_a(90) == math.sqrt(2)/2
def test_otherval_good():
assert abs(function_a(90) - math.sqrt(2)/2) < 1e-10
Explanation: Solution (one of many...)
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
def function_b(value):
if value < 0:
return value - 1
else:
value2 = subfunction_b(value + 1)
return value + value2
def subfunction_b(inp):
vals_to_accum = []
for i in range(10):
vals_to_accum.append(inp ** (i/10))
if vals_to_accum[-1] > 2:
vals.append(100)
# really you would use numpy to do this kind of number-crunching... but we're doing this for the sake of example right now
return sum(vals_to_accum)
Explanation: 2b
This test has an intentional bug... but depending on how you write the test you might not catch it... Use unit tests to find it! (and then fix it...)
End of explanation
def test_neg():
assert function_b(-10) == -11
def test_zero():
assert function_b(0) == 10
def test_pos_lt1():
res = function_b(.5)
assert res > 10
assert res < 100
def test_pos_gt1():
res = function_b(1.5)
assert res > 100
# this test reveals that `subfunction_b()` has a ``vals`` where it should have a ``vals_to_accum``
Explanation: Solution (one of many...)
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import math
# know that to not have to worry about this, you should just use `astropy.coordinates`.
def angle_to_sexigesimal(angle_in_degrees, decimals=3):
Convert the given angle to a sexigesimal string of hours of RA.
Parameters
----------
angle_in_degrees : float
A scalar angle, expressed in degrees
Returns
-------
hms_str : str
The sexigesimal string giving the hours, minutes, and seconds of RA for the given `angle_in_degrees`
if math.floor(decimals) != decimals:
raise ValueError('decimals should be an integer!')
hours_num = angle_in_degrees*24/180
hours = math.floor(hours_num)
min_num = (hours_num - hours)*60
minutes = math.floor(min_num)
seconds = (min_num - minutes)*60
format_string = '{}:{}:{:.' + str(decimals) + 'f}'
return format_string.format(hours, minutes, seconds)
Explanation: 2c
There are (at least) two significant bugs in this code (one fairly apparent, one much more subtle). Try to catch them both, and write a regression test that covers those cases once you've found them.
One note about this function: in real code you're probably better off just using the Angle object from astropy.coordinates. But this example demonstrates one of the reasons why that was created, as it's very easy to write a buggy version of this code.
Hint: you might find it useful to use astropy.coordinates.Angle to create test cases...
End of explanation
def test_decimals():
assert angle_to_sexigesimal(0) == '0:0:0.000'
assert angle_to_sexigesimal(0, decimals=5) == '0:0:0.00000'
def test_qtrs():
assert angle_to_sexigesimal(90, decimals=0) == '6:0:0'
assert angle_to_sexigesimal(180, decimals=0) == '12:0:0'
assert angle_to_sexigesimal(270, decimals=0) == '18:0:0'
assert angle_to_sexigesimal(360, decimals=0) == '24:0:0'
# this reveals the major bug that the 180 at the top should be 360
def test_350():
assert angle_to_sexigesimal(350, decimals=0) == '23:20:00'
# this fails, revealing that sometimes the values round
def test_neg():
assert angle_to_sexigesimal(-7.5, decimals=0) == '-0:30:0'
assert angle_to_sexigesimal(-20, decimals=0) == angle_to_sexigesimal(340, decimals=0)
# these fail, revealing a "debatable" bug: that negative degrees give negative RAs that are
# nonsense. You could always tell users not to give negative values... but users, particularly
# future you, probably won't listen.
def test_neg_decimals():
import pytest
with pytest.raises(ValueError):
angle_to_sexigesimal(10, decimals=-2)
Explanation: Solution (one of many...)
End of explanation
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import numpy as np
def function_d(array1=np.arange(10)*2, array2=np.arange(10), operation='-'):
Makes a matrix where the [i,j]th element is array1[i] <operation> array2[j]
if operation == '+':
return array1[:, np.newaxis] + array2
elif operation == '-':
return array1[:, np.newaxis] - array2
elif operation == '*':
return array1[:, np.newaxis] * array2
elif operation == '/':
return array1[:, np.newaxis] / array2
else:
raise ValueError('Unrecognized operation "{}"'.format(operation))
Explanation: 2d
Hint: numpy has some useful functions in numpy.testing for comparing arrays.
End of explanation
def test_minus():
array1 = np.arange(10)*2
array2 = np.arange(10)
func_mat = function_d(array1, array2, operation='-')
for i, val1 in enumerate(array1):
for j, val2 in enumerate(array2):
assert func_mat[i, j] == val1 - val2
def test_plus():
array1 = np.arange(10)*2
array2 = np.arange(10)
func_mat = function_d(array1, array2, operation='+')
for i, val1 in enumerate(array1):
for j, val2 in enumerate(array2):
assert func_mat[i, j] == val1 + val2
def test_times():
array1 = np.arange(10)*2
array2 = np.arange(10)
func_mat = function_d(array1, array2, operation='*')
for i, val1 in enumerate(array1):
for j, val2 in enumerate(array2):
assert func_mat[i, j] == val1 * val2
def test_div():
array1 = np.arange(10)*2
array2 = np.arange(10)
func_mat = function_d(array1, array2, operation='/')
for i, val1 in enumerate(array1):
for j, val2 in enumerate(array2):
assert func_mat[i, j] == val1 / val2
#GOTCHA! This doesn't work because of floating point differences between numpy and python scalars
# This is where that numpy stuff is handy - see the next function
def test_div_npt():
from numpy import testing as npt
array1 = np.arange(10)*2
array2 = np.arange(10)
func_mat = function_d(array1, array2, operation='/')
test_mat = np.empty(((len(array1), len(array2))))
for i, val1 in enumerate(array1):
for j, val2 in enumerate(array2):
test_mat[i, j] = val1 / val2
npt.assert_array_almost_equal(func_mat, test_mat)
Explanation: Solution (one of many...)
End of explanation
!py.test
Explanation: Problem 3: Set up travis to run your tests whenever a change is made
Now that you have a testing suite set up, you can try to turn on a continuous integration service to constantly check that any update you might send doesn't create a bug. We will the Travis-CI service for this purpose, as it has one of the lowest barriers to entry from Github.
3a: Ensure the test suite is passing locally
Seems obvious, but it's easy to forget to check this and only later realize that all the trouble you thought you had setting up the CI service was because the tests were actually broken...
End of explanation
%%file .travis.yml
language: python
python:
- "3.6"
# command to install dependencies
#install: "pip install numpy" #uncomment this if your code depends on numpy or similar
# command to run tests
script: pytest
Explanation: 3b: Set up an account on travis
This turns out to be quite convenient. If you go to the Travis web site, you'll see a "Sign in with GitHub" button. You'll need to authorize Travis, but once you've done so it will automatically log you in and know which repositories are yours.
3c: Create a minimal .travis.yml file.
Before we can activate travis on our repo, we need to tell travis a variety of metadata about what's in the repository and how to run it. The template below should be sufficient for the simplest needs.
End of explanation
!git #complete
Explanation: Be sure to commit and push this to github before proceeding:
End of explanation |
6,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Classifying CIFAR-10 with XLA
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: We define the model, adapted from the Keras CIFAR-10 example
Step3: We train the model using the
RMSprop
optimizer
Step4: Now let's train the model again, using the XLA compiler.
To enable the compiler in the middle of the application, we need to reset the Keras session. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install tensorflow_datasets
import tensorflow as tf
import tensorflow_datasets as tfds
# Check that GPU is available: cf. https://colab.research.google.com/notebooks/gpu.ipynb
assert(tf.test.gpu_device_name())
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False) # Start with XLA disabled.
def load_data():
result = tfds.load('cifar10', batch_size = -1)
(x_train, y_train) = result['train']['image'],result['train']['label']
(x_test, y_test) = result['test']['image'],result['test']['label']
x_train = x_train.numpy().astype('float32') / 256
x_test = x_test.numpy().astype('float32') / 256
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
return ((x_train, y_train), (x_test, y_test))
(x_train, y_train), (x_test, y_test) = load_data()
Explanation: Classifying CIFAR-10 with XLA
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/xla/tutorials/autoclustering_xla"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial trains a TensorFlow model to classify the CIFAR-10 dataset, and we compile it using XLA.
Load and normalize the dataset using the TensorFlow Datasets API:
End of explanation
def generate_model():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(32, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding='same'),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(64, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10),
tf.keras.layers.Activation('softmax')
])
model = generate_model()
Explanation: We define the model, adapted from the Keras CIFAR-10 example:
End of explanation
def compile_model(model):
opt = tf.keras.optimizers.RMSprop(learning_rate=0.0001, decay=1e-6)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = compile_model(model)
def train_model(model, x_train, y_train, x_test, y_test, epochs=25):
model.fit(x_train, y_train, batch_size=256, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
def warmup(model, x_train, y_train, x_test, y_test):
# Warm up the JIT, we do not wish to measure the compilation time.
initial_weights = model.get_weights()
train_model(model, x_train, y_train, x_test, y_test, epochs=1)
model.set_weights(initial_weights)
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
Explanation: We train the model using the
RMSprop
optimizer:
End of explanation
# We need to clear the session to enable JIT in the middle of the program.
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True) # Enable XLA.
model = compile_model(generate_model())
(x_train, y_train), (x_test, y_test) = load_data()
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
Explanation: Now let's train the model again, using the XLA compiler.
To enable the compiler in the middle of the application, we need to reset the Keras session.
End of explanation |
6,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment objects filters - the rationale
A common transformation on experiment objects are those that apply some sort of filtering of subsetting of data. A syntactic sugar API is thus provided for the various common filtering operations on
the components of the experiment objects (for example the three dataframes of the MicrobiomeExperiment object).
The rationale behind providing such syntactic sugar in the API is that working with three dataframes at the same time can be taxing.
Again, rapid analysis and economy in typing, enabling quick workflow from one step to the next, are the ultimate aspirations here, avoiding repetitive boilerplate, especially knowing that almost the same operations are required in particular downstream analyses in various typical omic experiments.
This notebook/chapter will provide various examples (currently for MicrobiomeExperiment), and can be regarded as a cookbook for various operations performed in a microbial amplicon metabarcoding experiment.
Step1: Experiment filters
A filter applied to a experiment/dataset is basically a sort of Transform. The Filter class itself inherits from Transform class.
As such, filters are applied as other transforms, using the apply and dapply methods.
The filter subpackage
From the transforms.filters subpackage, you can import the various filters
Step2: Sample Filter examples
Step3: An example of method chaining using efilter instead of filter
Step4: Taxonomy filters
Taxonomy filters allows common operations done on the taxonomy metadata of the Observations/OTUs. | Python Code:
%load_ext autoreload
%autoreload 2
#Load our data
from omicexperiment.experiment.microbiome import MicrobiomeExperiment
mapping = "example_map.tsv"
biom = "example_fungal.biom"
tax = "blast_tax_assignments.txt"
exp = MicrobiomeExperiment(biom, mapping,tax)
Explanation: Experiment objects filters - the rationale
A common transformation on experiment objects are those that apply some sort of filtering of subsetting of data. A syntactic sugar API is thus provided for the various common filtering operations on
the components of the experiment objects (for example the three dataframes of the MicrobiomeExperiment object).
The rationale behind providing such syntactic sugar in the API is that working with three dataframes at the same time can be taxing.
Again, rapid analysis and economy in typing, enabling quick workflow from one step to the next, are the ultimate aspirations here, avoiding repetitive boilerplate, especially knowing that almost the same operations are required in particular downstream analyses in various typical omic experiments.
This notebook/chapter will provide various examples (currently for MicrobiomeExperiment), and can be regarded as a cookbook for various operations performed in a microbial amplicon metabarcoding experiment.
End of explanation
exp.data_df
exp.mapping_df
Explanation: Experiment filters
A filter applied to a experiment/dataset is basically a sort of Transform. The Filter class itself inherits from Transform class.
As such, filters are applied as other transforms, using the apply and dapply methods.
The filter subpackage
From the transforms.filters subpackage, you can import the various filters:
from omicexperiment.transforms.filters import Sample
from omicexperiment.transforms.filters import Observation
from omicexperiment.transforms.filters import Taxonomy
These "filters" are also provided on the MicrobiomeExperiment object, as shortcuts. However, I am still considering a better interface to filters so I am re-considering this particular API.
Taxonomy = exp.Taxonomy
#OR
from omicexperiment.transforms.filters import Taxonomy
What are filters?
Filters are subclasses of the Filter class. Filters can be considered fairly magical, as they utilize operator overloading in an attempt to provide a shorthand API with a sugary syntax for applying various filtering on the experiment dataframe objects.
The three filters
* Taxonomy filter: apply various operations on the taxonomy
* Sample filter: apply various operations on samples/sample metadata
* Observation filter: apply various operations on observations (i.e. OTUs in a microbiome context)
The only way to get the gist of how these work is perhaps to view the code examples.
End of explanation
Sample = exp.Sample
#OR
from omicexperiment.transforms.filters import Sample
#1. the count filter
exp.dapply(Sample.count > 90000) #note sample0 was filtered off as its count is = 86870
#note the use of the operator overloading here so that the expression equals to
#a new Filter instance that holds this information
#if you have worked with SQLAlchemy ORM, a very similar technique is used by sqlalchemy filters
(Sample.count > 90000)
#the count filter actually implements other operators as well (due to the FlexibleOperator mixin)
#here we try the __eq__ (==) operator, the cell above we tried the > operator
exp.dapply(Sample.count == 100428)
#it only selected the sample with exact count of 100428
#2. the att (attribute) filter
# this filters on the "attributes" (i.e. metadata) of the samples
# present in the mapping dataframe
# this uses an attribute access (dotted) syntax
#here we only select samples in the 'control' group
exp.dapply(Sample.att.group == 'control') #only one sample in this group
(Sample.att.group == 'control')
#select only samples of asthmatic patients
exp.dapply(Sample.att.asthma == 1) #only three asthma-positive samples
#another alias for the att filter is the c attribute on the Sample Filter
#(c is short for "column", as per sqlalchemy convention)
exp.dapply(Sample.c.asthma == 1) #only three asthma-positive samples
#some columns may not be legal python attribute names,
#so for these we allow the [] (__getitem__) syntax
exp.dapply(Sample.att['#SampleID'] == 'sample0')
Explanation: Sample Filter examples
End of explanation
exp.apply(Sample.c.asthma == 1).dapply(Sample.count > 100000) #two samples
# the Sample groupby Transform
#the aggregate function here is the mean,
exp.dapply(Sample.groupby("group"))
# we can also change the aggfunc from mean to sum
import numpy as np
exp.dapply(Sample.groupby("group", aggfunc=np.sum))
Explanation: An example of method chaining using efilter instead of filter
End of explanation
Taxonomy = exp.Taxonomy #OR from omicexperiment.transforms.filters import Taxonomy
exp.taxonomy_df
'''
We noticed above that one of the assignments was identified at a highest
resolution only at the family level.
We can utilize the taxonomy attribute filters to remove these OTUs that
were classified at a lower resolution than a genus.
'''
genus_or_higher = exp.apply(Taxonomy.rank_resolution >= 'genus')
genus_or_higher.data_df
#The TaxonomyGroupBy Transform
genus_or_higher.apply(Taxonomy.groupby('genus')).data_df
#the Taxonomy.groupby is a shortcut for the TaxonomyGroupBy transform in the transforms.taxonomy module.
#Another example of the various Taxonomy attribute filters
exp.dapply(Taxonomy.genus == 'g__Aspergillus')
#only three otus had a genus assigned as 'g__Aspergillus"
Explanation: Taxonomy filters
Taxonomy filters allows common operations done on the taxonomy metadata of the Observations/OTUs.
End of explanation |
6,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From https
Step1: There is a clear issue here that $y=x^2$ loses the negative when applied so that the result is peaks at both -2 and 2.
Try doing this again with better constraints on the model (x>=0) | Python Code:
xtrue = 2 # this value is unknown in the real application
x = pymc.rnormal(0, 0.01, size=10000) # initial guess
for i in range(5):
X = pymc.Normal('X', x.mean(), 1./x.var())
Y = X*X # f(x) = x*x
OBS = pymc.Normal('OBS', Y, 0.1, value=xtrue*xtrue+pymc.rnormal(0,1), observed=True)
model = pymc.Model([X,Y,OBS])
mcmc = pymc.MCMC(model)
mcmc.sample(10000)
x = mcmc.trace('X')[:] # posterior samples
pymc.Matplot.plot(mcmc)
Explanation: From https://stackoverflow.com/questions/17409324/solving-inverse-problems-with-pymc
Suppose we're given a prior on X (e.g. X ~ Gaussian) and a forward operator y = f(x). Suppose further we have observed y by means of an experiment and that this experiment can be repeated indefinitely. The output Y is assumed to be Gaussian (Y ~ Gaussian) or noise-free (Y ~ Delta(observation)).
How to consistently update our subjective degree of knowledge about X given the observations? I've tried the following model with PyMC, but it seems I'm missing something:
from pymc import *
xtrue = 2 # this value is unknown in the real application
x = rnormal(0, 0.01, size=10000) # initial guess
for i in range(5):
X = Normal('X', x.mean(), 1./x.var())
Y = X*X # f(x) = x*x
OBS = Normal('OBS', Y, 0.1, value=xtrue*xtrue+rnormal(0,1), observed=True)
model = Model([X,Y,OBS])
mcmc = MCMC(model)
mcmc.sample(10000)
x = mcmc.trace('X')[:] # posterior samples
The posterior is not converging to xtrue.
End of explanation
xtrue = 2 # this value is unknown in the real application
x = pymc.rpoisson(1, size=10000) # initial guess
for i in range(5):
X = pymc.Normal('X', x.mean(), 1./x.var())
Y = X*X # f(x) = x*x
OBS = pymc.Normal('OBS', Y, 0.1, value=xtrue*xtrue+pymc.rnormal(0,1), observed=True)
model = pymc.Model([X,Y,OBS])
mcmc = pymc.MCMC(model)
mcmc.sample(10000)
x = mcmc.trace('X')[:] # posterior samples
pymc.Matplot.plot(mcmc)
Explanation: There is a clear issue here that $y=x^2$ loses the negative when applied so that the result is peaks at both -2 and 2.
Try doing this again with better constraints on the model (x>=0)
End of explanation |
6,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explore ambivalent is_harassment_or_attack labels
It is incorrect to give a revision a label an attack label and a not attack label. Lets see how often this occurs and who makes this error.
Step1: Ok, there are a few users who do this a lot.
Step2: It looks like 90% of ambivalent labels come from 200 users. We might consider dropping all annotations from these users. We can also check what fraction of a users labels are ambivalent and select a threshold based on that.
Step3: Consider dropping all annotations from users who score 1 | Python Code:
df['is_harassment_or_attack'].value_counts(dropna=False)
def attack_and_not_attack(s):
return 'not_attack' in s and s!= 'not_attack'
df[df['is_harassment_or_attack'].apply(attack_and_not_attack)]['_worker_id'].value_counts().head()
Explanation: Explore ambivalent is_harassment_or_attack labels
It is incorrect to give a revision a label an attack label and a not attack label. Lets see how often this occurs and who makes this error.
End of explanation
y = df[df['is_harassment_or_attack'].apply(attack_and_not_attack)]['_worker_id'].value_counts().cumsum()
y = y/y.iloc[-1]
x = list(range(len(y)))
plt.plot(x, y)
plt.xlabel('N')
plt.ylabel('Fraction of ambivalent labels coming from N users')
Explanation: Ok, there are a few users who do this a lot.
End of explanation
counts = df[df['is_harassment_or_attack'].apply(attack_and_not_attack)]['_worker_id'].value_counts()
fraction = (counts / df['_worker_id'].value_counts()).dropna()
d_ambi = pd.DataFrame({'counts': counts, 'fraction':fraction}).sort_values('fraction', ascending=False)
d_ambi['N'] = 1
d_ambi = d_ambi.groupby('fraction', as_index = False).sum()
d_ambi = d_ambi.sort_values('fraction', ascending = False)
d_ambi['cum_counts'] = d_ambi['counts'].cumsum() / d_ambi['counts'].sum()
d_ambi['cum_N'] = d_ambi['N'].cumsum() / d_ambi['N'].sum()
d_ambi['fraction'] = 1 -d_ambi['fraction']
d_ambi.head()
plt.plot(d_ambi['fraction'], d_ambi['cum_counts'])
plt.plot(d_ambi['fraction'], d_ambi['cum_N'])
plt.plot(d_ambi['cum_N'], d_ambi['cum_counts'])
Explanation: It looks like 90% of ambivalent labels come from 200 users. We might consider dropping all annotations from these users. We can also check what fraction of a users labels are ambivalent and select a threshold based on that.
End of explanation
col = 'recipient'
pl = plurality(df[col])
df['plurality'] = pl
df['deviant'] = df[col] != df['plurality']
deviance_scores = df.groupby('_worker_id')['deviant'].mean()
deviance_scores.sort_values(ascending = False).hist(bins=100)
col = 'attack'
pl = plurality(df[col])
df['plurality'] = pl
df['deviant'] = df[col] != df['plurality']
deviance_scores = df.groupby('_worker_id')['deviant'].mean()
deviance_scores.sort_values(ascending = False).hist(bins=100)
col = 'aggression'
pl = plurality(df[col])
df['plurality'] = pl
df['deviant'] = df[col] != df['plurality']
deviance_scores = df.groupby('_worker_id')['deviant'].mean()
deviance_scores.sort_values(ascending = False).hist(bins=100)
Explanation: Consider dropping all annotations from users who give ambivalent labels to 1 in 5 or more of their comments.
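A hypothetical sketch of that threshold-based filter, reusing the fraction series computed above (0.2 corresponds to 1 ambivalent label in 5):
bad_workers = fraction[fraction >= 0.2].index
df_filtered = df[~df['_worker_id'].isin(bad_workers)]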
Explore consistently deviant users
End of explanation |
6,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing Raw Text
Accessing Text from the Web and from Disk
Step1: downloading Crime and Punishment
Step2: number of characters
Step3: Create a Text object from tokens
Step4: find collocations (words that frequently appear together)
Step5: Project Gutenberg is a collocation for this text because it is included as a header and possibly footer for the raw text file
find start and end manually using find
Step6: reverse find using rfind
Step7: Dealing with HTML
Get an "urban legend" article called Blondes to die out in 200 years -- from the BBC
Step8: find start and end indices of the content (manually) and create a Text object
Step9: get concordance of gene -- shows occurrences of the word gene
Step10: skipping these sections...
Processing Search Engine Results
Processing RSS Feeds
Reading Local Files
Strings
Step11: Using Python string methods and re with Unicode characters
Step12: Can use Unicode strings with NLTK tokenizers
Step13: Regular Expressions for Detecting Word Patterns
skipping this section
cheatsheet
Step14: Frequencies for sequences of 2+ vowels in the text
Step15: Doing More with Word Pieces
Remove internal vowels from words
Step16: Extract consonant-vowel sequences from text
Step17: create an index such that
Step18: Finding Word Stems
one simple approach that just removes suffixes
Step20: alternative using re module...
Step21: Searching Tokenized Text
"<a> <man>" finds all instances of a man in the text
Step23: Normalizing Text
Step24: Stemmers
"off-the-shelf" stemmers included in NLTK
* Porter
* Lancaster
Step25: Porter stemmer correctly handled lying -> lie while Lancaster stemmer did not
Defining a custom Text class that uses the Porter Stemmer and can generate concordance for a text using word stems
Step26: Lemmatization
WordNet lemmatizer only removes affixes for words in its dictionary
dictionary lookup process is much slower than Porter stemmer
Step28: Regular Expressions for Tokenizing Text
Simple Approaches to Tokenization
Step29: split on whitespace
Step30: re offers \w (word characters) and \W (all characters except letters, digits, _ )
split on nonword characters
Step31: exclude empty strings...
Step32: allow internal hyphens and apostrophes in words
Step33: NLTK's Regular Expression Tokenizer
nltk.regexp_tokenize() is similar to re.findall() but more efficient -- don't need to treat parentheses as a special case
Step34: (?x) is a verbose flag -- strips out embedded whitespace and comments
Further Issues with Tokenization
Important to have a "gold standard" for tokenization to compare performance of a custom tokenizer...
NLTK Corpus includes Penn Treebank corpus, tokenized and raw text, for this purpose
Step35: Segmenting a stream of characters into sentences | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import nltk
import re
import pprint
from nltk import word_tokenize
Explanation: Processing Raw Text
Accessing Text from the Web and from Disk
End of explanation
from urllib import request
url = 'http://www.gutenberg.org/files/2554/2554.txt'
response = request.urlopen(url)
raw = response.read().decode('utf8')
type(raw)
Explanation: downloading Crime and Punishment
End of explanation
len(raw)
raw[:75]
tokens = word_tokenize(raw)
type(tokens)
len(tokens)
tokens[:10]
Explanation: number of characters:
End of explanation
text = nltk.Text(tokens)
type(text)
text[1024:1062]
Explanation: Create a Text object from tokens
End of explanation
text.collocations()
Explanation: find collocations (words that frequently appear together)
End of explanation
raw.find('PART I')
Explanation: Project Gutenberg is a collocation for this text because it is included as a header and possibly footer for the raw text file
find start and end manually using find
End of explanation
raw.rfind("End of Project Gutenberg's Crime")
raw = raw[5338:1157746] # slightly different from NLTK Book value
raw.find("PART I")
Explanation: reverse find using rfind
End of explanation
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = request.urlopen(url).read().decode('utf8')
html[:60]
type(html)
from bs4 import BeautifulSoup
raw = BeautifulSoup(html, 'html.parser').get_text()
tokens = word_tokenize(raw)
tokens[:50]
Explanation: Dealing with HTML
Get an "urban legend" article called Blondes to die out in 200 years -- from the BBC
End of explanation
tokens = tokens[110:390]
text = nltk.Text(tokens)
text
Explanation: find start and end indices of the content (manually) and create a Text object
End of explanation
text.concordance('gene')
Explanation: get concordance of gene -- shows occurrences of the word gene
End of explanation
path = nltk.data.find('corpora/unicode_samples/polish-lat2.txt')
path
with open(path, encoding='latin2') as f:
for line in f:
line_strip = line.strip()
print(line_strip)
with open(path, encoding='latin2') as f:
for line in f:
line_strip = line.strip()
print(line_strip.encode('unicode_escape'))
import unicodedata
with open(path, encoding='latin2') as f:
lines = f.readlines()
line = lines[2]
print(line.encode('unicode_escape'))
for c in line:
if ord(c) > 127:
print('{} U+{:04x} {}'.format(c.encode('utf8'), ord(c), unicodedata.name(c)))
for c in line:
if ord(c) > 127:
print('{} U+{:04x} {}'.format(c, ord(c), unicodedata.name(c)))
Explanation: skipping these sections...
Processing Search Engine Results
Processing RSS Feeds
Reading Local Files
Strings: Text Processing at the Lowest Level
skipping basic string and list operations
Text Processing with Unicode
End of explanation
line
line.find('zosta\u0142y')
line = line.lower()
line
line.encode('unicode_escape')
import re
m = re.search('\u015b\w*', line)
m.group()
m.group().encode('unicode_escape')
Explanation: Using Python string methods and re with Unicode characters
End of explanation
word_tokenize(line)
Explanation: Can use Unicode strings with NLTK tokenizers
End of explanation
import re
word = 'supercalifragilisticexpialidocious'
vowel_matches = re.findall(r'[aeiou]', word)
vowel_matches
len(vowel_matches)
Explanation: Regular Expressions for Detecting Word Patterns
skipping this section
cheatsheet:
Operator Behavior
. Wildcard, matches any character
^abc Matches some pattern abc at the start of a string
abc$ Matches some pattern abc at the end of a string
[abc] Matches one of a set of characters
[A-Z0-9] Matches one of a range of characters
ed|ing|s Matches one of the specified strings (disjunction)
* Zero or more of previous item, e.g. a*, [a-z]* (also known as Kleene Closure)
+ One or more of previous item, e.g. a+, [a-z]+
? Zero or one of the previous item (i.e. optional), e.g. a?, [a-z]?
{n} Exactly n repeats where n is a non-negative integer
{n,} At least n repeats
{,n} No more than n repeats
{m,n} At least m and no more than n repeats
a(b|c)+ Parentheses that indicate the scope of the operators
Useful Applications of Regular Expressions
Extracting Word Pieces
find all vowels in a word and count them
End of explanation
wsj = sorted(set(nltk.corpus.treebank.words()))
len(wsj)
fd = nltk.FreqDist(vowels for word in wsj
for vowels in re.findall(r'[aeiou]{2,}', word))
len(fd)
fd.most_common(12)
Explanation: Frequencies for sequences of 2+ vowels in the text
End of explanation
regexp = r'^[AEIOUaeiou]+|[AEIOUaeiou]+$|[^AEIOUaeiou]'
def compress(word):
pieces = re.findall(regexp, word)
return ''.join(pieces)
re.findall(regexp, 'Universal')
english_udhr = nltk.corpus.udhr.words('English-Latin1')
print(nltk.tokenwrap(compress(w) for w in english_udhr[:75]))
Explanation: Doing More with Word Pieces
Remove internal vowels from words
End of explanation
rotokas_words = nltk.corpus.toolbox.words('rotokas.dic')
cvs = [consonant_vowel for w in rotokas_words
for consonant_vowel in re.findall(r'[ptksvr][aeiou]', w)]
cvs[:25]
cfd = nltk.ConditionalFreqDist(cvs)
cfd.tabulate()
Explanation: Extract consonant-vowel sequences from text
End of explanation
cv_word_pairs = [(cv, w) for w in rotokas_words
for cv in re.findall(r'[ptksvr][aeiou]', w)]
cv_index = nltk.Index(cv_word_pairs)
type(cv_index)
cv_index['su']
cv_index['po']
Explanation: create an index such that: cv_index['su'] returns all words containing su
Use nltk.Index()
End of explanation
def stem(word):
for suffix in ['ing', 'ly', 'ed', 'ious', 'ies', 'ive', 'es', 's', 'ment']:
if word.endswith(suffix):
return word[:-len(suffix)]
return word
stem('walking')
Explanation: Finding Word Stems
one simple approach that just removes suffixes:
End of explanation
def stem_regexp(word):
regexp = r'^(.*?)(ing|ly|ed|ious|ies|ive|es|s|ment)?$'
stem, suffix = re.findall(regexp, word)[0]
return stem
raw = """DENNIS: Listen, strange women lying in ponds distributing swords
is no basis for a system of government. Supreme executive power derives from
a mandate from the masses, not from some farcical aquatic ceremony."""
tokens = word_tokenize(raw)
tokens
[stem_regexp(t) for t in tokens]
Explanation: alternative using re module...
End of explanation
from nltk.corpus import gutenberg, nps_chat
moby = nltk.Text(gutenberg.words('melville-moby_dick.txt'))
moby.findall(r"<a> (<.*>) <man>")
chat = nltk.Text(nps_chat.words())
chat.findall(r"<.*> <.*> <bro>")
chat.findall(r"<l.*>{3,}")
Explanation: Searching Tokenized Text
"<a> <man>" finds all instances of a man in the text
End of explanation
raw = """DENNIS: Listen, strange women lying in ponds distributing swords
is no basis for a system of government. Supreme executive power derives from
a mandate from the masses, not from some farcical aquatic ceremony."""
tokens = word_tokenize(raw)
tokens
Explanation: Normalizing Text
End of explanation
porter = nltk.PorterStemmer()
lancaster = nltk.LancasterStemmer()
[porter.stem(t) for t in tokens]
[lancaster.stem(t) for t in tokens]
Explanation: Stemmers
"off-the-shelf" stemmers included in NLTK
* Porter
* Lancaster
End of explanation
class IndexedText(object):
def __init__(self, stemmer, text):
self._text = text
self._stemmer = stemmer
self._index = nltk.Index((self._stem(word), i)
for (i, word) in enumerate(text))
def concordance(self, word, width=40):
key = self._stem(word)
wc = int(width/4) # words of context
for i in self._index[key]:
lcontext = ' '.join(self._text[i-wc:i])
rcontext = ' '.join(self._text[i:i+wc])
ldisplay = '{:>{width}}'.format(lcontext[-width:], width=width)
rdisplay = '{:{width}}'.format(rcontext[:width], width=width)
print(ldisplay, rdisplay)
def _stem(self, word):
return self._stemmer.stem(word).lower()
porter = nltk.PorterStemmer()
grail = nltk.corpus.webtext.words('grail.txt')
text = IndexedText(porter, grail)
text.concordance('lie')
Explanation: Porter stemmer correctly handled lying -> lie while Lancaster stemmer did not
Defining a custom Text class that uses the Porter Stemmer and can generate concordance for a text using word stems
End of explanation
wnl = nltk.WordNetLemmatizer()
[wnl.lemmatize(t) for t in tokens]
Explanation: Lemmatization
WordNet lemmatizer only removes affixes for words in its dictionary
dictionary lookup process is much slower than Porter stemmer
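A quick side-by-side with the Porter stemmer from earlier makes the difference concrete -- a small illustrative sketch (word list chosen from the tokens above):
[(wnl.lemmatize(t), porter.stem(t)) for t in ['women', 'lying', 'ponds']]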
End of explanation
raw = """'When I'M a Duchess,' she said to herself, (not in a very hopeful tone
though), 'I won't have any pepper in my kitchen AT ALL. Soup does very
well without--Maybe it's always pepper that makes people hot-tempered,'..."""
Explanation: Regular Expressions for Tokenizing Text
Simple Approaches to Tokenization
End of explanation
re.split(r' ', raw)
re.split(r'[ \t\n]+', raw)
Explanation: split on whitespace
End of explanation
re.split(r'\W+', raw)
Explanation: re offers \w (word characters) and \W (all characters except letters, digits, _ )
split on nonword characters:
End of explanation
re.findall(r'\w+|\S\w*', raw)
Explanation: exclude empty strings...
End of explanation
re.findall(r"\w+(?:[-']\w+)*|'|[-.(\)]+|\S\w*", raw)
Explanation: allow internal hyphens and apostrophes in words
End of explanation
text = 'That U.S.A. poster-print costs $12.40...'
pattern = r'''(?x) # set flag to allow verbose regexps
([A-Z]\.)+ # abbreviations, e.g. U.S.A.
| \w+(-\w+)* # words with optional internal hyphens
| \$?\d+(\.\d+)?%? # currency and percentages, e.g. $12.40, 82%
| \.\.\. # ellipsis
| [][.,;"'?():-_`] # these are separate tokens; includes ], [
'''
nltk.regexp_tokenize(text, pattern)
Explanation: NLTK's Regular Expression Tokenizer
nltk.regexp_tokenize() is similar to re.findall() but more efficient -- don't need to treat parentheses as a special case
End of explanation
len(nltk.corpus.brown.words()) / len(nltk.corpus.brown.sents())
Explanation: (?x) is a verbose flag -- strips out embedded whitespace and comments
Further Issues with Tokenization
Important to have a "gold standard" for tokenization to compare performance of a custom tokenizer...
NLTK Corpus includes Penn Treebank corpus, tokenized and raw text, for this purpose:
nltk.corpus.treebank_raw.raw() and nltk.corpus.treebank.words()
Segmentation
Tokenization is a specific case of the more general segmentation
Sentence Segmentation
Average number of words per sentence:
End of explanation
import pprint
text = nltk.corpus.gutenberg.raw('chesterton-thursday.txt')
sents = nltk.sent_tokenize(text)
pprint.pprint(sents[79:89])
Explanation: Segmenting a stream of characters into sentences: sent_tokenize
End of explanation |
6,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is a brief sketch of how to use the Deutsch-Jozsa algorithm.
We start by declaring all necessary imports.
Step1: The Deutsch-Jozsa Algorithm can be used to determine if a binary-valued function is constant or balanced (i.e., it returns 0 on exactly half of its inputs and 1 on the other half).
Step2: We verify that we have a constant bitmap below.
Step3: To use the Deutsch-Jozsa algorithm on quantum hardware we need to define the connection to the QVM or QPU. However, we don't have a real connection in this notebook, so we just mock out the response. If you run this notebook, be sure to replace cxn with a pyQuil connection object.
Step4: Now let's run the Deutsch Jozsa algorithm. We instantiate the Deutsch Jozsa object and then call its is_constant method with the connection object and the bitmap we defined above. Finally we assert its correctness by checking the output. (The method returns a boolean, so here we just check the returned boolean.) | Python Code:
from itertools import product
from mock import patch
from grove.deutsch_jozsa.deutsch_jozsa import DeutschJosza
Explanation: This notebook is a brief sketch of how to use the Deutsch-Jozsa algorithm.
We start by declaring all necessary imports.
End of explanation
bit_value = '0'
bit = ("0", "1")
constant_bitmap = {}
# We construct the bitmap for the algorithm
for bitstring in product(bit, repeat=2):
bitstring = "".join(bitstring)
constant_bitmap[bitstring] = bit_value
Explanation: The Deutsch-Jozsa Algorithm can be used to determine if a binary-valued function is constant or balanced (i.e., it returns 0 on exactly half of its inputs and 1 on the other half).
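For contrast, a balanced function on two-bit inputs can be encoded the same way. This small sketch is illustrative only (not part of the original notebook) and uses the XOR of the two input bits:
balanced_bitmap = {"".join(b): str(int(b[0]) ^ int(b[1])) for b in product(bit, repeat=2)}
# {'00': '0', '01': '1', '10': '1', '11': '0'} -- exactly half of the outputs are 0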
End of explanation
for value in constant_bitmap.values():
assert value == bit_value, "The constant_bitmap is not constant with value bit_value."
Explanation: We verify that we have a constant bitmap below.
End of explanation
with patch("pyquil.api.QuantumComputer") as qc:
qc.run.return_value = [[0], [0]]
Explanation: To use the Deutsch-Jozsa algorithm on quantum hardware we need to define the connection to the QVM or QPU. However, we don't have a real connection in this notebook, so we just mock out the response. If you run this notebook, be sure to replace cxn with a pyQuil connection object.
End of explanation
dj = DeutschJosza()
is_constant = dj.is_constant(qc, constant_bitmap)
assert is_constant, "The algorithm said the function was balanced."
Explanation: Now let's run the Deutsch Jozsa algorithm. We instantiate the Deutsch Jozsa object and then call its is_constant method with the connection object and the bitmap we defined above. Finally we assert its correctness by checking the output. (The method returns a boolean, so here we just check the returned boolean.)
End of explanation |
6,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DistArray
Step1: Software Versions
Step2: Set a RandomState
Set a RandomState so random numpy arrays don't change between runs.
Step3: NumPy Arrays
DistArray is built on NumPy and provides a NumPy-array-like interface. First, let's generate a NumPy array and examine some of its attributes.
Step4: DistArrays
We'll make our first DistArray out of the NumPy array created above.
Step5: Universal Functions (ufuncs)
Step6: Reductions
Functions like sum, mean, min, and max are known as reductions, since they take an array and produce a smaller array or a scalar. In NumPy and DistArray, some of these functions can be applied over a specific axis.
Step7: Indexing and Slicing
DistArrays support standard NumPy Indexing and distributed slicing, including slices with a step. Slicing is currently only supported for Block (and undistributed) DistArrays.
Step8: Distributions
Above, when we created a DistArray out of a NumPy array, we didn't specify how the elements should be distributed among our engines. Distributions give you control over this, if you want it. In other words, Distributions control which processes own which (global) indices.
Step9: The Distribution above was created for us by fromarray,
but DistArray lets us specify more complex distributions.
Here, we specify that the 0th dimension has a Block distribution ('b')
and the 1st dimension has a Cyclic distribution.
DistArray supports Block, Cyclic, Block-Cyclic, Unstructured,
and No-distribution dimensions. See the
ScaLAPACK Documentation for more information about Distribution types.
Step10: Redistribution
Since DistArrays are distributed, the equivalent to NumPy's reshape (distribute_as) can be a more complex and costly operation. For convenience, you can supply either a shape or a full Distribution object. Only Block distributions (and No-dist) are currently redistributable.
Step11: Contexts
Context objects manage the setup of and communication to the worker processes for DistArray objects. They also act as the namespace to which
DistArray creation functions are attached.
Step12: Parallel IO
DistArray has support for reading NumPy .npy files in parallel, for reading and writing .dnpy files in parallel (our own flat-file format), and reading and writing HDF5 files in parallel (if you have a parallel build of h5py).
Step15: Context.apply
Global view, local control. The apply method on a Context allows you to write functions that are applied locally (that is, on the engines) to each section of a DistArray. This allows you to push your computation close to your data, avoiding communication round-trips and possibly speeding up your computations.
Step17: Context.register
Context.register is similar to Context.apply, but it allows you to register your function with a Context up front, and then call it repeatedly, with a nice syntax.
Step18: MPI-only Execution
Instead of using an IPython client (which uses ZeroMQ to communicate to the engines), you can run your DistArray code in MPI-only mode (using an extra MPI process for the client). This can be more performant.
Step19: Distributed Array Protocol
Already have a library with its own distributed arrays? Use the Distributed Array Protocol to work with DistArray.
The Distributed Array Protocol (DAP) is a process-local protocol that allows two subscribers, called the "producer" and the "consumer" or the "exporter" and the "importer", to communicate the essential data and metadata necessary to share a distributed-memory array between them. This allows two independently developed components to access, modify, and update a distributed array without copying. The protocol formalizes the metadata and buffers involved in the transfer, allowing several distributed array projects to collaborate, facilitating interoperability. By not copying the underlying array data, the protocol allows for efficient sharing of array data.
http | Python Code:
# some utility imports
from __future__ import print_function
from pprint import pprint
from matplotlib import pyplot as plt
# main imports
import numpy
import distarray
# reduce precision on printed array values
numpy.set_printoptions(precision=2)
# display figures inline
%matplotlib inline
Explanation: DistArray: Distributed Arrays for Python
docs.enthought.com/distarray
Setup
Much of this notebook requires an IPython.parallel cluster to be running.
Outside the notebook, run
dacluster start -n4
End of explanation
print("numpy", numpy.__version__)
import matplotlib
print("matplotlib", matplotlib.__version__)
import h5py
print("h5py", h5py.__version__)
print("distarray", distarray.__version__)
Explanation: Software Versions
End of explanation
from numpy.random import RandomState
prng = RandomState(1234567890)
Explanation: Set a RandomState
Set a RandomState so random numpy arrays don't change between runs.
End of explanation
# a 4-row 5-column NumPy array with random contents
nparr = prng.rand(4, 5)
nparr
# NumPy array attributes
print("type:", type(nparr))
print("dtype:", nparr.dtype)
print("ndim:", nparr.ndim)
print("shape:", nparr.shape)
print("itemsize:", nparr.itemsize)
print("nbytes:", nparr.nbytes)
Explanation: NumPy Arrays
DistArray is built on NumPy and provides a NumPy-array-like interface. First, let's generate a NumPy array and examine some of its attributes.
End of explanation
# First we need a `Context` object. More on this later.
# For now, think of this object like the `NumPy` module.
# `Context`s manage the worker engines for us.
from distarray.globalapi import Context
context = Context()
# Make a DistArray from a NumPy array.
# This will push sections of the original NumPy array out
# to the engines.
darr = context.fromarray(nparr)
darr
# Print the array section stored on each engine
for i, a in enumerate(darr.get_localarrays()):
print(i, a)
# DistArrays have similar attributes to NumPy arrays,
print("type:", type(darr))
print("dtype:", darr.dtype)
print("ndim:", darr.ndim)
print("shape:", darr.shape)
print("itemsize:", darr.itemsize)
print("nbytes:", darr.nbytes)
# and some additional attributes.
print("targets:", darr.targets)
print("context:", darr.context)
print("distribution:", darr.distribution)
Explanation: DistArrays
We'll make our first DistArray out of the NumPy array created above.
End of explanation
# NumPy provides `ufuncs`, or Universal Functions, that operate
# elementwise over NumPy arrays.
numpy.sin(nparr)
# DistArray provides ufuncs as well, for `DistArray`s.
import distarray.globalapi as da
da.sin(darr)
# `toarray` makes a NumPy array out of a DistArray, pulling all of the
# pieces back to the client. We do this to display the contents of the
# DistArray.
da.sin(darr).toarray()
# A NumPy binary ufunc.
nparr + nparr
# The equivalent DistArray ufunc.
# Notice that a new DistArray is created without
# pulling data back to the client.
darr + darr
# Contents of the resulting DistArray.
(darr + darr).toarray()
Explanation: Universal Functions (ufuncs)
End of explanation
# NumPy sum
print("sum:", nparr.sum())
print("sum over an axis:", nparr.sum(axis=1))
# DistArray sum
print("sum:", darr.sum(), darr.sum().toarray())
print("sum over an axis:", darr.sum(axis=1), darr.sum(axis=1).toarray())
Explanation: Reductions
Functions like sum, mean, min, and max are known as reductions, since they take an array and produce a smaller array or a scalar. In NumPy and DistArray, some of these functions can be applied over a specific axis.
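The other reductions mentioned follow the same pattern; a brief sketch, assuming mean, min, and max mirror sum's behaviour on a DistArray:
print("mean:", darr.mean().toarray())
print("min:", darr.min().toarray())
print("max:", darr.max().toarray())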
End of explanation
# Our example array, as a reminder:
darr.toarray()
# The shapes of the local sections of our DistArray
darr.localshapes()
# Return the value of a single element
darr[0, 2]
# Take a column slice
darr_view = darr[:, 3] # all rows, third column
print(darr_view)
print(darr_view.toarray())
# Slices return a new DistArray that is a view on the
# original, just like in NumPy.
# Changes in the view change the original array.
darr_view[3] = -0.99
print("view:")
print(darr_view.toarray())
print("original:")
print(darr.toarray())
# A more complex slice, with negative indices and a step.
print(darr[:, 2::2])
print(darr[:-1, 2::2].toarray())
# Incomplete indexing
# Grab the first row
darr[0]
Explanation: Indexing and Slicing
DistArrays support standard NumPy Indexing and distributed slicing, including slices with a step. Slicing is currently only supported for Block (and undistributed) DistArrays.
End of explanation
# Let's look at the `Distribution` object that was created for us
# automatically by `fromarray`.
distribution = darr.distribution
# This is a 2D distribution: its 0th dimension is Block-distributed,
# and it's 1st dimension isn't distributed.
pprint(distribution.maps)
# Plot this Distribution, color-coding which process each global index
# belongs to.
from distarray.plotting import plot_array_distribution
process_coords = [(0, 0), (1, 0), (2, 0), (3, 0)]
plot_array_distribution(darr, process_coords, cell_label=False, legend=True)
# Check out which sections of this array's 0th dimension are on
# each process.
distribution.maps[0].bounds
Explanation: Distributions
Above, when we created a DistArray out of a NumPy array, we didn't specify how the elements should be distributed among our engines. Distributions give you control over this, if you want it. In other words, Distributions control which processes own which (global) indices.
End of explanation
from distarray.globalapi import Distribution
distribution = Distribution(context, shape=(64, 64), dist=('b', 'c'))
a = context.zeros(distribution, dtype='int32')
plot_array_distribution(a, process_coords, cell_label=False, legend=True)
Explanation: The Distribution above was created for us by fromarray,
but DistArray lets us specify more complex distributions.
Here, we specify that the 0th dimension has a Block distribution ('b')
and the 1st dimension has a Cyclic distribution.
DistArray supports Block, Cyclic, Block-Cyclic, Unstructured,
and No-distribution dimensions. See the
ScaLAPACK Documentation for more information about Distribution types.
End of explanation
darr
darr.toarray()
# simple reshaping
reshaped = darr.distribute_as((10, 2))
reshaped
reshaped.toarray()
# A more complex resdistribution,
# changing shape, dist, and targets
dist = Distribution(context, shape=(5, 4),
dist=('b', 'b'), targets=(1, 3))
darr.distribute_as(dist)
Explanation: Redistribution
Since DistArrays are distributed, the equivalent to NumPy's reshape (distribute_as) can be a more complex and costly operation. For convenience, you can supply either a shape or a full Distribution object. Only Block distributions (and No-dist) are currently redistributable.
End of explanation
print("targets:", context.targets)
print("comm:", context.comm)
context.zeros((5, 3))
context.ones((20, 20))
Explanation: Contexts
Context objects manage the setup of and communication to the worker processes for DistArray objects. They also act as the namespace to which
DistArray creation functions are attached.
End of explanation
# load .npy files in parallel
numpy.save("/tmp/outfile.npy", nparr)
distribution = Distribution(context, nparr.shape)
new_darr = context.load_npy("/tmp/outfile.npy", distribution)
new_darr
# save to .dnpy (a built-in flat-file format based on .npy)
context.save_dnpy("/tmp/outfile", darr)
# load from .dnpy
context.load_dnpy("/tmp/outfile")
# save DistArrays to .hdf5 files in parallel
context.save_hdf5("/tmp/outfile.hdf5", darr, mode='w')
# load DistArrays from .hdf5 files in parallel (using h5py)
context.load_hdf5("/tmp/outfile.hdf5", distribution)
Explanation: Parallel IO
DistArray has support for reading NumPy .npy files in parallel, for reading and writing .dnpy files in parallel (our own flat-file format), and reading and writing HDF5 files in parallel (if you have a parallel build of h5py).
End of explanation
def get_local_random():
    """Function to be applied locally."""
    import numpy
    return numpy.random.randint(10)
context.apply(get_local_random)
def get_local_var(darr):
    """Another local computation."""
    return darr.ndarray.var()
context.apply(get_local_var, args=(darr.key,))
Explanation: Context.apply
Global view, local control. The apply method on a Context allows you to write functions that are applied locally (that is, on the engines) to each section of a DistArray. This allows you to push your computation close to your data, avoiding communication round-trips and possibly speeding up your computations.
End of explanation
def local_demean(la):
    """Return the local array with the mean removed."""
    return la.ndarray - la.ndarray.mean()
context.register(local_demean)
context.local_demean(darr)
Explanation: Context.register
Context.register is similar to Context.apply, but it allows you to register your function with a Context up front, and then call it repeatedly, with a nice syntax.
End of explanation
# an example script to run in MPI-only mode
%cd julia_set
!python benchmark_julia.py -h
# Compile kernel.pyx
!python setup.py build_ext --inplace
# Run the benchmarking script with 5 MPI processes:
# 4 worker processes and 1 client process
!mpiexec -np 5 python benchmark_julia.py --kernel=cython -r1 1024
Explanation: MPI-only Execution
Instead of using an IPython client (which uses ZeroMQ to communicate to the engines), you can run your DistArray code in MPI-only mode (using an extra MPI process for the client). This can be more performant.
End of explanation
def return_protocol_structure(la):
return la.__distarray__()
context.apply(return_protocol_structure, (darr.key,))
Explanation: Distributed Array Protocol
Already have a library with its own distributed arrays? Use the Distributed Array Protocol to work with DistArray.
The Distributed Array Protocol (DAP) is a process-local protocol that allows two subscribers, called the "producer" and the "consumer" or the "exporter" and the "importer", to communicate the essential data and metadata necessary to share a distributed-memory array between them. This allows two independently developed components to access, modify, and update a distributed array without copying. The protocol formalizes the metadata and buffers involved in the transfer, allowing several distributed array projects to collaborate, facilitating interoperability. By not copying the underlying array data, the protocol allows for efficient sharing of array data.
http://distributed-array-protocol.readthedocs.org/en/rel-0.9.0/
End of explanation |
6,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task
Step1: Define path to data
Step2: A few basic libraries that we'll need for the initial exercises
Step3: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Step4: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline
Step5: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object
Step6: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder
Step7: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
Step8: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where the array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one-hot encoding.
The arrays contain two elements, because we have two categories (cat and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
Step9: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
Step10: The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four
Step11: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
Step12: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
Step13: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
Step14: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Step15: Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras
Step16: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
Step17: Here's a few examples of the categories we just imported
Step18: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition
Step19: ...and here's the fully-connected definition.
Step20: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model
Step21: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
Step22: We'll learn about what these different blocks do later in the course. For now, it's enough to know that
Step23: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
Step24: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
Step25: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data
Step26: From here we can use exactly the same steps as before to look at predictions from the model.
Step27: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label. | Python Code:
%matplotlib inline
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as at 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using jupyter notebook:
End of explanation
# path = "data/dogscats/"
path = "data/dogscats/sample/"
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
import utils; reload(utils)
from utils import plots
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=4
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
vgg = Vgg16()
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
batches = vgg.get_batches(path+'train', batch_size=4)
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
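For reference, a quick way to check that the expected one-folder-per-category layout is in place (the 'cats'/'dogs' folder names are those used by the dogs-vs-cats data, shown here only as an example):
print(os.listdir(path + 'train'))   # e.g. ['cats', 'dogs']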
End of explanation
imgs,labels = next(batches)
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
plots(imgs, titles=labels)
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where the array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one-hot encoding.
The arrays contain two elements, because we have two categories (cat and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
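A tiny illustration of that three-category case (ours, not part of the lesson):
np.eye(3)[[0, 2, 1]]   # labels for cat, kangaroo, dog -- each row has a single 1
# array([[ 1.,  0.,  0.],
#        [ 0.,  0.,  1.],
#        [ 0.,  1.,  0.]])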
End of explanation
x = vgg.predict(imgs, True)
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
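Since three values come back, they can also be unpacked directly -- a quick sketch assuming the tuple ordering described above (our variable names):
probs, idxs, class_names = vgg.predict(imgs, True)
print(probs[:2], idxs[:2], class_names[:2])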
End of explanation
vgg.classes[:4]
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
End of explanation
batch_size=32
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
vgg.finetune(batches)
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
# submission_HW = open("submission_HW.csv","w")
# submission_HW.write("id,label")
# id_count = 1
# batch_size = 200
# batches = vgg.get_batches(path+'train',batch_size=batch_size)
# imgs,labels = next(batches)
# print(len(imgs))
# prediction = vgg.predict(imgs, True)
# for label in prediction[1]:
# submission_HW.write('\n'+str(id_count) + ',' + str(label))
# print(str(id_count) + ',' + str(label))
# id_count += 1
# print("Job Done.")
# # batches.filenames gives all filenames in batch
print(batch_size)
batches, predictions = vgg.test(path+'train', batch_size=batch_size*2)
filenames = batches.filenames
# print(filenames[0])
# ids = np.array([int(f[8:f.find('.')]) for f in filenames])
# ids = []
# for f in filenames:
# if f != '':
# ids.append(f[f.find('.')+1:-4])
ids = np.array([int(f[f.find('.')+1:-4]) for f in filenames])
# print(ids[:5])
dog_predictions = predictions[:,1]
# print(predictions[:5])
# print(new_predictions[:5])
submission = np.stack([ids,dog_predictions], axis=1)
print(submission[:5])
submission_file_name = 'submission_HW1_.csv'
np.savetxt(submission_file_name, submission, fmt='%d,%.5f',header='id,label',comments='')
# thingy = np.vstack((ids, predictions))
# print(thingy[:20])
# print(len(predictions), "predictions\n", predictions[:3])
# print(filenames[:3])
# rounded_predictions = predictions[:,0]
# to round to 1-Hot:
# rounded_labels = np.round(1-rounded_predictions)
# print(rounded_labels)
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
End of explanation
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
Explanation: Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
classes[:5]
Explanation: Here's a few examples of the categories we just imported:
End of explanation
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
Explanation: ...and here's the fully-connected definition.
End of explanation
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse axis bgr->rgb
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
model = VGG_16()
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
batch_size = 4
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation |
6,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy 2016 Scikit-learn Tutorial
Training and Testing Data
To evaluate how well our supervised models generalize, we can split our data into a training and a test set
Step1: Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally new data. We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production.
Specifically for iris, the 150 labels in iris are sorted, which means that if we split the data using a proportional split, this will result in fundamentally altered class distributions. For instance, if we'd perform a common 2/3 training data and 1/3 test data split, our training dataset will only consist of flower classes 0 and 1 (Setosa and Versicolor), and our test set will only contain samples with class label 2 (Virginica flowers).
Under the assumption that all samples are independent of each other (in contrast to time series data), we want to randomly shuffle the dataset before we split it as illustrated above.
Step2: Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a pre-built function to split data into training and testing sets for you. Here, we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it has not seen during training!
Step3: Tip
Step4: So, in order to stratify the split, we can pass the label array as an additional option to the train_test_split function
Step5: By evaluating our classifier performance on data that has been seen during training, we could get false confidence in the predictive power of our model. In the worst case, it may simply memorize the training samples but completely fails classifying new, similar samples -- we really don't want to put such a system into production!
Instead of using the same dataset for training and testing (this is called "resubstitution evaluation"), it is much much better to use a train/test split in order to estimate how well your trained model is doing on new data.
Step6: We can also visualize the correct and failed predictions
Step7: We can see that the errors occur in the area where green (class 1) and gray (class 2) overlap. This gives us insight about what features to add - any feature which helps separate class 1 and class 2 should improve classifier performance.
Exercise
Print the true labels of 3 wrong predictions and modify the scatterplot code, which we used above, to visualize and distinguish these three samples with different markers in the 2D scatterplot. Can you explain why our classifier made these wrong predictions? | Python Code:
# numpy and matplotlib are used by the cells further below
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
Explanation: SciPy 2016 Scikit-learn Tutorial
Training and Testing Data
To evaluate how well our supervised models generalize, we can split our data into a training and a test set:
<img src="figures/train_test_split_matrix.svg" width="100%">
End of explanation
y
Explanation: Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally new data. We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production.
Specifically for iris, the 150 labels in iris are sorted, which means that if we split the data using a proportional split, this will result in fundamentally altered class distributions. For instance, if we'd perform a common 2/3 training data and 1/3 test data split, our training dataset will only consist of flower classes 0 and 1 (Setosa and Versicolor), and our test set will only contain samples with class label 2 (Virginica flowers).
Under the assumption that all samples are independent of each other (in contrast to time series data), we want to randomly shuffle the dataset before we split it as illustrated above.
End of explanation
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
random_state=123)
print("Labels for training and testing data")
print(train_y)
print(test_y)
Explanation: Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a pre-built function to split data into training and testing sets for you. Here, we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it has not seen during training!
End of explanation
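For instance, the 80%/20% split mentioned above only changes the train_size argument (a quick optional sketch):
# An 80/20 split instead of 50/50 - only train_size changes
train_X80, test_X20, train_y80, test_y20 = train_test_split(
    X, y, train_size=0.8, random_state=123)
print(len(train_y80), len(test_y20))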
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
Explanation: Tip: Stratified Split
Especially for relatively small datasets, it's better to stratify the split. Stratification means that we maintain the original class proportion of the dataset in the test and training sets. For example, after we randomly split the dataset as shown in the previous code example, we have the following class proportions in percent:
End of explanation
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
random_state=123,
stratify=y)
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
Explanation: So, in order to stratify the split, we can pass the label array as an additional option to the train_test_split function:
End of explanation
classifier.fit(train_X, train_y)
pred_y = classifier.predict(test_X)
print("Fraction Correct [Accuracy]:")
print(np.sum(pred_y == test_y) / float(len(test_y)))
Explanation: By evaluating our classifier performance on data that has been seen during training, we could get false confidence in the predictive power of our model. In the worst case, it may simply memorize the training samples but completely fails classifying new, similar samples -- we really don't want to put such a system into production!
Instead of using the same dataset for training and testing (this is called "resubstitution evaluation"), it is much much better to use a train/test split in order to estimate how well your trained model is doing on new data.
End of explanation
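As a side note, scikit-learn also provides a helper that computes the same fraction as the manual np.sum comparison above:
from sklearn.metrics import accuracy_score

# accuracy_score returns the fraction of correctly classified samples,
# i.e. the same value as np.sum(pred_y == test_y) / float(len(test_y))
print("Accuracy:", accuracy_score(test_y, pred_y))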
print('Samples correctly classified:')
correct_idx = np.where(pred_y == test_y)[0]
print(correct_idx)
print('\nSamples incorrectly classified:')
incorrect_idx = np.where(pred_y != test_y)[0]
print(incorrect_idx)
# Plot two dimensions
colors = ["darkblue", "darkgreen", "gray"]
for n, color in enumerate(colors):
idx = np.where(test_y == n)[0]
plt.scatter(test_X[idx, 1], test_X[idx, 2], color=color, label="Class %s" % str(n))
plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2], color="darkred")
plt.xlabel('sepal width [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc=3)
plt.title("Iris Classification results")
plt.show()
Explanation: We can also visualize the correct and failed predictions
End of explanation
# %load solutions/04_wrong-predictions.py
Explanation: We can see that the errors occur in the area where green (class 1) and gray (class 2) overlap. This gives us insight about what features to add - any feature which helps separate class 1 and class 2 should improve classifier performance.
Exercise
Print the true labels of 3 wrong predictions and modify the scatterplot code, which we used above, to visualize and distinguish these three samples with different markers in the 2D scatterplot. Can you explain why our classifier made these wrong predictions?
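One possible sketch of a solution, reusing the variables defined in the cells above (the star marker is an arbitrary choice):
# Print the true labels of three misclassified samples
print("True labels of 3 wrong predictions:", test_y[incorrect_idx[:3]])

# Re-draw the scatterplot and highlight those three samples with a different marker
colors = ["darkblue", "darkgreen", "gray"]
for n, color in enumerate(colors):
    idx = np.where(test_y == n)[0]
    plt.scatter(test_X[idx, 1], test_X[idx, 2], color=color, label="Class %s" % str(n))
plt.scatter(test_X[incorrect_idx[:3], 1], test_X[incorrect_idx[:3], 2],
            color="darkred", marker="*", s=150, label="mispredicted")
plt.xlabel('sepal width [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc=3)
plt.show()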
End of explanation |
6,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Creating and filling arrays
Matrix filled with zeros.
Step2: Vector filled with random number.
Step3: Matrix filled with constant.
Step4: Identity matrix.
Step5: Create matrix from list and set data type.
Step6: Dimensions and size of a matrix.
Step7: Fill matrix with a series of numbers.
Step8: Accessing elements of an array
Indexing similar to list indexing. First element is at zero index.
Step9: Operations with matrices
Operations element by element
Step10: Matrix operations
Step11: Practical examples
Polynomial fitting
Let's fit a polynomial on 2D points using the least squares method.
For visualization, we will use another Python module called matplotlib.
Step12: Linear equation system
Let's solve the following linear system
3x + 4y + 2z = 21
-x + y + 3z = -6
3x - 4y + z = -7 | Python Code:
import numpy as np
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/python/numpy_tutor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Numpy in a Nutshell
Numpy is a very popular Python package for numerical calculations (matrices, linear algebra, etc.)
Numpy, like other Python packages, has to be imported. Usually an np alias is used.
End of explanation
a = np.zeros(9).reshape(3,3) # 3 x 3 matrix filled with zeros
print(a)
print(a.dtype)
a1 = np.zeros((3, 3)) # same matrix, note the tuple parameter
Explanation: Creating and filling arrays
Matrix filled with zeros.
End of explanation
b = np.random.rand(6) # random numbers between 0-1
print(b)
Explanation: Vector filled with random number.
End of explanation
a1 = np.full((3, 4), 8)
print(a1)
Explanation: Matrix filled with constant.
End of explanation
i = np.eye(4)
print(i)
Explanation: Identity matrix.
End of explanation
c = np.array([[1, 2, 3], [2, 4, 6]], dtype=np.int32) # default data type is float64
Explanation: Create matrix from list and set data type.
End of explanation
print(c.shape) # return a tuple
print(c.size)
Explanation: Dimensions and size of a matrix.
End of explanation
d = np.arange(10) # integer values from 0 to 9
print(d)
e = np.arange(2, 11, 2) # even numbers from 2 to 10
print(e)
f = np.arange(0.1, 1, 0.1) # for float numbers
print(f)
f1 = np.linspace(0.1, 0.9, 9) # same as above but start, end, number of items
print(f1)
Explanation: Fill matrix with a series of numbers.
End of explanation
t1 = np.arange(80).reshape(10, 8)
print(t1)
print(t1[0, 0]) # first row, first column
print(t1[0][0]) # same as above
print(t1[2]) # third row
print(t1[:,1]) # second column
print(t1[::2]) # odd rows (every second)
print(t1[t1 % 3 == 0]) # elements divisible by three
Explanation: Accessing elements of an array
Indexing similar to list indexing. First element is at zero index.
End of explanation
a1 = np.full((3, 4), 8)
a2 = np.arange(12).reshape(3, 4)
print(a1 * 2) # scalar times matrix
print(np.sqrt(a2)) # square root of all elements
print(a1 - a2) # difference of two matrices
print(a1 * a2) # element wise multiplication!!!
Explanation: Operations with matrices
Operations element by element
End of explanation
b1 = np.arange(12).reshape(4, 3)
print(b1.transpose().dot(b1)) # matrix multiplication with transpose
print(b1.T.dot(b1)) # same as above
print(np.linalg.inv(b1.T.dot(b1))) # matrix inverse
Explanation: Matrix operations
End of explanation
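Since Python 3.5 the @ operator performs the same matrix product as dot(); a short equivalent sketch using the b1 matrix defined above:
print(b1.T @ b1)                 # identical to b1.T.dot(b1)
print(np.linalg.inv(b1.T @ b1))  # inverse of the 3 x 3 product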
import matplotlib.pyplot as plt
from math import sqrt
pnts = np.array([[1.1, 0.4], [2.6, 1.9], [4.2, 3.0], [7.0, 3.1], [8.2, 2.4], [9.6, 1.2]])
plt.plot(pnts[:,0], pnts[:,1], "o")
c = np.polyfit(pnts[:,0], pnts[:,1], 2) # parabola fitting
v = np.polyval(c, pnts[:,0]) - pnts[:,1] # corrections for y coordinates
rms = sqrt(np.sum(v**2) / pnts.shape[0]) # RMS error
print(c)
x = np.linspace(np.min(pnts[:,0]), np.max(pnts[:,0]), 100)
plt.plot(x, np.polyval(c, x))
plt.plot(pnts[:,0], pnts[:,1], "o")
Explanation: Practical examples
Polynomial fitting
Let's fit a polynomial on 2D points using the least squares method.
For visualization, we will use another Python module called matplotlib.
End of explanation
A = np.array([[3, 4, 2], [-1, 1, 3], [3, -4, 1]])
b = np.array([21, -6, -7])
x = np.linalg.solve(A, b)
print(x)
Explanation: Linear equation system
Let's solve the following linear system
3x + 4y + 2z = 21
-x + y + 3z = -6
3x - 4y + z = -7
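To sanity-check the solution we can substitute it back into the system (a small optional check using the arrays defined above):
print(A @ x)                  # should reproduce the right-hand side b
print(np.allclose(A @ x, b))  # True when the residual is numerically zero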
End of explanation |
6,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Submitting a Cloud Machine Learning Engine training job as a pipeline step
Label
GCP, Cloud ML Engine, Machine Learning, pipeline, component, Kubeflow, Kubeflow Pipeline
Summary
A Kubeflow Pipeline component to submit a Cloud ML Engine training job as a step in a pipeline.
Details
Intended use
Use this component to submit a training job to Cloud ML Engine from a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Clean up the working directory
Step4: Download the sample trainer code to local
Step5: Package code and upload the package to Cloud Storage
Step6: Example pipeline that uses the component
Step7: Compile the pipeline
Step8: Submit the pipeline for execution
Step9: Inspect the results
Use the following command to inspect the contents in the output directory | Python Code:
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
Explanation: Name
Submitting a Cloud Machine Learning Engine training job as a pipeline step
Label
GCP, Cloud ML Engine, Machine Learning, pipeline, component, Kubeflow, Kubeflow Pipeline
Summary
A Kubeflow Pipeline component to submit a Cloud ML Engine training job as a step in a pipeline.
Details
Intended use
Use this component to submit a training job to Cloud ML Engine from a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|:------------------|:------------------|:----------|:--------------|:-----------------|:-------------|
| project_id | The ID of the Google Cloud Platform (GCP) project of the job. | No | GCPProjectID | | |
| python_module | The name of the Python module to run after installing the training program. | Yes | String | | None |
| package_uris | The Cloud Storage location of the packages that contain the training program and any additional dependencies. The maximum number of package URIs is 100. | Yes | List | | None |
| region | The Compute Engine region in which the training job is run. | Yes | GCPRegion | | us-central1 |
| args | The command line arguments to pass to the training program. | Yes | List | | None |
| job_dir | A Cloud Storage path in which to store the training outputs and other data needed for training. This path is passed to your TensorFlow program as the job-dir command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training. | Yes | GCSPath | | None |
| python_version | The version of Python used in training. If it is not set, the default version is 2.7. Python 3.5 is available when the runtime version is set to 1.4 and above. | Yes | String | | None |
| runtime_version | The runtime version of Cloud ML Engine to use for training. If it is not set, Cloud ML Engine uses the default. | Yes | String | | 1 |
| master_image_uri | The Docker image to run on the master replica. This image must be in Container Registry. | Yes | GCRPath | | None |
| worker_image_uri | The Docker image to run on the worker replica. This image must be in Container Registry. | Yes | GCRPath | | None |
| training_input | The input parameters to create a training job. | Yes | Dict | TrainingInput | None |
| job_id_prefix | The prefix of the job ID that is generated. | Yes | String | | None |
| wait_interval | The number of seconds to wait between API calls to get the status of the job. | Yes | Integer | | 30 |
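For reference, the training_input argument above accepts a dictionary that mirrors the Cloud ML Engine TrainingInput schema. A hypothetical sketch (the field values are placeholders and are not used by the census sample below):
# Hypothetical training_input dictionary - values are illustrative only
training_input = {
    'scaleTier': 'CUSTOM',          # use explicitly specified machine types
    'masterType': 'n1-standard-4',  # machine type of the master replica
    'workerType': 'n1-standard-4',  # machine type of the worker replicas
    'workerCount': '2'              # number of worker replicas
}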
Input data schema
The component accepts two types of inputs:
* A list of Python packages from Cloud Storage.
* You can manually build a Python package and upload it to Cloud Storage by following this guide.
* A Docker container from Container Registry.
* Follow this guide to publish and use a Docker container with this component.
Output
| Name | Description | Type |
|:------- |:---- | :--- |
| job_id | The ID of the created job. | String |
| job_dir | The Cloud Storage path that contains the trained model output files. | GCSPath |
Cautions & requirements
To use the component, you must:
Set up a cloud environment by following this guide.
The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
Grant the following access to the Kubeflow user service account:
Read access to the Cloud Storage buckets which contain the input data, packages, or Docker images.
Write access to the Cloud Storage bucket of the output directory.
Detailed description
The component builds the TrainingInput payload and submits a job via the Cloud ML Engine REST API.
The steps to use the component in a pipeline are:
Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
mlengine_train_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/train/component.yaml')
help(mlengine_train_op)
Explanation: Load the component using KFP SDK
End of explanation
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'CLOUDML - Train'
TRAINER_GCS_PATH = GCS_WORKING_DIR + '/train/trainer.tar.gz'
OUTPUT_GCS_PATH = GCS_WORKING_DIR + '/train/output/'
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code.
In this sample, you use the code from the census estimator sample to train a model in Cloud ML Engine. To upload the code to Cloud ML Engine, package the Python code and upload it to a Cloud Storage bucket.
Note: You must have read and write permissions on the bucket that you use as the working directory.
Set sample parameters
End of explanation
%%capture --no-stderr
!gsutil rm -r $GCS_WORKING_DIR
Explanation: Clean up the working directory
End of explanation
%%capture --no-stderr
!wget https://github.com/GoogleCloudPlatform/cloudml-samples/archive/master.zip
!unzip master.zip
Explanation: Download the sample trainer code to local
End of explanation
%%capture --no-stderr
%%bash -s "$TRAINER_GCS_PATH"
pushd ./cloudml-samples-master/census/estimator/
python setup.py sdist
gsutil cp dist/preprocessing-1.0.tar.gz $1
popd
rm -fr ./cloudml-samples-master/ ./master.zip ./dist
Explanation: Package code and upload the package to Cloud Storage
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='CloudML training pipeline',
description='CloudML training pipeline'
)
def pipeline(
project_id = PROJECT_ID,
python_module = 'trainer.task',
package_uris = json.dumps([TRAINER_GCS_PATH]),
region = 'us-central1',
args = json.dumps([
'--train-files', 'gs://cloud-samples-data/ml-engine/census/data/adult.data.csv',
'--eval-files', 'gs://cloud-samples-data/ml-engine/census/data/adult.test.csv',
'--train-steps', '1000',
'--eval-steps', '100',
'--verbosity', 'DEBUG'
]),
job_dir = OUTPUT_GCS_PATH,
python_version = '',
runtime_version = '1.10',
master_image_uri = '',
worker_image_uri = '',
training_input = '',
job_id_prefix = '',
wait_interval = '30'):
task = mlengine_train_op(
project_id=project_id,
python_module=python_module,
package_uris=package_uris,
region=region,
args=args,
job_dir=job_dir,
python_version=python_version,
runtime_version=runtime_version,
master_image_uri=master_image_uri,
worker_image_uri=worker_image_uri,
training_input=training_input,
job_id_prefix=job_id_prefix,
wait_interval=wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation
!gsutil ls $OUTPUT_GCS_PATH
Explanation: Inspect the results
Use the following command to inspect the contents in the output directory:
End of explanation |
6,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic PowerShell Execution
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. You can filter on powershell.exe as a host application if you want to, or leave it without a filter to capture every single PowerShell host
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Looking for non-interactive PowerShell sessions might be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Looking for non-interactive PowerShell sessions might be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|
Step5: Analytic IV
Monitor for processes loading PowerShell DLL system.management.automation
| Data source | Event Provider | Relationship | Event |
|
Step6: Analytic V
Monitoring for PSHost* pipes is another interesting way to find PowerShell execution
| Data source | Event Provider | Relationship | Event |
|
Step7: Analytic VI
The “PowerShell Named Pipe IPC” event will indicate the name of the PowerShell AppDomain that started, which is a sign of PowerShell execution
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Basic PowerShell Execution
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/04/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging PowerShell to execute code within my environment
Technical Context
None
Offensive Tradecraft
Adversaries can use PowerShell to perform a number of actions, including discovery of information and execution of code.
Therefore, it is important to understand the basic artifacts left when PowerShell is used in your environment.
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/02_execution/SDWIN-190518182022.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/execution/host/empire_launcher_vbs.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/execution/host/empire_launcher_vbs.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Channel
FROM mordorTable
WHERE (Channel = "Microsoft-Windows-PowerShell/Operational" OR Channel = "Windows PowerShell")
AND (EventID = 400 OR EventID = 4103)
'''
)
df.show(10,False)
Explanation: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. You can filter on powershell.exe as a host application if you want to, or leave it without a filter to capture every single PowerShell host
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Windows PowerShell | Application host started | 400 |
| Powershell | Microsoft-Windows-PowerShell/Operational | User started Application host | 4103 |
End of explanation
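If you only want events where powershell.exe itself is the host application, one possible variation of the query above also filters on the Message field (a sketch; it assumes the HostApplication value is rendered inside Message in this dataset):
df = spark.sql(
    '''
    SELECT `@timestamp`, Hostname, Channel
    FROM mordorTable
    WHERE (Channel = "Microsoft-Windows-PowerShell/Operational" OR Channel = "Windows PowerShell")
        AND (EventID = 400 OR EventID = 4103)
        AND lower(Message) LIKE "%hostapplication%powershell.exe%"
    '''
)
df.show(10,False)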
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, NewProcessName, ParentProcessName
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE "%powershell.exe"
AND NOT ParentProcessName LIKE "%explorer.exe"
'''
)
df.show(10,False)
Explanation: Analytic II
Looking for non-interactive PowerShell sessions might be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ParentImage
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE "%powershell.exe"
AND NOT ParentImage LIKE "%explorer.exe"
'''
)
df.show(10,False)
Explanation: Analytic III
Looking for non-interactive PowerShell sessions might be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ImageLoaded
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND (lower(Description) = "system.management.automation" OR lower(ImageLoaded) LIKE "%system.management.automation%")
'''
)
df.show(10,False)
Explanation: Analytic IV
Monitor for processes loading PowerShell DLL system.management.automation
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, PipeName
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 17
AND lower(PipeName) LIKE "\\\\pshost%"
'''
)
df.show(10,False)
Explanation: Analytic V
Monitoring for PSHost* pipes is another interesting way to find PowerShell execution
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Named Pipe | Microsoft-Windows-Sysmon/Operational | Process created Pipe | 17 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Message
FROM mordorTable
WHERE Channel = "Microsoft-Windows-PowerShell/Operational"
AND EventID = 53504
'''
)
df.show(10,False)
Explanation: Analytic VI
The “PowerShell Named Pipe IPC” event will indicate the name of the PowerShell AppDomain that started, which is a sign of PowerShell execution
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Microsoft-Windows-PowerShell/Operational | Application domain started | 53504 |
End of explanation |
6,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now it's your turn to test your new knowledge of missing values handling. You'll probably find it makes a big difference.
Setup
The questions will give you feedback on your work. Run the following cell to set up the feedback system.
Step1: In this exercise, you will work with data from the Housing Prices Competition for Kaggle Learn Users.
Run the next code cell without changes to load the training and validation sets in X_train, X_valid, y_train, and y_valid. The test set is loaded in X_test.
Step2: Use the next code cell to print the first five rows of the data.
Step3: You can already see a few missing values in the first several rows. In the next step, you'll obtain a more comprehensive understanding of the missing values in the dataset.
Step 1
Step4: Part A
Use the above output to answer the questions below.
Step5: Part B
Considering your answers above, what do you think is likely the best approach to dealing with the missing values?
Step6: To compare different approaches to dealing with missing values, you'll use the same score_dataset() function from the tutorial. This function reports the mean absolute error (MAE) from a random forest model.
Step7: Step 2
Step8: Run the next code cell without changes to obtain the MAE for this approach.
Step9: Step 3
Step10: Run the next code cell without changes to obtain the MAE for this approach.
Step11: Part B
Compare the MAE from each approach. Does anything surprise you about the results? Why do you think one approach performed better than the other?
Step12: Step 4
Step13: Run the next code cell to train and evaluate a random forest model. (Note that we don't use the score_dataset() function above, because we will soon use the trained model to generate test predictions!)
Step14: Part B
Use the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to final_X_test.
Then, use the preprocessed test features and the trained model to generate test predictions in preds_test.
In order for this step to be marked correct, you need only ensure
Step15: Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition. | Python Code:
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex2 import *
print("Setup Complete")
Explanation: Now it's your turn to test your new knowledge of missing values handling. You'll probably find it makes a big difference.
Setup
The questions will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
X_full.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X_full.SalePrice
X_full.drop(['SalePrice'], axis=1, inplace=True)
# To keep things simple, we'll use only numerical predictors
X = X_full.select_dtypes(exclude=['object'])
X_test = X_test_full.select_dtypes(exclude=['object'])
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
Explanation: In this exercise, you will work with data from the Housing Prices Competition for Kaggle Learn Users.
Run the next code cell without changes to load the training and validation sets in X_train, X_valid, y_train, and y_valid. The test set is loaded in X_test.
End of explanation
X_train.head()
Explanation: Use the next code cell to print the first five rows of the data.
End of explanation
# Shape of training data (num_rows, num_columns)
print(X_train.shape)
# Number of missing values in each column of training data
missing_val_count_by_column = (X_train.isnull().sum())
print(missing_val_count_by_column[missing_val_count_by_column > 0])
Explanation: You can already see a few missing values in the first several rows. In the next step, you'll obtain a more comprehensive understanding of the missing values in the dataset.
Step 1: Preliminary investigation
Run the code cell below without changes.
End of explanation
# Fill in the line below: How many rows are in the training data?
num_rows = ____
# Fill in the line below: How many columns in the training data
# have missing values?
num_cols_with_missing = ____
# Fill in the line below: How many missing entries are contained in
# all of the training data?
tot_missing = ____
# Check your answers
step_1.a.check()
#%%RM_IF(PROD)%%
num_rows = 1168
num_cols_with_missing = 3
tot_missing = 212 + 6 + 58
step_1.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.a.hint()
#_COMMENT_IF(PROD)_
step_1.a.solution()
Explanation: Part A
Use the above output to answer the questions below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
step_1.b.check()
#_COMMENT_IF(PROD)_
step_1.b.hint()
Explanation: Part B
Considering your answers above, what do you think is likely the best approach to dealing with the missing values?
End of explanation
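One way to reason about this is to check how sparse each affected column actually is; a small optional sketch (not part of the graded exercise):
# Fraction of missing entries per affected column - when these fractions are small,
# imputation usually preserves more signal than dropping the columns outright.
missing_frac = X_train.isnull().mean()
print(missing_frac[missing_frac > 0])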
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
return mean_absolute_error(y_valid, preds)
Explanation: To compare different approaches to dealing with missing values, you'll use the same score_dataset() function from the tutorial. This function reports the mean absolute error (MAE) from a random forest model.
End of explanation
# Fill in the line below: get names of columns with missing values
____ # Your code here
# Fill in the lines below: drop columns in training and validation data
reduced_X_train = ____
reduced_X_valid = ____
# Check your answers
step_2.check()
#%%RM_IF(PROD)%%
# Get names of columns with missing values
cols_with_missing = [col for col in X_train.columns
if X_train[col].isnull().any()]
# Drop columns in training and validation data
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
Explanation: Step 2: Drop columns with missing values
In this step, you'll preprocess the data in X_train and X_valid to remove columns with missing values. Set the preprocessed DataFrames to reduced_X_train and reduced_X_valid, respectively.
End of explanation
print("MAE (Drop columns with missing values):")
print(score_dataset(reduced_X_train, reduced_X_valid, y_train, y_valid))
Explanation: Run the next code cell without changes to obtain the MAE for this approach.
End of explanation
from sklearn.impute import SimpleImputer
# Fill in the lines below: imputation
____ # Your code here
imputed_X_train = ____
imputed_X_valid = ____
# Fill in the lines below: imputation removed column names; put them back
imputed_X_train.columns = ____
imputed_X_valid.columns = ____
# Check your answers
step_3.a.check()
#%%RM_IF(PROD)%%
# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))
step_3.a.assert_check_failed()
#%%RM_IF(PROD)%%
# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.fit_transform(X_valid))
# Imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns
step_3.a.assert_check_failed()
#%%RM_IF(PROD)%%
# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))
# Imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns
step_3.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.a.hint()
#_COMMENT_IF(PROD)_
step_3.a.solution()
Explanation: Step 3: Imputation
Part A
Use the next code cell to impute missing values with the mean value along each column. Set the preprocessed DataFrames to imputed_X_train and imputed_X_valid. Make sure that the column names match those in X_train and X_valid.
End of explanation
print("MAE (Imputation):")
print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid))
Explanation: Run the next code cell without changes to obtain the MAE for this approach.
End of explanation
# Check your answer (Run this code cell to receive credit!)
step_3.b.check()
#_COMMENT_IF(PROD)_
step_3.b.hint()
Explanation: Part B
Compare the MAE from each approach. Does anything surprise you about the results? Why do you think one approach performed better than the other?
End of explanation
# Preprocessed training and validation features
final_X_train = ____
final_X_valid = ____
# Check your answers
step_4.a.check()
#%%RM_IF(PROD)%%
# Imputation
final_imputer = SimpleImputer(strategy='median')
final_X_train = pd.DataFrame(final_imputer.fit_transform(X_train))
final_X_valid = pd.DataFrame(final_imputer.transform(X_valid))
# Imputation removed column names; put them back
final_X_train.columns = X_train.columns
final_X_valid.columns = X_valid.columns
step_4.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.a.hint()
#_COMMENT_IF(PROD)_
step_4.a.solution()
Explanation: Step 4: Generate test predictions
In this final step, you'll use any approach of your choosing to deal with missing values. Once you've preprocessed the training and validation features, you'll train and evaluate a random forest model. Then, you'll preprocess the test data before generating predictions that can be submitted to the competition!
Part A
Use the next code cell to preprocess the training and validation data. Set the preprocessed DataFrames to final_X_train and final_X_valid. You can use any approach of your choosing here! In order for this step to be marked as correct, you need only ensure:
- the preprocessed DataFrames have the same number of columns,
- the preprocessed DataFrames have no missing values,
- final_X_train and y_train have the same number of rows, and
- final_X_valid and y_valid have the same number of rows.
End of explanation
# Define and fit model
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(final_X_train, y_train)
# Get validation predictions and MAE
preds_valid = model.predict(final_X_valid)
print("MAE (Your approach):")
print(mean_absolute_error(y_valid, preds_valid))
Explanation: Run the next code cell to train and evaluate a random forest model. (Note that we don't use the score_dataset() function above, because we will soon use the trained model to generate test predictions!)
End of explanation
# Fill in the line below: preprocess test data
final_X_test = ____
# Fill in the line below: get test predictions
preds_test = ____
# Check your answers
step_4.b.check()
#%%RM_IF(PROD)%%
# Preprocess test data
final_X_test = pd.DataFrame(final_imputer.transform(X_test))
# Get test predictions
preds_test = model.predict(final_X_test)
step_4.b.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.b.hint()
#_COMMENT_IF(PROD)_
step_4.b.solution()
Explanation: Part B
Use the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to final_X_test.
Then, use the preprocessed test features and the trained model to generate test predictions in preds_test.
In order for this step to be marked correct, you need only ensure:
- the preprocessed test DataFrame has no missing values, and
- final_X_test has the same number of rows as X_test.
End of explanation
# Save test predictions to file
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
Explanation: Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition.
End of explanation |
6,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align
Step1: Introduction
The SciPy package adds features to NumPy's low-level algorithms for multidimensional arrays, and provides a large number of high-level algorithms for scientific use. Some of the topics that SciPy covers are
Step2: Special Functions
Many special mathematical functions are important in computational physics problems. SciPy provides implementations of many of these special functions. For more details, see the list of functions in the documentation at http
Step3: Integration
Numerical integration
Step4: The quad function accepts a large number of optional arguments, which can be used to adjust details of its behavior (type help(quad) for more details).
Its basic usage is as follows
Step6: If we need to pass extra arguments to the integrand function we can use the args argument
Step7: For simple functions we can use a lambda (anonymous) function instead of explicitly defining a function for the integrand
Step8: As shown in this example, we can use 'Inf' and '-Inf' as limits of the integral.
Higher-dimensional integrals are evaluated in a similar way
Step9: Note how we need to use lambda functions for the limits of the integration in y, since these limits can in general be functions of x.
Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs
Step10: A system of ODEs is usually formulated in standard form before being solved numerically. The standard form is
Step12: The Hamiltonian equations of motion for the pendulum are given by (see the wikipedia page)
Step13: Simple animation of the pendulum motion. We will see how to create better animations in lecture 4.
Step15: Example
Step16: Fourier transform
Fourier transforms are one of the universal tools of scientific computing, appearing again and again in different contexts. SciPy provides functions to access the classic FFTPACK library from NetLib, an efficient and well-tested FFT library written in FORTRAN. The SciPy API contains a few additional functions, but in general the API is closely related to the original FORTRAN library.
To use the fftpack module in a Python program, include
Step17: To demonstrate how to compute a fast Fourier transform with SciPy, let us consider the FFT of the solution of the damped harmonic oscillator from the previous example
Step18: Since the signal is real, the spectrum is symmetric. Therefore, we only need to plot the part that corresponds to the positive frequencies. To extract that part of w and F we can use some of the indexing tricks for NumPy arrays that we saw in lecture 2
Step19: As expected, we see a peak in the spectrum centered around 1, which is the frequency we used for the oscillator.
Linear algebra
The linear algebra module contains many matrix-related functions, including solving linear equations, eigenvalue computations, matrix functions (for example, matrix exponentiation), several different decompositions (SVD, LU, Cholesky), etc.
Detailed documentation is available here
Step20: We can also do the same with
$A X = B$,
where now $A, B$ and $X$ are matrices
Step21: Eigenvalues and eigenvectors
The eigenvalue problem for the matrix $A$
Step22: The eigenvector corresponding to the $n$-th eigenvalue (stored in evals[n]) is the $n$-th column in evecs, that is, evecs[
Step23: There are also more specialized ways of solving eigenvalue problems, such as eigh for Hermitian matrices.
Matrix operations
Step24: Sparse matrices
Sparse matrices are often useful in numerical simulations involving large systems, when the problem can be described in matrix form and the matrices or vectors contain mostly zeros. Scipy has good support for sparse matrices, with basic linear algebra operations (such as equation solving, eigenvalue computations, etc).
There are many possible strategies for storing sparse matrices efficiently. Some of the most common are the so-called coordinate form (COO), list-of-lists form (LIL), and compressed-sparse column (CSC; also compressed-sparse row, CSR). Each format has its advantages and disadvantages. Most computational algorithms (equation solving, matrix multiplication, etc) can be implemented efficiently using the CSR or CSC formats, but they are not as intuitive or as easy to initialize. For this reason, a sparse matrix is often initially created in COO or LIL format (where we can add elements to the sparse matrix efficiently), and then converted to CSC or CSR before being used in real calculations.
For more information about sparse matrix formats, see for example (in English)
Step25: A more efficient way to create sparse matrices
Step26: Converting between different sparse matrix formats
Step27: We can compute with sparse matrices just like we do with dense matrices
Step28: Optimization
Optimization (finding the maximum or minimum of a function) is a broad field in mathematics, and the optimization of complicated functions or functions of many variables can be hard. Here we will only review some very simple cases. For a detailed introduction to optimization with SciPy, see (in English)
Step29: Finding minima
Let us first see how to find the minimum of a simple function of one variable
Step30: We can use the fmin_bfgs function to find the minimum of the function
Step31: We can also use the brent or fminbound functions. These functions have a slightly different syntax and use different algorithms.
Step32: Finding the roots of a function
To find the solutions of an equation of the form $f(x) = 0$ we can use the fsolve function. It requires an initial guess to be specified
Step33: Interpolation
Interpolation is simple and convenient in Scipy
Step34: Statistics
The scipy.stats module contains several statistical distributions, statistical functions and tests. For complete documentation of these features, see (in English) http
Step35: Statistics
Step36: Statistical tests
Test whether two (independent) sets of random data come from the same distribution
Step37: Since the p-value is very large, we cannot reject the hypothesis that the two sets of random data have different means.
To test whether the mean of a single sample of data is 0.1 (the true mean is 0.0)
Step38: A low p-value means that we can reject the hypothesis that the mean of Y is 0.1.
Step39: Further reading
http | Python Code:
# what does this line do? The answer comes later
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Image
Explanation: <table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align: center;"> Python Course for Mechanical Engineers </h1>
<h3 style="text-align: center;"> By: Eduardo Vieira</h3>
<br>
<br>
<h1 style="text-align: center;"> SciPy - Library of scientific algorithms for Python </h1>
<br>
End of explanation
import scipy as sp
import numpy as np
Explanation: Introduction
The SciPy package adds features to NumPy's low-level algorithms for multidimensional arrays, and provides a large number of high-level algorithms for scientific use. Some of the topics that SciPy covers are:
Special functions (scipy.special)
Integration (scipy.integrate)
Optimization (scipy.optimize)
Interpolation (scipy.interpolate)
Fourier transform (scipy.fftpack)
Signal processing (scipy.signal)
Linear algebra (scipy.linalg)
Sparse matrix eigenvalue problems (scipy.sparse)
Statistics (scipy.stats)
Multi-dimensional image processing (scipy.ndimage)
File input/output (scipy.io)
Each of these submodules provides many functions and classes that can be used to solve problems in their respective topics.
In this lecture we will see how to use some of these subpackages.
To access the SciPy package in a Python program, we start by importing everything from the scipy module.
End of explanation
#
# The scipy.special module includes many Bessel functions
# Here we will use the functions jn and yn, which are the Bessel functions
# of the first and second kind, of real order. We also include the
# functions jn_zeros and yn_zeros, which return the zeros of the
# functions jn and yn.
#
from scipy.special import jn, yn, jn_zeros, yn_zeros
n = 0    # order of the function
x = 0.0
# Bessel function of the first kind
print("J_%d(%f) = %f" % (n, x, jn(n, x)))
x = 1.0
# Bessel function of the second kind
print("Y_%d(%f) = %f" % (n, x, yn(n, x)))
x = np.linspace(0, 10, 100)
fig, ax = plt.subplots()
for n in range(4):
ax.plot(x, jn(n, x), label=r"$J_%d(x)$" % n)
ax.legend()
# zeros of the Bessel functions
n = 0 # order
m = 4 # number of roots to compute
jn_zeros(n, m)
Explanation: Special Functions
Many special mathematical functions are important in computational physics problems. SciPy provides implementations of many of these special functions. For more details, see the list of functions in the documentation: http://docs.scipy.org/doc/scipy/reference/special.html#module-scipy.special.
To demonstrate the typical use of these special functions we will focus on the Bessel functions:
End of explanation
from scipy.integrate import quad, dblquad, tplquad
Explanation: Integration
Numerical integration: quadrature
The numerical evaluation of a function of the type
$\displaystyle \int_a^b f(x) dx$
is called numerical quadrature, or simply quadrature. SciPy provides functions for different kinds of quadrature, for example the functions quad, dblquad and tplquad for computing single, double and triple integrals, respectively.
End of explanation
# define a simple function to be integrated
def f(x):
    return x
x_inf = 0 # the lower limit of x
x_sup = 1 # the upper limit of x
val, errabs = quad(f, x_inf, x_sup)
print("integral value =", val, ", absolute error =", errabs)
Explanation: The quad function accepts a large number of optional arguments, which can be used to adjust details of its behavior (type help(quad) for more details).
Its basic usage is as follows:
End of explanation
def integrando(x, n):
    """Bessel function of the first kind and order n."""
    return jn(n, x)
x_inf = 0  # the lower limit of x
x_sup = 10 # the upper limit of x
val, errabs = quad(integrando, x_inf, x_sup, args=(3,)) # evaluate the integral with n=3
print(val, errabs)
Explanation: If we need to pass extra arguments to the integrand function we can use the args argument:
End of explanation
val, errabs = quad(lambda x: np.exp(-x ** 2), -np.Inf, np.Inf) # Inf = infinity!
print("numerical result =", val, errabs)
analitico = np.sqrt(np.pi)
print("analytical =", analitico)
Explanation: For simple functions we can use a lambda function (anonymous function) instead of explicitly defining a function for the integrand:
End of explanation
def integrando(x, y):
return np.exp(-x**2-y**2)
x_inf = 0
x_sup = 10
y_inf = 0
y_sup = 10
val, errabs = dblquad(integrando, x_inf, x_sup, lambda x : y_inf, lambda x: y_sup)
print(val, errabs)
Explanation: As shown in this example, we can use 'Inf' and '-Inf' as limits of the integral.
Higher-dimensional integrals are evaluated in a similar way:
End of explanation
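For completeness, a triple integral works the same way with tplquad; a small sketch (integrating exp(-x^2-y^2-z^2) over the unit cube, with the integrand taking its arguments in (z, y, x) order):
val, errabs = tplquad(lambda z, y, x: np.exp(-x**2 - y**2 - z**2),
                      0, 1,                            # limits of x
                      lambda x: 0, lambda x: 1,        # limits of y
                      lambda x, y: 0, lambda x, y: 1)  # limits of z
print(val, errabs)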
from scipy.integrate import odeint, ode
Explanation: Note how we need to use lambda functions for the limits of the integration in y, since these limits can in general be functions of x.
Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs: an API (Application Programming Interface) based on the odeint function, and an object-oriented API based on the ode class. Usually odeint is simpler to use, but the ode class offers finer levels of control.
Here we will use the odeint functions. For more information about the ode class, use help(ode). It does pretty much everything odeint does, but in a more object-oriented way.
To use odeint, first import it from the scipy.integrate module:
End of explanation
Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')
Explanation: A system of ODEs is usually formulated in standard form before being solved numerically. The standard form is:
$y' = f(y, t)$
where
$y = [y_1(t), y_2(t), ..., y_n(t)]$
and $f$ is a function that determines the derivatives of the functions $y_i(t)$. To solve the ODE we need to know the function $f$ and an initial condition, $y(0)$.
Note that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives.
Once the function f and the array y_0 have been defined, we can use the odeint function:
y_t = odeint(f, y_0, t)
where t is an array with the time coordinates for which the system of ODEs will be solved. The result y_t is an array with one row for each time point t, where each column corresponds to a solution y_i(t) for that time.
We will see how to implement f and y_0 in Python code in the following examples.
Example: double pendulum
Let us consider a physical problem: the double compound pendulum, described in more detail here (in English): http://en.wikipedia.org/wiki/Double_pendulum.
End of explanation
g = 9.82
L = 0.5
m = 0.1
def dx(x, t):
    """The right-hand side of the pendulum ODE."""
x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * np.cos(x1-x2) * x4)/(16 - 9 * np.cos(x1-x2)**2)
dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * np.cos(x1-x2) * x3)/(16 - 9 * np.cos(x1-x2)**2)
dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * np.sin(x1-x2) + 3 * (g/L) * np.sin(x1))
dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * np.sin(x1-x2) + (g/L) * np.sin(x2))
return [dx1, dx2, dx3, dx4]
# define the initial condition
x0 = [np.pi/4, np.pi/2, 0, 0]
# times at which the ODE will be solved: from 0 to 10 seconds
t = np.linspace(0, 10, 250)
# solve the system of ODEs
x = odeint(dx, x0, t)
# plot the angles as functions of time
fig, axes = plt.subplots(1,2, figsize=(12,4))
axes[0].plot(t, x[:, 0], 'r', label="theta1")
axes[0].plot(t, x[:, 1], 'b', label="theta2")
x1 = + L * np.sin(x[:, 0])
y1 = - L * np.cos(x[:, 0])
x2 = x1 + L * np.sin(x[:, 1])
y2 = y1 - L * np.cos(x[:, 1])
axes[1].plot(x1, y1, 'r', label="pendulum1")
axes[1].plot(x2, y2, 'b', label="pendulum2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([1, -1]);
Explanation: The Hamiltonian equations of motion for the pendulum are given by (see the wikipedia page):
${\dot \theta_1} = \frac{6}{m\ell^2} \frac{ 2 p_{\theta_1} - 3 \cos(\theta_1-\theta_2) p_{\theta_2}}{16 - 9 \cos^2(\theta_1-\theta_2)}$
${\dot \theta_2} = \frac{6}{m\ell^2} \frac{ 8 p_{\theta_2} - 3 \cos(\theta_1-\theta_2) p_{\theta_1}}{16 - 9 \cos^2(\theta_1-\theta_2)}.$
${\dot p_{\theta_1}} = -\frac{1}{2} m \ell^2 \left [ {\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + 3 \frac{g}{\ell} \sin \theta_1 \right ]$
${\dot p_{\theta_2}} = -\frac{1}{2} m \ell^2 \left [ -{\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + \frac{g}{\ell} \sin \theta_2 \right]$
To keep the Python code easy to read, let us introduce new variable names and vector notation: $x = [\theta_1, \theta_2, p_{\theta_1}, p_{\theta_2}]$
${\dot x_1} = \frac{6}{m\ell^2} \frac{ 2 x_3 - 3 \cos(x_1-x_2) x_4}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_2} = \frac{6}{m\ell^2} \frac{ 8 x_4 - 3 \cos(x_1-x_2) x_3}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_3} = -\frac{1}{2} m \ell^2 \left [ {\dot x_1} {\dot x_2} \sin (x_1-x_2) + 3 \frac{g}{\ell} \sin x_1 \right ]$
${\dot x_4} = -\frac{1}{2} m \ell^2 \left [ -{\dot x_1} {\dot x_2} \sin (x_1-x_2) + \frac{g}{\ell} \sin x_2 \right]$
End of explanation
from IPython.display import clear_output
import time
fig, ax = plt.subplots(figsize=(4,4))
for t_idx, tt in enumerate(t[:200]):
x1 = + L * np.sin(x[t_idx, 0])
y1 = - L * np.cos(x[t_idx, 0])
x2 = x1 + L * np.sin(x[t_idx, 1])
y2 = y1 - L * np.cos(x[t_idx, 1])
ax.cla()
ax.plot([0, x1], [0, y1], 'r.-')
ax.plot([x1, x2], [y1, y2], 'b.-')
ax.set_ylim([-1.5, 0.5])
ax.set_xlim([1, -1])
display(fig)
clear_output() # comentar si no se observa bien
time.sleep(1)
Explanation: Simple animation of the pendulum motion. We will see how to create better animations in Lecture 4.
End of explanation
def dy(y, t, zeta, w0):
El lado derecho de la EDO del oscilador amortiguado
x, p = y[0], y[1]
dx = p
dp = -2 * zeta * w0 * p - w0**2 * x
return [dx, dp]
# condición inicial:
y0 = [1.0, 0.0]
# tiempos en los que se resolvera la EDO
t = np.linspace(0, 10, 1000)
w0 = 2*np.pi*1.0
# resuelve el sistema de EDOs para tres valores diferentes del factor de amortiguamiento
y1 = odeint(dy, y0, t, args=(0.0, w0)) # no amortiguado
y2 = odeint(dy, y0, t, args=(0.2, w0)) # subamortiguado
y3 = odeint(dy, y0, t, args=(1.0, w0)) # amortiguado crítico
y4 = odeint(dy, y0, t, args=(5.0, w0)) # sobreamortiguado
fig, ax = plt.subplots()
ax.plot(t, y1[:,0], 'k', label="no amortiguado", linewidth=0.25)
ax.plot(t, y2[:,0], 'r', label="subamortiguado")
ax.plot(t, y3[:,0], 'b', label=u"amortiguado crítico")
ax.plot(t, y4[:,0], 'g', label="sobreamortiguado")
ax.legend();
Explanation: Example: Damped harmonic oscillator
ODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillator. This problem is well described on Wikipedia: http://en.wikipedia.org/wiki/Damping.
The equation of motion for the damped oscillator is:
$\displaystyle \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega^2_0 x = 0$
where $x$ is the position of the oscillator, $\omega_0$ is the frequency, and $\zeta$ is the damping ratio. To write this second-order ODE in standard form, we introduce $p = \frac{\mathrm{d}x}{\mathrm{d}t}$:
$\displaystyle \frac{\mathrm{d}p}{\mathrm{d}t} = - 2\zeta\omega_0 p - \omega^2_0 x$
$\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = p$
In the implementation of this example we will add extra arguments to the right-hand-side function of the ODE, rather than using global variables as in the previous example. As a consequence of the extra arguments, we need to pass the keyword argument args to the odeint function:
End of explanation
from scipy.fftpack import *
from numpy.fft import *
Explanation: Fourier transform
Fourier transforms are one of the universal tools of scientific computing, appearing again and again in different contexts. SciPy provides functions for accessing the classic FFTPACK library from NetLib, an efficient and very well tested FFT library written in FORTRAN. The SciPy API contains a few additional functions, but in general the API is closely related to the original FORTRAN library.
To use the fftpack module in a Python program, include
End of explanation
N = len(t)
dt = t[1]-t[0]
# calcula la transformada rápida de Fourier
# y2 es la solución del oscilador subamortiguado del ejemplo anterior
F = fft(y2[:,0])
# calcula las frecuencias para las componentes en F
w = fftfreq(N, dt)
fig, ax = plt.subplots(figsize=(9,3))
ax.plot(w, abs(F));
Explanation: To demonstrate how to compute a fast Fourier transform with SciPy, let us consider the FFT of the solution of the damped harmonic oscillator from the previous example:
End of explanation
indices = np.where(w > 0) # selecciona sólo los índices de elementos que corresponden a frecuencias positivas
w_pos = w[indices]
F_pos = F[indices]
fig, ax = plt.subplots(figsize=(9,3))
ax.plot(w_pos, abs(F_pos))
ax.set_xlim(0, 5);
Explanation: Since the signal is real, the spectrum is symmetric. Therefore, we only need to plot the part corresponding to the positive frequencies. To extract that part of w and F we can use some of the NumPy array indexing tricks we saw in Lecture 2:
End of explanation
A = np.array([[8,2,5], [1,5,2], [7,8,9]])
b = np.array([1,2,3])
x = sp.linalg.solve(A, b)
x
# verificamos la solución
(A @ x) - b
Explanation: As expected, we see a peak in the spectrum centred around 1, which is the frequency we used for the oscillator.
Linear algebra
The linear algebra module contains many matrix-related functions, including solving linear systems of equations, eigenvalue calculations, matrix functions (for example, matrix exponentiation), and several different decompositions (SVD, LU, Cholesky), etc.
Detailed documentation is available here: http://docs.scipy.org/doc/scipy/reference/linalg.html
We will see how to use some of these functions:
Systems of linear equations
Systems of linear equations of the form
$A x = b$
where $A$ is a matrix and $x, b$ are vectors, can be solved as follows:
End of explanation
A = np.random.rand(3,3)
B = np.random.rand(3,3)
X = sp.linalg.solve(A, B)
X
# verificamos la solución
(A @ X) - B
Explanation: We can also do the same with
$A X = B$,
where now $A, B$ and $X$ are matrices:
End of explanation
evals = sp.linalg.eigvals(A)
evals
evals, evecs = np.linalg.eig(A)
evals
evecs
Explanation: Eigenvalues and eigenvectors
The eigenvalue problem for the matrix $A$:
$\displaystyle A v_n = \lambda_n v_n$,
where $v_n$ is the $n$-th eigenvector and $\lambda_n$ is the $n$-th eigenvalue.
To compute the eigenvalues of a matrix we use eigvals, and to compute both the eigenvalues and the eigenvectors we can use the eig function:
End of explanation
n = 1
A @ evecs[:,n] - evals[n] * evecs[:,n]
Explanation: The eigenvector corresponding to the $n$-th eigenvalue (stored in evals[n]) is the $n$-th column of evecs, i.e. evecs[:,n]. To verify this, let us multiply the eigenvector by the matrix and compare the result with the product of the eigenvector and the eigenvalue:
End of explanation
# matriz inversa
sp.linalg.inv(A)
# determinante
sp.linalg.det(A)
# norma de distintos órdenes
sp.linalg.norm(A, ord=2), sp.linalg.norm(A, ord=np.Inf)
Explanation: There are also more specialised routines for solving eigenvalue problems, for example eigh for Hermitian matrices.
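For example, a minimal sketch with eigh on a symmetric (real Hermitian) matrix — the matrix S below is purely illustrative, and I assume, as elsewhere in this notebook, that sp refers to the scipy package:
import numpy as np
S = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric, so eigh applies
evals_h, evecs_h = sp.linalg.eigh(S)     # eigenvalues come back real and sorted in ascending order
evals_h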
Matrix operations
End of explanation
from scipy.sparse import *
# matriz densa
M = np.array([[1,0,0,0], [0,3,0,0], [0,1,1,0], [1,0,0,1]])
M
# convierte de densa a dispersa
A = csr_matrix(M); A
# convierte de dispersa a densa
A.todense()
Explanation: Sparse matrices
Sparse matrices are often useful in numerical simulations involving large systems, provided the problem can be described in matrix form where the matrices or vectors contain mostly zeros. SciPy has good support for sparse matrices, with basic linear algebra operations (such as equation solving, eigenvalue calculations, etc).
There are many possible strategies for storing sparse matrices efficiently. Some of the most common are the so-called coordinate form (COO), list-of-lists form (LIL), and compressed-sparse column, CSC (as well as compressed-sparse row, CSR). Each format has its advantages and disadvantages. Most computational algorithms (equation solving, matrix multiplication, etc.) can be implemented efficiently using the CSR or CSC formats, but they are not as intuitive or as easy to initialise. Therefore, a sparse matrix is often initially created in COO or LIL format (where we can add elements to the sparse matrix efficiently), and then converted to CSC or CSR before being used in real calculations.
For more information about sparse matrix formats, see for example: http://en.wikipedia.org/wiki/Sparse_matrix
<img src="./images/sparse.png" alt="" align="center"/>
When we create a sparse matrix we have to choose which format to store it in. For example,
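A minimal sketch of the coordinate (COO) format mentioned above, built directly from (row, column, value) triplets — the small matrix here is purely illustrative and is not used elsewhere in this notebook:
from scipy.sparse import coo_matrix
rows = [0, 1, 2, 2]
cols = [0, 1, 1, 2]
vals = [1, 3, 1, 1]
C = coo_matrix((vals, (rows, cols)), shape=(4, 4))  # efficient to build
C.tocsr()                                           # convert to CSR before heavy computations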
End of explanation
A = lil_matrix((4,4)) # matriz dispersa vacía de 4x4
A[0,0] = 1
A[1,1] = 3
A[2,2] = A[2,1] = 1
A[3,3] = A[3,0] = 1
A
A.todense()
Explanation: A more efficient way to create sparse matrices: create an empty matrix and fill it using matrix indexing (this avoids creating a potentially very large dense matrix)
End of explanation
A
A = csr_matrix(A); A
A = csc_matrix(A); A
Explanation: Converting between different sparse matrix formats:
End of explanation
A.todense()
(A * A).todense()
(A @ A).todense()
v = np.array([1,2,3,4])[:,np.newaxis]; v
# Multiplicación de matriz dispersa - vector denso
A * v
# el mismo resultado con matriz densa y vector denso
A.todense() * v
Explanation: We can compute with sparse matrices just as we do with dense matrices:
End of explanation
from scipy import optimize
Explanation: Optimization
Optimization (finding the maximum or minimum of a function) is a broad field in mathematics, and optimizing complicated functions or functions of many variables can be hard. Here we will only look at a few very simple cases. For a detailed introduction to optimization with SciPy, see: http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html
To use SciPy's optimization module, import the optimize module:
End of explanation
def f(x):
return 4*x**3 + (x-2)**2 + x**4
fig, ax = plt.subplots()
x = np.linspace(-5, 3, 100)
ax.plot(x, f(x));
Explanation: Finding minima
Let us first look at how to find the minimum of a simple function of a single variable:
End of explanation
x_min = optimize.fmin_bfgs(f, -2) # busca un mínimo local cerca -2
x_min
optimize.fmin_bfgs(f, 0.5) # busca un mínimo local cerca 0.5
Explanation: We can use the fmin_bfgs function to find the minimum of the function:
End of explanation
optimize.brent(f)
optimize.fminbound(f, -4, 2) # busca el mínimo en el intervalo (-4,2)
Explanation: We can also use the brent or fminbound functions. These functions have a slightly different syntax and use different algorithms.
End of explanation
omega_c = 3.0
def f(omega):
return np.tan(2*np.pi*omega) - omega_c/omega
fig, ax = plt.subplots(figsize=(10,4))
x = np.linspace(0, 3, 1000)
y = f(x)
mask = np.where(abs(y) > 50)
x[mask] = y[mask] = np.NaN # elimina líneas verticales cuando la función cambia de signo
ax.plot(x, y)
ax.plot([0, 3], [0, 0], 'k')
ax.set_ylim(-5,5);
optimize.fsolve(f, 0.1)
optimize.fsolve(f, 0.6)
optimize.fsolve(f, 1.1)
Explanation: Finding the roots of a function
To find the solutions of an equation of the form $f(x) = 0$ we can use the fsolve function. It requires an initial starting point:
End of explanation
from scipy.interpolate import *
def f(x):
return np.sin(x)
n = np.arange(0, 10)
x = np.linspace(0, 9, 100)
y_meas = f(n) + 0.1 * np.random.randn(len(n)) # simula medidas con error
y_real = f(x)
linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
fig, ax = plt.subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='datos con ruido')
ax.plot(x, y_real, 'k', lw=2, label=u'función exacta')
ax.plot(x, y_interp1, 'r', label=u'interpolación lineal')
ax.plot(x, y_interp2, 'g', label=u'interpolación cúbica')
ax.legend(loc=3);
Explanation: Interpolation
Interpolation is simple and convenient in SciPy: the interp1d function, when given arrays describing X and Y data, returns an object that behaves like a function which can be called for an arbitrary value of x (in the range covered by X) and returns the corresponding interpolated value of y:
End of explanation
from scipy import stats
# crea una variable aleatoria (discreta) con distribución poissoniana
X = stats.poisson(3.5) # distribución de fotonoes en un estado coherente n=3.5 fotones
n = np.arange(0,15)
fig, axes = plt.subplots(3,1, sharex=True)
# grafica la "probability mass function" (PMF)
axes[0].step(n, X.pmf(n))
# grafica la "commulative distribution function" (CDF)
axes[1].step(n, X.cdf(n))
# grafica histograma de 1000 realizaciones de la variable estocástica X
axes[2].hist(X.rvs(size=1000));
# crea una variable aleatoria (contínua) con distribución normal
Y = stats.norm()
x = np.linspace(-5,5,100)
fig, axes = plt.subplots(3,1, sharex=True)
# grafica la función distribución de probabilidad ("probability distribution function", PDF)
axes[0].plot(x, Y.pdf(x))
# grafica función de distribución acumulada ("commulative distributin function", CDF)
axes[1].plot(x, Y.cdf(x));
# grafica histograma de 1000 realizaciones aleatorias de la variable estocástica Y
axes[2].hist(Y.rvs(size=1000), bins=50);
Explanation: Statistics
The scipy.stats module contains a number of statistical distributions, statistical functions and tests. For a complete documentation of these features, see http://docs.scipy.org/doc/scipy/reference/stats.html.
There is also a very powerful Python package for statistical modelling called statsmodels. See http://statsmodels.sourceforge.net for more details.
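As a small additional sketch (not part of the original examples), distribution parameters can also be estimated from data with a distribution's fit method; here the estimates should come out close to loc=0, scale=1:
from scipy import stats
samples = stats.norm(loc=0.0, scale=1.0).rvs(size=1000)  # draw 1000 standard-normal samples
loc_est, scale_est = stats.norm.fit(samples)             # maximum-likelihood estimates of mean and std
loc_est, scale_est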
End of explanation
X.mean(), X.std(), X.var() # distribución de Poission
Y.mean(), Y.std(), Y.var() # distribucuón normal
Explanation: Statistics:
End of explanation
t_statistic, p_value = stats.ttest_ind(X.rvs(size=1000), X.rvs(size=1000))
print("t-statistic =", t_statistic)
print("valor p =", p_value)
Explanation: Statistical tests
Test whether two (independent) sets of random data come from the same distribution:
End of explanation
stats.ttest_1samp(Y.rvs(size=1000), 0.1)
Explanation: Since the p-value is very large, we cannot reject the null hypothesis that the two sets of random data have the same mean.
To test whether a single sample of data has mean 0.1 (the true mean is 0.0):
End of explanation
Y.mean()
stats.ttest_1samp(Y.rvs(size=1000), Y.mean())
Explanation: A low p-value means that we can reject the hypothesis that the mean of Y is 0.1.
End of explanation
# This cell applies the notebook style
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
Explanation: Further reading
http://www.scipy.org - The official SciPy project web page.
http://docs.scipy.org/doc/scipy/reference/tutorial/index.html - A tutorial on how to get started using SciPy.
https://github.com/scipy/scipy/ - The SciPy source code.
End of explanation |
6,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab Solution
Source
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: SOLUTION
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab Solution
Source: Yan LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: SOLUTION: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the final fully connected layer (the logits).
End of explanation
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
6,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation Instructions
Download and install miniconda
Step1: Parameters to change for the run
Step2: Download files
Step3: Use the data to generate a GSSHA model | Python Code:
from datetime import datetime, timedelta
import os
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
from gsshapy.modeling import GSSHAModel
Explanation: Installation Instructions
Download and install miniconda: https://conda.io/miniconda.html
Make sure you are using the conda-forge channel:
bash
$ conda config --add channels conda-forge
$ conda update --yes conda python
Install gsshapy:
bash
$ conda create -n gssha python=2
$ source activate gssha
(gssha)$ conda install --yes gsshapy jupyter
End of explanation
base_dir = os.getcwd()
gssha_model_name = 'philippines_example'
land_use_grid_id = 'glcf'
gssha_model_directory = os.path.join(base_dir, gssha_model_name)
# make the directory for the output
try:
os.mkdir(gssha_model_directory)
except OSError:
pass
Explanation: Parameters to change for the run:
End of explanation
base_boundary_url = ('https://github.com/CI-WATER/gsshapy/'
'raw/master/tests/grid_standard/'
'philippines/')
base_shape_filename = 'philippines_5070115700'
# retrieve the shapefile
shapefile_name = base_shape_filename+'.shp'
boundary_shapefile = urlretrieve(base_boundary_url+shapefile_name,
filename=os.path.join(gssha_model_directory, shapefile_name))[0]
for file_extension in ['.shx', '.prj', '.dbf']:
file_name = base_shape_filename+file_extension
urlretrieve(base_boundary_url+file_name,
filename=os.path.join(gssha_model_directory, file_name))
# retrieve the DEM
elevation_file_path = urlretrieve(base_boundary_url + 'gmted_elevation.tif',
filename=os.path.join(gssha_model_directory, 'gmted_elevation.tif'))[0]
# retrieve the land use grid
land_cover_url = ('https://github.com/CI-WATER/gsshapy/'
'raw/master/tests/grid_standard/'
'land_cover/LC_hd_global_2012.tif')
land_use_file_path = urlretrieve(land_cover_url,
filename=os.path.join(gssha_model_directory, 'LC_hd_global_2012.tif'))[0]
Explanation: Download files:
End of explanation
# generate GSSHA model files
model = GSSHAModel(project_name=gssha_model_name,
project_directory=gssha_model_directory,
mask_shapefile=boundary_shapefile,
elevation_grid_path=elevation_file_path,
land_use_grid=land_use_file_path,
land_use_grid_id=land_use_grid_id,
out_hydrograph_write_frequency=1,
load_rasters_to_db=False)
# add card for max depth
model.project_manager.setCard('FLOOD_GRID',
'{0}.fgd'.format(gssha_model_name),
add_quotes=True)
# TODO: Add depth grids to simulation
# MAP_FREQ, DEPTH
# add event for simulation
model.set_event(simulation_start=datetime.utcnow(),
simulation_duration=timedelta(seconds=2*60),
rain_intensity=24,
rain_duration=timedelta(seconds=1*60),
)
model.write()
Explanation: Use the data to generate a GSSHA model:
End of explanation |
6,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using find_MAP on models with discrete variables
Maximum a posterior(MAP) estimation, can be difficult in models which have discrete stochastic variables. Here we demonstrate the problem with a simple model, and present a few possible work arounds.
Step1: We define a simple model of a survey with one data point. We use a $Beta$ distribution for the $p$ parameter in a binomial. We would like to know both the posterior distribution for p, as well as the predictive posterior distribution over the survey parameter.
Step2: First let's try and use find_MAP.
Step3: find_map defaults to find the MAP for only the continuous variables we have to specify if we would like to use the discrete variables.
Step4: We set the disp variable to display a warning that we are using a non-gradient minimization technique, as discrete variables do not give much gradient information. To demonstrate this, if we use a gradient based minimization, fmin_bfgs, with various starting points we see that the map does not converge.
Step5: Once again, the gradient of surv_sim provides no information to the fmin routine, so surv_sim is only changed in a few cases, most of which are not correct. Manually inspecting the log probability, we can see that the maximum is somewhere around surv_sim$=14$ and p$=0.7$. If we employ a non-gradient minimization, such as fmin_powell (the default when discrete variables are detected), we might be able to get a better estimate.
Step6: For most starting values this converges to the maximum log likelihood of $\approx -3.15$, but for particularly low starting values of surv_sim, or values near surv_sim$=14$ there is still some noise. The scipy optimize package contains some more general 'global' minimization functions that we can utilize. The basinhopping algorithm restarts the optimization at places near found minimums. Because it has a slightly different interface to other minimization schemes we have to define a wrapper function.
Step7: By default basinhopping uses a gradient minimization technique, fmin_bfgs, resulting in inaccurate predictions many times. If we force basinhopping to use a non-gradient technique we get much better results.
Step8: Confident in our MAP estimate we can sample from the posterior, making sure we use the Metropolis method for our discrete variables. | Python Code:
import pymc3 as mc
Explanation: Using find_MAP on models with discrete variables
Maximum a posterior(MAP) estimation, can be difficult in models which have discrete stochastic variables. Here we demonstrate the problem with a simple model, and present a few possible work arounds.
End of explanation
alpha = 4
beta = 4
n = 20
yes = 15
with mc.Model() as model:
p = mc.Beta('p', alpha, beta)
surv_sim = mc.Binomial('surv_sim', n=n, p=p)
surv = mc.Binomial('surv', n=n, p=p, observed=yes)
Explanation: We define a simple model of a survey with one data point. We use a $Beta$ distribution for the $p$ parameter in a binomial. We would like to know both the posterior distribution for p, as well as the predictive posterior distribution over the survey parameter.
End of explanation
with model:
print(mc.find_MAP())
Explanation: First let's try and use find_MAP.
End of explanation
with model:
print(mc.find_MAP(vars=model.vars, disp=True))
Explanation: find_map defaults to find the MAP for only the continuous variables we have to specify if we would like to use the discrete variables.
End of explanation
with model:
for i in range(n+1):
s = {'p':0.5, 'surv_sim':i}
map_est = mc.find_MAP(start=s, vars=model.vars, fmin=mc.starting.optimize.fmin_bfgs)
print('surv_sim: %i->%i, p: %f->%f, LogP:%f'%(s['surv_sim'],
map_est['surv_sim'],
s['p'],
map_est['p'],
model.logpc(map_est)))
Explanation: We set the disp variable to display a warning that we are using a non-gradient minimization technique, as discrete variables do not give much gradient information. To demonstrate this, if we use a gradient based minimization, fmin_bfgs, with various starting points we see that the map does not converge.
End of explanation
with model:
for i in range(n+1):
s = {'p':0.5, 'surv_sim':i}
map_est = mc.find_MAP(start=s, vars=model.vars)
print('surv_sim: %i->%i, p: %f->%f, LogP:%f'%(s['surv_sim'],
map_est['surv_sim'],
s['p'],
map_est['p'],
model.logpc(map_est)))
Explanation: Once again, the gradient of surv_sim provides no information to the fmin routine, so surv_sim is only changed in a few cases, most of which are not correct. Manually inspecting the log probability, we can see that the maximum is somewhere around surv_sim$=14$ and p$=0.7$. If we employ a non-gradient minimization, such as fmin_powell (the default when discrete variables are detected), we might be able to get a better estimate.
End of explanation
def bh(*args,**kwargs):
result = mc.starting.optimize.basinhopping(*args, **kwargs)
# A `Result` object is returned, the argmin value can be in `x`
return result['x']
with model:
for i in range(n+1):
s = {'p':0.5, 'surv_sim':i}
map_est = mc.find_MAP(start=s, vars=model.vars, fmin=bh)
print('surv_sim: %i->%i, p: %f->%f, LogP:%f'%(s['surv_sim'],
int(map_est['surv_sim']),
s['p'],
map_est['p'],
model.logpc(map_est)))
Explanation: For most starting values this converges to the maximum log likelihood of $\approx -3.15$, but for particularly low starting values of surv_sim, or values near surv_sim$=14$ there is still some noise. The scipy optimize package contains some more general 'global' minimization functions that we can utilize. The basinhopping algorithm restarts the optimization at places near found minimums. Because it has a slightly different interface to other minimization schemes we have to define a wrapper function.
End of explanation
with model:
for i in range(n+1):
s = {'p':0.5, 'surv_sim':i}
map_est = mc.find_MAP(start=s, vars=model.vars, fmin=bh, minimizer_kwargs={"method": "Powell"})
print('surv_sim: %i->%i, p: %f->%f, LogP:%f'%(s['surv_sim'],
map_est['surv_sim'],
s['p'],
map_est['p'],
model.logpc(map_est)))
Explanation: By default basinhopping uses a gradient minimization technique, fmin_bfgs, resulting in inaccurate predictions many times. If we force basinhopping to use a non-gradient technique we get much better results.
End of explanation
with model:
step1 = mc.step_methods.HamiltonianMC(vars=[p])
step2 = mc.step_methods.Metropolis(vars=[surv_sim])
with model:
trace = mc.sample(25000,[step1,step2],start=map_est)
mc.traceplot(trace);
Explanation: Confident in our MAP estimate we can sample from the posterior, making sure we use the Metropolis method for our discrete variables.
End of explanation |
6,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GA4GH 1000 Genomes Reference Service Example
This example illustrates how to access the available reference sequences offered by a GA4GH instance.
Initialize the client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL.
Step1: Search reference sets
Reference sets collect together named reference sequences as part of released assemblies. The API provides methods for accessing reference sequences.
The Thousand Genomes data presented here are mapped to GRCh37, and so this server makes that reference genome available. Datasets and reference genomes are decoupled in the data model, so it is possible to use the same reference set in multiple datasets.
Here, we list the details of the Reference Set.
Step2: Obtaining individual Reference Sets by ID
The API can also obtain an individual reference set if the id is known. In this case, we can observe that only one is available. But in the future, more sets might be implemented.
Step3: Search References
From the previous call, we have obtained the parameter required to obtain references which belong to ncbi37. We use its unique identifier to constrain the search for named sequences. As there are 86 of them, we have only chosen to show a few.
Step4: Get Reference by ID
Reference sequence messages, like those above, can be referenced by their identifier directly. This identifier points to chromosome 1 in this server instance.
Step5: List Reference Bases
Using the reference_id from above we can construct a query to list the alleles present on a sequence using start and end offsets. | Python Code:
import ga4gh_client.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
Explanation: GA4GH 1000 Genomes Reference Service Example
This example illustrates how to access the available reference sequences offered by a GA4GH instance.
Initialize the client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL.
End of explanation
for reference_set in c.search_reference_sets():
ncbi37 = reference_set
print "name: {}".format(ncbi37.name)
print "ncbi_taxon_id: {}".format(ncbi37.ncbi_taxon_id)
print "description: {}".format(ncbi37.description)
print "source_uri: {}".format(ncbi37.source_uri)
Explanation: Search reference sets
Reference sets collect together named reference sequences as part of released assemblies. The API provides methods for accessing reference sequences.
The Thousand Genomes data presented here are mapped to GRCh37, and so this server makes that reference genome available. Datasets and reference genomes are decoupled in the data model, so it is possible to use the same reference set in multiple datasets.
Here, we list the details of the Reference Set.
End of explanation
reference_set = c.get_reference_set(reference_set_id=ncbi37.id)
print reference_set
Explanation: Obtaining individual Reference Sets by ID
The API can also obtain an individual reference set if the id is known. In this case, we can observe that only one is available. But in the future, more sets might be implemented.
End of explanation
counter = 0
for reference in c.search_references(reference_set_id="WyJOQ0JJMzciXQ"):
counter += 1
if counter > 5:
break
print reference
Explanation: Search References
From the previous call, we have obtained the parameter required to obtain references which belong to ncbi37. We use its unique identifier to constrain the search for named sequences. As there are 86 of them, we have only chosen to show a few.
End of explanation
reference = c.get_reference(reference_id="WyJOQ0JJMzciLCIxIl0")
print reference
Explanation: Get Reference by ID
Reference sequence messages, like those above, can be referenced by their identifier directly. This identifier points to chromosome 1 in this server instance.
End of explanation
reference_bases = c.list_reference_bases("WyJOQ0JJMzciLCIxIl0", start=15000, end= 16000)
print reference_bases
print len(reference_bases)
Explanation: List Reference Bases
Using the reference_id from above we can construct a query to list the alleles present on a sequence using start and end offsets.
End of explanation |
6,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Timothy Helton
<br>
<font color="red">
NOTE
Step1: Data Prep
Step2: 1. What was the average age in male and female athletes?
Step3: 2. What are the most common Dates of Birth?
To clarify - day, month, year
Step4: 3. How about the most common birthdays?
Most Common Month
Modst Common Day
Step5: 4. What are the Countries with more than 100 medals?
Step6: 5. Create a bar or pie chart for the results of the previous exercise.
Step7: 6. Male weightlifting competitions are divided into 8 weight classes. Can you estimate these weight classes by looking at the data? Hint
Step8: The predicted weight classes are displayed in text boxes outlined in crimson.
7. Generate a histogram of male and female height distribution among all participants.
Step9: 8. Using the Seaborn package create a box plot for male and female height distribution among all participants.
Step10: 9. Optional | Python Code:
from k2datascience import olympics
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
Explanation: Exercises
Timothy Helton
<br>
<font color="red">
NOTE:
<br>
This notebook uses code found in the
<a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/olympics.py">
<strong>k2datascience.olympics</strong></a> module.
To execute all the cells do one of the following items:
<ul>
<li>Install the k2datascience package to the active Python interpreter.</li>
<li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>
<li>Create a link to the olympics.py file in the same directory as this notebook.</li>
</font>
Imports
End of explanation
oly = olympics.Medals()
print(f'{"#" * 30}\nAthletes Data\n\n')
print(f'Data Types:\n{oly.athletes.dtypes}\n\n')
print(f'Data Shape:\n{oly.athletes.shape}\n\n')
print(f'Missing Data:\n{oly.athletes.isnull().sum()}\n\n')
oly.athletes.head()
oly.athletes.tail()
oly.athletes.describe()
print(f'\n\n\n{"#" * 30}\nCountries Data\n\n')
print(f'Data Types:\n{oly.countries.dtypes}\n\n')
print(f'Data Shape:\n{oly.countries.shape}\n\n')
print(f'Missing Data:\n{oly.countries.isnull().sum()}\n\n')
oly.countries.head()
oly.countries.tail()
oly.countries.describe()
Explanation: Data Prep
End of explanation
oly.calc_age_means()
Explanation: 1. What was the average age in male and female athletes?
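The calc_age_means helper hides the computation; a minimal pandas sketch of the underlying idea is below. The column names 'gender' and 'age' are my own assumptions — the actual names in the module's athletes table may differ:
# hypothetical column names - adjust to the actual athletes table
oly.athletes.groupby('gender')['age'].mean()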
End of explanation
oly.common_full_birthday()
Explanation: 2. What are the most common Dates of Birth?
To clarify - day, month, year
End of explanation
oly.common_month_day_birthday()
Explanation: 3. How about the most common birthdays?
Most Common Month
Modst Common Day
End of explanation
(oly.country_medals
.query('total > 100')
.sort_values('total', ascending=False))
Explanation: 4. What are the Countries with more than 100 medals?
End of explanation
oly.country_medals_plot()
Explanation: 5. Create a bar or pie chart for the results of the previous exercise.
End of explanation
oly.weightlifting_classes()
Explanation: 6. Male weightlifting competitions are divided into 8 weight classes. Can you estimate these weight classes by looking at the data? Hint: Create a scatter plot with Body weight on the x-axis and choose height as y.
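A minimal sketch of the hinted scatter plot, assuming hypothetical column names 'sport', 'weight' and 'height' (the module's actual names may differ):
import matplotlib.pyplot as plt
# hypothetical column names - adjust to the actual athletes table
lifters = oly.athletes[oly.athletes['sport'] == 'Weightlifting']
plt.scatter(lifters['weight'], lifters['height'], alpha=0.5)
plt.xlabel('Body weight (kg)')
plt.ylabel('Height (cm)')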
End of explanation
oly.height_histograms()
Explanation: The predicted weight classes are displayed in text boxes outlined in crimson.
7. Generate a histogram of male and female height distribution among all participants.
End of explanation
oly.height_boxplot()
Explanation: 8. Using the Seaborn package create a box plot for male and female height distribution among all participants.
End of explanation
oly.height_sport()
Explanation: 9. Optional: What else would you try?
Compare type of medal compared to height.
Compare type of medal compared to weight.
Compare height vs age.
Are younger generations taller, or are Olympic athletes in the top percentile regardless?
Which sport distributes the most medals.
Calculate the Body Mass Index (BMI) of the athletes.
Compare the BMI by sport.
Find the athlete that is closest to your height and mass.
Compare male and female heights for each sport.
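A minimal BMI sketch for the BMI-related ideas above, assuming hypothetical 'weight' (kg) and 'height' (cm) columns and a 'sport' column (the module's actual names and units may differ):
# hypothetical column names and units - adjust to the actual athletes table
height_m = oly.athletes['height'] / 100               # convert cm to m
oly.athletes['bmi'] = oly.athletes['weight'] / height_m ** 2
oly.athletes.groupby('sport')['bmi'].mean().sort_values()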
End of explanation |
6,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression
In regression we try to predict a continuous output variable. This can be most easily visualized in one dimension.
We will start with a very simple toy example. We will create a dataset out of a sinus curve with some noise
Step1: Linear Regression
One of the simplest models again is a linear one, that simply tries to predict the data as lying on a line. One way to find such a line is LinearRegression (also known as ordinary least squares).
The interface for LinearRegression is exactly the same as for the classifiers before, only that y now contains float values, instead of classes.
To apply a scikit-learn model, we need to make X be a 2d-array
Step2: We split our data in a training and a test set again
Step3: Then we can built our regression model
Step4: And predict. First let us try the training set
Step5: The line is able to capture the general slope of the data, but not many details.
Let's try the test set
Step6: Again, scikit-learn provides an easy way to evaluate the prediction quantitatively using the score method. For regression tasks, this is the R2 score. Another popular way would be the mean squared error.
Step7: KNeighborsRegression
As for classification, we can also use a neighbor based method for regression. We can simply take the output of the nearest point, or we could average several nearest points. This method is less popular for regression than for classification, but still a good baseline.
Step8: Again, let us look at the behavior on training and test set
Step9: On the training set, we do a perfect job
Step10: On the test set, we also do a better job of capturing the variation, but our estimates look much messier than before.
Let us look at the R2 score | Python Code:
x = np.linspace(-3, 3, 100)
print(x)
y = np.sin(4 * x) + x + np.random.uniform(size=len(x))
plt.plot(x, y, 'o')
Explanation: Regression
In regression we try to predict a continuous output variable. This can be most easily visualized in one dimension.
We will start with a very simple toy example. We will create a dataset out of a sinus curve with some noise:
End of explanation
print(x.shape)
X = x[:, np.newaxis]
print(X.shape)
Explanation: Linear Regression
One of the simplest models again is a linear one, that simply tries to predict the data as lying on a line. One way to find such a line is LinearRegression (also known as ordinary least squares).
The interface for LinearRegression is exactly the same as for the classifiers before, only that y now contains float values, instead of classes.
To apply a scikit-learn model, we need to make X be a 2d-array:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
Explanation: We split our data in a training and a test set again:
End of explanation
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
Explanation: Then we can built our regression model:
End of explanation
y_pred_train = regressor.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.legend(loc='best')
Explanation: And predict. First let us try the training set:
End of explanation
y_pred_test = regressor.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.legend(loc='best')
Explanation: The line is able to capture the general slope of the data, but not many details.
Let's try the test set:
End of explanation
regressor.score(X_test, y_test)
Explanation: Again, scikit-learn provides an easy way to evaluate the prediction quantitatively using the score method. For regression tasks, this is the R2 score. Another popular way would be the mean squared error.
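A quick sketch of the mean squared error alternative, using sklearn.metrics on the same test split and fitted regressor:
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, regressor.predict(X_test))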
End of explanation
from sklearn.neighbors import KNeighborsRegressor
kneighbor_regression = KNeighborsRegressor(n_neighbors=1)
kneighbor_regression.fit(X_train, y_train)
Explanation: KNeighborsRegression
As for classification, we can also use a neighbor based method for regression. We can simply take the output of the nearest point, or we could average several nearest points. This method is less popular for regression than for classification, but still a good baseline.
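For comparison, a short sketch of the averaging variant with three neighbours instead of one (the example below uses n_neighbors=1):
from sklearn.neighbors import KNeighborsRegressor
knn3 = KNeighborsRegressor(n_neighbors=3)  # average the 3 nearest training points
knn3.fit(X_train, y_train)
knn3.score(X_test, y_test)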
End of explanation
y_pred_train = kneighbor_regression.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.legend(loc='best')
Explanation: Again, let us look at the behavior on training and test set:
End of explanation
y_pred_test = kneighbor_regression.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.legend(loc='best')
Explanation: On the training set, we do a perfect job: each point is its own nearest neighbor!
End of explanation
kneighbor_regression.score(X_test, y_test)
Explanation: On the test set, we also do a better job of capturing the variation, but our estimates look much messier than before.
Let us look at the R2 score:
End of explanation |
6,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numbers of Patients Registered at a GP Practice
21/2/17
Number of patients registered with a particular GP who live in a particular LSOA (/via Carl Baker, HoC Library)
Demo sketch of opening data into an interactive Google Map.
Step1: Previously, I have created a simple sqlite3 database containing administrative open data from NHS Digital (database generator script).
Connect to local copy of the db.
Step2: Look up the GP practices on the Isle of Wight... (We could search for the code but I happen to know it...)
Step3: The folium library makes it easy to generate choropleth maps using the Leaflet javascript library. We can use various map tiles - the default is OpenStreetMap.
Step4: Generate a choropleth map for selected GP practice, colouring LSOA by number of folk registered to that practice. | Python Code:
#Original data source
#http://www.content.digital.nhs.uk/catalogue/PUB23139
#Get the datafile
!wget -P data http://www.content.digital.nhs.uk/catalogue/PUB23139/gp-reg-patients-LSOA-alt-tall.csv
#Import best ever data handling package
import pandas as pd
#Load downloaded CSV file
df=pd.read_csv('data/gp-reg-patients-LSOA-alt-tall.csv')
#Preview first few lines
df.head()
Explanation: Numbers of Patients Registered at a GP Practice
21/2/17
Number of patients registred with a particular GP who live in a particular LSOA (/via Carl Baker, HoC Library)
Demo sketch of opening data into an interactive Google Map.
End of explanation
import sqlite3
#Use homebrew database of NHS administrative info
con = sqlite3.connect("nhsadmin.sqlite")
Explanation: Previously, I have created a simple sqlite3 database containing administrative open data from NHS Digital (database generator script).
Connect to local copy of the db.
End of explanation
ccode='10L'
#Find
EPRACCUR='epraccur'
epraccur_iw = pd.read_sql_query('SELECT * FROM {typ} WHERE "Commissioner"="{ccode}"'.format(typ=EPRACCUR,ccode=ccode), con)
epraccur_iw
Explanation: Look up the GP practices on the Isle of Wight... (We could search for the code but I happen to know it...)
End of explanation
import folium
#color brewer palettes: ‘BuGn’, ‘BuPu’, ‘GnBu’, ‘OrRd’, ‘PuBu’, ‘PuBuGn’, ‘PuRd’, ‘RdPu’, ‘YlGn’, ‘YlGnBu’, ‘YlOrBr’, and ‘YlOrRd’.
#Fiona is a powerful library for geo wrangling with various dependencies that can make installation a pain...
#...but I have it installed already so I can use it to trivially find the centre of a set of boundaries in a geojson file
import fiona
#This is a canned demo - I happen to have the Local Authority Code for the Isle of Wight...
#...and copies of LSOA geojson files by LA
# (I could get LA code from the NHS addmin db)
geojson_local='../../IWgeodata/lsoa_by_lad/E06000046.json'
fi=fiona.open(geojson_local)
centre_lat,centre_lon=((fi.bounds[0]+fi.bounds[2])/2,(fi.bounds[1]+fi.bounds[3])/2)
Explanation: The folium library makes it easy to generate choropleth maps using the Leaflet javascript library. We can use various map tiles - the default is OpenStreetMap.
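For example, a different tile set can be requested via the tiles argument — a minimal sketch; which named tile sets ship with folium depends on the installed version, so 'Stamen Toner' here is an assumption:
# a minimal sketch - switch the base map tiles (tile-set availability depends on the folium version)
m = folium.Map(location=[centre_lon, centre_lat], zoom_start=11, tiles='Stamen Toner')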
End of explanation
#Add a widget in that lets you select the GP practice by name then fudge the lookup to practice code
#We could also add another widget to select eg Male | Female | All
def generate_map(gpcode):
gpmap = folium.Map([centre_lon,centre_lat], zoom_start=11)
gpmap.choropleth(
geo_path=geojson_local,
data=df[df['PRACTICE_CODE']==gpcode],
columns=['LSOA_CODE', 'ALL_PATIENTS'],
key_on='feature.properties.LSOA11CD',
fill_color='PuBuGn', fill_opacity=0.7,
legend_name='Number of people on list in LSOA'
)
return gpmap
def generate_map_from_gpname(gpname):
gpcode=epraccur_iw[epraccur_iw['Name']==gpname]['Organisation Code'].iloc[0]
return generate_map(gpcode)
#iw_gps=epraccur_iw['Organisation Code'].unique().tolist()
iw_gps=epraccur_iw['Name'].unique().tolist()
iw_gps[:3],len(iw_gps)
from ipywidgets import interact
interact(generate_map_from_gpname, gpname=iw_gps);
Explanation: Generate a choropleth map for selected GP practice, colouring LSOA by number of folk registered to that practice.
End of explanation |
6,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train ML model on Cloud AI Platform
This notebook shows how to
Step1: Try out model file
<b>Note</b> Once the training starts, Interrupt the Kernel (from the notebook ribbon bar above). Because it processes the entire dataset, this will take a long time on the relatively small machine on which you are running Notebooks.
Step2: Create Docker container
Package up the trainer file into a Docker container and submit the image.
Step3: <b>Note</b>
Step4: Deploy to AI Platform
Submit a training job using this custom container that we have just built. After you submit the job, monitor it here. | Python Code:
import logging
import nbformat
import sys
import yaml
def write_parameters(cell_source, params_yaml, outfp):
with open(params_yaml, 'r') as ifp:
y = yaml.safe_load(ifp)
# print out all the lines in notebook
write_code(cell_source, 'PARAMS from notebook', outfp)
# print out YAML file; this will override definitions above
formats = [
'{} = {}', # for integers and floats
'{} = "{}"', # for strings
]
write_code(
'\n'.join([
formats[type(value) is str].format(key, value) for key, value in y.items()]),
'PARAMS from YAML',
outfp
)
def write_code(cell_source, comment, outfp):
lines = cell_source.split('\n')
if len(lines) > 0 and lines[0].startswith('%%'):
prefix = '#'
else:
prefix = ''
print("### BEGIN {} ###".format(comment), file=outfp)
for line in lines:
line = prefix + line.replace('print(', 'logging.info(')
if len(line) > 0 and (line[0] == '!' or line[0] == '%'):
print('#' + line, file=outfp)
else:
print(line, file=outfp)
print("### END {} ###\n".format(comment), file=outfp)
def convert_notebook(notebook_filename, params_yaml, outfp):
write_code('import logging', 'code added by notebook conversion', outfp)
with open(notebook_filename) as ifp:
nb = nbformat.reads(ifp.read(), nbformat.NO_CONVERT)
for cell in nb.cells:
if cell.cell_type == 'code':
if 'tags' in cell.metadata and 'display' in cell.metadata.tags:
logging.info('Ignoring cell # {} with display tag'.format(cell.execution_count))
elif 'tags' in cell.metadata and 'parameters' in cell.metadata.tags:
logging.info('Writing params cell # {}'.format(cell.execution_count))
write_parameters(cell.source, params_yaml, outfp)
else:
logging.info('Writing model cell # {}'.format(cell.execution_count))
write_code(cell.source, 'Cell #{}'.format(cell.execution_count), outfp)
import os
INPUT='../../06_feateng_keras/solution/taxifare_fc.ipynb'
PARAMS='./notebook_params.yaml'
OUTDIR='./container/trainer'
!mkdir -p $OUTDIR
OUTFILE=os.path.join(OUTDIR, 'model.py')
!touch $OUTDIR/__init__.py
with open(OUTFILE, 'w') as ofp:
#convert_notebook(INPUT, PARAMS, sys.stdout)
convert_notebook(INPUT, PARAMS, ofp)
#!cat $OUTFILE
Explanation: Train ML model on Cloud AI Platform
This notebook shows how to:
* Export training code from a Keras notebook into a trainer file
* Create a Docker container based on a DLVM container
* Deploy training job to cluster
TODO: Export the data from BigQuery to GCS
Navigate to export_data.ipynb
Update 'your-gcs-project-here' to your GCP project name
Run all the notebook cells
TODO: Edit notebook parameters
Navigate to notebook_params.yaml
Replace the bucket name with your own bucket containing your model (likely gcp-project with -ml at the end)
Save the notebook
Return to this notebook and continue
Export code from notebook
This notebook extracts code from a notebook and creates a Python file suitable for use as model.py
End of explanation
!python3 $OUTFILE
Explanation: Try out model file
<b>Note</b> Once the training starts, Interrupt the Kernel (from the notebook ribbon bar above). Because it processes the entire dataset, this will take a long time on the relatively small machine on which you are running Notebooks.
End of explanation
%%writefile container/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
#RUN python3 -m pip install --upgrade --quiet tf-nightly-2.0-preview
RUN python3 -m pip install --upgrade --quiet cloudml-hypertune
COPY trainer /trainer
CMD ["python3", "/trainer/model.py"]
%%writefile container/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=serverlessml_training_container
#export IMAGE_TAG=$(date +%Y%m%d_%H%M%S)
#export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME:$IMAGE_TAG
export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t $IMAGE_URI ./
echo "Pushing $IMAGE_URI"
docker push $IMAGE_URI
!find container
Explanation: Create Docker container
Package up the trainer file into a Docker container and submit the image.
End of explanation
%%bash
cd container
bash push_docker.sh
Explanation: <b>Note</b>: If you get a permissions error when running push_docker.sh from Notebooks, do it from CloudShell:
* Open CloudShell on the GCP Console
* git clone https://github.com/GoogleCloudPlatform/training-data-analyst
* cd training-data-analyst/quests/serverlessml/07_caip/solution/container
* bash push_docker.sh
This next step takes 5 - 10 minutes to run
End of explanation
%%bash
JOBID=serverlessml_$(date +%Y%m%d_%H%M%S)
REGION=us-central1
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$(gcloud config list project --format "value(core.project)")-ml
#IMAGE=gcr.io/deeplearning-platform-release/tf2-cpu
IMAGE=gcr.io/$PROJECT_ID/serverlessml_training_container
gcloud beta ai-platform jobs submit training $JOBID \
--staging-bucket=gs://$BUCKET --region=$REGION \
--master-image-uri=$IMAGE \
--master-machine-type=n1-standard-4 --scale-tier=CUSTOM
Explanation: Deploy to AI Platform
Submit a training job using this custom container that we have just built. After you submit the job, monitor it here.
End of explanation |
6,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WDigest Downgrade
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Look for any process updating UseLogonCredential registry key value
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows registry | Microsoft-Windows-Sysmon/Operational | Process modified Windows registry key value | 13 | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: WDigest Downgrade
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/05/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might have updated the property value UseLogonCredential of HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest to 1 in order to be able to extract clear text passwords from memory contents of lsass.
Technical Context
Windows 8.1 introduced a registry setting that allows for disabling the storage of the user’s logon credential in clear text for the WDigest provider.
Offensive Tradecraft
This setting can be modified in the property UseLogonCredential for the registry key HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest.
If this key does not exist, you can create it and set it to 1 to enable clear text passwords.
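As a quick local sanity check (a hypothetical helper, not part of the original analytic), the current value can be read on a Windows host with the standard-library winreg module:
```python
# Hypothetical read-only check of the WDigest UseLogonCredential value (Windows only).
import winreg

def read_uselogoncredential():
    path = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, "UseLogonCredential")
            return value  # 1 means clear-text credential caching is enabled
    except FileNotFoundError:
        return None  # the value (or key) is not present

print(read_uselogoncredential())
```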
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/05_defense_evasion/SDWIN-190518201922.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/defense_evasion/host/empire_wdigest_downgrade.tar.gz |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/defense_evasion/host/empire_wdigest_downgrade.tar.gz"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, TargetObject
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 13
AND TargetObject LIKE "%UseLogonCredential"
AND Details = 1
'''
)
df.show(10,False)
Explanation: Analytic I
Look for any process updating UseLogonCredential registry key value
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows registry | Microsoft-Windows-Sysmon/Operational | Process modified Windows registry key value | 13 |
End of explanation |
6,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 2
Imports
Step2: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 2
Imports
End of explanation
n=10
a=np.arange(1,n+1,1)
a.cumprod()
def np_fact(n):
Compute n! = n*(n-1)*...*1 using Numpy.
if n==0: #0! is 1
return 1
elif n==1: #1! is 1
return 1
else:
a=np.arange(2,n+1,1)
return a.cumprod()[n-2] #returns factorial with numpy at the last index
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
End of explanation
def loop_fact(n):
Compute n! using a Python for loop.
fact=1
for i in range(1,n+1): #using a for loop to calculate the factorial starting at 1
fact*=i
return fact
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Write a function that computes the factorial of small numbers using a Python loop.
End of explanation
%timeit -n1 -r1 np_fact(500000) #timing the time to complete calculation
%timeit -n1 -r1 loop_fact(500000)
Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:
```python
%timeit -n1 -r1 function_to_time()
```
End of explanation |
6,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
token_up = {}
token_up['.'] = "||Period||"
token_up[','] = "||Comma||"
token_up['--'] = "||Dash||"
token_up[')'] = "||Right_Parentheses||"
token_up['"'] = "||Quotation_Mark||"
token_up['\n'] = "||Return||"
token_up['!'] = "||Exclamation_mark||"
token_up['('] = "||Left_Parentheses||"
token_up[';'] = "||Semicolon||"
token_up['?'] = "||Question_mark||"
return token_up
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
return None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
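One possible way to fill in the TODO placeholder above (a sketch for TF 1.x, not part of the original template; the tensor names follow the requirements listed in the explanation):
```python
# Sketch: three placeholders with the names the later cells look up by name.
def get_inputs():
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate
```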
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
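A sketch of one common way to implement this (TF 1.x contrib API; the number of stacked layers is an arbitrary choice, not something the template prescribes):
```python
# Sketch: stack BasicLSTMCells and expose the zero state under the name 'initial_state'.
def get_init_cell(batch_size, rnn_size):
    num_layers = 2  # arbitrary choice
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
    return cell, initial_state
```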
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
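A minimal sketch of the embedding lookup (the uniform initialization range is an arbitrary choice):
```python
# Sketch: trainable embedding matrix plus a lookup of the input ids.
def get_embed(input_data, vocab_size, embed_dim):
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)
```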
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
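A sketch that follows the two bullet points above (again, one possible implementation rather than the prescribed one):
```python
# Sketch: run the cell with dynamic_rnn and name the final state for later retrieval.
def build_rnn(cell, inputs):
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state
```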
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
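A sketch that simply chains the pieces described above (assuming get_embed and build_rnn are filled in as in the earlier sketches):
```python
# Sketch: embedding -> RNN -> linear (no activation) projection to vocab_size logits.
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    embed = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state
```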
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
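One possible sketch. Note that it wraps the very last target around to the first word of the text (so the last target in the printed example would be 1 rather than 13); whether to wrap or to drop that element is a minor design choice:
```python
# Sketch: reshape the id sequence into (n_batches, 2, batch_size, seq_length).
def get_batches(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)  # targets = inputs shifted by one position
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))
```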
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
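A straightforward sketch using the four tensor names listed above:
```python
# Sketch: fetch the named tensors back out of the restored graph.
def get_tensors(loaded_graph):
    return (loaded_graph.get_tensor_by_name('input:0'),
            loaded_graph.get_tensor_by_name('initial_state:0'),
            loaded_graph.get_tensor_by_name('final_state:0'),
            loaded_graph.get_tensor_by_name('probs:0'))
```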
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
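A short sketch that samples from the probability distribution instead of always taking the argmax (sampling keeps the generated script from looping on the same words):
```python
# Sketch: sample a word id according to the predicted probabilities.
def pick_word(probabilities, int_to_vocab):
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]
```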
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
6,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ChemSpiPy
Step1: Then connect to ChemSpider by creating a ChemSpider instance using your security token
Step2: All your interaction with the ChemSpider database should now happen through this ChemSpider object, cs.
Retrieve a Compound
Retrieving information about a specific Compound in the ChemSpider database is simple.
Let’s get the Compound with ChemSpider ID 2157
Step3: Now we have a Compound object called comp. We can get various identifiers and calculated properties from this object
Step4: Search for a name
What if you don’t know the ChemSpider ID of the Compound you want? Instead use the search method | Python Code:
from chemspipy import ChemSpider
Explanation: ChemSpiPy: Getting Started
Before we start:
Make sure you have installed ChemSpiPy.
Obtain a security token from the ChemSpider web site.
First Steps
Start by importing ChemSpider:
End of explanation
# Tip: Store your security token as an environment variable to reduce the chance of accidentally sharing it
import os
mytoken = os.environ['CHEMSPIDER_SECURITY_TOKEN']
cs = ChemSpider(security_token=mytoken)
Explanation: Then connect to ChemSpider by creating a ChemSpider instance using your security token:
End of explanation
comp = cs.get_compound(2157)
comp
Explanation: All your interaction with the ChemSpider database should now happen through this ChemSpider object, cs.
Retrieve a Compound
Retrieving information about a specific Compound in the ChemSpider database is simple.
Let’s get the Compound with ChemSpider ID 2157:
End of explanation
print(comp.molecular_formula)
print(comp.molecular_weight)
print(comp.smiles)
print(comp.common_name)
Explanation: Now we have a Compound object called comp. We can get various identifiers and calculated properties from this object:
End of explanation
for result in cs.search('glucose'):
print(result)
Explanation: Search for a name
What if you don’t know the ChemSpider ID of the Compound you want? Instead use the search method:
End of explanation |
6,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Smith Sphere
The Smith chart is a nomogram used frequently in RF/Microwave Engineering. Since its inception it has been recognised that the chart can be projected onto the Riemann sphere [1].
[1] H. A. Wheeler, “Reflection Charts Relating to Impedance Matching,” IEEE Transactions on Microwave Theory and Techniques, vol. 32, no. 9, pp. 1008–1021, Sep. 1984.
Step1: Starting with an impedance vector $z$, defined by a vector in the impedance plane $B_z$, this vector has two scalar components ( $z^r$, $z^x$) known as resistance and reactance
Step2: stereographically up-projecting this onto the sphere to point $p$,
Step3: If we stereo-project this back onto the impedance plane | Python Code:
#from IPython.display import SVG
#SVG('pics/smith_sphere.svg')
from galgebra.printer import Format, Fmt
from galgebra import ga
from galgebra.ga import Ga
from sympy import *
Format()
(o3d,er,ex,es) = Ga.build('e_r e_x e_s',g=[1,1,1])
(o2d,zr,zx) = Ga.build('z_r z_x',g=[1,1])
Bz = er^ex # impedance plane
Bs = es^ex # reflection coefficient plane
Bx = er^es
I = o3d.I()
def down(p, N):
'''
stereographically project a vector in G3 downto the bivector N
'''
n= -1*N.dual()
return -(n^p)*(n-n*(n|p)).inv()
def up(p):
'''
stereographically project a vector in G2 upto the space G3
'''
if (p^Bz).obj == 0:
N = Bz
elif (p^Bs).obj == 0:
N = Bs
n = -N.dual()
return n + 2*(p*p + 1).inv()*(p-n)
a,b,c,z,s,n = [o3d.mv(k,'vector') for k in ['a','b','c','z','s' ,'n']]
Explanation: Smith Sphere
The Smith chart is a nomogram used frequently in RF/Microwave Engineering. Since its inception it has been recognised that the chart can be projected onto the Riemann sphere [1].
[1] H. A. Wheeler, “Reflection Charts Relating to Impedance Matching,” IEEE Transactions on Microwave Theory and Techniques, vol. 32, no. 9, pp. 1008–1021, Sep. 1984.
End of explanation
Bz.dual()
Bz.is_zero()
z = z.proj([er,ex])
z
Explanation: Starting with an impedance vector $z$, defined by a vector in the impedance plane $B_z$, this vector has two scalar components ( $z^r$, $z^x$) known as resistance and reactance
End of explanation
p = up(z)
p
simplify(p.norm2())
Explanation: stereographically up-projecting this onto the sphere to point $p$,
End of explanation
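For reference, the up() and down() helpers defined at the top of the notebook implement, written out as formulas (with $n$ the vector dual to the chosen plane $N$):
$$\operatorname{up}(z) = n + \frac{2}{z^{2} + 1}\,(z - n), \qquad \operatorname{down}(p) = -(n \wedge p)\,\bigl(n - n\,(n \cdot p)\bigr)^{-1}$$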
down(p, Bz)
down(p,Bs).simplify()
(z-er)*(z+er).inv()
p
R=((-pi/4)*Bx).exp()
R
R*p*R.rev()
down(R*p*R.rev(),Bz)
Explanation: If we stereo-project this back onto the impedance plane
End of explanation |
6,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv("hanford.csv")
df.head()
Explanation: 2. Read in the hanford.csv file
End of explanation
df.describe()
Explanation: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
df.plot(kind='scatter', x='Exposure', y='Mortality')
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, slope = lm.params
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red")
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation
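The r^2 requested in this step is not actually computed in the cell above; one small way to obtain it from the already-fitted statsmodels result (an addition, not in the original notebook) is:
```python
# Coefficient of determination of the fitted OLS model
lm.rsquared
```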
mortality_rate = slope * 10 + intercept
mortality_rate
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
6,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To Mars with Python using poliastro
<img src="http
Step1: First
Step2: Second
Step3: Third
Step5: ...y es Python puro!
Trick:
Step6: Quinto | Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import astropy.units as u
from astropy import time
from poliastro import iod
from poliastro.plotting import plot
from poliastro.bodies import Sun, Earth
from poliastro.twobody import State
from poliastro import ephem
from jplephem.spk import SPK
ephem.download_kernel("de421")
Explanation: To Mars with Python using poliastro
<img src="http://poliastro.github.io/_images/logo_text.svg" width="70%" />
Juan Luis Cano Rodríguez juanlu@pybonacci.org
2016-04-09 PyData Madrid 2016
...in 5 minutes :)
Warning: This is rocket science!
What is Astrodynamics?
A branch of Mechanics (itself a branch of Physics) that studies practical problems concerning the motion of rockets and other vehicles in space
What is poliastro?
A pure Python library for Astrodynamics
http://poliastro.github.io/
Let's go to Mars!
End of explanation
r = [-6045, -3490, 2500] * u.km
v = [-3.457, 6.618, 2.533] * u.km / u.s
ss = State.from_vectors(Earth, r, v)
with plt.style.context('pybonacci'):
plot(ss)
Explanation: First: define the orbit
End of explanation
epoch = time.Time("2015-06-21 16:35")
r_, v_ = ephem.planet_ephem(ephem.EARTH, epoch)
r_
v_.to(u.km / u.s)
Explanation: Second: locate the planets
End of explanation
date_launch = time.Time('2011-11-26 15:02', scale='utc')
date_arrival = time.Time('2012-08-06 05:17', scale='utc')
tof = date_arrival - date_launch
r0, _ = ephem.planet_ephem(ephem.EARTH, date_launch)
r, _ = ephem.planet_ephem(ephem.MARS, date_arrival)
(v0, v), = iod.lambert(Sun.k, r0, r, tof)
v0
v
Explanation: Third: compute the trajectory
End of explanation
def go_to_mars(offset=500., tof_=6000.):
# Initial data
N = 50
date_launch = time.Time('2016-03-14 09:31', scale='utc') + ((offset - 500.) * u.day)
date_arrival = time.Time('2016-10-19 16:00', scale='utc') + ((offset - 500.) * u.day)
tof = tof_ * u.h
# Calculate vector of times from launch and arrival Julian days
jd_launch = date_launch.jd
jd_arrival = jd_launch + tof.to(u.day).value
jd_vec = np.linspace(jd_launch, jd_arrival, num=N)
times_vector = time.Time(jd_vec, format='jd')
rr_earth, vv_earth = ephem.planet_ephem(ephem.EARTH, times_vector)
rr_mars, vv_mars = ephem.planet_ephem(ephem.MARS, times_vector)
# Compute the transfer orbit!
r0 = rr_earth[:, 0]
rf = rr_mars[:, -1]
(va, vb), = iod.lambert(Sun.k, r0, rf, tof)
ss0_trans = State.from_vectors(Sun, r0, va, date_launch)
ssf_trans = State.from_vectors(Sun, rf, vb, date_arrival)
# Extract whole orbit of Earth, Mars and transfer (for plotting)
rr_trans = np.zeros_like(rr_earth)
rr_trans[:, 0] = r0
for ii in range(1, len(jd_vec)):
tof = (jd_vec[ii] - jd_vec[0]) * u.day
rr_trans[:, ii] = ss0_trans.propagate(tof).r
# Better compute backwards
jd_init = (date_arrival - 1 * u.year).jd
jd_vec_rest = np.linspace(jd_init, jd_launch, num=N)
times_rest = time.Time(jd_vec_rest, format='jd')
rr_earth_rest, _ = ephem.planet_ephem(ephem.EARTH, times_rest)
rr_mars_rest, _ = ephem.planet_ephem(ephem.MARS, times_rest)
# Plot figure
# To add arrows:
# https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/streamplot.py#L140
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
def plot_body(ax, r, color, size, border=False, **kwargs):
Plots body in axes object.
return ax.plot(*r[:, None], marker='o', color=color, ms=size, mew=int(border), **kwargs)
# I like color
color_earth0 = '#3d4cd5'
color_earthf = '#525fd5'
color_mars0 = '#ec3941'
color_marsf = '#ec1f28'
color_sun = '#ffcc00'
color_orbit = '#888888'
color_trans = '#444444'
# Plotting orbits is easy!
ax.plot(*rr_earth.to(u.km).value, color=color_earth0)
ax.plot(*rr_mars.to(u.km).value, color=color_mars0)
ax.plot(*rr_trans.to(u.km).value, color=color_trans)
ax.plot(*rr_earth_rest.to(u.km).value, ls='--', color=color_orbit)
ax.plot(*rr_mars_rest.to(u.km).value, ls='--', color=color_orbit)
# But plotting planets feels even magical!
plot_body(ax, np.zeros(3), color_sun, 16)
plot_body(ax, r0.to(u.km).value, color_earth0, 8)
plot_body(ax, rr_earth[:, -1].to(u.km).value, color_earthf, 8)
plot_body(ax, rr_mars[:, 0].to(u.km).value, color_mars0, 8)
plot_body(ax, rf.to(u.km).value, color_marsf, 8)
# Add some text
ax.text(-0.75e8, -3.5e8, -1.5e8, "ExoMars mission:\nfrom Earth to Mars",
size=20, ha='center', va='center', bbox={"pad": 30, "lw": 0, "fc": "w"})
ax.text(r0[0].to(u.km).value * 2.4, r0[1].to(u.km).value * 0.4, r0[2].to(u.km).value * 1.25,
"Earth at launch\n({})".format(date_launch.to_datetime().strftime("%d %b")),
ha="left", va="bottom", backgroundcolor='#ffffff')
ax.text(rf[0].to(u.km).value * 1.1, rf[1].to(u.km).value * 1.1, rf[2].to(u.km).value,
"Mars at arrival\n({})".format(date_arrival.to_datetime().strftime("%d %b")),
ha="left", va="top", backgroundcolor='#ffffff')
ax.text(-1.9e8, 8e7, 1e8, "Transfer\norbit", ha="right", va="center", backgroundcolor='#ffffff')
# Tune axes
ax.set_xlim(-3e8, 3e8)
ax.set_ylim(-3e8, 3e8)
ax.set_zlim(-3e8, 3e8)
# And finally!
ax.view_init(30, 260)
plt.show()
#fig.savefig("trans_30_260.png", bbox_inches='tight')
#return fig, ax
go_to_mars()
Explanation: ...and it's pure Python!
Trick: numba
Fourth: let's go to Mars!
End of explanation
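The "trick" mentioned above is numba: pure-Python numerical kernels can be JIT-compiled to run at near-native speed. A tiny illustration (not from the original talk):
```python
# Illustrative only: JIT-compile a small numerical helper with numba.
from numba import njit
import numpy as np

@njit
def norm3(v):
    return (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5

norm3(np.array([1.0, 2.0, 2.0]))  # compiled on first call, fast afterwards
```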
%matplotlib inline
from ipywidgets import interactive
from IPython.display import display
w = interactive(go_to_mars, offset=(0., 1000.), tof_=(100., 12000.))
display(w)
Explanation: Fifth: Let's make it interactive!!!1!
End of explanation |
6,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Label Relations
Multi-label classification tends to have problems with overfitting and underfitting classifiers when the label space is large, especially in problem transformation approaches. A well known approach to remedy this is to split the problem into subproblems with smaller label subsets to improve the generalization quality.
The scikit-multilearn library is the first Python library to provide this functionality; this guide will take you through using different libraries for label space division. Let's start with loading up the well-cited emotions dataset that we use throughout the User Guide
Step1: Label relationships can be exploited in a handful of ways
Step2: This graph builder constructs a Label Graph based on the output matrix where two label nodes are connected when at least one sample is labeled with both of them. If the graph is weighted, the weight of an edge between two label nodes is the number of samples labeled with these two labels. Self-edge weights contain the number of samples with a given label.
Step3: The dictionary edge_map contains the adjacency matrix in dictionary-of-keys format, each key is a label number tuple, weight is the number of samples with the two labels assigned. Its values will be used by all of the supported Label Graph Clusterers below
Step4: Using iGraph
To use igraph with scikit-multilearn you need to install the igraph python package
Step5: Igraph provides a set of community detection methods, out of which the following are supported
Step6: Stochastic Blockmodel from graph-tool
Another approach to label space division is to fit a Stochastic Block Model to the label graph. An efficient implementation of the Stochastic Block Model in Python is provided by graphtool. Note that using graphtool incurs GPL requirements on your code.
Step7: The StochasticBlockModel class fits the model and specifies the variant of SBM to be used, it can include
Step8: The above partition was generated by the model, let's visualize it.
Step9: We can use this clusterer as an argument for the label space partitioning classifier, as we did not enable overlapping communities
Step10: Now let's try to go with the same variant of the model, but now we allow overlapping communities
Step11: We have a division, note that we train the same number of classifiers as in the partitioning case. Let's visualize label membership likelihoods alongside the division
Step12: We can now perform classification, but for it to work we now need to use a classifier that can decide whether to assign a label if more than one subclassifiers were making a decision about the label. We will use the MajorityVotingClassifier which makes a decision if the majority of classifiers decide to assign the label.
Step13: Using scikit-learn clusterers
Scikit-learn offers a variety of clustering methods, some of which have been applied to dividing the label space into subspaces in multi-label classification. The main problem which often concerns these approaches is the need to empirically fit the parameter of the number of clusters to select.
scikit-multilearn provides a clusterer which does not build a graph; instead it employs the chosen scikit-learn clusterer on transposed label assignment vectors, i.e. a vector for a given label is a vector of all samples' assignment values. To use this approach, just import a scikit-learn clusterer and pass its instance as a parameter.
Step14: Fixed partition based on expert knowledge
There may be cases where we know something about the label relationships based on expert or intuitive knowledge, or perhaps our knowledge comes from a different machine learning model, or it is crowdsourced; in all of these cases, scikit-multilearn lets you use this knowledge to your advantage. Let's see this on our example data set. It has six labels that denote emotions
Step15: Looking at the label names we might see that the labels quiet-still and angry-aggressive are contradictory, but one can be amazed both in the happy/relaxing context and in the sad/aggressive context. Also one can be easily pleased/relaxed and/or calm but not actually amazed. We thus come up with a new intuitive label space division | Python Code:
from skmultilearn.dataset import load_dataset
X_train, y_train, feature_names, label_names = load_dataset('emotions', 'train')
X_test, y_test, _, _ = load_dataset('emotions', 'test')
Explanation: Exploring Label Relations
Multi-label classification tends to have problems with overfitting and underfitting classifiers when the label space is large, especially in problem transformation approaches. A well known approach to remedy this is to split the problem into subproblems with smaller label subsets to improve the generalization quality.
The scikit-multilearn library is the first Python library to provide this functionality; this guide will take you through using different libraries for label space division. Let's start with loading up the well-cited emotions dataset that we use throughout the User Guide:
End of explanation
from skmultilearn.cluster import LabelCooccurrenceGraphBuilder
Explanation: Label relationships can be exploited in a handful of ways:
inferring the label space division from the label assignment matrix in the training set:
through building a label graph and inferring community structure of this graph, this can be facilitated with three network libraries in scikit-multilearn: NetworkX (BSD), igraph (GPL) and graphtool (GPL)
through using a traditional clustering approach from scikit-learn to cluster label assignment vectors, ex. using k-means, this usually required parameter estimation
employing expert knowledge to divide the label space
random label space partitioning with methods like random k-label sets
In most cases these approaches are used with a Label Powerset problem transformation classifier and a base multi-class classifier, for the examples in this chapter we will use sklearn's Gaussian Naive Bayes classifier, but you can use whatever classifiers you in your ensembles.
Let's go through the approaches:
Detecting communities in Label Relations Graph
Exploring label relations using the current methods of Network Science is a new approach to improve classification results. This area is still under research, both in terms of methods used for label space division and in terms of what qualities should be represented in the Label Relations Graph.
In scikit-multilearn classifying with label space division based on label graphs requires three elements:
selecting a graph builder, a class that constructs a graph based on the label assignment matrix y, at the moment scikit-multilearn provides one such graph builder, based on the notion of label co-occurrence
selecting a Label Graph clusterer which employs community detection methods from different sources to provide a label space clustering
selecting a classification approach, i.e. how to train and merge results of classifiers, scikit-multilearn provides two approaches:
a partitioning classifier which trains a classifier per label cluster, assuming they are disjoint, and merges the results of each subclassifier's prediction
a majority voting classifier that trains a classifier per label clusters, but if they overlap, it follows the decision of the majority of subclassifiers concerning assigning the label or not
Let's start with looking at the Label Graph builder.
Building a Label Graph
End of explanation
graph_builder = LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False)
edge_map = graph_builder.transform(y_train)
print("{} labels, {} edges".format(len(label_names), len(edge_map)))
print(edge_map)
Explanation: This graph builder constructs a Label Graph based on the output matrix where two label nodes are connected when at least one sample is labeled with both of them. If the graph is weighted, the weight of an edge between two label nodes is the number of samples labeled with these two labels. Self-edge weights contain the number of samples with a given label.
End of explanation
from skmultilearn.cluster import NetworkXLabelGraphClusterer
# we define a helper function for visualization purposes
def to_membership_vector(partition):
return {
member : partition_id
for partition_id, members in enumerate(partition)
for member in members
}
clusterer = NetworkXLabelGraphClusterer(graph_builder, method='louvain')
partition = clusterer.fit_predict(X_train,y_train)
partition
membership_vector = to_membership_vector(partition)
import networkx as nx
names_dict = dict(enumerate(x[0].replace('-','-\n') for x in label_names))
import matplotlib.pyplot as plt
%matplotlib inline
nx.draw(
clusterer.graph_,
pos=nx.circular_layout(clusterer.graph_),
labels=names_dict,
with_labels = True,
width = [10*x/y_train.shape[0] for x in clusterer.weights_['weight']],
node_color = [membership_vector[i] for i in range(y_train.shape[1])],
cmap=plt.cm.Spectral,
node_size=100,
font_size=14
)
from skmultilearn.ensemble import LabelSpacePartitioningClassifier
from skmultilearn.problem_transform import LabelPowerset
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
classifier = LabelSpacePartitioningClassifier(
classifier = LabelPowerset(classifier=GaussianNB()),
clusterer = clusterer
)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
accuracy_score(y_test, prediction)
Explanation: The dictionary edge_map contains the adjacency matrix in dictionary-of-keys format, each key is a label number tuple, weight is the number of samples with the two labels assigned. Its values will be used by all of the supported Label Graph Clusterers below:
NetworkX
igraph
graph-tool
All these clusterers take their names from the respective Python graph/network libraries which they use to infer community structure and provide the label space clustering.
NetworkX
End of explanation
from skmultilearn.cluster import IGraphLabelGraphClusterer
import igraph as ig
Explanation: Using iGraph
To use igraph with scikit-multilearn you need to install the igraph python package:
```bash
$ pip install python-igraph
```
Do not install the igraph package which is not the correct python-igraph library. Information about build requirements of python-igraph can be found in the library documentation.
Let's load the python igraph library and scikit-multilearn's igraph-based clusterer.
End of explanation
clusterer_igraph = IGraphLabelGraphClusterer(graph_builder=graph_builder, method='walktrap')
partition = clusterer_igraph.fit_predict(X_train, y_train)
partition
colors = ['red', 'white', 'blue']
membership_vector = to_membership_vector(partition)
visual_style = {
"vertex_size" : 20,
"vertex_label": [x[0] for x in label_names],
"edge_width" : [10*x/y_train.shape[0] for x in clusterer_igraph.graph_.es['weight']],
"vertex_color": [colors[membership_vector[i]] for i in range(y_train.shape[1])],
"bbox": (400,400),
"margin": 80,
"layout": clusterer_igraph.graph_.layout_circle()
}
ig.plot(clusterer_igraph.graph_, **visual_style)
classifier = LabelSpacePartitioningClassifier(
classifier = LabelPowerset(classifier=GaussianNB()),
clusterer = clusterer_igraph
)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
accuracy_score(y_test, prediction)
Explanation: Igraph provides a set of community detection methods, out of which the following are supported:
| Method name string | Description |
|--------------------|-------------|
| fastgreedy | Detecting communities with largest modularity using incremental greedy search |
| infomap | Detecting communities through information flow compressing simulated via random walks |
| label_propagation | Detecting communities from colorings via multiple label propagation on the graph |
| leading_eigenvector | Detecting communities with largest modularity through adjacency matrix eigenvectors |
| multilevel | Recursive communitiy detection with largest modularity step by step maximization |
| walktrap | Finding communities by trapping many random walks |
Each of them denotes a community_* method of the Graph object, you can read more about the methods in igraph documentation and in comparison of their performance in multi-label classification.
Let's start with detecting a community structure in the label co-occurrence graph and visualizing it with igraph.
End of explanation
from skmultilearn.cluster.graphtool import GraphToolLabelGraphClusterer, StochasticBlockModel
Explanation: Stochastic Blockmodel from graph-tool
Another approach to label space division is to fit a Stochastic Block Model to the label graph. An efficient implementation of the Stochastic Block Model in Python is provided by graphtool. Note that using graphtool incurs GPL requirements on your code.
End of explanation
model = StochasticBlockModel(nested=False, use_degree_correlation=True, allow_overlap=False, weight_model='real-normal')
clusterer_graphtool = GraphToolLabelGraphClusterer(graph_builder=graph_builder, model=model)
clusterer_graphtool.fit_predict(None, y_train)
Explanation: The StochasticBlockModel class fits the model and specifies the variant of SBM to be used, it can include:
whether to use a nested blockmodel or not
whether to take degree correlation into account
whether to allow overlapping communities
how to model weights of label relationships
Selecting these parameters efficiently for multi-label purposes is still researched, but reading the inference documentation in graphtool will give you an intuition what to choose.
As the emotions data set is small there is no reason to use the nested model, we select the real-normal weight model as it is reasonable to believe that label assignments come from an i.i.d source and should follow some limit theorem.
End of explanation
node_label = clusterer_graphtool.graph_.new_vertex_property("string")
for i, v in enumerate(clusterer_graphtool.graph_.vertices()):
node_label[v] = label_names[i][0]
clusterer_graphtool.model.model_.draw(vertex_text=node_label)
Explanation: The above partition was generated by the model, let's visualize it.
End of explanation
classifier = LabelSpacePartitioningClassifier(
classifier = LabelPowerset(classifier=GaussianNB()),
clusterer = clusterer_graphtool
)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
accuracy_score(y_test, prediction)
Explanation: We can use this clusterer as an argument for the label space partitioning classifier, as we did not enable overlapping communities:
End of explanation
model = StochasticBlockModel(nested=False, use_degree_correlation=True, allow_overlap=True, weight_model='real-normal')
clusterer_graphtool = GraphToolLabelGraphClusterer(graph_builder=graph_builder, model=model)
clusterer_graphtool.fit_predict(None, y_train)
Explanation: Now let's try to go with the same variant of the model, but now we allow overlapping communities:
End of explanation
node_label = clusterer_graphtool.graph_.new_vertex_property("string")
for i, v in enumerate(clusterer_graphtool.graph_.vertices()):
node_label[v] = label_names[i][0]
clusterer_graphtool.model.model_.draw(vertex_text=node_label, vertex_text_color='black')
Explanation: We have a division, note that we train the same number of classifiers as in the partitioning case. Let's visualize label membership likelihoods alongside the division:
End of explanation
from skmultilearn.ensemble.voting import MajorityVotingClassifier
classifier = MajorityVotingClassifier(
classifier=LabelPowerset(classifier=GaussianNB()),
clusterer=clusterer_graphtool
)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
accuracy_score(y_test, prediction)
Explanation: We can now perform classification, but for it to work we now need to use a classifier that can decide whether to assign a label if more than one subclassifiers were making a decision about the label. We will use the MajorityVotingClassifier which makes a decision if the majority of classifiers decide to assign the label.
End of explanation
from skmultilearn.cluster import MatrixLabelSpaceClusterer
from sklearn.cluster import KMeans
matrix_clusterer = MatrixLabelSpaceClusterer(clusterer=KMeans(n_clusters=2))
matrix_clusterer.fit_predict(X_train, y_train)
classifier = LabelSpacePartitioningClassifier(
classifier = LabelPowerset(classifier=GaussianNB()),
clusterer = matrix_clusterer
)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
accuracy_score(y_test, prediction)
Explanation: Using scikit-learn clusterers
Scikit-learn offers a variety of clustering methods, some of which have been applied to dividing the label space into subspaces in multi-label classification. The main concern with these approaches is the need to fit the number of clusters empirically.
scikit-multilearn provides a clusterer which does not build a graph; instead, it applies the provided scikit-learn clusterer to transposed label assignment vectors, i.e. the vector for a given label is the vector of all samples' assignment values for that label. To use this approach, just import a scikit-learn clusterer and pass its instance as a parameter.
End of explanation
label_names
Explanation: Fixed partition based on expert knowledge
There may be cases where we know something about the label relationships based on expert or intuitive knowledge, or perhaps our knowledge comes from a different machine learning model, or it is crowdsourced. In all of these cases, scikit-multilearn lets you use this knowledge to your advantage. Let's see this on our example data set. It has six labels that denote emotions:
End of explanation
from skmultilearn.ensemble import MajorityVotingClassifier
from skmultilearn.cluster import FixedLabelSpaceClusterer
from skmultilearn.problem_transform import LabelPowerset
from sklearn.ensemble import RandomForestClassifier
classifier = MajorityVotingClassifier(
classifier = LabelPowerset(
classifier=RandomForestClassifier(n_estimators=100),
require_dense = [False, True]
),
require_dense = [True, True],
clusterer = FixedLabelSpaceClusterer(clusters=[[0,1, 2], [2, 3 ,4], [0, 4, 5]])
)
# train
classifier.fit(X_train, y_train)
# predict
predictions = classifier.predict(X_test)
accuracy_score(y_test, predictions)
Explanation: Looking at the label names we might see that the labels quiet-still and angry-agressive are contradictory, but one can be amazed both in the happy/relaxing context and in the sad/aggressive context. Also, one can be easily pleased/relaxed and/or calm but not actually amazed. We thus come up with a new, intuitive label space division:
End of explanation |
6,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Problem Description
Data-driven approaches are now used in many fields from business to science. Since data storage and computational power has become cheap, machine learning has gained popularity. However, the majority of tools that can extract dependencies from data, are designed for prediction problem. In this notebook, a problem of decision support simulation is considered and it is shown that even good predictive models can lead to wrong conclusions. This occurs under some conditions summarized by an umbrella term called endogeneity. Its particular cases are as follows
Step1: Synthetic Dataset Generation
Let us generate an unobserved parameter and an indicator of treatment such that they are highly correlated.
Step3: Now create historical dataset that is used for learning predictive model.
Step4: Now create two datasets for simulation where the only difference between them is that in the first one treatment is absent and in the second one treatment is assigned to all items.
Step5: Look at the data that are used for simulation.
Step7: Good Model...
Step8: Let us use coefficient of determination as a scorer rather than MSE. Actually, they are linearly dependent
Step9: Although true relationship is non-linear, predictive power of linear regression is good. This is indicated by close to 1 coefficient of determination. Since the winner is model with intercept, its score can be interpreted as follows — the model explains almost all variance of the target around its mean (note that such interpretation can not be used for a model without intercept).
Step10: It looks like almost all combinations of hyperparameters result in error that is close to irreducible error caused by mismatches between the indicator of treatment and the omitted variable.
Step11: The score is even closer to 1 than in case of linear model. Decent result deceptively motivates to think that all important variables are included in the model.
...and Poor Simulation
Step12: And now scores are not perfect, are they?
Step13: It can be seen that effect of treatment is overestimated. In case of absence of treatment, for items with unobserved feature equal to 1, predictions are significantly less than true values. To be more precise, the differences are close to coefficient near unobserved feature in weights_matrix passed to the dataset creation. Similarly, in case of full treatment, for items with unobserved feature equal to 0, predictions are higher than true values and the differences are close to the abovementioned coefficient too.
Finally, let us simulate a wrong decision that the manager can make. Suppose that treatment costs one dollar per item and every unit increase in the target variable leads to creation of value that is equal to one dollar too.
Step14: The model recommends to treat all items. What happens if all of them are treated? | Python Code:
from itertools import combinations
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from sklearn.model_selection import train_test_split, KFold, GridSearchCV
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
# Startup settings can not suppress a warning from `XGBRegressor` and so this is needed.
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
from xgboost import XGBRegressor
np.random.seed(seed=361)
Explanation: Introduction
Problem Description
Data-driven approaches are now used in many fields, from business to science. Since data storage and computational power have become cheap, machine learning has gained popularity. However, the majority of tools that can extract dependencies from data are designed for prediction problems. In this notebook, a problem of decision support simulation is considered, and it is shown that even good predictive models can lead to wrong conclusions. This occurs under some conditions summarized by an umbrella term called endogeneity. Its particular cases are as follows:
* An important variable is omitted;
* Variables that are used as features are measured with biases;
* There is simultaneous or reverse causality between a target variable and some features.
Here, omission of an important variable is the root of the trouble.
Suppose that the situation is as follows. There is a freshly-hired manager who can assign treatment to items in order to increase a target metric. Treatment is binary, i.e. for each item it is either assigned or absent. Because treatment costs something, its assignment should be optimized: only some items should be treated. A historical dataset of item performance is given, but the manager does not know that treatment was previously assigned predominantly based on the values of just one parameter. Moreover, this parameter is not included in the dataset. The manager wants to create a system that predicts an item's target metric both in case of treatment and in case of absence of treatment. If this system is deployed, the manager can compare these two cases and decide whether the effect of treatment is worth its cost.
If the machine learning approach results in good prediction scores, chances are that the manager does not suspect that an important variable is omitted (at least until some expenses are generated by wrong decisions). Hence, domain knowledge and data understanding are still required for modelling based on data. This is of particular importance when datasets contain values that are produced by someone's decisions, because there is no guarantee that future decisions will not change dramatically. On the flip side, if all factors that affect decisions are included in a dataset, i.e. there is selection on observables for treatment assignment, a powerful enough model is able to estimate the treatment effect correctly (although accuracy of predictions still does not guarantee the detection of causal relationships).
References
To read more about causality in data analysis, it is possible to look at these papers:
Angrist J, Pischke J-S. Mostly Harmless Econometrics. Princeton University Press, 2009.
Varian H. Big Data: New Tricks for Econometrics. Journal of Economic Perspectives, 28(2): 3–28, 2013
Preparations
General
End of explanation
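As a minimal illustration of the omitted-variable problem described above (an aside that is not part of the original notebook; the toy numbers are arbitrary), regressing the target on treatment alone lets the coefficient absorb the effect of the correlated unobserved driver:
# Illustrative sketch of omitted-variable bias; a separate RNG is used so the
# notebook's global seed (set above) is left untouched.
rng = np.random.RandomState(0)
u = rng.binomial(1, 0.5, size=10000)            # unobserved driver
t = rng.binomial(1, 0.9 * u + 0.05)             # treatment correlated with u
y = 1.0 * t + 5.0 * u + rng.normal(size=10000)  # true treatment effect is 1
# The fitted coefficient comes out far above 1 because it also picks up the omitted u.
LinearRegression().fit(t.reshape(-1, 1), y).coef_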
unobserved = np.hstack((np.ones(10000), np.zeros(10000)))
treatment = np.hstack((np.ones(9000), np.zeros(10000), np.ones(1000)))
np.corrcoef(unobserved, treatment)
Explanation: Synthetic Dataset Generation
Let us generate an unobserved parameter and an indicator of treatment such that they are highly correlated.
End of explanation
def synthesize_dataset(unobserved, treatment,
given_exogenous=None, n_exogenous_to_draw=2,
weights_matrix=np.array([[5, 0, 0, 0],
[0, 1, 1, 0],
[0, 1, 2, 1],
[0, 0, 1, 3]])):
A helper function for repetitive
pieces of code.
Creates a dataset, where target depends on
`unobserved`, but `unobserved` is not
included as a feature. Independent features
can be passed as `given_exogenous` as well as
be drawn from Gaussian distribution.
Target is generated as linear combination of
features and their interactions in the
following manner. Order features as below:
unobserved variable, treatment indicator,
given exogenous features, drawn exogenous
features. Then the (i, i)-th element of
`weights_matrix` defines coefficient of
the i-th feature, whereas the (i, j)-th
element of `weights_matrix` (where i != j)
defines coefficient of interaction between
the i-th and j-th features.
@type unobserved: numpy.ndarray
@type treatment: numpy.ndarray
@type given_exogenous: numpy.ndarray
@type n_exogenous_to_draw: int
@type weights_matrix: numpy.ndarray
@rtype: tuple(numpy.ndarray)
if unobserved.shape != treatment.shape:
raise ValueError("`unobserved` and `treatment` are not aligned.")
if (given_exogenous is not None and
unobserved.shape[0] != given_exogenous.shape[0]):
raise ValueError("`unobserved` and `given_exogenous` are not " +
"aligned. Try to transpose `given_exogenous`.")
if weights_matrix.shape[0] != weights_matrix.shape[1]:
raise ValueError("Matrix of weights is not square.")
if not np.array_equal(weights_matrix, weights_matrix.T):
raise ValueError("Matrix of weigths is not symmetric.")
len_of_given = given_exogenous.shape[1] if given_exogenous is not None else 0
if 2 + len_of_given + n_exogenous_to_draw != weights_matrix.shape[0]:
raise ValueError("Number of weights is not equal to that of features.")
drawn_features = []
for i in range(n_exogenous_to_draw):
current_feature = np.random.normal(size=unobserved.shape[0])
drawn_features.append(current_feature)
if given_exogenous is None:
features = np.vstack([unobserved, treatment] + drawn_features).T
else:
features = np.vstack([unobserved, treatment, given_exogenous.T] +
drawn_features).T
target = np.dot(features, weights_matrix.diagonal())
indices = list(range(weights_matrix.shape[0]))
interactions = [weights_matrix[i, j] * features[:, i] * features[:, j]
for i, j in combinations(indices, 2)]
target = np.sum(np.vstack([target] + interactions), axis=0)
return features[:, 1:], target
learning_X, learning_y = synthesize_dataset(unobserved, treatment)
Explanation: Now create the historical dataset that is used for learning the predictive model.
End of explanation
unobserved = np.hstack((np.ones(2500), np.zeros(2500)))
no_treatment = np.zeros(5000)
full_treatment = np.ones(5000)
no_treatment_X, no_treatment_y = synthesize_dataset(unobserved, no_treatment)
full_treatment_X, full_treatment_y = synthesize_dataset(unobserved, full_treatment,
no_treatment_X[:, 1:], 0)
Explanation: Now create two datasets for simulation where the only difference between them is that in the first one treatment is absent and in the second one treatment is assigned to all items.
End of explanation
no_treatment_X[:5, :]
full_treatment_X[:5, :]
no_treatment_y[:5]
full_treatment_y[:5]
Explanation: Look at the data that are used for simulation.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(learning_X, learning_y,
random_state=361)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
def tune_inform(X_train, y_train, rgr, grid_params, kf, scoring):
Just a helper function that combines
all routines related to grid search.
@type X_train: numpy.ndarray
@type y_train: numpy.ndarray
@type rgr: any sklearn regressor
@type grid_params: dict
@type kf: any sklearn folds
@type scoring: str
@rtype: sklearn regressor
grid_search_cv = GridSearchCV(rgr, grid_params, cv=kf,
scoring=scoring)
grid_search_cv.fit(X_train, y_train)
print("Best CV mean score: {}".format(grid_search_cv.best_score_))
means = grid_search_cv.cv_results_['mean_test_score']
stds = grid_search_cv.cv_results_['std_test_score']
print("Detailed results:")
for mean, std, params in zip(means, stds,
grid_search_cv.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, 2 * std, params))
return grid_search_cv.best_estimator_
rgr = LinearRegression()
grid_params = {'fit_intercept': [True, False]}
kf = KFold(n_splits=5, shuffle=True, random_state=361)
Explanation: Good Model...
End of explanation
rgr = tune_inform(X_train, y_train, rgr, grid_params, kf, 'r2')
y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
Explanation: Let us use coefficient of determination as a scorer rather than MSE. Actually, they are linearly dependent: $R^2 = 1 - \frac{MSE}{\mathrm{Var}(y)}$, but coefficient of determination is easier to interpret.
End of explanation
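As a quick sanity check of the identity above (an illustrative aside; it assumes nothing beyond sklearn.metrics and numpy, both already available here):
# R^2 should equal 1 - MSE / Var(y) when both are computed on the same test split.
from sklearn.metrics import mean_squared_error
r2_score(y_test, y_hat), 1 - mean_squared_error(y_test, y_hat) / np.var(y_test)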
rgr = XGBRegressor()
grid_params = {'n_estimators': [50, 100, 200, 300],
'max_depth': [3, 5],
'subsample': [0.8, 1]}
kf = KFold(n_splits=5, shuffle=True, random_state=361)
rgr = tune_inform(X_train, y_train, rgr, grid_params, kf, 'r2')
Explanation: Although the true relationship is non-linear, the predictive power of linear regression is good. This is indicated by a coefficient of determination close to 1. Since the winner is the model with an intercept, its score can be interpreted as follows: the model explains almost all of the variance of the target around its mean (note that such an interpretation cannot be used for a model without an intercept).
End of explanation
y_hat = rgr.predict(X_test)
r2_score(y_test, y_hat)
Explanation: It looks like almost all combinations of hyperparameters result in an error that is close to the irreducible error caused by mismatches between the indicator of treatment and the omitted variable.
End of explanation
no_treatment_y_hat = rgr.predict(no_treatment_X)
r2_score(no_treatment_y, no_treatment_y_hat)
full_treatment_y_hat = rgr.predict(full_treatment_X)
r2_score(full_treatment_y, full_treatment_y_hat)
Explanation: The score is even closer to 1 than in the case of the linear model. This decent result deceptively suggests that all important variables are included in the model.
...and Poor Simulation
End of explanation
fig = plt.figure(figsize=(14, 7))
ax_one = fig.add_subplot(121)
ax_one.scatter(no_treatment_y_hat, no_treatment_y)
ax_one.set_title("Simulation of absence of treatment")
ax_one.set_xlabel("Predicted values")
ax_one.set_ylabel("True values")
ax_one.grid()
ax_two = fig.add_subplot(122, sharey=ax_one)
ax_two.scatter(full_treatment_y_hat, full_treatment_y)
ax_two.set_title("Simulation of treatment")
ax_two.set_xlabel("Predicted values")
ax_two.set_ylabel("True values")
_ = ax_two.grid()
Explanation: And now scores are not perfect, are they?
End of explanation
estimated_effects = full_treatment_y_hat - no_treatment_y_hat
true_effects = full_treatment_y - no_treatment_y
np.min(estimated_effects)
Explanation: It can be seen that the effect of treatment is overestimated. In the case of absence of treatment, for items with the unobserved feature equal to 1, predictions are significantly lower than the true values. To be more precise, the differences are close to the coefficient of the unobserved feature in the weights_matrix passed to the dataset creation. Similarly, in the case of full treatment, for items with the unobserved feature equal to 0, predictions are higher than the true values, and the differences are again close to the abovementioned coefficient.
Finally, let us simulate a wrong decision that the manager can make. Suppose that treatment costs one dollar per item and every unit increase in the target variable creates value equal to one dollar.
End of explanation
cost_of_one_treatment = 1
estimated_net_improvement = (np.sum(estimated_effects) -
cost_of_one_treatment * estimated_effects.shape[0])
estimated_net_improvement
true_net_improvement = (np.sum(true_effects) -
cost_of_one_treatment * true_effects.shape[0])
true_net_improvement
Explanation: The model recommends treating all items. What happens if all of them are treated?
End of explanation |
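A short follow-up check (not in the original notebook) that makes the overestimation explicit by comparing the average estimated and true effects, using the arrays defined above:
# The gap between these two means reflects the bias introduced by the omitted variable.
np.mean(estimated_effects), np.mean(true_effects)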
6,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2018-11-24 02
Step1: let's say for a hypothetical network with 3 layer groups (conv_group_1, conv_group_2, linear_group).
Step2: Interesting, so if you have multiple trainable layer groups, and pass in a slice with only a stop element, you'll get the lr for the last group, and the lr / 3 for all preceeding groups.
Step3: Now what happens when I pass in a start and stop value
Step4: This is so cool. Fastai finds the order / magnitude / exponential / logorithmic mean, not the absolute mean. This is why the step multiplier is (stop/start)**1/(n-1)) where n is the number of layer groups.
$$step = \big(\frac{stop}{start}\big)^{\frac{1}{n - 1}} ,\ \ \ n: \mathrm{number\ of\ layer\ groups}$$
Step5: So the question I have, and why I'm here, is
Step6: This is very exciting.
It also means, for my planet resnet34 thing, I don't need to worry about the internals of the learning rate calculation & assignment. I just need to specify the correct start and end lrs.
Which means all I have to do is provide the appropriate aggression for training. This I like.
Step8: | Python Code:
import numpy as np
# from fastai.core
def even_mults(start:float, stop:float, n:int)->np.ndarray:
"Build evenly stepped schedule from `star` to `stop` in `n` steps."
mult = stop/start
step = mult**(1/(n-1))
return np.array([start*(step**i) for i in range(n)])
Explanation: 2018-11-24 02:12:25
End of explanation
layer_groups = ['conv_group_1', 'conv_group_2', 'linear_group']
def lr_range(lr:[float,slice])->np.ndarray:
if not isinstance(lr, slice): return lr
if lr.start: res = even_mults(lr.start, lr.stop, len(layer_groups))
else: res = [lr.stop/3]*(len(layer_groups)-1)+[lr.stop]
return np.array(res)
lr = slice(1e-3)
lr_range(lr)
lr = 1e-3
lr_range(lr)
Explanation: Let's say we have a hypothetical network with 3 layer groups (conv_group_1, conv_group_2, linear_group).
End of explanation
# 10 layer groups
layer_groups = [i for i in range(10)]
lr = slice(1e-3)
lr_range(lr)
Explanation: Interesting, so if you have multiple trainable layer groups and pass in a slice with only a stop element, you'll get the lr for the last group, and lr / 3 for all preceding groups.
End of explanation
lr = slice(1e-6, 1e-3)
lr_range(lr)
1e-3/30
1e-6*30
(1e-3/30 + 1e-6/30)*2
Explanation: Now what happens when I pass in a start and stop value:
End of explanation
even_mults(1e-6, 1e-3, 3)
even_mults(1e-6, 1e-3, 10)
Explanation: This is so cool. Fastai finds the order-of-magnitude / geometric / logarithmic mean, not the arithmetic mean. This is why the step multiplier is (stop/start)**(1/(n-1)), where n is the number of layer groups.
$$step = \big(\frac{stop}{start}\big)^{\frac{1}{n - 1}} ,\ \ \ n: \mathrm{number\ of\ layer\ groups}$$
End of explanation
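A quick check (an aside, not from the original notebook) that even_mults matches this closed form:
# even_mults(start, stop, n) should equal [start * step**i for i in range(n)]
# with step = (stop / start) ** (1 / (n - 1)).
start, stop, n = 1e-6, 1e-3, 3
step = (stop / start) ** (1 / (n - 1))
np.allclose(even_mults(start, stop, n), [start * step**i for i in range(n)])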
lr_stop = 1e-3
lr_start= lr_stop / 3**2
even_mults(lr_start, lr_stop, 3)
1e-3/9
Explanation: So the question I have, and why I'm here, is: can I have discriminative learning rates with a magnitude separation of 3? So: $\frac{lr}{3^2}, \frac{lr}{3^1}, \frac{lr}{3^0} = $ lr/9, lr/3, lr
End of explanation
(1/9 + 1)/2
5/9
even_mults(1/9, 1, 3)
lr_range(3)
even_mults(1e-10, 1, 11)
Explanation: This is very exciting.
It also means, for my planet resnet34 thing, I don't need to worry about the internals of the learning rate calculation & assignment. I just need to specify the correct start and end lrs.
Which means all I have to do is provide the appropriate aggression for training. This I like.
End of explanation
from fastai import *
from fastai.vision import *
__version__
import torchvision
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms()
data = (ImageItemList.from_folder(path).split_by_folder()
.label_from_folder().transform(tfms).databunch())
learn = create_cnn(data, torchvision.models.inception_v3)
??models.resnet18
??torchvision.models.inception_v3
def inception_v3_2(pretrained=False, **kwargs):
rInception v3 model architecture from
`"Rethinking the Inception Architecture for Computer Vision" <http://arxiv.org/abs/1512.00567>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
model = torchvision.models.Inception3(**kwargs)
# if pretrained:
# if 'transform_input' not in kwargs:
# kwargs['transform_input'] = True
# model.load_state_dict(model_zoo.load_url(model_urls['inception_v3_google']))
return model
create_cnn(data, inception_v3_2)
??learn.fit_one_cycle
??learn.lr_range
??even_mults
Explanation:
End of explanation |
6,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
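For example, a call with obviously hypothetical placeholder values would look like this (kept commented out, as in the template above):
# DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical placeholders only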
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
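For illustration only, using the example name quoted in the property description above (not an actual entry for this model):
# DOC.set_value("MOSES2.2")  # illustrative example taken from the description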
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo depends on*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
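Boolean properties take the Python literals True or False (no quotes), as the valid-choices comment shows. For a hypothetical model with dynamic vegetation enabled:
# Boolean value, no quotes
DOC.set_value(True)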
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
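Optional multi-valued ENUMs (cardinality 0.N) such as this one can be left unset if the classification does not apply; otherwise they are filled like any other multi-valued ENUM. A sketch assuming a hypothetical classification with only two biome types:
# Optional 0.N property: set only the biome types the model actually distinguishes (illustrative)
DOC.set_value("evergreen needleleaf forest")
DOC.set_value("grassland")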
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
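"List the ..." STRING properties are still single values; a comma-separated list inside one string is a convenient convention. The variable names below are assumptions for illustration only, not taken from any particular model:
# Hypothetical prognostic variable list for a vegetation scheme
DOC.set_value("leaf area index, vegetation carbon, canopy temperature")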
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
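When none of the listed choices fits, the "Other: [Please specify]" option is intended to carry a free-text specification. The exact accepted syntax may depend on the ES-DOC tooling version; a common convention is to append the specification after the "Other:" prefix, as sketched here with an assumed process name:
# Assumed convention for the "Other" choice - verify against your ES-DOC version
DOC.set_value("transpiration")
DOC.set_value("Other: bare-soil evaporation resistance")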
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
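Putting the patterns above together, the whole Lakes --> Method group could be filled in sequence for a hypothetical lake scheme. Every value below is an assumption chosen for illustration only; the set_id/set_value pairing follows the same convention as the individual cells in this document:
# Hypothetical end-to-end fill of section 33 (Lakes --> Method); values are illustrative
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
DOC.set_value(True)                      # lake ice included
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("diagnostic")              # one of the listed valid choices
DOC.set_id('cmip6.land.lakes.method.dynamics')
DOC.set_value("vertical")                # 1.N: add further calls for additional choices
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
DOC.set_value(False)
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
DOC.set_value(True)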
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |