| column | type | string length (min – max) |
|---|---|---|
| markdown | string | 0 – 1.02M |
| code | string | 0 – 832k |
| output | string | 0 – 1.02M |
| license | string | 3 – 36 |
| path | string | 6 – 265 |
| repo_name | string | 6 – 127 |
Hotel Map
* Store the data into a variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first hotel result into the DataFrame.
* Plot markers on top of the heatmap.
# Search for a hotel in each city and assign it to a new column in hotel_df
hotelname = []
hotel_df = city_weather_df.copy()
params = {}
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?"

for index, row in hotel_df.iterrows():
    # get city name, lat, lng from the dataframe
    lat = row["lat"]
    lng = row["long"]
    city_name = row["city"]

    # build the params dict for this city
    params["location"] = f"{lat},{lng}"
    params["radius"] = "5000"
    params["type"] = "hotel"
    params["keyword"] = "hotel"
    params["key"] = gkey
    url_params = urlencode(params)

    # assemble the url and make the API request
    # print(f"Retrieving Results for Index {index}: {city_name}.")
    query_string = base_url + url_params

    # save the hotel name to the dataframe
    try:
        response = requests.get(query_string).json()
        # extract results
        results = response['results']
        # print(f"Closest hotel in {city_name} is {results[0]['name']}.")
        hotel_df.loc[index, "Hotel Name"] = results[0]['name']
        time.sleep(.2)
    # if there is no hotel available, show a missing field
    except (KeyError, IndexError):
        print(f"{index} - There isn't any hotel in a 5000m radius.")

hotel_df
hotel_df = hotel_df.dropna()

# Narrow hotel_df down to ideal weather conditions (the temperature data is in Kelvin, not °F):
# 1. A max temperature below 290 K and above 270 K.
# 2. Wind speed less than 10 mph.
# 3. Zero cloudiness.
hotel_df = hotel_df.loc[(hotel_df.maxtemp < 290) & (hotel_df.maxtemp > 270)]
hotel_df = hotel_df.loc[hotel_df.windspeed < 10]
hotel_df = hotel_df.loc[hotel_df.cloudiness == 0]
hotel_df

# NOTE: Do not change any of the code in this cell
# Using the template, add the hotel markers to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{city}</dd>
<dt>Country</dt><dd>{country}</dd>
</dl>
"""
# Store the DataFrame rows
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["lat", "long"]]

# Create a map using the city coordinates to set markers
marker_locations = locations

# Create a marker_layer
# fig = gmaps.figure()
markers = gmaps.marker_layer(marker_locations, info_box_content=hotel_info)
fig.add_layer(markers)
fig
_____no_output_____
ADSL
VacationPy/VacationPy.ipynb
kdturner83/PythonAPI_Challenge
To install basemap: `conda install -c conda-forge proj4` and `conda install -c anaconda basemap`.

In this notebook we will preprocess data to be able to compute death rates by state due to covid. You will need this data for plotting a map in hw3.

Dataframes

A DataFrame object is a two-dimensional matrix with rows and columns. Each column can have a different data type, but all values within a column must be of the same data type. The columns behave like [series objects](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html). DataFrame columns are ordered, and the name-to-column mapping is stored in an index. Data frames also have an index for the rows, just like a series has an index into its values. So a data frame has two indexes, which let us zero in, for example, on a specific element using row and column index values.

Let's use `pd.read_csv` to read a csv file with all covid cases per state, taken from the [nytimes github](https://github.com/nytimes/covid-19-data). `.head()` gives the top 5 rows of the dataframe.
covid = pd.read_csv("data/us-states.csv") covid.head()
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
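To make the "two indexes" idea concrete, here is a minimal sketch on a toy DataFrame (not the covid data): `.loc` selects by row and column labels, `.iloc` by integer positions.

```python
import pandas as pd

# a tiny DataFrame with an explicit row index
toy = pd.DataFrame({"state": ["WA", "CA"], "deaths": [10, 20]}, index=["a", "b"])

toy.loc["b", "deaths"]   # label-based lookup: row "b", column "deaths" -> 20
toy.iloc[1, 1]           # position-based lookup: second row, second column -> 20
```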
This dataframe has population estimates and a lot of other info. See `data/nst-est2019-alldata.pdf` for a description of all columns.
population = pd.read_csv("data/nst-est2019-alldata.csv") population.head() ## let's look at the columns. I am looking for the population of 2019 per state. #list(population.columns)
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
Always look at shapes of objects before and after you manipulate them. You will get `(number of rows, number of columns)`. How many states are there in the United States of America?
covid.shape, population.shape covid.describe() # note that the counts are different because there are missing values in some columns # covid["confirmed_cases"] covid["confirmed_cases"].isnull() # count how many rows are null? (covid["confirmed_cases"].isnull() == True).sum() # similarly (covid["confirmed_cases"].isnull()).sum() # is.na() also works (covid["confirmed_cases"].isna()).sum() # take first 10 elements of the column "confirmed_cases" c = covid["confirmed_cases"][:10] c # be careful on how different functions behave with respect to NAs len(c), c.count(), c.sum(), c.sum(skipna=False), np.sum(c), sum(c) # if you want to fill the NAs you can do covid = covid.fillna(-1) covid.head()
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
Exercise 1: How do you fill NAs with different values for different columns? (A minimal sketch follows below.)

Subsetting and merging dataframes

We need info about deaths from the covid dataframe and info about population from the other dataframe, so let's keep just that. We also need a way to combine (merge) the two dataframes. The column `fips` is a unique identifier for the state, so I will keep that. The state name can also be useful.
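For Exercise 1 above, one approach (a minimal sketch, assuming the `covid` DataFrame loaded earlier; the second column name is just a placeholder) is to pass a dict mapping column names to fill values:

```python
# each key is a column name, each value is the fill value for that column;
# columns not listed are left untouched, and keys not present are ignored
covid = covid.fillna({"confirmed_cases": 0, "some_other_column": -1})
```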
covid.head() # selecting columns covid = covid[["state", "fips", "deaths"]] covid.head() population.head() # from the pdf we have the following info # STATE = State FIPS code # NAME = State name # POPESTIMATE2019 = 7/1/2019 resident total population estimate population = population[["STATE", "NAME", "POPESTIMATE2019"]] # show first 10 rows population.iloc[:10] # we are not interested in top values of the population table (aggregates) population = population.iloc[5:] # all rows after 5 population.head() covid.shape, population.shape
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
There are various ways to merge two dataframes. At the moment we want to preserve all the data. `outer`: use the union of keys from both frames.
# Can we merge on state name? rates = covid.merge(population, how="outer", left_on='fips', right_on='STATE') rates.iloc[:15] # let's look at rows with NAs na_index = rates["POPESTIMATE2019"].isnull() rates[na_index] ## Let's drop them rates = rates.dropna() rates.shape # cleaning up some more rates = rates[["state", "fips", "deaths", "POPESTIMATE2019"]] rates["rates"] = 1000*rates["deaths"]/rates["POPESTIMATE2019"] # set a new column rates # sorting by rates rates = rates.sort_values(by=["rates"]) #rates ## mean value of the rate column rates["rates"].mean(), rates["rates"].median() rates["rates"].quantile(q=[0.1, 0.25, 0.5, 0.75, 0.9]) # if you want 7 groups of color you need 8 quantiles q = np.linspace(0, 1, 8, endpoint=True) # equidistant numbers between 0 and 1 q # compute quantile of covid rates rates["rates"].quantile(q=q) qq = rates["rates"].quantile(q=q) type(qq) # what is the type? type(qq.values) # I prefer working with numpy arrays boundaries = rates["rates"].quantile(q=q).values boundaries ## let's define a new ordinal variable based on the quantiles of the rates rates["color"] = pd.qcut(rates["rates"], 7) rates["color"] rates["color"].unique() ## let's directly put colors here for our plot colors = ["#ffffd4", "#fee391", "#fec44f", "#fe9929", "#ec7014", "#cc4c02", "#8c2d04"] # from colorbrewer2.org rates["color"] = pd.qcut(rates["rates"], 7, labels=colors) rates["color"].values
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
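A minimal illustration of what `how="outer"` does (toy frames, not the real data): the result keeps the union of keys, filling NaN where one side has no match.

```python
import pandas as pd

left = pd.DataFrame({"fips": [1, 2], "deaths": [5, 7]})
right = pd.DataFrame({"STATE": [2, 3], "POPESTIMATE2019": [100, 200]})

# keeps fips 1 and 2 plus STATE 3; unmatched cells become NaN
left.merge(right, how="outer", left_on="fips", right_on="STATE")
```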
Dictionary of color per state
# iterate through rows for i, row in rates.iterrows(): print(row["state"], row["color"]) # make a dictionary of color per state state2color = {} for i, row in rates.iterrows(): state2color[row["state"]] = row["color"] # here is a shortcut of the same # dictionary comprehension state2color = {row["state"]: row["color"] for i, row in rates.iterrows()}
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
Making a map in matplotlib

Based on these examples:
- https://github.com/matplotlib/basemap/blob/master/examples/fillstates.py
- https://stackoverflow.com/questions/39742305/how-to-use-basemap-python-to-plot-us-with-50-states
# Lambert Conformal map of lower 48 states. m = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49, projection='lcc',lat_1=33,lat_2=45,lon_0=-95) # load the shapefile, use the name 'states' shape = m.readshapefile('st99_d00', name='states', drawbounds=True) ax = plt.gca() # get current axes instance # list of states in the data states = [shapedict['NAME'] for shapedict in m.states_info] for i, seg in enumerate(m.states): state = states[i] color = state2color[state] poly = Polygon(seg, facecolor=color, edgecolor=color) ax.add_patch(poly) states = [shapedict['NAME'] for shapedict in m.states_info] # list comprenhension #states
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
How to make a colorbar
colors = ["#ffffd4", "#fee391", "#fec44f", "#fe9929", "#ec7014", "#cc4c02", "#8c2d04"] bounds = [1,2,3,4,5,6,7,8] boundaries = [0.055, 0.139, 0.23, 0.316, 0.387, 0.588, 0.832, 1.804] fig, ax = plt.subplots(figsize=(1, 8)) fig.subplots_adjust(bottom=0.5) cmap = mpl.colors.ListedColormap(colors) cb2 = ColorbarBase(ax, cmap=cmap, boundaries=bounds, ticks=bounds, label=boundaries, orientation='vertical') cb2.set_label('Covid rates') cb2.set_ticklabels(boundaries)
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
Put it together
# rounding boundaries = [0.00, 0.14, 0.23, 0.32, 0.39, 0.59, 0.83, 1.80] # Lambert Conformal map of lower 48 states. fig, ax = plt.subplots(figsize=(12,6)) m = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49, projection='lcc',lat_1=33,lat_2=45,lon_0=-95) # load the shapefile, use the name 'states' shape = m.readshapefile('st99_d00', name='states', drawbounds=True, linewidth=0.2,color='#808080') # list of states in the data states = [shapedict['NAME'] for shapedict in m.states_info] for i, seg in enumerate(m.states): state = states[i] color = state2color[state] poly = Polygon(seg, facecolor=color, edgecolor=color) ax.add_patch(poly) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.spines["left"].set_visible(False) plt.annotate("Covid death rates per thousands", xy=(0, 1.05), xycoords='axes fraction', fontsize=20, color='#303030') # [left, bottom, width, height] ax_c = fig.add_axes([0.25, 0.05, 0.5, 0.03]) cmap = mpl.colors.ListedColormap(colors) cb2 = ColorbarBase(ax_c, cmap=cmap, boundaries=bounds, ticks=bounds, label=boundaries, orientation='horizontal') cb2.set_label("") cb2.set_ticklabels(boundaries)
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
More on dataframe manipulation: `.iloc` for slicing a dataframe
rates.head() rates = rates.reset_index(drop=True) rates.head() ## keep the first 7 rows rates_top7 = rates.iloc[:7] rates_top7 ## keep columns 2 and 3 rates_top7_cols23 = rates_top7.iloc[:, 2:4] rates_top7_cols23 # we can do it at the same time rates.iloc[:7, 2:4]
_____no_output_____
MIT
notebooks/preprocessing-data-covid-map.ipynb
aneridand/msds593
1. Meet Dr. Ignaz Semmelweis

This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: a deadly disease affecting women who have just given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: it's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!

In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
# Importing modules import pandas as pd # Read datasets/yearly_deaths_by_clinic.csv into yearly yearly = pd.read_csv("datasets/yearly_deaths_by_clinic.csv") # Print out yearly yearly
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
2. The alarming number of deaths

The table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.

We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
# Calculate proportion of deaths per no. births yearly["proportion_deaths"] = yearly["deaths"] / yearly["births"] # Extract Clinic 1 data into clinic_1 and Clinic 2 data into clinic_2 clinic_1 = yearly[yearly["clinic"] == "clinic 1"] clinic_2 = yearly[yearly["clinic"] == "clinic 2"] # Print out clinic_1 clinic_1
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
3. Death at the clinics

If we now plot the proportion of deaths at both Clinic 1 and Clinic 2 we'll see a curious pattern…
# This makes plots appear in the notebook %matplotlib inline # Plot yearly proportion of deaths at the two clinics ax = clinic_1.plot(x="year", y="proportion_deaths", label="Clinic 1") clinic_2.plot(x="year", y="proportion_deaths", label="Clinic 2", ax=ax, ylabel="Proportion deaths")
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
4. The handwashing begins

Why is the proportion of deaths consistently so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread by the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request; nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
# Read datasets/monthly_deaths.csv into monthly monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates=["date"]) # Calculate proportion of deaths per no. births monthly["proportion_deaths"] = monthly["deaths"] / monthly["births"] # Print out the first rows in monthly monthly.head()
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
5. The effect of handwashing

With the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
# Plot monthly proportion of deaths ax = monthly.plot(x="date", y="proportion_deaths", ylabel="Proportion deaths")
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
6. The effect of handwashing highlighted

Starting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
# Date when handwashing was made mandatory handwashing_start = pd.to_datetime('1847-06-01') # Split monthly into before and after handwashing_start before_washing = monthly[monthly["date"] < handwashing_start] after_washing = monthly[monthly["date"] >= handwashing_start] # Plot monthly proportion of deaths before and after handwashing ax = before_washing.plot(x="date", y="proportion_deaths", label="Before handwashing") after_washing.plot(x="date", y="proportion_deaths", label="After handwashing", ax=ax, ylabel="Proportion deaths")
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
7. More handwashing, fewer deaths?

Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
# Difference in mean monthly proportion of deaths due to handwashing before_proportion = before_washing["proportion_deaths"] after_proportion = after_washing["proportion_deaths"] mean_diff = after_proportion.mean() - before_proportion.mean() mean_diff
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
8. A Bootstrap analysis of Semmelweis' handwashing data

It reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
# A bootstrap analysis of the reduction of deaths due to handwashing boot_mean_diff = [] for i in range(3000): boot_before = before_proportion.sample(frac=1, replace=True) boot_after = after_proportion.sample(frac=1, replace=True) boot_mean_diff.append( boot_after.mean() - boot_before.mean() ) # Calculating a 95% confidence interval from boot_mean_diff confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975]) confidence_interval
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
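The same resampling idea can be written directly with NumPy; a sketch assuming `before_proportion` and `after_proportion` from the cells above:

```python
import numpy as np

rng = np.random.default_rng(42)
before = before_proportion.values
after = after_proportion.values

# resample each group with replacement and record the difference in means
boot_diffs = [
    rng.choice(after, size=len(after), replace=True).mean()
    - rng.choice(before, size=len(before), replace=True).mean()
    for _ in range(3000)
]

# 95% confidence interval from the bootstrap distribution
np.percentile(boot_diffs, [2.5, 97.5])
```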
9. The fate of Dr. Semmelweis

So handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.

The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery, and in 1849 he was forced to leave the Vienna General Hospital for good.

One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data; he didn't show any graphs or confidence intervals. If he had had access to the analysis we've just put together, he might have been more successful in getting the Viennese doctors to wash their hands.
# The data Semmelweis collected points to that: doctors_should_wash_their_hands = True
_____no_output_____
MIT
24_Project_5_Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb
mohd-faizy/DataScience-With-Python
Carnot efficiency as a function of temperature, assuming a fixed reference temperature
import numpy as np import matplotlib.pyplot as plt # standard temperature 0°C as reference θ_0 = 273.15 # Kelvin # temperature range: 0°C to 200°C θ = np.linspace(θ_0, θ_0+200, num=500) # Carnot efficiency η = (θ - θ_0) / θ fig, ax = plt.subplots(dpi=200) ax.plot(θ, η);
_____no_output_____
MIT
carnot_efficiency.ipynb
MarkusLohmayer/master-thesis-code
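As a quick numeric check of $\eta = (\theta - \theta_0)/\theta$: with the hot side at 100 °C and the 0 °C reference, the efficiency comes out to about 26.8 %.

```python
θ_0 = 273.15         # reference temperature, K
θ = θ_0 + 100.0      # hot temperature, K
η = (θ - θ_0) / θ    # ≈ 0.268
print(η)
```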
Cleaning Up (& Stats About It)

For each annotator:
- How many annotation files?
- How many txt files?
- Number of empty .ann files
- How many non-empty .ann files have a `TranscriptionError_Document`/`DuplicatePage` tag?
- How many .ann files have ONLY one of those two tags and are empty otherwise? -> remove if so
  - => remove corresponding .txt files
  - => create new corpus
def get_all_files(annotator): """ collapsing folder structure per annotator""" data_dir = "../Data/" ann_dir = data_dir+annotator+"/" for cur_dir in glob(ann_dir+"/6*"): txt_files = sorted(glob(cur_dir+"/*.txt")) ann_files = sorted(glob(cur_dir+"/*.ann")) yield from zip(txt_files, ann_files) def has_error_tag(any_string): """Return strings with error tags""" return "TranscriptionError_Document" in any_string or\ "DuplicatePage" in any_string def remove_error_tag_lines(ann_file_content): return [line for line in ann_file_content.strip().split("\n") if not has_error_tag(line)] annotators = "A B C Silja Yolien".split() results = {} print("Total Annotation Files Per Annotator\n") for anno in annotators: empty = [] cur_keep = [] error_tag = [] error_tag_but_non_empty = [] ann_files = list(get_all_files(anno)) print(anno, len(ann_files)) for txt, ann in ann_files: with open(ann) as handle: contents = handle.read() if not contents.strip(): empty.append((txt, ann)) elif has_error_tag(contents): error_tags_removed = remove_error_tag_lines( contents ) if error_tags_removed == []: error_tag.append((txt, ann)) else: error_tag_but_non_empty.append((txt, ann)) else: cur_keep.append((txt, ann)) results[anno] = [cur_keep, empty, error_tag, error_tag_but_non_empty] from tabulate import tabulate stats = pd.DataFrame([ [k, sum(map(len, v))]+ [len(v[0])+len(v[-1])]+ list(map(len, v)) for k, v in results.items() ], columns=["Annotator", "Total", "Keep", "Non-empty-No error", "Empty", "Error", "Err.&Non-Empty"]).set_index("Annotator") print(stats) stats_T = pd.melt(stats[["Total", "Empty", "Keep", "Error"]].reset_index(), id_vars=["Annotator"], value_name="Number") plt.figure(figsize=(10, 7)) sns.barplot(data=stats_T, x='Annotator', y="Number", hue="variable") keep = {anno: v[0]+v[-1] for anno, v in results.items()} {k: len(v) for k, v in keep.items()} # keep
_____no_output_____
MIT
data_2/Notebooks/JaccardDistanceAnalysis.ipynb
budh333/UnSilence_VOC
Make New Corpus by copying files
from shutil import copy2 already_copied = True if not already_copied: from tqdm import tqdm os.makedirs('Keep') for anno, ls in tqdm(keep.items()): cur_dir = f"Keep/{anno}" os.makedirs(cur_dir) for txt, ann in ls: copy2(txt, cur_dir) copy2(ann, cur_dir) else: print("Already copied, doing nothing!")
Already copied, doing nothing!
MIT
data_2/Notebooks/JaccardDistanceAnalysis.ipynb
budh333/UnSilence_VOC
Pairwise Intersections of Annotation Files
def only_names(file_list): "returns only names of files in a particular list" return [ann.split("/")[-1] for txt, ann in file_list] ls = [] for a1, fs1 in keep.items(): for a2, fs2 in keep.items(): if not a1 == a2: names1, names2 = only_names(fs1), only_names(fs2) inter = set(names1) & set(names2) #names of files are identical val = len(inter)/len(names1) total_names1 = only_names(tup for ls in results[a1] for tup in ls) total_names2 = only_names(tup for ls in results[a2] for tup in ls) total_inter = set(total_names1) & set(total_names2) total_val = len(total_inter)/len(total_names1) jacc_val = len(set(names1).intersection(set(names2)))/len(set(names1).union(set(names2))) jacc_val_2 = len(set(total_names1).intersection(set(total_names2)))/len(set(total_names1).union(set(total_names2))) ls.append([a1, a2, len(inter), val, len(total_inter), total_val, jacc_val, jacc_val_2]) inter_stats = pd.DataFrame(ls, columns=["Anno1", "Anno2", "Intersection", "normed_Intersection", "total_Intersection", "total_normed_Intersection", "Jaccard_distance", "Jaccard_Distance_2"]) # inter_stats
_____no_output_____
MIT
data_2/Notebooks/JaccardDistanceAnalysis.ipynb
budh333/UnSilence_VOC
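For reference, the Jaccard index used here is |A ∩ B| / |A ∪ B|; a minimal sketch on two toy sets of file names:

```python
a = {"f1.ann", "f2.ann", "f3.ann"}
b = {"f2.ann", "f3.ann", "f4.ann"}

# two shared files out of four distinct files -> 0.5
jaccard = len(a & b) / len(a | b)
```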
Jaccard Distance to Understand the Overlap of Pages between Annotators
inter_stats_T = inter_stats.pivot_table( values="Jaccard_distance", index="Anno1", columns="Anno2" ) sns.heatmap(inter_stats_T*100, annot=True, cmap="YlGnBu") _ = plt.title("Before Clean Up: Jaccard Distance (percentage)") plt.show() inter_stats_T = inter_stats.pivot_table( values="Jaccard_Distance_2", index="Anno1", columns="Anno2" ) sns.heatmap(inter_stats_T*100, annot=True, cmap="YlGnBu") _ = plt.title("After Clean Up: Jaccard Distance (percentage)") plt.show() # inter_stats_T = inter_stats.pivot_table( # values="Intersection", # index="Anno1", columns="Anno2" # ) # sns.heatmap(inter_stats_T, # annot=True, cmap="YlGnBu") # _ = plt.title("Before Clean Up: Raw Counts")
_____no_output_____
MIT
data_2/Notebooks/JaccardDistanceAnalysis.ipynb
budh333/UnSilence_VOC
**Conclusion**: On average, each pair of annotators has about a 6% overlap (relative to the total set of documents they annotated together).

Check Tag Distributions
def get_lines(ann_file): with open(ann_file) as handle: for l in handle: if not l.strip(): continue yield l.strip().split("\t") def get_entities(ann_file): for line in get_lines(ann_file): if line[0].startswith("T") and len(line) >= 2: tag_type, tag, string = line yield tag.split()[0] ents = {a: [e for txt, ann in files for e in get_entities(ann)] for a, files in keep.items()} from collections import Counter entity_stats = pd.DataFrame( [[a, e, c] for a in ents for e, c in Counter(ents[a]).items() if not e in ["DuplicatePage", "Noteworthy", "TranscriptionError_Document"]], columns=["Annotator", "EntityType", "Count"] ) plt.figure(figsize=(10, 7)) _ = sns.barplot(data=entity_stats, x='Annotator', y="Count", hue="EntityType")
_____no_output_____
MIT
data_2/Notebooks/JaccardDistanceAnalysis.ipynb
budh333/UnSilence_VOC
flavio tutorial Part 5: Extending flavio

Adding an observable: photon polarization in $B\to K\pi\pi\gamma$

$$\lambda_\gamma = \frac{|G_L|^2 - |G_R|^2}{|G_L|^2 + |G_R|^2}$$

$$G_L = C_7^\text{eff} + \ldots, \qquad G_R = C_7' + \ldots$$

$\ldots$ refer to non-factorizable hadronic contributions - let's ignore them for simplicity.

Writing a function that takes a `WilsonCoefficients` instance and a dictionary of parameter values as input
import flavio

def ll_lgamma(wc_obj, par_dict):
    scale = flavio.config['renormalization scale']['bvgamma']
    wc_dict = flavio.physics.bdecays.wilsoncoefficients.wctot_dict(wc_obj, 'bsee', scale, par_dict)
    delta_C7 = flavio.physics.bdecays.matrixelements.delta_C7(
        par=par_dict, wc=wc_dict, q2=0, scale=scale, qiqj='bs')
    # GL and GR are already the squared magnitudes |G_L|^2 and |G_R|^2,
    # so lambda_gamma is their normalized difference (no further squaring)
    GL = abs(wc_dict['C7eff_bs'] + delta_C7)**2
    GR = abs(wc_dict['C7effp_bs'])**2
    return (GL - GR) / (GL + GR)
_____no_output_____
MIT
5 Extending.ipynb
DavidMStraub/flavio-tutorial
Defining the `Observable` and `Prediction` instances
obs = 'lambda_gamma' flavio.classes.Observable(obs) flavio.classes.Prediction(obs, ll_lgamma); flavio.sm_prediction('lambda_gamma') wc = flavio.WilsonCoefficients() wc.set_initial({'C7p_bs': 0.25}, 4.8) flavio.np_prediction('lambda_gamma', wc)
_____no_output_____
MIT
5 Extending.ipynb
DavidMStraub/flavio-tutorial
Adding a new parameter
flavio.classes.Parameter('my_fudge_factor') flavio.default_parameters.set_constraint('my_fudge_factor', '0 +- 0.2') flavio.default_parameters.get_central('my_fudge_factor') [flavio.default_parameters.get_random_all()['my_fudge_factor'] for i in range(5)]
_____no_output_____
MIT
5 Extending.ipynb
DavidMStraub/flavio-tutorial
Lambda School Data Science - Loading, Cleaning and Visualizing Data

Objectives for today:
- Load data from multiple sources into a Python notebook
  - From a URL (github or otherwise)
  - CSV upload method
  - !wget method
- "Clean" a dataset using common Python libraries
  - Removing NaN values "Data Imputation"
- Create basic plots appropriate for different data types
  - Scatter Plot
  - Histogram
  - Density Plot
  - Pairplot (if we have time)

Part 1 - Loading Data

Data comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.

Data set sources:
- https://archive.ics.uci.edu/ml/datasets.html
- https://github.com/awesomedata/awesome-public-datasets
- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)

Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags).

Lecture example - flag data
# Step 1 - find the actual file to download # From navigating the page, clicking "Data Folder" flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data' # You can "shell out" in a notebook for more powerful tools # https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html # Funny extension, but on inspection looks like a csv !curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data # Extensions are just a norm! You have to inspect to be sure what something is # Step 2 - load the data # How to deal with a csv? 🐼 import pandas as pd flag_data = pd.read_csv(flag_data_url) # Step 3 - verify we've got *something* flag_data.head() # Step 4 - Looks a bit odd - verify that it is what we want flag_data.count() !curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc # So we have 193 observations with funny names, file has 194 rows # Looks like the file has no header row, but read_csv assumes it does help(pd.read_csv) # Alright, we can pass header=None to fix this flag_data = pd.read_csv(flag_data_url, header=None) flag_data.head() flag_data.count() flag_data.isna().sum()
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
Yes, but what does it *mean*?

This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).

```
1. name: Name of the country concerned
2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania
3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW
4. area: in thousands of square km
5. population: in round millions
6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others
7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others
8. bars: Number of vertical bars in the flag
9. stripes: Number of horizontal stripes in the flag
10. colours: Number of different colours in the flag
11. red: 0 if red absent, 1 if red present in the flag
12. green: same for green
13. blue: same for blue
14. gold: same for gold (also yellow)
15. white: same for white
16. black: same for black
17. orange: same for orange (also brown)
18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)
19. circles: Number of circles in the flag
20. crosses: Number of (upright) crosses
21. saltires: Number of diagonal crosses
22. quarters: Number of quartered sections
23. sunstars: Number of sun or star symbols
24. crescent: 1 if a crescent moon symbol present, else 0
25. triangle: 1 if any triangles present, 0 otherwise
26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 0
27. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise
28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise
29. topleft: colour in the top-left corner (moving right to decide tie-breaks)
30. botright: colour in the bottom-right corner (moving left to decide tie-breaks)
```

Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...

Steps of Loading and Exploring a Dataset:
- Find a dataset that looks interesting
- Learn what you can about it
  - What's in it?
  - How many rows and columns?
  - What types of variables?
- Look at the raw contents of the file
- Load it into your workspace (notebook)
  - Handle any challenges with headers
  - Handle any problems with missing values
- Then you can start to explore the data
  - Look at the summary statistics
  - Look at counts of different categories
  - Make some plots to look at the distribution of the data

3 ways of loading a dataset

From its URL
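One way to do the flag exercise above (a sketch; `flag_data_url` is the variable defined earlier, and the names simply follow the codebook in order, so the 1-based numbering drops away):

```python
flag_columns = ['name', 'landmass', 'zone', 'area', 'population', 'language',
                'religion', 'bars', 'stripes', 'colours', 'red', 'green', 'blue',
                'gold', 'white', 'black', 'orange', 'mainhue', 'circles', 'crosses',
                'saltires', 'quarters', 'sunstars', 'crescent', 'triangle', 'icon',
                'animate', 'text', 'topleft', 'botright']

flag_data = pd.read_csv(flag_data_url, header=None, names=flag_columns)
flag_data.head()
```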
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data' column_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income'] df = pd.read_csv(dataset_url, names=column_headers) print(df.shape) df.head()
(32561, 15)
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
From a local file
from google.colab import files
uploaded = files.upload()
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
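Once a file has been uploaded this way, `uploaded` maps file names to raw bytes, so it can be read into pandas from memory; a sketch assuming a file called `adult.data` was selected in the upload dialog:

```python
import io

df_local = pd.read_csv(io.BytesIO(uploaded['adult.data']), names=column_headers)
df_local.head()
```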
Using the `!wget` command
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
Part 2 - Deal with Missing Values

Diagnose Missing Values

Let's use the Adult Dataset from UCI.
df.isnull().sum()
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
Fill Missing Values
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data' column_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income'] df = pd.read_csv(dataset_url, names=column_headers, na_values=[' ?']) print(df.shape) df.head(20) df.dtypes df.iloc[14][13]
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
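The cell above marks ' ?' as missing but does not fill anything yet. One simple, hedged sketch of imputation for this dataset: fill the categorical columns that contain missing values with their most frequent category (a numeric column could use its median instead).

```python
# fill each categorical column with its mode (most frequent value)
for col in ['workclass', 'occupation', 'native-country']:
    df[col] = df[col].fillna(df[col].mode()[0])

# a numeric column could be filled with its median instead, e.g.
# df['age'] = df['age'].fillna(df['age'].median())

df.isnull().sum()
```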
Part 3 - Explore the Dataset: Look at Summary Statistics

Numeric
df.describe()
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
Non-Numeric
df.describe(exclude="number")
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
Look at Categorical Values

Part 4 - Basic Visualizations (using the Pandas Library)

Histogram
# Pandas Histogram
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
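The cell above is only a placeholder comment; a minimal example of a pandas histogram on the adult data loaded earlier:

```python
# distribution of age in the adult dataset
df['age'].hist(bins=20);
```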
Density Plot (KDE)
# Pandas Density Plot
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
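Likewise, a minimal pandas density plot (KDE) on the same column (pandas uses scipy under the hood for this):

```python
# smoothed distribution of age
df['age'].plot.kde();
```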
Scatter Plot
# Pandas Scatterplot
_____no_output_____
MIT
module2-loadingdata/JeanFraga_LS_DS8_112_Loading_Data.ipynb
JeanFraga/DS-Unit-1-Sprint-1-Dealing-With-Data
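And a minimal pandas scatter plot using two numeric columns from the adult data:

```python
# hours worked per week vs. age; low alpha helps with overplotting
df.plot.scatter(x='age', y='hours-per-week', alpha=0.1);
```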
Sci-Fi IRL 1: Technology Terminology Velocity

A Data Storytelling Project by Tobias Reaper

---- Datalogue 008 ----------

Imports and Configuration
# Three Musketeers import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy import stats # For using the API import requests # More advanced vizualizations with Bokeh from bokeh.plotting import figure, output_file, output_notebook, show from bokeh.layouts import column from bokeh.models.glyphs import Patches # Import color library import colorcet as cc # Define color palette palette = [cc.bkr[i*15] for i in range(17)] palette # Set pandas display options to allow for more columns and rows pd.set_option("display.max_columns", 100) pd.set_option("display.max_rows", 500)
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
--- Functions
def pushshift_api_request(query, subreddit, frequency="month", aggs="created_utc"): """ Returns the JSON response of a PushShift API aggregate comment search as a Python dictionary. Note: if you're reading this note, that means that this function is still only written with the intention of automating a specific set of actions for a specific project. ---- Arguments ---- query: (str) keyword to search. subreddit: (str) subreddit name frequency: (str) set the size of the time buckets. aggs: (str) aggregate function name. Default is "created_utc". (For more information, read the PushShift API Documentation.) ------------------- """ # Build the query url based on endpoints and parameters url = f"https://api.pushshift.io/reddit/search/comment/?q={query}&subreddit={subreddit}&aggs={aggs}&frequency={frequency}&size=100" # Send the request and save the response into the response object response = requests.get(url) # Check the response; stop execution if failed assert response.status_code == 200 # Parse the JSON into a Python dictionary # and return it for further processing return response.json() def create_df(data, keyword, frequency="month"): """ Returns cleaned Pandas DataFrame of keyword frequency over time, given correctly-formatted Python dictionary. Renames the frequency column to keyword; converts month to datetime. Note: if you're reading this note, that means that this function is still only written with the intention of automating a specific set of actions for a specific project. ---- Arguments ---- data: (dict) Python dictionary converted from JSON API response. keyword: (str) the keyword that was queried. time_bucket: (str) size of time buckets, which is also the name of the resulting DataFrame column. Defaults to "month". ------------------- """ # Convert the python object into a pandas dataframe df = pd.DataFrame(data["aggs"]["created_utc"]) # Convert "key" into a datetime column df["key"] = pd.to_datetime(df["key"], unit="s", origin="unix") # Rename "key" to reflect the fact that it is the beginning of the time bucket df = df.rename(mapper={"key": frequency, "doc_count": keyword}, axis="columns") # Return the DataFrame return df def comments_df(data): """ Returns Reddit comments in Pandas DataFrame, given the correctly-formatted Python dictionary. Note: if you're reading this note, that means that this function is still only written with the intention of automating a specific set of actions for a specific project. ---- Arguments ---- data: (dict) Python dictionary converted from JSON API response. ------------------- """ # Convert the comments into a pandas dataframe df = pd.DataFrame(data["data"]) # Return the DataFrame return df def df_to_csv(data, filename): """ Basically just a wrapper around the Pandas `.to_csv()` method, created to standardize the inputs and outputs. ---- Arguments ---- data: (pd.DataFrame) Pandas DataFrame to be saved as a csv. filepath: (str) name or path of the file to be saved. ------------------- """ # Saves the DataFrame to csv data.to_csv(path_or_buf=filename, index=False) # And that's it, folks! def reddit_data_setter(keywords, subreddits, csv=False, frequency="month", aggs="created_utc"): """ Creates two DataFrames that hold combined data of all combinations of keywords / subreddits. Note: if you're reading this note, that means that this function is still only written with the intention of automating a specific set of actions for a specific project. ---- Arguments ---- keywords: (list) keyword(s) to search. 
subreddits: (list) name of subreddit(s) to include. csv: (bool) if True, save the resulting dataframes as csv file. frequency: (str) set the size of the time buckets. aggs: (str) aggregate function name. Default is "created_utc". (For more information, read the PushShift API Documentation.) ------------------- """ from time import sleep comment_df_list = [] # Empty list to hold comment dataframes word_df_list = [] # Empty list to hold monthly word count dataframes df_comm = pd.DataFrame() # Empty dataframe for comment data df_main = pd.DataFrame() # Empty dataframe for keyword counts # Create the "month" (datetime) column - to be used when joining df_main["month"] = pd.date_range(start="2005-01-01", end="2019-09-01", freq="MS") # Run query for individual keywords on each subreddit # Subreddit (outer) -> keyword (inner) = all keywords in one subreddit at a time for subreddit in subreddits: for word in keywords: # Create unique column name for each subreddit / word combo col_name = f"{subreddit}_{word.replace(' ', '')}" # Indicates current subreddit / keyword start = f"{col_name}..." print(start) sleep(0.5) # Add sleep time to reduce API load # Make request and convert response to dictionary dictionary = pushshift_api_request(word, subreddit) # Append aggs word count df to word_df_list word_df_list.append(create_df(dictionary, col_name)) # Append comments df to comment_df_list comment_df_list.append(comments_df(dictionary)) sleep(0.5) # More sleep to reduce API load sleep(0.5) # Set "month" as index in order to concatenate list of dataframes df_main = pd.concat([df.set_index("month") for df in word_df_list], axis=1, join="outer").reset_index() # Concatenate comment_df_list dataframes df_comm = pd.concat(comment_df_list, axis=0, sort=False, join="outer", ignore_index=True) # If csv parameter is set to True, save datasets to filesystem as csv if csv: df_to_csv(df_main, f"{keywords[0]}-monthly.csv") df_to_csv(df_comm, f"{keywords[0]}-comments.csv") # Return df_main, df_comm, respectively return df_main, df_comm
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
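A small usage sketch of the helpers defined above, for a single keyword/subreddit pair (useful for checking the API response shape before running the full loop):

```python
# one keyword in one subreddit, aggregated by month
response_dict = pushshift_api_request("algorithm", "technology")
monthly_counts = create_df(response_dict, "technology_algorithm")
monthly_counts.head()
```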
------ Term Velocity: Algorithm

The velocity of the term "algorithm" in each of the target subreddits.
# Define keywords and subreddits as python lists words = [ "algorithm", ] subs = [ "Futurology", "technology", "science", "askscience", "gadgets", "books", "scifi", "movies", "gaming", "television", "news", "worldnews", "politics", "philosophy", "AskReddit", "todayilearned", "explainlikeimfive", ] # Run the function to create and save the dataset df_main, df_comm = reddit_data_setter(words, subs, True) # Take a look to be sure it worked as expected print(df_main.shape) df_main.head()
(156, 18)
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
--- Visualizations
# Load csv df_main = pd.read_csv("008-Session_Exports/algorithm-monthly.csv") df_main["month"] = pd.to_datetime(df_main["month"], infer_datetime_format=True) df_main.head() df_main.dtypes # Color assignments subs_colors = {} for i in range(len(subs)): subs_colors[f"{subs[i]}"] = f"{palette[i]}" # Output to current notebook output_notebook() output_file(f"{words[0]}-velocity-viz.html") p = {} # dict to hold plots p_names = [] # list for plot names for sub in subs_colors: p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}", plot_width=1000, plot_height=200, x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0])) p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}") p_names.append(p[f"{sub}"]) # Show the results show(column(p_names))
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
------ Term Velocity: AI

The velocity of the term "AI" (abbreviation of artificial intelligence) in each of the target subreddits.
# Define keywords and subreddits as python lists words = [ "AI", ] subs = [ "Futurology", "technology", "science", "askscience", "gadgets", "books", "scifi", "movies", "gaming", "television", "news", "worldnews", "politics", "philosophy", "AskReddit", "todayilearned", "explainlikeimfive", ] # Run the function to create and save the dataset df_main, df_comm = reddit_data_setter(words, subs, True) # Take a look to be sure it worked as expected print(df_main.shape) df_main.head()
(156, 18)
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
--- Visualizations
# Color assignments subs_colors = {} for i in range(len(subs)): subs_colors[f"{subs[i]}"] = f"{palette[i]}" # Output to current notebook output_notebook() output_file(f"{words[0]}-velocity-viz.html") p = {} # dict to hold plots p_names = [] # list for plot names for sub in subs_colors: p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}", plot_width=1000, plot_height=200, x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0])) p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}") p_names.append(p[f"{sub}"]) # Show the results show(column(p_names))
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
------ Term Velocity: AR

The velocity of the term "AR" (abbreviation of augmented reality) in each of the target subreddits.
# Define keywords and subreddits as python lists words = [ "AR", ] subs = [ "Futurology", "technology", "science", "askscience", "gadgets", "books", "scifi", "movies", "gaming", "television", "news", "worldnews", "politics", "philosophy", "AskReddit", "todayilearned", "explainlikeimfive", ] # Run the function to create and save the dataset df_main, df_comm = reddit_data_setter(words, subs, True) # Take a look to be sure it worked as expected print(df_main.shape) df_main.head()
(156, 18)
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
--- Visualizations
# Color assignments subs_colors = {} for i in range(len(subs)): subs_colors[f"{subs[i]}"] = f"{palette[i]}" # Output to current notebook output_notebook() output_file(f"{words[0]}-velocity-viz.html") p = {} # dict to hold plots p_names = [] # list for plot names for sub in subs_colors: p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}", plot_width=1000, plot_height=200, x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0])) p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}") p_names.append(p[f"{sub}"]) # Show the results show(column(p_names))
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
------ Term Velocity: Automation

The velocity of the term "automation" in each of the target subreddits.
# Define keywords and subreddits as python lists words = [ "automation", ] subs = [ "Futurology", "technology", "science", "askscience", "gadgets", "books", "scifi", "movies", "gaming", "television", "news", "worldnews", "politics", "philosophy", "AskReddit", "todayilearned", "explainlikeimfive", ] # Run the function to create and save the dataset df_main, df_comm = reddit_data_setter(words, subs, True) # Take a look to be sure it worked as expected print(df_main.shape) df_main.head()
(151, 18)
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
--- Visualizations
# Output to current notebook output_notebook() output_file(f"{words[0]}-velocity-viz.html") p = {} # dict to hold plots p_names = [] # list for plot names for sub in subs_colors: p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}", plot_width=1000, plot_height=200, x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0])) p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}") p_names.append(p[f"{sub}"]) # Show the results show(column(p_names))
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
------ Term Velocity: Big Data

The velocity of the term "big data" in each of the target subreddits.
# Define keywords and subreddits as python lists words = [ "big data", ] subs = [ "Futurology", "technology", "science", "askscience", "gadgets", "books", "scifi", "movies", "gaming", "television", "news", "worldnews", "politics", "philosophy", "AskReddit", "todayilearned", "explainlikeimfive", ] # Run the function to create and save the dataset df_main, df_comm = reddit_data_setter(words, subs, True) # Take a look to be sure it worked as expected print(df_main.shape) df_main.head()
(153, 18)
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
--- Visualizations
# Output to current notebook output_notebook() output_file(f"{words[0].replace(' ', '')}-velocity-viz.html") p = {} # dict to hold plots p_names = [] # list for plot names for sub in subs_colors: p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}", plot_width=1000, plot_height=200, x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0])) p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0].replace(' ', '')}"], line_width=2, line_color=f"{subs_colors[sub]}") p_names.append(p[f"{sub}"]) # Show the results show(column(p_names))
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
------ Overall Subreddit Comment Velocity

The total number of comments made in each of the subreddits. This is one way I can normalize the data.
# Define keywords and subreddits as python lists words = [""] # Passing in an empty list this time to look at all comments subs = [ "Futurology", "technology", "science", "askscience", "gadgets", "books", "scifi", "movies", "gaming", "television", "news", "worldnews", "politics", "philosophy", "AskReddit", "todayilearned", "explainlikeimfive", ]
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
---
def all_comments_monthly(subreddit, frequency="month", aggs="created_utc"): """ Returns the JSON response of a PushShift API aggregate comment search as a Python dictionary. Note: if you're reading this note, that means that this function is still only written with the intention of automating a specific set of actions for a specific project. ---- Arguments ---- query: (str) keyword to search. subreddit: (str) subreddit name frequency: (str) set the size of the time buckets. aggs: (str) aggregate function name. Default is "created_utc". (For more information, read the PushShift API Documentation.) ------------------- """ # Build the query url based on endpoints and parameters url = f"https://api.pushshift.io/reddit/search/comment/?subreddit={subreddit}&aggs={aggs}&frequency={frequency}&size=100" # Send the request and save the response into the response object response = requests.get(url) # Check the response; stop execution if failed assert response.status_code == 200 # Parse the JSON into a Python dictionary and return it for further processing return response.json() def all_comments_aggregator(keywords, subreddits, csv=False, frequency="month", aggs="created_utc"): """ Creates two DataFrames that hold combined data of all comments in all the target subreddits. Note: if you're reading this note, that means that this function is still only written with the intention of automating a specific set of actions for a specific project. ---- Arguments ---- keywords: (list) keyword(s) to search. subreddits: (list) name of subreddit(s) to include. csv: (bool) if True, save the resulting dataframes as csv file. frequency: (str) set the size of the time buckets. aggs: (str) aggregate function name. Default is "created_utc". (For more information, read the PushShift API Documentation.) ------------------- """ from time import sleep comment_df_list = [] # Empty list to hold comment dataframes word_df_list = [] # Empty list to hold monthly word count dataframes df_comm = pd.DataFrame() # Empty dataframe for comment data df_main = pd.DataFrame() # Empty dataframe for keyword counts # Create the "month" (datetime) column - to be used when joining df_main["month"] = pd.date_range(start="2005-01-01", end="2019-09-01", freq="MS") # Run query for individual keywords on each subreddit # Subreddit (outer) -> keyword (inner) = all keywords in one subreddit at a time for subreddit in subreddits: for word in keywords: # Create unique column name for each subreddit / word combo col_name = f"{subreddit}_{word.replace(' ', '')}" # Indicates current subreddit / keyword start = f"{col_name}..." print(start) sleep(0.5) # Add sleep time to reduce API load # Make request and convert response to dictionary dictionary = pushshift_api_request(word, subreddit) # Append aggs word count df to word_df_list word_df_list.append(create_df(dictionary, col_name)) # Append comments df to comment_df_list comment_df_list.append(comments_df(dictionary)) sleep(0.5) # More sleep to reduce API load sleep(0.5) # Set "month" as index in order to concatenate list of dataframes df_main = pd.concat([df.set_index("month") for df in word_df_list], axis=1, join="outer").reset_index() # Concatenate comment_df_list dataframes df_comm = pd.concat(comment_df_list, axis=0, sort=False, join="outer", ignore_index=True) # If csv parameter is set to True, save datasets to filesystem as csv if csv: df_to_csv(df_main, f"{keywords[0]}-monthly.csv") df_to_csv(df_comm, f"{keywords[0]}-comments.csv") # Return df_main, df_comm, respectively return df_main, df_comm
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
---
# Run the function to create and save the dataset df_main, df_comm = reddit_data_setter(words, subs, True) # Take a look to be sure it worked as expected print(df_main.shape) df_main.head()
(156, 18)
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
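One hedged sketch of using these totals for normalization, assuming the two CSVs written above: the keyword counts in `algorithm-monthly.csv` and the all-comment counts, whose file name comes out as `-monthly.csv` because the keyword list holds an empty string.

```python
algo = pd.read_csv("algorithm-monthly.csv")
totals = pd.read_csv("-monthly.csv")

# line the two frames up on the month column, then take the per-subreddit ratio
merged = algo.merge(totals, on="month")
normed = merged[["month"]].copy()
for sub in subs:
    # share of each subreddit's comments that mention the keyword
    normed[sub] = merged[f"{sub}_algorithm"] / merged[f"{sub}_"]

normed.head()
```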
--- Visualizations
# Output to current notebook output_notebook() output_file("overall-subreddit-velocity-viz.html") p = {} # dict to hold plots p_names = [] # list for plot names for sub in subs_colors: p[f"{sub}"] = figure(title=f"Comments in r/{sub}", plot_width=1000, plot_height=200, x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0])) p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_"], line_width=2, line_color=f"{subs_colors[sub]}") p_names.append(p[f"{sub}"]) # Show the results show(column(p_names))
_____no_output_____
MIT
projects/sci_fi_irl/008-Session/sci_fi_irl-008.ipynb
tobias-fyi/data-may-differ
ORF recognition by CNN

Compare to ORF_CNN_101. Use a 2-layer CNN. Run on Mac.
PC_SEQUENCES=20000 # how many protein-coding sequences NC_SEQUENCES=20000 # how many non-coding sequences PC_TESTS=1000 NC_TESTS=1000 BASES=1000 # how long is each sequence ALPHABET=4 # how many different letters are possible INPUT_SHAPE_2D = (BASES,ALPHABET,1) # Conv2D needs 3D inputs INPUT_SHAPE = (BASES,ALPHABET) # Conv1D needs 2D inputs FILTERS = 32 # how many different patterns the model looks for NEURONS = 16 WIDTH = 3 # how wide each pattern is, in bases STRIDE_2D = (1,1) # For Conv2D how far in each direction STRIDE = 1 # For Conv1D, how far between pattern matches, in bases EPOCHS=10 # how many times to train on all the data SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3 FOLDS=5 # train the model this many times (range 1 to SPLITS) import sys try: from google.colab import drive IN_COLAB = True print("On Google CoLab, mount cloud-local file, get our code from GitHub.") PATH='/content/drive/' #drive.mount(PATH,force_remount=True) # hardly ever need this #drive.mount(PATH) # Google will require login credentials DATAPATH=PATH+'My Drive/data/' # must end in "/" import requests r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py') with open('RNA_gen.py', 'w') as f: f.write(r.text) from RNA_gen import * r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py') with open('RNA_describe.py', 'w') as f: f.write(r.text) from RNA_describe import * r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py') with open('RNA_prep.py', 'w') as f: f.write(r.text) from RNA_prep import * except: print("CoLab not working. On my PC, use relative paths.") IN_COLAB = False DATAPATH='data/' # must end in "/" sys.path.append("..") # append parent dir in order to use sibling dirs from SimTools.RNA_gen import * from SimTools.RNA_describe import * from SimTools.RNA_prep import * MODELPATH="BestModel" # saved on cloud instance and lost after logout #MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login if not assert_imported_RNA_gen(): print("ERROR: Cannot use RNA_gen.") if not assert_imported_RNA_prep(): print("ERROR: Cannot use RNA_prep.") from os import listdir import time # datetime import csv from zipfile import ZipFile import numpy as np import pandas as pd from scipy import stats # mode from sklearn.preprocessing import StandardScaler from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from keras.models import Sequential from keras.layers import Dense,Embedding from keras.layers import Conv1D,Conv2D from keras.layers import Flatten,MaxPooling1D,MaxPooling2D from keras.losses import BinaryCrossentropy # tf.keras.losses.BinaryCrossentropy import matplotlib.pyplot as plt from matplotlib import colors mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1 np.set_printoptions(precision=2) t = time.time() time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) # Use code from our SimTools library. 
def make_generators(seq_len): pcgen = Collection_Generator() pcgen.get_len_oracle().set_mean(seq_len) pcgen.set_seq_oracle(Transcript_Oracle()) ncgen = Collection_Generator() ncgen.get_len_oracle().set_mean(seq_len) return pcgen,ncgen pc_sim,nc_sim = make_generators(BASES) pc_train = pc_sim.get_sequences(PC_SEQUENCES) nc_train = nc_sim.get_sequences(NC_SEQUENCES) print("Train on",len(pc_train),"PC seqs") print("Train on",len(nc_train),"NC seqs") # Use code from our LearnTools library. X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles print("Data ready.") def make_DNN(): print("make_DNN") print("input shape:",INPUT_SHAPE) dnn = Sequential() #dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE)) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same", input_shape=INPUT_SHAPE)) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same")) dnn.add(MaxPooling1D()) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same")) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same")) dnn.add(MaxPooling1D()) dnn.add(Flatten()) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(1,activation="sigmoid",dtype=np.float32)) dnn.compile(optimizer='adam', loss=BinaryCrossentropy(from_logits=False), metrics=['accuracy']) # add to default metrics=loss dnn.build(input_shape=INPUT_SHAPE) #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE) #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False) #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"]) return dnn model = make_DNN() print(model.summary()) from keras.callbacks import ModelCheckpoint def do_cross_validation(X,y): cv_scores = [] fold=0 mycallbacks = [ModelCheckpoint( filepath=MODELPATH, save_best_only=True, monitor='val_accuracy', mode='max')] splitter = KFold(n_splits=SPLITS) # this does not shuffle for train_index,valid_index in splitter.split(X): if fold < FOLDS: fold += 1 X_train=X[train_index] # inputs for training y_train=y[train_index] # labels for training X_valid=X[valid_index] # inputs for validation y_valid=y[valid_index] # labels for validation print("MODEL") # Call constructor on each CV. Else, continually improves the same model. 
model = model = make_DNN() print("FIT") # model.fit() implements learning start_time=time.time() history=model.fit(X_train, y_train, epochs=EPOCHS, verbose=1, # ascii art while learning callbacks=mycallbacks, # called at end of each epoch validation_data=(X_valid,y_valid)) end_time=time.time() elapsed_time=(end_time-start_time) print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time)) # print(history.history.keys()) # all these keys will be shown in figure pd.DataFrame(history.history).plot(figsize=(8,5)) plt.grid(True) plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale plt.show() do_cross_validation(X,y) from keras.models import load_model pc_test = pc_sim.get_sequences(PC_TESTS) nc_test = nc_sim.get_sequences(NC_TESTS) X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET) best_model=load_model(MODELPATH) scores = best_model.evaluate(X, y, verbose=0) print("The best model parameters were saved during cross-validation.") print("Best was defined as maximum validation accuracy at end of any epoch.") print("Now re-load the best model and test it on previously unseen data.") print("Test on",len(pc_test),"PC seqs") print("Test on",len(nc_test),"NC seqs") print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100)) from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score ns_probs = [0 for _ in range(len(y))] bm_probs = best_model.predict(X) ns_auc = roc_auc_score(y, ns_probs) bm_auc = roc_auc_score(y, bm_probs) ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs) bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs) plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc) plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc) plt.title('ROC') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() print("%s: %.2f%%" %('AUC',bm_auc))
_____no_output_____
MIT
Notebooks/ORF_CNN_104.ipynb
ShepherdCode/Soars2021
minGPT License*This notebook port's the [minGPT codebase](https://github.com/karpathy/minGPT) into equivalent NeMo code. The license for minGPT has therefore been attached here.*```The MIT License (MIT) Copyright (c) 2020 Andrej KarpathyPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.``` torch-rnn License*This notebook utilizes the `tiny-shakespeare` dataset from the [torch-rnn](https://github.com/jcjohnson/torch-rnn) codebase. The license for torch-rnn has therefore been attached here.*```The MIT License (MIT)Copyright (c) 2016 Justin JohnsonPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.``` -------***Note: This notebook will intentionally introduce some errors to show the power of Neural Types or model development concepts, inside the cells marked with `[ERROR CELL]`. The explanation of and resolution of such errors can be found in the subsequent cells.***----- The NeMo ModelNeMo comes with many state of the art pre-trained Conversational AI models for users to quickly be able to start training and fine-tuning on their own datasets. In the previous [NeMo Primer](https://colab.research.google.com/github/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb) notebook, we learned how to download pretrained checkpoints with NeMo and we also discussed the fundamental concepts of the NeMo Model. The previous tutorial showed us how to use, modify, save, and restore NeMo Models.In this tutorial we will learn how to develop a non-trivial NeMo model from scratch. This helps us to understand the underlying components and how they interact with the overall PyTorch ecosystem. 
-------At the heart of NeMo lies the concept of the "Model". For NeMo developers, a "Model" is the neural network(s) as well as all the infrastructure supporting those network(s), wrapped into a singular, cohesive unit. As such, most NeMo models are constructed to contain the following out of the box (note: some NeMo models support additional functionality specific to the domain/use case!) - - Neural Network architecture - all of the modules that are required for the model. - Dataset + Data Loaders - all of the components that prepare the data for consumption during training or evaluation. - Preprocessing + Postprocessing - any of the components that process the datasets so the modules can easily consume them. - Optimizer + Schedulers - basic defaults that work out of the box and allow further experimentation with ease. - Any other supporting infrastructure - tokenizers, language model configuration, data augmentation, etc. Constructing a NeMo ModelNeMo "Models" are comprised of a few key components, so let's tackle them one by one. We will attempt to go in the order that's stated above.To make this slightly challenging, let's port a model from the NLP domain this time. Transformers are all the rage, with BERT and his friends from Sesame Street forming the core infrastructure for many NLP tasks. An excellent (yet simple) implementation of one such model - GPT - can be found in the `minGPT` repository - https://github.com/karpathy/minGPT. While the script is short, it explains and succinctly explores all of the core components we expect in a NeMo model, so it's a prime candidate for NeMo! Sidenote: NeMo supports GPT in its NLP collection, and as such, this notebook aims to be an in-depth development walkthrough for such models.In the following notebook, we will attempt to port minGPT to NeMo, and along the way, discuss some core concepts of NeMo itself. Constructing the Neural Network ArchitectureFirst, on the list - the neural network that forms the backbone of the NeMo Model.So how do we create such a model? Using PyTorch! As you'll see below, NeMo components are compatible with all of PyTorch, so you can augment your workflow without ever losing the flexibility of PyTorch itself!Let's start with a couple of imports -
import torch import nemo from nemo.core import NeuralModule from nemo.core import typecheck
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Neural ModuleWait, what's `NeuralModule`? Where is the wonderful `torch.nn.Module`? `NeuralModule` is a subclass of `torch.nn.Module`, and it brings with it a few additional functionalities.In addition to being a `torch.nn.Module`, thereby being entirely compatible with the PyTorch ecosystem, it has the following capabilities - 1) `Typing` - It adds support for `Neural Type Checking` to the model. `Typing` is optional but quite useful, as we will discuss below!2) `Serialization` - Remember the `OmegaConf` config dict and YAML config files? Well, all `NeuralModules` inherently supports serialization/deserialization from such config dictionaries!3) `FileIO` - This is another entirely optional file serialization system. Does your `NeuralModule` require some way to preserve data that can't be saved into a PyTorch checkpoint? Write your serialization and deserialization logic in two handy methods! **Note**: When you create the final NeMo Model, this will be implemented for you! Automatic serialization and deserialization support of NeMo models!
class MyEmptyModule(NeuralModule): def forward(self): print("Neural Module ~ hello world!") x = MyEmptyModule() x()
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Neural TypesNeural Types? You might be wondering what that term refers to.Almost all NeMo components inherit the class `Typing`. `Typing` is a simple class that adds two properties to the class that inherits it - `input_types` and `output_types`. A NeuralType, by its shortest definition, is simply a semantic tensor. It contains information regarding the semantic shape the tensor should hold, as well as the semantic information of what that tensor represents. That's it.So what semantic information does such a typed tensor contain? Let's take an example below. ------Across the Deep Learning domain, we often encounter cases where tensor shapes may match, but the semantics don't match at all. For example take a look at the following rank 3 tensors -
# Case 1: embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30) x = torch.randint(high=10, size=(1, 5)) print("x :", x) print("embedding(x) :", embedding(x).shape) # Case 2 lstm = torch.nn.LSTM(1, 30, batch_first=True) x = torch.randn(1, 5, 1) print("x :", x) print("lstm(x) :", lstm(x)[0].shape) # Let's take all timestep outputs of the LSTM
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-------As you can see, the output of Case 1 is an embedding of shape [1, 5, 30], and the output of Case 2 is an LSTM output (state `h` over all time steps), also of the same shape [1, 5, 30].Do they have the same shape? **Yes**. If we do a Case 1 .shape == Case 2 .shape, will we get True as an output? **Yes**. Do they represent the same concept? **No**. The ability to recognize that the two tensors do not represent the same semantic information is precisely why we utilize Neural Types. It contains the information of both the shape and the semantic concept of what that tensor represents. If we performed a neural type check between the two outputs of those tensors, it would raise an error saying semantically they were different things (more technically, it would say that they are `INCOMPATIBLE` with each other)! --------You may have read of concepts such as [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html). While conceptually similar, Neural Types attached by NeMo are not as tightly bound to the PyTorch ecosystem - practically any object of a class can be attached with a neural type! Neural Types - UsageNeural Types sound interesting, so how do we go about adding them? Let's take a few cases below. Neural Types are one of the core foundations of NeMo - you will find them in a vast majority of Neural Modules, and every NeMo Model will have its Neural Types defined. While they are entirely optional and unintrusive, NeMo takes great care to support it so that there is no semantic incompatibility between components being used by users. Let's start with a basic example of a type checked module.
from nemo.core.neural_types import NeuralType from nemo.core.neural_types import * class EmbeddingModule(NeuralModule): def __init__(self): super().__init__() self.embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30) @typecheck() def forward(self, x): return self.embedding(x) @property def input_types(self): return { 'x': NeuralType(axes=('B', 'T'), elements_type=Index()) } @property def output_types(self): return { 'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EmbeddedTextType()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
To show the benefit of Neural Types, we are going to replicate the above cases inside NeuralModules.Let's discuss how we added type checking support to the above class.1) `forward` has a decorator `@typecheck()` on it.2) `input_types` and `output_types` properties are defined.That's it! -------Let's expand on each of the above steps.- `@typecheck()` is a simple decorator that takes any class that inherits `Typing` (NeuralModule does this for us) and adds the two default properties of `input_types` and `output_types`, which by default return None.The `@typecheck()` decorator's explicit use ensures that, by default, neural type checking is **disabled**. NeMo does not wish to intrude on the development process of models. So users can "opt-in" to type checking by overriding the two properties. Therefore, the decorator ensures that users are not burdened with type checking before they wish to have it.So what is `@typecheck()`? Simply put, you can wrap **any** function of a class that inherits `Typing` with this decorator, and it will look up the definition of the types of that class and enforce them. Typically, `torch.nn.Module` subclasses only implement `forward()` so it is most common to wrap that method, but `@typecheck()` is a very flexible decorator. Inside NeMo, we will show some advanced use cases (which are quite crucial to particular domains such as TTS). ------As we see above, `@typecheck()` enforces the types. How then, do we provide this type of information to NeMo? By overriding `input_types` and `output_types` properties of the class, we can return a dictionary mapping a string name to a `NeuralType`.In the above case, we define a `NeuralType` as two components - - `axes`: This is the semantic information carried by the axes themselves. The most common axes information is from single character notation.> `B` = Batch > `C` / `D` - Channel / Dimension (treated the same) > `T` - Time > `H` / `W` - Height / Width - `elements_type`: This is the semantic information of "what the tensor represents". All such types are derived from the basic `ElementType`, and merely subclassing `ElementType` allows us to build a hierarchy of custom semantic types that can be used by NeMo!Here, we declare that the input is an element_type of `Index` (index of the character in the vocabulary) and that the output is an element_type of `EmbeddedTextType` (the text embedding)
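Before instantiating the module, here is a small optional sketch (added for illustration; not part of the original tutorial) showing that `NeuralType` objects can also be constructed and compared directly, using only the classes imported above. The `compare()` call is the same mechanism we will inspect more formally later in the notebook.

# Optional illustration: construct NeuralTypes by hand and compare them.
t_index = NeuralType(axes=('B', 'T'), elements_type=Index())
t_embed = NeuralType(axes=('B', 'T', 'C'), elements_type=EmbeddedTextType())

# Comparing against an identically-typed NeuralType vs. a semantically different one.
print(t_index.compare(NeuralType(axes=('B', 'T'), elements_type=Index())))
print(t_index.compare(t_embed))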
embedding_module = EmbeddingModule()
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Now let's construct the equivalent of the Case 2 above, but as a `NeuralModule`.
class LSTMModule(NeuralModule): def __init__(self): super().__init__() self.lstm = torch.nn.LSTM(1, 30, batch_first=True) @typecheck() def forward(self, x): return self.lstm(x) @property def input_types(self): return { 'x': NeuralType(axes=('B', 'T', 'C'), elements_type=SpectrogramType()) } @property def output_types(self): return { 'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Here, we define the LSTM module from the Case 2 above.We changed the input to be a rank three tensor, now representing a "SpectrogramType". We intentionally keep it generic - it can be a `MelSpectrogramType` or a `MFCCSpectrogramType` as it's input!The output of an LSTM is now an `EncodedRepresentation`. Practically, this can be the output of a CNN layer, a Transformer block, or in this case, an LSTM layer. We can, of course, specialize by subclassing EncodedRepresentation and then using that!
lstm_module = LSTMModule()
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Now for the test!
# Case 1 [ERROR CELL] x1 = torch.randint(high=10, size=(1, 5)) print("x :", x1) print("embedding(x) :", embedding_module(x1).shape)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----You might be wondering why we get a `TypeError` right off the bat. This `TypeError` is raised by design.Positional arguments can cause significant issues during model development, mostly when the model/module design is not finalized. To reduce the potential for mistakes caused by wrong positional arguments and enforce the name of arguments provided to the function, `Typing` requires you to **call all of your type-checked functions by kwargs only**.
# Case 1 print("x :", x1) print("embedding(x) :", embedding_module(x=x1).shape)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Now let's try the same for the `LSTMModule` in Case 2
# Case 2 [ERROR CELL] x2 = torch.randn(1, 5, 1) print("x :", x2) print("lstm(x) :", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----Now we get a type error stating that the number of output arguments provided does not match what is expected.What exactly is going on here? Well, inside our `LSTMModule` class, we declare the output types to be a single NeuralType - an `EncodedRepresentation` of shape [B, T, C].But the output of an LSTM layer is a tuple of two state values - the hidden state `h` and the cell state `c`!So the neural type system raises an error saying that the number of output arguments does not match what is expected.Let's fix the above.
class CorrectLSTMModule(LSTMModule): # Let's inherit the wrong class to make it easy to override @property def output_types(self): return { 'h': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()), 'c': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()), } lstm_module = CorrectLSTMModule() # Case 2 x2 = torch.randn(1, 5, 1) print("x :", x2) print("lstm(x) :", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM `h` gate
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Great! So now, the type checking system is happy.If you looked closely, the outputs were ordinary Torch Tensors (this is good news; we don't want to be incompatible with torch Tensors after all!). So, where exactly is the type of information stored?When the `output_types` is overridden, and valid torch tensors are returned as a result, these tensors are attached with the attribute `neural_type`. Let's inspect this -
emb_out = embedding_module(x=x1) lstm_out = lstm_module(x=x2)[0] assert hasattr(emb_out, 'neural_type') assert hasattr(lstm_out, 'neural_type') print("Embedding tensor :", emb_out.neural_type) print("LSTM tensor :", lstm_out.neural_type)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-------So we see that these tensors now have this attribute called `neural_type` and are the same shape.This exercise's entire goal was to assert that the two outputs are semantically **not** the same object, even if they are the same shape. Let's test this!
emb_out.neural_type.compare(lstm_out.neural_type) emb_out.neural_type == lstm_out.neural_type
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Neural Types - LimitationsYou might have noticed one interesting fact - our inputs were just `torch.Tensor` to both typed function calls, and they had no `neural_type` assigned to them.So why did the type check system not raise any error? This is to maintain compatibility - type checking is meant to work on a chain of function calls - and each of these functions should themselves be wrapped with the `@typecheck()` decorator. This is also done because we don't want to overtax the forward call with dozens of checks, and therefore we only type modules that perform some higher-order logical computation. ------As an example, it is mostly unnecessary (but still possible) to type the input and output of every residual block of a ResNet model. However, it is practically important to type the encoder (no matter how many layers is inside it) and the decoder (the classification head) separately so that when one does fine-tuning, there is no semantic mismatch of the tensors input to the encoder and bound to the decoder. -------For this case, since it would be impractical to extend a class to attach a type to the input tensor, we can take a shortcut and directly attach the neural type to the input!
embedding_module = EmbeddingModule() x1 = torch.randint(high=10, size=(1, 5)) # Attach correct neural type x1.neural_type = NeuralType(('B', 'T'), Index()) print("embedding(x) :", embedding_module(x=x1).shape) # Attach wrong neural type [ERROR CELL] x1.neural_type = NeuralType(('B', 'T'), LabelsType()) print("embedding(x) :", embedding_module(x=x1).shape)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Let's create the minGPT componentsNow that we have a somewhat firm grasp of neural type checking, let's begin porting the minGPT example code. Once again, most of the code will be a direct port from the [minGPT repository](https://github.com/karpathy/minGPT).Here, you will notice one thing. By just changing class imports, one `@typecheck()` on forward, and adding `input_types` and `output_types` (which are also entirely optional!), we are almost entirely done with the PyTorch Lightning port!
import math from typing import List, Set, Dict, Tuple, Optional import torch import torch.nn as nn from torch.nn import functional as F
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Creating Element TypesTill now, we have used the Neural Types provided by the NeMo core. But we need not be restricted to the pre-defined element types! Users have total flexibility in defining any hierarchy of element types as they please!
class AttentionType(EncodedRepresentation): """Basic Attention Element Type""" class SelfAttentionType(AttentionType): """Self Attention Element Type""" class CausalSelfAttentionType(SelfAttentionType): """Causal Self Attention Element Type"""
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Creating the modulesNeural Modules are generally top-level modules but can be used at any level of the module hierarchy.For demonstration, we will treat an encoder comprising a block of Causal Self Attention modules as a typed Neural Module. Of course, we can also treat each Causal Self Attention layer itself as a neural module if we require it, but top-level modules are generally preferred.
class CausalSelfAttention(nn.Module): """ A vanilla multi-head masked self-attention layer with a projection at the end. It is possible to use torch.nn.MultiheadAttention here but I am including an explicit implementation here to show that there is nothing too scary here. """ def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop): super().__init__() assert n_embd % n_head == 0 self.n_head = n_head # key, query, value projections for all heads self.key = nn.Linear(n_embd, n_embd) self.query = nn.Linear(n_embd, n_embd) self.value = nn.Linear(n_embd, n_embd) # regularization self.attn_drop = nn.Dropout(attn_pdrop) self.resid_drop = nn.Dropout(resid_pdrop) # output projection self.proj = nn.Linear(n_embd, n_embd) # causal mask to ensure that attention is only applied to the left in the input sequence self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)) .view(1, 1, block_size, block_size)) def forward(self, x, layer_past=None): B, T, C = x.size() # calculate query, key, values for all heads in batch and move head forward to be the batch dim k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs) # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T) att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1))) att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf')) att = F.softmax(att, dim=-1) att = self.attn_drop(att) y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs) y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side # output projection y = self.resid_drop(self.proj(y)) return y class Block(nn.Module): """ an unassuming Transformer block """ def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop): super().__init__() self.ln1 = nn.LayerNorm(n_embd) self.ln2 = nn.LayerNorm(n_embd) self.attn = CausalSelfAttention(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) self.mlp = nn.Sequential( nn.Linear(n_embd, 4 * n_embd), nn.GELU(), nn.Linear(4 * n_embd, n_embd), nn.Dropout(resid_pdrop), ) def forward(self, x): x = x + self.attn(self.ln1(x)) x = x + self.mlp(self.ln2(x)) return x
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Building the NeMo ModelSince a NeMo Model is comprised of various parts, we are going to iterate on the model step by step inside this notebook. As such, we will have multiple intermediate NeMo "Models", which will be partial implementations, and they will inherit each other iteratively.In a complete implementation of a NeMo Model (as found in the NeMo collections), all of these components will generally be found in a single class.Let's start by inheriting `ModelPT` - the core class of a PyTorch NeMo Model, which inherits the PyTorch Lightning Module. -------**Remember**: - The NeMo equivalent of `torch.nn.Module` is the `NeuralModule`. - The NeMo equivalent of the `LightningModule` is `ModelPT`.
import pytorch_lightning as ptl from nemo.core import ModelPT from omegaconf import OmegaConf
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Next, let's construct the bare minimum implementation of the NeMo Model - just the constructor, the initializer of weights, and the forward method.Initially, we will follow the steps followed by the minGPT implementation, and progressively refactor for NeMo
class PTLGPT(ptl.LightningModule): def __init__(self, # model definition args vocab_size: int, # size of the vocabulary (number of possible tokens) block_size: int, # length of the model's context window in time n_layer: int, # depth of the model; number of Transformer blocks in sequence n_embd: int, # the "width" of the model, number of channels in each Transformer n_head: int, # number of heads in each multi-head attention inside each Transformer block # model optimization args learning_rate: float = 3e-4, # the base learning rate of the model weight_decay: float = 0.1, # amount of regularizing L2 weight decay on MatMul ops betas: Tuple[float, float] = (0.9, 0.95), # momentum terms (betas) for the Adam optimizer embd_pdrop: float = 0.1, # \in [0,1]: amount of dropout on input embeddings resid_pdrop: float = 0.1, # \in [0,1]: amount of dropout in each residual connection attn_pdrop: float = 0.1, # \in [0,1]: amount of dropout on the attention matrix ): super().__init__() # save these for optimizer init later self.learning_rate = learning_rate self.weight_decay = weight_decay self.betas = betas # input embedding stem: drop(content + position) self.tok_emb = nn.Embedding(vocab_size, n_embd) self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd)) self.drop = nn.Dropout(embd_pdrop) # deep transformer: just a sequence of transformer blocks self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) for _ in range(n_layer)]) # decoder: at the end one more layernorm and decode the answers self.ln_f = nn.LayerNorm(n_embd) self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f self.block_size = block_size self.apply(self._init_weights) print("number of parameters: %e" % sum(p.numel() for p in self.parameters())) def forward(self, idx): b, t = idx.size() assert t <= self.block_size, "Cannot forward, model block size is exhausted." # forward the GPT model token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector x = self.drop(token_embeddings + position_embeddings) x = self.blocks(x) x = self.ln_f(x) logits = self.head(x) return logits def get_block_size(self): return self.block_size def _init_weights(self, module): """ Vanilla model initialization: - all MatMul weights \in N(0, 0.02) and biases to zero - all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0 """ if isinstance(module, (nn.Linear, nn.Embedding)): module.weight.data.normal_(mean=0.0, std=0.02) if isinstance(module, nn.Linear) and module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Let's create a PyTorch Lightning Model above, just to make sure it works !
m = PTLGPT(vocab_size=100, block_size=32, n_layer=1, n_embd=32, n_head=4)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Now, let's convert the above easily into a NeMo Model.A NeMo Model constructor generally accepts only two things - 1) `cfg`: An OmegaConf DictConfig object that defines precisely the components required by the model to define its neural network architecture, data loader setup, optimizer setup, and any additional components needed for the model itself.2) `trainer`: An optional Trainer from PyTorch Lightning if the NeMo model will be used for training. It can be set after construction (if required) using the `set_trainer` method. For this notebook, we will not be constructing the config for the Trainer object. Refactoring Neural ModulesAs we discussed above, Neural Modules are generally higher-level components of the Model and can potentially be replaced by equivalent Neural Modules.As we see above, the embedding modules, deep transformer network, and final decoder layer have all been combined inside the PyTorch Lightning implementation constructor.------However, the decoder could have been an RNN instead of a simple Linear layer, or it could have been a 1D-CNN instead.Likewise, the deep encoder could potentially have a different implementation of Self Attention modules.These changes cannot be easily implemented any more inside the above implementation. However, if we refactor these components into their respective NeuralModules, then we can easily replace them with equivalent modules we construct in the future! Refactoring the Embedding moduleLet's first refactor out the embedding module from the above implementation
class GPTEmbedding(NeuralModule): def __init__(self, vocab_size: int, n_embd: int, block_size: int, embd_pdrop: float = 0.0): super().__init__() # input embedding stem: drop(content + position) self.tok_emb = nn.Embedding(vocab_size, n_embd) self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd)) self.drop = nn.Dropout(embd_pdrop) @typecheck() def forward(self, idx): b, t = idx.size() # forward the GPT model token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector x = self.drop(token_embeddings + position_embeddings) return x @property def input_types(self): return { 'idx': NeuralType(('B', 'T'), Index()) } @property def output_types(self): return { 'embeddings': NeuralType(('B', 'T', 'C'), EmbeddedTextType()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Refactoring the EncoderNext, let's refactor the Encoder - the multi layer Transformer Encoder
class GPTTransformerEncoder(NeuralModule): def __init__(self, n_embd: int, block_size: int, n_head: int, n_layer: int, attn_pdrop: float = 0.0, resid_pdrop: float = 0.0): super().__init__() self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) for _ in range(n_layer)]) @typecheck() def forward(self, embed): return self.blocks(embed) @property def input_types(self): return { 'embed': NeuralType(('B', 'T', 'C'), EmbeddedTextType()) } @property def output_types(self): return { 'encoding': NeuralType(('B', 'T', 'C'), CausalSelfAttentionType()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Refactoring the DecoderFinally, let's refactor the Decoder - the small one-layer feed-forward network to decode the answer.-------Note an interesting detail - The `input_types` of the Decoder accepts the generic `EncodedRepresentation()`, whereas the `neural_type` of the `GPTTransformerEncoder` has the `output_type` of `CausalSelfAttentionType`.This is semantically *not* a mismatch! As you can see above in the inheritance chart, we declare `EncodedRepresentation` -> `AttentionType` -> `SelfAttentionType` -> `CausalSelfAttentionType`. Such an inheritance hierarchy for the `element_type` allows future encoders (which also have a neural output type of at least `EncodedRepresentation`) to be swapped in place of the current GPT Causal Self Attention Encoder while keeping the rest of the NeMo model working just fine!
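As a quick optional sanity check (a small sketch added here, not from the original tutorial), the compatibility argument above can be verified with plain Python subclass checks, since the custom element types we defined earlier form an ordinary class hierarchy:

# Optional check: the element types defined earlier are ordinary Python classes,
# so the compatibility described above is just subclass inheritance.
print(issubclass(CausalSelfAttentionType, EncodedRepresentation))  # True
print(issubclass(SelfAttentionType, AttentionType))                # True
print(issubclass(EncodedRepresentation, CausalSelfAttentionType))  # False - the reverse does not hold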
class GPTDecoder(NeuralModule): def __init__(self, n_embd: int, vocab_size: int): super().__init__() self.ln_f = nn.LayerNorm(n_embd) self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f @typecheck() def forward(self, encoding): x = self.ln_f(encoding) logits = self.head(x) return logits @property def input_types(self): return { 'encoding': NeuralType(('B', 'T', 'C'), EncodedRepresentation()) } @property def output_types(self): return { 'logits': NeuralType(('B', 'T', 'C'), LogitsType()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Refactoring the NeMo GPT ModelNow that we have 3 NeuralModules for the embedding, the encoder, and the decoder, let's refactor the NeMo model to take advantage of this refactor!This time, we inherit from `ModelPT` instead of the general `LightningModule`.
class AbstractNeMoGPT(ModelPT): def __init__(self, cfg: OmegaConf, trainer: ptl.Trainer = None): super().__init__(cfg=cfg, trainer=trainer) # input embedding stem: drop(content + position) self.embedding = self.from_config_dict(self.cfg.embedding) # deep transformer: just a sequence of transformer blocks self.encoder = self.from_config_dict(self.cfg.encoder) # decoder: at the end one more layernorm and decode the answers self.decoder = self.from_config_dict(self.cfg.decoder) self.block_size = self.cfg.embedding.block_size self.apply(self._init_weights) print("number of parameters: %e" % self.num_weights) @typecheck() def forward(self, idx): b, t = idx.size() assert t <= self.block_size, "Cannot forward, model block size is exhausted." # forward the GPT model # Remember: Only kwargs are allowed ! e = self.embedding(idx=idx) x = self.encoder(embed=e) logits = self.decoder(encoding=x) return logits def get_block_size(self): return self.block_size def _init_weights(self, module): """ Vanilla model initialization: - all MatMul weights \in N(0, 0.02) and biases to zero - all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0 """ if isinstance(module, (nn.Linear, nn.Embedding)): module.weight.data.normal_(mean=0.0, std=0.02) if isinstance(module, nn.Linear) and module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) @property def input_types(self): return { 'idx': NeuralType(('B', 'T'), Index()) } @property def output_types(self): return { 'logits': NeuralType(('B', 'T', 'C'), LogitsType()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Creating a config for a ModelAt first glance, not much changed compared to the PyTorch Lightning implementation above. Other than the constructor, which now accepts a config, nothing changed at all!NeMo operates on the concept of a NeMo Model being accompanied by a corresponding config dict (instantiated as an OmegaConf object). This enables us to rapidly prototype the model by utilizing Hydra. This includes various other benefits - such as hyperparameter optimization and serialization/deserialization of NeMo models.Let's look at how to actually construct such config objects!
# model definition args (required) # ================================ # vocab_size: int # size of the vocabulary (number of possible tokens) # block_size: int # length of the model's context window in time # n_layer: int # depth of the model; number of Transformer blocks in sequence # n_embd: int # the "width" of the model, number of channels in each Transformer # n_head: int # number of heads in each multi-head attention inside each Transformer block # model definition args (optional) # ================================ # embd_pdrop: float = 0.1, # \in [0,1]: amount of dropout on input embeddings # resid_pdrop: float = 0.1, # \in [0,1]: amount of dropout in each residual connection # attn_pdrop: float = 0.1, # \in [0,1]: amount of dropout on the attention matrix
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------As we look at the required parameters above, we need a way to tell OmegaConf that these values are currently not set, but the user should set them before we use them.OmegaConf supports such behavior using the `MISSING` value. A similar effect can be achieved in YAML configs by using `???` as a placeholder.
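As a short self-contained sketch (added for illustration, and assuming the usual OmegaConf behaviour rather than anything NeMo-specific), this is how `MISSING` behaves in practice - reading a `MISSING` value before it has been set raises an error, and `OmegaConf.is_missing` lets us check for it explicitly:

# Illustrative only: MISSING marks a mandatory value the user must fill in later.
from omegaconf import OmegaConf, MISSING
from omegaconf.errors import MissingMandatoryValue

demo_cfg = OmegaConf.create({'vocab_size': MISSING})
print(OmegaConf.is_missing(demo_cfg, 'vocab_size'))  # True - still unset

try:
    _ = demo_cfg.vocab_size          # accessing a MISSING value raises
except MissingMandatoryValue as err:
    print("Access before assignment raised:", err)

demo_cfg.vocab_size = 100            # once set, access works as usual
print(demo_cfg.vocab_size)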
from omegaconf import MISSING # Let's create a utility for building the class path def get_class_path(cls): return f'{cls.__module__}.{cls.__name__}'
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Structure of a Model configLet's first create a config for the common components of the model level config -
common_config = OmegaConf.create({ 'vocab_size': MISSING, 'block_size': MISSING, 'n_layer': MISSING, 'n_embd': MISSING, 'n_head': MISSING, })
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----The model config right now is still being built - it needs to contain a lot more details!A complete Model Config should have the sub-configs of all of its top-level modules as well. This means the configs of the `embedding`, `encoder`, and the `decoder`. Structure of sub-module configFor top-level models, we generally don't change the actual module very often, and instead, primarily change the hyperparameters of that model.So we will make use of `Hydra`'s Class instantiation method - which can easily be accessed via the class method `ModelPT.from_config_dict()`.Let's take a few examples below -
embedding_config = OmegaConf.create({ '_target_': get_class_path(GPTEmbedding), 'vocab_size': '${model.vocab_size}', 'n_embd': '${model.n_embd}', 'block_size': '${model.block_size}', 'embd_pdrop': 0.1 }) encoder_config = OmegaConf.create({ '_target_': get_class_path(GPTTransformerEncoder), 'n_embd': '${model.n_embd}', 'block_size': '${model.block_size}', 'n_head': '${model.n_head}', 'n_layer': '${model.n_layer}', 'attn_pdrop': 0.1, 'resid_pdrop': 0.1 }) decoder_config = OmegaConf.create({ '_target_': get_class_path(GPTDecoder), # n_embd: int, vocab_size: int 'n_embd': '${model.n_embd}', 'vocab_size': '${model.vocab_size}' })
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
What is `_target_`?--------In the above config, we see a `_target_` in the config. `_target_` is usually a full classpath to the actual class in the python package/user local directory. It is required for Hydra to locate and instantiate the model from its path correctly.So why do we want to set a classpath?In general, when developing models, we don't often change the encoder or the decoder, but we do change the hyperparameters of the encoder and decoder.This notation helps us keep the Model level declaration of the forward step neat and precise. It also logically helps us demark which parts of the model can be easily replaced - in the future, we can easily replace the encoder with some other type of self-attention block or the decoder with an RNN or 1D-CNN neural module (as long as they have the same Neural Type definition as the current blocks). What is the `${}` syntax?-------OmegaConf, and by extension, Hydra, supports Variable Interpolation. As you can see in the `__init__` of embedding, encoder, and decoder neural modules, they often share many parameters between each other.It would become tedious and error-prone to set each of these constructors' values separately in each of the embedding, encoder, and decoder configs.So instead, we define standard keys inside of the `model` level config and then interpolate these values inside of the respective configs! Attaching the model and module-level configsSo now, we have a Model level and per-module level configs for the core components. Sub-module configs generally fall under the "model" namespace, but you have the flexibility to define the structure as you require.Let's attach them!
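Before we attach the real sub-module configs, here is a tiny standalone sketch (illustrative only; the values are made up) of how `${}` interpolation resolves against the `model` namespace:

# Standalone interpolation sketch - mirrors what happens in the real config below.
demo = OmegaConf.create({
    'model': {
        'n_embd': 32,
        'encoder': {'n_embd': '${model.n_embd}'},  # shares the model-level value
    }
})
print(demo.model.encoder.n_embd)              # resolves to 32 on access
print(OmegaConf.to_yaml(demo, resolve=True))  # or force resolution when dumping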
model_config = OmegaConf.create({ 'model': common_config }) # Then let's attach the sub-module configs model_config.model.embedding = embedding_config model_config.model.encoder = encoder_config model_config.model.decoder = decoder_config
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----Let's print this config!
print(OmegaConf.to_yaml(model_config))
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----Wait, why did OmegaConf not fill in the value of the variable interpolation for the configs yet?This is because OmegaConf takes a deferred approach to variable interpolation. To force it ahead of time, we can use the following snippet -
temp_config = OmegaConf.create(OmegaConf.to_container(model_config, resolve=True)) print(OmegaConf.to_yaml(temp_config))
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----Now that we have a config, let's try to create an object of the NeMo Model!
import copy # Let's work on a copy of the model config and update it before we send it into the Model. cfg = copy.deepcopy(model_config) # Let's set the values of the config (for some plausible small model) cfg.model.vocab_size = 100 cfg.model.block_size = 128 cfg.model.n_layer = 1 cfg.model.n_embd = 32 cfg.model.n_head = 4 print(OmegaConf.to_yaml(cfg)) # Try to create a model with this config [ERROR CELL] m = AbstractNeMoGPT(cfg.model)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----You will note that we added the `Abstract` tag for a reason to this NeMo Model and that when we try to instantiate it - it raises an error that we need to implement specific methods.1) `setup_training_data` & `setup_validation_data` - All NeMo models should implement two data loaders - the training data loader and the validation data loader. Optionally, they can go one step further and also implement the `setup_test_data` method to add support for evaluating the Model on its own.Why do we enforce this? NeMo Models are meant to be a unified, cohesive object containing the details about the neural network underlying that Model and the data loaders to train, validate, and optionally test those models.In doing so, once the Model is created/deserialized, it would take just a few more steps to train the Model from scratch / fine-tune/evaluate the Model on any data that the user provides, as long as this user-provided dataset is in a format supported by the Dataset / DataLoader that is used by this Model!2) `list_available_models` - This is a utility method to provide a list of pre-trained NeMo models to the user from the cloud.Typically, NeMo models can be easily packaged into a tar file (which we call a .nemo file in the earlier primer notebook). These tar files contain the model config + the pre-trained checkpoint weights of the Model, and can easily be downloaded from some cloud service. For this notebook, we will not be implementing this method.--------Finally, let's create a concrete implementation of the above NeMo Model!
from nemo.core.classes.common import PretrainedModelInfo class BasicNeMoGPT(AbstractNeMoGPT): @classmethod def list_available_models(cls) -> PretrainedModelInfo: return None def setup_training_data(self, train_data_config: OmegaConf): self._train_dl = None def setup_validation_data(self, val_data_config: OmegaConf): self._validation_dl = None def setup_test_data(self, test_data_config: OmegaConf): self._test_dl = None
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------Now let's try to create an object of the `BasicNeMoGPT` model
m = BasicNeMoGPT(cfg.model)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Setting up train-val-test stepsThe above `BasicNeMoGPT` Model is a basic PyTorch Lightning Module, with some added functionality - 1) Neural Type checks support - as defined in the Model as well as the internal modules.2) Save and restore of the Model (in the trivial case) to a tarfile.But as the Model is right now, it crucially does not support PyTorch Lightning's `Trainer`. As such, while this Model can be called manually, it cannot be easily trained or evaluated by using the PyTorch Lightning framework.------Let's begin adding support for this then -
class BasicNeMoGPTWithSteps(BasicNeMoGPT): def step_(self, split, batch, batch_idx=None): idx, targets = batch logits = self(idx=idx) loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) key = 'loss' if split == 'train' else f"{split}_loss" return {key: loss} def training_step(self, *args, **kwargs): return self.step_('train', *args, **kwargs) def validation_step(self, *args, **kwargs): return self.step_('val', *args, **kwargs) def test_step(self, *args, **kwargs): return self.step_('test', *args, **kwargs) # This is useful for multiple validation data loader setup def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0): val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean() return {'val_loss': val_loss_mean} # This is useful for multiple test data loader setup def multi_test_epoch_end(self, outputs, dataloader_idx: int = 0): test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean() return {'test_loss': test_loss_mean} m = BasicNeMoGPTWithSteps(cfg=cfg.model)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Setup for Multi Validation and Multi Test data loadersAs discussed in the NeMo Primer, NeMo has in-built support for multiple data loaders for validation and test steps. Therefore, as an example of how easy it is to add such support, we include the `multi_validation_epoch_end` and `multi_test_epoch_end` overrides.It is also practically essential to collate results from more than one distributed GPU, and then aggregate results properly at the end of the epoch. NeMo strictly enforces the correct collation of results, even if you will work on only one device! Future-proofing is baked into the model design for this case!Therefore NeMo provides the above two generic methods to support aggregation and simultaneously support multiple datasets!**Please note, you can simply prepend `multi_` to the names of your already existing `validation_epoch_end` and `test_epoch_end` implementations, and that alone is sufficient to enable multi-dataset and multi-GPU support!**------**Note: To disable multi-dataset support, simply override `validation_epoch_end` and `test_epoch_end` instead of `multi_validation_epoch_end` and `multi_test_epoch_end`!** Setting up the optimizer / schedulerWe are relatively close to reaching feature parity with the MinGPT Model! But we are missing a crucial piece - the optimizer.All NeMo Models come with a default implementation of `setup_optimization()`, which will parse the provided model config to obtain the `optim` and `sched` sub-configs, and automatically configure the optimizer and scheduler.If training GPT were as simple as plugging in an Adam optimizer over all the parameters with a cosine weight decay schedule, we could do that from the config alone.-------But GPT is not such a trivial model - more specifically, it requires weight decay to be applied to the weight matrices but not to the biases, the embedding matrix, or the LayerNorm layers.We can drop the support that NeMo provides for such special cases and instead utilize the PyTorch Lightning method `configure_optimizers` to perform the same task.-------Note: for NeMo Models, the `configure_optimizers` is implemented as a trivial call to `setup_optimization()` followed by returning the generated optimizer and scheduler! So we can override the `configure_optimizers` method and manage the optimizer creation manually!NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases where the additional flexibility becomes necessary!
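For reference, the default behaviour described above amounts to roughly the following sketch. This is an approximation written for this notebook, not code copied from NeMo - in particular, the `self._optimizer` and `self._scheduler` attribute names are assumptions about what `setup_optimization()` prepares internally:

# Rough, illustrative sketch only - NOT the actual NeMo implementation.
# Assumes setup_optimization() stores its results on _optimizer / _scheduler.
class SketchOfDefaultOptimBehaviour(BasicNeMoGPTWithSteps):
    def configure_optimizers(self):
        self.setup_optimization()                       # parses cfg.optim / cfg.sched
        if getattr(self, '_scheduler', None) is None:
            return self._optimizer                      # optimizer only
        return [self._optimizer], [self._scheduler]     # PTL (optimizers, schedulers) format

In our case we skip this default entirely and write `configure_optimizers` ourselves, as shown next.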
class BasicNeMoGPTWithOptim(BasicNeMoGPTWithSteps): def configure_optimizers(self): """ This long function is unfortunately doing something very simple and is being very defensive: We are separating out all parameters of the model into two buckets: those that will experience weight decay for regularization and those that won't (biases, and layernorm/embedding weights). We are then returning the PyTorch optimizer object. """ # separate out all parameters to those that will and won't experience weight decay decay = set() no_decay = set() whitelist_weight_modules = (torch.nn.Linear, ) blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding) for mn, m in self.named_modules(): for pn, p in m.named_parameters(): fpn = '%s.%s' % (mn, pn) if mn else pn # full param name if pn.endswith('bias'): # all biases will not be decayed no_decay.add(fpn) elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules): # weights of whitelist modules will be weight decayed decay.add(fpn) elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules): # weights of blacklist modules will NOT be weight decayed no_decay.add(fpn) # special case the position embedding parameter in the root GPT module as not decayed no_decay.add('embedding.pos_emb') # validate that we considered every parameter param_dict = {pn: p for pn, p in self.named_parameters()} inter_params = decay & no_decay union_params = decay | no_decay assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), ) assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \ % (str(param_dict.keys() - union_params), ) # create the pytorch optimizer object optim_groups = [ {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": self.cfg.optim.weight_decay}, {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0}, ] optimizer = torch.optim.AdamW(optim_groups, lr=self.cfg.optim.lr, betas=self.cfg.optim.betas) return optimizer m = BasicNeMoGPTWithOptim(cfg=cfg.model)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----Now let's set up the config for the optimizer!
OmegaConf.set_struct(cfg.model, False) optim_config = OmegaConf.create({ 'lr': 3e-4, 'weight_decay': 0.1, 'betas': [0.9, 0.95] }) cfg.model.optim = optim_config OmegaConf.set_struct(cfg.model, True)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Setting up the dataset / data loadersSo we were able almost entirely to replicate the MinGPT implementation. Remember, NeMo models should contain all of the logic to load the Dataset and DataLoader for at least the train and validation step.We temporarily provided empty implementations to get around it till now, but let's fill that in now!-------**Note for datasets**: Below, we will show an example using a very small dataset called `tiny_shakespeare`, found at the original [char-rnn repository](https://github.com/karpathy/char-rnn), but practically you could use any text corpus. The one suggested in minGPT is available at http://mattmahoney.net/dc/textdata.html Creating the DatasetNeMo has Neural Type checking support, even for Datasets! It's just a minor change of the import in most cases and one difference in how we handle `collate_fn`.We could paste the dataset info from minGPT, and you'd only need to make 2 changes!-----In this example, we will be writing a thin subclass over the datasets provided by `nlp` from HuggingFace!
from nemo.core import Dataset from torch.utils import data from torch.utils.data.dataloader import DataLoader class TinyShakespeareDataset(Dataset): def __init__(self, data_path, block_size, crop=None, override_vocab=None): # load the data and crop it appropriately with open(data_path, 'r') as f: if crop is None: data = f.read() else: f.seek(crop[0]) data = f.read(crop[1]) # build a vocabulary from data or inherit it vocab = sorted(list(set(data))) if override_vocab is None else override_vocab # Add UNK special_tokens = ['<PAD>', '<UNK>'] # We use just <UNK> and <PAD> in the call, but can add others. if not override_vocab: vocab = [*special_tokens, *vocab] # Update train vocab with special tokens data_size, vocab_size = len(data), len(vocab) print('data of crop %s has %d characters, vocab of size %d.' % (str(crop), data_size, vocab_size)) print('Num samples in dataset : %d' % (data_size // block_size)) self.stoi = { ch:i for i,ch in enumerate(vocab) } self.itos = { i:ch for i,ch in enumerate(vocab) } self.block_size = block_size self.vocab_size = vocab_size self.data = data self.vocab = vocab self.special_tokens = special_tokens def __len__(self): return len(self.data) // self.block_size def __getitem__(self, idx): # attempt to fetch a chunk of (block_size + 1) items, but (block_size) will work too chunk = self.data[idx*self.block_size : min(len(self.data), (idx+1)*self.block_size + 1)] # map the string into a sequence of integers ixes = [self.stoi[s] if s in self.stoi else self.stoi['<UNK>'] for s in chunk ] # if stars align (last idx and len(self.data) % self.block_size == 0), pad with <PAD> if len(ixes) < self.block_size + 1: assert len(ixes) == self.block_size # i believe this is the only way this could happen, make sure ixes.append(self.stoi['<PAD>']) dix = torch.tensor(ixes, dtype=torch.long) return dix[:-1], dix[1:] @property def output_types(self): return { 'input': NeuralType(('B', 'T'), Index()), 'target': NeuralType(('B', 'T'), LabelsType()) }
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
------

We didn't have to change anything up to this point. How, then, is type checking done? NeMo performs the type checking inside the collate function implementation itself! In this case it is not necessary to override `collate_fn` inside the Dataset, but if we did need to override it, **NeMo requires that the private method `_collate_fn` be overridden instead** (an illustrative sketch follows below). We can then use data loaders with only minor modifications!

**Also, there is no need to implement `input_types` for a Dataset, since the Dataset is the one generating the input for the model!**

-----

Let's prepare the dataset that we are going to use - Tiny Shakespeare from the following codebase: [char-rnn](https://github.com/karpathy/char-rnn).
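Continuing the `_collate_fn` note above, here is a hedged, purely illustrative sketch of where custom batching logic would go. The subclass name and the padding behaviour are assumptions made only for this example - the tutorial's dataset already yields fixed-size blocks and does not actually need a custom collate.

import torch

# Illustrative only: custom batching for a NeMo Dataset.
# NeMo wraps `collate_fn` to run type checking, so custom logic goes in `_collate_fn`.
class PaddedTinyShakespeareDataset(TinyShakespeareDataset):

    def _collate_fn(self, batch):
        # `batch` is a list of (input, target) tensor pairs produced by __getitem__
        inputs, targets = zip(*batch)

        # Pad to the longest sequence in the batch using the <PAD> index.
        # With fixed block_size chunks this is effectively a no-op, shown only for shape.
        pad_id = self.stoi['<PAD>']
        inputs = torch.nn.utils.rnn.pad_sequence(inputs, batch_first=True, padding_value=pad_id)
        targets = torch.nn.utils.rnn.pad_sequence(targets, batch_first=True, padding_value=pad_id)
        return inputs, targets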
import os if not os.path.exists('tiny-shakespeare.txt'): !wget https://raw.githubusercontent.com/jcjohnson/torch-rnn/master/data/tiny-shakespeare.txt !head -n 5 tiny-shakespeare.txt train_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(0, int(1e6))) val_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1e6), int(50e3)), override_vocab=train_dataset.vocab) test_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1.05e6), int(100e3)), override_vocab=train_dataset.vocab)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Setting up dataset / data loader support in the Model

So we now know our datasets work. Let's integrate them into the Model itself!

To do this, we use three special attributes of the NeMo Model - `self._train_dl`, `self._validation_dl` and `self._test_dl`. Once you construct your DataLoaders, assign them to these three attributes - the base class exposes them to PyTorch Lightning's dataloader hooks, as sketched below. For multi-data-loader support, the same applies: NeMo will automatically handle the management of multiple data loaders for you!
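Why is assigning to these attributes enough? Roughly speaking, the NeMo base model hands them back through the standard PyTorch Lightning dataloader hooks. The snippet below is a simplified sketch of that wiring for intuition only - it is an assumption about the internals, not the actual ModelPT code.

# Simplified sketch (not the real ModelPT implementation): once the special
# attributes are populated, Lightning's hooks simply return them to the Trainer.
class SketchedDataLoaderHooks:
    _train_dl = None
    _validation_dl = None
    _test_dl = None

    def train_dataloader(self):
        return self._train_dl

    def val_dataloader(self):
        return self._validation_dl

    def test_dataloader(self):
        return self._test_dl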
class NeMoGPT(BasicNeMoGPTWithOptim): def _setup_data_loader(self, cfg): if self.vocab is None: override_vocab = None else: override_vocab = self.vocab dataset = TinyShakespeareDataset( data_path=cfg.data_path, block_size=cfg.block_size, crop=tuple(cfg.crop) if 'crop' in cfg else None, override_vocab=override_vocab ) if self.vocab is None: self.vocab = dataset.vocab return DataLoader( dataset=dataset, batch_size=cfg.batch_size, shuffle=cfg.shuffle, collate_fn=dataset.collate_fn, # <-- this is necessary for type checking pin_memory=cfg.pin_memory if 'pin_memory' in cfg else False, num_workers=cfg.num_workers if 'num_workers' in cfg else 0 ) def setup_training_data(self, train_data_config: OmegaConf): self.vocab = None self._train_dl = self._setup_data_loader(train_data_config) def setup_validation_data(self, val_data_config: OmegaConf): self._validation_dl = self._setup_data_loader(val_data_config) def setup_test_data(self, test_data_config: OmegaConf): self._test_dl = self._setup_data_loader(test_data_config)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Creating the dataset / dataloader config

The final step to set up this model is to add the `train_ds`, `validation_ds` and `test_ds` configs inside the model config! The configs below reference shared values such as the data path via OmegaConf interpolation - a small standalone example of how that works follows.
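As a quick aside, here is a minimal, self-contained sketch of OmegaConf variable interpolation: references such as `${model.data_path}` resolve against the root of the config they are attached to, at access time. The key names in this snippet are illustrative only.

from omegaconf import OmegaConf

# '${model.data_path}' is stored as an interpolation and resolved on access.
demo_cfg = OmegaConf.create({
    'model': {
        'data_path': 'tiny-shakespeare.txt',
        'train_ds': {'data_path': '${model.data_path}'},
    }
})

print(demo_cfg.model.train_ds.data_path)  # -> tiny-shakespeare.txt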
OmegaConf.set_struct(cfg.model, False)

# Set the data path and update vocabulary size
cfg.model.data_path = 'tiny-shakespeare.txt'
cfg.model.vocab_size = train_dataset.vocab_size

OmegaConf.set_struct(cfg.model, True)

train_ds = OmegaConf.create({
    'data_path': '${model.data_path}',
    'block_size': '${model.block_size}',
    'crop': [0, int(1e6)],
    'batch_size': 64,
    'shuffle': True,
})

validation_ds = OmegaConf.create({
    'data_path': '${model.data_path}',
    'block_size': '${model.block_size}',
    'crop': [int(1e6), int(50e3)],
    'batch_size': 4,
    'shuffle': False,
})

test_ds = OmegaConf.create({
    'data_path': '${model.data_path}',
    'block_size': '${model.block_size}',
    'crop': [int(1.05e6), int(100e3)],
    'batch_size': 4,
    'shuffle': False,
})

# Attach to the model config
OmegaConf.set_struct(cfg.model, False)

cfg.model.train_ds = train_ds
cfg.model.validation_ds = validation_ds
cfg.model.test_ds = test_ds

OmegaConf.set_struct(cfg.model, True)

# Let's see the config now!
print(OmegaConf.to_yaml(cfg))

# Let's try creating a model now!
model = NeMoGPT(cfg=cfg.model)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
-----

All the data loaders load properly! Yay!

Evaluate the model - end to end!

Now that the data loaders have been set up, all that's left is to train and test the model! We have most of the components required by this model - the train, val and test data loaders, the optimizer, and the type-checked forward step to perform the train-validation-test steps!

But training a GPT model from scratch is not the goal of this primer, so instead let's do a sanity check by merely testing the model for a few steps using random initial weights (a sketch of what a full training run would look like follows after the list below).

This will ensure that:

1) Our data loaders work as intended.
2) The type checking system assures us that our Neural Modules are performing their forward step correctly.
3) The loss is calculated, and therefore the model runs end to end, ultimately supporting PyTorch Lightning.
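For reference only, a full training run on top of the same components would be a single extra call. This is a hedged sketch with arbitrary, illustrative settings - it is not executed in this primer.

# Not run here - illustrative settings only; `max_epochs=1` and the GPU count are
# arbitrary choices, not recommendations for actually training this GPT model.
train_trainer = ptl.Trainer(gpus=1 if torch.cuda.is_available() else 0, max_epochs=1)
train_trainer.fit(model)    # standard PyTorch Lightning training loop
train_trainer.test(model)   # evaluate on the test data loader afterwards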
# Use a GPU if one is available
if torch.cuda.is_available():
    cuda = 1
else:
    cuda = 0

# Note: `limit_test_batches` is the newer name for the deprecated `test_percent_check`
# Trainer argument in PyTorch Lightning; at 1.0 both run the full test set.
trainer = ptl.Trainer(gpus=cuda, limit_test_batches=1.0)
trainer.test(model)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo
Saving and restoring models

NeMo internally keeps track of the model configuration, as well as the model checkpoints and parameters.

As long as your NeMo model follows the above general guidelines, you can call the `save_to` and `restore_from` methods to save and restore your models!
model.save_to('gpt_model.nemo') !ls -d -- *.nemo temp_model = NeMoGPT.restore_from('gpt_model.nemo') # [ERROR CELL] temp_model.setup_test_data(temp_model.cfg.test_ds)
_____no_output_____
Apache-2.0
tutorials/01_NeMo_Models.ipynb
mcdavid109/NeMo