Dataset columns: markdown, code, output, license, path, repo_name.
A simple check for the featurization would be to count the different atomic numbers present in the features.
from collections import Counter

atomic_numbers = features[:, 0]
unique_numbers = Counter(atomic_numbers)
print(unique_numbers)
Counter({0.0: 9, 1.0: 8, 6.0: 3})
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
For propane, we have $3$ `C-atoms` and $8$ `H-atoms`, and these numbers agree with the results shown above. There is also additional padding of 9 atoms to equalize with `max_atoms`. CoulombMatrix `CoulombMatrix` featurizes a molecule by computing the Coulomb matrices for different conformers of the molecule and returning them as a list. A Coulomb matrix tries to encode the energy structure of a molecule. The matrix is symmetric, with the off-diagonal elements capturing the Coulombic repulsion between pairs of atoms and the diagonal elements capturing atomic energies using the atomic numbers. More information on the functional forms used can be found [here](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.108.058301). The featurizer takes in `max_atoms` as an argument and also has options for removing hydrogens from the molecule (`remove_hydrogens`), generating additional random Coulomb matrices (`randomize`), and getting only the upper triangular matrix (`upper_tri`).
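As a rough illustration of the functional form described above (a minimal sketch, not DeepChem's `CoulombMatrix` implementation; `atomic_numbers` and `coords` are hypothetical inputs), the commonly used form places $0.5\,Z_I^{2.4}$ on the diagonal and $Z_I Z_J / |R_I - R_J|$ off the diagonal:

```python
import numpy as np

def coulomb_matrix(atomic_numbers, coords):
    """Build a Coulomb matrix from atomic numbers and Cartesian coordinates (distances in Bohr)."""
    z = np.asarray(atomic_numbers, dtype=float)
    r = np.asarray(coords, dtype=float)
    n = len(z)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                m[i, j] = 0.5 * z[i] ** 2.4  # diagonal: atomic self-energy term
            else:
                m[i, j] = z[i] * z[j] / np.linalg.norm(r[i] - r[j])  # off-diagonal: Coulomb repulsion
    return m
```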
example_smile = "CCC" example_mol = Chem.MolFromSmiles(example_smile) engine = conformers.ConformerGenerator(max_conformers=1) example_mol = engine.generate_conformers(example_mol) print("Number of available conformers for propane: ", len(example_mol.GetConformers())) coulomb_mat = CoulombMatrix(max_atoms=20, randomize=False, remove_hydrogens=False, upper_tri=False) features = coulomb_mat._featurize(mol=example_mol)
_____no_output_____
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
A simple check for the featurization is to see if the feature list has the same length as the number of conformers.
print(len(example_mol.GetConformers()) == len(features))
True
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
CoulombMatrixEig `CoulombMatrix` is invariant to molecular rotation and translation, since the interatomic distances and atomic numbers do not change. However, the matrix is not invariant to random permutations of the atoms' indices. To deal with this, the `CoulombMatrixEig` featurizer was introduced, which uses the eigenvalue spectrum of the Coulomb matrix and is invariant to random permutations of the atoms' indices. `CoulombMatrixEig` inherits from `CoulombMatrix` and featurizes a molecule by first computing the Coulomb matrices for different conformers of the molecule and then computing the eigenvalues for each Coulomb matrix. These eigenvalues are then padded to account for variation in the number of atoms across molecules. The featurizer takes in `max_atoms` as an argument and also has options for removing hydrogens from the molecule (`remove_hydrogens`) and generating additional random Coulomb matrices (`randomize`).
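As a minimal sketch of the idea (not the DeepChem implementation), the permutation-invariant feature is the sorted eigenvalue spectrum of the Coulomb matrix, zero-padded to `max_atoms`; `cm` below is a hypothetical Coulomb matrix:

```python
import numpy as np

def coulomb_matrix_eig(cm, max_atoms=20):
    """Sorted (descending) eigenvalues of a symmetric Coulomb matrix, zero-padded to max_atoms."""
    eigvals = np.linalg.eigvalsh(cm)      # eigenvalues of a symmetric matrix
    eigvals = np.sort(eigvals)[::-1]      # sort largest to smallest
    padded = np.zeros(max_atoms)
    padded[: len(eigvals)] = eigvals      # pad to a fixed length across molecules
    return padded
```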
example_smile = "CCC" example_mol = Chem.MolFromSmiles(example_smile) engine = conformers.ConformerGenerator(max_conformers=1) example_mol = engine.generate_conformers(example_mol) print("Number of available conformers for propane: ", len(example_mol.GetConformers())) coulomb_mat_eig = CoulombMatrixEig(max_atoms=20, randomize=False, remove_hydrogens=False) features = coulomb_mat_eig._featurize(mol=example_mol) print(len(example_mol.GetConformers()) == len(features))
True
MIT
examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb
patrickphatnguyen/deepchem
Neural Networks

In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses, as it is only a linear classifier. In this part of the exercise, you will implement a neural network to recognize handwritten digits using the same training set as before. The neural network will be able to represent complex models that form non-linear hypotheses. For this week, you will be using parameters from a neural network that we have already trained. Your goal is to implement the feedforward propagation algorithm to use our weights for prediction. In next week's exercise, you will write the backpropagation algorithm for learning the neural network parameters.

The file ex3data1 contains a training set. The structure of the dataset is described below:
1. X array = 400 columns holding the pixel values of the 20*20 images in flattened format, for 5000 samples
2. y array = value of the image (a number between 0-9)

Our assignment has these sections:
1. Visualizing the Data (1.A Converting .mat to .csv, 1.B Loading Dataset and Trained Neural Network Weights, 1.C Plotting Data)
2. Model Representation
3. Feedforward Propagation and Prediction

A full description is provided in each section.

1. Visualizing the Dataset
Before starting on any task, it is often useful to understand the data by visualizing it.

1.A Converting .mat to .csv
In this specific assignment, the instructor provided a .mat file containing the training set and the weights of a trained neural network, but we have to convert it to .csv to use it in Python. After that, we are ready to import the new .csv files into pandas dataframes, preprocess them, and make them ready for the next steps.
# import libraries
import scipy.io
import numpy as np

data = scipy.io.loadmat("ex3data1")
weights = scipy.io.loadmat('ex3weights')
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
Now we extract the X and y variables from the .mat file and save them into .csv files for further usage. After running the code below, you should see X.csv and y.csv files in your directory.
for i in data:
    if '__' not in i and 'readme' not in i:
        np.savetxt((i + ".csv"), data[i], delimiter=',')

for i in weights:
    if '__' not in i and 'readme' not in i:
        np.savetxt((i + ".csv"), weights[i], delimiter=',')
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
1.B Loading Dataset and Trained Neural Network Weights
First we import the .csv files into pandas dataframes, then save them into numpy arrays. There are 5000 training examples in ex3data1.mat, where each training example is a 20 pixel by 20 pixel grayscale image of a digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location. The 20 by 20 grid of pixels is "flattened" into a 400-dimensional vector. Each of these training examples becomes a single row in our data matrix X. This gives us a 5000 by 400 matrix X where every row is a training example for a handwritten digit image. The second part of the training set is a 5000-dimensional vector y that contains labels for the training set. Notice: in the dataset, the digit zero is mapped to the value ten. Therefore, a "0" digit is labeled as "10", while the digits "1" to "9" are labeled as "1" to "9" in their natural order. But this makes things harder, so we map 0 back to its natural label!
# import library
import pandas as pd

# saving .csv files to pandas dataframes
x_df = pd.read_csv('X.csv', names=np.arange(0, 400))
y_df = pd.read_csv('y.csv', names=['label'])

# saving .csv files to pandas dataframes
Theta1_df = pd.read_csv('Theta1.csv', names=np.arange(0, 401))
Theta2_df = pd.read_csv('Theta2.csv', names=np.arange(0, 26))

# saving x_df and y_df into numpy arrays
x = x_df.iloc[:, :].values
y = y_df.iloc[:, :].values

m, n = x.shape

# bring back 0 to 0 !!!
y = y.reshape(m,)
y[y == 10] = 0
y = y.reshape(m, 1)

print('#{} Number of training samples, #{} features per sample'.format(m, n))

# saving Theta1_df and Theta2_df into numpy arrays
theta1 = Theta1_df.iloc[:, :].values
theta2 = Theta2_df.iloc[:, :].values
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
1.C Plotting Data
You will begin by visualizing a subset of the training set. In this first part, the code randomly selects 100 rows from X and passes those rows to the display_data function. This function maps each row to a 20 pixel by 20 pixel grayscale image and displays the images together. After plotting, you should see an image like this:
import numpy as np
import matplotlib.pyplot as plt
import random

amount = 100
lines = 10
columns = 10
image = np.zeros((amount, 20, 20))
number = np.zeros(amount)

for i in range(amount):
    rnd = random.randint(0, 4999)
    image[i] = x[rnd].reshape(20, 20)
    y_temp = y.reshape(m,)
    number[i] = y_temp[rnd]

fig = plt.figure(figsize=(8, 8))
for i in range(amount):
    ax = fig.add_subplot(lines, columns, 1 + i)
    # Turn off tick labels
    ax.set_yticklabels([])
    ax.set_xticklabels([])
    plt.imshow(image[i], cmap='binary')
plt.show()
print(number)
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
2. Model Representation
Our neural network is shown in the figure below. It has 3 layers: an input layer, a hidden layer, and an output layer. Recall that our inputs are pixel values of digit images. Since the images are of size 20×20, this gives us 400 input layer units (excluding the extra bias unit which always outputs +1). You have been provided with a set of network parameters (Θ(1), Θ(2)), Theta1 and Theta2, already trained by the instructor. The parameters have dimensions that are sized for a neural network with 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).
print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape))
theta1 shape = (25, 401), theta2 shape = (10, 26)
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
It seems our weights are transposed, so we transpose them to match the layout of our neural network.
theta1 = theta1.transpose()
theta2 = theta2.transpose()
print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape, theta2.shape))
theta1 shape = (401, 25), theta2 shape = (26, 10)
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
3. Feedforward Propagation and Prediction
Now you will implement feedforward propagation for the neural network. You should implement the feedforward computation that computes hθ(x(i)) for every example i and returns the associated predictions. Similar to the one-vs-all classification strategy, the prediction from the neural network will be the label that has the largest output hθ(x)k. Implementation note: the matrix X contains the examples in rows. When you complete the code, you will need to add the column of 1's to the matrix. The matrices Theta1 and Theta2 contain the parameters for each unit in rows. Specifically, the first row of Theta1 corresponds to the first hidden unit in the second layer. You must get a(l) as a column vector. You should see that the accuracy is about 97.5%.
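For reference, here is a minimal sketch of the feedforward computation described above (an illustration, not the notebook's own `predict` function; it assumes `theta1` and `theta2` have already been transposed to shapes (401, 25) and (26, 10) as in this notebook, and inlines the sigmoid):

```python
import numpy as np

def feedforward(x_with_bias, theta1, theta2):
    """Forward pass: input (with bias column) -> hidden layer -> output layer -> predicted label."""
    z2 = x_with_bias @ theta1                        # (m, 401) @ (401, 25) -> (m, 25)
    a2 = 1 / (1 + np.exp(-z2))                       # sigmoid activation of the hidden layer
    a2 = np.hstack([np.ones((a2.shape[0], 1)), a2])  # add the bias unit -> (m, 26)
    z3 = a2 @ theta2                                 # (m, 26) @ (26, 10) -> (m, 10)
    a3 = 1 / (1 + np.exp(-z3))                       # output layer activations
    # predicted label = index of the largest output
    # (the 0 <-> "10" label re-mapping is handled separately below)
    return np.argmax(a3, axis=1)
```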
# adding column of 1's to x
x = np.append(np.ones(shape=(m, 1)), x, axis=1)
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
h = hypothesis(x, theta) will compute the sigmoid function on θ^T X and return a number between 0 and 1. You can use the following helper functions to calculate the sigmoid.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lr_hypothesis(x, theta):
    return np.dot(x, theta)
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
predict(theta1, theta2, x): outputs the predicted label of x given the trained weights of a neural network (theta1, theta2).
layers = 3
num_labels = 10
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
Because the initial dataset mapped 0 to "10", the trained weights follow that ordering as well, so we just rotate the columns one step to the right to predict the correct values. Recall that we changed the mapping of 0 from "10" back to "0", but we cannot apply that remapping inside the weights of the neural network. So we have to apply this rotation to the final output of probabilities.
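As a side note (a minimal sketch, equivalent in effect to the `rotate_column` helper defined in the next cell), the same one-step column rotation can be expressed with `np.roll`; `a3` is assumed to be the (m, 10) matrix of output-layer activations:

```python
import numpy as np

# Rotate the class-probability columns one step to the right,
# so the column that corresponds to label "10" becomes the column for label "0".
y_prob = np.roll(a3, shift=1, axis=1)
```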
def rotate_column(array):
    array_ = np.zeros(shape=(m, num_labels))
    temp = np.zeros(num_labels,)
    temp = array[:, 9]
    array_[:, 1:10] = array[:, 0:9]
    array_[:, 0] = temp
    return array_

def predict(theta1, theta2, x):
    z2 = np.dot(x, theta1)  # hidden layer
    a2 = sigmoid(z2)  # hidden layer

    # adding column of 1's to a2
    a2 = np.append(np.ones(shape=(m, 1)), a2, axis=1)
    z3 = np.dot(a2, theta2)
    a3 = sigmoid(z3)

    # mapping problem. Rotate columns one step
    y_prob = rotate_column(a3)

    # prediction on activation a3
    y_pred = np.argmax(y_prob, axis=1).reshape(-1, 1)
    return y_pred

y_pred = predict(theta1, theta2, x)
y_pred.shape
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
Now we will compare our predicted results to the true labels using the confusion_matrix from the scikit-learn library.
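As an aside (a minimal sketch, not part of the original notebook), the same accuracy can be read directly off the confusion matrix with numpy, without the explicit loop used in the next cell:

```python
import numpy as np

def accuracy_from_cm(cm):
    """Accuracy = correctly classified samples (the diagonal) / all samples."""
    correct = np.trace(cm)   # sum of the diagonal entries
    total = cm.sum()         # total number of samples
    return correct / total
```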
# import library
from sklearn.metrics import confusion_matrix

# Function for accuracy
def acc(confusion_matrix):
    t = 0
    for i in range(num_labels):
        t += confusion_matrix[i][i]
    f = m - t
    ac = t / m
    return (t, f, ac)

cm_train = confusion_matrix(y.reshape(m,), y_pred.reshape(m,))
t, f, ac = acc(cm_train)
print('With #{} correct, #{} wrong ==========> accuracy = {}%'.format(t, f, ac * 100))
cm_train
_____no_output_____
MIT
Week 4 - Multi-Class Classification and Neural Networks/Neural Networks.ipynb
Nikronic/Coursera-Machine-Learning
[View in Colaboratory](https://colab.research.google.com/github/neoaksa/IMDB_Spider/blob/master/Movie_Analysis.ipynb)
# I've already uploaded three files onto Google Drive; you can use the upload function below to upload the files.
# # upload
# uploaded = files.upload()
# for fn in uploaded.keys():
#     print('User uploaded file "{name}" with length {length} bytes'.format(
#         name=fn, length=len(uploaded[fn])))

import pandas as pd
import numpy as np
import urllib.request
import matplotlib.pyplot as plt  # needed for plt.title / plt.show below

! pip install pydrive
# these classes allow you to request the Google drive API
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build

# authenticate google drive
auth.authenticate_user()
drive_service = build('drive', 'v3')
# 1. Authenticate and create the PyDrive client.
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

def downloadFile(inputfilename, outputfilename):
    downloaded = drive.CreateFile({'id': inputfilename})
    # assume the file is located at the root of your drive
    downloaded.GetContentFile(outputfilename)

# training file download
MovieItemFile = downloadFile("1w8Ce9An_6vJH_o5Ux7A8Zf0zc2E419xN", "MovieItem.csv")
MovieReview = downloadFile("1R7kAHF9X_YnPGwsclqMn2_XA1WgVgjlC", "MovieReview.csv")
MovieStar = downloadFile("15d3ZiHoqvxxdRhS9-5it979D0M60Ued0", "MovieStar.csv")

df_movieItem = pd.read_csv('MovieItem.csv', delimiter=',', index_col=['id'])
df_movieReview = pd.read_csv('MovieReview.csv', delimiter=',', index_col=['id'])
df_movieStar = pd.read_csv('MovieStar.csv', delimiter=',', index_col=['id'])

# sort by index id (also known by rating)
df_movieItem = df_movieItem.sort_index(axis=0)

# rating overview
import seaborn as sns
sns.stripplot(data=df_movieItem, y='rating', jitter=True, orient='v', size=6)
plt.title('Movie Rating Overview')
plt.show()

# stars analysis
# pre-process for movieItem and movieStar
star_list = []
for index, stars in df_movieItem[['stars', 'stars_id']].iterrows():
    star_list += [(x.lstrip().replace('"', ''), y.lstrip().replace('"', ''))
                  for x, y in zip(stars['stars'][1:-1].replace('\'', '').split(','),
                                  stars['stars_id'][1:-1].replace('\'', '').split(','))]
    # star_id_list += [x.lstrip().replace('"','') for x in stars['stars_id'][1:-1].replace('\'','').split(',')]

# reduce duplicates
star_list = list(set(star_list))

# create a dataframe for output
df_star = pd.DataFrame(columns=['stars_id', 'stars', 'avg_rating', 'num_movie'])
df_star['stars_id'] = [x[1] for x in star_list]
df_star['stars'] = [x[0] for x in star_list]
for index, star_id in enumerate(df_star['stars_id']):
    filter = df_movieItem['stars_id'].str.contains(star_id)
    df_star['num_movie'][index] = len(df_movieItem[filter])
    df_star['avg_rating'][index] = pd.to_numeric(df_movieItem[filter]['rating'].str[2:-2]).sum(axis=0) / df_star['num_movie'][index]

# left join star information
df_star

# order by # of movies
df_star = df_star.sort_values(['num_movie'], ascending=False)
print(df_star.head(10))
# order by avg rating
df_star = df_star.sort_values(['avg_rating'], ascending=False)
print(df_star.head(10))
     stars_id              stars  avg_rating  num_movie
330  nm0000134     Robert De Niro      8.375          8
352  nm0000148      Harrison Ford    8.34286          7
172  nm0000138  Leonardo DiCaprio        8.3          6
250  nm0000158          Tom Hanks    8.38333          6
588  nm0000142     Clint Eastwood       8.28          5
62   nm0451148         Aamir Khan        8.2          5
539  nm0000122    Charles Chaplin       8.38          5
26   nm0000199          Al Pacino       8.65          4
208  nm0000197     Jack Nicholson       8.45          4
327  nm0000228       Kevin Spacey      8.425          4

     stars_id              stars  avg_rating  num_movie
427  nm0001001         James Caan        9.2          1
176  nm0348409         Bob Gunton        9.2          1
39   nm0000209        Tim Robbins        9.2          1
290  nm0005132       Heath Ledger          9          1
338  nm0001173      Aaron Eckhart          9          1
276  nm0000842      Martin Balsam        8.9          1
343  nm0000168  Samuel L. Jackson        8.9          1
303  nm0000237      John Travolta        8.9          1
398  nm0000553        Liam Neeson        8.9          1
177  nm0005212       Ian McKellen        8.8          3
Apache-2.0
Movie_Analysis.ipynb
neoaksa/IMDB_Spider
According to this brief table, we can see that **Robert De Niro** appeared in the most movies in the top 250 list, followed by **Harrison Ford**, **Tom Hanks**, and **Leonardo DiCaprio**.
# visualize stars
import matplotlib.pyplot as plt

# figure = plt.figure()
ax1 = plt.subplot()
df_aggbyMovie = df_star[df_star['num_movie'] > 0].groupby(['num_movie']).agg({'stars_id': np.size})
df_aggbyMovie.columns.values[0] = 'freq'
df_aggbyMovie = df_aggbyMovie.sort_values(['freq'])
acc_numMovie = np.cumsum(df_aggbyMovie['freq'])
ax1.plot(acc_numMovie)
ax1.set_xlabel('# of movies')
ax1.set_ylabel('cumulated # of stars')
ax1.set_title('Cumulated chart for each segement')
plt.gca().invert_xaxis()
plt.show()

ax2 = plt.subplot()
ax2.pie(df_aggbyMovie, labels=df_aggbyMovie.index, startangle=90, autopct='%1.1f%%')
ax2.set_title('Percetage of segements')
plt.show()

# check out which movies the best stars perform in - best stars: those who appear in more than one movie in the top 250 list
df_star_2plus = df_star[df_star['num_movie'] > 1]['stars_id']
i = 0
movie_list = []
for index, row in df_movieItem[['stars_id', 'title']].iterrows():
    for x in df_star_2plus.values:
        if x in row['stars_id']:
            i += 1
            movie_list.append(row['title'])
            break
df_movieItem[df_movieItem['title'].isin(movie_list)].head(10)
_____no_output_____
Apache-2.0
Movie_Analysis.ipynb
neoaksa/IMDB_Spider
**165** of the top 250 movies feature one of the **100** best stars, where a best star is defined as one who appeared in more than one movie in the list. We picked these 100 movie stars for further star research.
# movie star relationship analysis
df_movie_star_plus = df_star[df_star['num_movie'] > 2][['stars_id', 'stars']]

# transfer star list to relationship list
def starlist2network(list):
    bi_list = []
    i = 0
    while i < len(list):
        j = 1
        while j < len(list) - i:
            bi_list.append((list[i], list[i + j]))
            j += 1
        i += 1
    return tuple(bi_list)

star_map_list = set()
for index, stars in df_movieItem[['stars']].iterrows():
    star_list = []
    star_list += [x.lstrip().replace('"', '') for x in stars['stars'][1:-1].replace('\'', '').split(',')]
    for item in starlist2network(star_list):
        if item[0] in df_movie_star_plus['stars'].values and item[1] in df_movie_star_plus['stars'].values:
            star_map_list.add(tuple(sorted(item)))

!pip install networkx
import networkx as nx
import matplotlib.pyplot as plt

# Creating a Graph
G = nx.Graph()  # Right now G is empty
G.add_edges_from(star_map_list)

# k controls the distance between the nodes and varies between 0 and 1
# iterations is the number of times simulated annealing is run
# default k=0.1 and iterations=50
pos = nx.spring_layout(G, k=0.55, iterations=50)
nx.draw(G, pos, with_labels=True, font_weight='bold', node_shape='o')
Requirement already satisfied: networkx in /usr/local/lib/python3.6/dist-packages (2.1) Requirement already satisfied: decorator>=4.1.0 in /usr/local/lib/python3.6/dist-packages (from networkx) (4.3.0)
Apache-2.0
Movie_Analysis.ipynb
neoaksa/IMDB_Spider
I picked a few stars who appeared in more than 2 movies in the top 250 list and created a relationship network for them. We can see 5 major blocks; if we loosen the filter, we may find more.
# pick 100 stars for age analysis
# rebin the year by 10 years
df_movieStar_bin = df_movieStar.copy()
df_movieStar_bin['name'] = df_movieStar_bin['name'].str[2:-2]
df_movieStar_bin['born_year'] = df_movieStar_bin['born_year'].str[2:-2]
df_movieStar_bin['born_area'] = df_movieStar_bin['born_area'].str[2:-2]
df_movieStar_bin['born_year'] = pd.cut(pd.to_numeric(df_movieStar_bin['born_year'].str[0:4]), range(1900, 2020, 10), right=False)
df_movieStar_bin = df_movieStar_bin.dropna()
df_movieStar_bin['born_year'] = df_movieStar_bin['born_year'].astype(str).str[1:5] + 's'
df_movieStar_bin = df_movieStar_bin[df_movieStar_bin.index.isin(df_star_2plus.values)]

fig = plt.figure(figsize=(12, 6))
plt.style.use('fivethirtyeight')
ax3 = plt.subplot()
ax3.hist(df_movieStar_bin['born_year'])
ax3.set_title('Histogram of Star born year')
plt.xlabel('Star Born Year')
plt.ylabel('# of Star')
plt.show()

# star state analysis
df_movieStar_bin['born_state'] = [x.split(',')[1] for x in df_movieStar_bin['born_area']]
df_movieStar_by_state = df_movieStar_bin.groupby(['born_state']).size().sort_values(ascending=False)
df_movieStar_by_state = df_movieStar_by_state[df_movieStar_by_state >= 2].append(
    pd.Series(df_movieStar_by_state[df_movieStar_by_state < 2].sum(), index=['Others']))
# print(df_movieStar_by_state)

fig = plt.figure(figsize=(20, 6))
plt.bar(range(len(df_movieStar_by_state)), df_movieStar_by_state, align='center', alpha=0.5)
plt.xticks(range(len(df_movieStar_by_state)), df_movieStar_by_state.index)
plt.ylabel('# of Stars')
plt.title('Movie Star by States')
plt.show()
_____no_output_____
Apache-2.0
Movie_Analysis.ipynb
neoaksa/IMDB_Spider
Of the 100 picked movie stars, most were born between the **1930s and 1970s**. **California, Illinois, and New Jersey** are the states with the most movie stars. Even so, no state or region is predominant.
# review analysis
!pip install wordcloud
!pip install multidict

from wordcloud import WordCloud
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import string
import multidict as multidict

nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')

Lemmatizer = WordNetLemmatizer()

# remove punctuation
list_word = []
for text in df_movieReview['content'].values:
    nopunc = [char.lower() for char in text if char not in string.punctuation]
    nopunc = ''.join(nopunc)
    list_word.append(nopunc)

# words to exclude
del_words = ['movie', 'character', 'film', 'story', 'wa', 'ha']  # excluded words
word_type_list_In = ("JJ", "NN")  # only pick adjectives and nouns
# word_list_Ex = ("/", "br", "<", ">","be","movie","film","have","do","none","none none")

words = {}
for sent in list_word:
    text = nltk.word_tokenize(sent)  # tokenize sentence to words
    text = [Lemmatizer.lemmatize(word) for word in text]  # get stem of words
    text_tag = nltk.pos_tag(text)  # get word types
    for item in [x[0] for x in text_tag if x[1][:2] in word_type_list_In and x[0] not in del_words and x[0] not in stopwords.words('english')]:
        if item not in words:
            words[item] = 1
        else:
            words[item] += 1

# sort by value
sorted_words = sorted(words.items(), key=lambda x: x[1], reverse=True)
# filtered_words = ' '.join([x[0] for x in sorted_words if x[1]>=1000])
print(sorted_words[0:20])

fullTermsDict = multidict.MultiDict()
for key in words:
    fullTermsDict.add(key, words[key])

# Create the wordcloud object
wordcloud = WordCloud(width=1600, height=800, margin=0, max_font_size=100).generate_from_frequencies(fullTermsDict)

# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.margins(x=0, y=0)
plt.show()
_____no_output_____
Apache-2.0
Movie_Analysis.ipynb
neoaksa/IMDB_Spider
²⁹Si 1D MAS spinning sideband (CSA)
After acquiring an NMR spectrum, we often require a least-squares analysis to determine site populations and nuclear spin interaction parameters. Generally, this comprises two steps:
- create a fitting model, and
- determine the model parameters that give the best fit to the spectrum.
Here, we will use the mrsimulator objects to create a fitting model, and use the `LMFIT `_ library for performing the least-squares fitting optimization. In this example, we use a synthetic $^{29}\text{Si}$ NMR spectrum of cuspidine, generated from the tensor parameters reported by Hansen `et al.` [f1]_, to demonstrate a simple fitting procedure. We will begin by importing the relevant modules and establishing the figure size.
import csdmpy as cp
import matplotlib.pyplot as plt
from lmfit import Minimizer, Parameters

from mrsimulator import Simulator, SpinSystem, Site
from mrsimulator.methods import BlochDecaySpectrum
from mrsimulator import signal_processing as sp
from mrsimulator.utils import spectral_fitting as sf
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
Import the dataset
Use the `csdmpy `_ module to load the synthetic dataset as a CSDM object.
file_ = "https://sandbox.zenodo.org/record/835664/files/synthetic_cuspidine_test.csdf?" synthetic_experiment = cp.load(file_).real # standard deviation of noise from the dataset sigma = 0.03383338 # convert the dimension coordinates from Hz to ppm synthetic_experiment.x[0].to("ppm", "nmr_frequency_ratio") # Plot of the synthetic dataset. plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(synthetic_experiment, "k", alpha=0.5) ax.set_xlim(50, -200) plt.grid() plt.tight_layout() plt.show()
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
Create a fitting model
Before you can fit a simulation to an experiment, in this case the synthetic dataset, you will first need to create a fitting model. We will use the ``mrsimulator`` objects as tools in creating a model for the least-squares fitting.
**Step 1:** Create initial guess sites and spin systems.
The initial guess is often based on some prior knowledge about the system under investigation. For the current example, we know that cuspidine is a crystalline silica polymorph with one crystallographic Si site. Therefore, our initial guess model is a single $^{29}\text{Si}$ site spin system. For non-linear fitting algorithms, as a general recommendation, the initial guess model parameters should be a good starting point for the algorithms to converge.
# the guess model comprising of a single site spin system
site = Site(
    isotope="29Si",
    isotropic_chemical_shift=-82.0,  # in ppm,
    shielding_symmetric={"zeta": -63, "eta": 0.4},  # zeta in ppm
)

spin_system = SpinSystem(
    name="Si Site",
    description="A 29Si site in cuspidine",
    sites=[site],  # from the above code
    abundance=100,
)
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
**Step 2:** Create the method object.
The method should be the same as the one used in the measurement. In this example, we use the `BlochDecaySpectrum` method. Note, when creating the method object, the value of the method parameters must match the respective values used in the experiment.
MAS = BlochDecaySpectrum(
    channels=["29Si"],
    magnetic_flux_density=7.1,  # in T
    rotor_frequency=780,  # in Hz
    spectral_dimensions=[
        {
            "count": 2048,
            "spectral_width": 25000,  # in Hz
            "reference_offset": -5000,  # in Hz
        }
    ],
    experiment=synthetic_experiment,  # add the measurement to the method.
)
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
**Step 3:** Create the Simulator object, add the method and spin system objects, and run the simulation.
sim = Simulator(spin_systems=[spin_system], methods=[MAS])
sim.run()
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
**Step 4:** Create a SignalProcessor class and apply post simulation processing.
processor = sp.SignalProcessor(
    operations=[
        sp.IFFT(),  # inverse FFT to convert frequency based spectrum to time domain.
        sp.apodization.Exponential(FWHM="200 Hz"),  # apodization of time domain signal.
        sp.FFT(),  # forward FFT to convert time domain signal to frequency spectrum.
        sp.Scale(factor=3),  # scale the frequency spectrum.
    ]
)
processed_data = processor.apply_operations(data=sim.methods[0].simulation).real
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
**Step 5:** Plot the spectrum. We also plot the synthetic dataset for comparison.
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment")
ax.plot(processed_data, "r", alpha=0.75, linewidth=1, label="guess spectrum")
ax.set_xlim(50, -200)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
Set up a least-squares minimization
Now that our model is ready, the next step is to set up a least-squares minimization. You may use any optimization package of your choice; here we show an application using LMFIT. You may read more on the LMFIT `documentation page `_.
Create fitting parameters
Next, you will need a list of parameters that will be used in the fit. The *LMFIT* library provides a `Parameters `_ class to create a list of parameters.
site1 = spin_system.sites[0]

params = Parameters()
params.add(name="iso", value=site1.isotropic_chemical_shift)
params.add(name="eta", value=site1.shielding_symmetric.eta, min=0, max=1)
params.add(name="zeta", value=site1.shielding_symmetric.zeta)
params.add(name="FWHM", value=processor.operations[1].FWHM)
params.add(name="factor", value=processor.operations[3].factor)
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
Create a minimization function
Note, the above set of parameters does not know about the model. You will need to set up a function that will
- update the parameters of the `Simulator` and `SignalProcessor` objects based on the LMFIT parameter updates,
- re-simulate the spectrum based on the updated values, and
- return the difference between the experiment and simulation.
def minimization_function(params, sim, processor, sigma=1):
    values = params.valuesdict()

    # the experiment data as a Numpy array
    intensity = sim.methods[0].experiment.y[0].components[0].real

    # Here, we update simulation parameters iso, eta, and zeta for the site object
    site = sim.spin_systems[0].sites[0]
    site.isotropic_chemical_shift = values["iso"]
    site.shielding_symmetric.eta = values["eta"]
    site.shielding_symmetric.zeta = values["zeta"]

    # run the simulation
    sim.run()

    # update the SignalProcessor parameter and apply line broadening.
    # update the scaling factor parameter at index 3 of operations list.
    processor.operations[3].factor = values["factor"]
    # update the exponential apodization FWHM parameter at index 1 of operations list.
    processor.operations[1].FWHM = values["FWHM"]

    # apply signal processing
    processed_data = processor.apply_operations(sim.methods[0].simulation)

    # return the difference vector.
    diff = intensity - processed_data.y[0].components[0].real
    return diff / sigma
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
Note: To automate the fitting process, we provide a function to parse the ``Simulator`` and ``SignalProcessor`` objects for parameters and construct an *LMFIT* ``Parameters`` object. Similarly, a minimization function, analogous to the above `minimization_function`, is also included in the *mrsimulator* library. See the next example for usage instructions.
Perform the least-squares minimization
With the synthetic dataset, simulation, and the initial guess parameters, we are ready to perform the fit. To fit, we use the *LMFIT* `Minimizer `_ class.
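For reference, here is a minimal sketch of that automated route. It assumes the helper names `sf.make_LMFIT_params` and `sf.LMFIT_min_function` from `mrsimulator.utils.spectral_fitting`, as used in other mrsimulator fitting examples; check the library documentation for the exact signatures in your version:

```python
# Hypothetical automated setup; helper names assumed from other mrsimulator fitting examples.
params_auto = sf.make_LMFIT_params(sim, processor)  # parse Simulator/SignalProcessor into LMFIT Parameters
minner_auto = Minimizer(
    sf.LMFIT_min_function,                           # library-provided minimization function
    params_auto,
    fcn_args=(sim, processor, sigma),
)
result_auto = minner_auto.minimize()
```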
minner = Minimizer(minimization_function, params, fcn_args=(sim, processor, sigma))
result = minner.minimize()
result
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
The plot of the fit, measurement and the residuals is shown below.
best_fit = sf.bestfit(sim, processor)[0]
residuals = sf.residuals(sim, processor)[0]

plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment")
ax.plot(best_fit, "r", alpha=0.75, linewidth=1, label="Best Fit")
ax.plot(residuals, alpha=0.75, linewidth=1, label="Residuals")
ax.set_xlabel("Frequency / Hz")
ax.set_xlim(50, -200)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
_____no_output_____
BSD-3-Clause
docs/notebooks/fitting/1D_fitting/plot_1_29Si_cuspidine.ipynb
pjgrandinetti/mrsimulator
Pytorch
Pytorch is a framework that challenges you to build an ANN almost from scratch. This tutorial aims to explain how to load non-image data in Pytorch and create classification models.
1. Learn how to generate synthetic data for classification. The more complex the two-dimensional pattern, the larger the high-dimensional transformation needed to find a hyperplane that separates the classes.
1. Understand the basic components of a neural network using Pytorch: layers, forward pass, gradient calculation, and weight updates with any gradient descent method.
1. Do a parallel view of TensorFlow and Pytorch.
1. Apply transformations to the loss function to train with imbalanced data: class weights, focal loss, etc. (a sketch follows this list).
__References__
https://towardsdatascience.com/pytorch-tabular-binary-classification-a0368da5bb89
https://towardsdatascience.com/pytorch-basics-intro-to-dataloaders-and-loss-functions-868e86450047
https://towardsdatascience.com/understanding-pytorch-with-an-example-a-step-by-step-tutorial-81fc5f8c4e8e
https://cs230.stanford.edu/blog/pytorch/
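As a minimal sketch of the imbalanced-data point above (an illustration, not part of the original notebook), `nn.BCEWithLogitsLoss` accepts a `pos_weight` tensor that up-weights the positive (minority) class; the 0.95/0.05 split below mirrors the imbalanced dataset generated later:

```python
import torch
import torch.nn as nn

# Assume ~95% negatives and ~5% positives, as in the imbalanced dataset below.
n_neg, n_pos = 9500, 500
pos_weight = torch.tensor([n_neg / n_pos])        # weight positives ~19x more

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 1)                        # raw model outputs (no sigmoid)
targets = torch.randint(0, 2, (8, 1)).float()     # binary labels
loss = criterion(logits, targets)
print(loss.item())
```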
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader

# plotting and sklearn modules
import numpy as np
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

# binary imbalanced set
X_imb, y_imb = make_classification(n_samples=10000, n_features=2, n_redundant=0, n_informative=2,
                                   random_state=1, n_clusters_per_class=1,
                                   weights=[0.95, 0.05], class_sep=1.5)
rng = np.random.RandomState(2)
X_imb += 2 * rng.uniform(size=X_imb.shape)

# multiclass set
X_multi, y_multi = make_classification(n_samples=10000, n_features=2, n_informative=2, n_redundant=0,
                                       n_repeated=0, n_classes=3, n_clusters_per_class=1,
                                       weights=[0.33, 0.33, 0.33], class_sep=0.8, random_state=7)

# non-linearly separable set
X_moon, y_moon = make_moons(n_samples=10000, noise=0.3, random_state=3)

plt.scatter(X_imb[:, 0], X_imb[:, 1], marker='o', c=y_imb, s=25, edgecolor='k')
plt.scatter(X_moon[:, 0], X_moon[:, 1], marker='o', c=y_moon, s=25, edgecolor='k')
plt.scatter(X_multi[:, 0], X_multi[:, 1], marker='o', c=y_multi, s=25, edgecolor='k')
_____no_output_____
MIT
pytorch.ipynb
lamahechag/pytorch_tensorflow
Data loader
We create a custom dataset class to iterate over our data in the dataloader from Pytorch: `trainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train))`. Then we use `DataLoader` to allow auto-batching. The function `loader_data()` gathers the whole pipeline to load the data into Pytorch tensors.
class trainData(Dataset):
    def __init__(self, X_data, y_data):
        self.X_data = X_data
        self.y_data = y_data

    def __getitem__(self, index):
        return self.X_data[index], self.y_data[index]

    def __len__(self):
        return len(self.X_data)


class testData(Dataset):
    def __init__(self, X_data):
        self.X_data = X_data

    def __getitem__(self, index):
        return self.X_data[index]

    def __len__(self):
        return len(self.X_data)


def loader_data(X, y, BATCH_SIZE=500):
    # function that receives X, y and a batch size and returns: train_loader, test_loader and y_test.
    # stratify by the labels passed to the function (the original used the global y_imb here)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=69, stratify=y)
    train_data = trainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train))
    test_data = testData(torch.FloatTensor(X_test))
    train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
    test_loader = DataLoader(dataset=test_data, batch_size=100)
    return train_loader, test_loader, y_test
_____no_output_____
MIT
pytorch.ipynb
lamahechag/pytorch_tensorflow
Pytorch Model
To build a model in Pytorch, you should define a `Class`. The class has two parts:
1. `__init__` defines the different calculation elements, like hidden layers, activation functions, dropouts, etc.
1. the `forward` method, where you define how the input flows through each calculation element.
You will see that for the binary classifiers there is no `sigmoid` function in the output layer; this is because in Pytorch it can be included in the loss function that will be defined later (a short check of this equivalence follows).
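As a minimal check of that last point (an illustration, not part of the original notebook), `nn.BCEWithLogitsLoss` applied to raw logits gives the same value as `nn.BCELoss` applied after an explicit `sigmoid`:

```python
import torch
import torch.nn as nn

logits = torch.randn(16, 1)                      # raw outputs of a model with no sigmoid
targets = torch.randint(0, 2, (16, 1)).float()   # binary labels

loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)    # sigmoid folded into the loss
loss_explicit = nn.BCELoss()(torch.sigmoid(logits), targets)  # explicit sigmoid + BCE

print(torch.allclose(loss_with_logits, loss_explicit))  # expected: True (up to numerical precision)
```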
class LogisClassifier(nn.Module):
    def __init__(self, num_input=2):
        super(LogisClassifier, self).__init__()
        self.num_input = num_input  # Number of input features
        self.layer_1 = nn.Linear(self.num_input, 1)

    def forward(self, inputs):
        x = self.layer_1(inputs)
        return x


class binaryClassification(nn.Module):
    def __init__(self, num_input=2):
        super(binaryClassification, self).__init__()
        self.num_input = num_input  # Number of input features
        self.layer_1 = nn.Linear(self.num_input, 120)
        self.layer_2 = nn.Linear(120, 64)
        self.layer_out = nn.Linear(64, 1)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.2)
        self.batchnorm1 = nn.BatchNorm1d(120)
        self.batchnorm2 = nn.BatchNorm1d(64)

    def forward(self, inputs):
        x = self.relu(self.layer_1(inputs))
        x = self.batchnorm1(x)
        x = self.relu(self.layer_2(x))
        x = self.batchnorm2(x)
        x = self.dropout(x)
        x = self.layer_out(x)
        return x


class Multiclass(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 50)
        self.relu1 = nn.ReLU()
        self.dout = nn.Dropout(0.2)
        self.fc2 = nn.Linear(50, 100)
        self.prelu = nn.PReLU(1)
        self.out = nn.Linear(100, 1)
        self.out_act = nn.Softmax(dim=1)

    def forward(self, input_):
        a1 = self.fc1(input_)
        h1 = self.relu1(a1)
        dout = self.dout(h1)
        a2 = self.fc2(dout)
        h2 = self.prelu(a2)
        a3 = self.out(h2)
        y = self.out_act(a3)
        return y
_____no_output_____
MIT
pytorch.ipynb
lamahechag/pytorch_tensorflow
Training loop
In a neural network, the process of learning is as follows: calculate the output, calculate the gradient, do the backward pass, and update the weights. Within the training loop, you should do this in each iteration:
1. reset the gradients to zero,
1. perform the backward step,
1. update the parameters.
Also, the accuracy measurement and the evaluation on the test set should be defined beforehand as Pytorch operations.
def binary_acc(y_pred, y_test):
    y_pred_tag = torch.round(torch.sigmoid(y_pred))
    correct_results_sum = (y_pred_tag == y_test).sum().float()
    acc = correct_results_sum / y_test.shape[0]
    acc = torch.round(acc * 100)
    return acc


def eval_testdata(model, test_loader):
    y_pred_list = []
    model.eval()
    # this 'with' is to evaluate without a gradient step.
    with torch.no_grad():
        for X_batch in test_loader:
            X_batch = X_batch.to(device)
            y_test_pred = model(X_batch)
            y_test_pred = torch.sigmoid(y_test_pred)
            y_pred_tag = torch.round(y_test_pred)
            y_pred_list += y_pred_tag.cpu().numpy().squeeze().tolist()
    return y_pred_list


def train_model(model, criterion, optimizer, train_loader, EPOCHS, test_loader, y_test):
    model.train()
    for e in range(1, EPOCHS + 1):
        epoch_loss = 0
        epoch_acc = 0
        for X_batch, y_batch in train_loader:
            X_batch, y_batch = X_batch.to(device), y_batch.to(device)
            y_pred = model(X_batch)
            loss = criterion(y_pred, y_batch.unsqueeze(1))
            acc = binary_acc(y_pred, y_batch.unsqueeze(1))
            # Zero gradients, perform a backward pass, and update the weights.
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
            epoch_acc += acc.item()
        y_pred_test = eval_testdata(model, test_loader)
        eval_acc = round(accuracy_score(y_true=y_test, y_pred=y_pred_test), 2)
        eval_f1 = round(f1_score(y_true=y_test, y_pred=y_pred_test), 2)
        print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f} | Acc_eval: {eval_acc} | f1_eval: {eval_f1}')
_____no_output_____
MIT
pytorch.ipynb
lamahechag/pytorch_tensorflow
Declare model and train
We have defined a training loop, but we need a loss function and an optimizer to perform the gradient descent step. In the first line the data are loaded, followed by the model declaration; the model is then sent to the `GPU` device in this case. First experiment: logistic classifier.
train_loader, test_loader, y_test = loader_data(X_moon, y_moon, BATCH_SIZE=10)

model = LogisClassifier()
model.to(device)

# define loss function
criterion = nn.BCEWithLogitsLoss()
LEARNING_RATE = 0.001
# define gradient descent optimizer
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
print(model)

# now train (fit) the model
EPOCHS = 100
train_model(model, criterion, optimizer, train_loader, EPOCHS, test_loader, y_test)
_____no_output_____
MIT
pytorch.ipynb
lamahechag/pytorch_tensorflow
Drawing Conclusions Using Groupby
# Load `winequality_edited.csv`
import pandas as pd

df = pd.read_csv('winequality_edited.csv')
_____no_output_____
MIT
.ipynb_checkpoints/conclusions_groupby-checkpoint.ipynb
Siddharth1698/Data-Analyst-Nanodegree
Is a certain type of wine associated with higher quality?
# Find the mean quality of each wine type (red and white) with groupby
df.groupby('color').mean().quality
_____no_output_____
MIT
.ipynb_checkpoints/conclusions_groupby-checkpoint.ipynb
Siddharth1698/Data-Analyst-Nanodegree
What level of acidity receives the highest average rating?
# View the min, 25%, 50%, 75%, max pH values with Pandas describe
df.describe().pH

# Bin edges that will be used to "cut" the data into groups
bin_edges = [2.72, 3.11, 3.21, 3.32, 4.01]  # Fill in this list with the five values you just found

# Labels for the four acidity level groups
bin_names = ['high', 'mod_high', 'medium', 'low']  # Name each acidity level category

# Creates acidity_levels column
df['acidity_levels'] = pd.cut(df['pH'], bin_edges, labels=bin_names)

# Checks for successful creation of this column
df.head()

# Find the mean quality of each acidity level with groupby
df.groupby('acidity_levels').mean().quality

# Save changes for the next section
df.to_csv('winequality_edited.csv', index=False)
_____no_output_____
MIT
.ipynb_checkpoints/conclusions_groupby-checkpoint.ipynb
Siddharth1698/Data-Analyst-Nanodegree
Write a function that takes in a string and returns its longest substring without duplicate characters. Assume that there will only be one longest substring without duplication. For example, longest_substring("zaabcde") == 'abcde'
# longest substring function.
def longest_substring(s):
    lastSeen = {}     # dictionary of where the last occurrence of a character is.
    longest = [0, 1]  # indices of beginning and end of longest substring.
    startIdx = 0      # index of beginning of current substring.
    for i, char in enumerate(s):
        if char in lastSeen:
            # start over if character is in current substring.
            startIdx = max(startIdx, lastSeen[char] + 1)
        if longest[1] - longest[0] < i + 1 - startIdx:
            # if current substring is longer than longest.
            longest = [startIdx, i + 1]
        lastSeen[char] = i  # update dictionary.
    return s[longest[0]:longest[1]]

# driver program.
s = "zaabcde"
# result for longest substring.
print(longest_substring(s))
abcde
MIT
Challenges/LongestSubstringChallenge.ipynb
CVanchieri/LambdaSchool-DS-Challenges
Crimes
Svetozar Mateev
Putting Crime in the US in Context
First I am going to calculate the total crimes by dividing the population by 100,000 and then multiplying it by the crimes per capita. Then I am going to remove the NaN values.
crime_reports = pd.read_csv("report.csv")
crime_reports = crime_reports.dropna()
crime_reports = crime_reports.reset_index()
crime_reports["total_crimes"] = (crime_reports.population / 100000 * crime_reports.crimes_percapita)
# crime_reports[["population", 'crimes_percapita', 'total_crimes']]
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
• Have a look at the “months_reported” column values. What do they mean? What percent of the rows have less than 12 months? How significant is that?
crime_reports["months_reported"].unique() less_than_twelve=crime_reports[crime_reports.months_reported<12] print(str(len(less_than_twelve)/len(crime_reports.months_reported)*100)+'%')
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
The months_reported column indicates how many months of the year have been reported, and only 1.9% of the rows have less than 12 months reported per year, which on a 5% level isn't significant.
• Overall crime popularity: Create a bar chart of crime frequencies (total, not per capita). Display the type of crime and total occurrences (sum over all years). Sort largest to smallest. Are there any patterns? Which crime is most common?
homicides_total_sum = crime_reports.homicides.sum()
rapes_total_sum = crime_reports.rapes.sum()
assaults_total_sum = crime_reports.assaults.sum()
robberies_total_sum = crime_reports.robberies.sum()
total_crimes_total_sum = crime_reports.total_crimes.sum()

homicides_frequency = homicides_total_sum / total_crimes_total_sum
rapes_frequency = rapes_total_sum / total_crimes_total_sum
assaults_frequency = assaults_total_sum / total_crimes_total_sum
robberies_frequency = robberies_total_sum / total_crimes_total_sum

plt.bar(height=[assaults_frequency, robberies_frequency, rapes_frequency, homicides_frequency],
        left=[1, 2, 3, 4], align="center", width=0.2)
plt.xticks([1, 2, 3, 4], ['Assaults', 'Robberies', 'Rapes', 'Homicides'])
plt.ylabel("Frequency of a crime")
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
The most frequent crime is assault, and I can see from the diagram that less serious crimes are committed more often.
• Crime popularity by year: Break down the analysis of the previous graph by year. What is the most common crime (total, not per capita) for each year? What is the least common one?
homicides_sum = 0
rapes_sum = 0
assaults_sum = 0
robberies_sum = 0
for year in crime_reports.report_year.unique():
    year_df = crime_reports[crime_reports.report_year == year]
    homicides_sum_year = year_df.homicides.sum()
    rapes_sum_year = year_df.rapes.sum()
    assaults_sum_year = year_df.assaults.sum()
    robberies_sum_year = year_df.robberies.sum()
    if (homicides_sum_year > rapes_sum_year and homicides_sum_year > assaults_sum_year
            and homicides_sum_year > robberies_sum_year):
        homicides_sum += 1  # fixed typo: was 'homiciedes_sum'
        print(str(year) + ' ' + "homicides")
    elif (homicides_sum_year < rapes_sum_year and rapes_sum_year > assaults_sum_year
            and rapes_sum_year > robberies_sum_year):
        rapes_sum += 1
        print(str(year) + ' ' + "rapes")
    elif (homicides_sum_year < assaults_sum_year and rapes_sum_year < assaults_sum_year
            and assaults_sum_year > robberies_sum_year):
        assaults_sum += 1
        print(str(year) + ' ' + "assaults")
    elif (homicides_sum_year < robberies_sum_year and rapes_sum_year < robberies_sum_year
            and assaults_sum_year < robberies_sum_year):
        robberies_sum += 1
        print(str(year) + ' ' + "robberies")

# most common crime across the years
plt.bar(height=[assaults_sum, robberies_sum, homicides_sum, rapes_sum], left=[1, 2, 3, 4], align='center')
plt.xticks([1, 2, 3, 4], ['Assaults', 'Robberies', 'Homicides', 'Rapes'])
plt.ylabel("Times a crime was most often for a year")
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
I can see from the bar chart that assaults were the most common crime of the year almost thirty times, and that homicides and rapes were never the most common crime for a year.
• Crime evolution (e. g. crime rates as a function of time): How do crime rates per capita evolve over the years? Create a plot (or a series of plots) displaying how each rate evolves. Create another plot of all crimes (total, not per capita) over the years.
rapes_per_capita = []
homicides_per_capita = []
assaults_per_capita = []
robberies_per_capita = []
for year in crime_reports.report_year.unique():
    year_df = crime_reports[crime_reports.report_year == year]
    homicides_mean_year = year_df.homicides_percapita.mean()
    rapes_mean_year = year_df.rapes_percapita.mean()
    assaults_mean_year = year_df.assaults_percapita.mean()
    robberies_mean_year = year_df.robberies_percapita.mean()
    homicides_per_capita.append(homicides_mean_year)
    rapes_per_capita.append(rapes_mean_year)
    assaults_per_capita.append(assaults_mean_year)
    robberies_per_capita.append(robberies_mean_year)

plt.plot(crime_reports.report_year.unique(), rapes_per_capita)
plt.suptitle("Rapes")
plt.xlabel("Years")
plt.ylabel('Crimes per capita')
plt.show()

plt.plot(crime_reports.report_year.unique(), homicides_per_capita)
plt.suptitle("Homicides")
plt.xlabel("Years")
plt.ylabel('Crimes per capita')
plt.show()

plt.plot(crime_reports.report_year.unique(), assaults_per_capita)
plt.suptitle("Assaults")
plt.xlabel("Years")
plt.ylabel('Crimes per capita')
plt.show()

plt.plot(crime_reports.report_year.unique(), robberies_per_capita)
plt.suptitle("Robberies")
plt.xlabel("Years")
plt.ylabel('Crimes per capita')
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
From the plots we can see that each crime now has a significantly lower rate per capita and that for all of them the peak was between 1990 and 1995.
rapes_per_year = []
homicides_per_year = []
assaults_per_year = []
robberies_per_year = []
for year in crime_reports.report_year.unique():
    year_df = crime_reports[crime_reports.report_year == year]
    homicides_mean_year = year_df.homicides.sum()
    rapes_mean_year = year_df.rapes.sum()
    assaults_mean_year = year_df.assaults.sum()
    robberies_mean_year = year_df.robberies.sum()
    homicides_per_year.append(homicides_mean_year)
    rapes_per_year.append(rapes_mean_year)
    assaults_per_year.append(assaults_mean_year)
    robberies_per_year.append(robberies_mean_year)

plt.plot(crime_reports.report_year.unique(), rapes_per_year, label="Rapes")
plt.plot(crime_reports.report_year.unique(), assaults_per_year, label="Assaults")
plt.plot(crime_reports.report_year.unique(), homicides_per_year, label="Homicides")
plt.plot(crime_reports.report_year.unique(), robberies_per_year, label="Robberies")
plt.legend()
plt.ylabel("Number of crimes")
plt.xlabel("Years")
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
Again our observations are confirmed: the peak of crime is around 1990, and at present there are far fewer crimes, except for rapes, which between 2010 and 2015 began to rise slightly.
Crimes by States
• “Criminal” jurisdictions: Plot the sum of all crimes (total, not per capita) for each jurisdiction. Sort largest to smallest. Are any jurisdictions more prone to crime?
# agency_jurisdiction
jurisdicitons = []
counter = 0
crimes_per_jurisdiction = []
agencies_df = crime_reports.sort_values('violent_crimes', ascending=False)
for jurisdiciton in agencies_df.agency_jurisdiction.unique():
    jurisdicition_df = agencies_df[agencies_df.agency_jurisdiction == jurisdiciton]
    all_crimes = jurisdicition_df.violent_crimes.sum()
    crimes_per_jurisdiction.append(all_crimes)
    counter += 1
    jurisdicitons.append(jurisdiciton)
    if counter == 10:
        break

df_plottt = pd.DataFrame({'area': jurisdicitons, 'num': crimes_per_jurisdiction})
df_plottt = df_plottt.sort_values('num', ascending=False)
plt.bar(height=df_plottt.num, left=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], align='center')
plt.xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], df_plottt.area, rotation='vertical')
plt.ylabel("Number of Crimes")
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
From the bar chart we can see that the New York City, NY jurisdiction has the most crimes.
• “Criminal” jurisdictions, part 2: Create the same type of chart as above, but use the crime rates per capita this time. Are you getting the same distribution? Why? You may need data from the “population” column to answer this. Don’t perform significance tests, just inspect the plots.
jurisdicitons = []
counter = 0
crimes_per_jurisdiction = []
population = []
agencies_df = crime_reports
agencies_df = crime_reports.sort_values('crimes_percapita', ascending=False)
for a in agencies_df.agency_jurisdiction.unique():
    # note: this reuses `jurisdiciton` left over from the previous cell rather than the loop variable `a`
    agencies_df["crimes_percapita_per_agency"] = agencies_df[agencies_df.agency_jurisdiction == jurisdiciton].crimes_percapita.sum()
agencies_df = agencies_df.sort_values('crimes_percapita_per_agency', ascending=True)

for jurisdiciton in agencies_df.agency_jurisdiction.unique():
    jurisdicition_df = agencies_df[agencies_df.agency_jurisdiction == jurisdiciton]
    all_crimes = jurisdicition_df.crimes_percapita.sum()
    crimes_per_jurisdiction.append(all_crimes)
    counter += 1
    jurisdicitons.append(jurisdiciton)
    population.append(jurisdicition_df.population.mean())
    if counter == 10:
        break

df_plot = pd.DataFrame({'jurisdicitons': jurisdicitons, 'num': crimes_per_jurisdiction})
df_plot = df_plot.sort_values('num', ascending=False, axis=0)
plt.bar(height=df_plot.num, left=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], align='center')
plt.xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], df_plot.jurisdicitons, rotation='vertical')
plt.ylabel("Number of Crimes")
plt.show()

df_pop_plot = pd.DataFrame({'area': jurisdicitons, 'num': population})
df_pop_plot = df_pop_plot.sort_values('num', ascending=False, axis=0)
plt.bar(height=df_pop_plot.num, left=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], align='center')
plt.xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], df_pop_plot.area, rotation='vertical')
plt.ylabel("Population")
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
We can see that the crime rate per capita in Miami is the highest, contrary to the previous plot. However, there appears to be little correlation between the number of crimes per capita and the population.
• “Criminal states”: Create the same type of chart as in the first subproblem, but use the states instead. You can get the state name in two ways: either the first two letters of the agency_code column or the symbols after the comma in the agency_jurisdiction column.
parts = crime_reports['agency_jurisdiction'].str.extract(r"(\w+), (\w+)", expand=True)
parts.columns = ['something_else', 'state']
parts['state']
crime_reports['state'] = parts['state']

crime_states = []
total_crimes = []
counter = 0
gencies_df = crime_reports.sort_values('violent_crimes', ascending=False)
for state in crime_reports.state.unique():
    jurisdicition_df = crime_reports[crime_reports.state == state]
    all_crimes = jurisdicition_df.violent_crimes.sum()
    total_crimes.append(all_crimes)
    crime_states.append(state)
    counter += 1
    jurisdicitons.append(jurisdiciton)
    if counter == 10:
        break

plot_df = pd.DataFrame({'states': crime_states, 'num': total_crimes})
plot_df = plot_df.sort_values('num', ascending=False)
plt.bar(height=plot_df.num, left=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], align='center')
plt.xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], plot_df.states)
plt.ylabel("Number Of Crimes")
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
From the chart we can see that New York has the biggest number of crimes. • Hypothesis testing: Are crime rates per capita related to population, e. g. does a more densely populated community produce more crime (because there are more people), or less crime (because there is a better police force)? Plot the total number of crimes vs. population to find out. Is there any correlation? If so, what is it? Is the correlation significant?
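To actually quantify the correlation the question asks about (a minimal sketch, not part of the original notebook; it assumes the `crime_reports` dataframe with the `population` and `violent_crimes` columns used below and drops missing values first):

```python
from scipy.stats import pearsonr

# Correlation between jurisdiction population and total violent crimes.
subset = crime_reports[['population', 'violent_crimes']].dropna()
r, p_value = pearsonr(subset['population'], subset['violent_crimes'])
print('Pearson r = {:.3f}, p-value = {:.3g}'.format(r, p_value))
# |r| close to 1 suggests a strong linear relationship; a small p-value (< 0.05) suggests it is significant.
```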
total_crimes = []
agency_jurisdiction = []
population = []
counter = 0
for jurisdiction in crime_reports.agency_jurisdiction.unique():
    jurisdicition_df = crime_reports[crime_reports.agency_jurisdiction == jurisdiction]
    all_crimes = jurisdicition_df.violent_crimes.sum()
    total_crimes.append(all_crimes)
    counter += 1
    agency_jurisdiction.append(jurisdiction)
    population.append(jurisdicition_df.population.mean())
    if counter == 10:
        break

print(len(total_crimes), len(agency_jurisdiction))
plot_df = pd.DataFrame({'states': agency_jurisdiction, 'num': total_crimes, 'popu': population})
plot_df = plot_df.sort_values('num', ascending=False)
plt.bar(height=plot_df.popu, left=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], align='center', color='r', label="Population")
plt.bar(height=plot_df.num, left=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], align='center', color='b', label="Crimes")
plt.xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], plot_df.states, rotation='vertical')
plt.ylabel("Number")
plt.legend()
plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
We can see that there isn't a clear correlation between population and the number of crimes: some places, like Atlanta, GA, suggest there might be one, but others, like Baltimore County, MD, show completely different results. Additional data First, I am dropping some of the unnecessary columns and then transforming the dates to datetime objects.
crimes=pd.read_csv("crimes.csv") crimes=crimes.drop(['x','y','OBJECTID','ESRI_OID','Time'],axis=1) crimes.columns=['publicaddress', 'controlnbr', 'CCN', 'precinct', 'reported_date', 'begin_date', 'offense', 'description', 'UCRCode', 'entered_date', 'long', 'lat', 'neighborhood', 'lastchanged', 'last_update_date'] crimes.dtypes #2015-09-21T14:16:59.000Z crimes['reported_date']=pd.to_datetime(crimes['reported_date'],format='%Y-%m-%d',errors='ignore') crimes['entered_date']=pd.to_datetime(crimes['entered_date'],format='%Y-%m-%d',errors='ignore') crimes['lastchanged']=pd.to_datetime(crimes['lastchanged'],format='%Y-%m-%d',errors='ignore') crimes['last_update_date']=pd.to_datetime(crimes['last_update_date'],format='%Y-%m-%d',errors='ignore') crimes['begin_date']=pd.to_datetime(crimes['begin_date'],format='%Y-%m-%d',errors='ignore') crimes=crimes.dropna()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
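A side note on the date parsing in the cell above: the raw timestamps look like full ISO-8601 strings (e.g. 2015-09-21T14:16:59.000Z), so a date-only format string may not be what actually converts them. A minimal hedged sketch of a more direct call, using errors='coerce' so unparseable entries become NaT and are easy to count:

# Sketch: let pandas infer the ISO-8601 layout; utc=True handles the trailing 'Z'.
parsed = pd.to_datetime(crimes['begin_date'], utc=True, errors='coerce')
print(parsed.isnull().sum(), "rows failed to parse")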
• Total number of crimes per year: Count all crimes for years in the dataset (2010-2016). Print the total number.
print(str(len(crimes))+" "+"crimes between 2010 and 2016")
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
• Plot how crimes evolve each year
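As an aside, the manual per-year counters in the next cell can be replaced by the .dt accessor; a minimal sketch, assuming begin_date has been parsed to datetimes:

# Sketch: count crimes per year without an explicit if/elif chain.
per_year = crimes['begin_date'].dt.year.value_counts().sort_index()
per_year.plot(kind='bar')
plt.ylabel("Number of Crimes")
plt.show()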
year_10=0 year_11=0 year_12=0 year_13=0 year_14=0 year_15=0 year_16=0 for date in crimes.begin_date: if date.year==2010: year_10+=1 elif date.year==2011: year_11+=1 elif date.year==2012: year_12+=1 elif date.year==2013: year_13+=1 elif date.year==2014: year_14+=1 elif date.year==2015: year_15+=1 elif date.year==2016: year_16+=1 plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center') plt.ylabel("Number of Crimes") plt.xticks([1, 2, 3, 4 ,5 ,6 ,7],['2010','2011','2012','2013','2014','2015','2016',]) plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
From 2010 to 2012 there is a slight rise in the number of crimes. However, from 2012 to 2016 there is a drop in the number of crimes committed. • Compare the previous plot to the plots in the previous exercise. Note: In order to make the comparison better, plot the data for all states again, but this time filter only years 2010-2016. Does the crime rate in MN have any connection to the total crime rate? What percentage of the total crime rate (in all given states) is given by MN?
crime_states=[] total_crimes=[] counter=0 gencies_df=crime_reports.sort_values('violent_crimes',ascending=False) for state in crime_reports.state.unique(): jurisdicition_df=crime_reports[crime_reports.state==state] right_year=jurisdicition_df[jurisdicition_df.report_year>2009] all_crimes=right_year.violent_crimes.sum() total_crimes.append(all_crimes) crime_states.append(state) counter+=1 jurisdicitons.append(jurisdiciton) if counter==10: break plot_df=pd.DataFrame({'states':crime_states,'num':total_crimes}) plot_df=plot_df.sort_values('num',ascending=False) plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center') plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states) plt.ylabel("Number Of Crimes") plt.show() year_10=0 year_11=0 year_12=0 year_13=0 year_14=0 year_15=0 year_16=0 for date in crimes.begin_date: if date.year==2010: year_10+=1 elif date.year==2011: year_11+=1 elif date.year==2012: year_12+=1 elif date.year==2013: year_13+=1 elif date.year==2014: year_14+=1 elif date.year==2015: year_15+=1 elif date.year==2016: year_16+=1 plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center') plt.ylabel("Number of Crimes") plt.xticks([1, 2, 3, 4 ,5 ,6 ,7],['2010','2011','2012','2013','2014','2015','2016',]) plt.show() whole_number = sum(i for i in total_crimes) print(str(len(crimes)/whole_number)+' '+'% from the total number of crimes committed between 2010 and 2016')
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
• Cross-dataset matching: Get data from the previous dataset (crime rates in the US) again. This time, search only for MN and only for years 2010-2016. Do you have any results? If so, the results for total crime in MN should match in both datasets. Do they match?
year_10n=4064.0 year_11n=3722.0 year_12n=3872.0 year_13n=4038.0 year_14n=4093.0 year_15n=0 year_16n=0 MN=crime_reports[crime_reports.state=="MN"] MN=MN[MN.report_year>2009] number_crimes=sum(MN.violent_crimes) print(str(int(number_crimes))+" from the first data set") print(str(len(crimes))+" "+"from the second data set") year_10=0 year_11=0 year_12=0 year_13=0 year_14=0 year_15=0 year_16=0 for date in crimes.begin_date: if date.year==2010: year_10+=1 elif date.year==2011: year_11+=1 elif date.year==2012: year_12+=1 elif date.year==2013: year_13+=1 elif date.year==2014: year_14+=1 elif date.year==2015: year_15+=1 elif date.year==2016: year_16+=1 plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center',color='r',label="Second DataSet values") plt.bar(height=[year_10n,year_11n,year_12n,year_13n,year_14n,year_15n,year_16n],left=[1,2,3,4,5,6,7],align='center',color='b',label="First DataSet values") plt.legend() plt.xticks([1,2,3,4,5,6,7],['2010','2011','2012','2013','2014','2015','2016',]) plt.ylabel("Crimes") plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
The first data set only runs up to 2014, and its values are much smaller than those in the second; there is a big difference between the two. Temporal Analysis • Look at the crime categories. Which is the most popular crime category in MN overall?
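The category counts in the next cell are maintained by hand in a dictionary; value_counts gives the same ranking directly. A minimal sketch:

# Sketch: rank crime categories by frequency in one call.
print(crimes['description'].value_counts().head(10))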
crimes.description.unique() d={'Shoplifting':1, 'Theft From Motr Vehc':1, 'Other Theft':1, 'Theft From Building':1, 'Crim Sex Cond-rape':1, 'Burglary Of Dwelling':1, 'Theft From Person':1, 'Motor Vehicle Theft':1, 'Robbery Of Business':1, 'Aslt-police/emerg P':1, 'Domestic Assault/Strangulation':1, 'Theft-motr Veh Parts':1, 'Robbery Of Person':1, 'Asslt W/dngrs Weapon':1, 'Robbery Per Agg':1, 'Burglary Of Business':1, 'Arson':1, 'Theft By Swindle':1, 'Aslt-great Bodily Hm':1, 'Aslt-sgnfcnt Bdly Hm':1, 'On-line Theft':1, '2nd Deg Domes Aslt':1, 'Murder (general)':1, 'Adulteration/poison':1, 'Gas Station Driv-off':1, 'Other Vehicle Theft':1, '3rd Deg Domes Aslt':1, 'Pocket-picking':1, 'Theft/coinop Device':1, 'Disarm a Police Officer':1, 'Theft By Computer':1, '1st Deg Domes Asslt':1, 'Bike Theft':1, 'Scrapping-Recycling Theft':1, 'Justifiable Homicide':0, 'Looting':1} for desc in crimes.description: d[desc]+=1 sorted_d = sorted(d.items(), key=operator.itemgetter(1)) print(sorted_d)
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
The most common type is 'Other Theft', but since that category is so vague, we can say that 'Burglary Of Dwelling' is the most common specific type of theft. • Break down the data by months. Plot the total number of crimes for each month, summed over the years. Is there a seasonal component? Which month has the highest crime rate? Which has the smallest? Are the differences significant?
january=0 february=0 march=0 april=0 may=0 june=0 july=0 august=0 september=0 october=0 november=0 december=0 for date in crimes.begin_date: if(date.month==1): january+=1 elif(date.month==2): february+=1 elif(date.month==3): march+=1 elif(date.month==4): april+=1 elif(date.month==5): may+=1 elif(date.month==6): june+=1 elif(date.month==7): july+=1 elif(date.month==8): august+=1 elif(date.month==9): september+=1 elif(date.month==10): october+=1 elif(date.month==11): november+=1 elif(date.month==12): december+=1 plt.bar(height=[january,february,march,april,may,june,july,august,september,october,november,december] ,left=[1,2,3,4,5,6,7,8,9,10,11,12],align='center') plt.xticks([1,2,3,4,5,6,7,8,9,10,11,12], ['january','february','march','april','may','june','july','august','september','october','november','december'] ,rotation='vertical') plt.ylabel("Number Of Crimes") plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
We can see that most of the crimes happen in June and that there is a seasonal tendency: more crimes are committed in the summer. • Break the results by weekday. You can get the weekday from the date (there are functions for this). Do more crimes happen on the weekends?
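For the weekday breakdown, the .dt accessor again avoids the manual counters; a minimal sketch (dayofweek runs from 0 for Monday to 6 for Sunday):

# Sketch: crimes per weekday, labelled Monday..Sunday.
day_names = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
per_day = crimes['begin_date'].dt.dayofweek.value_counts().sort_index()
per_day.index = [day_names[i] for i in per_day.index]
print(per_day)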
Monday=0 Tuesday=0 Wednesday=0 Thursday=0 Friday=0 Saturday=0 Sunday=0 for date in crimes.begin_date: if(date.weekday()==0): Monday+=1 elif(date.weekday()==1): Tuesday+=1 elif(date.weekday()==2): Wednesday+=1 elif(date.weekday()==3): Thursday+=1 elif(date.weekday()==4): Friday+=1 elif(date.weekday()==5): Saturday+=1 elif(date.weekday()==6): Sunday+=1 plt.bar(height=[Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday] ,left=[1,2,3,4,5,6,7],align='center') plt.xticks([1,2,3,4,5,6,7],['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday'],rotation='vertical') plt.ylabel("Number Of Crimes") plt.show()
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
Most crimes are committed on Fridays, with Thursdays in second place. • Break the weekday data by crime type. Are certain types of crime more likely to happen on a given day? Comment on your findings. I have no time to complete this because I have a Programming Fundamentals exam to take, but I would make 7 plots, one for each day of the week, with the top 10 types of crimes (a crosstab sketch of this idea follows below). 5. Significant Factors in Crime
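Referring back to the unfinished weekday-by-crime-type item above, a crosstab is one compact way to sketch it (an illustrative addition, not part of the original exam solution):

# Sketch: rows are weekdays (0 = Monday), columns are crime types, cells are counts.
by_day_type = pd.crosstab(crimes['begin_date'].dt.dayofweek, crimes['description'])
# Most frequent crime type on each weekday:
print(by_day_type.idxmax(axis=1))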
communities= pd.read_table("communities.data",sep=',',header=None) communities.columns communities_names= pd.read_table('communities.names',header=None) communities_names
_____no_output_____
MIT
DataScienceExam/Exam.ipynb
SvetozarMateev/Data-Science
fastdot.core> Drawing graphs with graphviz.
#export from fastcore.all import * import pydot from matplotlib.colors import rgb2hex, hex2color #export _all_ = ['pydot'] #hide from nbdev.showdoc import *
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Nodes
#export def Dot(defaults=None, rankdir='LR', directed=True, compound=True, **kwargs): "Create a `pydot.Dot` graph with fastai/fastdot style defaults" return pydot.Dot(rankdir=rankdir, directed=directed, compound=compound, **kwargs) #export def uniq_name(o): return 'n'+(uuid4().hex) def quote(x, q='"'): 'Surround `x` with `"`' return f'"{x}"' @patch def _repr_svg_(self:pydot.Dot): return self.create_svg().decode('utf-8') #export graph_objects = {} object_names = {} #export def add_mapping(graph_item, obj): graph_objects[graph_item.get_name()] = graph_item object_names[id(obj)] = graph_item.get_name() return graph_item #export def _pydot_create(f, obj, **kwargs): for k,v in kwargs.items(): if callable(v): v = kwargs[k] = v(obj) if k not in ('name','graph_name'): kwargs[k] = quote(v) return add_mapping(f(**kwargs), obj) #export node_defaults = dict(label=str, tooltip=str, name=uniq_name, shape='box', style='rounded, filled', fillcolor='white') #export def Node(obj, **kwargs): "Create a `pydot.Node` with a unique name" if not isinstance(obj,str) and isinstance(obj, Collection) and len(obj)==2: obj,kwargs['tooltip'] = obj kwargs = merge(node_defaults, kwargs) return _pydot_create(pydot.Node, obj, **kwargs)
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
`pydot` uses the same name-based approach to identifying graph items as `graphviz`. However, we would rather use Python objects. Therefore, we patch `pydot` to use unique names.
g = Dot() a = Node('a') g.add_node(a) g
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
If a 2-tuple is passed to `add_node`, then the 2nd element becomes the tooltip. You can also pass any `kwargs` that are accepted by `graphviz`.
g = Dot() g.add_node(Node(['a', "My tooltip"], fillcolor='pink')) g
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Keyword args can also be arbitrary functions, which will be called with the node's label.
g = Dot() o = 'a' g.add_node(Node(o, fillcolor=lambda o:'pink')) g #export def object2graph(o): "Get graph item representing `o`" return graph_objects[object_names[id(o)]] object2graph(o).get_fillcolor()
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Colors The callable kwargs functionality can be used to map labels to colors in a consistent way.
#export def obj2node_color(cm, minalpha, rangealpha, o): "Create a consistent mapping from objects to colors, using colormap `cm`" h = hash(o) i = float(h % 256) / 256 alpha = (h^hash('something')) % rangealpha + minalpha return rgb2hex(cm(i)) + f'{alpha:02X}' #exports graph_colors1 = partial(obj2node_color, plt.get_cmap('rainbow'), 30, 160) graph_colors2 = partial(obj2node_color, plt.get_cmap('tab20'), 30, 160)
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
These predefined color mapping functions provide a good range of colors and readable text.
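As a quick, illustrative check of the consistency claim (a small addition, not part of the original notebook): the same label hashes to the same color string within a Python session, so repeated labels get matching colors wherever they appear.

# Sketch: identical labels map to identical fillcolors within one session
# (string hashing is randomized between sessions, so the exact hex varies per run).
n1 = Node('conv', fillcolor=graph_colors1)
n2 = Node('conv', fillcolor=graph_colors1)
print(n1.get_fillcolor() == n2.get_fillcolor())  # expected: True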
g = Dot() g.add_node(Node('a', fillcolor=graph_colors1)) g.add_node(Node('b', fillcolor=graph_colors1)) g g = Dot() g.add_node(Node('a', fillcolor=graph_colors2)) g.add_node(Node('b', fillcolor=graph_colors2)) g
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
We'll use the former color function as our default. You can change it by simply modifying `node_defaults`.
#export node_defaults['fillcolor'] = graph_colors1
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Clusters and Items
#export cluster_defaults = dict(label=str, tooltip=str, graph_name=uniq_name, style='rounded, filled', fillcolor='#55555522') #export def Cluster(obj='', **kwargs): "Create a `pydot.Cluster` with a unique name" kwargs = merge(cluster_defaults, kwargs) return _pydot_create(pydot.Cluster, obj, **kwargs) g = Dot() sg = Cluster('clus', tooltip='Cluster tooltip') sg.add_node(Node(['a', "My tooltip"])) sg.add_node(Node('b')) g.add_subgraph(sg) g #export @patch def nodes(self:pydot.Graph): "`i`th node in `Graph`" return L(o for o in self.get_nodes() if o.get_label() is not None) #export @patch def __getitem__(self:pydot.Graph, i): "`i`th node in `Graph`" return self.nodes()[i]
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
You can subscript into a `Graph`'s `Node`s by index:
print(sg[0].get_label()) #export @patch def add_item(self:pydot.Graph, item, **kwargs): "Add a `Cluster`, `Node`, or `Edge` to the `Graph`" if not isinstance(item, (pydot.Edge,pydot.Node,pydot.Graph)): item = Node(item, **kwargs) f = self.add_node if isinstance(item, pydot.Node ) else \ self.add_subgraph if isinstance(item, pydot.Graph) else \ self.add_edge if isinstance(item, pydot.Edge ) else None f(item) return item
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
There's no good reason to have different methods for adding clusters vs nodes (as `pydot` requires), so we provide a single method.
g = Dot() sg = Cluster('clus') g.add_item(sg) sg.add_item('a') g #export @patch def add_items(self:pydot.Graph, *items, **kwargs): "Add `items` the `Graph`" return L(self.add_item(it, **kwargs) for it in items) #export def graph_items(*items, **kwargs): "Add `items` to a new `pydot.Dot`" g = Dot() g.add_items(*items, **kwargs) return g sg1 = Cluster('clus') sg1.add_items('n1', 'n2') sg2 = Cluster() sg2.add_item('n') graph_items(sg1,sg2)
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Edges
#export @patch def first(self:pydot.Graph): "First node in `Graph`, searching subgraphs recursively as needed" nodes = self.nodes() if nodes: return nodes[0] for subg in self.get_subgraphs(): res = subg.first() if res: return res #export @patch def last(self:pydot.Graph): "Last node in `Graph`, searching subgraphs recursively as needed" nodes = self.nodes() if nodes: return nodes[-1] for subg in reversed(self.get_subgraphs()): res = subg.last() if res: return res #export @patch def with_compass(self:(pydot.Node,pydot.Graph), compass=None): r = self.get_name() return f'{r}:{compass}' if compass else r # export @patch def connect(self:(pydot.Node,pydot.Graph), item, compass1=None, compass2=None, **kwargs): "Connect two nodes or clusters" a,b,ltail,lhead = self,item,'','' if isinstance(self,pydot.Graph): a = self.last() ltail=self.get_name() if isinstance(item,pydot.Graph): b = item.first() lhead=item.get_name() a,b = a.with_compass(compass1),b.with_compass(compass2) return pydot.Edge(a, b, lhead=lhead, ltail=ltail, **kwargs) sg2 = Cluster('clus2') n1 = sg2.add_item('n1', fillcolor='pink') n2 = sg2.add_item('n2', fillcolor='lightblue') sg2.add_item(n1.connect(n2)) sg1 = Cluster('clus1') sg1.add_item(sg2) a,b = Node('a'),Node('b') edges = a.connect(b),a.connect(a),sg1.connect(b),sg2[0].connect(a) g = Dot() g.add_items(sg1, a, b, *edges) g #export def object_connections(conns): "Create connections between all pairs in `conns`" return [object2graph(a).connect(object2graph(b)) for a,b in conns]
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
This is a shortcut for creating connections between objects that are already in a graph.
a,b = 'a','b' g = graph_items(a, b) g.add_items(*object_connections([(a,b)])) g
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Sequential Since it's common to want to connect a series sequentially, we provide some simple shortcuts for this functionality.
#export def graph_edges_seq(items): "Add edges between each pair of nodes in `items`" return L(items[i].connect(items[i+1]) for i in range(len(items)-1)) #export @patch def add_edges_seq(self:pydot.Graph, items): "Add edges between each pair of nodes in `items`" return self.add_items(*graph_edges_seq(items)) g = Dot() its = g.add_items('a','b','c') g.add_edges_seq(its) g #export def seq_cluster(items, cluster_label='', **kwargs): sg = Cluster(cluster_label) its = sg.add_items(*items, **kwargs) sg.add_edges_seq(its) return sg g = Dot() g.add_item(seq_cluster(['a','b','c'], 'clust')) g.add_item(seq_cluster(['1','2','c'], 'clust2')) g g = Dot() g.add_item(seq_cluster(['a','b','c'], 'clust')) g sg1 = seq_cluster(['a','b','c'], 'clust1') sg2 = seq_cluster(['a1','a2',sg1], 'clust2') g = Dot() g.add_item(sg2) g sg1 = seq_cluster(['inp'], 'clust1') sg2 = seq_cluster(['a','b','c'], 'clust2') sg2.add_items(sg1.connect(sg2[-1]), sg1.connect(sg2)) g = Dot() g.add_items(sg1,sg2) g # export def Point(label='pnt', **kwargs): "Create a `Node` with a 'point' shape" return (Node('pnt', shape='point')) sg = Cluster('clus') a,b,c = sg.add_items('a','b','c') p = sg.add_item(Point()) sg.add_item(p.connect(c)) sg.add_items(p.connect(a), a.connect(b), b.connect(c)) g = Dot() g.add_items(sg) g
_____no_output_____
Apache-2.0
00_core.ipynb
noklam/fastdot
Export -
#hide from nbdev.export import notebook2script notebook2script()
Converted 00_core.ipynb. Converted index.ipynb.
Apache-2.0
00_core.ipynb
noklam/fastdot
Data Science Boot Camp Introduction to Pandas Part 1 * __Pandas__ is a Python package providing fast, flexible, and expressive data structures designed to work with both *relational* and *labeled* data.* It is a fundamental high-level building block for doing practical, real-world data analysis in Python.* Python has always been great for prepping and munging data, but it's never been great for analysis - you'd usually end up using R or loading it into a database and using SQL. Pandas makes Python great for analysis. * Pandas is well suited for: * Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet * Ordered and unordered (not necessarily fixed-frequency) time series data. * Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels * Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure * Key features of Pandas: * Easy handling of __missing data__ * __Size mutability__: columns can be inserted and deleted from DataFrame and higher-dimensional objects. * Automatic and explicit __data alignment__: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically. * __Fast__ and __efficient__ DataFrame object with default and customized indexing. * __Reshaping__ and __pivoting__ of data sets. * Key features of Pandas (Continued): * Label-based __slicing__, __indexing__, __fancy indexing__ and __subsetting__ of large data sets. * __Group by__ data for aggregation and transformations. * High-performance __merging__ and __joining__ of data. * __IO Tools__ for loading data into in-memory data objects from different file formats. * __Time Series__ functionality. * First, we have to import the pandas and NumPy libraries under the aliases pd and np. * Then check our pandas version.
%matplotlib inline import pandas as pd import numpy as np print(pd.__version__)
0.22.0
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Let's set some options for `Pandas`
pd.set_option('display.notebook_repr_html', False) pd.set_option('max_columns', 10) pd.set_option('max_rows', 10)
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
Pandas Objects * At the very basic level, Pandas objects can be thought of as enhanced versions of NumPy structured arrays in which the rows and columns are identified with labels rather than simple integer indices.* There are three fundamental Pandas data structures: the Series, DataFrame, and Index. Series * A __Series__ is a single vector of data (like a NumPy array) with an *index* that labels each element in the vector.* It can be created from a list or array as follows:
counts = pd.Series([15029231, 7529491, 7499740, 5445026, 2702492, 2742534, 4279677, 2133548, 2146129]) counts
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the `Series`, while the index is a pandas `Index` object.
counts.values counts.index
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* We can assign meaningful labels to the index, if they are available:
population = pd.Series([15029231, 7529491, 7499740, 5445026, 2702492, 2742534, 4279677, 2133548, 2146129], index=['Istanbul Total', 'Istanbul Males', 'Istanbul Females', 'Ankara Total', 'Ankara Males', 'Ankara Females', 'Izmir Total', 'Izmir Males', 'Izmir Females']) population
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* These labels can be used to refer to the values in the `Series`.
population['Istanbul Total'] mask = [city.endswith('Females') for city in population.index] mask population[mask]
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* As you noticed, we can use masking on a Series. * Also, we can still use positional indexing even if we assign meaningful labels to the index, if we wish.
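For reference, more recent pandas versions make the two access styles explicit; a hedged aside (the plain population[0] call below relies on positional fallback behaviour that has changed across pandas versions):

print(population.iloc[0])                # explicit positional access
print(population.loc['Istanbul Total'])  # explicit label-based access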
population[0]
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* We can give both the array of values and the index meaningful labels themselves:
population.name = 'population' population.index.name = 'city' population
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Also, NumPy's math functions and other operations can be applied to Series without losing the data structure.
np.ceil(population / 1000000) * 1000000
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* We can also filter according to the values in the `Series` like in the Numpy's:
population[population>3000000]
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* A `Series` can be thought of as an ordered key-value store. In fact, we can create one from a `dict`:
populationDict = {'Istanbul Total': 15029231, 'Ankara Total': 5445026, 'Izmir Total': 4279677} pd.Series(populationDict)
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Notice that the `Series` is created in key-sorted order. * If we pass a custom index to `Series`, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the `NaN` (not a number) type for missing values.
population2 = pd.Series(populationDict, index=['Istanbul Total','Ankara Total','Izmir Total','Bursa Total', 'Antalya Total']) population2 population2.isnull()
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Critically, the labels are used to **align data** when used in operations with other Series objects:
population + population2
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Contrast this with NumPy arrays, where arrays of the same length combine values element-wise; adding two Series combines values with the same label in the resulting Series. Notice also that the missing values were propagated by addition (a fill_value sketch follows below). DataFrame * A `DataFrame` represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type (numeric, string, boolean, etc.). * `DataFrame` has both a row and column index; it can be thought of as a dict of Series (all sharing the same index).
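Referring to the propagation of missing values noted above: if NaN is not what you want, the arithmetic methods accept a fill_value. A minimal sketch:

# Sketch: treat labels missing from one Series as 0 instead of producing NaN.
print(population.add(population2, fill_value=0))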
areaDict = {'Istanbul': 5461, 'Ankara': 25632, 'Izmir': 11891, 'Bursa': 10813, 'Antalya': 20177} area = pd.Series(areaDict) area populationDict = {'Istanbul': 15029231, 'Ankara': 5445026, 'Izmir': 4279677, 'Bursa': 2936803, 'Antalya': 2364396} population3 = pd.Series(populationDict) population3
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Now that we have two Series (population by city and area by city), we can use a dictionary to construct a single two-dimensional object containing this information:
cities = pd.DataFrame({'population': population3, 'area': area}) cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Or we can create our cities `DataFrame` with lists and indexes.
cities = pd.DataFrame({ 'population':[15029231, 5445026, 4279677, 2936803, 2364396], 'area':[5461, 25632, 11891, 10813, 20177], 'city':['Istanbul', 'Ankara', 'Izmir', 'Bursa', 'Antalya'] }) cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
Notice the `DataFrame` is sorted by column name. We can change the order by indexing them in the order we desire:
cities[['city','area', 'population']]
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* A `DataFrame` has a second index, representing the columns:
cities.columns
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* If we wish to access columns, we can do so either by dictionary-like indexing or by attribute:
cities['area'] cities.area type(cities.area) type(cities[['area']])
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* Notice this is different from `Series`, where dictionary-like indexing retrieved a particular element (row). If we want to access a row in a `DataFrame`, we index its `iloc` attribute.
cities.iloc[2] cities.iloc[0:2]
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
Alternatively, we can create a `DataFrame` with a dict of dicts:
cities = pd.DataFrame({ 0: {'city': 'Istanbul', 'area': 5461, 'population': 15029231}, 1: {'city': 'Ankara', 'area': 25632, 'population': 5445026}, 2: {'city': 'Izmir', 'area': 11891, 'population': 4279677}, 3: {'city': 'Bursa', 'area': 10813, 'population': 2936803}, 4: {'city': 'Antalya', 'area': 20177, 'population': 2364396}, }) cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* We probably want this transposed:
cities = cities.T cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* It's important to note that the Series returned when a DataFrame is indexed is merely a **view** on the DataFrame, not a copy of the data itself. * So you must be cautious when manipulating this data, just as with NumPy views.
areas = cities.area areas areas[3] = 0 areas cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* This is a useful behavior for large data sets, but to prevent it you can use the `copy` method.
areas = cities.area.copy() areas[3] = 10813 areas cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* We can create or modify columns by assignment:
cities.area[3] = 10813 cities cities['year'] = 2017 cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks
* But note that we cannot use attribute indexing to add a new column, as the cell below shows (a working item-assignment sketch is given first):
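For completeness, a minimal sketch of the item assignment that does add a column (the same pattern used earlier for the 'year' column; 'projection2020' is just an illustrative name):

# Sketch: item assignment creates the new column; attribute assignment does not.
cities['projection2020'] = 20000000
cities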
cities.projection2020 = 20000000 cities
_____no_output_____
Apache-2.0
iPython Notebooks/Introduction to Pandas Part 1.ipynb
AlpYuzbasioglu/Zeppelin-Notebooks