| column | type | min length | max length |
| --- | --- | --- | --- |
| markdown | string | 0 | 1.02M |
| code | string | 0 | 832k |
| output | string | 0 | 1.02M |
| license | string | 3 | 36 |
| path | string | 6 | 265 |
| repo_name | string | 6 | 127 |
Step 6 (again) - Deploy the model for the web app. Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review. As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code. We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present, which will tell SageMaker what Python libraries are required by our custom inference code. When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete. For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference code. Before writing our custom inference code, we will begin by taking a look at the code which has been provided; a small local smoke test of these handlers is sketched after the listing below.
!pygmentize serve/predict.py
import argparse import json import os import pickle import sys import sagemaker_containers import pandas as pd import numpy as np import torch import torch.nn as nn import torch.optim as optim import torch.utils.data from model import LSTMClassifier from utils import review_to_words, convert_and_pad def model_fn(model_dir): """Load the PyTorch model from the `model_dir` directory.""" print("Loading model.") # First, load the parameters used to create the model. model_info = {} model_info_path = os.path.join(model_dir, 'model_info.pth') with open(model_info_path, 'rb') as f: model_info = torch.load(f) print("model_info: {}".format(model_info)) # Determine the device and construct the model. device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = LSTMClassifier(model_info['embedding_dim'], model_info['hidden_dim'], model_info['vocab_size']) # Load the store model parameters. model_path = os.path.join(model_dir, 'model.pth') with open(model_path, 'rb') as f: model.load_state_dict(torch.load(f)) # Load the saved word_dict. word_dict_path = os.path.join(model_dir, 'word_dict.pkl') with open(word_dict_path, 'rb') as f: model.word_dict = pickle.load(f) model.to(device).eval() print("Done loading model.") return model def input_fn(serialized_input_data, content_type): print('Deserializing the input data.') if content_type == 'text/plain': data = serialized_input_data.decode('utf-8') return data raise Exception('Requested unsupported ContentType in content_type: ' + content_type) def output_fn(prediction_output, accept): print('Serializing the generated output.') return str(prediction_output) def predict_fn(input_data, model): print('Inferring sentiment of input data.') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") if model.word_dict is None: raise Exception('Model has not been loaded properly, no word_dict.') # TODO: Process input_data so that it is ready to be sent to our model. # You should produce two variables: # data_X - A sequence of length 500 which represents the converted review # data_len - The length of the review data_X = None data_len = None words = review_to_words(input_data) data_X, data_len = convert_and_pad(model.word_dict, words) # Using data_X and data_len we construct an appropriate input tensor. Remember # that our model expects input data of the form 'len, review[500]'. data_pack = np.hstack((data_len, data_X)) data_pack = data_pack.reshape(1, -1) data = torch.from_numpy(data_pack) data = data.to(device) # Make sure to put the model into evaluation mode model.eval() # TODO: Compute the result of applying the model to the input data. The variable `result` should # be a numpy array which contains a single integer which is either 1 or 0 result = None with torch.no_grad(): output = model.forward(data) result = int(np.round(output.numpy())) return result
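Before deploying, it can be useful to exercise these four handlers locally. The snippet below is a minimal smoke-test sketch, not part of the project instructions: it assumes the contents of the training job's `model.tar.gz` have been extracted into a local directory (the name used here is illustrative), that it is run from inside the `serve` directory so `predict.py` is importable, and that `sagemaker_containers` (imported at the top of `predict.py`) is installed locally.

```python
# Hypothetical local smoke test of the inference handlers (directory name is illustrative).
# Assumes model_info.pth, model.pth and word_dict.pkl have been extracted into ./local_model_dir,
# that this runs from inside the `serve` directory, and that sagemaker_containers is installed.
from predict import model_fn, input_fn, predict_fn, output_fn

model = model_fn("local_model_dir")                         # loads the LSTM and its word_dict
raw = "This movie was absolutely wonderful!".encode("utf-8")
review = input_fn(raw, "text/plain")                        # de-serialize the request body
sentiment = predict_fn(review, model)                       # 1 = positive, 0 = negative
print(output_fn(sentiment, "text/plain"))
```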
MIT
Project/SageMaker Project.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project1
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple; your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory. **TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the model. Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container. **NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data; a hedged sketch of such a predictor follows the deployment cell below.
from sagemaker.predictor import RealTimePredictor from sagemaker.pytorch import PyTorchModel class StringPredictor(RealTimePredictor): def __init__(self, endpoint_name, sagemaker_session): super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain') model = PyTorchModel(model_data=estimator.model_data, role = role, framework_version='0.4.0', entry_point='predict.py', source_dir='serve', predictor_cls=StringPredictor) predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
Parameter image will be renamed to image_uri in SageMaker Python SDK v2. 'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
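The note above mentions supplying a serialization object for richer payloads such as image data. Purely as a hedged sketch in the same SDK v1 style as `RealTimePredictor` above (the class name is illustrative and this project does not use it), a JSON-based predictor could look like the following, provided `input_fn`/`output_fn` in `predict.py` were updated to handle `application/json`:

```python
# Illustrative only -- not used in this project. A predictor wrapper that JSON-encodes
# requests and decodes responses, for payloads richer than a plain string.
from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer

class JSONPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(JSONPredictor, self).__init__(endpoint_name,
                                            sagemaker_session,
                                            serializer=json_serializer,
                                            deserializer=json_deserializer,
                                            content_type='application/json')
```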
MIT
Project/SageMaker Project.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project1
Testing the model. Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and then collecting the results. The reason for only sending some of the data is that the time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive. A per-class breakdown of these results is sketched after the cell below.
import glob def test_reviews(data_dir='../data/aclImdb', stop=250): results = [] ground = [] # We make sure to test both positive and negative reviews for sentiment in ['pos', 'neg']: path = os.path.join(data_dir, 'test', sentiment, '*.txt') files = glob.glob(path) files_read = 0 print('Starting ', sentiment, ' files') # Iterate through the files and send them to the predictor for f in files: with open(f) as review: # First, we store the ground truth (was the review positive or negative) if sentiment == 'pos': ground.append(1) else: ground.append(0) # Read in the review and convert to 'utf-8' for transmission via HTTP review_input = review.read().encode('utf-8') # Send the review to the predictor and store the results results.append(int(predictor.predict(review_input))) # Sending reviews to our endpoint one at a time takes a while so we # only send a small number of reviews files_read += 1 if files_read == stop: break return ground, results ground, results = test_reviews() from sklearn.metrics import accuracy_score accuracy_score(ground, results)
_____no_output_____
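Beyond a single accuracy number, a per-class breakdown can reveal whether the endpoint is biased toward one sentiment. A small optional follow-up, run after the cell above has populated `ground` and `results`:

```python
# Optional follow-up: per-class breakdown of the endpoint's predictions.
# Assumes `ground` and `results` were produced by test_reviews() above.
from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(ground, results))   # rows = true label (0=neg, 1=pos), cols = predicted
print(classification_report(ground, results, target_names=['negative', 'positive']))
```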
MIT
Project/SageMaker Project.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project1
As an additional test, we can try sending the `test_review` that we looked at earlier.
predictor.predict(test_review)
_____no_output_____
MIT
Project/SageMaker Project.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project1
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app. > **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console. So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible, since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services. The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return. In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint. Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda function. The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda function. Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function (the console steps follow; an equivalent boto3 script is sketched just below). Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**. In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**. Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda function. Now it is time to actually create the Lambda function. Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected.
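For reference, the Part A console steps above can also be scripted. This is a hedged boto3 sketch rather than part of the project instructions: the role name matches the example above, and your credentials must themselves be allowed to create IAM roles.

```python
# Hedged alternative to the Part A console steps: create the Lambda execution role with boto3.
import json
import boto3

iam = boto3.client('iam')

# Trust policy letting the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(RoleName='LambdaSageMakerRole',
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName='LambdaSageMakerRole',
                       PolicyArn='arn:aws:iam::aws:policy/AmazonSageMakerFullAccess')
```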
Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**. On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.

```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
                                       ContentType = 'text/plain',              # The data format that is expected
                                       Body = event['body'])                    # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```

Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
predictor.endpoint
_____no_output_____
MIT
Project/SageMaker Project.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project1
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API Gateway. Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created. Using the AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**. On the next page, make sure that **New API** is selected and give the new API a name, for example, `sentiment_analysis_api`. Then, click on **Create API**. Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier. Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created; select its dropdown menu and select **POST**, then click on the check mark beside it. For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway. Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created. The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`. You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**. Step 4: Deploying our web app. Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public API you created earlier. In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file. Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model. If you'd like to go further, you can host this html file anywhere you'd like, for example using GitHub or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too! > **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill. **TODO:** Make sure that you include the edited `index.html` file in your project submission. A quick way to exercise the new public API directly from Python is sketched below.
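Before wiring up `index.html`, you can sanity-check the public API directly from Python. The URL below is a placeholder; substitute the Invoke URL you copied from API Gateway.

```python
# Quick sanity check of the public API (the URL is a placeholder -- use your own Invoke URL).
import requests

api_url = "https://EXAMPLE.execute-api.us-east-1.amazonaws.com/prod"   # placeholder
review = "This movie was an absolute delight from start to finish!"

response = requests.post(api_url, data=review)
print(response.status_code, response.text)   # expect 200 and a sentiment of 0 or 1
```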
Now that your web app is working, try playing around with it and see how well it works. **Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review? **Answer:** Delete the endpoint. Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running, so if you forget and leave it on you could end up with an unexpectedly large bill.
predictor.delete_endpoint()
_____no_output_____
MIT
Project/SageMaker Project.ipynb
csuquanyanfei/ML_Sagemaker_Studies_Project1
Statistics Questions ```{admonition} Problem: JOIN Dataframes:class: dropdown, tipCan you tell me the ways in which 2 pandas data frames can be joined?``` ```{admonition} Solution::class: dropdownA very high level difference is that merge() is used to combine two (or more) dataframes on the basis of values of common columns (indices can also be used, use left_index=True and/or right_index=True), and concat() is used to append one (or more) dataframes one below the other (or sideways, depending on whether the axis option is set to 0 or 1).join() is used to merge 2 dataframes on the basis of the index; instead of using merge() with the option left_index=True we can use join().![Combine DataFrames](images/image1.PNG)``` ```{admonition} Problem: [GOOGLE] Normal Distribution:class: dropdown, tipWrite a function to generate N samples from a normal distribution and plot the histogram.```
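As a concrete companion to the JOIN answer above, here is a small runnable illustration using toy frames invented for the example (the solution cell that follows addresses the normal-distribution problem):

```python
# Toy illustration of merge() vs. join() vs. concat() -- data invented for the example.
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "x": [1, 2, 3]})
right = pd.DataFrame({"key": ["a", "b", "d"], "y": [10, 20, 40]})

# merge(): combine on the values of a common column
print(pd.merge(left, right, on="key", how="inner"))

# join(): combine on the index (like merge with left_index=True/right_index=True)
print(left.set_index("key").join(right.set_index("key"), how="inner"))

# concat(): stack one below the other (axis=0) or side by side (axis=1)
print(pd.concat([left, right], axis=0, sort=False))
```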
import numpy as np import matplotlib.pyplot as plt from scipy import stats def normal_sample_generator(N): # can be done using np.random.randn or stats.norm.rvs #x = np.random.randn(N) x = stats.norm.rvs(size=N) num_bins = 20 plt.hist(x, bins=num_bins, facecolor='blue', alpha=0.5) y = np.linspace(-4, 4, N) bin_width = (x.max() - x.min()) / num_bins plt.plot(y, stats.norm.pdf(y) * N * bin_width) plt.show() normal_sample_generator(10000)
_____no_output_____
MIT
_sources/contents/Python/Statistics.ipynb
mulaab/datasains
```{admonition} Problem: [UBER] Bernoulli trial generator:class: dropdown, tipGiven a random Bernoulli trial generator, write a function to return a value sampled from a normal distribution.``` ```{admonition} Solution::class: dropdownSolution pending, [Reference material link](Given a random Bernoulli trial generator, how do you return a value sampled from a normal distribution?)``` ```{admonition} Problem: [PINTEREST] Interquartile Distance:class: dropdown, tipGiven an array of unsorted random numbers (decimals) find the interquartile distance.```
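The posted solution above is still pending; purely as a hedged sketch of one common approach (not the referenced solution), a standard normal draw can be approximated from a Bernoulli generator via the central limit theorem, assuming the generator behaves like a fair coin. The cell that follows addresses the interquartile-distance problem.

```python
# Hedged sketch (not the pending solution): approximate a standard normal draw from a
# Bernoulli(0.5) generator via the central limit theorem. bernoulli_trial() stands in
# for the given black-box generator.
import math
import random

def bernoulli_trial(p=0.5):
    return 1 if random.random() < p else 0

def normal_from_bernoulli(n=10000, p=0.5):
    total = sum(bernoulli_trial(p) for _ in range(n))
    # Standardize: mean = n*p, std = sqrt(n*p*(1-p)); by the CLT this is approximately N(0, 1).
    return (total - n * p) / math.sqrt(n * p * (1 - p))

print(normal_from_bernoulli())
```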
# Interquartile distance is the difference between the third and first quartile # first let's generate a list of random numbers import random import numpy as np li = [round(random.uniform(33.33, 66.66), 2) for i in range(50)] print(li) qtl_1 = np.quantile(li,.25) qtl_3 = np.quantile(li,.75) print("Interquartile distance: ", qtl_3 - qtl_1)
[54.81, 65.68, 63.85, 58.29, 60.14, 53.23, 52.58, 51.62, 61.6, 57.85, 51.37, 38.7, 35.87, 33.95, 61.65, 33.59, 61.33, 44.97, 62.49, 39.67, 51.03, 45.79, 60.99, 60.49, 64.8, 46.16, 46.61, 34.06, 37.78, 56.72, 39.62, 61.38, 55.27, 40.53, 49.31, 58.95, 37.49, 34.39, 60.47, 56.12, 61.41, 34.56, 58.18, 56.35, 63.59, 50.59, 61.51, 42.02, 52.43, 56.71] Interquartile distance: 17.7275
MIT
_sources/contents/Python/Statistics.ipynb
mulaab/datasains
````{admonition} Problem: [GENENTECH] Imputing the median:class: dropdown, tipWrite a function cheese_median to impute the median price of the selected California cheeses in place of the missing values. You may assume at least one cheese is not missing its price.Input:```pythonimport pandas as pdcheeses = {"Name": ["Bohemian Goat", "Central Coast Bleu", "Cowgirl Mozzarella", "Cypress Grove Cheddar", "Oakdale Colby"], "Price" : [15.00, None, 30.00, None, 45.00]}df_cheeses = pd.DataFrame(cheeses)```| Name | Price ||:---------------------:|:-----:|| Bohemian Goat | 15.00 || Central Coast Bleu | 30.00 || Cowgirl Mozzarella | 30.00 || Cypress Grove Cheddar | 30.00 || Oakdale Colby | 45.00 |````
import pandas as pd cheeses = {"Name": ["Bohemian Goat", "Central Coast Bleu", "Cowgirl Mozzarella", "Cypress Grove Cheddar", "Oakdale Colby"], "Price" : [15.00, None, 30.00, None, 45.00]} df_cheeses = pd.DataFrame(cheeses) df_cheeses['Price'] = df_cheeses['Price'].fillna(df_cheeses['Price'].median()) df_cheeses.head()
_____no_output_____
MIT
_sources/contents/Python/Statistics.ipynb
mulaab/datasains
Real Estate Price Prediction
import pandas as pd df = pd.read_csv("data.csv") df.head() df['CHAS'].value_counts() df.info() df.describe() %matplotlib inline import matplotlib.pyplot as plt df.hist(bins=50, figsize=(20,15))
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
train_test_split
import numpy as np def split_train_test(data, test_ratio): np.random.seed(42) shuffled = np.random.permutation(len(data)) test_set_size = int(len(data) * test_ratio) test_indices = shuffled[:test_set_size] train_indices = shuffled[test_set_size:] return data.iloc[train_indices], data.iloc[test_indices] train_set, test_set = split_train_test(df, 0.2) print(f"The length of train dataset is: {len(train_set)}") print(f"The length of test dataset is: {len(test_set)}") def data_percent_allocation(train_set, test_set): total = len(df) train_percent = round((len(train_set)/total) * 100) test_percent = round((len(test_set)/total) * 100) return train_percent, test_percent data_percent_allocation(train_set, test_set)
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
train_test_split from sklearn
from sklearn.model_selection import train_test_split train_set, test_set = train_test_split(df, test_size = 0.2, random_state = 42) print(f"The length of train dataset is: {len(train_set)}") print(f"The length of test dataset is: {len(test_set)}") from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits = 1, test_size = 0.2, random_state = 42) for train_index, test_index in split.split(df, df['CHAS']): strat_train_set = df.loc[train_index] strat_test_set = df.loc[test_index] strat_test_set['CHAS'].value_counts() test_set['CHAS'].value_counts() strat_train_set['CHAS'].value_counts() train_set['CHAS'].value_counts()
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Stratified sampling: equal splitting of zeros and ones
# 95/7 and 376/28 compare the ratio of CHAS=0 to CHAS=1 in the stratified test set and the training set (counts from the value_counts calls above) 95/7 376/28 df = strat_train_set.copy()
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Correlations
from pandas.plotting import scatter_matrix attributes = ["MEDV", "RM", "ZN" , "LSTAT"] scatter_matrix(df[attributes], figsize = (12,8)) df.plot(kind="scatter", x="RM", y="MEDV", alpha=1)
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Trying out attribute combinations
df["TAXRM"] = df["TAX"]/df["RM"] df.head() corr_matrix = df.corr() corr_matrix['MEDV'].sort_values(ascending=False) # 1 means strong positive corr and -1 means strong negative corr. # EX: if RM will increase our final result(MEDV) in prediction will also increase. df.plot(kind="scatter", x="TAXRM", y="MEDV", alpha=1) df = strat_train_set.drop("MEDV", axis=1) df_labels = strat_train_set["MEDV"].copy()
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Pipeline
from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.impute import SimpleImputer my_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('std_scaler', StandardScaler()), ]) df_numpy = my_pipeline.fit_transform(df) df_numpy #Numpy array of df as models will take numpy array as input. df_numpy.shape
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Model Selection
from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor # model = LinearRegression() # model = DecisionTreeRegressor() model = RandomForestRegressor() model.fit(df_numpy, df_labels) some_data = df.iloc[:5] some_labels = df_labels.iloc[:5] prepared_data = my_pipeline.transform(some_data) model.predict(prepared_data) list(some_labels)
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Evaluating the model
from sklearn.metrics import mean_squared_error df_predictions = model.predict(df_numpy) mse = mean_squared_error(df_labels, df_predictions) rmse = np.sqrt(mse) rmse # accuracy_score is left commented out: it is a classification metric and does not apply to this regression task # from sklearn.metrics import accuracy_score # accuracy_score(some_data, some_labels, normalize=False)
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Cross Validation
from sklearn.model_selection import cross_val_score scores = cross_val_score(model, df_numpy, df_labels, scoring="neg_mean_squared_error", cv=10) rmse_scores = np.sqrt(-scores) rmse_scores def print_scores(scores): print("Scores:", scores) print("\nMean:", scores.mean()) print("\nStandard deviation:", scores.std()) print_scores(rmse_scores)
Scores: [2.79289168 2.69441597 4.40018895 2.56972379 3.33073436 2.62687167 4.77007351 3.27403209 3.38378214 3.16691711] Mean: 3.3009631251857217 Standard deviation: 0.7076841067486248
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Saving Model
from joblib import dump, load dump(model, 'final_model.joblib') dump(model, 'final_model.sav')
_____no_output_____
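As a quick check that the persisted artifact round-trips, we can load it back with joblib and compare its predictions on the prepared sample rows from the model-selection step above.

```python
# Load the persisted model back and sanity-check it on the prepared sample rows.
loaded_model = load('final_model.joblib')
print(loaded_model.predict(prepared_data))   # should match model.predict(prepared_data) above
```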
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
Testing model on test data
X_test = strat_test_set.drop("MEDV", axis=1) Y_test = strat_test_set["MEDV"].copy() X_test_prepared = my_pipeline.transform(X_test) final_predictions = model.predict(X_test_prepared) final_mse = mean_squared_error(Y_test, final_predictions) final_rmse = np.sqrt(final_mse) final_rmse
_____no_output_____
MIT
.ipynb_checkpoints/real_estate-checkpoint.ipynb
shhubhxm/HousePricePrediction-ML_model
In-Place Waveform Library Updates. This example notebook shows how one can update pulse data in-place without recompiling. © Raytheon BBN Technologies 2020. Set the `SAVE_WF_OFFSETS` flag so that QGL will output a map of the waveform data within the compiled binary waveform library.
from QGL import * import QGL import os.path import pickle QGL.drivers.APS2Pattern.SAVE_WF_OFFSETS = True
_____no_output_____
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Create the usual channel library with a couple of AWGs.
cl = ChannelLibrary(":memory:") q1 = cl.new_qubit("q1") aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101") aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102") dig_1 = cl.new_X6("X6_1", address=0) h1 = cl.new_source("Holz1", "HolzworthHS9000", "HS9004A-009-1", power=-30) h2 = cl.new_source("Holz2", "HolzworthHS9000", "HS9004A-009-2", power=-30) cl.set_control(q1, aps2_1, generator=h1) cl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2) cl.set_master(aps2_1, aps2_1.ch("m2")) cl["q1"].measure_chan.frequency = 0e6 cl.commit()
Creating engine...
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Compile a simple sequence.
mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 11)) plot_pulse_files(mf, time=True)
Compiled 11 sequences. <module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'> <module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'>
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Open the offsets file (in the same directory as the `.aps2` files, one per AWG slice.)
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets") with open(offset_f, "rb") as FID: offsets = pickle.load(FID) offsets
_____no_output_____
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Let's replace every single pulse with a fixed amplitude `Utheta`
pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets} wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2") QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
_____no_output_____
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
We see that the data in the file has been updated.
plot_pulse_files(mf, time=True)
<module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'> <module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'>
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Profiling: How long does this take?
%timeit mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 100))
Compiled 100 sequences. Compiled 100 sequences. Compiled 100 sequences. Compiled 100 sequences. Compiled 100 sequences. Compiled 100 sequences. Compiled 100 sequences. Compiled 100 sequences. 317 ms ± 6.15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Getting the offsets is fast, and only needs to be done once
def get_offsets(): offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets") with open(offset_f, "rb") as FID: offsets = pickle.load(FID) return offsets %timeit offsets = get_offsets() %timeit pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets} wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2") %timeit QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets) # %timeit QGL.drivers.APS2Pattern.update_wf_library("/Users/growland/workspace/AWG/Rabi/Rabi-BBNAPS1.aps2", pulses, offsets)
1.25 ms ± 19.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Apache-2.0
doc/ex4_update_in_place.ipynb
gribeill/QGL
Tutorial 09: Inflows. This tutorial walks you through the process of introducing inflows of vehicles into a network. Inflows allow us to simulate open networks where vehicles may enter (and potentially exit) the network. This exercise is organized as follows: in section 1 we prepare our inflow variables to support inflows into a merge network supplied by Flow, and in section 2 we simulate the merge network in the presence of inflows. 1. Adding Inflows. For this exercise, we will simulate inflows through a highway network with an on-merge. As we will see, the perturbations caused by vehicles entering through the on-merge lead to the formation of congested waves downstream in the main highway. We begin by importing the merge scenario class provided by Flow.
from flow.scenarios.merge import MergeScenario
_____no_output_____
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
A schematic of the above network is available in the figure below. As we can see, the edges at the start of the main highway and the on-merge are named "inflow_highway" and "inflow_merge" respectively. These names will be important to us when we begin specifying our inflows into the network. We will also define the types of vehicles that are placed in the network. These types of vehicles will also be of significance to us once the inflows are being defined. For this exercise, we add only one type of vehicle to the network, with the vehicle identifier "human":
from flow.core.vehicles import Vehicles from flow.controllers import IDMController # create an empty vehicles object vehicles = Vehicles() # add some vehicles to this object of type "human" vehicles.add("human", acceleration_controller=(IDMController, {}), speed_mode="no_collide", # we use the speed mode "no_collide" for better dynamics at the merge num_vehicles=20)
_____no_output_____
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
Next, we are ready to import and create an empty inflows object.
from flow.core.params import InFlows inflow = InFlows()
_____no_output_____
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
The `InFlows` object is provided as an input during the scenario creation process via the `NetParams` parameter. Introducing these inflows into the network is handled by the backend scenario generation processes during instantiation of the scenario object. In order to add new inflows of vehicles of pre-defined types onto specific edges and lanes in the network, we use the `InFlows` object's `add` method. This function accepts the following parameters:* **veh_type**: type of vehicles entering the edge, must match one of the types set in the Vehicles class* **edge**: starting edge for vehicles in this inflow, must match an edge name in the network* **veh_per_hour**: number of vehicles entering from the edge per hour, may not be achievable due to congestion and safe driving behavior* other parameters, including: **start**, **end**, and **probability**. See documentation for more information. In addition to the above parameters, several optional inputs to the `add` method may be found within sumo's documentation at: http://sumo.dlr.de/wiki/Definition_of_Vehicles,_Vehicle_Types,_and_Routes. Some important features include:* **departLane**: specifies which lane vehicles will enter from on the edge, may be specified as "all" or "random"* **departSpeed**: speed of the vehicles once they enter the network. We begin by adding inflows of vehicles at a rate of 2000 veh/hr through *all* lanes on the main highway as follows:
inflow.add(veh_type="human", edge="inflow_highway", vehs_per_hour=2000, departSpeed=10, departLane="random")
_____no_output_____
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
Next, we specify a second inflow of vehicles through the on-merge lane at a rate of only 100 veh/hr.
inflow.add(veh_type="human", edge="inflow_merge", vehs_per_hour=100, departSpeed=10, departLane="random")
_____no_output_____
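The parameter list above also mentions a `probability` option as an alternative to a fixed hourly rate. Purely as an illustration (this object is not used in the rest of the tutorial, and the keyword follows the parameter list above), such an inflow could be specified as:

```python
# Illustration only -- not used below. Each simulation step, a vehicle enters
# "inflow_merge" with probability 0.05, instead of specifying a fixed vehs_per_hour.
example_inflow = InFlows()
example_inflow.add(veh_type="human",
                   edge="inflow_merge",
                   probability=0.05,
                   departSpeed=10,
                   departLane="random")
```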
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
2. Running Simulations with Inflows. We are now ready to test our inflows in simulation. As mentioned in section 1, the inflows are specified in the `NetParams` object, in addition to all other network-specific parameters. For the merge network, this is done as follows:
from flow.scenarios.merge import ADDITIONAL_NET_PARAMS from flow.core.params import NetParams additional_net_params = ADDITIONAL_NET_PARAMS.copy() # we choose to make the main highway slightly longer additional_net_params["pre_merge_length"] = 500 net_params = NetParams(inflows=inflow, # our inflows no_internal_links=False, additional_params=additional_net_params)
_____no_output_____
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
Finally, we execute the simulation following simulation creation techniques we learned from exercise 1 using the below code block. Running this simulation, we see an excessive number of vehicles entering from the main highway, but only a sparse number of vehicles entering from the on-merge. Nevertheless, this volume of merging vehicles is sufficient to form congestive patterns within the main highway.
from flow.core.params import SumoParams, EnvParams, InitialConfig from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS from flow.core.experiment import SumoExperiment sumo_params = SumoParams(render=True, sim_step=0.2) env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS) initial_config = InitialConfig() scenario = MergeScenario(name="merge-example", vehicles=vehicles, net_params=net_params, initial_config=initial_config) env = AccelEnv(env_params, sumo_params, scenario) exp = SumoExperiment(env, scenario) _ = exp.run(1, 1500)
********************************************************** ********************************************************** ********************************************************** WARNING: Inflows will cause computational performance to significantly decrease after large number of rollouts. In order to avoid this, set SumoParams(restart_instance=True). ********************************************************** ********************************************************** ********************************************************** Round 0, return: 461.69990202420087 Average, std return: 461.69990202420087, 0.0 Average, std speed: 4.068107238565223, 0.0
MIT
tutorials/tutorial09_inflows.ipynb
nskh/flow
IPython magics. This notebook is used for testing nbqa with ipython magics.
from random import randint from IPython import get_ipython
_____no_output_____
MIT
tests/data/notebook_with_indented_magics.ipynb
girip11/nbQA
Cell magics
%%bash for n in {1..10} do echo -n "$n " done %%time import operator def compute(operand1,operand2, bin_op): """Perform input binary operation over the given operands.""" return bin_op(operand1, operand2) compute(5,1, operator.add)
CPU times: user 31 µs, sys: 4 µs, total: 35 µs Wall time: 37.9 µs
MIT
tests/data/notebook_with_indented_magics.ipynb
girip11/nbQA
Help Magics
str.split?? # would this comment also be considered as magic? str.split? ?str.splitlines
Signature: str.splitlines(self, /, keepends=False) Docstring: Return a list of the lines in the string, breaking at line boundaries. Line breaks are not included in the resulting list unless keepends is given and true. Type: method_descriptor
MIT
tests/data/notebook_with_indented_magics.ipynb
girip11/nbQA
Shell magics
!grep -r '%%HTML' . | wc -l flake8_version = !pip list 2>&1 | grep flake8 if flake8_version: print(flake8_version)
['flake8 3.8.4']
MIT
tests/data/notebook_with_indented_magics.ipynb
girip11/nbQA
Line magics
%time randint(5,10) if __debug__: %time compute(5,1, operator.mul) %time get_ipython().run_line_magic("lsmagic", "") import pprint import sys %time pretty_print_object = pprint.PrettyPrinter(\ indent=4, width=80, stream=sys.stdout, compact=True, depth=5\ )
CPU times: user 29 µs, sys: 0 ns, total: 29 µs Wall time: 33.4 µs
MIT
tests/data/notebook_with_indented_magics.ipynb
girip11/nbQA
$BA_i \sim Beta(81, 219)$, $y_i \sim Bin(AB_i, BA_i)$, for $i = 1, 2, \ldots, 8$
#https://mc-stan.org/users/documentation/case-studies/rstan_workflow.html #https://people.duke.edu/~ccc14/sta-663/PyStan.html #http://varianceexplained.org/statistics/beta_distribution_and_baseball/ model_code = ''' data { int<lower=0> N; int<lower=0> at_bats[N]; int<lower=0> hits[N]; real<lower=0> A; real<lower=0> B; } parameters { real<lower=0,upper=1> AVG[N]; } model { AVG ~ beta(A, B); hits ~ binomial(at_bats, AVG); } generated quantities { vector[N] log_lik; vector[N] predicted_hits; for (i in 1:N) { log_lik[i] = binomial_lpmf(hits[i] | at_bats[i], AVG[i]); predicted_hits[i] = binomial_rng(at_bats[i], AVG[i]); } } ''' model_data = dict(N=8, hits=list(hit_totals.values()),at_bats=list(at_bat_totals.values()),A=81,B=219) stan_model = pystan.StanModel(model_code=model_code) fit = stan_model.sampling(data=model_data) print(fit) prior_model_code = ''' data { int<lower=0> N; real<lower=0> A; real<lower=0> B; } parameters { real<lower=0,upper=1> AVG[N]; } model { AVG ~ beta(A, B); } ''' prior_model_data = dict(N=8,A=81,B=219) stan_model_prior = pystan.StanModel(model_code=prior_model_code) prior_fit = stan_model_prior.sampling(data=prior_model_data) stan_data = az.from_pystan(posterior=fit, prior=prior_fit, observed_data="hits", posterior_predictive="predicted_hits", log_likelihood="log_lik", posterior_model=stan_model, coords={"player":list(hit_totals.keys())}, dims={'AVG': ['player'], 'hits': ['player'], 'log_lik': ['player'], 'predicted_hits': ['player']}) density_plots = az.plot_density([stan_data.posterior,stan_data.prior],data_labels=["Posterior","Prior"]) az.plot_ppc(stan_data, data_pairs = {"hits" : "predicted_hits"},flatten=["player"])
_____no_output_____
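Besides the density and posterior-predictive plots, a tabular summary of the posterior is often handy. A brief optional follow-up, assuming `stan_data` was built by the cell above and ArviZ is imported as `az` (as that cell assumes):

```python
# Optional follow-up: tabular posterior summary of each player's estimated batting average.
summary = az.summary(stan_data, var_names=["AVG"])
print(summary)
```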
MIT
beta_binomial_baseball.ipynb
thomasmartins/pystan_misc
Introduction to Pandas
import pandas pandas.__version__ import pandas as pd
_____no_output_____
MIT
code_listings/03.00-Introduction-to-Pandas.ipynb
cesar-rocha/PythonDataScienceHandbook
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ourownstory/neural_prophet/blob/master/example_notebooks/sub_daily_data_yosemite_temps.ipynb) Sub-daily data. NeuralProphet can make forecasts for time series with sub-daily observations by passing in a dataframe with timestamps in the ds column. The format of the timestamps should be `YYYY-MM-DD HH:MM:SS` - see the example csv [here](https://github.com/ourownstory/neural_prophet/blob/master/example_data/yosemite_temps.csv). When sub-daily data are used, daily seasonality will automatically be fit. Here we fit NeuralProphet to data with 5-minute resolution (daily temperatures at Yosemite).
if 'google.colab' in str(get_ipython()): !pip install git+https://github.com/ourownstory/neural_prophet.git # may take a while #!pip install neuralprophet # much faster, but may not have the latest upgrades/bugfixes data_location = "https://raw.githubusercontent.com/ourownstory/neural_prophet/master/" else: data_location = "../" import pandas as pd from neuralprophet import NeuralProphet, set_log_level # set_log_level("ERROR") df = pd.read_csv(data_location + "example_data/yosemite_temps.csv")
_____no_output_____
MIT
example_notebooks/sub_daily_data_yosemite_temps.ipynb
aws-kh/neural_prophet
Now we will attempt to forecast the next 7 days. The `5min` data resolution means that we have `60/5*24=288` daily values. Thus, we want to forecast `7*288` periods ahead. Using some common sense, we set: * First, we disable weekly seasonality, as nature does not follow the human week's calendar. * Second, we disable changepoints, as the dataset only contains two months of data.
m = NeuralProphet( n_changepoints=0, weekly_seasonality=False, ) metrics = m.fit(df, freq='5min') future = m.make_future_dataframe(df, periods=7*288, n_historic_predictions=len(df)) forecast = m.predict(future) fig = m.plot(forecast) # fig_comp = m.plot_components(forecast) fig_param = m.plot_parameters()
INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed. INFO - (NP.utils.set_auto_seasonalities) - Disabling yearly seasonality. Run NeuralProphet with yearly_seasonality=True to override this. INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 128 INFO - (NP.config.set_auto_batch_epoch) - Auto-set epochs to 6
MIT
example_notebooks/sub_daily_data_yosemite_temps.ipynb
aws-kh/neural_prophet
The daily seasonality seems to make sense, when we account for the time being recorded in GMT, while Yosemite local time is GMT-8. Improving trend and seasonality. As we have `288` daily values recorded, we can increase the flexibility of `daily_seasonality` without danger of overfitting. Further, we may want to re-visit our decision to disable changepoints, as the data clearly shows changes in trend, as is typical with the weather. We make the following changes: * increase the `changepoints_range`, as we are doing a short-term prediction * increase the `n_changepoints` to allow the trend to fit the sudden changes * carefully regularize the trend changepoints by setting `trend_reg` in order to avoid overfitting.
m = NeuralProphet( changepoints_range=0.95, n_changepoints=50, trend_reg=1.5, weekly_seasonality=False, daily_seasonality=10, ) metrics = m.fit(df, freq='5min') future = m.make_future_dataframe(df, periods=60//5*24*7, n_historic_predictions=len(df)) forecast = m.predict(future) fig = m.plot(forecast) # fig_comp = m.plot_components(forecast) fig_param = m.plot_parameters()
INFO - (NP.config.__post_init__) - Note: Trend changepoint regularization is experimental. INFO - (NP.forecaster._handle_missing_data) - 12 NaN values in column y were auto-imputed. INFO - (NP.utils.set_auto_seasonalities) - Disabling yearly seasonality. Run NeuralProphet with yearly_seasonality=True to override this. INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 128 INFO - (NP.config.set_auto_batch_epoch) - Auto-set epochs to 6
MIT
example_notebooks/sub_daily_data_yosemite_temps.ipynb
aws-kh/neural_prophet
Deep Deterministic Policy Gradients (DDPG)---In this notebook, we train DDPG with OpenAI Gym's Pendulum-v0 environment. 1. Import the Necessary Packages
import gym import random import torch import numpy as np from collections import deque import matplotlib.pyplot as plt %matplotlib inline from ddpg_agent import Agent
_____no_output_____
MIT
ddpg-pendulum/DDPG.ipynb
elexira/deep-reinforcement-learning
2. Instantiate the Environment and Agent
env = gym.make('Pendulum-v0') env.seed(2) agent = Agent(state_size=3, action_size=1, random_seed=2)
_____no_output_____
MIT
ddpg-pendulum/DDPG.ipynb
elexira/deep-reinforcement-learning
Observation (Type: Box(3)):

| Num | Observation | Min | Max |
| --- | --- | --- | --- |
| 0 | cos(theta) | -1.0 | 1.0 |
| 1 | sin(theta) | -1.0 | 1.0 |
| 2 | theta dot | -8.0 | 8.0 |

Actions (Type: Box(1)):

| Num | Action | Min | Max |
| --- | --- | --- | --- |
| 0 | Joint effort | -2.0 | 2.0 |

3. Train the Agent with DDPG
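Before training, a quick programmatic check that the environment matches the tables above (using the `env` created earlier):

```python
# Verify the observation and action spaces tabulated above.
print(env.observation_space)                           # Box(3,): cos(theta), sin(theta), theta dot
print(env.action_space)                                # Box(1,): joint effort
print(env.action_space.low, env.action_space.high)     # [-2.] [2.]
```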
def ddpg(n_episodes=100, max_t=300, print_every=100): scores_deque = deque(maxlen=print_every) scores = [] for i_episode in range(1, n_episodes+1): state = env.reset() agent.reset() score = 0 for t in range(max_t): action = agent.act(state) next_state, reward, done, _ = env.step(action) agent.step(state, action, reward, next_state, done) state = next_state score += reward if done: break scores_deque.append(score) scores.append(score) print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="") torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth') torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth') if i_episode % print_every == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque))) return scores scores = ddpg() fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(1, len(scores)+1), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show()
Episode 100 Average Score: -595.74
MIT
ddpg-pendulum/DDPG.ipynb
elexira/deep-reinforcement-learning
4. Watch a Smart Agent!
agent.actor_local.load_state_dict(torch.load('checkpoint_actor.pth')) agent.critic_local.load_state_dict(torch.load('checkpoint_critic.pth')) state = env.reset() for t in range(500): action = agent.act(state, add_noise=False) env.render() state, reward, done, _ = env.step(action) if done: break env.close()
_____no_output_____
MIT
ddpg-pendulum/DDPG.ipynb
elexira/deep-reinforcement-learning
6. Explore. In this exercise, we have provided a sample DDPG agent and demonstrated how to use it to solve an OpenAI Gym environment. To continue your learning, you are encouraged to complete any (or all!) of the following tasks:- Amend the various hyperparameters and network architecture to see if you can get your agent to solve the environment faster than this benchmark implementation. Once you build intuition for the hyperparameters that work well with this environment, try solving a different OpenAI Gym task!- Write your own DDPG implementation. Use this code as reference only when needed -- try as much as you can to write your own algorithm from scratch.- You may also like to implement prioritized experience replay, to see if it speeds learning. - The current implementation adds Ornstein-Uhlenbeck noise to the action space. However, it has [been shown](https://blog.openai.com/better-exploration-with-parameter-noise/) that adding noise to the parameters of the neural network policy can improve performance. Make this change to the code, to verify it for yourself!- Write a blog post explaining the intuition behind the DDPG algorithm and demonstrating how to use it to solve an RL environment of your choosing.
int(1e5)
_____no_output_____
MIT
ddpg-pendulum/DDPG.ipynb
elexira/deep-reinforcement-learning
Classes and Objects in PythonEstimated time needed: **40** minutes ObjectivesAfter completing this lab you will be able to:- Work with classes and objects- Identify and define attributes and methods Table of Contents Introduction to Classes and Objects Creating a class Instances of a Class: Objects and Attributes Methods Creating a class Creating an instance of a class Circle The Rectangle Class Introduction to Classes and Objects Creating a Class The first part of creating a class is giving it a name: In this notebook, we will create two classes, Circle and Rectangle. We need to determine all the data that make up that class, and we call that an attribute. Think about this step as creating a blue print that we will use to create objects. In figure 1 we see two classes, circle and rectangle. Each has their attributes, they are variables. The class circle has the attribute radius and color, while the rectangle has the attribute height and width. Let’s use the visual examples of these shapes before we get to the code, as this will help you get accustomed to the vocabulary. Figure 1: Classes circle and rectangle, and each has their own attributes. The class circle has the attribute radius and colour, the rectangle has the attribute height and width. Instances of a Class: Objects and Attributes An instance of an object is the realisation of a class, and in Figure 2 we see three instances of the class circle. We give each object a name: red circle, yellow circle and green circle. Each object has different attributes, so let's focus on the attribute of colour for each object. Figure 2: Three instances of the class circle or three objects of type circle. The colour attribute for the red circle is the colour red, for the green circle object the colour attribute is green, and for the yellow circle the colour attribute is yellow. Methods Methods give you a way to change or interact with the object; they are functions that interact with objects. For example, let’s say we would like to increase the radius by a specified amount of a circle. We can create a method called **add_radius(r)** that increases the radius by **r**. This is shown in figure 3, where after applying the method to the "orange circle object", the radius of the object increases accordingly. The “dot” notation means to apply the method to the object, which is essentially applying a function to the information in the object. Figure 3: Applying the method “add_radius” to the object orange circle object. Creating a Class Now we are going to create a class circle, but first, we are going to import a library to draw the objects:
# Import the library import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
The first step in creating your own class is to use the class keyword, then the name of the class as shown in Figure 4. In this course the class parent will always be object: Figure 4: Creating a class Circle. The next step is a special method called a constructor __init__, which is used to initialize the object. The input are data attributes. The term self contains all the attributes in the set. For example the self.color gives the value of the attribute color and self.radius will give you the radius of the object. We also have the method add_radius() with the parameter r, the method adds the value of r to the attribute radius. To access the radius we use the syntax self.radius. The labeled syntax is summarized in Figure 5: Figure 5: Labeled syntax of the object circle. The actual object is shown below. We include the method drawCircle to display the image of a circle. We set the default radius to 3 and the default colour to blue:
# Create a class Circle class Circle(object): # Constructor def __init__(self, radius=3, color='blue'): self.radius = radius self.color = color # Method def add_radius(self, r): self.radius = self.radius + r return(self.radius) # Method def drawCircle(self): plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color)) plt.axis('scaled') plt.show()
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
Creating an instance of a class Circle Let’s create the object RedCircle of type Circle to do the following:
# Create an object RedCircle RedCircle = Circle(10, 'red')
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can use the dir command to get a list of the object's methods. Many of them are default Python methods.
# Find out the methods can be used on the object RedCircle dir(RedCircle)
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can look at the data attributes of the object:
# Print the object attribute radius RedCircle.radius # Print the object attribute color RedCircle.color
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can change the object's data attributes:
# Set the object attribute radius RedCircle.radius = 1 RedCircle.radius
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can draw the object by using the method drawCircle():
# Call the method drawCircle RedCircle.drawCircle()
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can increase the radius of the circle by applying the method add_radius(). Let's increase the radius by 2, then by 5, and then by 6:
# Use method to change the object attribute radius print('Radius of object:',RedCircle.radius) RedCircle.add_radius(2) print('Radius of object of after applying the method add_radius(2):',RedCircle.radius) RedCircle.add_radius(5) print('Radius of object of after applying the method add_radius(5):',RedCircle.radius) RedCircle.add_radius(6) print('Radius of object of after applying the method add radius(6):',RedCircle.radius)
Radius of object: 1 Radius of object of after applying the method add_radius(2): 3 Radius of object of after applying the method add_radius(5): 8 Radius of object of after applying the method add radius(6): 14
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is:
# Create a blue circle with a given radius BlueCircle = Circle(radius=100)
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
As before we can access the attributes of the instance of the class by using the dot notation:
# Print the object attribute radius BlueCircle.radius # Print the object attribute color BlueCircle.color
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can draw the object by using the method drawCircle():
# Call the method drawCircle BlueCircle.drawCircle()
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
Compare the x and y axes of this figure to those of the figure for RedCircle; they are different. The Rectangle Class Let's create a class Rectangle with the attributes height, width and color. We will only add the method to draw the rectangle object:
# Create a new Rectangle class for creating a rectangle object class Rectangle(object): # Constructor def __init__(self, width=2, height=3, color='r'): self.height = height self.width = width self.color = color # Method def drawRectangle(self): plt.gca().add_patch(plt.Rectangle((0, 0), self.width, self.height ,fc=self.color)) plt.axis('scaled') plt.show()
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
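The same pattern extends naturally to other methods. As an illustrative sketch (not part of the original lab, and using a hypothetical class name), here is a subclass that reuses the Rectangle constructor above and adds a method that computes the area:

```python
# Illustrative extension (hypothetical): add an area() method
# on top of the Rectangle class defined above.
class RectangleWithArea(Rectangle):
    def area(self):
        return self.width * self.height

r = RectangleWithArea(4, 2, 'green')  # width=4, height=2
print(r.area())                       # 8
```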
Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2, its height will be 10, and its color will be blue:
# Create a new object rectangle SkinnyBlueRectangle = Rectangle(2, 10, 'blue')
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
As before we can access the attributes of the instance of the class by using the dot notation:
# Print the object attribute height print(SkinnyBlueRectangle.height) # Print the object attribute width print(SkinnyBlueRectangle.width) # Print the object attribute color print(SkinnyBlueRectangle.color)
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can draw the object:
# Use the drawRectangle method to draw the shape SkinnyBlueRectangle.drawRectangle()
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
Let’s create the object FatYellowRectangle of type Rectangle. Its width will be 20, its height will be 5, and its color will be yellow:
# Create a new object rectangle FatYellowRectangle = Rectangle(20, 5, 'yellow')
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can access the attributes of the instance of the class by using the dot notation:
# Print the object attribute height print(FatYellowRectangle.height) # Print the object attribute width print(FatYellowRectangle.width) # Print the object attribute color print(FatYellowRectangle.color)
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
We can draw the object:
# Use the drawRectangle method to draw the shape FatYellowRectangle.drawRectangle()
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
Exercises Text Analysis You have been recruited by your friend, a linguistics enthusiast, to create a utility tool that can perform analysis on a given piece of text. Complete the class 'analysedText' with the following methods: Constructor - takes the argument 'text', makes it lower case and removes all punctuation. Assume only the following punctuation is used: period (.), exclamation mark (!), comma (,) and question mark (?). Store the processed text in "fmtText". freqAll - returns a dictionary of all unique words in the text along with the number of their occurrences. freqOf - returns the frequency of the word passed in as an argument. The skeleton code has been given to you. Docstrings can be ignored for the purpose of the exercise. Hint: some useful functions are replace(), lower(), split(), count()
class analysedText(object): def __init__ (self, text): reArrText = text.lower() reArrText = reArrText.replace('.','').replace('!','').replace(',','').replace('?','') self.fmtText = reArrText def freqAll(self): wordList = self.fmtText.split(' ') freqMap = {} for word in set(wordList): # use set to remove duplicates in list freqMap[word] = wordList.count(word) return freqMap def freqOf(self,word): freqDict = self.freqAll() if word in freqDict: return freqDict[word] else: return 0
_____no_output_____
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
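Before running the grader below, it can help to try the class on a short string of your own; this is just an informal usage sketch of the analysedText class defined above.

```python
# Informal sanity check of analysedText (defined above).
t = analysedText("Hello, world! Hello?")
print(t.fmtText)           # 'hello world hello'
print(t.freqAll())         # {'hello': 2, 'world': 1} (key order may vary)
print(t.freqOf('hello'))   # 2
print(t.freqOf('python'))  # 0
```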
Execute the block below to check your progress.
import sys sampleMap = {'eirmod': 1,'sed': 1, 'amet': 2, 'diam': 5, 'consetetur': 1, 'labore': 1, 'tempor': 1, 'dolor': 1, 'magna': 2, 'et': 3, 'nonumy': 1, 'ipsum': 1, 'lorem': 2} def testMsg(passed): if passed: return 'Test Passed' else : return 'Test Failed' print("Constructor: ") try: samplePassage = analysedText("Lorem ipsum dolor! diam amet, consetetur Lorem magna. sed diam nonumy eirmod tempor. diam et labore? et diam magna. et diam amet.") print(testMsg(samplePassage.fmtText == "lorem ipsum dolor diam amet consetetur lorem magna sed diam nonumy eirmod tempor diam et labore et diam magna et diam amet")) except: print("Error detected. Recheck your function " ) print("freqAll: ",) try: wordMap = samplePassage.freqAll() print(testMsg(wordMap==sampleMap)) except: print("Error detected. Recheck your function " ) print("freqOf: ") try: passed = True for word in sampleMap: if samplePassage.freqOf(word) != sampleMap[word]: passed = False break print(testMsg(passed)) except: print("Error detected. Recheck your function " )
Constructor: Test Passed freqAll: Test Passed freqOf: Test Passed
FSFAP
4_Python for Data Science, AI & Development/PY0101EN-3-5-Classes.ipynb
lebinh97/IBM-DataScience-Capstone
Analyzing the Effects of Non-Academic Features on Student Performance
# For reading data sets import pandas # For lots of awesome things import numpy as np # Need this for LabelEncoder from sklearn import preprocessing # For building our net import keras # For plotting import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Read in data Data is separated by semicolons (delimiter=";"), with the column names in the first row of the file (header=0).
# Read in student data student_data = np.array(pandas.read_table("./student-por.csv", delimiter=";", header=0)) # Display student data student_data
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
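If you would rather inspect the data as a DataFrame before converting it to a NumPy array, an equivalent read looks like the sketch below (it assumes the same student-por.csv file in the working directory and the pandas/numpy imports from the cell above).

```python
# Equivalent read that keeps the DataFrame around for a first look.
df = pandas.read_csv("./student-por.csv", sep=";", header=0)
print(df.shape)              # (649, 33): 649 students, 32 features plus the final grade
print(list(df.columns[:5]))  # first few column names
student_data = np.array(df)  # same array as above
```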
Determine what the column labels are...
# Descriptions for each feature (found in the header) feature_descrips = np.array(pandas.read_csv("./student-por.csv", delimiter=";", header=None, nrows=1)) # Display descriptions print(feature_descrips)
[['school' 'sex' 'age' 'address' 'famsize' 'Pstatus' 'Medu' 'Fedu' 'Mjob' 'Fjob' 'reason' 'guardian' 'traveltime' 'studytime' 'failures' 'schoolsup' 'famsup' 'paid' 'activities' 'nursery' 'higher' 'internet' 'romantic' 'famrel' 'freetime' 'goout' 'Dalc' 'Walc' 'health' 'absences' 'G1' 'G2' 'G3']]
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
...and give them clearer descriptions.
# More detailed descriptions feature_descrips = np.array(["School", "Sex", "Age", "Urban or Rural Address", "Family Size", "Parent's Cohabitation Status", "Mother's Education", "Father's Education", "Mother's Job", "Father's Job", "Reason for Choosing School", "Student's Guardian", "Home to School Travel Time", "Weekly Study Time", "Number of Past Class Failures", "Extra Educational Support", "Family Educational Support", "Extra Paid Classes", "Extra Curricular Activities", "Attended Nursery School", "Wants to Take Higher Education", "Internet Access at Home", "In a Romantic Relationship", "Quality of Family Relationships", "Free Time After School", "Time Spent Going out With Friends", "Workday Alcohol Consumption", "Weekend Alcohol Consumption", "Current Health Status", "Number of Student Absences", "First Period Grade", "Second Period Grade", "Final Grade"])
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Data Cleanup Shuffle data We sampled 2 schools, and right now our data has each school grouped together. We need to remove this grouping so that the training and testing splits we make later are not dominated by one school.
# Shuffle the data! np.random.shuffle(student_data) student_data
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Classify scores alphabetically Because our data is sampled from Portugal, where final grades run from 0 to 20, we bin the scores to represent something more like the American letter-grade system: 0 = F, 1 = D, 2 = C, 3 = B, 4 = A.
# Array holding final scores for every student scores = student_data[:,32] # Iterate through list of scores, changing them from a 0-20 value ## to a 0-4 value (representing F-A) for i in range(len(scores)): if(scores[i] > 18): scores[i] = 4 elif(scores[i] > 16): scores[i] = 3 elif(scores[i] > 14): scores[i] = 2 elif(scores[i] > 12): scores[i] = 1 else: scores[i] = 0 # Update the final scores in student_data to reflect these changes for i in range(len(scores)): student_data[i,32] = scores[i] # Display new data. Hint: Look at the last column student_data
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
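The same binning can be written without an explicit loop using np.digitize; the sketch below is an equivalent, vectorized alternative (it would replace the loop above, not run after it, since the grades here have already been converted).

```python
# Vectorized alternative to the binning loop above (illustrative only).
# Bins: 0-12 -> F(0), 13-14 -> D(1), 15-16 -> C(2), 17-18 -> B(3), 19-20 -> A(4).
raw_scores = student_data[:, 32].astype(int)            # original 0-20 grades
letter_grades = np.digitize(raw_scores, bins=[13, 15, 17, 19])
# student_data[:, 32] = letter_grades                   # uncomment if used instead of the loop
```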
Encoding non-numeric data to integers
# One student sample student_data[0,:]
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
We have some qualitative data from the questionnaire that needs to be converted to represent numbers.
# Label Encoder le = preprocessing.LabelEncoder() # Columns that hold non-numeric data indices = np.array([0,1,3,4,5,8,9,10,11,15,16,17,18,19,20,21,22]) # Transform the non-numeric data in these columns to integers for i in range(len(indices)): column = indices[i] le.fit(student_data[:,column]) student_data[:,column] = le.transform(student_data[:,column]) student_data[0,:]
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
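To see exactly what LabelEncoder does to one column, here is a tiny standalone sketch (it only assumes the preprocessing import from earlier):

```python
# Standalone illustration of LabelEncoder on a small categorical column.
le_demo = preprocessing.LabelEncoder()
sample = ['GP', 'MS', 'GP', 'GP', 'MS']   # e.g. values from the 'school' column
le_demo.fit(sample)
print(le_demo.classes_)                   # ['GP' 'MS'] - the categories, sorted
print(le_demo.transform(sample))          # [0 1 0 0 1]
```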
Encoding 0's to -1 for binomial data. An input of 0 contributes nothing to a neuron's weighted sum, so the weights attached to it receive no update even though 0 represents a meaningful answer ("no"). Therefore, we encode 0's as -1's so the weights can still change with that input.
# Columns that hold binomial data indices = np.array([0,1,3,4,5,15,16,17,18,19,20,21,22]) # Change 0's to -1's for i in range(len(indices)): column = indices[i] # values of current feature feature = student_data[:,column] # change values to -1 if equal to 0 feature = np.where(feature==0, -1, feature) student_data[:,column] = feature student_data[0,:]
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Standardizing the nominal and numerical data. We want every input feature to contribute on a comparable scale (everyone is important!). We do this by standardizing our data so that each column has a mean of 0 and a standard deviation of 1.
scaler = preprocessing.StandardScaler() temp = student_data[:,[2,6,7,8,9,10,11,12,13,14,23,24,25,26,27,28,29,30,31]] print(student_data[0,:]) Standardized = scaler.fit_transform(temp) print('Mean:', round(Standardized.mean())) print('Standard deviation:', Standardized.std()) student_data[:,[2,6,7,8,9,10,11,12,13,14,23,24,25,26,27,28,29,30,31]] = Standardized student_data[0,:]
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
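Standardization is simply subtracting the column mean and dividing by the column standard deviation; the toy check below (independent of the student data) confirms that StandardScaler matches the formula.

```python
# Toy check: StandardScaler computes (x - mean) / std, using the population std.
toy = np.array([[1.0], [2.0], [3.0], [4.0]])
scaled = preprocessing.StandardScaler().fit_transform(toy)
manual = (toy - toy.mean()) / toy.std()
print(np.allclose(scaled, manual))   # True
```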
Convert results to one-hot encoding
# Final grades results = student_data[:,32] # Take a look at first 5 final grades print("First 5 final grades:", results[0:5]) # All unique values for final grades (0-4 representing F-A) possible_results = np.unique(student_data[:,32]).T print("All possible results:", possible_results) # One-hot encode final grades (results) which will be used as our output # The length of the "ID" should be as long as the total number of possible results so each results ## gets its own, personal one-hot encoding y = keras.utils.to_categorical(results,len(possible_results)) # Take a look at the first 5 final grades now (no longer numbers but arrays) y[0:5] # our input, all features except final grades x = student_data[:,0:32]
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
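A quick illustration of what to_categorical produces for a few integer labels:

```python
# Each integer label becomes a one-hot row vector of length 5.
print(keras.utils.to_categorical([0, 2, 4], 5))
# [[1. 0. 0. 0. 0.]
#  [0. 0. 1. 0. 0.]
#  [0. 0. 0. 0. 1.]]
```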
Model Building Now let's create a function that will build a model for us. This will come in handy later on. Our model will have two hidden layers: the first with 800 units and the second with 400 units. The optimizer we are using is adamax, which is good at handling noise in a dataset. The loss function we are using is categorical cross-entropy, which is useful when trying to classify or label something. In this case, we are trying to classify students by letter grade, so this loss function will be of great use to us.
# Function to create network given model def create_network(model, input_size=None): # Default to the number of columns in the full input matrix x if input_size is None: input_size = x.shape[1] output_size = y.shape[1] # Create the hidden layer model.add(keras.layers.Dense(800, input_dim = input_size, activation = 'relu')) # Additional hidden layer model.add(keras.layers.Dense(400,activation='relu')) # Output layer model.add(keras.layers.Dense(output_size,activation='softmax')) # Compile with the adamax optimizer and categorical cross-entropy loss model.compile(loss='categorical_crossentropy', optimizer='adamax', metrics=['accuracy']) # Feed-forward model model = keras.Sequential() create_network(model)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
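After building the network you can confirm the layer sizes with model.summary(). With 32 input features and 5 output classes, the parameter counts should be 32·800 + 800 = 26,400 for the first hidden layer, 800·400 + 400 = 320,400 for the second, and 400·5 + 5 = 2,005 for the output layer.

```python
# Print the architecture and parameter counts of the model just built.
model.summary()
```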
Initial Test of the Network
# Split data into training and testing data x_train = x[0:518,:] x_test = x[519:649,:] y_train = y[0:518,:] y_test = y[519:649,:] # Train on training data! # We're saving this information in the variable -history- so we can take a look at it later history = model.fit(x_train, y_train, batch_size = 32, epochs = 7, verbose = 0, validation_split = 0.2) # Validate using data the network hasn't seen before (testing data) # Save this info in -score- so we can take a look at it score = model.evaluate(x_test,y_test, verbose=0) # Check it's effectiveness print('Test loss:', score[0]) print('Test accuracy:', score[1]) # Plot the data def plot(history): plt.figure(1) # Summarize history for accuracy plt.subplot(211) plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train','test'], loc ='upper left') # Summarize history for loss plt.subplot(212) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train','test'], loc ='upper left') # Display plot plt.tight_layout() plt.show() # Plot current training and validation accuracy and loss plot(history)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
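Accuracy alone does not show which grades the network confuses with each other. As an optional extra (not part of the original notebook), a confusion matrix over the held-out test set can be computed like this, reusing x_test and y_test from the cell above:

```python
# Optional: confusion matrix on the test set (rows = true grade, columns = predicted grade).
from sklearn.metrics import confusion_matrix

predicted = np.argmax(model.predict(x_test), axis=1)  # 0=F ... 4=A
actual = np.argmax(y_test, axis=1)
print(confusion_matrix(actual, predicted))
```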
Training and Testing Without Individual Features
# Analyze the effects of removing one feature on training def remove_and_analyze(feature): # Told you those feature descriptions would be useful print("Without feature", feature, ":", feature_descrips[feature]) # Remove the feature from the 32 input columns (axis 1); starting from the input ## features only keeps the final-grade column out of the inputs x = np.delete(student_data[:, 0:32], feature, axis = 1) # Create feed-forward network sized for the reduced input model = keras.Sequential() create_network(model, x.shape[1]) # Split data into training and testing data x_train = x[0:518,:] x_test = x[519:649,:] # Train on training data! history = model.fit(x_train, y_train, batch_size = 32, epochs = 7, verbose = 0, validation_split = 0.2) # Validate using data the network hasn't seen before (testing data) score = model.evaluate(x_test,y_test, verbose=0) # Check its effectiveness print('Test loss:', score[0]) print('Test accuracy:', score[1]) # Plot the data plot(history) # Analyze the effects of removing one feature on training # Do this for all input features for i in range(student_data.shape[1]-1): remove_and_analyze(i) print("\n \n \n")
Without feature 0 : School Test loss: 0.1621086014179477 Test accuracy: 0.9461538461538461
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Training and Testing Without Five Features
# Delete the five features that most negatively impact accuracy, working from the 32 ## input features only so the final-grade column is never part of the inputs x = np.delete(student_data[:, 0:32], 21, axis = 1) x = np.delete(x, 20, axis = 1) x = np.delete(x, 9, axis = 1) x = np.delete(x, 8, axis = 1) x = np.delete(x, 7, axis = 1) # Create feed-forward network model = keras.Sequential() create_network(model) # Split data into training and testing data x_train = x[0:518,:] x_test = x[519:649,:] # Train on training data! history = model.fit(x_train, y_train, batch_size = 32, epochs = 7, verbose = 0, validation_split = 0.2) # Validate using data the network hasn't seen before (testing data) score = model.evaluate(x_test,y_test, verbose=0) # Check its effectiveness print('Test loss:', score[0]) print('Test accuracy:', score[1]) # Plot the data plot(history)
Test loss: 0.1731401116976765 Test accuracy: 0.9307692307692308
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Grade Distribution Analysis
# Function for analyzing the percent of students with each grade [F,D,C,B,A] def analyze(array): # To hold the total number of students with a certain final grade # Index 0 - F. Index 4 - A sums = np.array([0,0,0,0,0]) # Iterate through array. Update sums according to whether a student got a final grade of a(n) for i in range(len(array)): # F if(array[i]==0): sums[0] += 1 # D elif(array[i]==1): sums[1] +=1 # C elif(array[i]==2): sums[2] +=1 # B elif(array[i]==3): sums[3] +=1 # A else: sums[4] += 1 # Total number of students total = sums[0] + sums[1] + sums[2] + sums[3] + sums[4] # Hold percentage of students with grade of [F,D,C,B,A] percentages = np.array([sums[0]/total*100, sums[1]/total*100, sums[2]/total*100, sums[3]/total*100, sums[4]/total*100]) # One bar for each of the 5 grades x = np.array([1,2,3,4,5]) # Descriptions for each bar. None on y-axis plt.xticks(np.arange(6), ('', 'F', 'D', 'C', 'B','A')) # X axis - grades. Y axis - percentage of students with each grade plt.bar(x,percentages) plt.xlabel("Grades") plt.ylabel("Percentage of Students") # Display bar graph plt.show() # Display percentages print(percentages)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
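Since the grades are already small integers, the counting inside analyze could also be done with np.bincount; the sketch below is an equivalent (purely illustrative) way to obtain the same percentages.

```python
# Illustrative alternative to the counting loop in analyze().
def grade_percentages(grades):
    counts = np.bincount(np.asarray(grades, dtype=int), minlength=5)  # F, D, C, B, A
    return counts / counts.sum() * 100
```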
Family Educational Support
# Array holding final grades of all students who have family educational support fam_sup = [] # Array holding final grades of all students who do not have family educational support no_fam_sup = [] # Iterate through all student samples for i in range(student_data.shape[0]): # Does the student have family educational support? (-1 no, 1 yes) sup = student_data[i][16] # Append student's final grade to corresponding array if(sup==1): fam_sup.append(student_data[i][32]) else: no_fam_sup.append(student_data[i][32])
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Family Educational Support
analyze(fam_sup)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
No Family Educational Support
analyze(no_fam_sup)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Reason for choosing school
# Each array holds the grades of students who chose to go to their school for that reason # Close to home reason1 = [] # School reputation reason2 = [] # Course preference reason3 = [] # Other reason4 = [] # Values that represent these unique reasons. They are not integer numbers like in the previous ## example. They're floating point numbers, so we save them so we can compare them to the value ## of this feature in each sample unique_reasons = np.unique(student_data[:,10]) # Iterate through all student samples and append final grades to corresponding arrays for i in range(student_data.shape[0]): reason = student_data[i][10] if(reason==unique_reasons[0]): reason1.append(student_data[i][32]) elif(reason==unique_reasons[1]): reason2.append(student_data[i][32]) elif(reason==unique_reasons[2]): reason3.append(student_data[i][32]) else: reason4.append(student_data[i][32])
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Reason 1: Close to Home
analyze(reason1)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Reason 2: School Reputation
analyze(reason2)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Reason 3: Course Preference
analyze(reason3)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Reason 4: Other
analyze(reason4)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Frequency of Going Out With Friends
# Each array holds the grades of students who go out with friends for that specified amount of time # (1 - very low, 5 - very high) go_out1 = [] go_out2 = [] go_out3 = [] go_out4 = [] go_out5 = [] # Floating point values representing frequency unique = np.unique(student_data[:,25]) # Iterate through all student samples and append final grades to corresponding arrays for i in range(student_data.shape[0]): frequency = student_data[i][25] if(frequency==unique[0]): go_out1.append(student_data[i][32]) elif(frequency==unique[1]): go_out2.append(student_data[i][32]) elif(frequency==unique[2]): go_out3.append(student_data[i][32]) elif(frequency==unique[3]): go_out4.append(student_data[i][32]) else: go_out5.append(student_data[i][32]) analyze(go_out1) analyze(go_out2) analyze(go_out3) analyze(go_out4) analyze(go_out5)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Free Time after School
# Each array holds the grades of students who have the specified amount of free time after school # (1 - very low, 5 - very high) free1 = [] free2 = [] free3 = [] free4 = [] free5 = [] # Floating point values representing frequency unique = np.unique(student_data[:,24]) # Iterate through all student samples and append final grades to corresponding arrays for i in range(student_data.shape[0]): frequency = student_data[i][24] if(frequency==unique[0]): free1.append(student_data[i][32]) elif(frequency==unique[1]): free2.append(student_data[i][32]) elif(frequency==unique[2]): free3.append(student_data[i][32]) elif(frequency==unique[3]): free4.append(student_data[i][32]) else: free5.append(student_data[i][32]) analyze(free1) analyze(free2) analyze(free3) analyze(free4) analyze(free5)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Paid Classes
# Array holding final grades of all students who have extra paid classes paid_class = [] # Array holding final grades of all students who do not have extra paid classes no_paid_class = [] # Iterate through all student samples and append final grades to corresponding arrays for i in range(student_data.shape[0]): paid = student_data[i][17] if(paid==1): paid_class.append(student_data[i][32]) else: no_paid_class.append(student_data[i][32])
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
Extra Paid Classes
analyze(paid_class)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project
No Extra Paid Classes
analyze(no_paid_class)
_____no_output_____
MIT
LSTM1.ipynb
CSCI4850/S19-team3-project