hexsha stringlengths 40 40 | size int64 6 14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6 260 | max_stars_repo_name stringlengths 6 119 | max_stars_repo_head_hexsha stringlengths 40 41 | max_stars_repo_licenses list | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 6 260 | max_issues_repo_name stringlengths 6 119 | max_issues_repo_head_hexsha stringlengths 40 41 | max_issues_repo_licenses list | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 6 260 | max_forks_repo_name stringlengths 6 119 | max_forks_repo_head_hexsha stringlengths 40 41 | max_forks_repo_licenses list | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | avg_line_length float64 2 1.04M | max_line_length int64 2 11.2M | alphanum_fraction float64 0 1 | cells list | cell_types list | cell_type_groups list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4ad978224db3c17417d135d83159f9f8c2ea909b
| 28,907 |
ipynb
|
Jupyter Notebook
|
nbs/edu_nbs/load_model_from_wandb.ipynb
|
mickeybeurskens/numerblox
|
0e4b887efcd9e6b84309b6ac5fb1d0de327e810e
|
[
"Apache-2.0"
] | 30 |
2022-03-17T03:23:20.000Z
|
2022-03-30T15:20:19.000Z
|
nbs/edu_nbs/load_model_from_wandb.ipynb
|
mickeybeurskens/numerblox
|
0e4b887efcd9e6b84309b6ac5fb1d0de327e810e
|
[
"Apache-2.0"
] | 8 |
2022-03-18T10:31:44.000Z
|
2022-03-31T15:43:46.000Z
|
nbs/edu_nbs/load_model_from_wandb.ipynb
|
mickeybeurskens/numerblox
|
0e4b887efcd9e6b84309b6ac5fb1d0de327e810e
|
[
"Apache-2.0"
] | 5 |
2022-03-18T10:24:38.000Z
|
2022-03-30T14:40:08.000Z
| 54.541509 | 2,811 | 0.600581 |
[
[
[
"# hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
]
],
[
[
"# Load model from Weights & Biases (wandb)",
"_____no_output_____"
],
[
"This tutorial is for people who are using [Weights & Biases (wandb)](https://wandb.ai/site) `WandbCallback` in their training pipeline and are looking for a convenient way to use saved models on W&B cloud to make predictions, evaluate and submit in a few lines of code.\n\nCurrently only Keras models (`.h5`) are supported for wandb loading in this framework. Future versions will include other formats like PyTorch support.",
"_____no_output_____"
],
[
"---------------------------------------------------------------------\n## 0. Authentication\n\nTo authenticate your W&B account you are given several options:\n1. Run `wandb login` in terminal and follow instructions.\n2. Configure global environment variable `'WANDB_API_KEY'`.\n3. Run `wandb.init(project=PROJECT_NAME, entity=ENTITY_NAME)` and pass API key from [https://wandb.ai/authorize](https://wandb.ai/authorize)",
"_____no_output_____"
],
[
"-----------------------------------------------------\n## 1. Download validation data\n\nThe first thing we do is download the current validation data and example predictions to evaluate against. This can be done in a few lines of code with `NumeraiClassicDownloader`.",
"_____no_output_____"
]
],
[
[
"#other\nimport pandas as pd\n\nfrom numerblox.download import NumeraiClassicDownloader\nfrom numerblox.numerframe import create_numerframe\nfrom numerblox.model import WandbKerasModel\nfrom numerblox.evaluation import NumeraiClassicEvaluator",
"_____no_output_____"
],
[
"#other\ndownloader = NumeraiClassicDownloader(\"wandb_keras_test\")\n# Path variables\nval_file = \"numerai_validation_data.parquet\"\nval_save_path = f\"{str(downloader.dir)}/{val_file}\"\n# Download only validation parquet file\ndownloader.download_single_dataset(val_file,\n dest_path=val_save_path)\n# Download example val preds\ndownloader.download_example_data()\n\n# Initialize NumerFrame from parquet file path\ndataf = create_numerframe(val_save_path)\n\n# Add example preds to NumerFrame\nexample_preds = pd.read_parquet(\"wandb_keras_test/example_validation_predictions.parquet\")\ndataf['prediction_example'] = example_preds.values",
"_____no_output_____"
]
],
[
[
"--------------------------------------------------------------------\n## 2. Predict (WandbKerasModel)\n\n`WandbKerasModel` automatically downloads and loads in a `.h5` from a specified wandb run. The path for a run is specified in the [\"Overview\" tab](https://docs.wandb.ai/ref/app/pages/run-page#overview-tab) of the run.\n\n- `file_name`: The default name for the best model in a run is `model-best.h5`. If you want to use a model you have saved under a different name specify `file_name` for `WandbKerasModel` initialization.\n\n\n- `replace`: The model will be downloaded to the directory you are working in. You will be warned if this directory contains models with the same filename. If these models can be overwritten specify `replace=True`.\n\n\n- `combine_preds`: Setting this to True will average all columns in case you have trained a multi-target model.\n\n\n- `autoencoder_mlp:` This argument is for the case where your [model architecture includes an autoencoder](https://forum.numer.ai/t/autoencoder-and-multitask-mlp-on-new-dataset-from-kaggle-jane-street/4338) and therefore the output is a tuple of 3 tensors. `WandbKerasModel` will in this case take the third output of the tuple (target predictions).\n\n",
"_____no_output_____"
]
],
[
[
"#other\nrun_path = \"crowdcent/cc-numerai-classic/h4pwuxwu\"\nmodel = WandbKerasModel(run_path=run_path,\n replace=True, combine_preds=True, autoencoder_mlp=True)",
"_____no_output_____"
]
],
[
[
"After initialization you can generate predictions with one line. `.predict` takes a `NumerFrame` as input and outputs a `NumerFrame` with a new prediction column. The prediction column name will be of the format `prediction_{RUN_PATH}`.",
"_____no_output_____"
]
],
[
[
"#other\ndataf = model.predict(dataf)\ndataf.prediction_cols",
"2022-03-14 12:54:38.254690: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
],
[
"#other\nmain_pred_col = f\"prediction_{run_path}\"\nmain_pred_col",
"_____no_output_____"
]
],
[
[
"----------------------------------------------------------------------\n## 3. Evaluate\n\nWe can now use the output of the model to evaluate in 2 lines of code. Additionally, we can directly submit predictions to Numerai with this `NumerFrame`. Check out the educational notebook `submitting.ipynb` for more information on this.",
"_____no_output_____"
]
],
[
[
"#other\nevaluator = NumeraiClassicEvaluator()\nval_stats = evaluator.full_evaluation(dataf=dataf,\n target_col=\"target\",\n pred_cols=[main_pred_col,\n \"prediction_example\"],\n example_col=\"prediction_example\"\n )",
"_____no_output_____"
]
],
[
[
"The evaluator outputs a `pd.DataFrame` with most of the main validation metrics for Numerai. We welcome new ideas and metrics for Evaluators. See `nbs/07_evaluation.ipynb` in this repository for full Evaluator source code.",
"_____no_output_____"
]
],
[
[
"#other\nval_stats",
"_____no_output_____"
]
],
[
[
"After we are done, downloaded files can be removed with one call on `NumeraiClassicDownloader` (optional).",
"_____no_output_____"
]
],
[
[
"#other\n# Clean up environment\ndownloader.remove_base_directory()",
"_____no_output_____"
]
],
[
[
"------------------------------------------------------------------\nWe hope this tutorial explained clearly to you how to load and predict with Weights & Biases (wandb) models.\n\nBelow you will find the full docs for `WandbKerasModel` and link to the source code:",
"_____no_output_____"
]
],
[
[
"# other\n# hide_input\nshow_doc(WandbKerasModel)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ad9814952b091af5a7098292d0985bff0254ada
| 6,379 |
ipynb
|
Jupyter Notebook
|
workshops/Idiomatic Programmer - handbook 2 - Codelab 2.ipynb
|
apurvmishra99/keras-idiomatic-programmer
|
40ee5482615eadff0a74c349706ae1184c904c13
|
[
"Apache-2.0"
] | 2 |
2019-10-11T15:09:03.000Z
|
2021-08-01T12:09:10.000Z
|
workshops/Idiomatic Programmer - handbook 2 - Codelab 2.ipynb
|
sachin235/keras-idiomatic-programmer
|
a265fddb379c7fc12575650822829e423e902ad8
|
[
"Apache-2.0"
] | null | null | null |
workshops/Idiomatic Programmer - handbook 2 - Codelab 2.ipynb
|
sachin235/keras-idiomatic-programmer
|
a265fddb379c7fc12575650822829e423e902ad8
|
[
"Apache-2.0"
] | 1 |
2021-08-01T12:09:38.000Z
|
2021-08-01T12:09:38.000Z
| 25.930894 | 183 | 0.564038 |
[
[
[
"# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Idiomatic Programmer Code Labs\n\n## Code Labs #2 - Get Familiar with Data Augmentation\n\n## Prerequistes:\n\n 1. Familiar with Python\n 2. Completed Handbook 2/Part 8: Data Augmentation\n\n## Objectives:\n\n 1. Channel Conversion\n 2. Flip Images\n 3. Roll (Shift Images)\n 4. Rotate without Clipping",
"_____no_output_____"
],
[
"## Setup:\n\nInstall the additional relevant packages to continuen with OpenCV, and then import them.",
"_____no_output_____"
]
],
[
[
"# Install the matplotlib library for plotting\n!pip install matplotlib\n# special iPython command --tell's matplotlib to inline (in notebook) displaying plots\n%matplotlib inline\n\n# Adrian Rosenbrock's image manipulation library\n!pip install imutils\n\n# Import matplotlib python plot module\nimport matplotlib.pyplot as plt\n# Import OpenCV\nimport cv2\n# Import numpy scientific module for arrays\nimport numpy as np\n# Import imutils\nimport imutils",
"_____no_output_____"
]
],
[
[
"## Channel Conversions\n\nOpenCV reads in the channels as BGR (Blue, Green, Read) instead of the more common convention of RGB (Red, Green Blue). Let's learn how to change the channel ordering to RGB.\n \nYou fill in the blanks (replace the ??), make sure it passes the Python interpreter.",
"_____no_output_____"
]
],
[
[
"# Let's read in that apple image again.\nimage = cv2.imread('apple.jpg', cv2.IMREAD_COLOR)\nplt.imshow(image)",
"_____no_output_____"
]
],
[
[
"### What, it's a blue apple!\n\nYup. It's the same data, but since matplotlib presumes RGB, then blue is the 3rd channel, but in BGR -- that's the red channel.\n\nLet's reorder the channels from BGR to RGB and then display again.",
"_____no_output_____"
]
],
[
[
"# Let's convert the channel order to RGB\n# HINT: RGB should be a big giveaway.\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2??)\nplt.imshow(image)",
"_____no_output_____"
]
],
[
[
"## Flip Images\n\nLet's use OpenCV to flip an image (apple) vertically and then horizontally.\n ",
"_____no_output_____"
]
],
[
[
"# Flip the image horizontally (upside down)\n# HINT: flip should be a big giveaway\nflip = cv2.??(image, 0)\nplt.imshow(flip)",
"_____no_output_____"
],
[
"# Flip the image vertically (mirrored)\n# HINT: If 0 was horizontal, what number would be your first guess to be vertical?\nflip = cv2.flip(image, ??)\nplt.imshow(flip)",
"_____no_output_____"
]
],
[
[
"## Roll (Shift) Images\n\nLet's use numpy to shift an image -- say 80 pixels to the right.",
"_____no_output_____"
]
],
[
[
"# Let's shift the image vertical 80 pixels to the right, where axis=1 means along the width\n# HINT: another name for shift is roll\nroll = np.??(image, 80, axis=1)\nplt.imshow(roll)",
"_____no_output_____"
],
[
"# Let's shift the image now horizontally 80 pixels down.\n# HINT: if shifting the width axis is a 1, what do you think the value is for \n# shifting along the height axis?\nroll = np.roll(image, 80, axis=??)\nplt.imshow(roll)",
"_____no_output_____"
]
],
[
[
"## Randomly Rotate the Image (w/o Clipping)\n\nLet's use imutils to randomly rotate the image without clipping it.",
"_____no_output_____"
]
],
[
[
"import random\n\n# Let's get a random value between 0 and 60 degrees.\ndegree = random.randint(0, 60)\n\n# Let's rotate the image now by the randomly selected degree\nrot = imutils.rotate_bound(image, ??)\nplt.imshow(rot)",
"_____no_output_____"
]
],
[
[
"## End of Code Lab",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ad989039399433d43c6ab03b069f5d3bf689825
| 18,335 |
ipynb
|
Jupyter Notebook
|
C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb
|
LedaiThomasNilsson/github-slideshow
|
b0f23bb5390e4dfe510c5bc92f3cf214b29cdab5
|
[
"MIT"
] | null | null | null |
C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb
|
LedaiThomasNilsson/github-slideshow
|
b0f23bb5390e4dfe510c5bc92f3cf214b29cdab5
|
[
"MIT"
] | 3 |
2020-10-26T15:30:55.000Z
|
2020-10-26T16:09:20.000Z
|
C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb
|
LedaiThomasNilsson/github-slideshow
|
b0f23bb5390e4dfe510c5bc92f3cf214b29cdab5
|
[
"MIT"
] | null | null | null | 35.191939 | 273 | 0.510063 |
[
[
[
"<a href=\"https://colab.research.google.com/github/LedaiThomasNilsson/github-slideshow/blob/master/C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Basic transfer learning with cats and dogs data\n\n",
"_____no_output_____"
],
[
"### Import tensorflow",
"_____no_output_____"
]
],
[
[
"try:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass",
"_____no_output_____"
]
],
[
[
"### Import modules and download the cats and dogs dataset.",
"_____no_output_____"
]
],
[
[
"import urllib.request\nimport os\nimport zipfile\nimport random\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import Model\nfrom tensorflow.keras.applications.inception_v3 import InceptionV3\nfrom tensorflow.keras.optimizers import RMSprop\nfrom shutil import copyfile\n\n\ndata_url = \"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\"\ndata_file_name = \"catsdogs.zip\"\ndownload_dir = '/tmp/'\nurllib.request.urlretrieve(data_url, data_file_name)\nzip_ref = zipfile.ZipFile(data_file_name, 'r')\nzip_ref.extractall(download_dir)\nzip_ref.close()\n",
"_____no_output_____"
]
],
[
[
"Check that the dataset has the expected number of examples.",
"_____no_output_____"
]
],
[
[
"print(\"Number of cat images:\",len(os.listdir('/tmp/PetImages/Cat/')))\nprint(\"Number of dog images:\", len(os.listdir('/tmp/PetImages/Dog/')))\n\n# Expected Output:\n# Number of cat images: 12501\n# Number of dog images: 12501",
"_____no_output_____"
]
],
[
[
"Create some folders that will store the training and test data.\n- There will be a training folder and a testing folder.\n- Each of these will have a subfolder for cats and another subfolder for dogs.",
"_____no_output_____"
]
],
[
[
"try:\n os.mkdir('/tmp/cats-v-dogs')\n os.mkdir('/tmp/cats-v-dogs/training')\n os.mkdir('/tmp/cats-v-dogs/testing')\n os.mkdir('/tmp/cats-v-dogs/training/cats')\n os.mkdir('/tmp/cats-v-dogs/training/dogs')\n os.mkdir('/tmp/cats-v-dogs/testing/cats')\n os.mkdir('/tmp/cats-v-dogs/testing/dogs')\nexcept OSError:\n pass",
"_____no_output_____"
]
],
[
[
"### Split data into training and test sets\n\n- The following code put first checks if an image file is empty (zero length)\n- Of the files that are not empty, it puts 90% of the data into the training set, and 10% into the test set.",
"_____no_output_____"
]
],
[
[
"import random\nfrom shutil import copyfile\ndef split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n files = []\n for filename in os.listdir(SOURCE):\n file = SOURCE + filename\n if os.path.getsize(file) > 0:\n files.append(filename)\n else:\n print(filename + \" is zero length, so ignoring.\")\n\n training_length = int(len(files) * SPLIT_SIZE)\n testing_length = int(len(files) - training_length)\n shuffled_set = random.sample(files, len(files))\n training_set = shuffled_set[0:training_length]\n testing_set = shuffled_set[training_length:]\n\n for filename in training_set:\n this_file = SOURCE + filename\n destination = TRAINING + filename\n copyfile(this_file, destination)\n\n for filename in testing_set:\n this_file = SOURCE + filename\n destination = TESTING + filename\n copyfile(this_file, destination)\n\n\nCAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\nTRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\nTESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\nDOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\nTRAINING_DOGS_DIR = \"/tmp/cats-v-dogs/training/dogs/\"\nTESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n\nsplit_size = .9\nsplit_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\nsplit_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)\n\n# Expected output\n# 666.jpg is zero length, so ignoring\n# 11702.jpg is zero length, so ignoring",
"_____no_output_____"
]
],
[
[
"Check that the training and test sets are the expected lengths.",
"_____no_output_____"
]
],
[
[
"\nprint(\"Number of training cat images\", len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\nprint(\"Number of training dog images\", len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\nprint(\"Number of testing cat images\", len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\nprint(\"Number of testing dog images\", len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n\n# expected output\n# Number of training cat images 11250\n# Number of training dog images 11250\n# Number of testing cat images 1250\n# Number of testing dog images 1250",
"_____no_output_____"
]
],
[
[
"### Data augmentation (try adjusting the parameters)!\n\nHere, you'll use the `ImageDataGenerator` to perform data augmentation. \n- Things like rotating and flipping the existing images allows you to generate training data that is more varied, and can help the model generalize better during training. \n- You can also use the data generator to apply data augmentation to the validation set.\n\nYou can use the default parameter values for a first pass through this lab.\n- Later, try to experiment with the parameters of `ImageDataGenerator` to improve the model's performance.\n- Try to drive reach 99.9% validation accuracy or better.",
"_____no_output_____"
]
],
[
[
"\nTRAINING_DIR = \"/tmp/cats-v-dogs/training/\"\n# Experiment with your own parameters to reach 99.9% validation accuracy or better\ntrain_datagen = ImageDataGenerator(rescale=1./255,\n rotation_range=40,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest')\ntrain_generator = train_datagen.flow_from_directory(TRAINING_DIR,\n batch_size=100,\n class_mode='binary',\n target_size=(150, 150))\n\nVALIDATION_DIR = \"/tmp/cats-v-dogs/testing/\"\n\nvalidation_datagen = ImageDataGenerator(rescale=1./255)\nvalidation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,\n batch_size=100,\n class_mode='binary',\n target_size=(150, 150))\n\n",
"_____no_output_____"
]
],
[
[
"### Get and prepare the model\n\nYou'll be using the `InceptionV3` model. \n- Since you're making use of transfer learning, you'll load the pre-trained weights of the model.\n- You'll also freeze the existing layers so that they aren't trained on your downstream task with the cats and dogs data.\n- You'll also get a reference to the last layer, 'mixed7' because you'll add some layers after this last layer.",
"_____no_output_____"
]
],
[
[
"weights_url = \"https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5\"\nweights_file = \"inception_v3.h5\"\nurllib.request.urlretrieve(weights_url, weights_file)\n\n# Instantiate the model\npre_trained_model = InceptionV3(input_shape=(150, 150, 3),\n include_top=False,\n weights=None)\n\n# load pre-trained weights\npre_trained_model.load_weights(weights_file)\n\n# freeze the layers\nfor layer in pre_trained_model.layers:\n layer.trainable = False\n\n# pre_trained_model.summary()\n\nlast_layer = pre_trained_model.get_layer('mixed7')\nprint('last layer output shape: ', last_layer.output_shape)\nlast_output = last_layer.output\n\n",
"_____no_output_____"
]
],
[
[
"### Add layers\nAdd some layers that you will train on the cats and dogs data.\n- `Flatten`: This will take the output of the `last_layer` and flatten it to a vector.\n- `Dense`: You'll add a dense layer with a relu activation.\n- `Dense`: After that, add a dense layer with a sigmoid activation. The sigmoid will scale the output to range from 0 to 1, and allow you to interpret the output as a prediction between two categories (cats or dogs).\n\nThen create the model object.",
"_____no_output_____"
]
],
[
[
"# Flatten the output layer to 1 dimension\nx = layers.Flatten()(last_output)\n# Add a fully connected layer with 1,024 hidden units and ReLU activation\nx = layers.Dense(1024, activation='relu')(x)\n# Add a final sigmoid layer for classification\nx = layers.Dense(1, activation='sigmoid')(x)\n\nmodel = Model(pre_trained_model.input, x)\n",
"_____no_output_____"
]
],
[
[
"### Train the model\nCompile the model, and then train it on the test data using `model.fit`\n- Feel free to adjust the number of epochs. This project was originally designed with 20 epochs.\n- For the sake of time, you can use fewer epochs (2) to see how the code runs.\n- You can ignore the warnings about some of the images having corrupt EXIF data. Those will be skipped.",
"_____no_output_____"
]
],
[
[
"\n# compile the model\nmodel.compile(optimizer=RMSprop(lr=0.0001),\n loss='binary_crossentropy',\n metrics=['acc'])\n\n# train the model (adjust the number of epochs from 1 to improve performance)\nhistory = model.fit(\n train_generator,\n validation_data=validation_generator,\n epochs=2,\n verbose=1)",
"_____no_output_____"
]
],
[
[
"### Visualize the training and validation accuracy\n\nYou can see how the training and validation accuracy change with each epoch on an x-y plot.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\n\n#-----------------------------------------------------------\n# Retrieve a list of list results on training and test data\n# sets for each training epoch\n#-----------------------------------------------------------\nacc=history.history['acc']\nval_acc=history.history['val_acc']\nloss=history.history['loss']\nval_loss=history.history['val_loss']\n\nepochs=range(len(acc)) # Get number of epochs\n\n#------------------------------------------------\n# Plot training and validation accuracy per epoch\n#------------------------------------------------\nplt.plot(epochs, acc, 'r', \"Training Accuracy\")\nplt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\nplt.title('Training and validation accuracy')\nplt.figure()\n\n",
"_____no_output_____"
]
],
[
[
"### Predict on a test image\n\nYou can upload any image and have the model predict whether it's a dog or a cat.\n- Find an image of a dog or cat\n- Run the following code cell. It will ask you to upload an image.\n- The model will print \"is a dog\" or \"is a cat\" depending on the model's prediction.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom google.colab import files\nfrom keras.preprocessing import image\n\nuploaded = files.upload()\n\nfor fn in uploaded.keys():\n \n # predicting images\n path = '/content/' + fn\n img = image.load_img(path, target_size=(150, 150))\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n\n image_tensor = np.vstack([x])\n classes = model.predict(image_tensor)\n print(classes)\n print(classes[0])\n if classes[0]>0.5:\n print(fn + \" is a dog\")\n else:\n print(fn + \" is a cat\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ad990b5db197cc357f40e723977f89fda8fbc4e
| 6,004 |
ipynb
|
Jupyter Notebook
|
5_cleanup.ipynb
|
aws-samples/sagemaker-feature-store-real-time-recommendations
|
d420876c24a53ad72a1ab177816bebc1f3eff620
|
[
"MIT-0"
] | 9 |
2021-11-23T16:49:06.000Z
|
2022-03-10T23:03:43.000Z
|
5_cleanup.ipynb
|
aws-samples/sagemaker-feature-store-real-time-recommendations
|
d420876c24a53ad72a1ab177816bebc1f3eff620
|
[
"MIT-0"
] | null | null | null |
5_cleanup.ipynb
|
aws-samples/sagemaker-feature-store-real-time-recommendations
|
d420876c24a53ad72a1ab177816bebc1f3eff620
|
[
"MIT-0"
] | null | null | null | 27.925581 | 128 | 0.63008 |
[
[
[
"# Notebook 5: Clean Up Resources\n\nSpecify \"Python 3\" Kernel and \"Data Science\" Image.\n\n### Background\n\nIn this notebook, we will clean up the resources we provisioned during this workshop:\n\n- SageMaker Feature Groups\n- SageMaker Endpoints\n- Amazon Kinesis Data Stream\n- Amazon Kinesis Data Analytics application",
"_____no_output_____"
],
[
"### Imports",
"_____no_output_____"
]
],
[
[
"from parameter_store import ParameterStore\nfrom utils import *",
"_____no_output_____"
]
],
[
[
"### Session variables",
"_____no_output_____"
]
],
[
[
"role = sagemaker.get_execution_role()\nsagemaker_session = sagemaker.Session()\nregion = sagemaker_session.boto_region_name\nboto_session = boto3.Session()\nkinesis_client = boto_session.client(service_name='kinesis', region_name=region)\nkinesis_analytics_client = boto_session.client('kinesisanalytics')\n\nps = ParameterStore(verbose=False)\nps.set_namespace('feature-store-workshop')",
"_____no_output_____"
]
],
[
[
"Load variables from previous notebooks",
"_____no_output_____"
]
],
[
[
"parameters = ps.read()\n\ncustomers_feature_group_name = parameters['customers_feature_group_name']\nproducts_feature_group_name = parameters['products_feature_group_name']\norders_feature_group_name = parameters['orders_feature_group_name']\nclick_stream_historical_feature_group_name = parameters['click_stream_historical_feature_group_name']\nclick_stream_feature_group_name = parameters['click_stream_feature_group_name']\n\ncf_model_endpoint_name = parameters['cf_model_endpoint_name']\nranking_model_endpoint_name = parameters['ranking_model_endpoint_name']\n\nkinesis_stream_name = parameters['kinesis_stream_name']\nkinesis_analytics_application_name = parameters['kinesis_analytics_application_name']",
"_____no_output_____"
]
],
[
[
"### Delete feature groups",
"_____no_output_____"
]
],
[
[
"feature_group_list = [customers_feature_group_name, products_feature_group_name,\n orders_feature_group_name, click_stream_historical_feature_group_name,\n click_stream_feature_group_name]\n\nfor feature_group in feature_group_list:\n print(f'Deleting feature group: {feature_group}')\n delete_feature_group(feature_group)",
"_____no_output_____"
]
],
[
[
"### Delete endpoints and endpoint configurations",
"_____no_output_____"
]
],
[
[
"def clean_up_endpoint(endpoint_name):\n response = sagemaker_session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)\n endpoint_config_name = response['EndpointConfigName']\n print(f'Deleting endpoint: {endpoint_name}')\n print(f'Deleting endpoint configuration : {endpoint_config_name}')\n sagemaker_session.sagemaker_client.delete_endpoint(EndpointName=endpoint_name)\n sagemaker_session.sagemaker_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)",
"_____no_output_____"
],
[
"endpoint_list = [cf_model_endpoint_name, ranking_model_endpoint_name]\n\nfor endpoint in endpoint_list:\n clean_up_endpoint(endpoint)",
"_____no_output_____"
]
],
[
[
"### Delete Kinesis Data Stream",
"_____no_output_____"
]
],
[
[
"kinesis_client.delete_stream(StreamName=kinesis_stream_name,\n EnforceConsumerDeletion=True)",
"_____no_output_____"
]
],
[
[
"### Delete Kinesis Data Analytics application",
"_____no_output_____"
]
],
[
[
"response = kinesis_analytics_client.describe_application(ApplicationName=kinesis_analytics_application_name)\ncreate_ts = response['ApplicationDetail']['CreateTimestamp']\nkinesis_analytics_client.delete_application(ApplicationName=kinesis_analytics_application_name, CreateTimestamp=create_ts)",
"_____no_output_____"
]
],
[
[
"Go back to Workshop Studio and click on \"Next\".",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ad9afc83e021795c9835fa6f396aea17a1326f6
| 210,717 |
ipynb
|
Jupyter Notebook
|
uci-pharmsci/lectures/fluctuations_correlations_error/error_analysis_OpenMM_convergence.ipynb
|
aakankschit/drug-computing
|
3ea4bd12f3b56cbffa8ea43396f3a32c009985a9
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
uci-pharmsci/lectures/fluctuations_correlations_error/error_analysis_OpenMM_convergence.ipynb
|
aakankschit/drug-computing
|
3ea4bd12f3b56cbffa8ea43396f3a32c009985a9
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null |
uci-pharmsci/lectures/fluctuations_correlations_error/error_analysis_OpenMM_convergence.ipynb
|
aakankschit/drug-computing
|
3ea4bd12f3b56cbffa8ea43396f3a32c009985a9
|
[
"CC-BY-4.0",
"MIT"
] | null | null | null | 159.031698 | 128,972 | 0.88766 |
[
[
[
"# PharmSci 175/275 (UCI)\n## What is this?? \nThe material below is a Jupyter notebook including some lecture content to supplement class material on fluctuations, correlations, and error analysis from Drug Discovery Computing Techniques, PharmSci 175/275 at UC Irvine. \nExtensive materials for this course, as well as extensive background and related materials, are available on the course GitHub repository: [github.com/mobleylab/drug-computing](https://github.com/mobleylab/drug-computing)\n\nThis material is a set of slides intended for presentation with RISE as detailed [in the course materials on GitHub](https://github.com/MobleyLab/drug-computing/tree/master/uci-pharmsci/lectures/energy_minimization). While it may be useful without RISE, it will also likely appear somewhat less verbose than it would if it were intended for use in written form.",
"_____no_output_____"
],
[
"# Fluctuations, correlations, and error analysis\n\nToday: Chemistry tools in Python; working with molecules; generating 3D conformers; shape search methods\n\n### Instructor: David L. Mobley\n\n### Contributors to today's materials:\n- David L. Mobley\n- I also appreciate John Chodera and Nathan Lim for help with OpenMM\n- Some content also draws on John Chodera's work on [automated equilibration detection](https://www.biorxiv.org/content/early/2015/12/30/021659) and his [`pymbar`](https://github.com/choderalab/pymbar) Python package\n- Density calculation work uses code from a former postdoc, Gaetano Calabró",
"_____no_output_____"
],
[
"# Outline of this notebook\n1. Recap/info on central limit theorem and averaging\n1. Some brief OpenMM basics -- mostly not covered in detail in class, but provides context for the density calculation\n2. A simple example of a density calculation which stops when converged\n3. Analyzing density results in this notebook",
"_____no_output_____"
],
[
"# Remember, the central limit thereom means the of a sum of many independent random variables is a Gaussian/normal distribution",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage('images/CLT.png')",
"_____no_output_____"
]
],
[
[
"## This is true regardless of the starting distribution\n\nThe distribution of a sum of many independent random variables will follow a Gaussian (normal) distribution, regardless of the starting distribution\n\nA mean is a type of sum, so a mean (average) of many measurements or over many molecules will often be a normal distribution",
"_____no_output_____"
],
[
"## But which property? Sometimes several properties are related\n\nSometimes, related properties can be calculated from the same data -- for example, binding free energy:\n$\\Delta G_{bind} = k_B T \\ln K_d$\n\nWhich should be normally dsitributed? $K_d$? $\\Delta G$? Both? \n\nAnswer: Free energy, typically. Why? Equilibrium properties are determined by free energy. We measure, for example, the heat released on binding averaged over many molecules. We might *convert* to $K_d$, but the physical quantity relates to the free energy.",
"_____no_output_____"
],
[
"## This has important implications for averaging\n\nFor example, obtain results from several different experiments (i.e. different groups) and want to combine. How to average?\n\n$\\Delta G_{bind} = k_B T \\ln K_d$\n\nExample: combining vapor pressure measurements as part of a hydration free energy estimate: Three values, 1e-3, 1e-4, 1e-5. (Vapor pressure determined by partitioning between gas and condensed phase -- driven by free energy. It is the free energy which should be normally distributed, not the vapor pressure.)\n\n**Simple mean: 3.7e-4**\n**Average the logarithms and exponentiate: 1e-4**\n\n**So, average the free energy, NOT the $K_d$.**\nThis also applies to other areas -- partitioning coefficients, solubility, etc. ",
"_____no_output_____"
],
[
"# Generate some input files we'll use\n\n- We want some inputs to work with as we see how OpenMM works\n- And for our sample density calculation\n- [`SolvationToolkit`](https://github.com/mobleylab/SolvationToolkit) provides simple wrappers for building arbitrary mixtures\n - OpenEye toolkits\n - [`packmol`](https://github.com/mcubeg/packmol)\n - GAFF small molecule force field (and TIP3P or TIP4P, etc. for water)\n\nLet's build a system to use later",
"_____no_output_____"
]
],
[
[
"from solvationtoolkit.solvated_mixtures import *\nmixture = MixtureSystem('mixtures')\nmixture.addComponent(label='phenol', name=\"phenol\", number=1)\nmixture.addComponent(label='toluene', smiles='Cc1ccccc1', number=10)\nmixture.addComponent(label='cyclohexane', smiles='C1CCCCC1', number=100)\n#Generate output files for AMBER\nmixture.build(amber = True)",
"\nWARNING: component name not provided; label will be used as component name\n\n\nWARNING: component name not provided; label will be used as component name\n\nUnexpected errors encountered running AMBER tool. Offending output:\nExiting LEaP: Errors = 0; Warnings = 0; Notes = 0.\n"
]
],
[
[
"# OpenMM is more of a simulation toolkit than a simulation package\n\n(This material is here mainly as background for the code below, down to [A simple example of a density calculation which stops when converged](#A-simple-example-of-a-density-calculation-which-stops-when-converged), and will not be covered in detail in this lecture).\n\n- Easy-to-use Python API\n- Very fast calculations on GPUs (but slow on CPUs)\n- Really easy to implement new techniques, do new science\n- Key ingredients in a calculation:\n - `Topology`\n - `System`\n - `Simulation` (takes `System`, `Topology`, `Integrator`; contains positions)",
"_____no_output_____"
],
[
"## `Topology`: Chemical composition of your system\n\n- Atoms, bonds, etc.\n- Can be loaded from some common file formats such as PDB, mol2\n- Can be created from OpenEye molecule via [`oeommtools`](https://github.com/oess/oeommtools), such as [`oeommtools.utils.oemol_to_openmmTop`](https://github.com/oess/oeommtools/blob/master/oeommtools/utils.py#L17)\n - Side note: An OE \"molecule\" can contain more than one molecule, so can contain protein+ligand+solvent for example\n- Tangent: Try to retain bond order info if you have it (e.g. from a mol2)",
"_____no_output_____"
]
],
[
[
"# Example Topology generation from a couple mechanisms:\n\n# Load a PDB\nfrom simtk.openmm.app import PDBFile\npdb = PDBFile('sample_files/T4-protein.pdb')\nt4_topology = pdb.topology",
"_____no_output_____"
],
[
"# Load a mol2: MDTraj supports a variety of file formats including mol2\nimport mdtraj\ntraj = mdtraj.load('sample_files/mobley_20524.mol2')\n# MDTraj objects contain a Topology, but an MDTraj topology; they support conversion to OpenMM\ntraj.topology.to_openmm()\n\n# MDTraj can also handle PDB, plus trajectory formats which contain topology information\nprotein_traj = mdtraj.load('sample_files/T4-protein.pdb')\nt4_topology = protein_traj.topology.to_openmm()\n# And we can visualize with nglview (or drop out to VMD)\nimport nglview\nview = nglview.show_mdtraj(protein_traj)\nview",
"_____no_output_____"
],
[
"# Load AMBER gas phase topology\nfrom simtk.openmm.app import *\nprmtop = AmberPrmtopFile('sample_files/mobley_20524.prmtop')\nprint(\"Topology has %s atoms\" % prmtop.topology.getNumAtoms())\n\n# Gromacs files can be loaded by GromacsTopFile and GromacsGroFile but you need topology/coordinate files\n# which don't have include statements, or a GROMACS installation",
"_____no_output_____"
]
],
[
[
"If the below cell does not run because of missing `oeommtools`, you'll need to install it, e.g. via `conda install -c openeye/label/Orion oeommtools -c omnia`",
"_____no_output_____"
]
],
[
[
"# Load an OEMol and convert (note advantage over MDTraj for bond order, etc.)\n\nfrom openeye.oechem import *\nfrom oeommtools.utils import *\nmol = OEMol()\nistream = oemolistream( 'sample_files/mobley_20524.mol2')\nOEReadMolecule(istream, mol)\nistream.close()\n\n# Convert OEMol to Topology using oeommtools -- so you can get a topology from almost any format OE supports\ntopology, positions = oemol_to_openmmTop(mol)\nprint(topology.getNumAtoms())",
"13\n"
]
],
[
[
"## `System`: Your parameterized system\n- Often generated by `createSystem`, but requires OpenMM know how to assign parameters\n - Easy for standard biomolecules (proteins, nucleic acids), waters ions\n - OpenMM FFXML files used; available for many common FFs\n - More complex for general small molecules\n- Can also be loaded from common file formats such as GROMACS, AMBER\n - useful if you set up for AMBER or GROMACS\n- We have new open forcefield effort that provides new force fields with an `openforcefield.createSystem` operator; generates OpenMM Systems.",
"_____no_output_____"
]
],
[
[
"# Example system creation\n#From OpenMM Docs: http://docs.openmm.org/latest/userguide/application.html#running-simulations\nfrom simtk.openmm.app import *\nfrom simtk.openmm import *\nfrom simtk.unit import *\nfrom sys import stdout\n\n# Example System creation using OpenMM XML force field libraries -- good for biomolecules, ions, water\npdb = PDBFile('sample_files/input.pdb')\nforcefield = ForceField('amber99sb.xml', 'tip3p.xml')\nsystem = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,\n nonbondedCutoff=1*nanometer, constraints=HBonds)\n",
"_____no_output_____"
],
[
"# Or you could set up your own molecule for simulation with e.g. GAFF using AmberTools\n\nfrom openmoltools.amber import *\n# Generate GAFF-typed mol2 file and AMBER frcmod file using AmberTools\ngaff_mol2_file, frcmod_file = run_antechamber('phenol', 'sample_files/mobley_20524.mol2')\n# Generate AMBER files\nprmtop_name, inpcrd_name = run_tleap( 'phenol', gaff_mol2_file, frcmod_file)\nprint(\"Generated %s and %s\" % (prmtop_name, inpcrd_name))\n\n# Create System -- in this case, single molecule in gas phase\nprmtop = AmberPrmtopFile( prmtop_name)\ninpcrd = AmberInpcrdFile( inpcrd_name)\nsystem = prmtop.createSystem(nonbondedMethod = NoCutoff, nonbondedCutoff = NoCutoff, constraints = HBonds)",
"Generated /Users/dmobley/github/drug-computing/uci-pharmsci/lectures/fluctuations_correlations_error/phenol.prmtop and /Users/dmobley/github/drug-computing/uci-pharmsci/lectures/fluctuations_correlations_error/phenol.inpcrd\n"
],
[
"# Load the mixture we generated above in Section 3\nfile_prefix = 'mixtures/amber/phenol_toluene_cyclohexane_1_10_100'\nprmtop = AmberPrmtopFile( file_prefix+'.prmtop')\ninpcrd = AmberInpcrdFile( file_prefix+'.inpcrd')\n# Create system: Here, solution phase with periodic boundary conditions and constraints\nsystem = prmtop.createSystem(nonbondedMethod = PME, nonbondedCutoff = 1*nanometer, constraints = HBonds)\n\n#You can visualize the above with VMD, or we can do:\ntraj = mdtraj.load( file_prefix + '.inpcrd', top = file_prefix + '.prmtop')\nview = nglview.show_mdtraj(traj)\nview",
"_____no_output_____"
]
],
[
[
"## `Simulation`: The system, topology, and positions you're simulating, under what conditions\n- Could be for energy minimization, or different types of dynamics\n- Has an integrator attached (even if just minimizing), including temperature\n- `context` -- including positions, periodic box if applicable, etc.\n- If dynamics, has:\n - timestep\n - velocities\n- potentially also has reporters which store properties like energies, trajectory snapshots, etc.\n",
"_____no_output_____"
],
[
"### Let's take that last `System` we set up and energy minimize it\n(The mixture of toluene, phenol, and cyclohexane we generated originally)",
"_____no_output_____"
]
],
[
[
"# Prepare the integrator\nintegrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)",
"_____no_output_____"
],
[
"# Prep the simulation\nsimulation = Simulation(prmtop.topology, system, integrator)\nsimulation.context.setPositions(inpcrd.positions)",
"_____no_output_____"
],
[
"# Get and print initial energy\nstate = simulation.context.getState(getEnergy = True)\nenergy = state.getPotentialEnergy() / kilocalories_per_mole\nprint(\"Energy before minimization (kcal/mol): %.2g\" % energy)\n\n# Energy minimize\nsimulation.minimizeEnergy()\n\n# Get and print final energy\nstate = simulation.context.getState(getEnergy=True, getPositions=True)\nenergy = state.getPotentialEnergy() / kilocalories_per_mole\nprint(\"Energy after minimization (kcal/mol): %.2g\" % energy)",
"Energy before minimization (kcal/mol): 1.4e+11\nEnergy after minimization (kcal/mol): -5.2e+02\n"
],
[
"# While we're at it, why don't we just run a few steps of dynamics\nsimulation.reporters.append(PDBReporter('sample_files/mixture_output.pdb', 100))\nsimulation.reporters.append(StateDataReporter(stdout, 100, step=True,\n potentialEnergy=True, temperature=True))\nsimulation.step(1000) # Runs 1000 steps of dynamics\nstate = simulation.context.getState(getEnergy=True, getPositions=True)",
"#\"Step\",\"Potential Energy (kJ/mole)\",\"Temperature (K)\"\n100,-1104.1612038835774,56.240383237667835\n200,-258.1508279070149,107.11272177220852\n300,486.5828146711101,141.20197403652844\n400,1000.3723654523601,173.45463216100626\n500,1385.82158420236,200.48514456168613\n600,1842.013234592985,220.3795403473273\n700,2149.51836154611,234.29764379251156\n800,2302.23369357736,248.00630540972793\n900,2526.37236545236,261.8691875786783\n1000,2727.16777560861,266.3660576562966\n"
]
],
[
[
"# A simple example of a density calculation which stops when converged\n\n- We'll do a very simple density estimation\n- This is not a recommended protocol since we're just jumping straight in to \"production\"\n- But it illustrates how you can do this type of thing easily with OpenMM\n- For production data, you'd precede by equilibration (usually NVT, then NPT, then production)",
"_____no_output_____"
],
[
"## The most bare-bones version",
"_____no_output_____"
]
],
[
[
"# We'll pick up that same system again, loading it up again so we can add a barostat before setting up the simulation\nimport simtk.openmm as mm\nfile_prefix = 'mixtures/amber/phenol_toluene_cyclohexane_1_10_100'\nprmtop = AmberPrmtopFile( file_prefix+'.prmtop')\ninpcrd = AmberInpcrdFile( file_prefix+'.inpcrd')\nsystem = prmtop.createSystem(nonbondedMethod = PME, nonbondedCutoff = 1*nanometer, constraints = HBonds)\n\n# Now add a barostat\nsystem.addForce(mm.MonteCarloBarostat(1*atmospheres, 300*kelvin, 25))\n\n# Set up integrator and simulation\nintegrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)\nsimulation = Simulation(prmtop.topology, system, integrator)\n\n\n# Let's pull the positions from the end of the brief \"equilibration\" we ran up above.\nsimulation.context.setPositions(state.getPositions())\n\n\n# Set up a reporter to assess progress; will report every 100 steps (somewhat short)\nprod_data_filename = os.path.join('sample_files', os.path.basename(file_prefix)+'.csv')\nsimulation.reporters.append(app.StateDataReporter( prod_data_filename, 100, step=True, potentialEnergy=True,\n temperature=True, density=True))\n",
"_____no_output_____"
],
[
"# Set up for run; for a somewhat reasonable convergence threshold you probably want run_steps >= 2500\n# and a density tolerance of 1e-3 or smaller; higher thresholds will likely result in early termination\n# due to slow fluctuations in density.\n# But that may take some time to run, so feel free to try higher also.\nrun_steps = 2500\nconverged = False\ndensity_tolerance = 0.001\nimport pandas as pd\nfrom pymbar import timeseries as ts",
"_____no_output_____"
],
[
"while not converged:\n simulation.step(run_steps)\n\n # Read data\n d = pd.read_csv(prod_data_filename, names=[\"step\", \"U\", \"Temperature\", \"Density\"], skiprows=1)\n density_ts = np.array(d.Density)\n \n # Detect when it seems to have equilibrated and clip off the part prior\n [t0, g, Neff] = ts.detectEquilibration(density_ts)\n density_ts = density_ts[t0:]\n \n # Compute standard error of what's left\n density_mean_stderr = density_ts.std() / np.sqrt(Neff)\n\n # Print stats, see if converged\n print(\"Current density mean std error = %f g/mL\" % density_mean_stderr)\n\n if density_mean_stderr < density_tolerance :\n converged = True\n print(\"...Convergence is OK; equilibration estimated to be achieved after data point %s\\n\" % t0)",
"_____no_output_____"
]
],
[
[
"# While that's running, let's look at what it's doing\n\n## The first key idea is that we want to be able to tell it to stop when the density is known precisely enough\n\nWe have this bit of code: \n```python\ndensity_mean_stderr = density_ts.std() / np.sqrt(Neff)\n```\nThis is estimating the standard error in the mean -- $\\sigma_{err} = \\frac{\\sigma}{\\sqrt{N_{eff}}}$ where $N_{eff}$ is the number of effective samples and $\\sigma$ is the standard deviation.\n\n**We can stop running our simulation when the number of effective samples gets high enough, relative to the standard deviation, that the standard error becomes as small as we want.**",
"_____no_output_____"
],
[
"## But how do we get the number of effective samples?\n\nAs discussed previously, $N_{eff} = N/g$ where $g$ is the statistical inefficiency -- a measure of how correlated our samples are. \n\nJohn Chodera's `pymbar` module provides a handy `statisticalInefficiency` module which estimates this from calculations of the autocorrelation function/autocorrelation time.",
"_____no_output_____"
],
[
"## But there's another problem: What if our initial box size (and density) is very bad?\n\nWhat if we built the box way too small? Or way too big? Then we'll have an initial period where the density is way off from the correct value.\n\n<center><img src=\"images/Chodera_1_top.png\" alt=\"GitHub\" style=\"width: 800px;\"/></center>\n\nHere, instantaneous density of liquid argon averaged over 500 simulations from [Chodera](https://www.biorxiv.org/content/early/2015/12/30/021659)",
"_____no_output_____"
],
[
"### This could adversely affect computed properties, unless we throw out some data for equilibration\n\nIf we average these results (red) the result is biased relative to the true expectation value out to well past 1000$\\tau$ (time units) [(Chodera)](https://www.biorxiv.org/content/early/2015/12/30/021659):\n\n<center><img src=\"images/Chodera_1_all.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>\n\nThrowing out 500 initial samples as equilibration gives much better results even by 200 $\\tau$",
"_____no_output_____"
],
[
"### With a clever trick, you can do this automatically\n\nKey idea: If you throw away unequilibrated data, it reduces the correlation time/statistical inefficiency, **increasing $N_{eff}$**. But if you throw away equilibrated data, you're just wasting data (**decreasing $N_{eff}$**). So pick how much data to throw away to [**maximize $N_{eff}$.**](https://www.biorxiv.org/content/early/2015/12/30/021659)\n\n<center><img src=\"images/Chodera_2.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"### Basically what we're doing here is making a bias/variance tradeoff\n\nWe pick an amount of data to throw away that minimizes bias without increasing the variance much\n\n<center><img src=\"images/Chodera_3.png\" alt=\"GitHub\" style=\"width: 400px;\"/></center>\n\nThis shows the same argon case, with 500 replicates, and looks at bias and variance as a function of the amount of data discarded (color bar). Throwing out a modest amount of data reduces bias a great deal while keeping the standard error low.\n\nThe arrow marks the point automatically selected.",
"_____no_output_____"
],
[
"### It turns out `pymbar` has code for this, too, and we use it in our example\n\n```python\n# Read stored trajectory data\nd = pd.read_csv(prod_data_filename, names=[\"step\", \"U\", \"Temperature\", \"Density\"], skiprows=1)\ndensity_ts = np.array(d.Density)\n\n# Detect when it seems to have equilibrated and clip off the part prior\n[t0, g, Neff] = ts.detectEquilibration(density_ts)\ndensity_ts = density_ts[t0:]\n```",
"_____no_output_____"
],
[
"# Analyzing density results in this notebook\n\n- The above may take some time to converge\n- (You can set the threshold higher, which may lead to apparently false convergence)\n- Here we'll analyze some sample density data I've provided",
"_____no_output_____"
],
[
"## As usual we'll use matplotlib for analysis: First let's view a sample set of density data",
"_____no_output_____"
]
],
[
[
"# Prep matplotlib/pylab to display here\n%matplotlib inline\nimport pandas as pd\nfrom pylab import *\nfrom pymbar import timeseries as ts\n\n# Load density\n\nd = pd.read_csv('density_simulation/prod/phenol_toluene_cyclohexane_1_10_100_prod.csv', names=[\"step\", \"U\", \"Temperature\", \"Density\"], skiprows=1)\n\n# Plot instantaneous density\n\nxlabel('Step')\nylabel('Density (g/mL)')\nplot( np.array(d.step), np.array(d.Density), 'b-')",
"_____no_output_____"
]
],
[
[
"## Now we want to detect equilibration and throw out unequilibrated data\n\nWe'll also compute the mean density",
"_____no_output_____"
]
],
[
[
"#Detect equilibration\ndensity_ts = np.array(d.Density)\n[t0, g, Neff] = ts.detectEquilibration(density_ts)\nprint(\"Data equilibrated after snapshot number %s...\" % t0)\n\n# Clip out unequilibrated region\ndensity_ts = density_ts[t0:]\nstepnrs = np.array(d.step[t0:])\n\n# Compute mean density up to the present at each time, along with associated uncertainty\nmean_density = [ density_ts[0:i].mean() for i in range(2, len(density_ts)) ]\nmean_density_stderr = [ ]\nfor i in range(2,len(density_ts)):\n g = ts.statisticalInefficiency( density_ts[0:i])\n stderr = density_ts[0:i].std()/sqrt(i/g)\n mean_density_stderr.append(stderr)",
"22\n"
]
],
[
[
"## Finally let's graph and compare to experiment",
"_____no_output_____"
]
],
[
[
"# Plot\nfigure()\nerrorbar(stepnrs[2:], mean_density, yerr=mean_density_stderr, fmt='b-' )\nplot( [0, stepnrs.max()], [0.78, 0.78], 'k-') #Overlay experimental value for cyclohexane\nxlabel('Step')\nylabel('Density (g/mL)')\nshow()\nprint(\"Experimental density of cyclohexane is 0.78 g/mL at 20C\" ) # per PubChem, https://pubchem.ncbi.nlm.nih.gov/compound/cyclohexane#section=Solubility",
"_____no_output_____"
]
],
[
[
"### Exercise: Do the same but for YOUR short simulation\n\nThe sample data analyzed here was generated by the `density.py` script in this directory; it discards a lot amount of data to equilibration BEFORE storing production data. This means (as `detectEquilibration` finds) it's already equilibrated.\n\nAs an exercise, analyze YOUR sample simulation and find out how much of it has equilibrated/how much data needs to be discarded.",
"_____no_output_____"
],
[
"# Now let's shift gears back to some more on statistics and error analysis\n\nThe [2013 Computer Aided Drug Design Gordon Research Conference](http://lanyrd.com/2013/grc-cadd-2013/) focused specificially on statistics relating to drug discovery, error, reproducibility, etc. Slides from many talks are available online along with Python code, statistics info, etc. \n\nHere I draw especially on (with permission) slides from:\n- Woody Sherman (then Schrodinger; now Silicon Therapeutics)\n- Tom Darden (OpenEye)\n- Paul Hawkins (OpenEye)\n- John Chodera (MSKCC) and Terry Stouch (JCAMD)\n\nI'd also like to include material from Ajay Jain that I normally show, but am waiting approval, and likewise content from Paul Hawkins (OpenEye)",
"_____no_output_____"
],
[
"## Some content from Sherman\n\n<center><img src=\"images/Sherman_1.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_2.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_3.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"### Why do we not know?\n\n**No null hypothesis** -- no point of comparison as to how well we would do with some other model or just guessing or... We don't know what \"useful\" means!\n\nNull hypothesis used differently in two approaches to statistical inference (same term is used, different meaning):\n- **Significance testing** (Fisher): Null hypothesis is rejected or disproved on basis of data but never accepted or proved; magnitude of effect is unimportant\n- **Hypothesis testing** (Neyman and Pearson): Contrast with alternate hypothesis, decide between them on basis of data. Must be better than alternative hypothesis. \n\nArguably hypothesis testing is more important, though significance testing is done a lot (see last lecture).\n\nSherman's slides below focus on hypothesis testing.",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_4.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_5.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_6.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_7.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_8.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Sherman_9.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"### Follow-up note\n\nSherman explains that Knight subsequently used this as an example of lessons learned/how important statistics are in a job talk at Schrodinger and was hired.",
"_____no_output_____"
],
[
"### As a result of this, design hypothesis testing studies carefully\n\nFor hypothesis testing/model comparison, pick a null/alternate model which has SOME value\n\nBe careful not to bias your study against certain methods -- i.e. if you construct a test set to have very diverse structures, this will have implications for what types of methods will do well",
"_____no_output_____"
],
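[
"As a small illustration of giving the null model 'SOME value' (again on synthetic data, not from the slides): compare a method's error against the error of a trivial baseline that always predicts the mean. If a method does not beat that baseline, the comparison is not very informative.",
"_____no_output_____"
],
[
"# Illustrative only -- synthetic 'experimental' values and 'predictions'.\nimport numpy as np\n\nrng = np.random.default_rng(1)\ny_true = rng.normal(0.0, 1.5, size=50)             # hypothetical experimental values\ny_pred = y_true + rng.normal(0.0, 1.0, size=50)    # hypothetical model predictions\ny_null = np.full_like(y_true, y_true.mean())       # null model: always predict the mean\n\nrmse = lambda a, b: np.sqrt(np.mean((a - b)**2))\nprint('model RMSE: %.2f   null-model RMSE: %.2f' % (rmse(y_true, y_pred), rmse(y_true, y_null)))",
"_____no_output_____"
],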
[
"## Some slides from Tom Darden, OpenEye",
"_____no_output_____"
],
[
"<center><img src=\"images/Darden_1.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Darden_2.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Darden_3.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Darden_4.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"<center><img src=\"images/Darden_5.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
],
[
"## A final reminder from John Chodera and Terry Stouch\n\n<center><img src=\"images/Chodera_talk_1.png\" alt=\"GitHub\" style=\"width: 1000px;\"/></center>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4ad9e0b429667294208a33e7bb6574723dc1ce97
| 3,449 |
ipynb
|
Jupyter Notebook
|
type.ipynb
|
gaurinarayan/python
|
435ebf3292b8be83b31474ee38fc6094a3c26dfa
|
[
"MIT"
] | null | null | null |
type.ipynb
|
gaurinarayan/python
|
435ebf3292b8be83b31474ee38fc6094a3c26dfa
|
[
"MIT"
] | null | null | null |
type.ipynb
|
gaurinarayan/python
|
435ebf3292b8be83b31474ee38fc6094a3c26dfa
|
[
"MIT"
] | null | null | null | 17.245 | 68 | 0.407075 |
[
[
[
"a = np.arange(9,17)\nprint(a)\ns = pd.Series(data = a**2,index=a)\ns",
"[ 9 10 11 12 13 14 15 16]\n"
],
[
"import pandas as pd\nimport numpy as np\n\nsection=['A','B','C','D']\n\ncontri=[6700,5600,5000,5200]\n\ns=pd.Series(data=contri, index=section, dtype = np.float64)\n\nprint(s)\n",
"A 6700.0\nB 5600.0\nC 5000.0\nD 5200.0\ndtype: float64\n"
],
[
"a = 12\nb = 10\nprint(a)\nprint(b)\nprint(a+b)\nprint(a*b)\nprint(a/b)\nprint(a-b)\nprint(a%b)\nprint(a//b)\nprint(2**6)",
"12\n10\n22\n120\n1.2\n2\n2\n1\n64\n"
],
[
"s1 = pd.Series([11, 12, 13])\nprint(s1)",
"0 11\n1 12\n2 13\ndtype: int64\n"
],
[
"s2 = pd.Series([11.1, 12.2, 13.3])\nprint(s2)",
"0 11.1\n1 12.2\n2 13.3\ndtype: float64\n"
],
[
"s3 = pd.Series([11, 12, 13.3])\nprint(s3)",
"0 11.0\n1 12.0\n2 13.3\ndtype: float64\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ad9e43860eddfab826641a13c1fea592b61153b
| 27,705 |
ipynb
|
Jupyter Notebook
|
Cell_Migration.ipynb
|
shc443/Cell-Migration-in-Microfluidic-Mazes
|
964ccce47d2776ce8d156ae165d7f66399450ac5
|
[
"MIT"
] | null | null | null |
Cell_Migration.ipynb
|
shc443/Cell-Migration-in-Microfluidic-Mazes
|
964ccce47d2776ce8d156ae165d7f66399450ac5
|
[
"MIT"
] | 3 |
2021-10-31T18:51:26.000Z
|
2021-11-16T17:35:02.000Z
|
Cell_Migration.ipynb
|
shc443/Cell-Migration-in-Microfluidic-Mazes
|
964ccce47d2776ce8d156ae165d7f66399450ac5
|
[
"MIT"
] | 1 |
2021-11-06T20:38:44.000Z
|
2021-11-06T20:38:44.000Z
| 97.897527 | 17,724 | 0.778415 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport math\nimport json\nimport os\n\nclass cell_migration3:\n \n def __init__(self, L ,W, H, N0, C0, Uc, Un, Dc, Dn, Qcb0, Qcd0, Qn, A0, dx, dt):\n #W = 10 #width\n #L = 850 #length\n #H = 17 #height\n L_ = 850\n V = L_*H*W\n M = 20 #number of tubes\n L1= V/(M*W*H)\n self.eps = 0.01\n \n self.d1 = Dc/Dn\n self.d2 = 0 #Un*L/Dc\n self.d3 = Qn*C0*L**2/Dn\n self.e1 = Uc*L/Dc\n self.e2 = A0*N0/Dc\n self.e3 = Qcb0*N0*C0*L**2/Dc\n self.e4 = Qcd0*L**2/(Dc*N0)\n \n self.l_ = L/L_ #L = L^\n self.l1 = L1/L_\n \n self.dx = dx\n self.dt = dt\n \n self.a = int((self.l_+self.l1)/dx)#end of the real tube\n self.b = int(1/dt) # n of step for iteration -> time\n \n self.e = int(self.l_/dx) #end of our experiment: end of real+img. tube\n \n #concentration of cell\n self.c = pd.DataFrame(np.zeros([self.a+1, self.b+1]))\n self.c.iloc[:,0] = 0\n self.c.iloc[0,1:] = 1\n \n #concentration of nutrient \n self.n = pd.DataFrame(np.zeros([self.a+1, self.b+1]))\n self.n.iloc[:int(1/dx),0] = 0\n self.n.iloc[0,:] = 0\n self.n.iloc[int(1/dx):,:] = 1 \n \n \n def f1(self,i):\n f = self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2 - self.e2*self.dt/(4*self.dx**2) \\\n *(self.n.iloc[:,i].shift(-1) - self.n.iloc[:,i].shift(1))\n return f\n\n def g1(self,i):\n g = (1+2*self.dt/self.dx**2 - self.e2*self.dt/self.dx**2 * \\\n (self.n.iloc[:,i].shift(-1) -2*self.n.iloc[:,i] + self.n.iloc[:,i].shift(1)) \\\n - self.e3*self.dt*self.n.iloc[:,i]*(1-self.c.iloc[:,i]) + self.e4*self.dt/(self.c.iloc[:,i]+self.eps))\n return g\n\n def k1(self,i):\n k = (-self.e1*self.dt/(2*self.dx) -self.dt/self.dx**2 + self.e2*self.dt/(4*self.dx**2)\\\n *(self.n.iloc[:,i].shift(-1) - self.n.iloc[:,i].shift(1)))\n return k\n # x => 1\n\n def f2(self,i):\n f =self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2 \n return f\n\n def g2(self,i):\n f = 1 + 2*self.dt/self.dx**2 + self.e3*(1-self.c.iloc[self.e+1:,i]) + self.e4*self.dt/(1+self.eps) \n return f\n\n def k2(self,i):\n f = -self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2\n return f\n\n def n_new(self,i):\n phi = self.d3 * self.dx**2 * self.c.values[1:self.e+1,i] + 2\n A = (-np.diag(phi) + np.diag(np.ones(self.e-1),1) + np.diag(np.ones(self.e-1),-1))\n A[-1] = np.append(np.zeros(self.e-1),1)\n return np.linalg.solve(A, np.append(np.zeros(self.e-1),1))\n\n def n_new2(self,i):\n phi = self.d3 * self.dx**2 * self.c + 2\n A = (-np.diag(phi) + np.diag(np.ones(self.e-1),1) + np.diag(np.ones(self.e-1),-1))\n A[-1] = np.append(np.zeros(self.e-1),1)\n return np.linalg.solve(A, np.append(np.zeros(self.e-1),1))\n\n def n_new3(self,i):\n phi = self.d3 * self.dx**2 * self.c.values[1:self.e+1,i] + 2\n A = (-np.diag(phi) + np.diag(np.ones(self.e-1),1) + np.diag(np.ones(self.e-1),-1))\n A[-1] = np.append(np.zeros(self.e-1),1)\n return A\n\n def new_c(self,j):\n f_diag = self.f1(j)\n f_diag[self.e] = (self.e1*self.dt/(2*self.dx) - self.dt/self.dx**2 - self.e2*self.dt/(4*self.dx**2)*(self.n.iloc[self.e+1,j] - self.n.iloc[self.e-1,j]))\n f_diag[self.e+1:] = self.f2(j)\n\n #g1\n g_diag = self.g1(j)\n g_diag[self.e] = (1+2*self.dt/self.dx**2 - self.e2*self.dt/self.dx**2\\\n *(self.n.iloc[self.e+1,j] - 2*self.n.iloc[self.e,j] + self.n.iloc[self.e-1,j]) \\\n - self.e3*self.dt*self.n.iloc[self.e,j]*(1-self.c.iloc[self.e,j]) + self.e4*self.dt/(self.n.iloc[self.e,j]+self.eps))\n g_diag[self.e+1:] = self.g2(j)\n g_diag[self.a+1] = 1\n\n #k1\n k_diag = self.k1(j).shift(1)\n k_diag[self.e] = (-self.e1*self.dt/(2*self.dx) 
-self.dt/self.dx**2 + self.e2*self.dt/(4*self.dx**2)*(self.n.iloc[self.e+1,j] - self.n.iloc[self.e-1,j])) \n k_diag[self.e+1:] = self.k2(j)\n k_diag[self.a+1] = 0\n\n c_df_test = pd.DataFrame(np.zeros(self.c.shape))\n c_df_test = c_df_test + self.c.values\n c_test = c_df_test.iloc[1:,j-1].values\n c_test[0] = c_test[0] - self.k2(j)\n c_test = np.append(c_test,0)\n\n U = np.diag(g_diag.dropna()) + np.diag(k_diag.dropna(),-1) + np.diag(f_diag.dropna(),1)\n U[self.a, self.a-2] = -1\n\n return np.linalg.solve(U, c_test)[:-1]\n \n def compute_all(self):\n for cq in range(0,self.b):\n self.n.iloc[1:self.e+1,cq+1] = self.n_new(cq)[:]\n self.c.iloc[1:,cq+1] = self.new_c(cq)[:]\n\n def compute_all_all(self):\n comp = self.compute_all(var1,var2)\n return com.sum()\n \n def avg_channel(self):\n return self.c.values[1:self.e,1:self.a].sum() / (self.e*(self.a))\n \n def avg_entering(self):\n return self.c.values[self.e,1:self.a].sum() / (self.a)\n \n def plotting_conc(self,name):\n fig_n = sns.lineplot(x = np.tile(np.arange(0,cm.a+1),cm.b+1), y = pd.melt(cm.n).value, hue = np.repeat(np.arange(0,cm.a+1),cm.b+1),palette = \"Blues\")\n\n fig_c = sns.lineplot(x = np.tile(np.arange(0,cm.a+1),cm.b+1), y = pd.melt(cm.c).value, hue = np.repeat(np.arange(0,cm.a+1),cm.b+1),palette = \"Blues\")\n\n plt.xlabel(\"x\")\n plt.ylabel(\"concentration\")\n plt.title(\"Cell & Nutrient Concentration\")\n fig_n.legend_.remove()\n \n plt.plot(np.arange(self.a), np.zeros(self.a)+self.avg_channel(), linestyle='dashed')\n plt.plot(np.arange(self.a), np.zeros(self.a)+self.avg_entering(), linestyle='-.')\n \n #plt.text(self.a+self.b-9,self.avg_channel()-0.1, 'Avg # of Cells in a Channel')\n #plt.text(self.a+self.b-9,self.avg_entering()-0.1, 'Avg # of Cells entering')\n plt.savefig(name)\n \n def get_n(self):\n return self.n\n\n def get_c(self):\n return self.c\n",
"_____no_output_____"
],
[
"L = 100 #length\n\nW = 10 #width\nL_ = 850\nH = 17 #height\n# V = L*H*W\n\n\n'''\nis it has to be L_ or L? for the V\n'''\nV = L_*H*W \nM = 20 #number of tubes\n\nN0 = 1.204 #mol/um^3\nC0 = 5*10**-4 #cells/um^2\nUc = 2 #um/min\nUn = 0\nDc = 1\nDn = 1.8 #um^2/min\nQcb0 = 1\nQcd0 = 1\nQn = 1\nA0 = 1\n\nd1 = Dc/Dn\nd2 = Un*L/Dc # = 0\nd3 = Qn*C0*L**2 / Dn\n\ne1 = Uc*L/Dc\ne2 = A0*N0/Dc\ne3 = Qcb0*N0*C0*L**2/Dc\ne4 = Qcd0*L**2/Dc/N0\nL1 = V/(M*W*H)\nl_ = L/L_\nl1 = L1/L_",
"_____no_output_____"
],
[
"dx = 0.05\ndt = 0.05\n\ncm = cell_migration3(10000, W, H, N0, C0, Uc, Un, Dc, Dn, Qcb0, Qcd0, Qn, A0, dx, dt)\n\ncm.compute_all()\ncm.c.round(4)\ncm.plotting_conc('hi')\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4ad9e5ff813ff6bcb48d6974cc3a2af03a75a677
| 4,240 |
ipynb
|
Jupyter Notebook
|
sentiment_BERT/Untitled.ipynb
|
ElroyHimself/bert-sentiment
|
fe1ceb58e861db33513c02f4e06de349419a8056
|
[
"MIT"
] | null | null | null |
sentiment_BERT/Untitled.ipynb
|
ElroyHimself/bert-sentiment
|
fe1ceb58e861db33513c02f4e06de349419a8056
|
[
"MIT"
] | null | null | null |
sentiment_BERT/Untitled.ipynb
|
ElroyHimself/bert-sentiment
|
fe1ceb58e861db33513c02f4e06de349419a8056
|
[
"MIT"
] | null | null | null | 51.707317 | 1,575 | 0.635613 |
[
[
[
"import os\nimport tokenizers\n\nMAX_LEN = 128\nTRAIN_BATCH_SIZE = 32\nVALID_BATCH_SIZE = 16\nEPOCHS = 10\nBERT_PATH = \"..input/bert-base-uncased/\"\nMODEL_PATH = \"model.bin\"\nTRAINING_FILE = \"../input/train.csv\"\nTOKENIZER = tokenizers.BertWordPieceTokenizer(\n os.path.join(BERT_PATH,\"vocab.txt\"),\n #lowercase=True\n)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4ad9f517c590e11ffd9c2e4b9505b8ecdab8bbe3
| 71,040 |
ipynb
|
Jupyter Notebook
|
Supplemental/Fig1.ipynb
|
gsedor/nsclc-model
|
5653ff9b8ea20016b4d85572aa679b7a5367e09d
|
[
"MIT"
] | null | null | null |
Supplemental/Fig1.ipynb
|
gsedor/nsclc-model
|
5653ff9b8ea20016b4d85572aa679b7a5367e09d
|
[
"MIT"
] | null | null | null |
Supplemental/Fig1.ipynb
|
gsedor/nsclc-model
|
5653ff9b8ea20016b4d85572aa679b7a5367e09d
|
[
"MIT"
] | null | null | null | 229.902913 | 61,896 | 0.901914 |
[
[
[
"import numpy as np\nexp = np.exp\narange = np.arange\nln = np.log\nfrom datetime import *\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import patches\n\n# import plotly.plotly as py\n# import plotly.graph_objs as go\n\nfrom scipy.stats import norm\nfrom scipy import interpolate as interp\npdf = norm.pdf\ncdf = norm.cdf\nppf = norm.ppf\n\nfrom scipy import stats\nfrom scipy import special\nerf = special.erf\n\nimport pandas as pd\n# import palettable\nimport seaborn as sns\ncp = sns.color_palette()\n\nfrom lifelines import KaplanMeierFitter\nfrom sklearn.metrics import brier_score_loss\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import mixture\nfrom sklearn import preprocessing",
"_____no_output_____"
],
[
"nsclc = pd.read_csv('nsclc_data.csv')\n\nlc_df = pd.read_csv('lc_data.csv')",
"_____no_output_____"
],
[
"def create_kde(array, bandwidth=None):\n \"\"\" calculating KDE and CDF using scipy \"\"\"\n if bandwidth == None:\n bw = 'scott'\n else:\n bw = bandwidth\n kde = stats.gaussian_kde(dataset=array,bw_method=bw)\n \n num_test_points=200\n x = np.linspace(0,np.max(array)*1.2,num_test_points)\n kdens=kde.pdf(x)\n \n cdf=np.zeros(shape=num_test_points)\n for i in range(num_test_points):\n cdf[i] = kde.integrate_box_1d(low=0,high=x[i])\n \n return x,kdens,cdf\n\n\ndef calc_cdf(array,var,bandwidth=None):\n if bandwidth == None:\n bw = 1.2*array.std()*np.power(array.size,-1/5)\n else:\n bw = bandwidth\n kde=stats.gaussian_kde(dataset=array,bw_method=bw)\n return kde.integrate_box_1d(low=0,high=var)\n\n\n",
"_____no_output_____"
]
],
[
[
"## fig 1",
"_____no_output_____"
]
],
[
[
"from matplotlib import patches\nfrom matplotlib import path\nPath=path.Path\n\ndef bracket(xi, y, dy=.1, dx = .04,tail=.1):\n\n yi = y - dy/2\n xf = xi+dx\n yf = yi+dy\n vertices = [(xi,yi),(xf,yi),(xf,yf),(xi,yf)]+[(xf,y),(xf+tail,y)]\n codes = [Path.MOVETO] + [Path.LINETO]*3 + [Path.MOVETO] + [Path.LINETO]\n return Path(vertices,codes)\n\ndef hbracket(x, yi, dx=.1, dy = .04,tail=.1):\n\n xi = x - dx/2\n xf = xi+dx\n yf = yi-dy\n vertices = [(xi,yi),(xi,yf),(xf,yf),(xf,yi)]+[(x,yf),(x,yf-tail)]\n codes = [Path.MOVETO] + [Path.LINETO]*3 + [Path.MOVETO] + [Path.LINETO]\n return Path(vertices,codes)\n\ndef double_arrow(x,y,length,orient,endlength=.04,r=10):\n l=length\n if orient == 'horz':\n x1= x - l/2\n x2 = x + l/2\n el = endlength/2\n vertices = [(x1,y),(x2,y)]+[(x1+l/r,y+el),(x1,y),(x1+l/r,y-el)]+[(x2-l/r,y+el),(x2,y),(x2-l/r,y-el)]\n else:\n y1= y - l/2\n y2 = y + l/2\n el = endlength/2\n vertices = [(x,y1),(x,y2)]+[(x-el,y1+l/r),(x,y1),(x+el,y1+l/r)]+[(x+el,y2-l/r),(x,y2),(x-el,y2-l/r)]\n codes = [Path.MOVETO,Path.LINETO]+[Path.MOVETO]+[Path.LINETO]*2+[Path.MOVETO]+[Path.LINETO]*2\n return Path(vertices,codes)\n\n",
"_____no_output_____"
],
[
"div_cmap = sns.light_palette((0,.5,.8),n_colors=20)#as_cmap=True)\n#sns.palplot(div_cmap, size = .8)\n\ncolors = [(0,.5,.8),(.98,.98,.98),(.7,.1,.1)]\n# sns.palplot(sns.blend_palette(colors,n_colors=20))\n\ncolmap=sns.blend_palette(colors,as_cmap=True)",
"_____no_output_____"
],
[
"\nfig,axes = plt.subplots(nrows=1,ncols=3,figsize=(18,6))\n\naxes[0].set_title('(A)', loc='left')\naxes[1].set_title('(B)', loc='left')\naxes[2].set_title('(C)', loc='left')\n\nax=axes[0]\n\nr = nsclc.rsi\nd = 2\nbeta = 0.05\n\nx, k, c = create_kde(r)\nax.plot(x,k)\n\nbins=np.arange(0,1,.04)\nhist = np.histogram(r,bins=bins,density=True)\n\nbar_width = (hist[1][1]-hist[1][0])*.7\nax.bar(hist[1][:-1],hist[0],width=bar_width,alpha=.6,color=(.6,.6,.6))\nax.set_yticks([])\n\n\"\"\"-----------------------------------------------------------------------------------------------\"\"\"\n\nax = axes[1]\n\nx = lc_df.new_dose_5070.values\nx.sort()\nrange60 = range(1,61)\nx2 = lc_df.new_dose.values\nx2.sort()\ndose_5070 = lc_df.new_dose_5070.sort_values()\nfull70 = np.full(len(x),70)\n\nax.scatter(range60,x2, s = 80, c=x2,cmap=colmap,edgecolors='k',zorder=10) #label = 'RxRSI > 70')\nax.scatter(range60,x,edgecolor = 'k',facecolor='white', marker = 'o', s = 60, zorder = 5, label = 'RxRSI scaled\\nto 50-70')\n\nax.hlines(y = [50,70],xmin = [-2,-2],xmax=[62,62], color = 'k',lw=1.5,zorder=0)\nax.fill_between([-2,62],70,50, color = (.95,.95,.95),alpha=.2)\n\nj = np.where(x2<50)[0][-1]\nk = np.where(x2>70)[0][0]\nax.vlines(range60[k:],ymin = full70[k:], ymax = x2[k:], lw = .5, linestyle = '--')\nax.vlines(x = range60[:j], ymin = x2[:j], ymax = np.full(j,50), lw = .5, linestyle = '--')\n\nax.set_xticklabels('')\nax.set_ylim((10,100))\nax.set_xlim(-1,61)\nax.set_ylabel('RxRSI (Gy)')\nax.set_xlabel('Patient IDs')\nax.set_xticks([])\n\n\"\"\"-------------------------------------------------------------------------------\"\"\"\n\nax=axes[2]\n\nr = nsclc.rsi\nd = 2\nbeta = 0.05\n\n# for SF2 alpha\nn = 1\nalpha_tcc = np.log(r)/(-n*d) - beta*d\nrxdose_tcc = 33/(alpha_tcc+beta*d)\nrxdose_tcc=rxdose_tcc.values\n\n\"\"\" plotting histograms \"\"\"\n\nbinlist=list(np.arange(0,150,2))+[300]\n\n\"\"\" <60 range \"\"\"\nxdata = rxdose_tcc[np.where(rxdose_tcc<60)]\nwts = np.full(len(xdata),.0002)\nax.hist(xdata,bins = binlist,\n alpha=.6,#ec = 'k',\n color=cp[0], \n weights = wts)\n\"\"\" 60-74 range \"\"\"\nxdata = rxdose_tcc[np.where((rxdose_tcc>60)&(rxdose_tcc<74))]\nwts = np.full(len(xdata),.0002)\nax.hist(xdata,bins = binlist,\n alpha=.8,#ec = 'k',\n color=(.4,.4,.4), \n weights = wts,zorder=5)\n\"\"\" >74 range \"\"\"\nxdata = rxdose_tcc[np.where((rxdose_tcc>74))] #&(rxdose_tcc<80))]\nwts = np.full(len(xdata),.0002)\nax.hist(xdata,bins = binlist,\n alpha=.7,#ec = 'k',\n color=cp[3], \n weights = wts)\n\nrxdose_kde = create_kde(rxdose_tcc,bandwidth=.28)\nax.plot(rxdose_kde[0], rxdose_kde[1] , c=(.2,.2,.3),lw=1,ls='--',label = 'KDE')\n\nax.set_xlim(-2,130)\nax.set_yticks([])\nax.set_xlabel('RxRSI for TCC Lung')\n\nfig.subplots_adjust(left=.06, right=.95, wspace=.25)\n",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ad9f7474c4074636e9e9dfa0dc3889f38fd8b4c
| 48,543 |
ipynb
|
Jupyter Notebook
|
2_numpy_linearRegression_with_CostFn.ipynb
|
tomwilde/100DaysOfML
|
7c260c0618d479c60507be2451f1d1459cca5587
|
[
"MIT"
] | 1 |
2019-09-17T08:27:11.000Z
|
2019-09-17T08:27:11.000Z
|
2_numpy_linearRegression_with_CostFn.ipynb
|
tomwilde/100DaysOfMLCode
|
7c260c0618d479c60507be2451f1d1459cca5587
|
[
"MIT"
] | null | null | null |
2_numpy_linearRegression_with_CostFn.ipynb
|
tomwilde/100DaysOfMLCode
|
7c260c0618d479c60507be2451f1d1459cca5587
|
[
"MIT"
] | null | null | null | 167.968858 | 22,974 | 0.867602 |
[
[
[
"[View in Colaboratory](https://colab.research.google.com/github/tomwilde/100DaysOfMLCode/blob/master/2_numpy_linearRegression_with_CostFn.ipynb)",
"_____no_output_____"
]
],
[
[
"!pip install -U -q PyDrive\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas\nimport io\n\n\n# Install the PyDrive wrapper & import libraries.\n# This only needs to be done once per notebook.\n!pip install -U -q PyDrive\nfrom pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nfrom oauth2client.client import GoogleCredentials\n\n# Authenticate and create the PyDrive client.\n# This only needs to be done once per notebook.\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\ndrive = GoogleDrive(gauth)\n\n",
"_____no_output_____"
],
[
"# from: https://ml-cheatsheet.readthedocs.io/en/latest/linear_regression.html#cost-function\n#\n# We need a cost fn and its derivative...",
"_____no_output_____"
],
[
"# Download a file based on its file ID.\n#\n# A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz\nfile_id = '1_d2opSoZgMsSeoQUjtOcRQj5l0zO-Upi'\ndownloaded = drive.CreateFile({'id': file_id})\n#print('Downloaded content \"{}\"'.format(downloaded.GetContentString()))\n\ndataset = pandas.read_csv(io.StringIO(downloaded.GetContentString())).as_matrix()",
"_____no_output_____"
],
[
"def cost_function(X, y, weight, bias):\n n = len(X)\n total_error = 0.0\n for i in range(n):\n total_error += (y[i] - (weight*X[i] + bias))**2\n return total_error / n",
"_____no_output_____"
],
[
"def update_weights(X, y, weight, bias, alpha):\n weight_deriv = 0\n bias_deriv = 0\n n = len(X)\n\n for i in range(n):\n # Calculate partial derivatives\n # -2x(y - (mx + b))\n weight_deriv += -2*X[i] * (y[i] - (weight * X[i] + bias))\n\n # -2(y - (mx + b))\n bias_deriv += -2*(y[i] - (weight * X[i] + bias))\n\n # We subtract because the derivatives point in direction of steepest ascent\n weight -= (weight_deriv / n) * alpha\n bias -= (bias_deriv / n) * alpha\n\n return weight, bias",
"_____no_output_____"
],
[
"def train(X, y, weight, bias, alpha, iters):\n cost_history = []\n\n for i in range(iters):\n weight,bias = update_weights(X, y, weight, bias, alpha)\n\n #Calculate cost for auditing purposes\n cost = cost_function(X, y, weight, bias)\n # cost_history.append(cost)\n\n # Log Progress\n if i % 10 == 0:\n print \"iter: \"+str(i) + \" weight: \"+str(weight) +\" bias: \"+str(bias) + \" cost: \"+str(cost)\n\n return weight, bias #, cost_history",
"_____no_output_____"
],
[
"# work out \ny = dataset[:,4].reshape(200,1)\nX = dataset[:,1].reshape(200,1)\n\nm = 0\nc = 0\n\nalpha = 0.1\niters = 100\n\n# normalise the data\ny = y/np.linalg.norm(y, ord=np.inf, axis=0, keepdims=True)\nX = X/np.linalg.norm(X, ord=np.inf, axis=0, keepdims=True)",
"_____no_output_____"
],
[
"weight, bias = train(X, y, m, c, alpha, iters)",
"iter: 0 weight: [0.06024246] bias: [0.10387037] cost: [0.18089265]\niter: 10 weight: [0.25442336] bias: [0.38069542] cost: [0.02054556]\niter: 20 weight: [0.29515622] bias: [0.37969695] cost: [0.01875648]\niter: 30 weight: [0.32379468] bias: [0.36560545] cost: [0.0177426]\niter: 40 weight: [0.34848476] bias: [0.35255119] cost: [0.01696661]\niter: 50 weight: [0.37007888] bias: [0.3410839] cost: [0.01637189]\niter: 60 weight: [0.38898224] bias: [0.33104279] cost: [0.01591609]\niter: 70 weight: [0.40553104] bias: [0.32225223] cost: [0.01556676]\niter: 80 weight: [0.42001862] bias: [0.31455656] cost: [0.01529904]\niter: 90 weight: [0.43270172] bias: [0.30781941] cost: [0.01509385]\n"
],
[
"_ = plt.plot(X,y, 'o', [0, 1], [bias, weight + bias], '-')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ada075c4a58b8518bd3ca82f25f3787362977fc
| 86,398 |
ipynb
|
Jupyter Notebook
|
ipython/XYZ + RDKitMol.ipynb
|
IannLiu/RDMC
|
bfaf5ff5529b7f6cd73f21178381aadeba8c2e76
|
[
"MIT"
] | 7 |
2021-12-08T18:31:29.000Z
|
2022-02-11T18:45:35.000Z
|
ipython/XYZ + RDKitMol.ipynb
|
IannLiu/RDMC
|
bfaf5ff5529b7f6cd73f21178381aadeba8c2e76
|
[
"MIT"
] | 10 |
2021-06-27T20:53:03.000Z
|
2022-02-22T15:56:29.000Z
|
ipython/XYZ + RDKitMol.ipynb
|
IannLiu/RDMC
|
bfaf5ff5529b7f6cd73f21178381aadeba8c2e76
|
[
"MIT"
] | 3 |
2021-09-27T07:57:12.000Z
|
2022-02-01T21:20:22.000Z
| 272.548896 | 19,840 | 0.920079 |
[
[
[
"# A demo of XYZ and RDKitMol",
"_____no_output_____"
],
[
"There is no easy way to convert xyz to RDKit Mol/RWMol. Here RDKitMol shows a possibility by using openbabel / method from Jensen et al. [1] as a molecule perception backend. \n\n[1] https://github.com/jensengroup/xyz2mol. ",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nsys.path.append(os.path.dirname(os.path.abspath('')))\n\nfrom rdmc.mol import RDKitMol",
"_____no_output_____"
]
],
[
[
"### 1. An example of xyz str block",
"_____no_output_____"
]
],
[
[
"######################################\n# INPUT\nxyz=\"\"\"14\n\nC -1.77596 0.55032 -0.86182\nC -1.86964 0.09038 -2.31577\nH -0.88733 1.17355 -0.71816\nH -1.70996 -0.29898 -0.17103\nO -2.90695 1.36613 -0.53334\nC -0.58005 -0.57548 -2.76940\nH -0.35617 -1.45641 -2.15753\nH 0.26635 0.11565 -2.71288\nH -0.67469 -0.92675 -3.80265\nO -2.92111 -0.86791 -2.44871\nH -2.10410 0.93662 -2.97107\nO -3.87923 0.48257 0.09884\nH -4.43402 0.34141 -0.69232\nO -4.16782 -0.23433 -2.64382\n\"\"\"\n\nxyz_wo_header = \"\"\"O 2.136128 0.058786 -0.999372\nC -1.347448 0.039725 0.510465\nC 0.116046 -0.220125 0.294405\nC 0.810093 0.253091 -0.73937\nH -1.530204 0.552623 1.461378\nH -1.761309 0.662825 -0.286624\nH -1.923334 -0.892154 0.536088\nH 0.627132 -0.833978 1.035748\nH 0.359144 0.869454 -1.510183\nH 2.513751 -0.490247 -0.302535\"\"\"\n######################################",
"_____no_output_____"
]
],
[
[
"### 2. Use pybel to generate a OBMol from xyz",
"_____no_output_____"
],
[
"pybel backend, `header` to indicate if the str includes lines of atom number and title.",
"_____no_output_____"
]
],
[
[
"rdkitmol = RDKitMol.FromXYZ(xyz, backend='openbabel', header=True)\nrdkitmol",
"_____no_output_____"
]
],
[
[
"Please correctly use `header` arguments, otherwise molecule perception can be problematic",
"_____no_output_____"
]
],
[
[
"rdkitmol = RDKitMol.FromXYZ(xyz_wo_header, backend='openbabel', header=False)\nrdkitmol",
"_____no_output_____"
]
],
[
[
"Using `jensen` backend. For most cases, Jensen's method returns the same molecule as using `pybel` backend",
"_____no_output_____"
]
],
[
[
"rdkitmol = RDKitMol.FromXYZ(xyz, backend='jensen', header=True)\nrdkitmol",
"_____no_output_____"
]
],
[
[
"Here some options for Jensen et al. method are listed. The nomenclature is kept as it is in the original API.",
"_____no_output_____"
]
],
[
[
"rdkitmol = RDKitMol.FromXYZ(xyz, backend='jensen',\n header=True,\n allow_charged_fragments=False, # radical => False\n use_graph=False, # accelerate for larger molecule but needs networkx as backend\n use_huckel=True,\n embed_chiral=True)\nrdkitmol",
"_____no_output_____"
]
],
[
[
"### 3. Check the xyz of rdkitmol conformer",
"_____no_output_____"
]
],
[
[
"rdkitmol.GetConformer().GetPositions()",
"_____no_output_____"
]
],
[
[
"### 4. Export xyz",
"_____no_output_____"
]
],
[
[
"print(rdkitmol.ToXYZ(header=False))",
"C -1.775960 0.550320 -0.861820\nC -1.869640 0.090380 -2.315770\nH -0.887330 1.173550 -0.718160\nH -1.709960 -0.298980 -0.171030\nO -2.906950 1.366130 -0.533340\nC -0.580050 -0.575480 -2.769400\nH -0.356170 -1.456410 -2.157530\nH 0.266350 0.115650 -2.712880\nH -0.674690 -0.926750 -3.802650\nO -2.921110 -0.867910 -2.448710\nH -2.104100 0.936620 -2.971070\nO -3.879230 0.482570 0.098840\nH -4.434020 0.341410 -0.692320\nO -4.167820 -0.234330 -2.643820\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ada07d7b66c15fe6ff2b3dfb3b4fd0fe5657626
| 7,147 |
ipynb
|
Jupyter Notebook
|
class4/model_save.ipynb
|
janewen134/tensorflow_self_improvement
|
7872b3571f822a513c532d166cf2058b21fe7a6b
|
[
"MIT"
] | null | null | null |
class4/model_save.ipynb
|
janewen134/tensorflow_self_improvement
|
7872b3571f822a513c532d166cf2058b21fe7a6b
|
[
"MIT"
] | null | null | null |
class4/model_save.ipynb
|
janewen134/tensorflow_self_improvement
|
7872b3571f822a513c532d166cf2058b21fe7a6b
|
[
"MIT"
] | null | null | null | 33.087963 | 254 | 0.464391 |
[
[
[
"<a href=\"https://colab.research.google.com/github/janewen134/tensorflow_self_improvement/blob/master/class4/model_save.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport os",
"_____no_output_____"
],
[
"mnist = tf.keras.datasets.mnist\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n"
],
[
"model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n])",
"_____no_output_____"
],
[
"model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),\n metrics=['sparse_categorical_accuracy'])",
"_____no_output_____"
],
[
"checkpoint_save_path = \"./checkpoint/mnist.ckpt\"\nif os.path.exists(checkpoint_save_path + '.index'):\n print('-------------load the model-----------------')\n model.load_weights(checkpoint_save_path)",
"-------------load the model-----------------\n"
],
[
"cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,\n save_weights_only=True,\n save_best_only=True)",
"_____no_output_____"
],
[
"history = model.fit(x_train, y_train, batch_size=32, epochs=5, validation_data=(x_test, y_test), validation_freq=1,\n callbacks=[cp_callback])",
"Epoch 1/5\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.2569 - sparse_categorical_accuracy: 0.9268 - val_loss: 0.1372 - val_sparse_categorical_accuracy: 0.9584\nEpoch 2/5\n1875/1875 [==============================] - 3s 2ms/step - loss: 0.1120 - sparse_categorical_accuracy: 0.9675 - val_loss: 0.1012 - val_sparse_categorical_accuracy: 0.9694\nEpoch 3/5\n1875/1875 [==============================] - 3s 2ms/step - loss: 0.0768 - sparse_categorical_accuracy: 0.9776 - val_loss: 0.0830 - val_sparse_categorical_accuracy: 0.9733\nEpoch 4/5\n1875/1875 [==============================] - 3s 2ms/step - loss: 0.0575 - sparse_categorical_accuracy: 0.9823 - val_loss: 0.0773 - val_sparse_categorical_accuracy: 0.9762\nEpoch 5/5\n1875/1875 [==============================] - 3s 2ms/step - loss: 0.0455 - sparse_categorical_accuracy: 0.9857 - val_loss: 0.0763 - val_sparse_categorical_accuracy: 0.9770\n"
],
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense (Dense) (None, 128) 100480 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 101,770\nTrainable params: 101,770\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ada0cbab15a88dba4662c73e50f2fa5fde1df26
| 9,286 |
ipynb
|
Jupyter Notebook
|
doc/ipython-notebooks/converter/Tapkee.ipynb
|
Impulse-Machine/shogun
|
a83df03f6503b852b5167240259bd39ceb0318e2
|
[
"BSD-3-Clause"
] | 2,753 |
2015-01-02T11:34:13.000Z
|
2022-03-25T07:04:27.000Z
|
doc/ipython-notebooks/converter/Tapkee.ipynb
|
Impulse-Machine/shogun
|
a83df03f6503b852b5167240259bd39ceb0318e2
|
[
"BSD-3-Clause"
] | 2,404 |
2015-01-02T19:31:41.000Z
|
2022-03-09T10:58:22.000Z
|
doc/ipython-notebooks/converter/Tapkee.ipynb
|
Impulse-Machine/shogun
|
a83df03f6503b852b5167240259bd39ceb0318e2
|
[
"BSD-3-Clause"
] | 1,156 |
2015-01-03T01:57:21.000Z
|
2022-03-26T01:06:28.000Z
| 43.190698 | 786 | 0.646672 |
[
[
[
"# Dimensionality Reduction with the Shogun Machine Learning Toolbox",
"_____no_output_____"
],
[
"#### *By Sergey Lisitsyn ([lisitsyn](https://github.com/lisitsyn)) and Fernando J. Iglesias Garcia ([iglesias](https://github.com/iglesias)).*",
"_____no_output_____"
],
[
"This notebook illustrates <a href=\"http://en.wikipedia.org/wiki/Unsupervised_learning\">unsupervised learning</a> using the suite of dimensionality reduction algorithms available in Shogun. Shogun provides access to all these algorithms using [Tapkee](http://tapkee.lisitsyn.me/), a C++ library especialized in <a href=\"http://en.wikipedia.org/wiki/Dimensionality_reduction\">dimensionality reduction</a>.",
"_____no_output_____"
],
[
"## Hands-on introduction to dimension reduction",
"_____no_output_____"
],
[
"First of all, let us start right away by showing what the purpose of dimensionality reduction actually is. To this end, we will begin by creating a function that provides us with some data:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\ndef generate_data(curve_type, num_points=1000):\n\tif curve_type=='swissroll':\n\t\ttt = np.array((3*np.pi/2)*(1+2*np.random.rand(num_points)))\n\t\theight = np.array((np.random.rand(num_points)-0.5))\n\t\tX = np.array([tt*np.cos(tt), 10*height, tt*np.sin(tt)])\n\t\treturn X,tt\n\tif curve_type=='scurve':\n\t\ttt = np.array((3*np.pi*(np.random.rand(num_points)-0.5)))\n\t\theight = np.array((np.random.rand(num_points)-0.5))\n\t\tX = np.array([np.sin(tt), 10*height, np.sign(tt)*(np.cos(tt)-1)])\n\t\treturn X,tt\n\tif curve_type=='helix':\n\t\ttt = np.linspace(1, num_points, num_points).T / num_points\n\t\ttt = tt*2*np.pi\n\t\tX = np.r_[[(2+np.cos(8*tt))*np.cos(tt)],\n\t\t [(2+np.cos(8*tt))*np.sin(tt)],\n\t\t [np.sin(8*tt)]]\n\t\treturn X,tt",
"_____no_output_____"
]
],
[
[
"The function above can be used to generate three-dimensional datasets with the shape of a [Swiss roll](http://en.wikipedia.org/wiki/Swiss_roll), the letter S, or an helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process as we reduce the number of features from 3 to 2. The question that arises is: what principle should we use to keep some important relations between datapoints? In fact, different algorithms imply different criteria to answer this question.",
"_____no_output_____"
],
[
"Just to start, lets pick some algorithm and one of the data sets, for example lets see what embedding of the Swissroll is produced by the Isomap algorithm. The Isomap algorithm is basically a slightly modified Multidimensional Scaling (MDS) algorithm which finds embedding as a solution of the following optimization problem:\n\n$$\n\\min_{x'_1, x'_2, \\dots} \\sum_i \\sum_j \\| d'(x'_i, x'_j) - d(x_i, x_j)\\|^2,\n$$\n\nwith defined $x_1, x_2, \\dots \\in X~~$ and unknown variables $x_1, x_2, \\dots \\in X'~~$ while $\\text{dim}(X') < \\text{dim}(X)~~~$,\n$d: X \\times X \\to \\mathbb{R}~~$ and $d': X' \\times X' \\to \\mathbb{R}~~$ are defined as arbitrary distance functions (for example Euclidean). \n\nSpeaking less math, the MDS algorithm finds an embedding that preserves pairwise distances between points as much as it is possible. The Isomap algorithm changes quite small detail: the distance - instead of using local pairwise relationships it takes global factor into the account with shortest path on the neighborhood graph (so-called geodesic distance). The neighborhood graph is defined as graph with datapoints as nodes and weighted edges (with weight equal to the distance between points). The edge between point $x_i~$ and $x_j~$ exists if and only if $x_j~$ is in $k~$ nearest neighbors of $x_i$. Later we will see that that 'global factor' changes the game for the swissroll dataset.\n\nHowever, first we prepare a small function to plot any of the original data sets together with its embedding.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n%matplotlib inline\n\ndef plot(data, embedded_data, colors='m'):\n\tfig = plt.figure()\n\tfig.set_facecolor('white')\n\tax = fig.add_subplot(121,projection='3d')\n\tax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral)\n\tplt.axis('tight'); plt.axis('off')\n\tax = fig.add_subplot(122)\n\tax.scatter(embedded_data[0],embedded_data[1],c=colors,cmap=plt.cm.Spectral)\n\tplt.axis('tight'); plt.axis('off')\n\tplt.show()",
"_____no_output_____"
],
[
"import shogun as sg\n\n# wrap data into Shogun features\ndata, colors = generate_data('swissroll')\nfeats = sg.create_features(data)\n\n# create instance of Isomap converter and set number of neighbours used in kNN search to 20\nisomap = sg.create_transformer('Isomap', target_dim=2, k=20)\n\n# create instance of Multidimensional Scaling converter and configure it\nmds = sg.create_transformer('MultidimensionalScaling', target_dim=2)\n\n# embed Swiss roll data\nembedded_data_mds = mds.transform(feats).get('feature_matrix')\nembedded_data_isomap = isomap.transform(feats).get('feature_matrix')\n\nplot(data, embedded_data_mds, colors)\nplot(data, embedded_data_isomap, colors)",
"_____no_output_____"
]
],
[
[
"As it can be seen from the figure above, Isomap has been able to \"unroll\" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. This is, a new representation of the data has been obtained; this new representation maintains the properties of the original data, while it reduces the amount of information required to represent it. Note that the fact the embedding of the Swiss roll looks good in two dimensions stems from the *intrinsic* dimension of the input data. Although the original data is in a three-dimensional space, its intrinsic dimension is lower, since the only degree of freedom are the polar angle and distance from the centre, or height. ",
"_____no_output_____"
],
[
"Finally, we use yet another method, Stochastic Proximity Embedding (SPE) to embed the helix:",
"_____no_output_____"
]
],
[
[
"# wrap data into Shogun features\ndata, colors = generate_data('helix')\nfeatures = sg.create_features(data)\n\n# create MDS instance\nconverter = sg.create_transformer('StochasticProximityEmbedding', target_dim=2)\n\n# embed helix data\nembedded_features = converter.transform(features)\nembedded_data = embedded_features.get('feature_matrix')\n\nplot(data, embedded_data, colors)",
"_____no_output_____"
]
],
[
[
"## References",
"_____no_output_____"
],
[
"- Lisitsyn, S., Widmer, C., Iglesias Garcia, F. J. Tapkee: An Efficient Dimension Reduction Library. ([Link to paper in JMLR](http://jmlr.org/papers/v14/lisitsyn13a.html#!).)\n- Tenenbaum, J. B., de Silva, V. and Langford, J. B. A Global Geometric Framework for Nonlinear Dimensionality Reduction. ([Link to Isomap's website](http://isomap.stanford.edu/).)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ada21fb8f3a1f8e42104dca5dcce6d6f187ad64
| 3,881 |
ipynb
|
Jupyter Notebook
|
AIOpSchool/KIKS/DeepLearningBasis/0100_StomataTeller.ipynb
|
dwengovzw/PythonNotebooks
|
633bea4b07efbd920349d6f1dc346522ce118b70
|
[
"CC0-1.0"
] | null | null | null |
AIOpSchool/KIKS/DeepLearningBasis/0100_StomataTeller.ipynb
|
dwengovzw/PythonNotebooks
|
633bea4b07efbd920349d6f1dc346522ce118b70
|
[
"CC0-1.0"
] | 3 |
2021-09-30T11:38:24.000Z
|
2021-10-04T09:25:39.000Z
|
AIOpSchool/KIKS/DeepLearningBasis/0100_StomataTeller.ipynb
|
dwengovzw/PythonNotebooks
|
633bea4b07efbd920349d6f1dc346522ce118b70
|
[
"CC0-1.0"
] | null | null | null | 27.524823 | 324 | 0.58825 |
[
[
[
"<img src=\"images/logosnb.png\" alt=\"Banner\" width=\"800\"/>",
"_____no_output_____"
],
[
"<div>\n <font color=#690027 markdown=\"1\">\n <h1>DETECTIE VAN HUIDMONDJES IN EEN EIGEN AFBEELDING</h1>\n </font>\n</div>",
"_____no_output_____"
],
[
"<div class=\"alert alert-box alert-success\">\nIn deze notebook kan je een microfoto uploaden en de stomata op de foto laten tellen. <br>\nDeze foto kan er bv. een zijn uit de KIKS-dataset of een die je zelf genomen hebt. \n</div>",
"_____no_output_____"
],
[
"Voer onderstaande code-cel uit om van de functies in deze notebook gebruik te kunnen maken.",
"_____no_output_____"
]
],
[
[
"import imp\nwith open(\"../.scripts/vind_stomata.py\", \"rb\") as fp:\n vind_stomata = imp.load_module(\".scripts\", fp, \"../.scripts/vind_stomata.py\", (\".py\", \"rb\", imp.PY_SOURCE))",
"_____no_output_____"
]
],
[
[
"In deze notebook kan je de stomata op je eigen foto laten tellen. <br>\nHou er wel rekening mee dat het model op zoek gaat naar 1 stoma in een gebied van 120 x 120 pixels. Als de stomata op jouw foto binnen een gebied vallen met een andere verhouding, kan het zijn dat er geen stomata gevonden worden. \n\nVoer de volgende code-cel uit. Je kan dan vervolgens een bestand kiezen op je computer om te uploaden.",
"_____no_output_____"
]
],
[
[
"display(vind_stomata.upload_widget)",
"_____no_output_____"
]
],
[
[
"Voer de volgende instructie uit. Je foto zal dan op het scherm verschijnen. <br>\nAls je afbeelding groter is dan 800 x 800 pixels, zal je moeten kiezen welk deel je wilt laten verwerken door het model.",
"_____no_output_____"
]
],
[
[
"vind_stomata.toon_eigen_afbeelding()",
"_____no_output_____"
]
],
[
[
"Voer tot slot de volgende code-cel uit om de stomata te detecteren en te tellen. Dat kan enige tijd duren.",
"_____no_output_____"
]
],
[
[
"vind_stomata.vind_stomata_eigen_afbeelding()",
"_____no_output_____"
]
],
[
[
"Als je dat wenst, kan je nu op een ander deel van je foto de huidmondjes laten tellen of een andere foto uploaden.",
"_____no_output_____"
],
[
"<img src=\"images/cclic.png\" alt=\"Banner\" align=\"left\" width=\"100\"/><br><br>\nNotebook KIKS, zie <a href=\"http://www.aiopschool.be\">ai op school</a>, van F. wyffels, A. Meheus, T. Neutens & N. Gesquière is in licentie gegeven volgens een <a href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Naamsvermelding-NietCommercieel-GelijkDelen 4.0 Internationaal-licentie</a>. ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ada3cdb3fccbfc20b1a14048188f8bb703d36b4
| 320,790 |
ipynb
|
Jupyter Notebook
|
13.Decision-Tree/01-Decision Trees and Random Forests in Python.ipynb
|
BhavyaSree/pythonForDataScience
|
6b6f2faa7a2e327dbe0d4588d098e8dfcca35cb9
|
[
"Apache-2.0"
] | null | null | null |
13.Decision-Tree/01-Decision Trees and Random Forests in Python.ipynb
|
BhavyaSree/pythonForDataScience
|
6b6f2faa7a2e327dbe0d4588d098e8dfcca35cb9
|
[
"Apache-2.0"
] | null | null | null |
13.Decision-Tree/01-Decision Trees and Random Forests in Python.ipynb
|
BhavyaSree/pythonForDataScience
|
6b6f2faa7a2e327dbe0d4588d098e8dfcca35cb9
|
[
"Apache-2.0"
] | null | null | null | 618.092486 | 269,522 | 0.938016 |
[
[
[
"___\n\n<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n___\n# Decision Trees and Random Forests in Python",
"_____no_output_____"
],
[
"This is the code for the lecture video which goes over tree methods in Python. Reference the video lecture for the full explanation of the code!\n\nI also wrote a [blog post](https://medium.com/@josemarcialportilla/enchanted-random-forest-b08d418cb411#.hh7n1co54) explaining the general logic of decision trees and random forests which you can check out. \n\n## Import Libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Get the Data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('kyphosis.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"## EDA\n\nWe'll just check out a simple pairplot for this small dataset.",
"_____no_output_____"
]
],
[
[
"sns.pairplot(df,hue='Kyphosis',palette='Set1')",
"_____no_output_____"
]
],
[
[
"## Train Test Split\n\nLet's split up the data into a training set and a test set!",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X = df.drop('Kyphosis',axis=1)\ny = df['Kyphosis']",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)",
"_____no_output_____"
]
],
[
[
"## Decision Trees\n\nWe'll start just by training a single decision tree.",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier",
"_____no_output_____"
],
[
"dtree = DecisionTreeClassifier()",
"_____no_output_____"
],
[
"dtree.fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"## Prediction and Evaluation \n\nLet's evaluate our decision tree.",
"_____no_output_____"
]
],
[
[
"predictions = dtree.predict(X_test)",
"_____no_output_____"
],
[
"from sklearn.metrics import classification_report,confusion_matrix",
"_____no_output_____"
],
[
"print(classification_report(y_test,predictions))",
" precision recall f1-score support\n\n absent 0.85 0.85 0.85 20\n present 0.40 0.40 0.40 5\n\navg / total 0.76 0.76 0.76 25\n\n"
],
[
"print(confusion_matrix(y_test,predictions))",
"[[17 3]\n [ 3 2]]\n"
]
],
[
[
"## Tree Visualization\n\nScikit learn actually has some built-in visualization capabilities for decision trees, you won't use this often and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute this:",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image \nfrom sklearn.externals.six import StringIO \nfrom sklearn.tree import export_graphviz\nimport pydot \n\nfeatures = list(df.columns[1:])\nfeatures",
"_____no_output_____"
],
[
"dot_data = StringIO() \nexport_graphviz(dtree, out_file=dot_data,feature_names=features,filled=True,rounded=True)\n\ngraph = pydot.graph_from_dot_data(dot_data.getvalue()) \nImage(graph[0].create_png()) ",
"_____no_output_____"
]
],
[
[
"## Random Forests\n\nNow let's compare the decision tree model to a random forest.",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nrfc = RandomForestClassifier(n_estimators=100)\nrfc.fit(X_train, y_train)",
"_____no_output_____"
],
[
"rfc_pred = rfc.predict(X_test)",
"_____no_output_____"
],
[
"print(confusion_matrix(y_test,rfc_pred))",
"[[18 2]\n [ 3 2]]\n"
],
[
"print(classification_report(y_test,rfc_pred))",
" precision recall f1-score support\n\n absent 0.86 0.90 0.88 20\n present 0.50 0.40 0.44 5\n\navg / total 0.79 0.80 0.79 25\n\n"
]
],
[
[
"# Great Job!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4ada411a9862c2866a07ae595be70d762543ce7b
| 1,256 |
ipynb
|
Jupyter Notebook
|
cifar-10/jupyter/explore.ipynb
|
seanywang0408/template
|
6242d972712682bc98c1b4dfa306ab24b8775fca
|
[
"MIT"
] | 4 |
2019-09-25T06:52:26.000Z
|
2021-11-19T11:56:22.000Z
|
cifar-10/jupyter/explore.ipynb
|
seanywang0408/template
|
6242d972712682bc98c1b4dfa306ab24b8775fca
|
[
"MIT"
] | 1 |
2019-10-15T05:55:03.000Z
|
2019-10-15T05:55:03.000Z
|
cifar-10/jupyter/explore.ipynb
|
seanywang0408/template
|
6242d972712682bc98c1b4dfa306ab24b8775fca
|
[
"MIT"
] | 3 |
2019-10-13T08:16:35.000Z
|
2021-05-21T19:45:59.000Z
| 19.030303 | 53 | 0.526274 |
[
[
[
"import os\nfrom tqdm import tqdm\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport torch\nimport torch.nn as nn\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\nimport _init_paths\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2",
"add code root path (with `mylib`).\n"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4ada418165b2f7190f787db10b0a55d3757ddd35
| 92,556 |
ipynb
|
Jupyter Notebook
|
network/captcha_breaker.ipynb
|
Plux1/Captcha
|
e5aceab52a4c1bb6e5833dd60840863a9d2b98a7
|
[
"MIT"
] | null | null | null |
network/captcha_breaker.ipynb
|
Plux1/Captcha
|
e5aceab52a4c1bb6e5833dd60840863a9d2b98a7
|
[
"MIT"
] | null | null | null |
network/captcha_breaker.ipynb
|
Plux1/Captcha
|
e5aceab52a4c1bb6e5833dd60840863a9d2b98a7
|
[
"MIT"
] | null | null | null | 269.05814 | 83,556 | 0.922523 |
[
[
[
"# NOTE",
"_____no_output_____"
]
],
[
[
"This document make use of LeNet Architecture:\n The main objective is to create a deep learning model to predict the correct captcha out of input image",
"_____no_output_____"
]
],
[
[
"# IMPORTS",
"_____no_output_____"
]
],
[
[
"# Architecture\nfrom keras import layers\nfrom keras import models\nfrom keras.preprocessing.image import load_img\nfrom keras import backend as K\nfrom keras.utils import plot_model\n\n# Automatic Downloads\nimport numpy as np\nimport requests\nimport time\nimport os\n\n# Image labeling\nimport cv2\nimport imutils\nfrom imutils import paths",
"Using TensorFlow backend.\n"
]
],
[
[
"# HELPER",
"_____no_output_____"
],
[
"# DATASET",
"_____no_output_____"
],
[
"## Download Captcha",
"_____no_output_____"
]
],
[
[
"url = 'https://www.e-zpassny.com/vector/jcaptcha.do'\ntotal = 0\nnum_images = 2\noutput_path = 'C:/Users/Tajr/Desktop/Data/RadonPlus/RadonTechnology/Dev/Case Studi/Captcha/output'\n\n# Loop over the number of images to download \nfor i in np.arange(0, num_images):\n try:\n # Grab a new captcha image\n r = requests.get(url, timeout=60)\n \n # save the image to disk\n p = os.path.sep.join([output_path, '{}.do'.format(str(total).zfill(5))])\n f = open(p, 'wb')\n f.write(r.content)\n f.close()\n \n # update the counter\n print('[INFO] downloaded: {}'.format(p))\n total += 1\n except:\n print('[INFO] error downloading image...')\n \n # introduce a small time sleep\n time.sleep(0.1)\n\n# r = requests.get(url, timeout=60)\n# print(r.content)",
"[INFO] downloaded: C:/Users/Tajr/Desktop/Data/RadonPlus/RadonTechnology/Dev/Case Studi/Captcha/output\\00000.do\n[INFO] downloaded: C:/Users/Tajr/Desktop/Data/RadonPlus/RadonTechnology/Dev/Case Studi/Captcha/output\\00001.do\n"
]
],
[
[
"## Labeling",
"_____no_output_____"
]
],
[
[
"image_paths = list(paths.list_images(output_path))\ncounts = {}\n\nfor(i, image_path) in enumerate(image_paths):\n print('[INFO] processing image {}/{}'.format(str(i + 1), len(image_paths)))\n \n \n image = cv2.imread(image_path)\n gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n gray = cv2.copyMakeBorder(gray, 8, 8, 8, 8, cv2.BORDER_REPLICATE)\n cv2.imshow('imagwi',gray)\n cv2.waitKey(0)\n",
"[INFO] processing image 1/3\n[INFO] processing image 2/3\n[INFO] processing image 3/3\n"
]
],
[
[
"# ARCHITECTURE",
"_____no_output_____"
],
[
"## Variables",
"_____no_output_____"
]
],
[
[
"width = 28\nheight = 28\ndepth = 1\nclasses = 10\ninput_shape = (width, height, depth)\n\nif K.image_data_format == 'channels_first':\n input_shape = (depth, width, height)",
"_____no_output_____"
]
],
[
[
"## Model Definition",
"_____no_output_____"
]
],
[
[
"model = models.Sequential()\nmodel.add(layers.Conv2D(20, (5, 5) ,padding='same', input_shape=input_shape))\nmodel.add(layers.Activation('relu'))\nmodel.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\nmodel.add(layers.Conv2D(50, (5, 5), padding='same'))\nmodel.add(layers.Activation('relu'))\nmodel.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\nmodel.add(layers.Flatten())\nmodel.add(layers.Dense(500))\nmodel.add(layers.Activation('relu'))\nmodel.add(layers.Dense(classes))\nmodel.add(layers.Activation('softmax'))\n\nmodel.summary()",
"Model: \"sequential_3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_5 (Conv2D) (None, 28, 28, 20) 520 \n_________________________________________________________________\nactivation_9 (Activation) (None, 28, 28, 20) 0 \n_________________________________________________________________\nmax_pooling2d_5 (MaxPooling2 (None, 14, 14, 20) 0 \n_________________________________________________________________\nconv2d_6 (Conv2D) (None, 14, 14, 50) 25050 \n_________________________________________________________________\nactivation_10 (Activation) (None, 14, 14, 50) 0 \n_________________________________________________________________\nmax_pooling2d_6 (MaxPooling2 (None, 7, 7, 50) 0 \n_________________________________________________________________\nflatten_3 (Flatten) (None, 2450) 0 \n_________________________________________________________________\ndense_5 (Dense) (None, 500) 1225500 \n_________________________________________________________________\nactivation_11 (Activation) (None, 500) 0 \n_________________________________________________________________\ndense_6 (Dense) (None, 10) 5010 \n_________________________________________________________________\nactivation_12 (Activation) (None, 10) 0 \n=================================================================\nTotal params: 1,256,080\nTrainable params: 1,256,080\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"## Model Visualization",
"_____no_output_____"
]
],
[
[
"seriarize_to = ''\nplot_model(model, to_file='serialized/model_architecture.png', show_shapes=True)",
"_____no_output_____"
]
],
[
[
"# COMPILATION",
"_____no_output_____"
],
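[
"One possible way to compile the LeNet model defined above (a sketch only: the loss assumes one-hot encoded digit labels, and the optimizer choice is an assumption, not part of the original document).",
"_____no_output_____"
],
[
"# Sketch only: assumes one-hot encoded labels; optimizer and loss are assumptions.\nmodel.compile(loss='categorical_crossentropy',\n              optimizer='adam',\n              metrics=['accuracy'])",
"_____no_output_____"
],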
[
"# TRAINING",
"_____no_output_____"
],
[
"# PLOTTING",
"_____no_output_____"
],
[
"# EVALUATION",
"_____no_output_____"
],
[
"# PREDICTIONS",
"_____no_output_____"
]
]
] |
[
"markdown",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4ada47089427513f4b610f6d9b1147d975c949b9
| 4,865 |
ipynb
|
Jupyter Notebook
|
manifold-examples/feature-list-view.ipynb
|
kryvokhyzha/examples-and-courses
|
477e82ee24e6abba8a6b6d92555f2ed549ca682c
|
[
"MIT"
] | 1 |
2021-12-13T15:41:48.000Z
|
2021-12-13T15:41:48.000Z
|
manifold-examples/feature-list-view.ipynb
|
kryvokhyzha/examples-and-courses
|
477e82ee24e6abba8a6b6d92555f2ed549ca682c
|
[
"MIT"
] | 15 |
2021-09-12T15:06:13.000Z
|
2022-03-31T19:02:08.000Z
|
manifold-examples/feature-list-view.ipynb
|
kryvokhyzha/examples-and-courses
|
477e82ee24e6abba8a6b6d92555f2ed549ca682c
|
[
"MIT"
] | 1 |
2022-01-29T00:37:52.000Z
|
2022-01-29T00:37:52.000Z
| 33.095238 | 133 | 0.526824 |
[
[
[
"# Feature List View\n\n## Usage",
"_____no_output_____"
]
],
[
[
"import sys, json, math\nfrom mlvis import FeatureListView\nfrom random import uniform, gauss\nfrom IPython.display import display\nif sys.version_info[0] < 3:\n import urllib2 as url\nelse:\n import urllib.request as url\n \ndef generate_random_steps(k):\n randoms = [uniform(0, 1) / 2 for i in range(0, k)]\n steps = [0] * (k - 1)\n t = 0\n for i in range(0, k - 1):\n steps[i] = t + (1 - t) * randoms[i]\n t = steps[i]\n return steps + [1]\n\ndef generate_categorical_feature(states):\n size = len(states)\n distro_a = [uniform(0, 1) for i in range(0, size)] \n distro_b = [uniform(0, 1) for i in range(0, size)]\n return {\n 'name': 'dummy-categorical-feature',\n 'type': 'categorical',\n 'domain': list(states.values()),\n 'distributions': [distro_a, distro_b],\n 'distributionNormalized': [distro_a, distro_b],\n 'colors': ['#47B274', '#6F5AA7'],\n 'divergence': uniform(0, 1)\n }\n\ndef generate_numerical_feature():\n domain_size = 100\n distro_a = [uniform(0, 1) for i in range(0, domain_size)]\n distro_b = [uniform(0, 1) for i in range(0, domain_size)]\n return {\n 'name': 'dummy-categorical-numerical',\n 'type': 'numerical',\n 'domain': generate_random_steps(domain_size),\n 'distributions': [distro_a, distro_b],\n 'distributionNormalized': [distro_a, distro_b],\n 'colors': ['#47B274', '#6F5AA7'],\n 'divergence': uniform(0, 1)\n }\n\ndef generate_random_categorical_values(states):\n k = 10000\n values = [None] * k\n domain = list(states.values())\n size = len(states)\n for i in range(0, k):\n d = int(math.floor(uniform(0, 1) * size))\n values[i] = domain[d]\n return values\n\ndef generate_raw_categorical_feature(states):\n return {\n 'name': 'dummy-raw-categorical-feature',\n 'type': 'categorical',\n 'values': [generate_random_categorical_values(states),\n generate_random_categorical_values(states)]\n }\n\ndef generate_raw_numerical_feature():\n return {\n 'name': 'dummy-raw-numerical-feature',\n 'type': 'numerical',\n 'values': [\n [gauss(2, 0.5) for i in range(0, 2500)],\n [gauss(0, 1) for i in range(0, 7500)]\n ]\n }\n\n# load the US states data\nPREFIX = 'https://d1a3f4spazzrp4.cloudfront.net/mlvis/'\nresponse = url.urlopen(PREFIX + 'jupyter/states.json')\nstates = json.loads(response.read().decode())\n\n# Randomly generate the data for the feature list view\ncategorical_feature = generate_categorical_feature(states)\nraw_categorical_feature = generate_raw_categorical_feature(states)\nnumerical_feature = generate_numerical_feature()\nraw_numerical_feature = generate_raw_numerical_feature()\ndata = [categorical_feature, raw_categorical_feature, numerical_feature, raw_numerical_feature]\n\nfeature_list_view = FeatureListView(props={\"data\": data, \"width\": 1000})\n\ndisplay(feature_list_view) ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
]
] |
4ada4cbfe4b180f07fe664494e9285b59d6df256
| 57,222 |
ipynb
|
Jupyter Notebook
|
costco-rental.ipynb
|
scko823/web-scraping-selenium-example
|
4284ed8c1ec0c048ee11066f073e2e443d99ca34
|
[
"MIT"
] | null | null | null |
costco-rental.ipynb
|
scko823/web-scraping-selenium-example
|
4284ed8c1ec0c048ee11066f073e2e443d99ca34
|
[
"MIT"
] | null | null | null |
costco-rental.ipynb
|
scko823/web-scraping-selenium-example
|
4284ed8c1ec0c048ee11066f073e2e443d99ca34
|
[
"MIT"
] | null | null | null | 30.698498 | 1,100 | 0.414439 |
[
[
[
"from selenium import webdriver\n#import urllib you can use urllib to send web request to websites and get back html text as response\nimport pandas as pd\nfrom bs4 import BeautifulSoup\nfrom selenium.webdriver.common.keys import Keys\nfrom lxml import html\nimport numpy\n# import dependencies",
"_____no_output_____"
],
[
"browser = webdriver.Firefox() #I only tested in firefox\nbrowser.get('http://costcotravel.com/Rental-Cars')\nbrowser.implicitly_wait(5)#wait for webpage download",
"_____no_output_____"
],
[
"browser.find_element_by_id('pickupLocationTextWidget').send_keys(\"PHX\");",
"_____no_output_____"
],
[
"browser.find_element_by_css_selector('.sayt-result').click()\n",
"_____no_output_____"
],
[
"browser.find_element_by_id(\"pickupDateWidget\").send_keys('08/27/2016')#you can't send it directly, need to clear first",
"_____no_output_____"
],
[
"browser.find_element_by_id(\"pickupDateWidget\").clear()",
"_____no_output_____"
],
[
"browser.find_element_by_id(\"pickupDateWidget\").send_keys('08/27/2016')",
"_____no_output_____"
],
[
"browser.find_element_by_id(\"dropoffDateWidget\").clear()",
"_____no_output_____"
],
[
"browser.find_element_by_id(\"dropoffDateWidget\").send_keys('08/31/2016',Keys.RETURN)",
"_____no_output_____"
],
[
"browser.find_element_by_css_selector('#pickupTimeWidget option[value=\"03:00 PM\"]').click() #select time ",
"_____no_output_____"
],
[
"browser.find_element_by_css_selector('#dropoffTimeWidget option[value=\"03:00 PM\"]').click()",
"_____no_output_____"
],
[
"browser.find_element_by_link_text('SEARCH').click() #click the red button !!",
"_____no_output_____"
],
[
"n = browser.page_source #grab the page source",
"_____no_output_____"
]
],
[
[
"The follow code is same as before, but you can send the commands all in one go. \nHowever, there are implicit wait for the driver so it can do AJAX request and render the page for elements\nalso, you can you find_element_by_xpath method",
"_____no_output_____"
]
],
[
[
"\n# browser = webdriver.Firefox() #I only tested in firefox\n# browser.get('http://costcotravel.com/Rental-Cars')\n# browser.implicitly_wait(5)#wait for webpage download\n# browser.find_element_by_id('pickupLocationTextWidget').send_keys(\"PHX\");\n# browser.implicitly_wait(5) #wait for the airport suggestion box to show\n# browser.find_element_by_xpath('//li[@class=\"sayt-result\"]').click() \n# #click the airport suggestion box \n\n# browser.find_element_by_xpath('//input[@id=\"pickupDateWidget\"]').send_keys('08/27/2016')\n# browser.find_element_by_xpath('//input[@id=\"dropoffDateWidget\"]').send_keys('08/30/2016',Keys.RETURN)\n\n# browser.find_element_by_xpath('//select[@id=\"pickupTimeWidget\"]/option[@value=\"09:00 AM\"]').click()\n# browser.find_element_by_xpath('//select[@id=\"dropoffTimeWidget\"]/option[@value=\"05:00 PM\"]').click()\n# browser.implicitly_wait(5) #wait for the clicks to be completed\n# browser.find_element_by_link_text('SEARCH').click()\n# #click the search box\n\n# time.sleep(8) #wait for firefox to download and render the page\n# n = browser.page_source #grab the html source code",
"_____no_output_____"
],
[
"type(n) #the site use unicode",
"_____no_output_____"
],
[
"soup = BeautifulSoup(n,'lxml') #use BeautifulSoup to parse the source",
"_____no_output_____"
],
[
"print \"--------------first 1000 characters:--------------\\n\"\nprint soup.prettify()[:1000]\nprint \"\\n--------------last 1000 characters:--------------\"\nprint soup.prettify()[-1000:]",
"--------------first 1000 characters:--------------\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n<html style=\"height:100%;\" xmlns=\"http://www.w3.org/1999/xhtml\">\n <head>\n <title>\n Rental Car Low Price Finder at Costco Travel\n </title>\n <meta content=\"IE=edge,chrome=1\" http-equiv=\"X-UA-Compatible\"/>\n <meta content=\"Price all brands in one search with our Low Price Finder. Enter your criteria and we'll shop all coupons, codes and discounts for the lowest prices!\" name=\"description\"/>\n <meta content=\"www.costcotravel.com: Rental Cars,Car Rentals,Low Price Finder,Alamo,Avis,Budget,Enterprise\" name=\"keywords\"/>\n <meta content=\"text/html; charset=utf-8\" http-equiv=\"Content-Type\"/>\n <meta content=\"GENERAL\" name=\"rating\"/>\n <meta content=\"\" name=\"robots\"/>\n <link href=\"https://www.costcotravel.com\" id=\"desktop\" media=\"only screen and (min-device-width: 641px)\" rel=\"alternate\"/>\n <link href=\"https://m.costcotravel.com\" id=\"phone\" media=\"only screen and (max-\n\n--------------last 1000 characters:--------------\nltsrcattempted=\"0\" id=\"_searchingImage\" initialized=\"1\" src=\"https://www.costcotravel.com/shared/images/searching.gif\" style=\"width: 168px; height: 16px;\"/>\n </div>\n </div>\n </div>\n <script type=\"text/javascript\">\n loadBackground();\n </script>\n <!--\t\n\t|***** Server Information ******\n\t|* \n\t|* Server = 04\n\t|* Time = Thursday, April 28, 2016 7:07:09 PM PDT \n\t|* \n\t|*******************************\t\n\t-->\n <script type=\"text/javascript\">\n DomUtil.createCookie(\"Csrf-token\", \"1834581341518e741efdb22652c356e348d3ef52cb012b7bd2c9ba4ac3fd807928305bb8c5a530f6664277773b29801d2516cb01490820cc4ae8f29d0838338a\");\n\t\t\t\n\t\t\t\tUnsupportedBrowserPopup.showUnsupportedBrowserPopup();\n\t\t\t\tif(!DomUtil.areCookiesEnabled()){\n\t\t\t\t\tMessageUtil.showErrorMessage(Navigation.TranslationMessages.COOKIES_DISABLED_MESSAGE, \"dataContent\");\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\tNavigation.manageDynamicElements();\n\t\t\t\tNavigation.removeLocaleCodeQueryParameter();\n </script>\n </body>\n</html>\n"
],
[
"table = soup.find('div',{'class':'rentalCarTableDetails'}) #find the table",
"_____no_output_____"
],
[
"print \"--------------first 1000 characters:--------------\\n\"\nprint table.prettify()[:1000]\nprint \"\\n--------------last 1000 characters:--------------\"\nprint table.prettify()[-1000:]",
"--------------first 1000 characters:--------------\n\n<div class=\"rentalCarTableDetails\">\n <table border=\"0\" cellpadding=\"5\" cellspacing=\"0\" class=\"carMatrixTable\" width=\"100%\">\n <tbody>\n <tr>\n <th class=\"w123 nob tar fs10\" rowspan=\"2\">\n Taxes and fees are included in the price\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Alamo Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Alamo_m.gif\"/>\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Avis Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Avis_m.gif\"/>\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Budget Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Budget_m.gif\"/>\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Enterprise Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Enterprise_\n\n--------------last 1000 characters:--------------\n850bb:1545fc32ada:28a3', false );\">\n $595\n </a>\n </div>\n </td>\n <td class=\"\">\n <div class=\"carCell\">\n <a class=\"u \" href=\"javascript:;\" onclick=\"javascript:RentalCarMatrix.selectCarCategoryFromGrid( '-604850bb:1545fc32ada:28a3', '-604850bb:1545fc32ada:28ee', '-604850bb:1545fc32ada:28a3', false );\">\n $739\n </a>\n </div>\n </td>\n <td class=\"\">\n <div class=\"carCell\">\n <a class=\"u \" href=\"javascript:;\" onclick=\"javascript:RentalCarMatrix.selectCarCategoryFromGrid( '-604850bb:1545fc32ada:28a3', '-604850bb:1545fc32ada:28ca', '-604850bb:1545fc32ada:28a3', false );\">\n $730\n </a>\n </div>\n </td>\n <td class=\"\">\n <div class=\"carCell\">\n <a class=\"u \" href=\"javascript:;\" onclick=\"javascript:RentalCarMatrix.selectCarCategoryFromGrid( '-604850bb:1545fc32ada:28a3', '-604850bb:1545fc32ada:2d1f', '-604850bb:1545fc32ada:28a3', false );\">\n $595\n </a>\n </div>\n </td>\n </tr>\n </tbody>\n </table>\n</div>\n\n"
],
[
"tr = table.select('tr') #let's look at one of the row",
"_____no_output_____"
],
[
"type(tr)",
"_____no_output_____"
],
[
"#lets look at first three row\nfor i in tr[0:3]:\n print i.prettify()\n print \"-----------------------------------\"",
"<tr>\n <th class=\"w123 nob tar fs10\" rowspan=\"2\">\n Taxes and fees are included in the price\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Alamo Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Alamo_m.gif\"/>\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Avis Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Avis_m.gif\"/>\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Budget Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Budget_m.gif\"/>\n </th>\n <th class=\"w141 tac\">\n <img alt=\"Enterprise Rent A Car Logo\" altsrcattempted=\"0\" initialized=\"1\" src=\"https://www.costcotravel.com/content/shared/images/logos/car/Enterprise_m.gif\"/>\n </th>\n</tr>\n\n-----------------------------------\n<tr>\n <th class=\"nob tac\">\n Phx Sky Harbor Intl Arpt\n <br/>\n Shuttle\n </th>\n <th class=\"nob tac\">\n Phoenix Sky Harbor Airport\n <br/>\n Shuttle\n </th>\n <th class=\"nob tac\">\n Sky Harbor Intl Airport\n <br/>\n Shuttle\n </th>\n <th class=\"nob tac\">\n Phx Sky Harbor Intl Arpt\n <br/>\n Shuttle\n </th>\n</tr>\n\n-----------------------------------\n<tr>\n <th class=\"w123 nob tar fs10\">\n Location Hours\n </th>\n <th class=\"nob tac\">\n 24 Hours\n </th>\n <th class=\"nob tac\">\n 24 Hours\n </th>\n <th class=\"nob tac\">\n 24 Hours\n </th>\n <th class=\"nob tac\">\n 24 Hours\n </th>\n</tr>\n\n-----------------------------------\n"
]
],
[
[
"let play with one of the row",
"_____no_output_____"
]
],
[
[
"row = tr[3] ",
"_____no_output_____"
],
[
"row.find('th',{'class':'tar'}).text.encode('utf-8')",
"_____no_output_____"
],
[
"row",
"_____no_output_____"
],
[
"row.contents[4].text #1. this is unicode, 2. the dollar sign is in the way",
"_____no_output_____"
],
[
"'Car' in 'Econ Car' #use this string logic to filter out unwanted data",
"_____no_output_____"
],
[
"rows = [i for i in tr if (('Price' not in i.contents[0].text and 'Fees' not in i.contents[0].text and 'Location' not in i.contents[0].text and i.contents[0].text !='') and len(i.contents[0].text)<30)]\n# use this crazy list comprehension to get the data we want \n#1. don't want the text 'Price' in the first column\n#2. don't want the text 'Fee' in the first column\n#3. don't want the text 'Location' in the first column\n#4. the text length of first column must be less than 30 characters long",
"_____no_output_____"
],
[
"rows[0].contents[0].text #just exploring here...",
"_____no_output_____"
],
[
"rows[0].contents[4].text #need to get rid of the $....",
"_____no_output_____"
],
[
"rows[3].contents[0].text #need to make it utf-8",
"_____no_output_____"
],
[
"#process the data\nprices = {} \nfor i in rows:\n #print the 1st column text\n print i.contents[0].text.encode('utf-8')\n prices[i.contents[0].text.encode('utf-8')] = [i.contents[1].text.encode('utf-8'),i.contents[2].text.encode('utf-8'), i.contents[3].text.encode('utf-8'),i.contents[4].text.encode('utf-8')]",
"Economy Car\nCompact Car\nIntermediate Car\nStandard Car\nFullsize Car\nPremium Car\nIntermediate SUV\nStandard SUV\nPremium Crossover\nMini Van\nStandard Van\nFullsize SUV\nStandard Convertible\nStandard Specialty\nFullsize Specialty\nPremium Specialty\nLuxury Specialty\nLuxury Car\nPremium SUV\nLuxury SUV\nFullsize Van\n"
],
[
"prices",
"_____no_output_____"
],
[
"iteritems = prices.iteritems() \n#call .iteritems() on a dictionary will give you a generator which you can iter over",
"_____no_output_____"
],
[
"iteritems.next() #run me five times",
"_____no_output_____"
],
[
"for name, priceList in prices.iteritems():\n newPriceList = []\n for i in priceList:\n newPriceList.append(i.replace('$',''))\n prices[name] = newPriceList",
"_____no_output_____"
],
[
"prices",
"_____no_output_____"
],
[
"data = pd.DataFrame.from_dict(prices, orient='index') #get a pandas DataFrame from the prices dictionary",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data = data.replace('Not Available', numpy.nan) #replace the 'Not Available' data point to numpy.nan",
"_____no_output_____"
],
[
"data = pd.to_numeric(data, errors='coerce') #cast to numeric data",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data.columns= ['Alamo','Avis','Budget','Enterprise'] #set column names",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data.notnull() #check for missing data ",
"_____no_output_____"
],
[
"data.min(axis=1, skipna=True) #look at the cheapest car in each class",
"_____no_output_____"
]
],
[
[
"From this point on, you can set up to run every night and email yourself results etc.",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4ada58f87c70e37056c4b62ce77522abfdbf4a4d
| 9,508 |
ipynb
|
Jupyter Notebook
|
tasks/data_augmentation/notebooks/assisted_labeling/Final_assisted_labeling_8.ipynb
|
thefirebanks/policy-data-analyzer
|
670a4ea72ab71975b84c4a4ec43d573371c4a986
|
[
"RSA-MD"
] | 13 |
2020-12-11T12:10:20.000Z
|
2021-04-27T22:54:25.000Z
|
tasks/data_augmentation/notebooks/assisted_labeling/Final_assisted_labeling_8.ipynb
|
thefirebanks/policy-data-analyzer
|
670a4ea72ab71975b84c4a4ec43d573371c4a986
|
[
"RSA-MD"
] | 40 |
2020-11-24T06:48:53.000Z
|
2021-04-28T05:20:37.000Z
|
tasks/data_augmentation/notebooks/assisted_labeling/Final_assisted_labeling_8.ipynb
|
thefirebanks/policy-data-analyzer
|
670a4ea72ab71975b84c4a4ec43d573371c4a986
|
[
"RSA-MD"
] | 5 |
2020-11-26T08:23:05.000Z
|
2021-04-19T18:08:20.000Z
| 27.243553 | 173 | 0.525873 |
[
[
[
"# General purpose libraries\nimport boto3\nimport copy\nimport csv\nimport datetime\nimport json\nimport numpy as np\nimport pandas as pd\nimport s3fs\nfrom collections import defaultdict\nimport time\nimport re\nimport random\nfrom sentence_transformers import SentenceTransformer\nimport sentencepiece\nfrom scipy.spatial import distance\nfrom json import JSONEncoder\nimport sys\nsys.path.append(\"/Users/dafirebanks/Projects/policy-data-analyzer/\")\nsys.path.append(\"C:/Users/jordi/Documents/GitHub/policy-data-analyzer/\")\nfrom tasks.data_loading.src.utils import *",
"_____no_output_____"
]
],
[
[
"### 1. Set up AWS",
"_____no_output_____"
]
],
[
[
"def aws_credentials_from_file(f_name):\n with open(f_name, \"r\") as f:\n creds = json.load(f)\n \n return creds[\"aws\"][\"id\"], creds[\"aws\"][\"secret\"]\n\ndef aws_credentials(path, filename):\n file = path + filename\n with open(file, 'r') as dict:\n key_dict = json.load(dict)\n for key in key_dict:\n KEY = key\n SECRET = key_dict[key]\n return KEY, SECRET",
"_____no_output_____"
]
],
[
[
"### 2. Optimized full loop",
"_____no_output_____"
]
],
[
[
"def aws_credentials(path, filename):\n file = path + filename\n with open(file, 'r') as dict:\n key_dict = json.load(dict)\n for key in key_dict:\n KEY = key\n SECRET = key_dict[key]\n return KEY, SECRET\n\ndef aws_credentials_from_file(f_name):\n with open(f_name, \"r\") as f:\n creds = json.load(f)\n \n return creds[\"aws\"][\"id\"], creds[\"aws\"][\"secret\"]\n\ndef load_all_sentences(language, s3, bucket_name, init_doc, end_doc):\n policy_dict = {}\n sents_folder = f\"{language}_documents/sentences\"\n \n for i, obj in enumerate(s3.Bucket(bucket_name).objects.all().filter(Prefix=\"english_documents/sentences/\")):\n \n if not obj.key.endswith(\"/\") and init_doc <= i < end_doc:\n \n serializedObject = obj.get()['Body'].read()\n policy_dict = {**policy_dict, **json.loads(serializedObject)}\n \n return labeled_sentences_from_dataset(policy_dict)\n\ndef save_results_as_separate_csv(results_dictionary, queries_dictionary, init_doc, results_limit, aws_id, aws_secret):\n path = \"s3://wri-nlp-policy/english_documents/assisted_labeling\"\n col_headers = [\"sentence_id\", \"similarity_score\", \"text\"]\n for i, query in enumerate(results_dictionary.keys()):\n filename = f\"{path}/query_{queries_dictionary[query]}_{i}_results_{init_doc}.csv\"\n pd.DataFrame(results_dictionary[query], columns=col_headers).head(results_limit).to_csv(filename, storage_options={\"key\": aws_id, \"secret\": aws_secret})\n\ndef labeled_sentences_from_dataset(dataset):\n sentence_tags_dict = {}\n\n for document in dataset.values():\n sentence_tags_dict.update(document['sentences'])\n\n return sentence_tags_dict",
"_____no_output_____"
],
[
"# Set up AWS\ncredentials_file = '/Users/dafirebanks/Documents/credentials.json'\naws_id, aws_secret = aws_credentials_from_file(credentials_file)\nregion = 'us-east-1'\n\ns3 = boto3.resource(\n service_name = 's3',\n region_name = region,\n aws_access_key_id = aws_id,\n aws_secret_access_key = aws_secret\n)",
"_____no_output_____"
],
[
"path = \"C:/Users/jordi/Documents/claus/\"\nfilename = \"AWS_S3_keys_wri.json\"\naws_id, aws_secret = aws_credentials(path, filename)\nregion = 'us-east-1'\n\nbucket = 'wri-nlp-policy'\n\ns3 = boto3.resource(\n service_name = 's3',\n region_name = region,\n aws_access_key_id = aws_id,\n aws_secret_access_key = aws_secret\n)",
"_____no_output_____"
],
[
"# Define params\ninit_at_doc = 13136\nend_at_doc = 14778\n\nsimilarity_threshold = 0\nsearch_results_limit = 500\n\nlanguage = \"english\"\nbucket_name = 'wri-nlp-policy'\n\ntransformer_name = 'xlm-r-bert-base-nli-stsb-mean-tokens'\nmodel = SentenceTransformer(transformer_name)\n\n\n# Get all sentence documents\n\nsentences = load_all_sentences(language, s3, bucket_name, init_at_doc, end_at_doc )\n\n# Define queries\npath = \"../../input/\"\nfilename = \"English_queries.xlsx\"\nfile = path + filename\ndf = pd.read_excel(file, engine='openpyxl', sheet_name = \"Hoja1\", usecols = \"A:C\")\n\nqueries = {}\nfor index, row in df.iterrows():\n queries[row['Query sentence']] = row['Policy instrument']\n\n\n\n# Calculate and store query embeddings\nquery_embeddings = dict(zip(queries, [model.encode(query.lower(), show_progress_bar=False) for query in queries]))\n\n# For each sentence, calculate its embedding, and store the similarity\nquery_similarities = defaultdict(list)\n\ni = 0\nfor sentence_id, sentence in sentences.items():\n sentence_embedding = model.encode(sentence['text'].lower(), show_progress_bar=False)\n i += 1\n if i % 100 == 0:\n print(i)\n \n for query_text, query_embedding in query_embeddings.items():\n score = round(1 - distance.cosine(sentence_embedding, query_embedding), 4)\n if score > similarity_threshold:\n query_similarities[query_text].append([sentence_id, score, sentences[sentence_id]['text']])\n \n# Sort results by similarity score\nfor query in query_similarities:\n query_similarities[query] = sorted(query_similarities[query], key = lambda x : x[1], reverse=True)\n \n# Store results\nsave_results_as_separate_csv(query_similarities, queries, init_at_doc, search_results_limit, aws_id, aws_secret)\n",
"10\n20\n30\n40\n50\n60\n70\n80\n90\n100\n110\n120\n130\n140\n150\n160\n170\n180\n190\n200\n210\n220\n230\n240\n250\n260\n270\n280\n290\n300\n310\n320\n330\n340\n350\n360\n370\n380\n390\n400\n410\n420\n430\n440\n450\n460\n470\n480\n490\n500\n510\n520\n530\n540\n550\n560\n570\n580\n590\n600\n610\n620\n630\n640\n650\n660\n670\n680\n690\n700\n710\n720\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4ada5cdc9e22a616f68f94336e8d0c2415195bdb
| 193,383 |
ipynb
|
Jupyter Notebook
|
notebooks/2018-03-15-ssh-skillscore.ipynb
|
ekarati/notebooks_demos
|
1053437a9050a3366e97aa449091b5c570e62d48
|
[
"MIT"
] | 19 |
2016-07-05T13:05:57.000Z
|
2021-10-04T09:25:10.000Z
|
notebooks/2018-03-15-ssh-skillscore.ipynb
|
ekarati/notebooks_demos
|
1053437a9050a3366e97aa449091b5c570e62d48
|
[
"MIT"
] | 316 |
2016-05-10T20:47:36.000Z
|
2021-11-17T18:53:00.000Z
|
notebooks/2018-03-15-ssh-skillscore.ipynb
|
ekarati/notebooks_demos
|
1053437a9050a3366e97aa449091b5c570e62d48
|
[
"MIT"
] | 24 |
2016-05-24T19:31:36.000Z
|
2021-03-30T05:32:40.000Z
| 116.636309 | 27,893 | 0.827632 |
[
[
[
"# Investigating ocean models skill for sea surface height with IOOS catalog and Python\n\n\nThe IOOS [catalog](https://ioos.noaa.gov/data/catalog) offers access to hundreds of datasets and data access services provided by the 11 regional associations.\nIn the past we demonstrate how to tap into those datasets to obtain sea [surface temperature data from observations](http://ioos.github.io/notebooks_demos/notebooks/2016-12-19-exploring_csw),\n[coastal velocity from high frequency radar data](http://ioos.github.io/notebooks_demos/notebooks/2017-12-15-finding_HFRadar_currents),\nand a simple model vs observation visualization of temperatures for the [Boston Light Swim competition](http://ioos.github.io/notebooks_demos/notebooks/2016-12-22-boston_light_swim).\n\nIn this notebook we'll demonstrate a step-by-step workflow on how ask the catalog for a specific variable, extract only the model data, and match the nearest model grid point to an observation. The goal is to create an automated skill score for quick assessment of ocean numerical models.\n\n\nThe first cell is only to reduce iris' noisy output,\nthe notebook start on cell [2] with the definition of the parameters:\n- start and end dates for the search;\n- experiment name;\n- a bounding of the region of interest;\n- SOS variable name for the observations;\n- Climate and Forecast standard names;\n- the units we want conform the variables into;\n- catalogs we want to search.",
"_____no_output_____"
]
],
[
[
"import warnings\n\n# Suppresing warnings for a \"pretty output.\"\nwarnings.simplefilter(\"ignore\")",
"_____no_output_____"
],
[
"%%writefile config.yaml\n\ndate:\n start: 2018-2-28 00:00:00\n stop: 2018-3-5 00:00:00\n\nrun_name: 'latest'\n\nregion:\n bbox: [-71.20, 41.40, -69.20, 43.74]\n crs: 'urn:ogc:def:crs:OGC:1.3:CRS84'\n\nsos_name: 'water_surface_height_above_reference_datum'\n\ncf_names:\n - sea_surface_height\n - sea_surface_elevation\n - sea_surface_height_above_geoid\n - sea_surface_height_above_sea_level\n - water_surface_height_above_reference_datum\n - sea_surface_height_above_reference_ellipsoid\n\nunits: 'm'\n\ncatalogs:\n - https://data.ioos.us/csw",
"Overwriting config.yaml\n"
]
],
[
[
"To keep track of the information we'll setup a `config` variable and output them on the screen for bookkeeping.",
"_____no_output_____"
]
],
[
[
"import os\nimport shutil\nfrom datetime import datetime\n\nfrom ioos_tools.ioos import parse_config\n\nconfig = parse_config(\"config.yaml\")\n\n# Saves downloaded data into a temporary directory.\nsave_dir = os.path.abspath(config[\"run_name\"])\nif os.path.exists(save_dir):\n shutil.rmtree(save_dir)\nos.makedirs(save_dir)\n\nfmt = \"{:*^64}\".format\nprint(fmt(\"Saving data inside directory {}\".format(save_dir)))\nprint(fmt(\" Run information \"))\nprint(\"Run date: {:%Y-%m-%d %H:%M:%S}\".format(datetime.utcnow()))\nprint(\"Start: {:%Y-%m-%d %H:%M:%S}\".format(config[\"date\"][\"start\"]))\nprint(\"Stop: {:%Y-%m-%d %H:%M:%S}\".format(config[\"date\"][\"stop\"]))\nprint(\n \"Bounding box: {0:3.2f}, {1:3.2f},\"\n \"{2:3.2f}, {3:3.2f}\".format(*config[\"region\"][\"bbox\"])\n)",
"Saving data inside directory /home/filipe/IOOS/notebooks_demos/notebooks/latest\n*********************** Run information ************************\nRun date: 2018-11-30 13:25:17\nStart: 2018-02-28 00:00:00\nStop: 2018-03-05 00:00:00\nBounding box: -71.20, 41.40,-69.20, 43.74\n"
]
],
[
[
"To interface with the IOOS catalog we will use the [Catalogue Service for the Web (CSW)](https://live.osgeo.org/en/standards/csw_overview.html) endpoint and [python's OWSLib library](https://geopython.github.io/OWSLib).\n\nThe cell below creates the [Filter Encoding Specification (FES)](http://www.opengeospatial.org/standards/filter) with configuration we specified in cell [2]. The filter is composed of:\n- `or` to catch any of the standard names;\n- `not` some names we do not want to show up in the results;\n- `date range` and `bounding box` for the time-space domain of the search.",
"_____no_output_____"
]
],
[
[
"def make_filter(config):\n from owslib import fes\n from ioos_tools.ioos import fes_date_filter\n\n kw = dict(\n wildCard=\"*\", escapeChar=\"\\\\\", singleChar=\"?\", propertyname=\"apiso:Subject\"\n )\n\n or_filt = fes.Or(\n [fes.PropertyIsLike(literal=(\"*%s*\" % val), **kw) for val in config[\"cf_names\"]]\n )\n\n not_filt = fes.Not([fes.PropertyIsLike(literal=\"GRIB-2\", **kw)])\n\n begin, end = fes_date_filter(config[\"date\"][\"start\"], config[\"date\"][\"stop\"])\n\n bbox_crs = fes.BBox(config[\"region\"][\"bbox\"], crs=config[\"region\"][\"crs\"])\n\n filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])]\n return filter_list\n\n\nfilter_list = make_filter(config)",
"_____no_output_____"
]
],
[
[
"We need to wrap `OWSlib.csw.CatalogueServiceWeb` object with a custom function,\n` get_csw_records`, to be able to paginate over the results.\n\nIn the cell below we loop over all the catalogs returns and extract the OPeNDAP endpoints.",
"_____no_output_____"
]
],
[
[
"from ioos_tools.ioos import get_csw_records, service_urls\nfrom owslib.csw import CatalogueServiceWeb\n\ndap_urls = []\nprint(fmt(\" Catalog information \"))\nfor endpoint in config[\"catalogs\"]:\n print(\"URL: {}\".format(endpoint))\n try:\n csw = CatalogueServiceWeb(endpoint, timeout=120)\n except Exception as e:\n print(\"{}\".format(e))\n continue\n csw = get_csw_records(csw, filter_list, esn=\"full\")\n OPeNDAP = service_urls(csw.records, identifier=\"OPeNDAP:OPeNDAP\")\n odp = service_urls(\n csw.records, identifier=\"urn:x-esri:specification:ServiceType:odp:url\"\n )\n dap = OPeNDAP + odp\n dap_urls.extend(dap)\n\n print(\"Number of datasets available: {}\".format(len(csw.records.keys())))\n\n for rec, item in csw.records.items():\n print(\"{}\".format(item.title))\n if dap:\n print(fmt(\" DAP \"))\n for url in dap:\n print(\"{}.html\".format(url))\n print(\"\\n\")\n\n# Get only unique endpoints.\ndap_urls = list(set(dap_urls))",
"********************* Catalog information **********************\nURL: https://data.ioos.us/csw\nNumber of datasets available: 13\nurn:ioos:station:NOAA.NOS.CO-OPS:8447386 station, Fall River, MA\nurn:ioos:station:NOAA.NOS.CO-OPS:8447435 station, Chatham, Lydia Cove, MA\nurn:ioos:station:NOAA.NOS.CO-OPS:8447930 station, Woods Hole, MA\nCoupled Northwest Atlantic Prediction System (CNAPS)\nHYbrid Coordinate Ocean Model (HYCOM): Global\nNECOFS (FVCOM) - Scituate - Latest Forecast\nNECOFS Massachusetts (FVCOM) - Boston - Latest Forecast\nROMS ESPRESSO Real-Time Operational IS4DVAR Forecast System Version 2 (NEW) 2013-present FMRC Averages\nROMS ESPRESSO Real-Time Operational IS4DVAR Forecast System Version 2 (NEW) 2013-present FMRC History\nurn:ioos:station:NOAA.NOS.CO-OPS:8418150 station, Portland, ME\nurn:ioos:station:NOAA.NOS.CO-OPS:8419317 station, Wells, ME\nurn:ioos:station:NOAA.NOS.CO-OPS:8423898 station, Fort Point, NH\nurn:ioos:station:NOAA.NOS.CO-OPS:8443970 station, Boston, MA\n***************************** DAP ******************************\nhttp://oos.soest.hawaii.edu/thredds/dodsC/pacioos/hycom/global.html\nhttp://tds.marine.rutgers.edu/thredds/dodsC/roms/espresso/2013_da/avg/ESPRESSO_Real-Time_v2_Averages_Best.html\nhttp://tds.marine.rutgers.edu/thredds/dodsC/roms/espresso/2013_da/his/ESPRESSO_Real-Time_v2_History_Best.html\nhttp://thredds.secoora.org/thredds/dodsC/SECOORA_NCSU_CNAPS.nc.html\nhttp://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_BOSTON_FORECAST.nc.html\nhttp://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_SCITUATE_FORECAST.nc.html\nhttps://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/images/tide_gauge.jpg.html\n\n\n"
]
],
[
[
"We found 10 dataset endpoints but only 9 of them have the proper metadata for us to identify the OPeNDAP endpoint,\nthose that contain either `OPeNDAP:OPeNDAP` or `urn:x-esri:specification:ServiceType:odp:url` scheme.\nUnfortunately we lost the `COAWST` model in the process.\n\nThe next step is to ensure there are no observations in the list of endpoints.\nWe want only the models for now.",
"_____no_output_____"
]
],
[
[
"from ioos_tools.ioos import is_station\nfrom timeout_decorator import TimeoutError\n\n# Filter out some station endpoints.\nnon_stations = []\nfor url in dap_urls:\n try:\n if not is_station(url):\n non_stations.append(url)\n except (IOError, OSError, RuntimeError, TimeoutError) as e:\n print(\"Could not access URL {}.html\\n{!r}\".format(url, e))\n\ndap_urls = non_stations\n\nprint(fmt(\" Filtered DAP \"))\nfor url in dap_urls:\n print(\"{}.html\".format(url))",
"Could not access URL https://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/images/tide_gauge.jpg.html\nOSError(-90, 'NetCDF: file not found')\n************************* Filtered DAP *************************\nhttp://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_BOSTON_FORECAST.nc.html\nhttp://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_SCITUATE_FORECAST.nc.html\nhttp://thredds.secoora.org/thredds/dodsC/SECOORA_NCSU_CNAPS.nc.html\nhttp://tds.marine.rutgers.edu/thredds/dodsC/roms/espresso/2013_da/avg/ESPRESSO_Real-Time_v2_Averages_Best.html\nhttp://tds.marine.rutgers.edu/thredds/dodsC/roms/espresso/2013_da/his/ESPRESSO_Real-Time_v2_History_Best.html\nhttp://oos.soest.hawaii.edu/thredds/dodsC/pacioos/hycom/global.html\n"
]
],
[
[
"Now we have a nice list of all the models available in the catalog for the domain we specified.\nWe still need to find the observations for the same domain.\nTo accomplish that we will use the `pyoos` library and search the [SOS CO-OPS](https://opendap.co-ops.nos.noaa.gov/ioos-dif-sos/) services using the virtually the same configuration options from the catalog search.",
"_____no_output_____"
]
],
[
[
"from pyoos.collectors.coops.coops_sos import CoopsSos\n\ncollector_coops = CoopsSos()\n\ncollector_coops.set_bbox(config[\"region\"][\"bbox\"])\ncollector_coops.end_time = config[\"date\"][\"stop\"]\ncollector_coops.start_time = config[\"date\"][\"start\"]\ncollector_coops.variables = [config[\"sos_name\"]]\n\nofrs = collector_coops.server.offerings\ntitle = collector_coops.server.identification.title\nprint(fmt(\" Collector offerings \"))\nprint(\"{}: {} offerings\".format(title, len(ofrs)))",
"********************* Collector offerings **********************\nNOAA.NOS.CO-OPS SOS: 1229 offerings\n"
]
],
[
[
"To make it easier to work with the data we extract the time-series as pandas tables and interpolate them to a common 1-hour interval index.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom ioos_tools.ioos import collector2table\n\ndata = collector2table(\n collector=collector_coops,\n config=config,\n col=\"water_surface_height_above_reference_datum (m)\",\n)\n\ndf = dict(\n station_name=[s._metadata.get(\"station_name\") for s in data],\n station_code=[s._metadata.get(\"station_code\") for s in data],\n sensor=[s._metadata.get(\"sensor\") for s in data],\n lon=[s._metadata.get(\"lon\") for s in data],\n lat=[s._metadata.get(\"lat\") for s in data],\n depth=[s._metadata.get(\"depth\") for s in data],\n)\n\npd.DataFrame(df).set_index(\"station_code\")",
"_____no_output_____"
],
[
"index = pd.date_range(\n start=config[\"date\"][\"start\"].replace(tzinfo=None),\n end=config[\"date\"][\"stop\"].replace(tzinfo=None),\n freq=\"1H\",\n)\n\n# Preserve metadata with `reindex`.\nobservations = []\nfor series in data:\n _metadata = series._metadata\n series.index = series.index.tz_localize(None)\n obs = series.reindex(index=index, limit=1, method=\"nearest\")\n obs._metadata = _metadata\n observations.append(obs)",
"_____no_output_____"
]
],
[
[
"The next cell saves those time-series as CF-compliant netCDF files on disk,\nto make it easier to access them later.",
"_____no_output_____"
]
],
[
[
"import iris\nfrom ioos_tools.tardis import series2cube\n\nattr = dict(\n featureType=\"timeSeries\",\n Conventions=\"CF-1.6\",\n standard_name_vocabulary=\"CF-1.6\",\n cdm_data_type=\"Station\",\n comment=\"Data from http://opendap.co-ops.nos.noaa.gov\",\n)\n\n\ncubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])\n\noutfile = os.path.join(save_dir, \"OBS_DATA.nc\")\niris.save(cubes, outfile)",
"_____no_output_____"
]
],
[
[
"We still need to read the model data from the list of endpoints we found.\n\nThe next cell takes care of that.\nWe use `iris`, and a set of custom functions from the `ioos_tools` library,\nthat downloads only the data in the domain we requested.",
"_____no_output_____"
]
],
[
[
"from ioos_tools.ioos import get_model_name\nfrom ioos_tools.tardis import is_model, proc_cube, quick_load_cubes\nfrom iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError\n\nprint(fmt(\" Models \"))\ncubes = dict()\nfor k, url in enumerate(dap_urls):\n print(\"\\n[Reading url {}/{}]: {}\".format(k + 1, len(dap_urls), url))\n try:\n cube = quick_load_cubes(url, config[\"cf_names\"], callback=None, strict=True)\n if is_model(cube):\n cube = proc_cube(\n cube,\n bbox=config[\"region\"][\"bbox\"],\n time=(config[\"date\"][\"start\"], config[\"date\"][\"stop\"]),\n units=config[\"units\"],\n )\n else:\n print(\"[Not model data]: {}\".format(url))\n continue\n mod_name = get_model_name(url)\n cubes.update({mod_name: cube})\n except (\n RuntimeError,\n ValueError,\n ConstraintMismatchError,\n CoordinateNotFoundError,\n IndexError,\n ) as e:\n print(\"Cannot get cube for: {}\\n{}\".format(url, e))",
"**************************** Models ****************************\n\n[Reading url 1/6]: http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_BOSTON_FORECAST.nc\n\n[Reading url 2/6]: http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_SCITUATE_FORECAST.nc\n\n[Reading url 3/6]: http://thredds.secoora.org/thredds/dodsC/SECOORA_NCSU_CNAPS.nc\n\n[Reading url 4/6]: http://tds.marine.rutgers.edu/thredds/dodsC/roms/espresso/2013_da/avg/ESPRESSO_Real-Time_v2_Averages_Best\n\n[Reading url 5/6]: http://tds.marine.rutgers.edu/thredds/dodsC/roms/espresso/2013_da/his/ESPRESSO_Real-Time_v2_History_Best\n\n[Reading url 6/6]: http://oos.soest.hawaii.edu/thredds/dodsC/pacioos/hycom/global\n"
]
],
[
[
"Now we can match each observation time-series with its closest grid point (0.08 of a degree) on each model.\nThis is a complex and laborious task! If you are running this interactively grab a coffee and sit comfortably :-)\n\nNote that we are also saving the model time-series to files that align with the observations we saved before.",
"_____no_output_____"
]
],
[
[
"import iris\nfrom ioos_tools.tardis import (\n add_station,\n ensure_timeseries,\n get_nearest_water,\n make_tree,\n)\nfrom iris.pandas import as_series\n\nfor mod_name, cube in cubes.items():\n fname = \"{}.nc\".format(mod_name)\n fname = os.path.join(save_dir, fname)\n print(fmt(\" Downloading to file {} \".format(fname)))\n try:\n tree, lon, lat = make_tree(cube)\n except CoordinateNotFoundError:\n print(\"Cannot make KDTree for: {}\".format(mod_name))\n continue\n # Get model series at observed locations.\n raw_series = dict()\n for obs in observations:\n obs = obs._metadata\n station = obs[\"station_code\"]\n try:\n kw = dict(k=10, max_dist=0.08, min_var=0.01)\n args = cube, tree, obs[\"lon\"], obs[\"lat\"]\n try:\n series, dist, idx = get_nearest_water(*args, **kw)\n except RuntimeError as e:\n print(\"Cannot download {!r}.\\n{}\".format(cube, e))\n series = None\n except ValueError:\n status = \"No Data\"\n print(\"[{}] {}\".format(status, obs[\"station_name\"]))\n continue\n if not series:\n status = \"Land \"\n else:\n raw_series.update({station: series})\n series = as_series(series)\n status = \"Water \"\n print(\"[{}] {}\".format(status, obs[\"station_name\"]))\n if raw_series: # Save cube.\n for station, cube in raw_series.items():\n cube = add_station(cube, station)\n try:\n cube = iris.cube.CubeList(raw_series.values()).merge_cube()\n except MergeError as e:\n print(e)\n ensure_timeseries(cube)\n try:\n iris.save(cube, fname)\n except AttributeError:\n # FIXME: we should patch the bad attribute instead of removing everything.\n cube.attributes = {}\n iris.save(cube, fname)\n del cube\n print(\"Finished processing [{}]\".format(mod_name))",
" Downloading to file /home/filipe/IOOS/notebooks_demos/notebooks/latest/Forecasts-NECOFS_FVCOM_OCEAN_BOSTON_FORECAST.nc \n[No Data] Portland, ME\n[No Data] Wells, ME\n[Water ] Boston, MA\n[No Data] Fall River, MA\n[No Data] Chatham, Lydia Cove, MA\n[No Data] Woods Hole, MA\nFinished processing [Forecasts-NECOFS_FVCOM_OCEAN_BOSTON_FORECAST]\n Downloading to file /home/filipe/IOOS/notebooks_demos/notebooks/latest/Forecasts-NECOFS_FVCOM_OCEAN_SCITUATE_FORECAST.nc \n[No Data] Portland, ME\n[No Data] Wells, ME\n[No Data] Boston, MA\n[No Data] Fall River, MA\n[No Data] Chatham, Lydia Cove, MA\n[No Data] Woods Hole, MA\nFinished processing [Forecasts-NECOFS_FVCOM_OCEAN_SCITUATE_FORECAST]\n Downloading to file /home/filipe/IOOS/notebooks_demos/notebooks/latest/SECOORA_NCSU_CNAPS.nc \n[Land ] Portland, ME\n[Water ] Wells, ME\n[Land ] Boston, MA\n[Land ] Fall River, MA\n[Land ] Chatham, Lydia Cove, MA\n[Water ] Woods Hole, MA\nFinished processing [SECOORA_NCSU_CNAPS]\n Downloading to file /home/filipe/IOOS/notebooks_demos/notebooks/latest/roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best.nc \n[No Data] Portland, ME\n[No Data] Wells, ME\n[Land ] Boston, MA\n[Land ] Fall River, MA\n[Water ] Chatham, Lydia Cove, MA\n[Water ] Woods Hole, MA\nFinished processing [roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best]\n Downloading to file /home/filipe/IOOS/notebooks_demos/notebooks/latest/roms_2013_da-ESPRESSO_Real-Time_v2_History_Best.nc \n[No Data] Portland, ME\n[No Data] Wells, ME\n[Land ] Boston, MA\n[Land ] Fall River, MA\n[Water ] Chatham, Lydia Cove, MA\n[Water ] Woods Hole, MA\nFinished processing [roms_2013_da-ESPRESSO_Real-Time_v2_History_Best]\n Downloading to file /home/filipe/IOOS/notebooks_demos/notebooks/latest/pacioos_hycom-global.nc \n[Land ] Portland, ME\n[Water ] Wells, ME\n[Land ] Boston, MA\n[Land ] Fall River, MA\n[Water ] Chatham, Lydia Cove, MA\n[Land ] Woods Hole, MA\nFinished processing [pacioos_hycom-global]\n"
]
],
[
[
"With the matched set of models and observations time-series it is relatively easy to compute skill score metrics on them. In cells [13] to [16] we apply both mean bias and root mean square errors to the time-series.",
"_____no_output_____"
]
],
[
[
"from ioos_tools.ioos import stations_keys\n\n\ndef rename_cols(df, config):\n cols = stations_keys(config, key=\"station_name\")\n return df.rename(columns=cols)",
"_____no_output_____"
],
[
"from ioos_tools.ioos import load_ncs\nfrom ioos_tools.skill_score import apply_skill, mean_bias\n\ndfs = load_ncs(config)\n\ndf = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)\nskill_score = dict(mean_bias=df.to_dict())\n\n# Filter out stations with no valid comparison.\ndf.dropna(how=\"all\", axis=1, inplace=True)\ndf = df.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")",
"_____no_output_____"
],
[
"from ioos_tools.skill_score import rmse\n\ndfs = load_ncs(config)\n\ndf = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)\nskill_score[\"rmse\"] = df.to_dict()\n\n# Filter out stations with no valid comparison.\ndf.dropna(how=\"all\", axis=1, inplace=True)\ndf = df.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")",
"_____no_output_____"
],
[
"import pandas as pd\n\n# Stringfy keys.\nfor key in skill_score.keys():\n skill_score[key] = {str(k): v for k, v in skill_score[key].items()}\n\nmean_bias = pd.DataFrame.from_dict(skill_score[\"mean_bias\"])\nmean_bias = mean_bias.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")\n\nskill_score = pd.DataFrame.from_dict(skill_score[\"rmse\"])\nskill_score = skill_score.applymap(\"{:.2f}\".format).replace(\"nan\", \"--\")",
"_____no_output_____"
]
],
[
[
"Last but not least we can assemble a GIS map, cells [17-23],\nwith the time-series plot for the observations and models,\nand the corresponding skill scores.",
"_____no_output_____"
]
],
[
[
"import folium\nfrom ioos_tools.ioos import get_coordinates\n\n\ndef make_map(bbox, **kw):\n line = kw.pop(\"line\", True)\n zoom_start = kw.pop(\"zoom_start\", 5)\n\n lon = (bbox[0] + bbox[2]) / 2\n lat = (bbox[1] + bbox[3]) / 2\n m = folium.Map(\n width=\"100%\", height=\"100%\", location=[lat, lon], zoom_start=zoom_start\n )\n\n if line:\n p = folium.PolyLine(\n get_coordinates(bbox), color=\"#FF0000\", weight=2, opacity=0.9,\n )\n p.add_to(m)\n return m",
"_____no_output_____"
],
[
"bbox = config[\"region\"][\"bbox\"]\n\nm = make_map(bbox, zoom_start=8, line=True, layers=True)",
"_____no_output_____"
],
[
"all_obs = stations_keys(config)\n\nfrom glob import glob\nfrom operator import itemgetter\n\nimport iris\nfrom folium.plugins import MarkerCluster\n\niris.FUTURE.netcdf_promote = True\n\nbig_list = []\nfor fname in glob(os.path.join(save_dir, \"*.nc\")):\n if \"OBS_DATA\" in fname:\n continue\n cube = iris.load_cube(fname)\n model = os.path.split(fname)[1].split(\"-\")[-1].split(\".\")[0]\n lons = cube.coord(axis=\"X\").points\n lats = cube.coord(axis=\"Y\").points\n stations = cube.coord(\"station_code\").points\n models = [model] * lons.size\n lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist())\n big_list.extend(lista)\n\nbig_list.sort(key=itemgetter(3))\ndf = pd.DataFrame(big_list, columns=[\"name\", \"lon\", \"lat\", \"station\"])\ndf.set_index(\"station\", drop=True, inplace=True)\ngroups = df.groupby(df.index)\n\n\nlocations, popups = [], []\nfor station, info in groups:\n sta_name = all_obs[station]\n for lat, lon, name in zip(info.lat, info.lon, info.name):\n locations.append([lat, lon])\n popups.append(\"[{}]: {}\".format(name, sta_name))\n\nMarkerCluster(locations=locations, popups=popups, name=\"Cluster\").add_to(m)",
"_____no_output_____"
],
[
"titles = {\n \"coawst_4_use_best\": \"COAWST_4\",\n \"pacioos_hycom-global\": \"HYCOM\",\n \"NECOFS_GOM3_FORECAST\": \"NECOFS_GOM3\",\n \"NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST\": \"NECOFS_MassBay\",\n \"NECOFS_FVCOM_OCEAN_BOSTON_FORECAST\": \"NECOFS_Boston\",\n \"SECOORA_NCSU_CNAPS\": \"SECOORA/CNAPS\",\n \"roms_2013_da_avg-ESPRESSO_Real-Time_v2_Averages_Best\": \"ESPRESSO Avg\",\n \"roms_2013_da-ESPRESSO_Real-Time_v2_History_Best\": \"ESPRESSO Hist\",\n \"OBS_DATA\": \"Observations\",\n}",
"_____no_output_____"
],
[
"from itertools import cycle\n\nfrom bokeh.embed import file_html\nfrom bokeh.models import HoverTool, Legend\nfrom bokeh.palettes import Category20\nfrom bokeh.plotting import figure\nfrom bokeh.resources import CDN\nfrom folium import IFrame\n\n# Plot defaults.\ncolors = Category20[20]\ncolorcycler = cycle(colors)\ntools = \"pan,box_zoom,reset\"\nwidth, height = 750, 250\n\n\ndef make_plot(df, station):\n p = figure(\n toolbar_location=\"above\",\n x_axis_type=\"datetime\",\n width=width,\n height=height,\n tools=tools,\n title=str(station),\n )\n leg = []\n for column, series in df.iteritems():\n series.dropna(inplace=True)\n if not series.empty:\n if \"OBS_DATA\" not in column:\n bias = mean_bias[str(station)][column]\n skill = skill_score[str(station)][column]\n line_color = next(colorcycler)\n kw = dict(alpha=0.65, line_color=line_color)\n else:\n skill = bias = \"NA\"\n kw = dict(alpha=1, color=\"crimson\")\n line = p.line(\n x=series.index,\n y=series.values,\n line_width=5,\n line_cap=\"round\",\n line_join=\"round\",\n **kw\n )\n leg.append((\"{}\".format(titles.get(column, column)), [line]))\n p.add_tools(\n HoverTool(\n tooltips=[\n (\"Name\", \"{}\".format(titles.get(column, column))),\n (\"Bias\", bias),\n (\"Skill\", skill),\n ],\n renderers=[line],\n )\n )\n legend = Legend(items=leg, location=(0, 60))\n legend.click_policy = \"mute\"\n p.add_layout(legend, \"right\")\n p.yaxis[0].axis_label = \"Water Height (m)\"\n p.xaxis[0].axis_label = \"Date/time\"\n return p\n\n\ndef make_marker(p, station):\n lons = stations_keys(config, key=\"lon\")\n lats = stations_keys(config, key=\"lat\")\n\n lon, lat = lons[station], lats[station]\n html = file_html(p, CDN, station)\n iframe = IFrame(html, width=width + 40, height=height + 80)\n\n popup = folium.Popup(iframe, max_width=2650)\n icon = folium.Icon(color=\"green\", icon=\"stats\")\n marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon)\n return marker",
"_____no_output_____"
],
[
"dfs = load_ncs(config)\n\nfor station in dfs:\n sta_name = all_obs[station]\n df = dfs[station]\n if df.empty:\n continue\n p = make_plot(df, station)\n marker = make_marker(p, station)\n marker.add_to(m)\n\nfolium.LayerControl().add_to(m)",
"_____no_output_____"
],
[
"def embed_map(m):\n from IPython.display import HTML\n\n m.save(\"index.html\")\n with open(\"index.html\") as f:\n html = f.read()\n\n iframe = '<iframe srcdoc=\"{srcdoc}\" style=\"width: 100%; height: 750px; border: none\"></iframe>'\n srcdoc = html.replace('\"', \""\")\n return HTML(iframe.format(srcdoc=srcdoc))\n\n\nembed_map(m)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ada720f47f417883d58dfa5ecad8e8c703e4b93
| 149,947 |
ipynb
|
Jupyter Notebook
|
ML_AI/predicting_cost_of_used_cars/cost_of_used_cars.ipynb
|
PranavHegde99/OpenOctober
|
1018f92d9aedee168539004cd15f098d927aeeca
|
[
"Apache-2.0"
] | 32 |
2020-10-17T09:58:41.000Z
|
2021-10-13T04:43:35.000Z
|
ML_AI/predicting_cost_of_used_cars/cost_of_used_cars.ipynb
|
vasu-1/OpenOctober
|
0cfd89ea6e0343e2d89c4d10b544c1a8e55f083a
|
[
"Apache-2.0"
] | 380 |
2020-10-18T15:35:49.000Z
|
2021-12-25T05:03:50.000Z
|
ML_AI/predicting_cost_of_used_cars/cost_of_used_cars.ipynb
|
vasu-1/OpenOctober
|
0cfd89ea6e0343e2d89c4d10b544c1a8e55f083a
|
[
"Apache-2.0"
] | 68 |
2020-10-17T17:29:54.000Z
|
2021-10-13T04:43:35.000Z
| 110.093245 | 40,188 | 0.815561 |
[
[
[
"### Made by Kartikey Sharma (IIT Goa)",
"_____no_output_____"
],
[
"### GOAL \nPredicting the costs of used cars given the data collected from various sources and distributed across various locations in India.\n\n#### FEATURES:\n<b>Name</b>: The brand and model of the car.<br>\n<b>Location</b>: The location in which the car is being sold or is available for purchase.<br>\n<b>Year</b>: The year or edition of the model.<br>\n<b>Kilometers_Driven</b>: The total kilometres driven in the car by the previous owner(s) in KM.<br>\n<b>Fuel_Type</b>: The type of fuel used by the car.<br>\n<b>Transmission</b>: The type of transmission used by the car.<br>\n<b>Owner_Type</b>: Whether the ownership is Firsthand, Second hand or other.<br>\n<b>Mileage</b>: The standard mileage offered by the car company in kmpl or km/kg.<br>\n<b>Engine</b>: The displacement volume of the engine in cc.<br>\n<b>Power</b>: The maximum power of the engine in bhp.<br>\n<b>Seats</b>: The number of seats in the car.<br>\n<b>Price</b>: The price of the used car in INR Lakhs.<br>\n\n\n### Process\nClean the data (missing values and categorical variables).'.\n<br>Build the model and check the MAE.\n<br>Try to improve the model.\n<br>Brand matters too! I could select the brand name of the car and treat them as categorical data.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport warnings\nimport seaborn as sns\nsns.set_style('darkgrid')\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"#Importing datasets\ndf_train = pd.read_excel(\"Data_Train.xlsx\")\ndf_test = pd.read_excel(\"Data_Test.xlsx\")\ndf_train.head()",
"_____no_output_____"
],
[
"df_train.shape",
"_____no_output_____"
],
[
"df_train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6019 entries, 0 to 6018\nData columns (total 12 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Name 6019 non-null object \n 1 Location 6019 non-null object \n 2 Year 6019 non-null int64 \n 3 Kilometers_Driven 6019 non-null int64 \n 4 Fuel_Type 6019 non-null object \n 5 Transmission 6019 non-null object \n 6 Owner_Type 6019 non-null object \n 7 Mileage 6017 non-null object \n 8 Engine 5983 non-null object \n 9 Power 5983 non-null object \n 10 Seats 5977 non-null float64\n 11 Price 6019 non-null float64\ndtypes: float64(2), int64(2), object(8)\nmemory usage: 564.4+ KB\n"
],
[
"#No of duplicated values in the train set\ndf_train.duplicated().sum()",
"_____no_output_____"
],
[
"#Seeing the number of duplicated values\ndf_test.duplicated().sum()",
"_____no_output_____"
],
[
"#Number of null values\n\ndf_train.isnull().sum()",
"_____no_output_____"
],
[
"df_train.nunique()",
"_____no_output_____"
],
[
"df_train['Name'] = df_train.Name.str.split().str.get(0)\ndf_test['Name'] = df_test.Name.str.split().str.get(0)\ndf_train.head()",
"_____no_output_____"
],
[
"# all rows have been modified\ndf_train['Name'].value_counts().sum()",
"_____no_output_____"
]
],
[
[
"### Missing Values",
"_____no_output_____"
]
],
[
[
"# Get names of columns with missing values\ncols_with_missing = [col for col in df_train.columns\n if df_train[col].isnull().any()]\nprint(\"Columns with missing values:\")\nprint(cols_with_missing)",
"Columns with missing values:\n['Mileage', 'Engine', 'Power', 'Seats']\n"
],
[
"df_train['Seats'].fillna(df_train['Seats'].mean(),inplace=True)\ndf_test['Seats'].fillna(df_test['Seats'].mean(),inplace=True)",
"_____no_output_____"
],
[
"# for more accurate predicitions\ndata = pd.concat([df_train,df_test], sort=False)",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\ndata['Mileage'].value_counts().head(100).plot.bar()\nplt.show()",
"_____no_output_____"
],
[
"df_train['Mileage'] = df_train['Mileage'].fillna('17.0 kmpl')\ndf_test['Mileage'] = df_test['Mileage'].fillna('17.0 kmpl')\n# o(zero) and null are both missing values clearly\ndf_train['Mileage'] = df_train['Mileage'].replace(\"0.0 kmpl\", \"17.0 kmpl\")\ndf_test['Mileage'] = df_test['Mileage'].replace(\"0.0 kmpl\", \"17.0 kmpl\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\ndata['Engine'].value_counts().head(100).plot.bar()\nplt.show()",
"_____no_output_____"
],
[
"df_train['Engine'] = df_train['Engine'].fillna('1000 CC')\ndf_test['Engine'] = df_test['Engine'].fillna('1000 CC')",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,5))\ndata['Power'].value_counts().head(100).plot.bar()\nplt.show()",
"_____no_output_____"
],
[
"df_train['Power'] = df_train['Power'].fillna('74 bhp')\ndf_test['Power'] = df_test['Power'].fillna('74 bhp')\n#null bhp created a problem during LabelEncoding\ndf_train['Power'] = df_train['Power'].replace(\"null bhp\", \"74 bhp\")\ndf_test['Power'] = df_test['Power'].replace(\"null bhp\", \"74 bhp\")",
"_____no_output_____"
],
[
"# Method to extract 'float' from 'object' \nimport re\ndef get_number(name):\n title_search = re.search('([\\d+\\.+\\d]+\\W)', name)\n \n if title_search:\n return title_search.group(1)\n return \"\"",
"_____no_output_____"
],
[
"df_train.isnull().sum()",
"_____no_output_____"
],
[
"df_train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6019 entries, 0 to 6018\nData columns (total 12 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Name 6019 non-null object \n 1 Location 6019 non-null object \n 2 Year 6019 non-null int64 \n 3 Kilometers_Driven 6019 non-null int64 \n 4 Fuel_Type 6019 non-null object \n 5 Transmission 6019 non-null object \n 6 Owner_Type 6019 non-null object \n 7 Mileage 6019 non-null object \n 8 Engine 6019 non-null object \n 9 Power 6019 non-null object \n 10 Seats 6019 non-null float64\n 11 Price 6019 non-null float64\ndtypes: float64(2), int64(2), object(8)\nmemory usage: 564.4+ KB\n"
],
[
"#Acquring float values and isolating them\ndf_train['Mileage'] = df_train['Mileage'].apply(get_number).astype('float')\ndf_train['Engine'] = df_train['Engine'].apply(get_number).astype('int')\ndf_train['Power'] = df_train['Power'].apply(get_number).astype('float')\n\ndf_test['Mileage'] = df_test['Mileage'].apply(get_number).astype('float')\ndf_test['Engine'] = df_test['Engine'].apply(get_number).astype('int')\ndf_test['Power'] = df_test['Power'].apply(get_number).astype('float')\n\ndf_train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6019 entries, 0 to 6018\nData columns (total 12 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Name 6019 non-null object \n 1 Location 6019 non-null object \n 2 Year 6019 non-null int64 \n 3 Kilometers_Driven 6019 non-null int64 \n 4 Fuel_Type 6019 non-null object \n 5 Transmission 6019 non-null object \n 6 Owner_Type 6019 non-null object \n 7 Mileage 6019 non-null float64\n 8 Engine 6019 non-null int32 \n 9 Power 6019 non-null float64\n 10 Seats 6019 non-null float64\n 11 Price 6019 non-null float64\ndtypes: float64(4), int32(1), int64(2), object(5)\nmemory usage: 540.9+ KB\n"
],
[
"df_test.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1234 entries, 0 to 1233\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Name 1234 non-null object \n 1 Location 1234 non-null object \n 2 Year 1234 non-null int64 \n 3 Kilometers_Driven 1234 non-null int64 \n 4 Fuel_Type 1234 non-null object \n 5 Transmission 1234 non-null object \n 6 Owner_Type 1234 non-null object \n 7 Mileage 1234 non-null float64\n 8 Engine 1234 non-null int32 \n 9 Power 1234 non-null float64\n 10 Seats 1234 non-null float64\ndtypes: float64(3), int32(1), int64(2), object(5)\nmemory usage: 101.4+ KB\n"
],
[
"df_train.head()",
"_____no_output_____"
]
],
[
[
"### Categorical Variables",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\ny = np.log1p(df_train.Price) # Made a HUGE difference. MAE went down highly\nX = df_train.drop(['Price'],axis=1)\n\nX_train, X_valid, y_train, y_valid = train_test_split(X,y,train_size=0.82,test_size=0.18,random_state=0)\n",
"_____no_output_____"
],
[
"from sklearn.preprocessing import LabelEncoder\nlabel_encoder = LabelEncoder()\n\nX_train['Name'] = label_encoder.fit_transform(X_train['Name'])\nX_valid['Name'] = label_encoder.transform(X_valid['Name'])\ndf_test['Name'] = label_encoder.fit_transform(df_test['Name'])\n\nX_train['Location'] = label_encoder.fit_transform(X_train['Location'])\nX_valid['Location'] = label_encoder.transform(X_valid['Location'])\ndf_test['Location'] = label_encoder.fit_transform(df_test['Location'])\n\nX_train['Fuel_Type'] = label_encoder.fit_transform(X_train['Fuel_Type'])\nX_valid['Fuel_Type'] = label_encoder.transform(X_valid['Fuel_Type'])\ndf_test['Fuel_Type'] = label_encoder.fit_transform(df_test['Fuel_Type'])\n\nX_train['Transmission'] = label_encoder.fit_transform(X_train['Transmission'])\nX_valid['Transmission'] = label_encoder.transform(X_valid['Transmission'])\ndf_test['Transmission'] = label_encoder.fit_transform(df_test['Transmission'])\n\nX_train['Owner_Type'] = label_encoder.fit_transform(X_train['Owner_Type'])\nX_valid['Owner_Type'] = label_encoder.transform(X_valid['Owner_Type'])\ndf_test['Owner_Type'] = label_encoder.fit_transform(df_test['Owner_Type'])",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"X_train.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 4935 entries, 2058 to 2732\nData columns (total 11 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Name 4935 non-null int32 \n 1 Location 4935 non-null int32 \n 2 Year 4935 non-null int64 \n 3 Kilometers_Driven 4935 non-null int64 \n 4 Fuel_Type 4935 non-null int32 \n 5 Transmission 4935 non-null int32 \n 6 Owner_Type 4935 non-null int32 \n 7 Mileage 4935 non-null float64\n 8 Engine 4935 non-null int32 \n 9 Power 4935 non-null float64\n 10 Seats 4935 non-null float64\ndtypes: float64(3), int32(6), int64(2)\nmemory usage: 347.0 KB\n"
]
],
[
[
"## Model",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBRegressor\nfrom sklearn.metrics import mean_absolute_error,mean_squared_error,mean_squared_log_error\nfrom math import sqrt\n\nmy_model = XGBRegressor(n_estimators=1000, learning_rate=0.05)\nmy_model.fit(X_train, y_train, \n early_stopping_rounds=5, \n eval_set=[(X_valid, y_valid)], \n verbose=False)\n\npredictions = my_model.predict(X_valid)\nprint(\"MAE: \" + str(mean_absolute_error(predictions, y_valid)))\nprint(\"MSE: \" + str(mean_squared_error(predictions, y_valid)))\nprint(\"MSLE: \" + str(mean_squared_log_error(predictions, y_valid)))\n\nprint(\"RMSE: \"+ str(sqrt(mean_squared_error(predictions, y_valid))))",
"MAE: 0.10643605651544133\nMSE: 0.027429293987309863\nMSLE: 0.0030055833644411435\nRMSE: 0.16561791565923617\n"
]
],
[
[
"## Prediciting on Test",
"_____no_output_____"
]
],
[
[
"preds_test = my_model.predict(df_test)\npreds_test = np.exp(preds_test)-1 #converting target to original state\nprint(preds_test)\n\n# The Price is in the format xx.xx So let's round off and submit.\n\npreds_test = preds_test.round(5)\nprint(preds_test)",
"[ 3.440185 3.140221 14.026105 ... 3.0297828 3.2455106 15.280251 ]\n[ 3.44018 3.14022 14.0261 ... 3.02978 3.24551 15.28025]\n"
],
[
"output = pd.DataFrame({'Price': preds_test})\noutput.to_excel('Output.xlsx', index=False)",
"_____no_output_____"
]
],
[
[
"#### NOTE",
"_____no_output_____"
],
[
"Treating 'Mileage' and the others as categorical variables was a mistake. Eg.: Mileage went up from 23.6 to around 338! Converting it to numbers fixed it.\nLabelEncoder won't work if there are missing values.\nValueError: y contains previously unseen label 'Bentley'. Fixed it by increasing training_size in train_test_split.\nScaling all the columns made the model worse (as expected).",
"_____no_output_____"
],
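[
"A possible alternative for the unseen-label problem noted above (a sketch only, not the approach used in this notebook; it assumes scikit-learn >= 0.24): fit a single `OrdinalEncoder` on the training categories and map categories unseen at fit time to a sentinel value instead of raising an error. Note also that calling `fit_transform` separately on `df_test`, as done above, produces encodings that need not match the training encodings.\n\n```python\nfrom sklearn.preprocessing import OrdinalEncoder\n\ncat_cols = ['Name', 'Location', 'Fuel_Type', 'Transmission', 'Owner_Type']\n# handle_unknown='use_encoded_value' maps categories not seen during fit to -1\nenc = OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)\nX_train[cat_cols] = enc.fit_transform(X_train[cat_cols])\nX_valid[cat_cols] = enc.transform(X_valid[cat_cols])\ndf_test[cat_cols] = enc.transform(df_test[cat_cols])\n```",
"_____no_output_____"
],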
[
"==============================================End of Project======================================================",
"_____no_output_____"
]
],
[
[
"# Code by Kartikey Sharma\n# Veni.Vidi.Vici.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
4ada8d4d4b768af0f7847edc25a689a6da195032
| 180,256 |
ipynb
|
Jupyter Notebook
|
notebooks/optimize.ipynb
|
benmontet/rm_fits
|
22dd092f28b87fa039e369c1300e354bd1af9d3b
|
[
"MIT"
] | null | null | null |
notebooks/optimize.ipynb
|
benmontet/rm_fits
|
22dd092f28b87fa039e369c1300e354bd1af9d3b
|
[
"MIT"
] | null | null | null |
notebooks/optimize.ipynb
|
benmontet/rm_fits
|
22dd092f28b87fa039e369c1300e354bd1af9d3b
|
[
"MIT"
] | null | null | null | 60.856178 | 31,152 | 0.779458 |
[
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport starry\n\nimport exoplanet as exo\n",
"_____no_output_____"
],
[
"starry.__version__",
"_____no_output_____"
],
[
"map = starry.Map(ydeg=20, udeg=2, rv=True, lazy=False)\n",
"_____no_output_____"
],
[
"time, vels, verr = np.loadtxt('../data/transit.vels', usecols=[0,1,2], unpack=True)\ntime -= 2458706.5",
"_____no_output_____"
],
[
"Prot = 2.85 # days\nP = 8.1387 # days\nt0 = 0.168\ne = 0.0 \nw = 0.0\ninc = 90.0\n\n\nvsini = 18.3 * 1e3 # m /s\nr = 0.06472 # In units of Rstar\nb = -0.40 # I want it to transit in the South!\na = 19.42 # In units of Rstar\nu1 = 0.95\nu2 = 0.20\nobl = -0\ngamma = -15\ngammadot = 100\ngammadotdot = 800\n\nveq = vsini / np.sin(inc * np.pi / 180.0)\n\n",
"_____no_output_____"
],
[
"map.reset()\nmap.inc = inc\nmap.obl = obl\n#map.add_spot(spot_amp, sigma=spot_sig, lon=spot_lon, lat=-spot_lat)\nmap[1:] = [u1, u2]\nmap.veq = veq\n\norbit = exo.orbits.KeplerianOrbit(period=P, a=a, t0=t0, b=b, ecc=e, omega=w, r_star=1.0) \n\n\nt = np.linspace(0.05, 0.30, 1000)\n\n\nf = (t - t0)/P*2*np.pi\nI = np.arccos(b/a)\n\nzo = a*np.cos(f) \nyo = -a*np.sin(np.pi/2+f)*np.cos(I)\nxo = a*np.sin(f)*np.sin(I)\n\n\ntheta = 360.0 / Prot * t\n\nrv = map.rv(xo=xo, yo=yo, zo=zo, ro=r, theta=theta)\nrv += gamma + gammadot*(t-0.15) + gammadotdot*(t-0.15)**2\n \n\nplt.figure(figsize=(15,5))\nplt.plot(t, rv, \"C1\", lw=3)\nplt.errorbar(time, vels, yerr=verr, fmt='.')\nplt.ylim(-60, 40);",
"Compiling `rv`... Done.\n/home/bmontet/anaconda3/envs/p35/lib/python3.5/site-packages/theano/tensor/subtensor.py:2190: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n rval = inputs[0].__getitem__(inputs[1:])\n"
],
[
"#map.show(rv=False)",
"_____no_output_____"
],
[
"from scipy.optimize import minimize",
"_____no_output_____"
],
[
"tuse = time + 0.0\neuse = verr + 0.0\nvuse = vels + 0.0\n\n\ndef rmcurve(params):\n \n vsini, r, b, a, u1, u2, obl, gamma, gammadot, gammadotdot, jitter_good, jitter_bad, q, factor, t0 = params\n veq = vsini / np.sin(inc * np.pi / 180.0)\n \n if u1 + u2 > 1.0:\n print('inf')\n return 2700\n\n map.reset()\n\n map.inc = inc\n map.obl = obl\n #map.add_spot(spot_amp, sigma=spot_sig, lon=spot_lon, lat=-spot_lat)\n map[1:] = [u1, u2]\n map.veq = veq\n\n\n f = (tuse - t0)/P*2*np.pi\n I = np.arccos(b/a)\n\n zo = a*np.cos(f) \n yo = -a*np.sin(np.pi/2+f)*np.cos(I)\n xo = a*np.sin(f)*np.sin(I)\n\n\n theta = 360.0 / Prot * tuse\n\n rv_0 = map.rv(xo=xo, yo=yo, zo=zo, ro=r, theta=theta)\n trend = gamma + gammadot*(tuse-0.15) + gammadotdot*(tuse-0.15)**2\n rv = rv_0 + trend\n \n \n var_good = (euse**2 + jitter_good**2)\n var_bad = (euse**2 + jitter_bad**2)\n \n goodgauss = q / np.sqrt(2*np.pi*var_good) * np.exp(-(rv-vuse)**2/(2*var_good))\n badgauss = (1-q) / np.sqrt(2*np.pi*var_bad) * np.exp(-(rv_0*factor+trend-vuse)**2/(2*var_bad))\n\n totgauss = np.log(goodgauss + badgauss)\n \n #print(np.log(goodgauss))\n #print(np.log(badgauss))\n \n print(-1*np.sum(totgauss))\n return -1*np.sum(totgauss)\n\n",
"_____no_output_____"
],
[
"def plot_rmcurve(params):\n\n vsini, r, b, a, u1, u2, obl, gamma, gammadot, gammadotdot, gamma3, gamma4, jitter_good, jitter_bad, q, factor, t0 = params\n veq = vsini / np.sin(inc * np.pi / 180.0)\n\n map.reset()\n\n map.inc = inc\n map.obl = obl\n #map.add_spot(spot_amp, sigma=spot_sig, lon=spot_lon, lat=-spot_lat)\n map[1:] = [u1, u2]\n map.veq = veq\n\n\n f = (t - t0)/P*2*np.pi\n I = np.arccos(b/a)\n\n zo = a*np.cos(f) \n yo = -a*np.sin(np.pi/2+f)*np.cos(I)\n xo = a*np.sin(f)*np.sin(I)\n\n theta = 360.0 / Prot * t\n\n rv = map.rv(xo=xo, yo=yo, zo=zo, ro=r, theta=theta)\n trend = gamma + gammadot*(t-0.15) + gammadotdot*(t-0.15)**2 + gamma3*(t-0.15)**3 + gamma4*(t-0.15)**4\n rv += trend\n \n\n plt.figure(figsize=(15,5))\n plt.plot(t, rv, \"C1\", lw=3)\n plt.plot(t, trend, \"C1\", lw=3)\n \n plt.errorbar(time, vels, yerr=verr, fmt='.')\n plt.ylim(-50, 40);\n plt.show()\n",
"_____no_output_____"
],
[
"inputs = np.array([19300, 0.0588, -0.09, 20.79, 0.8, 0.00, 10.0, -15.0, 100.1, 1300.0, 5500, -30000, 1.0, 1.0, 0.8, 0.60, 0.166])\n\nbnds = ((12000, 24000), (0.04, 0.07), (-1.0, 0.0), (15,25), (0,1),(0,1), (-30,90), (-20,20),(50,300), (0, 3000), (-30000, 300000), (-100000, 100000), (0.0, 2.0), (0.0, 20.0), (0.4, 1.0), (0.0, 1.0), (0.16, 0.175))\n#rmcurve(inputs)\n\nplot_rmcurve(inputs)",
"_____no_output_____"
],
[
"res = minimize(rmcurve, inputs, method='L-BFGS-B', bounds=bnds)",
"190.7659992701508\n190.76599927014186\n190.76599411900986\n190.7659997117711\n190.76599910298438\n190.76599885867563\n190.76599899170662\n190.76599926913738\n190.7659992728386\n190.76599926701223\n190.76599927018668\n190.76599924164242\n190.76599926168706\n190.7659995136279\n190.76599881384917\n190.76598116074285\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\ninf\n188.4147372187533\n188.41473721874405\n188.41473188410032\n188.41473760922045\n188.41473708326853\n188.41473685950382\n188.41473698344936\n188.41473721777496\n188.41473722205262\n188.41473721567502\n188.4147372187868\n188.41473719142988\n188.41473721098035\n188.4147374472033\n188.4147367802099\n188.4147300050881\n179.77119668183465\n179.7711966818316\n179.77119440740853\n179.7711963338303\n179.771196789527\n179.7711967865288\n179.7711968283221\n179.7711966788956\n179.77119668290715\n179.77119667916045\n179.77119668186765\n179.77119665711436\n179.77119667658033\n179.77119676705928\n179.7711965345463\n179.77119423738225\n356.92583001859975\n356.9258300187202\n356.9258909327474\n356.92583096205277\n356.9258297646866\n356.9258287227418\n356.9258291760869\n356.9258300142304\n356.925829982707\n356.9258300127678\n356.9258300182333\n356.92582994500054\n356.9258298662999\n356.9258308214109\n356.92583394447325\n356.9261304690883\n179.61176771186956\n179.611767711869\n179.61176700980587\n179.6117674309466\n179.61176780228044\n179.6117678010611\n179.61176784022166\n179.6117677095418\n179.61176771370967\n179.6117677091427\n179.6117677118959\n179.61176768797617\n179.6117677067957\n179.6117678120975\n179.6117675691197\n179.61177845026583\n178.8030365449106\n178.8030365449066\n178.8030341128511\n178.80303659347553\n178.80303657176918\n178.80303647286803\n178.80303654523448\n178.80303654193096\n178.80303654469927\n178.80303654200202\n178.80303654493923\n178.80303652130655\n178.80303653994727\n178.80303665153406\n178.8030362984036\n178.8030384896549\n178.790075719631\n178.7900757196355\n178.79007867072997\n178.7900758028322\n178.79007575473307\n178.79007568467117\n178.79007574990442\n178.79007571585404\n178.7900757192947\n178.79007571657993\n178.79007571965658\n178.79007569639992\n178.79007571468674\n178.79007584716248\n178.79007556155645\n178.79007235342098\n178.72817005813593\n178.7281700581362\n178.7281703533494\n178.72817012502594\n178.72817008927748\n178.72817000559627\n178.7281700739302\n178.72817005472567\n178.72817005775192\n178.72817005515324\n178.72817005816296\n178.7281700348012\n178.7281700532278\n178.7281701758552\n178.72816985573934\n178.72816910672475\n178.71624021669425\n178.71624021669416\n178.71624026260545\n178.71624024582772\n178.71624025489714\n178.7162401793294\n178.7162402447453\n178.7162402133802\n178.71624021655725\n178.71624021374112\n178.71624021672153\n178.71624019338842\n178.71624021177234\n178.7162403319756\n178.71624001959606\n178.71624025982624\n178.714914719345\n178.71491471934488\n178.71491474561174\n178.71491474807826\n178.71491475758674\n178.71491468198406\n178.714914747412\n178.71491471603323\n178.71491471920882\n178.71491471639246\n178.71491471937233\n178.71491469603868\n178.7149147144224\n178.7149148345128\n178.71491452221883\n178.71491476092464\n178.70968465737454\n178.70968465737428\n178.7096846051426\n178.70968468450852\n178.70968469577141\n178.70968462002833\n178.70968468550464\n178.70968465407188\n178.7096846572416\n178.7096846544244\n178.7096846574019\n178.70968463406592\n178.70968465244908\n178.70968477208766\n178.70968446013708\n178.70968469290716\n178.68991231640626\n178.
68991231640555\n178.68991195144574\n178.68991233716358\n178.6899123554202\n178.68991227910337\n178.68991234477963\n178.68991231314007\n178.68991231628675\n178.68991231346567\n178.68991231643398\n178.68991229308722\n178.68991231146907\n178.68991242929394\n178.68991211873433\n178.6899123302927\n178.38189339848245\n178.3818933984776\n178.38189037812023\n178.38189337200677\n178.3818934403968\n178.3818933553509\n178.38189342518794\n178.38189339556632\n178.3818933984181\n178.38189339560162\n178.38189339851195\n178.38189337518125\n178.38189339337325\n178.3818934911482\n178.38189323684202\n178.38189407850302\n178.0623885569701\n178.06238855696387\n178.06238467419558\n178.06238852834207\n178.06238859671714\n178.06238850329711\n178.06238857861229\n178.06238855414733\n178.0623885566929\n178.06238855407156\n178.0623885569994\n178.06238853388572\n178.06238855157065\n178.06238863547102\n178.0623884766011\n178.06238900521492\n177.81894071730318\n177.81894071729957\n177.8189385752982\n177.81894074209586\n177.81894074904676\n177.81894064813497\n177.81894073014246\n177.8189407142634\n177.8189407165079\n177.81894071429247\n177.81894071732887\n177.8189406947929\n177.81894071137853\n177.81894079339045\n177.81894075356394\n177.8189412516742\n177.7786730434328\n177.7786730434319\n177.7786726681253\n177.77867311217724\n177.77867306991433\n177.77867296651365\n177.77867305139935\n177.77867304015956\n177.77867304231359\n177.77867304034268\n177.77867304345602\n177.7786730212422\n177.7786730372531\n177.77867312426258\n177.77867313006672\n177.77867328399566\n177.77554416743845\n177.77554416743817\n177.77554421316836\n177.7755442471484\n177.77554419269813\n177.7755440891981\n177.77554417424616\n177.77554416410212\n177.77554416626043\n177.77554416433262\n177.77554416746122\n177.77554414529317\n177.77554416126512\n177.77554425051406\n177.77554425658465\n177.77554415964363\n177.77446253334807\n177.77446253334807\n177.7744627398206\n177.77446261671287\n177.77446255825294\n177.77446245499743\n177.7744625399014\n177.77446252998928\n177.7744625321621\n177.77446253023768\n177.77446253337072\n177.77446251121526\n177.77446252719682\n177.77446261755662\n177.77446262084314\n177.77446245347286\n177.77280684661807\n177.7728068466182\n177.77280715944048\n177.7728069316436\n177.77280687161326\n177.77280676897223\n177.77280685362817\n177.7728068432327\n177.77280684542905\n177.77280684350552\n177.77280684664078\n177.7728068244908\n177.77280684047966\n177.77280693155907\n177.7728069325092\n177.77280639155137\n177.76710083001691\n177.76710083001757\n177.76710144559888\n177.76710092134928\n177.7671008543625\n177.76710075239666\n177.76710083663227\n177.7671008265948\n177.76710082884017\n177.76710082689686\n177.76710083003948\n177.76710080792432\n177.7671008239199\n177.76710091693695\n177.76710091118682\n177.767100336465\n177.77500482319832\n177.7750048231995\n177.77500580457374\n177.7750049011071\n177.7750048537413\n177.77500476113508\n177.7750048423679\n177.7750048195617\n177.77500482211497\n177.77500482008693\n177.77500482322336\n177.77500480110402\n177.77500481704337\n177.77500491107568\n177.77500490162683\n177.77499933491728\n177.76282016676936\n177.76282016677024\n177.76282091828168\n177.7628202531449\n177.76282019341224\n177.76282009498394\n177.7628201780851\n177.76282016326806\n177.76282016562777\n177.76282016365244\n177.76282016679278\n177.762820144685\n177.76282016065173\n177.7628202540215\n177.7628202469874\n177.76281781603447\n177.748076028411\n177.74807602841233\n177.748077113486\n177.74807611838693\n177.7480760547144\n177.7480759
575675\n177.74807604005974\n177.74807602489315\n177.74807602734273\n177.74807602528742\n177.74807602843404\n177.74807600640256\n177.74807602230712\n177.74807611701263\n177.74807610565236\n177.74807410637226\n177.6690930799978\n177.66909308000098\n177.66909532003913\n177.66909317656422\n177.66909310583563\n177.66909301459063\n177.66909309450378\n177.6690930764515\n177.66909307936518\n177.66909307685594\n177.66909308001968\n177.6690930583492\n177.66909307383378\n177.66909317012306\n177.66909314864685\n177.66909278808097\n177.52004010700446\n177.52004010700978\n177.52004368167985\n177.52004020067608\n177.52004013333908\n177.52004005159768\n177.5200401276488\n177.52004010350302\n177.5200401072872\n177.52004010385147\n177.5200401070256\n177.5200400859654\n177.520040100555\n177.52004019235287\n177.52004016715892\n177.52004194428451\n177.19510166684216\n177.19510166684992\n177.1951067412715\n177.19510173389807\n177.19510169536613\n177.19510163074347\n177.19510170022522\n177.19510166363622\n177.19510166922151\n177.19510166370074\n177.19510166686402\n177.19510164703075\n177.1951016595508\n177.19510173226678\n177.19510171570386\n177.19510644179718\n176.64935519232\n176.6493551923284\n176.64936061389062\n176.64935520405407\n176.64935522259125\n176.64935517752605\n176.6493552394449\n176.64935518982708\n176.64935519780357\n176.64935518923545\n176.64935519234575\n176.64935517450374\n176.64935518345516\n176.64935521905576\n176.64935522089846\n176.6493619822159\n176.0896872525856\n176.08968725258978\n176.08969032795957\n176.0896872794947\n176.08968726369338\n176.08968723281853\n176.08968728562027\n176.089687250948\n"
],
[
"# vsini, r, b, a, u1, u2, obl, gamma, gammadot, gammadotdot, jitter_good, jitter_bad, q, factor, t0\nprint(res.x.tolist())",
"[19299.999257861025, 0.06078305799279333, -0.26842308404132836, 20.45127084401793, 0.9999999998010554, 0.0, 10.048742594672486, -15.075031151578681, 100.04159613003901, 1300.0007435582133, 1.0844716946805428, 1.0055472803019998, 0.6647339097213981, 0.6908688052159306, 0.16522744788036733]\n"
],
[
"test = res.x + 0.0\n#test[0] = 20000\n#test[4] = 1.0\n#test[5] = 0.0\nrmcurve(test)\nplot_rmcurve(test)",
"175.2370347430302\n"
],
[
"orbit = exo.orbits.KeplerianOrbit(period=P, a=a, t0=t0, b=0.4, ecc=e, omega=w, r_star=1.0) \n\nx, y, z = orbit.get_relative_position(tuse)\n",
"_____no_output_____"
],
[
"xp = x.eval()\nyp = y.eval()\nzp = z.eval()",
"_____no_output_____"
],
[
"(zp**2+yp**2+xp**2)**0.5",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"tuse = np.arange(-4, 4, 0.1)\n\nf = (tuse - t0)/P*2*np.pi\nI = np.arccos(b/a)\n",
"_____no_output_____"
],
[
"\n\nzpos = a*np.cos(f) \nypos = -a*np.sin(np.pi/2+f)*np.cos(I)\nxpos = a*np.sin(f)*np.sin(I)",
"_____no_output_____"
],
[
"zpos-zp",
"_____no_output_____"
],
[
"x = np.arange(-10, 10, 0.02)\nsigma1 = 3\nmu1 = 0\n\nsigma2 = 2\nmu2 = -7\ng1 = 0.8/np.sqrt(2*np.pi*sigma1**2) * np.exp(-(x-mu1)**2/(2*sigma1**2))\ng2 = 0.2/np.sqrt(2*np.pi*sigma2**2) * np.exp(-(x-mu2)**2/(2*sigma2**2))\n\nplt.plot(x, np.log(g1+g2))\n#plt.plot(x, g1)\n#plt.plot(x, g2)\n",
"_____no_output_____"
],
[
"var_good = (euse**2 + jitter_good**2)\nvar_bad = (euse**2 + jitter_bad**2)\ngooddata = -0.5*q*(np.sum((rv-vuse)**2/var_good + np.log(2*np.pi*var_good)))\nbaddata = -0.5*(1-q)*(np.sum((rv-vuse)**2/var_bad + np.log(2*np.pi*var_bad)))\nlnprob = gooddata + baddata\n \ngoodgauss = q / np.sqrt(2*np.pi*var_good) * np.exp(-(rv-vuse)**2/var_good)\nbadgauss = (1-q) / np.sqrt(2*np.pi*var_bad) * np.exp(-(rv-vuse)**2/var_bad)\n\ntotgauss = np.log(goodgauss + badgauss)\n\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ada954e0ae83c71071666e63737071c0b84119e
| 733 |
ipynb
|
Jupyter Notebook
|
1. python-course-udemy/Notas/Testando Jupyter.ipynb
|
karlscode/python-basics
|
90f215de323f907cb692369b87c34659ba49f1d2
|
[
"MIT"
] | null | null | null |
1. python-course-udemy/Notas/Testando Jupyter.ipynb
|
karlscode/python-basics
|
90f215de323f907cb692369b87c34659ba49f1d2
|
[
"MIT"
] | null | null | null |
1. python-course-udemy/Notas/Testando Jupyter.ipynb
|
karlscode/python-basics
|
90f215de323f907cb692369b87c34659ba49f1d2
|
[
"MIT"
] | null | null | null | 15.934783 | 36 | 0.482947 |
[
[
[
"2 + 3",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4ada9651551e7ce503e1b0c978250586c4fe7459
| 2,984 |
ipynb
|
Jupyter Notebook
|
docs/examples/legend_options.ipynb
|
ahuang11/HoloExt
|
01e4f0aaf3b8244c9e029056b54a0e3320957fad
|
[
"MIT"
] | 11 |
2018-02-14T16:48:21.000Z
|
2021-02-24T20:34:43.000Z
|
docs/examples/legend_options.ipynb
|
ahuang11/HoloExt
|
01e4f0aaf3b8244c9e029056b54a0e3320957fad
|
[
"MIT"
] | 18 |
2018-02-20T05:10:03.000Z
|
2019-07-26T13:36:28.000Z
|
docs/examples/legend_options.ipynb
|
ahuang11/HoloExt
|
01e4f0aaf3b8244c9e029056b54a0e3320957fad
|
[
"MIT"
] | null | null | null | 20.867133 | 76 | 0.519437 |
[
[
[
"### Legend Options",
"_____no_output_____"
]
],
[
[
"import warnings\nimport numpy as np\nimport holoviews as hv\nfrom holoext.xbokeh import Mod\n\nwarnings.filterwarnings('ignore') # bokeh deprecation warnings\nhv.extension('bokeh')",
"_____no_output_____"
],
[
"x = np.array([8, 4, 2, 1])\ny1 = np.array([2, 4, 5, 9])\ny2 = np.array([3, 4, 1, 4])\n\ncurve1 = hv.Curve((x, y1), label='blue')\ncurve2 = hv.Curve((x, y2), label='red')\ncurves = curve1 * curve2",
"_____no_output_____"
]
],
[
[
"### Change the orientation and location of the legend",
"_____no_output_____"
]
],
[
[
"Mod(\n curves,\n legend_orientation='vertical',\n legend_position='bottom_left', # autocorrects to valid keyword\n).apply(curves)",
"_____no_output_____"
]
],
[
[
"### Change the location of the legend using an alias",
"_____no_output_____"
]
],
[
[
"Mod(\n curves,\n legend_location='top center', # autocorrects to valid keyword\n).apply(curves)",
"_____no_output_____"
]
],
[
[
"### Change the background and border alpha (opacity)",
"_____no_output_____"
]
],
[
[
"Mod(\n curves,\n legend_background_fill_alpha=1,\n legend_border_line_alpha=1,\n).apply(curves)",
"_____no_output_____"
]
],
[
[
"### Change the labels and glypths spacing and width",
"_____no_output_____"
]
],
[
[
"Mod(\n curves,\n legend_label_standoff=20, # space between label and glyph\n legend_glyph_width=5, # width of the glypth\n legend_spacing=100, # space between each legend item\n).apply(curves)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adaaff468a88d2e082dc2a2f00d06178b34f18d
| 220,945 |
ipynb
|
Jupyter Notebook
|
DataInput/GigaScienceManuscriptFigures/ChemChasteGigaScienceManuscriptFigures.ipynb
|
OSS-Lab/ChemChaste
|
d32c36afa1cd870512fee3cba0753d5c6faf8109
|
[
"BSD-3-Clause"
] | null | null | null |
DataInput/GigaScienceManuscriptFigures/ChemChasteGigaScienceManuscriptFigures.ipynb
|
OSS-Lab/ChemChaste
|
d32c36afa1cd870512fee3cba0753d5c6faf8109
|
[
"BSD-3-Clause"
] | null | null | null |
DataInput/GigaScienceManuscriptFigures/ChemChasteGigaScienceManuscriptFigures.ipynb
|
OSS-Lab/ChemChaste
|
d32c36afa1cd870512fee3cba0753d5c6faf8109
|
[
"BSD-3-Clause"
] | null | null | null | 291.869221 | 76,228 | 0.923148 |
[
[
[
"Includes:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport math\nimport pandas as pd\nimport seaborn as sns\nimport scipy.integrate",
"_____no_output_____"
]
],
[
[
"Data and plots for Figure 2. Figure 1 is a cartoon while Figures 3-5 were produced directly in the ParaView visualisation software from the ChemChaste simulation output. This output is fully producible using the RunChemChaste.py control file provided. ",
"_____no_output_____"
],
[
"Fisher-KPP equation definition as defined in the Manuscript:",
"_____no_output_____"
]
],
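[
[
"For reference, a minimal statement of the equation (a sketch, assuming the standard non-dimensionalised form with unit diffusion and growth rate, which matches the parameter choices in the code below): the Fisher-KPP equation is\n\n$$\\frac{\\partial u}{\\partial t} = \\frac{\\partial^2 u}{\\partial x^2} + u(1-u).$$\n\nFor a travelling wave $u(x,t) = U(z)$ with $z = x - ct$ and speed parametrised as $c = a + 1/a \\geq 2$, the leading-order wave profile is\n\n$$U(z) \\approx \\frac{1}{1 + e^{z/c}},$$\n\nand `fisher_KPP(..., order=2)` below adds the $1/c^2$ correction to this profile.",
"_____no_output_____"
]
],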
[
[
"def fisher_KPP(z,c,order=1):\n if order ==1:\n U = 1/(1+np.exp(z/c)) # first order\n elif order == 2:\n U = 1/(1+np.exp(z/c)) + (1/pow(c,2))*(np.exp(z/c)/pow((1+np.exp(z/c)),2))*np.log( 4*np.exp(z/c)/pow((1+np.exp(z/c)),2) )# second order\n else:\n U=0*z\n return U\n\n\ndef integrand(x, t):\n\n xdrift = 50\n a = 1.0\n c=a+(1/a)\n tscale = 1.0\n \n x=x+xdrift\n x=1.0*x\n z=x-c*t*tscale\n \n return fisher_KPP(z,c,order=2)",
"_____no_output_____"
]
],
[
[
"Solving the Fisher-KPP equation:",
"_____no_output_____"
]
],
[
[
"times2 = np.arange(0,201,1)*3.0\ntimes1 = np.arange(0,61,1)*10.0\ntimes = np.arange(0,601,1)\nintegralValues = []\nintegralValues1 = []\nintegralValues2 = []\n\nfor i in range(0,len(times)):\n t=times[i]\n #print(t)\n I = scipy.integrate.quad(integrand, 15, 100, args=(t) )\n integralValues.append( I[0]/85)\n \nfor i in range(0,len(times1)):\n t=times[i]\n #print(t)\n I = scipy.integrate.quad(integrand, 15, 100, args=(t) )\n integralValues1.append( I[0]/85)\n \nfor i in range(0,len(times2)):\n t=times[i]\n #print(t)\n I = scipy.integrate.quad(integrand, 15, 100, args=(t) )\n integralValues2.append( I[0]/85)\n \ngradInt= np.gradient(np.array(integralValues))\n \nplt.plot(np.array(times),np.array(integralValues),label=\"Analytic\",linestyle='solid')",
"_____no_output_____"
]
],
[
[
"Load and plot ParaView line output for Figure 2 a):",
"_____no_output_____"
]
],
[
[
"dfTrace2 = pd.read_csv(\"Fisher_Slice/160.csv\");\ndfTrace4 = pd.read_csv(\"Fisher_Slice/240.csv\");\ndfTrace6 = pd.read_csv(\"Fisher_Slice/320.csv\");\ndfTrace8 = pd.read_csv(\"Fisher_Slice/400.csv\");",
"_____no_output_____"
]
],
[
[
"For Figure 2 a) plot the line output from ParaView against solution for Fisher-KPP",
"_____no_output_____"
]
],
[
[
"A=1\nxsteps =np.arange(0,105,step=10)\nxdrift = 8\n\ntvec =[1,2,3,4]\n\n\ncolorVec=[\"tab:orange\",\"tab:green\",\"tab:blue\",\"tab:red\",\"tab:purple\",\"tab:brown\",\"tab:cyan\"]\n\nplt.plot(dfTrace2['arc_length'],dfTrace2['PDE variable 0'],label=\"t=160\",color=colorVec[1]) \nplt.plot(dfTrace4['arc_length'],dfTrace4['PDE variable 0'],label=\"t=240\",color=colorVec[2]) \nplt.plot(dfTrace6['arc_length'],dfTrace6['PDE variable 0'],label=\"t=320\",color=colorVec[3]) \nplt.plot(dfTrace8['arc_length'],dfTrace8['PDE variable 0'],label=\"t=400\",color=colorVec[4]) \n\n\nx = np.arange(10, 80, 0.01) \na = 1\nc=a+(1/a)\ntscale = 7.55\n\nfor tint in tvec:\n t=tint*tscale\n z=x-c*t\n \n plt.plot(x+xdrift, fisher_KPP(z,c,order=1),linestyle='dashed',color=colorVec[tint])\n plt.plot(x+xdrift, fisher_KPP(z,c,order=2),linestyle='dotted',color=colorVec[tint])\n\nplt.xticks(xsteps);\nplt.xlabel(\"X\");\nplt.ylabel(\"U(X)\");\nplt.legend(loc=\"right\");\nplt.xlabel(\"Position\");\nplt.xlim([10, 80])\nplt.savefig('wavePlot.png')\nplt.grid();",
"_____no_output_____"
]
],
[
[
"Full data processing from ParaView output for Figure 2 b)",
"_____no_output_____"
]
],
[
[
"dx = [1,0.8,0.6,0.4,0.2,0.1,0.08,0.06,0.04,0.02,0.01,0.008,0.006,0.004,0.002,0.001,0.0008,0.0006,0.0004,0.0002,0.0001]\ndt = [0.1,0.08,0.06,0.04,0.02,0.01,0.008,0.006,0.004,0.002,0.001,0.0008,0.0006,0.0004,0.0002,0.0001]\n\nfilename_prefix_1 = \"DataOut/\"\nfilename_prefix_2 = \"dx_\"\nfilename_mid = \"_dt_\"\nfilename_suffix = \".csv\"\n\nfiles_exist = []\ndata_names = []\nfiles_names = []\nfiles_csv =[]\nl2 = []\ngradData = []\n\ndt2=[0.1]\ndx2 = [0.01]\nfor x in dx:\n for t in dt: \n filename = filename_prefix_1+filename_prefix_2+str(x)+filename_mid+str(t)\n dataname = filename_prefix_2+str(x)+filename_mid+str(t)\n filename = filename.replace('.', '')\n filename = filename+filename_suffix\n try:\n df = pd.read_csv(filename)\n files_exist.append(True)\n data_names.append(dataname)\n files_names.append(filename)\n files_csv.append(df)\n l2.append(np.sum(np.power((np.array(integralValues)-df['avg(PDE variable 0)']),2)))\n gradData.append(np.gradient(np.array(df['avg(PDE variable 0)'])))\n print(filename)\n except:\n files_exist.append(False)\n files_names.append(\"\")\n data_names.append(\"\")\n files_csv.append(\"\")\n l2.append(1)\n gradData.append(0)",
"DataOut/dx_1_dt_01.csv\nDataOut/dx_1_dt_001.csv\nDataOut/dx_1_dt_00001.csv\nDataOut/dx_08_dt_01.csv\nDataOut/dx_08_dt_008.csv\nDataOut/dx_08_dt_006.csv\nDataOut/dx_08_dt_004.csv\nDataOut/dx_08_dt_002.csv\nDataOut/dx_08_dt_001.csv\nDataOut/dx_08_dt_0008.csv\nDataOut/dx_08_dt_0006.csv\nDataOut/dx_08_dt_0004.csv\nDataOut/dx_08_dt_0002.csv\nDataOut/dx_08_dt_0001.csv\nDataOut/dx_08_dt_00008.csv\nDataOut/dx_08_dt_00006.csv\nDataOut/dx_08_dt_00004.csv\nDataOut/dx_08_dt_00002.csv\nDataOut/dx_08_dt_00001.csv\nDataOut/dx_06_dt_01.csv\nDataOut/dx_06_dt_008.csv\nDataOut/dx_06_dt_006.csv\nDataOut/dx_06_dt_004.csv\nDataOut/dx_06_dt_002.csv\nDataOut/dx_06_dt_001.csv\nDataOut/dx_06_dt_0008.csv\nDataOut/dx_06_dt_0006.csv\nDataOut/dx_06_dt_0004.csv\nDataOut/dx_06_dt_0002.csv\nDataOut/dx_06_dt_0001.csv\nDataOut/dx_06_dt_00008.csv\nDataOut/dx_06_dt_00006.csv\nDataOut/dx_06_dt_00004.csv\nDataOut/dx_06_dt_00002.csv\nDataOut/dx_06_dt_00001.csv\nDataOut/dx_04_dt_01.csv\nDataOut/dx_04_dt_008.csv\nDataOut/dx_04_dt_006.csv\nDataOut/dx_04_dt_004.csv\nDataOut/dx_04_dt_002.csv\nDataOut/dx_04_dt_001.csv\nDataOut/dx_04_dt_0008.csv\nDataOut/dx_04_dt_0006.csv\nDataOut/dx_04_dt_0004.csv\nDataOut/dx_04_dt_0002.csv\nDataOut/dx_04_dt_0001.csv\nDataOut/dx_04_dt_00008.csv\nDataOut/dx_04_dt_00006.csv\nDataOut/dx_04_dt_00004.csv\nDataOut/dx_04_dt_00002.csv\nDataOut/dx_04_dt_00001.csv\nDataOut/dx_02_dt_01.csv\nDataOut/dx_02_dt_008.csv\nDataOut/dx_02_dt_006.csv\nDataOut/dx_02_dt_004.csv\nDataOut/dx_02_dt_002.csv\nDataOut/dx_02_dt_001.csv\nDataOut/dx_02_dt_0008.csv\nDataOut/dx_02_dt_0006.csv\nDataOut/dx_02_dt_0004.csv\nDataOut/dx_02_dt_0002.csv\nDataOut/dx_02_dt_0001.csv\nDataOut/dx_02_dt_00008.csv\nDataOut/dx_02_dt_00006.csv\nDataOut/dx_02_dt_00004.csv\nDataOut/dx_02_dt_00002.csv\nDataOut/dx_02_dt_00001.csv\nDataOut/dx_01_dt_01.csv\nDataOut/dx_01_dt_008.csv\nDataOut/dx_01_dt_006.csv\nDataOut/dx_01_dt_004.csv\nDataOut/dx_01_dt_002.csv\nDataOut/dx_01_dt_001.csv\nDataOut/dx_01_dt_0008.csv\nDataOut/dx_01_dt_0006.csv\nDataOut/dx_01_dt_0004.csv\nDataOut/dx_01_dt_0002.csv\nDataOut/dx_01_dt_0001.csv\nDataOut/dx_01_dt_00008.csv\nDataOut/dx_01_dt_00006.csv\nDataOut/dx_01_dt_00004.csv\nDataOut/dx_01_dt_00002.csv\nDataOut/dx_01_dt_00001.csv\nDataOut/dx_008_dt_01.csv\nDataOut/dx_008_dt_008.csv\nDataOut/dx_008_dt_006.csv\nDataOut/dx_008_dt_004.csv\nDataOut/dx_008_dt_002.csv\nDataOut/dx_008_dt_001.csv\nDataOut/dx_008_dt_0008.csv\nDataOut/dx_008_dt_0006.csv\nDataOut/dx_008_dt_0004.csv\nDataOut/dx_008_dt_0002.csv\nDataOut/dx_008_dt_0001.csv\nDataOut/dx_008_dt_00008.csv\nDataOut/dx_008_dt_00006.csv\nDataOut/dx_008_dt_00004.csv\nDataOut/dx_008_dt_00002.csv\nDataOut/dx_008_dt_00001.csv\nDataOut/dx_006_dt_01.csv\nDataOut/dx_006_dt_008.csv\nDataOut/dx_006_dt_006.csv\nDataOut/dx_006_dt_004.csv\nDataOut/dx_006_dt_002.csv\nDataOut/dx_006_dt_001.csv\nDataOut/dx_006_dt_0008.csv\nDataOut/dx_006_dt_0006.csv\nDataOut/dx_006_dt_0004.csv\nDataOut/dx_006_dt_0002.csv\nDataOut/dx_006_dt_0001.csv\nDataOut/dx_006_dt_00008.csv\nDataOut/dx_006_dt_00006.csv\nDataOut/dx_006_dt_00004.csv\nDataOut/dx_006_dt_00002.csv\nDataOut/dx_006_dt_00001.csv\nDataOut/dx_004_dt_01.csv\nDataOut/dx_004_dt_008.csv\nDataOut/dx_004_dt_006.csv\nDataOut/dx_004_dt_004.csv\nDataOut/dx_004_dt_002.csv\nDataOut/dx_004_dt_001.csv\nDataOut/dx_004_dt_0008.csv\nDataOut/dx_004_dt_0006.csv\nDataOut/dx_004_dt_0004.csv\nDataOut/dx_004_dt_0002.csv\nDataOut/dx_004_dt_0001.csv\nDataOut/dx_004_dt_00008.csv\nDataOut/dx_004_dt_00006.csv\nDataOut/dx_004_dt_00004.csv\nDataOut/dx_004_dt_00002.csv\nDataOut/dx_004_dt_0
0001.csv\nDataOut/dx_002_dt_01.csv\nDataOut/dx_002_dt_008.csv\nDataOut/dx_002_dt_006.csv\nDataOut/dx_002_dt_004.csv\nDataOut/dx_002_dt_002.csv\nDataOut/dx_002_dt_001.csv\nDataOut/dx_002_dt_0008.csv\nDataOut/dx_002_dt_0006.csv\nDataOut/dx_002_dt_0004.csv\nDataOut/dx_002_dt_0002.csv\nDataOut/dx_002_dt_0001.csv\nDataOut/dx_002_dt_00008.csv\nDataOut/dx_002_dt_00006.csv\nDataOut/dx_002_dt_00004.csv\nDataOut/dx_002_dt_00002.csv\nDataOut/dx_002_dt_00001.csv\nDataOut/dx_001_dt_01.csv\nDataOut/dx_001_dt_008.csv\nDataOut/dx_001_dt_006.csv\nDataOut/dx_001_dt_004.csv\nDataOut/dx_001_dt_002.csv\nDataOut/dx_001_dt_001.csv\nDataOut/dx_001_dt_0008.csv\nDataOut/dx_001_dt_0006.csv\nDataOut/dx_001_dt_0004.csv\nDataOut/dx_001_dt_0002.csv\nDataOut/dx_001_dt_0001.csv\nDataOut/dx_001_dt_00008.csv\nDataOut/dx_001_dt_00006.csv\nDataOut/dx_001_dt_00004.csv\nDataOut/dx_001_dt_00002.csv\nDataOut/dx_001_dt_00001.csv\nDataOut/dx_0008_dt_01.csv\nDataOut/dx_0008_dt_008.csv\nDataOut/dx_0008_dt_006.csv\nDataOut/dx_0008_dt_004.csv\nDataOut/dx_0008_dt_002.csv\nDataOut/dx_0008_dt_001.csv\nDataOut/dx_0008_dt_0008.csv\nDataOut/dx_0008_dt_0006.csv\nDataOut/dx_0008_dt_0004.csv\nDataOut/dx_0008_dt_0002.csv\nDataOut/dx_0008_dt_0001.csv\nDataOut/dx_0008_dt_00008.csv\nDataOut/dx_0008_dt_00006.csv\nDataOut/dx_0008_dt_00004.csv\nDataOut/dx_0008_dt_00002.csv\nDataOut/dx_0008_dt_00001.csv\nDataOut/dx_0006_dt_01.csv\nDataOut/dx_0006_dt_008.csv\nDataOut/dx_0006_dt_006.csv\nDataOut/dx_0006_dt_004.csv\nDataOut/dx_0006_dt_002.csv\nDataOut/dx_0006_dt_001.csv\nDataOut/dx_0006_dt_0008.csv\nDataOut/dx_0006_dt_0006.csv\nDataOut/dx_0006_dt_0004.csv\nDataOut/dx_0006_dt_0002.csv\nDataOut/dx_0006_dt_0001.csv\nDataOut/dx_0006_dt_00008.csv\nDataOut/dx_0006_dt_00006.csv\nDataOut/dx_0006_dt_00004.csv\nDataOut/dx_0006_dt_00002.csv\nDataOut/dx_0006_dt_00001.csv\nDataOut/dx_0004_dt_002.csv\nDataOut/dx_0004_dt_001.csv\nDataOut/dx_0004_dt_0008.csv\nDataOut/dx_0004_dt_0006.csv\nDataOut/dx_0004_dt_0004.csv\nDataOut/dx_0004_dt_0002.csv\nDataOut/dx_0004_dt_0001.csv\nDataOut/dx_0004_dt_00008.csv\nDataOut/dx_0004_dt_00006.csv\nDataOut/dx_0004_dt_00004.csv\nDataOut/dx_0004_dt_00002.csv\nDataOut/dx_0004_dt_00001.csv\nDataOut/dx_0002_dt_001.csv\nDataOut/dx_0002_dt_0008.csv\nDataOut/dx_0002_dt_0006.csv\nDataOut/dx_0002_dt_0004.csv\nDataOut/dx_0002_dt_0002.csv\nDataOut/dx_0002_dt_0001.csv\nDataOut/dx_0002_dt_00008.csv\nDataOut/dx_0002_dt_00006.csv\nDataOut/dx_0002_dt_00004.csv\nDataOut/dx_0002_dt_00002.csv\nDataOut/dx_0002_dt_00001.csv\nDataOut/dx_0001_dt_0002.csv\nDataOut/dx_0001_dt_0001.csv\nDataOut/dx_0001_dt_00008.csv\nDataOut/dx_0001_dt_00006.csv\nDataOut/dx_0001_dt_00004.csv\nDataOut/dx_0001_dt_00002.csv\nDataOut/dx_0001_dt_00001.csv\nDataOut/dx_00008_dt_0002.csv\nDataOut/dx_00008_dt_0001.csv\nDataOut/dx_00008_dt_00008.csv\nDataOut/dx_00008_dt_00006.csv\nDataOut/dx_00008_dt_00004.csv\nDataOut/dx_00008_dt_00002.csv\nDataOut/dx_00008_dt_00001.csv\nDataOut/dx_00006_dt_0001.csv\nDataOut/dx_00006_dt_00008.csv\nDataOut/dx_00006_dt_00006.csv\nDataOut/dx_00006_dt_00004.csv\nDataOut/dx_00006_dt_00002.csv\nDataOut/dx_00006_dt_00001.csv\nDataOut/dx_00004_dt_00004.csv\nDataOut/dx_00004_dt_00002.csv\nDataOut/dx_00004_dt_00001.csv\nDataOut/dx_00002_dt_00001.csv\n"
]
],
[
[
"For Figure 2 b)",
"_____no_output_____"
]
],
[
[
"\nthreshold = 0.4\n\nX = list(set(dx))\nT = list(set(dt))\nX.sort()\nT.sort()\nM = np.ones((len(X),len(T)))*threshold\n\nfor i in range(0,len(X)):\n for j in range(0,len(T)):\n \n dataname = filename_prefix_2+str(X[i])+filename_mid+str(T[j])\n for k in range(0,len(data_names)):\n if data_names[k] == dataname:\n if l2[k]>threshold:\n M[i,j]=threshold\n else:\n M[i,j]=l2[k]\n \nax = sns.heatmap(M, linewidth=0,yticklabels=X,xticklabels=T,cmap=\"gist_gray_r\")\nax.invert_xaxis()\nax.set(xlabel=\"Spatial step size (dx)\", ylabel = \"Temporal step size (dt)\")\nplt.savefig('heatmap.png')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Processing for the data subset where $dt = 0.1$:",
"_____no_output_____"
]
],
[
[
"dx = [1,0.8,0.6,0.4,0.2,0.1,0.08,0.06,0.04,0.02,0.01,0.008,0.006,0.004,0.002,0.001,0.0008,0.0006,0.0004,0.0002,0.0001]\ndt2=[0.1]\n\nfilename_prefix_1 = \"DataOut/\"\nfilename_prefix_2 = \"dx_\"\nfilename_mid = \"_dt_\"\nfilename_suffix = \".csv\"\n\nfiles_exist = []\ndata_names = []\nfiles_names = []\nfiles_csv =[]\nl2 = []\ngradData = []\n\nfor x in dx:\n for t in dt2: \n filename = filename_prefix_1+filename_prefix_2+str(x)+filename_mid+str(t)\n dataname = filename_prefix_2+str(x)+filename_mid+str(t)\n filename = filename.replace('.', '')\n filename = filename+filename_suffix\n try:\n df = pd.read_csv(filename)\n files_exist.append(True)\n data_names.append(dataname)\n files_names.append(filename)\n files_csv.append(df)\n l2.append(np.sum(np.power((np.array(integralValues)-df['avg(PDE variable 0)']),2)))\n gradData.append(np.gradient(np.array(df['avg(PDE variable 0)'])))\n print(filename)\n except:\n files_exist.append(False)\n files_names.append(\"\")\n data_names.append(\"\")\n files_csv.append(\"\")\n l2.append(1)\n gradData.append(0)",
"DataOut/dx_1_dt_01.csv\nDataOut/dx_08_dt_01.csv\nDataOut/dx_06_dt_01.csv\nDataOut/dx_04_dt_01.csv\nDataOut/dx_02_dt_01.csv\nDataOut/dx_01_dt_01.csv\nDataOut/dx_008_dt_01.csv\nDataOut/dx_006_dt_01.csv\nDataOut/dx_004_dt_01.csv\nDataOut/dx_002_dt_01.csv\nDataOut/dx_001_dt_01.csv\nDataOut/dx_0008_dt_01.csv\nDataOut/dx_0006_dt_01.csv\n"
]
],
[
[
"For Figure 2 c)",
"_____no_output_____"
]
],
[
[
"plt.plot(np.array(times),np.array(integralValues),label=\"Analytic\",linestyle='solid')\nfor i in range(len(files_exist)):\n if files_exist[i] == True:\n #plt.plot(df[i]['Time'],df[i]['avg(PDE variable 0)'],label=files_names[i],linestyle='dotted')\n df = pd.read_csv(files_names[i])\n plt.plot(df['Time'],df['avg(PDE variable 0)'],label=data_names[i],linestyle='dotted')\n plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.xlabel(\"Time\")\nplt.ylabel(\"Domain average U\")\nplt.xlim([0,600])\nplt.ylim([0,1.05])\nplt.grid()\nplt.savefig('averagePlot.png')\nplt.show()",
"_____no_output_____"
]
],
[
[
"For Figure 2 d)",
"_____no_output_____"
]
],
[
[
"plt.plot(np.array(times),np.array(gradInt),label=\"Analytic\",linestyle='solid')\nfor i in range(len(files_exist)):\n if files_exist[i] == True:\n #plt.plot(df[i]['Time'],df[i]['avg(PDE variable 0)'],label=files_names[i],linestyle='dotted')\n df = pd.read_csv(files_names[i])\n try:\n plt.plot(df['Time'],gradData[i],label=data_names[i],linestyle='dotted')\n plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n except:\n print(\"skip\")\n\nplt.xlim([0,100])\nplt.ylim([0,0.026])\nplt.xlabel(\"Time\")\nplt.ylabel(\"Line gradient\")\nplt.grid() \nplt.savefig('gradientPlot.png')\nplt.show()\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adac211739ef272bea7cd51de892b0a22d86fbc
| 9,901 |
ipynb
|
Jupyter Notebook
|
Legacy Machine Learning Methods/SVM - Hyperparameter Tuning.ipynb
|
josh-marsh/thursday
|
0acb0511b8007091a409e674bb42c2c0fe8f8a82
|
[
"BSD-3-Clause"
] | null | null | null |
Legacy Machine Learning Methods/SVM - Hyperparameter Tuning.ipynb
|
josh-marsh/thursday
|
0acb0511b8007091a409e674bb42c2c0fe8f8a82
|
[
"BSD-3-Clause"
] | 34 |
2018-01-31T05:22:42.000Z
|
2018-04-26T02:14:43.000Z
|
Legacy Machine Learning Methods/SVM - Hyperparameter Tuning.ipynb
|
josh-marsh/thursday
|
0acb0511b8007091a409e674bb42c2c0fe8f8a82
|
[
"BSD-3-Clause"
] | 1 |
2018-04-03T00:26:32.000Z
|
2018-04-03T00:26:32.000Z
| 24.031553 | 277 | 0.536208 |
[
[
[
"In this notebook, we shall test the centered images on all major machine learning methods that predate neural networks. We do this in order to establish a baseline of performance for any later classifer that is developed.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy import *\nimport os\nimport h5py\nfrom keras.utils import np_utils\nimport matplotlib.pyplot as plt\nimport pickle \nfrom skimage.transform import rescale\nfrom keras.models import model_from_json\nfrom sklearn import svm\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.model_selection import RandomizedSearchCV",
"_____no_output_____"
],
[
"file = open(\"train_x.dat\",'rb')\ntrain_x = pickle.load(file)\nfile.close()\n\nfile = open(\"train_y.dat\",'rb')\ntrain_y = pickle.load(file)\nfile.close()\n\nfile = open(\"test_x.dat\",'rb')\ntest_x = pickle.load(file)\nfile.close()\n\nfile = open(\"test_y.dat\",'rb')\ntest_y = pickle.load(file)\nfile.close()\n\nfile = open(\"raw_train_x.dat\",'rb')\nraw_train_x = pickle.load(file)\nfile.close()\n\nfile = open(\"raw_test_x.dat\",'rb')\nraw_test_x = pickle.load(file)\nfile.close()",
"_____no_output_____"
],
[
"##### HOG Images #####",
"_____no_output_____"
],
[
"# Defining hyperparameter range for Random Forrest Tree\n\n# Number of trees in random forest\nn_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]\n# Number of features to consider at every split\nmax_features = ['auto', 'sqrt']\n# Maximum number of levels in tree\nmax_depth = [int(x) for x in np.linspace(10, 110, num = 11)]\nmax_depth.append(None)\n# Minimum number of samples required to split a node\nmin_samples_split = [2, 5, 10]\n# Minimum number of samples required at each leaf node\nmin_samples_leaf = [1, 2, 4]\n# Method of selecting samples for training each tree\nbootstrap = [True, False]\n\n# Create the random grid\nrandom_grid = {'n_estimators': n_estimators,\n 'max_features': max_features,\n 'max_depth': max_depth,\n 'min_samples_split': min_samples_split,\n 'min_samples_leaf': min_samples_leaf,\n 'bootstrap': bootstrap}\n\nprint(random_grid)",
"{'n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], 'max_features': ['auto', 'sqrt'], 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4], 'bootstrap': [True, False]}\n"
],
[
"# Random Forrest Tree\n# Use the random grid to search for best hyperparameters\n# First create the base model to tune\nrf = RandomForestClassifier()\n\n# Random search of parameters, using 3 fold cross validation, \n# search across 100 different combinations, and use all available cores\nrf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, \n n_iter = 100, cv = 3, verbose=2, random_state=42, \n n_jobs = -1)\n\n# Fit the random search model\nrf_random.fit(train_x, train_y)",
"Fitting 3 folds for each of 100 candidates, totalling 300 fits\n"
],
[
"score",
"_____no_output_____"
],
[
"rf_random.best_params_",
"_____no_output_____"
],
[
"def evaluate(model, test_features, test_labels):\n predictions = model.predict(test_features)\n errors = abs(predictions - test_labels)\n mape = 100 * np.mean(errors / test_labels)\n accuracy = 100 - mape\n print('Model Performance')\n print('Average Error: {:0.4f} degrees.'.format(np.mean(errors)))\n print('Accuracy = {:0.2f}%.'.format(accuracy))\n \n return accuracy\nbase_model = RandomForestRegressor(n_estimators = 10, random_state = 42)\nbase_model.fit(train_x, train_y)\nbase_accuracy = evaluate(base_model, test_x, train_y)\n\n\nbest_random = rf_random.best_estimator_\nrandom_accuracy = evaluate(best_random, test_features, test_labels)",
"_____no_output_____"
],
[
"# Naïve Bayes\ngnb = GaussianNB()\ngnb = gnb.fit(train_x, train_y)\nscore2 = gnb.score(test_x, test_y)",
"_____no_output_____"
],
[
"score2",
"_____no_output_____"
],
[
"# Support Vector Machine\nC = 0.1 # SVM regularization parameter",
"_____no_output_____"
],
[
"# LinearSVC (linear kernel)\nlin_svc = svm.LinearSVC(C=C).fit(train_x, train_y)\nscore4 = lin_svc.score(test_x, test_y)",
"_____no_output_____"
],
[
"score4",
"_____no_output_____"
],
[
"#### Raw Images #####",
"_____no_output_____"
],
[
"raw_train_x = raw_train_x.reshape(raw_train_x.shape[0], -1)\nraw_test_x = raw_test_x.reshape(raw_test_x.shape[0], -1)",
"_____no_output_____"
],
[
"# Random Forrest Tree\nclf_raw = RandomForestClassifier(n_estimators=100)\nclf_raw = clf_raw.fit(raw_train_x, train_y)\nscore5 = clf_raw.score(raw_test_x, test_y)",
"_____no_output_____"
],
[
"score5",
"_____no_output_____"
],
[
"# Naïve Bayes\ngnb_raw = GaussianNB()\ngnb_raw = gnb_raw.fit(raw_train_x, train_y)\nscore6 = gnb_raw.score(raw_test_x, test_y)",
"_____no_output_____"
],
[
"score6",
"_____no_output_____"
],
[
"# LinearSVC (linear kernel)\nlin_svc_raw = svm.LinearSVC(C=C).fit(raw_train_x, train_y)\nscore7 = lin_svc_raw.score(raw_test_x, test_y)",
"_____no_output_____"
],
[
"score7",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adacba1fbbad71eab4664f96cbbe5006076d0b3
| 22,344 |
ipynb
|
Jupyter Notebook
|
Untitled.ipynb
|
lowerasdf/CS564_PP_3
|
e6a345417f535b1dce430935001f848264d37cb5
|
[
"Apache-2.0"
] | null | null | null |
Untitled.ipynb
|
lowerasdf/CS564_PP_3
|
e6a345417f535b1dce430935001f848264d37cb5
|
[
"Apache-2.0"
] | null | null | null |
Untitled.ipynb
|
lowerasdf/CS564_PP_3
|
e6a345417f535b1dce430935001f848264d37cb5
|
[
"Apache-2.0"
] | null | null | null | 240.258065 | 1,602 | 0.710213 |
[
[
[
"%load_ext sql\n%sql sqlite:///./relA",
"_____no_output_____"
],
[
"%%sql\nselect name from sqlite_master where type = 'table';",
" * sqlite:///./relA\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
4adad0c458fbe1af8988a6e329cd98d1c493a851
| 90,695 |
ipynb
|
Jupyter Notebook
|
notebook/Word_Cloud.ipynb
|
aceking007/Word-Cloud
|
76799e2f2b4009b5c5e64340164f2e5efeefdb96
|
[
"MIT"
] | null | null | null |
notebook/Word_Cloud.ipynb
|
aceking007/Word-Cloud
|
76799e2f2b4009b5c5e64340164f2e5efeefdb96
|
[
"MIT"
] | null | null | null |
notebook/Word_Cloud.ipynb
|
aceking007/Word-Cloud
|
76799e2f2b4009b5c5e64340164f2e5efeefdb96
|
[
"MIT"
] | null | null | null | 347.490421 | 80,432 | 0.920139 |
[
[
[
"# Generating a Word Cloud",
"_____no_output_____"
],
[
"For this project, we generate a \"word cloud\" from a given text file. The script will process the text (should be \"utf-8\" encoded), remove punctuation, ignore words that do not contain english alphabets, ignore uninteresting or irrelevant words, and count the word frequencies. It then uses the `wordcloud` module to generate the image from the word frequencies.",
"_____no_output_____"
],
[
"The input text needs to be a file that contains text only. For the text itself, you can copy and paste the contents of a website you like. Or you can use a site like [Project Gutenberg](https://www.gutenberg.org/) to find books that are available online. You could see what word clouds you can get from famous books, like a Shakespeare play or a novel by Jane Austen. Save this as a .txt file somewhere on your computer.\n<br><br>\nYou will need to upload your input file here so that your script will be able to process it. To do the upload, you will need an uploader widget. Run the following cell to perform all the installs and imports for your word cloud script and uploader widget. It may take a minute for all of this to run and there will be a lot of output messages. But, be patient. Once you get the following final line of output, the code is done executing. Then you can continue on with the rest of the instructions for this notebook.\n<br><br>\n**Enabling notebook extension fileupload/extension...**\n<br>\n**- Validating: <font color =green>OK</font>**\n<br><br>\n*Side Note* - Uncomment the lines beginning with `pip install` to make the code work properly. You can alternatively run the `pip install requirements.txt` command as mentioned in the README.md file accompanying the project.",
"_____no_output_____"
]
],
[
[
"# Here are all the installs and imports you will need for your word cloud script and uploader widget\n\n# Requirements - already installed in the virtual environment\n# !pip install wordcloud\n# !pip install fileupload\n# !pip install ipywidgets\n!jupyter nbextension install --py --user fileupload\n!jupyter nbextension enable --py fileupload\n\nimport wordcloud\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom IPython.display import display\nimport fileupload\nimport io\nimport sys",
"Installing c:\\users\\arpit\\desktop\\it automation\\word cloud\\venv\\lib\\site-packages\\fileupload\\static -> fileupload\nUp to date: C:\\Users\\ARPIT\\AppData\\Roaming\\jupyter\\nbextensions\\fileupload\\extension.js\nUp to date: C:\\Users\\ARPIT\\AppData\\Roaming\\jupyter\\nbextensions\\fileupload\\widget.js\nUp to date: C:\\Users\\ARPIT\\AppData\\Roaming\\jupyter\\nbextensions\\fileupload\\fileupload\\widget.js\n- Validating: ok\n\n To initialize this nbextension in the browser every time the notebook (or other app) loads:\n \n jupyter nbextension enable fileupload --user --py\n \nEnabling notebook extension fileupload/extension...\n - Validating: ok\nMatplotlib is building the font cache; this may take a moment.\n"
]
],
[
[
"Whew! That was a lot. All of the installs and imports for your word cloud script and uploader widget have been completed. \n<br><br>\n**IMPORTANT!** If this was your first time running the above cell containing the installs and imports, you will need save this notebook now. Then under the File menu above, select Close and Halt. When the notebook has completely shut down, reopen it. This is the only way the necessary changes will take affect.\n<br><br>\nTo upload your text file, run the following cell that contains all the code for a custom uploader widget. Once you run this cell, a \"Browse\" button should appear below it. Click this button and navigate the window to locate your saved text file.",
"_____no_output_____"
]
],
[
[
"# This is the uploader widget\n\ndef _upload():\n\n _upload_widget = fileupload.FileUploadWidget()\n\n def _cb(change):\n global file_contents\n decoded = io.StringIO(change['owner'].data.decode('utf-8'))\n filename = change['owner'].filename\n print('Uploaded `{}` ({:.2f} kB)'.format(\n filename, len(decoded.read()) / 2 **10))\n file_contents = decoded.getvalue()\n\n _upload_widget.observe(_cb, names='data')\n display(_upload_widget)\n\n_upload()",
"_____no_output_____"
]
],
[
[
"The function below does the text processing as described previously.<br>\nIt removes punctuation, non-alphabetic characters, removes pre-defined uninteresting words, and returns a dictionary containing the word frequencies in the text.\n<br><br>\nFeel free to tweak the `uninteresting_words` list and include any words that you don't want to see in the final word cloud. Some standard frequently occuring uninteresting words are already added in the list.",
"_____no_output_____"
]
],
[
[
"def calculate_frequencies(file_contents):\n # Here is a list of punctuations and uninteresting words you can use to process your text\n punctuations = '''!()-[]{};:'\"\\,<>./?@#$%^&*_~'''\n uninteresting_words = [\"the\", \"a\", \"to\", \"if\", \"is\", \"it\", \"of\", \"and\", \"or\", \"an\", \"as\", \"i\", \"me\", \"my\", \\\n \"we\", \"our\", \"ours\", \"you\", \"your\", \"yours\", \"he\", \"she\", \"him\", \"his\", \"her\", \"hers\", \"its\", \"they\", \"them\", \\\n \"their\", \"what\", \"which\", \"who\", \"whom\", \"this\", \"that\", \"am\", \"are\", \"was\", \"were\", \"be\", \"been\", \"being\", \\\n \"have\", \"has\", \"had\", \"do\", \"does\", \"did\", \"but\", \"at\", \"by\", \"with\", \"from\", \"here\", \"when\", \"where\", \"how\", \\\n \"all\", \"any\", \"both\", \"each\", \"few\", \"more\", \"some\", \"such\", \"no\", \"nor\", \"too\", \"very\", \"can\", \"will\", \"just\", \\\n \"in\", \"on\", \"one\", \"not\", \"he\", \"she\", \"two\", \"three\", \"four\", \"five\", \"six\", \"seven\", \"eight\", \"nine\", \"ten\"]\n \n mod_file = \"\"\n for ind in range(len(file_contents)):\n if file_contents[ind] not in punctuations:\n mod_file += file_contents[ind].lower()\n \n frequencies = {}\n init_list = mod_file.split()\n word_list = []\n for word in init_list:\n if word not in uninteresting_words:\n word_list.append(word)\n \n for word in word_list:\n if word in frequencies:\n frequencies[word] += 1\n else:\n frequencies[word] = 1\n #wordcloud\n cloud = wordcloud.WordCloud()\n cloud.generate_from_frequencies(frequencies)\n return cloud.to_array()",
"_____no_output_____"
]
],
[
[
"Run the cell below to generate the final word cloud.\n<br><br>\nFeel free to download and share the word clouds that you generate!",
"_____no_output_____"
]
],
[
[
"# Display your wordcloud image\nmyimage = calculate_frequencies(file_contents)\nplt.imshow(myimage, interpolation = 'nearest')\nplt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"For the sample, I used text from an article on [\"The arms race between bats and moths\"](https://www.bbc.com/news/science-environment-11010458) by BBC. The generated word cloud does indeed match the general \"feel\" of the article. Have a read to confirm!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adadb69c5463d75b64a6eca1f74b6add8bc0b9c
| 671,431 |
ipynb
|
Jupyter Notebook
|
PH125.2x - Intro to ggplot2.ipynb
|
angelaraya/r-notebooks
|
9e372b1b24a2131bdc6cee360098433042143638
|
[
"MIT"
] | 1 |
2022-03-24T07:22:13.000Z
|
2022-03-24T07:22:13.000Z
|
PH125.2x - Intro to ggplot2.ipynb
|
angel-araya/r-notebooks
|
9e372b1b24a2131bdc6cee360098433042143638
|
[
"MIT"
] | null | null | null |
PH125.2x - Intro to ggplot2.ipynb
|
angel-araya/r-notebooks
|
9e372b1b24a2131bdc6cee360098433042143638
|
[
"MIT"
] | null | null | null | 879.988204 | 131,182 | 0.942183 |
[
[
[
"library(dslabs)\nlibrary(HistData)\n\nlibrary(tidyverse)",
"_____no_output_____"
],
[
"data(heights)\ndata(Galton)\ndata(murders)",
"_____no_output_____"
],
[
"# HarvardX Data Science Course\n# Module 2: Data Visualization",
"_____no_output_____"
],
[
"x <- Galton$child\nx_with_error <- x\nx_with_error[1] <- x_with_error[1] * 10\nmean(x_with_error) - mean(x)",
"_____no_output_____"
],
[
"sd(x_with_error) - sd(x)",
"_____no_output_____"
],
[
"# Median and MAD (median absolute deviation) are robust measurements\nmedian(x_with_error) - median(x)\nmad(x_with_error) - mad(x)",
"_____no_output_____"
],
[
"# Using EDA (exploratory data analisys) to explore changes\n# Returns the average of the vector x after the first entry changed to k\nerror_avg <- function(k) {\n z <- x\n z[1] = k\n mean(z)\n}\n\nerror_avg(10^4)\nerror_avg(-10^4)",
"_____no_output_____"
],
[
"# Quantile-quantile Plots\nmale_heights <- heights$height[heights$sex == 'Male']\np <- seq(0.05, 0.95, 0.05)\nobserved_quantiles <- quantile(male_heights, p)\ntheoretical_quantiles <- qnorm(p, mean=mean(male_heights), sd=sd(male_heights))\n\nplot(theoretical_quantiles, observed_quantiles)\nabline(0,1)",
"_____no_output_____"
],
[
"# It is better to use standard units\nz <- scale(male_heights)\nobserved_quantiles <- quantile(z, p)\ntheoretical_quantiles <- qnorm(p)\nplot(theoretical_quantiles, observed_quantiles)\nabline(0,1)",
"_____no_output_____"
],
[
"# Porcentiles: when the value of p = 0.01...0.99",
"_____no_output_____"
],
[
"# Excercises\nmale <- heights$height[heights$sex == 'Male']\nfemale <- heights$height[heights$sex == 'Female']\n\nlength(male)\nlength(female)",
"_____no_output_____"
],
[
"male_percentiles <- quantile(male, seq(0.1, 0.9, 0.2))\nfemale_percentiles <- quantile(female, seq(0.1, 0.9, 0.2))\n\ndf <- data.frame(female=female_percentiles, male=male_percentiles)\ndf",
"_____no_output_____"
],
[
"# Excercises uing Galton data\nmean(x)\nmedian(x)",
"_____no_output_____"
],
[
"# ggplot2 basics\n\nmurders %>% ggplot(aes(population, total, label = abb)) + geom_point() + geom_label(color = 'blue')\n",
"_____no_output_____"
],
[
"murders_plot <- murders %>% ggplot(aes(population, total, label = abb, color = region)) \nmurders_plot + geom_point() + geom_label()",
"_____no_output_____"
],
[
"murders_plot + \n geom_point() +\n geom_label() +\n scale_x_log10() + \n scale_y_log10() + \n ggtitle('Gun Murder Data')",
"_____no_output_____"
],
[
"heights_plot <- heights %>% ggplot(aes(x = height))\nheights_plot + geom_histogram(binwidth = 1, color = 'darkgrey', fill = 'darkblue')",
"_____no_output_____"
],
[
"heights %>% ggplot(aes(height)) + geom_density()",
"_____no_output_____"
],
[
"heights %>% ggplot(aes(x = height, group = sex)) + geom_density()",
"_____no_output_____"
],
[
"# When setting a color category ggplot know that it has to draw more than 1 plot so the 'group' param is inferred\nheights %>% ggplot(aes(x = height, color = sex)) + geom_density()",
"_____no_output_____"
],
[
"heights_plot <- heights %>% ggplot(aes(x = height, fill = sex)) + geom_density(alpha = 0.2)\nheights_plot",
"_____no_output_____"
],
[
"# These two lines achieve the same, summarize creates a second data frame with a single column \"rate\", .$rate reads the single value finally to the \"r\" object, see ?summarize\nr <- sum(murders$total) / sum(murders$population) * 10^6\nr <- murders %>% summarize(rate = sum(total) / sum(population) * 10^6) %>% .$rate",
"_____no_output_____"
],
[
"library(ggthemes)\nlibrary(ggrepel)",
"_____no_output_____"
],
[
"murders_plot <- murders %>% ggplot(aes(x = population / 10^6, y = total, color = region, label = abb))\nmurders_plot <- murders_plot +\n geom_abline(intercept = log10(r), lty = 2, color = 'darkgray') +\n geom_point(size = 2) +\n geom_text_repel() +\n scale_x_log10() +\n scale_y_log10() + \n ggtitle(\"US Gun Murders in the US, 2010\") +\n xlab(\"Population in millions (log scale)\") +\n ylab(\"Total number of murders (log scale)\") +\n scale_color_discrete(name = 'Region') +\n theme_economist()\nmurders_plot",
"_____no_output_____"
],
[
"library(gridExtra)",
"_____no_output_____"
],
[
"grid.arrange(heights_plot, murders_plot, ncol = 1)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adaf722387d3c0d5ad7a7d80b22f54488def014
| 10,625 |
ipynb
|
Jupyter Notebook
|
VettingFlares/Hunt_down_Hyades_dispcrepant_stars.ipynb
|
ekaterinailin/flares-in-clusters-with-k2-ii
|
40d0f35f55184897b64a3392e45151c118a8a731
|
[
"MIT"
] | 2 |
2019-04-22T20:55:40.000Z
|
2021-12-19T04:10:34.000Z
|
VettingFlares/Hunt_down_Hyades_dispcrepant_stars.ipynb
|
ekaterinailin/flares-in-clusters-with-k2-ii
|
40d0f35f55184897b64a3392e45151c118a8a731
|
[
"MIT"
] | null | null | null |
VettingFlares/Hunt_down_Hyades_dispcrepant_stars.ipynb
|
ekaterinailin/flares-in-clusters-with-k2-ii
|
40d0f35f55184897b64a3392e45151c118a8a731
|
[
"MIT"
] | null | null | null | 34.496753 | 227 | 0.468141 |
[
[
[
"import pandas as pd\n",
"_____no_output_____"
],
[
"df = pd.read_csv(\"../k2scoc/results/tables/full_table.csv\")\n\n\nhasflares = (df.real==1) & (df.todrop.isnull())\nwassearched = (df.real==0) & (df.todrop.isnull())\ndf = df[hasflares & (df.cluster==\"hyades\") & (df.Teff_median > 3250.) & (df.Teff_median < 3500.)]\ndf[[\"EPIC\"]].drop_duplicates()\n",
"_____no_output_____"
]
],
[
[
"3500K < Teff < 3750 K:\n\n- [EPIC 247122957](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+36+04.172+%09%2B18+53+18.88&Radius=2&Radius.unit=arcsec&submit=submit+query)\n- [EPIC 211036776](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+211036776&NbIdent=1&Radius=2&Radius.unit=arcsec&submit=submit+id) **binary or multiple**\n- [EPIC 210923016](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+19+29.784+%09%2B21+45+13.99&Radius=2&Radius.unit=arcsec&submit=submit+query)\n- [EPIC 246806983](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=05+11+09.708+%09%2B15+48+57.47&Radius=2&Radius.unit=arcsec&submit=submit+query)\n- [EPIC 247289039](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+42+30.301+%09%2B20+27+11.43&Radius=2&Radius.unit=arcsec&submit=submit+query) **spectroscopic binary**\n- [EPIC 247592661](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+30+38.192+%09%2B22+54+28.88&Radius=2&Radius.unit=arcsec&submit=submit+query) flare star\n- [EPIC 247973705](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+40+06.776+%09%2B25+36+46.40&Radius=2&Radius.unit=arcsec&submit=submit+query)\n- [EPIC 210317378](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210317378&submit=submit+id)\n- [EPIC 210721261](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+29+01.010+%09%2B18+40+25.33+%09&Radius=2&Radius.unit=arcsec&submit=submit+query) BY Dra\n- [EPIC 210741091](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210741091&submit=submit+id)\n- [EPIC 247164626](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+40+12.713+%09%2B19+17+09.97&CooFrame=FK5&CooEpoch=2000&CooEqui=2000&CooDefinedFrames=none&Radius=2&Radius.unit=arcsec&submit=submit+query&CoordList=)",
"_____no_output_____"
],
[
"Teff < 3000 K:\n\n- [EPIC 210563410](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=%403995861&Name=%5BRSP2011%5D+75&submit=display+all+measurements#lab_meas) p=21d\n- [EPIC 248018423](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+51+18.846+%09%2B25+56+33.36+%09&Radius=2&Radius.unit=arcsec&submit=submit+query)\n- [EPIC 210371851](http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=EPIC+210371851&submit=SIMBAD+search) **binary**\n- [EPIC 210523892](http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=EPIC+210523892&submit=SIMBAD+search) not a binary in Gizis+Reid(1995)\n- [EPIC 210643507](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=%403995810&Name=EPIC+210643507&submit=display+all+measurements#lab_meas) p=22d\n- [EPIC 210839963](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210839963&NbIdent=1&Radius=2&Radius.unit=arcsec&submit=submit+id) no rotation\n- [EPIC 210835057](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+210835057&NbIdent=1&Radius=2&Radius.unit=arcsec&submit=submit+id) \n- [EPIC 247230044](http://simbad.u-strasbg.fr/simbad/sim-id?Ident=EPIC+247230044&submit=submit+id)\n- [EPIC 247254123](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=+%0904+35+13.549+%09%2B20+08+01.41+%09&Radius=2&Radius.unit=arcsec&submit=submit+query)\n- [EPIC 247523445](http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=EPIC+247523445&submit=SIMBAD+search)\n- [EPIC 247829435](http://simbad.u-strasbg.fr/simbad/sim-coo?Coord=04+46+44.990+%09%2B24+36+40.40&CooFrame=FK5&CooEpoch=2000&CooEqui=2000&CooDefinedFrames=none&Radius=2&Radius.unit=arcsec&submit=submit+query&CoordList=)\n",
"_____no_output_____"
]
]
] |
[
"code",
"markdown"
] |
[
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4adaf75a6a1a6e2ac8537f6a3e546d0ac9e38bca
| 127,827 |
ipynb
|
Jupyter Notebook
|
S5/s5pass1.ipynb
|
VijayPrakashReddy-k/EVA
|
fd78ff8bda4227aebd0f5db14865d3c5a47b19b0
|
[
"MIT"
] | null | null | null |
S5/s5pass1.ipynb
|
VijayPrakashReddy-k/EVA
|
fd78ff8bda4227aebd0f5db14865d3c5a47b19b0
|
[
"MIT"
] | null | null | null |
S5/s5pass1.ipynb
|
VijayPrakashReddy-k/EVA
|
fd78ff8bda4227aebd0f5db14865d3c5a47b19b0
|
[
"MIT"
] | null | null | null | 64.009514 | 58,316 | 0.683682 |
[
[
[
"# Import Libraries",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms",
"_____no_output_____"
]
],
[
[
"## Data Transformations\n\n",
"_____no_output_____"
]
],
[
[
"# Train Phase transformations\ntrain_transforms = transforms.Compose([\n # transforms.Resize((28, 28)),\n # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values. \n # Note the difference between (0.1307) and (0.1307,)\n ])\n\n\n# Test Phase transformations\ntest_transforms = transforms.Compose([\n # transforms.Resize((28, 28)),\n # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))\n ])\n",
"_____no_output_____"
]
],
[
[
"Transforms.compose-Composes several transforms together.\nToTensor()-Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor\nConverts a PIL Image or numpy.ndarray (H x W x C) in the range\n [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]\n if the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1)\n or if the numpy.ndarray has dtype = np.uint8\nNormalize- Normalize a tensor image with mean and standard deviation.\n Given mean: ``(M1,...,Mn)`` and std: ``(S1,..,Sn)`` for ``n`` channels, this transform\n will normalize each channel of the input ``torch.*Tensor`` i.e.\n ``input[channel] = (input[channel] - mean[channel]) / std[channel]\n \nResize-Resize the input PIL Image to the given size.\nCenterCrop-Crops the given PIL Image at the center.\nPad-Pad the given PIL Image on all sides with the given \"pad\" value\nRandomTransforms-Base class for a list of transformations with randomness\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# Dataset and Creating Train/Test Split",
"_____no_output_____"
]
],
[
[
"train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)\ntest = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)",
"Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz\n"
]
],
[
[
"# Dataloader Arguments & Test/Train Dataloaders\n",
"_____no_output_____"
]
],
[
[
"SEED = 1\n\n# CUDA?\ncuda = torch.cuda.is_available()\nprint(\"CUDA Available?\", cuda)\n\n# For reproducibility\ntorch.manual_seed(SEED)\n\nif cuda:\n torch.cuda.manual_seed(SEED)\n\n# dataloader arguments - something you'll fetch these from cmdprmt\ndataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)\n\n# train dataloader\ntrain_loader = torch.utils.data.DataLoader(train, **dataloader_args)\n\n# test dataloader\ntest_loader = torch.utils.data.DataLoader(test, **dataloader_args)",
"CUDA Available? True\n"
],
[
"#defining the network structure\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n\n self.conv1 = nn.Sequential(\n nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),\n nn.BatchNorm2d(16),\n nn.ReLU(),\n nn.Dropout(0.15),\n \n nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n nn.Dropout(0.15),\n nn.Conv2d(in_channels=32, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),\n nn.ReLU(),\n nn.MaxPool2d(2, 2)\n \n )\n \n self.conv2 = nn.Sequential(\n nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n nn.Dropout(0.15),\n nn.Conv2d(in_channels=32, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n nn.Dropout(0.15)\n )\n \n self.conv3 = nn.Sequential(\n nn.Conv2d(in_channels=32, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n nn.Dropout(0.15),\n nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(5, 5), padding=0, bias=False)\n )\n\n\n def forward(self, x):\n x = self.conv1(x)\n x = self.conv2(x)\n x = self.conv3(x)\n \n x = x.view(-1, 10)\n return F.log_softmax(x)",
"_____no_output_____"
]
],
[
[
"# Model Params\nCan't emphasize on how important viewing Model Summary is. \nUnfortunately, there is no in-built model visualizer, so we have to take external help",
"_____no_output_____"
]
],
[
[
"!pip install torchsummary\nfrom torchsummary import summary\nuse_cuda = torch.cuda.is_available()\ndevice = torch.device(\"cuda\" if use_cuda else \"cpu\")\nprint(device)\nmodel = Net().to(device)\nsummary(model, input_size=(1, 28, 28))",
"Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1)\ncuda\n----------------------------------------------------------------\n Layer (type) Output Shape Param #\n================================================================\n Conv2d-1 [-1, 16, 26, 26] 144\n BatchNorm2d-2 [-1, 16, 26, 26] 32\n ReLU-3 [-1, 16, 26, 26] 0\n Dropout-4 [-1, 16, 26, 26] 0\n Conv2d-5 [-1, 32, 24, 24] 4,608\n BatchNorm2d-6 [-1, 32, 24, 24] 64\n ReLU-7 [-1, 32, 24, 24] 0\n Dropout-8 [-1, 32, 24, 24] 0\n Conv2d-9 [-1, 16, 22, 22] 4,608\n ReLU-10 [-1, 16, 22, 22] 0\n MaxPool2d-11 [-1, 16, 11, 11] 0\n Conv2d-12 [-1, 32, 9, 9] 4,608\n BatchNorm2d-13 [-1, 32, 9, 9] 64\n ReLU-14 [-1, 32, 9, 9] 0\n Dropout-15 [-1, 32, 9, 9] 0\n Conv2d-16 [-1, 32, 7, 7] 9,216\n BatchNorm2d-17 [-1, 32, 7, 7] 64\n ReLU-18 [-1, 32, 7, 7] 0\n Dropout-19 [-1, 32, 7, 7] 0\n Conv2d-20 [-1, 32, 5, 5] 9,216\n BatchNorm2d-21 [-1, 32, 5, 5] 64\n ReLU-22 [-1, 32, 5, 5] 0\n Dropout-23 [-1, 32, 5, 5] 0\n Conv2d-24 [-1, 10, 1, 1] 8,000\n================================================================\nTotal params: 40,688\nTrainable params: 40,688\nNon-trainable params: 0\n----------------------------------------------------------------\nInput size (MB): 0.00\nForward/backward pass size (MB): 1.18\nParams size (MB): 0.16\nEstimated Total Size (MB): 1.34\n----------------------------------------------------------------\n"
]
],
[
[
"# Training and Testing\n\nLooking at logs can be boring, so we'll introduce **tqdm** progressbar to get cooler logs. \n\nLet's write train and test functions",
"_____no_output_____"
]
],
[
[
"!pip install tqdm",
"Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (4.28.1)\n"
],
[
"from tqdm import tqdm\n\ntrain_losses = []\ntest_losses = []\ntrain_acc = []\ntest_acc = []\n\ndef train(model, device, train_loader, optimizer, epoch):\n model.train()\n pbar = tqdm(train_loader)\n correct = 0\n processed = 0\n for batch_idx, (data, target) in enumerate(pbar):\n # get samples\n data, target = data.to(device), target.to(device)\n\n # Init\n optimizer.zero_grad()\n # In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes. \n # Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.\n\n # Predict\n y_pred = model(data)\n\n # Calculate loss\n loss = F.nll_loss(y_pred, target)\n train_losses.append(loss)\n\n # Backpropagation\n loss.backward()\n optimizer.step()\n\n # Update pbar-tqdm\n \n pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability\n correct += pred.eq(target.view_as(pred)).sum().item()\n processed += len(data)\n\n pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')\n train_acc.append(100*correct/processed)\n\ndef test(model, device, test_loader):\n model.eval()\n test_loss = 0\n correct = 0\n with torch.no_grad():\n for data, target in test_loader:\n data, target = data.to(device), target.to(device)\n output = model(data)\n test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss\n pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability\n correct += pred.eq(target.view_as(pred)).sum().item()\n\n test_loss /= len(test_loader.dataset)\n test_losses.append(test_loss)\n\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n 100. * correct / len(test_loader.dataset)))\n \n test_acc.append(100. * correct / len(test_loader.dataset))",
"_____no_output_____"
]
],
[
[
"# Let's Train and test our model",
"_____no_output_____"
]
],
[
[
"model = Net().to(device)\noptimizer = optim.SGD(model.parameters(), lr=0.0335, momentum=0.9)\nEPOCHS = 15\nfor epoch in range(EPOCHS):\n print(\"EPOCH:\", epoch)\n train(model, device, train_loader, optimizer, epoch)\n test(model, device, test_loader)",
"\r 0%| | 0/469 [00:00<?, ?it/s]"
],
[
"import matplotlib.pyplot as plt\n \nfig, axs = plt.subplots(2,2,figsize=(15,10))\naxs[0, 0].plot(train_losses)\naxs[0, 0].set_title(\"Training Loss\")\naxs[1, 0].plot(train_acc)\naxs[1, 0].set_title(\"Training Accuracy\")\naxs[0, 1].plot(test_losses)\naxs[0, 1].set_title(\"Test Loss\")\naxs[1, 1].plot(test_acc)\naxs[1, 1].set_title(\"Test Accuracy\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4adb033f371de030340590b922977aa0821d2da6
| 320,743 |
ipynb
|
Jupyter Notebook
|
Consumidor.gov.br/AE_Consumidor.gov.ipynb
|
ThayaneMoreira/Analises_Exploratorias
|
d362af06ea34f8c1914e866128cdee78cabb28f2
|
[
"MIT"
] | null | null | null |
Consumidor.gov.br/AE_Consumidor.gov.ipynb
|
ThayaneMoreira/Analises_Exploratorias
|
d362af06ea34f8c1914e866128cdee78cabb28f2
|
[
"MIT"
] | null | null | null |
Consumidor.gov.br/AE_Consumidor.gov.ipynb
|
ThayaneMoreira/Analises_Exploratorias
|
d362af06ea34f8c1914e866128cdee78cabb28f2
|
[
"MIT"
] | null | null | null | 308.703561 | 49,193 | 0.91515 |
[
[
[
"# Análise de Dados da Plataforma Consumidor.gov.br em 2019",
"_____no_output_____"
],
[
"O Consumidor.gov.br, plataforma criada pelo Governo Federal como alternativa para desafogar o Procon, trouxe ainda uma maior proximidade entre consumidor e empresa para a resolução de conflitos já que não há intermediadores. Serviço público, gratuito e monitorado pelos órgãos de defesa do consumidor, juntamente com a Secretaria Nacional do Consumidor do Ministério da Justiça.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"A análise exploratória da plataforma, possibilita a compreensão de como ela está sendo utilizada e se seu propósito vem sendo cumprido.\n\nOs dados foram obtidos no próprio site Consumidor.gov.br, nos dados abertos onde estão armazenados os dados do ano de 2014 ao ano de 2020.",
"_____no_output_____"
],
[
"### Importando Bibliotecas",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n\n%matplotlib inline\n%pylab inline\n\nplt.style.use('ggplot')\n\n",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"### Carregando Dataset\n\nOs dados da plataforma referente ao ano de 2019 estão em um único arquivo xlsx.",
"_____no_output_____"
]
],
[
[
"df = pd.read_excel('C:\\Analises_Exploratorias\\Analises_Exploratorias\\Consumidor.gov.br\\Dados_gov.xlsx')",
"_____no_output_____"
],
[
"df.columns=['UF', 'Cidade', 'Sexo', 'Faixa Etária', 'Data Finalização', 'Tempo Resposta', 'Nome Fantasia', 'Segmentação de Mercado', 'Área', 'Assunto', 'Grupo Problema', 'Problema', 'Como Comprou Contratou', 'Procurou Empresa', 'Respondida', 'Situação', 'Avaliação Reclamação', 'Nota Consumidor']",
"_____no_output_____"
]
],
[
[
"### Dicionário de Dados\n\n- UF: Sigla do estado do consumidor reclamante;\n- Cidade: Município do consumidor reclamante;\n- Sexo: Sigla do sexo do consumidor reclamante;\n- Faixa Etária: Faixa etária do consumidor;\n- Data Finalização: Data de finalização da reclamação;\n- Tempo Resposta: Número de dias para a resposta da reclamação, desconsiderando o tempo que a reclamação tenha ficado em análise pelo gestor;\n- Nome Fantasia: Nome pelo qual a empresa reclamada é conhecida pelo mercado;\n- Segmentação de mercado: Principal segmento de mercado da empresa participante;\n- Área: Área à qual percente o assunto objeot da reclamação;\n- Assunto: Assunto objeto da reclamação;\n- Grupo Problema: Agrupamento do qual faz parte o problema classificado na reclamação;\n- Problema: Descrição do problema objeto da reclamação;\n- Como Comprou Contratou: Descrição do meio utilizado para contratação/aquisição do produto ou serviço reclamado;\n- Procurou Empresa: Sigla da resposta do consumidor à pergunta: \"Procurou a empresa para solucionar o problema?\"\n- Respondida: Sigla que indica se a empresa respondeu à reclamação ou não;\n- Situação: Situação atual da reclamação no sistema;\n- Avaliação Reclamação: Classificação atribuída pelo consumidor sobre o desfecho da reclamação;\n- Nota do Consumidor: Número da nota de 1 a 5 atribuída pelo consumidor ao atendimento da empresa;\n",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"## Análise Exploratória dos Dados\n",
"_____no_output_____"
]
],
[
[
"# Quantidade de entradas e variáveis\n\nprint(\"Número de entradas do df:\", df.shape[0])\nprint(\"Número de variáveis do df:\", df.shape[1])",
"Número de entradas do df: 780168\nNúmero de variáveis do df: 18\n"
]
],
[
[
"Nota-se que o dataset é bem extensos, possuindo centenas de milhares de entradas. Isso mostra que tal ferramenta de resolução dos conflitos, vem sendo bastante requisitada pelos consumidores.\n",
"_____no_output_____"
],
[
"Através do pd.dtypes saberemos quais são os tipos de dados existentes no DataFrame.\n",
"_____no_output_____"
]
],
[
[
"# Tipos das variáveis\n\ndf.dtypes",
"_____no_output_____"
]
],
[
[
"A grande maioria dos dados estão no formato object.",
"_____no_output_____"
],
[
"Em seguida veremos qual é a porcentagem de dados nulos presente no DataFrame. Dependendo da quantidade de valores nulos, será necessário removermos ou preenchermos essas ausências.",
"_____no_output_____"
]
],
[
[
"# Percentual de valores nulos em cada variável\n\ndf.isnull().sum().sort_values(ascending=False) / df.shape[0]",
"_____no_output_____"
]
],
[
[
"Tempo Resposta possui um valor irrisório de 1% de valores nulos. Já a variável Nota Consumidor está com quase 44% de seus dados ausentes. Para realizarmos uma análise mais precisa, faremos uma cópia do DataFrame chamada df_limpo, onde serão removidos todas as entradas com valores nulos.",
"_____no_output_____"
]
],
[
[
"# Criação de um novo DF para exclusão de valores nulos\n\ndf_limpo = df.copy()",
"_____no_output_____"
],
[
"# dropar os valores nulos \n\ndf_limpo.dropna(axis=0, inplace=True)",
"_____no_output_____"
]
],
[
[
"### Descrição do dataset\n\nVisualizaremos agora algumas características estatísticas do DataFrame através da função pd.describe(). Como somente 2 variáveis possuem valores de floats, somente serão plotadas as estatísticas dessas 2 colunas.",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"Na variável Tempo Resposta encontramos os seguintes dados estatísticos:\n\n- O tempo médio para as empresas responderem aos consumidores é de 6,5 dias;\n- O máximo de tempo para as empresas darem uma resposta foi de 15 dias;\n\nPercebemos que o tempo de resposta, sendo esse considerado apenas ao finalizar a reclamação, no geral, é satisfatório. \n\nNa variável Nota Consumidor encontramos os seguintes dados estatísticos:\n\n- A média da nota dada pelos consumidores foi de 3,2 pontos;\n- Metade das notas foram de 4 pontos;\n- Apenas 25% das notas foram de 5 pontos;\n\nAs notas dispostas pelos consumidores revelam que a resolução das queixas ainda não atingiu o alto nível de satisfação. ",
"_____no_output_____"
],
[
"## Análise Gráfica dos Dados\n\nRealizada a Análise Exploratória, o passo seguinte é analisarmos os gráficos e identificarmos quais insights poderão ser obtidos deles.\n\nO primeiro gráfico a ser plotado será referente aos 6 estados com a maior porcentagem de consumidores que deram entrada na plataforma. Para isso definiremos o DataFrame df_uf com esses valores.",
"_____no_output_____"
]
],
[
[
"# criação do DataFrame com os 6 estados com a maior porcentagem de consumidores\n\ndf_uf = (df['UF'].value_counts() / df.shape[0])[0:6].copy()",
"_____no_output_____"
],
[
"# gráfico de barras para os 6 estados com maior número de registros\n\nsns.set()\nfig, ax = plt.subplots(figsize=(10, 6))\ndf_uf.plot(kind=\"bar\", ax=ax)\nax.set_xlabel(\"Estados\", fontsize=16)\nax.set_ylabel(\"Porcentagem de Registros\", fontsize=16)\nax.set_title(\"6 Estados com o Maior Número de Registros de Reclamações\", fontsize=20)\nplt.xticks(rotation=0)\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"Identificamos que São Paulo, obteve o maior número de reclamações registradas na plataforma. Podemos considerar o fato de ser o estado mais populoso do país e seus habitantes possuírem um maior poder de compra.\n\nDos 6 estados com maior número de reclamações, 3 são do Sudestes e 2 são do Sul, sendo o 6º colocado o estado da Bahia, na região Nordeste.\n",
"_____no_output_____"
]
],
[
[
"# DataFrame com a quantidade de registros da variável Faixa Etária\n\ndf_etaria = df['Faixa Etária'].value_counts().copy()",
"_____no_output_____"
],
[
"# gráfico de barras da faixa etária dos consumidores\n\nfig, ax = plt.subplots(figsize=(10,6))\ndf_etaria.plot(kind='bar', ax=ax)\nax.set_title(\"Faixa Etária dos Consumidores\", fontsize=20)\nax.set_xlabel(\"Faixa Etária\", fontsize=16)\nax.set_ylabel(\"Número de Registros\", fontsize=16)\nplt.xticks(rotation=35)\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"Vemos que a grande maioria dos consumidores possuem entre 21 e 40 anos. O apontamento não poderia ser outro, tendo em vista que a população nessa idade representa os maiores acessos a tecnologia.",
"_____no_output_____"
]
],
[
[
"# Quantidade de Reclamações por Sexo\n\ndf_sexo = df['Sexo'].value_counts().copy()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(8,6))\nsns.set(style='darkgrid')\nsexo = df[u'Sexo'].unique()\ncont = df[u'Sexo'].value_counts()\nax.set_title(\"Reclamações por Sexo\", fontsize=20)\nsns.barplot(x=sexo, y=cont)\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"Podemos identificar que a maioria dos consumidores que deram registraram reclamações na plataforma são do sexo masculino.",
"_____no_output_____"
]
],
[
[
"# Procuram a Empresa antes de Registrarem a Reclamação?\n\nfig, ax = plt.subplots(figsize=(8,4))\ndf2 = df[df[u'Faixa Etária']=='entre 31 a 40 anos']\ndf2['Procurou Empresa'].value_counts().plot.barh()\nax.set_title(\"Procurou Empresa antes de Registrar Reclamação?\", fontsize=20)\nplt.tight_layout()",
"_____no_output_____"
],
[
"# Criação do DataFrame com as 10 empresas com o maior número de reclamações\n\ndf_empresas = df['Nome Fantasia'].value_counts()[0:10].copy()",
"_____no_output_____"
],
[
"# gráfico de barras das 10 empresas com o maior número de reclamações\n\nfig, ax = plt.subplots(figsize=(10,5))\ndf_empresas.plot(kind='barh', ax=ax)\nax.set_title('10 Empresas com Maior Número de Reclamações', fontsize=20)\nax.set_xlabel('Número de Registros', fontsize=16)\nax.set_ylabel('Empresas', fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Das 10 empresas com o maior número de reclamações, 5 delas são do segmento de telefonia e/ou internet.",
"_____no_output_____"
]
],
[
[
"# Grupo de problema mais comum\n\ndf['Grupo Problema'].value_counts()",
"_____no_output_____"
],
[
"# gráfico de barras para a quantidade de reclamações respondidas\nfig, ax = plt.subplots()\nsns.countplot(df['Respondida'], ax=ax);\nax.set_title(\"Quantidade de Reclamações Respondidas e Não Respondidas\", fontsize=20)\nax.set_xlabel(\"Resposta\", fontsize=16)\nax.set_ylabel(\"Quantidade de Entradas\", fontsize=16)",
"c:\\users\\carleana\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages\\seaborn\\_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
]
],
[
[
"Percebe-se que a comunicação entre consumidor e empresa estabelecida pela plataforma é eficaz, com alto índice de respostas para resolução dos conflitos. \n\nNa Análise Exploratória vimos estatísticas relacionadas ao tempo de resposta, como o tempo médio que as empresas leval para finalizar um registro de reclamação.\n\nPara uma melhor observação dessa variável, podemos utilizar um histograma, permitindo visualizar a relação entre a quantidade de entradas e a quantidade de tempo dispendido para se obter uma resposta.",
"_____no_output_____"
]
],
[
[
"# histograma do tempo de resposta\n\nfig, ax = plt.subplots(figsize=(10,6))\ndf.hist('Tempo Resposta', ax=ax)\nax.set_title('Tempo de Resposta', fontsize=20)\nax.set_xlabel('Número de Dias', fontsize=16)\nax.set_ylabel('Número de Entradas', fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"O histograma revela que as resposta a maior parte das reclamações se dado entre 6 e 10 dias.\n\nOutro histograma importante é o da nota dada pelos consumidores quanto a resolução de seus problemas. ",
"_____no_output_____"
]
],
[
[
"# histograma da variável nota_consumidor\n\nfig, ax = plt.subplots(figsize=(10,6))\ndf_limpo.hist('Nota Consumidor', ax=ax)\nax.set_title('Nota do Consumidor', fontsize=20)\nax.set_xlabel('Número de Dias', fontsize=16)\nax.set_ylabel('Número de Entradas', fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Conclusão\n\n\n\nDiversos fatores podem ser apontados como responsáveis pelo aumento exponencial do consumo da população brasileira. Os estímulos das empresas e da sociedade ao consumismo desenfreado e incosciente, juntamente com a obsolescência programada dos produtos, podem ser o fator de maior impacto nesse número. E, como em qualquer setor, serviço/produto ofertado, problemas acontecem. \n\nIndependetemente das causas e apesar das diversas contribuições econômicas e sociais desse comportamento, uma das consequências mais prejudiciais à sociedade é o abarrotamento gradativo sofrido pelo judiciário de causas envolvendo as relações de consumo. \nComo vimos ao longo dessa análise, tal ferramenta vem obtendo uma crescente adesão e resolvendo de forma satisfatória os problemas apontados pelos consumidores, com baixo tempo de reposta e taxa de aprovação consideravelmente agradável.\n\nAssim, o espaço onde é promovido o diálogo entre consumidores e empresas de diversos nichos econômicos de forma voluntária e participativa, possibilita o alcance de um desfecho benéfico a ambas as partes. Favorecendo ainda no engajamento dos clientes para que voltem a fazer negócios com as empresas reclamadas. Hoje, a plataforma possui em torno de 861 empresas ativas em seu catálogo.\n\n\n\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adb0f8cc42903e9000611b07132b5a9ffd3a9f2
| 41,838 |
ipynb
|
Jupyter Notebook
|
in_progress/Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb
|
ksible/nrpytutorial
|
4ca6e9da22def2a9c9bcbcad75847fd1db159f4b
|
[
"BSD-2-Clause"
] | 1 |
2020-06-09T16:16:21.000Z
|
2020-06-09T16:16:21.000Z
|
in_progress/Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb
|
ksible/nrpytutorial
|
4ca6e9da22def2a9c9bcbcad75847fd1db159f4b
|
[
"BSD-2-Clause"
] | null | null | null |
in_progress/Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb
|
ksible/nrpytutorial
|
4ca6e9da22def2a9c9bcbcad75847fd1db159f4b
|
[
"BSD-2-Clause"
] | null | null | null | 56.922449 | 858 | 0.608896 |
[
[
[
"# `GiRaFFE_NRPy`: Solving the Induction Equation\n\n## Author: Patrick Nelson\n\nThis notebook documents the function from the original `GiRaFFE` that calculates the flux for $A_i$ according to the method of Harten, Lax, von Leer, and Einfeldt (HLLE), assuming that we have calculated the values of the velocity and magnetic field on the cell faces according to the piecewise-parabolic method (PPM) of [Colella and Woodward (1984)](https://crd.lbl.gov/assets/pubs_presos/AMCS/ANAG/A141984.pdf), modified for the case of GRFFE. \n\n**Notebook Status:** <font color=green><b> Validated </b></font>\n\n**Validation Notes:** This code has been validated by showing that it converges to the exact answer at the expected order\n\n### NRPy+ Source Code for this module: \n* [GiRaFFE_NRPy/Afield_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py)\n\nOur goal in this module is to write the code necessary to solve the induction equation \n$$\n\\partial_t A_i = \\underbrace{\\epsilon_{ijk} v^j B^k}_{\\rm Flux\\ terms} - \\underbrace{\\partial_i \\left(\\alpha \\Phi - \\beta^j A_j \\right)}_{\\rm Gauge\\ terms}.\n$$\nTo properly handle the flux terms and avoiding problems with shocks, we cannot simply take a cross product of the velocity and magnetic field at the cell centers. Instead, we must solve the Riemann problem at the cell faces using the reconstructed values of the velocity and magnetic field on either side of the cell faces. The reconstruction is done using PPM (see [here](Tutorial-GiRaFFE_NRPy-PPM.ipynb)); in this module, we will assume that that step has already been done. Metric quantities are assumed to have been interpolated to cell faces, as is done in [this](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) tutorial. \n\nTóth's [paper](https://www.sciencedirect.com/science/article/pii/S0021999100965197?via%3Dihub), Eqs. 30 and 31, are one of the first implementations of such a scheme. The original GiRaFFE used a 2D version of the algorithm from [Del Zanna, et al. (2002)](https://arxiv.org/abs/astro-ph/0210618); but since we are not using staggered grids, we can greatly simplify this algorithm with respect to the version used in the original `GiRaFFE`. Instead, we will adapt the implementations of the algorithm used in [Mewes, et al. (2020)](https://arxiv.org/abs/2002.06225) and [Giacomazzo, et al. (2011)](https://arxiv.org/abs/1009.2468), Eqs. 3-11. \n\nWe first write the flux contribution to the induction equation RHS as \n$$\n\\partial_t A_i = -E_i,\n$$\nwhere the electric field $E_i$ is given in ideal MHD (of which FFE is a subset) as\n$$\n-E_i = \\epsilon_{ijk} v^j B^k,\n$$\nwhere $v^i$ is the drift velocity, $B^i$ is the magnetic field, and $\\epsilon_{ijk} = \\sqrt{\\gamma} [ijk]$ is the Levi-Civita tensor.\nIn Cartesian coordinates, \n\\begin{align}\n-E_x &= [F^y(B^z)]_x = -[F^z(B^y)]_x \\\\\n-E_y &= [F^z(B^x)]_y = -[F^x(B^z)]_y \\\\\n-E_z &= [F^x(B^y)]_z = -[F^y(B^x)]_z, \\\\\n\\end{align}\nwhere \n$$\n[F^i(B^j)]_k = \\sqrt{\\gamma} (v^i B^j - v^j B^i).\n$$\nTo compute the actual contribution to the RHS in some direction $i$, we average the above listed field as calculated on the $+j$, $-j$, $+k$, and $-k$ faces. 
That is, at some point $(i,j,k)$ on the grid,\n\\begin{align}\n-E_x(x_i,y_j,z_k) &= \\frac{1}{4} \\left( [F_{\\rm HLL}^y(B^z)]_{x(i,j+1/2,k)}+[F_{\\rm HLL}^y(B^z)]_{x(i,j-1/2,k)}-[F_{\\rm HLL}^z(B^y)]_{x(i,j,k+1/2)}-[F_{\\rm HLL}^z(B^y)]_{x(i,j,k-1/2)} \\right) \\\\\n-E_y(x_i,y_j,z_k) &= \\frac{1}{4} \\left( [F_{\\rm HLL}^z(B^x)]_{y(i,j,k+1/2)}+[F_{\\rm HLL}^z(B^x)]_{y(i,j,k-1/2)}-[F_{\\rm HLL}^x(B^z)]_{y(i+1/2,j,k)}-[F_{\\rm HLL}^x(B^z)]_{y(i-1/2,j,k)} \\right) \\\\\n-E_z(x_i,y_j,z_k) &= \\frac{1}{4} \\left( [F_{\\rm HLL}^x(B^y)]_{z(i+1/2,j,k)}+[F_{\\rm HLL}^x(B^y)]_{z(i-1/2,j,k)}-[F_{\\rm HLL}^y(B^x)]_{z(i,j+1/2,k)}-[F_{\\rm HLL}^y(B^x)]_{z(i,j-1/2,k)} \\right). \\\\\n\\end{align}\nNote the use of $F_{\\rm HLL}$ here. This change signifies that the quantity output here is from the HLLE Riemann solver. Note also the indices on the fluxes. Values of $\\pm 1/2$ indicate that these are computed on cell faces using the reconstructed values of $v^i$ and $B^i$ and the interpolated values of the metric gridfunctions. So, \n$$\nF_{\\rm HLL}^i(B^j) = \\frac{c_{\\rm min} F_{\\rm R}^i(B^j) + c_{\\rm max} F_{\\rm L}^i(B^j) - c_{\\rm min} c_{\\rm max} (B_{\\rm R}^j-B_{\\rm L}^j)}{c_{\\rm min} + c_{\\rm max}}.\n$$\n\nThe speeds $c_\\min$ and $c_\\max$ are characteristic speeds that waves can travel through the plasma. In GRFFE, the expressions defining them reduce a function of only the metric quantities. $c_\\min$ is the negative of the minimum amongst the speeds $c_-$ and $0$ and $c_\\max$ is the maximum amongst the speeds $c_+$ and $0$. The speeds $c_\\pm = \\left. \\left(-b \\pm \\sqrt{b^2-4ac}\\right)\\middle/ \\left(2a\\right) \\right.$ must be calculated on both the left and right faces, where \n$$a = 1/\\alpha^2,$$ \n$$b = 2 \\beta^i / \\alpha^2$$\nand $$c = g^{ii} - (\\beta^i)^2/\\alpha^2.$$\nAn outline of a general finite-volume method is as follows, with the current step in bold:\n1. The Reconstruction Step - Piecewise Parabolic Method\n 1. Within each cell, fit to a function that conserves the volume in that cell using information from the neighboring cells\n * For PPM, we will naturally use parabolas\n 1. Use that fit to define the state at the left and right interface of each cell\n 1. Apply a slope limiter to mitigate Gibbs phenomenon\n1. Interpolate the value of the metric gridfunctions on the cell faces\n1. **Solving the Riemann Problem - Harten, Lax, (This notebook, $E_i$ only)**\n 1. **Use the left and right reconstructed states to calculate the unique state at boundary**\n\nWe will assume in this notebook that the reconstructed velocities and magnetic fields are available on cell faces as input. We will also assume that the metric gridfunctions have been interpolated on the metric faces. \n\nSolving the Riemann problem, then, consists of two substeps: First, we compute the flux through each face of the cell. Then, we add the average of these fluxes to the right-hand side of the evolution equation for the vector potential. ",
"_____no_output_____"
],
[
"<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis notebook is organized as follows\n\n1. [Step 1](#prelim): Preliminaries\n1. [Step 2](#a_i_flux): Computing the Magnetic Flux\n 1. [Step 2.a](#hydro_speed): GRFFE characteristic wave speeds\n 1. [Step 2.b](#fluxes): Compute the HLLE fluxes\n1. [Step 3](#code_validation): Code Validation against `GiRaFFE_NRPy.Afield_flux` NRPy+ Module\n1. [Step 4](#latex_pdf_output): Output this notebook to $\\LaTeX$-formatted PDF file",
"_____no_output_____"
],
[
"<a id='prelim'></a>\n\n# Step 1: Preliminaries \\[Back to [top](#toc)\\]\n$$\\label{prelim}$$\n\nWe begin by importing the NRPy+ core functionality. We also import the Levi-Civita symbol, the GRHD module, and the GRFFE module.",
"_____no_output_____"
]
],
[
[
"# Step 0: Add NRPy's directory to the path\n# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory\nimport os, sys # Standard Python modules for multiplatform OS-level functions\nnrpy_dir_path = os.path.join(\"..\")\nif nrpy_dir_path not in sys.path:\n sys.path.append(nrpy_dir_path)\n\nfrom outputC import outCfunction, outputC # NRPy+: Core C code output module\nimport sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends\nimport NRPy_param_funcs as par # NRPy+: Parameter interface\nimport grid as gri # NRPy+: Functions having to do with numerical grids\nimport indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support\nimport cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\n\nthismodule = \"GiRaFFE_NRPy-Afield_flux\"\n\nimport GRHD.equations as GRHD\n# import GRFFE.equations as GRFFE",
"_____no_output_____"
]
],
[
[
"<a id='a_i_flux'></a>\n\n# Step 2: Computing the Magnetic Flux \\[Back to [top](#toc)\\]\n$$\\label{a_i_flux}$$\n\n<a id='hydro_speed'></a>\n\n## Step 2.a: GRFFE characteristic wave speeds \\[Back to [top](#toc)\\]\n$$\\label{hydro_speed}$$\n\nNext, we will find the speeds at which the hydrodynamics waves propagate. We start from the speed of light (since FFE deals with very diffuse plasmas), which is $c=1.0$ in our chosen units. We then find the speeds $c_+$ and $c_-$ on each face with the function `find_cp_cm`; then, we find minimum and maximum speeds possible from among those.\n\n\n\nBelow is the source code for `find_cp_cm`, edited to work with the NRPy+ version of GiRaFFE. One edit we need to make in particular is to the term `psim4*gupii` in the definition of `c`; that was written assuming the use of the conformal metric $\\tilde{g}^{ii}$. Since we are not using that here, and are instead using the ADM metric, we should not multiply by $\\psi^{-4}$.\n\n```c\nstatic inline void find_cp_cm(REAL &cplus,REAL &cminus,const REAL v02,const REAL u0,\n const REAL vi,const REAL lapse,const REAL shifti,\n const REAL gammadet,const REAL gupii) {\n const REAL u0_SQUARED=u0*u0;\n const REAL ONE_OVER_LAPSE_SQUARED = 1.0/(lapse*lapse);\n // sqrtgamma = psi6 -> psim4 = gammadet^(-1.0/3.0)\n const REAL psim4 = pow(gammadet,-1.0/3.0);\n //Find cplus, cminus:\n const REAL a = u0_SQUARED * (1.0-v02) + v02*ONE_OVER_LAPSE_SQUARED;\n const REAL b = 2.0* ( shifti*ONE_OVER_LAPSE_SQUARED * v02 - u0_SQUARED * vi * (1.0-v02) );\n const REAL c = u0_SQUARED*vi*vi * (1.0-v02) - v02 * ( gupii -\n shifti*shifti*ONE_OVER_LAPSE_SQUARED);\n REAL detm = b*b - 4.0*a*c;\n //ORIGINAL LINE OF CODE:\n //if(detm < 0.0) detm = 0.0;\n //New line of code (without the if() statement) has the same effect:\n detm = sqrt(0.5*(detm + fabs(detm))); /* Based on very nice suggestion from Roland Haas */\n \n cplus = 0.5*(detm-b)/a;\n cminus = -0.5*(detm+b)/a;\n if (cplus < cminus) {\n const REAL cp = cminus;\n cminus = cplus;\n cplus = cp;\n }\n}\n```\nComments documenting this have been excised for brevity, but are reproduced in $\\LaTeX$ [below](#derive_speed).\n\nWe could use this code directly, but there's substantial improvement we can make by changing the code into a NRPyfied form. Note the `if` statement; NRPy+ does not know how to handle these, so we must eliminate it if we want to leverage NRPy+'s full power. (Calls to `fabs()` are also cheaper than `if` statements.) This can be done if we rewrite this, taking inspiration from the other eliminated `if` statement documented in the above code block:\n```c\n cp = 0.5*(detm-b)/a;\n cm = -0.5*(detm+b)/a;\n cplus = 0.5*(cp+cm+fabs(cp-cm));\n cminus = 0.5*(cp+cm-fabs(cp-cm));\n```\nThis can be simplified further, by substituting `cp` and `cm` into the below equations and eliminating terms as appropriate. First note that `cp+cm = -b/a` and that `cp-cm = detm/a`. Thus,\n```c\n cplus = 0.5*(-b/a + fabs(detm/a));\n cminus = 0.5*(-b/a - fabs(detm/a));\n```\nThis fulfills the original purpose of the `if` statement in the original code because we have guaranteed that $c_+ \\geq c_-$.\n\nThis leaves us with an expression that can be much more easily NRPyfied. So, we will rewrite the following in NRPy+, making only minimal changes to be proper Python. However, it turns out that we can make this even simpler. In GRFFE, $v_0^2$ is guaranteed to be exactly one. 
In GRMHD, this speed was calculated as $$v_{0}^{2} = v_{\\rm A}^{2} + c_{\\rm s}^{2}\\left(1-v_{\\rm A}^{2}\\right),$$ where the Alfvén speed $v_{\\rm A}^{2}$ $$v_{\\rm A}^{2} = \\frac{b^{2}}{\\rho_{b}h + b^{2}}.$$ So, we can see that when the density $\\rho_b$ goes to zero, $v_{0}^{2} = v_{\\rm A}^{2} = 1$. Then \n\\begin{align}\na &= (u^0)^2 (1-v_0^2) + v_0^2/\\alpha^2 \\\\\n&= 1/\\alpha^2 \\\\\nb &= 2 \\left(\\beta^i v_0^2 / \\alpha^2 - (u^0)^2 v^i (1-v_0^2)\\right) \\\\\n&= 2 \\beta^i / \\alpha^2 \\\\\nc &= (u^0)^2 (v^i)^2 (1-v_0^2) - v_0^2 \\left(\\gamma^{ii} - (\\beta^i)^2/\\alpha^2\\right) \\\\\n&= -\\gamma^{ii} + (\\beta^i)^2/\\alpha^2,\n\\end{align}\nare simplifications that should save us some time; we can see that $a \\geq 0$ is guaranteed. Note that we also force `detm` to be positive. Thus, `detm/a` is guaranteed to be positive itself, rendering the calls to `nrpyAbs()` superfluous. Furthermore, we eliminate any dependence on the Valencia 3-velocity and the time compoenent of the four-velocity, $u^0$. This leaves us free to solve the quadratic in the familiar way: $$c_\\pm = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a}$$.",
"_____no_output_____"
]
],
[
[
"# We'll write this as a function so that we can calculate the expressions on-demand for any choice of i\ndef find_cp_cm(lapse,shifti,gammaUUii):\n # Inputs: u0,vi,lapse,shift,gammadet,gupii\n # Outputs: cplus,cminus\n\n # a = 1/(alpha^2)\n a = sp.sympify(1)/(lapse*lapse)\n # b = 2 beta^i / alpha^2\n b = sp.sympify(2) * shifti /(lapse*lapse)\n # c = -g^{ii} + (beta^i)^2 / alpha^2\n c = - gammaUUii + shifti*shifti/(lapse*lapse)\n\n # Now, we are free to solve the quadratic equation as usual. We take care to avoid passing a\n # negative value to the sqrt function.\n detm = b*b - sp.sympify(4)*a*c\n\n import Min_Max_and_Piecewise_Expressions as noif\n detm = sp.sqrt(noif.max_noif(sp.sympify(0),detm))\n global cplus,cminus\n cplus = sp.Rational(1,2)*(-b/a + detm/a)\n cminus = sp.Rational(1,2)*(-b/a - detm/a)",
"_____no_output_____"
]
],
[
[
"In flat spacetime, where $\\alpha=1$, $\\beta^i=0$, and $\\gamma^{ij} = \\delta^{ij}$, $c_+ > 0$ and $c_- < 0$. For the HLLE solver, we will need both `cmax` and `cmin` to be positive; we also want to choose the speed that is larger in magnitude because overestimating the characteristic speeds will help damp unwanted oscillations. (However, in GRFFE, we only get one $c_+$ and one $c_-$, so we only need to fix the signs here.) Hence, the following function. \n\nWe will now write a function in NRPy+ similar to the one used in the old `GiRaFFE`, allowing us to generate the expressions with less need to copy-and-paste code; the key difference is that this one will be in Python, and generate optimized C code integrated into the rest of the operations. Notice that since we eliminated the dependence on velocities, none of the input quantities are different on either side of the face. So, this function won't really do much besides guarantee that `cmax` and `cmin` are positive, but we'll leave the machinery here since it is likely to be a useful guide to somebody who wants to something similar. The only modifications we'll make are those necessary to eliminate calls to `fabs(0)` in the C code. We use the same technique as above to replace the `if` statements inherent to the `MAX()` and `MIN()` functions. ",
"_____no_output_____"
]
],
[
[
"# We'll write this as a function, and call it within HLLE_solver, below.\ndef find_cmax_cmin(field_comp,gamma_faceDD,beta_faceU,alpha_face):\n # Inputs: flux direction field_comp, Inverse metric gamma_faceUU, shift beta_faceU,\n # lapse alpha_face, metric determinant gammadet_face\n # Outputs: maximum and minimum characteristic speeds cmax and cmin\n # First, we need to find the characteristic speeds on each face\n gamma_faceUU,unusedgammaDET = ixp.generic_matrix_inverter3x3(gamma_faceDD)\n # Original needed for GRMHD\n# find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])\n# cpr = cplus\n# cmr = cminus\n# find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])\n# cpl = cplus\n# cml = cminus\n find_cp_cm(alpha_face,beta_faceU[field_comp],gamma_faceUU[field_comp][field_comp])\n cp = cplus\n cm = cminus\n\n # The following algorithms have been verified with random floats:\n\n global cmax,cmin\n # Now, we need to set cmax to the larger of cpr,cpl, and 0\n\n import Min_Max_and_Piecewise_Expressions as noif\n cmax = noif.max_noif(cp,sp.sympify(0))\n\n # And then, set cmin to the smaller of cmr,cml, and 0\n cmin = -noif.min_noif(cm,sp.sympify(0))",
"_____no_output_____"
]
],
[
[
"<a id='fluxes'></a>\n\n## Step 2.b: Compute the HLLE fluxes \\[Back to [top](#toc)\\]\n$$\\label{fluxes}$$\n\nHere, we we calculate the flux and state vectors for the electric field. The flux vector is here given as \n$$\n[F^i(B^j)]_k = \\sqrt{\\gamma} (v^i B^j - v^j B^i).\n$$\nHere, $v^i$ is the drift velocity and $B^i$ is the magnetic field.\nThis can be easily handled for an input flux direction $i$ with\n$$\n[F^j(B^k)]_i = \\epsilon_{ijk} v^j B^k,\n$$\nwhere $\\epsilon_{ijk} = \\sqrt{\\gamma} [ijk]$ and $[ijk]$ is the Levi-Civita symbol.\n\nThe state vector is simply the magnetic field $B^j$.",
"_____no_output_____"
]
],
[
[
"def calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gammaDD,betaU,alpha,ValenciavU,BU):\n # Define Levi-Civita symbol\n def define_LeviCivitaSymbol_rank3(DIM=-1):\n if DIM == -1:\n DIM = par.parval_from_str(\"DIM\")\n\n LeviCivitaSymbol = ixp.zerorank3()\n\n for i in range(DIM):\n for j in range(DIM):\n for k in range(DIM):\n # From https://codegolf.stackexchange.com/questions/160359/levi-civita-symbol :\n LeviCivitaSymbol[i][j][k] = (i - j) * (j - k) * (k - i) * sp.Rational(1,2)\n return LeviCivitaSymbol\n GRHD.compute_sqrtgammaDET(gammaDD)\n # Here, we import the Levi-Civita tensor and compute the tensor with lower indices\n LeviCivitaDDD = define_LeviCivitaSymbol_rank3()\n for i in range(3):\n for j in range(3):\n for k in range(3):\n LeviCivitaDDD[i][j][k] *= GRHD.sqrtgammaDET\n\n global U,F\n # Flux F = \\epsilon_{ijk} v^j B^k\n F = sp.sympify(0)\n for j in range(3):\n for k in range(3):\n F += LeviCivitaDDD[field_comp][j][k] * (alpha*ValenciavU[j]-betaU[j]) * BU[k]\n # U = B^i\n U = BU[flux_dirn]",
"_____no_output_____"
]
],
[
[
"Now, we write a standard HLLE solver based on eq. 3.15 in [the HLLE paper](https://epubs.siam.org/doi/pdf/10.1137/1025002),\n$$\nF^{\\rm HLL} = \\frac{c_{\\rm min} F_{\\rm R} + c_{\\rm max} F_{\\rm L} - c_{\\rm min} c_{\\rm max} (U_{\\rm R}-U_{\\rm L})}{c_{\\rm min} + c_{\\rm max}}\n$$",
"_____no_output_____"
]
],
[
[
"def HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul):\n # This solves the Riemann problem for the flux of E_i in one direction\n\n # F^HLL = (c_\\min f_R + c_\\max f_L - c_\\min c_\\max ( st_j_r - st_j_l )) / (c_\\min + c_\\max)\n return (cmin*Fr + cmax*Fl - cmin*cmax*(Ur-Ul) )/(cmax + cmin)",
"_____no_output_____"
]
],
[
[
"Here, we will use the function we just wrote to calculate the flux through a face. We will pass the reconstructed Valencia 3-velocity and magnetic field on either side of an interface to this function (designated as the \"left\" and \"right\" sides) along with the value of the 3-metric, shift vector, and lapse function on the interface. The parameter `flux_dirn` specifies which face through which we are calculating the flux. However, unlike when we used this method to calculate the flux term, the RHS of each component of $A_i$ does not depend on all three of the flux directions. Instead, the flux of one component of the $E_i$ field depends on flux through the faces in the other two directions. This will be handled when we generate the C function, as demonstrated in the example code after this next function.\n\nNote that we allow the user to declare their own gridfunctions if they wish, and default to declaring basic symbols if they are not provided. The default names are chosen to imply interpolation of the metric gridfunctions and reconstruction of the primitives.",
"_____no_output_____"
]
],
[
[
"def calculate_E_i_flux(flux_dirn,alpha_face=None,gamma_faceDD=None,beta_faceU=None,\\\n Valenciav_rU=None,B_rU=None,Valenciav_lU=None,B_lU=None):\n global E_fluxD\n E_fluxD = ixp.zerorank1()\n for field_comp in range(3):\n find_cmax_cmin(field_comp,gamma_faceDD,beta_faceU,alpha_face)\n calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gamma_faceDD,beta_faceU,alpha_face,\\\n Valenciav_rU,B_rU)\n Fr = F\n Ur = U\n calculate_flux_and_state_for_Induction(field_comp,flux_dirn, gamma_faceDD,beta_faceU,alpha_face,\\\n Valenciav_lU,B_lU)\n Fl = F\n Ul = U\n E_fluxD[field_comp] += HLLE_solver(cmax, cmin, Fr, Fl, Ur, Ul)",
"_____no_output_____"
]
],
[
[
"Below, we will write some example code to use the above functions to generate C code for `GiRaFFE_NRPy`. We need to write our own memory reads and writes because we need to add contributions from *both* faces in a given direction, which is expressed in the code as adding contributions from adjacent gridpoints to the RHS, which is not something `FD_outputC` can handle. The `.replace()` function calls adapt these reads and writes to the different directions. Note that, for reconstructions in a given direction, the fluxes are only added to the other two components, as can be seen in the equations we are implementing.\n\\begin{align}\n-E_x(x_i,y_j,z_k) &= \\frac{1}{4} \\left( [F_{\\rm HLL}^y(B^z)]_{x(i,j+1/2,k)}+[F_{\\rm HLL}^y(B^z)]_{x(i,j-1/2,k)}-[F_{\\rm HLL}^z(B^y)]_{x(i,j,k+1/2)}-[F_{\\rm HLL}^z(B^y)]_{x(i,j,k-1/2)} \\right) \\\\\n-E_y(x_i,y_j,z_k) &= \\frac{1}{4} \\left( [F_{\\rm HLL}^z(B^x)]_{y(i,j,k+1/2)}+[F_{\\rm HLL}^z(B^x)]_{y(i,j,k-1/2)}-[F_{\\rm HLL}^x(B^z)]_{y(i+1/2,j,k)}-[F_{\\rm HLL}^x(B^z)]_{y(i-1/2,j,k)} \\right) \\\\\n-E_z(x_i,y_j,z_k) &= \\frac{1}{4} \\left( [F_{\\rm HLL}^x(B^y)]_{z(i+1/2,j,k)}+[F_{\\rm HLL}^x(B^y)]_{z(i-1/2,j,k)}-[F_{\\rm HLL}^y(B^x)]_{z(i,j+1/2,k)}-[F_{\\rm HLL}^y(B^x)]_{z(i,j-1/2,k)} \\right). \\\\\n\\end{align}\nFrom this, we can see that when, for instance, we reconstruct and interpolate in the $x$-direction, we must add only to the $y$- and $z$-components of the electric field.\n\nRecall that when we reconstructed the velocity and magnetic field, we constructed to the $i-1/2$ face, so the data at $i+1/2$ is stored at $i+1$.\n",
"_____no_output_____"
]
],
[
[
"def generate_Afield_flux_function_files(out_dir,subdir,alpha_face,gamma_faceDD,beta_faceU,\\\n Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True):\n if not inputs_provided:\n # declare all variables\n alpha_face = sp.symbols(alpha_face)\n beta_faceU = ixp.declarerank1(\"beta_faceU\")\n gamma_faceDD = ixp.declarerank2(\"gamma_faceDD\",\"sym01\")\n Valenciav_rU = ixp.declarerank1(\"Valenciav_rU\")\n B_rU = ixp.declarerank1(\"B_rU\")\n Valenciav_lU = ixp.declarerank1(\"Valenciav_lU\")\n B_lU = ixp.declarerank1(\"B_lU\")\n\n Memory_Read = \"\"\"const double alpha_face = auxevol_gfs[IDX4S(ALPHA_FACEGF, i0,i1,i2)];\n const double gamma_faceDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF, i0,i1,i2)];\n const double gamma_faceDD01 = auxevol_gfs[IDX4S(GAMMA_FACEDD01GF, i0,i1,i2)];\n const double gamma_faceDD02 = auxevol_gfs[IDX4S(GAMMA_FACEDD02GF, i0,i1,i2)];\n const double gamma_faceDD11 = auxevol_gfs[IDX4S(GAMMA_FACEDD11GF, i0,i1,i2)];\n const double gamma_faceDD12 = auxevol_gfs[IDX4S(GAMMA_FACEDD12GF, i0,i1,i2)];\n const double gamma_faceDD22 = auxevol_gfs[IDX4S(GAMMA_FACEDD22GF, i0,i1,i2)];\n const double beta_faceU0 = auxevol_gfs[IDX4S(BETA_FACEU0GF, i0,i1,i2)];\n const double beta_faceU1 = auxevol_gfs[IDX4S(BETA_FACEU1GF, i0,i1,i2)];\n const double beta_faceU2 = auxevol_gfs[IDX4S(BETA_FACEU2GF, i0,i1,i2)];\n const double Valenciav_rU0 = auxevol_gfs[IDX4S(VALENCIAV_RU0GF, i0,i1,i2)];\n const double Valenciav_rU1 = auxevol_gfs[IDX4S(VALENCIAV_RU1GF, i0,i1,i2)];\n const double Valenciav_rU2 = auxevol_gfs[IDX4S(VALENCIAV_RU2GF, i0,i1,i2)];\n const double B_rU0 = auxevol_gfs[IDX4S(B_RU0GF, i0,i1,i2)];\n const double B_rU1 = auxevol_gfs[IDX4S(B_RU1GF, i0,i1,i2)];\n const double B_rU2 = auxevol_gfs[IDX4S(B_RU2GF, i0,i1,i2)];\n const double Valenciav_lU0 = auxevol_gfs[IDX4S(VALENCIAV_LU0GF, i0,i1,i2)];\n const double Valenciav_lU1 = auxevol_gfs[IDX4S(VALENCIAV_LU1GF, i0,i1,i2)];\n const double Valenciav_lU2 = auxevol_gfs[IDX4S(VALENCIAV_LU2GF, i0,i1,i2)];\n const double B_lU0 = auxevol_gfs[IDX4S(B_LU0GF, i0,i1,i2)];\n const double B_lU1 = auxevol_gfs[IDX4S(B_LU1GF, i0,i1,i2)];\n const double B_lU2 = auxevol_gfs[IDX4S(B_LU2GF, i0,i1,i2)];\n REAL A_rhsD0 = 0; REAL A_rhsD1 = 0; REAL A_rhsD2 = 0;\n \"\"\"\n Memory_Write = \"\"\"rhs_gfs[IDX4S(AD0GF,i0,i1,i2)] += A_rhsD0;\n rhs_gfs[IDX4S(AD1GF,i0,i1,i2)] += A_rhsD1;\n rhs_gfs[IDX4S(AD2GF,i0,i1,i2)] += A_rhsD2;\n \"\"\"\n\n indices = [\"i0\",\"i1\",\"i2\"]\n indicesp1 = [\"i0+1\",\"i1+1\",\"i2+1\"]\n\n for flux_dirn in range(3):\n calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\\\n Valenciav_rU,B_rU,Valenciav_lU,B_lU)\n\n E_field_to_print = [\\\n sp.Rational(1,4)*E_fluxD[(flux_dirn+1)%3],\n sp.Rational(1,4)*E_fluxD[(flux_dirn+2)%3],\n ]\n\n E_field_names = [\\\n \"A_rhsD\"+str((flux_dirn+1)%3),\n \"A_rhsD\"+str((flux_dirn+2)%3),\n ]\n\n desc = \"Calculate the electric flux on the left face in direction \" + str(flux_dirn) + \".\"\n name = \"calculate_E_field_D\" + str(flux_dirn) + \"_right\"\n outCfunction(\n outfile = os.path.join(out_dir,subdir,name+\".h\"), desc=desc, name=name,\n params =\"const paramstruct *params,const REAL *auxevol_gfs,REAL *rhs_gfs\",\n body = Memory_Read \\\n +outputC(E_field_to_print,E_field_names,\"returnstring\",params=\"outCverbose=False\").replace(\"IDX4\",\"IDX4S\")\\\n +Memory_Write,\n loopopts =\"InteriorPoints\",\n rel_path_for_Cparams=os.path.join(\"../\"))\n\n desc = \"Calculate the electric flux on the left face in direction \" + str(flux_dirn) + \".\"\n name = \"calculate_E_field_D\" + str(flux_dirn) 
+ \"_left\"\n outCfunction(\n outfile = os.path.join(out_dir,subdir,name+\".h\"), desc=desc, name=name,\n params =\"const paramstruct *params,const REAL *auxevol_gfs,REAL *rhs_gfs\",\n body = Memory_Read.replace(indices[flux_dirn],indicesp1[flux_dirn]) \\\n +outputC(E_field_to_print,E_field_names,\"returnstring\",params=\"outCverbose=False\").replace(\"IDX4\",\"IDX4S\")\\\n +Memory_Write,\n loopopts =\"InteriorPoints\",\n rel_path_for_Cparams=os.path.join(\"../\"))\n\n",
"_____no_output_____"
]
],
[
[
"<a id='code_validation'></a>\n\n# Step 3: Code Validation against `GiRaFFE_NRPy.Induction_Equation` NRPy+ Module \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\n\nHere, as a code validation check, we verify agreement in the SymPy expressions for the $\\texttt{GiRaFFE}$ evolution equations and auxiliary quantities we intend to use between\n1. this tutorial and \n2. the NRPy+ [GiRaFFE_NRPy.Induction_Equation](../../edit/in_progress/GiRaFFE_NRPy/Induction_Equation.py) module.\n\nBelow are the gridfunction registrations we will need for testing. We will pass these to the above functions to self-validate the module that corresponds with this tutorial.",
"_____no_output_____"
]
],
[
[
"all_passed=True\ndef comp_func(expr1,expr2,basename,prefixname2=\"C2P_P2C.\"):\n if str(expr1-expr2)!=\"0\":\n print(basename+\" - \"+prefixname2+basename+\" = \"+ str(expr1-expr2))\n all_passed=False\n\ndef gfnm(basename,idx1,idx2=None,idx3=None):\n if idx2 is None:\n return basename+\"[\"+str(idx1)+\"]\"\n if idx3 is None:\n return basename+\"[\"+str(idx1)+\"][\"+str(idx2)+\"]\"\n return basename+\"[\"+str(idx1)+\"][\"+str(idx2)+\"][\"+str(idx3)+\"]\"\n\n# These are the standard gridfunctions we've used before.\n#ValenciavU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"ValenciavU\",DIM=3)\n#gammaDD = ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\",\"gammaDD\",\"sym01\")\n#betaU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"betaU\")\n#alpha = gri.register_gridfunctions(\"AUXEVOL\",[\"alpha\"])\n#AD = ixp.register_gridfunctions_for_single_rank1(\"EVOL\",\"AD\",DIM=3)\n#BU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"BU\",DIM=3)\n\n# We will pass values of the gridfunction on the cell faces into the function. This requires us\n# to declare them as C parameters in NRPy+. We will denote this with the _face infix/suffix.\nalpha_face = gri.register_gridfunctions(\"AUXEVOL\",\"alpha_face\")\ngamma_faceDD = ixp.register_gridfunctions_for_single_rank2(\"AUXEVOL\",\"gamma_faceDD\",\"sym01\")\nbeta_faceU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"beta_faceU\")\n\n# We'll need some more gridfunctions, now, to represent the reconstructions of BU and ValenciavU\n# on the right and left faces\nValenciav_rU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"Valenciav_rU\",DIM=3)\nB_rU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"B_rU\",DIM=3)\nValenciav_lU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"Valenciav_lU\",DIM=3)\nB_lU = ixp.register_gridfunctions_for_single_rank1(\"AUXEVOL\",\"B_lU\",DIM=3)\n\nimport GiRaFFE_NRPy.Afield_flux as Af\n\nexpr_list = []\nexprcheck_list = []\nnamecheck_list = []\n\nfor flux_dirn in range(3):\n calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\\\n Valenciav_rU,B_rU,Valenciav_lU,B_lU)\n Af.calculate_E_i_flux(flux_dirn,alpha_face,gamma_faceDD,beta_faceU,\\\n Valenciav_rU,B_rU,Valenciav_lU,B_lU)\n namecheck_list.extend([gfnm(\"E_fluxD\",flux_dirn)])\n exprcheck_list.extend([Af.E_fluxD[flux_dirn]])\n expr_list.extend([E_fluxD[flux_dirn]])\n\nfor mom_comp in range(len(expr_list)):\n comp_func(expr_list[mom_comp],exprcheck_list[mom_comp],namecheck_list[mom_comp])\n\nimport sys\nif all_passed:\n print(\"ALL TESTS PASSED!\")\nelse:\n print(\"ERROR: AT LEAST ONE TEST DID NOT PASS\")\n sys.exit(1)",
"ALL TESTS PASSED!\n"
]
],
[
[
"We will also check the output C code to make sure it matches what is produced by the python module.",
"_____no_output_____"
]
],
[
[
"import difflib\nimport sys\n\nsubdir = os.path.join(\"RHSs\")\n\nout_dir = os.path.join(\"GiRaFFE_standalone_Ccodes\")\ncmd.mkdir(out_dir)\ncmd.mkdir(os.path.join(out_dir,subdir))\nvaldir = os.path.join(\"GiRaFFE_Ccodes_validation\")\ncmd.mkdir(valdir)\ncmd.mkdir(os.path.join(valdir,subdir))\n\ngenerate_Afield_flux_function_files(out_dir,subdir,alpha_face,gamma_faceDD,beta_faceU,\\\n Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True)\nAf.generate_Afield_flux_function_files(valdir,subdir,alpha_face,gamma_faceDD,beta_faceU,\\\n Valenciav_rU,B_rU,Valenciav_lU,B_lU,inputs_provided=True)\n\nprint(\"Printing difference between original C code and this code...\")\n# Open the files to compare\nfiles = [\"RHSs/calculate_E_field_D0_right.h\",\n \"RHSs/calculate_E_field_D0_left.h\",\n \"RHSs/calculate_E_field_D1_right.h\",\n \"RHSs/calculate_E_field_D1_left.h\",\n \"RHSs/calculate_E_field_D2_right.h\",\n \"RHSs/calculate_E_field_D2_left.h\"]\n\nfor file in files:\n print(\"Checking file \" + file)\n with open(os.path.join(valdir,file)) as file1, open(os.path.join(out_dir,file)) as file2:\n # Read the lines of each file\n file1_lines = file1.readlines()\n file2_lines = file2.readlines()\n num_diffs = 0\n for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir,file), tofile=os.path.join(out_dir,file)):\n sys.stdout.writelines(line)\n num_diffs = num_diffs + 1\n if num_diffs == 0:\n print(\"No difference. TEST PASSED!\")\n else:\n print(\"ERROR: Disagreement found with .py file. See differences above.\")",
"Output C function calculate_E_field_D0_right() to file GiRaFFE_standalone_Ccodes\\RHSs\\calculate_E_field_D0_right.h\nOutput C function calculate_E_field_D0_left() to file GiRaFFE_standalone_Ccodes\\RHSs\\calculate_E_field_D0_left.h\nOutput C function calculate_E_field_D1_right() to file GiRaFFE_standalone_Ccodes\\RHSs\\calculate_E_field_D1_right.h\nOutput C function calculate_E_field_D1_left() to file GiRaFFE_standalone_Ccodes\\RHSs\\calculate_E_field_D1_left.h\nOutput C function calculate_E_field_D2_right() to file GiRaFFE_standalone_Ccodes\\RHSs\\calculate_E_field_D2_right.h\nOutput C function calculate_E_field_D2_left() to file GiRaFFE_standalone_Ccodes\\RHSs\\calculate_E_field_D2_left.h\nOutput C function calculate_E_field_D0_right() to file GiRaFFE_Ccodes_validation\\RHSs\\calculate_E_field_D0_right.h\nOutput C function calculate_E_field_D0_left() to file GiRaFFE_Ccodes_validation\\RHSs\\calculate_E_field_D0_left.h\nOutput C function calculate_E_field_D1_right() to file GiRaFFE_Ccodes_validation\\RHSs\\calculate_E_field_D1_right.h\nOutput C function calculate_E_field_D1_left() to file GiRaFFE_Ccodes_validation\\RHSs\\calculate_E_field_D1_left.h\nOutput C function calculate_E_field_D2_right() to file GiRaFFE_Ccodes_validation\\RHSs\\calculate_E_field_D2_right.h\nOutput C function calculate_E_field_D2_left() to file GiRaFFE_Ccodes_validation\\RHSs\\calculate_E_field_D2_left.h\nPrinting difference between original C code and this code...\nChecking file RHSs/calculate_E_field_D0_right.h\nNo difference. TEST PASSED!\nChecking file RHSs/calculate_E_field_D0_left.h\nNo difference. TEST PASSED!\nChecking file RHSs/calculate_E_field_D1_right.h\nNo difference. TEST PASSED!\nChecking file RHSs/calculate_E_field_D1_left.h\nNo difference. TEST PASSED!\nChecking file RHSs/calculate_E_field_D2_right.h\nNo difference. TEST PASSED!\nChecking file RHSs/calculate_E_field_D2_left.h\nNo difference. TEST PASSED!\n"
]
],
[
[
"<a id='latex_pdf_output'></a>\n\n# Step 4: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-GiRaFFE_NRPy-Induction_Equation.pdf](Tutorial-GiRaFFE_NRPy-Induction_Equation.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)",
"_____no_output_____"
]
],
[
[
"import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface\ncmd.output_Jupyter_notebook_to_LaTeXed_PDF(\"Tutorial-GiRaFFE_NRPy-Afield_flux\")",
"Notebook output to PDF is only supported on Linux systems, with pdflatex installed.\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb1345073f8489924548c35d0148cb97996f40
| 100,727 |
ipynb
|
Jupyter Notebook
|
1 Supervised Learning/Classification/Decision Tree Breast Cancer DS.ipynb
|
HJJ256/Machine-Learning-2018-present-
|
f01d2f13025b02792612ee42e2e58f13f461b999
|
[
"MIT"
] | 1 |
2020-09-26T20:02:57.000Z
|
2020-09-26T20:02:57.000Z
|
1 Supervised Learning/Classification/Decision Tree Breast Cancer DS.ipynb
|
HJJ256/Machine-Learning-2018-present-
|
f01d2f13025b02792612ee42e2e58f13f461b999
|
[
"MIT"
] | null | null | null |
1 Supervised Learning/Classification/Decision Tree Breast Cancer DS.ipynb
|
HJJ256/Machine-Learning-2018-present-
|
f01d2f13025b02792612ee42e2e58f13f461b999
|
[
"MIT"
] | null | null | null | 44.907267 | 96 | 0.384971 |
[
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport math\nfrom matplotlib import style\nfrom collections import Counter\nstyle.use('fivethirtyeight') #Shows Grid\nimport pandas as pd\nimport random",
"_____no_output_____"
],
[
"df = pd.read_csv('Breast-Cancer.csv',na_values = ['?'])\nmeans = df.mean().to_dict()\ndf.drop(['id'],1,inplace=True)\nheader = list(df)\ndf.fillna(df.mean(),inplace = True)\nfull_data = df.astype(float).values.tolist()\nfull_data",
"_____no_output_____"
],
[
"test_size1 = 0.5\ntrain_data1 = full_data[:-int(test_size1*len(full_data))]\ntest_data1 = full_data[-int(test_size1*len(full_data)):]\nlen(test_data1)",
"_____no_output_____"
],
[
"test_size2 = 0.1\ntrain_data2 = full_data[:-int(test_size2*len(full_data))]\ntest_data2 = full_data[-int(test_size2*len(full_data)):]\nlen(test_data2)",
"_____no_output_____"
],
[
"test_size3 = 0.3\ntrain_data3 = full_data[:-int(test_size3*len(full_data))]\ntest_data3 = full_data[-int(test_size3*len(full_data)):]\nlen(test_data3)",
"_____no_output_____"
],
[
"def unique_vals(Data,col):\n return set([row[col] for row in Data])",
"_____no_output_____"
],
[
"def class_counts(Data):\n counts = {}\n for row in Data:\n label = row[-1]\n if label not in counts:\n counts[label] = 0\n counts[label] += 1\n return counts",
"_____no_output_____"
],
[
"class Question:\n def __init__(self,column,value):\n self.column = column\n self.value = value\n def match(self,example):\n val = example[self.column]\n return val == self.value\n def __repr__(self):\n return \"Is %s %s %s?\" %(\n header[self.column],\"==\",str(self.value))",
"_____no_output_____"
],
[
"def partition(Data,question):\n true_rows,false_rows = [],[]\n for row in Data:\n if(question.match(row)):\n true_rows.append(row)\n else:\n false_rows.append(row)\n return true_rows,false_rows",
"_____no_output_____"
],
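[
"# Minimal usage sketch of Question and partition (illustrative values, not part of the original analysis).\n# Column 0 is assumed to render as 'x1' given this dataset's header, and 5.0 is an arbitrary test value.\nq = Question(0, 5.0)\nprint(q)  # e.g. 'Is x1 == 5.0?'\ntrue_rows, false_rows = partition(full_data, q)\nprint(len(true_rows), len(false_rows))  # split sizes depend on the CSV contents",
"_____no_output_____"
],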
[
"def gini(Data):\n counts = class_counts(Data)\n impurity = 1\n for lbl in counts:\n prob_of_lbl = counts[lbl]/float(len(Data))\n impurity-=prob_of_lbl**2\n return impurity",
"_____no_output_____"
],
[
"def info_gain(left,right,current_uncertainty):\n p = float(len(left))/(len(left)+len(right))\n return current_uncertainty - p*gini(left) - (1-p)*gini(right)",
"_____no_output_____"
],
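[
"# Tiny worked example for gini and info_gain on made-up rows (values invented purely for illustration).\n# Two rows of class 2.0 and two of class 4.0: impurity = 1 - 0.5**2 - 0.5**2 = 0.5,\n# and a hand-made perfect split recovers all of that impurity.\ntoy = [[1.0, 2.0], [1.0, 2.0], [9.0, 4.0], [9.0, 4.0]]  # last column is the class label\nprint(gini(toy))  # 0.5\nleft, right = toy[:2], toy[2:]  # a perfect split by hand\nprint(info_gain(left, right, gini(toy)))  # 0.5",
"_____no_output_____"
],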
[
"def find_best_split(Data):\n best_gain = 0\n best_question = None\n current_uncertainty = gini(Data)\n n_features = len(Data[0]) - 1\n for col in range(n_features):\n values = unique_vals(Data,col)\n for val in values:\n question = Question(col,val)\n true_rows,false_rows = partition(Data,question)\n if(len(true_rows) == 0 or len(false_rows)==0):\n continue\n gain = info_gain(true_rows,false_rows,current_uncertainty)\n if gain>=best_gain:\n best_gain, best_question = gain , question\n return best_gain,best_question",
"_____no_output_____"
],
[
"class Leaf:\n def __init__(self,Data):\n self.predictions = class_counts(Data)",
"_____no_output_____"
],
[
"class Decision_Node:\n def __init__(self, question, true_branch,false_branch):\n self.question = question\n self.true_branch = true_branch\n self.false_branch = false_branch\n #print(self.question)",
"_____no_output_____"
],
[
"def build_tree(Data,i=0):\n gain, question = find_best_split(Data)\n \n if gain == 0:\n return Leaf(Data)\n true_rows , false_rows = partition(Data,question)\n true_branch = build_tree(true_rows,i)\n false_branch = build_tree(false_rows,i)\n return Decision_Node(question,true_branch,false_branch)",
"_____no_output_____"
],
[
"def print_tree(node,spacing=\"\"):\n if isinstance(node, Leaf):\n print(spacing + \"Predict\",node.predictions)\n return\n print(spacing+str(node.question))\n print(spacing + \"--> True:\")\n print_tree(node.true_branch , spacing + \" \")\n \n print(spacing + \"--> False:\")\n print_tree(node.false_branch , spacing + \" \")\n ",
"_____no_output_____"
],
[
"def print_leaf(counts):\n total = sum(counts.values())*1.0\n probs = {}\n for lbl in counts.keys():\n probs[lbl] = str(int(counts[lbl]/total * 100)) + \"%\"\n return probs",
"_____no_output_____"
],
[
"def classify(row,node):\n if isinstance(node,Leaf):\n return node.predictions\n if node.question.match(row):\n return classify(row,node.true_branch)\n else:\n return classify(row,node.false_branch)",
"_____no_output_____"
],
[
"my_tree = build_tree(train_data1)\nprint_tree(my_tree)",
"Is x2 == 1.0?\n--> True:\n Is x6 == 10.0?\n --> True:\n Is x8 == 1.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x8 == 10.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Is x6 == 5.0?\n --> True:\n Is x7 == 2.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Predict {2.0: 4}\n --> False:\n Predict {2.0: 155}\n--> False:\n Is x3 == 1.0?\n --> True:\n Predict {2.0: 9}\n --> False:\n Is x3 == 2.0?\n --> True:\n Is x9 == 1.0?\n --> True:\n Predict {2.0: 8}\n --> False:\n Predict {4.0: 3}\n --> False:\n Is x8 == 2.0?\n --> True:\n Is x9 == 1.0?\n --> True:\n Predict {2.0: 3}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x6 == 3.5446559297218156?\n --> True:\n Is x1 == 8.0?\n --> True:\n Predict {4.0: 2}\n --> False:\n Predict {2.0: 3}\n --> False:\n Is x1 == 4.0?\n --> True:\n Is x8 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x4 == 5.0?\n --> True:\n Is x8 == 5.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x6 == 7.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 8}\n --> False:\n Is x8 == 7.0?\n --> True:\n Is x7 == 4.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 8}\n --> False:\n Is x1 == 1.0?\n --> True:\n Is x8 == 1.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x2 == 7.0?\n --> True:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 6}\n --> False:\n Is x6 == 1.0?\n --> True:\n Is x5 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 10}\n --> False:\n Predict {4.0: 112}\n"
],
[
"def calc_accuracy(test_data, my_tree):\n correct,total = 0,0\n for row in test_data:\n if(row[-1] in print_leaf(classify(row,my_tree)).keys()):\n correct += 1\n total += 1\n return correct/total",
"_____no_output_____"
],
[
"for row in test_data1:\n print(\"Actual: %s. Predicted: %s\" % (row[-1],print_leaf(classify(row,my_tree))))\naccuracy = calc_accuracy(test_data1,my_tree)",
"Actual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. 
Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. 
Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. 
Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\n"
],
[
"print(accuracy,\"accuracy for 50% train data and 50% test data\")",
"0.9455587392550143 accuracy for 50% train data and 50% test data\n"
],
[
"my_tree2 = build_tree(train_data2)\nprint_tree(my_tree2)",
"Is x2 == 1.0?\n--> True:\n Is x6 == 10.0?\n --> True:\n Is x8 == 1.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x8 == 10.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Is x6 == 5.0?\n --> True:\n Is x5 == 1.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Predict {2.0: 7}\n --> False:\n Predict {2.0: 324}\n--> False:\n Is x6 == 1.0?\n --> True:\n Is x3 == 10.0?\n --> True:\n Predict {4.0: 6}\n --> False:\n Is x2 == 4.0?\n --> True:\n Is x5 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 3}\n --> False:\n Is x5 == 10.0?\n --> True:\n Predict {4.0: 2}\n --> False:\n Is x9 == 4.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Is x7 == 10.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Predict {2.0: 43}\n --> False:\n Is x3 == 2.0?\n --> True:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 5}\n --> False:\n Is x7 == 2.0?\n --> True:\n Predict {2.0: 2}\n --> False:\n Is x4 == 4.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 6}\n --> False:\n Is x3 == 1.0?\n --> True:\n Predict {2.0: 3}\n --> False:\n Is x6 == 3.5446559297218156?\n --> True:\n Is x1 == 8.0?\n --> True:\n Predict {4.0: 2}\n --> False:\n Predict {2.0: 3}\n --> False:\n Is x8 == 2.0?\n --> True:\n Is x7 == 4.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x6 == 2.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x5 == 7.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 5}\n --> False:\n Is x6 == 10.0?\n --> True:\n Is x8 == 5.0?\n --> True:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 8}\n --> False:\n Predict {4.0: 111}\n --> False:\n Is x2 == 4.0?\n --> True:\n Is x1 == 4.0?\n --> True:\n Predict {2.0: 2}\n --> False:\n Is x8 == 8.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x5 == 7.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 8}\n --> False:\n Is x7 == 6.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x8 == 7.0?\n --> True:\n Is x7 == 7.0?\n --> True:\n Predict {4.0: 4}\n --> False:\n Is x9 == 1.0?\n --> True:\n Predict {2.0: 2}\n --> False:\n Predict {4.0: 1}\n --> False:\n Is x2 == 7.0?\n --> True:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 5}\n --> False:\n Predict {4.0: 61}\n"
],
[
"for row in test_data2:\n print(\"Actual: %s. Predicted: %s\" % (row[-1],print_leaf(classify(row,my_tree2))))\naccuracy2 = calc_accuracy(test_data2,my_tree2)",
"Actual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\n"
],
[
"print(accuracy2,\"accuracy for 90% train data and 10% test data\")",
"0.9565217391304348 accuracy for 90% train data and 10% test data\n"
],
[
"my_tree3 = build_tree(train_data3)\nprint_tree(my_tree3)",
"Is x2 == 1.0?\n--> True:\n Is x6 == 10.0?\n --> True:\n Is x8 == 1.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x8 == 10.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Is x6 == 5.0?\n --> True:\n Is x7 == 2.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Predict {2.0: 5}\n --> False:\n Predict {2.0: 230}\n--> False:\n Is x6 == 1.0?\n --> True:\n Is x3 == 10.0?\n --> True:\n Predict {4.0: 6}\n --> False:\n Is x3 == 4.0?\n --> True:\n Is x8 == 1.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 3}\n --> False:\n Is x5 == 10.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Is x5 == 6.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Predict {2.0: 34}\n --> False:\n Is x3 == 2.0?\n --> True:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 5}\n --> False:\n Is x9 == 1.0?\n --> True:\n Is x7 == 4.0?\n --> True:\n Predict {4.0: 1}\n --> False:\n Predict {2.0: 2}\n --> False:\n Predict {4.0: 4}\n --> False:\n Is x3 == 1.0?\n --> True:\n Predict {2.0: 3}\n --> False:\n Is x6 == 3.5446559297218156?\n --> True:\n Is x1 == 8.0?\n --> True:\n Predict {4.0: 2}\n --> False:\n Predict {2.0: 3}\n --> False:\n Is x8 == 2.0?\n --> True:\n Is x9 == 1.0?\n --> True:\n Is x5 == 10.0?\n --> True:\n Predict {4.0: 2}\n --> False:\n Predict {2.0: 3}\n --> False:\n Predict {4.0: 2}\n --> False:\n Is x1 == 4.0?\n --> True:\n Is x3 == 4.0?\n --> True:\n Predict {2.0: 2}\n --> False:\n Predict {4.0: 4}\n --> False:\n Is x8 == 7.0?\n --> True:\n Is x7 == 4.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 10}\n --> False:\n Is x4 == 5.0?\n --> True:\n Is x8 == 5.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Is x6 == 7.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 9}\n --> False:\n Is x5 == 7.0?\n --> True:\n Is x8 == 6.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 4}\n --> False:\n Is x2 == 7.0?\n --> True:\n Is x7 == 3.0?\n --> True:\n Predict {2.0: 1}\n --> False:\n Predict {4.0: 7}\n --> False:\n Predict {4.0: 135}\n"
],
[
"for row in test_data3:\n print(\"Actual: %s. Predicted: %s\" % (row[-1],print_leaf(classify(row,my_tree3))))\naccuracy3 = calc_accuracy(test_data3,my_tree3)",
"Actual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. 
Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. 
Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 2.0. Predicted: {2.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\nActual: 4.0. Predicted: {4.0: '100%'}\n"
],
[
"print(accuracy3,\"accuracy for 70% train data and 30% test data\")",
"0.9617224880382775 accuracy for 70% train data and 30% test data\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adb1827552c8640f2e175187737f80fa9320201
| 12,916 |
ipynb
|
Jupyter Notebook
|
notebooks/misc/osmnx_streets/official_examples/04-simplify-graph-consolidate-nodes.ipynb
|
dylanhogg/jupyter-experiments
|
c1b3a6b50eaeeb5be73bb30fb69cbc813c61ed26
|
[
"CC-BY-4.0"
] | null | null | null |
notebooks/misc/osmnx_streets/official_examples/04-simplify-graph-consolidate-nodes.ipynb
|
dylanhogg/jupyter-experiments
|
c1b3a6b50eaeeb5be73bb30fb69cbc813c61ed26
|
[
"CC-BY-4.0"
] | null | null | null |
notebooks/misc/osmnx_streets/official_examples/04-simplify-graph-consolidate-nodes.ipynb
|
dylanhogg/jupyter-experiments
|
c1b3a6b50eaeeb5be73bb30fb69cbc813c61ed26
|
[
"CC-BY-4.0"
] | null | null | null | 39.619632 | 1,002 | 0.65686 |
[
[
[
"# Simplify network topology and consolidate intersections\n\nAuthor: [Geoff Boeing](https://geoffboeing.com/)\n\n - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)\n - [GitHub repo](https://github.com/gboeing/osmnx)\n - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)\n - [Documentation](https://osmnx.readthedocs.io/en/stable/)\n - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport osmnx as ox\n\n%matplotlib inline\nox.__version__",
"_____no_output_____"
]
],
[
[
"## 1. Complex intersection consolidation\n\nMany real-world street networks feature complex intersections and traffic circles, resulting in a cluster of graph nodes where there is really just one true intersection, as we would think of it in transportation or urban design. Similarly, divided roads are often represented by separate centerline edges: the intersection of two divided roads thus creates 4 nodes, representing where each edge intersects a perpendicular edge, but these 4 nodes represent a single intersection in the real world. Traffic circles similarly create a cluster of nodes where each street's edge intersects the roundabout.\n\nOSMnx can consolidate nearby intersections and optionally rebuild the graph's topology.",
"_____no_output_____"
]
],
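[
[
"Before walking through it step by step, here is a compact sketch of the whole pattern this section uses, with the same point and tolerance as the cells below (the variable names are only illustrative):\n\n```python\nimport osmnx as ox\n\npoint = 37.858495, -122.267468\nG = ox.graph_from_point(point, network_type=\"drive\", dist=500)\nG_proj = ox.project_graph(G)  # project so the tolerance is in meters\nG_cons = ox.consolidate_intersections(G_proj, rebuild_graph=True, tolerance=15, dead_ends=False)\n```",
"_____no_output_____"
]
],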
[
[
"# get a street network and plot it with all edge intersections\npoint = 37.858495, -122.267468\nG = ox.graph_from_point(point, network_type=\"drive\", dist=500)\nfig, ax = ox.plot_graph(G, node_color=\"r\")",
"_____no_output_____"
]
],
[
[
"Notice the complex intersections and traffic circles creating clusters of nodes.\n\nWe'll specify that any nodes with 15 meter buffers of each other in this network are part of the same intersection. Adjust this tolerance based on the street design standards in the community you are examining, and use a projected graph to work in meaningful units like meters. We'll also specify that we do not want dead-ends returned in our list of consolidated intersections.",
"_____no_output_____"
]
],
[
[
"# get a GeoSeries of consolidated intersections\nG_proj = ox.project_graph(G)\nints = ox.consolidate_intersections(G_proj, rebuild_graph=False, tolerance=15, dead_ends=False)\nlen(ints)",
"_____no_output_____"
],
[
"# compare to number of nodes in original graph\nlen(G)",
"_____no_output_____"
]
],
[
[
"Note that these cleaned up intersections give us more accurate intersection counts and densities, but do not alter or integrate with the network's topology.\n\nTo do that, we need to **rebuild the graph**.",
"_____no_output_____"
]
],
[
[
"# consolidate intersections and rebuild graph topology\n# this reconnects edge geometries to the new consolidated nodes\nG2 = ox.consolidate_intersections(G_proj, rebuild_graph=True, tolerance=15, dead_ends=False)\nlen(G2)",
"_____no_output_____"
],
[
"fig, ax = ox.plot_graph(G2, node_color=\"r\")",
"_____no_output_____"
]
],
[
[
"Notice how the traffic circles' many nodes are merged into a new single centroid node, with edge geometries extended to connect to it. Similar consolidation occurs at the intersection of the divided roads.\n\nRunning `consolidate_intersections` with `rebuild_graph=True` may yield somewhat (but not very) different intersection counts/densities compared to `rebuild_graph=False`. The difference lies in that the latter just merges buffered node points that overlap, whereas the former checks the topology of the overlapping node buffers before merging them.\n\nThis prevents topologically remote but spatially proximate nodes from being merged. For example:\n\n - A street intersection may lie directly below a freeway overpass's intersection with an on-ramp. We would not want to merge these together and connnect their edges: they are distinct junctions in the system of roads.\n - In a residential neighborhood, a bollarded street may create a dead-end immediately next to an intersection or traffic circle. We would not want to merge this dead-end with the intersection and connect their edges.\n\nThese examples illustrate (two-dimensional) geometric proximity, but topological remoteness. Accordingly, in some situations we may expect higher intersection counts when using `rebuild_graph=True` because it is more cautious with merging in these cases. The trade-off is that it has higher time complexity than `rebuild_graph=False`.\n\n## 2. Graph simplification\n\nUse simplification to clean-up nodes that are not intersections or dead-ends while retaining the complete edge geometry. OSMnx does this automatically by default when constructing a graph.",
"_____no_output_____"
]
],
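[
[
"A quick way to see that default at work, as a hedged sketch using the same point as the next cells (variable names are illustrative): request the graph once with `simplify=False` and once with the default, then compare node counts.\n\n```python\nloc = (33.299896, -111.831638)\nG_raw = ox.graph_from_point(loc, dist=500, simplify=False)  # keep every OSM node\nG_default = ox.graph_from_point(loc, dist=500)  # simplified automatically by default\nprint(len(G_raw), len(G_default))  # the raw graph has many interstitial, non-intersection nodes\n```",
"_____no_output_____"
]
],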
[
[
"# create a network around some (lat, lng) point and plot it\nlocation_point = (33.299896, -111.831638)\nG = ox.graph_from_point(location_point, dist=500, simplify=False)\nfig, ax = ox.plot_graph(G, node_color=\"r\")",
"_____no_output_____"
],
[
"# show which nodes we'd remove if we simplify it (yellow)\nnc = [\"r\" if ox.simplification._is_endpoint(G, node) else \"y\" for node in G.nodes()]\nfig, ax = ox.plot_graph(G, node_color=nc)",
"_____no_output_____"
],
[
"# simplify the network\nG2 = ox.simplify_graph(G)",
"_____no_output_____"
],
[
"# plot the simplified network and highlight any self-loop edges\nloops = [edge[0] for edge in nx.selfloop_edges(G2)]\nnc = [\"r\" if node in loops else \"y\" for node in G2.nodes()]\nfig, ax = ox.plot_graph(G2, node_color=nc)",
"_____no_output_____"
],
[
"# turn off strict mode and see what nodes we'd remove\nnc = [\"r\" if ox.simplification._is_endpoint(G, node, strict=False) else \"y\" for node in G.nodes()]\nfig, ax = ox.plot_graph(G, node_color=nc)",
"_____no_output_____"
],
[
"# simplify network with strict mode turned off\nG3 = ox.simplify_graph(G.copy(), strict=False)\nfig, ax = ox.plot_graph(G3, node_color=\"r\")",
"_____no_output_____"
]
],
[
[
"## 3. Cleaning up the periphery of the network\n\nThis is related to simplification. OSMnx by default (with clean_periphery parameter equal to True) buffers the area you request by 0.5km, and then retrieves the street network within this larger, buffered area. Then it simplifies the topology so that nodes represent intersections of streets (rather than including all the interstitial OSM nodes). Then it calculates the (undirected) degree of each node in this larger network. Next it truncates this network by the actual area you requested (either by bounding box, or by polygon). Finally it saves a dictionary of node degree values as a graph attribute.\n\nThis has two primary benefits. First, it cleans up stray false edges around the periphery. If clean_periphery=False, peripheral non-intersection nodes within the requested area appear to be cul-de-sacs, as the rest of the edge leading to an intersection outside the area is ignored. If clean_periphery=True, the larger graph is first created, allowing simplification of such edges to their true intersections, allowing their entirety to be pruned after truncating down to the actual requested area. Second, it gives accurate node degrees by both a) counting node neighbors even if they fall outside the retained network (so you don't claim a degree-4 node is degree-2 because only 2 of its neighbors lie within the area), and b) not counting all those stray false edges' terminus nodes as cul-de-sacs that otherwise grossly inflate the count of nodes with degree=1, even though these nodes are really just interstitial nodes in the middle of a chopped-off street segment between intersections.\n\nSee two examples below.",
"_____no_output_____"
]
],
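[
[
"A condensed sketch of the comparison the following cells perform, using the same bbox helper (variable names are illustrative): build the graph once with `clean_periphery=False` and once with the default, then compare node counts.\n\n```python\nbbox = ox.utils_geo.bbox_from_point((45.518698, -122.679964), dist=300)\nnorth, south, east, west = bbox\nG_raw = ox.graph_from_bbox(north, south, east, west, network_type=\"drive\", clean_periphery=False)\nG_clean = ox.graph_from_bbox(north, south, east, west, network_type=\"drive\")  # clean_periphery=True by default\nprint(len(G_raw), len(G_clean))  # the cleaned graph drops the stray peripheral edge stubs\n```",
"_____no_output_____"
]
],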
[
[
"# get some bbox\nbbox = ox.utils_geo.bbox_from_point((45.518698, -122.679964), dist=300)\nnorth, south, east, west = bbox",
"_____no_output_____"
],
[
"G = ox.graph_from_bbox(north, south, east, west, network_type=\"drive\", clean_periphery=False)\nfig, ax = ox.plot_graph(G, node_color=\"r\")",
"_____no_output_____"
],
[
"# the node degree distribution for this graph has many false cul-de-sacs\nk = dict(G.degree())\n{n: list(k.values()).count(n) for n in range(max(k.values()) + 1)}",
"_____no_output_____"
]
],
[
[
"Above, notice all the peripheral stray edge stubs. Below, notice these are cleaned up and that the node degrees are accurate with regards to the wider street network that may extend beyond the limits of the requested area.",
"_____no_output_____"
]
],
[
[
"G = ox.graph_from_bbox(north, south, east, west, network_type=\"drive\")\nfig, ax = ox.plot_graph(G, node_color=\"r\")",
"_____no_output_____"
],
[
"# the streets per node distribution for this cleaned up graph is more accurate\n# dict keys = count of streets emanating from the node (ie, intersections and dead-ends)\n# dict vals = number of nodes with that count\nk = nx.get_node_attributes(G, \"street_count\")\n{n: list(k.values()).count(n) for n in range(max(k.values()) + 1)}",
"_____no_output_____"
]
],
[
[
"A final example. Compare the network below to the ones in the section above. It has the stray peripheral edges cleaned up. Also notice toward the bottom left, two interstitial nodes remain in that east-west street. Why? These are actually intersections, but their (southbound) edges were removed because these edges' next intersections were south of the requested area's boundaries. However, OSMnx correctly kept these nodes in the graph because they are in fact intersections and should be counted in measures of intersection density, etc.",
"_____no_output_____"
]
],
[
[
"location_point = (33.299896, -111.831638)\nG = ox.graph_from_point(location_point, dist=500, simplify=True)\nfig, ax = ox.plot_graph(G, node_color=\"r\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb2fd3c19fe8726617fd5a213803f02a885363
| 41,691 |
ipynb
|
Jupyter Notebook
|
dev/first_sketch/dev-6.ipynb
|
jpcbertoldo/pymdr
|
b9896948a82c104bf7bae30ea69255a08bb39f48
|
[
"MIT"
] | 1 |
2021-02-26T02:26:29.000Z
|
2021-02-26T02:26:29.000Z
|
dev/first_sketch/dev-6.ipynb
|
joaopcbertoldo/pymdr
|
b9896948a82c104bf7bae30ea69255a08bb39f48
|
[
"MIT"
] | 2 |
2020-04-02T14:00:37.000Z
|
2021-03-31T19:43:11.000Z
|
dev/first_sketch/dev-6.ipynb
|
joaopcbertoldo/pymdr
|
b9896948a82c104bf7bae30ea69255a08bb39f48
|
[
"MIT"
] | null | null | null | 41.483582 | 132 | 0.45806 |
[
[
[
"import os\nfrom collections import defaultdict, namedtuple\nfrom copy import deepcopy\nfrom pprint import pprint\n\nimport lxml\nimport lxml.html\nimport lxml.etree\nfrom graphviz import Digraph\nfrom similarity.normalized_levenshtein import NormalizedLevenshtein\n\n\nnormalized_levenshtein = NormalizedLevenshtein()\nTAG_NAME_ATTRIB = '___tag_name___'\nHIERARCHICAL = 'hierarchical'\nSEQUENTIAL = 'sequential'\n\nclass DataRegion(\n # todo rename n_nodes_per_region -> gnode_size\n # todo rename start_child_index -> first_gnode_start_index\n namedtuple(\"DataRegion\", [\"n_nodes_per_region\", \"start_child_index\", \"n_nodes_covered\",])\n):\n def __str__(self):\n return \"DR({0}, {1}, {2})\".format(self[0], self[1], self[2])\n \n def extend_one_gnode(self):\n return self.__class__(\n self.n_nodes_per_region, self.start_child_index, self.n_nodes_covered + self.n_nodes_per_region\n )\n \n @classmethod\n def binary_from_last_gnode(cls, gnode):\n gnode_size = gnode.end - gnode.start\n return cls(gnode_size, gnode.start - gnode_size, 2 * gnode_size)\n \n @classmethod\n def empty(cls):\n return cls(None, None, 0)\n return cls(0, 0, 0)\n \n @property\n def is_empty(self):\n return self[0] is None\n \n# todo use this more extensively\n# Generalized Node\nclass GNode(\n namedtuple(\"GNode\", [\"start\", \"end\"])\n):\n def __str__(self):\n return \"GN({start}, {end})\".format(start=self.start, end=self.end)\n\n\ndef open_doc(folder, filename):\n folder = os.path.abspath(folder)\n filepath = os.path.join(folder, filename)\n\n with open(filepath, 'r') as file:\n doc = lxml.html.fromstring(\n lxml.etree.tostring(\n lxml.html.parse(file), method='html'\n )\n )\n return doc\n\n\ndef html_to_dot_sequential_name(html, with_text=False):\n graph = Digraph(name='html')\n tag_counts = defaultdict(int)\n \n def add_node(html_node):\n tag = html_node.tag\n tag_sequential = tag_counts[tag]\n tag_counts[tag] += 1\n node_name = \"{}-{}\".format(tag, tag_sequential)\n graph.node(node_name, node_name)\n \n if len(html_node) > 0:\n for child in html_node.iterchildren():\n child_name = add_node(child)\n graph.edge(node_name, child_name)\n else:\n child_name = \"-\".join([node_name, \"txt\"])\n graph.node(child_name, html_node.text)\n graph.edge(node_name, child_name)\n return node_name\n add_node(html)\n return graph\n\n\ndef html_to_dot_hierarchical_name(html, with_text=False):\n graph = Digraph(name='html')\n \n def add_node(html_node, parent_suffix, brotherhood_index):\n tag = html_node.tag\n if parent_suffix is None and brotherhood_index is None:\n node_suffix = \"\"\n node_name = tag\n else:\n node_suffix = (\n \"-\".join([parent_suffix, str(brotherhood_index)]) \n if parent_suffix else \n str(brotherhood_index)\n )\n node_name = \"{}-{}\".format(tag, node_suffix)\n graph.node(node_name, node_name, path=node_suffix)\n \n if len(html_node) > 0:\n for child_index, child in enumerate(html_node.iterchildren()):\n child_name = add_node(child, node_suffix, child_index)\n graph.edge(node_name, child_name)\n else:\n child_name = \"-\".join([node_name, \"txt\"])\n child_path = \"-\".join([node_suffix, \"txt\"])\n graph.node(child_name, html_node.text, path=child_path)\n graph.edge(node_name, child_name)\n return node_name\n add_node(html, None, None)\n return graph\n\n\ndef html_to_dot(html, name_option='hierarchical', with_text=False):\n if name_option == SEQUENTIAL:\n return html_to_dot_sequential_name(html, with_text=with_text)\n elif name_option == HIERARCHICAL:\n return html_to_dot_hierarchical_name(html, 
with_text=with_text)\n else:\n raise Exception('No name option `{}`'.format(name_option))",
"_____no_output_____"
],
[
"class MDR:\n\n MINIMUM_DEPTH = 3\n\n def __init__(self, max_tag_per_gnode, edit_distance_threshold, verbose=(False, False, False)):\n self.max_tag_per_gnode = max_tag_per_gnode\n self.edit_distance_threshold = edit_distance_threshold\n self._verbose = verbose\n self._phase = None\n\n def _debug(self, msg, tabs=0, force=False):\n if self._verbose[self._phase] or (any(self._verbose) and force):\n if type(msg) == str:\n print(tabs * '\\t' + msg)\n else:\n pprint(msg)\n\n @staticmethod\n def depth(node):\n d = 0\n while node is not None:\n d += 1\n node = node.getparent()\n return d\n\n @staticmethod\n def gnode_to_string(list_of_nodes):\n return \" \".join([\n lxml.etree.tostring(child).decode('utf-8') for child in list_of_nodes\n ])\n \n def __call__(self, root):\n self.distances = {}\n self.data_regions = {}\n self.tag_counts = defaultdict(int)\n self.root_copy = deepcopy(root)\n self._checked_data_regions = defaultdict(set)\n\n self._phase = 0\n self._debug(\n \">\" * 20 + \" COMPUTE DISTANCES PHASE ({}) \".format(self._phase) + \"<\" * 20, force=True\n )\n self._compute_distances(root)\n self._debug(\n \"<\" * 20 + \" COMPUTE DISTANCES PHASE ({}) \".format(self._phase) + \">\" * 20, force=True\n )\n # todo remove debug variable\n global DEBUG_DISTANCES\n self.distances = DEBUG_DISTANCES if DEBUG_DISTANCES else self.distances\n # todo change _identify_data_regions to get dist table as an input\n\n self._debug(\"\\n\\nself.distances\\n\", force=True)\n self._debug(self.distances, force=True)\n self._debug(\"\\n\\n\", force=True)\n\n self._phase = 1\n self._debug(\n \">\" * 20 + \" FIND DATA REGIONS PHASE ({}) \".format(self._phase) + \"<\" * 20, force=True\n )\n \n self._find_data_regions(root)\n self._debug(\n \"<\" * 20 + \" FIND DATA REGIONS PHASE ({}) \".format(self._phase) + \">\" * 20, force=True\n )\n\n self._phase = 2\n\n def _compute_distances(self, node):\n # each tag is named sequentially\n tag = node.tag\n tag_name = \"{}-{}\".format(tag, self.tag_counts[tag])\n self.tag_counts[tag] += 1\n\n self._debug(\"in _compute_distances of `{}`\".format(tag_name))\n\n # todo: stock depth in attrib???\n node_depth = MDR.depth(node)\n\n if node_depth >= MDR.MINIMUM_DEPTH:\n # get all possible distances of the n-grams of children\n distances = self._compare_combinations(node.getchildren())\n\n self._debug(\"`{}` distances\".format(tag_name))\n self._debug(distances)\n else:\n distances = None\n \n # !!! ATTENTION !!! 
this modifies the input HTML \n # it is important that this comes after `compare_combinations` because \n # otherwise the edit distances would change\n # todo: remember, in the last phase, to clear the `TAG_NAME_ATTRIB` from all tags\n node.set(TAG_NAME_ATTRIB, tag_name)\n self.distances[tag_name] = distances\n\n self._debug(\"\\n\\n\")\n\n for child in node:\n self._compute_distances(child)\n\n def _compare_combinations(self, node_list):\n \"\"\"\n Notation: gnode = \"generalized node\"\n\n :param node_list:\n :return:\n \"\"\"\n\n self._debug(\"in _compare_combinations\")\n\n if not node_list:\n return {}\n\n # version 1: {gnode_size: {((,), (,)): float}}\n distances = defaultdict(dict)\n # version 2: {gnode_size: {starting_tag: {{ ((,), (,)): float }}}}\n # distances = defaultdict(lambda: defaultdict(dict))\n \n n_nodes = len(node_list)\n\n # for (i = 1; i <= K; i++) /* start from each node */\n for starting_tag in range(1, self.max_tag_per_gnode + 1):\n self._debug('starting_tag (i): {}'.format(starting_tag), 1)\n\n # for (j = i; j <= K; j++) /* comparing different combinations */\n for gnode_size in range(starting_tag, self.max_tag_per_gnode + 1): # j\n self._debug('gnode_size (j): {}'.format(gnode_size), 2)\n\n # if NodeList[i+2*j-1] exists then\n if (starting_tag + 2 * gnode_size - 1) < n_nodes + 1: # +1 for pythons open set notation\n self._debug(\" \")\n self._debug(\">>> if 1 <<<\", 3)\n\n left_gnode_start = starting_tag - 1 # st\n\n # for (k = i+j; k < Size(NodeList); k+j)\n # for k in range(i + j, n, j):\n for right_gnode_start in range(starting_tag + gnode_size - 1, n_nodes, gnode_size): # k\n self._debug('left_gnode_start (st): {}'.format(left_gnode_start), 4)\n self._debug('right_gnode_start (k): {}'.format(right_gnode_start), 4)\n\n # if NodeList[k+j-1] exists then\n if right_gnode_start + gnode_size < n_nodes + 1:\n self._debug(\" \")\n self._debug(\">>> if 2 <<<\", 5)\n # todo: avoid recomputing strings?\n # todo: avoid recomputing edit distances?\n # todo: check https://pypi.org/project/strsim/ ?\n\n # NodeList[St..(k-1)]\n left_gnode_indices = (left_gnode_start, right_gnode_start)\n left_gnode = node_list[left_gnode_indices[0]:left_gnode_indices[1]]\n left_gnode_str = MDR.gnode_to_string(left_gnode)\n self._debug('left_gnode_indices: {}'.format(left_gnode_indices), 5)\n\n # NodeList[St..(k-1)]\n right_gnode_indices = (right_gnode_start, right_gnode_start + gnode_size)\n right_gnode = node_list[right_gnode_indices[0]:right_gnode_indices[1]]\n right_gnode_str = MDR.gnode_to_string(right_gnode)\n self._debug('right_gnode_indices: {}'.format(right_gnode_indices), 5)\n\n # edit distance\n edit_distance = normalized_levenshtein.distance(left_gnode_str, right_gnode_str)\n self._debug('edit_distance: {}'.format(edit_distance), 5)\n \n # version 1\n distances[gnode_size][(left_gnode_indices, right_gnode_indices)] = edit_distance\n # version 2\n # distances[gnode_size][starting_tag][\n # (left_gnode_indices, right_gnode_indices)\n # ] = edit_distance\n \n left_gnode_start = right_gnode_start\n else:\n self._debug(\"skipped\\n\", 5)\n self._debug(' ')\n else:\n self._debug(\"skipped\\n\", 3)\n self._debug(' ')\n \n # version 1\n return dict(distances)\n # version 2\n # return {k: dict(v) for k, v in distances.items()}\n \n def _find_data_regions(self, node):\n tag_name = node.attrib[TAG_NAME_ATTRIB]\n node_depth = MDR.depth(node)\n \n self._debug(\"in _find_data_regions of `{}`\".format(tag_name))\n \n # if TreeDepth(Node) => 3 then\n if node_depth >= MDR.MINIMUM_DEPTH:\n \n # 
Node.DRs = IdenDRs(1, Node, K, T);\n # data_regions = self._identify_data_regions(1, node) # 0 or 1???\n data_regions = self._identify_data_regions(0, node)\n self.data_regions[tag_name] = data_regions\n \n # todo remove debug thing\n if tag_name == \"table-0\":\n return \n \n # tempDRs = ∅;\n temp_data_regions = set()\n \n # for each Child ∈ Node.Children do\n for child in node.getchildren():\n \n # FindDRs(Child, K, T);\n self._find_data_regions(child)\n \n # tempDRs = tempDRs ∪ UnCoveredDRs(Node, Child);\n uncovered_data_regions = self._uncovered_data_regions(node, child)\n temp_data_regions = temp_data_regions | uncovered_data_regions\n \n # Node.DRs = Node.DRs ∪ tempDRs\n self.data_regions[tag_name] |= temp_data_regions\n \n else:\n for child in node.getchildren():\n self._find_data_regions(child)\n \n self._debug(\" \")\n \n def _identify_data_regions(self, start_index, node):\n \"\"\"\n Notation: dr = data_region\n \"\"\"\n tag_name = node.attrib[TAG_NAME_ATTRIB]\n self._debug(\"in _identify_data_regions node:{}\".format(tag_name))\n self._debug(\"start_index:{}\".format(start_index), 1)\n\n # 1 maxDR = [0, 0, 0];\n # max_dr = DataRegion(0, 0, 0)\n # current_dr = DataRegion(0, 0, 0)\n max_dr = DataRegion.empty()\n current_dr = DataRegion.empty()\n\n # 2 for (i = 1; i <= K; i++) /* compute for each i-combination */\n for gnode_size in range(1, self.max_tag_per_gnode + 1):\n self._debug('gnode_size (i): {}'.format(gnode_size), 2)\n\n # 3 for (f = start; f <= start+i; f++) /* start from each node */\n # for start_gnode_start_index in range(start_index, start_index + gnode_size + 1):\n for first_gn_start_idx in range(start_index, start_index + gnode_size): # todo check if this covers everything\n self._debug('first_gn_start_idx (f): {}'.format(first_gn_start_idx), 3)\n\n # 4 flag = true;\n dr_has_started = False\n\n # 5 for (j = f; j < size(Node.Children); j+i)\n # for left_gnode_start in range(start_node, len(node) , gnode_size):\n for last_gn_start_idx in range(\n # start_gnode_start_index, len(node) - gnode_size + 1, gnode_size\n first_gn_start_idx + gnode_size, len(node) - gnode_size + 1, gnode_size\n ):\n self._debug('last_gn_start_idx (j): {}'.format(last_gn_start_idx), 4)\n\n # 6 if Distance(Node, i, j) <= T then\n\n # todo: correct here\n # from _compare_combinations\n # left_gnode_indices = (left_gnode_start, right_gnode_start)\n # right_gnode_indices = (right_gnode_start, right_gnode_start + gnode_size)\n \n # left_gnode_indices = (start_gnode_start_index, start_gnode_start_index + gnode_size)\n # right_gnode_indices = (end_gnode_start_index, end_gnode_start_index + gnode_size)\n \n # gn_before_last = (last_gn_start_idx - gnode_size, last_gn_start_idx)\n # gn_last = (last_gn_start_idx, last_gn_start_idx + gnode_size)\n \n gn_before_last = GNode(last_gn_start_idx - gnode_size, last_gn_start_idx)\n gn_last = GNode(last_gn_start_idx, last_gn_start_idx + gnode_size)\n\n self._debug('gn_before_last : {}'.format(gn_before_last), 5)\n self._debug('gn_last : {}'.format(gn_last), 5)\n\n gn_pair = (gn_before_last, gn_last)\n distance = self.distances[tag_name][gnode_size][gn_pair]\n self._checked_data_regions[tag_name].add(gn_pair)\n \n self._debug('dist : {}'.format(distance), 5)\n\n if distance <= self.edit_distance_threshold:\n \n self._debug('dist passes the threshold!'.format(distance), 6)\n\n # 7 if flag=true then\n if not dr_has_started:\n \n self._debug('it is the first pair, init the `current_dr`...'.format(distance), 6)\n \n # 8 curDR = [i, j, 2*i];\n # current_dr = 
DataRegion(gnode_size, first_gn_start_idx - gnode_size, 2 * gnode_size)\n # current_dr = DataRegion(gnode_size, first_gn_start_idx, 2 * gnode_size)\n current_dr = DataRegion.binary_from_last_gnode(gn_last)\n \n self._debug('current_dr: {}'.format(current_dr), 6)\n\n # 9 flag = false;\n dr_has_started = True\n\n # 10 else curDR[3] = curDR[3] + i;\n else:\n self._debug('extending the DR...'.format(distance), 6)\n # current_dr = DataRegion(\n # current_dr[0], current_dr[1], current_dr[2] + gnode_size\n # ) \n current_dr = current_dr.extend_one_gnode()\n self._debug('current_dr: {}'.format(current_dr), 6)\n\n # 11 elseif flag = false then Exit-inner-loop;\n elif dr_has_started:\n self._debug('above the threshold, breaking the loop...', 6)\n # todo: keep track of all continuous regions per node...\n break\n \n self._debug(\" \")\n\n # 13 if (maxDR[3] < curDR[3]) and (maxDR[2] = 0 or (curDR[2]<= maxDR[2]) then\n current_is_strictly_larger = max_dr.n_nodes_covered < current_dr.n_nodes_covered\n current_starts_at_same_node_or_before = (\n max_dr.is_empty or current_dr.start_child_index <= max_dr.start_child_index\n )\n \n if current_is_strictly_larger and current_starts_at_same_node_or_before: \n self._debug('current DR is bigger than max! replacing...', 3)\n \n # 14 maxDR = curDR;\n self._debug('old max_dr: {}, new max_dr: {}'.format(max_dr, current_dr), 3)\n max_dr = current_dr\n \n self._debug('max_dr: {}'.format(max_dr), 2)\n self._debug(\" \")\n \n self._debug(\"max_dr: {}\\n\".format(max_dr))\n \n # 16 if ( maxDR[3] != 0 ) then\n if max_dr.n_nodes_covered != 0:\n \n # 17 if (maxDR[2]+maxDR[3]-1 != size(Node.Children)) then\n last_covered_tag_index = max_dr.start_child_index + max_dr.n_nodes_covered - 1\n self._debug(\"last_covered_tag_index: {}\".format(last_covered_tag_index))\n \n if last_covered_tag_index < len(node) - 1:\n # 18 return {maxDR} ∪ IdentDRs(maxDR[2]+maxDR[3], Node, K, T)\n self._debug(\"calling recursion! \\n\".format(last_covered_tag_index))\n return {max_dr} | self._identify_data_regions(last_covered_tag_index + 1, node)\n\n # 19 else return {maxDR}\n else:\n self._debug(\"returning max dr\".format(last_covered_tag_index))\n self._debug('max_dr: {}'.format(max_dr))\n return {max_dr}\n\n # 21 return ∅;\n self._debug(\"returning empty set\")\n return set()\n\n def _uncovered_data_regions(self, node, child):\n return set()\n",
"_____no_output_____"
],
[
"# tests for cases in dev_6_cases\n%load_ext autoreload\n%autoreload 2\n\nfolder = '.'\nfilename = 'tables-2.html'\ndoc = open_doc(folder, filename)\ndot = html_to_dot(doc, name_option=SEQUENTIAL)\n\nfrom dev_6_cases import all_cases as cases\nfrom dev_6_cases import DEBUG_THRESHOLD as edit_distance_threshold\n\ncases = [\n {\n 'body-0': None,\n 'html-0': None,\n 'table-0': case\n }\n for case in cases\n]\n\nDEBUG_DISTANCES = cases[2]\n\nmdr = MDR(\n max_tag_per_gnode=3, \n edit_distance_threshold=edit_distance_threshold, \n verbose=(False, True, False)\n)\nmdr(doc)",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n>>>>>>>>>>>>>>>>>>>> COMPUTE DISTANCES PHASE (0) <<<<<<<<<<<<<<<<<<<<\n<<<<<<<<<<<<<<<<<<<< COMPUTE DISTANCES PHASE (0) >>>>>>>>>>>>>>>>>>>>\n\n\nself.distances\n\n{'body-0': None,\n 'html-0': None,\n 'table-0': {1: {((0, 1), (1, 2)): 0.1,\n ((1, 2), (2, 3)): 0.1,\n ((2, 3), (3, 4)): 0.9,\n ((3, 4), (4, 5)): 0.1,\n ((4, 5), (5, 6)): 0.9,\n ((5, 6), (6, 7)): 0.1,\n ((6, 7), (7, 8)): 0.9,\n ((7, 8), (8, 9)): 0.9,\n ((8, 9), (9, 10)): 0.1,\n ((9, 10), (10, 11)): 0.1,\n ((10, 11), (11, 12)): 0.9},\n 2: {((0, 2), (2, 4)): 0.9,\n ((1, 3), (3, 5)): 0.9,\n ((2, 4), (4, 6)): 0.1,\n ((3, 5), (5, 7)): 0.1,\n ((4, 6), (6, 8)): 0.1,\n ((5, 7), (7, 9)): 0.1,\n ((6, 8), (8, 10)): 0.1,\n ((7, 9), (9, 11)): 0.1,\n ((8, 10), (10, 12)): 0.9},\n 3: {((0, 3), (3, 6)): 0.9,\n ((1, 4), (4, 7)): 0.1,\n ((2, 5), (5, 8)): 0.9,\n ((3, 6), (6, 9)): 0.1,\n ((4, 7), (7, 10)): 0.1,\n ((5, 8), (8, 11)): 0.9,\n ((6, 9), (9, 12)): 0.9}}}\n\n\n\n>>>>>>>>>>>>>>>>>>>> FIND DATA REGIONS PHASE (1) <<<<<<<<<<<<<<<<<<<<\nin _find_data_regions of `html-0`\nin _find_data_regions of `body-0`\nin _find_data_regions of `table-0`\nin _identify_data_regions node:table-0\n\tstart_index:0\n\t\tgnode_size (i): 1\n\t\t\tfirst_gn_start_idx (f): 0\n\t\t\t\tlast_gn_start_idx (j): 1\n\t\t\t\t\tgn_before_last : GN(0, 1)\n\t\t\t\t\tgn_last : GN(1, 2)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(1, 0, 2)\n \n\t\t\t\tlast_gn_start_idx (j): 2\n\t\t\t\t\tgn_before_last : GN(1, 2)\n\t\t\t\t\tgn_last : GN(2, 3)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(1, 0, 3)\n \n\t\t\t\tlast_gn_start_idx (j): 3\n\t\t\t\t\tgn_before_last : GN(2, 3)\n\t\t\t\t\tgn_last : GN(3, 4)\n\t\t\t\t\tdist : 0.9\n\t\t\t\t\t\tabove the threshold, breaking the loop...\n\t\t\tcurrent DR is bigger than max! 
replacing...\n\t\t\told max_dr: DR(None, None, 0), new max_dr: DR(1, 0, 3)\n\t\tmax_dr: DR(1, 0, 3)\n \n\t\tgnode_size (i): 2\n\t\t\tfirst_gn_start_idx (f): 0\n\t\t\t\tlast_gn_start_idx (j): 2\n\t\t\t\t\tgn_before_last : GN(0, 2)\n\t\t\t\t\tgn_last : GN(2, 4)\n\t\t\t\t\tdist : 0.9\n \n\t\t\t\tlast_gn_start_idx (j): 4\n\t\t\t\t\tgn_before_last : GN(2, 4)\n\t\t\t\t\tgn_last : GN(4, 6)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(2, 2, 4)\n \n\t\t\t\tlast_gn_start_idx (j): 6\n\t\t\t\t\tgn_before_last : GN(4, 6)\n\t\t\t\t\tgn_last : GN(6, 8)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 2, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 8\n\t\t\t\t\tgn_before_last : GN(6, 8)\n\t\t\t\t\tgn_last : GN(8, 10)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 2, 8)\n \n\t\t\t\tlast_gn_start_idx (j): 10\n\t\t\t\t\tgn_before_last : GN(8, 10)\n\t\t\t\t\tgn_last : GN(10, 12)\n\t\t\t\t\tdist : 0.9\n\t\t\t\t\t\tabove the threshold, breaking the loop...\n\t\tmax_dr: DR(1, 0, 3)\n \n\t\t\tfirst_gn_start_idx (f): 1\n\t\t\t\tlast_gn_start_idx (j): 3\n\t\t\t\t\tgn_before_last : GN(1, 3)\n\t\t\t\t\tgn_last : GN(3, 5)\n\t\t\t\t\tdist : 0.9\n \n\t\t\t\tlast_gn_start_idx (j): 5\n\t\t\t\t\tgn_before_last : GN(3, 5)\n\t\t\t\t\tgn_last : GN(5, 7)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(2, 3, 4)\n \n\t\t\t\tlast_gn_start_idx (j): 7\n\t\t\t\t\tgn_before_last : GN(5, 7)\n\t\t\t\t\tgn_last : GN(7, 9)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 3, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 9\n\t\t\t\t\tgn_before_last : GN(7, 9)\n\t\t\t\t\tgn_last : GN(9, 11)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 3, 8)\n \n\t\tmax_dr: DR(1, 0, 3)\n \n\t\tgnode_size (i): 3\n\t\t\tfirst_gn_start_idx (f): 0\n\t\t\t\tlast_gn_start_idx (j): 3\n\t\t\t\t\tgn_before_last : GN(0, 3)\n\t\t\t\t\tgn_last : GN(3, 6)\n\t\t\t\t\tdist : 0.9\n \n\t\t\t\tlast_gn_start_idx (j): 6\n\t\t\t\t\tgn_before_last : GN(3, 6)\n\t\t\t\t\tgn_last : GN(6, 9)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(3, 3, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 9\n\t\t\t\t\tgn_before_last : GN(6, 9)\n\t\t\t\t\tgn_last : GN(9, 12)\n\t\t\t\t\tdist : 0.9\n\t\t\t\t\t\tabove the threshold, breaking the loop...\n\t\tmax_dr: DR(1, 0, 3)\n \n\t\t\tfirst_gn_start_idx (f): 1\n\t\t\t\tlast_gn_start_idx (j): 4\n\t\t\t\t\tgn_before_last : GN(1, 4)\n\t\t\t\t\tgn_last : GN(4, 7)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(3, 1, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 7\n\t\t\t\t\tgn_before_last : GN(4, 7)\n\t\t\t\t\tgn_last : GN(7, 10)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(3, 1, 9)\n \n\t\tmax_dr: DR(1, 0, 3)\n \n\t\t\tfirst_gn_start_idx (f): 2\n\t\t\t\tlast_gn_start_idx (j): 5\n\t\t\t\t\tgn_before_last : GN(2, 5)\n\t\t\t\t\tgn_last : GN(5, 8)\n\t\t\t\t\tdist : 0.9\n \n\t\t\t\tlast_gn_start_idx 
(j): 8\n\t\t\t\t\tgn_before_last : GN(5, 8)\n\t\t\t\t\tgn_last : GN(8, 11)\n\t\t\t\t\tdist : 0.9\n \n\t\tmax_dr: DR(1, 0, 3)\n \nmax_dr: DR(1, 0, 3)\n\nlast_covered_tag_index: 2\ncalling recursion! \n\nin _identify_data_regions node:table-0\n\tstart_index:3\n\t\tgnode_size (i): 1\n\t\t\tfirst_gn_start_idx (f): 3\n\t\t\t\tlast_gn_start_idx (j): 4\n\t\t\t\t\tgn_before_last : GN(3, 4)\n\t\t\t\t\tgn_last : GN(4, 5)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(1, 3, 2)\n \n\t\t\t\tlast_gn_start_idx (j): 5\n\t\t\t\t\tgn_before_last : GN(4, 5)\n\t\t\t\t\tgn_last : GN(5, 6)\n\t\t\t\t\tdist : 0.9\n\t\t\t\t\t\tabove the threshold, breaking the loop...\n\t\t\tcurrent DR is bigger than max! replacing...\n\t\t\told max_dr: DR(None, None, 0), new max_dr: DR(1, 3, 2)\n\t\tmax_dr: DR(1, 3, 2)\n \n\t\tgnode_size (i): 2\n\t\t\tfirst_gn_start_idx (f): 3\n\t\t\t\tlast_gn_start_idx (j): 5\n\t\t\t\t\tgn_before_last : GN(3, 5)\n\t\t\t\t\tgn_last : GN(5, 7)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(2, 3, 4)\n \n\t\t\t\tlast_gn_start_idx (j): 7\n\t\t\t\t\tgn_before_last : GN(5, 7)\n\t\t\t\t\tgn_last : GN(7, 9)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 3, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 9\n\t\t\t\t\tgn_before_last : GN(7, 9)\n\t\t\t\t\tgn_last : GN(9, 11)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 3, 8)\n \n\t\t\tcurrent DR is bigger than max! replacing...\n\t\t\told max_dr: DR(1, 3, 2), new max_dr: DR(2, 3, 8)\n\t\tmax_dr: DR(2, 3, 8)\n \n\t\t\tfirst_gn_start_idx (f): 4\n\t\t\t\tlast_gn_start_idx (j): 6\n\t\t\t\t\tgn_before_last : GN(4, 6)\n\t\t\t\t\tgn_last : GN(6, 8)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(2, 4, 4)\n \n\t\t\t\tlast_gn_start_idx (j): 8\n\t\t\t\t\tgn_before_last : GN(6, 8)\n\t\t\t\t\tgn_last : GN(8, 10)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\textending the DR...\n\t\t\t\t\t\tcurrent_dr: DR(2, 4, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 10\n\t\t\t\t\tgn_before_last : GN(8, 10)\n\t\t\t\t\tgn_last : GN(10, 12)\n\t\t\t\t\tdist : 0.9\n\t\t\t\t\t\tabove the threshold, breaking the loop...\n\t\tmax_dr: DR(2, 3, 8)\n \n\t\tgnode_size (i): 3\n\t\t\tfirst_gn_start_idx (f): 3\n\t\t\t\tlast_gn_start_idx (j): 6\n\t\t\t\t\tgn_before_last : GN(3, 6)\n\t\t\t\t\tgn_last : GN(6, 9)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(3, 3, 6)\n \n\t\t\t\tlast_gn_start_idx (j): 9\n\t\t\t\t\tgn_before_last : GN(6, 9)\n\t\t\t\t\tgn_last : GN(9, 12)\n\t\t\t\t\tdist : 0.9\n\t\t\t\t\t\tabove the threshold, breaking the loop...\n\t\tmax_dr: DR(2, 3, 8)\n \n\t\t\tfirst_gn_start_idx (f): 4\n\t\t\t\tlast_gn_start_idx (j): 7\n\t\t\t\t\tgn_before_last : GN(4, 7)\n\t\t\t\t\tgn_last : GN(7, 10)\n\t\t\t\t\tdist : 0.1\n\t\t\t\t\t\tdist passes the threshold!\n\t\t\t\t\t\tit is the first pair, init the `current_dr`...\n\t\t\t\t\t\tcurrent_dr: DR(3, 4, 6)\n \n\t\tmax_dr: DR(2, 3, 8)\n \n\t\t\tfirst_gn_start_idx (f): 5\n\t\t\t\tlast_gn_start_idx (j): 8\n\t\t\t\t\tgn_before_last : GN(5, 8)\n\t\t\t\t\tgn_last : GN(8, 
11)\n\t\t\t\t\tdist : 0.9\n \n\t\tmax_dr: DR(2, 3, 8)\n \nmax_dr: DR(2, 3, 8)\n\nlast_covered_tag_index: 10\ncalling recursion! \n\nin _identify_data_regions node:table-0\n\tstart_index:11\n\t\tgnode_size (i): 1\n\t\t\tfirst_gn_start_idx (f): 11\n\t\tmax_dr: DR(None, None, 0)\n \n\t\tgnode_size (i): 2\n\t\t\tfirst_gn_start_idx (f): 11\n\t\tmax_dr: DR(None, None, 0)\n \n\t\t\tfirst_gn_start_idx (f): 12\n\t\tmax_dr: DR(None, None, 0)\n \n\t\tgnode_size (i): 3\n\t\t\tfirst_gn_start_idx (f): 11\n\t\tmax_dr: DR(None, None, 0)\n \n\t\t\tfirst_gn_start_idx (f): 12\n\t\tmax_dr: DR(None, None, 0)\n \n\t\t\tfirst_gn_start_idx (f): 13\n\t\tmax_dr: DR(None, None, 0)\n \nmax_dr: DR(None, None, 0)\n\nreturning empty set\n \n \n<<<<<<<<<<<<<<<<<<<< FIND DATA REGIONS PHASE (1) >>>>>>>>>>>>>>>>>>>>\n"
],
[
"# tests for cases in dev_5_cases\n\n# %load_ext autoreload\n# %autoreload 2\n# \n# folder = '.'\n# filename = 'tables-1.html'\n# doc = open_doc(folder, filename)\n# dot = html_to_dot(doc, name_option=SEQUENTIAL)\n# \n# from dev_5_cases import all_cases as cases\n# from dev_5_cases import DEBUG_THRESHOLD as edit_distance_threshold\n# \n# cases = [\n# {\n# 'body-0': None,\n# 'html-0': None,\n# 'table-0': case\n# }\n# for case in cases\n# ]\n# \n# DEBUG_DISTANCES = cases[6]\n# \n# mdr = MDR(\n# max_tag_per_gnode=3, \n# edit_distance_threshold=edit_distance_threshold, \n# verbose=(False, True, False)\n# )\n# mdr(doc)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
4adb32ca8e92aaea6abe747a33392c8b2ea882bf
| 49,516 |
ipynb
|
Jupyter Notebook
|
assignment4/assignment4/assignment43/Imitation Learning.ipynb
|
AnthonyNg404/Deep-Learning
|
ef1dafaa1d07e9c9b574ba1722a7954c16ef463d
|
[
"Unlicense"
] | null | null | null |
assignment4/assignment4/assignment43/Imitation Learning.ipynb
|
AnthonyNg404/Deep-Learning
|
ef1dafaa1d07e9c9b574ba1722a7954c16ef463d
|
[
"Unlicense"
] | null | null | null |
assignment4/assignment4/assignment43/Imitation Learning.ipynb
|
AnthonyNg404/Deep-Learning
|
ef1dafaa1d07e9c9b574ba1722a7954c16ef463d
|
[
"Unlicense"
] | null | null | null | 49,516 | 49,516 | 0.723483 |
[
[
[
"# Imitation Learning with Neural Network Policies\nIn this notebook, you will implement the supervised losses for behavior cloning and use it to train policies for locomotion tasks.",
"_____no_output_____"
]
],
[
[
"import os\nfrom google.colab import drive\ndrive.mount('/content/gdrive')",
"_____no_output_____"
],
[
"DRIVE_PATH = '/content/gdrive/My\\ Drive/282'\nDRIVE_PYTHON_PATH = DRIVE_PATH.replace('\\\\', '')\nif not os.path.exists(DRIVE_PYTHON_PATH):\n %mkdir $DRIVE_PATH\n\n## the space in `My Drive` causes some issues,\n## make a symlink to avoid this\nSYM_PATH = '/content/282'\nif not os.path.exists(SYM_PATH):\n !ln -s $DRIVE_PATH $SYM_PATH\n!apt update \n!apt install -y --no-install-recommends \\\n build-essential \\\n curl \\\n git \\\n gnupg2 \\\n make \\\n cmake \\\n ffmpeg \\\n swig \\\n libz-dev \\\n unzip \\\n zlib1g-dev \\\n libglfw3 \\\n libglfw3-dev \\\n libxrandr2 \\\n libxinerama-dev \\\n libxi6 \\\n libxcursor-dev \\\n libgl1-mesa-dev \\\n libgl1-mesa-glx \\\n libglew-dev \\\n libosmesa6-dev \\\n lsb-release \\\n ack-grep \\\n patchelf \\\n wget \\\n xpra \\\n xserver-xorg-dev \\\n xvfb \\\n python-opengl \\\n ffmpeg > /dev/null 2>&1\nMJC_PATH = '{}/mujoco'.format(SYM_PATH)\nif not os.path.exists(MJC_PATH):\n %mkdir $MJC_PATH\n%cd $MJC_PATH\nif not os.path.exists(os.path.join(MJC_PATH, 'mujoco200')):\n !wget -q https://www.roboti.us/download/mujoco200_linux.zip\n !unzip -q mujoco200_linux.zip\n %mv mujoco200_linux mujoco200\n %rm mujoco200_linux.zip\nimport os\n\nos.environ['LD_LIBRARY_PATH'] += ':{}/mujoco200/bin'.format(MJC_PATH)\nos.environ['MUJOCO_PY_MUJOCO_PATH'] = '{}/mujoco200'.format(MJC_PATH)\nos.environ['MUJOCO_PY_MJKEY_PATH'] = '{}/mjkey.txt'.format(MJC_PATH)\n\n## installation on colab does not find *.so files\n## in LD_LIBRARY_PATH, copy over manually instead\n!cp $MJC_PATH/mujoco200/bin/*.so /usr/lib/x86_64-linux-gnu/\n%cd $MJC_PATH\nif not os.path.exists('mujoco-py'):\n !git clone https://github.com/openai/mujoco-py.git\n%cd mujoco-py\n%pip install -e .\n\n## cythonize at the first import\nimport mujoco_py\n%cd $SYM_PATH\n\n%cd assignment4\n%pip install -r requirements.txt",
"Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount(\"/content/gdrive\", force_remount=True).\nHit:1 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease\nIgn:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease\nHit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease\nIgn:4 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease\nHit:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release\nHit:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release\nHit:7 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease\nHit:8 http://archive.ubuntu.com/ubuntu bionic InRelease\nHit:10 http://archive.ubuntu.com/ubuntu bionic-updates InRelease\nHit:11 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease\nHit:13 http://archive.ubuntu.com/ubuntu bionic-backports InRelease\nHit:14 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease\nHit:15 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\n54 packages can be upgraded. Run 'apt list --upgradable' to see them.\n/content/gdrive/My Drive/282/mujoco\n/content/gdrive/My Drive/282/mujoco\n/content/gdrive/My Drive/282/mujoco/mujoco-py\nObtaining file:///content/gdrive/My%20Drive/282/mujoco/mujoco-py\n Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n\u001b[33m WARNING: Missing build requirements in pyproject.toml for file:///content/gdrive/My%20Drive/282/mujoco/mujoco-py.\u001b[0m\n\u001b[33m WARNING: The project does not specify a build backend, and pip cannot fall back to setuptools without 'wheel'.\u001b[0m\n Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n Installing backend dependencies ... \u001b[?25l\u001b[?25hdone\n Preparing wheel metadata ... \u001b[?25l\u001b[?25hdone\nRequirement already satisfied: imageio>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13) (2.4.1)\nRequirement already satisfied: fasteners~=0.15 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13) (0.16)\nRequirement already satisfied: cffi>=1.10 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13) (1.14.5)\nRequirement already satisfied: glfw>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13) (2.1.0)\nRequirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13) (1.19.5)\nRequirement already satisfied: Cython>=0.27.2 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13) (0.29.22)\nRequirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio>=2.1.2->mujoco-py==2.0.2.13) (7.0.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from fasteners~=0.15->mujoco-py==2.0.2.13) (1.15.0)\nRequirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.10->mujoco-py==2.0.2.13) (2.20)\nInstalling collected packages: mujoco-py\n Found existing installation: mujoco-py 2.0.2.13\n Can't uninstall 'mujoco-py'. 
No files were found to uninstall.\n Running setup.py develop for mujoco-py\nSuccessfully installed mujoco-py\nCompiling /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/cymj.pyx because it changed.\n[1/1] Cythonizing /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/cymj.pyx\nrunning build_ext\nbuilding 'mujoco_py.cymj' extension\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/gl\nx86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-a56wZI/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-a56wZI/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Imujoco_py -I/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py -I/content/282/mujoco/mujoco200/include -I/usr/local/lib/python3.7/dist-packages/numpy/core/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/cymj.c -o /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/cymj.o -fopenmp -w\nx86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-a56wZI/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-a56wZI/python3.7-3.7.10=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Imujoco_py -I/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py -I/content/282/mujoco/mujoco200/include -I/usr/local/lib/python3.7/dist-packages/numpy/core/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/gl/osmesashim.c -o /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/gl/osmesashim.o -fopenmp -w\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/lib.linux-x86_64-3.7\ncreating /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/lib.linux-x86_64-3.7/mujoco_py\nx86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fdebug-prefix-map=/build/python3.7-a56wZI/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/cymj.o /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/temp.linux-x86_64-3.7/content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/gl/osmesashim.o -L/content/282/mujoco/mujoco200/bin -Wl,--enable-new-dtags,-R/content/282/mujoco/mujoco200/bin -lmujoco200 -lglewosmesa -lOSMesa -lGL -o /content/gdrive/My Drive/282/mujoco/mujoco-py/mujoco_py/generated/_pyxbld_2.0.2.13_37_linuxcpuextensionbuilder/lib.linux-x86_64-3.7/mujoco_py/cymj.cpython-37m-x86_64-linux-gnu.so -fopenmp\n/content/gdrive/My Drive/282\n/content/gdrive/My Drive/282/assignment4\nCollecting gym[atari]==0.17.2\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/b3/99/7cc3e510678119cdac91f33fb9235b98448f09a6bdf0cafea2b108d9ce51/gym-0.17.2.tar.gz (1.6MB)\n\u001b[K |████████████████████████████████| 1.6MB 4.5MB/s \n\u001b[?25hRequirement already satisfied: mujoco-py==2.0.2.13 in /content/gdrive/My Drive/282/mujoco/mujoco-py (from -r requirements.txt (line 2)) (2.0.2.13)\nCollecting tensorboard==2.3.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/e9/1b/6a420d7e6ba431cf3d51b2a5bfa06a958c4141e3189385963dc7f6fbffb6/tensorboard-2.3.0-py3-none-any.whl (6.8MB)\n\u001b[K |████████████████████████████████| 6.8MB 16.3MB/s \n\u001b[?25hCollecting tensorboardX==1.8\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/c3/12/dcaf67e1312475b26db9e45e7bb6f32b540671a9ee120b3a72d9e09bc517/tensorboardX-1.8-py2.py3-none-any.whl (216kB)\n\u001b[K |████████████████████████████████| 225kB 25.4MB/s \n\u001b[?25hCollecting matplotlib==2.2.2\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/da/1d/e6d9af0b5045597869537391f1036ab841c613c3f3e40f16bbc1d75450ee/matplotlib-2.2.2-cp37-cp37m-manylinux1_x86_64.whl (12.6MB)\n\u001b[K |████████████████████████████████| 12.6MB 19.3MB/s \n\u001b[?25hCollecting ipython==6.4.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/b1/7f/91d50f28af3e3a24342561983a7857e399ce24093876e6970b986a0b6677/ipython-6.4.0-py3-none-any.whl (750kB)\n\u001b[K |████████████████████████████████| 757kB 42.6MB/s \n\u001b[?25hCollecting moviepy==1.0.0\n\u001b[?25l 
Downloading https://files.pythonhosted.org/packages/fb/32/a93f4af8b88985304a748ca0a66a64eb9fac53d0a9355ec33e713c4a3bf5/moviepy-1.0.0.tar.gz (398kB)\n\u001b[K |████████████████████████████████| 399kB 16.4MB/s \n\u001b[?25hCollecting pyvirtualdisplay==1.3.2\n Downloading https://files.pythonhosted.org/packages/d0/8a/643043cc70791367bee2d19eb20e00ed1a246ac48e5dbe57bbbcc8be40a9/PyVirtualDisplay-1.3.2-py2.py3-none-any.whl\nRequirement already satisfied: torch==1.8.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 9)) (1.8.0+cu101)\nCollecting opencv-python==4.4.0.42\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/fa/e3/7ed67a8f3116113a364671fb4142c446dd804c63f3d9df5c11168a1e4dbb/opencv_python-4.4.0.42-cp37-cp37m-manylinux2014_x86_64.whl (49.4MB)\n\u001b[K |████████████████████████████████| 49.4MB 165kB/s \n\u001b[?25hCollecting ipdb==0.13.3\n Downloading https://files.pythonhosted.org/packages/c1/4c/c2552dc5c2f3a4657ae84c1a91e3c7d4f2b7df88a38d6d282e48d050ad58/ipdb-0.13.3.tar.gz\nCollecting box2d-py\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/87/34/da5393985c3ff9a76351df6127c275dcb5749ae0abbe8d5210f06d97405d/box2d_py-2.3.8-cp37-cp37m-manylinux1_x86_64.whl (448kB)\n\u001b[K |████████████████████████████████| 450kB 21.5MB/s \n\u001b[?25hCollecting numpy==1.20.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/3a/6c/322f6aa128179d0ea45a543a4e29a74da2317117109899cfd56d09bf3de0/numpy-1.20.0-cp37-cp37m-manylinux2010_x86_64.whl (15.3MB)\n\u001b[K |████████████████████████████████| 15.3MB 31.4MB/s \n\u001b[?25hRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym[atari]==0.17.2->-r requirements.txt (line 1)) (1.4.1)\nRequirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym[atari]==0.17.2->-r requirements.txt (line 1)) (1.5.0)\nRequirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym[atari]==0.17.2->-r requirements.txt (line 1)) (1.3.0)\nRequirement already satisfied: atari_py~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from gym[atari]==0.17.2->-r requirements.txt (line 1)) (0.2.6)\nRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from gym[atari]==0.17.2->-r requirements.txt (line 1)) (7.0.0)\nRequirement already satisfied: glfw>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13->-r requirements.txt (line 2)) (2.1.0)\nRequirement already satisfied: Cython>=0.27.2 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13->-r requirements.txt (line 2)) (0.29.22)\nRequirement already satisfied: imageio>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13->-r requirements.txt (line 2)) (2.4.1)\nRequirement already satisfied: cffi>=1.10 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13->-r requirements.txt (line 2)) (1.14.5)\nRequirement already satisfied: fasteners~=0.15 in /usr/local/lib/python3.7/dist-packages (from mujoco-py==2.0.2.13->-r requirements.txt (line 2)) (0.16)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (1.0.1)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (2.23.0)\nRequirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from 
tensorboard==2.3.0->-r requirements.txt (line 3)) (1.15.0)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (1.8.0)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (1.27.1)\nRequirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (0.10.0)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (0.4.3)\nRequirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (1.32.0)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (54.1.2)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (3.3.4)\nRequirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (3.12.4)\nRequirement already satisfied: wheel>=0.26; python_version >= \"3\" in /usr/local/lib/python3.7/dist-packages (from tensorboard==2.3.0->-r requirements.txt (line 3)) (0.36.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib==2.2.2->-r requirements.txt (line 5)) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib==2.2.2->-r requirements.txt (line 5)) (2.4.7)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib==2.2.2->-r requirements.txt (line 5)) (2.8.1)\nRequirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from matplotlib==2.2.2->-r requirements.txt (line 5)) (2018.9)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib==2.2.2->-r requirements.txt (line 5)) (1.3.1)\nRequirement already satisfied: jedi>=0.10 in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (0.18.0)\nRequirement already satisfied: backcall in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (0.2.0)\nRequirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (2.6.1)\nRequirement already satisfied: pexpect; sys_platform != \"win32\" in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (4.8.0)\nRequirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (0.7.5)\nRequirement already satisfied: prompt-toolkit<2.0.0,>=1.0.15 in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (1.0.18)\nRequirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (4.4.2)\nRequirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (5.0.5)\nRequirement already satisfied: simplegeneric>0.8 in 
/usr/local/lib/python3.7/dist-packages (from ipython==6.4.0->-r requirements.txt (line 6)) (0.8.1)\nRequirement already satisfied: tqdm<5.0,>=4.11.2 in /usr/local/lib/python3.7/dist-packages (from moviepy==1.0.0->-r requirements.txt (line 7)) (4.41.1)\nCollecting proglog<=1.0.0\n Downloading https://files.pythonhosted.org/packages/fe/ab/4cb19b578e1364c0b2d6efd6521a8b4b4e5c4ae6528041d31a2a951dd991/proglog-0.1.9.tar.gz\nCollecting imageio_ffmpeg>=0.2.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/89/0f/4b49476d185a273163fa648eaf1e7d4190661d1bbf37ec2975b84df9de02/imageio_ffmpeg-0.4.3-py3-none-manylinux2010_x86_64.whl (26.9MB)\n\u001b[K |████████████████████████████████| 26.9MB 169kB/s \n\u001b[?25hCollecting EasyProcess\n Downloading https://files.pythonhosted.org/packages/48/3c/75573613641c90c6d094059ac28adb748560d99bd27ee6f80cce398f404e/EasyProcess-0.3-py2.py3-none-any.whl\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.8.0->-r requirements.txt (line 9)) (3.7.4.3)\nRequirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym[atari]==0.17.2->-r requirements.txt (line 1)) (0.16.0)\nRequirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.10->mujoco-py==2.0.2.13->-r requirements.txt (line 2)) (2.20)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard==2.3.0->-r requirements.txt (line 3)) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard==2.3.0->-r requirements.txt (line 3)) (2020.12.5)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard==2.3.0->-r requirements.txt (line 3)) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard==2.3.0->-r requirements.txt (line 3)) (3.0.4)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard==2.3.0->-r requirements.txt (line 3)) (4.2.1)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard==2.3.0->-r requirements.txt (line 3)) (0.2.8)\nRequirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3.6\" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard==2.3.0->-r requirements.txt (line 3)) (4.7.2)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard==2.3.0->-r requirements.txt (line 3)) (1.3.0)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard==2.3.0->-r requirements.txt (line 3)) (3.7.2)\nRequirement already satisfied: parso<0.9.0,>=0.8.0 in /usr/local/lib/python3.7/dist-packages (from jedi>=0.10->ipython==6.4.0->-r requirements.txt (line 6)) (0.8.1)\nRequirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != \"win32\"->ipython==6.4.0->-r requirements.txt (line 6)) (0.7.0)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from 
prompt-toolkit<2.0.0,>=1.0.15->ipython==6.4.0->-r requirements.txt (line 6)) (0.2.5)\nRequirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.2->ipython==6.4.0->-r requirements.txt (line 6)) (0.2.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard==2.3.0->-r requirements.txt (line 3)) (0.4.8)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard==2.3.0->-r requirements.txt (line 3)) (3.1.0)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard==2.3.0->-r requirements.txt (line 3)) (3.4.1)\nBuilding wheels for collected packages: gym, moviepy, ipdb, proglog\n Building wheel for gym (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for gym: filename=gym-0.17.2-cp37-none-any.whl size=1650891 sha256=ae4b67ef36dd067a19a946a200449760da5181c3af6de338e4a95910553cf6ba\n Stored in directory: /root/.cache/pip/wheels/87/e0/91/f56e44e8062f8cd549673da49f59e1d4fe8b17398119b1d221\n Building wheel for moviepy (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for moviepy: filename=moviepy-1.0.0-cp37-none-any.whl size=131366 sha256=10f61a576f2f3cbe7441ff1d2c0481e3defa3741c64ed4f0671bb9172082473e\n Stored in directory: /root/.cache/pip/wheels/52/e2/4c/f594a5945bc98e052ef248b46a0f1f7ea838b0b2a5f8895651\n Building wheel for ipdb (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for ipdb: filename=ipdb-0.13.3-cp37-none-any.whl size=10848 sha256=f8bb6001cace4764b65aaae51ed89e6ce525782bf446fd4de6f58aef0485f3f0\n Stored in directory: /root/.cache/pip/wheels/75/00/30/4169bcc3643f0cf946dcf37af1b71364b390c4df91da02b03c\n Building wheel for proglog (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for proglog: filename=proglog-0.1.9-cp37-none-any.whl size=6148 sha256=2db2106aeea920b32058fae166a1c12169c864e49c84d6c81ac6328a0f419972\n Stored in directory: /root/.cache/pip/wheels/65/56/60/1d0306a8d90b188af393c1812ddb502a8821b70917f82dcc00\nSuccessfully built gym moviepy ipdb proglog\n\u001b[31mERROR: tensorflow 2.4.1 has requirement numpy~=1.19.2, but you'll have numpy 1.20.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: tensorflow 2.4.1 has requirement tensorboard~=2.4, but you'll have tensorboard 2.3.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: plotnine 0.6.0 has requirement matplotlib>=3.1.1, but you'll have matplotlib 2.2.2 which is incompatible.\u001b[0m\n\u001b[31mERROR: moviepy 1.0.0 has requirement imageio<3.0,>=2.5; python_version >= \"3.4\", but you'll have imageio 2.4.1 which is incompatible.\u001b[0m\n\u001b[31mERROR: mizani 0.6.0 has requirement matplotlib>=3.1.1, but you'll have matplotlib 2.2.2 which is incompatible.\u001b[0m\n\u001b[31mERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 6.4.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\n\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\nInstalling collected packages: numpy, opencv-python, gym, tensorboard, tensorboardX, matplotlib, ipython, proglog, imageio-ffmpeg, moviepy, EasyProcess, pyvirtualdisplay, ipdb, box2d-py\n Found existing installation: numpy 1.19.5\n Uninstalling numpy-1.19.5:\n Successfully uninstalled numpy-1.19.5\n Found existing installation: opencv-python 4.1.2.30\n Uninstalling opencv-python-4.1.2.30:\n Successfully uninstalled opencv-python-4.1.2.30\n Found existing installation: gym 0.17.3\n Uninstalling gym-0.17.3:\n Successfully uninstalled gym-0.17.3\n Found existing installation: tensorboard 2.4.1\n Uninstalling tensorboard-2.4.1:\n Successfully uninstalled tensorboard-2.4.1\n Found existing installation: matplotlib 3.2.2\n Uninstalling matplotlib-3.2.2:\n Successfully uninstalled matplotlib-3.2.2\n Found existing installation: ipython 5.5.0\n Uninstalling ipython-5.5.0:\n Successfully uninstalled ipython-5.5.0\n Found existing installation: moviepy 0.2.3.5\n Uninstalling moviepy-0.2.3.5:\n Successfully uninstalled moviepy-0.2.3.5\nSuccessfully installed EasyProcess-0.3 box2d-py-2.3.8 gym-0.17.2 imageio-ffmpeg-0.4.3 ipdb-0.13.3 ipython-6.4.0 matplotlib-2.2.2 moviepy-1.0.0 numpy-1.20.0 opencv-python-4.4.0.42 proglog-0.1.9 pyvirtualdisplay-1.3.2 tensorboard-2.3.0 tensorboardX-1.8\n"
],
[
"#@title imports\n# As usual, a bit of setup\nimport os\nimport shutil\nimport time\nimport numpy as np\nimport torch\n\nimport deeprl.infrastructure.pytorch_util as ptu\n\nfrom deeprl.infrastructure.rl_trainer import RL_Trainer\nfrom deeprl.infrastructure.trainers import BC_Trainer\nfrom deeprl.agents.bc_agent import BCAgent\nfrom deeprl.policies.loaded_gaussian_policy import LoadedGaussianPolicy\nfrom deeprl.policies.MLP_policy import MLPPolicySL\n\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef remove_folder(path):\n # check if folder exists\n if os.path.exists(path): \n print(\"Clearing old results at {}\".format(path))\n # remove if exists\n shutil.rmtree(path)\n else:\n print(\"Folder {} does not exist yet. No old results to delete\".format(path))",
"_____no_output_____"
],
[
"bc_base_args_dict = dict(\n expert_policy_file = 'deeprl/policies/experts/Hopper.pkl', #@param\n expert_data = 'deeprl/expert_data/expert_data_Hopper-v2.pkl', #@param\n env_name = 'Hopper-v2', #@param ['Ant-v2', 'Humanoid-v2', 'Walker2d-v2', 'HalfCheetah-v2', 'Hopper-v2']\n exp_name = 'test_bc', #@param\n do_dagger = True, #@param {type: \"boolean\"}\n ep_len = 1000, #@param {type: \"integer\"}\n save_params = False, #@param {type: \"boolean\"}\n\n # Training\n num_agent_train_steps_per_iter = 1000, #@param {type: \"integer\"})\n n_iter = 1, #@param {type: \"integer\"})\n\n # batches & buffers\n batch_size = 10000, #@param {type: \"integer\"})\n eval_batch_size = 1000, #@param {type: \"integer\"}\n train_batch_size = 100, #@param {type: \"integer\"}\n max_replay_buffer_size = 1000000, #@param {type: \"integer\"}\n\n #@markdown network\n n_layers = 2, #@param {type: \"integer\"}\n size = 64, #@param {type: \"integer\"}\n learning_rate = 5e-3, #@param {type: \"number\"}\n\n #@markdown logging\n video_log_freq = -1, #@param {type: \"integer\"}\n scalar_log_freq = 1, #@param {type: \"integer\"}\n\n #@markdown gpu & run-time settings\n no_gpu = False, #@param {type: \"boolean\"}\n which_gpu = 0, #@param {type: \"integer\"}\n seed = 2, #@param {type: \"integer\"}\n logdir = 'test',\n)\n",
"_____no_output_____"
]
],
[
[
"# Infrastructure\n**Policies**: We have provided implementations of simple neural network policies for your convenience. For discrete environments, the neural network takes in the current state and outputs the logits of the policy's action distribution at this state. The policy then outputs a categorical distribution using those logits. In environments with continuous action spaces, the network will output the mean of a diagonal Gaussian distribution, as well as having a separate single parameter for the log standard deviations of the Gaussian. \n\nCalling forward on the policy will output a torch distribution object, so look at the documentation at https://pytorch.org/docs/stable/distributions.html.\nLook at <code>policies/MLP_policy</code> to make sure you understand the implementation.\n\n**RL Training Loop**: The reinforcement learning training loop, which alternates between gathering samples from the environment and updating the policy (and other learned functions) can be found in <code>infrastructure/rl_trainer.py</code>. While you won't need to understand this for the basic behavior cloning part (as you only use a fixed set of expert data), you should read through and understand the run_training_loop function before starting the Dagger implementation.",
"_____no_output_____"
],
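For illustration of the policy interface described above — a minimal sketch, not the provided `MLPPolicySL`; the class name, layer sizes, and builder here are assumptions — a continuous-action policy can return a `torch.distributions.Normal` built from a mean network plus a single learned log-std vector:

```python
import torch
from torch import nn, distributions

class DiagGaussianPolicy(nn.Module):
    """Minimal sketch of a continuous-action policy (assumed names/sizes)."""
    def __init__(self, ob_dim, ac_dim, hidden=64):
        super().__init__()
        # Mean network: observation -> mean of the action distribution
        self.mean_net = nn.Sequential(
            nn.Linear(ob_dim, hidden), nn.Tanh(), nn.Linear(hidden, ac_dim))
        # Separate single parameter for the log standard deviations
        self.logstd = nn.Parameter(torch.zeros(ac_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        # Calling forward returns a torch distribution object, as noted above
        return distributions.Normal(mean, torch.exp(self.logstd))
```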
[
"# Basic Behavior Cloning\nThe first part of the assignment will be a familiar exercise in supervised learning. Given a dataset of expert trajectories, we will simply train our policy to imitate the expert via maximum likelihood. Fill out the update method in the MLPPolicySL class in <code>policies/MLP_policy.py</code>.",
"_____no_output_____"
]
],
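In sketch form, the maximum-likelihood objective described here is the negative log-probability of the expert actions under the policy's predicted distribution. The helper below is an illustrative assumption (names and bookkeeping are not the graded `MLPPolicySL.update`):

```python
import numpy as np
import torch

def bc_update(policy, optimizer, observations, actions):
    """One behavior-cloning step: maximize log-likelihood of expert actions."""
    obs = torch.as_tensor(np.asarray(observations), dtype=torch.float32)
    acts = torch.as_tensor(np.asarray(actions), dtype=torch.float32)
    dist = policy(obs)                                 # torch distribution
    loss = -dist.log_prob(acts).sum(dim=-1).mean()     # negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return {'Training Loss': loss.item()}
```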
[
[
"### Basic test for correctness of loss and gradients\ntorch.manual_seed(0)\nac_dim = 2\nob_dim = 3\nbatch_size = 5\n\npolicy = MLPPolicySL(\n ac_dim=ac_dim,\n ob_dim=ob_dim,\n n_layers=1,\n size=2,\n learning_rate=0.25)\n\nnp.random.seed(0)\nobs = np.random.normal(size=(batch_size, ob_dim))\nacts = np.random.normal(size=(batch_size, ac_dim))\n\nfirst_weight_before = np.array(ptu.to_numpy(next(policy.mean_net.parameters())))\nprint(\"Weight before update\", first_weight_before)\n\nfor i in range(5):\n loss = policy.update(obs, acts)['Training Loss']\n\nprint(loss)\nexpected_loss = 2.628419\nloss_error = rel_error(loss, expected_loss)\nprint(\"Loss Error\", loss_error, \"should be on the order of 1e-6 or lower\")\n\nfirst_weight_after = ptu.to_numpy(next(policy.mean_net.parameters()))\nprint('Weight after update', first_weight_after)\n\nweight_change = first_weight_after - first_weight_before\nprint(\"Change in weights\", weight_change)\n\nexpected_change = np.array([[ 0.04385546, -0.4614172, -1.0613215 ],\n [ 0.20986436, -1.2060736, -1.0026767 ]])\nupdated_weight_error = rel_error(weight_change, expected_change)\nprint(\"Weight Update Error\", updated_weight_error, \"should be on the order of 1e-6 or lower\")\n",
"Weight before update [[-0.00432252 0.30971584 -0.47518533]\n [-0.4248946 -0.22236897 0.15482073]]\n[[-1.4790758 0.69899046]\n [-1.1322743 0.54298437]\n [-0.2453982 0.36416277]\n [-1.3827623 -1.2013232 ]\n [-1.5841014 -0.08110598]]\ntensor([[-0.2873, 1.0582],\n [-0.7216, -1.1946],\n [-0.8755, -0.9074],\n [-1.2642, -0.5128],\n [-1.8740, -0.9291]])\n"
]
],
[
[
"Having implemented our behavior cloning loss, we can now start training some policies to imitate the expert policies provided. \n\nRun the following cell to train policies with simple behavior cloning on the HalfCheetah environment.",
"_____no_output_____"
]
],
[
[
"bc_args = dict(bc_base_args_dict)\n\nenv_str = 'HalfCheetah'\nbc_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)\nbc_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)\nbc_args['env_name'] = '{}-v2'.format(env_str)\n\n# Delete all previous logs\nremove_folder('logs/behavior_cloning/{}'.format(env_str))\n\nfor seed in range(3):\n print(\"Running behavior cloning experiment with seed\", seed)\n bc_args['seed'] = seed\n bc_args['logdir'] = 'logs/behavior_cloning/{}/seed{}'.format(env_str, seed)\n bctrainer = BC_Trainer(bc_args)\n bctrainer.run_training_loop()",
"_____no_output_____"
]
],
[
[
"Visualize your results using Tensorboard. You should see that on HalfCheetah, the returns of your learned policies (Eval_AverageReturn) are fairly similar (thought a bit lower) to that of the expert (Initial_DataCollection_Average_Return).",
"_____no_output_____"
]
],
[
[
"### Visualize behavior cloning results on HalfCheetah\n%load_ext tensorboard\n%tensorboard --logdir logs/behavior_cloning/HalfCheetah",
"_____no_output_____"
]
],
[
[
"Now run the following cell to train policies with simple behavior cloning on Hopper.",
"_____no_output_____"
]
],
[
[
"bc_args = dict(bc_base_args_dict)\n\nenv_str = 'Hopper'\nbc_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)\nbc_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)\nbc_args['env_name'] = '{}-v2'.format(env_str)\n\n# Delete all previous logs\nremove_folder('logs/behavior_cloning/{}'.format(env_str))\n\nfor seed in range(3):\n print(\"Running behavior cloning experiment on Hopper with seed\", seed)\n bc_args['seed'] = seed\n bc_args['logdir'] = 'logs/behavior_cloning/{}/seed{}'.format(env_str, seed)\n bctrainer = BC_Trainer(bc_args)\n bctrainer.run_training_loop()",
"_____no_output_____"
]
],
[
[
"Visualize your results using Tensorboard. You should see that on Hopper, the returns of your learned policies (Eval_AverageReturn) are substantially lower than that of the expert (Initial_DataCollection_Average_Return), due to the distribution shift issues that arise when doing naive behavior cloning.",
"_____no_output_____"
]
],
[
[
"### Visualize behavior cloning results on Hopper\n%load_ext tensorboard\n%tensorboard --logdir logs/behavior_cloning/Hopper",
"_____no_output_____"
]
],
[
[
"# Dataset Aggregation\nAs discussed in lecture, behavior cloning can suffer from distribution shift, as a small mismatch between the learned and expert policy can take the learned policy to new states that were unseen during training, on which the learned policy hasn't been trained. In Dagger, we will address this issue iteratively, where we use our expert policy to provide labels for the new states we encounter with our learned policy, and then retrain our policy on these newly labeled states.\n\nImplement the <code>do_relabel_with_expert</code> function in <code>infrastructure/rl_trainer.py</code>. The errors in the expert actions should be on the order of 1e-6 or less.",
"_____no_output_____"
]
],
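Conceptually, the relabeling step keeps the observations of each collected path and overwrites the actions with whatever the expert would have done at those states. The sketch below assumes the expert policy exposes a batched `get_action`; the real function lives in `infrastructure/rl_trainer.py` and may differ:

```python
def do_relabel_with_expert_sketch(expert_policy, paths):
    """Sketch of Dagger relabeling: keep observations, overwrite actions."""
    for path in paths:
        # Query the expert on every observation in the path (assumed batched call)
        path['action'] = expert_policy.get_action(path['observation'])
    return paths
```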
[
[
"### Test do relabel function\nbc_args = dict(bc_base_args_dict)\n\nenv_str = 'Hopper'\nbc_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)\nbc_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)\nbc_args['env_name'] = '{}-v2'.format(env_str)\nbctrainer = BC_Trainer(bc_args)\n\nnp.random.seed(0)\nT = 2\nob_dim = 11\nac_dim = 3\n\npaths = []\nfor i in range(3):\n obs = np.random.normal(size=(T, ob_dim))\n acs = np.random.normal(size=(T, ac_dim))\n paths.append(dict(observation=obs,\n action=acs))\n \nrl_trainer = bctrainer.rl_trainer\nrelabeled_paths = rl_trainer.do_relabel_with_expert(bctrainer.loaded_expert_policy, paths)\n\nexpert_actions = np.array([[[-1.7814021, -0.11137983, 1.763353 ],\n [-2.589222, -5.463195, 2.4301376 ]],\n [[-2.8287444, -5.298558, 3.0320463],\n [ 3.9611065, 2.626403, -2.8639293]],\n [[-0.3055225, -0.9865407, 0.80830705],\n [ 2.8788857, 3.5550566, -0.92875874]]])\n\nfor i, (path, relabeled_path) in enumerate(zip(paths, relabeled_paths)):\n assert np.all(path['observation'] == relabeled_path['observation'])\n print(\"Path {} expert action error\".format(i), rel_error(expert_actions[i], relabeled_path['action']))",
"_____no_output_____"
]
],
[
[
"We can run Dagger on the Hopper env again.",
"_____no_output_____"
]
],
[
[
"dagger_args = dict(bc_base_args_dict)\n\ndagger_args['do_dagger'] = True\ndagger_args['n_iter'] = 10\n\nenv_str = 'Hopper'\ndagger_args['expert_policy_file'] = 'deeprl/policies/experts/{}.pkl'.format(env_str)\ndagger_args['expert_data'] = 'deeprl/expert_data/expert_data_{}-v2.pkl'.format(env_str)\ndagger_args['env_name'] = '{}-v2'.format(env_str)\n",
"_____no_output_____"
],
[
"# Delete all previous logs\nremove_folder('logs/dagger/{}'.format(env_str))\n\nfor seed in range(3):\n print(\"Running Dagger experiment with seed\", seed)\n dagger_args['seed'] = seed\n dagger_args['logdir'] = 'logs/dagger/{}/seed{}'.format(env_str, seed)\n bctrainer = BC_Trainer(dagger_args)\n bctrainer.run_training_loop()",
"_____no_output_____"
]
],
[
[
"Visualizing the Dagger results on Hopper, we see that Dagger is able to recover the performance of the expert policy after a few iterations of online interaction and expert relabeling.",
"_____no_output_____"
]
],
[
[
"### Visualize Dagger results on Hopper\n%load_ext tensorboard\n%tensorboard --logdir logs/dagger/Hopper",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb44560fba7c06c0d70e044a1150d45dd957dc
| 4,296 |
ipynb
|
Jupyter Notebook
|
11_ene_Daa.ipynb
|
BrandonSanchezMorales/daa_2021_1
|
b1540d1f19c945c19144ab543c2619a26ccbe448
|
[
"MIT"
] | null | null | null |
11_ene_Daa.ipynb
|
BrandonSanchezMorales/daa_2021_1
|
b1540d1f19c945c19144ab543c2619a26ccbe448
|
[
"MIT"
] | null | null | null |
11_ene_Daa.ipynb
|
BrandonSanchezMorales/daa_2021_1
|
b1540d1f19c945c19144ab543c2619a26ccbe448
|
[
"MIT"
] | null | null | null | 32.545455 | 594 | 0.491155 |
[
[
[
"<a href=\"https://colab.research.google.com/github/BrandonSanchezMorales/daa_2021_1/blob/master/11_ene_Daa.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"class NodoArbol:\r\n def_init_( self , value , left=None , rigth=None:\r\n self.data = value\r\n self.left = left\r\n self.rigth = rigth\r\n",
"_____no_output_____"
],
[
"arbol = NodoArbol(\"R\", NodoArbol(\"C\") , NodoArbol(\"H\"))",
"_____no_output_____"
],
[
"nodo1 = NodoArbol(\"C\") \r\nnodo2 = NodoArbol(\"H\")\r\narbol_v2 = NodoArbol(\"R\", nodo1, nodo2 ) ",
"_____no_output_____"
],
[
"print(arbol.rigth.data)\r\nprint(arbol_v2.rigth.data)",
"_____no_output_____"
],
[
"arbol2 =NodoArbol(4,NodoArbol(3,NodoArbol(2,NodoArbol(2))),NodoArbol(5))",
"_____no_output_____"
],
[
"arbol = ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adb45db5307a85887028a995a18e5f2039abedb
| 156,778 |
ipynb
|
Jupyter Notebook
|
notebooks/cnn_mnist_simple.ipynb
|
smwgf/Tensorflow-101
|
d5bc262378a70c72242debda4d5198f429e1102d
|
[
"MIT"
] | null | null | null |
notebooks/cnn_mnist_simple.ipynb
|
smwgf/Tensorflow-101
|
d5bc262378a70c72242debda4d5198f429e1102d
|
[
"MIT"
] | null | null | null |
notebooks/cnn_mnist_simple.ipynb
|
smwgf/Tensorflow-101
|
d5bc262378a70c72242debda4d5198f429e1102d
|
[
"MIT"
] | null | null | null | 170.41087 | 9,528 | 0.901179 |
[
[
[
"## SIMPLE CONVOLUTIONAL NEURAL NETWORK",
"_____no_output_____"
]
],
[
[
"import numpy as np\n# import tensorflow as tf\nimport tensorflow.compat.v1 as tf\nimport matplotlib.pyplot as plt\n# from tensorflow.examples.tutorials.mnist import input_data\n%matplotlib inline \nprint (\"PACKAGES LOADED\")",
"PACKAGES LOADED\n"
]
],
[
[
"# LOAD MNIST",
"_____no_output_____"
]
],
[
[
"def OnehotEncoding(target):\n from sklearn.preprocessing import OneHotEncoder\n target_re = target.reshape(-1,1)\n enc = OneHotEncoder()\n enc.fit(target_re)\n return enc.transform(target_re).toarray()\n\ndef SuffleWithNumpy(data_x, data_y):\n idx = np.random.permutation(len(data_x))\n x,y = data_x[idx], data_y[idx]\n return x,y",
"_____no_output_____"
],
[
"# mnist = input_data.read_data_sets('data/', one_hot=True)\n# trainimg = mnist.train.images\n# trainlabel = mnist.train.labels\n# testimg = mnist.test.images\n# testlabel = mnist.test.labels\n# print (\"MNIST ready\")\n\nprint (\"Download and Extract MNIST dataset\")\n# mnist = input_data.read_data_sets('data/', one_hot=True)\nmnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\n\nprint\nprint (\" tpye of 'mnist' is %s\" % (type(mnist)))\nprint (\" number of train data is %d\" % (len(x_train)))\nprint (\" number of test data is %d\" % (len(x_test)))\nnum_train_data = len(x_train)\ntrainimg = x_train\ntrainimg = trainimg.reshape(len(trainimg),784)\ntrainlabel = OnehotEncoding(y_train)\ntestimg = x_test\ntestimg = testimg.reshape(len(testimg),784)\ntestlabel = OnehotEncoding(y_test)\nprint (\"MNIST loaded\")\ntf.disable_eager_execution()",
"Download and Extract MNIST dataset\n tpye of 'mnist' is <class 'tensorflow.python.util.module_wrapper.TFModuleWrapper'>\n number of train data is 60000\n number of test data is 10000\nMNIST loaded\n"
]
],
[
[
"# SELECT DEVICE TO BE USED",
"_____no_output_____"
]
],
[
[
"device_type = \"/gpu:1\"",
"_____no_output_____"
]
],
[
[
"# DEFINE CNN ",
"_____no_output_____"
]
],
[
[
"with tf.device(device_type): # <= This is optional\n n_input = 784\n n_output = 10\n weights = {\n 'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)),\n 'wd1': tf.Variable(tf.random_normal([14*14*64, n_output], stddev=0.1))\n }\n biases = {\n 'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)),\n 'bd1': tf.Variable(tf.random_normal([n_output], stddev=0.1))\n }\n def conv_simple(_input, _w, _b):\n # Reshape input\n _input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])\n # Convolution\n _conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')\n # Add-bias\n _conv2 = tf.nn.bias_add(_conv1, _b['bc1'])\n # Pass ReLu\n _conv3 = tf.nn.relu(_conv2)\n # Max-pooling\n _pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n # Vectorize\n _dense = tf.reshape(_pool, [-1, _w['wd1'].get_shape().as_list()[0]])\n # Fully-connected layer\n _out = tf.add(tf.matmul(_dense, _w['wd1']), _b['bd1'])\n # Return everything\n out = {\n 'input_r': _input_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3\n , 'pool': _pool, 'dense': _dense, 'out': _out\n }\n return out\nprint (\"CNN ready\")",
"CNN ready\n"
]
],
[
[
"# DEFINE COMPUTATIONAL GRAPH",
"_____no_output_____"
]
],
[
[
"# tf Graph input\nx = tf.placeholder(tf.float32, [None, n_input])\ny = tf.placeholder(tf.float32, [None, n_output])\n# Parameters\nlearning_rate = 0.001\ntraining_epochs = 10\nbatch_size = 10\ndisplay_step = 1\n# Functions! \nwith tf.device(device_type): # <= This is optional\n _pred = conv_simple(x, weights, biases)['out']\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=_pred))\n optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n _corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) # Count corrects\n accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # Accuracy\n init = tf.global_variables_initializer()\n# Saver \nsave_step = 1;\nsavedir = \"nets/\"\nsaver = tf.train.Saver(max_to_keep=3) \nprint (\"Network Ready to Go!\")",
"WARNING:tensorflow:From d:\\program\\python_3_8_5\\lib\\site-packages\\tensorflow\\python\\util\\dispatch.py:201: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\n\nFuture major versions of TensorFlow will allow gradients to flow\ninto the labels input on backprop by default.\n\nSee `tf.nn.softmax_cross_entropy_with_logits_v2`.\n\nNetwork Ready to Go!\n"
]
],
[
[
"# OPTIMIZE\n## DO TRAIN OR NOT",
"_____no_output_____"
]
],
[
[
"do_train = 1\n# check operation gpu or cpu\n# sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))\nconfig = tf.ConfigProto()\n# config.gpu_options.allow_growth = True\n\n\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.4\n\n \nconfig.allow_soft_placement=True\nsess = tf.Session(config=config)\n\n\nsess.run(init)",
"_____no_output_____"
],
[
"len(testimg)",
"_____no_output_____"
],
[
"if do_train == 1:\n for epoch in range(training_epochs):\n avg_cost = 0.\n total_batch = int(num_train_data/batch_size)\n # Loop over all batches\n for i in range(total_batch):\n batch_xs=trainimg[i*batch_size:(i+1)*batch_size]\n batch_ys=trainlabel[i*batch_size:(i+1)*batch_size]\n # Fit training using batch data\n sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})\n # Compute average loss\n avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch\n\n # Display logs per epoch step\n if (epoch +1)% display_step == 0: \n print (\"Epoch: %03d/%03d cost: %.9f\" % (epoch+1, training_epochs, avg_cost))\n total_batch = int(num_train_data/batch_size)\n train_acc=0\n for i in range(total_batch):\n batch_xs=trainimg[i*batch_size:(i+1)*batch_size]\n batch_ys=trainlabel[i*batch_size:(i+1)*batch_size]\n train_acc = train_acc + sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})\n print (\" Training accuracy: %.3f\" % (train_acc/total_batch))\n \n# randidx = np.random.randint(len(testimg), size=1000)\n# batch_test_xs = testimg[randidx, :]\n# batch_test_ys = testlabel[randidx, :] \n# test_acc = sess.run(accr, feed_dict={x: batch_test_xs, y: batch_test_ys})\n total_batch = int(len(testimg)/batch_size)\n test_acc=0\n for i in range(total_batch):\n batch_xs=testimg[i*batch_size:(i+1)*batch_size]\n batch_ys=testlabel[i*batch_size:(i+1)*batch_size]\n test_acc = test_acc + sess.run(accr, feed_dict={x: batch_xs, y: batch_ys}) \n \n print (\" Test accuracy: %.3f\" % (test_acc/total_batch))\n\n # Save Net\n if epoch % save_step == 0:\n saver.save(sess, \"nets/cnn_mnist_simple.ckpt-\" + str(epoch))\n trainimg,trainlabel = SuffleWithNumpy(trainimg,trainlabel)\n print (\"Optimization Finished.\")",
"Epoch: 000/010 cost: 0.035667057\n Training accuracy: 0.986\n Test accuracy: 0.980\nEpoch: 001/010 cost: 0.027397077\n Training accuracy: 0.994\n Test accuracy: 0.985\nEpoch: 002/010 cost: 0.018888790\n Training accuracy: 0.997\n Test accuracy: 0.985\nEpoch: 003/010 cost: 0.013064439\n Training accuracy: 0.994\n Test accuracy: 0.984\nWARNING:tensorflow:From d:\\program\\python_3_8_5\\lib\\site-packages\\tensorflow\\python\\training\\saver.py:969: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse standard file APIs to delete files with this prefix.\nEpoch: 004/010 cost: 0.009839889\n Training accuracy: 0.998\n Test accuracy: 0.985\nEpoch: 005/010 cost: 0.006764985\n Training accuracy: 0.998\n Test accuracy: 0.983\nEpoch: 006/010 cost: 0.004653526\n Training accuracy: 0.996\n Test accuracy: 0.983\nEpoch: 007/010 cost: 0.003468891\n Training accuracy: 0.999\n Test accuracy: 0.983\nEpoch: 008/010 cost: 0.002509124\n Training accuracy: 0.998\n Test accuracy: 0.984\nEpoch: 009/010 cost: 0.001840721\n Training accuracy: 0.998\n Test accuracy: 0.983\nOptimization Finished.\n"
]
],
[
[
"# RESTORE ",
"_____no_output_____"
]
],
[
[
"do_train = 0\nif do_train == 0:\n epoch = training_epochs-1\n# epoch = 3\n saver.restore(sess, \"nets/cnn_mnist_simple.ckpt-\" + str(epoch))\n print (\"NETWORK RESTORED\")",
"INFO:tensorflow:Restoring parameters from nets/cnn_mnist_simple.ckpt-9\nNETWORK RESTORED\n"
]
],
[
[
"# LET'S SEE HOW CNN WORKS",
"_____no_output_____"
]
],
[
[
"with tf.device(device_type):\n conv_out = conv_simple(x, weights, biases)\n\ninput_r = sess.run(conv_out['input_r'], feed_dict={x: trainimg[0:1, :]})\nconv1 = sess.run(conv_out['conv1'], feed_dict={x: trainimg[0:1, :]})\nconv2 = sess.run(conv_out['conv2'], feed_dict={x: trainimg[0:1, :]})\nconv3 = sess.run(conv_out['conv3'], feed_dict={x: trainimg[0:1, :]})\npool = sess.run(conv_out['pool'], feed_dict={x: trainimg[0:1, :]})\ndense = sess.run(conv_out['dense'], feed_dict={x: trainimg[0:1, :]})\nout = sess.run(conv_out['out'], feed_dict={x: trainimg[0:1, :]})",
"_____no_output_____"
]
],
[
[
"# Input",
"_____no_output_____"
]
],
[
[
"# Let's see 'input_r'\nprint (\"Size of 'input_r' is %s\" % (input_r.shape,))\nlabel = np.argmax(trainlabel[0, :])\nprint (\"Label is %d\" % (label))\n\n# Plot ! \nplt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))\nplt.title(\"Label of this image is \" + str(label) + \"\")\nplt.colorbar()\nplt.show()",
"Size of 'input_r' is (1, 28, 28, 1)\nLabel is 1\n"
]
],
[
[
"# Conv1 (convolution)",
"_____no_output_____"
]
],
[
[
"# Let's see 'conv1'\nprint (\"Size of 'conv1' is %s\" % (conv1.shape,))\n\n# Plot ! \nfor i in range(3):\n plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))\n plt.title(str(i) + \"th conv1\")\n plt.colorbar()\n plt.show() ",
"Size of 'conv1' is (1, 28, 28, 64)\n"
]
],
[
[
"# Conv2 (+bias)",
"_____no_output_____"
]
],
[
[
"# Let's see 'conv2'\nprint (\"Size of 'conv2' is %s\" % (conv2.shape,))\n\n# Plot ! \nfor i in range(3):\n plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))\n plt.title(str(i) + \"th conv2\")\n plt.colorbar()\n plt.show() ",
"Size of 'conv2' is (1, 28, 28, 64)\n"
]
],
[
[
"# Conv3 (ReLU)",
"_____no_output_____"
]
],
[
[
"# Let's see 'conv3'\nprint (\"Size of 'conv3' is %s\" % (conv3.shape,))\n\n# Plot ! \nfor i in range(3):\n plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))\n plt.title(str(i) + \"th conv3\")\n plt.colorbar()\n plt.show() ",
"Size of 'conv3' is (1, 28, 28, 64)\n"
]
],
[
[
"# Pool (max_pool)",
"_____no_output_____"
]
],
[
[
"# Let's see 'pool'\nprint (\"Size of 'pool' is %s\" % (pool.shape,))\n\n# Plot ! \nfor i in range(3):\n plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))\n plt.title(str(i) + \"th pool\")\n plt.colorbar()\n plt.show() ",
"Size of 'pool' is (1, 14, 14, 64)\n"
]
],
[
[
"# Dense",
"_____no_output_____"
]
],
[
[
"# Let's see 'dense'\nprint (\"Size of 'dense' is %s\" % (dense.shape,))\n# Let's see 'out'\nprint (\"Size of 'out' is %s\" % (out.shape,))\nplt.matshow(out, cmap=plt.get_cmap('gray'))\nplt.title(\"out\")\nplt.colorbar()\nplt.show() ",
"Size of 'dense' is (1, 12544)\nSize of 'out' is (1, 10)\n"
]
],
[
[
"# Convolution filters",
"_____no_output_____"
]
],
[
[
"# Let's see weight! \nwc1 = sess.run(weights['wc1'])\nprint (\"Size of 'wc1' is %s\" % (wc1.shape,))\n\n# Plot ! \nfor i in range(3):\n plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))\n plt.title(str(i) + \"th conv filter\")\n plt.colorbar()\n plt.show() ",
"Size of 'wc1' is (3, 3, 1, 64)\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb4e235757fb9088f2767c523e154ea9b71ce2
| 14,713 |
ipynb
|
Jupyter Notebook
|
docs/tutorials/custom_estimators.ipynb
|
SamuelMarks/lattice
|
f05aa0bf2e85756f7a5f49f1378f0d1e428bea2d
|
[
"Apache-2.0"
] | null | null | null |
docs/tutorials/custom_estimators.ipynb
|
SamuelMarks/lattice
|
f05aa0bf2e85756f7a5f49f1378f0d1e428bea2d
|
[
"Apache-2.0"
] | null | null | null |
docs/tutorials/custom_estimators.ipynb
|
SamuelMarks/lattice
|
f05aa0bf2e85756f7a5f49f1378f0d1e428bea2d
|
[
"Apache-2.0"
] | null | null | null | 33.979215 | 521 | 0.519133 |
[
[
[
"##### Copyright 2020 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# TF Lattice Custom Estimators",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lattice/tutorials/custom_estimators\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/custom_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Overview\n\nYou can use custom estimators to create arbitrarily monotonic models using TFL layers. This guide outlines the steps needed to create such estimators.",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
],
[
"Installing TF Lattice package:",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\n!pip install tensorflow-lattice",
"_____no_output_____"
]
],
[
[
"Importing required packages:",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\nimport logging\nimport numpy as np\nimport pandas as pd\nimport sys\nimport tensorflow_lattice as tfl\nfrom tensorflow import feature_column as fc\n\nfrom tensorflow_estimator.python.estimator.canned import optimizers\nfrom tensorflow_estimator.python.estimator.head import binary_class_head\nlogging.disable(sys.maxsize)",
"_____no_output_____"
]
],
[
[
"Downloading the UCI Statlog (Heart) dataset:",
"_____no_output_____"
]
],
[
[
"csv_file = tf.keras.utils.get_file(\n 'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')\ndf = pd.read_csv(csv_file)\ntarget = df.pop('target')\ntrain_size = int(len(df) * 0.8)\ntrain_x = df[:train_size]\ntrain_y = target[:train_size]\ntest_x = df[train_size:]\ntest_y = target[train_size:]\ndf.head()",
"_____no_output_____"
]
],
[
[
"Setting the default values used for training in this guide:",
"_____no_output_____"
]
],
[
[
"LEARNING_RATE = 0.1\nBATCH_SIZE = 128\nNUM_EPOCHS = 1000",
"_____no_output_____"
]
],
[
[
"## Feature Columns\n\nAs for any other TF estimator, data needs to be passed to the estimator, which is typically via an input_fn and parsed using [FeatureColumns](https://www.tensorflow.org/guide/feature_columns).",
"_____no_output_____"
]
],
[
[
"# Feature columns.\n# - age\n# - sex\n# - ca number of major vessels (0-3) colored by flourosopy\n# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect\nfeature_columns = [\n fc.numeric_column('age', default_value=-1),\n fc.categorical_column_with_vocabulary_list('sex', [0, 1]),\n fc.numeric_column('ca'),\n fc.categorical_column_with_vocabulary_list(\n 'thal', ['normal', 'fixed', 'reversible']),\n]",
"_____no_output_____"
]
],
[
[
"Note that categorical features do not need to be wrapped by a dense feature column, since `tfl.laysers.CategoricalCalibration` layer can directly consume category indices.",
"_____no_output_____"
],
[
"## Creating input_fn\n\nAs for any other estimator, you can use input_fn to feed data to the model for training and evaluation.",
"_____no_output_____"
]
],
[
[
"train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=True,\n batch_size=BATCH_SIZE,\n num_epochs=NUM_EPOCHS,\n num_threads=1)\n\ntest_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=test_x,\n y=test_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=1,\n num_threads=1)",
"_____no_output_____"
]
],
[
[
"## Creating model_fn\n\nThere are several ways to create a custom estimator. Here we will construct a `model_fn` that calls a Keras model on the parsed input tensors. To parse the input features, you can use `tf.feature_column.input_layer`, `tf.keras.layers.DenseFeatures`, or `tfl.estimators.transform_features`. If you use the latter, you will not need to wrap categorical features with dense feature columns, and the resulting tensors will not be concatenated, which makes it easier to use the features in the calibration layers.\n\nTo construct a model, you can mix and match TFL layers or any other Keras layers. Here we create a calibrated lattice Keras model out of TFL layers and impose several monotonicity constraints. We then use the Keras model to create the custom estimator.\n",
"_____no_output_____"
]
],
[
[
"def model_fn(features, labels, mode, config):\n \"\"\"model_fn for the custom estimator.\"\"\"\n del config\n input_tensors = tfl.estimators.transform_features(features, feature_columns)\n inputs = {\n key: tf.keras.layers.Input(shape=(1,), name=key) for key in input_tensors\n }\n\n lattice_sizes = [3, 2, 2, 2]\n lattice_monotonicities = ['increasing', 'none', 'increasing', 'increasing']\n lattice_input = tf.keras.layers.Concatenate(axis=1)([\n tfl.layers.PWLCalibration(\n input_keypoints=np.linspace(10, 100, num=8, dtype=np.float32),\n # The output range of the calibrator should be the input range of\n # the following lattice dimension.\n output_min=0.0,\n output_max=lattice_sizes[0] - 1.0,\n monotonicity='increasing',\n )(inputs['age']),\n tfl.layers.CategoricalCalibration(\n # Number of categories including any missing/default category.\n num_buckets=2,\n output_min=0.0,\n output_max=lattice_sizes[1] - 1.0,\n )(inputs['sex']),\n tfl.layers.PWLCalibration(\n input_keypoints=[0.0, 1.0, 2.0, 3.0],\n output_min=0.0,\n output_max=lattice_sizes[0] - 1.0,\n # You can specify TFL regularizers as tuple\n # ('regularizer name', l1, l2).\n kernel_regularizer=('hessian', 0.0, 1e-4),\n monotonicity='increasing',\n )(inputs['ca']),\n tfl.layers.CategoricalCalibration(\n num_buckets=3,\n output_min=0.0,\n output_max=lattice_sizes[1] - 1.0,\n # Categorical monotonicity can be partial order.\n # (i, j) indicates that we must have output(i) <= output(j).\n # Make sure to set the lattice monotonicity to 'increasing' for this\n # dimension.\n monotonicities=[(0, 1), (0, 2)],\n )(inputs['thal']),\n ])\n output = tfl.layers.Lattice(\n lattice_sizes=lattice_sizes, monotonicities=lattice_monotonicities)(\n lattice_input)\n\n training = (mode == tf.estimator.ModeKeys.TRAIN)\n model = tf.keras.Model(inputs=inputs, outputs=output)\n logits = model(input_tensors, training=training)\n\n if training:\n optimizer = optimizers.get_optimizer_instance_v2('Adagrad', LEARNING_RATE)\n else:\n optimizer = None\n\n head = binary_class_head.BinaryClassHead()\n return head.create_estimator_spec(\n features=features,\n mode=mode,\n labels=labels,\n optimizer=optimizer,\n logits=logits,\n trainable_variables=model.trainable_variables,\n update_ops=model.updates)",
"_____no_output_____"
]
],
[
[
"## Training and Estimator\n\nUsing the `model_fn` we can create and train the estimator.",
"_____no_output_____"
]
],
[
[
"estimator = tf.estimator.Estimator(model_fn=model_fn)\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('AUC: {}'.format(results['auc']))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb515b77ace55e5b581266abc4d6bcf97c9b5c
| 11,664 |
ipynb
|
Jupyter Notebook
|
section_2/02_titanic_random_forest.ipynb
|
yukinaga/minnano_kaggle
|
bffcea83f9f446424367d64842d15ba6d62b3817
|
[
"MIT"
] | 1 |
2022-03-29T12:10:34.000Z
|
2022-03-29T12:10:34.000Z
|
section_2/02_titanic_random_forest.ipynb
|
yukinaga/minnano_kaggle
|
bffcea83f9f446424367d64842d15ba6d62b3817
|
[
"MIT"
] | null | null | null |
section_2/02_titanic_random_forest.ipynb
|
yukinaga/minnano_kaggle
|
bffcea83f9f446424367d64842d15ba6d62b3817
|
[
"MIT"
] | null | null | null | 30.775726 | 254 | 0.520233 |
[
[
[
"<a href=\"https://colab.research.google.com/github/yukinaga/minnano_kaggle/blob/main/section_2/02_titanic_random_forest.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# タイタニック号生存者の予測\n「ランダムフォレスト」という機械学習のアルゴリズムにより、タイタニック号の生存者を予測します。 \n訓練済みのモデルによる予測結果は、csvファイルに保存して提出します。 ",
"_____no_output_____"
],
[
"## データの読み込み\nタイタニック号の乗客データを読み込みます。 \n以下のページからタイタニック号の乗客データをダウロードして、「train.csv」「test.csv」をノートブック環境にアップします。 \nhttps://www.kaggle.com/c/titanic/data\n\n訓練データには乗客が生き残ったどうかを表す\"Survived\"の列がありますが、テストデータにはありません。 \n訓練済みのモデルに、テストデータを入力して判定した結果を提出することになります。 \n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\ntrain_data = pd.read_csv(\"train.csv\") # 訓練データ\ntest_data = pd.read_csv(\"test.csv\") # テストデータ\n\ntrain_data.head()",
"_____no_output_____"
]
],
[
[
"## データの前処理\n判定に使用可能なデータのみを取り出し、データの欠損に対して適切な処理を行います。 \nまた、文字列などのカテゴリ変数に対しては、数値に変換する処理を行います。 \n\n以下のコードでは、訓練データ及びテストデータから判定に使える列のみを取り出しています。",
"_____no_output_____"
]
],
[
[
"test_id = test_data[\"PassengerId\"] # 結果の提出時に使用\n\nlabels = [\"Pclass\",\"Sex\",\"Age\",\"SibSp\",\"Parch\",\"Fare\",\"Embarked\"]\ntrain_data = train_data[labels + [\"Survived\"]]\ntest_data = test_data[labels]\n\ntrain_data.head()",
"_____no_output_____"
]
],
[
[
"`info()`によりデータの全体像を確認することができます。 \nNon-Null Countにより欠損していないデータの数が確認できるので、データの欠損が存在する列を確認しておきます。 ",
"_____no_output_____"
]
],
[
[
"train_data.info()\ntest_data.info()",
"_____no_output_____"
]
],
[
[
"AgeとFare、Embarkedに欠損が存在します。 \nAgeとFareの欠損値には平均値を、Embarkedの欠損値には最頻値をあてがって対処します。 ",
"_____no_output_____"
]
],
[
[
"# Age\nage_mean = train_data[\"Age\"].mean() # 平均値\ntrain_data[\"Age\"] = train_data[\"Age\"].fillna(age_mean)\ntest_data[\"Age\"] = test_data[\"Age\"].fillna(age_mean)\n\n# Fare\nfare_mean = train_data[\"Fare\"].mean() # 平均値\ntrain_data[\"Fare\"] = train_data[\"Fare\"].fillna(fare_mean)\ntest_data[\"Fare\"] = test_data[\"Fare\"].fillna(fare_mean)\n\n# Embarked\nembarked_mode = train_data[\"Embarked\"].mode() # 最頻値\ntrain_data[\"Embarked\"] = train_data[\"Embarked\"].fillna(embarked_mode)\ntest_data[\"Embarked\"] = test_data[\"Embarked\"].fillna(embarked_mode)",
"_____no_output_____"
]
],
[
[
"`get_dummies()`により、カテゴリ変数の列を0か1の値を持つ複数の列に変換します。 ",
"_____no_output_____"
]
],
[
[
"cat_labels = [\"Sex\", \"Pclass\", \"Embarked\"] # カテゴリ変数のラベル\ntrain_data = pd.get_dummies(train_data, columns=cat_labels)\ntest_data = pd.get_dummies(test_data, columns=cat_labels)\n\ntrain_data.head()",
"_____no_output_____"
]
],
[
[
"## モデルの訓練\n入力と正解を用意します。 \n\"Survived\"の列が正解となります。",
"_____no_output_____"
]
],
[
[
"t_train = train_data[\"Survived\"] # 正解\nx_train = train_data.drop(labels=[\"Survived\"], axis=1) # \"Survived\"の列を削除して入力に\nx_test = test_data",
"_____no_output_____"
]
],
[
[
"ランダムフォレストは、複数の決定木を組み合わせた「アンサンブル学習」の一種です。 \nアンサンブル学習は複数の機械学習を組み合わせる手法で、しばしば高い性能を発揮します。 \n\n以下のコードでは、`RandomForestClassifier()`を使い、ランダムフォレストのモデルを作成して訓練します。 \n多数の決定木の多数決により、分類が行われることになります。 ",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\n# n_estimators: 決定木の数 max_depth: 決定木の深さ\nmodel = RandomForestClassifier(n_estimators=100, max_depth=5)\nmodel.fit(x_train, t_train)",
"_____no_output_____"
]
],
[
[
"## 結果の確認と提出\n`feature_importances_`により各特徴量の重要度を取得し、棒グラフで表示します。",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nlabels = x_train.columns\nimportances = model.feature_importances_\n\nplt.figure(figsize = (10,6))\nplt.barh(range(len(importances)), importances)\nplt.yticks(range(len(labels)), labels)\nplt.show()",
"_____no_output_____"
]
],
[
[
"テストデータを使って予測を行います。 \n予測結果には、分類されるクラスを表す「Label」列と、そのクラスに含まれる確率を表す「Score」ラベルが含まれます。 \n形式を整えた上で提出用のcsvファイルとして保存します。",
"_____no_output_____"
]
],
[
[
"# 判定\ny_test = model.predict(x_test)\n\n# 形式を整える\nsurvived_test = pd.Series(y_test, name=\"Survived\")\nsubm_data = pd.concat([test_id, survived_test], axis=1)\n\n# 提出用のcsvファイルを保存\nsubm_data.to_csv(\"submission_titanic.csv\", index=False)\n\nsubm_data",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb576b54da4c7ca80c5726026383c67312e3a0
| 659,816 |
ipynb
|
Jupyter Notebook
|
docs/tut/analysis/07 S Param Simulation.ipynb
|
bicycle315/Qiskit-Metal
|
41790052d79de0cad94e8c716e6ca1316164fa02
|
[
"Apache-2.0"
] | 1 |
2022-01-27T07:11:49.000Z
|
2022-01-27T07:11:49.000Z
|
docs/tut/analysis/07 S Param Simulation.ipynb
|
WhenTheyCry96/qiskit-metal
|
c2e6c0fe3b0bfaad49942cc010af5c9ed2671856
|
[
"Apache-2.0"
] | null | null | null |
docs/tut/analysis/07 S Param Simulation.ipynb
|
WhenTheyCry96/qiskit-metal
|
c2e6c0fe3b0bfaad49942cc010af5c9ed2671856
|
[
"Apache-2.0"
] | null | null | null | 1,007.352672 | 641,000 | 0.954525 |
[
[
[
"# Driven Modal Simulation and S-Parameters",
"_____no_output_____"
],
[
"## Prerequisite\nYou must have a working local installation of Ansys.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport qiskit_metal as metal\nfrom qiskit_metal import designs, draw\nfrom qiskit_metal import MetalGUI, Dict, Headings\nimport pyEPR as epr",
"_____no_output_____"
]
],
[
[
"## Create the design in Metal\nSet up a design of a given dimension. Dimensions will be respected in the design rendering.\n<br>\nNote the chip design is centered at origin (0,0). ",
"_____no_output_____"
]
],
[
[
"design = designs.DesignPlanar({}, True)\ndesign.chips.main.size['size_x'] = '2mm'\ndesign.chips.main.size['size_y'] = '2mm'\n\n#Reference to Ansys hfss QRenderer\nhfss = design.renderers.hfss\n\ngui = MetalGUI(design)",
"_____no_output_____"
]
],
[
[
"Perform the necessary imports.",
"_____no_output_____"
]
],
[
[
"from qiskit_metal.qlibrary.couplers.coupled_line_tee import CoupledLineTee\nfrom qiskit_metal.qlibrary.tlines.meandered import RouteMeander\nfrom qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket\nfrom qiskit_metal.qlibrary.tlines.straight_path import RouteStraight\nfrom qiskit_metal.qlibrary.terminations.open_to_ground import OpenToGround",
"_____no_output_____"
]
],
[
[
"Add 2 transmons to the design.",
"_____no_output_____"
]
],
[
[
"options = dict(\n # Some options we want to modify from the deafults\n # (see below for defaults)\n pad_width = '425 um', \n pocket_height = '650um',\n # Adding 4 connectors (see below for defaults)\n connection_pads=dict( \n a = dict(loc_W=+1,loc_H=+1), \n b = dict(loc_W=-1,loc_H=+1, pad_height='30um'),\n c = dict(loc_W=+1,loc_H=-1, pad_width='200um'),\n d = dict(loc_W=-1,loc_H=-1, pad_height='50um')\n )\n)\n\n## Create 2 transmons\n\nq1 = TransmonPocket(design, 'Q1', options = dict(\n pos_x='+1.4mm', pos_y='0mm', orientation = '90', **options))\nq2 = TransmonPocket(design, 'Q2', options = dict(\n pos_x='-0.6mm', pos_y='0mm', orientation = '90', **options))\n\ngui.rebuild()\ngui.autoscale()",
"_____no_output_____"
]
],
[
[
"Add 2 hangers consisting of capacitively coupled transmission lines.",
"_____no_output_____"
]
],
[
[
"TQ1 = CoupledLineTee(design, 'TQ1', options=dict(pos_x='1mm',\n pos_y='3mm',\n coupling_length='200um'))\nTQ2 = CoupledLineTee(design, 'TQ2', options=dict(pos_x='-1mm',\n pos_y='3mm',\n coupling_length='200um'))\n\ngui.rebuild()\ngui.autoscale()",
"_____no_output_____"
]
],
[
[
"Add 2 meandered CPWs connecting the transmons to the hangers.",
"_____no_output_____"
]
],
[
[
"ops=dict(fillet='90um')\ndesign.overwrite_enabled = True\n\noptions1 = Dict(\n total_length='8mm',\n hfss_wire_bonds = True,\n pin_inputs=Dict(\n start_pin=Dict(\n component='TQ1',\n pin='second_end'),\n end_pin=Dict(\n component='Q1',\n pin='a')),\n lead=Dict(\n start_straight='0.1mm'),\n **ops\n)\n\noptions2 = Dict(\n total_length='9mm',\n hfss_wire_bonds = True,\n pin_inputs=Dict(\n start_pin=Dict(\n component='TQ2',\n pin='second_end'),\n end_pin=Dict(\n component='Q2',\n pin='a')),\n lead=Dict(\n start_straight='0.1mm'),\n **ops\n)\n\nmeanderQ1 = RouteMeander(design, 'meanderQ1', options=options1)\nmeanderQ2 = RouteMeander(design, 'meanderQ2', options=options2)\n\ngui.rebuild()\ngui.autoscale()",
"_____no_output_____"
]
],
[
[
"Add 2 open to grounds at the ends of the horizontal CPW.",
"_____no_output_____"
]
],
[
[
"otg1 = OpenToGround(design, 'otg1', options = dict(pos_x='3mm', \n pos_y='3mm'))\notg2 = OpenToGround(design, 'otg2', options = dict(pos_x = '-3mm', \n pos_y='3mm', \n orientation='180'))\n\ngui.rebuild()\ngui.autoscale()",
"_____no_output_____"
]
],
[
[
"Add 3 straight CPWs that comprise the long horizontal CPW.",
"_____no_output_____"
]
],
[
[
"ops_oR = Dict(hfss_wire_bonds = True,\n pin_inputs=Dict(\n start_pin=Dict(\n component='TQ1',\n pin='prime_end'),\n end_pin=Dict(\n component='otg1',\n pin='open')))\nops_mid = Dict(hfss_wire_bonds = True,\n pin_inputs=Dict(\n start_pin=Dict(\n component='TQ1',\n pin='prime_start'),\n end_pin=Dict(\n component='TQ2',\n pin='prime_end')))\nops_oL = Dict(hfss_wire_bonds = True,\n pin_inputs=Dict(\n start_pin=Dict(\n component='TQ2',\n pin='prime_start'),\n end_pin=Dict(\n component='otg2',\n pin='open')))\n\ncpw_openRight = RouteStraight(design, 'cpw_openRight', options=ops_oR)\ncpw_middle = RouteStraight(design, 'cpw_middle', options=ops_mid)\ncpw_openLeft = RouteStraight(design, 'cpw_openLeft', options=ops_oL)\n\ngui.rebuild()\ngui.autoscale()",
"_____no_output_____"
]
],
[
[
"## Render the qubit from Metal into the HangingResonators design in Ansys. <br>",
"_____no_output_____"
],
[
"Open a new Ansys window, connect to it, and add a driven modal design called HangingResonators to the currently active project.<br>\nIf Ansys is already open, you can skip `hfss.open_ansys()`. <br>\n**Wait for Ansys to fully open before proceeding.**<br> If necessary, also close any Ansys popup windows.",
"_____no_output_____"
]
],
[
[
"hfss.open_ansys()",
"_____no_output_____"
],
[
"hfss.connect_ansys()",
"INFO 11:32AM [connect_project]: Connecting to Ansys Desktop API...\nINFO 11:32AM [load_ansys_project]: \tOpened Ansys App\nINFO 11:32AM [load_ansys_project]: \tOpened Ansys Desktop v2020.2.0\nINFO 11:32AM [load_ansys_project]: \tOpened Ansys Project\n\tFolder: C:/Ansoft/\n\tProject: Project10\nINFO 11:32AM [connect_design]: No active design found (or error getting active design).\nINFO 11:32AM [connect]: \t Connected to project \"Project10\". No design detected\n"
],
[
"hfss.activate_drivenmodal_design(\"HangingResonators\")",
"11:32AM 25s WARNING [activate_drivenmodal_design]: The name=HangingResonators was not in active project. A new design will be inserted to the project. Names in active project are: \n[]. \nINFO 11:32AM [connect_design]: \tOpened active design\n\tDesign: HangingResonators [Solution type: DrivenModal]\nWARNING 11:32AM [connect_setup]: \tNo design setup detected.\nWARNING 11:32AM [connect_setup]: \tCreating drivenmodal default setup.\nINFO 11:32AM [get_setup]: \tOpened setup `Setup` (<class 'pyEPR.ansys.HfssDMSetup'>)\n"
]
],
[
[
"Set the buffer width at the edge of the design to be 0.5 mm in both directions.",
"_____no_output_____"
]
],
[
[
"hfss.options['x_buffer_width_mm'] = 0.5\nhfss.options['y_buffer_width_mm'] = 0.5",
"_____no_output_____"
]
],
[
[
"Here, pin cpw_openRight_end and cpw_openLeft_end are converted into lumped ports, each with an impedance of 50 Ohms. <br>\nNeither of the junctions in Q1 or Q2 are rendered. <br>\nAs a reminder, arguments are given as <br><br>\nFirst parameter: List of components to render (empty list if rendering whole Metal design) <br>\nSecond parameter: List of pins (qcomp, pin) with open endcaps <br>\nThird parameter: List of pins (qcomp, pin, impedance) to render as lumped ports <br>\nFourth parameter: List of junctions (qcomp, qgeometry_name, impedance, draw_ind) to render as lumped ports or as lumped port in parallel with a sheet inductance <br>\nFifth parameter: List of junctions (qcomp, qgeometry_name) to omit altogether during rendering\nSixth parameter: Whether to render chip via box plus buffer or fixed chip size",
"_____no_output_____"
]
],
[
[
"hfss.render_design([], \n [], \n [('cpw_openRight', 'end', 50), ('cpw_openLeft', 'end', 50)], \n [], \n [('Q1', 'rect_jj'), ('Q2', 'rect_jj')],\n True)",
"_____no_output_____"
],
[
"hfss.save_screenshot()",
"_____no_output_____"
],
[
"hfss.add_sweep(setup_name=\"Setup\", \n name=\"Sweep\", \n start_ghz=4.0,\n stop_ghz=8.0,\n count=2001,\n type=\"Interpolating\")",
"INFO 11:33AM [get_setup]: \tOpened setup `Setup` (<class 'pyEPR.ansys.HfssDMSetup'>)\n"
],
[
"hfss.analyze_sweep('Sweep', 'Setup')",
"INFO 11:33AM [get_setup]: \tOpened setup `Setup` (<class 'pyEPR.ansys.HfssDMSetup'>)\nINFO 11:33AM [analyze]: Analyzing setup Setup : Sweep\n"
]
],
[
[
"Plot S, Y, and Z parameters as a function of frequency. <br>\nThe left and right plots display the magnitude and phase, respectively.",
"_____no_output_____"
]
],
[
[
"hfss.plot_params(['S11', 'S21'])",
"_____no_output_____"
],
[
"hfss.plot_params(['Y11', 'Y21'])",
"_____no_output_____"
],
[
"hfss.plot_params(['Z11', 'Z21'])",
"_____no_output_____"
],
[
"hfss.disconnect_ansys()",
"_____no_output_____"
],
[
"gui.main_window.close()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4adb582116121f6baf0fc09f06e692cb174318a2
| 185,662 |
ipynb
|
Jupyter Notebook
|
week10/ASSIGNMENT-10 EE17B109.ipynb
|
suhas1999/EE2703
|
e508f61d7af0c2445c6b30c465eca3fad455f853
|
[
"MIT"
] | null | null | null |
week10/ASSIGNMENT-10 EE17B109.ipynb
|
suhas1999/EE2703
|
e508f61d7af0c2445c6b30c465eca3fad455f853
|
[
"MIT"
] | null | null | null |
week10/ASSIGNMENT-10 EE17B109.ipynb
|
suhas1999/EE2703
|
e508f61d7af0c2445c6b30c465eca3fad455f853
|
[
"MIT"
] | null | null | null | 443.107399 | 30,324 | 0.945842 |
[
[
[
"# Introduction\n\nIn this experiment we will be trying to do convolution operation on various signals and using\nboth methods of linear convolution and circular convolution.In the theory class we have ana-\nlyzed the advantages of using circular convolution over using linear convolution when we are\nrecieving continuos samples of inputs.We will also be analysing the effect of passing the signal\nx = cos(0.2 ∗ pi ∗ n) + cos(0.85 ∗ pi ∗ n) through a given filter.At last we will be analysing the cross-\ncorrellation output of the Zadoff–Chu sequence.",
"_____no_output_____"
],
[
"# Importing packages",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport csv\nfrom scipy import signal\nimport matplotlib.pyplot as plt\nfrom math import *\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"# Filter sequence\n\nNow we will use the signal.freqz function to visuaize the filter in frequency domain.",
"_____no_output_____"
]
],
[
[
"a = np.genfromtxt('h.csv',delimiter=',')\nw,h = signal.freqz(a)\nfig,ax = plt.subplots(2,sharex=True)\nplt.grid(True,which=\"all\")\nax[0].plot(w,(abs(h)),\"b\")\nax[0].set_title(\"Filter Magnitude response\")\nax[0].set_xlabel(\"Frequency(w rad/s)\")\nax[0].set_ylabel(\"AMplitude dB\")\nangle = np.unwrap(np.angle(h))\nax[1].plot(w,angle,\"g\")\nax[1].set_title(\"Filter Phase response\")\nax[1].set_xlabel(\"Frequency(w rad/s)\")\nax[1].set_ylabel(\"Phase\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Here I have plotted both the magnitude and phase response of the filter in the appropriate\nfrequency range. It is clear from the plot that the given filter is a low-pass filter with a cutoff\nfrequency of about 0.75 rad/s.",
"_____no_output_____"
],
[
"# Given Signal:",
"_____no_output_____"
]
],
[
[
"n = np.linspace(1,2**10,2**10)\nx = np.cos(0.2*pi*n) + np.cos(0.85*pi*n)\nfig2,bx = plt.subplots(1,sharex=True)\nbx.plot(n,x)\nbx.set_title(\"Sequence plot\")\nbx.set_xlabel(\"n\")\nbx.set_ylabel(\"x\")\nbx.set_xlim(0,40)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Clearly the input seqence has frequency components of 0.2pi = 0.628 rad/s and 0.85pi = 2.669\nrad/s.",
"_____no_output_____"
],
[
"# Passing signal through Filter",
"_____no_output_____"
]
],
[
[
"y = np.convolve(x,a,mode=\"same\")\nfig3,cx = plt.subplots(1,sharex=True)\ncx.plot(y)\ncx.set_title(\"Filtered output plot using linear convolution \")\ncx.set_xlabel(\"n\")\ncx.set_ylabel(\"y\")\ncx.set_xlim(0,40)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can clearly see that it acted as a low pass filter",
"_____no_output_____"
],
[
"# Using Circular Convolution",
"_____no_output_____"
]
],
[
[
"a_adjusted = np.pad(a,(0,len(x)-len(a)),\"constant\")\ny1 = np.fft.ifft(np.fft.fft(x) * np.fft.fft(a_adjusted))\nfig4,dx = plt.subplots(1,sharex=True)\ndx.plot(y1)\ndx.set_title(\"Filtered output plot using circular convolution \")\ndx.set_xlabel(\"n\")\ndx.set_ylabel(\"y\")\ndx.set_xlim(0,40)\nplt.show()\n",
"/home/suhas/anaconda3/lib/python3.6/site-packages/numpy/core/numeric.py:492: ComplexWarning: Casting complex values to real discards the imaginary part\n return array(a, dtype, copy=False, order=order)\n"
]
],
[
[
"norder to compute the output using circular-convolution I am imitially padding my signals in\nto avoid overlapping of the output over itself.By doing this we will be getting the output seqence\njust like linear convolution.",
"_____no_output_____"
],
[
"# Circular Convolution using linear stiching",
"_____no_output_____"
]
],
[
[
"N = len(a) + len(x) - 1\nfil = np.concatenate([a,np.zeros(N-len(a))])\ny_modified = np.concatenate([x,np.zeros(N-len(x))])\ny2 = np.fft.ifft(np.fft.fft(y_modified) * np.fft.fft(fil))\nfig5,fx = plt.subplots(1,sharex=True)\nfx.plot(y2)\nfx.set_title(\"Filtered output plot using linear convolution as circular convolution \")\nfx.set_xlabel(\"n\")\nfx.set_ylabel(\"y\")\nfx.set_xlim(0,40)\nplt.show()\n",
"/home/suhas/anaconda3/lib/python3.6/site-packages/numpy/core/numeric.py:492: ComplexWarning: Casting complex values to real discards the imaginary part\n return array(a, dtype, copy=False, order=order)\n"
]
],
[
[
"We clearly see that the output is exactly similar to the one which we got by linear convolu-\ntion.Hence it is seen that by padding the sequence appropriately we will be able to achieve the\noutput using circular convolution.",
"_____no_output_____"
],
[
"# Zadoff Sequence",
"_____no_output_____"
]
],
[
[
"zadoff = pd.read_csv(\"x1.csv\").values[:,0]\nzadoff = np.array([complex(zadoff[i].replace(\"i\",\"j\")) for i in range(len(zadoff))])\nzw,zh = signal.freqz(zadoff)\nfig5,ex = plt.subplots(2,sharex=True)\nplt.grid(True,which=\"all\")\nex[0].plot(zw,(abs(zh)),\"b\")\nex[0].set_title(\"zadoff Magnitude response\")\nex[0].set_xlabel(\"Frequency(w rad/s)\")\nex[0].set_ylabel(\"Zadoff Amplitude dB\")\nangle_z = np.unwrap(np.angle(zh))\nex[1].plot(zw,angle_z,\"g\")\nex[1].set_title(\"Zadoff Phase response\")\nex[1].set_xlabel(\"Frequency(w rad/s)\")\nex[1].set_ylabel(\"Phase\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Properties of Zadoff-Chu sequence:\n$(a) It is a complex sequence.$\n$(b) It is a constant amplitude sequence.$\n$(c) The auto correlation of a Zadoff–Chu sequence with a cyclically shifted version\nof itself is zero.$\n$(d) Correlation of Zadoff–Chu sequence with the delayed version of itself will give\na peak at that delay.$",
"_____no_output_____"
],
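For reference, a Zadoff–Chu sequence can also be generated directly rather than read from a file; the root and length below are illustrative assumptions, not the parameters of `x1.csv`:

```python
import numpy as np

def zadoff_chu(u, n_zc):
    """Root-u Zadoff-Chu sequence of odd length n_zc (sketch)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

z = zadoff_chu(u=29, n_zc=353)
print(np.allclose(np.abs(z), 1.0))              # constant amplitude -> True
# Cyclic autocorrelation: a single peak at zero lag, ~0 elsewhere
r = np.fft.ifft(np.fft.fft(z) * np.conj(np.fft.fft(z)))
print(int(np.argmax(np.abs(r))))                # 0
```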
[
"# Co-relation with shifting with itself",
"_____no_output_____"
]
],
[
[
"zadoff_modified = np.concatenate([zadoff[-5:],zadoff[:-5]])\n\nz_out = np.correlate(zadoff,zadoff_modified,\"full\")\nfig7,gx = plt.subplots(1,sharex=True)\nplt.grid(True,which=\"all\")\ngx.plot((abs(z_out)),\"b\")\ngx.set_title(\" correlation of zadoff and shifted Z Magnitude \")\ngx.set_xlabel(\"n\")\ngx.set_ylabel(\"Magnitude\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"We clearly see a peak at shited magnitude frequency correspondent",
"_____no_output_____"
],
[
"# Conclusion\nHence through this assignment we are able to take the output of a system for a given signal\nusing convolution. We approached convolution using linear method and circular method.We use\npadding to make the filter of appropriate size before we do the circular convolution.Later we\nanalysed the crosscorrelation function of Zadoff–Chu sequence with its circularly shifted version\nof itself.We observe a sharp peak in appropriate location according to the circular shift done.\n\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4adb6c0445b2c8216855d33233e6fb0a588fc85b
| 12,627 |
ipynb
|
Jupyter Notebook
|
reference/Grover's Algorithm.ipynb
|
LSaldyt/curry-examples
|
3a5ed137ff7a040cb6505813af849465cc39d59d
|
[
"MIT"
] | null | null | null |
reference/Grover's Algorithm.ipynb
|
LSaldyt/curry-examples
|
3a5ed137ff7a040cb6505813af849465cc39d59d
|
[
"MIT"
] | null | null | null |
reference/Grover's Algorithm.ipynb
|
LSaldyt/curry-examples
|
3a5ed137ff7a040cb6505813af849465cc39d59d
|
[
"MIT"
] | null | null | null | 54.193133 | 6,392 | 0.734537 |
[
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport itertools\nimport numpy as np\nimport pyquil.api as api\nfrom pyquil.gates import *\nfrom pyquil.quil import Program",
"_____no_output_____"
],
[
"def qubit_strings(n):\n qubit_strings = []\n for q in itertools.product(['0', '1'], repeat=n):\n qubit_strings.append(''.join(q))\n return qubit_strings",
"_____no_output_____"
],
[
"def black_box_map(n, q_find):\n \"\"\"\n Black-box map, f(x), on n qubits such that f(q_find) = 1, and otherwise = 0\n \"\"\"\n qubs = qubit_strings(n)\n d_blackbox = {q: 1 if q == q_find else 0 for q in qubs}\n return d_blackbox",
"_____no_output_____"
],
[
"def qubit_ket(qub_string):\n \"\"\"\n Form a basis ket out of n-bit string specified by the input 'qub_string', e.g.\n '001' -> |001>\n \"\"\"\n e0 = np.array([[1], [0]])\n e1 = np.array([[0], [1]])\n d_qubstring = {'0': e0, '1': e1}\n\n # initialize ket\n ket = d_qubstring[qub_string[0]]\n for i in range(1, len(qub_string)):\n ket = np.kron(ket, d_qubstring[qub_string[i]])\n \n return ket",
"_____no_output_____"
],
[
"def projection_op(qub_string):\n \"\"\"\n Creates a projection operator out of the basis element specified by 'qub_string', e.g.\n '101' -> |101> <101|\n \"\"\"\n ket = qubit_ket(qub_string)\n bra = np.transpose(ket) # all entries real, so no complex conjugation necessary\n proj = np.kron(ket, bra)\n return proj",
"_____no_output_____"
],
[
"def black_box(n, q_find):\n \"\"\"\n Unitary representation of the black-box operator on (n+1)-qubits\n \"\"\"\n d_bb = black_box_map(n, q_find)\n # initialize unitary matrix\n N = 2**(n+1)\n unitary_rep = np.zeros(shape=(N, N))\n # populate unitary matrix\n for k, v in d_bb.items():\n unitary_rep += np.kron(projection_op(k), np.eye(2) + v*(-np.eye(2) + np.array([[0, 1], [1, 0]])))\n return unitary_rep",
"_____no_output_____"
],
[
"def U_grov(n):\n \"\"\"\n The operator 2|psi><psi| - I , where |psi> = H^n |0>\n \"\"\"\n qubs = qubit_strings(n)\n N = 2**n\n proj_psipsi = np.zeros(shape=(N, N))\n for s_ket in qubs:\n ket = qubit_ket(s_ket)\n for s_bra in qubs:\n bra = np.transpose(qubit_ket(s_bra)) # no complex conjugation required\n proj_psipsi += np.kron(ket, bra)\n # add normalization factor\n proj_psipsi *= 1/N\n\n return 2*proj_psipsi - np.eye(N)",
"_____no_output_____"
]
],
[
[
"### Grover's Search Algorithm",
"_____no_output_____"
]
],
[
[
"# Specify an item to find\nfindme = '1011'",
"_____no_output_____"
],
[
"# number of qubits (excluding the ancilla)\nn = len(findme)\n# number of iterations\nnum_iters = max(1, int(np.sqrt(2**(n-2))))\n\np = Program()\n# define blackbox operator (see above)\np.defgate(\"U_bb\", black_box(n, findme))\n# define the U_grov (see above)\np.defgate(\"U_grov\", U_grov(n))\n# Apply equal superposition state\nfor q in range(1, n+1):\n p.inst(H(q))\n# Make 0th qubit an eigenstate of the black-box operator\np.inst(H(0))\np.inst(Z(0))\n \n# Grover iterations\nfor _ in range(num_iters):\n # apply oracle\n p.inst((\"U_bb\",) + tuple(range(n+1)[::-1]))\n # apply H . U_perp . H\n p.inst((\"U_grov\",) + tuple(range(1, n+1)[::-1]))\n \n# measure and discard ancilla\np.measure(0, [0])\n \n# run program, and investigate wavefunction\nqvm = api.QVMConnection()\nwavefunc = qvm.wavefunction(p)\noutcome_probs = wavefunc.get_outcome_probs()\nprint (\"The most probable outcome is: |%s>\" % (max(outcome_probs, key=outcome_probs.get)[:-1]))\n\n# histogram of outcome probs\nplt.figure(figsize=(8, 6))\nplt.bar([i[:-1] for i in outcome_probs.keys()], outcome_probs.values())\nplt.show()",
"The most probable outcome is: |1011>\n"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4adb722151bfdfc47a2dfab5ee1852bea8ba26e9
| 10,526 |
ipynb
|
Jupyter Notebook
|
docs/examples/quickstart.ipynb
|
TimSchmeier/recommenders
|
5712f07c8744d2e8e3cc9635f07229167fb8a1cb
|
[
"Apache-2.0"
] | 1 |
2021-06-17T10:56:59.000Z
|
2021-06-17T10:56:59.000Z
|
docs/examples/quickstart.ipynb
|
anisayari/recommenders
|
1ac907558752eadc1d7f1a3e8a548a5d1e2b0ba3
|
[
"Apache-2.0"
] | null | null | null |
docs/examples/quickstart.ipynb
|
anisayari/recommenders
|
1ac907558752eadc1d7f1a3e8a548a5d1e2b0ba3
|
[
"Apache-2.0"
] | 1 |
2021-09-10T13:38:38.000Z
|
2021-09-10T13:38:38.000Z
| 31.51497 | 275 | 0.519856 |
[
[
[
"##### Copyright 2020 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# TensorFlow Recommenders: Quickstart\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/recommenders/quickstart\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/quickstart.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/recommenders/blob/main/docs/examples/quickstart.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/quickstart.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user.",
"_____no_output_____"
],
[
"### Import TFRS\n\nFirst, install and import TFRS:",
"_____no_output_____"
]
],
[
[
"!pip install -q tensorflow-recommenders\n!pip install -q --upgrade tensorflow-datasets",
"_____no_output_____"
],
[
"from typing import Dict, Text\n\nimport numpy as np\nimport tensorflow as tf\n\nimport tensorflow_datasets as tfds\nimport tensorflow_recommenders as tfrs",
"_____no_output_____"
]
],
[
[
"### Read the data",
"_____no_output_____"
]
],
[
[
"# Ratings data.\nratings = tfds.load('movielens/100k-ratings', split=\"train\")\n# Features of all the available movies.\nmovies = tfds.load('movielens/100k-movies', split=\"train\")\n\n# Select the basic features.\nratings = ratings.map(lambda x: {\n \"movie_title\": x[\"movie_title\"],\n \"user_id\": x[\"user_id\"]\n})\nmovies = movies.map(lambda x: x[\"movie_title\"])",
"_____no_output_____"
]
],
[
[
"Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:",
"_____no_output_____"
]
],
[
[
"user_ids_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)\nuser_ids_vocabulary.adapt(ratings.map(lambda x: x[\"user_id\"]))\n\nmovie_titles_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)\nmovie_titles_vocabulary.adapt(movies)",
"_____no_output_____"
]
],
[
[
"### Define a model\n\nWe can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:",
"_____no_output_____"
]
],
[
[
"class MovieLensModel(tfrs.Model):\n # We derive from a custom base class to help reduce boilerplate. Under the hood,\n # these are still plain Keras Models.\n\n def __init__(\n self,\n user_model: tf.keras.Model,\n movie_model: tf.keras.Model,\n task: tfrs.tasks.Retrieval):\n super().__init__()\n\n # Set up user and movie representations.\n self.user_model = user_model\n self.movie_model = movie_model\n\n # Set up a retrieval task.\n self.task = task\n\n def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:\n # Define how the loss is computed.\n\n user_embeddings = self.user_model(features[\"user_id\"])\n movie_embeddings = self.movie_model(features[\"movie_title\"])\n\n return self.task(user_embeddings, movie_embeddings)",
"_____no_output_____"
]
],
[
[
"Define the two models and the retrieval task.",
"_____no_output_____"
]
],
[
[
"# Define user and movie models.\nuser_model = tf.keras.Sequential([\n user_ids_vocabulary,\n tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)\n])\nmovie_model = tf.keras.Sequential([\n movie_titles_vocabulary,\n tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)\n])\n\n# Define your objectives.\ntask = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(\n movies.batch(128).map(movie_model)\n )\n)",
"_____no_output_____"
]
],
[
[
"\n### Fit and evaluate it.\n\nCreate the model, train it, and generate predictions:\n\n",
"_____no_output_____"
]
],
[
[
"# Create a retrieval model.\nmodel = MovieLensModel(user_model, movie_model, task)\nmodel.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))\n\n# Train for 3 epochs.\nmodel.fit(ratings.batch(4096), epochs=3)\n\n# Use brute-force search to set up retrieval using the trained representations.\nindex = tfrs.layers.factorized_top_k.BruteForce(model.user_model)\nindex.index(movies.batch(100).map(model.movie_model), movies)\n\n# Get some recommendations.\n_, titles = index(np.array([\"42\"]))\nprint(f\"Top 3 recommendations for user 42: {titles[0, :3]}\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4adb74c94e43a6c3f5c24ba79fae8f20e81b6105
| 19,037 |
ipynb
|
Jupyter Notebook
|
intro-to-deep-learning-mnist-digit-recognition.ipynb
|
shohan4556/deep-learning
|
34359e0cb6e74372630b45b285792a54d28bcc14
|
[
"MIT"
] | 1 |
2020-04-25T06:51:07.000Z
|
2020-04-25T06:51:07.000Z
|
intro-to-deep-learning-mnist-digit-recognition.ipynb
|
shohan4556/deep-learning
|
34359e0cb6e74372630b45b285792a54d28bcc14
|
[
"MIT"
] | null | null | null |
intro-to-deep-learning-mnist-digit-recognition.ipynb
|
shohan4556/deep-learning
|
34359e0cb6e74372630b45b285792a54d28bcc14
|
[
"MIT"
] | null | null | null | 19,037 | 19,037 | 0.832274 |
[
[
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))\n\n# Any results you write to the current directory are saved as output.",
"_____no_output_____"
]
],
[
[
"### Import Data Set & Normalize\n--- \nwe have imported the famoous mnist dataset, it is a 28x28 gray-scale hand written digits dataset. we have loaded the dataset, split the dataset. we also need to normalize the dataset. The original dataset has pixel value between 0 to 255. we have normalized it to 0 to 1. ",
"_____no_output_____"
]
],
[
[
"import keras\nfrom keras.datasets import mnist # 28x28 image data written digits 0-9\nfrom keras.utils import normalize\n\n#print(keras.__version__)\n\n#split train and test dataset \n(x_train, y_train), (x_test,y_test) = mnist.load_data()\n\n#normalize data \nx_train = normalize(x_train, axis=1)\nx_test = normalize(x_test, axis=1)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nplt.imshow(x_train[0], cmap=plt.cm.binary)\nplt.show()\n#print(x_train[0])",
"_____no_output_____"
]
],
[
[
"## Specify Architecture: \n--- \nwe have specified our model architecture. added commonly used densely-connected neural network. For the output node we specified our activation function **softmax** it is a probability distribution function. \n",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers import Flatten\nfrom keras.layers import Dense \n\n# created model \nmodel = Sequential()\n\n# flatten layer so it is operable by this layer \nmodel.add(Flatten())\n\n# regular densely-connected NN layer.\n#layer 1, 128 node \nmodel.add(Dense(128, activation='relu'))\n\n#layer 2, 128 node \nmodel.add(Dense(128, activation='relu'))\n\n#output layer, since it is probability distribution we will use 'softmax'\nmodel.add(Dense(10, activation='softmax'))",
"_____no_output_____"
]
],
[
[
"### Compile\n--- \nwe have compiled the model with earlystopping callback. when we see there are no improvement on accuracy we will stop compiling. ",
"_____no_output_____"
]
],
[
[
"from keras.callbacks import EarlyStopping\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics = ['accuracy'])\n\n#stop when see model not improving \nearly_stopping_monitor = EarlyStopping(monitor='val_loss', patience=2)\n",
"_____no_output_____"
]
],
[
[
"### Fit\n---\nFit the model with train data, with epochs 10. \n",
"_____no_output_____"
]
],
[
[
"model.fit(x_train, y_train, epochs=10, callbacks=[early_stopping_monitor], validation_data=(x_test, y_test))",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/10\n60000/60000 [==============================] - 5s 89us/step - loss: 0.2654 - accuracy: 0.9219 - val_loss: 0.1370 - val_accuracy: 0.9569\nEpoch 2/10\n60000/60000 [==============================] - 5s 85us/step - loss: 0.1074 - accuracy: 0.9663 - val_loss: 0.1020 - val_accuracy: 0.9674\nEpoch 3/10\n60000/60000 [==============================] - 5s 83us/step - loss: 0.0750 - accuracy: 0.9763 - val_loss: 0.0866 - val_accuracy: 0.9732\nEpoch 4/10\n60000/60000 [==============================] - 5s 84us/step - loss: 0.0539 - accuracy: 0.9823 - val_loss: 0.1024 - val_accuracy: 0.9690\nEpoch 5/10\n60000/60000 [==============================] - 5s 85us/step - loss: 0.0411 - accuracy: 0.9862 - val_loss: 0.0824 - val_accuracy: 0.9747\nEpoch 6/10\n60000/60000 [==============================] - 5s 83us/step - loss: 0.0322 - accuracy: 0.9891 - val_loss: 0.1017 - val_accuracy: 0.9702\nEpoch 7/10\n60000/60000 [==============================] - 5s 84us/step - loss: 0.0253 - accuracy: 0.9914 - val_loss: 0.0939 - val_accuracy: 0.9744\n"
]
],
[
[
"### Evaluate\n---\nEvaluate the accuracy of the model. ",
"_____no_output_____"
]
],
[
[
"val_loss, val_acc = model.evaluate(x_test,y_test)\nprint(val_loss, val_acc)",
"10000/10000 [==============================] - 0s 32us/step\n0.09387863340429176 0.974399983882904\n"
]
],
[
[
"### Save\n--- \nSave the model and show summary. ",
"_____no_output_____"
]
],
[
[
"model.save('mnist_digit.h5')\nmodel.summary()",
"Model: \"sequential_3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten_3 (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense_7 (Dense) (None, 128) 100480 \n_________________________________________________________________\ndense_8 (Dense) (None, 128) 16512 \n_________________________________________________________________\ndense_9 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 118,282\nTrainable params: 118,282\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"### Load\n----\nLoad the model. ",
"_____no_output_____"
]
],
[
[
"from keras.models import load_model\n\nnew_model = load_model('mnist_digit.h5')",
"_____no_output_____"
]
],
[
[
"### Predict \n----\nHere our model predicted the probability distribution, we have to covnert it to classifcation/label.",
"_____no_output_____"
]
],
[
[
"predict = new_model.predict([x_test])\n\n#return the probability \nprint(predict)",
"[[4.2676600e-14 3.0285038e-11 4.8696002e-11 ... 1.0000000e+00\n 3.2324130e-12 5.4897120e-10]\n [2.2162140e-12 4.6030357e-07 9.9999952e-01 ... 5.1144473e-13\n 8.7472928e-12 1.5468627e-16]\n [1.1383704e-13 9.9997509e-01 1.1891178e-08 ... 1.7382994e-05\n 7.5184744e-06 8.6839680e-10]\n ...\n [5.1449442e-15 2.2787440e-11 3.4375176e-11 ... 1.5216089e-07\n 3.9822585e-09 4.9883429e-06]\n [3.9062195e-12 6.8857619e-14 9.2085549e-12 ... 4.1195853e-12\n 2.0591863e-03 3.0217221e-13]\n [1.3771667e-10 2.4271757e-10 5.0005507e-11 ... 3.9432331e-15\n 1.9255349e-08 1.2746119e-13]]\n"
],
[
"print(predict[1].argmax(axis=-1))",
"2\n"
],
[
"plt.imshow(x_test[1])\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4adb7a8b675b5720d76f05e11f5f8861505ece3f
| 66,154 |
ipynb
|
Jupyter Notebook
|
tarea_98/tarea_98.ipynb
|
zumaia/theEgg
|
f1ba4163eb5ab5ea844b08cf6cbc398fcdb5c80f
|
[
"MIT"
] | 1 |
2020-06-16T18:30:59.000Z
|
2020-06-16T18:30:59.000Z
|
tarea_98/tarea_98.ipynb
|
zumaia/theEgg
|
f1ba4163eb5ab5ea844b08cf6cbc398fcdb5c80f
|
[
"MIT"
] | null | null | null |
tarea_98/tarea_98.ipynb
|
zumaia/theEgg
|
f1ba4163eb5ab5ea844b08cf6cbc398fcdb5c80f
|
[
"MIT"
] | null | null | null | 133.10664 | 26,946 | 0.840448 |
[
[
[
"# Tarea 98 - Análisis del rendimiento de las aplicaciones de IA \n\n## Ejercicio: Debes programar el problema que se plantea en la siguiente secuencia de videos en el lenguaje de programación que desees:",
"_____no_output_____"
],
[
"## Primera parte",
"_____no_output_____"
],
[
"[](https://www.youtube.com/watch?v=GD254Gotp-4 \"video\")",
"_____no_output_____"
],
[
"#### Reto para hacer:\n\nDefinir dos funciones, una, suma_lineal, que lleve a cabo la suma de n números del 1 a n, de una forma básica, y otra, suma_constante, que lleve a cabo la misma tarea, pero utilizando la fórmula de la suma aritmética de los números del 1 a n.",
"_____no_output_____"
]
],
[
[
"#Instalamos line_profiler en el único caso en que no funcione el siguiente script\n\n#! pip install line_profiler\n",
"_____no_output_____"
],
[
"%load_ext line_profiler",
"_____no_output_____"
],
[
"import time",
"_____no_output_____"
],
[
"def suma_lineal(n):\n pass\n\ndef suma_constante(n):\n pass\n\ncantidad = 1000000\n\ndef ejemplo(cantidad):\n \n for i in range(4): # incrementamos 5 veces\n\n start_time = time.time()\n\n suma1 = suma_lineal(cantidad)\n\n middle_time = time.time()\n\n suma2 = suma_constante(cantidad)\n\n stop_time = time.time()\n\n set_time = middle_time - start_time\n list_time = stop_time - middle_time\n \n print(\"\\tTest en lineal para la cantidad de {}:\\t\\t{} segundos\".format(cantidad, set_time))\n print(\"\\tTest en constantepara para la cantidad de {}:\\t{} segundos\".format(cantidad, list_time))\n\n cantidad *= 10 # comienza en 1000000 luego *10... hasta 10000000000\n \n # return set_time, list_time",
"_____no_output_____"
],
[
"ejemplo(cantidad)",
"\tTest en lineal para la cantidad de 1000000:\t\t1.1920928955078125e-06 segundos\n\tTest en constantepara para la cantidad de 1000000:\t7.152557373046875e-07 segundos\n\tTest en lineal para la cantidad de 10000000:\t\t9.5367431640625e-07 segundos\n\tTest en constantepara para la cantidad de 10000000:\t9.5367431640625e-07 segundos\n\tTest en lineal para la cantidad de 100000000:\t\t4.76837158203125e-07 segundos\n\tTest en constantepara para la cantidad de 100000000:\t4.76837158203125e-07 segundos\n\tTest en lineal para la cantidad de 1000000000:\t\t2.384185791015625e-07 segundos\n\tTest en constantepara para la cantidad de 1000000000:\t4.76837158203125e-07 segundos\n"
]
],
[
[
"El código itera sobre la lista de entrada, extrayendo elementos de esta y acumulándolos en otra lista para cada iteración. Podemos utilizar lprun para ver cuales son las operaciones más costosas.",
"_____no_output_____"
]
],
[
[
"%lprun -f ejemplo ejemplo(cantidad)",
"\tTest en lineal para la cantidad de 1000000:\t\t5.9604644775390625e-06 segundos\n\tTest en constantepara para la cantidad de 1000000:\t4.291534423828125e-06 segundos\n\tTest en lineal para la cantidad de 10000000:\t\t4.5299530029296875e-06 segundos\n\tTest en constantepara para la cantidad de 10000000:\t3.5762786865234375e-06 segundos\n\tTest en lineal para la cantidad de 100000000:\t\t3.0994415283203125e-06 segundos\n\tTest en constantepara para la cantidad de 100000000:\t2.86102294921875e-06 segundos\n\tTest en lineal para la cantidad de 1000000000:\t\t3.5762786865234375e-06 segundos\n\tTest en constantepara para la cantidad de 1000000000:\t3.337860107421875e-06 segundos\n"
]
],
[
[
"El código tarda aproximandamente 0.003842 segundos en ejecutarse (el resultado puede variar en función de vuestra máquina). Del tiempo de ejecución, aprox. la mitad (42%) se utiliza para la función lineal) y un 55% para la suma constante) y el resto del tiempo básicamente para completar la función.",
"_____no_output_____"
],
[
"## Segunda parte",
"_____no_output_____"
],
[
"[](https://www.youtube.com/watch?v=MaY6FpP0FEU \"video\") ",
"_____no_output_____"
],
[
"En este video hacemos una introducción a la notación asintótica, y la complejidad de los algoritmos, y resolvemos el reto que teníamos pendiente de definir dos funciones para sumar de 1 a n números enteros, mediante dos algoritmos con complejidad lineal y complejidad constante.",
"_____no_output_____"
]
],
[
[
"def suma_lineal(n):\n suma=0\n for i in range(1, n+1):\n suma += i\n return suma\n\ndef suma_constante(n):\n return (n/2) * (n+1)\n\ncantidad = 1000000\n\ndef ejemplo2(cantidad):\n\n for i in range(4): # incrementamos 4 veces\n\n start_time = time.time()\n\n suma1 = suma_lineal(cantidad)\n\n middle_time = time.time()\n\n suma2 = suma_constante(cantidad)\n\n stop_time = time.time()\n\n set_time = middle_time - start_time\n list_time = stop_time - middle_time\n \n print(\"\\tTest en lineal para la cantidad de {}:\\t\\t{} segundos\".format(cantidad, set_time))\n print(\"\\tTest en constantepara para la cantidad de {}:\\t{} segundos\".format(cantidad, list_time))\n\n cantidad *= 10 # comienza en 1000000 luego *10... hasta 10000000000\n \n # return set_time, list_time",
"_____no_output_____"
],
[
"%time ejemplo2(cantidad)",
"\tTest en lineal para la cantidad de 1000000:\t\t0.06890273094177246 segundos\n\tTest en constantepara para la cantidad de 1000000:\t4.5299530029296875e-06 segundos\n\tTest en lineal para la cantidad de 10000000:\t\t0.55631422996521 segundos\n\tTest en constantepara para la cantidad de 10000000:\t4.5299530029296875e-06 segundos\n\tTest en lineal para la cantidad de 100000000:\t\t5.6764421463012695 segundos\n\tTest en constantepara para la cantidad de 100000000:\t3.0994415283203125e-06 segundos\n"
],
[
"ejemplo2(cantidad)",
"\tTest en lineal para la cantidad de 1000000:\t\t0.06506776809692383 segundos\n\tTest en constantepara para la cantidad de 1000000:\t3.814697265625e-06 segundos\n\tTest en lineal para la cantidad de 10000000:\t\t0.6292898654937744 segundos\n\tTest en constantepara para la cantidad de 10000000:\t3.337860107421875e-06 segundos\n\tTest en lineal para la cantidad de 100000000:\t\t5.86493444442749 segundos\n\tTest en constantepara para la cantidad de 100000000:\t1.1444091796875e-05 segundos\n\tTest en lineal para la cantidad de 1000000000:\t\t53.54281735420227 segundos\n\tTest en constantepara para la cantidad de 1000000000:\t3.5762786865234375e-06 segundos\n"
],
[
"%lprun -f ejemplo2 ejemplo2(cantidad)\n# Podemos utilizar lprun para ver cuales son las operaciones más costosas.",
"\tTest en lineal para la cantidad de 1000000:\t\t0.27519702911376953 segundos\n\tTest en constantepara para la cantidad de 1000000:\t7.152557373046875e-06 segundos\n\tTest en lineal para la cantidad de 10000000:\t\t2.9877612590789795 segundos\n\tTest en constantepara para la cantidad de 10000000:\t7.152557373046875e-06 segundos\n\tTest en lineal para la cantidad de 100000000:\t\t3499.5601897239685 segundos\n\tTest en constantepara para la cantidad de 100000000:\t7.3909759521484375e-06 segundos\n\tTest en lineal para la cantidad de 1000000000:\t\t259.0800063610077 segundos\n\tTest en constantepara para la cantidad de 1000000000:\t7.152557373046875e-06 segundos\n"
]
],
[
[
"# Representación gŕafica según su complejidad.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors",
"_____no_output_____"
],
[
"def plot_funs(xs):\n \"\"\"\n Plot a set of predefined functions for the x values in 'xs'.\n \"\"\"\n ys0 = [1 for x in xs]\n ys1 = [x for x in xs]\n ys1_b = [x + 25 for x in xs]\n ys2 = [x**2 for x in xs]\n ys2_b = [x**2 + x for x in xs]\n ys3 = [x**3 for x in xs]\n ys3_b = [x**3 + x**2 for x in xs]\n\n fig = plt.figure()\n plt.plot(xs, ys0, '-', color='tab:brown')\n plt.plot(xs, ys1, '-', color='tab:blue')\n plt.plot(xs, ys1_b, ':', color='tab:blue')\n plt.plot(xs, ys2, '-', color='tab:orange')\n plt.plot(xs, ys2_b, ':', color='tab:orange')\n plt.plot(xs, ys3, '-', color='tab:green')\n plt.plot(xs, ys3_b, ':', color='tab:green')\n\n plt.legend([\"$1$\", \"$x$\", \"$x+25$\", \"$x^2$\", \"$x^2+x$\", \"$x^3$\",\n \"$x^3+x^2$\"])\n\n plt.xlabel('$n$')\n plt.ylabel('$f(n)$')\n plt.title('Function growth')\n plt.show()\n\n\nplot_funs(range(10))",
"_____no_output_____"
]
],
[
[
"Las líneas de un mismo color representan funciones que tienen el mismo grado. Así, la línea marrón que casi no se aprecia muestra una función constante (𝑓(𝑛)=1), las líneas azules muestran funciones lineales (𝑥 y 𝑥+25), las líneas naranjas funciones cuadráticas (𝑥2 y 𝑥2+𝑥), y las líneas verdes funciones cúbicas (𝑥3 y 𝑥3+𝑥2). Para cada color, la línea continua (sólida) representa la función que contiene solo el término de mayor grado, y la línea de puntos es una función que tiene también otros términos de menor grado. Como se puede apreciar, el crecimiento de las funciones con el mismo grado es similar, sobre todo cuando crece el valor de 𝑛. Fijaos con la representación de las mismas funciones si aumentamos el valor de 𝑛 de 10 (gráfica anterior) a 100 (gráfica de la celda siguiente):",
"_____no_output_____"
]
],
[
[
"plot_funs(range(100))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adb7dbb1982727c49b0d95aec71a35e4cc87bad
| 201,030 |
ipynb
|
Jupyter Notebook
|
wtf2020.ipynb
|
gacek91/wtf2020
|
6ba61202eaf2f2c0409fff90973892dc69f27f93
|
[
"MIT"
] | 1 |
2020-09-01T16:14:30.000Z
|
2020-09-01T16:14:30.000Z
|
wtf2020.ipynb
|
gacek91/wtf2020
|
6ba61202eaf2f2c0409fff90973892dc69f27f93
|
[
"MIT"
] | null | null | null |
wtf2020.ipynb
|
gacek91/wtf2020
|
6ba61202eaf2f2c0409fff90973892dc69f27f93
|
[
"MIT"
] | null | null | null | 41.707469 | 53,184 | 0.680854 |
[
[
[
"### Komentarze w Pythonie robimy przy użyciu # - jeśli go nie użyjemy, Python będzie to próbował zinterpretować jako kod",
"_____no_output_____"
]
],
[
[
"#jupyter notebook; jupyter hub",
"_____no_output_____"
],
[
"jupyter notebook; jupyter hub",
"_____no_output_____"
],
[
"10 + 5",
"_____no_output_____"
],
[
"2 - 7",
"_____no_output_____"
],
[
"4 * 6",
"_____no_output_____"
],
[
"9 / 3",
"_____no_output_____"
],
[
"8 ** 2",
"_____no_output_____"
],
[
"x = 10",
"_____no_output_____"
],
[
"x = ergbreoubhtoebeobf",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"print(x)",
"10\n"
],
[
"rocznik = 1991",
"_____no_output_____"
],
[
"rocznik",
"_____no_output_____"
],
[
"teraz = 2020",
"_____no_output_____"
],
[
"teraz - rocznik",
"_____no_output_____"
],
[
"ile_lat = teraz - rocznik",
"_____no_output_____"
],
[
"ile_lat",
"_____no_output_____"
],
[
"ile_lat + 25",
"_____no_output_____"
],
[
"#integer - liczba całkowita\n\ntype(ile_lat)",
"_____no_output_____"
],
[
"zarobki_biednego_doktoranta = 5.50",
"_____no_output_____"
],
[
"zarobki_biednego_doktoranta",
"_____no_output_____"
],
[
"#float - liczba rzeczywista\n\ntype(zarobki_biednego_doktoranta)",
"_____no_output_____"
],
[
"werdykt = \"To są marne zarobki\"\nwerdykt = 'To są marne zarobki'",
"_____no_output_____"
],
[
"type(werdykt)",
"_____no_output_____"
],
[
"zarobki_biednego_doktoranta + werdykt",
"_____no_output_____"
],
[
"10 + 5.50",
"_____no_output_____"
],
[
"# Przemnożenie ciągu liter przez np. 2 duplikuje ów ciąg\n\nwerdykt * 2",
"_____no_output_____"
],
[
"# Ciągu znaków nie można dzielić (a przynajmniej nie w taki sposób)\n\nwerdykt / 3",
"_____no_output_____"
],
[
"\"swps\" + \"jest super\"",
"_____no_output_____"
],
[
"a = \"UW\"\nb = \"jest\"\nc = \"super\"",
"_____no_output_____"
],
[
"a+b+c",
"_____no_output_____"
],
[
"a + \" \" + b + \" \" + c",
"_____no_output_____"
],
[
"print(a, b, c)",
"UW jest super\n"
],
[
"?print",
"_____no_output_____"
],
[
"print(a, b, c, sep = \" nie \")",
"UW nie jest nie super\n"
],
[
"print(a, b, c, sep = \"\\n\")",
"UW\njest\nsuper\n"
],
[
"test = print(a, b, c)",
"UW jest super\n"
],
[
"\"UW jest super a absolwenci tej uczelni zarabiają więcej niż 5.50 brutto na h\"",
"_____no_output_____"
],
[
"info = f\"{a} {b} {c} a absolwenci tej uczelni zarabiają więcej niż {zarobki_biednego_doktoranta} brutto na h\"",
"_____no_output_____"
],
[
"info",
"_____no_output_____"
],
[
"ocena = 2\nmaksimum = 10",
"_____no_output_____"
],
[
"# Metoda - rodzaj funkcji, dedykowany tylko dla konkretnego rodzaju zmiennej\n\n\n\"Te warsztaty zostały ocenione na {} pkt na {} możliwych\".format(ocena, maksimum)",
"_____no_output_____"
],
[
"10.format()",
"_____no_output_____"
],
[
"# backslash (\\) pozawala na podzielenie długiego kodu na parę linijek\n\n\"Te warsztaty zostały ocenione \\\n na {} pkt \\\n na {} możliwych\".format(ocena, maksimum)",
"_____no_output_____"
],
[
"\"Te warsztaty zostały ocenione\n na {} pkt\n na {} możliwych\".format(ocena, maksimum)",
"_____no_output_____"
],
[
"nr = \"300\"",
"_____no_output_____"
],
[
"nr + 100",
"_____no_output_____"
],
[
"int(nr) + 100",
"_____no_output_____"
],
[
"float(nr)",
"_____no_output_____"
],
[
"int(info)",
"_____no_output_____"
],
[
"prawda = True\nfalsz = False\n\n## w R: TRUE/T ; FALSE/F",
"_____no_output_____"
],
[
"prawda ",
"_____no_output_____"
],
[
"type(prawda)",
"_____no_output_____"
],
[
"prawda + prawda",
"_____no_output_____"
],
[
"True == 1",
"_____no_output_____"
],
[
"False == 0",
"_____no_output_____"
],
[
"10 > 5",
"_____no_output_____"
],
[
"10 > 20",
"_____no_output_____"
],
[
"10 < 5\n\n20 >= 10\n\n50 <= 5\n\n10 != 3",
"_____no_output_____"
],
[
"[]",
"_____no_output_____"
],
[
"list()",
"_____no_output_____"
],
[
"moja_lista = [\"Samsung\", \"Huawei\", \"Xiaomi\"]",
"_____no_output_____"
],
[
"type(moja_lista)",
"_____no_output_____"
],
[
"aj = \"Apple\"",
"_____no_output_____"
],
[
"lista2 = [\"Samsung\", \"Huawei\", \"Xiaomi\", aj]",
"_____no_output_____"
],
[
"lista2",
"_____no_output_____"
]
],
[
[
"#### W Pythonie adresowanie elementów zaczynamy od zera!!!",
"_____no_output_____"
]
],
[
[
"lista2[1]",
"_____no_output_____"
],
[
"lista2[0]",
"_____no_output_____"
],
[
"lista2[-1]",
"_____no_output_____"
]
],
[
[
"#### Python nie bierze ostatniego elementu, zatem taki kod jak poniżej wybierze nam tylko elementy 0 i 1 (pierwsze dwa)",
"_____no_output_____"
]
],
[
[
"lista2[0:2]",
"_____no_output_____"
],
[
"lista2[:3]",
"_____no_output_____"
],
[
"lista2[0][0:3]",
"_____no_output_____"
],
[
"ruskie = 4.50\nziemniaki = 2.35\nsurowka = 2.15\nnalesniki = 7.50\nkompot = 2.00\nkotlet = 8.50\npomidorowa = 2.35\nwodka_spod_lady = 4.00\nleniwe = 3.90\nkasza = 2.25\n\nceny = [ruskie, ziemniaki, surowka, nalesniki, kompot, kotlet, pomidorowa, wodka_spod_lady, leniwe, kasza]\n\nmenu = ['ruskie', ruskie,\n 'ziemniaki', ziemniaki,\n 'surowka', surowka,\n 'nalesniki', nalesniki,\n 'kompot', kompot,\n 'kotlet', kotlet,\n 'pomidorowa', pomidorowa,\n 'wódka spod lady', wodka_spod_lady,\n 'leniwe', leniwe,\n 'kasza', kasza]\n",
"_____no_output_____"
],
[
"menu = [\n ['ruskie', ruskie],\n ['ziemniaki', ziemniaki],\n ['surowka', surowka],\n ['nalesniki', nalesniki],\n ['kompot', kompot],\n ['kotlet', kotlet],\n ['pomidorowa', pomidorowa],\n ['wódka spod lady', wodka_spod_lady],\n ['leniwe', leniwe],\n ['kasza', kasza] \n]",
"_____no_output_____"
],
[
"menu[0]",
"_____no_output_____"
],
[
"menu[0][1]\nmenu[0][-1]",
"_____no_output_____"
],
[
"#ruskie, surowka, wódka spod lady\n\nmenu[0][-1] + menu[2][-1] + menu[-3][-1]",
"_____no_output_____"
],
[
"menu[-1] = [\"suchy chleb\", 10.50]",
"_____no_output_____"
],
[
"menu",
"_____no_output_____"
],
[
"len(menu)",
"_____no_output_____"
],
[
"?len",
"_____no_output_____"
],
[
"len(\"to ejst tekst\")",
"_____no_output_____"
],
[
"ceny.sort()",
"_____no_output_____"
],
[
"#stack overflow",
"_____no_output_____"
],
[
"ceny",
"_____no_output_____"
],
[
"ceny2 = [4.0, 4.5, 7.5, 8.5,2.0, 2.15, 2.25]",
"_____no_output_____"
],
[
"sorted(ceny2)",
"_____no_output_____"
],
[
"ceny2",
"_____no_output_____"
],
[
"ceny2 = sorted(ceny2)",
"_____no_output_____"
],
[
"menuDict = {\n 'ruskie': ruskie,\n 'ziemniaki': ziemniaki,\n 'surowka': surowka,\n 'nalesniki': nalesniki,\n 'kompot': kompot,\n 'kotlet': kotlet,\n 'pomidorowa': pomidorowa,\n 'wódka spod lady': wodka_spod_lady,\n 'leniwe': leniwe,\n 'kasza': kasza \n}",
"_____no_output_____"
]
],
[
[
"#### Słowniki - możemy je adresować tylko po hasłach (nie możemy po pozycji w zbiorze)",
"_____no_output_____"
]
],
[
[
"menuDict[0]",
"_____no_output_____"
],
[
"menuDict",
"_____no_output_____"
],
[
"menuDict[\"ruskie\"]",
"_____no_output_____"
],
[
"menuDict.keys()",
"_____no_output_____"
],
[
"menuDict.values()",
"_____no_output_____"
],
[
"menuDict.items()",
"_____no_output_____"
],
[
"menuDict.keys()[0]",
"_____no_output_____"
],
[
"menuDict['wódka spod lady']",
"_____no_output_____"
]
],
[
[
"##### Krotka vs lista - działają podobnie, natomiast elementy listy można zmieniać. Krotki zmienić się nie da.",
"_____no_output_____"
]
],
[
[
"lista = [1,2,3,4]\nkrotka = (1,2,3,4)",
"_____no_output_____"
],
[
"lista",
"_____no_output_____"
],
[
"krotka",
"_____no_output_____"
],
[
"lista[0]",
"_____no_output_____"
],
[
"krotka[0]",
"_____no_output_____"
],
[
"lista[0] = 6",
"_____no_output_____"
],
[
"lista",
"_____no_output_____"
],
[
"krotka[0] = 6",
"_____no_output_____"
],
[
"type(krotka)",
"_____no_output_____"
],
[
"mini = [1,2,3,4,5]",
"_____no_output_____"
],
[
"for i in mini:\n print(i)",
"1\n2\n3\n4\n5\n"
],
[
"for i in mini:\n q = i ** 2\n print(q)",
"1\n4\n9\n16\n25\n"
],
[
"\"Liczba {} podniesiona do kwadratu daje {}\".format(x, y)\nf\"Liczba {x} podniesiona do kwadratu daje {y}\"",
"_____no_output_____"
],
[
"for i in mini:\n print(f\"Liczba {i} podniesiona do kwadratu daje {i**2}\")",
"Liczba 1 podniesiona do kwadratu daje 1\nLiczba 2 podniesiona do kwadratu daje 4\nLiczba 3 podniesiona do kwadratu daje 9\nLiczba 4 podniesiona do kwadratu daje 16\nLiczba 5 podniesiona do kwadratu daje 25\n"
],
[
"mini2 = [5, 10, 15, 20, 50]\n\n\nfor index, numer in enumerate(mini2):\n print(index, numer)",
"0 5\n1 10\n2 15\n3 20\n4 50\n"
],
[
"for index, numer in enumerate(mini2):\n print(f\"Liczba {numer} (na pozycji {index}) podniesiona do kwadratu daje {numer**2}\")",
"Liczba 5 (na pozycji 0) podniesiona do kwadratu daje 25\nLiczba 10 (na pozycji 1) podniesiona do kwadratu daje 100\nLiczba 15 (na pozycji 2) podniesiona do kwadratu daje 225\nLiczba 20 (na pozycji 3) podniesiona do kwadratu daje 400\nLiczba 50 (na pozycji 4) podniesiona do kwadratu daje 2500\n"
],
[
"mini2",
"_____no_output_____"
],
[
"100 % 2 == 0",
"_____no_output_____"
],
[
"for index, numer in enumerate(mini2):\n if numer % 2 == 0:\n print(f\"Liczba {numer} (na pozycji {index}) jest parzysta.\")\n else:\n print(f\"Liczba {numer} (na pozycji {index}) jest nieparzysta.\")",
"Liczba 5 (na pozycji 0) jest nieparzysta.\nLiczba 10 (na pozycji 1) jest parzysta.\nLiczba 15 (na pozycji 2) jest nieparzysta.\nLiczba 20 (na pozycji 3) jest parzysta.\nLiczba 50 (na pozycji 4) jest parzysta.\n"
],
[
"parzyste = []\nnieparzyste = []\n\n#how to add a value to a list (in a loop) python\n\nfor index, numer in enumerate(mini2):\n if numer % 2 == 0:\n print(f\"Liczba {numer} (na pozycji {index}) jest parzysta.\")\n parzyste.append(numer)\n else:\n print(f\"Liczba {numer} (na pozycji {index}) jest nieparzysta.\")\n nieparzyste.append(numer)",
"Liczba 5 (na pozycji 0) jest nieparzysta.\nLiczba 10 (na pozycji 1) jest parzysta.\nLiczba 15 (na pozycji 2) jest nieparzysta.\nLiczba 20 (na pozycji 3) jest parzysta.\nLiczba 50 (na pozycji 4) jest parzysta.\n"
],
[
"parzyste",
"_____no_output_____"
],
[
"nieparzyste",
"_____no_output_____"
],
[
"mini3 = [5, 10, 15, 20, 50, 60, 80, 30, 100, 7]",
"_____no_output_____"
],
[
"for numer in mini3:\n if numer == 50:\n print(\"To jest zakazana liczba. Nie tykam.\")\n elif numer % 2 == 0:\n print(f\"Liczba {numer} jest parzysta.\")\n else:\n print(f\"Liczba {numer} jest nieparzysta.\")",
"Liczba 5 jest nieparzysta.\nLiczba 10 jest parzysta.\nLiczba 15 jest nieparzysta.\nLiczba 20 jest parzysta.\nTo jest zakazana liczba. Nie tykam.\nLiczba 60 jest parzysta.\nLiczba 80 jest parzysta.\nLiczba 30 jest parzysta.\nLiczba 100 jest parzysta.\nLiczba 7 jest nieparzysta.\n"
],
[
"mini4 = [5, 10, 15, 20, 50, 666, 80, 30, 100, 7]",
"_____no_output_____"
],
[
"for numer in mini4:\n if numer == 666:\n print(\"To jest szatańska liczba. Koniec warsztatów.\")\n break\n elif numer == 50:\n print(\"To jest zakazana liczba. Nie tykam.\")\n elif numer % 2 == 0:\n print(f\"Liczba {numer} jest parzysta.\")\n else:\n print(f\"Liczba {numer} jest nieparzysta.\")",
"Liczba 5 jest nieparzysta.\nLiczba 10 jest parzysta.\nLiczba 15 jest nieparzysta.\nLiczba 20 jest parzysta.\nTo jest zakazana liczba. Nie tykam.\nTo jest szatańska liczba. Koniec warsztatów.\n"
],
[
"menuDict",
"_____no_output_____"
],
[
"menuDict[\"ruskie\"]",
"_____no_output_____"
],
[
"co_bralem = [\"pomidorowa\", \"ruskie\", \"wódka spod lady\"]\nile_place = 0\n\nfor pozycja in co_bralem:\n #ile_place = ile_place + menuDict[pozycja]\n ile_place += menuDict[pozycja]",
"_____no_output_____"
],
[
"ile_place",
"_____no_output_____"
],
[
"tqdm",
"_____no_output_____"
],
[
"!pip install tqdm #anaconda/colab\n!pip3 intall tqdm\n\n# w terminalu/wierszu polecen\n\npip install xxx\npip3 install xxx\n\n\n\n\n\n",
"_____no_output_____"
],
[
"!pip3 install tqdm",
"Requirement already satisfied: tqdm in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (4.45.0)\n\u001b[33mWARNING: You are using pip version 20.0.2; however, version 20.2.2 is available.\nYou should consider upgrading via the '/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 -m pip install --upgrade pip' command.\u001b[0m\n"
],
[
"import tqdm",
"_____no_output_____"
],
[
"tqdm.tqdm",
"_____no_output_____"
],
[
"#from NAZWA_PAKIETU import NAZWA_FUNKCJI\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"tqdm",
"_____no_output_____"
],
[
"n = 0\n\nfor i in tqdm(range(0, 100000)):\n x = (i * i) / 3\n n += x",
"100%|██████████| 100000/100000 [00:00<00:00, 224059.97it/s]\n"
],
[
"n",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"seed = np.random.RandomState(100)\nwzrost_lista = list(seed.normal(loc=1.70,scale=.15,size=100000).round(2))\nseed2 = np.random.RandomState(100)\nwaga_lista = list(seed2.normal(loc=80,scale=10,size=100000).round(2))",
"_____no_output_____"
],
[
"# bmi = waga / wzrost**2\n\nwaga_lista / wzrost_lista**2",
"_____no_output_____"
],
[
"bmi_lista = []\n\nfor index, value in tqdm(enumerate(wzrost_lista)):\n bmi = waga_lista[index]/wzrost_lista[index]**2\n bmi_lista.append(bmi)\n",
"100000it [00:01, 99883.45it/s]\n"
],
[
"bmi_lista[:20]",
"_____no_output_____"
],
[
"seed = np.random.RandomState(100)\nwzrost = seed.normal(loc=1.70,scale=.15,size=100000).round(2)\nseed2 = np.random.RandomState(100)\nwaga = seed2.normal(loc=80,scale=10,size=100000).round(2)",
"_____no_output_____"
],
[
"wzrost",
"_____no_output_____"
],
[
"len(wzrost)",
"_____no_output_____"
],
[
"mini5 = [5, 10, 30, 60, 100]",
"_____no_output_____"
],
[
"np.array(mini5)",
"_____no_output_____"
],
[
"vector = np.array([5, 10, 30, 60, 100])",
"_____no_output_____"
],
[
"mini5 * 2",
"_____no_output_____"
],
[
"vector * 2",
"_____no_output_____"
],
[
"vector / 3",
"_____no_output_____"
],
[
"vector ** 2",
"_____no_output_____"
],
[
"mini5 + 3",
"_____no_output_____"
],
[
"vector + 3",
"_____no_output_____"
],
[
"seed = np.random.RandomState(100)\nwzrost = seed.normal(loc=1.70,scale=.15,size=100000).round(2)\nseed2 = np.random.RandomState(100)\nwaga = seed2.normal(loc=80,scale=10,size=100000).round(2)",
"_____no_output_____"
],
[
"bmi = waga / wzrost ** 2",
"_____no_output_____"
],
[
"bmi",
"_____no_output_____"
],
[
"np.min(bmi)",
"_____no_output_____"
],
[
"np.max(bmi)",
"_____no_output_____"
],
[
"np.mean(bmi)",
"_____no_output_____"
],
[
"!pip3 install pandas\n!pip3 install gapminder",
"Requirement already satisfied: pandas in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (1.0.1)\nRequirement already satisfied: python-dateutil>=2.6.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (2019.3)\nRequirement already satisfied: numpy>=1.13.3 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (1.17.2)\nRequirement already satisfied: six>=1.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas) (1.14.0)\n\u001b[33mWARNING: You are using pip version 20.0.2; however, version 20.2.2 is available.\nYou should consider upgrading via the '/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 -m pip install --upgrade pip' command.\u001b[0m\nRequirement already satisfied: gapminder in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (0.1)\nRequirement already satisfied: pandas in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from gapminder) (1.0.1)\nRequirement already satisfied: numpy>=1.13.3 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas->gapminder) (1.17.2)\nRequirement already satisfied: python-dateutil>=2.6.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas->gapminder) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas->gapminder) (2019.3)\nRequirement already satisfied: six>=1.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas->gapminder) (1.14.0)\n\u001b[33mWARNING: You are using pip version 20.0.2; however, version 20.2.2 is available.\nYou should consider upgrading via the '/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 -m pip install --upgrade pip' command.\u001b[0m\n"
],
[
"from gapminder import gapminder as df",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"np.mean(df[\"lifeExp\"])",
"_____no_output_____"
],
[
"df.iloc[0:10]",
"_____no_output_____"
],
[
"df.iloc[0:10][\"lifeExp\"]",
"_____no_output_____"
],
[
"df.iloc[0:10, -1]",
"_____no_output_____"
],
[
"df.iloc[0:20,:].loc[:,\"pop\"]",
"_____no_output_____"
],
[
"df[\"year\"] == 2007",
"_____no_output_____"
],
[
"df2007 = df[df[\"year\"] == 2007]",
"_____no_output_____"
],
[
"df2007",
"_____no_output_____"
],
[
"#matplotlib\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.style.use(\"ggplot\")",
"_____no_output_____"
],
[
"#!pip install matplotlib",
"_____no_output_____"
],
[
"?plt.plot",
"_____no_output_____"
],
[
"df[\"gdpPercap\"]",
"_____no_output_____"
],
[
"plt.plot(df2007[\"gdpPercap\"], df2007['lifeExp'])",
"_____no_output_____"
],
[
"plt.scatter(df2007[\"gdpPercap\"], df2007['lifeExp'])\n\nplt.show()",
"_____no_output_____"
],
[
"plt.scatter(df2007[\"gdpPercap\"], df2007['lifeExp'])\nplt.xscale(\"log\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(df2007['lifeExp'])",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adb7e7b74cec49ae671de20cc5257dc4d4b9560
| 615,341 |
ipynb
|
Jupyter Notebook
|
codici/svm_xor.ipynb
|
tvml/ml1920
|
03143155d61163ff6cca55d3e9a1c0ca9ac5f178
|
[
"MIT"
] | null | null | null |
codici/svm_xor.ipynb
|
tvml/ml1920
|
03143155d61163ff6cca55d3e9a1c0ca9ac5f178
|
[
"MIT"
] | null | null | null |
codici/svm_xor.ipynb
|
tvml/ml1920
|
03143155d61163ff6cca55d3e9a1c0ca9ac5f178
|
[
"MIT"
] | null | null | null | 1,594.147668 | 314,424 | 0.959684 |
[
[
[
"## Support vector machine applicate a XOR",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline",
"_____no_output_____"
],
[
"import numpy as np\nfrom sklearn import svm\nfrom sklearn.kernel_approximation import RBFSampler\nfrom sklearn.linear_model import SGDClassifier",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nfrom matplotlib import cm\n\nplt.style.use('fivethirtyeight')\n\nplt.rcParams['font.family'] = 'sans-serif'\nplt.rcParams['font.serif'] = 'Ubuntu'\nplt.rcParams['font.monospace'] = 'Ubuntu Mono'\nplt.rcParams['font.size'] = 10\nplt.rcParams['axes.labelsize'] = 10\nplt.rcParams['axes.labelweight'] = 'bold'\nplt.rcParams['axes.titlesize'] = 10\nplt.rcParams['xtick.labelsize'] = 8\nplt.rcParams['ytick.labelsize'] = 8\nplt.rcParams['legend.fontsize'] = 10\nplt.rcParams['figure.titlesize'] = 12\nplt.rcParams['image.cmap'] = 'jet'\nplt.rcParams['image.interpolation'] = 'none'\nplt.rcParams['figure.figsize'] = (16, 8)\nplt.rcParams['lines.linewidth'] = 2\nplt.rcParams['lines.markersize'] = 8\n\ncolors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', \n'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', \n'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']\n\ncmap = mcolors.LinearSegmentedColormap.from_list(\"\", [\"#82cafc\", \"#069af3\", \"#0485d1\", colors[0], colors[8]])\ncmap_big = cm.get_cmap('Spectral', 512)\ncmap = mcolors.ListedColormap(cmap_big(np.linspace(0.5, 1, 128)))",
"_____no_output_____"
],
[
"xx, yy = np.meshgrid(np.linspace(-3, 3, 500),\n np.linspace(-3, 3, 500))\nnp.random.seed(0)\nX = np.random.randn(300, 2)\nY = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(16,8))\nfig.patch.set_facecolor('white')\nfor i in range(2):\n idx = np.where(Y == i)\n plt.scatter(X[idx, 0], X[idx, 1], c=colors[i], s=40, edgecolors='k', alpha = .9, label='Class {0:d}'.format(i),cmap=cmap)\nplt.xlabel('$x_1$', fontsize=14)\nplt.ylabel('$x_2$', fontsize=14)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"# fit the model\nclf= svm.SVC(gamma=40)\n#clf=svm.SVC(kernel='linear')\n#clf=svm.SVC(kernel='poly', degree=5, coef0=1)\n#clf=svm.SVC(kernel='sigmoid', gamma=15)\nclf = clf.fit(X, Y)",
"_____no_output_____"
],
[
"# plot the decision function for each datapoint on the grid\nZ = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\n\nfig = plt.figure(figsize=(16,8))\nfig.patch.set_facecolor('white')\nax = fig.gca()\nimshow_handle = plt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto',\n origin='lower', alpha=.5, cmap=cmap)\ncontours = plt.contour(xx, yy, Z, levels=[0], linewidths=2,\n linetypes='--', colors=[colors[9]])\nfor i in range(2):\n idx = np.where(Y == i)\n plt.scatter(X[idx, 0], X[idx, 1], c=colors[i], edgecolors='k', s=40, \n label='Class {0:d}'.format(i),cmap=cmap)\nplt.xlabel('$x_1$', fontsize=14)\nplt.ylabel('$x_2$', fontsize=14)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"print('Accuracy: {0:3.5f}'.format(np.sum(Y==clf.predict(X))/float(X.shape[0])*100))",
"Accuracy: 100.00000\n"
]
],
[
[
"# Gradient descent con hinge loss",
"_____no_output_____"
]
],
[
[
"def phi(X,nc):\n rbf_feature = RBFSampler(gamma=10, n_components=nc, random_state=1)\n Z = rbf_feature.fit_transform(X)\n return Z",
"_____no_output_____"
],
[
"X.shape",
"_____no_output_____"
],
[
"nc =20\nX_features = phi(X, nc)\nclf = SGDClassifier(loss='hinge', penalty='l2', max_iter=1000, alpha=.001) \nclf = clf.fit(X_features, Y)",
"_____no_output_____"
],
[
"X_features.shape",
"_____no_output_____"
],
[
"print('Accuracy: {0:3.5f}'.format(np.sum(Y==clf.predict(X_features))/float(X_features.shape[0])*100))",
"Accuracy: 85.33333\n"
],
[
"X_grid = np.c_[xx.ravel(), yy.ravel()]\nX_grid_features = phi(X_grid,nc)\nZ = clf.decision_function(X_grid_features)\nZ = Z.reshape(xx.shape)",
"_____no_output_____"
],
[
"fig = plt.figure()\nfig.patch.set_facecolor('white')\nax = fig.gca()\nimshow_handle = plt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto',\n origin='lower', alpha=.3)\ncontours = plt.contour(xx, yy, Z, levels=[0], linewidths=1,\n linetypes='--')\nfor i in range(2):\n idx = np.where(Y == i)\n plt.scatter(X[idx, 0], X[idx, 1], c=colors[i], edgecolors='k', s=40, \n label='Class {0:d}'.format(i),cmap=cmap)\nplt.xlabel('$x_1$', fontsize=14)\nplt.ylabel('$x_2$', fontsize=14)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"Z",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adb8656216ff0df1eeba2adbb70c630247326f7
| 38,685 |
ipynb
|
Jupyter Notebook
|
DeepRL_For_HPE/Rewards calculation 2.ipynb
|
muratcancicek/Deep_RL_For_Head_Pose_Est
|
b3436a61a44d20d8bcfd1341792e0533e3ff9fc2
|
[
"Apache-2.0"
] | null | null | null |
DeepRL_For_HPE/Rewards calculation 2.ipynb
|
muratcancicek/Deep_RL_For_Head_Pose_Est
|
b3436a61a44d20d8bcfd1341792e0533e3ff9fc2
|
[
"Apache-2.0"
] | null | null | null |
DeepRL_For_HPE/Rewards calculation 2.ipynb
|
muratcancicek/Deep_RL_For_Head_Pose_Est
|
b3436a61a44d20d8bcfd1341792e0533e3ff9fc2
|
[
"Apache-2.0"
] | null | null | null | 57.481426 | 6,997 | 0.659299 |
[
[
[
"from keras.preprocessing.sequence import TimeseriesGenerator\nfrom FC_RNN_Evaluater.FC_RNN_Evaluater import *\nfrom keras.initializers import RandomNormal\nimport numpy as np",
"/home/mcicek/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n"
],
[
"timesteps = 10\ninputMatrix = np.random.rand(57,224,224,3)# np.array([[[[i, i, i]]] for i in range(57)])\nlabels = np.array([[i, i, i] for i in range(57)])\n\ninputMatrix, inputLabels, outputLabels = getSequencesToSequences(inputMatrix, labels, timesteps)\nbatch_size=1 \nimg_gen = TimeseriesGenerator(inputMatrix, outputLabels, length=timesteps, sampling_rate=1, stride=timesteps, batch_size=batch_size)\nang_gen = TimeseriesGenerator(inputLabels, outputLabels, length=timesteps, sampling_rate=1, stride=timesteps, batch_size=batch_size)\nbatch_01 = img_gen[0]\nbatch_0 = ang_gen[0]\ninputFrames, y = batch_01\ninputSeq, y = batch_0",
"_____no_output_____"
],
[
"m = np.zeros((n,)+y.shape)\nwith tf.Session():\n m = samples.eval()",
"_____no_output_____"
],
[
"from FC_RNN_Evaluater.Stateful_FC_RNN_Configuration import *",
"_____no_output_____"
],
[
"vgg_model, full_model, modelID, preprocess_input = getFinalModel(timesteps = timesteps, lstm_nodes = lstm_nodes, lstm_dropout = lstm_dropout, lstm_recurrent_dropout = lstm_recurrent_dropout, \n num_outputs = num_outputs, lr = learning_rate, include_vgg_top = include_vgg_top, use_vgg16 = use_vgg16)",
"_____no_output_____"
],
[
"from keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras import backend as k\nfrom keras import losses\nimport numpy as np\nimport tensorflow as tf\nfrom sklearn.metrics import mean_squared_error\nfrom math import sqrt",
"_____no_output_____"
],
[
"model = full_model\nn = 5\noutputs = model.predict([inputFrames, inputSeq])\ntargets = y\nprint(outputs.shape, targets.shape)\nsigma = 0.05",
"(1, 10, 3) (1, 10, 3)\n"
],
[
"samples = RandomNormal(mean=model.outputs, stddev=sigma, seed=None)((n,)+outputs.shape)",
"_____no_output_____"
],
[
"samples",
"_____no_output_____"
],
[
"#with tf.Session() as sess:\n# print(samples.eval())",
"_____no_output_____"
],
[
"yy = tf.convert_to_tensor(np.repeat(targets[np.newaxis, ...], n, axis=0), name='yy', dtype=tf.float32)",
"_____no_output_____"
],
[
"yy",
"_____no_output_____"
],
[
"abs_diff = tf.abs(samples - yy)\nrti = - tf.reduce_mean(abs_diff, axis = -1) - tf.reduce_mean(abs_diff, axis = -1)",
"_____no_output_____"
],
[
"rti",
"_____no_output_____"
],
[
"ri = tf.reduce_sum(rti, axis=-1)",
"_____no_output_____"
],
[
"ri",
"_____no_output_____"
],
[
"bt = tf.reduce_mean(rti, axis=0)",
"_____no_output_____"
],
[
"bt",
"_____no_output_____"
],
[
"rti_bt = rti - bt",
"_____no_output_____"
],
[
"rti_bt",
"_____no_output_____"
],
[
"ri_b = tf.reduce_sum(rti_bt, axis=-1)",
"_____no_output_____"
],
[
"ri_b",
"_____no_output_____"
],
[
"mu = tf.convert_to_tensor(np.repeat(outputs[np.newaxis, ...], n, axis=0), name='mu', dtype=tf.float32)",
"_____no_output_____"
],
[
"mu",
"_____no_output_____"
],
[
"gti = (samples - mu) / tf.convert_to_tensor(sigma**2)",
"_____no_output_____"
],
[
"gti",
"_____no_output_____"
],
[
"gradients_per_episode = []\nfor i in range(samples.shape[0]):\n #print(samples[i], model.output)\n #tf.assign(model.output, samples[i])\n #loss = losses.mean_squared_error(targets, model.output)tf..eval()\n loss = losses.mean_squared_error(targets, samples[i])\n gradients = k.gradients(loss, model.trainable_weights)\n #print(gradients)\n gradients = [g*ri_b[i] for g in gradients]\n gradients_per_episode.append(gradients)",
"_____no_output_____"
],
[
"len(gradients_per_episode)",
"_____no_output_____"
],
[
"len(gradients_per_episode[0])",
"_____no_output_____"
],
[
"gradients_per_episode[0]",
"_____no_output_____"
],
[
"stacked_gradients = []\nfor i in range(len(gradients_per_episode[0])):\n stacked_gradients.append(tf.stack([gradients[i] for gradients in gradients_per_episode])) ",
"_____no_output_____"
],
[
"stacked_gradients",
"_____no_output_____"
],
[
"final_gradients = [tf.reduce_mean(g, axis=0) for g in stacked_gradients]",
"_____no_output_____"
],
[
"final_gradients",
"_____no_output_____"
],
[
"for i in range(len(model.trainable_weights)):\n tf.assign_sub(model.trainable_weights[i], final_gradients[i])",
"_____no_output_____"
],
[
"\n# Begin TensorFlow\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n\nsteps = 1 # steps of gradient descent\nfor s in range(steps):\n #print(model.input)\n # ===== Numerical gradient =====\n #evaluated_gradients = sess.run(gradients, feed_dict={'tdCNN_input:0': inputFrames, 'aux_input:0': inputSeq})\n\n # Step down the gradient for each layer\n for i in range(len(model.trainable_weights)):\n sess.run(tf.assign_sub(model.trainable_weights[i], final_gradients[i]))\n\n # Every 10 steps print the RMSE\n if s % 10 == 0:\n print(\"step \" + str(s))\n\n#final_outputs = model.predict([inputFrames, inputSeq])\n\nprint(\"===AFTER STEPPING DOWN GRADIENT===\")\nprint(\"outputs:\\n\", outputs)\nprint(\"targets:\\n\", targets)",
"_____no_output_____"
],
[
"\nloss = losses.mean_squared_error(targets, model.output)\nprint(targets.shape, model.output.shape)\nprint(loss)\n# ===== Symbolic Gradient =====\ngradients = k.gradients(loss, model.trainable_weights)\nprint(gradients)\n\n#print(\"===BEFORE WALKING DOWN GRADIENT===\")\n#print(\"outputs:\\n\", outputs)\n#print(\"targets:\\n\", targets)",
"(1, 10, 3) (1, ?, 3)\nTensor(\"Mean_13:0\", shape=(1, 10), dtype=float32)\n[<tf.Tensor 'gradients_6/AddN_5:0' shape=(4099, 4096) dtype=float32>, <tf.Tensor 'gradients_6/AddN_4:0' shape=(1024, 4096) dtype=float32>, <tf.Tensor 'gradients_6/AddN_3:0' shape=(4096,) dtype=float32>, <tf.Tensor 'gradients_6/time_distributed_1/while/MatMul/Enter_grad/b_acc_3:0' shape=(1024, 3) dtype=float32>, <tf.Tensor 'gradients_6/time_distributed_1/while/BiasAdd/Enter_grad/b_acc_3:0' shape=(3,) dtype=float32>]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adb8c665b7c36e0e463688beb7f0fa7310d862d
| 38,541 |
ipynb
|
Jupyter Notebook
|
module4-logistic-regression/Khislat_Zhuraeva_LS_DS_214_assignment.ipynb
|
Khislatz/DS-Unit-2-Linear-Models
|
0e21e8fd59cc7d30e3b445712afa2a2aa8a5d6d8
|
[
"MIT"
] | null | null | null |
module4-logistic-regression/Khislat_Zhuraeva_LS_DS_214_assignment.ipynb
|
Khislatz/DS-Unit-2-Linear-Models
|
0e21e8fd59cc7d30e3b445712afa2a2aa8a5d6d8
|
[
"MIT"
] | null | null | null |
module4-logistic-regression/Khislat_Zhuraeva_LS_DS_214_assignment.ipynb
|
Khislatz/DS-Unit-2-Linear-Models
|
0e21e8fd59cc7d30e3b445712afa2a2aa8a5d6d8
|
[
"MIT"
] | null | null | null | 32.305951 | 296 | 0.368646 |
[
[
[
"<a href=\"https://colab.research.google.com/github/Khislatz/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/Khislat_Zhuraeva_LS_DS_214_assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 1, Module 4*\n\n---",
"_____no_output_____"
],
[
"# Logistic Regression\n\n\n## Assignment 🌯\n\nYou'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?\n\n> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.\n\n- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.\n- [ ] Begin with baselines for classification.\n- [ ] Use scikit-learn for logistic regression.\n- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)\n- [ ] Get your model's test accuracy. (One time, at the end.)\n- [ ] Commit your notebook to your fork of the GitHub repo.\n\n\n## Stretch Goals\n\n- [ ] Add your own stretch goal(s) !\n- [ ] Make exploratory visualizations.\n- [ ] Do one-hot encoding.\n- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).\n- [ ] Get and plot your coefficients.\n- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).",
"_____no_output_____"
]
],
[
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
],
[
"# Load data downloaded from https://srcole.github.io/100burritos/\nimport pandas as pd\ndf = pd.read_csv(DATA_PATH+'burritos/burritos.csv')",
"_____no_output_____"
],
[
"# Derive binary classification target:\n# We define a 'Great' burrito as having an\n# overall rating of 4 or higher, on a 5 point scale.\n# Drop unrated burritos.\ndf = df.dropna(subset=['overall'])\ndf['Great'] = df['overall'] >= 4",
"_____no_output_____"
],
[
"# Clean/combine the Burrito categories\ndf['Burrito'] = df['Burrito'].str.lower()\n\ncalifornia = df['Burrito'].str.contains('california')\nasada = df['Burrito'].str.contains('asada')\nsurf = df['Burrito'].str.contains('surf')\ncarnitas = df['Burrito'].str.contains('carnitas')\n\ndf.loc[california, 'Burrito'] = 'California'\ndf.loc[asada, 'Burrito'] = 'Asada'\ndf.loc[surf, 'Burrito'] = 'Surf & Turf'\ndf.loc[carnitas, 'Burrito'] = 'Carnitas'\ndf.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'",
"_____no_output_____"
],
[
"# Drop some high cardinality categoricals\ndf = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])",
"_____no_output_____"
],
[
"# Drop some columns to prevent \"leakage\"\ndf = df.drop(columns=['Rec', 'overall'])",
"_____no_output_____"
],
[
"df.head(3)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df['Date'] = pd.to_datetime(df['Date'])",
"_____no_output_____"
],
[
"train = df[df['Date']<='12/31/2016']\nval = df[(df['Date']>='01/01/2017') & (df['Date']<='12/31/2017')]\ntest = df[df['Date']>='01/01/2018']\ntrain.shape, val.shape, test.shape",
"_____no_output_____"
],
[
"df['Great'].dtypes",
"_____no_output_____"
],
[
"target = 'Great'\ny_train = train[target]\ny_train.value_counts(normalize=True) ",
"_____no_output_____"
],
[
"majority_class = y_train.mode()[0]\ny_pred = [majority_class] * len(y_train) #majority is False ",
"_____no_output_____"
],
[
"### Training accuracy of majority class baseline = \n### frequency of majority class\nfrom sklearn.metrics import accuracy_score\naccuracy_score(y_train, y_pred)",
"_____no_output_____"
],
[
"### Validation accuracy of majority class baseline = \n### usually similar to Train accuracy\ny_val = val[target]\ny_pred = [majority_class]*len(y_val)\naccuracy_score(y_val, y_pred)",
"_____no_output_____"
],
[
"train.describe().head(4)\n",
"_____no_output_____"
],
[
"# 1. Import estimator class\nfrom sklearn.linear_model import LogisticRegression\n# 2. Instantiate this class\nlog_reg = LogisticRegression()\n# 3. Arrange X feature matrices (already did y target vectors)\nfeatures = ['Hunger', 'Circum','Volume', 'Tortilla','Temp', 'Meat','Fillings','Meat:filling','Salsa','Wrap']\nX_train = train[features]\nX_val = val[features]\n\n# Impute missing values\nfrom sklearn.impute import SimpleImputer\nimputer = SimpleImputer()\nX_train_imputed = imputer.fit_transform(X_train)\nX_val_imputed = imputer.transform(X_val)\n",
"_____no_output_____"
],
[
"# 4. Fit the model\nlog_reg.fit(X_train_imputed, y_train)\nprint('Validation Accuracy', log_reg.score(X_val_imputed, y_val))\n#Same things as\ny_pred = log_reg.predict(X_val_imputed)\nprint('Validation Accuracy', accuracy_score(y_pred, y_val))\n",
"Validation Accuracy 0.8705882352941177\nValidation Accuracy 0.8705882352941177\n"
],
[
"#The predictions look like this\nlog_reg.predict(X_val_imputed)",
"_____no_output_____"
],
[
"test_case = [[0.500000, 17.000000, 0.400000,1.400000,1.000000,1.000000,1.000000,0.500000,0.000000,0.000000]] ",
"_____no_output_____"
],
[
"log_reg.predict(test_case)\nlog_reg.predict_proba(test_case)[0]",
"_____no_output_____"
],
[
"log_reg.intercept_",
"_____no_output_____"
],
[
"# The logistic sigmoid \"squishing\" function, implemented to accept numpy arrays\nimport numpy as np\n\ndef sigmoid(x):\n return 1 / (1 + np.e**(-x))",
"_____no_output_____"
],
[
"sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))",
"_____no_output_____"
],
[
"1 - sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegressionCV\nX_test = test[features]\ny_test = test[target]\n\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train_imputed)\n\nmodel = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)\nmodel.fit(X_train_scaled, y_train);\nX_test_imputed = imputer.transform(X_test)\nX_test_scaled = scaler.transform(X_test_imputed)\nprint('Test Accuracy', model.score(X_test_scaled, y_test))",
"Test Accuracy 0.7894736842105263\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adb9b2ec6a05f398f6214776e1777196b1b03e8
| 18,779 |
ipynb
|
Jupyter Notebook
|
python prep/8. Basic Plotting in Python.ipynb
|
MattGraham1996/Erdos-Institute
|
4d13f5c7ff439fee80db3dba3da0482998ca147b
|
[
"ECL-2.0"
] | null | null | null |
python prep/8. Basic Plotting in Python.ipynb
|
MattGraham1996/Erdos-Institute
|
4d13f5c7ff439fee80db3dba3da0482998ca147b
|
[
"ECL-2.0"
] | null | null | null |
python prep/8. Basic Plotting in Python.ipynb
|
MattGraham1996/Erdos-Institute
|
4d13f5c7ff439fee80db3dba3da0482998ca147b
|
[
"ECL-2.0"
] | null | null | null | 28.802147 | 323 | 0.524575 |
[
[
[
"# Basic Plotting in Python\n\nMaking explatory plots is a common task in data science and many good presentations usually feature excellent plots.\n\nFor us the most important plotting package is `matplotlib`, which is python's attempt to copy MATLAB's plotting functionality. Also of note is the package `seaborn`, but we won't be using this package nearly as much as `matplotlib`. We'll briefly touch on a `seaborn` feature that I like, but won't go beyond that.\n\nFirst let's check that you have both packages installed.",
"_____no_output_____"
]
],
[
[
"## It is standard to import matplotlib.pyplot as plt\nimport matplotlib.pyplot as plt\n\n## It is standard to import seaborn as sns\nimport seaborn as sns",
"_____no_output_____"
],
[
"## Let's perform a version check\nimport matplotlib\n\n# I had 3.3.2 when I wrote this\nprint(\"Your matplotlib version is\",matplotlib.__version__)\n\n# I had 0.11.0 when I wrote this\nprint(\"Your seaborn version is\",sns.__version__)",
"_____no_output_____"
]
],
[
[
"##### Be sure you can run both of the above code chunks before continuing with this notebook, again it should be fine if your package version is slightly different than mine.\n\n##### As a second note, you'll be able to run a majority of the notebook with just matplotlib. I'll put the seaborn content at the bottom of the notebook.",
"_____no_output_____"
]
],
[
[
"## We'll be using what we learned in the \n## previous two notebooks to help\n## generate data\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"## A First Plot \n\nBefore getting into the nitty gritty, let's look at a first plot made with `matplotlib`.",
"_____no_output_____"
]
],
[
[
"## Here's our data\nx = [0,1,2,3,4,5,6,7,8,9,10]\ny = [2*i - 3 for i in x]\n\n## plt.plot will make the plot\n## First put what you want on the x-axis, then the y-axis\nplt.plot(x,y)\n\n## Always end your plotting block with plt.show\n## in jupyter this makes sure that the plot displays \n## properly\nplt.show()",
"_____no_output_____"
]
],
[
[
"##### What Happened?\n\nSo what happened when we ran the above code?\n\n`matplotlib` creates a figure object, and on that object it places a subplot object, and finally it places the points on the subplot then connects the points with straight lines.\n\nWe'll return to the topic of subplots later in the notebook\n\nNow you try plotting the following `x` and `y`.",
"_____no_output_____"
]
],
[
[
"## Run this code first\n## np.linspace makes an array that\n## goes from -5 to 5 broken into \n## 100 evenly spaced steps\nx = 10*np.linspace(-5,5,100)\ny = x**2 - 3",
"_____no_output_____"
],
[
"## You code\n## Plot y against x\n\n\n\n",
"_____no_output_____"
]
],
[
[
"## Getting More Control of your Figures\n\nSo while you can certainly use the simple code above to generate figures, the best presentations typically have excellent graphics demonstrating the outcome. So why don't we learn how to control our figures a little bit more.\n\nThis process typically involves explicitly defining a figure and subplot object. Let's see.",
"_____no_output_____"
]
],
[
[
"## plt.figure() will make the figure object\n## figsize can control how large it is (width,height)\n## here we make a 10 x 12 window\nplt.figure(figsize = (10,12))\n\n## This still creates the subplot object\n## that we plot on\nplt.plot(x,y)\n\n## we can add axis labels\n## and control their fontsize\n## A good rule of thumb is the bigger the better\n## You want your plots to be readable\n## As a note: matplotlib can use LaTeX commands\n## so if you place math text in dollar signs it will\n## be in a LaTeX environment\nplt.xlabel(\"$x$\", fontsize = 16)\nplt.ylabel(\"$y$\", fontsize = 16)\n\n## we can set the plot axis limits like so\n## This makes the x axis bounded between -20 and 20\nplt.xlim((-20,20))\n\n## this makes the y axis bounded between -100 and 100\nplt.ylim(-100,100)\n\n## Also a title\n## again make it large font\nplt.title(\"A Plot Title\", fontsize = 20)\n\n## Now we show the plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Controlling How the Plotted Data Looks\n\nWe can control the appearance of what is plotted. Here's a quick cheatsheet of easy to use options:\n\n\n\n| Color | Description |\n| :-------------: |:------------:|\n| r | red |\n| b | blue |\n| k | black |\n| g | green |\n| y | yellow |\n| m | magenta |\n| c | cyan |\n| w | white |\n\n|Line Style | Description |\n|:---------:|:-------------:|\n| - | Solid line |\n| -- | Dashed line |\n| : | Dotted line |\n| -. | Dash-dot line |\n\n| Marker | Description |\n|:------:|:--------------:|\n|o | Circle |\n|+ | Plus Sign |\n|* | Asterisk |\n|. | Point |\n| x | Cross |\n| s | Square |\n|d | Diamond |\n|^ | Up Triangle |\n|< | Right Triangle |\n|> | Left Triangle |\n|p | Pentagram |\n| h | hexagram |\n\nLet's try the above plot one more time, but using some of these to jazz it up.",
"_____no_output_____"
]
],
[
[
"## plt.figure() will make the figure object\n## figsize can control how large it is (width,height)\nplt.figure(figsize = (10,12))\n\n## The third argument to plot(), 'mp' here\n## tells matplotlib to make the points magenta\n## and to use pentagrams, the absence of a line character\n## means there will be no line connecting these points\n## we can also add a label, and insert a legend later\nplt.plot(x,y,'mp', label=\"points\")\n\n## We can even plot two things on the same plot\n## here the third argument tells matplotlib to make a\n## green dotted line\nplt.plot(x+10,y-100,'g--', label=\"shifted line\")\n\n## we can add axis labels\n## and control their fontsize\nplt.xlabel(\"$x$\", fontsize = 16)\nplt.ylabel(\"$y$\", fontsize = 16)\n\n## Also a title\nplt.title(\"A Plot Title\", fontsize = 20)\n\n## plt.legend() adds the legend to the plot\n## This will display the labels we had above\nplt.legend(fontsize=14)\n\n\n# Now we show the plot\nplt.show()",
"_____no_output_____"
],
[
"## You code\n## Redefine x and y to be this data\nx = 10*np.random.random(100) - 5\ny = x**3 - x**2 + x",
"_____no_output_____"
],
[
"## You code\n## Plot y against x here\n## play around with different colors and markers\n\n\n\n\n\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"## Subplots\n\nSometimes you'll want to plot multiple things in the same Figure. Luckily `matplotlib` has the functionality to create subplots.",
"_____no_output_____"
]
],
[
[
"## plt.subplots makes a figure object\n## then populates it with subplots\n## the first number is the number of rows\n## the second number is the number of columns\n## so this makes a 2 by 2 subplot matrix\n## fig is the figure object\n## axes is a matrix containing the four subplots\nfig, axes = plt.subplots(2, 2, figsize = (10,8))\n\n## We can plot like before but instead of plt.plot\n## we use axes[i,j].plot\n## A random cumulative sum on axes[0,0]\naxes[0,0].plot(np.random.randn(20).cumsum(),'r--')\n## note I didn't have an x, y pair here\n## so what happened was, matplotlib populated\n## the x-values for us, and used the input\n## as the y-values.\n\n\n\n## I can set x and y labels on subplots like so\n## Notice that here I must use set_xlabel instead of \n## simply xlabel\naxes[0,0].set_xlabel(\"$x$\", fontsize=14)\naxes[0,0].set_ylabel(\"$y$\", fontsize=14)\n\n## show the plot\nplt.show()",
"_____no_output_____"
],
[
"## plt can also make a number of other useful graph types\n\n\nfig, axes = plt.subplots(2, 2, figsize = (10,8))\n\n\naxes[0,0].plot(np.random.randn(20).cumsum(),'r--')\n\n## like scatter plots\n## for these put the x, then the y\n## you can then specify the \"c\"olor, \"s\"ize, and \"marker\"shape\n## it is also good practice to let long code go onto multiple lines\n## in python, you can go to a new line following a comma in a\n## function call\naxes[0,1].scatter(np.random.random(10), # start a new line now\n np.random.random(10),\n c = \"purple\", # color\n s = 50, # marker size\n marker = \"*\") # marker shape\n\n## or histograms\n## this can be done with .hist\n## you input the data you want a histogram of\n## and you can specify the number of bins with\n## bins\naxes[1,0].hist(np.random.randint(0,100,100), bins = 40)\n\n\n## and text\n## for this you call .text()\n## you input the x, y position of the text\n## then the text itself, then you can specify the fontsize\naxes[1,1].text(.5, .5, \"Hi Mom!\", fontsize=20)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"As a note all of the plotting capabilities shown above (`hist()`, `scatter()`, and `text()`) are available outside of subplots as well. You'd just call `plt.hist()`, `plt.scatter()` or `plt.text()` instead.",
"_____no_output_____"
]
],
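[
[
"*(Added in editing, not part of the original notebook.)* A minimal sketch of the note above: the same `hist()`, `scatter()` and `text()` calls used directly through `plt`, on made-up random data.",
"_____no_output_____"
]
],
[
[
"## A minimal sketch (added in editing, not part of the original notebook):\n## the same hist, scatter, and text calls used directly through plt,\n## without creating subplots first. The data is random and purely illustrative.\nplt.figure(figsize = (6,4))\n\n## a standalone histogram, like axes[1,0].hist above\nplt.hist(np.random.randint(0,100,100), bins = 20)\n\nplt.show()\n\nplt.figure(figsize = (6,4))\n\n## a standalone scatter plot with some text placed on it\nplt.scatter(np.random.random(10), np.random.random(10), c = \"purple\", s = 50, marker = \"*\")\nplt.text(.1, .5, \"Hi Mom!\", fontsize = 14)\n\nplt.show()",
"_____no_output_____"
]
],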
[
[
"## You code\n## Make a 2 x 2 subplot\n## Use numpy to generate data and plot \n## a cubic function in the 0,0 plot\n## a scatter plot of two 100 pulls from random normal distribution\n## in the 0,1 plot\n## a histogram of 1000 pulls from the random normal distribution\n## in the 1,0 plot\n## and whatever text you'd like in the 1,1 plot\n\n\n\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"### Saving a Figure\n\nWe can also save a figure after we've plotted it with `plt.savefig(figure_name)`.",
"_____no_output_____"
]
],
[
[
"## We'll make a simple figure\n## then save it\nplt.figure(figsize=(8,8))\n\nplt.plot([1,2,3,4], [1,2,3,4], 'k--')\n\n\n## all you'll need is the figure name\n## the default is to save the image as a png file\nplt.savefig(\"my_first_matplotlib_plot.png\")\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"If you check your repository you should now see `my_first_matplotlib_plot.png`. Open it up to admire its beauty.\n\n\nThat's really all we'll need to know for making plots in the boot camp. Of course we've come nowhere close to understanding the totality of `matplotlib`, so if you're interested check out the documentation, <a href=\"https://matplotlib.org/\">https://matplotlib.org/</a>.",
"_____no_output_____"
],
[
"## `seaborn`\n\n`seaborn` is a pretty user friendly package that can make nice plots quickly, however, we won't explore it much in this notebook. But we will introduce a useful function that allows you to give your plot gridlines for easier reading.\n\nFor those interesting in seeing fun `seaborn` plots check out this link, <a href=\"https://seaborn.pydata.org/examples/index.html\">https://seaborn.pydata.org/examples/index.html</a>.",
"_____no_output_____"
]
],
[
[
"## Let's recall this plot from before\nx = 10*np.linspace(-5,5,100)\ny = x**2 - 3\n\nplt.figure(figsize = (10,12))\n\n\nplt.plot(x,y,'mp', label=\"points\")\n\nplt.plot(x+10,y-100,'g--', label=\"shifted line\")\n\nplt.xlabel(\"$x$\", fontsize = 16)\nplt.ylabel(\"$y$\", fontsize = 16)\n\n\nplt.title(\"A Plot Title\", fontsize = 20)\n\n\nplt.legend(fontsize=14)\n\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now we can use `seaborn` to add gridlines to the figure, which will allow for easier reading of plots like the one above.",
"_____no_output_____"
]
],
[
[
"## Run this code\nsns.set_style(\"whitegrid\")",
"_____no_output_____"
],
[
"## Now rerun the plot\nx = 10*np.linspace(-5,5,100)\ny = x**2 - 3\n\nplt.figure(figsize = (10,12))\n\n\nplt.plot(x,y,'mp', label=\"points\")\n\nplt.plot(x+10,y-100,'g--', label=\"shifted line\")\n\nplt.xlabel(\"$x$\", fontsize = 16)\nplt.ylabel(\"$y$\", fontsize = 16)\n\n\nplt.title(\"A Plot Title\", fontsize = 20)\n\n\nplt.legend(fontsize=14)\n\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"See the difference?",
"_____no_output_____"
]
],
[
[
"## You code\n## see what this does to your plots\nsns.set_style(\"darkgrid\")",
"_____no_output_____"
],
[
"## Now rerun the plot\nx = 10*np.linspace(-5,5,100)\ny = x**2 - 3\n\nplt.figure(figsize = (10,12))\n\n\nplt.plot(x,y,'mp', label=\"points\")\n\nplt.plot(x+10,y-100,'g--', label=\"shifted line\")\n\nplt.xlabel(\"$x$\", fontsize = 16)\nplt.ylabel(\"$y$\", fontsize = 16)\n\n\nplt.title(\"A Plot Title\", fontsize = 20)\n\n\nplt.legend(fontsize=14)\n\n\nplt.show()\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"## That's it!\n\nThat's all for this notebook. You now have a firm grasp of the basics of plotting figures with `matplotlib`. With a little practice you'll be a `matplotlib` pro in no time.",
"_____no_output_____"
],
[
"This notebook was written for the Erdős Institute Cőde Data Science Boot Camp by Matthew Osborne, Ph. D., 2021.\n\nRedistribution of the material contained in this repository is conditional on acknowledgement of Matthew Tyler Osborne, Ph.D.'s original authorship and sponsorship of the Erdős Institute as subject to the license (see License.md)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
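[
"markdown"
],
[
"code"
],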
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4adba0fb756b3cbf8c23949fc6dd2c47e3e114a0
| 19,180 |
ipynb
|
Jupyter Notebook
|
doc/source/ray-contribute/docs.ipynb
|
kisuke95/ray
|
f1a1f9799238c995995f558636a09c2147ecabe6
|
[
"Apache-2.0"
] | 22 |
2018-05-08T05:52:34.000Z
|
2020-04-01T10:09:55.000Z
|
doc/source/ray-contribute/docs.ipynb
|
kisuke95/ray
|
f1a1f9799238c995995f558636a09c2147ecabe6
|
[
"Apache-2.0"
] | 51 |
2018-05-17T05:55:28.000Z
|
2020-03-18T06:49:49.000Z
|
doc/source/ray-contribute/docs.ipynb
|
kisuke95/ray
|
f1a1f9799238c995995f558636a09c2147ecabe6
|
[
"Apache-2.0"
] | 10 |
2018-04-27T10:50:59.000Z
|
2020-02-24T02:41:43.000Z
| 47.241379 | 190 | 0.643431 |
[
[
[
"(docs-contribute)=\n\n# Contributing to the Ray Documentation\n\nThere are many ways to contribute to the Ray documentation, and we're always looking for new contributors.\nEven if you just want to fix a typo or expand on a section, please feel free to do so!\n\nThis document walks you through everything you need to do to get started.\n\n## Building the Ray documentation\n\nIf you want to contribute to the Ray documentation, you'll need a way to build it.\nYou don't have to build Ray itself, which is a bit more involved.\nJust clone the Ray repository and change into the `ray/doc` directory.\n\n```shell\ngit clone [email protected]:ray-project/ray.git\ncd ray/doc\n```\n\nTo install the documentation dependencies, run the following command:\n\n```shell\npip install -r requirements-doc.txt\n```\n\nAdditionally, it's best if you install the dependencies for our linters with\n\n```shell\npip install -r ../python/requirements_linters.txt\n```\n\nso that you can make sure your changes comply with our style guide.\nBuilding the documentation is done by running the following command:\n\n```shell\nmake html\n```\n\nwhich will build the documentation into the `_build` directory.\nAfter the build finishes, you can simply open the `_build/html/index.html` file in your browser.\nIt's considered good practice to check the output of your build to make sure everything is working as expected.\n\nBefore committing any changes, make sure you run the \n[linter](https://docs.ray.io/en/latest/ray-contribute/getting-involved.html#lint-and-formatting)\nwith `../scripts/format.sh` from the `doc` folder,\nto make sure your changes are formatted correctly.\n\n## The basics of our build system\n\nThe Ray documentation is built using the [`sphinx`](https://www.sphinx-doc.org/) build system.\nWe're using the [Sphinx Book Theme](https://github.com/executablebooks/sphinx-book-theme) from the\n[executable books project](https://github.com/executablebooks).\n\nThat means that you can write Ray documentation in either Sphinx's native \n[reStructuredText (rST)](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html) or in\n[Markedly Structured Text (MyST)](https://myst-parser.readthedocs.io/en/latest/).\nThe two formats can be converted to each other, so the choice is up to you.\nHaving said that, it's important to know that MyST is\n[common markdown compliant](https://myst-parser.readthedocs.io/en/latest/syntax/reference.html#commonmark-block-tokens).\nIf you intend to add a new document, we recommend starting from an `.md` file.\n\nThe Ray documentation also fully supports executable formats like [Jupyter Notebooks](https://jupyter.org/).\nMany of our examples are notebooks with [MyST markdown cells](https://myst-nb.readthedocs.io/en/latest/index.html).\nIn fact, this very document you're reading _is_ a notebook.\nYou can check this for yourself by either downloading the `.ipynb` file,\nor directly launching this notebook into either Binder or Google Colab in the top navigation bar.\n\n## What to contribute?\n\nIf you take Ray Tune as an example, you can see that our documentation is made up of several types of documentation,\nall of which you can contribute to:\n\n- [a project landing page](https://docs.ray.io/en/latest/tune/index.html),\n- [a getting started guide](https://docs.ray.io/en/latest/tune/getting-started.html),\n- [a key concepts page](https://docs.ray.io/en/latest/tune/key-concepts.html),\n- [user guides for key features](https://docs.ray.io/en/latest/tune/tutorials/overview.html),\n- 
[practical examples](https://docs.ray.io/en/latest/tune/examples/index.html),\n- [a detailed FAQ](https://docs.ray.io/en/latest/tune/faq.html),\n- [and API references](https://docs.ray.io/en/latest/tune/api_docs/overview.html).\n\nThis structure is reflected in the\n[Ray documentation source code](https://github.com/ray-project/ray/tree/master/doc/source/tune) as well, so you\nshould have no problem finding what you're looking for.\nAll other Ray projects share a similar structure, but depending on the project there might be minor differences.\n\nEach type of documentation listed above has its own purpose, but at the end our documentation\ncomes down to _two types_ of documents:\n\n- Markup documents, written in MyST or rST. If you don't have a lot of (executable) code to contribute or\n use more complex features such as\n [tabbed content blocks](https://docs.ray.io/en/latest/ray-core/walkthrough.html#starting-ray), this is the right\n choice. Most of the documents in Ray Tune are written in this way, for instance the\n [key concepts](https://github.com/ray-project/ray/blob/master/doc/source/tune/key-concepts.rst) or\n [API documentation](https://github.com/ray-project/ray/blob/master/doc/source/tune/api_docs/overview.rst).\n- Notebooks, written in `.ipynb` format. All Tune examples are written as notebooks. These notebooks render in\n the browser like `.md` or `.rst` files, but have the added benefit of adding launch buttons to the top of the\n document, so that users can run the code themselves in either Binder or Google Colab. A good first example to look\n at is [this Tune example](https://github.com/ray-project/ray/blob/master/doc/source/tune/examples/tune-serve-integration-mnist.ipynb).\n\n## Fixing typos and improving explanations\n\nIf you spot a typo in any document, or think that an explanation is not clear enough, please consider\nopening a pull request.\nIn this scenario, just run the linter as described above and submit your pull request.\n\n## Adding API references\n\nWe use [Sphinx's autodoc extension](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) to generate\nour API documentation from our source code.\nIn case we're missing a reference to a function or class, please consider adding it to the respective document in question.\n\nFor example, here's how you can add a function or class reference using `autofunction` and `autoclass`:\n\n```markdown\n.. autofunction:: ray.tune.integration.docker.DockerSyncer\n\n.. 
autoclass:: ray.tune.integration.keras.TuneReportCallback\n```\n\nThe above snippet was taken from the\n[Tune API documentation](https://github.com/ray-project/ray/blob/master/doc/source/tune/api_docs/integration.rst),\nwhich you can look at for reference.\n\nIf you want to change the content of the API documentation, you will have to edit the respective function or class\nsignatures directly in the source code.\nFor example, in the above `autofunction` call, to change the API reference for `ray.tune.integration.docker.DockerSyncer`,\nyou would have to [change the following source file](https://github.com/ray-project/ray/blob/7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065/python/ray/tune/integration/docker.py#L15-L38).\n\n## Adding code to an `.rST` or `.md` file\n\nModifying text in an existing documentation file is easy, but you need to be careful when it comes to adding code.\nThe reason is that we want to ensure every code snippet on our documentation is tested.\nThis requires us to have a process for including and testing code snippets in documents.\n\nIn an `.rST` or `.md` file, you can add code snippets using `literalinclude` from the Sphinx system.\nFor instance, here's an example from the Tune's \"Key Concepts\" documentation: \n\n```markdown\n.. literalinclude:: doc_code/key_concepts.py\n :language: python\n :start-after: __function_api_start__\n :end-before: __function_api_end__\n```\n\nNote that in the whole file there's not a single literal code block, code _has to be_ imported using the `literalinclude` directive.\nThe code that gets added to the document by `literalinclude`, including `start-after` and `end-before` tags,\nreads as follows:",
"_____no_output_____"
]
],
[
[
"# __function_api_start__\nfrom ray import tune\n\n\ndef objective(x, a, b): # Define an objective function.\n return a * (x ** 0.5) + b\n\n\ndef trainable(config): # Pass a \"config\" dictionary into your trainable.\n\n for x in range(20): # \"Train\" for 20 iterations and compute intermediate scores.\n score = objective(x, config[\"a\"], config[\"b\"])\n\n tune.report(score=score) # Send the score to Tune.\n\n\n# __function_api_end__",
"_____no_output_____"
]
],
[
[
"This code is imported by `literalinclude` from a file called `doc_code/key_concepts.py`.\nEvery Python file in the `doc_code` directory will automatically get tested by our CI system,\nbut make sure to run scripts that you change (or new scripts) locally first.\nYou do not need to run the testing framework locally.\n\nIn rare situations, when you're adding _obvious_ pseudo-code to demonstrate a concept, it is ok to add it\nliterally into your `.rST` or `.md` file, e.g. using a `.. code-cell:: python` directive.\nBut if your code is supposed to run, it needs to be tested.\n\n## Creating a new document from scratch\n\nSometimes you might want to add a completely new document to the Ray documentation, like adding a new\nuser guide or a new example.\n\nFor this to work, you need to make sure to add the new document explicitly to the \n[`_toc.yml` file](https://github.com/ray-project/ray/blob/master/doc/source/_toc.yml) that determines\nthe structure of the Ray documentation.\n\nDepending on the type of document you're adding, you might also have to make changes to an existing overview\npage that curates the list of documents in question.\nFor instance, for Ray Tune each user guide is added to the\n[user guide overview page](https://docs.ray.io/en/latest/tune/tutorials/overview.html) as a panel, and the same\ngoes for [all Tune examples](https://docs.ray.io/en/latest/tune/examples/index.html).\nAlways check the structure of the Ray sub-project whose documentation you're working on to see how to integrate\nit within the existing structure.\nIn some cases you may be required to choose an image for the panel. Images are located in\n`doc/source/images`. \n\n## Creating a notebook example\n\nTo add a new executable example to the Ray documentation, you can start from our\n[MyST notebook template](https://github.com/ray-project/ray/tree/master/doc/source/_templates/template.md) or\n[Jupyter notebook template](https://github.com/ray-project/ray/tree/master/doc/source/_templates/template.ipynb).\nYou could also simply download the document you're reading right now (click on the respective download button at the\ntop of this page to get the `.ipynb` file) and start modifying it.\nAll the example notebooks in Ray Tune get automatically tested by our CI system, provided you place them in the\n[`examples` folder](https://github.com/ray-project/ray/tree/master/doc/source/tune/examples).\nIf you have questions about how to test your notebook when contributing to other Ray sub-projects, please make\nsure to ask a question in [the Ray community Slack](https://forms.gle/9TSdDYUgxYs8SA9e8) or directly on GitHub,\nwhen opening your pull request.\n\nTo work off of an existing example, you could also have a look at the\n[Ray Tune Hyperopt example (`.ipynb`)](https://github.com/ray-project/ray/blob/master/doc/source/tune/examples/hyperopt_example.ipynb)\nor the [Ray Serve guide for RLlib (`.md`)](https://github.com/ray-project/ray/blob/master/doc/source/serve/tutorials/rllib.md).\nWe recommend that you start with an `.md` file and convert your file to an `.ipynb` notebook at the end of the process.\nWe'll walk you through this process below.\n\nWhat makes these notebooks different from other documents is that they combine code and text in one document,\nand can be launched in the browser.\nWe also make sure they are tested by our CI system, before we add them to our documentation.\nTo make this work, notebooks need to define a _kernel specification_ to tell a notebook server how to interpret\nand run the code.\nFor 
instance, here's the kernel specification of a Python notebook:\n\n```markdown\n---\njupytext:\n text_representation:\n extension: .md\n format_name: myst\nkernelspec:\n display_name: Python 3\n language: python\n name: python3\n---\n```\n\nIf you write a notebook in `.md` format, you need this YAML front matter at the top of the file.\nTo add code to your notebook, you can use the `code-cell` directive.\nHere's an example:\n\n````markdown\n```{code-cell} python3\n:tags: [hide-cell]\n\nimport ray\nimport ray.rllib.agents.ppo as ppo\nfrom ray import serve\n\ndef train_ppo_model():\n trainer = ppo.PPOTrainer(\n config={\"framework\": \"torch\", \"num_workers\": 0},\n env=\"CartPole-v0\",\n )\n # Train for one iteration\n trainer.train()\n trainer.save(\"/tmp/rllib_checkpoint\")\n return \"/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1\"\n\n\ncheckpoint_path = train_ppo_model()\n```\n````\n\nPutting this markdown block into your document will render as follows in the browser:",
"_____no_output_____"
]
],
[
[
"import ray\nimport ray.rllib.agents.ppo as ppo\nfrom ray import serve\n\ndef train_ppo_model():\n trainer = ppo.PPOTrainer(\n config={\"framework\": \"torch\", \"num_workers\": 0},\n env=\"CartPole-v0\",\n )\n # Train for one iteration\n trainer.train()\n trainer.save(\"/tmp/rllib_checkpoint\")\n return \"/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1\"\n\n\ncheckpoint_path = train_ppo_model()",
"_____no_output_____"
]
],
[
[
"As you can see, the code block is hidden, but you can expand it by click on the \"+\" button.\n\n### Tags for your notebook\n\nWhat makes this work is the `:tags: [hide-cell]` directive in the `code-cell`.\nThe reason we suggest starting with `.md` files is that it's much easier to add tags to them, as you've just seen.\nYou can also add tags to `.ipynb` files, but you'll need to start a notebook server for that first, which may\nnot want to do to contribute a piece of documentation.\n\nApart from `hide-cell`, you also have `hide-input` and `hide-output` tags that hide the input and output of a cell.\nAlso, if you need code that gets executed in the notebook, but you don't want to show it in the documentation,\nyou can use the `remove-cell`, `remove-input`, and `remove-output` tags in the same way.\n\n### Testing notebooks\n\nRemoving cells can be particularly interesting for compute-intensive notebooks.\nWe want you to contribute notebooks that use _realistic_ values, not just toy examples.\nAt the same time we want our notebooks to be tested by our CI system, and running them should not take too long.\nWhat you can do to address this is to have notebook cells with the parameters you want the users to see first:\n\n````markdown\n```{code-cell} python3\nnum_workers = 8\nnum_gpus = 2\n```\n````\n\nwhich will render as follows in the browser:",
"_____no_output_____"
]
],
[
[
"num_workers = 8\nnum_gpus = 2",
"_____no_output_____"
]
],
[
[
"But then in your notebook you follow that up with a _removed_ cell that won't get rendered, but has much smaller\nvalues and make the notebook run faster:\n\n````markdown\n```{code-cell} python3\n:tags: [remove-cell]\nnum_workers = 0\nnum_gpus = 0\n```\n````\n\n### Converting markdown notebooks to ipynb\n\nOnce you're finished writing your example, you can convert it to an `.ipynb` notebook using `jupytext`:\n\n```shell\njupytext your-example.md --to ipynb\n```\n\nIn the same way, you can convert `.ipynb` notebooks to `.md` notebooks with `--to myst`.\nAnd if you want to convert your notebook to a Python file, e.g. to test if your whole script runs without errors,\nyou can use `--to py` instead.\n\n## Where to go from here?\n\nThere are many other ways to contribute to Ray other than documentation.\nSee {ref}`our contributor guide <getting-involved>` for more information.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adba54109b33bf18dee6963064bbe48a7d03240
| 91,978 |
ipynb
|
Jupyter Notebook
|
ETL .ipynb
|
estherkim0998/ETL-Project
|
2cda5561a37e53c34a9ac26989ca43aefe46e0cd
|
[
"MIT"
] | null | null | null |
ETL .ipynb
|
estherkim0998/ETL-Project
|
2cda5561a37e53c34a9ac26989ca43aefe46e0cd
|
[
"MIT"
] | null | null | null |
ETL .ipynb
|
estherkim0998/ETL-Project
|
2cda5561a37e53c34a9ac26989ca43aefe46e0cd
|
[
"MIT"
] | null | null | null | 41.338427 | 341 | 0.335798 |
[
[
[
"import pandas as pd\nfrom sqlalchemy import create_engine",
"_____no_output_____"
],
[
"# Store CSV into a DF\ncsv_file = \"./Resources/MoviesOnStreamingPlatforms_updated.csv\"\nstreaming_df = pd.read_csv(csv_file)\nstreaming_df",
"_____no_output_____"
],
[
"# Store CSV into a DF\ncsv_file2 = \"./Resources/rotten_tomatoes_movies.csv\"\ntomato_df = pd.read_csv(csv_file2)\ntomato_df",
"_____no_output_____"
],
[
"tomato_df.columns",
"_____no_output_____"
],
[
"streaming_df.columns\n",
"_____no_output_____"
],
[
"stream_df = streaming_df.rename(columns={\"ID\": \"id\", \"Title\": \"title\", \"Year\": \"year\", \"Age\": \"age\", \"IMDb\": \"imdb\",\"Rotten Tomatoes\": \"rotten_tomatoes\", \"Netflix\": \"netflix\", \"Hulu\": \"hulu\", \"Prime Video\": \"prime_video\", \"Disney+\": \"disney\", \"Type\": \"type\", \"Directors\": \"directors\",\n \"Genres\": \"genres\", \"Country\": \"country\", \"Language\": \"language\", \"Runtime\": \"runtime\"})\nstream_df.columns",
"_____no_output_____"
],
[
"new_streaming_df = stream_df[[\"id\", \"title\", \"year\", \"age\", \"imdb\", \"rotten_tomatoes\",\n \"netflix\", \"hulu\", \"prime_video\", \"disney\", \"type\", \"directors\",\n \"genres\", \"country\", \"language\", \"runtime\"]].copy()\nnew_streaming_df",
"_____no_output_____"
]
],
[
[
"#### Create a schema for where data will be loaded this is the SQL PART\n\n```sql\nCREATE TABLE streaming (\nID INT PRIMARY KEY, \nTitle TEXT, \nYear INT, \nAge VARCHAR,\nIMDb DECIMAL,\nRotten_Tomatoes VARCHAR,\nNetflix INT, \nHulu INT, \nPrime_Video INT, \nDisney INT, \nType INT, \nDirectors TEXT,\nGenres TEXT, \nCountry TEXT, \nLanguage TEXT, \nRuntime DECIMAL\n);\n\n\nCREATE TABLE tomato (\n rotten_tomatoes_link TEXT, \n movie_title TEXT, \n movie_info TEXT,\n critics_consensus TEXT,\n content_rating TEXT, \n genres TEXT, \n directors TEXT, \n authors TEXT,\n actors TEXt, \n original_release_date TEXT, \n streaming_release_date TEXT, \n runtime INT,\n production_company TEXT, \n tomatometer_status TEXT, \n tomatometer_rating DECIMAL,\n tomatometer_count DECIMAL, \n audience_status TEXT, \n audience_rating DECIMAL,\n audience_count DECIMAL, \n tomatometer_top_critics_count INT,\n tomatometer_fresh_critics_count INT, \n tomatometer_rotten_critics_count INT);\n\nSELECT * FROM streaming;\nSELECT * FROM tomato;\n```\n",
"_____no_output_____"
]
],
[
[
"# Connect to the db \nconnection_string = \"postgres:{insertownpassword}@localhost:5432/ETL \"\nengine = create_engine(f'postgresql://{connection_string}')",
"_____no_output_____"
],
[
"engine.table_names()",
"_____no_output_____"
],
[
"# use pandas to load csv converted to DF into database\nnew_streaming_df.to_sql(name=\"streaming\", con=engine, if_exists='append', index=False)",
"_____no_output_____"
],
[
"# Use pandas to load json converted to DF into database \ntomato_df.to_sql(name='tomato', con=engine, if_exists='append', index=False)",
"_____no_output_____"
],
[
"# Confirm data is in the customer_name table\npd.read_sql_query('select * from streaming', con=engine)",
"_____no_output_____"
],
[
"pd.read_sql_query('select * from tomato', con=engine)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adba80ccceaf90f1a7318ca46bcc3b8b3c39e20
| 23,901 |
ipynb
|
Jupyter Notebook
|
exercises/5_distributed_parallelization.ipynb
|
omlins/julia-gpu-course
|
a25865c1d14df81a882fe116413bbbd24ddebdb4
|
[
"MIT"
] | 44 |
2021-11-01T18:58:47.000Z
|
2022-01-26T19:32:29.000Z
|
exercises/5_distributed_parallelization.ipynb
|
omlins/julia-gpu-course
|
a25865c1d14df81a882fe116413bbbd24ddebdb4
|
[
"MIT"
] | null | null | null |
exercises/5_distributed_parallelization.ipynb
|
omlins/julia-gpu-course
|
a25865c1d14df81a882fe116413bbbd24ddebdb4
|
[
"MIT"
] | 4 |
2021-11-02T12:57:06.000Z
|
2021-12-07T00:32:14.000Z
| 44.842402 | 708 | 0.576378 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4adbb6d03baa2b481dd244b47390d42d25084688
| 853,661 |
ipynb
|
Jupyter Notebook
|
utils/notebooks/self-supervised-neuroevolution/figures.ipynb
|
MaximilienLC/nevo
|
c701a1202bc18d89a622472918733bf78ba5e304
|
[
"Apache-2.0"
] | null | null | null |
utils/notebooks/self-supervised-neuroevolution/figures.ipynb
|
MaximilienLC/nevo
|
c701a1202bc18d89a622472918733bf78ba5e304
|
[
"Apache-2.0"
] | null | null | null |
utils/notebooks/self-supervised-neuroevolution/figures.ipynb
|
MaximilienLC/nevo
|
c701a1202bc18d89a622472918733bf78ba5e304
|
[
"Apache-2.0"
] | 1 |
2022-03-31T20:44:09.000Z
|
2022-03-31T20:44:09.000Z
| 1,892.818182 | 451,014 | 0.955605 |
[
[
[
"import os\nimport pickle\nimport re\nimport sys\nsys.path.append(os.path.abspath('') + '/../../..')\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nfrom matplotlib.ticker import MultipleLocator\nimport matplotlib.font_manager as font_manager\n\nmatplotlib.rcParams['pdf.fonttype'] = 42\nmatplotlib.rcParams['ps.fonttype'] = 42\nfont = {'family' : 'sans-serif',\n 'weight' : 'normal',\n 'size' : 9}\nmatplotlib.rc('font', **font)\nplt.rc('axes', labelsize=9)\nplt.rc('axes', titlesize=9)\n\nfrom utils.functions.control import get_task_name",
"_____no_output_____"
],
[
"def plot(list):\n\n fig, axs = plt.subplots(2,4, dpi=300)\n\n for i in range(len(list)):\n \n task, nb_gens, ylims, yticks, sb3_dict = list[i]\n\n axs[i%2][i//2].set_title(\n get_task_name(task)[:-3], fontdict={'fontweight': 'bold'})\n\n gens = [1] + np.arange(nb_gens//100, nb_gens+1, nb_gens//100).tolist()\n path = '../../../../../Videos/envs.multistep.imitate.control/'\n path += 'merge.no~steps.0~task.' + task + '~transfer.no/'\n path += 'bots.network.static.rnn.control/64/'\n\n scores = {}\n\n scores[64] = np.zeros( len(gens) ) * np.nan\n scores['64 (elite)'] = np.zeros( len(gens) ) * np.nan\n\n for gen in gens:\n scores[64][gens.index(gen)] = \\\n np.load(path + str(gen) + '/scores.npy').mean()\n scores['64 (elite)'][gens.index(gen)] = \\\n np.load(path + str(gen) + '/scores.npy').mean(axis=1).max()\n\n if i in [1, 3, 5, 7]:\n axs[i%2][i//2].set_xlabel('# Generations')\n if i in [0, 1]:\n axs[i%2][i//2].set_ylabel(\"Mean Score\")\n\n if nb_gens >= 1000:\n xticks = 1000\n else:\n xticks = 100\n\n axs[i%2][i//2].set_xticks(np.arange(0, nb_gens+1, step=xticks))\n axs[i%2][i//2].set_yticks(\n np.arange(yticks[0], yticks[1]+1, step=yticks[2]))\n axs[i%2][i//2].set_xlim(\n [-nb_gens//50+(xticks>=1000),nb_gens+nb_gens//50-(xticks>=1000)])\n axs[i%2][i//2].set_ylim([ylims[0],ylims[1]])\n\n sorted_sb3_dict = {k: v for k, v in sorted(sb3_dict.items(),\n key=lambda item: item[1])}\n sb3 = []\n\n for key in sorted_sb3_dict:\n\n if 'dqn' in key:\n sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],\n -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',\n colors='red', label=key))\n elif 'ppo' in key:\n sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],\n -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',\n colors='peru', label=key))\n elif 'sac' in key:\n sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],\n -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',\n colors='darkviolet', label=key))\n elif 'td3' in key:\n sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],\n -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',\n colors='pink', label=key))\n elif 'tqc' in key:\n sb3.append(axs[i%2][i//2].hlines(sorted_sb3_dict[key],\n -nb_gens//100, nb_gens+nb_gens//100, linestyles='dotted',\n colors='seagreen', label=key))\n \n sb3.reverse()\n\n ne = []\n ne.append(axs[i%2][i//2].plot(\n gens, scores[64], '.-', c='darkgrey', label='64')[0])\n ne.append(axs[i%2][i//2].plot(\n gens, scores['64 (elite)'], '.-', c='royalblue',\n label='64 (elite)')[0])\n\n ne.reverse()\n\n if i == 6:\n\n leg1 = axs[i%2][i//2].legend(\n handles=ne, title=\"Population size\", loc='lower right',\n edgecolor='palegoldenrod', labelspacing=0.2)\n leg1.get_frame().set_alpha(None)\n leg1.get_frame().set_facecolor((1, 1, 1, .45))\n axs[i%2][i//2].add_artist(leg1)\n\n if i == 1:\n\n font = font_manager.FontProperties(family='monospace')\n\n leg2 = axs[i%2][i//2].legend(handles=sb3, loc='lower right',\n edgecolor='palegoldenrod', labelspacing=0, prop=font)\n leg2.get_frame().set_alpha(None)\n leg2.get_frame().set_facecolor((1, 1, 1, .45))\n\n axs[i%2][i//2].xaxis.set_minor_locator(MultipleLocator(100))\n\n fig.tight_layout(pad=0.5)",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = [9, 4]\nplot([['acrobot', 300, [-510, -60], [-500, -50, 100], {'dqn':-80.4}],\n ['cart_pole', 300, [-5, 512], [0, 500, 100], {'ppo':500.0, 'dqn':-580.4, 'sac':-680.4, 'tqc':-780.4, 'td3':-880.4}],\n ['mountain_car', 1000, [-203, -98], [-200, -100, 25], {'dqn':-100.8}],\n ['mountain_car_continuous', 200, [-105, 105], [-100, 100, 50], {'sac':94.6}],\n ['pendulum', 3200, [-1505, -120], [-1500, -150, 450], {'tqc':-150.6}],\n ['lunar_lander', 1000, [-1180, 200], [-1100, 150, 250], {'ppo':142.7}],\n ['lunar_lander_continuous', 2500, [-670, 290], [-650, 250, 300], {'sac':269.7}],\n ['swimmer', 600, [-35, 375], [0, 361, 120], {'td3':358.3}]])\nplt.savefig(\"../../../data/states/envs.multistep.imitate.control/extra/figures/results1.pdf\")",
"_____no_output_____"
],
[
"def plot(list):\n\n fig, axs = plt.subplots(2,4, dpi=300)\n\n for i in range(len(list)):\n \n task, nb_gens, ylims, yticks, sb3_agent = list[i]\n\n ### Set plot & variables\n\n axs[i%2][i//2].set_title(\n get_task_name(task)[:-3], fontdict = {'fontweight': 'bold'})\n\n if i in [1, 3, 5, 7]:\n axs[i%2][i//2].set_xlabel('# Timesteps')\n if i in [0, 1]:\n axs[i%2][i//2].set_ylabel(\"Score\")\n\n if 'dqn' in sb3_agent:\n sb3_color = 'red'\n elif 'ppo' in sb3_agent:\n sb3_color = 'peru'\n elif 'sac' in sb3_agent:\n sb3_color = 'darkviolet'\n elif 'td3' in sb3_agent:\n sb3_color = 'pink'\n else: # 'tqc' in sb3_agent:\n sb3_color = 'seagreen'\n\n gens = [1, nb_gens//4, nb_gens//2, nb_gens//2+nb_gens//4, nb_gens]\n\n ### Load data\n\n # SB3\n path_0 = '../../../data/states/envs.multistep.imitate.control/extra/'\n path_0 += 'sb3_agent_rewards/' + task + '/rewards.pkl'\n\n with open(path_0, 'rb') as f:\n sb3_rewards_list = pickle.load(f)\n \n # NE\n path_1 = '../../../../../Videos/envs.multistep.imitate.control/'\n path_1 += 'merge.no~steps.0~task.' + task + '~transfer.no/'\n path_1 += 'bots.network.static.rnn.control/64/'\n\n ne_rewards_list_5_timepoints = []\n\n for gen in gens:\n with open(path_1 + str(gen) + '/rewards.pkl', 'rb') as f:\n ne_rewards_list_5_timepoints.append(pickle.load(f))\n\n ### Calculate run lengths\n\n run_lengths = np.zeros(6, dtype=np.int32)\n\n # SB3\n run_lengths[0] = len(sb3_rewards_list[0])\n\n # NE\n for k in range(5):\n run_lengths[k+1] = len(ne_rewards_list_5_timepoints[k][0])\n\n ### Fill Rewards Numpy Array\n\n rewards = np.zeros((6, run_lengths.max())) * np.nan\n\n # SB3\n rewards[0][:run_lengths[0]] = sb3_rewards_list[0]\n\n # NE\n for k in range(5):\n rewards[k+1][:run_lengths[k+1]] = \\\n ne_rewards_list_5_timepoints[k][0]\n\n ### Calculate Cumulative Sums\n\n cum_sums = rewards.cumsum(axis=1)\n\n if run_lengths.max() >= 999:\n xticks = 1000\n else:\n xticks = 100\n\n axs[i%2][i//2].set_xticks(\n np.arange(0, run_lengths.max()+2, step=xticks))\n axs[i%2][i//2].set_yticks(\n np.arange(yticks[0], yticks[1]+1, step=yticks[2]))\n axs[i%2][i//2].set_xlim(\n [-run_lengths.max()//50,run_lengths.max()+run_lengths.max()//50])\n axs[i%2][i//2].set_ylim([ylims[0],ylims[1]])\n\n if task == 'cart_pole':\n\n cum_sums[1] -= 25\n cum_sums[2] -= 20\n cum_sums[3] -= 15\n cum_sums[4] -= 10\n cum_sums[5] -= 5\n\n if task == 'acrobot':\n\n cum_sums[1] -= 10\n cum_sums[2] -= 8\n cum_sums[3] -= 6\n cum_sums[4] -= 4\n cum_sums[5] -= 2\n\n if task == 'mountain_car':\n\n cum_sums[1] -= 5\n cum_sums[2] -= 4\n cum_sums[3] -= 3\n cum_sums[4] -= 2\n cum_sums[5] -= 1\n\n ne = []\n\n ne.append(axs[i%2][i//2].plot(\n np.arange(0, run_lengths.max()),\n cum_sums[1],\n '-', c='gainsboro', label=' 0%')[0])\n\n ne.append(axs[i%2][i//2].plot(\n np.arange(0, run_lengths.max()),\n cum_sums[2],\n '-', c='silver', label=' 25%')[0])\n\n ne.append(axs[i%2][i//2].plot(\n np.arange(0, run_lengths.max()),\n cum_sums[3],\n '-', c='darkgrey', label=' 50%')[0])\n\n ne.append(axs[i%2][i//2].plot(\n np.arange(0, run_lengths.max()),\n cum_sums[4],\n '-', c='grey', label=' 75%')[0])\n\n ne.append(axs[i%2][i//2].plot(\n np.arange(0, run_lengths.max()),\n cum_sums[5],\n '-', c='black', label='100%')[0])\n\n sb3 = []\n\n sb3.append(axs[i%2][i//2].plot(\n np.arange(0, run_lengths.max()),\n cum_sums[0],\n '-', c=sb3_color, label=sb3_agent)[0])\n\n if task in ['cart_pole']:\n\n if 'dqn' in sb3_agent:\n sb3_color = 'red'\n elif 'ppo' in sb3_agent:\n sb3_color = 'peru'\n elif 'sac' in sb3_agent:\n sb3_color = 
'darkviolet'\n elif 'td3' in sb3_agent:\n sb3_color = 'pink'\n else: # 'tqc' in sb3_agent:\n sb3_color = 'seagreen'\n\n sb3.append(axs[i%2][i//2].plot(\n np.arange(0, 1),\n np.arange(0, 1),\n '-', c='red', label='dqn')[0])\n\n sb3.append(axs[i%2][i//2].plot(\n np.arange(0, 1),\n np.arange(0, 1),\n '-', c='darkviolet', label='sac')[0])\n\n sb3.append(axs[i%2][i//2].plot(\n np.arange(0, 1),\n np.arange(0, 1),\n '-', c='seagreen', label='tqc')[0])\n\n sb3.append(axs[i%2][i//2].plot(\n np.arange(0, 1),\n np.arange(0, 1),\n '-', c='pink', label='td3')[0])\n\n\n leg1 = axs[i%2][i//2].legend(handles=sb3, loc='lower right', edgecolor='palegoldenrod', labelspacing=0)\n leg1.get_frame().set_alpha(None)\n leg1.get_frame().set_facecolor((1, 1, 1, .45))\n axs[i%2][i//2].add_artist(leg1)\n\n if task == 'lunar_lander_continuous':\n leg2 = axs[i%2][i//2].legend(handles=ne, title='Generations', loc='lower right', edgecolor='palegoldenrod', labelspacing=0.1)\n leg2.get_frame().set_alpha(None)\n leg2.get_frame().set_facecolor((1, 1, 1, .45))\n axs[i%2][i//2].add_artist(leg2)\n\n axs[i%2][i//2].xaxis.set_minor_locator(MultipleLocator(100))\n\n fig.tight_layout(pad=0.5)",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = [9, 4]\nplot([['acrobot', 300, [-520, 10], [-500, 50, 100], 'dqn'],\n ['cart_pole', 300, [-30, 510], [0, 500, 100], 'ppo'],\n ['mountain_car', 1000, [-210, 5], [-200, 0, 50], 'dqn'],\n ['mountain_car_continuous', 200, [-105, 105], [-100, 100, 50], 'sac'],\n ['pendulum', 3200, [-1530, 30], [-1500, 0, 500], 'tqc'],\n ['lunar_lander', 1000, [-110, 210], [-100, 200, 100], 'ppo'],\n ['lunar_lander_continuous', 2500, [-810, 290], [-800, 250, 350], 'sac'],\n ['swimmer', 600, [-10, 370], [0, 361, 120], 'td3']])\nplt.savefig(\"../../../data/states/envs.multistep.imitate.control/extra/figures/results2.pdf\")",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
4adbc2ebe1f1daf2e0ca93cced198d8435ebff07
| 16,462 |
ipynb
|
Jupyter Notebook
|
project/NN_6.ipynb
|
SJSlavin/phys202-project
|
bc81aebefd38b4c31e10d95fe46277a707cb0e6d
|
[
"MIT"
] | null | null | null |
project/NN_6.ipynb
|
SJSlavin/phys202-project
|
bc81aebefd38b4c31e10d95fe46277a707cb0e6d
|
[
"MIT"
] | null | null | null |
project/NN_6.ipynb
|
SJSlavin/phys202-project
|
bc81aebefd38b4c31e10d95fe46277a707cb0e6d
|
[
"MIT"
] | null | null | null | 44.978142 | 1,833 | 0.564816 |
[
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.html.widgets import interact\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits()",
"_____no_output_____"
],
[
"def sigmoid(x):\n return 1/(1 + np.exp(-x))\n\nsigmoid_v = np.vectorize(sigmoid)\n\ndef sigmoidprime(x):\n return sigmoid(x) * (1 - sigmoid(x))\n\nsigmoidprime_v = np.vectorize(sigmoidprime)",
"_____no_output_____"
],
[
"size = [64, 20, 10]\n\nweights = []\nfor n in range(1, len(size)):\n weights.append(np.random.rand(size[n], size[n-1]) * 2 - 1)\n\nbiases = []\nfor n in range(1, len(size)):\n biases.append(np.random.rand(size[n]) * 2 - 1)\n\ntrainingdata = digits.data[0:1200]\ntraininganswers = digits.target[0:1200]\nlc = 0.02\n\n#convert the integer answers into a 10-dimension array\ntraininganswervectors = np.zeros((1796,10))\nfor n in range(1796):\n traininganswervectors[n][digits.target[n]] = 1",
"_____no_output_____"
],
[
"def feedforward(weights, biases, a):\n b = []\n #first element is inputs \"a\"\n b.append(a)\n for n in range(1, len(size)):\n #all other elements depend on the number of neurons\n b.append(np.zeros(size[n]))\n for n2 in range(0, size[n]):\n b[n][n2] = sigmoid_v(np.dot(weights[n-1][n2], b[n-1]) + biases[n-1][n2])\n \n return b",
"_____no_output_____"
],
[
"opt = feedforward(weights, biases, trainingdata[0])\nprint(opt[-1])\nprint(traininganswervectors[0])\nprint(costderivative(opt[-1], traininganswervectors[0]))",
"[ 0.95268903 0.15219509 0.37781353 0.0787356 0.89811514 0.20636925\n 0.40752871 0.4777203 0.96000983 0.14310585]\n[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n[-0.04731097 0.15219509 0.37781353 0.0787356 0.89811514 0.20636925\n 0.40752871 0.4777203 0.96000983 0.14310585]\n"
],
[
"def gradient_descent(weights, biases, inputs, answers, batchsize, lc, epochs):\n for n in range(epochs):\n #pick random locations for input/result data\n locations = np.random.randint(0, len(inputs), batchsize)\n minibatch = []\n #create tuples (inputs, result) based on random locations\n for n2 in range(batchsize):\n minibatch.append((inputs[locations[n2]], answers[locations[n2]]))\n for n3 in range(batchsize):\n weights, biases = train(weights, biases, minibatch, lc)\n \n \n results = []\n for n4 in range(len(trainingdata)):\n results.append(feedforward(weights, biases, inputs[n4])[-1])\n \n accresult = accuracy(inputs, results, answers)\n print(\"Epoch \", n, \" : \", accresult)\n \n return weights, biases",
"_____no_output_____"
],
[
"def train(weights, biases, minibatch, lc):\n #set the nabla functions to be the functions themselves initially, same size\n nb = [np.zeros(b.shape) for b in biases]\n nw = [np.zeros(w.shape) for w in weights]\n #largely taken from Michael Nielsen's implementation\n for i, r in minibatch:\n dnb, dnw = backprop(weights, biases, i, r)\n nb = [a+b for a, b in zip(nb, dnb)]\n nw = [a+b for a, b in zip(nw, dnw)]\n \n weights = [w-(lc/len(minibatch))*n_w for w, n_w in zip(weights, nw)]\n biases = [b-(lc/len(minibatch))*n_b for b, n_b in zip(biases, nb)]\n return weights, biases",
"_____no_output_____"
],
[
"def backprop(weights, biases, inputs, answers):\n #set the nabla functions to be the same size as functions\n nb = [np.zeros(b.shape) for b in biases]\n nw = [np.zeros(w.shape) for w in weights]\n a = inputs\n alist = [inputs]\n zlist = []\n for b, w in zip(biases, weights):\n z = np.dot(w, a)+b\n zlist.append(z)\n a = sigmoid_v(z)\n alist.append(a)\n \n delta = costderivative(alist[-1], answers) * sigmoidprime_v(zlist[-1])\n nb[-1] = delta\n print(\"delta\", delta)\n print(\"alist\", alist)\n #different from MN, alist[-2] not same size as delta?\n nw[-1] = np.dot(delta, alist[-2].transpose())\n \n for n in range(2, len(size)):\n delta = np.dot(weights[-n+1].transpose(), delta) * sigmoidprime_v(zlist[-n])\n nb[-n] = delta\n #same here\n nw[-n] = np.dot(delta, alist[-n-1].transpose())\n \n return nb, nw",
"_____no_output_____"
],
[
"def costderivative(output, answers):\n return (output - answers)",
"_____no_output_____"
],
[
"def accuracy(inputs, results, answers):\n correct = 0\n binresults = results\n for n in range(0, len(results)):\n #converts the output into a binary y/n for each digit\n for n2 in range(len(results[n])):\n if results[n][n2] == np.amax(results[n]):\n binresults[n][n2] = 1\n else:\n binresults[n][n2] = 0\n \n if np.array_equal(answers[n], binresults[n]):\n correct += 1\n return correct / len(results)",
"_____no_output_____"
],
[
"size = [64, 20, 10]\n\nweights = []\nfor n in range(1, len(size)):\n weights.append(np.random.rand(size[n], size[n-1]) * 2 - 1)\n\nbiases = []\nfor n in range(1, len(size)):\n biases.append(np.random.rand(size[n]) * 2 - 1)\n\ntrainingdata = digits.data[0:500]\ntraininganswers = digits.target[0:500]\n\ntraininganswervectors = np.zeros((500,10))\nfor n in range(500):\n traininganswervectors[n][digits.target[n]] = 1",
"_____no_output_____"
],
[
"final_weights, final_biases = gradient_descent(weights, biases, trainingdata,\n traininganswervectors, 5, 1, 100)\n\nprint(final_weights)",
"delta [ 0.07675723 0.02780815 0.0093905 0.13425283 0.03234364 0.14556868\n 0.00592747 0.07448789 -0.09589887 0.00622013]\nalist [array([ 0., 0., 5., 14., 15., 2., 0., 0., 0., 0., 13.,\n 14., 9., 10., 0., 0., 0., 0., 15., 8., 2., 15.,\n 3., 0., 0., 0., 11., 12., 9., 14., 2., 0., 0.,\n 0., 7., 16., 14., 2., 0., 0., 0., 0., 13., 14.,\n 16., 4., 0., 0., 0., 3., 15., 8., 14., 10., 0.,\n 0., 0., 0., 6., 16., 16., 8., 0., 0.]), array([ 8.71512058e-16, 1.34421193e-03, 1.00000000e+00,\n 1.41393057e-02, 1.00000000e+00, 1.00000000e+00,\n 1.32874316e-07, 9.99999962e-01, 1.28709646e-20,\n 1.00000000e+00, 1.00000000e+00, 6.37924075e-01,\n 9.99895559e-01, 1.00000000e+00, 1.00000000e+00,\n 9.99999999e-01, 1.00000000e+00, 9.99996665e-01,\n 4.54852532e-03, 4.59835385e-04]), array([ 0.90661599, 0.18468104, 0.10227581, 0.77846156, 0.96528839,\n 0.61449925, 0.08027991, 0.91006188, 0.60031596, 0.08232965])]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adbc8d7a0b577a8df948e00cbe98c8a4a700a87
| 5,740 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/sqlalchemy-challenge-checkpoint.ipynb
|
mpadgett004/sqlalchemy-challenge
|
10813dcfcab6e5152604c30f06befcca36c0d944
|
[
"ADSL"
] | null | null | null |
.ipynb_checkpoints/sqlalchemy-challenge-checkpoint.ipynb
|
mpadgett004/sqlalchemy-challenge
|
10813dcfcab6e5152604c30f06befcca36c0d944
|
[
"ADSL"
] | null | null | null |
.ipynb_checkpoints/sqlalchemy-challenge-checkpoint.ipynb
|
mpadgett004/sqlalchemy-challenge
|
10813dcfcab6e5152604c30f06befcca36c0d944
|
[
"ADSL"
] | null | null | null | 21.025641 | 123 | 0.546864 |
[
[
[
"%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport datetime as dt",
"_____no_output_____"
]
],
[
[
"# Reflect Tables into SQLAlchemy ORM",
"_____no_output_____"
]
],
[
[
"# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func",
"_____no_output_____"
],
[
"# create engine to hawaii.sqlite\nengine = create_engine(\"sqlite:///hawaii.sqlite\")",
"_____no_output_____"
],
[
"# reflect an existing database into a new model\n\n# reflect the tables\n",
"_____no_output_____"
],
[
"# View all of the classes that automap found\n",
"_____no_output_____"
],
[
"# Save references to each table\n",
"_____no_output_____"
],
[
"# Create our session (link) from Python to the DB\n",
"_____no_output_____"
]
],
[
[
"# Exploratory Precipitation Analysis",
"_____no_output_____"
]
],
[
[
"# Find the most recent date in the data set.",
"_____no_output_____"
],
[
"# Design a query to retrieve the last 12 months of precipitation data and plot the results. \n# Starting from the most recent data point in the database. \n\n# Calculate the date one year from the last date in data set.\n\n\n# Perform a query to retrieve the data and precipitation scores\n\n\n# Save the query results as a Pandas DataFrame and set the index to the date column\n\n\n# Sort the dataframe by date\n\n\n# Use Pandas Plotting with Matplotlib to plot the data\n\n\n",
"_____no_output_____"
],
[
"# Use Pandas to calcualte the summary statistics for the precipitation data\n",
"_____no_output_____"
]
],
[
[
"# Exploratory Station Analysis",
"_____no_output_____"
]
],
[
[
"# Design a query to calculate the total number stations in the dataset\n",
"_____no_output_____"
],
[
"# Design a query to find the most active stations (i.e. what stations have the most rows?)\n# List the stations and the counts in descending order.\n",
"_____no_output_____"
],
[
"# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.\n",
"_____no_output_____"
],
[
"# Using the most active station id\n# Query the last 12 months of temperature observation data for this station and plot the results as a histogram\n",
"_____no_output_____"
]
],
[
[
"# Close session",
"_____no_output_____"
]
],
[
[
"# Close Session\nsession.close()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adbd11bc53838e1c5f5468f9f9b13a34d170976
| 22,224 |
ipynb
|
Jupyter Notebook
|
4° Período/Programação de Computadores/lista 2/Lista_2.ipynb
|
sullyvan15/UVV
|
2390cc2881792d036db1d8b098fe366f47cd98c3
|
[
"MIT"
] | null | null | null |
4° Período/Programação de Computadores/lista 2/Lista_2.ipynb
|
sullyvan15/UVV
|
2390cc2881792d036db1d8b098fe366f47cd98c3
|
[
"MIT"
] | 1 |
2020-10-07T23:33:21.000Z
|
2020-10-08T01:15:11.000Z
|
4° Período/Programação de Computadores/lista 2/Lista_2.ipynb
|
sullyvan15/Universidade-Vila-Velha
|
2390cc2881792d036db1d8b098fe366f47cd98c3
|
[
"MIT"
] | null | null | null | 31.839542 | 240 | 0.452484 |
[
[
[
"<a href=\"https://colab.research.google.com/github/sullyvan15/Universidade-Vila-Velha/blob/master/Lista_2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<html>\n <body>\n <header></header>\n <CENTER>\n <img src=\"https://www.uvv.br/wp-content/themes/uvvBr/templates/assets//img/logouvv.svg\" alt=\"UVV-LOGO\" style = width=\"100px\"; height=\"100px\">\n </CENTER> \n <CENTER><b>Laboratório de Programação- PYTHON</b><br/>\n <CENTER><b>Lista de Exercício 2 - Estrutura de Repetição</b><br/>\n <CENTER><b>Professor Alessandro Bertolani Oliveira</b></CENTER><br/>",
"_____no_output_____"
],
[
"## NOME: Sullyvan Marks Nascimento De Oliveira",
"_____no_output_____"
],
[
"# Estrutura de repetição: for / in range / while / break",
"_____no_output_____"
],
[
"## Exercício 1\nEscrever um algoritmo para exibir os múltiplos de 3 compreendidos no intervalo: [3 100].",
"_____no_output_____"
]
],
[
[
"cont = 0\nfor x in range(3, 100):\n if x % 3 == 0:\n cont = cont + 1\n print(f'Multiplo {cont}: {x}')\n",
"Multiplo 1: 3\nMultiplo 2: 6\nMultiplo 3: 9\nMultiplo 4: 12\nMultiplo 5: 15\nMultiplo 6: 18\nMultiplo 7: 21\nMultiplo 8: 24\nMultiplo 9: 27\nMultiplo 10: 30\nMultiplo 11: 33\nMultiplo 12: 36\nMultiplo 13: 39\nMultiplo 14: 42\nMultiplo 15: 45\nMultiplo 16: 48\nMultiplo 17: 51\nMultiplo 18: 54\nMultiplo 19: 57\nMultiplo 20: 60\nMultiplo 21: 63\nMultiplo 22: 66\nMultiplo 23: 69\nMultiplo 24: 72\nMultiplo 25: 75\nMultiplo 26: 78\nMultiplo 27: 81\nMultiplo 28: 84\nMultiplo 29: 87\nMultiplo 30: 90\nMultiplo 31: 93\nMultiplo 32: 96\nMultiplo 33: 99\n"
]
],
[
[
"## Exercício 2\nEscrever um algoritmo para exibir os múltiplos de 11, a soma e a média dos múltiplos de 11, em ordem\ndecrescente (inversa), compreendidos entre o intervalo: [200 100].\n\n",
"_____no_output_____"
]
],
[
[
"soma = 0\nmedia = 0\ncontador = 0\n\nfor contador in range(200, 100, -1):\n if contador % 11 == 0:\n print(f'Multiplo: {contador}')\n soma = contador + soma\n media = soma / contador\nprint(f'soma: {soma}')\nprint(f'media: {media}')\n",
"Multiplo: 198\nMultiplo: 187\nMultiplo: 176\nMultiplo: 165\nMultiplo: 154\nMultiplo: 143\nMultiplo: 132\nMultiplo: 121\nMultiplo: 110\nsoma: 1386\nmedia: 12.6\n"
]
],
[
[
"## Exercício 4\nFaça um algoritmo que exiba a soma dos PARES e ÍMPARES compreendidos entre [10 99].\n",
"_____no_output_____"
]
],
[
[
"somapar = 0\n\nfor contador in range(10, 99):\n if contador % 2 == 0:\n somapar = somapar + contador\n\n else:\n somaimpar = somapar + contador\n\nprint(f'Soma dos pares: {somapar}')\nprint(f'Soma dos impares: {somaimpar}')\n",
"Soma dos pares: 2430\nSoma dos impares: 2429\n"
]
],
[
[
"## Exercício 5\nEscreva um algoritmo que leia de 10.000 habitantes de uma pequena cidade se está empregado ou não e exiba em porcentagem a quantidade de empregados e desempregados desta pequena cidade.",
"_____no_output_____"
]
],
[
[
"\nempregado = 0;\ndesempregado = 0;\nContHab = 1;\nhabitante = 10000\nfor h in range(0, habitante):\n ler = int(input(f'Habitante {ContHab} está empregado? (Digite 1- Sim 2- Não): '))\n if ler != 1 and ler != 2:\n print(\"Dados inválidos. Tente novamente.\")\n else:\n ContHab += 1\n if ler == 1:\n empregado += 1\n else:\n desempregado += 1\n\nQuantEmpregado = empregado * 100 / habitante\nQuantDesempregado = desempregado * 100 / habitante\n\nprint(f'Quantidade de Empregados: {empregado} sendo {QuantEmpregado: .2f}% do Total de habitantes ')\nprint(f'Quantidade de Desempregados: {desempregado} sendo {QuantDesempregado: .2f}% do Total de habitantes ')",
"Habitante 1 esta? Digite 1-Sim 2-Não 1\nHabitante 2 esta? Digite 1-Sim 2-Não 2\nHabitante 3 esta? Digite 1-Sim 2-Não 3\nDados inválidos. Tente novamente.\nHabitante 3 esta? Digite 1-Sim 2-Não 1\nHabitante 4 esta? Digite 1-Sim 2-Não 2\nHabitante 5 esta? Digite 1-Sim 2-Não 3\nDados inválidos. Tente novamente.\nHabitante 5 esta? Digite 1-Sim 2-Não 1\nHabitante 6 esta? Digite 1-Sim 2-Não 1\nHabitante 7 esta? Digite 1-Sim 2-Não 2\nHabitante 8 esta? Digite 1-Sim 2-Não 1\nQuantidade de Empregados: 5 sendo 0.05% do Total de habitantes \nQuantidade de Desempregados: 3 sendo 0.03% do Total de habitantes \n"
]
],
[
[
"## Exercício 6\nEscreva um algoritmo que leia o salário em reais (R$) de 1000 clientes de um shopping e exiba na tela, em porcentagem, a divisão dos clientes por tipo: A, B ou C, conforme a seguir:\n\n✓ A: Maior ou igual a 15 Salários Mínimos ou\n\n✓ B: Menor que 15 Salários Mínimos ou maior ou igual a 5 Salários Mínimos ou\n\n✓ C: Menor que 5 Salários Mínimos.\nDeclarar o Salário Mínimo (SM: R$ 998.05).\n",
"_____no_output_____"
]
],
[
[
"\nsalMinimo = 998.05\ncontclient = 0\n\na = 0\nb = 0\nc = 0\n\nTotalCliente = int(input('Insira a Quantidade de Cliente a ser Pesquisado: '))\n\nfor x in range(0, TotalCliente):\n contclient += 1\n salario = float(input(f'Insira o Salário do Cliente {contclient} em R$: '))\n if salario >= (salMinimo * 15):\n print('Cliente Tipo A ')\n a += 1\n elif salario < (salMinimo * 15) and salario >= (salMinimo * 5):\n print('Cliente Tipo B')\n b += 1\n\n else:\n print('Cliente Tipo C')\n c += 1\n\n\nclaA = a * 100 / TotalCliente\n\nclaB = b * 100 / TotalCliente\n\nclaC = c * 100 / TotalCliente\n\nprint(f'Total de Tipos de Clientes:')\nprint(f'A = {a} Cerca de {claA: .1f}%')\nprint(f'B = {b} Cerca de {claB: .1f}%')\nprint(f'C = {c} Cerca de {claC: .1f}%')",
"Insira a Quantidade de Cliente a ser Pesquisado: 10\nInsira o Salário do Cliente 1 em R$: 1000.00\nCliente Tipo C\nInsira o Salário do Cliente 2 em R$: 2000.00\nCliente Tipo C\nInsira o Salário do Cliente 3 em R$: 3000.00\nCliente Tipo C\nInsira o Salário do Cliente 4 em R$: 6000.00\nCliente Tipo B\nInsira o Salário do Cliente 5 em R$: 7000.00\nCliente Tipo B\nInsira o Salário do Cliente 6 em R$: 8000.00\nCliente Tipo B\nInsira o Salário do Cliente 7 em R$: 10000.00\nCliente Tipo B\nInsira o Salário do Cliente 8 em R$: 15000.00\nCliente Tipo A \nInsira o Salário do Cliente 9 em R$: 25000.00\nCliente Tipo A \nInsira o Salário do Cliente 10 em R$: 35000.00\nCliente Tipo A \nTotal de Tipos de Clientes:\nA = 3 Cerca de 30.0%\nB = 4 Cerca de 40.0%\nC = 3 Cerca de 30.0%\n"
]
],
[
[
"## Exercício 7\nEscrever um algoritmo que conte e soma todos os números ímpares que são múltiplos de três e NÃO\nmúltiplos de 5 que se encontram no intervalo [9 90]. Exiba a Contagem e a Soma destes números.\n\n",
"_____no_output_____"
]
],
[
[
"soma = 0\nposicao = 0\n# Ok\nfor contador in range(9, 90):\n if contador % 2 == 1 and contador % 3 == 0 and contador % 5 != 0:\n soma = contador + soma\n posicao = posicao = posicao + 1\n\n print(f'Soma N°{posicao}: {soma}')\n",
"Soma N°1: 9\nSoma N°2: 30\nSoma N°3: 57\nSoma N°4: 90\nSoma N°5: 129\nSoma N°6: 180\nSoma N°7: 237\nSoma N°8: 300\nSoma N°9: 369\nSoma N°10: 450\nSoma N°11: 537\n"
]
],
[
[
"## Exercício 9\nEscrever um algoritmo que leia vários Números 𝑁 (𝑢𝑚 𝑝𝑜𝑟 𝑣𝑒𝑧) que, no intervalo entre [10 90], divididos por 5 possuem resto 2. Exiba a soma dos números lidos, parando o programa para 𝑁=0.",
"_____no_output_____"
]
],
[
[
"\nsoma = 0\nfor x in range(10, 90):\n ler = int(input('Digite um Número que dividido por 5, o resto é 2, Pressione 0 para parar: '))\n if ler % 5 == 2:\n print(f'O Número {ler} está Aprovado')\n soma = soma + ler\n elif ler > 0 and ler < 10:\n print('Erro Na Leitura, Porfavor Escolha Valores entre 10 a 90')\n elif ler == 0:\n print('Fim da Leitura')\n break\n else:\n print('O Número não é divisível por 5 ou resto não = 2')\n\nprint(f'Somatória dos Números Lidos: {soma}')",
"Digite um Número que dividido por 5, o resto é 2, Pressione 0 para parar: 22\nO Número 22 está Aprovado\nDigite um Número que dividido por 5, o resto é 2, Pressione 0 para parar: 0\nFim da Leitura\nSomatória dos Números Lidos: 22\n"
]
],
[
[
"## Exercício 10\nEscrever um algoritmo para que calcule a média dos números múltiplos de 6 que se encontram no intervalo de [6,6𝑥]. Onde 𝑥 é um (1) único número inteiro positivo (𝑥≥1), lido do usuário.\n\n",
"_____no_output_____"
]
],
[
[
"numero = int(input('Insira o Número: '))\ncontador = 0\nsoma = 0\nfor x in range(6, 6 * numero):\n if x % 6 == 0:\n print(f'{x}')\n contador += 1\n soma = soma + x\nprint(f'Somatória: {soma}')\nmedia = soma / contador\nprint(f'Quantidade de Múltiplos: {contador}')\nprint(f'Média dos Múltiplos: {media}')",
"Insira o Número: 2\n6\nSomatória: 6\nQuantidade de Múltiplos: 1\nMédia dos Múltiplos: 6.0\n"
]
],
[
[
"## Exercício 15\nEscrever um algoritmo que leia vários números reais (um por um) e exiba, em porcentagem, a\nquantidade de positivos e de negativos lidos. Pare o programa quando o usuário digitar ZERO.\n\n",
"_____no_output_____"
]
],
[
[
"contanegativo = 0\ncontageral = 0\ncontapositivo = 0\nnumPositivos = 0\nnumNegativos = 0\nvalor = 1\n\nwhile valor != 0:\n valor = float(input('Digite um número real ou 0 para sair do programa: '))\n if valor < 0 or valor > 0: # Para excluir o 0 dos valores lidos\n contageral += 1\n if valor % 2 == 0:\n contapositivo += 1\n else:\n contanegativo += 1\n if contageral != 0:\n numPositivos = contapositivo / contageral * 100\n numNegativos = contanegativo / contageral * 100\n\n else:\n print(f'Nenhum numero positivo e negativo lido. ')\n\nprint(f'números postivos: {numPositivos: .1f} %')\nprint(f'números negativos: {numNegativos: .1f} %')\n",
"Digite um número real ou O para sair do programa: 1\nDigite um número real ou O para sair do programa: 2\nDigite um número real ou O para sair do programa: 3\nDigite um número real ou O para sair do programa: 4\nDigite um número real ou O para sair do programa: 5\nDigite um número real ou O para sair do programa: 6\nDigite um número real ou O para sair do programa: 7\nDigite um número real ou O para sair do programa: 8\nDigite um número real ou O para sair do programa: 9\nDigite um número real ou O para sair do programa: 0\nnúmeros postivos: 44.4 %\nnúmeros negativos: 55.6 %\n"
]
],
[
[
"## Exercício 16\nEscreva um algoritmo que leia 300 números positivos e exiba o menor e o maior: par e ímpar.\n\n",
"_____no_output_____"
]
],
[
[
"i = 0\nmaiorpar = 0\nmaiorimpar = 0\nmenorpar = menorimpar = 9999999999\n\n\nwhile i < 300:\n numero = float(input('Entre com um número: '))\n if numero % 2 == 0:\n if numero > maiorpar:\n maiorpar = numero\n if numero < menorpar:\n menorpar = numero\n else:\n if numero > maiorimpar:\n maiorimpar = numero\n if numero < menorimpar:\n menorimpar = numero\n i = i + 1\n\nprint(f'Maior par: {maiorpar} e Menor par: {menorpar} \\n'\n f'Maior Impar: {maiorimpar} e Menor Impar: {menorimpar}')\n",
"Entre com um número: 1\nEntre com um número: 2\nEntre com um número: 3\nEntre com um número: 4\nEntre com um número: 5\nEntre com um número: 6\nEntre com um número: 7\nEntre com um número: 8\nEntre com um número: 9\nEntre com um número: 10\nMaior par: 10.0 e Menor par: 2.0 \nMaior Impar: 9.0 e Menor Impar: 1.0\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adbf499282745a8510553797a950a939dfa1b8b
| 31,013 |
ipynb
|
Jupyter Notebook
|
jupyter/notebooks/invest.ipynb
|
wwweslei/financeDataScience
|
4a1f60eeb1a7e3290aa659dbd20e826640553cfa
|
[
"MIT"
] | null | null | null |
jupyter/notebooks/invest.ipynb
|
wwweslei/financeDataScience
|
4a1f60eeb1a7e3290aa659dbd20e826640553cfa
|
[
"MIT"
] | null | null | null |
jupyter/notebooks/invest.ipynb
|
wwweslei/financeDataScience
|
4a1f60eeb1a7e3290aa659dbd20e826640553cfa
|
[
"MIT"
] | null | null | null | 36.272515 | 1,017 | 0.337407 |
[
[
[
"import investpy as inv\nimport pandas as pd \n\ndf = inv.get_stocks_overview(country=\"Brazil\", as_json=False, n_results=1000)",
"_____no_output_____"
],
[
"df[\"change_percentage\"] = df[\"change_percentage\"].astype(str).str.replace(\"%\", \"\")\ndf = df[ df[\"turnover\"] > 100000]\ndf [\"change_percentage\"] = pd.to_numeric(df[\"change_percentage\"]) \ndf = df.sort_values(\"change_percentage\")\ndf.tail(45)\n",
"_____no_output_____"
],
[
"aapl = inv.search_quotes(text=\"bova11\", n_results=1)\n\nprint(aapl)",
"{\"id_\": 39004, \"name\": \"Ishares Ibovespa\", \"symbol\": \"BOVA11\", \"country\": \"brazil\", \"tag\": \"/etfs/ishares-ibovespa\", \"pair_type\": \"etfs\", \"exchange\": \"BM&FBovespa\"}\n"
],
[
"TICKER = \"petr4\"\ninv.get_stock_company_profile(stock=TICKER, country=\"Brazil\")",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
4adbfbacb5baa2a306ed21f1cd87bb5f80e163fb
| 1,677 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/wordtochar-checkpoint.ipynb
|
raotnameh/ctc
|
c291d53966b0ff9965c2e89354271114d0db3b66
|
[
"Apache-2.0"
] | null | null | null |
.ipynb_checkpoints/wordtochar-checkpoint.ipynb
|
raotnameh/ctc
|
c291d53966b0ff9965c2e89354271114d0db3b66
|
[
"Apache-2.0"
] | null | null | null |
.ipynb_checkpoints/wordtochar-checkpoint.ipynb
|
raotnameh/ctc
|
c291d53966b0ff9965c2e89354271114d0db3b66
|
[
"Apache-2.0"
] | null | null | null | 20.45122 | 118 | 0.471676 |
[
[
[
"# To train a character Language model (LM) using Kenlm",
"_____no_output_____"
]
],
[
[
"from tqdm import tqdm\nimport os\nos.chdir(\"/home/hemant/txt/\") # change the directory to the text data \n\na = []\nm = ''\nwith open(\"out.txt\",\"r\") as f: # text file contains all the sentences with a next line ('\\n') separater.\n text = f.read()\nfor i,j in tqdm(enumerate(text)):\n if i%10000 == 9999:\n a.append(m)\n m = ''\n if j == \"\\n\":\n m = m + j\n else: m = m + j + \" \"",
"_____no_output_____"
]
],
[
[
"### To save the data ",
"_____no_output_____"
]
],
[
[
"with open(\"char.txt\", \"w\") as f:\n f.write(\"\".join(a))",
"_____no_output_____"
]
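,
                [
                    "# Possible next step (a sketch, not part of the original notebook):\n# train the character-level LM on char.txt with the KenLM command-line tools, e.g.\n#\n# bin/lmplz -o 5 < char.txt > char_lm.arpa\n# bin/build_binary char_lm.arpa char_lm.binary\n#\n# The n-gram order (-o 5) and the output file names are assumptions; adjust as needed.",
                    "_____no_output_____"
                ]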
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4adc017166dfb9e2631ddf021d049b29e49aa227
| 6,354 |
ipynb
|
Jupyter Notebook
|
vispygis/vispygis.ipynb
|
KTH-dESA/demos
|
3df7cbbef1b31bb619f48df328ba6eb01338e870
|
[
"MIT"
] | 2 |
2019-06-13T11:23:32.000Z
|
2019-06-13T11:33:17.000Z
|
vispygis/vispygis.ipynb
|
KTH-dESA/demos
|
3df7cbbef1b31bb619f48df328ba6eb01338e870
|
[
"MIT"
] | 1 |
2021-05-05T12:51:36.000Z
|
2021-05-05T12:51:36.000Z
|
vispygis/vispygis.ipynb
|
KTH-dESA/demos
|
3df7cbbef1b31bb619f48df328ba6eb01338e870
|
[
"MIT"
] | 1 |
2019-06-13T11:25:10.000Z
|
2019-06-13T11:25:10.000Z
| 26.148148 | 127 | 0.5565 |
[
[
[
"import urllib.request\nimport os\nimport geopandas as gpd\nimport rasterio\nfrom rasterio.plot import show\nimport zipfile\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# GIS visualizations with geopandas",
"_____no_output_____"
]
],
[
[
"url = 'https://biogeo.ucdavis.edu/data/gadm3.6/shp/gadm36_COL_shp.zip'\ndest = os.path.join('data', 'admin')",
"_____no_output_____"
],
[
"os.makedirs(dest, exist_ok=True)\nurllib.request.urlretrieve(url, os.path.join(dest, 'gadm36_COL_shp.zip'))\n\nwith zipfile.ZipFile(os.path.join(dest, 'gadm36_COL_shp.zip'), 'r') as zip_ref:\n zip_ref.extractall(dest)",
"_____no_output_____"
],
[
"gdf_adm0 = gpd.read_file(os.path.join(dest, 'gadm36_COL_0.shp'))\ngdf_adm1 = gpd.read_file(os.path.join(dest, 'gadm36_COL_1.shp'))",
"_____no_output_____"
],
[
"gdf_adm1",
"_____no_output_____"
],
[
"gdf_adm0.plot()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 1, figsize=(12, 12))\n\ngdf_adm0.plot(color='white', edgecolor='black', ax=ax)\ngdf_adm1.plot(column='NAME_1', ax=ax, cmap='Set2', \n legend=True, \n legend_kwds={'loc': \"upper right\",\n 'bbox_to_anchor': (1.4, 1)})",
"_____no_output_____"
],
[
"url = 'https://download.geofabrik.de/south-america/colombia-latest-free.shp.zip'\ndest = os.path.join('data', 'places')",
"_____no_output_____"
],
[
"os.makedirs(dest, exist_ok=True)\nurllib.request.urlretrieve(url, os.path.join(dest, 'colombia-latest-free.shp.zip'))\n\nwith zipfile.ZipFile(os.path.join(dest, 'colombia-latest-free.shp.zip'), 'r') as zip_ref:\n zip_ref.extractall(dest)",
"_____no_output_____"
],
[
"gdf_water = gpd.read_file(os.path.join(dest, 'gis_osm_water_a_free_1.shp'))\ngdf_places = gpd.read_file(os.path.join(dest, 'gis_osm_places_free_1.shp'))\ngdf_cities = gdf_places.loc[gdf_places['fclass']=='city'].copy()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 1, figsize=(12, 12))\n\ngdf_adm0.plot(color='white', edgecolor='black', ax=ax)\ngdf_adm1.plot(color='white', ax=ax)\ngdf_water.plot(edgecolor='blue', ax=ax)\ngdf_cities.plot(column='population', ax=ax, legend=True)",
"_____no_output_____"
],
[
"gdf_cities['size'] = gdf_cities['population'] / gdf_cities['population'].max() * 500",
"_____no_output_____"
],
[
"from mpl_toolkits.axes_grid1 import make_axes_locatable\n\nfig, ax = plt.subplots(1, 1, figsize=(12, 12))\n\ndivider = make_axes_locatable(ax)\n\ncax = divider.append_axes(\"right\", size=\"5%\", pad=0.1)\n\ngdf_adm0.plot(color='white', edgecolor='black', ax=ax)\ngdf_adm1.plot(color='white', edgecolor='gray', ax=ax)\ngdf_water.plot(edgecolor='lightblue', ax=ax)\ngdf_cities.plot(markersize='size', column='population', \n cmap='viridis', edgecolor='white', \n ax=ax, cax=cax, legend=True,\n legend_kwds={'label': \"Population by city\"})",
"_____no_output_____"
],
[
"url = 'https://data.worldpop.org/GIS/Population_Density/Global_2000_2020_1km_UNadj/2020/COL/col_pd_2020_1km_UNadj.tif'\ndest = os.path.join('data', 'pop')",
"_____no_output_____"
],
[
"os.makedirs(dest, exist_ok=True)\nurllib.request.urlretrieve(url, os.path.join(dest, 'col_pd_2020_1km_UNadj.tif'))",
"_____no_output_____"
],
[
"with rasterio.open(os.path.join(dest, 'col_pd_2020_1km_UNadj.tif')) as src:\n fig, ax = plt.subplots(figsize=(12, 12))\n show(src, ax=ax, cmap='viridis_r')\n gdf_adm1.boundary.plot(edgecolor='gray', linewidth=0.5, ax=ax)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adc0dff358d5a66521b7d16e3e32fe9ad03feb9
| 3,815 |
ipynb
|
Jupyter Notebook
|
notebooks/cookie.ipynb
|
cazsol/ThinkBayes2
|
cf3dbdb4cde114d53e2802007656edb398778020
|
[
"MIT"
] | null | null | null |
notebooks/cookie.ipynb
|
cazsol/ThinkBayes2
|
cf3dbdb4cde114d53e2802007656edb398778020
|
[
"MIT"
] | null | null | null |
notebooks/cookie.ipynb
|
cazsol/ThinkBayes2
|
cf3dbdb4cde114d53e2802007656edb398778020
|
[
"MIT"
] | null | null | null | 24.612903 | 182 | 0.579554 |
[
[
[
"# Think Bayes\n\nThis notebook presents example code and exercise solutions for Think Bayes.\n\nCopyright 2018 Allen B. Downey\n\nMIT License: https://opensource.org/licenses/MIT",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append(\"..\")",
"_____no_output_____"
],
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import classes from thinkbayes2\nfrom thinkbayes2 import Hist, Pmf, Suite",
"_____no_output_____"
]
],
[
[
"## The cookie problem\n\nHere's the original statement of the cookie problem:\n\n> Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each.\n\n> Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla. What is the probability that it came from Bowl 1?\n\nIf we only draw one cookie, this problem is simple, but if we draw more than one cookie, there is a complication: do we replace the cookie after each draw, or not?\n\nIf we replace the cookie, the proportion of vanilla and chocolate cookies stays the same, and we can perform multiple updates with the same likelihood function.\n\nIf we *don't* replace the cookie, the proportions change and we have to keep track of the number of cookies in each bowl.\n\n**Exercise:**\n\nModify the solution from the book to handle selection without replacement.\n\nHint: Add instance variables to the `Cookie` class to represent the hypothetical state of the bowls, and modify the `Likelihood` function accordingly.\n\nTo represent the state of a Bowl, you might want to use the `Hist` class from `thinkbayes2`.",
"_____no_output_____"
]
],
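    [
        [
            "Below is one possible sketch of the without-replacement version (an illustration, not the book's official solution). It assumes the `Suite` class imported above and keeps the hypothetical state of each bowl in a plain dictionary attached to the suite.",
            "_____no_output_____"
        ]
    ],
    [
        [
            "# One possible sketch (assumptions noted above); the solution cells below are left for the reader\nclass Cookie(Suite):\n    def Likelihood(self, data, hypo):\n        bowl = self.bowls[hypo]\n        total = sum(bowl.values())\n        if total == 0 or bowl[data] == 0:\n            return 0\n        like = bowl[data] / total\n        bowl[data] -= 1  # the drawn cookie is not replaced under this hypothesis\n        return like\n\nbowls = {'Bowl 1': dict(vanilla=30, chocolate=10),\n         'Bowl 2': dict(vanilla=20, chocolate=20)}\nsuite = Cookie(list(bowls.keys()))\nsuite.bowls = bowls\nsuite.Update('vanilla')\nsuite.Print()",
            "_____no_output_____"
        ]
    ],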
[
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
],
[
"# Solution goes here",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adc1b8385052dda6ce397439fafe650e38bd1d6
| 94,549 |
ipynb
|
Jupyter Notebook
|
1 - Curve Fitting.ipynb
|
sarvai/MLtutorial
|
37f6cac05451d9937b491d53690dd1a3f8c6c8ea
|
[
"MIT"
] | null | null | null |
1 - Curve Fitting.ipynb
|
sarvai/MLtutorial
|
37f6cac05451d9937b491d53690dd1a3f8c6c8ea
|
[
"MIT"
] | null | null | null |
1 - Curve Fitting.ipynb
|
sarvai/MLtutorial
|
37f6cac05451d9937b491d53690dd1a3f8c6c8ea
|
[
"MIT"
] | null | null | null | 178.39434 | 24,934 | 0.887254 |
[
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as pp\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"# Introduction",
"_____no_output_____"
],
[
"Let's assume that we are given the function $f(\\mathbf{x}) : \\mathbb{R}^M \\rightarrow \\mathbb{R}$. At each point $\\mathbf{x}$ this function produces the value $y = f(\\mathbf{x})$. Due to real world circumstances, this assignment is usually noisy, meaning that for a point $\\mathbf{x}$ rather than measuring $y$, we obtain a slightly misplaced value $\\hat{y}$ calculated as\n\\begin{equation}\n \\hat{y} = f(\\mathbf{x}) + \\mathcal{N}(0,\\sigma^2).\n\\end{equation}\nHere $\\mathcal{N}(0,\\sigma^2)$, is the normal distribution with mean $0$ and standard deviation $\\sigma^2$. Here we will refere to each $(\\mathbf{x},\\hat{y})$ as a data point. This term is referred as Gaussian noise. Now given a set of data points $\\{(\\mathbf{x}_n,\\hat{y}_n)\\}_{n=1}^{N}$, our goal is to find a function $g(\\mathbf{x}) : \\mathbb{R}^M \\rightarrow \\mathbb{R}$ that approximates $f$ as close as possible. In other words, we would like to find the function $g$ such that\n\\begin{equation}\n \\| g(\\mathbf{x}) - f(\\mathbf{x}) \\|^2\n\\end{equation}\nis minimized.",
"_____no_output_____"
]
],
[
[
"# This class contains the dataset\n\nclass dataset:\n def _build_dataset( self ):\n self._content = []\n for i in range( self.N ):\n x = np.random.uniform( self.I0, self.I1 )\n y = self.func( x )\n y_n = y + np.random.normal( 0, self.noise_std )\n self._content.append( [ x,y,y_n ] )\n self._content = np.array( self._content )\n \n def _build_dataset_dense( self ):\n x = np.arange( self.I0, self.I1, ( self.I1 - self.I0 )/(self.N_dense+1) )\n y = self.func( x )\n \n arr = np.array( [ x, y ] )\n self._content_dense = arr.transpose()\n \n \n def __init__( self ):\n self.func = np.sin\n self.N = 20\n self.N_dense = 100\n self.noise_std = 0.2\n \n self.I0 = -1*np.pi\n self.I1 = np.pi\n \n self._build_dataset()\n self._build_dataset_dense()\n \n @property\n def x( self ):\n return self._content[:,0].ravel()\n \n @property\n def y( self ):\n return self._content[:,1].ravel()\n \n @property\n def y_n( self ):\n return self._content[:,2].ravel()\n \n @property\n def x_dense( self ):\n return self._content_dense[:,0].ravel()\n \n @property\n def y_dense( self ):\n return self._content_dense[:,1].ravel()\n \ndset = dataset()",
"_____no_output_____"
]
],
[
[
"# Sinus Curve",
"_____no_output_____"
],
[
"Let $f(x) = \\sin(x)$ for the values in the interval $[-\\pi,\\pi]$. The goal of this section is to reproduce the red data points given the blue plot data points. To achieve this goal we will be looking at two different ways of modeling the data.",
"_____no_output_____"
]
],
[
[
"pp.plot( dset.x_dense, dset.y_dense,'k-.', label='Actual Curve' )\npp.plot( dset.x, dset.y, 'r.', label='Actual Values')\npp.plot( dset.x, dset.y_n, 'b.', label='Noisy Values')\npp.legend()\npp.grid()",
"_____no_output_____"
]
],
[
[
"## Polynomial Curve-Fitting",
"_____no_output_____"
],
[
"We can assume that $g$ belongs to the class of polynomial functions of the degree $D$. This gives $g$ the form of\n\\begin{equation}\n g(x) = \\sum_{d=0}^{D} a_d x^d.\n\\end{equation}\nTo find $g$ we have to determine that values of $a_0, \\dots, a_D$. For each $x_n$ we have the following equation\n\\begin{equation}\n \\sum_{d=0}^{D} a_d x_n^d = \\hat{y}_n,\n\\end{equation}\nand this yields to a system of linear equations from which values of $a_0, \\dots, a_D$ can be calculated. We assume that this system has the form of $Xa=Y$ and the matrices $X \\in \\mathbb{R}^{(N,D)}$ and $Y\\in \\mathbb{R}^{(N,1)}$ as calculated as following :",
"_____no_output_____"
]
],
[
[
"D = 5\n\nX = np.zeros((dset.N,D+1))\nY = np.zeros((dset.N,1))\n\nfor i in range( dset.N ):\n Y[i,0] = dset.y_n[i]\n for d in range( D+1 ):\n X[i,d] = dset.x[i] ** d\n \nX = np.matrix( X )\nY = np.matrix( Y )",
"_____no_output_____"
]
],
[
[
"One solution to this system is obtained by $a = (X^TX)^{-1}X^{T}Y$.",
"_____no_output_____"
]
],
[
[
"a = np.linalg.inv( X.T * X ) * X.T * Y\na = np.array( a )\na = a.ravel()\nprint( a )",
"[-0.00933626 1.0647227 -0.04343132 -0.17778385 0.00544987 0.00821083]\n"
]
],
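    [
        [
            "Note that forming the inverse $(X^TX)^{-1}$ explicitly can be numerically fragile. As a rough alternative sketch (assuming the `X` and `Y` matrices defined above), `np.linalg.lstsq` computes the same least-squares solution directly and is usually preferred in practice.",
            "_____no_output_____"
        ]
    ],
    [
        [
            "# Sketch: least-squares solve of X a = Y without forming the explicit inverse\n# (assumes X and Y from the cell above are still in scope)\na_lstsq, residuals, rank, sv = np.linalg.lstsq(np.asarray(X), np.asarray(Y), rcond=None)\nprint(a_lstsq.ravel())",
            "_____no_output_____"
        ]
    ],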
[
[
"Now given $a$ for every $x$ we can calculate the value of $g(x)$. We will do so for the values of x_dense.",
"_____no_output_____"
]
],
[
[
"def g(x,D,a) :\n o = 0.0\n for d in range(D+1):\n o += a[d] * x ** d\n return o\n\ny_pred = []\nfor p in dset.x_dense :\n y_pred.append( g(p,D,a) )\ny_pred = np.array( y_pred )\n\npp.plot( dset.x_dense,dset.y_dense,'k-.', label='Actual Curve' )\npp.plot( dset.x_dense, y_pred, 'r-.', label='Predicted Values')\npp.legend()\npp.grid()",
"_____no_output_____"
]
],
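    [
        [
            "One simple way to quantify the error of this prediction (cf. the questions below) is the mean squared error between the predicted curve and the true curve on the dense grid. The short sketch below assumes `y_pred` from the previous cell.",
            "_____no_output_____"
        ]
    ],
    [
        [
            "# Sketch: mean squared error of the polynomial fit on the dense grid\n# (assumes y_pred from the previous cell)\nmse = np.mean((y_pred - dset.y_dense) ** 2)\nprint('MSE on dense grid:', mse)",
            "_____no_output_____"
        ]
    ],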
[
[
"### Questions\n\n1. How does changing $N$ and $D$ change the shape of the predicted curve?\n2. Can we do this for some other functions? Lienar/Non-linear\n3. How else can we obtain $a$?\n4. How can we measure the error of this prediction?",
"_____no_output_____"
],
[
"## Radial Basis Function Kernel Curve Fitting (RBF Kernel)",
"_____no_output_____"
],
[
"Similar to polynomial curve fitting, our goal here to obtain the parameters of the predicted curve by solving a system of linear equations. Here, we will solve this problem by placing a basis function at each data point and formulate the predicted curve as a weighted sum of the basis functions. The Radial Basis funciton kernel has the form of \n\\begin{equation}\n k(x,x') = \\exp(- \\frac{\\|x-x'\\|^2}{2\\sigma^2}).\n\\end{equation}\nAt each point the shape of this kernel is as following :",
"_____no_output_____"
]
],
[
[
"def rbf( x, x_base, sigma2 ):\n return np.exp(-1* (( x-x_base )**2) / (2*sigma2) )\n\nkernal_sigma2 = 0.5\nx_base = 0\n\ny_rbf = []\nfor x in dset.x_dense :\n y_rbf.append( rbf(x, x_base, kernal_sigma2) )\n \npp.plot( dset.x_dense,y_rbf,'k-.', label='RBF Kernel' )\npp.legend()\npp.grid()",
"_____no_output_____"
]
],
[
[
"Placing this function at each data point and calculating a weighted sum will give us \n\\begin{equation}\n g(x) = \\sum_{n=1}^{N} a_n k(x,x_n).\n\\end{equation}\nThis sum again provides us with a system of linear equations with the form a $Ka=Y$ where $K \\in \\mathbb{R}^{(N,N)}$ and $Y \\in \\mathbb{R}^{(N,1)}$ and they are calculated as :",
"_____no_output_____"
]
],
[
[
"K = np.zeros((dset.N,dset.N))\nY = np.zeros((dset.N,1))\n\nfor i in range( dset.N ):\n Y[i,0] = dset.y_n[i]\n for j in range( dset.N ):\n K[i,j] = rbf( dset.x[i], dset.x[j], kernal_sigma2 )\n\n# Regularizer\nK = K + np.eye(dset.N)\n \nK = np.matrix( K )\nY = np.matrix( Y )",
"_____no_output_____"
]
],
[
[
"Similarly, we solve this system as $a = (K^TK)^{-1}K^{T}Y$.",
"_____no_output_____"
]
],
[
[
"a = np.linalg.inv( K.T * K ) * K.T * Y\na = np.array( a )\na = a.ravel()\nprint( a )",
"[ 0.22360265 -0.4208425 -0.12855211 -0.0325646 0.07441013 -0.00932151\n 0.08161617 0.23760018 -0.33090978 -0.02289759 0.23674976 -0.29165578\n -0.17067804 -0.01623991 0.41249722 -0.09758318 -0.23785424 -0.13231298\n 0.28005175 0.21525552]\n"
]
],
[
[
"Now given $a$ for every $x$ we can calculate the value of $g(x)$. We will do so for the values of x_dense.",
"_____no_output_____"
]
],
[
[
"def g(x,x_basis,a) :\n o = 0.0\n for d in range(dset.N):\n o += a[d] * rbf(x,x_basis[d],kernal_sigma2)\n return o\n\ny_pred = []\nfor x in dset.x_dense :\n y_pred.append( g(x,dset.x,a) )\ny_pred = np.array( y_pred )\n\npp.plot( dset.x_dense,dset.y_dense,'k-.', label='Actual Curve' )\npp.plot( dset.x_dense, y_pred, 'r-.', label='Predicted Values')\npp.legend()\npp.grid()",
"_____no_output_____"
]
],
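    [
        [
            "The kernel width $\\sigma^2$ controls how smooth the fitted curve is: small values let the curve chase individual noisy points, while large values average over many of them (see the questions below). A rough sketch of refitting with different widths, assuming the `rbf` helper and `dset` defined above, is given here.",
            "_____no_output_____"
        ]
    ],
    [
        [
            "# Sketch: effect of the kernel width on the RBF fit\n# (assumes rbf() and dset from the cells above)\nfor s2 in [0.1, 0.5, 2.0]:\n    K_s = np.matrix([[rbf(xi, xj, s2) for xj in dset.x] for xi in dset.x]) + np.eye(dset.N)\n    Y_s = np.matrix(dset.y_n).T\n    a_s = np.array(np.linalg.inv(K_s.T * K_s) * K_s.T * Y_s).ravel()\n    y_s = [np.sum([a_s[d] * rbf(x, dset.x[d], s2) for d in range(dset.N)]) for x in dset.x_dense]\n    pp.plot(dset.x_dense, y_s, label='sigma2 = %s' % s2)\npp.plot(dset.x_dense, dset.y_dense, 'k-.', label='Actual Curve')\npp.legend()\npp.grid()",
            "_____no_output_____"
        ]
    ],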
[
[
"### Questions\n\n1. How does changing $\\sigma^2$ changes the shape of the predicted curve?\n2. What other basis functions can we use?\n3. How can we measure the error of this prediction?",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adc1da993a94dd7faeae5c7e570bfffbf207cff
| 117,383 |
ipynb
|
Jupyter Notebook
|
imtroduccion pandas.ipynb
|
Quisutd3us/Python-Pandas
|
39f53ac98ca6a5fdcc8dcdf244ae3556ed6c2de2
|
[
"MIT"
] | null | null | null |
imtroduccion pandas.ipynb
|
Quisutd3us/Python-Pandas
|
39f53ac98ca6a5fdcc8dcdf244ae3556ed6c2de2
|
[
"MIT"
] | null | null | null |
imtroduccion pandas.ipynb
|
Quisutd3us/Python-Pandas
|
39f53ac98ca6a5fdcc8dcdf244ae3556ed6c2de2
|
[
"MIT"
] | null | null | null | 33.663034 | 104 | 0.26589 |
[
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df=pd.read_csv(\"C:/Users/d3us/Desktop/cursoDataScienceLinkedIn/bd/2008.csv\", nrows=100000)\ndf[10:34]",
"_____no_output_____"
],
[
"df.sample(frac=1) #desordena el df",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.NASDelay.head()",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df2=df.head(100) #copiando solo los primeros 100 registros\ndf2.head()",
"_____no_output_____"
],
[
"df2.columns",
"_____no_output_____"
],
[
"df2[[\"Year\",\"Month\",\"Cancelled\",\"CancellationCode\"]] #clumnas",
"_____no_output_____"
],
[
"df2[:19] #muestra filas",
"_____no_output_____"
],
[
"df2[[\"Year\",\"Month\",\"Cancelled\",\"CancellationCode\"]][:10] #combinacion de filas y columnas",
"_____no_output_____"
],
[
"#aplicando filtros\ndf[df[\"Cancelled\"]==1]",
"_____no_output_____"
],
[
"cancelled=df[df[\"Cancelled\"]==1]\ncancelled[[\"Year\",\"Month\",\"Cancelled\",\"CancellationCode\",\"Origin\"]][:]",
"_____no_output_____"
],
[
"cancelled_lax=df[(df[\"Cancelled\"]==1) & (df[\"Origin\"]=='LAX')]\ncancelled_lax[[\"Year\",\"Month\",\"Cancelled\",\"CancellationCode\",\"Origin\"]]",
"_____no_output_____"
],
[
"cancelled_lax_las=df[(df[\"Cancelled\"]==1) & (df.Origin.isin([\"LAX\",\"LAS\"]))]\ncancelled_lax_las[[\"Year\",\"Month\",\"Cancelled\",\"CancellationCode\",\"Origin\"]]",
"_____no_output_____"
],
[
"df[pd.isna(df[\"DepTime\"])].head() #buscando valores nan o vacios",
"_____no_output_____"
],
[
"len(df[pd.isna(df[\"DepTime\"])])",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adc2a95e20ecb7652c0ad1495aeb5886a1b29a5
| 69,152 |
ipynb
|
Jupyter Notebook
|
notebooks/.ipynb_checkpoints/Lawley strains-checkpoint.ipynb
|
mjenior/Jenior_CdifficileGENRE_2021
|
12003916b65e71d4a2200cb0536f4629e64bb95d
|
[
"MIT"
] | 1 |
2020-11-25T18:57:03.000Z
|
2020-11-25T18:57:03.000Z
|
notebooks/.ipynb_checkpoints/Lawley strains-checkpoint.ipynb
|
mjenior/Jenior_CdifficileGENRE_2020
|
12003916b65e71d4a2200cb0536f4629e64bb95d
|
[
"MIT"
] | null | null | null |
notebooks/.ipynb_checkpoints/Lawley strains-checkpoint.ipynb
|
mjenior/Jenior_CdifficileGENRE_2020
|
12003916b65e71d4a2200cb0536f4629e64bb95d
|
[
"MIT"
] | null | null | null | 38.675615 | 1,452 | 0.540519 |
[
[
[
"import cobra\nimport copy\n\nimport mackinac\nmackinac.modelseed.ms_client.url = 'http://p3.theseed.org/services/ProbModelSEED/'\nmackinac.workspace.ws_client.url = 'http://p3.theseed.org/services/Workspace'\nmackinac.genome.patric_url = 'https://www.patricbrc.org/api/'",
"_____no_output_____"
],
[
"# PATRIC user information\nmackinac.get_token('mljenior')\n# password: matrix54",
"patric password: ········\n"
]
],
[
[
"### Generate models",
"_____no_output_____"
]
],
[
[
"# Barnesiella intestinihominis\n\ngenome_id = '742726.3'\ntemplate_id = '/chenry/public/modelsupport/templates/GramNegModelTemplate'\nmedia_id = '/chenry/public/modelsupport/media/Complete'\nfile_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/barn_int.draft.json'\nstrain_id = 'Barnesiella intestinihominis YIT 11860'\n\nmackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)\nmackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)\nmackinac.optimize_modelseed_model(genome_id)\nmodel = mackinac.create_cobra_model_from_modelseed_model(genome_id)\nmodel.id = strain_id\n\ncobra.io.save_json_model(model, file_id)",
"_____no_output_____"
],
[
"# Lactobacillus reuteri\n\ngenome_id = '863369.3'\ntemplate_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'\nmedia_id = '/chenry/public/modelsupport/media/Complete'\nfile_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.draft.json'\nstrain_id = 'Lactobacillus reuteri mlc3'\n\nmackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)\nmackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)\nmackinac.optimize_modelseed_model(genome_id)\nmodel = mackinac.create_cobra_model_from_modelseed_model(genome_id)\nmodel.id = strain_id\n\ncobra.io.save_json_model(model, file_id)",
"_____no_output_____"
],
[
"# Enterococcus hirae\n\ngenome_id = '768486.3'\ntemplate_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'\nmedia_id = '/chenry/public/modelsupport/media/Complete'\nfile_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ent_hir.draft.json'\nstrain_id = 'Enterococcus hirae ATCC 9790'\n\nmackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)\nmackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)\nmackinac.optimize_modelseed_model(genome_id)\nmodel = mackinac.create_cobra_model_from_modelseed_model(genome_id)\nmodel.id = strain_id\n\ncobra.io.save_json_model(model, file_id)",
"_____no_output_____"
],
[
"# Anaerostipes caccae\n\ngenome_id = '411490.6'\ntemplate_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'\nmedia_id = '/chenry/public/modelsupport/media/Complete'\nfile_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ana_stip.draft.json'\nstrain_id = 'Anaerostipes caccae DSM 14662'\n\nmackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)\nmackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)\nmackinac.optimize_modelseed_model(genome_id)\nmodel = mackinac.create_cobra_model_from_modelseed_model(genome_id)\nmodel.id = strain_id\n\ncobra.io.save_json_model(model, file_id)",
"_____no_output_____"
],
[
"# Staphylococcus warneri\n\ngenome_id = '596319.3'\ntemplate_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'\nmedia_id = '/chenry/public/modelsupport/media/Complete'\nfile_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/staph_warn.draft.json'\nstrain_id = 'Staphylococcus warneri L37603'\n\nmackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)\nmackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)\nmackinac.optimize_modelseed_model(genome_id)\nmodel = mackinac.create_cobra_model_from_modelseed_model(genome_id)\nmodel.id = strain_id\n\ncobra.io.save_json_model(model, file_id)",
"_____no_output_____"
],
[
"# Adlercreutzia equolifaciens\n\ngenome_id = '1384484.3'\ntemplate_id = '/chenry/public/modelsupport/templates/GramPosModelTemplate'\nmedia_id = '/chenry/public/modelsupport/media/Complete'\nfile_id = '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/adl_equ.draft.json'\nstrain_id = 'Adlercreutzia equolifaciens DSM 19450'\n\nmackinac.reconstruct_modelseed_model(genome_id, template_reference=template_id)\nmackinac.gapfill_modelseed_model(genome_id, media_reference=media_id)\nmackinac.optimize_modelseed_model(genome_id)\nmodel = mackinac.create_cobra_model_from_modelseed_model(genome_id)\nmodel.id = strain_id\n\ncobra.io.save_json_model(model, file_id)",
"_____no_output_____"
]
],
[
[
"### Curate Draft Models",
"_____no_output_____"
]
],
[
[
"# Read in draft models\nmixB1 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/barn_int.draft.json')\nmixB2 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.draft.json')\nmixB3 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ent_hir.draft.json')\nmixB4 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/ana_stip.draft.json')\nmixB5 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/staph_warn.draft.json')\nmixB6 = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/adl_equ.draft.json')",
"_____no_output_____"
],
[
"# Quality check functions\n\n# Identify potentially gapfilled reactions\ndef _findGapfilledRxn(model, exclude):\n gapfilled = []\n transport = _findTransports(model)\n if not type(exclude) is list:\n exclude = [exclude]\n \n for index in model.reactions:\n if len(list(index.genes)) == 0:\n if not index in model.boundary:\n if not index.id in exclude or not index.id in transport:\n gapfilled.append(index.id)\n \n if len(gapfilled) > 0:\n print(str(len(gapfilled)) + ' metabolic reactions not associated with genes')\n \n return gapfilled\n\n# Check for missing transport and exchange reactions\ndef _missingRxns(model, extracellular):\n\n transporters = set(_findTransports(model))\n exchanges = set([x.id for x in model.exchanges])\n \n missing_exchanges = []\n missing_transports = []\n \n for metabolite in model.metabolites:\n if not metabolite.compartment == extracellular:\n continue\n\n curr_rxns = set([x.id for x in list(metabolite.reactions)])\n \n if not bool(curr_rxns & transporters):\n missing_transports.append(metabolite.id)\n if not bool(curr_rxns & exchanges):\n missing_exchanges.append(metabolite.id)\n \n if len(missing_transports) != 0:\n print(str(len(missing_transports)) + ' extracellular metabolites are missing transport reactions')\n if len(missing_exchanges) != 0:\n print(str(len(missing_exchanges)) + ' extracellular metabolites are missing exchange reactions')\n \n return missing_transports, missing_exchanges\n\n\n# Checks which cytosolic metabolites are generated for free (bacteria only)\ndef _checkFreeMass(raw_model, cytosol):\n \n model = copy.deepcopy(raw_model)\n \n # Close all exchanges\n for index in model.boundary:\n model.reactions.get_by_id(index.id).lower_bound = 0.\n \n # Identify all metabolites that are produced within the network\n demand_metabolites = [x.reactants[0].id for x in model.demands if len(x.reactants) > 0] + [x.products[0].id for x in model.demands if len(x.products) > 0]\n\n free = []\n for index in model.metabolites: \n if index.id in demand_metabolites:\n continue\n elif not index.compartment in cytosol:\n continue\n else:\n demand = model.add_boundary(index, type='demand')\n model.objective = demand\n obj_val = model.slim_optimize(error_value=0.)\n if obj_val > 1e-8:\n free.append(index.id)\n model.remove_reactions([demand])\n \n if len(free) > 0:\n print(str(len(free)) + ' metabolites are generated for free')\n\n return free\n\n\n# Check for mass and charge balance in reactions\ndef _checkBalance(model, exclude=[]):\n \n imbalanced = []\n mass_imbal = 0\n charge_imbal = 0\n elem_set = set()\n for metabolite in model.metabolites:\n try:\n elem_set |= set(metabolite.elements.keys())\n except:\n pass\n \n if len(elem_set) == 0:\n imbalanced = model.reactions\n mass_imbal = len(model.reactions)\n charge_imbal = len(model.reactions)\n print('No elemental data associated with metabolites!')\n \n else:\n if not type(exclude) is list: exclude = [exclude]\n for index in model.reactions:\n if index in model.boundary or index.id in exclude:\n continue\n\n else:\n try:\n test = index.check_mass_balance()\n except ValueError:\n continue\n\n if len(list(test)) > 0:\n imbalanced.append(index.id)\n\n if 'charge' in test.keys():\n charge_imbal += 1\n if len(set(test.keys()).intersection(elem_set)) > 0:\n mass_imbal += 1\n\n if mass_imbal != 0:\n print(str(mass_imbal) + ' reactions are mass imbalanced')\n if charge_imbal != 0:\n print(str(charge_imbal) + ' reactions are charge imbalanced')\n \n return imbalanced\n\n\n# Identify transport reactions (for any 
number compartments)\ndef _findTransports(model):\n transporters = []\n compartments = set(list(model.compartments))\n if len(compartments) == 1:\n raise Exception('Model only has one compartment!')\n \n for reaction in model.reactions:\n \n reactant_compartments = set([x.compartment for x in reaction.reactants])\n product_compartments = set([x.compartment for x in reaction.products])\n reactant_baseID = set([x.id.split('_')[0] for x in reaction.reactants])\n product_baseID = set([x.id.split('_')[0] for x in reaction.products])\n \n if reactant_compartments == product_compartments and reactant_baseID != product_baseID:\n continue\n elif bool(compartments & reactant_compartments) == True and bool(compartments & product_compartments) == True:\n transporters.append(reaction.id)\n \n return transporters\n\n\n# Checks the quality of models by a couple metrics and returns problems\ndef checkQuality(model, exclude=[], cytosol='c', extracellular='e'):\n\n gaps = _findGapfilledRxn(model, exclude)\n freemass = _checkFreeMass(model, cytosol)\n balance = _checkBalance(model, exclude)\n trans, exch = _missingRxns(model, extracellular)\n \n test = gaps + freemass + balance\n if len(test) == 0: print('No inconsistencies detected')\n \n # Create reporting data structure\n quality = {}\n quality['gaps'] = gaps\n quality['freemass'] = freemass\n quality['balance'] = balance\n quality['trans'] = trans\n quality['exch'] = exch\n\n return quality\n",
"_____no_output_____"
],
[
"mixB1",
"_____no_output_____"
],
[
"mixB1_errors = checkQuality(mixB1)",
"148 metabolic reactions not associated with genes\n1 reactions are mass imbalanced\n40 reactions are charge imbalanced\n"
],
[
"mixB2",
"_____no_output_____"
],
[
"mixB2_errors = checkQuality(mixB2)",
"136 metabolic reactions not associated with genes\n1 reactions are mass imbalanced\n34 reactions are charge imbalanced\n"
],
[
"mixB3",
"_____no_output_____"
],
[
"mixB3_errors = checkQuality(mixB3)",
"143 metabolic reactions not associated with genes\n1 reactions are mass imbalanced\n33 reactions are charge imbalanced\n"
],
[
"mixB4",
"_____no_output_____"
],
[
"mixB4_errors = checkQuality(mixB4)",
"139 metabolic reactions not associated with genes\n1 reactions are mass imbalanced\n26 reactions are charge imbalanced\n"
],
[
"mixB5",
"_____no_output_____"
],
[
"mixB5_errors = checkQuality(mixB5)",
"109 metabolic reactions not associated with genes\n1 reactions are mass imbalanced\n37 reactions are charge imbalanced\n"
],
[
"mixB6",
"_____no_output_____"
],
[
"mixB6_errors = checkQuality(mixB6)",
"153 metabolic reactions not associated with genes\n1 reactions are mass imbalanced\n26 reactions are charge imbalanced\n"
],
[
"# Remove old bio1 (generic Gram-positive Biomass function) and macromolecule demand reactions\nmixB1.remove_reactions(['rxn13783_c', 'rxn13784_c', 'rxn13782_c', 'bio1', 'SK_cpd11416_c'])\n\n",
"_____no_output_____"
],
[
"\n# Make sure all the models can grow anaerobically\nmodel.reactions.get_by_id('EX_cpd00007_e').lower_bound = 0. ",
"_____no_output_____"
],
[
"# Universal reaction bag\nuniversal = cobra.io.load_json_model('/home/mjenior/Desktop/repos/Jenior_Cdifficile_2019/data/universal.json')\n\n# Fix compartments\ncompartment_dict = {'Cytosol': 'cytosol', 'Extracellular': 'extracellular', 'c': 'cytosol', 'e': 'extracellular', \n 'cytosol': 'cytosol', 'extracellular': 'extracellular'}\nfor cpd in universal.metabolites: \n cpd.compartment = compartment_dict[cpd1.compartment]",
"_____no_output_____"
],
[
"import copy\nimport cobra\nimport symengine\n\n# pFBA gapfiller\ndef fast_gapfill(model, objective, universal, extracellular='extracellular', media=[], transport=False):\n '''\n Parameters\n ----------\n model_file : str\n Model to be gapfilled\n objective : str\n Reaction ID for objective function\n universal_file : str\n Reaction bag reference\n extracellular : str\n Label for extracellular compartment of model\n media : list\n list of metabolite IDs in media condition\n transport : bool\n Determine if passive transporters should be added in defined media\n '''\n \n # Define overlapping components\n target_rxns = set([str(x.id) for x in model.reactions])\n target_cpds = set([str(y.id) for y in model.metabolites])\n ref_rxns = set([str(z.id) for z in universal.reactions])\n shared_rxns = ref_rxns.intersection(target_rxns)\n \n # Remove overlapping reactions from universal bag, add model reactions to universal bag\n temp_universal = copy.deepcopy(universal)\n for rxn in shared_rxns: temp_universal.reactions.get_by_id(rxn).remove_from_model()\n temp_universal.add_reactions(list(copy.deepcopy(model.reactions)))\n \n # Define minimum objective value\n temp_universal.objective = objective\n obj_constraint = temp_universal.problem.Constraint(temp_universal.objective.expression, lb=1.0, ub=1000.0)\n temp_universal.add_cons_vars([obj_constraint])\n temp_universal.solver.update()\n \n # Set up pFBA objective\n pfba_expr = symengine.RealDouble(0)\n for rxn in temp_universal.reactions:\n if not rxn.id in target_rxns:\n pfba_expr += 1.0 * rxn.forward_variable\n pfba_expr += 1.0 * rxn.reverse_variable\n else:\n pfba_expr += 0.0 * rxn.forward_variable\n pfba_expr += 0.0 * rxn.reverse_variable\n temp_universal.objective = temp_universal.problem.Objective(pfba_expr, direction='min', sloppy=True)\n temp_universal.solver.update()\n \n # Set media condition\n for rxn in temp_universal.reactions:\n if len(rxn.reactants) == 0 or len(rxn.products) == 0:\n substrates = set([x.id for x in rxn.metabolites])\n if len(media) == 0 or bool(substrates & set(media)) == True:\n rxn.bounds = (max(rxn.lower_bound, -1000.), min(rxn.upper_bound, 1000.))\n else:\n rxn.bounds = (0.0, min(rxn.upper_bound, 1000.))\n \n # Run FBA and save solution\n solution = temp_universal.optimize()\n active_rxns = set([rxn.id for rxn in temp_universal.reactions if abs(solution.fluxes[rxn.id]) > 1e-6])\n \n # Screen new reaction IDs\n new_rxns = active_rxns.difference(target_rxns)\n \n # Get reactions and metabolites to be added to the model\n new_rxns = copy.deepcopy([universal.reactions.get_by_id(rxn) for rxn in new_rxns])\n new_cpds = set()\n for rxn in new_rxns: new_cpds |= set([str(x.id) for x in list(rxn.metabolites)]).difference(target_cpds)\n new_cpds = copy.deepcopy([universal.metabolites.get_by_id(cpd) for cpd in new_cpds])\n \n # Create gapfilled model \n new_model = copy.deepcopy(model)\n new_model.add_metabolites(new_cpds)\n new_model.add_reactions(new_rxns)\n \n # Identify extracellular metabolites that need new exchanges\n new_exchs = 0\n model_exchanges = set()\n rxns = set([str(rxn.id) for rxn in model.reactions])\n for rxn in new_model.reactions:\n if len(rxn.reactants) == 0 or len(rxn.products) == 0:\n if extracellular in [str(cpd.compartment) for cpd in rxn.metabolites]:\n model_exchanges |= set([rxn.id]) \n for cpd in new_model.metabolites:\n if cpd.compartment != extracellular: continue\n current_rxns = set([x.id for x in cpd.reactions])\n if bool(current_rxns & model_exchanges) == False:\n new_id = 'EX_' + cpd.id\n 
new_model.add_boundary(cpd, type='exchange', reaction_id=new_id, lb=-1000.0, ub=1000.0)\n new_exchs += 1\n\n # Report to user\n print('Gapfilled ' + str(len(new_rxns) + new_exchs) + ' reactions and ' + str(len(new_cpds)) + ' metabolites') \n if new_model.slim_optimize() <= 1e-6: print('WARNING: Objective does not carries flux')\n \n return new_model\n",
"_____no_output_____"
],
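A minimal usage sketch of `fast_gapfill` (not part of this notebook's cells): it assumes a draft model and a universal reaction bag have already been loaded with `cobra.io`, and the file paths and objective ID below are placeholders. Diffing reaction IDs before and after shows exactly what the gapfiller added.

```python
import cobra

# Placeholder paths -- substitute your own draft reconstruction and universal reaction bag
draft = cobra.io.load_json_model('draft_model.json')
universal_bag = cobra.io.load_json_model('universal_reactions.json')

before = set(rxn.id for rxn in draft.reactions)
filled = fast_gapfill(draft, objective='biomass', universal=universal_bag)

# Reactions that were pulled in from the universal bag (plus any new exchanges)
added = sorted(set(rxn.id for rxn in filled.reactions) - before)
for rxn_id in added:
    print(rxn_id, filled.reactions.get_by_id(rxn_id).build_reaction_string())
```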
[
"# Load in models\nmodel = cobra.io.load_json_model('/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.draft.json')",
"_____no_output_____"
],
[
"# Fix compartments\ncompartment_dict = {'Cytosol': 'cytosol', 'Extracellular': 'extracellular', 'c': 'cytosol', 'e': 'extracellular', \n 'cytosol': 'cytosol', 'extracellular': 'extracellular'}\nfor cpd2 in model.metabolites:\n cpd2.compartment = compartment_dict[cpd2.compartment]",
"_____no_output_____"
],
[
"# Thoroughly remove orphan reactions and metabolites\ndef all_orphan_prune(model):\n \n pruned_cpd = 0\n pruned_rxn = 0\n removed = 1\n while removed == 1:\n removed = 0\n\n # Metabolites\n for cpd in model.metabolites:\n if len(cpd.reactions) == 0:\n cpd.remove_from_model()\n pruned_cpd += 1\n removed = 1\n\n # Reactions\n for rxn in model.reactions:\n if len(rxn.metabolites) == 0: \n rxn.remove_from_model()\n pruned_rxn += 1\n removed = 1\n \n if pruned_cpd > 0: print('Pruned ' + str(pruned_cpd) + ' orphan metabolites')\n if pruned_rxn > 0: print('Pruned ' + str(pruned_rxn) + ' orphan reactions')\n\n return model\n",
"_____no_output_____"
],
[
"# Remove incorrect biomass-related components\n\n# Unwanted reactions\nrm_reactions = ['bio1']\nfor x in rm_reactions:\n model.reactions.get_by_id(x).remove_from_model()\n \n# Unwanted metabolites\nrm_metabolites = ['cpd15666_c','cpd17041_c','cpd17042_c','cpd17043_c']\nfor y in rm_metabolites:\n for z in model.metabolites.get_by_id(y).reactions:\n z.remove_from_model()\n model.metabolites.get_by_id(y).remove_from_model()\n\n# Remove gap-filled reactions",
"_____no_output_____"
],
[
"# Gram-positive Biomass formulation\n\n# DNA replication\n\ncpd00115_c = universal.metabolites.get_by_id('cpd00115_c') # dATP\ncpd00356_c = universal.metabolites.get_by_id('cpd00356_c') # dCTP\ncpd00357_c = universal.metabolites.get_by_id('cpd00357_c') # TTP\ncpd00241_c = universal.metabolites.get_by_id('cpd00241_c') # dGTP\ncpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP\ncpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O\n\ncpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP\ncpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate\ncpd00012_c = universal.metabolites.get_by_id('cpd00012_c') # PPi\n\ncpd17042_c = cobra.Metabolite(\n 'cpd17042_c',\n formula='',\n name='DNA polymer',\n compartment='cytosol')\n\ndna_rxn = cobra.Reaction('dna_rxn')\ndna_rxn.lower_bound = 0.\ndna_rxn.upper_bound = 1000.\ndna_rxn.add_metabolites({\n cpd00115_c: -1.0,\n cpd00356_c: -0.5,\n cpd00357_c: -1.0,\n cpd00241_c: -0.5,\n cpd00002_c: -4.0,\n cpd00001_c: -1.0,\n cpd17042_c: 1.0,\n cpd00008_c: 4.0,\n cpd00009_c: 4.0,\n cpd00012_c: 1.0\n})\n\n#--------------------------------------------------------------------------------#\n\n# RNA transcription\n\ncpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP\ncpd00052_c = universal.metabolites.get_by_id('cpd00052_c') # CTP\ncpd00062_c = universal.metabolites.get_by_id('cpd00062_c') # UTP\ncpd00038_c = universal.metabolites.get_by_id('cpd00038_c') # GTP\ncpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O\n\ncpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP\ncpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate\ncpd00012_c = universal.metabolites.get_by_id('cpd00012_c') # PPi\n\ncpd17043_c = cobra.Metabolite(\n 'cpd17043_c',\n formula='',\n name='RNA polymer',\n compartment='cytosol')\n\nrna_rxn = cobra.Reaction('rna_rxn')\nrna_rxn.name = 'RNA transcription'\nrna_rxn.lower_bound = 0.\nrna_rxn.upper_bound = 1000.\nrna_rxn.add_metabolites({\n cpd00002_c: -2.0,\n cpd00052_c: -0.5,\n cpd00062_c: -0.5,\n cpd00038_c: -0.5,\n cpd00001_c: -1.0,\n cpd17043_c: 1.0,\n cpd00008_c: 2.0,\n cpd00009_c: 2.0,\n cpd00012_c: 1.0\n})\n\n#--------------------------------------------------------------------------------#\n\n# Protein biosynthesis\n\ncpd00035_c = universal.metabolites.get_by_id('cpd00035_c') # L-Alanine\ncpd00051_c = universal.metabolites.get_by_id('cpd00051_c') # L-Arginine\ncpd00132_c = universal.metabolites.get_by_id('cpd00132_c') # L-Asparagine\ncpd00041_c = universal.metabolites.get_by_id('cpd00041_c') # L-Aspartate\ncpd00084_c = universal.metabolites.get_by_id('cpd00084_c') # L-Cysteine\ncpd00053_c = universal.metabolites.get_by_id('cpd00053_c') # L-Glutamine\ncpd00023_c = universal.metabolites.get_by_id('cpd00023_c') # L-Glutamate\ncpd00033_c = universal.metabolites.get_by_id('cpd00033_c') # Glycine\ncpd00119_c = universal.metabolites.get_by_id('cpd00119_c') # L-Histidine\ncpd00322_c = universal.metabolites.get_by_id('cpd00322_c') # L-Isoleucine\ncpd00107_c = universal.metabolites.get_by_id('cpd00107_c') # L-Leucine\ncpd00039_c = universal.metabolites.get_by_id('cpd00039_c') # L-Lysine\ncpd00060_c = universal.metabolites.get_by_id('cpd00060_c') # L-Methionine\ncpd00066_c = universal.metabolites.get_by_id('cpd00066_c') # L-Phenylalanine\ncpd00129_c = universal.metabolites.get_by_id('cpd00129_c') # L-Proline\ncpd00054_c = universal.metabolites.get_by_id('cpd00054_c') # L-Serine\ncpd00161_c = universal.metabolites.get_by_id('cpd00161_c') # 
L-Threonine\ncpd00065_c = universal.metabolites.get_by_id('cpd00065_c') # L-Tryptophan\ncpd00069_c = universal.metabolites.get_by_id('cpd00069_c') # L-Tyrosine\ncpd00156_c = universal.metabolites.get_by_id('cpd00156_c') # L-Valine\ncpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP\ncpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O\n\ncpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP\ncpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate\n\ncpd17041_c = cobra.Metabolite(\n 'cpd17041_c',\n formula='',\n name='Protein polymer',\n compartment='cytosol')\n\nprotein_rxn = cobra.Reaction('protein_rxn')\nprotein_rxn.name = 'Protein biosynthesis'\nprotein_rxn.lower_bound = 0.\nprotein_rxn.upper_bound = 1000.\nprotein_rxn.add_metabolites({\n cpd00035_c: -0.5,\n cpd00051_c: -0.25,\n cpd00132_c: -0.5,\n cpd00041_c: -0.5,\n cpd00084_c: -0.05,\n cpd00053_c: -0.25,\n cpd00023_c: -0.5,\n cpd00033_c: -0.5,\n cpd00119_c: -0.05,\n cpd00322_c: -0.5,\n cpd00107_c: -0.5,\n cpd00039_c: -0.5,\n cpd00060_c: -0.25,\n cpd00066_c: -0.5,\n cpd00129_c: -0.25,\n cpd00054_c: -0.5,\n cpd00161_c: -0.5,\n cpd00065_c: -0.05,\n cpd00069_c: -0.25,\n cpd00156_c: -0.5,\n cpd00002_c: -20.0,\n cpd00001_c: -1.0,\n cpd17041_c: 1.0,\n cpd00008_c: 20.0,\n cpd00009_c: 20.0\n})\n\n#--------------------------------------------------------------------------------#\n\n# Cell wall synthesis\n\ncpd02967_c = universal.metabolites.get_by_id('cpd02967_c') # N-Acetyl-beta-D-mannosaminyl-1,4-N-acetyl-D-glucosaminyldiphosphoundecaprenol\ncpd00402_c = universal.metabolites.get_by_id('cpd00402_c') # CDPglycerol\ncpd00046_c = universal.metabolites.get_by_id('cpd00046_c') # CMP\n\ncpd12894_c = cobra.Metabolite(\n 'cpd12894_c',\n formula='',\n name='Teichoic acid',\n compartment='cytosol')\n\nteichoicacid_rxn = cobra.Reaction('teichoicacid_rxn')\nteichoicacid_rxn.name = 'Teichoic acid biosynthesis'\nteichoicacid_rxn.lower_bound = 0.\nteichoicacid_rxn.upper_bound = 1000.\nteichoicacid_rxn.add_metabolites({\n cpd02967_c: -1.0,\n cpd00402_c: -1.0,\n cpd00046_c: 1.0,\n cpd12894_c: 1.0\n})\n\n# Peptidoglycan subunits\n # Undecaprenyl-diphospho-N-acetylmuramoyl--N-acetylglucosamine-L-ala-D-glu-meso-2-6-diaminopimeloyl-D-ala-D-ala (right)\ncpd03495_c = universal.metabolites.get_by_id('cpd03495_c')\n# Undecaprenyl-diphospho-N-acetylmuramoyl-(N-acetylglucosamine)-L-alanyl-gamma-D-glutamyl-L-lysyl-D-alanyl-D-alanine (left)\ncpd03491_c = universal.metabolites.get_by_id('cpd03491_c') \n\ncpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP\ncpd00001_c = universal.metabolites.get_by_id('cpd00001_c') # H2O\n\ncpd02229_c = universal.metabolites.get_by_id('cpd02229_c') # Bactoprenyl diphosphate\ncpd00117_c = universal.metabolites.get_by_id('cpd00117_c') # D-Alanine\ncpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP\ncpd00009_c = universal.metabolites.get_by_id('cpd00009_c') # Phosphate\n\ncpd16661_c = cobra.Metabolite(\n 'cpd16661_c',\n formula='',\n name='Peptidoglycan polymer',\n compartment='cytosol')\n\npeptidoglycan_rxn = cobra.Reaction('peptidoglycan_rxn')\npeptidoglycan_rxn.name = 'Peptidoglycan biosynthesis'\npeptidoglycan_rxn.lower_bound = 0.\npeptidoglycan_rxn.upper_bound = 1000.\npeptidoglycan_rxn.add_metabolites({\n cpd03491_c: -1.0,\n cpd03495_c: -1.0,\n cpd00002_c: -4.0,\n cpd00001_c: -1.0,\n cpd16661_c: 1.0,\n cpd02229_c: 1.0,\n cpd00117_c: 0.5, # D-Alanine\n cpd00008_c: 4.0, # ADP\n cpd00009_c: 4.0 # Phosphate\n})\n\ncellwall_c = cobra.Metabolite(\n 'cellwall_c',\n 
formula='',\n name='Cell Wall polymer',\n compartment='cytosol')\n\ncellwall_rxn = cobra.Reaction('cellwall_rxn')\ncellwall_rxn.name = 'Cell wall biosynthesis'\ncellwall_rxn.lower_bound = 0.\ncellwall_rxn.upper_bound = 1000.\ncellwall_rxn.add_metabolites({\n cpd16661_c: -1.5,\n cpd12894_c: -0.05,\n cellwall_c: 1.0\n})\n\n#--------------------------------------------------------------------------------#\n\n# Lipid pool\n\ncpd15543_c = universal.metabolites.get_by_id('cpd15543_c') # Phosphatidylglycerophosphate ditetradecanoyl\ncpd15545_c = universal.metabolites.get_by_id('cpd15545_c') # Phosphatidylglycerophosphate dihexadecanoyl\ncpd15540_c = universal.metabolites.get_by_id('cpd15540_c') # Phosphatidylglycerol dioctadecanoyl\ncpd15728_c = universal.metabolites.get_by_id('cpd15728_c') # Diglucosyl-1,2 dipalmitoylglycerol\ncpd15729_c = universal.metabolites.get_by_id('cpd15729_c') # Diglucosyl-1,2 dimyristoylglycerol\ncpd15737_c = universal.metabolites.get_by_id('cpd15737_c') # Monoglucosyl-1,2 dipalmitoylglycerol\ncpd15738_c = universal.metabolites.get_by_id('cpd15738_c') # Monoglucosyl-1,2 dimyristoylglycerol\n\ncpd11852_c = cobra.Metabolite(\n 'cpd11852_c',\n formula='',\n name='Lipid Pool',\n compartment='cytosol')\n\nlipid_rxn = cobra.Reaction('lipid_rxn')\nlipid_rxn.name = 'Lipid composition'\nlipid_rxn.lower_bound = 0.\nlipid_rxn.upper_bound = 1000.\nlipid_rxn.add_metabolites({\n cpd15543_c: -0.005,\n cpd15545_c: -0.005,\n cpd15540_c: -0.005,\n cpd15728_c: -0.005,\n cpd15729_c: -0.005,\n cpd15737_c: -0.005,\n cpd15738_c: -0.005,\n cpd11852_c: 1.0\n})\n\n#--------------------------------------------------------------------------------#\n\n# Ions, Vitamins, & Cofactors\n\n# Vitamins\ncpd00104_c = universal.metabolites.get_by_id('cpd00104_c') # Biotin MDM\ncpd00644_c = universal.metabolites.get_by_id('cpd00644_c') # Pantothenate MDM\ncpd00263_c = universal.metabolites.get_by_id('cpd00263_c') # Pyridoxine MDM\ncpd00393_c = universal.metabolites.get_by_id('cpd00393_c') # folate\ncpd00133_c = universal.metabolites.get_by_id('cpd00133_c') # nicotinamide\ncpd00443_c = universal.metabolites.get_by_id('cpd00443_c') # p-aminobenzoic acid\ncpd00220_c = universal.metabolites.get_by_id('cpd00220_c') # riboflavin \ncpd00305_c = universal.metabolites.get_by_id('cpd00305_c') # thiamin\n\n# Ions\ncpd00149_c = universal.metabolites.get_by_id('cpd00149_c') # Cobalt\ncpd00030_c = universal.metabolites.get_by_id('cpd00030_c') # Manganese\ncpd00254_c = universal.metabolites.get_by_id('cpd00254_c') # Magnesium\ncpd00971_c = universal.metabolites.get_by_id('cpd00971_c') # Sodium\ncpd00063_c = universal.metabolites.get_by_id('cpd00063_c') # Calcium\ncpd10515_c = universal.metabolites.get_by_id('cpd10515_c') # Iron\ncpd00205_c = universal.metabolites.get_by_id('cpd00205_c') # Potassium\ncpd00099_c = universal.metabolites.get_by_id('cpd00099_c') # Chloride\n\n# Cofactors\ncpd00022_c = universal.metabolites.get_by_id('cpd00022_c') # Acetyl-CoA\ncpd00010_c = universal.metabolites.get_by_id('cpd00010_c') # CoA\ncpd00015_c = universal.metabolites.get_by_id('cpd00015_c') # FAD\ncpd00003_c = universal.metabolites.get_by_id('cpd00003_c') # NAD\ncpd00004_c = universal.metabolites.get_by_id('cpd00004_c') # NADH\ncpd00006_c = universal.metabolites.get_by_id('cpd00006_c') # NADP\ncpd00005_c = universal.metabolites.get_by_id('cpd00005_c') # NADPH\n\n# Energy molecules\ncpd00002_c = universal.metabolites.get_by_id('cpd00002_c') # ATP\ncpd00008_c = universal.metabolites.get_by_id('cpd00008_c') # ADP\ncpd00009_c = 
universal.metabolites.get_by_id('cpd00009_c') # Phosphate\ncpd00012_c = universal.metabolites.get_by_id('cpd00012_c') # PPi\ncpd00038_c = universal.metabolites.get_by_id('cpd00038_c') # GTP\ncpd00031_c = universal.metabolites.get_by_id('cpd00031_c') # GDP\n\ncpd00274_c = universal.metabolites.get_by_id('cpd00274_c') # Citrulline\n\ncofactor_c = cobra.Metabolite(\n 'cofactor_c',\n formula='',\n name='Cofactor Pool',\n compartment='cytosol')\n\ncofactor_rxn = cobra.Reaction('cofactor_rxn')\ncofactor_rxn.name = 'Cofactor Pool'\ncofactor_rxn.lower_bound = 0.\ncofactor_rxn.upper_bound = 1000.\ncofactor_rxn.add_metabolites({\n cpd00104_c: -0.005, \n cpd00644_c: -0.005, \n cpd00263_c: -0.005, \n cpd00393_c: -0.005,\n cpd00133_c: -0.005,\n cpd00443_c: -0.005,\n cpd00220_c: -0.005,\n cpd00305_c: -0.005,\n cpd00149_c: -0.005, \n cpd00030_c: -0.005, \n cpd00254_c: -0.005, \n cpd00971_c: -0.005, \n cpd00063_c: -0.005, \n cpd10515_c: -0.005, \n cpd00205_c: -0.005,\n cpd00099_c: -0.005, \n cpd00022_c: -0.005,\n cpd00010_c: -0.0005,\n cpd00015_c: -0.0005,\n cpd00003_c: -0.005,\n cpd00004_c: -0.005,\n cpd00006_c: -0.005,\n cpd00005_c: -0.005,\n cpd00002_c: -0.005, \n cpd00008_c: -0.005, \n cpd00009_c: -0.5,\n cpd00012_c: -0.005, \n cpd00038_c: -0.005, \n cpd00031_c: -0.005, \n\n cofactor_c: 1.0\n})\n\n#--------------------------------------------------------------------------------#\n\n# Final Biomass\n\ncpd11416_c = cobra.Metabolite(\n 'cpd11416_c',\n formula='',\n name='Biomass',\n compartment='cytosol')\n\nbiomass_rxn = cobra.Reaction('biomass')\nbiomass_rxn.name = 'Gram-positive Biomass Reaction'\nbiomass_rxn.lower_bound = 0.\nbiomass_rxn.upper_bound = 1000.\nbiomass_rxn.add_metabolites({\n cpd17041_c: -0.4, # Protein\n cpd17043_c: -0.15, # RNA\n cpd17042_c: -0.05, # DNA\n cpd11852_c: -0.05, # Lipid\n cellwall_c: -0.2,\n cofactor_c: -0.2,\n cpd00001_c: -20.0,\n cpd00002_c: -20.0,\n cpd00008_c: 20.0,\n cpd00009_c: 20.0,\n cpd11416_c: 1.0 # Biomass\n})\n\ngrampos_biomass_components = [dna_rxn,rna_rxn, protein_rxn, teichoicacid_rxn, peptidoglycan_rxn, cellwall_rxn, lipid_rxn, cofactor_rxn, biomass_rxn]\n",
"_____no_output_____"
],
[
"# Add components to new model\nmodel.add_reactions(grampos_biomass_components)\nmodel.add_boundary(cpd11416_c, type='sink', reaction_id='EX_biomass', lb=0.0, ub=1000.0)\n\n# Set new objective\nmodel.objective = 'biomass'",
"_____no_output_____"
],
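With the biomass components now in the model, a common diagnostic (an addition here, not part of the original workflow) is to check that each precursor pool can actually be produced. The sketch below adds a temporary demand reaction for each pool metabolite inside a `with model:` block, so the changes are rolled back automatically.

```python
# Check that every biomass precursor pool can carry flux
precursor_ids = ['cpd17042_c', 'cpd17043_c', 'cpd17041_c',
                 'cellwall_c', 'cpd11852_c', 'cofactor_c']

for met_id in precursor_ids:
    with model:  # all modifications are reverted when the block exits
        met = model.metabolites.get_by_id(met_id)
        demand = model.add_boundary(met, type='demand')
        model.objective = demand
        flux = model.slim_optimize(error_value=0.0)
        print(met_id, 'max production flux:', round(flux, 4))
```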
[
"model",
"_____no_output_____"
],
[
"model.slim_optimize()",
"_____no_output_____"
],
[
"# Gapfill\nnew_model = fast_gapfill(model, universal=universal, objective='biomass')",
"Gapfilled 18 reactions and 6 metabolites\n"
],
[
"new_model",
"_____no_output_____"
],
[
"# Define minimal components and add to minimal media formulation\nfrom cobra.medium import minimal_medium\n\nov = new_model.slim_optimize() * 0.1\ncomponents = minimal_medium(new_model, ov) # pick out necessary cofactors\nessential = ['cpd00063_e','cpd00393_e','cpd00048_e','cpd00305_e','cpd00205_e','cpd00104_e','cpd00099_e',\n 'cpd00099_e','cpd00099_e','cpd00149_e','cpd00030_e','cpd10516_e','cpd00254_e','cpd00220_e',\n 'cpd00355_e','cpd00064_e','cpd00971_e','cpd00067_e']\n\n# Wegkamp et al. 2009. Applied Microbiology.\n# Lactobacillus minimal media\npmm5 = ['cpd00001_e','cpd00009_e','cpd00026_e','cpd00029_e','cpd00059_e','cpd00051_e','cpd00023_e',\n 'cpd00107_e','cpd00322_e','cpd00060_e','cpd00066_e','cpd00161_e','cpd00065_e','cpd00069_e',\n 'cpd00156_e','cpd00218_e','cpd02201_e','cpd00220_e','cpd00220_e','cpd04877_e','cpd28790_e',\n 'cpd00355_e']\nminimal = essential + pmm5\n\nnew_model = fast_gapfill(new_model, universal=universal, objective='biomass', media=minimal)",
"Gapfilled 23 reactions and 4 metabolites\n"
],
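To confirm the model actually grows on the defined medium after this gapfilling step, one option is to restrict uptake to those components via `model.medium` and re-optimize. This is only a sketch and assumes each medium metabolite has an exchange reaction named `EX_<metabolite id>`, which may not hold for every ID in the draft model.

```python
# Restrict uptake to the defined minimal medium and test for growth
exchange_ids = set(rxn.id for rxn in new_model.exchanges)
medium = {'EX_' + cpd_id: 1000.0 for cpd_id in set(minimal) if 'EX_' + cpd_id in exchange_ids}

with new_model:  # bounds changes are reverted afterwards
    new_model.medium = medium
    print('Growth on defined medium:', round(new_model.slim_optimize(), 4))
```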
[
"new_model",
"_____no_output_____"
],
[
"new_model = all_orphan_prune(new_model)",
"_____no_output_____"
],
[
"new_model",
"_____no_output_____"
],
[
"print(len(new_model.genes))",
"488\n"
],
[
"new_model.slim_optimize()",
"_____no_output_____"
],
[
"new_model.name = 'Lactobactillus reuteri mlc3'\nnew_model.id = 'iLr488'",
"_____no_output_____"
],
[
"cobra.io.save_json_model(new_model, '/home/mjenior/Desktop/Lawley_MixB/draft_reconstructions/lact_reut.curated.json')",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adc355774bba553146fd29ec28d2bdccc339d2c
| 229,148 |
ipynb
|
Jupyter Notebook
|
5-3 Support vector machines(Application01).ipynb
|
woaij100/Classic_machine_learning
|
3bb29f5b7449f11270014184d999171a1c7f5e71
|
[
"Apache-2.0"
] | 77 |
2018-12-14T02:09:06.000Z
|
2020-03-07T03:47:22.000Z
|
5-3 Support vector machines(Application01).ipynb
|
woaij100/Classic_machine_learning
|
3bb29f5b7449f11270014184d999171a1c7f5e71
|
[
"Apache-2.0"
] | null | null | null |
5-3 Support vector machines(Application01).ipynb
|
woaij100/Classic_machine_learning
|
3bb29f5b7449f11270014184d999171a1c7f5e71
|
[
"Apache-2.0"
] | 10 |
2019-03-05T09:50:55.000Z
|
2019-08-07T01:37:45.000Z
| 161.371831 | 158,104 | 0.877909 |
[
[
[
"# SVM\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport sympy as sym\nimport pandas as pd\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\n%matplotlib inline\nnp.random.seed(1)",
"_____no_output_____"
]
],
[
[
"## Simple Example Application\n\n对于简单的数据样本例子(也就是说可以进行线性划分,且不包含噪声点)\n\n**算法:**\n\n输入:线性可分训练集$T={(x_1,y_1),(x_2,y_2),...,(x_N,y_N)}$,其中$x_i \\in \\textit{X}=\\textit{R},y_i \\in \\textit{Y}={+1,-1},i=1,2...,N$\n\n输出:分离超平面和分类决策函数\n\n(1) 构造并求解约束条件最优化问题\n\n$\\underset{\\alpha}{min}$ $\\frac{1}{2}\\sum_{i=1}^{N}\\sum_{j=1}^{N}\\alpha_i \\alpha_j y_i y_j <x_i \\cdot x_j>-\\sum_{i=1}^{N}\\alpha_i$\n\ns.t $\\sum_{i=1}^{N}\\alpha_i y_i=0$\n\n$\\alpha_i \\geq 0,i=1,2,...,N$\n\n求得最优$\\alpha^{*}=(\\alpha_1^{*},\\alpha_2^{*},...,\\alpha_n^{*})$\n\n\n其中正分量$\\alpha_j^{*}>0$就为支持向量",
"_____no_output_____"
],
[
"(2) 计算\n\n$w^{*} = \\sum_{i=1}^{N}\\alpha_i^{*}y_ix_i$\n\n选择$\\alpha^{*}$的一个正分量$\\alpha_j^{*}>0$,计算\n\n$b^{*}=y_j-\\sum_{i=1}^{N}\\alpha_i^{*}y_i<x_i \\cdot x_j>$\n\n",
"_____no_output_____"
],
[
"(3) 求得分离超平面\n\n$w^{*}\\cdot x + b^{*}=0$\n\n分类决策函数:\n\n$f(x)=sign(w^{*}\\cdot x + b^{*})$\n\n这里的sign表示:值大于0的为1,值小于0的为-1.",
"_____no_output_____"
]
],
[
[
"def loadSimpleDataSet():\n \"\"\"\n 从文本加载数据集\n \n 返回:\n 数据集和标签集\n \"\"\"\n train_x = np.array([[3,3],[4,3],[1,1]]).T\n train_y = np.array([[1,1,-1]]).T\n return train_x,train_y",
"_____no_output_____"
],
[
"train_x,train_y = loadSimpleDataSet()\nprint(\"train_x shape is : \",train_x.shape)\nprint(\"train_y shape is : \",train_y.shape)",
"train_x shape is : (2, 3)\ntrain_y shape is : (3, 1)\n"
],
[
"plt.scatter(train_x[0,:],train_x[1,:],c=np.squeeze(train_y))",
"_____no_output_____"
]
],
[
[
"为了方便计算$\\sum_{i=1}^{N}\\sum_{j=1}^{N}\\alpha_i \\alpha_j y_i y_j <x_i \\cdot x_j>$\n\n我们需要先求出train_x、train_y、alphas的内积然后逐个元素相乘然后累加.",
"_____no_output_____"
],
[
"计算train_x的内积",
"_____no_output_____"
]
],
[
[
"Inner_train_x = np.dot(train_x.T,train_x)\nprint(\"Train_x is:\\n\",train_x)\nprint(\"Inner train x is:\\n\",Inner_train_x)",
"Train_x is:\n [[3 4 1]\n [3 3 1]]\nInner train x is:\n [[18 21 6]\n [21 25 7]\n [ 6 7 2]]\n"
]
],
[
[
"计算train_y的内积",
"_____no_output_____"
]
],
[
[
"Inner_train_y = np.dot(train_y,train_y.T)\nprint(\"Train y is:\\n\",train_y)\nprint(\"Inner train y is:\\n\",Inner_train_y)",
"Train y is:\n [[ 1]\n [ 1]\n [-1]]\nInner train y is:\n [[ 1 1 -1]\n [ 1 1 -1]\n [-1 -1 1]]\n"
]
],
[
[
"计算alphas(拉格朗日乘子)的内积,但是要注意,我们在这里固定拉格朗日乘子中的某两个alpha之外的其他alpha,因为根据理论知识,我们需要固定两个alpha之外的其他alphas,然后不断的再一堆alphas中去迭代更新这两个alpha.由于这个例子过于简单,且只有3个样本点(事实上$\\alpha_1,\\alpha_3$就是支持向量).\n\n\n将约束条件带入其中:\n\n$\\sum_{i=1}^3\\alpha_i y_i=\\alpha_1y_1+\\alpha_2y_2+\\alpha_3y_3 =0 \\Rightarrow $\n--\n$\\alpha_3 = -(\\alpha_1y_1+\\alpha_2y_2)/y_3 $\n--\n",
"_____no_output_____"
]
],
[
[
"alphas_sym = sym.symbols('alpha1:4')\nalphas = np.array([alphas_sym]).T\nalphas[-1]= -np.sum(alphas[:-1,:]*train_y[:-1,:]) / train_y[-1,:]\nInner_alphas = np.dot(alphas,alphas.T)\nprint(\"alphas is: \\n\",alphas)\nprint(\"Inner alphas is:\\n\",Inner_alphas)",
"alphas is: \n [[alpha1]\n [alpha2]\n [1.0*alpha1 + 1.0*alpha2]]\nInner alphas is:\n [[alpha1**2 alpha1*alpha2 alpha1*(1.0*alpha1 + 1.0*alpha2)]\n [alpha1*alpha2 alpha2**2 alpha2*(1.0*alpha1 + 1.0*alpha2)]\n [alpha1*(1.0*alpha1 + 1.0*alpha2) alpha2*(1.0*alpha1 + 1.0*alpha2)\n (1.0*alpha1 + 1.0*alpha2)**2]]\n"
]
],
[
[
"现在求最优的$\\alpha^{*}=(\\alpha_1^{*},\\alpha_2^{*},...,\\alpha_n^{*})$\n\n$\\underset{\\alpha}{min}$ $\\frac{1}{2}\\sum_{i=1}^{N}\\sum_{j=1}^{N}\\alpha_i \\alpha_j y_i y_j <x_i \\cdot x_j>-\\sum_{i=1}^{N}\\alpha_i$\n\n**注意:**\n\n这里需要使用sympy库,详情请见[柚子皮-Sympy符号计算库](https://blog.csdn.net/pipisorry/article/details/39123247)\n\n或者[Sympy](https://www.sympy.org/en/index.html)",
"_____no_output_____"
]
],
[
[
"def compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y):\n \"\"\"\n Parameters:\n alphas: initialization lagrange multiplier,shape is (n,1).\n n:number of example.\n Inner_alphas: Inner product of alphas.\n Inner_train_x: Inner product of train x set.\n Inner_train_y: Inner product of train y set.\n \n simplify : simplify compute result of dual function.\n \n return:\n s_alpha: result of dual function\n \"\"\"\n s_alpha = sym.simplify(1/2*np.sum(Inner_alphas * Inner_train_x*Inner_train_y) - (np.sum(alphas)))\n return s_alpha\n",
"_____no_output_____"
],
[
"s_alpha = compute_dual_function(alphas,Inner_alphas,Inner_train_x,Inner_train_y)\nprint('s_alpha is:\\n ',s_alpha)",
"s_alpha is:\n 4.0*alpha1**2 + 10.0*alpha1*alpha2 - 2.0*alpha1 + 6.5*alpha2**2 - 2.0*alpha2\n"
]
],
[
[
"现在对每一个alpha求偏导令其等于0.",
"_____no_output_____"
]
],
[
[
"def Derivative_alphas(alphas,s_alpha):\n \"\"\"\n Parameters:\n alphas: lagrange multiplier.\n s_alpha: dual function\n \n return:\n bool value.\n True: Meet all constraints,means,all lagrange multiplier >0\n False:Does not satisfy all constraints,means some lagrange multiplier <0.\n \"\"\"\n cache_derivative_alpha = []\n for alpha in alphas.squeeze()[:-1]: # remove the last element.\n derivative = s_alpha.diff(alpha) # diff: derivative\n cache_derivative_alpha.append(derivative)\n \n derivative_alpha = sym.solve(cache_derivative_alpha,set=True) # calculate alphas.\n print('derivative_alpha is: ',derivative_alpha)\n \n # check alpha > 0\n check_alpha_np = np.array(list(derivative_alpha[1])) > 0\n \n return check_alpha_np.all()",
"_____no_output_____"
],
[
"check_alpha = Derivative_alphas(alphas,s_alpha)\nprint(\"Constraint lagrange multiplier is: \",check_alpha)",
"derivative_alpha is: ([alpha1, alpha2], {(1.50000000000000, -1.00000000000000)})\nConstraint lagrange multiplier is: False\n"
]
],
[
[
"可以看出如果是对于$\\alpha_2<0$,不满足$\\alpha_2 \\geqslant 0 $所以我们不能使用极值\n\n-------------",
"_____no_output_____"
],
[
"由于在求偏导的情况下不满足拉格朗日乘子约束条件,所以我们将固定某一个$\\alpha_i$,将其他的$\\alpha$令成0,使偏导等于0求出当前$\\alpha_i$,然后在带入到对偶函数中求出最后的结果.比较所有的结果挑选出结果最小的值所对应的$\\alpha_i$,在从中选出$\\alpha_i>0$的去求我们最开始固定的其他alphas.\n\n\n**算法:**\n\n输入: 拉格朗日乘子数组,数组中不包括最开始固定的其他alphas\n输出: 最优的拉格朗日乘子,也就是支持向量\n\n(1) 将输入的拉格朗日数组扩增一行或者一列并初始化为0\n - alphas_zeros = np.zeros((alphas.shape[0],1))[:-1]\n - alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros]\n(2) 将扩增后的数组进行\"mask\"掩模处理,目的是为了将一个$\\alpha$保留,其他的$\\alpha$全部为0.\n - mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array.\n - mask_alpha.mask[i] = True # masked alpha\n - 在sysmpy中使用掩模处理会报出一个警告:将掩模值处理为None,其实问题不大,应该不会改变对偶方程中的alpha对象\n\n(3) 使用掩模后的数组放入对偶函数中求偏导$\\alpha_i$,并令其等于0求出$\\alpha_i$\n \n(4) 将求出的$\\alpha_i$和其他都等于0的alphas带入到对偶函数中求出值\n\n(5) 比较所有的对偶函数中的值,选取最小值所对应的alpha组.计算最开始固定值的alphas.\n",
"_____no_output_____"
]
],
[
[
"\ndef choose_best_alphas(alphas,s_alpha):\n \"\"\"\n Parameters:\n alphas: Lagrange multiplier.\n s_alpha: dual function\n \n return:\n best_vector: best support vector machine.\n \"\"\"\n # add col in alphas,and initialize value equal 0. about 2 lines.\n alphas_zeros = np.zeros((alphas.shape[0],1))[:-1]\n alphas_add_zeros = np.c_[alphas[:-1],alphas_zeros]\n \n # cache some parameters.\n cache_alphas_add = np.zeros((alphas.shape[0],1))[:-1] # cache derivative alphas.\n cache_alphas_compute_result = np.zeros((alphas.shape[0],1))[:-1] # cache value in dual function result\n cache_alphas_to_compute = alphas_add_zeros.copy() # get minmux dual function value,cache this values.\n \n \n for i in range(alphas_add_zeros.shape[0]):\n mask_alpha = np.ma.array(alphas_add_zeros, mask=False) # create mask array.\n mask_alpha.mask[i] = True # masked alpha\n value = sym.solve(s_alpha.subs(mask_alpha).diff())[0] # calculate alpha_i\n \n cache_alphas_add[i] = value\n cache_alphas_to_compute[i][1] = value\n cache_alphas_compute_result[i][0] = s_alpha.subs(cache_alphas_to_compute) # calculate finally dual function result.\n cache_alphas_to_compute[i][1] = 0 # make sure other alphas equal 0.\n \n \n min_alpha_value_index = cache_alphas_compute_result.argmin()\n \n best_vector =np.array([cache_alphas_add[min_alpha_value_index]] + [- cache_alphas_add[min_alpha_value_index] / train_y[-1]]) \n \n \n return [min_alpha_value_index]+[2],best_vector\n\n\n ",
"_____no_output_____"
],
[
"min_alpha_value_index,best_vector = choose_best_alphas(alphas,s_alpha)\nprint(min_alpha_value_index)\nprint('support vector machine is:',alphas[min_alpha_value_index])",
"[0, 2]\nsupport vector machine is: [[alpha1]\n [1.0*alpha1 + 1.0*alpha2]]\n"
]
],
[
[
"$w^{*} = \\sum_{i=1}^{N}\\alpha_i^{*}y_ix_i$\n",
"_____no_output_____"
]
],
[
[
"w = np.sum(np.multiply(best_vector , train_y[min_alpha_value_index].T) * train_x[:,min_alpha_value_index],axis=1)\nprint(\"W is: \",w)",
"W is: [0.5 0.5]\n"
]
],
[
[
"选择$\\alpha^{*}$的一个正分量$\\alpha_j^{*}>0$,计算\n\n$b^{*}=y_j-\\sum_{i=1}^{N}\\alpha_i^{*}y_i<x_i \\cdot x_j>$\n\n这里我选alpha1",
"_____no_output_____"
]
],
[
[
"b = train_y[0]-np.sum(best_vector.T * np.dot(train_x[:,min_alpha_value_index].T,train_x[:,min_alpha_value_index])[0] \n * train_y[min_alpha_value_index].T)\nprint(\"b is: \",b)",
"b is: [-2.]\n"
]
],
[
[
"所以超平面为:\n\n$f(x)=sign[wx+b]$",
"_____no_output_____"
],
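As a quick numeric check of the hyperplane just derived (using the `w`, `b`, `train_x` and `train_y` computed above), the decision function can be applied back to the three training points; the two positive points should map to +1 and the negative point to -1.

```python
# Verify the separating hyperplane on the original training points
decision = np.sign(np.dot(w, train_x) + b)
print('predicted labels:', decision)
print('actual labels:   ', train_y.ravel())
```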
[
"# SMO\n\n这里实现简单版本的smo算法,这里所谓的简单版本指的是速度没有SVC快,参数自动选择没有SCV好等.但是通过调节参数一样可以达到和SVC差不多的结果\n\n### 算法:\n\n#### 1.SMO选择第一个变量的过程为选择一个违反KKT条件最严重的样本点为$\\alpha_1$,即违反以下KKT条件:\n\n$\\alpha_i=0\\Leftrightarrow y_ig(x_i)\\geqslant1$\n\n$0<\\alpha_i<C\\Leftrightarrow y_ig(x_i)=1$\n\n$\\alpha_i=C \\Leftrightarrow y_ig(x_i)\\leqslant1$\n\n其中:\n\n$g(x_i)=\\sum_{j=1}^{N}\\alpha_iy_iK(x_i,x_j)+b$\n\n**注意:**\n- 初始状态下$\\alpha_i$定义为0,且和样本数量一致.\n- 该检验是在$\\varepsilon$范围内的\n- 在检验过程中我们先遍历所有满足$0<\\alpha_i<C$的样本点,即在间隔边界上的支持向量点,找寻违反KKT最严重的样本点\n- 如果没有满足$0<\\alpha_i<C$则遍历所有的样本点,找违反KKT最严重的样本点\n- 这里的*违反KKT最严重的样本点*可以选择为$y_ig(x_i)$最小的点作为$\\alpha_1$\n\n#### 2.SMO选择第二个变量的过程为希望$\\alpha_2$有足够的变化\n\n因为$\\alpha_2^{new}$是依赖于$|E_1-E_2|$的,并且使得|E_1-E_2|最大,为了加快计算,一种简单的做法是:\n\n如果$E_1$是正的,那么选择最小的$E_i$作为$E_2$,如果$E_1$是负的,那么选择最大的$E_i$作为$E_2$,为了节省计算时间,将$E_i$保存在一个列表中\n\n**注意:**\n- 如果通过以上方法找到的$\\alpha_2$不能使得目标函数有足够的下降,那么采用以下启发式方法继续选择$\\alpha_2$,遍历在间隔边上的支持向量的点依次将其对应的变量作为$\\alpha_2$试用,直到目标函数有足够的下降,若还是找不到使得目标函数有足够下降,则抛弃第一个$\\alpha_1$,在重新选择另一个$\\alpha_1$\n\n- 这个简单版本的SMO算法并没有处理这种特殊情况\n\n\n\n\n",
"_____no_output_____"
],
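The second-variable heuristic described above can be written as a small standalone helper. This is only a sketch with a hypothetical error cache `E_example`; the SVM class later in the notebook implements the same idea inside `_init_alpha`.

```python
# Heuristic choice of the second multiplier, given the first one's error E[i1]
def select_second_alpha(E, i1):
    E1 = E[i1]
    # maximizing |E1 - E2| reduces to taking the opposite extreme of E1
    return int(np.argmin(E)) if E1 > 0 else int(np.argmax(E))

E_example = [0.3, -1.2, 0.8, -0.1]           # hypothetical cached errors
print(select_second_alpha(E_example, i1=2))  # E1 = 0.8 > 0, so the index of min(E) is chosen
```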
[
"#### 3.计算$\\alpha_1^{new},\\alpha_2^{new}$\n\n计算$\\alpha_1^{new},\\alpha_2^{new}$,是为了计算$b_i,E_i$做准备.\n\n3.1 计算$\\alpha_2$的边界:\n\n- if $y_1 \\neq y_2$:$L=max(0,\\alpha_2^{old}-\\alpha_1^{old})$,$H=min(C,C+\\alpha_2^{old}-\\alpha_1^{old})$\n\n- if $y_1 = y_2$:$L=max(0,\\alpha_2^{old}+\\alpha_1^{old}-C)$,$H=min(C,C+\\alpha_2^{old}+\\alpha_1^{old})$\n\n3.2 计算$\\alpha_2^{new,unc} = \\alpha_2^{old}+\\frac{y_2(E_1-E_2)}{\\eta}$\n\n其中:\n\n$\\eta = K_{11}+K_{22}-2K_{12}$,这里的$K_n$值得是核函数,可以是高斯核,多项式核等.\n\n3.3 修剪$\\alpha_2$\n\n$\\alpha_2^{new}=\\left\\{\\begin{matrix}\nH, &\\alpha_2^{new,unc}>H \\\\ \n \\alpha_2^{new,unc},& L\\leqslant \\alpha_2^{new,unc}\\leqslant H \\\\ \n L,& \\alpha_2^{new,unc}<L\n\\end{matrix}\\right.$",
"_____no_output_____"
],
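The bounds and clipping rule in step 3 can be condensed into one helper. A sketch mirroring the formulas above (the numbers in the example call are made up):

```python
# Compute [L, H] for alpha2 and clip the unconstrained update
def clip_alpha2(alpha2_unc, alpha1_old, alpha2_old, y1, y2, C):
    if y1 != y2:
        L, H = max(0.0, alpha2_old - alpha1_old), min(C, C + alpha2_old - alpha1_old)
    else:
        L, H = max(0.0, alpha2_old + alpha1_old - C), min(C, alpha2_old + alpha1_old)
    return min(max(alpha2_unc, L), H)

print(clip_alpha2(1.7, alpha1_old=0.2, alpha2_old=0.5, y1=1, y2=-1, C=1.0))  # clipped to H = 1.0
```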
[
"3.3 计算$\\alpha_1^{new}$\n\n$\\alpha_1^{new}=\\alpha_1^{old}+y_1y_2(\\alpha_2^{old}-\\alpha_2^{new})$",
"_____no_output_____"
],
[
"#### 4.计算阈值b和差值$E_i$\n\n$b_1^{new}=-E_1-y_1K_{11}(\\alpha_1^{new}-\\alpha_1^{old})-y_2K_{21}(\\alpha_2^{new}-\\alpha_2^{old})+b^{old}$\n\n$b_2^{new}=-E_2-y_1K_{12}(\\alpha_1^{new}-\\alpha_1^{old})-y_2K_{22}(\\alpha_2^{new}-\\alpha_2^{old})+b^{old}$",
"_____no_output_____"
],
[
"如果$\\alpha_1^{new},\\alpha_2^{new}$,同时满足条件$0<\\alpha_i^{new}<C,i=1,2$,\n\n那么$b_1^{new}=b_2^{new}=b^{new}$.\n\n如果$\\alpha_1^{new},\\alpha_2^{new}$是0或者C,那么$b_1^{new},b_2^{new}$之间的数\n都符合KKT条件阈值,此时取中点为$b^{new}$\n\n$E_i^{new}=(\\sum_sy_j\\alpha_jK(x_i,x_j))+b^{new}-y_i$\n\n其中s是所有支持向量$x_j$的集合.",
"_____no_output_____"
],
[
"#### 5. 更新参数\n\n更新$\\alpha_i,E_i,b_i$",
"_____no_output_____"
],
[
"#### 注意:\n\n在训练完毕后,绝大部分的$\\alpha_i$的分量都为0,只有极少数的分量不为0,那么那些不为0的分量就是支持向量",
"_____no_output_____"
],
[
"### SMO简单例子\n\n加载数据,来自于scikit中的的鸢尾花数据,其每次请求是变化的",
"_____no_output_____"
]
],
[
[
"# data\ndef create_data():\n iris = load_iris()\n df = pd.DataFrame(iris.data, columns=iris.feature_names)\n df['label'] = iris.target\n df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']\n data = np.array(df.iloc[:100, [0, 1, -1]])\n for i in range(len(data)):\n if data[i,-1] == 0:\n data[i,-1] = -1\n \n return data[:,:2], data[:,-1]",
"_____no_output_____"
],
[
"X, y = create_data()\n\n# 划分训练样本和测试样本\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)",
"_____no_output_____"
],
[
"plt.scatter(X[:,0],X[:,1],c=y)",
"_____no_output_____"
]
],
[
[
"### 开始搭建SMO算法代码",
"_____no_output_____"
]
],
[
[
"class SVM:\n def __init__(self,max_iter = 100,kernel = 'linear',C=1.,is_print=False,sigma=1):\n \"\"\"\n Parameters:\n max_iter:最大迭代数\n kernel:核函数,这里只定义了\"线性\"和\"高斯\"\n sigma:高斯核函数的参数\n C:惩罚项,松弛变量\n is_print:是否打印\n \n \"\"\"\n \n self.max_iter = max_iter\n self.kernel = kernel\n self.C = C # 松弛变量C\n self.is_print = is_print\n self.sigma = sigma\n def init_args(self,features,labels):\n \"\"\"\n self.m:样本数量\n self.n:特征数\n \"\"\"\n self.m,self.n = features.shape\n self.X = features\n self.Y = labels\n self.b = 0.\n \n # 将E_i 保存在一个列表中\n self.alpha = np.zeros(self.m) + 0.0001\n self.E = [self._E(i) for i in range(self.m)]\n\n def _g(self,i):\n \"\"\"\n 预测值g(x_i)\n \n \"\"\"\n g_x = np.sum(self.alpha*self.Y*self._kernel(self.X[i],self.X)) + self.b\n return g_x\n \n \n def _E(self,i):\n \"\"\"\n E(x) 为g(x) 对输入x的预测值和y的差值\n \"\"\"\n g_x = self._g(i) - self.Y[i]\n return g_x\n \n \n def _kernel(self,x1,x2):\n \"\"\"\n 计算kernel\n \"\"\"\n if self.kernel == \"linear\":\n return np.sum(np.multiply(x1,x2),axis=1)\n if self.kernel == \"Gaussion\":\n return np.sum(np.exp(-((x1-x2)**2)/(2*self.sigma)),axis=1)\n \n def _KKT(self,i):\n \"\"\"\n 判断KKT\n \"\"\"\n y_g = np.round(np.float64(np.multiply(self._g(i),self.Y[i]))) # 存在精度问题也就是说在epsilon范围内,所以这里使用round\n\n if self.alpha[i] == 0:\n\n return y_g >= 1,y_g\n \n elif 0<self.alpha[i]<self.C:\n\n return y_g == 1,y_g\n \n elif self.alpha[i] == self.C:\n\n return y_g <=1,y_g\n \n else:\n return ValueError\n \n def _init_alpha(self):\n \"\"\"\n 外层循环首先遍历所有满足0<a<C的样本点,检验是否满足KKT\n 0<a<C的样本点为间隔边界上支持向量点\n \"\"\"\n index_array = np.where(np.logical_and(self.alpha>0,self.alpha<self.C))[0] # 因为这里where的特殊性,所以alpha必须是(m,)\n\n if len(index_array) !=0:\n cache_list = []\n for i in index_array:\n bool_,y_g = self._KKT(i)\n if not bool_:\n cache_list.append((y_g,i))\n \n # 如果没有则遍历整个样本\n else:\n cache_list = []\n for i in range(self.m):\n bool_,y_g = self._KKT(i)\n if not bool_:\n cache_list.append((y_g,i))\n \n #获取违反KKT最严重的样本点,也就是g(x_i)*y_i 最小的\n min_i = sorted(cache_list,key=lambda x:x[0])[0][1]\n \n # 选择第二个alpha2\n E1 = self.E[min_i]\n\n if E1 > 0:\n j = np.argmin(self.E)\n else:\n j = np.argmax(self.E)\n\n return min_i,j\n\n def _prune(self,alpha,L,H):\n \"\"\"\n 修剪alpha\n \"\"\"\n if alpha > H:\n return H\n elif L<=alpha<=H:\n return alpha\n elif alpha < L:\n return L\n else:\n return ValueError\n \n \n def fit(self,features, labels):\n self.init_args(features, labels)\n for t in range(self.max_iter):\n # 开始寻找alpha1,和alpha2\n i1,i2 = self._init_alpha()\n \n # 计算边界\n if self.Y[i1] == self.Y[i2]: # 同号\n L = max(0,self.alpha[i2]+self.alpha[i1]-self.C)\n H = min(self.C,self.alpha[i2]+self.alpha[i1])\n else:\n L = max(0,self.alpha[i2]-self.alpha[i1])\n H = min(self.C,self.C+self.alpha[i2]-self.alpha[i1])\n\n\n # 计算阈值b_i 和差值E_i\n E1 = self.E[i1]\n E2 = self.E[i2]\n\n eta = self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1]) + \\\n self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2]) - \\\n 2 * self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])\n\n if eta <=0:\n continue\n\n alpha2_new_nuc = self.alpha[i2] + (self.Y[i2] * (E1-E2) /eta)\n # 修剪 alpha2_new_nuc\n alpha2_new = self._prune(alpha2_new_nuc,L,H)\n\n alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (self.alpha[i2]-alpha2_new)\n\n\n # 计算b_i\n b1_new = -E1-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i1])*(alpha1_new - self.alpha[i1])\\\n - self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i1])*(alpha2_new - self.alpha[i2]) + self.b\n b2_new = 
-E2-self.Y[i1]*self._kernel(self.X[np.newaxis,i1],self.X[np.newaxis,i2])*(alpha1_new - self.alpha[i1])\\\n - self.Y[i2] * self._kernel(self.X[np.newaxis,i2],self.X[np.newaxis,i2])*(alpha2_new - self.alpha[i2]) + self.b\n\n\n if 0 < alpha1_new < self.C:\n b_new = b1_new\n elif 0 < alpha2_new < self.C:\n b_new = b2_new\n else:\n # 选择中点\n b_new = (b1_new + b2_new) / 2\n\n\n\n # 更新参数\n self.alpha[i1] = alpha1_new\n self.alpha[i2] = alpha2_new\n self.b = b_new\n \n self.E[i1] = self._E(i1)\n self.E[i2] = self._E(i2)\n \n if self.is_print:\n print(\"Train Done!\")\n \n \n def predict(self,data):\n \n predict_y = np.sum(self.alpha*self.Y*self._kernel(data,self.X)) + self.b\n return np.sign(predict_y)[0]\n \n def score(self,test_X,test_Y):\n m,n = test_X.shape\n count = 0\n for i in range(m):\n predict_i = self.predict(test_X[i])\n if predict_i == np.float(test_Y[i]):\n count +=1\n return count / m ",
"_____no_output_____"
]
],
[
[
"由于鸢尾花数据每次请求都会变化,我们在这里取正确率的均值与SVC进行对比",
"_____no_output_____"
]
],
[
[
"count = 0\nfailed2 = []\nfor i in range(20):\n X, y = create_data()\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)\n svm = SVM(max_iter=200,C=2,kernel='linear')\n svm.fit(X_train,y_train)\n test_accourate = svm.score(X_test,y_test)\n train_accourate = svm.score(X_train,y_train)\n \n if test_accourate < 0.8:\n failed2.append((X_train, X_test, y_train, y_test)) # 储存正确率过低的样本集\n print(\"Test accourate:\",test_accourate)\n print(\"Train accourate:\",train_accourate)\n print('--------------------------')\n count += test_accourate\nprint(\"Test average accourate is: \",count/20)",
"Test accourate: 0.88\nTrain accourate: 0.9466666666666667\n--------------------------\nTest accourate: 0.92\nTrain accourate: 0.9733333333333334\n--------------------------\nTest accourate: 0.92\nTrain accourate: 0.84\n--------------------------\nTest accourate: 0.84\nTrain accourate: 0.8266666666666667\n--------------------------\nTest accourate: 0.8\nTrain accourate: 0.7866666666666666\n--------------------------\nTest accourate: 0.96\nTrain accourate: 1.0\n--------------------------\nTest accourate: 1.0\nTrain accourate: 1.0\n--------------------------\nTest accourate: 1.0\nTrain accourate: 0.96\n--------------------------\nTest accourate: 0.84\nTrain accourate: 0.9066666666666666\n--------------------------\nTest accourate: 1.0\nTrain accourate: 0.9333333333333333\n--------------------------\nTest accourate: 0.92\nTrain accourate: 0.8\n--------------------------\nTest accourate: 0.64\nTrain accourate: 0.88\n--------------------------\nTest accourate: 0.96\nTrain accourate: 1.0\n--------------------------\nTest accourate: 0.48\nTrain accourate: 0.6933333333333334\n--------------------------\nTest accourate: 0.76\nTrain accourate: 0.8\n--------------------------\nTest accourate: 1.0\nTrain accourate: 0.9733333333333334\n--------------------------\nTest accourate: 0.48\nTrain accourate: 0.5066666666666667\n--------------------------\nTest accourate: 0.96\nTrain accourate: 0.9333333333333333\n--------------------------\nTest accourate: 0.92\nTrain accourate: 0.96\n--------------------------\nTest accourate: 1.0\nTrain accourate: 1.0\n--------------------------\nTest average accourate is: 0.8640000000000001\n"
]
],
[
[
"可以发现,有些数据的正确率较高,有些正确率非常的底,我们将低正确率的样本保存,取出进行试验",
"_____no_output_____"
]
],
[
[
"failed2X_train, failed2X_test, failed2y_train, failed2y_test= failed2[2]",
"_____no_output_____"
]
],
[
[
"我们可以看出,在更改C后,正确率依然是客观的,这说明简单版本的SMO算法是可行的.只是我们在测算\n平均正确率的时候,C的值没有改变,那么可能有些样本的C值不合适.",
"_____no_output_____"
]
],
[
[
"svm = SVM(max_iter=200,C=5,kernel='linear')\nsvm.fit(failed2X_train,failed2y_train)\naccourate = svm.score(failed2X_test,failed2y_test)\naccourate",
"_____no_output_____"
]
],
[
[
"使用Scikit-SVC测试",
"_____no_output_____"
],
[
"### Scikit-SVC\n基于scikit-learn的[SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.decision_function)\n\n例子1:",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import SVC\ncount = 0\nfor i in range(10):\n X, y = create_data()\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)\n clf = SVC(kernel=\"linear\",C=2)\n clf.fit(X_train, y_train)\n accourate = clf.score(X_test, y_test)\n print(\"accourate\",accourate)\n count += accourate\nprint(\"average accourate is: \",count/10)",
"accourate 0.96\naccourate 0.96\naccourate 1.0\naccourate 1.0\naccourate 1.0\naccourate 0.96\naccourate 0.96\naccourate 1.0\naccourate 0.96\naccourate 0.96\naverage accourate is: 0.9760000000000002\n"
]
],
[
[
"当然由于是简单版本的SMO算法,所以平均正确率肯定没有SVC高,但是我们可以调节C和kernel来使得正确率提高",
"_____no_output_____"
],
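One way to act on that observation is to search over C and the kernel instead of fixing them. A sketch using scikit-learn's GridSearchCV (not part of the original notebook); it reuses the `X_train`/`y_train` split from the cells above.

```python
from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.5, 1, 2, 5, 10], 'kernel': ['linear', 'rbf']}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_, search.best_score_)
print('test accuracy:', search.score(X_test, y_test))
```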
[
"## Multilabel classification",
"_____no_output_____"
],
[
"多标签:一个实例可以有多个标签比如一个电影可以是动作,也可以是爱情.\n\n多类分类(multi-class classification):有多个类别需要分类,但一个样本只属于一个类别\n\n多标签分类(multi-label classificaton):每个样本有多个标签\n\n对于多类分类,最后一层使用softmax函数进行预测,训练阶段使用categorical_crossentropy作为损失函数 \n\n对于多标签分类,最后一层使用sigmoid函数进行预测,训练阶段使用binary_crossentropy作为损失函数\n\nThis example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process:\n\n- pick the number of labels: n ~ Poisson(n_labels)\n- n times, choose a class c: c ~ Multinomial(theta)\n- pick the document length: k ~ Poisson(length)\n- k times, choose a word: w ~ Multinomial(theta_c)\n\nIn the above process, rejection sampling is used to make sure that n is more than 2, and that the document length is never zero. Likewise, we reject classes which have already been chosen. The documents that are assigned to both classes are plotted surrounded by two colored circles.\n\nThe classification is performed by projecting to the first two principal components found by [PCA](http://www.cnblogs.com/jerrylead/archive/2011/04/18/2020209.html) and [CCA](https://files-cdn.cnblogs.com/files/jerrylead/%E5%85%B8%E5%9E%8B%E5%85%B3%E8%81%94%E5%88%86%E6%9E%90.pdf) for visualisation purposes, followed by using the [sklearn.multiclass.OneVsRestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html#sklearn.multiclass.OneVsRestClassifier) metaclassifier using two SVCs with linear kernels to learn a discriminative model for each class. Note that PCA is used to perform an unsupervised dimensionality reduction, while CCA is used to perform a supervised one.\n\nNote: in the plot, “unlabeled samples” does not mean that we don’t know the labels (as in semi-supervised learning) but that the samples simply do not have a label.",
"_____no_output_____"
]
],
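Before the visualisation code, here is a minimal end-to-end sketch of the multi-label setup just described: indicator-matrix targets and one linear SVC per label via OneVsRestClassifier. It is self-contained and not part of the original example.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X_ml, Y_ml = make_multilabel_classification(n_classes=2, n_labels=1, random_state=1)
clf_ml = OneVsRestClassifier(SVC(kernel='linear')).fit(X_ml, Y_ml)

print(Y_ml[:5])                 # each row is a binary indicator over the two labels
print(clf_ml.predict(X_ml[:5]))
```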
[
[
"from sklearn.datasets import make_multilabel_classification\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.decomposition import PCA\nfrom sklearn.cross_decomposition import CCA\n",
"_____no_output_____"
],
[
"def plot_hyperplance(clf,min_x,max_x,linestyle,label):\n \n # get the separating heyperplance\n # 0 = w0*x0 + w1*x1 +b\n w = clf.coef_[0]\n a = -w[0] /w[1]\n xx = np.linspace(min_x -5,max_x + 5)\n yy = a * xx -(clf.intercept_[0]) / w[1] # clf.intercept_[0] get parameter b, \n plt.plot(xx,yy,linestyle,label=label)",
"_____no_output_____"
],
[
"def plot_subfigure(X,Y,subplot,title,transform):\n if transform == \"pca\": # pca执行无监督分析(不注重label)\n \n X = PCA(n_components=2).fit_transform(X)\n print(\"PCA\",X.shape)\n \n elif transform == \"cca\": # pca 执行监督分析(注重label),也即是说会分析label之间的关系\n X = CCA(n_components=2).fit(X, Y).transform(X)\n print(\"CCA\",X.shape)\n else:\n raise ValueError\n\n min_x = np.min(X[:, 0])\n max_x = np.max(X[:, 0])\n\n min_y = np.min(X[:, 1])\n max_y = np.max(X[:, 1])\n\n classif = OneVsRestClassifier(SVC(kernel='linear')) # 使用 one -reset 进行SVM训练\n classif.fit(X, Y)\n\n plt.subplot(2, 2, subplot)\n plt.title(title)\n\n zero_class = np.where(Y[:, 0]) # 找到第一类的label 索引\n \n one_class = np.where(Y[:, 1]) # 找到第二类的\n plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))\n plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',\n facecolors='none', linewidths=2, label='Class 1')\n plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',\n facecolors='none', linewidths=2, label='Class 2')\n \n\n # classif.estimators_[0],获取第一个估算器,得到第一个决策边界\n plot_hyperplance(classif.estimators_[0], min_x, max_x, 'k--',\n 'Boundary\\nfor class 1')\n # classif.estimators_[1],获取第二个估算器,得到第一个决策边界\n plot_hyperplance(classif.estimators_[1], min_x, max_x, 'k-.',\n 'Boundary\\nfor class 2')\n plt.xticks(())\n plt.yticks(())\n\n plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)\n plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)\n if subplot == 2:\n plt.xlabel('First principal component')\n plt.ylabel('Second principal component')\n plt.legend(loc=\"upper left\")",
"_____no_output_____"
]
],
[
[
"**make_multilabel_classification:**\n\nmake_multilabel_classification(n_samples=100, n_features=20, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None)",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(8, 6))\n \n# If ``True``, some instances might not belong to any class.也就是说某些实例可以并不属于任何标签([[0,0]]),使用hot形式\nX, Y = make_multilabel_classification(n_classes=2, n_labels=1,\n allow_unlabeled=True,\n random_state=1)\n\nprint(\"Original:\",X.shape)\nplot_subfigure(X, Y, 1, \"With unlabeled samples + CCA\", \"cca\")\nplot_subfigure(X, Y, 2, \"With unlabeled samples + PCA\", \"pca\")\n\n\nX, Y = make_multilabel_classification(n_classes=2, n_labels=1,\n allow_unlabeled=False,\n random_state=1)\n\nprint(\"Original:\",X.shape)\nplot_subfigure(X, Y, 3, \"Without unlabeled samples + CCA\", \"cca\")\nplot_subfigure(X, Y, 4, \"Without unlabeled samples + PCA\", \"pca\")\n\nplt.subplots_adjust(.04, .02, .97, .94, .09, .2)\nplt.show()",
"Original: (100, 20)\nCCA (100, 2)\nPCA (100, 2)\nOriginal: (100, 20)\nCCA (100, 2)\nPCA (100, 2)\n"
]
],
[
[
"由于是使用多标签(也就是说一个实例可以有多个标签),无论是标签1还是标签2还是未知标签(“没有标签的样本”).图中直观来看应该是CCA会由于PCA(无论是有没有采用\"没有标签的样本\"),因为CCA考虑了label之间的关联.\n\n因为我们有2个标签在实例中,所以我们能够绘制2条决策边界(使用classif.estimators_[index])获取,并使用$x_1 = \\frac{w_0}{w_1}x_1-\\frac{b}{w_1}$绘制决策边界",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adc387c53030579ecec66cf8254485c2c5b45cc
| 7,896 |
ipynb
|
Jupyter Notebook
|
Redis_demo/Redis Demo.ipynb
|
JKocher13/Week9-ResearchProjects
|
72a4340d315e7e2be47b75c58158beea3677f571
|
[
"MIT"
] | null | null | null |
Redis_demo/Redis Demo.ipynb
|
JKocher13/Week9-ResearchProjects
|
72a4340d315e7e2be47b75c58158beea3677f571
|
[
"MIT"
] | null | null | null |
Redis_demo/Redis Demo.ipynb
|
JKocher13/Week9-ResearchProjects
|
72a4340d315e7e2be47b75c58158beea3677f571
|
[
"MIT"
] | null | null | null | 24.37037 | 105 | 0.491641 |
[
[
[
"/* These are a list of Redis commands that can be executed in Redis Cli or Scala with Redis_Scala*/",
"_____no_output_____"
],
[
"//Redis CLI",
"_____no_output_____"
],
[
"//launch by going to your redis file and using the cli command redis-cli",
"_____no_output_____"
],
[
"//to check your connection type \n// PING\n// you shoudl receive PONG in returns\n",
"_____no_output_____"
],
[
"/*\n\nSET for one Key/Value Pair:\n\n\nSET foo 100\nGET foo\n 100\nINCR foo\nGET foo\n 101\nDECR foo\nGET Foo\n 100\nSET bar \"Sup\"\nGET bar\n \"Sup\"\nEXSISTS bar\n 1\nDEL bar\nEXSISTS bar\n 0\nFLUSHALL \nEXSISTS foo\n 0\n*/",
"_____no_output_____"
],
[
"/*\n\nMultipe k/v pair: MSET\n\nMSET key1 \"Hello\" key2 \"World\"\nGET key1\n \"Hello\"\nGET key2\n \"Hello\"\nAPPEND key1 \"World\"\nGET key1\n \"Hello\"\n \"World\"\n \n*/",
"_____no_output_____"
],
[
"/*\nLISTS: Ordered key value pairs\n\nLPUSH key vale --> Beginning of list (index =0)\nRPUSH key value--> End of list (index = last value)\nLLEN key --> Length of list\nLRANGE key starting_index ending_index --> returns values fro start to stop\n\n\nLPUSH Eagles \"Wentz\"\nLPUSH Eagles \"Sanders\"\nLPUSH Eagles \"Reagor\"\nLRANGE Eagles 1 2\n \"Sanders\"\n \"Wentz\"\nLRANGE Eagles 0 1\n \"Reagor\"\n \"Sanders\"\nRPUSH Eagles \"Johnson\"\nLRANGE Eagles 0 -1\n \"Reagor\"\n \"Sanders\"\n \"Wentz\"\n \"Johnson\"\nLLEN Eagles\n 4\n \nLPOP Eagles\n \"Reagor\"\nRPOP Eagles\n \"Johnson\"\nLRANGE Eagles 0 -1\n \"Sanders\"\n \"Wentz\"\n \nLINSERT Eagles BEFORE \"Wentz\" \"Kelce\"\nLRANGE Eagles 0 -1\n \"Sanders\"\n \"Kelce\"\n \"Wentz\"\n \n*/",
"_____no_output_____"
],
[
"/*\nSETS: Unordered key value pairs\n\nSADD key value--> add a value to a key\nSREM key value--> remove a value from a key\nSMEMBERS key --> list all members in a set\nSCARD key --> return lengths of set\nSMOVE key new_key value --> moves value from one key to a new_key\nSDIFF key1 key2 --> tell you what key1 has that key 2 does not\nSINTER key1 key2 --> tell you what is in common between the two sets \n\n\nSADD sixers \"Simmons\"\nSADD sixers \"Embiid\"\nSADD sixers \"Harris\"\nSADD sixers \"Butler\"\nSREM sixers \"Butler\"\nSMEMBERS sixers\n \"Simmons\"\n \"Embiid\"\n \"Harris\"\nSISMEMBER \"Butler\"\n 0\nSCARD sixers\n 3\n \n\nSADD bulls \"Michael Jordan\"\nSADD bulls \"Scottie Pippen\"\nSADD whitesox \"Michael Jordan\"\nSADD whitesox \"Frank Thomas\"\n\nSDIFF bulls whitesox\n \"Scottie Pippen\"\nSDIFF whitesox bulls \n \"Frank Thomas\"\nSINTER whitesox bulls\n \"Michael Jordan\"\n \n",
"_____no_output_____"
],
[
"/*\nSorted Sets: Same as sets but all members are associated with a score\n\nZADD key score value --> adds set with score to key\nZRANK key value --> returns ranking by score (higher score = higher rank )\nZRANGE key start stop --> return list of all values in that range\n\n\nZADD players 361 \"Christian McCaffery\"\nZADD players 283 \"Derrick Henry\"\nZADD players 269 \"Aaron Jones\"\nZADD players 137 \"David Mongomery\"\nZADD players 137 \"LeVeon Bell\"\n\nZRANK players \"Christian McCaffery\"\n 4\nZRANK players \"LeVeon Bell\"\n 1\nZRANK players \"David Mongomery\"\n 0\nZRANGE 0 -1 \n 1) \"David Mongomery\"\n 2) \"LeVeon Bell\"\n 3) \"Aaron Jones\"\n 4) \"Derrick Henry\"\n 5) \"Christian McCaffery\"\nZINCRBY players 1 \"David Mongomery\"\n 138\nZRANGE player 0 -1\n 1) \"LeVeon Bell\"\n 2) \"David Mongomery\"\n 3) \"Aaron Jones\"\n 4) \"Derrick Henry\"\n 5) \"Christian McCaffery\"\n*/\n",
"_____no_output_____"
],
[
"/*\nHASHES - reminds me of classes\n\nHSET hash key value --> Set a hash\nHGET hash key value --> returns value\nHGET ALL hash \nHVALS hash --> returns all keys for hash \nHKEYS hash --> returns all keys for hash\nHMSET key1 hash value key1 hash 2 value key2 hash value .... --> hset multiples \n\nHSET user:James name \"James Kocher\"\nHGET user:James name\n \"James Kocher\"\nHSET user:James email \"[email protected]\"\nHGETALL\n 1) \"name\"\n 2) \"James Kocher\"\n 3) \"email\"\n 4) \"[email protected]\"\n\n\n*/",
"_____no_output_____"
],
[
"/* You will need to be runing sbt to build you scala if you wish to use Redis_Scal\n\nyou can copy and paste this to your sbt.build:\n\n libraryDependencies ++= Seq(\n \"net.debasishg\" %% \"redisclient\" % \"3.20\")\n \nin your scala file run the follow import commands:\n\n import com.redis._\n \nMake the connections for your Redis data base adn then run the following command:\n\n val r = new RedisClient(\"localhost\", 6379)\n\n*/",
"_____no_output_____"
]
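For reference, the same commands can also be issued from Python with the redis-py client; this is an assumption (the notebook only covers the CLI and the scala-redis route) and requires `pip install redis` and a server on localhost:6379.

```python
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Strings
r.set('foo', 100)
r.incr('foo')                      # 101
print(r.get('foo'))

# Lists
r.rpush('Eagles', 'Wentz', 'Sanders', 'Reagor')
print(r.lrange('Eagles', 0, -1))

# Hashes
r.hset('user:James', 'name', 'James Kocher')
print(r.hgetall('user:James'))
```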
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adc42aed48964a8a940c7de4362d4133a23f408
| 27,174 |
ipynb
|
Jupyter Notebook
|
python/02_basic_syntax.ipynb
|
JeongwonKim-777/TIL
|
b57a71d1469fe07bb024d63b7163f972c3f81dbe
|
[
"MIT"
] | null | null | null |
python/02_basic_syntax.ipynb
|
JeongwonKim-777/TIL
|
b57a71d1469fe07bb024d63b7163f972c3f81dbe
|
[
"MIT"
] | null | null | null |
python/02_basic_syntax.ipynb
|
JeongwonKim-777/TIL
|
b57a71d1469fe07bb024d63b7163f972c3f81dbe
|
[
"MIT"
] | null | null | null | 19.029412 | 562 | 0.439427 |
[
[
[
"### Python basic syntax\n- 변수 선언, 식별자, 자료형, 형변환, 연산자 등",
"_____no_output_____"
],
[
"### 1. 주석(comment)과 출력(print)",
"_____no_output_____"
]
],
[
[
"# 주석 : 앞에 #을 붙이면 코드로 실행되지 않음\n# 코드에 대한 설명이나 중간에 코드를 실행시키고 싶지 않을 때 사용\n# 단축키 : command + /\n# block 설정 : shift + 방향키\n\nprint(1)\n# print(2)\nprint(3)",
"1\n3\n"
],
[
"# 출력 : print 함수\n# 코드 중간에 변수 안에 들어있는 값을 확인하고 싶을 때 사용함",
"_____no_output_____"
],
[
"a = 1\nb = 2\nprint(b)\nc = 3\nb = 4\nprint(b)",
"2\n4\n"
],
[
"# print 함수의 옵션\n# docstring : 함수에 대한 설명. 단축키는 (shift + tab)\n# 자동완성 : 함수 이름의 일부분을 입력하고 tab을 누르면 자동완성 가능\n\nprint(1, 2, sep=\"-\", end='\\t')\nprint(3)",
"1-2\t3\n"
]
],
[
[
"### 2. 변수 선언\n- RAM 저장공간에 값을 할당하는 행위",
"_____no_output_____"
]
],
[
[
"a = 1\nb = 2\na + b",
"_____no_output_____"
]
],
[
[
"위의 코드 동작원리\n1. a = 1을 선언하는 순간 ram 에 a 라는 이름의 index, 1 이라는 value가 생성됨\n2. b = 2를 선언하는 순간 ram 에 b 라는 이름의 index, 2 라는 value가 생성됨\n3. a + b 를 요청할 때, 각 저장공간에 있는 value를 참조하여 연산 수행 ",
"_____no_output_____"
]
],
[
[
"# 변수 선언은 여러가지 방식으로 가능함\n\nd, e = 3, 4\nf = g = 5",
"_____no_output_____"
]
],
[
[
"### 3. 식별자\n- 변수, 함수, 클래스, 모듈 등의 이름\n- 식별자 규칙\n - 소문자, 대문자, 숫자, _(underscore)사용\n - 가장 앞에 숫자 사용 불가\n - 예약어의 사용 불가 ex) def, class, try, except...등\n - convention(python 기준, 언어마다 조금씩 다르다)\n - snake case : fast_campus : 변수 및 함수 선언할 때 사용\n - camel case : fastCampus : 클래스 선언할 때 사용",
"_____no_output_____"
],
[
"### 4. 데이터 타입\n- RAM 저장공간을 효율적으로 사용하기 위해서 저장공간의 타입을 설정\n- 동적타이핑 : 변수 선언 시 저장되는 값에 따라서 자동으로 데이터 타입이 설정됨\n- 기본 데이터 타입\n - int, float, bool, str\n- 컬렉션 데이터 타입\n - list, tuple, dict",
"_____no_output_____"
]
],
[
[
"a = 1\nb = \"python\"\n\ntype(a), type(b)",
"_____no_output_____"
],
[
"# 기본 데이터 타입 int, float, bool, str\na = 1\nb = 1.2\nc = True\nd = \"data\"\n\ntype(a), type(b), type(c), type(d)",
"_____no_output_____"
],
[
"a + b",
"_____no_output_____"
],
[
"a + d # int 와 str은 연산이 불가능함",
"_____no_output_____"
],
[
"# 문자열 데이터 타입에서 사용할 수 있는 함수의 예\n\ne = d.upper()\n\nd, e",
"_____no_output_____"
],
[
"# 문자열에서 사용 가능한 주요한 함수 몇가지 예\n\nf = \" Fast campuS \"\n\n# lower: 소문자로 변환\nf.lower()\n# strip: 공백을 제거\nf.strip()\n# replace: 특정 문자열을 치환\nf.replace(\"Fast\",\"Slow\")",
"_____no_output_____"
],
[
"# 적용가능한 함수 보는 방법: dir()",
"_____no_output_____"
],
[
"# 오프셋 인덱스 : 마스크, 마스킹 : []\n# 문자열은 순서가 있는 문자들의 집합",
"_____no_output_____"
],
[
"g = \"abcdefg\"",
"_____no_output_____"
],
[
"# 인덱스는 0부터 시작. 음수로 지정할 경우 가장 마지막 문자를 -1로 인식함\n# 인덱스 범위 설정할 때는 콜론(:) 사용\n# 콜론을 두번씩 쓸 때는 (::) 건너뛰는 것을 의미\n\ng[1], g[-2], g[2:5], g[:2], g[-2:], g[::2]",
"_____no_output_____"
],
[
"numbers = \"123456789\"\n\n# 여기서 97531을 출력하기 위해서는\n# 1. 우선 역순임, 2. 2씩 건너 뜀\n\nnumbers[::-2]",
"_____no_output_____"
]
],
[
[
"### 컬렉션 데이터 타입: list, tuple, dict\n- list [] : 순서가 있고, 수정 가능한 데이터 타입\n- tuple () : 순서가 있고, 수정이 불가능한 데이터 타입\n- dict {} : 순서가 없고, (키:값) 으로 구성되어 있는 데이터 타입",
"_____no_output_____"
]
],
[
[
"# list\nls = [1, 2, 3, 4, \"five\", [5, 6], True, 1.2]\ntype(ls), ls",
"_____no_output_____"
],
[
"# list는 순서가 있기 때문에 offset index 사용 가능함\n\nls[3], ls[4], ls[1:3], ls[::-1]",
"_____no_output_____"
],
[
"# list 에서 사용되는 함수\n\nls2 = [1, 5, 2, 4]\n",
"_____no_output_____"
],
[
"# append : 가장 뒤에 값을 추가\n\nls2.append(3)\n\n# sort : 오름차순으로 정렬\n\nls2.sort()\n\n# 내림차순은 별도로 없기 때문에, 오름차순 정렬한 리스트를 역으로 다시 출력해야 함\n# 결과값이 요상하게 나오는 이유는 append를 입력한 셀을 여러번 실행시켰기 때문에 3이 반복적으로 추가된 것\n\nls2[::-1]",
"_____no_output_____"
],
[
"# pop : 가장 마지막 데이터를 출력하고 그 값을 삭제\n\nnum = ls2.pop()\nnum, ls2",
"_____no_output_____"
],
[
"# 단축키: ctrl + enter -> 현재 셀에서 실행됨. 커서 안넘어감.",
"_____no_output_____"
],
[
"# 리스트의 복사\n\nls3 = [1, 2, 3]\nls4 = ls3\nls3, ls4",
"_____no_output_____"
],
[
"ls3[2] = 5\n\n# ls3 내의 값을 바꿨는데, ls4의 값도 동시에 바뀌었음\nls4",
"_____no_output_____"
],
[
"# 단순하게 대입 연산자로 매칭을 시키게 되면, 실제 값을 복사하는 것이 아닌 주소값을 참조한다는 의미. 이를 얕은 복사라 지칭.\n# 따라서 진짜 값을 복사하기 위해서는, list 에서 사용할 수 있는 함수 중 하나인 copy를 사용해야 함. 이를 깊은 복사라 지칭.\n\nls5 = ls3.copy()\nls3[2] = 7\nls3, ls5",
"_____no_output_____"
]
],
[
[
"### tuple : list와 같지만 수정이 불가능한 데이터 타입\n- ( )을 통해서 선언하지만, 생략하고 그냥 쓰면 tuple로 인식함\n- tuple은 list 대비해서 같은 데이터를 가졌을 때 저장공간을 작게 사용함. 따라서 변수값이 불변일 때는 가급적 tuple로 지정하는 것이 바람직.",
"_____no_output_____"
]
],
[
[
"tp1 = 1, 2, 3\ntp2 = (4, 5, 6)\ntype(tp1), type(tp2), tp1, tp2",
"_____no_output_____"
],
[
"# tuple에서도 offset index 사용 가능\n\ntp1[1], tp1[::-1]",
"_____no_output_____"
],
[
"# list와 tuple의 저장공간 차이 확인\n\nimport sys\n\nls = [1, 2, 3]\ntp = (1, 2, 3)\n\nprint(sys.getsizeof(ls), sys.getsizeof(tp))",
"96 80\n"
]
],
[
[
"### dictionary\n- { }로 선언\n- 순서가 없고 {key:value} 형태로 구성되어 있는 데이터 타입",
"_____no_output_____"
]
],
[
[
"# 선언할 때: 키는 정수, 문자열 데이터 타입만 사용 가능\n\ndicex = {\n 1 : \"one\",\n \"two\" : 2,\n \"three\" : [1, 2, 3,],\n}\ntype(dicex), dicex",
"_____no_output_____"
],
[
"# dict에서는 index 가 key 임\n\ndicex[\"two\"], dicex[\"three\"]",
"_____no_output_____"
],
[
"dicex[\"two\"] = 22\n\ndicex",
"_____no_output_____"
],
[
"# 아래의 데이터를 list와 dict으로 각각 선언할 경우\n# list로는 두 번 할 일을, dict으로는 한 번에 처리 가능함\n\n# 도시 : Seoul, Busan, Daegu\n# 인구 : 9,700,000, 3,400,000, 2,400,000",
"_____no_output_____"
],
[
"ls_city = [\"Seoul\", \"Busan\", \"Daegu\"]\nls_pop = [9700000, 3400000, 2400000]\n\nls_city, ls_pop",
"_____no_output_____"
],
[
"dict_citypop = {\n \"Seoul\" : 9700000,\n \"Busan\" : 3400000,\n \"Daegu\" : 2400000\n}\n\ndict_citypop",
"_____no_output_____"
],
[
"# list에 동일한 타입의 자료형만 있을 경우, 함수를 통해 쉽게 계산 가능함\n\nsum(ls_pop)",
"_____no_output_____"
],
[
"# dict 타입은 key와 value가 공존하므로 계산을 위해서는 values 함수를 통해 별도로 지정해줘야 함\n\ndict_citypop.values()",
"_____no_output_____"
],
[
"sum(dict_citypop.values())",
"_____no_output_____"
]
],
[
[
"### 5. 형변환\n- 데이터 타입을 변환하는 방법\n- int, float, bool, str, list, tuple, dict",
"_____no_output_____"
]
],
[
[
"# 같은 데이터 타입에서만 연산이 가능하다\n\na = 1\nb = \"2\"\na + b",
"_____no_output_____"
],
[
"# 따라서 데이터 타입을 맞춰줘야 함\n\na + int(b)",
"_____no_output_____"
],
[
"# 만약 문자열로 합성을 원한다면\n\nstr(a) + b",
"_____no_output_____"
],
[
"# zip : 같은 인덱스 데이터끼리 묶어주는 함수. 데이터끼리는 tuple 형태로 묶임\n\nzip(ls_city, ls_pop)",
"_____no_output_____"
],
[
"# 출력 형태를 각각 조절할 수 있음. collection data type 3개 중 한개로 지정.\n\nlist(zip(ls_city, ls_pop))",
"_____no_output_____"
],
[
"tuple(zip(ls_city, ls_pop))",
"_____no_output_____"
],
[
"newdict = dict(zip(ls_city, ls_pop))\nnewdict",
"_____no_output_____"
],
[
"# dict 형태로 묶인 것을 다시 풀기 위해서는 keys 혹은 values 함수를 사용\n\nlist(newdict.keys())",
"_____no_output_____"
],
[
"tuple(newdict.values())",
"_____no_output_____"
]
],
[
[
"### 6. 연산자\n- 두 개(이상)의 데이터를 통해 특정한 연산을 위한 기호\n- 산술연산자: +, -, *, /, //(몫), %(나머지), **(제곱). -> 가장 마지막 것부터 우선순위 낮아짐\n- 할당연산자: 특정 변수에 누적시켜서 연산 : +=, //=, **=, 등..\n- 비교연산자: <, >, <=, >=, !=, ==. True 혹은 False로 출력됨\n- 논리연산자: True, False를 연산. and, or, not 총 3종류\n- 멤버연산자: 특정 데이터가 있는지 확인할 때 사용. not in, in 사용\n- 우선순위: 산술, 할당, 비교, 멤버, 논리 순",
"_____no_output_____"
]
],
[
[
"# 산술 연산: 기본 규칙과 동일함(괄호가 우선)\n\n(1 + 4) / 2 ** 2",
"_____no_output_____"
],
[
"# 할당 연산\n\na = 10\na += 10\na",
"_____no_output_____"
],
[
"# 비교 연산\n\nb = 30\n\na > b, a < b, a == b, a != b",
"_____no_output_____"
],
[
"# 논리 연산\n\nTrue or False, True and False, not False",
"_____no_output_____"
],
[
"# 멤버 연산\n\nls_ex = [\"Kim\", \"Lee\", \"Park\"]\n\n\"Kim\" in ls_ex, \"Chang\" in ls_ex, \"Lee\" not in ls_ex\n",
"_____no_output_____"
],
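[
"# Combining the operator types above: comparison and membership operators are evaluated\n# before the logical operators, so no extra parentheses are needed here\n\n10 < 20 and \"Kim\" in ls_ex, \"Choi\" in ls_ex or not False",
"_____no_output_____"
],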
[
"# 번외: 랜덤 함수\n\nimport random\n\nrandom.randint(1, 10)",
"_____no_output_____"
],
[
"# 번외: 입력 함수\n\ndata = input(\"insert string : \")",
"insert string : Hi\n"
],
[
"data",
"_____no_output_____"
],
[
"# 심심풀이: 해결의 책\n\n# 1. 솔루션을 list로 작성\n# 2. 질문 입력 받음\n# 3. 솔루션의 갯수에 맞게 랜덤한 index 정수 값을 생성\n# 4. index 해당하는 솔루션 list의 데이터를 출력",
"_____no_output_____"
],
[
"import random\n\nsolution = [\"잠을 자라\", \"돈 벌어라\", \"운동 해라\", \"답 없다\", \"다시 태어나라\"]\n\ninput(\"What's your problem? : \")\n\nidx = random.randint(0, len(solution) - 1)\n\nsolution[idx]",
"What's your problem? : 삶이 무료하다\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adc48e92b3f78d35ec2acffed74f2e6d074e440
| 13,195 |
ipynb
|
Jupyter Notebook
|
image/dogs-vs-cats/vgg16-simple.ipynb
|
danielfrg/deep-learning
|
95237321b7e941f3c346c91e8d04fa64a5e968c1
|
[
"Apache-2.0"
] | 1 |
2018-12-08T02:15:42.000Z
|
2018-12-08T02:15:42.000Z
|
image/dogs-vs-cats/vgg16-simple.ipynb
|
danielfrg/deep-learning
|
95237321b7e941f3c346c91e8d04fa64a5e968c1
|
[
"Apache-2.0"
] | null | null | null |
image/dogs-vs-cats/vgg16-simple.ipynb
|
danielfrg/deep-learning
|
95237321b7e941f3c346c91e8d04fa64a5e968c1
|
[
"Apache-2.0"
] | 3 |
2018-05-17T15:29:23.000Z
|
2019-03-11T01:58:00.000Z
| 28.4375 | 135 | 0.571883 |
[
[
[
"This notebooks finetunes VGG16 by adding a couple of Dense layers and trains it to classify between cats and dogs.\n\nThis gives a better classification of around 95% accuracy on the validation dataset",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"import numpy as np\n\nimport tensorflow as tf\n\nfrom tensorflow.contrib.keras import layers\nfrom tensorflow.contrib.keras import models\nfrom tensorflow.contrib.keras import optimizers\nfrom tensorflow.contrib.keras import applications\nfrom tensorflow.contrib.keras.python.keras.preprocessing import image\nfrom tensorflow.contrib.keras.python.keras.applications import imagenet_utils",
"_____no_output_____"
],
[
"def get_batches(dirpath, gen=image.ImageDataGenerator(), shuffle=True, batch_size=64, class_mode='categorical'):\n return gen.flow_from_directory(dirpath, target_size=(224,224), class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)",
"_____no_output_____"
],
[
"batch_size = 64",
"_____no_output_____"
],
[
"train_batches = get_batches('./data/train', batch_size=batch_size)",
"Found 22797 images belonging to 2 classes.\n"
],
[
"val_batches = get_batches('./data/valid', batch_size=batch_size)",
"Found 2203 images belonging to 2 classes.\n"
]
],
[
[
"Model creation",
"_____no_output_____"
]
],
[
[
"vgg16 = applications.VGG16(weights=\"imagenet\", include_top=False, input_shape=(224, 224, 3))",
"_____no_output_____"
],
[
"##",
"_____no_output_____"
],
[
"finetune_in = vgg16.output",
"_____no_output_____"
],
[
"x = layers.Flatten(name='flatten')(finetune_in)\nx = layers.Dense(4096, activation='relu', name='fc1')(x)\nx = layers.BatchNormalization()(x)\nx = layers.Dropout(0.5)(x)\nx = layers.Dense(4096, activation='relu', name='fc2')(x)\nx = layers.BatchNormalization()(x)\nx = layers.Dropout(0.5)(x)",
"_____no_output_____"
],
[
"predictions = layers.Dense(train_batches.num_class, activation='softmax', name='predictions')(x)",
"_____no_output_____"
],
[
"model = models.Model(inputs=vgg16.input, outputs=predictions)",
"_____no_output_____"
],
[
"##",
"_____no_output_____"
]
],
[
[
"We tell the model to train on the last 3 layers",
"_____no_output_____"
]
],
[
[
"for layer in model.layers[:-7]:\n layer.trainable = False",
"_____no_output_____"
],
[
"model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 112, 112, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 56, 56, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 28, 28, 256) 0 \n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 25088) 0 \n_________________________________________________________________\nfc1 (Dense) (None, 4096) 102764544 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 4096) 16384 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 4096) 0 \n_________________________________________________________________\nfc2 (Dense) (None, 4096) 16781312 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 4096) 16384 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 4096) 0 \n_________________________________________________________________\npredictions (Dense) (None, 2) 8194 \n=================================================================\nTotal params: 134,301,506\nTrainable params: 119,570,434\nNon-trainable params: 14,731,072\n_________________________________________________________________\n"
],
[
"for i, layer in enumerate(model.layers):\n print(i, layer.name, layer.trainable)",
"0 input_1 False\n1 block1_conv1 False\n2 block1_conv2 False\n3 block1_pool False\n4 block2_conv1 False\n5 block2_conv2 False\n6 block2_pool False\n7 block3_conv1 False\n8 block3_conv2 False\n9 block3_conv3 False\n10 block3_pool False\n11 block4_conv1 False\n12 block4_conv2 False\n13 block4_conv3 False\n14 block4_pool False\n15 block5_conv1 False\n16 block5_conv2 False\n17 block5_conv3 False\n18 block5_pool False\n19 flatten False\n20 fc1 True\n21 batch_normalization_1 True\n22 dropout_1 True\n23 fc2 True\n24 batch_normalization_2 True\n25 dropout_2 True\n26 predictions True\n"
],
[
"model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"epochs = 1",
"_____no_output_____"
],
[
"steps_per_epoch = train_batches.samples // train_batches.batch_size\nvalidation_steps = val_batches.samples // val_batches.batch_size",
"_____no_output_____"
],
[
"model.fit_generator(train_batches, validation_data=val_batches, epochs=epochs,\n steps_per_epoch=steps_per_epoch,validation_steps=validation_steps)",
"_____no_output_____"
]
],
[
[
"This give us a validation score of around: `val_loss: 0.2865 - val_acc: 0.9536`\n\n## Gen submission file",
"_____no_output_____"
]
],
[
[
"import submission",
"_____no_output_____"
],
[
"test_batches, steps = submission.test_batches()",
"Found 12500 images belonging to 1 classes.\n"
],
[
"preds = model.predict_generator(test_batches, steps)",
"_____no_output_____"
],
[
"preds.shape",
"_____no_output_____"
],
[
"submission.gen_file(preds, test_batches)",
"_____no_output_____"
]
],
[
[
"This gave a score of around `0.39` on the public leaderboard",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4adc5134d4a0db34fedd9ca8e48014b18df00ca3
| 815,984 |
ipynb
|
Jupyter Notebook
|
Air Quality Index/Code/PGCBAA02B002_Akshara.ipynb
|
AksharaSivakumar01/Data-Science-with-Python
|
dbc3571591ce6cf4a697ae4dfd5220554bc3a939
|
[
"MIT"
] | null | null | null |
Air Quality Index/Code/PGCBAA02B002_Akshara.ipynb
|
AksharaSivakumar01/Data-Science-with-Python
|
dbc3571591ce6cf4a697ae4dfd5220554bc3a939
|
[
"MIT"
] | null | null | null |
Air Quality Index/Code/PGCBAA02B002_Akshara.ipynb
|
AksharaSivakumar01/Data-Science-with-Python
|
dbc3571591ce6cf4a697ae4dfd5220554bc3a939
|
[
"MIT"
] | null | null | null | 815,984 | 815,984 | 0 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4adc6c6eec9a843969729fd37406be5c664c5af7
| 9,425 |
ipynb
|
Jupyter Notebook
|
_notebooks/2020-02-14-power-analyis.ipynb
|
anaveenan/naveenanarjunan
|
ffaa0371c9bf6d94b79098c4857663b50897ce44
|
[
"Apache-2.0"
] | null | null | null |
_notebooks/2020-02-14-power-analyis.ipynb
|
anaveenan/naveenanarjunan
|
ffaa0371c9bf6d94b79098c4857663b50897ce44
|
[
"Apache-2.0"
] | 3 |
2021-05-20T21:17:03.000Z
|
2022-02-26T09:52:27.000Z
|
_notebooks/2020-02-14-power-analyis.ipynb
|
anaveenan/naveenanarjunan
|
ffaa0371c9bf6d94b79098c4857663b50897ce44
|
[
"Apache-2.0"
] | null | null | null | 36.25 | 447 | 0.637984 |
[
[
[
"# Using Simulation to Estimate the Power of an A/B experiment\n> A tutorial on estimating power of an A/B experiment\n\n- toc: false\n- badges: true\n- comments: true\n- categories: [a/b testing, python]\n- image: images/chart-preview.png\n",
"_____no_output_____"
],
[
"# About\n\nThis article was originally posted in my medium blog post [here](https://medium.com/analytics-vidhya/using-simulation-to-estimate-the-power-of-an-a-b-experiment-d38adf32b29c)\n\n\nPower of an experiment measures the ability of the experiment to detect a specific alternate hypothesis. For example, an e-commerce company is trying to increase the time users spend on the website by changing the design of the website. They plan to use the well-known two-sample t-test. Power helps in answering the question: will the t-test be able to detect a difference in mean time spend (if it exists) by rejecting the null hypothesis?",
"_____no_output_____"
],
[
"\nLets state the hypothesis \n\n**Null Hypothesis H<sub>0</sub>**: New design has no effect on the time users spend on the website \n**Alternate Hypothesis H<sub>a</sub>**: New design impacts the time users spend on the website \n\n\n\nWhen an A/B experiment is run to measure the impact of the website redesign, \nwe want to ensure that the experiment has at least 80% power. The following parameters impact the power of the experiment: \n\n\n**1. Sample size(n):** Larger the sample size, smaller the standard error becomes; and makes sampling distribution smaller. Increasing the sample size, increases the power of the experiment \n**2. Effect size(𝛿):** Difference between the means sampling distribution of null and alternative hypothesis. Smaller the effect size, need more samples to detect an effect at predefined power \n**3. Alpha(𝛼):** Significance value is typically set at 0.05; this is the cut off at which we accept or reject our null hypothesis. Making alpha smaller requires more samples to detect an effect at predefined power \n**4. Beta(β):** Power is defined as 1-β \n\n\nWhy power analysis is done to determine sample size before running an experiment? \n\n1. Running experiments is expensive and time consuming \n2. Increases the chance of finding significant effect \n3. Increases the chance of replicating an effect detected in an experiment \n\n\nFor example, the time users spend currently on the website is normally distributed with mean 2 minutes and standard deviation 1 minute. The product manager wants to design an experiment to understand if the redesigned website helps in increasing the time spent on the website. \n\nThe experiment should be able to detect a minimum of 5% change in time spent on the website. For a test like this, an exact solution is available to estimate sample size since sampling distribution is known. Here we will use the simulation method to estimate the sample and validate the same using exact method. \n\nThe following steps estimate the power of two-sample t-test:\n\n1. Simulate data for the model under null 𝒩(2,1) and alternate hypothesis 𝒩(2+𝛿,1) \n2. Perform t-test on the sample and record whether the t-test rejects the null hypothesis \n3. Run the simulation multiple number of times and count the number of times the t-test rejects the null hypothesis. ",
"_____no_output_____"
],
[
"### Code to compute power of experiment for a specified sample size, effect size and significance level: \n\nPower of the experiment is 58.8% with sample size of 1000",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport scipy.stats as st\n# Initialize delta(minimum lift the product manager expect), control_mean, control_sd\ndelta=0.05\ncontrol_mean=2\ncontrol_sd=1\nsample_size=1000\nalpha=0.05#significance of the experiment\nn_sim=1000#Total number of samples to simulate\n\nnp.random.seed(123)#set seed\ndef simulate_data(control_mean,control_sd,sample_size,n_sim):\n # Simulate the time spend under null hypothesis\n control_time_spent = np.random.normal(loc=control_mean, scale=control_sd, size=(sample_size,n_sim))\n # Simulate the time spend under alternate hypothesis\n treatment_time_spent = np.random.normal(loc=control_mean*(1+delta), scale=control_sd, size=(sample_size,n_sim))\n return control_time_spent,treatment_time_spent\n# Run the t-test and get the p_value\ncontrol_time_spent, treatment_time_spent=simulate_data(control_mean,control_sd,sample_size,n_sim)\nt_stat, p_value = st.ttest_ind(control_time_spent, treatment_time_spent)\npower=(p_value<0.05).sum()/n_sim\nprint(\"Power of the experiment {:.1%}\".format(power))\n#Power of the experiment 58.8%",
"Power of the experiment 58.8%\n"
]
],
[
[
"### Code to compute sample size required to reach 80% power for specified effect size and significance level: \nBased on simulation methods we need 1560 users to reach power of 80% and this closely matches with sample size estimated using exact method",
"_____no_output_____"
]
],
[
[
"#increment sample size till required power is reached \nsample_size=1000\nnp.random.seed(123)\nwhile True:\n control_time_spent, treatment_time_spent=simulate_data(control_mean,control_sd,sample_size,n_sim)\n t_stat, p_value = st.ttest_ind(control_time_spent, treatment_time_spent)\n power=(p_value<alpha).sum()/n_sim\n if power>.80:\n print(\"Minimum sample size required to reach significance {}\".format(sample_size))\n break\n else:\n sample_size+=10\n#Minimum sample size required to reach significance 1560",
"Minimum sample size required to reach significance 1560\n"
]
],
[
[
"### Code to compute sample size using exact method:",
"_____no_output_____"
]
],
[
[
"#Analtyical solution to compute sample size\nfrom statsmodels.stats.power import tt_ind_solve_power\n\ntreat_mean=control_mean*(1+delta)\nmean_diff=treat_mean-control_mean\n\ncohen_d=mean_diff/np.sqrt((control_sd**2+control_sd**2)/2)\n\nn = tt_ind_solve_power(effect_size=cohen_d, alpha=alpha, power=0.8, ratio=1, alternative='two-sided')\nprint('Minimum sample size required to reach significance: {:.0f}'.format(round(n)))\n",
"Minimum sample size required to reach significance: 1571\n"
]
],
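[
[
"# A quick cross-check of the simulated power above (a sketch reusing cohen_d, alpha and the\n# statsmodels solver already imported): with nobs1=1000 users per group, the analytical power\n# should be close to the ~58.8% estimated by simulation\nanalytic_power = tt_ind_solve_power(effect_size=cohen_d, nobs1=1000, alpha=alpha, power=None, ratio=1, alternative='two-sided')\nprint('Analytical power with 1000 users per group: {:.1%}'.format(analytic_power))",
"_____no_output_____"
]
],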
[
[
"### Conclusion\nThis article explained how simulation can be used to estimate power of an A/B experiment when a closed form solution doesn’t exist.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adc77a9cb0a7c9b93e9f9bdac80e42b90c0a3b7
| 26,831 |
ipynb
|
Jupyter Notebook
|
wienerschnitzelgemeinschaft/src/Russ/resave_xx1a.ipynb
|
guitarmind/HPA-competition-solutions
|
547d53aaca148fdb5f4585526ad7364dfa47967d
|
[
"MIT"
] | null | null | null |
wienerschnitzelgemeinschaft/src/Russ/resave_xx1a.ipynb
|
guitarmind/HPA-competition-solutions
|
547d53aaca148fdb5f4585526ad7364dfa47967d
|
[
"MIT"
] | null | null | null |
wienerschnitzelgemeinschaft/src/Russ/resave_xx1a.ipynb
|
guitarmind/HPA-competition-solutions
|
547d53aaca148fdb5f4585526ad7364dfa47967d
|
[
"MIT"
] | null | null | null | 38.494978 | 653 | 0.550744 |
[
[
[
"midx = '1a'\n# midx = '1a1'\n# midx = '1a2'\n# midx = '1a3'",
"_____no_output_____"
],
[
"import socket\nimport timeit\nimport time\nfrom datetime import datetime\nimport os\nimport glob\nfrom collections import OrderedDict\nimport numpy as np\nimport pandas as pd\nimport pickle\nimport gc\nimport cv2\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-white')\nimport seaborn as sns\nsns.set_style(\"white\")\nimport random\nimport PIL\nimport pathlib\nimport pathlib\n\nimport torch\nfrom torch.autograd import Variable\nimport torch.optim as optim\nfrom torch.utils import data\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms\nfrom torchvision.utils import make_grid\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau, StepLR\nfrom torch.utils.data.sampler import WeightedRandomSampler\nimport torchvision\n\nimport albumentations as A\n\nfrom skimage.exposure import histogram, equalize_hist, equalize_adapthist\nfrom skimage.morphology import dilation, remove_small_objects, remove_small_holes, label\n\nimport pretrainedmodels\nfrom xception import xception\n\nfrom tensorboardX import SummaryWriter\n\nfrom scipy.special import logit\nfrom sklearn.metrics import jaccard_similarity_score, f1_score\nfrom sklearn.preprocessing import MultiLabelBinarizer\n\nimport imgaug as ia\nfrom imgaug import augmenters as iaa\nimport multiprocessing\nimport threading\n\nfrom dataloaders import utils\nfrom dataloaders import custom_transforms as tr\n\n# from losses import CombinedLoss, BCELoss2d\nimport lovasz_losses as L",
"_____no_output_____"
],
[
"directory = './'\n\nori_size = 512\nup_size = 512\nimage_size = 512\n\ninterp = cv2.INTER_AREA\n# methods=[(\"area\", cv2.INTER_AREA), \n# (\"nearest\", cv2.INTER_NEAREST), \n# (\"linear\", cv2.INTER_LINEAR), \n# (\"cubic\", cv2.INTER_CUBIC), \n# (\"lanczos4\", cv2.INTER_LANCZOS4)]\n\ny_pad = image_size - up_size\ny_min_pad = int(y_pad / 2)\ny_max_pad = y_pad - y_min_pad\n\nx_pad = image_size - up_size\nx_min_pad = int(x_pad / 2)\nx_max_pad = x_pad - x_min_pad\n\nprint(ori_size, up_size, image_size)",
"512 512 512\n"
],
[
"PATH_TO_TRAIN = './train/'\nPATH_TO_TEST = './test/'\nPATH_TO_EXTERNAL2 = './external_data2/'\nPATH_TO_EXTERNAL3 = './external_data3/'\nPATH_TO_TARGET = './train.csv'\nPATH_TO_TARGETXX = './HPAv18Y.csv'\nPATH_TO_SUB = './sample_submission.csv'\n\nLABEL_MAP = {\n0: \"Nucleoplasm\" ,\n1: \"Nuclear membrane\" ,\n2: \"Nucleoli\" ,\n3: \"Nucleoli fibrillar center\", \n4: \"Nuclear speckles\" ,\n5: \"Nuclear bodies\" ,\n6: \"Endoplasmic reticulum\" ,\n7: \"Golgi apparatus\" ,\n8: \"Peroxisomes\" ,\n9: \"Endosomes\" ,\n10: \"Lysosomes\" ,\n11: \"Intermediate filaments\" , \n12: \"Actin filaments\" ,\n13: \"Focal adhesion sites\" ,\n14: \"Microtubules\" ,\n15: \"Microtubule ends\" ,\n16: \"Cytokinetic bridge\" ,\n17: \"Mitotic spindle\" ,\n18: \"Microtubule organizing center\", \n19: \"Centrosome\",\n20: \"Lipid droplets\" ,\n21: \"Plasma membrane\" ,\n22: \"Cell junctions\" ,\n23: \"Mitochondria\" ,\n24: \"Aggresome\" ,\n25: \"Cytosol\" ,\n26: \"Cytoplasmic bodies\",\n27: \"Rods & rings\"}\n\nLOC_MAP = {}\nfor k in LABEL_MAP.keys(): LOC_MAP[LABEL_MAP[k]] = k",
"_____no_output_____"
],
[
"# from Tomomi\ndxx = pd.read_csv(PATH_TO_TARGETXX, index_col = None)\ndxx.set_index('Id',inplace=True)\nprint(dxx.head())\nprint(dxx.shape)",
" Target GotYellow\nId \nENSG00000000003_4109_24_H11_1 25 1\nENSG00000000003_4109_24_H11_2 25 1\nENSG00000000003_4109_23_H11_1 25 1\nENSG00000000003_4109_23_H11_2 25 1\nENSG00000000003_4109_25_H11_1 25 1\n(77878, 2)\n"
],
[
"# dataloader bombs out on iteration 63914, so limit size here\n# dxx = dxx.iloc[:50000]\n# dxx = dxx.iloc[50000:]\n# dxx = dxx.iloc[37154:]\nprint(dxx.shape)",
"(77878, 2)\n"
],
[
"def image_histogram_equalization(image, number_bins=256):\n # from http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html\n\n # get image histogram\n image_histogram, bins = np.histogram(image.flatten(), number_bins, density=True)\n cdf = image_histogram.cumsum() # cumulative distribution function\n cdf = 255 * cdf / cdf[-1] # normalize\n\n # use linear interpolation of cdf to find new pixel values\n image_equalized = np.interp(image.flatten(), bins[:-1], cdf)\n\n # return image_equalized.reshape(image.shape), cdf\n return image_equalized.reshape(image.shape)\n\ndef equalize(arr):\n arr = arr.astype('float')\n # usually do not touch the alpha channel\n # but here we do since it is yellow\n for i in range(arr.shape[-1]):\n # arr[...,i] = 255 * equalize_hist(arr[...,i])\n arr[...,i] = image_histogram_equalization(arr[...,i]) \n return arr\n\ndef normalize(arr, q=0.01):\n arr = arr.astype('float')\n # usually do not touch the alpha channel\n # but here we do since it is yellow\n # print('arr before',arr.shape,arr.min(),arr.mean(),arr.max())\n for i in range(arr.shape[-1]):\n # arr[...,i] = 255 * equalize_hist(arr[...,i])\n ai = arr[...,i]\n # print('ai ' + str(i) + ' before',i,ai.shape,ai.min(),ai.mean(),ai.max())\n qlow = np.percentile(ai,100*q)\n qhigh = np.percentile(ai,100*(1.0-q))\n if qlow == qhigh:\n arr[...,i] = 0.\n else:\n arr[...,i] = 255.*(np.clip(ai,qlow,qhigh) - qlow)/(qhigh - qlow) \n # print('ai ' + str(i) + ' after',i,ai.shape,ai.min(),ai.mean(),ai.max())\n # print('arr after',arr.shape,arr.min(),arr.mean(),arr.max())\n return arr\n\ndef standardize(arr):\n arr = arr.astype('float')\n # usually do not touch the alpha channel\n # but here we do since it is yellow\n # print('arr before',arr.shape,arr.min(),arr.mean(),arr.max())\n for i in range(arr.shape[-1]):\n # arr[...,i] = 255 * equalize_hist(arr[...,i])\n ai = (arr[...,i] - arr.mean())/(arr.std() + 1e-6)\n # print('ai ' + str(i) + ' after',i,ai.shape,ai.min(),ai.mean(),ai.max())\n # print('arr after',arr.shape,arr.min(),arr.mean(),arr.max())\n return arr\n\n\n\nclass MultiBandMultiLabelDataset(Dataset):\n \n# BANDS_NAMES = ['_red.png','_green.png','_blue.png','_yellow.png']\n BANDS_NAMES = ['_red','_green','_blue']\n \n def __len__(self):\n return len(self.images_df)\n \n def __init__(self, images_df, \n base_path, \n image_transform=None, \n augmentator=None,\n train_mode=True,\n external=0\n ):\n if not isinstance(base_path, pathlib.Path):\n base_path = pathlib.Path(base_path)\n \n self.images_df = images_df.reset_index()\n self.image_transform = image_transform\n self.augmentator = augmentator\n self.images_df.Id = self.images_df.Id.apply(lambda x: base_path / x)\n self.mlb = MultiLabelBinarizer(classes=list(LABEL_MAP.keys()))\n self.train_mode = train_mode\n self.external = external\n if self.external == 2: self.suffix = '.jpg'\n else: self.suffix = '.png'\n self.cache = {}\n \n def __getitem__(self, index):\n # print('index class',index.__class__)\n if isinstance(index, torch.Tensor): index = index.item()\n if index in self.cache: \n X, y = self.cache[index]\n else:\n y = None\n X = self._load_multiband_image(index)\n if self.train_mode:\n y = self._load_multilabel_target(index)\n self.cache[index] = (X,y)\n \n # augmentator can be for instance imgaug augmentation object\n if self.augmentator is not None:\n# print('getitem before aug',X.shape,np.min(X),np.mean(X),np.max(X))\n# X = self.augmentator(np.array(X))\n X = self.augmentator(image=X)['image']\n# print('getitem after 
aug',X.shape,np.min(X),np.mean(X),np.max(X))\n \n if self.image_transform is not None:\n X = self.image_transform(X)\n \n return X, y \n \n def _load_multiband_image(self, index):\n row = self.images_df.iloc[index]\n \n if self.external == 1:\n p = str(row.Id.absolute()) + self.suffix\n band3image = PIL.Image.open(p)\n \n else:\n image_bands = []\n for i,band_name in enumerate(self.BANDS_NAMES):\n p = str(row.Id.absolute()) + band_name + self.suffix\n pil_channel = PIL.Image.open(p)\n if self.external == 2: \n pa = np.array(pil_channel)[...,i] \n# pa = np.array(pil_channel)\n# print(i,band_name,pil_channel.mode,pa.shape,pa.min(),pa.mean(),pa.max())\n if pa.max() > 0:\n pil_channel = PIL.Image.fromarray(pa.astype('uint8'),'L')\n pil_channel = pil_channel.convert(\"L\")\n image_bands.append(pil_channel)\n\n # pretend its a RBGA image to support 4 channels\n# band4image = PIL.Image.merge('RGBA', bands=image_bands)\n band3image = PIL.Image.merge('RGB', bands=image_bands)\n \n band3image = band3image.resize((image_size,image_size), PIL.Image.ANTIALIAS)\n\n # normalize each channel \n# arr = np.array(band4image)\n arr = np.array(band3image)\n \n# # average red and yellow channels, orange\n# arr[...,0] = (arr[...,0] + arr[...,3])/2.0\n# arr = arr[...,:3]\n \n # arr = np.array(band3image)\n # print('arr shape',arr.shape)\n # if index==0: print(index,'hist before',histogram(arr))\n \n# arr = normalize(arr)\n# arr = standardize(arr)\n# arr = equalize(arr)\n \n# # average red and yellow channels, orange\n# arr[...,0] = (arr[...,0] + arr[...,3])/2.0\n# arr = arr[...,:3]\n \n # if index==0: print(index,'hist after',histogram(arr))\n band3image = PIL.Image.fromarray(arr.astype('uint8'),'RGB')\n# band4image = PIL.Image.fromarray(arr.astype('uint8'),'RGBA')\n\n # histogram equalize each channel\n \n# arr = np.array(band4image)\n# # print('arr',arr.shape)\n# # if index==0: print(index,'hist before',histogram(arr))\n# arr = equalize(arr)\n# # if index==0: print(index,'hist after',histogram(arr))\n# band4image = PIL.Image.fromarray(arr.astype('uint8'),'RGBA')\n \n# return band4image\n return band3image\n# return arr\n\n# band3image = PIL.Image.new(\"RGB\", band4image.size, (255, 255, 255))\n# band3image.paste(band4image, mask=band4image.split()[3]) \n# band3image = band3image.resize((image_size,image_size), PIL.Image.ANTIALIAS)\n# return band3image\n \n \n def _load_multilabel_target(self, index):\n y = self.images_df.iloc[index].Target.split(' ')\n# print(y)\n try:\n yl = list(map(int, y))\n except:\n yl = []\n return yl\n \n \n def collate_func(self, batch):\n labels = None\n images = [x[0] for x in batch]\n \n if self.train_mode:\n labels = [x[1] for x in batch]\n labels_one_hot = self.mlb.fit_transform(labels)\n labels = torch.FloatTensor(labels_one_hot)\n \n \n # return torch.stack(images)[:,:4,:,:], labels\n return torch.stack(images), labels\n",
"_____no_output_____"
],
[
"imean = (0.08069, 0.05258, 0.05487)\nistd = (0.13704, 0.10145, 0.15313)\n\ntrain_aug = A.Compose([\n# A.Rotate((0,30),p=0.75),\n A.RandomRotate90(p=1),\n A.HorizontalFlip(p=0.5),\n A.ShiftScaleRotate(p=0.9),\n# A.RandomBrightness(0.05),\n# A.RandomContrast(0.05),\n A.Normalize(mean=imean, std=istd,max_pixel_value=255.)\n ])\n\ntest_aug = A.Compose([\n A.Normalize(mean=imean, std=istd, max_pixel_value=255.)\n ])\n",
"_____no_output_____"
],
[
"composed_transforms_train = transforms.Compose([\n# transforms.Resize(size=final_size),\n# # transforms.RandomResizedCrop(size=224),\n# transforms.RandomHorizontalFlip(p=0.5),\n# transforms.RandomVerticalFlip(p=0.5),\n# # transforms.RandomRotation(degrees=45),\n# transforms.RandomAffine(degrees=45, translate=(0.1,0.1), shear=10, scale=(0.9,1.1)),\n transforms.ToTensor()\n# transforms.Normalize(mean=[0.456]*4, std=[0.224]*4)\n])\n\ncomposed_transforms_test = transforms.Compose([\n# transforms.Resize(size=final_size),\n transforms.ToTensor()\n# transforms.Normalize(mean=[0.456]*4, std=[0.224]*4)\n])",
"_____no_output_____"
],
[
"eps = 1e-5\ngpu_id = 0\n\nthresh = 0.1\n\n# save_dir_root = os.path.join(os.path.dirname(os.path.abspath(__file__)))\n# exp_name = os.path.dirname(os.path.abspath(__file__)).split('/')[-1]\n\nsave_dir_root = './'\n\ngc.collect()",
"_____no_output_____"
],
[
"fold = -1\n\nif gpu_id >= 0:\n print('Using GPU: {} '.format(gpu_id))\n torch.cuda.set_device(device=gpu_id)\n\ntorch.cuda.empty_cache()\n\nfrom os import listdir\nfrom os.path import isfile, join\nfile_list_x = [f for f in listdir(PATH_TO_EXTERNAL2) if isfile(join(PATH_TO_EXTERNAL2, f))]\nprint(file_list_x[:15],len(file_list_x))",
"Using GPU: 0 \n['ENSG00000000460_24451_224_G2_1_blue.jpg', 'ENSG00000000003_4109_24_H11_1_red.jpg', 'ENSG00000000003_4109_23_H11_1_blue.jpg', 'ENSG00000000003_4109_24_H11_1_blue.jpg', 'ENSG00000000003_4109_24_H11_2_blue.jpg', 'ENSG00000000003_4109_23_H11_1_red.jpg', 'ENSG00000000003_4109_24_H11_1_yellow.jpg', 'ENSG00000000003_4109_23_H11_1_yellow.jpg', 'ENSG00000000003_4109_24_H11_2_red.jpg', 'ENSG00000000003_4109_23_H11_1_green.jpg', 'ENSG00000000003_4109_24_H11_2_green.jpg', 'ENSG00000000003_4109_24_H11_1_green.jpg', 'ENSG00000000003_4109_24_H11_2_yellow.jpg', 'ENSG00000000003_4109_23_H11_2_red.jpg', 'ENSG00000000003_4109_23_H11_2_blue.jpg'] 311022\n"
],
[
"db_xx = MultiBandMultiLabelDataset(dxx, \n base_path=PATH_TO_EXTERNAL2,\n# augmentator=test_aug,\n image_transform=composed_transforms_test,\n external=2)\n\nxxloader = DataLoader(db_xx, collate_fn=db_xx.collate_func,\n batch_size=1, shuffle=False,\n num_workers=1)",
"_____no_output_____"
],
[
"id_list = []\nim_list = []\ny_list = []\nfor i, (im, y) in enumerate(xxloader):\n# if i % 1000 == 0: print(i,id)\n# if i < 63914: continue\n id = str(db_xx.images_df.Id[i])\n im = im.cpu().detach().numpy()[0].transpose(1,2,0)*255\n# print(im.shape,im.min(),im.mean(),im.max())\n im = PIL.Image.fromarray(im.astype('uint8'),'RGB')\n id = PATH_TO_EXTERNAL3 + id[15:]\n im.save(id+'.png',\"PNG\")\n# y = y.cpu().detach().numpy()\n# id_list.append(id)\n# im_list.append(im)\n# y_list.append(y)\n if i % 1000 == 0: print(i,id)\n# if i % 1000 == 0: print(i,id,s,y)\n# if i==10: break",
"0 ./external_data3/ENSG00000000003_4109_24_H11_1\n1000 ./external_data3/ENSG00000008394_44840_559_C9_2\n2000 ./external_data3/ENSG00000023572_57224_980_B6_3\n3000 ./external_data3/ENSG00000047579_29615_511_B1_1\n4000 ./external_data3/ENSG00000060069_70389_1746_A7_17_cr5954c69c25641\n5000 ./external_data3/ENSG00000067225_19421_611_C7_1\n6000 ./external_data3/ENSG00000072952_8704_47_H1_2\n7000 ./external_data3/ENSG00000078053_19828_242_E7_2\n8000 ./external_data3/ENSG00000083544_34925_942_C1_1\n9000 ./external_data3/ENSG00000088538_37543_428_C2_1\n10000 ./external_data3/ENSG00000092621_24031_387_A4_1\n11000 ./external_data3/ENSG00000100105_47893_722_H9_2\n12000 ./external_data3/ENSG00000100749_17929_117_B1_2\n13000 ./external_data3/ENSG00000101940_913_76_B2_2\n14000 ./external_data3/ENSG00000103423_44229_964_F1_1\n15000 ./external_data3/ENSG00000104980_43052_si25_E6_8\n16000 ./external_data3/ENSG00000106028_2866_35_B11_1\n17000 ./external_data3/ENSG00000107937_56468_913_G8_2\n18000 ./external_data3/ENSG00000109606_47047_725_B8_1\n19000 ./external_data3/ENSG00000111424_47740_714_B3_3\n20000 ./external_data3/ENSG00000112984_36910_570_G8_2\n21000 ./external_data3/ENSG00000114867_28487_281_D7_1\n22000 ./external_data3/ENSG00000116171_27135_604_H9_1\n23000 ./external_data3/ENSG00000117751_27452_231_C4_1\n24000 ./external_data3/ENSG00000119878_56765_956_E3_3\n25000 ./external_data3/ENSG00000121964_36694_408_H9_1\n26000 ./external_data3/ENSG00000123989_35123_374_D4_1\n27000 ./external_data3/ENSG00000125651_22793_222_B12_1\n28000 ./external_data3/ENSG00000127124_5728_1248_A3_4\n29000 ./external_data3/ENSG00000129116_35905_404_E2_2\n30000 ./external_data3/ENSG00000130766_18191_151_F12_2\n31000 ./external_data3/ENSG00000132467_35655_1685_F12_2\n32000 ./external_data3/ENSG00000134056_35802_383_G10_3\n33000 ./external_data3/ENSG00000135316_41275_1041_H9_1\n34000 ./external_data3/ENSG00000136451_27520_279_D10_1\n35000 ./external_data3/ENSG00000137486_49318_682_C11_2\n36000 ./external_data3/ENSG00000138496_66708_1366_C12_4\n37000 ./external_data3/ENSG00000140105_5573_110_A1_2\n38000 ./external_data3/ENSG00000141522_10005_921_B9_2\n39000 ./external_data3/ENSG00000143322_1866_59_G5_1\n40000 ./external_data3/ENSG00000144579_62654_1294_C9_1\n41000 ./external_data3/ENSG00000146112_51000_761_E9_2\n42000 ./external_data3/ENSG00000147862_3956_27_H11_2\n43000 ./external_data3/ENSG00000149972_39492_462_F6_2\n44000 ./external_data3/ENSG00000152443_68553_1394_A9_1\n45000 ./external_data3/ENSG00000154370_43879_771_H1_1\n46000 ./external_data3/ENSG00000156689_65454_1559_F7_1\n47000 ./external_data3/ENSG00000158711_28863_295_B11_1\n48000 ./external_data3/ENSG00000160298_35111_624_G12_2\n49000 ./external_data3/ENSG00000162302_8649_635_D3_1\n50000 ./external_data3/ENSG00000163322_37654_750_D2_2\n51000 ./external_data3/ENSG00000164062_29702_365_B3_1\n52000 ./external_data3/ENSG00000164916_18864_150_F8_1\n53000 ./external_data3/ENSG00000165899_40364_418_E10_1\n54000 ./external_data3/ENSG00000166888_1861_26_A4_2\n55000 ./external_data3/ENSG00000167861_31406_820_A10_3\n56000 ./external_data3/ENSG00000168795_21521_1219_A10_2\n57000 ./external_data3/ENSG00000169955_3203_61_H10_1\n58000 ./external_data3/ENSG00000171204_14480_598_G1_1\n59000 ./external_data3/ENSG00000172465_57345_991_C9_1\n60000 ./external_data3/ENSG00000173905_1677_13_C10_1\n61000 ./external_data3/ENSG00000175573_45938_579_D11_1\n62000 ./external_data3/ENSG00000177469_49838_751_B9_5\n63000 ./external_data3/ENSG00000179119_47957_756_E3_6\n64000 
./external_data3/ENSG00000181019_7308_10_A7_1\n65000 ./external_data3/ENSG00000182985_37266_640_E3_1\n66000 ./external_data3/ENSG00000184575_57602_1208_D9_2\n67000 ./external_data3/ENSG00000185946_52434_865_H1_1\n68000 ./external_data3/ENSG00000187954_59543_1218_C5_1\n69000 ./external_data3/ENSG00000196150_20853_190_F9_1\n70000 ./external_data3/ENSG00000197037_55127_911_G1_2\n71000 ./external_data3/ENSG00000198081_50758_1105_C2_2\n72000 ./external_data3/ENSG00000198934_3047_1746_G1_1\n73000 ./external_data3/ENSG00000205220_30225_322_F10_3\n74000 ./external_data3/ENSG00000214827_32040_1550_A1_3\n75000 ./external_data3/ENSG00000237172_62734_1159_G6_2\n76000 ./external_data3/ENSG00000254004_28806_587_B5_1\n77000 ./external_data3/ENSG00000267855_59251_1020_B9_4\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adcca5327c595aa80cd8f6aea6bf93dc043a719
| 59,143 |
ipynb
|
Jupyter Notebook
|
10_per_patient.ipynb
|
pete88b/vulcan_medication_bundle
|
d0239805ec04430abb5e92572e984e2cd343a49c
|
[
"Apache-2.0"
] | null | null | null |
10_per_patient.ipynb
|
pete88b/vulcan_medication_bundle
|
d0239805ec04430abb5e92572e984e2cd343a49c
|
[
"Apache-2.0"
] | null | null | null |
10_per_patient.ipynb
|
pete88b/vulcan_medication_bundle
|
d0239805ec04430abb5e92572e984e2cd343a49c
|
[
"Apache-2.0"
] | null | null | null | 45.147328 | 287 | 0.515057 |
[
[
[
"#default_exp per_patient",
"_____no_output_____"
]
],
[
[
"# Per-Patient\n\n> Create a FHIR bundle of medications for a single patient.\n\nAs the first step in converting FHIR resources to the CDISC \"Concomitant/Prior Medications\" CM domain, we'll create a `Bundle` containing one `Patient` and any number of `MedicationAdministration`, `MedicationDispense`, `MedicationRequest` and `MedicationStatement` resources.\n\nSo that subsequent use of this bundle doesn't have to make any FHIR server requests, the bundle will also contain;\n- `Medication` referenced by `medicationReference`\n- `Condition` and/or `Observation` referenced by `reasonReference`\n\n## See also\n\n[vulcan_medication_bundle_getting_started.ipynb](https://colab.research.google.com/github/pete88b/smart-on-fhir-client-py-demo/blob/main/vulcan_medication_bundle_getting_started.ipynb) explains;\n- Why we are not using `List` and\n- Why we are reading FHIR resources as raw JSON\n\n## TODO: Remove non-concomitant medications from the list\n\nIdentifying concomitant medications might get quite complicated - I'm assuming we won't be able to cover all logic needed when pulling data from the FHIR servers. I think it makes sense to pull all medications, then add a concomitant medication filter as a subsequent step.\n\n### How are we defining concomitant medications? \n\nAny medication \n- that is not the medication being investigated\n- that is being taken while a patient is participating in a study\n\nWe might also want to list subset of concomitant medications - i.e. thoes listed in exclusion criteria, relevant medications that the study would like to follow (e.g. concomitant use of ACE inhibitors might be important but single dose paracetamil might not).\n\nTo know if the medication was being taken while the patient was/is participating in a study, we could\n\ncould compare study participation\n- study participation from `ResearchSubject.period`\n- study duration from `ResearchStudy.period`\n - if either start or end date are missing from `ResearchSubject.period`\n- user specified start and end date\n - if `ResearchStudy` etc are not in FHIR?\n \nwith start and end time of medication \"administration\"\n- `MedicationStatement.effectiveX`, `MedicationStatement.dateAsserted`, `MedicationStatement.dosage`\n - don't forget `MedicationX.status` not-taken etc\n- `MedicationRequest.authoredOn`, `MedicationRequest.encounter`, `MedicationRequest.dosageInstruction`, `MedicationRequest.basedOn`, `MedicationRequest.dispenseRequest` ...\n - Don't forget `MedicationRequest.doNotPerform`\n- `MedicationDispense.daysSupply`, `MedicationDispense.whenPrepared`, `MedicationDispense.whenHandedOver`, `MedicationDispense.dosageInstruction`, `MedicationDispense.partOf`, `MedicationDispense.authorizingPrescription` \n- `MedicationAdministration.effectiveX`, partOf, supportingInformation ...\n\n\n## TODO: think about a \"human in the loop\" to help with things ↑ that will be hard to reliably automate\n\n\n## Next steps\n\nMight we want to\n- define some kind of order of entries in the bundle\n- think about how we handle resources that fail validation \n - We can use https://inferno.healthit.gov/validator/ to validate the bundes created\n - TODO: Can we discuss how we want to action this output?",
"_____no_output_____"
],
[
"## Required resources?\n\nThe CM tab (https://wiki.cdisc.org/display/FHIR2CDISCUG/FHIR+to+CDISC+Mapping+User+Guide+Home FHIR-to-CDISC Mappings xlsx) lists the following resources;\n\n- ~~`ResearchSubject` with `ResearchStudy`~~ NOT YET?\n- `Subject`\n- `MedicationStatement`\n- `MedicationRequest`\n- `MedicationDispense`\n- `MedicationAdministration`\n- ~~`Immunization`~~ don't think we're doing immunization yet?\n- `Medication` referenced by `medicationReference`\n- `Condition` or `Observation` referenced by `reasonReference`\n\nTODO: For now, I'm just pulling the resources that Jay highlighted as required - we can easily add the others (o:",
"_____no_output_____"
]
],
[
[
"#export\nfrom vulcan_medication_bundle.core import *\nfrom pathlib import Path\nimport json",
"_____no_output_____"
],
[
"from io import StringIO\nimport requests",
"_____no_output_____"
],
[
"#demo\nimport pandas as pd",
"_____no_output_____"
],
[
"# api_base = 'http://hapi.fhir.org/baseR4'\napi_base, patient_id = 'https://r4.smarthealthit.org', '11f2b925-43b2-45e4-ac34-7811a9eb9c1b'",
"_____no_output_____"
],
[
"bundle = get_as_raw_json(api_base, 'MedicationRequest', dict(subject=patient_id))\nprint('Patient', patient_id, 'has', len(bundle['entry']), 'MedicationRequest resources')",
"GET https://r4.smarthealthit.org/MedicationRequest?_format=json&subject=11f2b925-43b2-45e4-ac34-7811a9eb9c1b\nPatient 11f2b925-43b2-45e4-ac34-7811a9eb9c1b has 10 MedicationRequest resources\n"
],
[
"# uncomment and run this cell to see the bundle as raw JSON\n# bundle",
"_____no_output_____"
]
],
[
[
"## Create and save a single patient medication bundle\n\nTODO: extract ref for `statusReasonReference` - see https://www.hl7.org/fhir/medicationdispense.html? maybe",
"_____no_output_____"
]
],
[
[
"#export\ndef create_single_patient_medication_bundle(api_base, patient_id):\n \"Return a Bundle containing one Patient and any number of MedicationX resources\"\n result = new_bundle()\n references = []\n for resource_type, url_suffix in [\n ['Patient', dict(_id=patient_id)],\n ['MedicationRequest', dict(subject=f'Patient/{patient_id}')],\n ['MedicationDispense', dict(subject=f'Patient/{patient_id}')],\n ['MedicationAdministration', dict(subject=f'Patient/{patient_id}')],\n ['MedicationStatement', dict(subject=f'Patient/{patient_id}')]]:\n try:\n single_resource_bundle = get_as_raw_json(api_base, resource_type, url_suffix)\n while single_resource_bundle is not None and single_resource_bundle['total'] > 0:\n result['entry'].extend(single_resource_bundle['entry'])\n # TODO: xxx medicationReference and reasonReference might not be enough\n references.extend(extract_references(single_resource_bundle, ['medicationReference', 'reasonReference']))\n single_resource_bundle = get_next_as_raw_json(single_resource_bundle)\n except Exception as ex:\n print(f'Failed to get {resource_type}, {url_suffix} from {api_base}\\n{ex}')\n for reference in set(references):\n try:\n result['entry'].extend(get_by_reference(api_base, reference))\n except Exception as ex:\n print(f'Failed to reference {reference} from {api_base}\\n{ex}')\n return result",
"_____no_output_____"
],
[
"bundle = create_single_patient_medication_bundle(api_base, patient_id)\nbundle ",
"GET https://r4.smarthealthit.org/Patient?_format=json&_id=11f2b925-43b2-45e4-ac34-7811a9eb9c1b\nGET https://r4.smarthealthit.org/MedicationRequest?_format=json&subject=Patient%2F11f2b925-43b2-45e4-ac34-7811a9eb9c1b\nGET https://r4.smarthealthit.org/MedicationDispense?_format=json&subject=Patient%2F11f2b925-43b2-45e4-ac34-7811a9eb9c1b\nGET https://r4.smarthealthit.org/MedicationAdministration?_format=json&subject=Patient%2F11f2b925-43b2-45e4-ac34-7811a9eb9c1b\nGET https://r4.smarthealthit.org/MedicationStatement?_format=json&subject=Patient%2F11f2b925-43b2-45e4-ac34-7811a9eb9c1b\nGET https://r4.smarthealthit.org/Condition/9a459588-d2e2-4f83-8155-327757db91ed?_format=json\nGET https://r4.smarthealthit.org/Condition/7446b6f9-7cc2-4692-8f5e-31e0de1b1a86?_format=json\n"
]
],
[
[
"### What should we do about \"bad\" references?\n\nSome examples of \"bad\" references\n- Invalid reference value – points to a server that doesn't exist\n- Server can’t find resource by ID\n- Unknown reference format ...\n\nWe could\n- Raise an error as soon as we hit a bad reference\n - you won't get any data if there is even 1 problem with a reference )o:\n- Silently ignore problems and just get what we can\n - you will know that references are missing (o: but won't know why )o:\n- Build up a list of issues that can be retured with the patient bundle\n - you can choose what to do about each kind of issue\n - TODO: this is probably the preferred option",
"_____no_output_____"
]
],
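[
[
"# A minimal sketch (not part of the library) of the \"list of issues\" option above: wrap the\n# reference lookups so problems are collected and returned alongside the entries, rather than\n# failing the whole bundle or being silently dropped. `get_by_reference` is the helper already\n# used in `create_single_patient_medication_bundle`.\ndef get_references_with_issues(api_base, references):\n    entries, issues = [], []\n    for reference in set(references):\n        try:\n            entries.extend(get_by_reference(api_base, reference))\n        except Exception as ex:\n            issues.append({'reference': reference, 'error': str(ex)})\n    return entries, issues",
"_____no_output_____"
]
],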
[
[
"#export\ndef save_single_patient_medication_bundle(bundle, output_path='data'):\n \"Write a patient medication bundle to file.\"\n Path(output_path).mkdir(exist_ok=True)\n patient = bundle['entry'][0]['resource']\n if patient['resourceType'] != 'Patient':\n raise Exception(f'expected a patient but found {patient}')\n patient_id = patient['id']\n f_name = f'{output_path}/patient_medication_bundle_{patient_id}.json'\n with open(f_name, 'w') as f:\n json.dump(bundle, f, indent=2)\n print('Bundle saved to', f_name)",
"_____no_output_____"
]
],
[
[
"Now we can save the JSON bundle to file to pass on to the next step of the process (o:",
"_____no_output_____"
]
],
[
[
"save_single_patient_medication_bundle(bundle)",
"Bundle saved to data/patient_medication_bundle_11f2b925-43b2-45e4-ac34-7811a9eb9c1b.json\n"
]
],
[
[
"## Bundle cleanup\n\nThe result of `create_single_patient_medication_bundle` is a `collection`, so we need to remove `search` elements from each `entry`. This removes some validation errors reported by https://inferno.healthit.gov/validator/ - thanks Mike (o:\n\nTODO: Do we care what the `search` element is telling us? i.e. what if it's not `match`?",
"_____no_output_____"
]
],
[
[
"#export\ndef handle_entry_search(bundle):\n \"Remove `search` elements from each `entry`\"\n for entry in bundle['entry']:\n if 'search' in entry: del entry['search']\n return bundle",
"_____no_output_____"
]
],
[
[
"## Bundle filtering\n\nTODO: Moved to 20a_status_filter - clean this up",
"_____no_output_____"
],
[
"### Medication status filter\n\nRemove medication if the status tells us it was not or will not be taken. \n\n- https://www.hl7.org/fhir/valueset-medicationrequest-status.html\n- https://www.hl7.org/fhir/valueset-medicationdispense-status.html\n- https://www.hl7.org/fhir/valueset-medication-admin-status.html\n- https://www.hl7.org/fhir/valueset-medication-statement-status.html\n\n\n#### Statuses that we want to remove from the bundle\n\n- MedicationRequest (Include: active, on-hold, completed, entered-in-error, unknown)\n - cancelled \n - The prescription has been withdrawn before any administrations have occurred\n - stopped \n - Actions implied by the prescription are to be permanently halted, before all of the administrations occurred. \n - TODO: This is a ? **halted, before all ...** i.e. might some of the administrations occured\n - draft\n - The prescription is not yet 'actionable'\n \n \n- MedicationDispense (Include: on-hold, completed, unknown)\n - preparation\t\n - The core event has not started yet, \n - in-progress\n - The dispensed product is ready for pickup\n - cancelled\t\n - The dispensed product was not and will never be picked up by the patient\n - entered-in-error\t\n - The dispense was entered in error and therefore nullified\n - stopped\t\n - Actions implied by the dispense have been permanently halted, before all of them occurred\n - TODO: This is a ? **hatled, before all ...** i.e. might some of the actions occured\n - declined\t\n - The dispense was declined and not performed.\n \n- MedicationAdministration (Include: in-progress, on-hold, completed, stopped, unknown)\n - not-done\t\n - The administration was terminated prior to any impact on the subject\n - entered-in-error\t\n - The administration was entered in error and therefore nullified\n \n- MedicationStatement (Include: active, completed, entered-in-error, intended, stopped, on-hold, unknown)\n - not-taken\n - The medication was not consumed by the patient",
"_____no_output_____"
],
[
"#### What if these statuses are not appropriate for every study?\n\nIt's possible that a study needs to see medication records confirming that a medication was not taken.\n\ni.e. If previous treatment with a medication is an exclusion criteria, absense of a medication record might not be enough to be sure the patient didn't take it.\n\nSo we'll need to make filters configurable ...",
"_____no_output_____"
],
[
"#### Should we always run the status filter?\n\nThe CMOCCUR part of FHIR-to-CDISC Mappings xlsx includes status filtering instructions.\n\ni.e We might not want to implement status filtering on the patient medication bundle.\n\nSo we'll need to make filters optional ...",
"_____no_output_____"
]
],
[
[
"#export\ndef medication_status_filter(entry):\n \"Remove medications if the status tells us the medication was not or will not be taken\"\n statuses_to_remove_map = dict(\n MedicationRequest=['cancelled','stopped','draft'],\n MedicationDispense=['preparation','in-progress','cancelled','entered-in-error','stopped','declined'],\n MedicationAdministration=['not-done','entered-in-error'],\n MedicationStatement=['not-taken'])\n resource = entry.get('resource', {})\n resourceType, status = resource.get('resourceType'), resource.get('status')\n statuses_to_remove = statuses_to_remove_map.get(resourceType)\n if statuses_to_remove is not None and status in statuses_to_remove:\n print('Removing', resourceType, 'with status', status)\n return False\n return True",
"_____no_output_____"
]
],
[
[
"### \"Do Not Perform\" filter",
"_____no_output_____"
]
],
[
[
"#export\ndef do_not_perform_filter(entry):\n \"Remove medications that have the `doNotPerform` flag set to true\"\n resource = entry.get('resource', {})\n if resource.get('doNotPerform', False):\n print('Removing', resource.get('resourceType'), 'with doNotPerform = true')\n return False\n return True",
"_____no_output_____"
]
],
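[
[
"# `filter_bundle` is used in the walkthrough below but lives in the 20a_status_filter notebook;\n# a minimal sketch of what it is assumed to do here: keep only the entries the given filter accepts.\ndef filter_bundle(bundle, entry_filter):\n    bundle['entry'] = [entry for entry in bundle['entry'] if entry_filter(entry)]\n    return bundle",
"_____no_output_____"
]
],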
[
[
"## Create medication bundles for all subjects in a study",
"_____no_output_____"
],
[
"When the HAPI FHIR server is available, we should be able to do something like\n\n```\napi_base = 'http://hapi.fhir.org/baseR4'\n```\nFind a patient in a study\n```\nget_as_raw_json(api_base, 'ResearchSubject')\n```\nList all resources associated with a study\n```\nresearch_study_id = 1171831\n# Note: &_revinclude=* gives us everything refering to the study\nget_as_raw_json(api_base, 'ResearchStudy', dict(_id=research_study_id, _revinclude='*'))\n```\nPick a patient from the above bundle and pull medication requests ↓\n```\n# 'subject': {'reference': 'Patient/0c4a1143-8d1c-42ed-b509-eac97d77c9b2'\nget_as_raw_json(api_base, 'MedicationRequest', dict(subject='0c4a1143-8d1c-42ed-b509-eac97d77c9b2'))\n```\nCreate medication bundles for all subjects in a study ↓\n```\nstudy_and_subject_bundle = get_as_raw_json(\n api_base, 'ResearchStudy', \n dict(_id=research_study_id, _revinclude='ResearchSubject:study'))\nfor i, entry in enumerate(study_and_subject_bundle['entry']):\n resource = entry.get('resource', {})\n if resource.get('resourceType', 'unk') != 'ResearchSubject': continue\n patient_reference = resource.get('individual',{}).get('reference')[8:]\n bundle = create_single_patient_medication_bundle(api_base, patient_reference)\n bundle = handle_entry_search(bundle)\n bundle = filter_bundle(bundle, medication_status_filter)\n bundle = filter_bundle(bundle, do_not_perform_filter)\n save_single_patient_medication_bundle(bundle)\n if i>1: break # stop early (o:\n```\nNote: ↑ We're starting to build a bundle processing pipeline (by adding calls to `handle_entry_search` and `filter_bundle`) - and we'll add more functions like this to remove non-concomitant medications etc",
"_____no_output_____"
],
[
"## Convert FHIR bundle to SDTM csv\n\nJay Gustafson built https://mylinks-prod-sdtmtool.azurewebsites.net/TransformBundle that allows parsing a FHIR bundle into SDTM csv content.\n\nAlso, you can POST a raw json string to https://mylinks-prod-sdtmtool.azurewebsites.net/TransformBundle/Process and it will return a JSON object containing the SDTM csv content in the following structure:\n```\n{'cmcsv': '\"STUDYID\",\"DOMAIN\",\"USUBJID\",...\\r\\n\"RWD-STUDY-01\",\"CM\",\"RWD-SUBJECT-01-30\",...\\r\\n',\n 'suppcmcsv': '\"STUDYID\",\"RDOMAIN\",\"USUBJID\",\"IDVAR\",\"IDVARVAL\",\"QNAM\",\"QLABEL\",\"QVAL\"\\r\\n\"RWD-STUDY-01\",\"CM\",\"RWD-SUBJECT-01-30\",\"CMSEQ\",\"1\",\"CMSOURCE\",\"Resource Name\",\"MedicationRequest\"\\r\\n...',\n 'dmcsv': '\"STUDYID\",\"DOMAIN\",\"USUBJID\",...\\r\\n\"RWD-STUDY-01\",\"DM\",\"RWD-SUBJECT-01-30\",...\\r\\n'}\n```",
"_____no_output_____"
]
],
[
[
"#demo\nresponse = requests.post('https://mylinks-prod-sdtmtool.azurewebsites.net/TransformBundle/Process', json=bundle)",
"_____no_output_____"
]
],
[
[
"### View the response as a table",
"_____no_output_____"
]
],
[
[
"#demo\npd.read_csv(StringIO(response.json()['cmcsv']))",
"_____no_output_____"
],
[
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted 00_core.ipynb.\nConverted 10_per_patient.ipynb.\nConverted 20a_status_filter.ipynb.\nConverted 30_cli.ipynb.\nConverted 50_web_app.ipynb.\nConverted 50a_web_demo.ipynb.\nConverted index.ipynb.\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4adcd4621b2135e09fc4f55e3ff0d5ce3036b3e1
| 125,102 |
ipynb
|
Jupyter Notebook
|
lessons/notebooks/Probability-Theory-Review.ipynb
|
charlielu05/BMLIP
|
70c4c3810e0fdea42d611b6c4aab9003506dc243
|
[
"CC-BY-3.0"
] | 1 |
2021-08-07T08:06:06.000Z
|
2021-08-07T08:06:06.000Z
|
lessons/notebooks/Probability-Theory-Review.ipynb
|
charlielu05/BMLIP
|
70c4c3810e0fdea42d611b6c4aab9003506dc243
|
[
"CC-BY-3.0"
] | null | null | null |
lessons/notebooks/Probability-Theory-Review.ipynb
|
charlielu05/BMLIP
|
70c4c3810e0fdea42d611b6c4aab9003506dc243
|
[
"CC-BY-3.0"
] | null | null | null | 80.815245 | 44,822 | 0.797293 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4adcd663c88b34a8d70743150365d9ff45ee62a1
| 29,614 |
ipynb
|
Jupyter Notebook
|
course1_foundation/SFrame_quiz.ipynb
|
EM-GEE/machine_learning_specialization
|
5d778d26de89d1dc82f59824d5c24d0d0e7c42f8
|
[
"MIT"
] | null | null | null |
course1_foundation/SFrame_quiz.ipynb
|
EM-GEE/machine_learning_specialization
|
5d778d26de89d1dc82f59824d5c24d0d0e7c42f8
|
[
"MIT"
] | null | null | null |
course1_foundation/SFrame_quiz.ipynb
|
EM-GEE/machine_learning_specialization
|
5d778d26de89d1dc82f59824d5c24d0d0e7c42f8
|
[
"MIT"
] | null | null | null | 57.169884 | 1,334 | 0.551057 |
[
[
[
"# Check the version / executable of python installation.",
"_____no_output_____"
]
],
[
[
"import sys\nsys.executable",
"_____no_output_____"
]
],
[
[
"# Import packages",
"_____no_output_____"
]
],
[
[
"import turicreate",
"_____no_output_____"
]
],
[
[
"# Load data",
"_____no_output_____"
]
],
[
[
"sf = turicreate.SFrame('../data/people_wiki.sframe')",
"_____no_output_____"
],
[
"sf",
"_____no_output_____"
]
],
[
[
"# Quiz Questions & Solutions",
"_____no_output_____"
]
],
[
[
"# first question\nsf.head()",
"_____no_output_____"
],
[
"# second question\nsf.num_rows()",
"_____no_output_____"
],
[
"# Third question\nsf[sf.num_rows()-1]",
"_____no_output_____"
],
[
"# Fourth question\nsf[2]\nsf[(sf['name'] == 'Harpdog Brown')]['text']",
"_____no_output_____"
],
[
"#Fifth question\nsf.sort('text',ascending=True)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4adcdf1461cf85aa60c0573f5600863d4d0c8f0f
| 763 |
ipynb
|
Jupyter Notebook
|
how to flatten any list.ipynb
|
leylalkan/Python-Repo
|
7026ee7942b2382104918db80e0938345964d0fa
|
[
"MIT"
] | null | null | null |
how to flatten any list.ipynb
|
leylalkan/Python-Repo
|
7026ee7942b2382104918db80e0938345964d0fa
|
[
"MIT"
] | null | null | null |
how to flatten any list.ipynb
|
leylalkan/Python-Repo
|
7026ee7942b2382104918db80e0938345964d0fa
|
[
"MIT"
] | null | null | null | 20.078947 | 108 | 0.537353 |
[
[
[
"flattened_list = []\ndef flatten_any_list(l):\n x = [flattened_list.append(i) if type(i) != list else flatten_any_list(i) for i in l] \n return flattened_list",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4adcf043f9ff64d52906d6464fe01d693ae7d950
| 109,190 |
ipynb
|
Jupyter Notebook
|
business/01_Understanding Customer.ipynb
|
ecosystemai/ecosystem-algorithms
|
5f807b166320501d859314c49efcfb5ea0107860
|
[
"MIT"
] | 2 |
2020-08-30T12:50:37.000Z
|
2021-02-03T08:54:58.000Z
|
business/01_Understanding Customer.ipynb
|
ecosystemai/ecosystem-algorithms
|
5f807b166320501d859314c49efcfb5ea0107860
|
[
"MIT"
] | null | null | null |
business/01_Understanding Customer.ipynb
|
ecosystemai/ecosystem-algorithms
|
5f807b166320501d859314c49efcfb5ea0107860
|
[
"MIT"
] | 1 |
2020-09-02T17:34:36.000Z
|
2020-09-02T17:34:36.000Z
| 128.61013 | 9,332 | 0.873276 |
[
[
[
"## Introduction",
"_____no_output_____"
],
[
"**Offer Recommender example:** \n___\nIn this example we will show how to:\n- Setup the required environment for accessing the ecosystem prediction server.\n- View and track business performance of the Offer Recommender.",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
],
[
"**Setting up import path:** \n___\nAdd path of ecosystem notebook wrappers. It needs to point to the ecosystem notebook wrapper to allow access to the packages required for running the prediction server via python.\n- **notebook_path:** Path to notebook repository. ",
"_____no_output_____"
]
],
[
[
"notebook_path = \"/path of to ecosystem notebook repository\"",
"_____no_output_____"
],
[
"# ---- Uneditible ----\nimport sys\nsys.path.append(notebook_path)\n# ---- Uneditible ----",
"_____no_output_____"
]
],
[
[
"**Import required packages:** \n___\nImport and load all packages required for the following usecase.",
"_____no_output_____"
]
],
[
[
"# ---- Uneditible ----\nimport pymongo\nfrom bson.son import SON\nimport pprint\nimport pandas as pd\nimport json\nimport numpy\nimport operator\nimport datetime\nimport time\nimport os\nimport matplotlib.pyplot as plt\n\nfrom prediction import jwt_access\nfrom prediction import notebook_functions\nfrom prediction.apis import functions\nfrom prediction.apis import data_munging_engine\nfrom prediction.apis import data_management_engine\nfrom prediction.apis import worker_h2o\nfrom prediction.apis import prediction_engine\nfrom prediction.apis import worker_file_service\n\n%matplotlib inline\n# ---- Uneditible ----",
"_____no_output_____"
]
],
[
[
"**Setup prediction server access:** \n___\nCreate access token for prediction server.\n- **url:** Url for the prediction server to access.\n- **username:** Username for prediction server.\n- **password:** Password for prediction server.",
"_____no_output_____"
]
],
[
[
"url = \"http://demo.ecosystem.ai:3001/api\"\nusername = \"[email protected]\"\npassword = \"cd486be3-9955-4364-8ccc-a9ab3ffbc168\"",
"_____no_output_____"
],
[
"# ---- Uneditible ----\nauth = jwt_access.Authenticate(url, username, password)\n# ---- Uneditible ----",
"Login Successful.\n"
],
[
"database = \"master\"\ncollection = \"bank_customer\"\nfield = \"{}\"\nlimit = 100\nprojections = \"{}\"\nskip = 0",
"_____no_output_____"
],
[
"output = data_management_engine.get_data(auth, database, collection, field, limit, projections, skip)",
"get /getMongoDBFind?database=master&collection=bank_customer&field={}&limit=100&projections={}&skip=0&\n"
],
[
"df = pd.DataFrame(output)\ndf.head()",
"_____no_output_____"
],
[
"counts = df[\"education\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"gender\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"changeIndicatorThree\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"language\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"numberOfProducts\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"changeIndicatorSix\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"numberOfChildren\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"changeIndicatorSix\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"numberOfChildren\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"numberOfAddresses\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"segment_enum\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"region\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"age\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
],
[
"counts = df[\"proprtyOwnership\"].value_counts()\ncounts.plot(kind=\"bar\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4adcf7440391cbdd23851b5ec0f2b6b2b62adbe0
| 6,232 |
ipynb
|
Jupyter Notebook
|
collocation.ipynb
|
bingjeff/colloxate
|
11017d8a8518d064736f58d2784024f00ad154cb
|
[
"MIT"
] | null | null | null |
collocation.ipynb
|
bingjeff/colloxate
|
11017d8a8518d064736f58d2784024f00ad154cb
|
[
"MIT"
] | 3 |
2020-03-24T18:38:59.000Z
|
2020-08-11T03:00:13.000Z
|
collocation.ipynb
|
bingjeff/colloxate
|
11017d8a8518d064736f58d2784024f00ad154cb
|
[
"MIT"
] | null | null | null | 28.851852 | 256 | 0.49663 |
[
[
[
"import jax.numpy as jnp\nimport numpy as np\n\nimport jax\nimport urdf_loader",
"_____no_output_____"
],
[
"chain = urdf_loader.read_chain_from_urdf('data/kuka_iiwa.urdf', 'lbr_iiwa_link_0', 'lbr_iiwa_link_7')\nkinematics = urdf_loader.make_kinematic_chain_function(chain)\nkinematics_j = jax.jit(kinematics)\nzero_pose = jnp.array([0., 0., 0., 0., 0., 0., 0.])\nkinematics(zero_pose)",
"_____no_output_____"
],
[
"joint_configurations = np.random.rand(100, 7)\nans = jax.vmap(kinematics_j)(joint_configurations)",
"_____no_output_____"
],
[
"%timeit jax.vmap(kinematics_j)(joint_configurations)",
"737 µs ± 24.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
],
[
"%timeit jax.vmap(kinematics)(joint_configurations)",
"985 ms ± 56.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
],
[
"%timeit kinematics_j(zero_pose)",
"_____no_output_____"
]
],
[
[
"The problem that I nominally want to solve is:\n\n$$\n\\min_{x(t)} \\int_0^T L\\left(x(t), u(t)\\right)dt + \\Phi\\left(x(T)\\right) \\\\\ns.t. \\\\\n\\dot{x}(t) = f\\left(x(t), u(t)\\right)\\\\\nx_l < x(t) < x_u\n$$",
"_____no_output_____"
],
[
"We can discretize the problem with $h=t_{k+1} - t_k$:\n\n$$\n\\min_{\\hat{x}, \\hat{u}} \\sum_k^{N-1} \\frac{h}{2}\\left(L_{k+1} + L_k\\right) + \\Phi\\left(x_N\\right)\n \\\\\ns.t.\\\\\nx_c = \\left(\\frac{1}{2}\\left(x_{k+1} + x_k\\right) - \\frac{h}{8}\\left(f_{k+1} - f_k\\right)\\right) \\\\\n0 = \\frac{3}{2h}\\left(\\left(x_{k+1} - x_k\\right) - \\frac{h}{6}\\left(f_{k+1} + 4f_c + f_k\\right) \\right) \\\\\nx_l < x_k < x_u\n$$\n\n",
"_____no_output_____"
],
[
"Including the equality constraint gives the following problem:\n\n$$\n\\min_{\\hat{x}, \\hat{u}, \\hat{\\lambda}} \\sum_k^{N-1} \\frac{h}{2}\\left(L_{k+1} + L_k\\right) + \\sum_k^{N-1} \\lambda_k \\left(\\left(x_{k+1} - x_k\\right) - \\frac{h}{6}\\left(f_{k+1} + 4f_c + f_k\\right) \\right) + \\Phi\\left(x_N\\right)\n \\\\\ns.t.\\\\\nx_l < x_k < x_u\n$$",
"_____no_output_____"
],
[
"The first-order conditions are then:\n$$\n\\frac{\\partial V}{\\partial x_0} = \n\\frac{h}{2}\\frac{\\partial L_0}{\\partial x_0} + \n\\lambda_0 \\left(-1 - \\frac{h}{6}\\left(4\\frac{\\partial f_c}{\\partial x_0} + \\frac{\\partial f_0}{\\partial x_0}\\right)\\right)\n$$\n\n$$\n\\frac{\\partial V}{\\partial x_{k+1}} = \n\\frac{h}{2}\\frac{\\partial L_{k+1}}{\\partial x_{k+1}} + \n\\lambda_{k+1} \\left(-1 - \\frac{h}{6}\\left(4\\frac{\\partial f_{c+1}}{\\partial x_{k+1}}+\\frac{\\partial f_{k+1}}{\\partial x_{k+1}}\\right)\\right)\n+\n\\frac{h}{2}\\frac{\\partial L_{k+1}}{\\partial x_{k+1}} + \n\\lambda_k \\left(1 - \\frac{h}{6}\\left(\\frac{\\partial f_{k+1}}{\\partial x_{k+1}} + 4\\frac{\\partial f_c}{\\partial x_{k+1}}\\right)\\right)\n$$\n\n$$\n\\frac{\\partial V}{\\partial x_N} = \\frac{\\partial \\Phi}{\\partial x_N}\n$$\n\n$$\n\\frac{\\partial V}{\\partial u_0} = \n\\frac{h}{2}\\frac{\\partial L_0}{\\partial u_0} + \n\\lambda_0 \\left(- \\frac{h}{6}\\left(4\\frac{\\partial f_c}{\\partial u_0} + \\frac{\\partial f_0}{\\partial u_0}\\right)\\right)\n$$\n\n$$\n\\frac{\\partial V}{\\partial u_{k+1}} = \n\\frac{h}{2}\\frac{\\partial L_{k+1}}{\\partial u_{k+1}} + \n\\lambda_{k+1} \\left(- \\frac{h}{6}\\left(4\\frac{\\partial f_{c+1}}{\\partial u_{k+1}}+\\frac{\\partial f_{k+1}}{\\partial u_{k+1}}\\right)\\right)\n+\n\\frac{h}{2}\\frac{\\partial L_{k+1}}{\\partial u_{k+1}} + \n\\lambda_k \\left(- \\frac{h}{6}\\left(\\frac{\\partial f_{k+1}}{\\partial u_{k+1}} + 4\\frac{\\partial f_c}{\\partial u_{k+1}}\\right)\\right)\n$$\n\n$$\n\\frac{\\partial V}{\\partial \\lambda_k} = \\left(x_{k+1} - x_k\\right) - \\frac{h}{6}\\left(f_{k+1} + 4f_c + f_k\\right)\n$$",
"_____no_output_____"
]
],
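[
[
"A minimal JAX sketch (my own addition, not taken from a reference implementation) of the Hermite-Simpson defect used in the discretization above; `dynamics_fn` is a hypothetical stand-in for $f(x, u)$, and the defect below is what the collocation constraints drive to zero.",
"_____no_output_____"
]
],
[
[
"# Sketch (editor's addition): Hermite-Simpson collocation defect for one segment.\nimport jax.numpy as jnp\n\ndef hermite_simpson_defect(dynamics_fn, x_k, x_kp1, u_k, u_kp1, h):\n    # dynamics at the segment endpoints\n    f_k = dynamics_fn(x_k, u_k)\n    f_kp1 = dynamics_fn(x_kp1, u_kp1)\n    # interpolated collocation (midpoint) state and control\n    x_c = 0.5 * (x_kp1 + x_k) - (h / 8.0) * (f_kp1 - f_k)\n    u_c = 0.5 * (u_kp1 + u_k)\n    f_c = dynamics_fn(x_c, u_c)\n    # Simpson-rule defect; the optimizer drives this to zero\n    return (x_kp1 - x_k) - (h / 6.0) * (f_kp1 + 4.0 * f_c + f_k)\n\n# tiny smoke test with the linear system xdot = -x (control unused)\nf = lambda x, u: -x\nprint(hermite_simpson_defect(f, jnp.array([1.0]), jnp.array([0.9]), jnp.array([0.0]), jnp.array([0.0]), 0.1))",
"_____no_output_____"
]
],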
[
[
"Plan is to just take the hamiltonian\nand discretize it temporaly, then use the magic of ",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
4adcfa98f329e1f28876e7ae0bc55b7ce4c6253b
| 892 |
ipynb
|
Jupyter Notebook
|
_notebooks/2022-02-09-deploying-ml-models.ipynb
|
kargarisaac/Isaac-Kargar
|
bf62cd2b97cd495500720e83d1bf943cb47af6ba
|
[
"Apache-2.0"
] | null | null | null |
_notebooks/2022-02-09-deploying-ml-models.ipynb
|
kargarisaac/Isaac-Kargar
|
bf62cd2b97cd495500720e83d1bf943cb47af6ba
|
[
"Apache-2.0"
] | null | null | null |
_notebooks/2022-02-09-deploying-ml-models.ipynb
|
kargarisaac/Isaac-Kargar
|
bf62cd2b97cd495500720e83d1bf943cb47af6ba
|
[
"Apache-2.0"
] | null | null | null | 21.756098 | 90 | 0.557175 |
[
[
[
"# \"Deploying Machine Learning Models using Docker and AWS Elastic Beanstalk\"\n> \"We will see how to use a trained ML model and use it in a web service.\"\n\n- toc: True\n- branch: master\n- badges: true\n- comments: true\n- categories: [data engineering, jupyter]\n- image: images/some_folder/your_image.png\n- hide: false\n- search_exclude: true",
"_____no_output_____"
],
[
"This post is based on week 5 of the ML engineering zoomcamp course by DataTalksClub.",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown"
]
] |
4add19957245c1af3c6d1df6aa0499ec127e7a89
| 120,781 |
ipynb
|
Jupyter Notebook
|
chapter10/fashionMNIST_dropout_adversarial_generation.ipynb
|
christian-westbrook/strengthening-dnns
|
5dc306e7127523eb893b07c1d2fa8ad936dadd97
|
[
"MIT"
] | 36 |
2019-06-01T16:15:07.000Z
|
2022-01-03T08:58:52.000Z
|
chapter10/fashionMNIST_dropout_adversarial_generation.ipynb
|
JafarBadour/strengthening-dnns
|
bf8af0aedcfd724f9a0728cc39e0d26f929b9fb2
|
[
"MIT"
] | 7 |
2020-03-14T07:49:23.000Z
|
2022-03-25T16:21:23.000Z
|
chapter10/fashionMNIST_dropout_adversarial_generation.ipynb
|
JafarBadour/strengthening-dnns
|
bf8af0aedcfd724f9a0728cc39e0d26f929b9fb2
|
[
"MIT"
] | 12 |
2019-08-13T18:40:28.000Z
|
2022-03-18T05:02:27.000Z
| 401.265781 | 110,192 | 0.930585 |
[
[
[
"> Code to accompany **Chapter 10: Defending Against Adversarial Inputs**",
"_____no_output_____"
],
[
"# Fashion-MNIST - Generating Adversarial Examples on a Drop-out Network\n\nThis notebook demonstrates how to generate adversarial examples using a network that incorporates randomised drop-out.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\n\nfashion_mnist = tf.keras.datasets.fashion_mnist\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()\ntrain_images = train_images/255.0\ntest_images = test_images/255.0\nclass_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']",
"_____no_output_____"
]
],
[
[
"## Create a Simple Network with drop-out for Image Classification\n\nWe need to use the Keras __functional API__ (rather than the sequential API) to access the \ndropout capability with `training = True` at test time. \n\nThe cell below has drop-out enabled at training time only. You can experiment by moving the drop-out layer \nor adding drop-out to test time by replacing the `Dropout` line as indicated in the comments.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Input, Dense, Flatten, Dropout\nfrom tensorflow.keras.models import Model\n\ninputs = Input(shape=(28,28))\nx = Flatten()(inputs)\nx = Dense(56, activation='relu')(x)\nx = Dropout(0.2)(x) # Use this line for drop-out at training time only\n# x = Dropout(0.2)(x, training=True) # Use this line instead for drop-out at test and training time\nx = Dense(56, activation='relu')(x)\npredictions = Dense(10, activation='softmax')(x)\nmodel = Model(inputs=inputs, outputs=predictions)\n\nprint(model)\n\nmodel.compile(optimizer=tf.train.AdamOptimizer(),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\nmodel.summary()",
"WARNING:tensorflow:From C:\\Users\\katyw\\Anaconda3\\envs\\strengthening-dnns\\lib\\site-packages\\tensorflow\\python\\ops\\resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From C:\\Users\\katyw\\Anaconda3\\envs\\strengthening-dnns\\lib\\site-packages\\tensorflow\\python\\keras\\layers\\core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n<tensorflow.python.keras.engine.training.Model object at 0x000001A6A8C2BD68>\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 28, 28) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense (Dense) (None, 56) 43960 \n_________________________________________________________________\ndropout (Dropout) (None, 56) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 56) 3192 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 570 \n=================================================================\nTotal params: 47,722\nTrainable params: 47,722\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"Train the model and evaluate it.\n\nIf drop-out is included at test time, the model will be unpredictable.",
"_____no_output_____"
]
],
[
[
"model.fit(train_images, train_labels, epochs=6)\ntest_loss, test_acc = model.evaluate(test_images, test_labels)\nprint('Model accuracy based on test data:', test_acc)",
"Epoch 1/6\n60000/60000 [==============================] - 4s 64us/sample - loss: 0.5789 - acc: 0.7922\nEpoch 2/6\n60000/60000 [==============================] - 4s 60us/sample - loss: 0.4324 - acc: 0.8424\nEpoch 3/6\n60000/60000 [==============================] - 4s 62us/sample - loss: 0.3964 - acc: 0.8555\nEpoch 4/6\n60000/60000 [==============================] - 4s 61us/sample - loss: 0.3782 - acc: 0.8608\nEpoch 5/6\n60000/60000 [==============================] - 4s 75us/sample - loss: 0.3609 - acc: 0.8662\nEpoch 6/6\n60000/60000 [==============================] - 4s 62us/sample - loss: 0.3534 - acc: 0.8690\n10000/10000 [==============================] - 0s 41us/sample - loss: 0.3635 - acc: 0.8677\nModel accuracy based on test data: 0.8677\n"
]
],
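[
[
"# Hedged check (editor's addition): a quick way to see whether drop-out is active\n# at test time. With the default model above (drop-out at training time only) the\n# two passes match; with the training=True variant they generally differ.\np1 = model.predict(test_images[:5])\np2 = model.predict(test_images[:5])\nprint(np.allclose(p1, p2))",
"_____no_output_____"
]
],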
[
[
"## Create Some Adversarial Examples Using the Model",
"_____no_output_____"
]
],
[
[
"# Import helper function\nimport sys\nsys.path.append('..')\nfrom strengtheningdnns.adversarial_utils import generate_adversarial_data",
"_____no_output_____"
],
[
"import foolbox\nfmodel = foolbox.models.TensorFlowModel.from_keras(model, bounds=(0, 255))\n\nnum_images = 1000\nx_images = train_images[0:num_images, :]\n\nattack_criterion = foolbox.criteria.Misclassification()\nattack_fn = foolbox.attacks.GradientSignAttack(fmodel, criterion=attack_criterion)\nx_adv_images, x_adv_perturbs, x_labels = generate_adversarial_data(original_images = x_images, \n predictions = model.predict(x_images), \n attack_fn = attack_fn)",
"C:\\Users\\katyw\\Anaconda3\\envs\\strengthening-dnns\\lib\\site-packages\\foolbox\\attacks\\base.py:148: UserWarning: GradientSignAttack did not find an adversarial, maybe the model or the criterion is not supported by this attack.\n ' attack.'.format(self.name()))\n"
]
],
[
[
"## Take a Peek at some Results\n\nThe adversarial examples plotted should all be misclassified. However, if the model is running with drop-out at test \ntime also (see model creation above), they may be classified correctly due to uncertainty of the model's behaviour.",
"_____no_output_____"
]
],
[
[
"images_to_plot = x_adv_images\n\nimport matplotlib.pyplot as plt\nadversarial_predictions = model.predict(images_to_plot)\n\nplt.figure(figsize=(15, 30))\nfor i in range(30):\n plt.subplot(10,5,i+1)\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n plt.imshow(images_to_plot[i], cmap=plt.cm.binary)\n predicted_label = np.argmax(adversarial_predictions[i])\n original_label = x_labels[i]\n if predicted_label == original_label:\n color = 'blue'\n else:\n color = 'red'\n plt.xlabel(\"{} ({})\".format(class_names[predicted_label], \n class_names[original_label]), \n color=color)\n \n",
"_____no_output_____"
]
],
[
[
"Save the images if you wish so you can load them later.",
"_____no_output_____"
]
],
[
[
"np.save('../resources/test_images_GSAttack_dropout', x_adv_images)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4add3ded33752fb1cf2d95adcb060357a22faa1a
| 1,086 |
ipynb
|
Jupyter Notebook
|
cclhm0069/mod4b/sem13.ipynb
|
ericbrasiln/intro-historia-digital
|
5733dc55396beffeb916693c552fd4eb987472d0
|
[
"MIT"
] | null | null | null |
cclhm0069/mod4b/sem13.ipynb
|
ericbrasiln/intro-historia-digital
|
5733dc55396beffeb916693c552fd4eb987472d0
|
[
"MIT"
] | null | null | null |
cclhm0069/mod4b/sem13.ipynb
|
ericbrasiln/intro-historia-digital
|
5733dc55396beffeb916693c552fd4eb987472d0
|
[
"MIT"
] | null | null | null | 20.490566 | 135 | 0.518416 |
[
[
[
"# Semana 13\n\n## Módulo 4b: Programação para História\n\n**Período**: 24/01/2022 a 28/01/2022\n\n**CH**: 2h30",
"_____no_output_____"
],
[
"## Live Coding 1 (AS)\n\n**Tema**: Python do Zero I\n\n**Data**: 26/01/2022, às 19h\n\n**CH**: 2h30\n\n**Plataforma**: `Google Meet` - link enviado por e-mail.\n\n```{Attention}\n[_Clique aqui para acessar a apresentação da aula_](https://ericbrasiln.github.io/intro-historia-digital/mod4b/sem13_ap.html).\n```",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown"
]
] |
4add4d1562b4892ec6e40c9be910bbe9c2d87b4c
| 3,598 |
ipynb
|
Jupyter Notebook
|
2020/sem1/lecture11_optimization_techniques/repetition.ipynb
|
ivafanas/cpp_shad_students
|
3c78d1675494b963cd03a123d96d4a601bbd04cd
|
[
"MIT"
] | 12 |
2019-08-18T19:34:26.000Z
|
2022-03-17T13:33:24.000Z
|
2020/sem1/lecture11_optimization_techniques/repetition.ipynb
|
ivafanas/cpp_shad_students
|
3c78d1675494b963cd03a123d96d4a601bbd04cd
|
[
"MIT"
] | 21 |
2017-09-20T05:26:16.000Z
|
2019-10-19T10:13:47.000Z
|
2020/sem1/lecture11_optimization_techniques/repetition.ipynb
|
ivafanas/cpp_shad_students
|
3c78d1675494b963cd03a123d96d4a601bbd04cd
|
[
"MIT"
] | 14 |
2017-11-14T19:39:10.000Z
|
2022-01-05T02:09:25.000Z
| 31.017241 | 92 | 0.489717 |
[
[
[
"**Вопросы для повторения:**\n\n* Какие структуры данных будем использовать для задачи:\n * Студенты университета имеют идентификаторы - целые числа 1, 2, 3, ..., N.<br />\n Нужно по id студента находить его имя.\n * Студенты университета имеют идентификаторы - произвольные целые числа.<br />\n Нужно по id студента находить его имя.\n * Студенты университета имеют идентификаторы - произвольные целые числа.<br />\n Нужно по целому числу узнавать, существует ли студент с таким id.\n * У человека максимум 32 зуба (предположим).<br />\n Нужно для человека хранить информацию, какие зубы у него есть.\n * У человека максимум 40 зубов (скорее всего).<br />\n Нужно для человека хранить информацию, какие зубы у него есть.\n\n\n* Что такое функтор? Что такое лямбда?\n\n\n* Что здесь происходит?\n\n ```c++\n auto make_age_less_filter(int x)\n {\n return [&x](const Person& person){ return person.age < x; };\n }\n\n std::copy_if(people.begin(),\n people.end(),\n make_age_less_filter(33),\n std::ostream_iterator<Person>(std::cout, \"\\n\"));\n ```\n\n\n* Понять что происходит, исправить ошибки и упростить:\n\n ```c++\n struct Person\n {\n std::string name;\n int age;\n };\n\n int function(std::vector<Persion> people)\n {\n auto tmp = people;\n\n std::remove_if(tmp.begin(), tmp.end(), [](const Person& p){\n return p.age < 33;\n });\n\n std::sort(tmp.begin(), tmp.end(), [](const Person& lhs, const Person& rhs){\n reutrn lhs.age < rhs.age;\n });\n\n auto it = std::find_if(tmp.rbegin(), tmp.rend(), [](const Person& p){\n return p.name == \"Fedor\";\n });\n\n return it->age;\n }\n ```\n\n* Понять что происходит, исправить ошибки и упростить:\n\n ```c++\n std::set<int, int> function(const std::vector<Person>& people)\n {\n std::map<int, std::vector<std::string>> age_to_names;\n for (const auto& p : people)\n age_to_names[p.age].push_back(p.name);\n\n for (auto& age_and_names : age_to_names)\n {\n auto names = age_and_names.second;\n std::sort(names.begin(), names.end());\n }\n\n std::set<int, int> rv;\n for (auto& age_and_names : age_to_names)\n rv[age_and_names.first] = age_and_names.second.size();\n\n return rv;\n }\n ```",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown"
]
] |
4add5cd437c2aab45026996c27ea6c927a738c3f
| 75,629 |
ipynb
|
Jupyter Notebook
|
1. Introduction to TensorFlow/LZ/Machine Learning Basics/TF2_0_Linear_Classification.ipynb
|
AmirRazaMBA/TensorFlow-Certification
|
ec0990007cff6daf36beac6d00d95c81cdf80353
|
[
"MIT"
] | 1 |
2020-11-20T14:46:45.000Z
|
2020-11-20T14:46:45.000Z
|
1. Introduction to TensorFlow/LZ/Machine Learning Basics/TF2_0_Linear_Classification.ipynb
|
AmirRazaMBA/TF_786
|
ec0990007cff6daf36beac6d00d95c81cdf80353
|
[
"MIT"
] | null | null | null |
1. Introduction to TensorFlow/LZ/Machine Learning Basics/TF2_0_Linear_Classification.ipynb
|
AmirRazaMBA/TF_786
|
ec0990007cff6daf36beac6d00d95c81cdf80353
|
[
"MIT"
] | 1 |
2021-11-17T02:40:23.000Z
|
2021-11-17T02:40:23.000Z
| 68.69119 | 16,538 | 0.63395 |
[
[
[
"# Install TensorFlow\n# !pip install -q tensorflow-gpu==2.0.0-beta1\n\ntry:\n %tensorflow_version 2.x # Colab only.\nexcept Exception:\n pass\n\nimport tensorflow as tf\nprint(tf.__version__)",
"`%tensorflow_version` only switches the major version: 1.x or 2.x.\nYou set: `2.x # Colab only.`. This will be interpreted as: `2.x`.\n\n\nTensorFlow 2.x selected.\n2.2.0\n"
],
[
"# Load in the data\nfrom sklearn.datasets import load_breast_cancer",
"_____no_output_____"
],
[
"# load the data\ndata = load_breast_cancer()",
"_____no_output_____"
],
[
"# check the type of 'data'\ntype(data)",
"_____no_output_____"
],
[
"# note: it is a Bunch object\n# this basically acts like a dictionary where you can treat the keys like attributes\ndata.keys()",
"_____no_output_____"
],
[
"# 'data' (the attribute) means the input data\ndata.data.shape\n# it has 569 samples, 30 features",
"_____no_output_____"
],
[
"# 'targets'\ndata.target\n# note how the targets are just 0s and 1s\n# normally, when you have K targets, they are labeled 0..K-1",
"_____no_output_____"
],
[
"# their meaning is not lost\ndata.target_names",
"_____no_output_____"
],
[
"# there are also 569 corresponding targets\ndata.target.shape",
"_____no_output_____"
],
[
"# you can also determine the meaning of each feature\ndata.feature_names",
"_____no_output_____"
],
[
"# normally we would put all of our imports at the top\n# but this lets us tell a story\nfrom sklearn.model_selection import train_test_split\n\n\n# split the data into train and test sets\n# this lets us simulate how our model will perform in the future\nX_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.33)\nN, D = X_train.shape",
"_____no_output_____"
],
[
"# Scale the data\n# you'll learn why scaling is needed in a later course\nfrom sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nX_train = scaler.fit_transform(X_train)\nX_test = scaler.transform(X_test)",
"_____no_output_____"
],
[
"# Now all the fun Tensorflow stuff\n# Build the model\n\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Input(shape=(D,)),\n tf.keras.layers.Dense(1, activation='sigmoid')\n])\n\n# Alternatively, you can do:\n# model = tf.keras.models.Sequential()\n# model.add(tf.keras.layers.Dense(1, input_shape=(D,), activation='sigmoid'))\n\nmodel.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\n\n# Train the model\nr = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100)\n\n\n# Evaluate the model - evaluate() returns loss and accuracy\nprint(\"Train score:\", model.evaluate(X_train, y_train))\nprint(\"Test score:\", model.evaluate(X_test, y_test))",
"Epoch 1/100\n12/12 [==============================] - 0s 15ms/step - loss: 0.8609 - accuracy: 0.4698 - val_loss: 0.8084 - val_accuracy: 0.5213\nEpoch 2/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.7827 - accuracy: 0.5223 - val_loss: 0.7401 - val_accuracy: 0.5904\nEpoch 3/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.7130 - accuracy: 0.5696 - val_loss: 0.6800 - val_accuracy: 0.6383\nEpoch 4/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.6531 - accuracy: 0.6404 - val_loss: 0.6275 - val_accuracy: 0.6755\nEpoch 5/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.6000 - accuracy: 0.6903 - val_loss: 0.5826 - val_accuracy: 0.7128\nEpoch 6/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.5545 - accuracy: 0.7244 - val_loss: 0.5436 - val_accuracy: 0.7553\nEpoch 7/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.5138 - accuracy: 0.7638 - val_loss: 0.5107 - val_accuracy: 0.7766\nEpoch 8/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.4782 - accuracy: 0.7874 - val_loss: 0.4826 - val_accuracy: 0.7819\nEpoch 9/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.4476 - accuracy: 0.8110 - val_loss: 0.4579 - val_accuracy: 0.7979\nEpoch 10/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.4204 - accuracy: 0.8399 - val_loss: 0.4360 - val_accuracy: 0.8085\nEpoch 11/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.3959 - accuracy: 0.8556 - val_loss: 0.4169 - val_accuracy: 0.8298\nEpoch 12/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.3743 - accuracy: 0.8793 - val_loss: 0.3996 - val_accuracy: 0.8351\nEpoch 13/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.3544 - accuracy: 0.8845 - val_loss: 0.3845 - val_accuracy: 0.8511\nEpoch 14/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.3370 - accuracy: 0.8871 - val_loss: 0.3705 - val_accuracy: 0.8564\nEpoch 15/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.3217 - accuracy: 0.8976 - val_loss: 0.3578 - val_accuracy: 0.8617\nEpoch 16/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.3069 - accuracy: 0.9029 - val_loss: 0.3459 - val_accuracy: 0.8617\nEpoch 17/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2936 - accuracy: 0.9029 - val_loss: 0.3354 - val_accuracy: 0.8723\nEpoch 18/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.2822 - accuracy: 0.9055 - val_loss: 0.3254 - val_accuracy: 0.8777\nEpoch 19/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2713 - accuracy: 0.9134 - val_loss: 0.3164 - val_accuracy: 0.8777\nEpoch 20/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2615 - accuracy: 0.9160 - val_loss: 0.3081 - val_accuracy: 0.8883\nEpoch 21/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.2524 - accuracy: 0.9160 - val_loss: 0.3004 - val_accuracy: 0.8936\nEpoch 22/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2442 - accuracy: 0.9213 - val_loss: 0.2930 - val_accuracy: 0.8936\nEpoch 23/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2364 - accuracy: 0.9291 - val_loss: 0.2866 - val_accuracy: 0.8936\nEpoch 24/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2295 - accuracy: 0.9318 - val_loss: 0.2802 - val_accuracy: 0.9043\nEpoch 25/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.2227 - accuracy: 0.9370 - 
val_loss: 0.2744 - val_accuracy: 0.9096\nEpoch 26/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.2167 - accuracy: 0.9423 - val_loss: 0.2689 - val_accuracy: 0.9149\nEpoch 27/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.2110 - accuracy: 0.9423 - val_loss: 0.2635 - val_accuracy: 0.9202\nEpoch 28/100\n12/12 [==============================] - 0s 6ms/step - loss: 0.2055 - accuracy: 0.9449 - val_loss: 0.2586 - val_accuracy: 0.9202\nEpoch 29/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.2005 - accuracy: 0.9449 - val_loss: 0.2540 - val_accuracy: 0.9202\nEpoch 30/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1958 - accuracy: 0.9449 - val_loss: 0.2494 - val_accuracy: 0.9255\nEpoch 31/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1913 - accuracy: 0.9449 - val_loss: 0.2455 - val_accuracy: 0.9255\nEpoch 32/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1870 - accuracy: 0.9501 - val_loss: 0.2414 - val_accuracy: 0.9255\nEpoch 33/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1829 - accuracy: 0.9528 - val_loss: 0.2374 - val_accuracy: 0.9255\nEpoch 34/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1790 - accuracy: 0.9554 - val_loss: 0.2341 - val_accuracy: 0.9255\nEpoch 35/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1754 - accuracy: 0.9554 - val_loss: 0.2306 - val_accuracy: 0.9309\nEpoch 36/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1719 - accuracy: 0.9554 - val_loss: 0.2274 - val_accuracy: 0.9309\nEpoch 37/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1686 - accuracy: 0.9554 - val_loss: 0.2243 - val_accuracy: 0.9255\nEpoch 38/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1655 - accuracy: 0.9554 - val_loss: 0.2213 - val_accuracy: 0.9255\nEpoch 39/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1624 - accuracy: 0.9580 - val_loss: 0.2186 - val_accuracy: 0.9309\nEpoch 40/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1596 - accuracy: 0.9580 - val_loss: 0.2157 - val_accuracy: 0.9309\nEpoch 41/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1568 - accuracy: 0.9580 - val_loss: 0.2132 - val_accuracy: 0.9309\nEpoch 42/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1541 - accuracy: 0.9580 - val_loss: 0.2107 - val_accuracy: 0.9309\nEpoch 43/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1516 - accuracy: 0.9580 - val_loss: 0.2082 - val_accuracy: 0.9309\nEpoch 44/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1492 - accuracy: 0.9580 - val_loss: 0.2060 - val_accuracy: 0.9309\nEpoch 45/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1468 - accuracy: 0.9580 - val_loss: 0.2038 - val_accuracy: 0.9309\nEpoch 46/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1446 - accuracy: 0.9580 - val_loss: 0.2019 - val_accuracy: 0.9309\nEpoch 47/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1425 - accuracy: 0.9580 - val_loss: 0.1995 - val_accuracy: 0.9309\nEpoch 48/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1403 - accuracy: 0.9606 - val_loss: 0.1977 - val_accuracy: 0.9309\nEpoch 49/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1383 - accuracy: 0.9633 - val_loss: 0.1957 - val_accuracy: 0.9362\nEpoch 50/100\n12/12 [==============================] - 0s 
4ms/step - loss: 0.1363 - accuracy: 0.9659 - val_loss: 0.1939 - val_accuracy: 0.9362\nEpoch 51/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1345 - accuracy: 0.9659 - val_loss: 0.1922 - val_accuracy: 0.9415\nEpoch 52/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1327 - accuracy: 0.9685 - val_loss: 0.1903 - val_accuracy: 0.9415\nEpoch 53/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1309 - accuracy: 0.9685 - val_loss: 0.1888 - val_accuracy: 0.9415\nEpoch 54/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1292 - accuracy: 0.9685 - val_loss: 0.1872 - val_accuracy: 0.9415\nEpoch 55/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1276 - accuracy: 0.9685 - val_loss: 0.1857 - val_accuracy: 0.9415\nEpoch 56/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1260 - accuracy: 0.9685 - val_loss: 0.1841 - val_accuracy: 0.9415\nEpoch 57/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1245 - accuracy: 0.9685 - val_loss: 0.1827 - val_accuracy: 0.9415\nEpoch 58/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1230 - accuracy: 0.9685 - val_loss: 0.1813 - val_accuracy: 0.9415\nEpoch 59/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1216 - accuracy: 0.9685 - val_loss: 0.1799 - val_accuracy: 0.9415\nEpoch 60/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1202 - accuracy: 0.9685 - val_loss: 0.1788 - val_accuracy: 0.9415\nEpoch 61/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1188 - accuracy: 0.9685 - val_loss: 0.1775 - val_accuracy: 0.9415\nEpoch 62/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1176 - accuracy: 0.9685 - val_loss: 0.1762 - val_accuracy: 0.9415\nEpoch 63/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1163 - accuracy: 0.9685 - val_loss: 0.1750 - val_accuracy: 0.9415\nEpoch 64/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1150 - accuracy: 0.9685 - val_loss: 0.1738 - val_accuracy: 0.9468\nEpoch 65/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1138 - accuracy: 0.9685 - val_loss: 0.1728 - val_accuracy: 0.9468\nEpoch 66/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1126 - accuracy: 0.9685 - val_loss: 0.1717 - val_accuracy: 0.9468\nEpoch 67/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1115 - accuracy: 0.9685 - val_loss: 0.1707 - val_accuracy: 0.9468\nEpoch 68/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1104 - accuracy: 0.9685 - val_loss: 0.1696 - val_accuracy: 0.9521\nEpoch 69/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1093 - accuracy: 0.9738 - val_loss: 0.1687 - val_accuracy: 0.9521\nEpoch 70/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1083 - accuracy: 0.9738 - val_loss: 0.1677 - val_accuracy: 0.9521\nEpoch 71/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1073 - accuracy: 0.9738 - val_loss: 0.1668 - val_accuracy: 0.9521\nEpoch 72/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1062 - accuracy: 0.9764 - val_loss: 0.1658 - val_accuracy: 0.9521\nEpoch 73/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1053 - accuracy: 0.9764 - val_loss: 0.1650 - val_accuracy: 0.9521\nEpoch 74/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1043 - accuracy: 0.9764 - val_loss: 0.1641 - val_accuracy: 0.9521\nEpoch 
75/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1034 - accuracy: 0.9764 - val_loss: 0.1632 - val_accuracy: 0.9574\nEpoch 76/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.1025 - accuracy: 0.9764 - val_loss: 0.1624 - val_accuracy: 0.9574\nEpoch 77/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1017 - accuracy: 0.9790 - val_loss: 0.1615 - val_accuracy: 0.9574\nEpoch 78/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1008 - accuracy: 0.9790 - val_loss: 0.1607 - val_accuracy: 0.9574\nEpoch 79/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.1000 - accuracy: 0.9790 - val_loss: 0.1601 - val_accuracy: 0.9574\nEpoch 80/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0991 - accuracy: 0.9790 - val_loss: 0.1593 - val_accuracy: 0.9574\nEpoch 81/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0983 - accuracy: 0.9790 - val_loss: 0.1586 - val_accuracy: 0.9574\nEpoch 82/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0976 - accuracy: 0.9790 - val_loss: 0.1578 - val_accuracy: 0.9628\nEpoch 83/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0968 - accuracy: 0.9790 - val_loss: 0.1572 - val_accuracy: 0.9628\nEpoch 84/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0960 - accuracy: 0.9790 - val_loss: 0.1565 - val_accuracy: 0.9628\nEpoch 85/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0953 - accuracy: 0.9790 - val_loss: 0.1559 - val_accuracy: 0.9628\nEpoch 86/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0946 - accuracy: 0.9790 - val_loss: 0.1553 - val_accuracy: 0.9628\nEpoch 87/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0939 - accuracy: 0.9790 - val_loss: 0.1546 - val_accuracy: 0.9628\nEpoch 88/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0932 - accuracy: 0.9790 - val_loss: 0.1539 - val_accuracy: 0.9628\nEpoch 89/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0925 - accuracy: 0.9790 - val_loss: 0.1533 - val_accuracy: 0.9628\nEpoch 90/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0919 - accuracy: 0.9790 - val_loss: 0.1528 - val_accuracy: 0.9628\nEpoch 91/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0912 - accuracy: 0.9790 - val_loss: 0.1522 - val_accuracy: 0.9628\nEpoch 92/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0906 - accuracy: 0.9790 - val_loss: 0.1516 - val_accuracy: 0.9628\nEpoch 93/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0900 - accuracy: 0.9790 - val_loss: 0.1511 - val_accuracy: 0.9628\nEpoch 94/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0894 - accuracy: 0.9790 - val_loss: 0.1504 - val_accuracy: 0.9628\nEpoch 95/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0888 - accuracy: 0.9790 - val_loss: 0.1500 - val_accuracy: 0.9628\nEpoch 96/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0882 - accuracy: 0.9790 - val_loss: 0.1494 - val_accuracy: 0.9628\nEpoch 97/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0876 - accuracy: 0.9790 - val_loss: 0.1489 - val_accuracy: 0.9628\nEpoch 98/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0870 - accuracy: 0.9790 - val_loss: 0.1486 - val_accuracy: 0.9628\nEpoch 99/100\n12/12 [==============================] - 0s 4ms/step - loss: 0.0865 - accuracy: 0.9790 - 
val_loss: 0.1481 - val_accuracy: 0.9628\nEpoch 100/100\n12/12 [==============================] - 0s 5ms/step - loss: 0.0860 - accuracy: 0.9790 - val_loss: 0.1475 - val_accuracy: 0.9628\n12/12 [==============================] - 0s 1ms/step - loss: 0.0856 - accuracy: 0.9790\nTrain score: [0.08562987297773361, 0.9790025949478149]\n6/6 [==============================] - 0s 2ms/step - loss: 0.1475 - accuracy: 0.9628\nTest score: [0.14749325811862946, 0.9627659320831299]\n"
],
[
"# Plot what's returned by model.fit()\nimport matplotlib.pyplot as plt\nplt.plot(r.history['loss'], label='loss')\nplt.plot(r.history['val_loss'], label='val_loss')\nplt.legend()",
"_____no_output_____"
],
[
"# Plot the accuracy too\nplt.plot(r.history['accuracy'], label='acc')\nplt.plot(r.history['val_accuracy'], label='val_acc')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"# Part 2: Making Predictions\n\nThis goes with the lecture \"Making Predictions\"",
"_____no_output_____"
]
],
[
[
"# Make predictions\nP = model.predict(X_test)\nprint(P) # they are outputs of the sigmoid, interpreted as probabilities p(y = 1 | x)",
"[[3.86378058e-04]\n [9.84780431e-01]\n [8.10497477e-06]\n [9.67254221e-01]\n [9.75205123e-01]\n [1.03592151e-06]\n [7.36993432e-01]\n [9.74341214e-01]\n [2.01676026e-01]\n [7.63189942e-02]\n [9.07921493e-01]\n [9.98336017e-01]\n [9.91316557e-01]\n [1.49583220e-02]\n [2.73370892e-01]\n [9.63929355e-01]\n [4.18202132e-01]\n [5.22907972e-01]\n [9.72203434e-01]\n [9.43208098e-01]\n [9.01003957e-01]\n [7.98124552e-01]\n [9.55958247e-01]\n [6.44394546e-04]\n [5.47593057e-01]\n [9.44584012e-01]\n [9.39119279e-01]\n [8.86917651e-01]\n [9.99867797e-01]\n [9.96247470e-01]\n [9.01305000e-04]\n [9.86370981e-01]\n [9.90954220e-01]\n [9.78590846e-01]\n [2.98584223e-01]\n [9.74971592e-01]\n [2.24541244e-03]\n [7.43708992e-03]\n [1.08735815e-01]\n [9.70864236e-01]\n [9.97598588e-01]\n [9.95253563e-01]\n [6.66293949e-02]\n [9.99299169e-01]\n [2.64693741e-02]\n [3.46294008e-02]\n [6.97628781e-02]\n [5.90775162e-04]\n [1.68447688e-01]\n [4.17051703e-01]\n [8.28669071e-01]\n [9.91819620e-01]\n [9.86989319e-01]\n [9.12003045e-04]\n [1.78345054e-01]\n [9.91083503e-01]\n [7.01007573e-03]\n [2.41460242e-07]\n [1.43330127e-01]\n [4.55556874e-05]\n [9.81389105e-01]\n [8.74749124e-01]\n [9.24449384e-01]\n [9.51820910e-01]\n [9.89835441e-01]\n [9.40594554e-01]\n [1.48434803e-01]\n [9.88991499e-01]\n [9.99458015e-01]\n [9.85807061e-01]\n [9.91644800e-01]\n [9.57303226e-01]\n [9.97576058e-01]\n [9.24584508e-01]\n [9.49344873e-01]\n [9.66827691e-01]\n [9.94582117e-01]\n [9.98655438e-01]\n [1.65901758e-06]\n [9.99411941e-01]\n [2.30099795e-06]\n [8.93940866e-01]\n [8.20179224e-01]\n [9.96064723e-01]\n [7.76759675e-03]\n [2.04935772e-04]\n [2.47613713e-01]\n [9.78444517e-01]\n [1.19358050e-02]\n [9.96814549e-01]\n [2.86849365e-02]\n [9.98123467e-01]\n [9.92729127e-01]\n [9.02851462e-01]\n [9.95400250e-01]\n [9.67106462e-01]\n [8.26600716e-02]\n [9.92462158e-01]\n [9.84530568e-01]\n [1.05292011e-05]\n [8.87999892e-01]\n [1.94387667e-05]\n [9.97542977e-01]\n [9.72390294e-01]\n [9.85767126e-01]\n [7.85340488e-01]\n [6.10192299e-01]\n [9.65034664e-01]\n [3.99673767e-02]\n [9.67306435e-01]\n [8.84589493e-01]\n [9.02074575e-01]\n [3.98189098e-01]\n [9.99246955e-01]\n [8.53020966e-01]\n [2.39426382e-02]\n [3.30956355e-02]\n [9.87658203e-01]\n [9.93813396e-01]\n [9.92372811e-01]\n [6.59648001e-01]\n [1.52575020e-02]\n [8.56655359e-04]\n [9.81236696e-01]\n [8.38881969e-01]\n [3.22945329e-04]\n [9.75976408e-01]\n [3.42319865e-04]\n [9.97464538e-01]\n [3.05744916e-01]\n [7.02146068e-02]\n [9.43387687e-01]\n [9.98602211e-01]\n [4.21485252e-04]\n [4.34686430e-02]\n [9.97005880e-01]\n [5.49177110e-01]\n [1.51920943e-02]\n [7.39880800e-01]\n [9.95039642e-01]\n [1.27456868e-02]\n [9.82785881e-01]\n [8.21013782e-06]\n [9.18114245e-01]\n [9.12319779e-01]\n [2.13819161e-01]\n [4.20795023e-01]\n [7.56392002e-01]\n [5.75207267e-03]\n [6.93836570e-01]\n [9.95563984e-01]\n [9.90276217e-01]\n [9.98211980e-01]\n [9.72716749e-01]\n [9.98421311e-01]\n [8.68965745e-01]\n [3.26108411e-02]\n [9.82722759e-01]\n [9.97514963e-01]\n [9.97721136e-01]\n [7.51649439e-01]\n [6.21980190e-01]\n [9.13870573e-01]\n [9.86600816e-01]\n [9.80454147e-01]\n [9.97537494e-01]\n [6.38170242e-02]\n [5.64098001e-01]\n [9.37707663e-01]\n [9.99191940e-01]\n [1.27754986e-01]\n [9.79737759e-01]\n [9.73214984e-01]\n [8.08745086e-01]\n [2.13803322e-10]\n [9.85395610e-01]\n [9.96816814e-01]\n [7.70437479e-01]\n [9.99257147e-01]\n [9.94040430e-01]\n [1.26547611e-03]\n [3.48072406e-03]\n [8.29080164e-01]\n [8.82580280e-01]\n [9.84677672e-01]\n [1.66605075e-03]\n [9.40553248e-01]\n 
[8.90425436e-05]]\n"
],
[
"# Round to get the actual predictions\n# Note: has to be flattened since the targets are size (N,) while the predictions are size (N,1)\nimport numpy as np\nP = np.round(P).flatten()\nprint(P)",
"[0. 1. 0. 1. 1. 0. 1. 1. 0. 0. 1. 1. 1. 0. 0. 1. 0. 1. 1. 1. 1. 1. 1. 0.\n 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 0. 1. 0. 0. 0. 1. 1. 1. 0. 1. 0. 0. 0. 0.\n 0. 0. 1. 1. 1. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1.\n 1. 1. 1. 1. 1. 1. 0. 1. 0. 1. 1. 1. 0. 0. 0. 1. 0. 1. 0. 1. 1. 1. 1. 1.\n 0. 1. 1. 0. 1. 0. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 0. 0. 1. 1. 1.\n 1. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 1. 1. 0. 0. 1. 1. 0. 1. 1. 0. 1. 0. 1.\n 1. 0. 0. 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1.\n 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 1. 0.]\n"
],
[
"# Calculate the accuracy, compare it to evaluate() output\nprint(\"Manually calculated accuracy:\", np.mean(P == y_test))\nprint(\"Evaluate output:\", model.evaluate(X_test, y_test))",
"Manually calculated accuracy: 0.9627659574468085\n6/6 [==============================] - 0s 2ms/step - loss: 0.1475 - accuracy: 0.9628\nEvaluate output: [0.14749325811862946, 0.9627659320831299]\n"
]
],
[
[
"# Part 3: Saving and Loading a Model\n\nThis goes with the lecture \"Saving and Loading a Model\"",
"_____no_output_____"
]
],
[
[
"# Let's now save our model to a file\nmodel.save('linearclassifier.h5')",
"_____no_output_____"
],
[
"# Check that the model file exists\n!ls -lh ",
"total 24K\n-rw-r--r-- 1 root root 19K Jun 18 16:13 linearclassifier.h5\ndrwxr-xr-x 1 root root 4.0K Jun 17 16:18 sample_data\n"
],
[
"# Let's load the model and confirm that it still works\n# Note: there is a bug in Keras where load/save only works if you DON'T use the Input() layer explicitly\n# So, make sure you define the model with ONLY Dense(1, input_shape=(D,))\n# At least, until the bug is fixed\n# https://github.com/keras-team/keras/issues/10417\nmodel = tf.keras.models.load_model('linearclassifier.h5')\nprint(model.layers)\nmodel.evaluate(X_test, y_test)",
"[<tensorflow.python.keras.layers.core.Dense object at 0x7f97afe4a4a8>]\n6/6 [==============================] - 0s 3ms/step - loss: 0.1475 - accuracy: 0.9628\n"
],
[
"\n# Download the file - requires Chrome (at this point)\nfrom google.colab import files\nfiles.download('linearclassifier.h5')",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4add663a9e677d47c4051f6009b4430e39b53ff3
| 205,595 |
ipynb
|
Jupyter Notebook
|
LearningOnMarkedData/week2/sklearn.datasets.ipynb
|
ishatserka/MachineLearningAndDataAnalysisCoursera
|
e82e772df2f4aec162cb34ac6127df10d14a625a
|
[
"MIT"
] | null | null | null |
LearningOnMarkedData/week2/sklearn.datasets.ipynb
|
ishatserka/MachineLearningAndDataAnalysisCoursera
|
e82e772df2f4aec162cb34ac6127df10d14a625a
|
[
"MIT"
] | null | null | null |
LearningOnMarkedData/week2/sklearn.datasets.ipynb
|
ishatserka/MachineLearningAndDataAnalysisCoursera
|
e82e772df2f4aec162cb34ac6127df10d14a625a
|
[
"MIT"
] | null | null | null | 208.938008 | 60,484 | 0.876947 |
[
[
[
"# Sklearn",
"_____no_output_____"
],
[
"## sklearn.datasets",
"_____no_output_____"
],
[
"документация: http://scikit-learn.org/stable/datasets/",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets",
"_____no_output_____"
],
[
"%pylab inline",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"### Генерация выборок",
"_____no_output_____"
],
[
"**Способы генерации данных:** \n* make_classification\n* make_regression\n* make_circles\n* make_checkerboard\n* etc",
"_____no_output_____"
],
[
"#### datasets.make_circles",
"_____no_output_____"
]
],
[
[
"circles = datasets.make_circles()",
"_____no_output_____"
],
[
"print(\"features: {}\".format(circles[0][:10]))\nprint(\"target: {}\".format(circles[1][:10]))",
"features: [[-0.80901699 0.58778525]\n [ 0.50993919 -0.61641059]\n [-0.96858316 0.24868989]\n [-0.74382119 -0.29449964]\n [-0.42866144 -0.67546234]\n [-0.50993919 0.61641059]\n [ 0.96858316 -0.24868989]\n [-0.79369176 0.10026659]\n [-0.53582679 -0.84432793]\n [ 0.14990505 -0.7858298 ]]\ntarget: [0 1 0 1 1 1 0 1 0 1]\n"
],
[
"from matplotlib.colors import ListedColormap",
"_____no_output_____"
],
[
"colors = ListedColormap(['blue', 'green'])\n\npyplot.figure(figsize(8, 8))\npyplot.scatter(list(map(lambda x: x[0], circles[0])), list(map(lambda x: x[1], circles[0])), c = circles[1], cmap = colors)",
"_____no_output_____"
],
[
"def plot_2d_dataset(data, colors):\n pyplot.figure(figsize(8, 8))\n pyplot.scatter(list(map(lambda x: x[0], data[0])), list(map(lambda x: x[1], data[0])), c = data[1], cmap = colors)",
"_____no_output_____"
],
[
"noisy_circles = datasets.make_circles(noise = 0.15)",
"_____no_output_____"
],
[
"plot_2d_dataset(noisy_circles, colors)",
"_____no_output_____"
]
],
[
[
"#### datasets.make_classification",
"_____no_output_____"
]
],
[
[
"simple_classification_problem = datasets.make_classification(n_features = 2, n_informative = 1, \n n_redundant = 1, n_clusters_per_class = 1,\n random_state = 1 )",
"_____no_output_____"
],
[
"plot_2d_dataset(simple_classification_problem, colors)",
"_____no_output_____"
],
[
"classification_problem = datasets.make_classification(n_features = 2, n_informative = 2, n_classes = 4, \n n_redundant = 0, n_clusters_per_class = 1, random_state = 1)\n\ncolors = ListedColormap(['red', 'blue', 'green', 'orange'])",
"_____no_output_____"
],
[
"plot_2d_dataset(classification_problem, colors)",
"_____no_output_____"
]
],
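[
[
"A short extra example (my addition, in the same plotting style as above): `datasets.make_regression`, listed among the generators earlier but not demonstrated, produces a synthetic regression problem.",
"_____no_output_____"
]
],
[
[
"# extra example (editor's addition): a one-feature regression problem\nregression_problem = datasets.make_regression(n_samples = 100, n_features = 1, n_informative = 1, noise = 5., random_state = 1)\n\npyplot.figure(figsize(8, 8))\npyplot.scatter(regression_problem[0][:, 0], regression_problem[1])",
"_____no_output_____"
]
],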
[
[
"### \"Игрушечные\" наборы данных",
"_____no_output_____"
],
[
"**Наборы данных:** \n* load_iris \n* load_boston\n* load_diabetes\n* load_digits\n* load_linnerud\n* etc",
"_____no_output_____"
],
[
"#### datasets.load_iris",
"_____no_output_____"
]
],
[
[
"iris = datasets.load_iris()",
"_____no_output_____"
],
[
"iris",
"_____no_output_____"
],
[
"iris.keys()",
"_____no_output_____"
],
[
"print(iris.DESCR)",
"Iris Plants Database\n====================\n\nNotes\n-----\nData Set Characteristics:\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThis is a copy of UCI ML iris datasets.\nhttp://archive.ics.uci.edu/ml/datasets/Iris\n\nThe famous Iris database, first used by Sir R.A Fisher\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\nReferences\n----------\n - Fisher,R.A. \"The use of multiple measurements in taxonomic problems\"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n Mathematical Statistics\" (John Wiley, NY, 1950).\n - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments\". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n\n"
],
[
"print(\"feature names: {}\".format(iris.feature_names))\nprint(\"target names: {names}\".format(names = iris.target_names))",
"feature names: ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']\ntarget names: ['setosa' 'versicolor' 'virginica']\n"
],
[
"iris.data[:10]",
"_____no_output_____"
],
[
"iris.target",
"_____no_output_____"
]
],
[
[
"### Визуализация выбокри",
"_____no_output_____"
]
],
[
[
"from pandas import DataFrame",
"_____no_output_____"
],
[
"iris_frame = DataFrame(iris.data)\niris_frame.columns = iris.feature_names\niris_frame['target'] = iris.target",
"_____no_output_____"
],
[
"iris_frame.head()",
"_____no_output_____"
],
[
"iris_frame.target = iris_frame.target.apply(lambda x : iris.target_names[x])",
"_____no_output_____"
],
[
"iris_frame.head()",
"_____no_output_____"
],
[
"iris_frame[iris_frame.target == 'setosa'].hist('sepal length (cm)')",
"_____no_output_____"
],
[
"pyplot.figure(figsize(20, 24))\n\nplot_number = 0\nfor feature_name in iris['feature_names']:\n for target_name in iris['target_names']:\n plot_number += 1\n pyplot.subplot(4, 3, plot_number)\n pyplot.hist(iris_frame[iris_frame.target == target_name][feature_name])\n pyplot.title(target_name)\n pyplot.xlabel('cm')\n pyplot.ylabel(feature_name[:-4])",
"_____no_output_____"
]
],
[
[
"### Бонус: библиотека seaborn",
"_____no_output_____"
]
],
[
[
"import seaborn as sns",
"_____no_output_____"
],
[
"sns.pairplot(iris_frame, hue = 'target')",
"_____no_output_____"
],
[
"?sns.set()",
"_____no_output_____"
],
[
"sns.set(font_scale = 1.3)\ndata = sns.load_dataset(\"iris\")\nsns.pairplot(data, hue = \"species\")",
"_____no_output_____"
]
],
[
[
"#### **Если Вас заинтересовала библиотека seaborn:**\n* установка: https://stanford.edu/~mwaskom/software/seaborn/installing.html\n* установка c помощью анаконды: https://anaconda.org/anaconda/seaborn\n* руководство: https://stanford.edu/~mwaskom/software/seaborn/tutorial.html\n* примеры: https://stanford.edu/~mwaskom/software/seaborn/examples/",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4add698ab8e66ad9b2cbbb8c681c47a7a618ab88
| 27,691 |
ipynb
|
Jupyter Notebook
|
notebooks/responsible-ml/make-predictions.ipynb
|
lostmygithubaccount/azureml-flight-delay
|
daf42b6dcac6f9f733bf55017ae6574ac2f50ff9
|
[
"MIT"
] | null | null | null |
notebooks/responsible-ml/make-predictions.ipynb
|
lostmygithubaccount/azureml-flight-delay
|
daf42b6dcac6f9f733bf55017ae6574ac2f50ff9
|
[
"MIT"
] | null | null | null |
notebooks/responsible-ml/make-predictions.ipynb
|
lostmygithubaccount/azureml-flight-delay
|
daf42b6dcac6f9f733bf55017ae6574ac2f50ff9
|
[
"MIT"
] | null | null | null | 40.483918 | 383 | 0.550395 |
[
[
[
"# Responsible ML - Homomorphic Encryption\n\n## Install prerequisites\n\nBefore running the notebook, make sure the correct versions of these libraries are installed.",
"_____no_output_____"
]
],
[
[
"!pip install encrypted-inference --upgrade",
"_____no_output_____"
]
],
[
[
"## Setup Azure ML\n\nIn the next cell, we create a new Workspace config object using the `<subscription_id>`, `<resource_group_name>`, and `<workspace_name>`. This will fetch the matching Workspace and prompt you for authentication. Please click on the link and input the provided details.\n\nFor more information on **Workspace**, please visit: [Microsoft Workspace Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py)\n\n`<subscription_id>` = You can get this ID from the landing page of your Resource Group.\n\n`<resource_group_name>` = This is the name of your Resource Group.\n\n`<workspace_name>` = This is the name of your Workspace.",
"_____no_output_____"
]
],
[
[
"from azureml.core.workspace import Workspace\r\nimport warnings\r\n\r\nwarnings.filterwarnings('ignore')\r\n\r\ntry: \r\n ws = Workspace(\r\n subscription_id = '<subscription_id>', \r\n resource_group = '<resource_group>', \r\n workspace_name = '<workspace_name>')\r\n\r\n # Writes workspace config file\r\n ws.write_config()\r\n \r\n print('Library configuration succeeded')\r\nexcept Exception as e:\r\n print(e)\r\n print('Workspace not found')",
"_____no_output_____"
]
],
[
[
"# Homomorphic Encryption\n\nHomomorphic Encryption refers to a new type of encryption technology that allows computation to be directly on encrypted data, without requiring any decryption in the process. \n\n<img src=\"./images/encrypted.png\" alt=\"Forest\" style=\"display: inline-block;margin-left: auto;margin-right: auto;width:45%\">",
"_____no_output_____"
],
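[
"_Added illustration (not part of the original notebook):_ the key property is that arithmetic on ciphertexts matches arithmetic on the underlying plaintexts, so a server can compute on data it can never read. Below is a minimal sketch of this idea using the third-party `python-paillier` (`phe`) package — an additively homomorphic scheme that is **not** installed by this notebook and is unrelated to the SEAL-based `encrypted-inference` library used later; it only illustrates the concept:\n\n```python\n# pip install phe   (python-paillier, used here only to illustrate the concept)\nfrom phe import paillier\n\npublic_key, private_key = paillier.generate_paillier_keypair()\nenc_a = public_key.encrypt(3.0)      # ciphertexts the server could receive\nenc_b = public_key.encrypt(4.0)\nenc_sum = enc_a + enc_b              # computed without ever decrypting\nprint(private_key.decrypt(enc_sum))  # 7.0\n```",
"_____no_output_____"
],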
[
"## Fetch Model from registry\n\nNext, fetch the latest model from our model registry.",
"_____no_output_____"
]
],
[
[
"from azureml.core.model import Model\nfrom scripts.utils import *\n\ntabular = fetch_registered_dataset(ws)\nsynth_df, Y = prepareDataset(tabular)\nX_train, X_test, Y_train, Y_test, A_train, A_test = split_dataset(synth_df, Y)\nmodel = Model(ws, 'loan_approval_grid_model_30')\nmodel.version",
"_____no_output_____"
]
],
[
[
"## Create managed-endpoints directory\n\nCreate a new directory to hold the configuration files for deploying a managed endpoint.",
"_____no_output_____"
]
],
[
[
"import os\r\n\r\nmanaged_endpoints = './managed-endpoints'\r\n\r\n# Working directory\r\nif not os.path.exists(managed_endpoints):\r\n os.makedirs(managed_endpoints)\r\n \r\nif os.path.exists(os.path.join(managed_endpoints,\".amlignore\")):\r\n os.remove(os.path.join(managed_endpoints,\".amlignore\"))",
"_____no_output_____"
]
],
[
[
"## Create Scoring File\r\n\r\nCreating the scoring file is next step before deploying the service. This file is responsible for the actual generation of predictions using the model. The values or scores generated can represent predictions of future values, but they might also represent a likely category or outcome.\r\n\r\nThe first thing to do in the scoring file is to fetch the model. This is done by calling `Model.get_model_path()` and passing the model name as a parameter.\r\n\r\nAfter the model has been loaded, the function `model.predict()` function should be called to start the scoring process.\r\n\r\nFor more information on **Machine Learning - Score**, please visit: [Microsoft Machine Learning - Score Documentation](https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/machine-learning-score)\r\n",
"_____no_output_____"
]
],
[
[
"%%writefile $managed_endpoints/score.py\nimport os\nimport json\nimport pandas as pd\nfrom azureml.core.model import Model\nimport joblib\nfrom azure.storage.blob import BlobServiceClient\nfrom encrypted.inference.eiserver import EIServer\n\ndef init():\n global model\n # this name is model.id of model that we want to deploy\n model_path = os.path.join(os.getenv(\"AZUREML_MODEL_DIR\"), \"loan_approval_grid_model_30.pkl\")\n # deserialize the model file back into a sklearn model\n model = joblib.load(model_path)\n \n global server\n server = EIServer(model.coef_, model.intercept_, verbose=True)\n\ndef run(raw_data):\n json_properties = json.loads(raw_data)\n\n key_id = json_properties['key_id']\n conn_str = json_properties['conn_str']\n container = json_properties['container']\n data = json_properties['data']\n\n # download the Galois keys from blob storage \n blob_service_client = BlobServiceClient.from_connection_string(conn_str=conn_str)\n blob_client = blob_service_client.get_blob_client(container=container, blob=key_id)\n public_keys = blob_client.download_blob().readall()\n \n result = {}\n # make prediction\n result = server.predict(data, public_keys)\n\n # you can return any data type as long as it is JSON-serializable\n return result",
"_____no_output_____"
]
],
[
[
"## Create the environment definition\r\n\r\nThe following file contains the details of the environment to host the model and code. ",
"_____no_output_____"
]
],
[
[
"%%writefile $managed_endpoints/score-new.yml\nname: loan-managed-env\nchannels:\n - conda-forge\ndependencies:\n - python=3.7\n - numpy\n - pip\n - scikit-learn==0.22.1\n - scipy\n - pip:\n - azureml-defaults\n - azureml-sdk[notebooks,automl]\n - pandas\n - inference-schema[numpy-support]\n - joblib\n - numpy\n - scipy\n - encrypted-inference==0.9\n - azure-storage-blob",
"_____no_output_____"
]
],
[
[
"## Define the endpoint configuration\r\nSpecific inputs are required to deploy a model on an online endpoint:\r\n\r\n1. Model files.\r\n1. The code that's required to score the model.\r\n1. An environment in which your model runs.\r\n1. Settings to specify the instance type and scaling capacity.",
"_____no_output_____"
]
],
[
[
"%%writefile $managed_endpoints/endpointconfig.yml\nname: loan-managed-endpoint\ntype: online\nauth_mode: key\ntraffic:\n blue: 100\n\ndeployments:\n #blue deployment\n - name: blue\n model: azureml:loan_approval_grid_model_30:1\n code_configuration:\n code:\n local_path: ./\n scoring_script: score.py\n environment: \n name: loan-managed-env\n version: 1\n path: ./\n conda_file: file:./score-new.yml\n docker:\n image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1\n instance_type: Standard_DS3_v2\n scale_settings:\n scale_type: manual\n instance_count: 1\n min_instances: 1\n max_instances: 2",
"_____no_output_____"
]
],
[
[
"## Deployment\n\n<img align=\"center\" src=\"./images/MLOPs-2.gif\"/>",
"_____no_output_____"
],
[
"## Deploy your managed online endpoint to Azure\r\n\r\nThis deployment might take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.",
"_____no_output_____"
]
],
[
[
"!az ml endpoint create -g [your resource group name] -w [your AML workspace name] -n loan-managed-endpoint -f ./managed-endpoints/endpointconfig.yml",
"_____no_output_____"
]
],
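[
[
"_Added note (not part of the original notebook):_ once the create command returns, it can be useful to confirm the endpoint state and traffic split before invoking it. A hedged sketch — the exact subcommand and flags depend on the preview `az ml` CLI version installed, so check `az ml endpoint --help` if it differs:\n\n```\n!az ml endpoint show -g [your resource group name] -w [your AML workspace name] -n loan-managed-endpoint\n```",
"_____no_output_____"
]
],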
[
[
"## Create public and private keys\n\nIn order to work with Homomorphic Encryption we need to generate our private and public keys to satisfy the encryption process.\n\n`EILinearRegressionClient` allows us to create a homomorphic encryption based client, and public keys.\n\nTo register our training data with our Workspace we need to get the data into the data store. The Workspace will already have a default data store. The function `ws.get_default_datastore()` returns an instance of the data store associated with the Workspace.\n\nFor more information on **Datastore**, please visit: [Microsoft Datastore Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore?view=azure-ml-py)\n\nFor more information on **How to deploy an encrypted inferencing web service**, please visit: [Microsoft How to deploy an encrypted inferencing web service Documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-homomorphic-encryption-seal)\n",
"_____no_output_____"
]
],
[
[
"import os\nimport azureml.core\nfrom azureml.core import Workspace, Datastore\nfrom encrypted.inference.eiclient import EILinearRegressionClient\n\n# Create a new Encrypted inference client and a new secret key.\nedp = EILinearRegressionClient(verbose=True)\n\npublic_keys_blob, public_keys_data = edp.get_public_keys()\n\ndatastore = ws.get_default_datastore()\ncontainer_name = datastore.container_name\n\n# Create a local file and write the keys to it\npublic_keys = open(public_keys_blob, \"wb\")\npublic_keys.write(public_keys_data)\npublic_keys.close()\n\n# Upload the file to blob store\ndatastore.upload_files([public_keys_blob])\n\n# Delete the local file\nos.remove(public_keys_blob)",
"_____no_output_____"
],
[
"sample_index = 4\n\nprint(X_test.iloc[sample_index].to_frame())\ninputData = X_test.iloc[sample_index]\nsample_data = (X_test.to_numpy())",
"_____no_output_____"
],
[
"raw_data = edp.encrypt(sample_data[sample_index])",
"_____no_output_____"
]
],
[
[
"## Testing the Service with Encrypted data\n\nNow with test data, we can get it into a suitable format to consume the web service. First an instance of the web service should be obtained by calling the constructor `Webservice()` with the Workspace object and the service name as parameters. \n\nFor more information on **Webservice**, please visit: [Microsoft Webservice Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice?view=azure-ml-py)",
"_____no_output_____"
]
],
[
[
"import json\n\n#pass the connection string for blob storage to give the server access to the uploaded public keys \nconn_str_template = 'DefaultEndpointsProtocol={};AccountName={};AccountKey={};EndpointSuffix=core.windows.net'\nconn_str = conn_str_template.format(datastore.protocol, datastore.account_name, datastore.account_key)\n\n#build the json \ndata = json.dumps({\"data\": raw_data, \"key_id\" : public_keys_blob, \"conn_str\" : conn_str, \"container\" : container_name })",
"_____no_output_____"
]
],
[
[
"## Generate a sample request JSON file\n\nExport some test data to a JSON file we can send to the endpoint.",
"_____no_output_____"
]
],
[
[
"with open(os.path.join(managed_endpoints, 'sample-request.json'), 'w') as file:\n file.write(data)",
"_____no_output_____"
]
],
[
[
"## Invoke the endpoint to score data by using your model\r\n\r\nYou can use either the invoke command or a REST client of your choice to invoke the endpoint and score against it.",
"_____no_output_____"
]
],
[
[
"!az ml endpoint invoke -g [your resource group name] -w [your AML workspace name] -n loan-managed-endpoint --request-file ./managed-endpoints/sample-request.json > ./managed_endpoints/sample-response.json",
"_____no_output_____"
]
],
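[
[
"_Added illustration (not part of the original notebook):_ the same endpoint can also be scored through its REST interface instead of the CLI. A minimal sketch with `requests`; the scoring URI and key below are placeholders you would copy from the endpoint's details (for example in Azure ML studio), and `data` is the JSON payload built above:\n\n```python\nimport requests\n\nscoring_uri = '<your endpoint scoring URI>'   # placeholder\nendpoint_key = '<your endpoint key>'          # placeholder\n\nheaders = {\n    'Content-Type': 'application/json',\n    'Authorization': 'Bearer ' + endpoint_key,\n}\nresponse = requests.post(scoring_uri, data=data, headers=headers)\neresult = response.json()   # encrypted prediction, decrypted in the next section\n```",
"_____no_output_____"
]
],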
[
[
"## Decrypting Service Response\n\nThe below cell uses the `decrypt()` function to decrypt the response from the deployed service. ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport json\n\neresult = None\nwith open(os.path.join(managed_endpoints, 'sample-response.json'), 'r') as file:\n eresult = json.loads(json.loads(file.read()))\n\nresults = edp.decrypt(eresult)\n\nprint ('Decrypted the results ', results)\n\n#Apply argmax to identify the prediction result\nprediction = 'Deny'\nif results[0] > 0:\n prediction = 'Approve'\n\nactual = 'Deny'\nif Y_test[sample_index] == 1:\n actual = 'Approve'\n\nprint ( ' Prediction : ', prediction)\nprint( 'Actual : ', actual)",
"_____no_output_____"
]
],
[
[
"## Optional: Deploy to Azure Container Instance",
"_____no_output_____"
]
],
[
[
"!cp $managed_endpoints/score.py ./score.py",
"_____no_output_____"
]
],
[
[
"## Deployment dependencies\n\nThe first step is to define the dependencies that are needed for the service to run and they are defined by calling `CondaDependencies.create()`. This create function will receive as parameters the pip and conda packages to install on the remote machine. Secondly, the output of this function is persisted into a `.yml` file that will be leveraged later on the process.\n\nNow it's time to create a `InferenceConfig` object by calling its constructor and passing the runtime type, the path to the `entry_script` (score.py), and the `conda_file` (the previously created file that holds the environment dependencies).\n\nThe `CondaDependencies.create()` function initializes a new CondaDependencies object.\n\nFor more information on **CondaDependencies**, please visit: [Microsoft CondaDependencies Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py)\n\nFor more information on **InferenceConfig**, please visit: [Microsoft InferenceConfig Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py)",
"_____no_output_____"
]
],
[
[
"from azureml.core.model import InferenceConfig, Model\nfrom azureml.core.conda_dependencies import CondaDependencies\n\nazureml_pip_packages = ['azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',\n 'azureml-interpret', 'azureml-dataprep','azureml-dataprep[fuse,pandas]','joblib',\n 'matplotlib','scikit-learn==0.22.1','seaborn','fairlearn','encrypted-inference==0.9','azure-storage-blob']\n\n# Define dependencies needed in the remote environment\nmyenv = CondaDependencies.create(pip_packages=azureml_pip_packages)\n\n# Write dependencies to yml file\nwith open(\"myenv.yml\",\"w\") as f:\n f.write(myenv.serialize_to_string())\n\n# Create an inference config object based on the score.py and myenv.yml from previous steps\ninference_config = InferenceConfig(runtime= \"python\",\n entry_script=\"score.py\",\n conda_file=\"myenv.yml\")",
"_____no_output_____"
]
],
[
[
"## Deploy model to Azure Container Instance\n\nIn order to deploy the to an Azure Container Instance, the function `Model.deploy()` should be called, passing along the workspace object, service name and list of models to deploy.\n\n`Webservice` defines base functionality for deploying models as web service endpoints in Azure Machine Learning. Webservice constructor is used to retrieve a cloud representation of a Webservice object associated with the provided Workspace.\n\nThe `AciWebService` represents a machine learning model deployed as a web service endpoint on Azure Container Instances. A deployed service is created from a model, script, and associated files. The resulting web service is a load-balanced, HTTP endpoint with a REST API. You can send data to this API and receive the prediction returned by the model.\n\n\nFor more information on **Model**, please visit: [Microsoft Model Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py)\n\nFor more information on **Webservice**, please visit: [Microsoft Webservice Class Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py)\n\nFor more information on **AciWebservice**, please visit: [Microsoft AciWebservice Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice?view=azure-ml-py)\n\n**Note:** Please wait for the execution of the cell to finish before moving forward.",
"_____no_output_____"
]
],
[
[
"from azureml.core.model import InferenceConfig\nfrom azureml.core.webservice import AciWebservice\nfrom azureml.core.webservice import Webservice\nfrom azureml.exceptions import WebserviceException\nfrom azureml.core.model import Model\n\naciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n memory_gb = 2,\n description = \"Loan approval service\")\n\nservice_name_aci = 'loan-approval-aci'\nprint(service_name_aci)\n\ntry:\n aci_service = Webservice(ws, service_name_aci)\n print(aci_service.state)\nexcept WebserviceException:\n aci_service = Model.deploy(ws, service_name_aci, [model], inference_config, aciconfig)\n aci_service.wait_for_deployment(True)\n print(aci_service.state)",
"_____no_output_____"
]
],
[
[
"## Testing the Service with Encrypted data\n\nNow with test data, we can get it into a suitable format to consume the web service. First an instance of the web service should be obtained by calling the constructor `Webservice()` with the Workspace object and the service name as parameters. \n\nFor more information on **Webservice**, please visit: [Microsoft Webservice Documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice?view=azure-ml-py)",
"_____no_output_____"
]
],
[
[
"import json\r\nfrom azureml.core import Webservice\r\n\r\nservice = Webservice(ws, service_name_aci)\r\n\r\n#pass the connection string for blob storage to give the server access to the uploaded public keys \r\nconn_str_template = 'DefaultEndpointsProtocol={};AccountName={};AccountKey={};EndpointSuffix=core.windows.net'\r\nconn_str = conn_str_template.format(datastore.protocol, datastore.account_name, datastore.account_key)\r\n\r\n#build the json \r\ndata = json.dumps({\"data\": raw_data, \"key_id\" : public_keys_blob, \"conn_str\" : conn_str, \"container\" : container_name })\r\ndata = bytes(data, encoding='ASCII')\r\n\r\nprint ('Making an encrypted inference web service call ')\r\neresult = service.run(input_data=data)\r\n\r\nprint ('Received encrypted inference results')\r\nprint (f'Encrypted results: ...', eresult[0][0:100], '...')",
"_____no_output_____"
]
],
[
[
"## Decrypting Service Response\n\nThe below cell uses the `decrypt()` function to decrypt the response from the deployed ACI Service. ",
"_____no_output_____"
]
],
[
[
"import numpy as np \r\n\r\nresults = edp.decrypt(eresult)\r\n\r\nprint ('Decrypted the results ', results)\r\n\r\n#Apply argmax to identify the prediction result\r\nprediction = 'Deny'\r\nif results[0] > 0:\r\n prediction = 'Approve'\r\n\r\nactual = 'Deny'\r\nif Y_test[sample_index] == 1:\r\n actual = 'Approve'\r\n\r\nprint ( ' Prediction : ', prediction)\r\nprint( 'Actual : ', actual)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4add7fb282556b7b06b4343709df7b6addb35d29
| 16,134 |
ipynb
|
Jupyter Notebook
|
dft_workflow/job_analysis/prepare_oer_sets/write_oer_sets.ipynb
|
raulf2012/PROJ_IrOx_OER
|
56883d6f5b62e67703fe40899e2e68b3f5de143b
|
[
"MIT"
] | 1 |
2022-03-21T04:43:47.000Z
|
2022-03-21T04:43:47.000Z
|
dft_workflow/job_analysis/prepare_oer_sets/write_oer_sets.ipynb
|
raulf2012/PROJ_IrOx_OER
|
56883d6f5b62e67703fe40899e2e68b3f5de143b
|
[
"MIT"
] | null | null | null |
dft_workflow/job_analysis/prepare_oer_sets/write_oer_sets.ipynb
|
raulf2012/PROJ_IrOx_OER
|
56883d6f5b62e67703fe40899e2e68b3f5de143b
|
[
"MIT"
] | 1 |
2021-02-13T12:55:02.000Z
|
2021-02-13T12:55:02.000Z
| 30.213483 | 120 | 0.426429 |
[
[
[
"# Writing OER sets to file for\n---",
"_____no_output_____"
],
[
"### Import Modules",
"_____no_output_____"
]
],
[
[
"import os\nprint(os.getcwd())\nimport sys\nimport time; ti = time.time()\n\nimport json\n\nimport pandas as pd\nimport numpy as np\n\n# #########################################################\nfrom methods import (\n get_df_features_targets,\n get_df_jobs,\n get_df_jobs_paths,\n get_df_atoms_sorted_ind,\n )\nfrom methods import create_name_str_from_tup\nfrom methods import get_df_jobs_paths, get_df_jobs_data\n\n# #########################################################\nfrom local_methods import write_other_jobs_in_set",
"/mnt/f/Dropbox/01_norskov/00_git_repos/PROJ_IrOx_OER/dft_workflow/job_analysis/prepare_oer_sets\n"
],
[
"from methods import isnotebook \nisnotebook_i = isnotebook()\nif isnotebook_i:\n from tqdm.notebook import tqdm\n verbose = True\nelse:\n from tqdm import tqdm\n verbose = False",
"_____no_output_____"
]
],
[
[
"### Read Data",
"_____no_output_____"
]
],
[
[
"df_jobs = get_df_jobs()\ndf_jobs_paths = get_df_jobs_paths()\ndf_features_targets = get_df_features_targets()\ndf_atoms = get_df_atoms_sorted_ind()\n\ndf_jobs_paths = get_df_jobs_paths()\ndf_jobs_data = get_df_jobs_data()",
"_____no_output_____"
],
[
"df_atoms = df_atoms.set_index(\"job_id\")",
"_____no_output_____"
]
],
[
[
"\n\n",
"_____no_output_____"
]
],
[
[
"### Main loop | writing OER sets",
"_____no_output_____"
]
],
[
[
"# # TEMP\n\n# name_i = ('slac', 'wufulafe_03', 58.0)\n# df_features_targets = df_features_targets.loc[[name_i]]",
"_____no_output_____"
],
[
"# # TEMP\n# print(111 * \"TEMP | \")\n\n# # df_features_targets.index[329]\n\n# indices = [\n# ('slac', 'relovalu_12', 24.0),\n# ]\n\n# df_features_targets = df_features_targets.loc[indices]",
"_____no_output_____"
],
[
"# for name_i, row_i in df_features_targets.iterrows():\n\niterator = tqdm(df_features_targets.index, desc=\"1st loop\")\nfor i_cnt, index_i in enumerate(iterator):\n row_i = df_features_targets.loc[index_i]\n\n # if verbose:\n # print(name_i)\n\n # #####################################################\n job_id_o_i = row_i.data.job_id_o.iloc[0]\n job_id_bare_i = row_i.data.job_id_bare.iloc[0]\n job_id_oh_i = row_i.data.job_id_oh.iloc[0]\n # #####################################################\n\n if job_id_bare_i is None:\n continue\n\n oh_exists = False\n if job_id_oh_i is not None:\n oh_exists = True\n\n # #####################################################\n df_atoms__o = df_atoms.loc[job_id_o_i]\n df_atoms__bare = df_atoms.loc[job_id_bare_i]\n\n # #####################################################\n atoms__o = df_atoms__o.atoms_sorted_good\n atoms__bare = df_atoms__bare.atoms_sorted_good\n\n if oh_exists:\n df_atoms__oh = df_atoms.loc[job_id_oh_i]\n atoms__oh = df_atoms__oh.atoms_sorted_good\n\n # #########################################################\n # #########################################################\n # dir_name = create_name_str_from_tup(name_i)\n dir_name = create_name_str_from_tup(index_i)\n\n dir_path = os.path.join(\n os.environ[\"PROJ_irox_oer\"],\n \"dft_workflow/job_analysis/prepare_oer_sets\",\n \"out_data/oer_group_files\",\n dir_name)\n\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n\n\n # #####################################################\n atoms__o.write(\n os.path.join(dir_path, \"atoms__o.traj\"))\n\n atoms__o.write(\n os.path.join(dir_path, \"atoms__o.cif\"))\n\n atoms__bare.write(\n os.path.join(dir_path, \"atoms__bare.traj\"))\n atoms__bare.write(\n os.path.join(dir_path, \"atoms__bare.cif\"))\n\n if oh_exists:\n atoms__oh.write(\n os.path.join(dir_path, \"atoms__oh.traj\"))\n atoms__oh.write(\n os.path.join(dir_path, \"atoms__oh.cif\"))\n\n\n # #####################################################\n data_dict_to_write = dict(\n job_id_o=job_id_o_i,\n job_id_bare=job_id_bare_i,\n job_id_oh=job_id_oh_i,\n )\n\n data_path = os.path.join(dir_path, \"data.json\")\n with open(data_path, \"w\") as outfile:\n json.dump(data_dict_to_write, outfile, indent=2)\n\n\n # #####################################################\n # Write other jobs in OER set\n write_other_jobs_in_set(\n job_id_bare_i,\n dir_path=dir_path,\n df_jobs=df_jobs, df_atoms=df_atoms,\n df_jobs_paths=df_jobs_paths,\n df_jobs_data=df_jobs_data,\n )",
"_____no_output_____"
]
],
[
[
"\n",
"_____no_output_____"
]
],
[
[
"# Writing top systems to file ROUGH TEMP",
"_____no_output_____"
]
],
[
[
"# TOP SYSTEMS\n\nif False:\n# if True:\n df_features_targets = df_features_targets.loc[\n [\n\n (\"slac\", \"tefovuto_94\", 16.0),\n# slac__nifupidu_92__032\n# sherlock__bihetofu_24__036\n\n ('slac', 'hobukuno_29', 16.0),\n ('sherlock', 'ramufalu_44', 56.0),\n ('slac', 'nifupidu_92', 32.0),\n ('sherlock', 'bihetofu_24', 36.0),\n ('slac', 'dotivela_46', 32.0),\n ('slac', 'vovumota_03', 33.0),\n ('slac', 'ralutiwa_59', 32.0),\n ('sherlock', 'bebodira_65', 16.0),\n ('sherlock', 'soregawu_05', 62.0),\n ('slac', 'hivovaru_77', 26.0),\n ('sherlock', 'vegarebo_06', 50.0),\n ('slac', 'ralutiwa_59', 30.0),\n ('sherlock', 'kamevuse_75', 49.0),\n ('nersc', 'hesegula_40', 94.0),\n ('slac', 'fewirefe_11', 39.0),\n ('sherlock', 'vipikema_98', 60.0),\n ('slac', 'gulipita_22', 48.0),\n ('sherlock', 'rofetaso_24', 48.0),\n ('slac', 'runopeno_56', 32.0),\n ('slac', 'magiwuni_58', 26.0),\n ]\n ]\n\n for name_i, row_i in df_features_targets.iterrows():\n\n # #####################################################\n job_id_o_i = row_i.data.job_id_o.iloc[0]\n job_id_bare_i = row_i.data.job_id_bare.iloc[0]\n job_id_oh_i = row_i.data.job_id_oh.iloc[0]\n # #####################################################\n\n oh_exists = False\n if job_id_oh_i is not None:\n oh_exists = True\n\n # #####################################################\n df_atoms__o = df_atoms.loc[job_id_o_i]\n df_atoms__bare = df_atoms.loc[job_id_bare_i]\n\n # #####################################################\n atoms__o = df_atoms__o.atoms_sorted_good\n atoms__bare = df_atoms__bare.atoms_sorted_good\n\n if oh_exists:\n df_atoms__oh = df_atoms.loc[job_id_oh_i]\n atoms__oh = df_atoms__oh.atoms_sorted_good\n\n # #########################################################\n # #########################################################\n dir_name = create_name_str_from_tup(name_i)\n\n dir_path = os.path.join(\n os.environ[\"PROJ_irox_oer\"],\n \"dft_workflow/job_analysis/prepare_oer_sets\",\n \"out_data/top_overpot_sys\")\n # dir_name)\n\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n\n # atoms__o.write(\n # os.path.join(dir_path, dir_name + \"_o.cif\"))\n\n # atoms__bare.write(\n # os.path.join(dir_path, dir_name + \"_bare.cif\"))\n\n if oh_exists:\n atoms__oh.write(\n os.path.join(dir_path, dir_name + \"_oh.cif\"))",
"_____no_output_____"
]
],
[
[
"# MISC | Writing random cifs to file to open in VESTA",
"_____no_output_____"
]
],
[
[
"df_subset = df_features_targets.sample(n=6)\n\nif False:\n for name_i, row_i in df_subset.iterrows():\n tmp = 42\n\n job_id_oh_i = row_i[(\"data\", \"job_id_oh\", \"\", )]\n\n\n # # #####################################################\n # job_id_o_i = row_i.data.job_id_o.iloc[0]\n # job_id_bare_i = row_i.data.job_id_bare.iloc[0]\n # job_id_oh_i = row_i.data.job_id_oh.iloc[0]\n # # #####################################################\n\n # if job_id_bare_i is None:\n # continue\n\n oh_exists = False\n if job_id_oh_i is not None:\n oh_exists = True\n\n # # #####################################################\n # df_atoms__o = df_atoms.loc[job_id_o_i]\n # df_atoms__bare = df_atoms.loc[job_id_bare_i]\n\n # # #####################################################\n # atoms__o = df_atoms__o.atoms_sorted_good\n # atoms__bare = df_atoms__bare.atoms_sorted_good\n\n if oh_exists:\n df_atoms__oh = df_atoms.loc[job_id_oh_i]\n atoms__oh = df_atoms__oh.atoms_sorted_good\n\n # #########################################################\n # #########################################################\n file_name_i = create_name_str_from_tup(name_i)\n print(file_name_i)\n\n dir_path = os.path.join(\n os.environ[\"PROJ_irox_oer\"],\n \"dft_workflow/job_analysis/prepare_oer_sets\",\n \"out_data/misc_cif_files_oh\")\n # dir_name)\n\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n\n\n # #####################################################\n # atoms__o.write(\n # os.path.join(dir_path, \"atoms__o.traj\"))\n\n # atoms__o.write(\n # os.path.join(dir_path, \"atoms__o.cif\"))\n\n # atoms__bare.write(\n # os.path.join(dir_path, \"atoms__bare.traj\"))\n # atoms__bare.write(\n # os.path.join(dir_path, \"atoms__bare.cif\"))\n\n if oh_exists:\n atoms__oh.write(\n os.path.join(dir_path, file_name_i + \".cif\"))\n\n # os.path.join(dir_path, \"atoms__oh.traj\"))\n\n # atoms__oh.write(\n # os.path.join(dir_path, \"atoms__oh.cif\"))",
"_____no_output_____"
],
[
"# #########################################################\nprint(20 * \"# # \")\nprint(\"All done!\")\nprint(\"Run time:\", np.round((time.time() - ti) / 60, 3), \"min\")\nprint(\"write_oer_sets.ipynb\")\nprint(20 * \"# # \")\n# #########################################################",
"# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # \nAll done!\nRun time: 0.03 min\nwrite_oer_sets.ipynb\n# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # \n"
]
],
[
[
"\n\n",
"_____no_output_____"
]
],
[
[
"# import os\n# print(os.getcwd())\n# import sys\n\n# import pickle\n\n# pd.set_option('display.max_columns', None)\n# # pd.set_option('display.max_rows', None)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"raw",
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"code",
"raw",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"raw"
],
[
"code"
]
] |
4add9da6b90223791d6fe608145b9b410103415f
| 16,282 |
ipynb
|
Jupyter Notebook
|
introductory-tutorials/intro-to-julia/09. Julia is fast.ipynb
|
Nihal-197/JuliaTutorials
|
a8f7ebcf14f19242f584965f6eff897b686dbb36
|
[
"MIT"
] | null | null | null |
introductory-tutorials/intro-to-julia/09. Julia is fast.ipynb
|
Nihal-197/JuliaTutorials
|
a8f7ebcf14f19242f584965f6eff897b686dbb36
|
[
"MIT"
] | null | null | null |
introductory-tutorials/intro-to-julia/09. Julia is fast.ipynb
|
Nihal-197/JuliaTutorials
|
a8f7ebcf14f19242f584965f6eff897b686dbb36
|
[
"MIT"
] | null | null | null | 20.610127 | 292 | 0.507923 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4adda565b70811d737b47bb7e43b78333b7b4c5b
| 58,489 |
ipynb
|
Jupyter Notebook
|
hypersolvers-control/experiments/cartpole/01b_multistage_pretrain_separated.ipynb
|
DiffEqML/diffeqml_research
|
7f4379f1fc703a5b19b9e248bbbeaf188511bdb9
|
[
"Apache-2.0"
] | null | null | null |
hypersolvers-control/experiments/cartpole/01b_multistage_pretrain_separated.ipynb
|
DiffEqML/diffeqml_research
|
7f4379f1fc703a5b19b9e248bbbeaf188511bdb9
|
[
"Apache-2.0"
] | null | null | null |
hypersolvers-control/experiments/cartpole/01b_multistage_pretrain_separated.ipynb
|
DiffEqML/diffeqml_research
|
7f4379f1fc703a5b19b9e248bbbeaf188511bdb9
|
[
"Apache-2.0"
] | null | null | null | 142.309002 | 17,382 | 0.877327 |
[
[
[
"# Multistage Hypersolver pre-train (separated training and ablation study)",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport time\nimport torch\nimport torch.nn as nn\nimport matplotlib.pyplot as plt\nfrom math import pi as π\n\nimport sys; sys.path.append(2*'../') # go n dirs back\nfrom src import *\n\nfrom torchdyn.core import NeuralODE\nfrom torchdyn.datasets import *\nfrom torchdyn.numerics import odeint, Euler, HyperEuler\nfrom torchdyn.numerics.solvers import Midpoint, SolverTemplate\nfrom torchdyn.numerics.hypersolvers import HyperMidpoint\n\ndevice = 'cpu' # feel free to change!",
"_____no_output_____"
],
[
"class MultiStageHypersolver(SolverTemplate):\n \"\"\"\n Explicit multistage ODE stepper: inner stage is a vector field corrector\n while the outer stage is a residual approximator of the ODE solver\n \"\"\"\n def __init__(self, inner_stage: nn.Module, outer_stage: nn.Module,\n base_solver=Midpoint, dtype=torch.float32):\n super().__init__(order=base_solver().order)\n self.dtype = dtype\n self.stepping_class = 'fixed'\n self.base_solver = base_solver\n self.inner_stage = inner_stage\n self.outer_stage = outer_stage\n\n def step(self, f, x, t, dt, k1=None):\n # Correct vector field with inner stage and propagate\n self.vector_field = f\n _, _x_sol, _ = self.base_solver().step(self.corrected_vector_field, x, t, dt, k1=k1)\n # Residual correction with outer stage\n x_sol = _x_sol + dt**self.base_solver().order * self.outer_stage(t, f(t, x))\n return _, x_sol, _ \n\n def corrected_vector_field(self, t, x):\n return self.vector_field(t, x) + self.inner_stage(t, x)\n\n\nclass HyperNetwork(nn.Module):\n \"\"\"Simple hypernetwork using as input the current state, vector field and controller\"\"\"\n def __init__(self, net, sys):\n super().__init__()\n self.net = net\n self.sys = sys\n \n def forward(self, t, x):\n xfu = torch.cat([x, self.sys.cur_f, self.sys.cur_u], -1)\n return self.net(xfu)\n ",
"_____no_output_____"
],
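[
"# Added illustration (not part of the original notebook): a quick sanity check of the\n# multistage step on a toy linear vector field. With both stages set to zero the step\n# should reduce to a plain Midpoint step, i.e. x * (1 - dt + dt**2 / 2) for dx/dt = -x.\ndef zero_stage(t, x):\n    return 0\n\ntoy_solver = MultiStageHypersolver(zero_stage, zero_stage, base_solver=Midpoint)\nf_toy = lambda t, x: -x\nx0_toy = torch.ones(1, 4)\n_, x_step, _ = toy_solver.step(f_toy, x0_toy, t=0., dt=0.05)\nprint(x_step)",
"_____no_output_____"
],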
[
"# System we have\nsys = CartPole(RandConstController())\n\n# Real system\nsys_nominal = CartPole(RandConstController())\nsys_nominal.frictioncart = 0.1\nsys_nominal.frictionpole = 0.03\n\n\n# Initial distribution\nx0 = 2*π # limit of the state distribution (in rads and rads/second)\ninit_dist = torch.distributions.Uniform(torch.Tensor([-x0, -x0, -x0, -x0]), torch.Tensor([x0, x0, x0, x0]))\nu_min, u_max = -30, 30\n",
"_____no_output_____"
]
],
[
[
"## Training loop (inner stage only)\nWe train via stochastic exploration\n(This will take some time)",
"_____no_output_____"
]
],
[
[
"\nhdim = 32\nbase_net_inner = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))\nbase_net_outer = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))\n# base_net_inner = nn.Sequential(nn.Linear(9, 32), nn.Softplus(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 4))\n# base_net_outer = nn.Sequential(nn.Linear(9, 32), nn.Softplus(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 4))\n\ninner_stage = HyperNetwork(base_net_inner, sys)\nouter_stage = HyperNetwork(base_net_outer, sys)\n# Avoid outer stage being trained\ndef dummy_stage(t, x):\n return 0\n\n# Use only inner stage\nmultistagehs = MultiStageHypersolver(inner_stage, dummy_stage, base_solver=Midpoint)\nopt = torch.optim.Adam(multistagehs.inner_stage.parameters(), lr=1e-2) # only train inner\nscheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[3e5, 4e5], gamma=0.1)\n# opt = torch.optim.Adam(multistagehs.inner_stage.parameters(), lr=1e-3)\n\nloss_func = nn.MSELoss()\nepochs = 50000\nbs = 128\ndt = 0.05\nspan = torch.linspace(0, dt, 2)\nlosses = []\nfor i in range(epochs):\n x0 = init_dist.sample((bs,)).to(device)\n val = torch.Tensor(bs, 1).uniform_(u_min, u_max).to(device)\n sys.u.u0 = val\n sys_nominal.u.u0 = val\n \n # Compute residuals\n _, sol_gt = odeint(sys_nominal._dynamics, x0, span, solver='rk4')[-1]\n _, sol_hs = odeint(sys._dynamics, x0, span, solver=multistagehs)[-1]\n loss = loss_func(sol_gt, sol_hs)\n\n # Optimization step\n loss.backward(); opt.step(); opt.zero_grad(); scheduler.step()\n print(f'Step: {i}, Residual loss: {loss:.8f}', end='\\r')\n losses.append(loss.detach().cpu().item())",
"Step: 49999, Residual loss: 0.00005685\r"
],
[
"fig, ax = plt.subplots(1, 1)\nax.plot(losses)\nax.set_yscale('log')",
"_____no_output_____"
]
],
[
[
"## Outer stage training only\nThis is basically finetuning once we get the first stage",
"_____no_output_____"
]
],
[
[
"base_net_outer = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))\nouter_stage = HyperNetwork(base_net_outer, sys)\n\n# Use only inner stage\nmultistagehs.outer_stage = outer_stage # new outer stage in pre-trained hypersolver\nopt = torch.optim.Adam(multistagehs.outer_stage.parameters(), lr=1e-2) # only train outer stage\nscheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[3e5, 4e5], gamma=0.1)\n\nloss_func = nn.MSELoss()\nepochs = 50000\nbs = 128\ndt = 0.05\nspan = torch.linspace(0, dt, 2)\nlosses = []\nfor i in range(epochs):\n x0 = init_dist.sample((bs,)).to(device)\n val = torch.Tensor(bs, 1).uniform_(u_min, u_max).to(device)\n sys.u.u0 = val\n sys_nominal.u.u0 = val\n \n # Compute residuals\n _, sol_gt = odeint(sys_nominal._dynamics, x0, span, solver='rk4')[-1]\n _, sol_hs = odeint(sys._dynamics, x0, span, solver=multistagehs)[-1]\n loss = loss_func(sol_gt, sol_hs)\n\n # Optimization step\n loss.backward(); opt.step(); opt.zero_grad(); scheduler.step()\n print(f'Step: {i}, Residual loss: {loss:.8f}', end='\\r')\n losses.append(loss.detach().cpu().item())",
"Step: 49999, Residual loss: 0.00003245\r"
],
[
"fig, ax = plt.subplots(1, 1)\nax.plot(losses)\nax.set_yscale('log')",
"_____no_output_____"
],
[
"# Save the model\ntorch.save(multistagehs, 'saved_models/hs_multistage_separated_snake.pt')",
"_____no_output_____"
]
],
[
[
"## Training residual one-step dynamic model\nWe can use the same `multistage hypersolver` with the inner stage corrector set to zero",
"_____no_output_____"
]
],
[
[
"hdim = 32\nbase_net_outer = nn.Sequential(nn.Linear(9, hdim), Snake(hdim), nn.Linear(hdim, hdim), Snake(hdim), nn.Linear(hdim, 4))\nouter_stage = HyperNetwork(base_net_outer, sys)\n# Avoid outer stage being trained\ndef dummy_stage(t, x):\n return 0\n\n# Use only inner outer stage (residual dynamics)\nresidual_dynamics_solver = MultiStageHypersolver(dummy_stage, outer_stage, base_solver=Midpoint)\nopt = torch.optim.Adam(residual_dynamics_solver.outer_stage.parameters(), lr=1e-2) \nscheduler = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[3e5, 4e5], gamma=0.1)\n\nloss_func = nn.MSELoss()\nepochs = 50000\nbs = 128\ndt = 0.05\nspan = torch.linspace(0, dt, 2)\nlosses = []\nfor i in range(epochs):\n x0 = init_dist.sample((bs,)).to(device)\n val = torch.Tensor(bs, 1).uniform_(u_min, u_max).to(device)\n sys.u.u0 = val\n sys_nominal.u.u0 = val\n \n # Compute residuals\n _, sol_gt = odeint(sys_nominal._dynamics, x0, span, solver='rk4')[-1]\n _, sol_hs = odeint(sys._dynamics, x0, span, solver=residual_dynamics_solver)[-1]\n loss = loss_func(sol_gt, sol_hs)\n\n # Optimization step\n loss.backward(); opt.step(); opt.zero_grad(); scheduler.step()\n print(f'Step: {i}, Residual loss: {loss:.8f}', end='\\r')\n losses.append(loss.detach().cpu().item())",
"Step: 49999, Residual loss: 0.00031161\r"
],
[
"fig, ax = plt.subplots(1, 1)\nax.plot(losses)\nax.set_yscale('log')",
"_____no_output_____"
],
[
"# Save the model\ntorch.save(residual_dynamics_solver, 'saved_models/residual_dynamics_solver.pt')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4adda8533322cb909a033ffc5e12f06da8073c97
| 8,189 |
ipynb
|
Jupyter Notebook
|
ImageColorizerColabStable.ipynb
|
loftwah/DeOldify
|
e55c1abc86475a9a840ae2868a473e6e9779a393
|
[
"MIT"
] | 2 |
2021-05-13T23:00:36.000Z
|
2021-08-17T09:10:01.000Z
|
ImageColorizerColabStable.ipynb
|
loftwah/DeOldify
|
e55c1abc86475a9a840ae2868a473e6e9779a393
|
[
"MIT"
] | 3 |
2021-06-08T21:52:21.000Z
|
2022-03-12T00:37:23.000Z
|
ImageColorizerColabStable.ipynb
|
loftwah/DeOldify
|
e55c1abc86475a9a840ae2868a473e6e9779a393
|
[
"MIT"
] | 2 |
2020-11-02T12:54:57.000Z
|
2020-11-09T09:47:36.000Z
| 26.84918 | 565 | 0.58371 |
[
[
[
"<a href=\"https://colab.research.google.com/github/jantic/DeOldify/blob/master/ImageColorizerColabStable.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"### **<font color='blue'> Stable Colorizer </font>**",
"_____no_output_____"
],
[
"#◢ DeOldify - Colorize your own photos!\n\n####**Credits:**\n\nSpecial thanks to:\n\nMatt Robinson and María Benavente for pioneering the DeOldify image colab notebook. \n\nDana Kelley for doing things, breaking stuff & having an opinion on everything.",
"_____no_output_____"
],
[
"\n\n---\n\n\n#◢ Verify Correct Runtime Settings\n\n**<font color='#FF000'> IMPORTANT </font>**\n\nIn the \"Runtime\" menu for the notebook window, select \"Change runtime type.\" Ensure that the following are selected:\n* Runtime Type = Python 3\n* Hardware Accelerator = GPU \n",
"_____no_output_____"
],
[
"#◢ Git clone and install DeOldify",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/jantic/DeOldify.git DeOldify ",
"_____no_output_____"
],
[
"cd DeOldify",
"_____no_output_____"
]
],
[
[
"#◢ Setup",
"_____no_output_____"
]
],
[
[
"#NOTE: This must be the first call in order to work properly!\nfrom deoldify import device\nfrom deoldify.device_id import DeviceId\n#choices: CPU, GPU0...GPU7\ndevice.set(device=DeviceId.GPU0)\n\nimport torch\n\nif not torch.cuda.is_available():\n print('GPU not available.')",
"_____no_output_____"
],
[
"!pip install -r colab_requirements.txt",
"_____no_output_____"
],
[
"import fastai\nfrom deoldify.visualize import *\n\ntorch.backends.cudnn.benchmark = True",
"_____no_output_____"
],
[
"!mkdir 'models'\n!wget https://www.dropbox.com/s/mwjep3vyqk5mkjc/ColorizeStable_gen.pth?dl=0 -O ./models/ColorizeStable_gen.pth",
"_____no_output_____"
],
[
"!wget https://media.githubusercontent.com/media/jantic/DeOldify/master/resource_images/watermark.png -O ./resource_images/watermark.png",
"_____no_output_____"
],
[
"colorizer = get_image_colorizer(artistic=False)",
"_____no_output_____"
]
],
[
[
"#◢ Instructions",
"_____no_output_____"
],
[
"### source_url\nType in a url to a direct link of an image. Usually that means they'll end in .png, .jpg, etc. NOTE: If you want to use your own image, upload it first to a site like Imgur. \n\n### render_factor\nThe default value of 35 has been carefully chosen and should work -ok- for most scenarios (but probably won't be the -best-). This determines resolution at which the color portion of the image is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality images in particular will generally benefit by lowering the render factor. Higher render factors are often better for higher quality images, but the colors may get slightly washed out. \n\n### watermarked\nSelected by default, this places a watermark icon of a palette at the bottom left corner of the image. This is intended to be a standard way to convey to others viewing the image that it is colorized by AI. We want to help promote this as a standard, especially as the technology continues to improve and the distinction between real and fake becomes harder to discern. This palette watermark practice was initiated and lead by the company MyHeritage in the MyHeritage In Color feature (which uses a newer version of DeOldify than what you're using here).\n\n#### How to Download a Copy\nSimply right click on the displayed image and click \"Save image as...\"!\n\n## Pro Tips\n\nYou can evaluate how well the image is rendered at each render_factor by using the code at the bottom (that cell under \"See how well render_factor values perform on a frame here\"). \n\n## Troubleshooting\nIf you get a 'CUDA out of memory' error, you probably have the render_factor too high.",
"_____no_output_____"
],
[
"#◢ Colorize!!",
"_____no_output_____"
]
],
[
[
"source_url = '' #@param {type:\"string\"}\nrender_factor = 35 #@param {type: \"slider\", min: 7, max: 45}\nwatermarked = True #@param {type:\"boolean\"}\n\nif source_url is not None and source_url !='':\n image_path = colorizer.plot_transformed_image_from_url(url=source_url, render_factor=render_factor, compare=True, watermarked=watermarked)\n show_image_in_notebook(image_path)\nelse:\n print('Provide an image url and try again.')",
"_____no_output_____"
]
],
[
[
"## See how well render_factor values perform on the image here",
"_____no_output_____"
]
],
[
[
"for i in range(10,45,2):\n colorizer.plot_transformed_image('test_images/image.png', render_factor=i, display_render_factor=True, figsize=(8,8))",
"_____no_output_____"
]
],
[
[
"---\n#⚙ Recommended image sources \n* [/r/TheWayWeWere](https://www.reddit.com/r/TheWayWeWere/)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4addae6ebd35602ee9b246a00fad017255547195
| 417,068 |
ipynb
|
Jupyter Notebook
|
scripts/practiceScripts/scRFE-Copy1.ipynb
|
czbiohub/scRFE
|
716b0f59b4b949e6842af3080276c7ea835618a9
|
[
"MIT"
] | 11 |
2020-03-24T17:10:50.000Z
|
2021-09-08T22:56:16.000Z
|
scripts/practiceScripts/scRFE-Copy1.ipynb
|
czbiohub/scRFE
|
716b0f59b4b949e6842af3080276c7ea835618a9
|
[
"MIT"
] | null | null | null |
scripts/practiceScripts/scRFE-Copy1.ipynb
|
czbiohub/scRFE
|
716b0f59b4b949e6842af3080276c7ea835618a9
|
[
"MIT"
] | 1 |
2020-03-26T23:42:00.000Z
|
2020-03-26T23:42:00.000Z
| 100.232636 | 9,224 | 0.699349 |
[
[
[
"# testing scRFE",
"_____no_output_____"
],
[
"pip list",
"Package Version \r\n---------------------------------- -----------------\r\nalabaster 0.7.12 \r\nanaconda-client 1.7.2 \r\nanaconda-navigator 1.9.12 \r\nanaconda-project 0.8.3 \r\nanndata 0.7.3 \r\napplaunchservices 0.2.1 \r\nappnope 0.1.0 \r\nappscript 1.0.1 \r\nargh 0.26.2 \r\nasn1crypto 1.3.0 \r\nastroid 2.3.3 \r\nastropy 4.0 \r\natomicwrites 1.3.0 \r\nattrs 19.3.0 \r\nautopep8 1.4.4 \r\nBabel 2.8.0 \r\nbackcall 0.1.0 \r\nbackports.functools-lru-cache 1.6.1 \r\nbackports.shutil-get-terminal-size 1.0.0 \r\nbackports.tempfile 1.0 \r\nbackports.weakref 1.0.post1 \r\nbeautifulsoup4 4.8.2 \r\nbitarray 1.2.1 \r\nbkcharts 0.2 \r\nbleach 3.1.0 \r\nbokeh 1.4.0 \r\nboto 2.49.0 \r\nBottleneck 1.3.2 \r\ncertifi 2019.11.28 \r\ncffi 1.14.0 \r\nchardet 3.0.4 \r\nClick 7.0 \r\ncloudpickle 1.3.0 \r\nclyent 1.2.2 \r\ncolorama 0.4.3 \r\nconda 4.8.3 \r\nconda-build 3.18.11 \r\nconda-package-handling 1.6.0 \r\nconda-verify 3.4.2 \r\ncontextlib2 0.6.0.post1 \r\ncryptography 2.8 \r\ncycler 0.10.0 \r\nCython 0.29.15 \r\ncytoolz 0.10.1 \r\ndask 2.11.0 \r\ndecorator 4.4.1 \r\ndefusedxml 0.6.0 \r\ndiff-match-patch 20181111 \r\ndill 0.3.2 \r\ndistributed 2.11.0 \r\ndocutils 0.16 \r\nentrypoints 0.3 \r\net-xmlfile 1.0.1 \r\nfastcache 1.1.0 \r\nfbpca 1.0 \r\nfilelock 3.0.12 \r\nflake8 3.7.9 \r\nFlask 1.1.1 \r\nfsspec 0.6.2 \r\nfuture 0.18.2 \r\nget-version 2.1 \r\ngevent 1.4.0 \r\nglob2 0.7 \r\ngmpy2 2.0.8 \r\ngreenlet 0.4.15 \r\nh5py 2.10.0 \r\nHeapDict 1.0.1 \r\nhtml5lib 1.0.1 \r\nhypothesis 5.5.4 \r\nidna 2.8 \r\nimageio 2.6.1 \r\nimagesize 1.2.0 \r\nimportlib-metadata 1.5.0 \r\nintervaltree 3.0.2 \r\nipykernel 5.1.4 \r\nipython 7.12.0 \r\nipython-genutils 0.2.0 \r\nipywidgets 7.5.1 \r\nisort 4.3.21 \r\nitsdangerous 1.1.0 \r\njdcal 1.4.1 \r\njedi 0.14.1 \r\nJinja2 2.11.1 \r\njoblib 0.14.1 \r\njson5 0.9.1 \r\njsonschema 3.2.0 \r\njupyter 1.0.0 \r\njupyter-client 5.3.4 \r\njupyter-console 6.1.0 \r\njupyter-core 4.6.1 \r\njupyterlab 1.2.6 \r\njupyterlab-server 1.0.6 \r\nkeyring 21.1.0 \r\nkiwisolver 1.1.0 \r\nlazy-object-proxy 1.4.3 \r\nlegacy-api-wrap 1.2 \r\nlibarchive-c 2.8 \r\nlief 0.9.0 \r\nllvmlite 0.31.0 \r\nlocket 0.2.0 \r\nlxml 4.5.0 \r\nMarkupSafe 1.1.1 \r\nmatplotlib 3.1.3 \r\nmatplotlib-venn 0.11.5 \r\nmccabe 0.6.1 \r\nmistune 0.8.4 \r\nmkl-fft 1.0.15 \r\nmkl-random 1.1.0 \r\nmkl-service 2.3.0 \r\nmock 4.0.1 \r\nmore-itertools 8.2.0 \r\nmpmath 1.1.0 \r\nmsgpack 0.6.1 \r\nmultipledispatch 0.6.0 \r\nnatsort 7.0.1 \r\nnavigator-updater 0.2.1 \r\nnbconvert 5.6.1 \r\nnbformat 5.0.4 \r\nnetworkx 2.4 \r\nnltk 3.4.5 \r\nnose 1.3.7 \r\nnotebook 6.0.3 \r\nnumba 0.48.0 \r\nnumexpr 2.7.1 \r\nnumpy 1.18.1 \r\nnumpydoc 0.9.2 \r\nolefile 0.46 \r\nopenpyxl 3.0.3 \r\npackaging 20.1 \r\npandas 1.0.1 \r\npandocfilters 1.4.2 \r\nparso 0.5.2 \r\npartd 1.1.0 \r\npath 13.1.0 \r\npathlib2 2.3.5 \r\npathtools 0.1.2 \r\npatsy 0.5.1 \r\npep8 1.7.1 \r\npexpect 4.8.0 \r\npickleshare 0.7.5 \r\nPillow 7.0.0 \r\npip 20.0.2 \r\npip-autoremove 0.9.1 \r\npkginfo 1.5.0.1 \r\npluggy 0.13.1 \r\nply 3.11 \r\nprogress 1.5 \r\nprogressbar 2.5 \r\nprometheus-client 0.7.1 \r\nprompt-toolkit 3.0.3 \r\npsutil 5.6.7 \r\nptyprocess 0.6.0 \r\npy 1.8.1 \r\npycodestyle 2.5.0 \r\npycosat 0.6.3 \r\npycparser 2.19 \r\npycrypto 2.6.1 \r\npycurl 7.43.0.5 \r\npydocstyle 4.0.1 \r\npyflakes 2.1.1 \r\nPygments 2.5.2 \r\npylint 2.4.4 \r\npyodbc 4.0.0-unsupported\r\npyOpenSSL 19.1.0 \r\npyparsing 2.4.6 \r\npyrsistent 0.15.7 \r\nPySocks 1.7.1 \r\npytest 5.3.5 \r\npytest-arraydiff 0.3 \r\npytest-astropy 0.8.0 \r\npytest-astropy-header 0.1.2 \r\npytest-doctestplus 
0.5.0 \r\npytest-openfiles 0.4.0 \r\npytest-remotedata 0.3.2 \r\npython-dateutil 2.8.1 \r\npython-jsonrpc-server 0.3.4 \r\npython-language-server 0.31.7 \r\npytz 2019.3 \r\nPyWavelets 1.1.1 \r\nPyYAML 5.3 \r\npyzmq 18.1.1 \r\nQDarkStyle 2.8 \r\nQtAwesome 0.6.1 \r\nqtconsole 4.6.0 \r\nQtPy 1.9.0 \r\nreadme-renderer 26.0 \r\nrequests 2.22.0 \r\nrequests-toolbelt 0.9.1 \r\nrope 0.16.0 \r\nRtree 0.9.3 \r\nruamel-yaml 0.15.87 \r\nscanpy 1.5.1 \r\nscikit-image 0.16.2 \r\nscikit-learn 0.22.1 \r\nscipy 1.5.0 \r\nscRFE 1.5.1 \r\nseaborn 0.10.1 \r\nSend2Trash 1.5.0 \r\nsetuptools 47.3.1 \r\nsetuptools-scm 4.1.2 \r\nsimplegeneric 0.8.1 \r\nsingledispatch 3.4.0.3 \r\nsix 1.14.0 \r\nsnowballstemmer 2.0.0 \r\nsortedcollections 1.1.2 \r\nsortedcontainers 2.1.0 \r\nsoupsieve 1.9.5 \r\nSphinx 2.4.0 \r\nsphinxcontrib-applehelp 1.0.1 \r\nsphinxcontrib-devhelp 1.0.1 \r\nsphinxcontrib-htmlhelp 1.0.2 \r\nsphinxcontrib-jsmath 1.0.1 \r\nsphinxcontrib-qthelp 1.0.2 \r\nsphinxcontrib-serializinghtml 1.1.3 \r\nsphinxcontrib-websupport 1.2.0 \r\nspyder 4.0.1 \r\nspyder-kernels 1.8.1 \r\nSQLAlchemy 1.3.13 \r\nstatsmodels 0.11.0 \r\nsympy 1.5.1 \r\ntables 3.6.1 \r\ntbb 2019.0 \r\ntblib 1.6.0 \r\nterminado 0.8.3 \r\ntestpath 0.4.4 \r\ntoolz 0.10.0 \r\ntornado 6.0.3 \r\ntqdm 4.42.1 \r\ntraitlets 4.3.3 \r\ntwine 3.1.1 \r\nujson 1.35 \r\numap-learn 0.4.4 \r\nunicodecsv 0.14.1 \r\nurllib3 1.25.8 \r\nvenn 0.1.3 \r\nverboselogs 1.7 \r\nwatchdog 0.10.2 \r\nwcwidth 0.1.8 \r\nwebencodings 0.5.1 \r\nWerkzeug 1.0.0 \r\nwheel 0.34.2 \r\nwidgetsnbextension 3.5.1 \r\nwrapt 1.11.2 \r\nwurlitzer 2.0.0 \r\nxlrd 1.2.0 \r\nXlsxWriter 1.2.7 \r\nxlwings 0.17.1 \r\nxlwt 1.3.0 \r\nxmltodict 0.12.0 \r\nyapf 0.28.0 \r\nzict 1.0.0 \r\nzipp 2.2.0 \r\n"
],
[
"from scRFE import scRFE\nfrom scRFE import scRFEimplot\nfrom scRFE.scRFE import makeOneForest",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nfrom anndata import read_h5ad\n\nadata = read_h5ad('/Users/madelinepark/Downloads/Liver_droplet.h5ad')",
"/Users/madelinepark/opt/anaconda3/lib/python3.7/site-packages/anndata/compat/__init__.py:161: FutureWarning: Moving element from .uns['neighbors']['distances'] to .obsp['distances'].\n\nThis is where adjacency matrices should go now.\n FutureWarning,\n/Users/madelinepark/opt/anaconda3/lib/python3.7/site-packages/anndata/compat/__init__.py:161: FutureWarning: Moving element from .uns['neighbors']['connectivities'] to .obsp['connectivities'].\n\nThis is where adjacency matrices should go now.\n FutureWarning,\n"
],
[
"madeForest = makeOneForest(dataMatrix=adata, classOfInterest='age', labelOfInterest='3m', nEstimators=10,\n randomState=0, min_cells=15, keep_small_categories=True,\n nJobs=-1, oobScore=True, Step=0.2, Cv=3, verbosity=True)",
"Trying to set attribute `.obs` of view, copying.\n"
],
[
"type(madeForest[4])",
"_____no_output_____"
],
[
"from scRFE.scRFE import scRFEimplot\n\nscRFEimplot(X_new=madeForest[3], y = madeForest[4])",
"_____no_output_____"
],
[
"from scRFE.scRFE import scRFE\nfrom scRFE.scRFE import scRFEimplot\nfrom scRFE.scRFE import makeOneForest",
"_____no_output_____"
],
[
"scRFE(adata, classOfInterest = 'age', nEstimators = 10, Cv = 3)",
"\r 0%| | 0/3 [00:00<?, ?it/s]"
]
],
[
[
"# scRFE",
"_____no_output_____"
]
],
[
[
"# Imports \nimport numpy as np\nimport pandas as pd\nimport scanpy as sc\nimport random\nfrom anndata import read_h5ad\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.feature_selection import RFE\nfrom sklearn.feature_selection import RFECV\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport scanpy.external as sce\nimport logging as logg",
"_____no_output_____"
],
[
"adata = read_h5ad('/Users/madelinepark/Downloads/Liver_droplet.h5ad')",
"/Users/madelinepark/opt/anaconda3/lib/python3.7/site-packages/anndata/compat/__init__.py:161: FutureWarning: Moving element from .uns['neighbors']['distances'] to .obsp['distances'].\n\nThis is where adjacency matrices should go now.\n FutureWarning,\n/Users/madelinepark/opt/anaconda3/lib/python3.7/site-packages/anndata/compat/__init__.py:161: FutureWarning: Moving element from .uns['neighbors']['connectivities'] to .obsp['connectivities'].\n\nThis is where adjacency matrices should go now.\n FutureWarning,\n"
],
[
"def columnToString (dataMatrix):\n cat_columns = dataMatrix.obs.select_dtypes(['category']).columns\n dataMatrix.obs[cat_columns] = dataMatrix.obs[cat_columns].astype(str)\n \n return dataMatrix",
"_____no_output_____"
],
[
"def filterNormalize (dataMatrix, classOfInterest, verbosity):\n np.random.seed(644685)\n# sc.pp.filter_cells(dataMatrix, min_genes=0)\n# sc.pp.filter_genes(dataMatrix, min_cells=0)\n dataMatrix = dataMatrix[dataMatrix.obs[classOfInterest]!='nan']\n dataMatrix = dataMatrix[~dataMatrix.obs[classOfInterest].isna()]\n if verbosity == True:\n print ('na data removed')\n return dataMatrix",
"_____no_output_____"
],
[
"filterNormalize(dataMatrix = adata, classOfInterest = 'age', verbosity = True)",
"na data removed\n"
],
[
"def labelSplit (dataMatrix, classOfInterest, labelOfInterest, verbosity):\n dataMatrix = filterNormalize (dataMatrix, classOfInterest, verbosity)\n dataMatrix.obs['classification_group'] = 'B'\n dataMatrix.obs.loc[dataMatrix.obs[dataMatrix.obs[classOfInterest]==labelOfInterest]\n .index,'classification_group'] = 'A' #make labels based on A/B of\n# classofInterest\n return dataMatrix",
"_____no_output_____"
],
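[
"# Added illustration (not part of the original notebook): what labelSplit produces.\n# Every cell is tagged 'A' (the label of interest) or 'B' (everything else), turning the\n# multi-class problem into the one-vs-all problem that the random forest is trained on.\ntmp = labelSplit(adata.copy(), classOfInterest='age', labelOfInterest='3m', verbosity=False)\nprint(tmp.obs['classification_group'].value_counts())",
"_____no_output_____"
],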
[
"def downsampleToSmallestCategory(dataMatrix, random_state, min_cells, \n keep_small_categories, verbosity,\n classOfInterest = 'classification_group'\n) -> sc.AnnData:\n \"\"\"\n returns an annData object in which all categories in 'classOfInterest' have\n the same size\n classOfInterest\n column with the categories to downsample\n min_cells\n Minimum number of cells to downsample.\n Categories having less than `min_cells` are discarded unless\n keep_small_categories is True\n keep_small_categories\n Be default categories with less than min_cells are discarded.\n Set to true to keep them\n \"\"\"\n counts = dataMatrix.obs[classOfInterest].value_counts(sort=False)\n if len(counts[counts < min_cells]) > 0 and keep_small_categories is False:\n logg.warning(\n \"The following categories have less than {} cells and will be \"\n \"ignored: {}\".format(min_cells, dict(counts[counts < min_cells]))\n )\n min_size = min(counts[counts >= min_cells])\n sample_selection = None\n for sample, num_cells in counts.items():\n if num_cells <= min_cells:\n if keep_small_categories:\n sel = dataMatrix.obs.index.isin(\n dataMatrix.obs[dataMatrix.obs[classOfInterest] == sample].index)\n else:\n continue\n else:\n sel = dataMatrix.obs.index.isin(\n dataMatrix.obs[dataMatrix.obs[classOfInterest] == sample]\n .sample(min_size, random_state=random_state)\n .index\n )\n if sample_selection is None:\n sample_selection = sel\n else:\n sample_selection |= sel\n logg.info(\n \"The cells in category {!r} had been down-sampled to have each {} cells. \"\n \"The original counts where {}\".format(classOfInterest, min_size, dict(counts))\n )\n return dataMatrix[sample_selection].copy()",
"_____no_output_____"
],
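[
"# A minimal usage sketch (editor addition, not part of the original analysis): calling\n# downsampleToSmallestCategory on a tiny, made-up AnnData object just to show the interface.\nimport anndata\n\ntoy = anndata.AnnData(\n    X = np.random.rand(6, 3),\n    obs = pd.DataFrame({'classification_group': ['A', 'A', 'B', 'B', 'B', 'B']},\n                       index = ['cell' + str(i) for i in range(6)]))\n\nbalanced = downsampleToSmallestCategory(toy, random_state = 0, min_cells = 1,\n                                        keep_small_categories = True, verbosity = True,\n                                        classOfInterest = 'classification_group')\nprint(balanced.obs['classification_group'].value_counts())",
"_____no_output_____"
],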
[
"def makeOneForest (dataMatrix, classOfInterest, labelOfInterest, nEstimators, \n randomState, min_cells, keep_small_categories,\n nJobs, oobScore, Step, Cv, verbosity): \n\n \"\"\"\n Builds and runs a random forest for one label in a class of interest\n \n Parameters\n ----------\n dataMatrix : anndata object\n The data file of interest\n classOfInterest : str\n The class you will split the data by in the set of dataMatrix.obs\n labelOfInterest : str\n The specific label within the class that the random forezt will run a \n \"one vs all\" classification on\n nEstimators : int\n The number of trees in the forest\n randomState : int\n Controls random number being used\n nJobs : int\n The number of jobs to run in parallel\n oobScore : bool\n Whether to use out-of-bag samples to estimate the generalization accuracy\n Step : float\n Corresponds to percentage of features to remove at each iteration\n Cv : int\n Determines the cross-validation splitting strategy\n \n Returns\n -------\n feature_selected : list\n list of top features from random forest\n selector.estimator_.feature_importances_ : list\n list of top ginis corresponding to to features\n \n \"\"\"\n splitDataMatrix = labelSplit (dataMatrix, classOfInterest, labelOfInterest, verbosity)\n \n downsampledMatrix = downsampleToSmallestCategory (dataMatrix = splitDataMatrix, \n random_state = randomState, min_cells = min_cells, \n keep_small_categories = keep_small_categories, verbosity = verbosity,\n classOfInterest = 'classification_group', )\n\n feat_labels = downsampledMatrix.var_names \n X = downsampledMatrix.X\n y = downsampledMatrix.obs['classification_group'] #'A' or 'B' labels from labelSplit\n \n clf = RandomForestClassifier(n_estimators = nEstimators, random_state = randomState, \n n_jobs = nJobs, oob_score = oobScore)\n \n selector = RFECV(clf, step = Step, cv = Cv)\n \n clf.fit(X, y)\n selector.fit(X, y)\n feature_selected = feat_labels[selector.support_] \n dataMatrix.obs['classification_group'] = 'B'\n\n return feature_selected, selector.estimator_.feature_importances_",
"_____no_output_____"
],
[
"def resultWrite (classOfInterest, results_df, labelOfInterest,\n feature_selected, feature_importance):\n\n column_headings = [] \n column_headings.append(labelOfInterest)\n column_headings.append(labelOfInterest + '_gini')\n resaux = pd.DataFrame(columns = column_headings)\n resaux[labelOfInterest] = feature_selected\n resaux[labelOfInterest + '_gini'] = feature_importance\n resaux = resaux.sort_values(by = [labelOfInterest + '_gini'], ascending = False)\n resaux.reset_index(drop = True, inplace = True)\n\n results_df = pd.concat([results_df, resaux], axis=1)\n return results_df ",
"_____no_output_____"
],
[
"def scRFE (adata, classOfInterest, nEstimators = 5000, randomState = 0, min_cells = 15,\n keep_small_categories = True, nJobs = -1, oobScore = True, Step = 0.2, Cv = 5, \n verbosity = True):\n\n \"\"\"\n Builds and runs a random forest with one vs all classification for each label \n for one class of interest\n \n Parameters\n ----------\n adata : anndata object\n The data file of interest\n classOfInterest : str\n The class you will split the data by in the set of dataMatrix.obs\n nEstimators : int\n The number of trees in the forest\n randomState : int\n Controls random number being used\n min_cells : int\n Minimum number of cells in a given class to downsample.\n keep_small_categories : bool \n Whether to keep classes with small number of observations, or to remove. \n nJobs : int\n The number of jobs to run in parallel\n oobScore : bool\n Whether to use out-of-bag samples to estimate the generalization accuracy\n Step : float\n Corresponds to percentage of features to remove at each iteration\n Cv : int\n Determines the cross-validation splitting strategy\n \n Returns\n -------\n results_df : pd.DataFrame\n Dataframe with results for each label in the class, formatted as \n \"label\" for one column, then \"label + gini\" for the corresponding column\n \n \"\"\"\n dataMatrix = adata.copy()\n dataMatrix = columnToString (dataMatrix)\n dataMatrix = filterNormalize (dataMatrix, classOfInterest, verbosity)\n results_df = pd.DataFrame()\n \n for labelOfInterest in np.unique(dataMatrix.obs[classOfInterest]): \n dataMatrix_labelOfInterest = dataMatrix.copy()\n \n feature_selected, feature_importance = makeOneForest(\n dataMatrix = dataMatrix_labelOfInterest, classOfInterest = classOfInterest,\n labelOfInterest = labelOfInterest,\n nEstimators = nEstimators, randomState = randomState, min_cells = min_cells, \n keep_small_categories = keep_small_categories, nJobs = nJobs, \n oobScore = oobScore, Step = Step, Cv = Cv, verbosity=verbosity) \n \n results_df = resultWrite (classOfInterest, results_df, \n labelOfInterest = labelOfInterest, \n feature_selected = feature_selected, \n feature_importance = feature_importance)\n \n\n return results_df",
"_____no_output_____"
],
[
"adata = read_h5ad('/Users/madelinepark/Downloads/Liver_droplet.h5ad')",
"_____no_output_____"
],
[
"scRFE (adata, classOfInterest = 'age', nEstimators = 10, Cv = 3)",
"_____no_output_____"
],
[
"import logging\nlogging.info('%s before you %s', 'Look', 'leap!')",
"_____no_output_____"
],
[
"def logprint (verbosity):\n if verbosity == True:\n print('hi')",
"_____no_output_____"
],
[
"logprint(verbosity=True)",
"hi\n"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4addc2441a0145601d8752a4a908f8d7e4c8507b
| 204,323 |
ipynb
|
Jupyter Notebook
|
notebooks/sirt6/testing/20191218_CMH_extract_loc_data.ipynb
|
ixianid/cell_counting
|
d0af45f8e516f57a80702e956af41fdd225cef67
|
[
"MIT"
] | 1 |
2021-10-13T04:34:34.000Z
|
2021-10-13T04:34:34.000Z
|
notebooks/sirt6/testing/20191218_CMH_extract_loc_data.ipynb
|
ixianid/cell_counting
|
d0af45f8e516f57a80702e956af41fdd225cef67
|
[
"MIT"
] | 6 |
2020-03-24T18:10:59.000Z
|
2021-09-08T01:34:35.000Z
|
notebooks/sirt6/testing/20191218_CMH_extract_loc_data.ipynb
|
CamHolman/cell_counting
|
d0af45f8e516f57a80702e956af41fdd225cef67
|
[
"MIT"
] | null | null | null | 357.208042 | 61,207 | 0.64099 |
[
[
[
"import pickle\n\nPIK = 'data/sirt6/final/20191217_m87e_counts.pkl'\nwith open(PIK, 'rb') as f:\n m87e_clobs = pickle.load(f)\n\n",
"_____no_output_____"
],
[
"m87e_clobs",
"_____no_output_____"
],
[
"import pandas as pd\ndef extract_panda(clob_list):\n dictlist = []\n for i in range(len(clob_list)):\n dictlist += [clob_list[i].to_dict()]\n DF = pd.DataFrame(dictlist)\n return DF",
"_____no_output_____"
],
[
"m87e_clobs[1].to_dict()",
"_____no_output_____"
],
[
"m87df = extract_panda(m87e_clobs)\nm87df",
"_____no_output_____"
],
[
"check = ['ctx', 'hip']\n\nfor idx, row in m87df.iterrows():\n for c in check:\n if c in row['name']:\n m87df.ix[idx, 'brain_loc'] = c\n\n\n",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:6: FutureWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated\n \n"
],
[
"m87df",
"_____no_output_____"
],
[
"tn = m87df.name[20]\ntn",
"_____no_output_____"
],
[
"'ctx' in tn",
"_____no_output_____"
],
[
"m87df.brain_loc[20]",
"_____no_output_____"
],
[
"ctx_m87df = m87df[m87df.brain_loc == 'ctx']\nhip_m87df = m87df[m87df.brain_loc == 'hip']",
"_____no_output_____"
],
[
"ctx_m87df",
"_____no_output_____"
],
[
"ctx_m87_mean_cells_per_um = ctx_m87df.cells_per_area.mean()\nhip_m87_mean_cells_per_um = hip_m87df.cells_per_area.mean()\n",
"_____no_output_____"
],
[
"print('87E CTX mean cells per um:', ctx_m87_mean_cells_per_um) \nprint('87E HIP mean cells per um:', hip_m87_mean_cells_per_um) ",
"87E CTX mean cells per um: 0.0003647294952478124\n87E HIP mean cells per um: 0.00034590722965173215\n"
],
[
"ctx_m87_mean_cells_per_mm2 = ctx_m87_mean_cells_per_um * 10e5\nhip_m87_mean_cells_per_mm2 = hip_m87_mean_cells_per_um * 10e5\n\nprint('87E CTX mean cells per mm2:', ctx_m87_mean_cells_per_mm2) \nprint('87E HIP mean cells per mm2:', hip_m87_mean_cells_per_mm2)",
"87E CTX mean cells per mm2: 364.7294952478124\n87E HIP mean cells per mm2: 345.90722965173217\n"
],
[
"PIK = 'data/sirt6/final/20191217_m91e_counts.pik'\nwith open(PIK, 'rb') as f:\n m91e_clobs = pickle.load(f)",
"_____no_output_____"
],
[
"m91df = extract_panda(m91e_clobs)\nm91df\n",
"_____no_output_____"
],
[
"m91e_clobs[1].to_dict()",
"_____no_output_____"
],
[
"check = ['Ctx', 'Hip']\nfor idx, row in m91df.iterrows():\n for c in check:\n if c in row['name']:\n m91df.ix[idx, 'brain_loc'] = c\n\nm91df\n",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:5: FutureWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#ix-indexer-is-deprecated\n \"\"\"\n"
],
[
"ctx_m91df = m91df[m91df.brain_loc == 'Ctx']\nhip_m91df = m91df[m91df.brain_loc == 'Hip']\n",
"_____no_output_____"
],
[
"\nctx_m91_mean_cells_per_um = ctx_m91df.cells_per_area.mean()\nhip_m91_mean_cells_per_um = hip_m91df.cells_per_area.mean()\nprint('91E CTX mean cells per um:', ctx_m91_mean_cells_per_um) \nprint('91E HIP mean cells per um:', hip_m91_mean_cells_per_um) \n\nctx_m91_mean_cells_per_mm2 = ctx_m91_mean_cells_per_um * 10e5\nhip_m91_mean_cells_per_mm2 = hip_m91_mean_cells_per_um * 10e5\nprint('91E CTX mean cells per mm2:', ctx_m91_mean_cells_per_mm2) \nprint('91E HIP mean cells per mm2:', hip_m91_mean_cells_per_mm2)",
"91E CTX mean cells per um: 0.00033155666858844675\n91E HIP mean cells per um: 0.00032646992219465237\n91E CTX mean cells per mm2: 331.55666858844677\n91E HIP mean cells per mm2: 326.4699221946524\n"
],
[
"print('87E CTX mean cells per mm2:', ctx_m87_mean_cells_per_mm2) \nprint('87E HIP mean cells per mm2:', hip_m87_mean_cells_per_mm2)\n\nprint('91E CTX mean cells per mm2:', ctx_m91_mean_cells_per_mm2) \nprint('91E HIP mean cells per mm2:', hip_m91_mean_cells_per_mm2)",
"87E CTX mean cells per mm2: 3647.294952478124\n87E HIP mean cells per mm2: 3459.0722965173213\n91E CTX mean cells per mm2: 3315.5666858844675\n91E HIP mean cells per mm2: 3264.6992219465237\n"
],
[
"ctx_m87_sd = ctx_m87df.cells_per_area.std() * 10e5\nhip_m87_sd = hip_m87df.cells_per_area.std() * 10e5\n\nctx_m91_sd = ctx_m91df.cells_per_area.std() * 10e5\nhip_m91_sd = hip_m91df.cells_per_area.std() * 10e5",
"_____no_output_____"
],
[
"print('87E CTX mean cells per mm2:', ctx_m87_mean_cells_per_mm2) \nprint('87E CTX std cells per mm2: ', ctx_m87_sd)\nprint('')\n\nprint('91E CTX mean cells per mm2:', ctx_m91_mean_cells_per_mm2)\nprint('91E CTX std cells per mm2: ', ctx_m91_sd)\nprint('')\nprint('')\n\nprint('87E HIP mean cells per mm2:', hip_m87_mean_cells_per_mm2)\nprint('87E HIP std cells per mm2: ', hip_m87_sd)\nprint('')\n \nprint('91E HIP mean cells per mm2:', hip_m91_mean_cells_per_mm2)\nprint('91E HIP std cells per mm2: ', hip_m91_sd)\nprint('')",
"87E CTX mean cells per mm2: 364.7294952478124\n87E CTX std cells per mm2: 110.18854425220567\n\n91E CTX mean cells per mm2: 331.55666858844677\n91E CTX std cells per mm2: 101.1628497260086\n\n\n87E HIP mean cells per mm2: 345.90722965173217\n87E HIP std cells per mm2: 125.34229934488823\n\n91E HIP mean cells per mm2: 326.4699221946524\n91E HIP std cells per mm2: 96.9411154492071\n\n"
],
[
"m87e_ctx = ctx_m87df.cells_per_area * 10e5\nm87e_hip = hip_m87df.cells_per_area * 10e5\n\n\nm91e_ctx = ctx_m91df.cells_per_area * 10e5\nm91e_hip = hip_m91df.cells_per_area * 10e5\n\ncombined = pd.DataFrame(m87e_ctx, m87e_hip, m91e_ctx, m91e_hip)",
"_____no_output_____"
],
[
"combined",
"_____no_output_____"
],
[
"m91e_ctx.plot.box()\nm91e_hip.plot.box()",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\ndata_to_plot = [m87e_ctx, m87e_hip, m91e_ctx, m91e_hip]\n\n# Create a figure instance\nfig = plt.figure(1, figsize=(9, 6))\n\n# Create an axes instance\nax = fig.add_subplot(111)\n\n# Create the boxplot\nbp = ax.boxplot(data_to_plot)\n\n# Save the figure\nfig.savefig('fig1.png', bbox_inches='tight')\n\nax.set_xticklabels(['Control Cortex', 'Control Hip', 'S6cKO Cortex', 'S6cKO Hip'])\nax.set_ylabel('Average Cell Count per mm^2') \nax.set_title('Average Cell Counts Per mm^2 in Sirt6cKO vs Littermate Control', fontsize = 12, fontweight = 'bold')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4addd7613371a3fd8880644cbf7583205e8c590a
| 80,554 |
ipynb
|
Jupyter Notebook
|
exploration/scripts/scripts/main_PA1673_gradient.ipynb
|
greenelab/Pseudomonas_latent_spaces
|
0d78dc927a246c49f631abeddc0b952add4c6d0c
|
[
"BSD-3-Clause"
] | null | null | null |
exploration/scripts/scripts/main_PA1673_gradient.ipynb
|
greenelab/Pseudomonas_latent_spaces
|
0d78dc927a246c49f631abeddc0b952add4c6d0c
|
[
"BSD-3-Clause"
] | 12 |
2018-07-02T19:35:31.000Z
|
2019-03-09T00:24:09.000Z
|
exploration/scripts/scripts/main_PA1673_gradient.ipynb
|
greenelab/Pseudomonas_latent_spaces
|
0d78dc927a246c49f631abeddc0b952add4c6d0c
|
[
"BSD-3-Clause"
] | 1 |
2018-06-25T14:21:51.000Z
|
2018-06-25T14:21:51.000Z
| 126.458399 | 22,700 | 0.752588 |
[
[
[
"#-------------------------------------------------------------------------------------------------------------------------------\n# By Alexandra Lee\n# (updated October 2018)\n# \n# Main\n#\n# Dataset: Pseudomonas aeruginosa gene expression from compendium \n# referenced in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5069748/\n# \n# Condition: expression of PA1673 gene\n#\n# Task: To predict the expression of other (non-PA1673) genes by:\n# 1. Define offset vector = avg(expression of genes corresponding to high levels of PA1673) \n# - avg(expression of genes corresponding to low levels of PA1673)\n# 2. scale factor = how far along the gradient of low-high PA1673 expression\n# 3. prediction = baseline expression + scale factor * offset vector \n#-------------------------------------------------------------------------------------------------------------------------------\nimport os\nimport pandas as pd\nimport numpy as np\n\nfrom functions import generate_input, vae, def_offset, interpolate, plot\n\nrandomState = 123\nfrom numpy.random import seed\nseed(randomState)",
"Using TensorFlow backend.\n"
],
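[
"# Editor's sketch (added for illustration; not part of the original pipeline). A toy numpy\n# version of the offset-vector idea described in the header comment above. All numbers and\n# sample groupings here are made up.\nexpr = np.random.rand(6, 4)          # fake expression matrix: samples x genes\nhigh_idx = [0, 1, 2]                 # samples with high PA1673 expression (hypothetical)\nlow_idx = [3, 4, 5]                  # samples with low PA1673 expression (hypothetical)\n\n# 1. offset vector = avg(high-PA1673 samples) - avg(low-PA1673 samples)\noffset = expr[high_idx].mean(axis=0) - expr[low_idx].mean(axis=0)\n\n# 2. scale factor = how far along the low-to-high PA1673 gradient a sample sits (0..1)\nscale = 0.5\n\n# 3. prediction = baseline expression + scale factor * offset vector\nbaseline = expr[low_idx].mean(axis=0)\nprediction = baseline + scale * offset\nprint(prediction)",
"_____no_output_____"
],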
[
"# Name of analysis\nanalysis_name = 'PA1673_gradient_test'\n\n# Create list of base directories\nbase_dirs = [os.path.join(os.path.dirname(os.getcwd()), 'data'),\n os.path.join(os.path.dirname(os.getcwd()), 'encoded'),\n os.path.join(os.path.dirname(os.getcwd()), 'models'),\n os.path.join(os.path.dirname(os.getcwd()), 'output'),\n os.path.join(os.path.dirname(os.getcwd()), 'stats'),\n os.path.join(os.path.dirname(os.getcwd()), 'viz') \n]\n\n# Check if directory exist otherwise create\nfor each_dir in base_dirs:\n analysis_dir = os.path.join(each_dir, analysis_name)\n if os.path.exists(analysis_dir):\n print('directory already exists: {}'.format(analysis_dir))\n else:\n os.mkdir(analysis_dir)\n print('creating new directory: {}'.format(analysis_dir))",
"creating new directory: /home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/data/PA1673_gradient_test\ncreating new directory: /home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/encoded/PA1673_gradient_test\ncreating new directory: /home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/models/PA1673_gradient_test\ncreating new directory: /home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/output/PA1673_gradient_test\ncreating new directory: /home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/stats/PA1673_gradient_test\ncreating new directory: /home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/viz/PA1673_gradient_test\n"
],
[
"# Pre-process input\ndata_dir = os.path.join(base_dirs[0], analysis_name)\ngenerate_input.generate_input_PA1673_gradient(data_dir)\n\n# Run Tybalt\nlearning_rate = 0.001\nbatch_size = 100\nepochs = 200\nkappa = 0.01\nintermediate_dim = 100\nlatent_dim = 10\nepsilon_std = 1.0\n\nbase_dir = os.path.dirname(os.getcwd())\nvae.tybalt_2layer_model(learning_rate, batch_size, epochs, kappa, intermediate_dim, latent_dim, epsilon_std, base_dir, analysis_name)\n\n\n# Define offset vectors in gene space and latent space\ndata_dir = os.path.join(base_dirs[0], analysis_name)\ntarget_gene = \"PA1673\"\npercent_low = 5\npercent_high = 95\n\ndef_offset.gene_space_offset(data_dir, target_gene, percent_low, percent_high)\n\nmodel_dir = os.path.join(base_dirs[2], analysis_name)\nencoded_dir = os.path.join(base_dirs[1], analysis_name)\n\ndef_offset.latent_space_offset(data_dir, model_dir, encoded_dir, target_gene, percent_low, percent_high)\n\n\n# Predict gene expression using offset in gene space and latent space\nout_dir = os.path.join(base_dirs[3], analysis_name)\n\ninterpolate.interpolate_in_gene_space(data_dir, target_gene, out_dir, percent_low, percent_high)\ninterpolate.interpolate_in_latent_space(data_dir, model_dir, encoded_dir, target_gene, out_dir, percent_low, percent_high)\n\n\n# Plot prediction per sample along gradient of PA1673 expression\nviz_dir = os.path.join(base_dirs[5], analysis_name)\nplot.plot_corr_gradient(out_dir, viz_dir)",
"/home/alexandra/Documents/Repos/Pseudomonas_latent_spaces/scripts/functions/vae.py:259: UserWarning: Output \"custom_variational_layer_1\" missing from loss dictionary. We assume this was done on purpose, and we will not be expecting any data to be passed to \"custom_variational_layer_1\" during training.\n vae.compile(optimizer=adam, loss=None, loss_weights=[beta])\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4addde1192ee6bd1a5046680fd138d3aaba7018d
| 59,314 |
ipynb
|
Jupyter Notebook
|
courses/machine_learning/deepdive2/art_and_science_of_ml/solutions/hyperparameter_tuning.ipynb
|
Glairly/introduction_to_tensorflow
|
aa0a44d9c428a6eb86d1f79d73f54c0861b6358d
|
[
"Apache-2.0"
] | 2 |
2022-01-06T11:52:57.000Z
|
2022-01-09T01:53:56.000Z
|
courses/machine_learning/deepdive2/art_and_science_of_ml/solutions/hyperparameter_tuning.ipynb
|
Glairly/introduction_to_tensorflow
|
aa0a44d9c428a6eb86d1f79d73f54c0861b6358d
|
[
"Apache-2.0"
] | null | null | null |
courses/machine_learning/deepdive2/art_and_science_of_ml/solutions/hyperparameter_tuning.ipynb
|
Glairly/introduction_to_tensorflow
|
aa0a44d9c428a6eb86d1f79d73f54c0861b6358d
|
[
"Apache-2.0"
] | null | null | null | 44.198212 | 554 | 0.593823 |
[
[
[
"# Performing the Hyperparameter tuning\n\n**Learning Objectives**\n1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs\n2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job\n3. Submit a hyperparameter tuning job to Cloud AI Platform\n\n## Introduction\n\nLet's see if we can improve upon that by tuning our hyperparameters.\n\nHyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training. \n\nThese include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.\n\nHere are the four most common ways to finding the ideal hyperparameters:\n1. Manual\n2. Grid Search\n3. Random Search\n4. Bayesian Optimzation\n\n**1. Manual**\n\nTraditionally, hyperparameter tuning is a manual trial and error process. A data scientist has some intution about suitable hyperparameters which they use as a starting point, then they observe the result and use that information to try a new set of hyperparameters to try to beat the existing performance. \n\nPros\n- Educational, builds up your intuition as a data scientist\n- Inexpensive because only one trial is conducted at a time\n\nCons\n- Requires alot of time and patience\n\n**2. Grid Search**\n\nOn the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter then try every possible combination. \n\nPros\n- Can run hundreds of trials in parallel using the cloud\n- Gauranteed to find the best solution within the search space\n\nCons\n- Expensive\n\n**3. Random Search**\n\nAlternatively define a range for each hyperparamter (e.g. 0-256) and sample uniformly at random from that range. \n\nPros\n- Can run hundreds of trials in parallel using the cloud\n- Requires less trials than Grid Search to find a good solution\n\nCons\n- Expensive (but less so than Grid Search)\n\n**4. Bayesian Optimization**\n\nUnlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done is beyond the scope of this notebook, but if you're interested you can read how it works here [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization). \n\nPros\n- Picks values intelligenty based on results from past trials\n- Less expensive because requires fewer trials to get a good result\n\nCons\n- Requires sequential trials for best results, takes longer\n\n**AI Platform HyperTune**\n\nAI Platform HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/ml-engine/docs/tensorflow/hyperparameter-tuning-overview#search_algorithms) Grid Search and Random Search. \n\n\nWhen tuning just a few hyperparameters (say less than 4), Grid Search and Random Search work well, but when tunining several hyperparameters and the search space is large Bayesian Optimization is best.",
"_____no_output_____"
]
],
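[
[
"# Editor's sketch (added for illustration; not part of the original lab): contrasting grid\n# search and random search on a toy objective. `toy_rmse` is a hypothetical stand-in for a\n# real training run, not an AI Platform API.\nimport itertools\nimport random\n\ndef toy_rmse(lr, batch_size):\n    # Pretend smaller is better, with an optimum near lr=0.01 and batch_size=30\n    return (lr - 0.01) ** 2 + abs(batch_size - 30) / 1000.0\n\n# Grid search: evaluate every combination of a discrete grid\ngrid = list(itertools.product([0.0001, 0.001, 0.01, 0.1], [15, 30, 50]))\nbest_grid = min(grid, key=lambda params: toy_rmse(*params))\n\n# Random search: sample uniformly (log-uniform for lr) for a fixed budget of trials\nrandom.seed(0)\ntrials = [(10 ** random.uniform(-4, -1), random.choice([15, 30, 50])) for _ in range(8)]\nbest_random = min(trials, key=lambda params: toy_rmse(*params))\n\nprint('grid best:', best_grid)\nprint('random best:', best_random)",
"_____no_output_____"
]
],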
[
[
"# Use the chown command to change the ownership of the repository\n!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst",
"_____no_output_____"
],
[
"# Installing the latest version of the package\n!pip install --user google-cloud-bigquery==1.25.0",
"Collecting google-cloud-bigquery==1.25.0\nDownloading https://files.pythonhosted.org/packages/48/6d/e8f5e5cd05ee968682d389cec3fdbccb920f1f8302464a46ef87b7b8fdad/google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169kB)\n|████████████████████████████████| 174kB 3.2MB/s eta 0:00:01\nRequirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.2.0)\nRequirement already satisfied: google-resumable-media<0.6dev,>=0.5.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (0.5.0)\nRequirement already satisfied: google-auth<2.0dev,>=1.9.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.10.1)\nRequirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (3.11.2)\nRequirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.16.0)\nRequirement already satisfied: six<2.0.0dev,>=1.13.0 in /usr/local/lib/python3.5/dist-packages (from google-cloud-bigquery==1.25.0) (1.14.0)\nRequirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (45.0.0)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.0.0)\nRequirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.0)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.5/dist-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)\nRequirement already satisfied: pytz in /usr/local/lib/python3.5/dist-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2019.3)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.5/dist-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.22.0)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.5/dist-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)\nRequirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.5/dist-packages (from rsa<4.1,>=3.1.4->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.24.2)\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.8)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.5/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2019.11.28)\nInstalling collected packages: google-cloud-bigquery\nSuccessfully installed google-cloud-bigquery-1.25.0\nWARNING: You are using pip version 19.3.1; however, version 20.2.3 is 
available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\n"
]
],
[
[
"**Note**: Restart your kernel to use updated packages.",
"_____no_output_____"
],
[
"Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.",
"_____no_output_____"
]
],
[
[
"# Importing the necessary module\nimport os\n\nfrom google.cloud import bigquery",
"_____no_output_____"
],
[
"# Change with your own bucket and project below:\nBUCKET = \"<BUCKET>\"\nPROJECT = \"<PROJECT>\"\nREGION = \"<YOUR REGION>\"\n\nOUTDIR = \"gs://{bucket}/taxifare/data\".format(bucket=BUCKET)\n\nos.environ['BUCKET'] = BUCKET\nos.environ['OUTDIR'] = OUTDIR\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\nos.environ['TFVERSION'] = \"2.6\"",
"_____no_output_____"
],
[
"%%bash\n# Setting up cloud SDK properties\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION",
"Updated property [core/project].\nUpdated property [compute/region]."
]
],
[
[
"## Make code compatible with AI Platform Training Service\nIn order to make our code compatible with AI Platform Training Service we need to make the following changes:\n\n1. Upload data to Google Cloud Storage \n2. Move code into a trainer Python package\n4. Submit training job with `gcloud` to train on AI Platform",
"_____no_output_____"
],
[
"## Upload data to Google Cloud Storage (GCS)\n\nCloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.",
"_____no_output_____"
],
[
"## Create BigQuery tables",
"_____no_output_____"
],
[
"If you haven not already created a BigQuery dataset for our data, run the following cell:",
"_____no_output_____"
]
],
[
[
"bq = bigquery.Client(project = PROJECT)\ndataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\n\n# Creating a dataset\ntry:\n bq.create_dataset(dataset)\n print(\"Dataset created\")\nexcept:\n print(\"Dataset already exists\")",
"Dataset created\n"
]
],
[
[
"Let's create a table with 1 million examples.\n\nNote that the order of columns is exactly what was in our CSV files.",
"_____no_output_____"
]
],
[
[
"%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0",
"_____no_output_____"
]
],
[
[
"Make the validation dataset be 1/10 the size of the training dataset.",
"_____no_output_____"
]
],
[
[
"%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_valid_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0",
"_____no_output_____"
]
],
[
[
"## Export the tables as CSV files",
"_____no_output_____"
]
],
[
[
"%%bash\n\necho \"Deleting current contents of $OUTDIR\"\ngsutil -m -q rm -rf $OUTDIR\n\necho \"Extracting training data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_training_data \\\n $OUTDIR/taxi-train-*.csv\n\necho \"Extracting validation data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_valid_data \\\n $OUTDIR/taxi-valid-*.csv\n\n# List the files of the bucket\ngsutil ls -l $OUTDIR",
"Deleting current contents of gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data\nExtracting training data to gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data\nExtracting validation data to gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data\n88345235 2020-09-15T08:22:19Z gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data/taxi-train-000000000000.csv\n8725746 2020-09-15T08:22:31Z gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data/taxi-valid-000000000000.csv\nTOTAL: 2 objects, 97070981 bytes (92.57 MiB)\nCommandException: 1 files/objects could not be removed.\nWaiting on bqjob_r1e4cd03662db6875_0000017490db46b4_1 ... (23s) Current status: DONE\nWaiting on bqjob_r3625309ca3e0b342_0000017490dba856_1 ... (2s) Current status: DONE\n"
],
[
"# Here, it shows the short header for each object\n!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2",
"18,2014-05-30 15:58:00 UTC,-73.925395,40.743742,-73.861645,40.732345,2,unused\n21.7,2010-09-18 23:36:00 UTC,-73.783133,40.648572,-73.992458,40.747715,2,unused\n"
]
],
[
[
"If all ran smoothly, you should be able to list the data bucket by running the following command:",
"_____no_output_____"
]
],
[
[
"# List the files of the bucket\n!gsutil ls gs://$BUCKET/taxifare/data",
"gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data/taxi-train-000000000000.csv\ngs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/data/taxi-valid-000000000000.csv\n"
]
],
[
[
"## Move code into python package\n\nHere, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory:\n - `__init__.py`\n - `model.py`\n - `task.py`",
"_____no_output_____"
]
],
[
[
"# It will list all the files in the mentioned directory with a long listing format\n!ls -la taxifare/trainer",
"total 20\ndrwxr-xr-x 2 jupyter jupyter 4096 Sep 15 08:19 .\ndrwxr-xr-x 5 jupyter jupyter 4096 Sep 15 08:19 ..\n-rw-r--r-- 1 jupyter jupyter 0 Sep 15 08:19 __init__.py\n-rw-r--r-- 1 jupyter jupyter 7368 Sep 15 08:19 model.py\n-rw-r--r-- 1 jupyter jupyter 1750 Sep 15 08:19 task.py\n"
]
],
[
[
"To use hyperparameter tuning in your training job you must perform the following steps:\n\n 1. Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object.\n\n 2. Include the following code in your training application:\n\n - Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial.\nAdd your hyperparameter metric to the summary for your graph.\n\n - To submit a hyperparameter tuning job, we must modify `model.py` and `task.py` to expose any variables we want to tune as command line arguments.",
"_____no_output_____"
],
[
"### Modify model.py",
"_____no_output_____"
]
],
[
[
"%%writefile ./taxifare/trainer/model.py\n\n# Importing the necessary modules\nimport datetime\nimport hypertune\nimport logging\nimport os\nimport shutil\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow.keras import activations\nfrom tensorflow.keras import callbacks\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import models\n\nfrom tensorflow import feature_column as fc\n\nlogging.info(tf.version.VERSION)\n\n\nCSV_COLUMNS = [\n 'fare_amount',\n 'pickup_datetime',\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'passenger_count',\n 'key',\n]\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\n\n\n# Splits features and labels from feature dictionary\ndef features_and_labels(row_data):\n for unwanted_col in ['key']:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label\n\n\n# Loads dataset using the tf.data API from CSV files\ndef load_dataset(pattern, batch_size, num_repeat):\n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=pattern,\n batch_size=batch_size,\n column_names=CSV_COLUMNS,\n column_defaults=DEFAULTS,\n num_epochs=num_repeat,\n )\n return dataset.map(features_and_labels)\n\n\n# Prefetch overlaps the preprocessing and model execution of a training step\ndef create_train_dataset(pattern, batch_size):\n dataset = load_dataset(pattern, batch_size, num_repeat=None)\n return dataset.prefetch(1)\n\n\ndef create_eval_dataset(pattern, batch_size):\n dataset = load_dataset(pattern, batch_size, num_repeat=1)\n return dataset.prefetch(1)\n\n\n# Parse a string and return a datetime.datetime\ndef parse_datetime(s):\n if type(s) is not str:\n s = s.numpy().decode('utf-8')\n return datetime.datetime.strptime(s, \"%Y-%m-%d %H:%M:%S %Z\")\n\n\n# Here, tf.sqrt Computes element-wise square root of the input tensor\ndef euclidean(params):\n lon1, lat1, lon2, lat2 = params\n londiff = lon2 - lon1\n latdiff = lat2 - lat1\n return tf.sqrt(londiff*londiff + latdiff*latdiff)\n\n\n# Timestamp.weekday() function return the day of the week represented by the date in the given Timestamp object\ndef get_dayofweek(s):\n ts = parse_datetime(s)\n return DAYS[ts.weekday()]\n\n\n# It wraps a python function into a TensorFlow op that executes it eagerly\[email protected]\ndef dayofweek(ts_in):\n return tf.map_fn(\n lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),\n ts_in\n )\n\ndef transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):\n # Pass-through columns\n transformed = inputs.copy()\n del transformed['pickup_datetime']\n\n feature_columns = {\n colname: fc.numeric_column(colname)\n for colname in NUMERIC_COLS\n }\n\n # Scaling longitude from range [-70, -78] to [0, 1]\n for lon_col in ['pickup_longitude', 'dropoff_longitude']:\n transformed[lon_col] = layers.Lambda(\n lambda x: (x + 78)/8.0,\n name='scale_{}'.format(lon_col)\n )(inputs[lon_col])\n\n # Scaling latitude from range [37, 45] to [0, 1]\n for lat_col in ['pickup_latitude', 'dropoff_latitude']:\n transformed[lat_col] = layers.Lambda(\n lambda x: (x - 37)/8.0,\n name='scale_{}'.format(lat_col)\n )(inputs[lat_col])\n\n # Adding Euclidean dist (no need to be accurate: NN will calibrate it)\n transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([\n inputs['pickup_longitude'],\n inputs['pickup_latitude'],\n inputs['dropoff_longitude'],\n inputs['dropoff_latitude']\n ])\n 
feature_columns['euclidean'] = fc.numeric_column('euclidean')\n\n # hour of day from timestamp of form '2010-02-08 09:17:00+00:00'\n transformed['hourofday'] = layers.Lambda(\n lambda x: tf.strings.to_number(\n tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),\n name='hourofday'\n )(inputs['pickup_datetime'])\n feature_columns['hourofday'] = fc.indicator_column(\n fc.categorical_column_with_identity(\n 'hourofday', num_buckets=24))\n\n latbuckets = np.linspace(0, 1, nbuckets).tolist()\n lonbuckets = np.linspace(0, 1, nbuckets).tolist()\n b_plat = fc.bucketized_column(\n feature_columns['pickup_latitude'], latbuckets)\n b_dlat = fc.bucketized_column(\n feature_columns['dropoff_latitude'], latbuckets)\n b_plon = fc.bucketized_column(\n feature_columns['pickup_longitude'], lonbuckets)\n b_dlon = fc.bucketized_column(\n feature_columns['dropoff_longitude'], lonbuckets)\n ploc = fc.crossed_column(\n [b_plat, b_plon], nbuckets * nbuckets)\n dloc = fc.crossed_column(\n [b_dlat, b_dlon], nbuckets * nbuckets)\n pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)\n feature_columns['pickup_and_dropoff'] = fc.embedding_column(\n pd_pair, 100)\n\n return transformed, feature_columns\n\n\n# Here, tf.sqrt Computes element-wise square root of the input tensor\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model(nbuckets, nnsize, lr):\n # input layer is all float except for pickup_datetime which is a string\n STRING_COLS = ['pickup_datetime']\n NUMERIC_COLS = (\n set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)\n )\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype='float32')\n for colname in NUMERIC_COLS\n }\n inputs.update({\n colname: layers.Input(name=colname, shape=(), dtype='string')\n for colname in STRING_COLS\n })\n\n # transforms\n transformed, feature_columns = transform(\n inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)\n\n x = dnn_inputs\n for layer, nodes in enumerate(nnsize):\n x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)\n output = layers.Dense(1, name='fare')(x)\n \n model = models.Model(inputs, output)\n lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)\n model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])\n \n return model\n\n\n# Define train and evaluate method to evaluate performance of the model\ndef train_and_evaluate(hparams):\n batch_size = hparams['batch_size']\n eval_data_path = hparams['eval_data_path']\n nnsize = hparams['nnsize']\n nbuckets = hparams['nbuckets']\n lr = hparams['lr']\n num_evals = hparams['num_evals']\n num_examples_to_train_on = hparams['num_examples_to_train_on']\n output_dir = hparams['output_dir']\n train_data_path = hparams['train_data_path']\n\n if tf.io.gfile.exists(output_dir):\n tf.io.gfile.rmtree(output_dir)\n \n timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n savedmodel_dir = os.path.join(output_dir, 'savedmodel')\n model_export_path = os.path.join(savedmodel_dir, timestamp)\n checkpoint_path = os.path.join(output_dir, 'checkpoints')\n tensorboard_path = os.path.join(output_dir, 'tensorboard') \n\n dnn_model = build_dnn_model(nbuckets, nnsize, lr)\n logging.info(dnn_model.summary())\n\n trainds = create_train_dataset(train_data_path, batch_size)\n evalds = create_eval_dataset(eval_data_path, batch_size)\n\n steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)\n\n checkpoint_cb = 
callbacks.ModelCheckpoint(checkpoint_path,\n save_weights_only=True,\n verbose=1)\n\n tensorboard_cb = callbacks.TensorBoard(tensorboard_path,\n histogram_freq=1)\n\n history = dnn_model.fit(\n trainds,\n validation_data=evalds,\n epochs=num_evals,\n steps_per_epoch=max(1, steps_per_epoch),\n verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch\n callbacks=[checkpoint_cb, tensorboard_cb]\n )\n\n # Exporting the model with default serving function.\n tf.saved_model.save(dnn_model, model_export_path)\n \n # TODO 1\n hp_metric = history.history['val_rmse'][num_evals-1]\n \n # TODO 1\n hpt = hypertune.HyperTune()\n hpt.report_hyperparameter_tuning_metric(\n hyperparameter_metric_tag='rmse',\n metric_value=hp_metric,\n global_step=num_evals\n )\n\n return history\n",
"Overwriting ./taxifare/trainer/model.py\n"
]
],
[
[
"### Modify task.py",
"_____no_output_____"
]
],
[
[
"%%writefile taxifare/trainer/task.py\n# Importing the necessary module\nimport argparse\nimport json\nimport os\n\nfrom trainer import model\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--batch_size\",\n help = \"Batch size for training steps\",\n type = int,\n default = 32\n )\n parser.add_argument(\n \"--eval_data_path\",\n help = \"GCS location pattern of eval files\",\n required = True\n )\n parser.add_argument(\n \"--nnsize\",\n help = \"Hidden layer sizes (provide space-separated sizes)\",\n nargs = \"+\",\n type = int,\n default=[32, 8]\n )\n parser.add_argument(\n \"--nbuckets\",\n help = \"Number of buckets to divide lat and lon with\",\n type = int,\n default = 10\n )\n parser.add_argument(\n \"--lr\",\n help = \"learning rate for optimizer\",\n type = float,\n default = 0.001\n )\n parser.add_argument(\n \"--num_evals\",\n help = \"Number of times to evaluate model on eval data training.\",\n type = int,\n default = 5\n )\n parser.add_argument(\n \"--num_examples_to_train_on\",\n help = \"Number of examples to train on.\",\n type = int,\n default = 100\n )\n parser.add_argument(\n \"--output_dir\",\n help = \"GCS location to write checkpoints and export models\",\n required = True\n )\n parser.add_argument(\n \"--train_data_path\",\n help = \"GCS location pattern of train files containing eval URLs\",\n required = True\n )\n parser.add_argument(\n \"--job-dir\",\n help = \"this model ignores this field, but it is required by gcloud\",\n default = \"junk\"\n )\n\n args, _ = parser.parse_known_args()\n \n hparams = args.__dict__\n hparams[\"output_dir\"] = os.path.join(\n hparams[\"output_dir\"],\n json.loads(\n os.environ.get(\"TF_CONFIG\", \"{}\")\n ).get(\"task\", {}).get(\"trial\", \"\")\n )\n print(\"output_dir\", hparams[\"output_dir\"])\n model.train_and_evaluate(hparams)\n",
"Overwriting taxifare/trainer/task.py\n"
]
],
[
[
"### Create config.yaml file\n\nSpecify the hyperparameter tuning configuration for your training job\nCreate a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.\n\nIn your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named metric1:",
"_____no_output_____"
]
],
[
[
"%%writefile hptuning_config.yaml\n# Setting parameters for hptuning_config.yaml\ntrainingInput:\n scaleTier: BASIC\n hyperparameters:\n goal: MINIMIZE\n maxTrials: 10 # TODO 2\n maxParallelTrials: 2 # TODO 2\n hyperparameterMetricTag: rmse # TODO 2\n enableTrialEarlyStopping: True\n params:\n - parameterName: lr\n # TODO 2\n type: DOUBLE\n minValue: 0.0001\n maxValue: 0.1\n scaleType: UNIT_LOG_SCALE\n - parameterName: nbuckets\n # TODO 2\n type: INTEGER\n minValue: 10\n maxValue: 25\n scaleType: UNIT_LINEAR_SCALE\n - parameterName: batch_size\n # TODO 2\n type: DISCRETE\n discreteValues:\n - 15\n - 30\n - 50\n ",
"Writing hptuning_config.yaml\n"
]
],
[
[
"#### Report your hyperparameter metric to AI Platform Training\n\nThe way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.\n\nWe recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.\n\nTensorFlow with a runtime version\nIf you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary. Use one of the following functions.\n\nYou may need to install `cloudml-hypertune` on your machine to run this code locally.",
"_____no_output_____"
]
],
[
[
"# Installing the latest version of the package\n!pip install cloudml-hypertune",
"Collecting cloudml-hypertune\nDownloading https://files.pythonhosted.org/packages/84/54/142a00a29d1c51dcf8c93b305f35554c947be2faa0d55de1eabcc0a9023c/cloudml-hypertune-0.1.0.dev6.tar.gz\nBuilding wheels for collected packages: cloudml-hypertune\nRunning setup.py bdist_wheel for cloudml-hypertune ... done\nStored in directory: /root/.cache/pip/wheels/71/ac/62/80b621f3fe2994f3f367a36123d8351d75e3ea5591b4a62c85\nSuccessfully built cloudml-hypertune\nInstalling collected packages: cloudml-hypertune\nSuccessfully installed cloudml-hypertune-0.1.0.dev6\n"
]
],
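[
[
"# Minimal sketch (editor addition) of reporting the tuning metric with cloudml-hypertune.\n# This is the same call used in model.py above; the metric value here is a made-up number.\nimport hypertune\n\nhpt = hypertune.HyperTune()\nhpt.report_hyperparameter_tuning_metric(\n    hyperparameter_metric_tag='rmse',  # must match hyperparameterMetricTag in hptuning_config.yaml\n    metric_value=3.95,                 # e.g. the final validation RMSE from the training history\n    global_step=10\n)",
"_____no_output_____"
]
],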
[
[
"Kindly ignore, if you get the version warnings related to pip install command.",
"_____no_output_____"
]
],
[
[
"%%bash\n\n# Testing our training code locally\nEVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*\nTRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*\nOUTPUT_DIR=./taxifare-model\n\nrm -rf ${OUTDIR}\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare\n \npython3 -m trainer.task \\\n--eval_data_path $EVAL_DATA_PATH \\\n--output_dir $OUTPUT_DIR \\\n--train_data_path $TRAIN_DATA_PATH \\\n--batch_size 5 \\\n--num_examples_to_train_on 100 \\\n--num_evals 1 \\\n--nbuckets 10 \\\n--lr 0.001 \\\n--nnsize 32 8",
"output_dir ./taxifare-model/\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to\n==================================================================================================\ndropoff_latitude (InputLayer) [(None,)] 0\n__________________________________________________________________________________________________\ndropoff_longitude (InputLayer) [(None,)] 0\n__________________________________________________________________________________________________\npickup_longitude (InputLayer) [(None,)] 0\n__________________________________________________________________________________________________\npickup_latitude (InputLayer) [(None,)] 0\n__________________________________________________________________________________________________\npickup_datetime (InputLayer) [(None,)] 0\n__________________________________________________________________________________________________\nscale_dropoff_latitude (Lambda) (None,) 0 dropoff_latitude[0][0]\n__________________________________________________________________________________________________\nscale_dropoff_longitude (Lambda (None,) 0 dropoff_longitude[0][0]\n__________________________________________________________________________________________________\neuclidean (Lambda) (None,) 0 pickup_longitude[0][0]\npickup_latitude[0][0]\ndropoff_longitude[0][0]\ndropoff_latitude[0][0]\n__________________________________________________________________________________________________\nhourofday (Lambda) (None,) 0 pickup_datetime[0][0]\n__________________________________________________________________________________________________\npassenger_count (InputLayer) [(None,)] 0\n__________________________________________________________________________________________________\nscale_pickup_latitude (Lambda) (None,) 0 pickup_latitude[0][0]\n__________________________________________________________________________________________________\nscale_pickup_longitude (Lambda) (None,) 0 pickup_longitude[0][0]\n__________________________________________________________________________________________________\ndense_features (DenseFeatures) (None, 130) 1000000 scale_dropoff_latitude[0][0]\nscale_dropoff_longitude[0][0]\neuclidean[0][0]\nhourofday[0][0]\npassenger_count[0][0]\nscale_pickup_latitude[0][0]\nscale_pickup_longitude[0][0]\n__________________________________________________________________________________________________\nh0 (Dense) (None, 32) 4192 dense_features[0][0]\n__________________________________________________________________________________________________\nh1 (Dense) (None, 8) 264 h0[0][0]\n__________________________________________________________________________________________________\nfare (Dense) (None, 1) 9 h1[0][0]\n==================================================================================================\nTotal params: 1,004,465\nTrainable params: 1,004,465\nNon-trainable params: 0\n__________________________________________________________________________________________________\nTrain for 20 steps\nEpoch 00001: saving model to ./taxifare-model/checkpoints\n20/20 - 3s - loss: 298.1680 - rmse: 15.3373 - mse: 298.1681 - val_loss: 211.5741 - val_rmse: 13.6915 - val_mse: 212.6633\nUser settings:\nKMP_AFFINITY=granularity=fine,verbose,compact,1,0\nKMP_BLOCKTIME=0\nKMP_SETTINGS=1\nOMP_NUM_THREADS=4\nEffective 
settings:\nKMP_ABORT_DELAY=0\nKMP_ADAPTIVE_LOCK_PROPS='1,1024'\nKMP_ALIGN_ALLOC=64\nKMP_ALL_THREADPRIVATE=128\nKMP_ATOMIC_MODE=2\nKMP_BLOCKTIME=0\nKMP_CPUINFO_FILE: value is not defined\nKMP_DETERMINISTIC_REDUCTION=false\nKMP_DEVICE_THREAD_LIMIT=2147483647\nKMP_DISP_HAND_THREAD=false\nKMP_DISP_NUM_BUFFERS=7\nKMP_DUPLICATE_LIB_OK=false\nKMP_FORCE_REDUCTION: value is not defined\nKMP_FOREIGN_THREADS_THREADPRIVATE=true\nKMP_FORKJOIN_BARRIER='2,2'\nKMP_FORKJOIN_BARRIER_PATTERN='hyper,hyper'\nKMP_FORKJOIN_FRAMES=true\nKMP_FORKJOIN_FRAMES_MODE=3\nKMP_GTID_MODE=3\nKMP_HANDLE_SIGNALS=false\nKMP_HOT_TEAMS_MAX_LEVEL=1\nKMP_HOT_TEAMS_MODE=0\nKMP_INIT_AT_FORK=true\nKMP_ITT_PREPARE_DELAY=0\nKMP_LIBRARY=throughput\nKMP_LOCK_KIND=queuing\nKMP_MALLOC_POOL_INCR=1M\nKMP_MWAIT_HINTS=0\nKMP_NUM_LOCKS_IN_BLOCK=1\nKMP_PLAIN_BARRIER='2,2'\nKMP_PLAIN_BARRIER_PATTERN='hyper,hyper'\nKMP_REDUCTION_BARRIER='1,1'\nKMP_REDUCTION_BARRIER_PATTERN='hyper,hyper'\nKMP_SCHEDULE='static,balanced;guided,iterative'\nKMP_SETTINGS=true\nKMP_SPIN_BACKOFF_PARAMS='4096,100'\nKMP_STACKOFFSET=64\nKMP_STACKPAD=0\nKMP_STACKSIZE=8M\nKMP_STORAGE_MAP=false\nKMP_TASKING=2\nKMP_TASKLOOP_MIN_TASKS=0\nKMP_TASK_STEALING_CONSTRAINT=1\nKMP_TEAMS_THREAD_LIMIT=4\nKMP_TOPOLOGY_METHOD=all\nKMP_USER_LEVEL_MWAIT=false\nKMP_USE_YIELD=1\nKMP_VERSION=false\nKMP_WARNINGS=true\nOMP_AFFINITY_FORMAT='OMP: pid %P tid %i thread %n bound to OS proc set {%A}'\nOMP_ALLOCATOR=omp_default_mem_alloc\nOMP_CANCELLATION=false\nOMP_DEBUG=disabled\nOMP_DEFAULT_DEVICE=0\nOMP_DISPLAY_AFFINITY=false\nOMP_DISPLAY_ENV=false\nOMP_DYNAMIC=false\nOMP_MAX_ACTIVE_LEVELS=2147483647\nOMP_MAX_TASK_PRIORITY=0\nOMP_NESTED=false\nOMP_NUM_THREADS='4'\nOMP_PLACES: value is not defined\nOMP_PROC_BIND='intel'\nOMP_SCHEDULE='static'\nOMP_STACKSIZE=8M\nOMP_TARGET_OFFLOAD=DEFAULT\nOMP_THREAD_LIMIT=2147483647\nOMP_TOOL=enabled\nOMP_TOOL_LIBRARIES: value is not defined\nOMP_WAIT_POLICY=PASSIVE\nKMP_AFFINITY='verbose,warnings,respect,granularity=fine,compact,1,0'\n2020-09-15 08:26:46.280821: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz\n2020-09-15 08:26:46.281148: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556423524920 executing computations on platform Host. Devices:\n2020-09-15 08:26:46.281258: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\nOMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.\nOMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info\nOMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-3\nOMP: Info #156: KMP_AFFINITY: 4 available OS procs\nOMP: Info #157: KMP_AFFINITY: Uniform topology\nOMP: Info #179: KMP_AFFINITY: 1 packages x 2 cores/pkg x 2 threads/core (2 total cores)\nOMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:\nOMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0\nOMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 0 thread 1\nOMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 1 thread 0\nOMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 1 thread 1\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29724 thread 0 bound to OS proc set 0\n2020-09-15 08:26:46.281751: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 2. 
Tune using inter_op_parallelism_threads for best performance.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4276: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4276: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: IdentityCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: IdentityCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\nInstructions for updating:\nThe old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/experimental/ops/readers.py:521: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_determinstic`.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/experimental/ops/readers.py:521: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_determinstic`.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/experimental/ops/readers.py:215: shuffle_and_repeat (from tensorflow.python.data.experimental.ops.shuffle_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/experimental/ops/readers.py:215: shuffle_and_repeat (from tensorflow.python.data.experimental.ops.shuffle_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. 
Static tf.data optimizations will take care of using the fused implementation.\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29766 thread 1 bound to OS proc set 1\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29771 thread 2 bound to OS proc set 2\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29745 thread 3 bound to OS proc set 3\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29774 thread 4 bound to OS proc set 0\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29775 thread 5 bound to OS proc set 1\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29776 thread 6 bound to OS proc set 2\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29744 thread 7 bound to OS proc set 3\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29777 thread 8 bound to OS proc set 0\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29778 thread 9 bound to OS proc set 1\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29779 thread 10 bound to OS proc set 2\n2020-09-15 08:26:48.605688: I tensorflow/core/profiler/lib/profiler_session.cc:184] Profiler session started.\nOMP: Info #250: KMP_AFFINITY: pid 29724 tid 29783 thread 11 bound to OS proc set 3\n2020-09-15 08:26:49.715834: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence\n[[{{node IteratorGetNext}}]]\n2020-09-15 08:26:51.674116: W tensorflow/python/util/util.cc:299] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nWARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\n"
],
[
"ls taxifare-model/tensorboard",
"train/ validation/\n"
]
],
[
[
"The below hyperparameter training job step will take **upto 45 minutes** to complete.",
"_____no_output_____"
]
],
[
[
"%%bash\n\nPROJECT_ID=$(gcloud config list project --format \"value(core.project)\")\nBUCKET=$PROJECT_ID\nREGION=\"us-central1\"\nTFVERSION=\"2.4\"\n\n# Output directory and jobID\nOUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)\nJOBID=taxifare_$(date -u +%y%m%d_%H%M%S)\necho ${OUTDIR} ${REGION} ${JOBID}\ngsutil -m rm -rf ${OUTDIR}\n\n# Model and training hyperparameters\nBATCH_SIZE=15\nNUM_EXAMPLES_TO_TRAIN_ON=100\nNUM_EVALS=10\nNBUCKETS=10\nLR=0.001\nNNSIZE=\"32 8\"\n\n# GCS paths\nGCS_PROJECT_PATH=gs://$BUCKET/taxifare\nDATA_PATH=$GCS_PROJECT_PATH/data\nTRAIN_DATA_PATH=$DATA_PATH/taxi-train*\nEVAL_DATA_PATH=$DATA_PATH/taxi-valid*\n\n# TODO 3\ngcloud ai-platform jobs submit training $JOBID \\\n --module-name=trainer.task \\\n --package-path=taxifare/trainer \\\n --staging-bucket=gs://${BUCKET} \\\n --config=hptuning_config.yaml \\\n --python-version=3.7 \\\n --runtime-version=${TFVERSION} \\\n --region=${REGION} \\\n -- \\\n --eval_data_path $EVAL_DATA_PATH \\\n --output_dir $OUTDIR \\\n --train_data_path $TRAIN_DATA_PATH \\\n --batch_size $BATCH_SIZE \\\n --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \\\n --num_evals $NUM_EVALS \\\n --nbuckets $NBUCKETS \\\n --lr $LR \\\n --nnsize $NNSIZE ",
"gs://qwiklabs-gcp-04-dfa7f2847b66/taxifare/trained_model_200915_082706 us-central1 taxifare_200915_082706\njobId: taxifare_200915_082706\nstate: QUEUED\nCommandException: 1 files/objects could not be removed.\nJob [taxifare_200915_082706] submitted successfully.\nYour job is still active. You may view the status of your job with the command\n$ gcloud ai-platform jobs describe taxifare_200915_082706\nor continue streaming the logs with the command\n$ gcloud ai-platform jobs stream-logs taxifare_200915_082706\n"
]
],
[
[
"Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4adde61e47219dd32b804f8ac1fd3632edd44db9
| 61,309 |
ipynb
|
Jupyter Notebook
|
src/model.ipynb
|
TomMonks/treatment-centre-sim
|
f1c63424b8b0829f283eb8448ff021b639f6889f
|
[
"MIT"
] | null | null | null |
src/model.ipynb
|
TomMonks/treatment-centre-sim
|
f1c63424b8b0829f283eb8448ff021b639f6889f
|
[
"MIT"
] | null | null | null |
src/model.ipynb
|
TomMonks/treatment-centre-sim
|
f1c63424b8b0829f283eb8448ff021b639f6889f
|
[
"MIT"
] | null | null | null | 36.342027 | 329 | 0.488493 |
[
[
[
"# SimPy: Treatment Centre\n\n> **To run all code in this notebook go to menu item `Run -> Run All Cells`.**\n\n`simpy` uses process based model worldview. Given its simplicity it is a highly flexible discrete-event simulation package.\n\nOne of the benefits of a package like `simpy` is that it is written in standard python and is free and open for others to use. \n* For research this is highly beneficial:\n * models and methods tested against them can be shared without concerns for commerical licensing. \n * experimental results (either from model or method) can be recreated by other research teams.\n* The version of `simpy` in use can also be controlled. This avoids backwards compatibility problems if models are returned to after several years.\n\nHere we will take a look at code that implements a full `simpy` model including, time dependent arrivals, results collection, control of random numbers and multiple replications.\n\n> The full scope of what is possible in `simpy` it out of scope. Detailed documentation for `simpy` and additional models can be found here: https://simpy.readthedocs.io/en/latest/\n\n---",
"_____no_output_____"
],
[
"## Imports\n\nIt is recommended that you use the provided conda virtual environment `treat-sim`. \n\n>If you are running this code in **Google Colab** then `simpy` can be pip installed.",
"_____no_output_____"
]
],
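Before the full model, a minimal sketch of the process-based worldview may help. The toy example below is not part of the treatment centre model (the names are illustrative): it creates an environment, one shared resource and a few competing generator processes.

```python
import simpy

def visit(env, name, desk):
    """Toy process: queue for the desk, then spend 5 minutes with it."""
    arrive = env.now
    with desk.request() as req:
        yield req                                  # wait until the desk is free
        print(f'{name} waited {env.now - arrive:.1f} min')
        yield env.timeout(5.0)                     # service time

env = simpy.Environment()
desk = simpy.Resource(env, capacity=1)
for i in range(3):
    env.process(visit(env, f'patient {i}', desk))
env.run(until=30)
```

The treatment centre model below is built from the same ingredients: an `Environment`, `Resource` objects and generator functions that `yield` events.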
[
[
"# install simpy if running in Google Colab\nimport sys\nif 'google.colab' in sys.modules:\n !pip install simpy==4.0.1",
"_____no_output_____"
],
[
"import simpy\nsimpy.__version__",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport itertools\nimport math\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"---\n\n## FirstTreatment: A health clinic based in the US.\n\n**This example is based on exercise 13 from Nelson (2013) page 170.**\n\n> *Nelson. B.L. (2013). [Foundations and methods of stochastic simulation](https://www.amazon.co.uk/Foundations-Methods-Stochastic-Simulation-International/dp/1461461596/ref=sr_1_1?dchild=1&keywords=foundations+and+methods+of+stochastic+simulation&qid=1617050801&sr=8-1). Springer.* \n\nPatients arrive to the health clinic between 6am and 12am following a non-stationary poisson process. After 12am arriving patients are diverted elsewhere and remaining WIP is completed. On arrival, all patients quickly sign-in and are **triaged**. \n\nThe health clinic expects two types of patient arrivals: \n\n**Trauma arrivals:**\n* patients with severe illness and trauma that must first be stablised in a **trauma room**.\n* these patients then undergo **treatment** in a cubicle before being discharged.\n\n**Non-trauma arrivals**\n* patients with minor illness and no trauma go through **registration** and **examination** activities\n* a proportion of non-trauma patients require **treatment** in a cubicle before being dicharged. \n\n> In this model treatment of trauma and non-trauma patients is modelled seperately. ",
"_____no_output_____"
],
[
"## Constants and defaults for modelling **as-is**",
"_____no_output_____"
],
[
"### Distribution parameters",
"_____no_output_____"
]
],
[
[
"# sign-in/triage parameters\nDEFAULT_TRIAGE_MEAN = 3.0\n\n# registration parameters\nDEFAULT_REG_MEAN = 5.0\nDEFAULT_REG_VAR= 2.0\n\n# examination parameters\nDEFAULT_EXAM_MEAN = 16.0\nDEFAULT_EXAM_VAR = 3.0\n\n# trauma/stabilisation\nDEFAULT_TRAUMA_MEAN = 90.0\n\n# Trauma treatment\nDEFAULT_TRAUMA_TREAT_MEAN = 30.0\nDEFAULT_TRAUMA_TREAT_VAR = 4.0\n\n# Non trauma treatment\nDEFAULT_NON_TRAUMA_TREAT_MEAN = 13.3\nDEFAULT_NON_TRAUMA_TREAT_VAR = 2.0\n\n# prob patient requires treatment given trauma\nDEFAULT_NON_TRAUMA_TREAT_P = 0.60\n\n# proportion of patients triaged as trauma\nDEFAULT_PROB_TRAUMA = 0.12",
"_____no_output_____"
]
],
[
[
"### Time dependent arrival rates data\n\nThe data for arrival rates varies between clinic opening at 6am and closure at 12am.",
"_____no_output_____"
]
],
[
[
"NSPP_PATH = 'https://raw.githubusercontent.com/TomMonks/' \\\n + 'open-science-for-sim/main/src/notebooks/01_foss_sim/data/ed_arrivals.csv'\n\n# visualise\nax = pd.read_csv(NSPP_PATH).plot(y='arrival_rate', x='period', rot=45,\n kind='bar',figsize=(12,5), legend=False)\nax.set_xlabel('hour of day')\nax.set_ylabel('mean arrivals');",
"_____no_output_____"
]
],
[
[
"### Resource counts\n\n> Inter count variables representing the number of resources at each activity in the processes.",
"_____no_output_____"
]
],
[
[
"DEFAULT_N_TRIAGE = 1\nDEFAULT_N_REG = 1\nDEFAULT_N_EXAM = 3\nDEFAULT_N_TRAUMA = 2\n\n# Non-trauma cubicles\nDEFAULT_N_CUBICLES_1 = 1\n\n# trauma pathway cubicles\nDEFAULT_N_CUBICLES_2 = 1",
"_____no_output_____"
]
],
[
[
"### Simulation model run settings",
"_____no_output_____"
]
],
[
[
"# default random number SET\nDEFAULT_RNG_SET = None\nN_STREAMS = 20\n\n# default results collection period\nDEFAULT_RESULTS_COLLECTION_PERIOD = 60 * 19\n\n# number of replications.\nDEFAULT_N_REPS = 5\n\n# Show the a trace of simulated events\n# not recommended when running multiple replications\nTRACE = True",
"_____no_output_____"
]
],
[
[
"## Utility functions",
"_____no_output_____"
]
],
[
[
"def trace(msg):\n '''\n Utility function for printing a trace as the\n simulation model executes.\n Set the TRACE constant to False, to turn tracing off.\n \n Params:\n -------\n msg: str\n string to print to screen.\n '''\n if TRACE:\n print(msg)",
"_____no_output_____"
]
],
[
[
"## Distribution classes\n\nTo help with controlling sampling `numpy` distributions are packaged up into classes that allow easy control of random numbers.\n\n**Distributions included:**\n* Exponential\n* Log Normal\n* Bernoulli\n* Normal\n* Uniform",
"_____no_output_____"
]
],
[
[
"class Exponential:\n '''\n Convenience class for the exponential distribution.\n packages up distribution parameters, seed and random generator.\n '''\n def __init__(self, mean, random_seed=None):\n '''\n Constructor\n \n Params:\n ------\n mean: float\n The mean of the exponential distribution\n \n random_seed: int, optional (default=None)\n A random seed to reproduce samples. If set to none then a unique\n sample is created.\n '''\n self.rng = np.random.default_rng(seed=random_seed)\n self.mean = mean\n \n def sample(self, size=None):\n '''\n Generate a sample from the exponential distribution\n \n Params:\n -------\n size: int, optional (default=None)\n the number of samples to return. If size=None then a single\n sample is returned.\n '''\n return self.rng.exponential(self.mean, size=size)\n\n \nclass Bernoulli:\n '''\n Convenience class for the Bernoulli distribution.\n packages up distribution parameters, seed and random generator.\n '''\n def __init__(self, p, random_seed=None):\n '''\n Constructor\n \n Params:\n ------\n p: float\n probability of drawing a 1\n \n random_seed: int, optional (default=None)\n A random seed to reproduce samples. If set to none then a unique\n sample is created.\n '''\n self.rng = np.random.default_rng(seed=random_seed)\n self.p = p\n \n def sample(self, size=None):\n '''\n Generate a sample from the exponential distribution\n \n Params:\n -------\n size: int, optional (default=None)\n the number of samples to return. If size=None then a single\n sample is returned.\n '''\n return self.rng.binomial(n=1, p=self.p, size=size)\n\nclass Lognormal:\n \"\"\"\n Encapsulates a lognormal distirbution\n \"\"\"\n def __init__(self, mean, stdev, random_seed=None):\n \"\"\"\n Params:\n -------\n mean: float\n mean of the lognormal distribution\n \n stdev: float\n standard dev of the lognormal distribution\n \n random_seed: int, optional (default=None)\n Random seed to control sampling\n \"\"\"\n self.rng = np.random.default_rng(seed=random_seed)\n mu, sigma = self.normal_moments_from_lognormal(mean, stdev**2)\n self.mu = mu\n self.sigma = sigma\n \n def normal_moments_from_lognormal(self, m, v):\n '''\n Returns mu and sigma of normal distribution\n underlying a lognormal with mean m and variance v\n source: https://blogs.sas.com/content/iml/2014/06/04/simulate-lognormal\n -data-with-specified-mean-and-variance.html\n\n Params:\n -------\n m: float\n mean of lognormal distribution\n v: float\n variance of lognormal distribution\n \n Returns:\n -------\n (float, float)\n '''\n phi = math.sqrt(v + m**2)\n mu = math.log(m**2/phi)\n sigma = math.sqrt(math.log(phi**2/m**2))\n return mu, sigma\n \n def sample(self):\n \"\"\"\n Sample from the normal distribution\n \"\"\"\n return self.rng.lognormal(self.mu, self.sigma)",
"_____no_output_____"
],
[
"class Normal:\n '''\n Convenience class for the normal distribution.\n packages up distribution parameters, seed and random generator.\n '''\n def __init__(self, mean, sigma, random_seed=None):\n '''\n Constructor\n \n Params:\n ------\n mean: float\n The mean of the normal distribution\n \n sigma: float\n The stdev of the normal distribution\n \n random_seed: int, optional (default=None)\n A random seed to reproduce samples. If set to none then a unique\n sample is created.\n '''\n self.rng = np.random.default_rng(seed=random_seed)\n self.mean = mean\n self.sigma = sigma\n \n def sample(self, size=None):\n '''\n Generate a sample from the normal distribution\n \n Params:\n -------\n size: int, optional (default=None)\n the number of samples to return. If size=None then a single\n sample is returned.\n '''\n return self.rng.normal(self.mean, self.sigma, size=size)\n\n \nclass Uniform():\n '''\n Convenience class for the Uniform distribution.\n packages up distribution parameters, seed and random generator.\n '''\n def __init__(self, low, high, random_seed=None):\n '''\n Constructor\n \n Params:\n ------\n low: float\n lower range of the uniform\n \n high: float\n upper range of the uniform\n \n random_seed: int, optional (default=None)\n A random seed to reproduce samples. If set to none then a unique\n sample is created.\n '''\n self.rand = np.random.default_rng(seed=random_seed)\n self.low = low\n self.high = high\n \n def sample(self, size=None):\n '''\n Generate a sample from the uniform distribution\n \n Params:\n -------\n size: int, optional (default=None)\n the number of samples to return. If size=None then a single\n sample is returned.\n '''\n return self.rand.uniform(low=self.low, high=self.high, size=size)",
"_____no_output_____"
]
],
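As a quick check of the moment-matching used by the `Lognormal` class above, the sketch below (illustrative only, not part of the model) converts a target mean and variance into the parameters of the underlying normal distribution and confirms that a large sample approximately reproduces them.

```python
import math
import numpy as np

def normal_moments_from_lognormal(m, v):
    """Same conversion as the Lognormal class: parameters of the underlying normal."""
    phi = math.sqrt(v + m**2)
    mu = math.log(m**2 / phi)
    sigma = math.sqrt(math.log(phi**2 / m**2))
    return mu, sigma

# e.g. registration duration: mean 5.0 minutes, variance 2.0
mu, sigma = normal_moments_from_lognormal(5.0, 2.0)
sample = np.random.default_rng(42).lognormal(mu, sigma, size=200_000)
print(round(sample.mean(), 2), round(sample.var(), 2))   # approximately 5.0 and 2.0
```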
[
[
"## Model parameterisation\n\nFor convienience a container class is used to hold the large number of model parameters. The `Scenario` class includes defaults these can easily be changed and at runtime to experiments with different designs.",
"_____no_output_____"
]
],
[
[
"class Scenario:\n '''\n Container class for scenario parameters/arguments\n \n Passed to a model and its process classes\n '''\n def __init__(self, random_number_set=DEFAULT_RNG_SET):\n '''\n The init method sets up our defaults.\n \n Parameters:\n -----------\n random_number_set: int, optional (default=DEFAULT_RNG_SET)\n Set to control the initial seeds of each stream of pseudo\n random numbers used in the model.\n '''\n # sampling\n self.random_number_set = random_number_set\n self.init_sampling()\n \n # count of each type of resource\n self.init_resourse_counts()\n \n def set_random_no_set(self, random_number_set):\n '''\n Controls the random sampling \n Parameters:\n ----------\n random_number_set: int\n Used to control the set of psuedo random numbers\n used by the distributions in the simulation.\n '''\n self.random_number_set = random_number_set\n self.init_sampling()\n\n def init_resourse_counts(self):\n '''\n Init the counts of resources to default values...\n '''\n self.n_triage = DEFAULT_N_TRIAGE\n self.n_reg = DEFAULT_N_REG\n self.n_exam = DEFAULT_N_EXAM\n self.n_trauma = DEFAULT_N_TRAUMA\n \n # non-trauma (1), trauma (2) treatment cubicles\n self.n_cubicles_1 = DEFAULT_N_CUBICLES_1\n self.n_cubicles_2 = DEFAULT_N_CUBICLES_2\n\n def init_sampling(self):\n '''\n Create the distributions used by the model and initialise \n the random seeds of each.\n '''\n # create random number streams\n rng_streams = np.random.default_rng(self.random_number_set)\n self.seeds = rng_streams.integers(0, 999999999, size=N_STREAMS)\n\n # create distributions\n \n # Triage duration\n self.triage_dist = Exponential(DEFAULT_TRIAGE_MEAN, \n random_seed=self.seeds[0])\n \n # Registration duration (non-trauma only)\n self.reg_dist = Lognormal(DEFAULT_REG_MEAN, \n np.sqrt(DEFAULT_REG_VAR),\n random_seed=self.seeds[1])\n \n # Evaluation (non-trauma only)\n self.exam_dist = Normal(DEFAULT_EXAM_MEAN,\n np.sqrt(DEFAULT_EXAM_VAR),\n random_seed=self.seeds[2])\n \n # Trauma/stablisation duration (trauma only)\n self.trauma_dist = Exponential(DEFAULT_TRAUMA_MEAN, \n random_seed=self.seeds[3])\n \n # Non-trauma treatment\n self.nt_treat_dist = Lognormal(DEFAULT_NON_TRAUMA_TREAT_MEAN, \n np.sqrt(DEFAULT_NON_TRAUMA_TREAT_VAR),\n random_seed=self.seeds[4])\n \n # treatment of trauma patients\n self.treat_dist = Lognormal(DEFAULT_TRAUMA_TREAT_MEAN, \n np.sqrt(DEFAULT_TRAUMA_TREAT_VAR),\n random_seed=self.seeds[5])\n \n # probability of non-trauma patient requiring treatment\n self.nt_p_treat_dist = Bernoulli(DEFAULT_NON_TRAUMA_TREAT_P, \n random_seed=self.seeds[6])\n \n \n # probability of non-trauma versus trauma patient\n self.p_trauma_dist = Bernoulli(DEFAULT_PROB_TRAUMA, \n random_seed=self.seeds[7])\n \n # init sampling for non-stationary poisson process\n self.init_nspp()\n \n def init_nspp(self):\n \n # read arrival profile\n self.arrivals = pd.read_csv(NSPP_PATH)\n self.arrivals['mean_iat'] = 60 / self.arrivals['arrival_rate']\n \n # maximum arrival rate (smallest time between arrivals)\n self.lambda_max = self.arrivals['arrival_rate'].max()\n \n # thinning exponential\n self.arrival_dist = Exponential(60.0 / self.lambda_max,\n random_seed=self.seeds[8])\n \n # thinning uniform rng\n self.thinning_rng = Uniform(low=0.0, high=1.0, \n random_seed=self.seeds[9])",
"_____no_output_____"
]
],
[
[
"## Patient Pathways Process Logic\n\n`simpy` uses a process based worldview. We can easily create whatever logic - simple or complex for the model. Here the process logic for trauma and non-trauma patients is seperated into two classes `TraumaPathway` and `NonTraumaPathway`. ",
"_____no_output_____"
]
],
[
[
"class TraumaPathway:\n '''\n Encapsulates the process a patient with severe injuries or illness.\n \n These patients are signed into the ED and triaged as having severe injuries\n or illness.\n \n Patients are stabilised in resus (trauma) and then sent to Treatment. \n Following treatment they are discharged.\n '''\n def __init__(self, identifier, env, args):\n '''\n Constructor method\n \n Params:\n -----\n identifier: int\n a numeric identifier for the patient.\n \n env: simpy.Environment\n the simulation environment\n \n args: Scenario\n Container class for the simulation parameters\n \n '''\n self.identifier = identifier\n self.env = env\n self.args = args\n \n # metrics\n self.arrival = -np.inf\n self.wait_triage = -np.inf\n self.wait_trauma = -np.inf\n self.wait_treat = -np.inf\n self.total_time = -np.inf\n \n self.triage_duration = -np.inf\n self.trauma_duration = -np.inf\n self.treat_duration = -np.inf\n \n def execute(self):\n '''\n simulates the major treatment process for a patient\n \n 1. request and wait for sign-in/triage\n 2. trauma\n 3. treatment\n '''\n # record the time of arrival and entered the triage queue\n self.arrival = self.env.now\n\n # request sign-in/triage \n with self.args.triage.request() as req:\n yield req\n # record the waiting time for triage\n self.wait_triage = self.env.now - self.arrival \n \n trace(f'patient {self.identifier} triaged to trauma '\n f'{self.env.now:.3f}')\n \n # sample triage duration.\n self.triage_duration = args.triage_dist.sample()\n yield self.env.timeout(self.triage_duration)\n self.triage_complete()\n \n # record the time that entered the trauma queue\n start_wait = self.env.now\n \n # request trauma room \n with self.args.trauma.request() as req:\n yield req\n \n # record the waiting time for trauma\n self.wait_trauma = self.env.now - start_wait\n \n # sample stablisation duration.\n self.trauma_duration = args.trauma_dist.sample()\n yield self.env.timeout(self.trauma_duration)\n \n self.trauma_complete()\n \n # record the time that entered the treatment queue\n start_wait = self.env.now\n \n # request treatment cubicle \n with self.args.cubicle_2.request() as req:\n yield req\n \n # record the waiting time for trauma\n self.wait_treat = self.env.now - start_wait\n trace(f'treatment of patient {self.identifier} at '\n f'{self.env.now:.3f}')\n \n # sample treatment duration.\n self.treat_duration = args.trauma_dist.sample()\n yield self.env.timeout(self.treat_duration)\n \n self.treatment_complete()\n \n # total time in system\n self.total_time = self.env.now - self.arrival \n \n def triage_complete(self):\n '''\n Triage complete event\n '''\n trace(f'triage {self.identifier} complete {self.env.now:.3f}; '\n f'waiting time was {self.wait_triage:.3f}')\n \n def trauma_complete(self):\n '''\n Patient stay in trauma is complete.\n '''\n trace(f'stabilisation of patient {self.identifier} at '\n f'{self.env.now:.3f}')\n \n def treatment_complete(self):\n '''\n Treatment complete event\n '''\n trace(f'patient {self.identifier} treatment complete {self.env.now:.3f}; '\n f'waiting time was {self.wait_treat:.3f}')",
"_____no_output_____"
],
[
"class NonTraumaPathway(object):\n '''\n Encapsulates the process a patient with minor injuries and illness.\n \n These patients are signed into the ED and triaged as having minor \n complaints and streamed to registration and then examination. \n \n Post examination 40% are discharged while 60% proceed to treatment. \n Following treatment they are discharged.\n '''\n def __init__(self, identifier, env, args):\n '''\n Constructor method\n \n Params:\n -----\n identifier: int\n a numeric identifier for the patient.\n \n env: simpy.Environment\n the simulation environment\n \n args: Scenario\n Container class for the simulation parameters\n \n '''\n self.identifier = identifier\n self.env = env\n self.args = args\n \n # triage resource\n self.triage = args.triage\n \n # metrics\n self.arrival = -np.inf\n self.wait_triage = -np.inf\n self.wait_reg = -np.inf\n self.wait_exam = -np.inf\n self.wait_treat = -np.inf\n self.total_time = -np.inf\n \n self.triage_duration = -np.inf\n self.reg_duration = -np.inf\n self.exam_duration = -np.inf\n self.treat_duration = -np.inf\n \n \n def execute(self):\n '''\n simulates the non-trauma/minor treatment process for a patient\n \n 1. request and wait for sign-in/triage\n 2. patient registration\n 3. examination\n 4.1 40% discharged\n 4.2 60% treatment then discharge\n '''\n # record the time of arrival and entered the triage queue\n self.arrival = self.env.now\n\n # request sign-in/triage \n with self.triage.request() as req:\n yield req\n \n # record the waiting time for triage\n self.wait_triage = self.env.now - self.arrival\n trace(f'patient {self.identifier} triaged to minors '\n f'{self.env.now:.3f}')\n \n # sample triage duration.\n self.triage_duration = args.triage_dist.sample()\n yield self.env.timeout(self.triage_duration)\n \n trace(f'triage {self.identifier} complete {self.env.now:.3f}; '\n f'waiting time was {self.wait_triage:.3f}')\n \n # record the time that entered the registration queue\n start_wait = self.env.now\n \n # request registration clert \n with self.args.registration.request() as req:\n yield req\n \n # record the waiting time for registration\n self.wait_reg = self.env.now - start_wait\n trace(f'registration of patient {self.identifier} at '\n f'{self.env.now:.3f}')\n \n # sample registration duration.\n self.reg_duration = args.reg_dist.sample()\n yield self.env.timeout(self.reg_duration)\n \n trace(f'patient {self.identifier} registered at'\n f'{self.env.now:.3f}; '\n f'waiting time was {self.wait_reg:.3f}')\n \n # record the time that entered the evaluation queue\n start_wait = self.env.now\n \n # request examination resource\n with self.args.exam.request() as req:\n yield req\n \n # record the waiting time for registration\n self.wait_exam = self.env.now - start_wait\n trace(f'examination of patient {self.identifier} begins '\n f'{self.env.now:.3f}')\n \n # sample examination duration.\n self.exam_duration = args.exam_dist.sample()\n yield self.env.timeout(self.exam_duration)\n \n trace(f'patient {self.identifier} examination complete ' \n f'at {self.env.now:.3f};'\n f'waiting time was {self.wait_exam:.3f}')\n \n # sample if patient requires treatment?\n self.require_treat = self.args.nt_p_treat_dist.sample()\n \n if self.require_treat:\n \n # record the time that entered the treatment queue\n start_wait = self.env.now\n \n # request treatment cubicle\n with self.args.cubicle_1.request() as req:\n yield req\n\n # record the waiting time for treatment\n self.wait_treat = self.env.now - start_wait\n trace(f'treatment of 
patient {self.identifier} begins '\n f'{self.env.now:.3f}')\n\n # sample treatment duration.\n self.treat_duration = args.nt_treat_dist.sample()\n yield self.env.timeout(self.treat_duration)\n\n trace(f'patient {self.identifier} treatment complete '\n f'at {self.env.now:.3f};'\n f'waiting time was {self.wait_treat:.3f}')\n \n # total time in system\n self.total_time = self.env.now - self.arrival ",
"_____no_output_____"
]
],
[
[
"## Main model class\n\nThe main class that a user interacts with to run the model is `TreatmentCentreModel`. This implements a `.run()` method, contains a simple algorithm for the non-stationary poission process for patients arrivals and inits instances of `TraumaPathway` or `NonTraumaPathway` depending on the arrival type.",
"_____no_output_____"
]
],
[
[
"class TreatmentCentreModel:\n '''\n The treatment centre model\n \n Patients arrive at random to a treatment centre, are triaged\n and then processed in either a trauma or non-trauma pathway.\n '''\n def __init__(self, args):\n self.env = simpy.Environment()\n self.args = args\n self.init_resources()\n \n self.patients = []\n self.trauma_patients = []\n self.non_trauma_patients = []\n\n self.rc_period = None\n self.results = None\n \n def init_resources(self):\n '''\n Init the number of resources\n and store in the arguments container object\n \n Resource list:\n 1. Sign-in/triage bays\n 2. registration clerks\n 3. examination bays\n 4. trauma bays\n 5. non-trauma cubicles (1)\n 6. trauma cubicles (2)\n \n '''\n # sign/in triage\n self.args.triage = simpy.Resource(self.env, \n capacity=self.args.n_triage)\n \n # registration\n self.args.registration = simpy.Resource(self.env, \n capacity=self.args.n_reg)\n \n # examination\n self.args.exam = simpy.Resource(self.env, \n capacity=self.args.n_exam)\n \n # trauma\n self.args.trauma = simpy.Resource(self.env, \n capacity=self.args.n_trauma)\n \n # non-trauma treatment\n self.args.cubicle_1 = simpy.Resource(self.env, \n capacity=self.args.n_cubicles_1)\n \n # trauma treatment\n self.args.cubicle_2 = simpy.Resource(self.env, \n capacity=self.args.n_cubicles_2)\n \n \n \n def run(self, results_collection_period=DEFAULT_RESULTS_COLLECTION_PERIOD):\n '''\n Conduct a single run of the model in its current \n configuration\n \n \n Parameters:\n ----------\n results_collection_period, float, optional\n default = DEFAULT_RESULTS_COLLECTION_PERIOD\n \n warm_up, float, optional (default=0)\n \n length of initial transient period to truncate\n from results.\n \n Returns:\n --------\n None\n '''\n # setup the arrival generator process\n self.env.process(self.arrivals_generator())\n \n # store rc perio\n self.rc_period = results_collection_period\n \n # run\n self.env.run(until=results_collection_period)\n \n \n def arrivals_generator(self): \n ''' \n Simulate the arrival of patients to the model\n \n Patients either follow a TraumaPathway or\n NonTraumaPathway simpy process.\n \n Non stationary arrivals implemented via Thinning acceptance-rejection \n algorithm.\n '''\n for patient_count in itertools.count():\n\n # this give us the index of dataframe to use\n t = int(self.env.now // 60) % self.args.arrivals.shape[0]\n lambda_t = self.args.arrivals['arrival_rate'].iloc[t]\n\n #set to a large number so that at least 1 sample taken!\n u = np.Inf\n \n interarrival_time = 0.0\n\n # reject samples if u >= lambda_t / lambda_max\n while u >= (lambda_t / self.args.lambda_max):\n interarrival_time += self.args.arrival_dist.sample()\n u = self.args.thinning_rng.sample()\n\n # iat\n yield self.env.timeout(interarrival_time)\n \n trace(f'patient {patient_count} arrives at: {self.env.now:.3f}')\n \n # sample if the patient is trauma or non-trauma\n trauma = self.args.p_trauma_dist.sample()\n if trauma:\n # create and store a trauma patient to update KPIs.\n new_patient = TraumaPathway(patient_count, self.env, self.args)\n self.trauma_patients.append(new_patient)\n else:\n # create and store a non-trauma patient to update KPIs.\n new_patient = NonTraumaPathway(patient_count, self.env, \n self.args)\n self.non_trauma_patients.append(new_patient)\n \n # start the pathway process for the patient\n self.env.process(new_patient.execute())",
"_____no_output_____"
]
],
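The `arrivals_generator()` method above embeds the thinning acceptance-rejection step inside the model class. The same idea, stripped down to a standalone helper with a made-up hourly rate profile, looks roughly like this (an illustrative sketch, not part of the model):

```python
import numpy as np

def thinned_interarrival_time(t_now, hourly_rates, rng):
    """Sample one inter-arrival time (minutes) via thinning.

    hourly_rates: mean arrivals per hour, indexed by hour of the profile.
    """
    lambda_max = max(hourly_rates)
    lambda_t = hourly_rates[int(t_now // 60) % len(hourly_rates)]  # rate this hour
    iat, u = 0.0, np.inf
    while u >= lambda_t / lambda_max:              # reject until a sample is accepted
        iat += rng.exponential(60.0 / lambda_max)  # propose from the maximum rate
        u = rng.uniform()
    return iat

rng = np.random.default_rng(0)
profile = [5, 10, 20, 30, 20, 10]                  # made-up arrival-rate profile
print(round(thinned_interarrival_time(0.0, profile, rng), 2))
```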
[
[
"### Logic to process end of run results.\n\nthe class `SimulationSummary` accepts a `TraumaCentreModel`. At the end of a run it can be used calculate mean queuing times and the percentage of the total run that a resource was in use.",
"_____no_output_____"
]
],
[
[
"class SimulationSummary:\n '''\n End of run result processing logic of the simulation model\n '''\n def __init__(self, model):\n '''\n Constructor\n \n Params:\n ------\n model: TraumaCentreModel\n The model.\n '''\n self.model = model\n self.args = model.args\n self.results = None\n \n def process_run_results(self):\n '''\n Calculates statistics at end of run.\n '''\n self.results = {}\n # list of all patients \n patients = self.model.non_trauma_patients + self.model.trauma_patients\n \n # mean triage times (both types of patient)\n mean_triage_wait = self.get_mean_metric('wait_triage', patients)\n \n # triage utilisation (both types of patient)\n triage_util = self.get_resource_util('triage_duration', \n self.args.n_triage, \n patients)\n \n # mean waiting time for registration (non_trauma)\n mean_reg_wait = self.get_mean_metric('wait_reg', \n self.model.non_trauma_patients)\n \n # registration utilisation (trauma)\n reg_util = self.get_resource_util('reg_duration', \n self.args.n_reg, \n self.model.non_trauma_patients)\n \n # mean waiting time for examination (non_trauma)\n mean_wait_exam = self.get_mean_metric('wait_exam', \n self.model.non_trauma_patients)\n \n # examination utilisation (non-trauma)\n exam_util = self.get_resource_util('exam_duration', \n self.args.n_exam, \n self.model.non_trauma_patients)\n \n \n # mean waiting time for treatment (non-trauma) \n mean_treat_wait = self.get_mean_metric('wait_treat', \n self.model.non_trauma_patients)\n \n # treatment utilisation (non_trauma)\n treat_util1 = self.get_resource_util('treat_duration', \n self.args.n_cubicles_1, \n self.model.non_trauma_patients)\n \n # mean total time (non_trauma)\n mean_total = self.get_mean_metric('total_time', \n self.model.non_trauma_patients)\n \n # mean waiting time for trauma \n mean_trauma_wait = self.get_mean_metric('wait_trauma', \n self.model.trauma_patients)\n \n # trauma utilisation (trauma)\n trauma_util = self.get_resource_util('trauma_duration', \n self.args.n_trauma, \n self.model.trauma_patients)\n \n # mean waiting time for treatment (rauma) \n mean_treat_wait2 = self.get_mean_metric('wait_treat', \n self.model.trauma_patients)\n \n # treatment utilisation (trauma)\n treat_util2 = self.get_resource_util('treat_duration', \n self.args.n_cubicles_2, \n self.model.trauma_patients)\n\n # mean total time (trauma)\n mean_total2 = self.get_mean_metric('total_time', \n self.model.trauma_patients)\n \n \n self.results = {'00_arrivals':len(patients),\n '01a_triage_wait': mean_triage_wait,\n '01b_triage_util': triage_util,\n '02a_registration_wait':mean_reg_wait,\n '02b_registration_util': reg_util,\n '03a_examination_wait':mean_wait_exam,\n '03b_examination_util': exam_util,\n '04a_treatment_wait(non_trauma)':mean_treat_wait,\n '04b_treatment_util(non_trauma)':treat_util1,\n '05_total_time(non-trauma)':mean_total,\n '06a_trauma_wait':mean_trauma_wait,\n '06b_trauma_util':trauma_util,\n '07a_treatment_wait(trauma)':mean_treat_wait2,\n '07b_treatment_util(trauma)':treat_util2,\n '08_total_time(trauma)':mean_total2,\n '09_throughput': self.get_throughput(patients)}\n \n def get_mean_metric(self, metric, patients):\n '''\n Calculate mean of the performance measure for the\n select cohort of patients,\n \n Only calculates metrics for patients where it has been \n measured.\n \n Params:\n -------\n metric: str\n The name of the metric e.g. 
'wait_treat'\n \n patients: list\n A list of patients\n '''\n mean = np.array([getattr(p, metric) for p in patients \n if getattr(p, metric) > -np.inf]).mean()\n return mean\n \n \n def get_resource_util(self, metric, n_resources, patients):\n '''\n Calculate proportion of the results collection period\n where a resource was in use.\n \n Done by tracking the duration by patient.\n \n Only calculates metrics for patients where it has been \n measured.\n \n Params:\n -------\n metric: str\n The name of the metric e.g. 'treatment_duration'\n \n patients: list\n A list of patients\n '''\n total = np.array([getattr(p, metric) for p in patients \n if getattr(p, metric) > -np.inf]).sum() \n \n return total / (self.model.rc_period * n_resources)\n \n def get_throughput(self, patients):\n '''\n Returns the total number of patients that have successfully\n been processed and discharged in the treatment centre\n (they have a total time record)\n \n Params:\n -------\n patients: list\n list of all patient objects simulated.\n \n Returns:\n ------\n float\n '''\n return len([p for p in patients if p.total_time > -np.inf])\n \n def summary_frame(self):\n '''\n Returns run results as a pandas.DataFrame\n \n Returns:\n -------\n pd.DataFrame\n '''\n #append to results df\n if self.results is None:\n self.process_run_results()\n\n df = pd.DataFrame({'1':self.results})\n df = df.T\n df.index.name = 'rep'\n return df\n ",
"_____no_output_____"
]
],
[
[
"## Executing a model\n\nWe note that there are **many ways** to setup a `simpy` model and execute it (that is part of its fantastic flexibility). The organisation of code we show below is based on our experience of using the package in practice. The approach also allows for easy parallisation over multiple CPU cores using `joblib`.\n\nWe include two functions. `single_run()` and `multiple_replications`. The latter is used to repeatedly call and process the results from `single_run`.",
"_____no_output_____"
]
],
[
[
"def single_run(scenario, rc_period=DEFAULT_RESULTS_COLLECTION_PERIOD, \n random_no_set=DEFAULT_RNG_SET):\n '''\n Perform a single run of the model and return the results\n \n Parameters:\n -----------\n \n scenario: Scenario object\n The scenario/paramaters to run\n \n rc_period: int\n The length of the simulation run that collects results\n \n random_no_set: int or None, optional (default=DEFAULT_RNG_SET)\n Controls the set of random seeds used by the stochastic parts of the \n model. Set to different ints to get different results. Set to None\n for a random set of seeds.\n \n Returns:\n --------\n pandas.DataFrame:\n results from single run.\n ''' \n \n # set random number set - this controls sampling for the run.\n scenario.set_random_no_set(random_no_set)\n\n # create an instance of the model\n model = TreatmentCentreModel(scenario)\n\n # run the model\n model.run(results_collection_period=rc_period)\n \n # run results\n summary = SimulationSummary(model)\n summary_df = summary.summary_frame()\n \n return summary_df",
"_____no_output_____"
],
[
"def multiple_replications(scenario, rc_period=DEFAULT_RESULTS_COLLECTION_PERIOD, \n n_reps=5):\n '''\n Perform multiple replications of the model.\n \n Params:\n ------\n scenario: Scenario\n Parameters/arguments to configurethe model\n \n rc_period: float, optional (default=DEFAULT_RESULTS_COLLECTION_PERIOD)\n results collection period. \n the number of minutes to run the model to collect results\n\n n_reps: int, optional (default=DEFAULT_N_REPS)\n Number of independent replications to run.\n \n Returns:\n --------\n pandas.DataFrame\n '''\n\n results = [single_run(scenario, rc_period, random_no_set=rep) \n for rep in range(n_reps)]\n \n #format and return results in a dataframe\n df_results = pd.concat(results)\n df_results.index = np.arange(1, len(df_results)+1)\n df_results.index.name = 'rep'\n return df_results",
"_____no_output_____"
]
],
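`multiple_replications()` runs the replications sequentially. If the `joblib` parallelisation mentioned earlier is wanted, something along these lines should work (a sketch only, assuming `joblib` is installed and that `single_run` and the defaults above are in scope):

```python
from joblib import Parallel, delayed
import numpy as np
import pandas as pd

def multiple_replications_parallel(scenario,
                                   rc_period=DEFAULT_RESULTS_COLLECTION_PERIOD,
                                   n_reps=5, n_jobs=-1):
    """Run independent replications of single_run() across CPU cores."""
    results = Parallel(n_jobs=n_jobs)(
        delayed(single_run)(scenario, rc_period, random_no_set=rep)
        for rep in range(n_reps))
    df_results = pd.concat(results)
    df_results.index = np.arange(1, len(df_results) + 1)
    df_results.index.name = 'rep'
    return df_results
```

Each delayed call receives its own pickled copy of the scenario, so setting the random number set inside `single_run` does not interfere across workers.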
[
[
"### Single run of the model\n\nThe script below performs a single replication of the simulation model. \n\n**Try:**\n\n* Changing the `random_no_set` of the `single_run` call.\n* Assigning the value `True` to `TRACE`",
"_____no_output_____"
]
],
[
[
"# Change this to True to see a trace...\nTRACE = False\n\n# create the default scenario\nargs = Scenario()\n\n# use the single_run() func\n# try changing `random_no_set` to see different run results\nprint('Running simulation ...', end=' => ')\nresults = single_run(args, random_no_set=42)\nprint('simulation complete.')\n\n# show results (transpose replication for easier view)\nresults.T",
"_____no_output_____"
]
],
[
[
"### Multiple independent replications\n\nGiven the set up it is now easy to perform multiple replications of the model.\n\n**Try**:\n* Changing `n_reps`",
"_____no_output_____"
]
],
[
[
"%%time\nargs = Scenario()\n\n#run multiple replications.\n#by default it runs 5 replications.\nprint('Running multiple replications', end=' => ')\nresults = multiple_replications(args, n_reps=50)\nprint('done.\\n')\nresults.head(3)",
"_____no_output_____"
],
[
"# summarise the results (2.dp)\nresults.mean().round(2)",
"_____no_output_____"
]
],
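Beyond the replication means, a confidence interval indicates how precisely each performance measure has been estimated. A possible sketch using a standard t-based interval (assumes `scipy` is available and that `results` is the replications dataframe created above):

```python
import numpy as np
from scipy import stats

def replication_ci(series, alpha=0.05):
    """Mean and half-width of a t-based confidence interval for one measure."""
    n = len(series)
    half_width = stats.t.ppf(1 - alpha / 2, n - 1) * series.std(ddof=1) / np.sqrt(n)
    return series.mean(), half_width

mean, hw = replication_ci(results['01a_triage_wait'])
print(f'mean wait for triage: {mean:.2f} (+/- {hw:.2f})')
```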
[
[
"### Visualise replications",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(2, 1, figsize=(12,4))\nax[0].hist(results['01a_triage_wait']);\nax[0].set_ylabel('wait for triage')\nax[1].hist(results['02a_registration_wait']);\nax[1].set_ylabel('wait for registration');",
"_____no_output_____"
]
],
[
[
"## Scenario Analysis\n\nThe structured approach we took to organising our `simpy` model allows us to easily experiment with alternative scenarios. We could employ a formal experimental design if needed. For simplicity here we will limit ourselves by running user chosen competing scenarios and compare their mean performance to the base case.\n\n> Note that we have our `simpy` model includes an implementation of **Common Random Numbers** across scenarios. ",
"_____no_output_____"
]
],
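To see what Common Random Numbers means in practice: because `single_run()` seeds all sampling streams from `random_no_set`, two scenarios run with the same replication number face identical arrival and service-time draws, so differences in their outputs reflect the scenario change rather than sampling noise. An illustrative paired comparison:

```python
# identical replication number => identical random number streams in both runs
base = Scenario()
extra_triage = Scenario()
extra_triage.n_triage += 1

base_rep = single_run(base, random_no_set=1)
triage_rep = single_run(extra_triage, random_no_set=1)

# paired difference for one performance measure
print(triage_rep['01a_triage_wait'] - base_rep['01a_triage_wait'])
```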
[
[
"def get_scenarios():\n '''\n Creates a dictionary object containing\n objects of type `Scenario` to run.\n \n Returns:\n --------\n dict\n Contains the scenarios for the model\n '''\n scenarios = {}\n scenarios['base'] = Scenario()\n \n # extra triage capacity\n scenarios['triage+1'] = Scenario()\n scenarios['triage+1'].n_triage += 1\n \n # extra examination capacity\n scenarios['exam+1'] = Scenario()\n scenarios['exam+1'].n_exam += 1\n \n # extra non-trauma treatment capacity\n scenarios['treat+1'] = Scenario()\n scenarios['treat+1'].n_cubicles_1 += 1\n \n scenarios['triage+exam'] = Scenario()\n scenarios['triage+exam'].n_triage += 1\n scenarios['triage+exam'].n_exam += 1\n \n return scenarios",
"_____no_output_____"
],
[
"def run_scenario_analysis(scenarios, rc_period, n_reps):\n '''\n Run each of the scenarios for a specified results\n collection period and replications.\n \n Params:\n ------\n scenarios: dict\n dictionary of Scenario objects\n \n rc_period: float\n model run length\n \n n_rep: int\n Number of replications\n \n '''\n print('Scenario Analysis')\n print(f'No. Scenario: {len(scenarios)}')\n print(f'Replications: {n_reps}')\n\n scenario_results = {}\n for sc_name, scenario in scenarios.items():\n \n print(f'Running {sc_name}', end=' => ')\n replications = multiple_replications(scenario, rc_period=rc_period,\n n_reps=n_reps)\n print('done.\\n')\n \n #save the results\n scenario_results[sc_name] = replications\n \n print('Scenario analysis complete.')\n return scenario_results",
"_____no_output_____"
]
],
[
[
"### Script to run scenario analysis",
"_____no_output_____"
]
],
[
[
"#number of replications\nN_REPS = 20\n\n#get the scenarios\nscenarios = get_scenarios()\n\n#run the scenario analysis\nscenario_results = run_scenario_analysis(scenarios, \n DEFAULT_RESULTS_COLLECTION_PERIOD,\n N_REPS)",
"_____no_output_____"
],
[
"def scenario_summary_frame(scenario_results):\n '''\n Mean results for each performance measure by scenario\n \n Parameters:\n ----------\n scenario_results: dict\n dictionary of replications. \n Key identifies the performance measure\n \n Returns:\n -------\n pd.DataFrame\n '''\n columns = []\n summary = pd.DataFrame()\n for sc_name, replications in scenario_results.items():\n summary = pd.concat([summary, replications.mean()], axis=1)\n columns.append(sc_name)\n\n summary.columns = columns\n return summary",
"_____no_output_____"
],
[
"# as well as rounding you may want to rename the cols/rows to \n# more readable alternatives.\nsummary_frame = scenario_summary_frame(scenario_results)\nsummary_frame.round(2)",
"_____no_output_____"
]
],
[
[
"## End",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4addedad0b86599c54286a9c0b6495fe7966ab6f
| 60,395 |
ipynb
|
Jupyter Notebook
|
scripts_v1/32_Play_with_Ratings.ipynb
|
czarrar/recipe_rec
|
b72fc68d0b32a54d890af9598d1c3c226a61b004
|
[
"MIT"
] | null | null | null |
scripts_v1/32_Play_with_Ratings.ipynb
|
czarrar/recipe_rec
|
b72fc68d0b32a54d890af9598d1c3c226a61b004
|
[
"MIT"
] | null | null | null |
scripts_v1/32_Play_with_Ratings.ipynb
|
czarrar/recipe_rec
|
b72fc68d0b32a54d890af9598d1c3c226a61b004
|
[
"MIT"
] | null | null | null | 180.283582 | 18,692 | 0.896217 |
[
[
[
"# Load in packages to use throughout\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"# Read in Data",
"_____no_output_____"
]
],
[
[
"recipes = pd.read_csv('../data/30_ingredients+ave_ratings.csv').iloc[:,1:]\nrecipes.head()",
"_____no_output_____"
],
[
"recipes.",
"_____no_output_____"
]
],
[
[
"# Play with Ratings\n\nMy goal is to see what the ratings are and how they are distributed. It actually seems like all the recipes have some ratings and so you could see how well the suggested ratings from the proximity model is what \"someone\" likes, where someone is the average rating.",
"_____no_output_____"
],
[
"## Average Ratings",
"_____no_output_____"
]
],
[
[
"sns.distplot(recipes.aver_rate, axlabel=\"Average Ratings\")",
"_____no_output_____"
],
[
"sns.distplot(recipes.aver_rate, axlabel=\"Average Ratings\", hist=True, kde=False)",
"_____no_output_____"
],
[
"# Examine the distribution of ratings via this table\n# Apparent from this and above that most of the ratings are above 4\nprint('below 5: ', np.sum(recipes.aver_rate < 5))\nprint('below 4: ', np.sum(recipes.aver_rate < 4))\nprint('below 3: ', np.sum(recipes.aver_rate < 3))\nprint('below 2: ', np.sum(recipes.aver_rate < 2))\nprint('below 1: ', np.sum(recipes.aver_rate < 1))",
"below 5: 43724\nbelow 4: 7076\nbelow 3: 569\nbelow 2: 57\nbelow 1: 2\n"
]
],
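The threshold counts above can also be produced in one step with `pandas`, which is easier to extend to other bands; a possible equivalent (illustrative only):

```python
import pandas as pd

# count recipes whose average rating falls in each band: [0,1), [1,2), ..., including 5.0
bands = pd.cut(recipes.aver_rate, bins=[0, 1, 2, 3, 4, 5.01], right=False)
print(bands.value_counts().sort_index())

# cumulative counts below each integer threshold (matches the prints above)
for threshold in range(1, 6):
    print(f'below {threshold}:', (recipes.aver_rate < threshold).sum())
```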
[
[
"## Number of Ratings",
"_____no_output_____"
]
],
[
[
"sns.distplot(recipes.review_nums, axlabel=\"Number of Ratings Per Recipe\")",
"_____no_output_____"
],
[
"ax = sns.distplot(np.log(recipes.review_nums), axlabel=\"Number of Ratings on a Log Scale\")\nax.set_xticklabels(np.exp([0.0001,2,4,6,8,10]).round(0));",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4addf0d12d8f45226d4894631707e0fd7c2e3698
| 13,862 |
ipynb
|
Jupyter Notebook
|
twitter_usage/1_import_data.ipynb
|
NIU-Data-Science/open_source_info_env_public
|
9202b714c5641e4bfb2b098fae29b415296d53bf
|
[
"Apache-2.0"
] | null | null | null |
twitter_usage/1_import_data.ipynb
|
NIU-Data-Science/open_source_info_env_public
|
9202b714c5641e4bfb2b098fae29b415296d53bf
|
[
"Apache-2.0"
] | null | null | null |
twitter_usage/1_import_data.ipynb
|
NIU-Data-Science/open_source_info_env_public
|
9202b714c5641e4bfb2b098fae29b415296d53bf
|
[
"Apache-2.0"
] | null | null | null | 40.413994 | 129 | 0.543861 |
[
[
[
"# Step 1 - Read and Import the Archived Twitter Data\n\nThe first step is to read and import all the archived data. Download \"spritzer\" archived data from \nRaw data from: https://archive.org/details/twitterstream\nThe data is quite large, and is stored in directory/file format as:\n./yyyy/mm/dd/hh/{00-99}.json.bz2 \n\nSince our dataframes will overload the computer memory if we read it all in at once, we'll need\nto be careful about memory management. For example, we can read in one directory at a time,\ndiscard data that we don't want or need in foreseeable future, and save to a csv file; then\ndump or re-use memory and go again.\n\nAfter downloading hte data you want to analyze, run the portions of this file first, then garbage collect\nor refresh your kernel to free up memory. A csv from this ipynb file will be used as the basis for \nfurther analysis in parts 2 and 3.",
"_____no_output_____"
]
],
[
[
"# import necssary modules\nimport pandas as pd \nimport csv\nimport json\nimport os\nimport bz2\nimport time\n",
"_____no_output_____"
],
[
"# Function to check if a 'place' or a 'coordinates' are included in a tweet\n# One or both can exist in a tweet. For this code, 'place' is checked first and, \n# if it exists, returns true before 'coordinates' is checked\n\ndef does_this_tweet_have_a_place(tweet):\n \"\"\"Function to check if a 'place' or a 'coordinates' are included in a tweet\"\"\"\n\n if tweet['place']:\n country_code = (tweet['place']['country_code'])\n #print(\"country code: \" + country_code)\n return True\n elif tweet['coordinates']:\n #print(\"geo coordinates: {}\".format(tweet['coordinates']))\n return True\n else:\n return False",
"_____no_output_____"
],
[
"# Function to read in all the tweets from any one bz-zipped json file\ndef read_tweets_from_bzfile(filename):\n \"\"\"Function read in all the tweets from any one bz-zipped json file\"\"\"\n\n # local variables\n tweets = []\n read_count = 0\n kept_count = 0\n\n # open and unzip the bz2 file\n with bz2.open(filename, \"rb\") as data_file:\n for line in data_file:\n try: \n # load the tweet on this line of the file\n tweet = json.loads(line)\n read_count += 1\n #print(tweet['text'])\n\n # check if the tweet has a place or geo coordinates\n if does_this_tweet_have_a_place(tweet) :\n tweet['file_path'] = filename\n tweets.append(tweet)\n kept_count += 1\n except:\n pass\n\n # print some outputs so we can watch it working\n print(\"file read: {}\".format(filename))\n print(\" tweets read in file: {}\".format(read_count))\n print(\" tweets kept from file: {} \".format(kept_count))\n if read_count != 0:\n print(\" kept tweets rate: {:0>2f} %\".format(100*kept_count/read_count))\n\n return tweets, read_count, kept_count\n\n### uncomment and run this to test/debug the read_tweets_from_bzfile function using a single file\n#tweets = []\n#read_tweets_from_bzfile(\"00.json.bz2\", tweets)",
"_____no_output_____"
],
[
"# Function to iterate through a directory, get all the archive files, and then \n# read them in one at a time\ndef read_tweets_from_datetimehour_dir(rootdir):\n \"\"\"Function to iterate through a directory, get all the archive files, and then read them in one at a time\"\"\"\n\n # declare variables\n tweets = [] # keep tweets as an array for now for mem management\n num_read = 0\n num_kept = 0\n\n # will count the number of files as we go\n num_files_read = 0\n\n # iterate through the directories\n for directory, subdirectory, filenames in os.walk(rootdir):\n #print(directory)\n\n # iterate through the filenames\n for filename in filenames:\n full_path_filename = os.path.join(directory, filename)\n\n # call the read tweets function and keep track of counters\n tw, nr, nk = read_tweets_from_bzfile(full_path_filename)\n\n # append to the tweets array\n tweets.extend( tw ) # important to use \"extend\" method \n\n # increment the counters\n num_files_read += 1 \n num_read += nr\n num_kept += nk\n\n print(\" files read so far in this dir: {}\".format(num_files_read))\n print(\" results so far in this dir: {} tweets\".format(len(tweets)))\n\n print(\"done. size of tweets array: {}\".format(len(tweets)))\n\n return tweets, num_read, num_kept, num_files_read # return stats with the tweets array",
"_____no_output_____"
],
[
"# Function to check if a file exists; was used in development\ndef check_if_output_file_exists(filepath):\n if os.path.exists(filepath):\n print(\"file {} exists.\".format(filepath))\n while True:\n if os.path.isfile( filepath ):\n overwrite = input('Delete old file? (If no, output will be appended)\\n Y = yes, N = no\\n')\n if 'y' in overwrite.lower():\n os.remove(filepath)\n return False\n elif 'n' in overwrite.lower():\n return True\n else:\n print('input not understood; please try again')\n ",
"_____no_output_____"
]
],
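Before launching the long run in the next block, it can be worth smoke-testing the reader on a single archive file; a minimal sketch, where the file path shown is hypothetical and should point at one downloaded `.json.bz2` file:

```python
import pandas as pd

# hypothetical path to one archived minute of the spritzer sample
sample_file = "2020/12/01/00/00.json.bz2"

tweets, n_read, n_kept = read_tweets_from_bzfile(sample_file)
print(f"kept {n_kept} of {n_read} tweets")

# peek at a few of the fields kept by the main run below
df = pd.DataFrame(tweets)
if not df.empty:
    print(df[['created_at', 'lang', 'place']].head())
```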
[
[
"# Main run block\nNow that we've defined some key functions, we can run through it all. This will take a while.\n\nThe current set up is to run a week's worth of data. The data should be defined in the initial declarations of \nthis block, with year, month, day, and hour. Change according to the data you downloaded and want to analyze.\nIt's a good idea to restrict this to a smaller range for testing and verification before embarking on the\nentire run you want to do.",
"_____no_output_____"
]
],
[
[
"# MAIN BLOCK\n\n# instead of using os.walk, we'll specifically declare what we want to iterate through\n# so that we have control of the size of this job, and to be flexible when we want to do\n# smaller test runs\n\n# we'll also create the df and save off the results by the hour, which is about the right size to not crash\n# everything on a pentium i5 with 8GB of RAM\n# but for sanity we'll make csv files by the day, so 7 files for the week\n\n# set these variables to determine which directories will be read\n# in this example, we are going with 1 week in December 2020\nyear = 2020\nmonth = 12\nday_start = 1\nday_end = 7\nhour_start = 0 # possible range: 0-23\nhour_end = 23\n\n# counters\ntotal_tweets_read = 0\ntotal_tweets_kept = 0\ntotal_files_read = 0 \n\n# other variables\ndir = \"\"\noutput_csv_file = \"tweets_with_places\"\n\ntic = time.perf_counter() # start a timer\n\n# now start iterating through files\nfor day in range(day_start, day_end +1):\n \n # the dir/file structure is hard coded\n output_csv_file = \"tweets_with_places_\" + \\\n str('{:0>4d}').format(year) + \\\n str('{:0>2d}').format(month) + \\\n str('{:0>2d}').format(day) + \\\n \".csv\"\n \n write_csv_header = True # start with true, change to false after first write-out\n \n for hour in range (hour_start, hour_end +1):\n\n dir = os.path.join(str('{:0>4d}').format(year), \\\n str('{:0>2d}').format(month), \\\n str('{:0>2d}').format(day), \\\n str('{:0>2d}').format(hour))\n print(\"starting new directory: \" + dir)\n\n if os.path.exists(dir) == False:\n print(\"directory does not exist; moving on\")\n break\n\n # read the file and get back only those witih places or geo coordinates\n tweets, tweets_read, tweets_kept, files_read = read_tweets_from_datetimehour_dir(dir)\n tweets_df = pd.DataFrame( tweets )\n \n # print some outputs and statistics\n #print(tweets_df.columns)\n print(\"total tweets: {}\".format(tweets_read))\n tweets_df['created_at'] = pd.to_datetime(tweets_df['created_at'])\n print(\"date time range: {} to {}\".format(\\\n tweets_df['created_at'].min(),tweets_df['created_at'].max()))\n try:\n print(\" percentage tweets kept for {} d {} h: {:0>2f} %\".format(day, hour, 100*tweets_kept/tweets_read ))\n except:\n print(\" no tweets read \")\n\n # increment the counters\n total_tweets_read = total_tweets_read + tweets_read\n total_tweets_kept = total_tweets_kept + tweets_kept\n total_files_read = total_files_read + files_read\n\n # we can still keep lots of information from the tweet while dropping lots of extraneous or \n # repeated information; this saves file size\n if len(tweets_df) > 0:\n filtered_df = tweets_df[[\\\n 'created_at','id','text','source','user',\\\n 'geo','coordinates','place','entities','lang','file_path']]\n\n # write to the csv file\n filtered_df.to_csv(output_csv_file, mode='a', header=write_csv_header)\n\n write_csv_header = False # don't write headers after the first time\n\n print(\"wrote to file\")\n else:\n print(\"nothing written to file\")\n\n # print stats for the 'hour' read in\n print(\"hour {} ended\".format(hour))\n print(\"TOTAL tweets kept, tweets read: {}, {}\".format(total_tweets_kept, total_tweets_read))\n print(\"TOTAL files read: {}\".format(total_files_read))\n print(\"TOTAL percentage tweets kept: {:0>2f} %\".format( 100*total_tweets_kept/total_tweets_read ))\n\n # print stats for the 'day' read in\n print(\"day {} ended\".format(day))\n print(\"TOTAL tweets kept, tweets read: {}, {}\".format(total_tweets_kept, total_tweets_read))\n print(\"TOTAL files read: 
{}\".format(total_files_read))\n print(\"TOTAL percentage tweets kept: {:0>2f} %\".format( 100*total_tweets_kept/total_tweets_read ))\n\n# print overall stats\nprint(\"all files read\")\nprint(\"TOTAL tweets kept, tweets read: {}, {}\".format(total_tweets_kept, total_tweets_read))\nprint(\"TOTAL files read: {}\".format(total_files_read))\nprint(\"TOTAL percentage tweets kept: {:0>2f} %\".format( 100*total_tweets_kept/total_tweets_read ))\n \n# how long did that take?\ntoc = time.perf_counter()\nprint(f\"iterating and determining place from geo coords took {toc - tic:0.4f} seconds\")\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ade07124e432af176436bfc875b9d650140dbea
| 6,285 |
ipynb
|
Jupyter Notebook
|
main.ipynb
|
rkhawam750/Python_homework
|
785ba4590f3bec552e6e01224320a6ad520d5333
|
[
"FTL"
] | null | null | null |
main.ipynb
|
rkhawam750/Python_homework
|
785ba4590f3bec552e6e01224320a6ad520d5333
|
[
"FTL"
] | null | null | null |
main.ipynb
|
rkhawam750/Python_homework
|
785ba4590f3bec552e6e01224320a6ad520d5333
|
[
"FTL"
] | null | null | null | 25.864198 | 735 | 0.526492 |
[
[
[
"## My Bank Homework",
"_____no_output_____"
]
],
[
[
"# Import the pathlib and csv library\nfrom pathlib import Path\nimport csv",
"_____no_output_____"
],
[
"# Set the file path\ncsvpath = Path(\"budget_data.csv\")\n\nmonths = []\nmonth = ''\n\npnls = []\npnl = 0\n\nchanges = []\nnet_monthly_avg = 0\n\nline_num = 0\n\n",
"_____no_output_____"
],
[
"print(csvpath)",
"budget_data.csv\n"
],
[
"# Open the csv file as an object\n # print (type(data_file_))\n # read file\ntotal_net = 0\nwith open(csvpath, 'r') as csvfile:\n csvreader = csv.reader(csvfile, delimiter=',')\n header = next(csvreader)\n for row in csvreader:\n total_net += int(row[1])\nprint(total_net)",
"38382578\n"
],
[
"# Open the csv file as an object\nwith open(csvpath, \"r\") as data_file:\n \n # print(type(data_file))\n # Read the file\n csvreader = csv.reader(data_file, delimiter =',')\n\n # Read the head row\n header = next(csvreader)\n line_num+=1\n\n # Iterate through the rows of the csv file\n for row in csvreader:\n\n # Set month to corresponding column 1\n month = row[0]\n # Update months list with append\n months.append(month)\n\n total_months = len(months)\n",
"_____no_output_____"
],
[
"with open(csvpath, 'r') as data_file:\n\n csvreader = csv.reader(data_file, delimiter=',')\n\n header = next(csvreader)\n line_num+=1\n\n for row in csvreader:\n pnl = int(row[1])\n\n pnls.append(pnl)\n ",
"_____no_output_____"
],
[
"deltas = [pnls[i+1] - pnls[i] for i in range(len(pnls)-1)]\n\nprint(deltas)\n",
"[116771, -662642, -391430, 379920, 212354, 510239, -428211, -821271, 693918, 416278, -974163, 860159, -1115009, 1033048, 95318, -308093, 99052, -521393, 605450, 231727, -65187, -702716, 177975, -1065544, 1926159, -917805, 898730, -334262, -246499, -64055, -1529236, 1497596, 304914, -635801, 398319, -183161, -37864, -253689, 403655, 94168, 306877, -83000, 210462, -2196167, 1465222, -956983, 1838447, -468003, -64602, 206242, -242155, -449079, 315198, 241099, 111540, 365942, -219310, -368665, 409837, 151210, -110244, -341938, -1212159, 683246, -70825, 335594, 417334, -272194, -236462, 657432, -211262, -128237, -1750387, 925441, 932089, -311434, 267252, -1876758, 1733696, 198551, -665765, 693229, -734926, 77242, 532869]\n"
],
[
"def Average(lst):\n return sum(lst) / len (lst)\n\navg = Average(deltas)\n\nnet_monthly_avg = round(avg, 2)\n",
"_____no_output_____"
],
[
"greatest_increase = max(deltas)\nmax_index = deltas.index(greatest_increase)\n\ngreatest_increase_month = max_index + 1\n\n",
"_____no_output_____"
],
[
"greatest_decrease = min(deltas)\nmin_index = deltas.index(greatest_decrease)\n\ngreatest_decrease_month = min_index + 1\n",
"_____no_output_____"
],
[
"with open(\"Bank_Numbers.txt\", \"w\") as txt_file:\n txt_file.write(f\"Financial Analysis\\n\")\n txt_file.write(f\"----------------------------\\n\")\n txt_file.write(f\"Total Months: {total_months}\\n\")\n txt_file.write(f\"Total: ${total_net}\\n\")\n txt_file.write(f\"Average Change: ${net_monthly_avg}\\n\")\n txt_file.write(f\"Greatest Increase in Profits: {months[greatest_increase_month]} (${greatest_increase})\\n\")\n txt_file.write(f\"Greatest Decrease in Profits: {months[greatest_decrease_month]} (${greatest_decrease})\\n\")\n\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ade2be036bb955e3829e0db6571df8a28c2845f
| 22,732 |
ipynb
|
Jupyter Notebook
|
examples/00-basics/basic-signals.ipynb
|
acorbe/autogpy
|
fe25e9e908d4d1e98268124233efeff6f3a92e06
|
[
"BSD-3-Clause"
] | 7 |
2019-05-16T20:34:12.000Z
|
2021-07-06T14:29:11.000Z
|
examples/00-basics/basic-signals.ipynb
|
acorbe/autogpy
|
fe25e9e908d4d1e98268124233efeff6f3a92e06
|
[
"BSD-3-Clause"
] | 5 |
2019-07-03T13:52:18.000Z
|
2020-08-09T16:57:32.000Z
|
examples/00-basics/basic-signals.ipynb
|
acorbe/autogpy
|
fe25e9e908d4d1e98268124233efeff6f3a92e06
|
[
"BSD-3-Clause"
] | null | null | null | 202.964286 | 20,196 | 0.92086 |
[
[
[
"%pylab inline\n%load_ext autoreload\n%autoreload 2\nimport autogpy",
"Populating the interactive namespace from numpy and matplotlib\nThe autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"## building some example signals with numpy\n\ntt = np.linspace(-np.pi,np.pi,100)\nyy1 = np.sin(tt)\nyy2 = 1.5 * np.cos(tt)",
"_____no_output_____"
],
[
"## using autogpy without any flag or parameter\n\nwith autogpy.AutoGnuplotFigure(\"basic-signals-output\") as fig:\n fig.plot(tt,yy1)\n fig.plot(tt,yy2)",
"_____no_output_____"
],
[
"## check the output in the folder `basic-signals-output`\n## it contains the gnuplot script, data and makefile to re-generate this figure\n! ls basic-signals-output",
"Makefile fig__.pdflatex_compile.sh\r\n\u001b[1m\u001b[34mfig.latex.nice\u001b[m\u001b[m fig__.tikz.gnu\r\nfig__.core.gnu fig__.tikz_compile.sh\r\nfig__.jpg.gnu fig__0__.dat\r\nfig__.pdf fig__1__.dat\r\nfig__.pdf_converted_to.png plot_out.eps\r\nfig__.pdflatex.gnu sync_me.sh\r\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |