hexsha stringlengths 40–40 | size int64 6–14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6–260 | max_stars_repo_name stringlengths 6–119 | max_stars_repo_head_hexsha stringlengths 40–41 | max_stars_repo_licenses list | max_stars_count int64 1–191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24–24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24–24 ⌀ | max_issues_repo_path stringlengths 6–260 | max_issues_repo_name stringlengths 6–119 | max_issues_repo_head_hexsha stringlengths 40–41 | max_issues_repo_licenses list | max_issues_count int64 1–67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24–24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24–24 ⌀ | max_forks_repo_path stringlengths 6–260 | max_forks_repo_name stringlengths 6–119 | max_forks_repo_head_hexsha stringlengths 40–41 | max_forks_repo_licenses list | max_forks_count int64 1–105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24–24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24–24 ⌀ | avg_line_length float64 2–1.04M | max_line_length int64 2–11.2M | alphanum_fraction float64 0–1 | cells list | cell_types list | cell_type_groups list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4a7da8a278b20876f18dda6707f53c8a7dd8c3d8
| 4,691 |
ipynb
|
Jupyter Notebook
|
notebooks/eflint3-features/3_postulation_overrides_derivation.ipynb
|
leegbestand/eflint-jupyter
|
ebb7b1016f7b4bbeeee13c8791b29f3c0ba4f34e
|
[
"BSD-3-Clause"
] | 2 |
2022-01-18T12:55:20.000Z
|
2022-01-21T15:26:13.000Z
|
notebooks/eflint3-features/3_postulation_overrides_derivation.ipynb
|
leegbestand/eflint-jupyter
|
ebb7b1016f7b4bbeeee13c8791b29f3c0ba4f34e
|
[
"BSD-3-Clause"
] | null | null | null |
notebooks/eflint3-features/3_postulation_overrides_derivation.ipynb
|
leegbestand/eflint-jupyter
|
ebb7b1016f7b4bbeeee13c8791b29f3c0ba4f34e
|
[
"BSD-3-Clause"
] | null | null | null | 27.757396 | 468 | 0.579407 |
[
[
[
"# 3. Postulation and derivation\nIn older versions of eFLINT, a strict separation between *postulated types* and *derived types* was in effect. That is, the instances of a type were either added to the knowledge base through the execution of events or actions (postulated) or by a derivation rule (derived). A programmer breaking this rule would notice that facts postulated about types with derivation rules would never appear in the knowledge base (unless also added by a derivation rule).\n\nIn eflint-3.0, this is no longer the case, owing to an alternative semantics for derivation rules. In the new semantics, an instance is considered to hold true when it is postulated as being true, or when (while not postulated as being false) it can be generated by a derivation rule. \n\nThis subtle change makes it much simpler to express certain recurring patterns, such as closure relations in which a derivation rule acts as a closure operation. For example, the symmetric relation of being one's neighbour: ",
"_____no_output_____"
]
],
[
[
"Fact person Identified by Alice, Bob, Chloe\n\nFact neighbour-of Identified by person1 * person2 Where person1 != person2\n Holds when neighbour-of(person2, person1) // symmetry",
"_____no_output_____"
]
],
[
[
"When an individual instance of the `neighbour-of` fact is postulated to hold true, its reverse also holds true:",
"_____no_output_____"
]
],
[
[
"+neighbour-of(Alice,Bob)",
"_____no_output_____"
]
],
[
[
"As another example, the following code cell defines the symmetric and transitive relation `family-of`:",
"_____no_output_____"
]
],
[
[
"Fact family-of Identified by person1 * person2 Where person1 != person2\n Holds when family-of(person2, person1) // symmetry\n , family-of(person2, person3) && family-of(person3,person1). // transitivity\n\n+family-of(Alice,Bob).\n+family-of(Bob, Chloe).",
"_____no_output_____"
]
],
[
[
"Old code in which a strict separation between postulated and derived types was maintained is unaffected by this change.",
"_____no_output_____"
],
[
"### Warning",
"_____no_output_____"
],
[
"The current implementation is very naïve, e.g. it does not use caching of any kind. Combinatorial explosion is therefore a risk, as demonstrated by the following change to the domain of discourse:",
"_____no_output_____"
]
],
[
[
"Fact person Identified by Alice, Bob, Chloe, David",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
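A rough Python analogue of the closure-rule pattern from the eFLINT notebook above may help illustrate the semantics; the helper name `closure` is hypothetical and not part of eFLINT:

```python
# Fixpoint computation of the symmetric-transitive closure of postulated pairs,
# mirroring "Holds when family-of(p2, p1)" (symmetry) and
# "family-of(p2, p3) && family-of(p3, p1)" (transitivity).
def closure(pairs):
    facts = set(pairs)
    while True:
        derived = {(b, a) for (a, b) in facts}                        # symmetry
        derived |= {(a, d) for (a, b) in facts for (c, d) in facts
                    if b == c and a != d}                             # transitivity
        if derived <= facts:
            return facts
        facts |= derived

print(sorted(closure({("Alice", "Bob"), ("Bob", "Chloe")})))
# Derives all six ordered pairs over {Alice, Bob, Chloe}.
```

The nested loop over all fact pairs also shows why the notebook's warning applies: a naive fixpoint without caching grows quickly with the domain of discourse.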
4a7db41159b3c1c508d25629f835b6cd6dad6619
| 4,921 |
ipynb
|
Jupyter Notebook
|
03-Notebooks/04-CPI.ipynb
|
Serrat96/GoldAnalysis
|
5191c6b764e4a3a629ed4f1b5c68df660512d578
|
[
"MIT"
] | null | null | null |
03-Notebooks/04-CPI.ipynb
|
Serrat96/GoldAnalysis
|
5191c6b764e4a3a629ed4f1b5c68df660512d578
|
[
"MIT"
] | null | null | null |
03-Notebooks/04-CPI.ipynb
|
Serrat96/GoldAnalysis
|
5191c6b764e4a3a629ed4f1b5c68df660512d578
|
[
"MIT"
] | 1 |
2021-07-27T07:28:27.000Z
|
2021-07-27T07:28:27.000Z
| 24.605 | 104 | 0.42593 |
[
[
[
"import pandas as pd\nfrom fredapi import Fred",
"_____no_output_____"
]
],
[
[
"## Obtaining the data from FRED",
"_____no_output_____"
]
],
[
[
"fred = Fred(api_key='a7eca89fdf2905baea21d67b942c9ef7')\ncpi=fred.get_series('CPIAUCNS')",
"_____no_output_____"
],
[
"cpi = pd.DataFrame(data=cpi, columns=['Value'])",
"_____no_output_____"
]
],
[
[
"## Creating a pickle so we do not always call the API",
"_____no_output_____"
]
],
[
[
"#cpi.to_pickle(r'..\\02-Data\\04-CPI\\01-CPI_USD_1913-01-01_2021-06-01_monthly_fred.pkl')\ncpi = pd.read_pickle(r'..\\02-Data\\04-CPI\\01-CPI_USD_1913-01-01_2021-06-01_monthly_fred.pkl')\ncpi",
"_____no_output_____"
],
[
"cpi.to_csv(r'..\\02-Data\\04-CPI\\01-CPI_USD_1913-01-01_2021-06-01_monthly_fred.csv')",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
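The CPI notebook above fetches a FRED series once and then reuses a pickle so the API is not called on every run. A minimal sketch of that caching pattern (the path `cpi.pkl` and helper `load_cpi` are assumptions, not taken from the notebook):

```python
import os
import pandas as pd
from fredapi import Fred

PICKLE_PATH = "cpi.pkl"  # hypothetical local cache location

def load_cpi(api_key):
    # Serve the series from the local pickle when it exists;
    # otherwise fetch CPIAUCNS from FRED once and cache it.
    if os.path.exists(PICKLE_PATH):
        return pd.read_pickle(PICKLE_PATH)
    series = Fred(api_key=api_key).get_series('CPIAUCNS')
    cpi = pd.DataFrame(data=series, columns=['Value'])
    cpi.to_pickle(PICKLE_PATH)
    return cpi
```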
4a7dbaa968e39796d49ce04d960268bde98670a7
| 916,744 |
ipynb
|
Jupyter Notebook
|
Examples/Image/Detection/FastRCNN/CNTK_FastRCNN_Eval.ipynb
|
digimatronics/MicrosoftCNTK2
|
274924836b6aea688f0a5225334862168c40acea
|
[
"RSA-MD"
] | 1 |
2017-11-10T18:05:58.000Z
|
2017-11-10T18:05:58.000Z
|
Examples/Image/Detection/FastRCNN/CNTK_FastRCNN_Eval.ipynb
|
piandpower/CNTK
|
274924836b6aea688f0a5225334862168c40acea
|
[
"RSA-MD"
] | null | null | null |
Examples/Image/Detection/FastRCNN/CNTK_FastRCNN_Eval.ipynb
|
piandpower/CNTK
|
274924836b6aea688f0a5225334862168c40acea
|
[
"RSA-MD"
] | null | null | null | 1,639.971377 | 739,488 | 0.945755 |
[
[
[
"## Evaluate CNTK Fast-RCNN model directly from python\n\nThis notebook demonstrates how to evaluate a single image using a CNTK Fast-RCNN model.\n\nFor a full description of the model and the algorithm, please see the following <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN\" target=\"_blank\">tutorial</a>.\n\nBelow, you will see sample code for:\n1. Preparing the input data for the network (including image size adjustments)\n2. Evaluation of the input data using the model\n3. Processing the evaluation result and presenting the selected regions back on the image.\n\n<b>Important</b>: Before running this notebook, please make sure that:\n<ol>\n<li>You have version >= 2.0 RC 1 of CNTK installed. Installation instructions are available <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine\" target=\"_blank\">here</a>.\n\n<li>This notebook uses the CNTK python APIs and should be run from the CNTK python environment.</li>\n\n<li>OpenCV and the other required python packages for the Fast-RCNN scenario are installed. Please follow the instructions <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN#setup\" target=\"_blank\">in here</a> to install the required packages.\n</ol>",
"_____no_output_____"
],
[
"##### 1. Download the sample dataset and make sure that the model exists\nFirst things first - we will download the sample Grocery dataset (if it's not already there), and we'll also make sure that the Fast-RCNN model file exists. The script will use your local trained model (if available), or will download and use the pre-trained model if a local trained model isn't available.\nIn case we run inside the CNTK test environment, the model and data are copied from the test data directory.\nWe also set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n# the above line enables us to draw the images inside the notebooks\n\nimport os\nimport sys\nfrom os import path\nimport cntk\n\n# Check for an environment variable defined in CNTK's test infrastructure\ndef is_test(): return 'CNTK_EXTERNAL_TESTDATA_SOURCE_DIRECTORY' in os.environ\n\n\n# Select the right target device when this notebook is being tested\n# Currently supported only for GPU \n# Setup data environment for pre-built data sources for testing\nif is_test(): \n if 'TEST_DEVICE' in os.environ:\n if os.environ['TEST_DEVICE'] == 'cpu':\n cntk.device.try_set_default_device(cntk.device.cpu()) \n else:\n cntk.device.try_set_default_device(cntk.device.gpu(0))\n sys.path.append(os.path.join(*\"../../../../Tests/EndToEndTests/CNTKv2Python/Examples\".split(\"/\")))\n import prepare_test_data as T\n T.prepare_Grocery_data()\n T.prepare_fastrcnn_grocery_100_model()\n\n#Make sure the grocery dataset is installed \nsys.path.append('../../DataSets/Grocery')\nfrom install_grocery import download_grocery_data\ndownload_grocery_data()\n\n# Make sure the FRCNN model exists - check if the model was trained and exists, if not - download the existing model\n\nsys.path.append('../../PretrainedModels')\nfrom models_util import download_model_by_name\ndownload_model_by_name(\"Fast-RCNN_grocery100\")\nmodel_path = '../../PretrainedModels/Fast-RCNN_grocery100.model'\n",
"Data already available at /home/nadavbar/code/CNTK/Examples/Image/DataSets/Grocery/../Grocery\nCNTK model already available at /home/nadavbar/code/CNTK/Examples/Image/PretrainedModels/Fast-RCNN_grocery100.model\n"
]
],
[
[
"### 3. Load the model and prepare it for evaluation\nAs a first step for using the Fast-RCNN model, we load the trained model file.\n\nThe trained model accepts 3 inputs: the image data, the bounding box (region of interest, or ROI) proposals and the ground truth labels of the ROIs. Since we are evaluating a new image, we probably don't have the ground truth labels for the image, hence we need to adjust the network to accept only the image and the ROIs as input.\nIn order to do that we use the CNTK APIs to clone the network and change its input nodes.\n\nMore information and examples regarding cloning nodes of a network are available in the <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Build-your-own-image-classifier-using-Transfer-Learning\" target=\"_blank\">Transfer Learning</a> tutorial.",
"_____no_output_____"
]
],
[
[
"from cntk import load_model\nfrom cntk import placeholder\nfrom cntk.logging.graph import find_by_name, get_node_outputs\nfrom cntk.ops import combine\nfrom cntk.ops.sequence import input_variable\nfrom cntk.ops.functions import CloneMethod\n\n# load the trained model\ntrained_frcnn_model = load_model(model_path)\n\n# find the original features and rois input nodes\nfeatures_node = find_by_name(trained_frcnn_model, \"features\")\nrois_node = find_by_name(trained_frcnn_model, \"rois\")\n\n# find the output \"z\" node\nz_node = find_by_name(trained_frcnn_model, 'z')\n\n# define new input nodes for the features (image) and rois\nimage_input = input_variable(features_node.shape, name='features')\nroi_input = input_variable(rois_node.shape, name='rois')\n\n# Clone the desired layers with fixed weights and place holder for the new input nodes\ncloned_nodes = combine([z_node.owner]).clone(\n CloneMethod.freeze,\n {features_node: placeholder(name='features'), rois_node: placeholder(name='rois')})\n\n# apply the cloned nodes to the input nodes\nfrcnn_model = cloned_nodes(image_input, roi_input)\n\nprint(\"Fast-RCNN Grocery model loaded successfully!\")",
"Fast-RCNN Grocery model loaded successfully!\n"
]
],
[
[
"### 4. Load an image and convert it to the network format\n\nNext, we load an image from the test set using OpenCV, and then resize it according to the network input dimensions (which are set when the network is trained).\nWhen resizing, we preserve the aspect ratio and pad the border areas with a constant value (114), which is later used for normalization by the network.",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimage_height = 1000\nimage_width = 1000 \n\ndef resize_and_pad(img, width, height, pad_value=114):\n # port of the c++ code from CNTK: https://github.com/Microsoft/CNTK/blob/f686879b654285d06d75c69ee266e9d4b7b87bc4/Source/Readers/ImageReader/ImageTransformers.cpp#L316\n img_width = len(img[0])\n img_height = len(img)\n \n scale_w = img_width > img_height\n \n target_w = width\n target_h = height\n \n if scale_w:\n target_h = int(np.round(img_height * float(width) / float(img_width)))\n else:\n target_w = int(np.round(img_width * float(height) / float(img_height)))\n \n resized = cv2.resize(img, (target_w, target_h), 0, 0, interpolation=cv2.INTER_NEAREST)\n \n top = int(max(0, np.round((height - target_h) / 2)))\n left = int(max(0, np.round((width - target_w) / 2)))\n \n bottom = height - top - target_h\n right = width - left - target_w\n \n resized_with_pad = cv2.copyMakeBorder(resized, top, bottom, left, right, \n cv2.BORDER_CONSTANT, value=[pad_value, pad_value, pad_value])\n \n # transpose(2,0,1) converts the image from HWC to the CHW format which CNTK accepts\n model_arg_rep = np.ascontiguousarray(np.array(resized_with_pad, dtype=np.float32).transpose(2,0,1))\n \n return resized_with_pad, model_arg_rep\n\ndef load_image_and_scale(image_path, width, height, pad_value=114):\n img = cv2.imread(image_path)\n return resize_and_pad(img, width, height, pad_value), img\n\ntest_image_path = r\"../../DataSets/Grocery/testImages/WIN_20160803_11_28_42_Pro.jpg\"\n(test_img, test_img_model_arg), original_img = load_image_and_scale(test_image_path, image_width, image_height)\n\nplt.imshow(cv2.cvtColor(test_img, cv2.COLOR_BGR2RGB))\nplt.axis(\"off\")",
"_____no_output_____"
]
],
[
[
"### 5. Generate ROIs for testing\n\nNow, we produce region of interest (ROI) proposals using the selective search and grid methods, following the same approach as the script A1_GenerateInputROIs.py.\n\nEach ROI is in the format [x,y,w,h], where the coordinates are real numbers in the range 0 to 1, scaled according to the resized and padded image. \nThe ROIs array is padded with [0,0,0,0] regions at the end to match the 2000 ROIs input format of the model.",
"_____no_output_____"
]
],
[
[
"# Parameters taken from PARAMETERS.py\n# ROI generation\nroi_minDimRel = 0.04\nroi_maxDimRel = 0.4\nroi_minNrPixelsRel = 2 * roi_minDimRel * roi_minDimRel\nroi_maxNrPixelsRel = 0.33 * roi_maxDimRel * roi_maxDimRel\nroi_maxAspectRatio = 4.0 # maximum aspect Ratio of a ROI vertically and horizontally\nroi_maxImgDim = 200 # image size used for ROI generation\nss_scale = 100 # selective search ROIS: parameter controlling cluster size for segmentation\nss_sigma = 1.2 # selective search ROIs: width of Gaussian kernel for segmentation\nss_minSize = 20 # selective search ROIs: minimum component size for segmentation\ngrid_nrScales = 7 # uniform grid ROIs: number of iterations from largest possible ROI to smaller ROIs\ngrid_aspectRatios = [1.0, 2.0, 0.5] # uniform grid ROIs: aspect ratio of ROIs\ncntk_nrRois = 100 # 100 # how many ROIs to zero-pad\ncntk_padWidth = 1000\ncntk_padHeight = 1000\n\nfrom cntk_helpers import imArrayWidthHeight, getSelectiveSearchRois, imresizeMaxDim\nfrom cntk_helpers import getGridRois, filterRois, roiTransformPadScaleParams, roiTransformPadScale\n\ndef get_rois_for_image(img, use_selective_search=True, use_grid_rois=True):\n \n roi_minDim = roi_minDimRel * roi_maxImgDim\n roi_maxDim = roi_maxDimRel * roi_maxImgDim\n roi_minNrPixels = roi_minNrPixelsRel * roi_maxImgDim*roi_maxImgDim\n roi_maxNrPixels = roi_maxNrPixelsRel * roi_maxImgDim*roi_maxImgDim\n\n\n imgOrig = img.copy()\n\n # get rois\n if use_selective_search:\n print (\"Calling selective search..\")\n rects, scaled_img, scale = getSelectiveSearchRois(imgOrig, ss_scale, ss_sigma, ss_minSize, roi_maxImgDim) #interpolation=cv2.INTER_AREA\n print (\"Number of rois detected using selective search: \" + str(len(rects)))\n else:\n rects = []\n scaled_img, scale = imresizeMaxDim(imgOrig, roi_maxImgDim, boUpscale=True, interpolation=cv2.INTER_AREA)\n \n imgWidth, imgHeight = imArrayWidthHeight(scaled_img)\n\n # add grid rois\n if use_grid_rois:\n rectsGrid = getGridRois(imgWidth, imgHeight, grid_nrScales, grid_aspectRatios)\n print (\"Number of rois on grid added: \" + str(len(rectsGrid)))\n rects += rectsGrid\n\n # run filter\n print (\"Number of rectangles before filtering = \" + str(len(rects)))\n rois = filterRois(rects, imgWidth, imgHeight, roi_minNrPixels, roi_maxNrPixels, roi_minDim, roi_maxDim, roi_maxAspectRatio)\n if len(rois) == 0: #make sure at least one roi returned per image\n rois = [[5, 5, imgWidth-5, imgHeight-5]]\n print (\"Number of rectangles after filtering = \" + str(len(rois)))\n\n # scale up to original size and save to disk\n # note: each rectangle is in original image format with [x,y,x2,y2]\n original_rois = np.int32(np.array(rois) / scale)\n \n img_width = len(img[0])\n img_height = len(img)\n\n # all rois need to be scaled + padded to cntk input image size\n targetw, targeth, w_offset, h_offset, scale = roiTransformPadScaleParams(img_width, img_height,\n cntk_padWidth, cntk_padHeight)\n \n rois = []\n for original_roi in original_rois:\n x, y, x2, y2 = roiTransformPadScale(original_roi, w_offset, h_offset, scale)\n\n xrel = float(x) / (1.0 * targetw)\n yrel = float(y) / (1.0 * targeth)\n wrel = float(x2 - x) / (1.0 * targetw)\n hrel = float(y2 - y) / (1.0 * targeth)\n \n rois.append([xrel, yrel, wrel, hrel])\n \n # pad rois if needed:\n if len(rois) < cntk_nrRois:\n rois += [[0, 0, 0, 0]] * (cntk_nrRois - len(rois))\n elif len(rois) > cntk_nrRois:\n rois = rois[:cntk_nrRois]\n return np.array(rois), original_rois\n \ntest_rois, original_rois = get_rois_for_image(original_img)\nroi_padding_index = len(original_rois)\nprint(\"Number of rois for evaluation:\", len(test_rois))",
"--------------------------------------------------------------\nCalling selective search..\nNumber of rois detected using selective search: 590\nNumber of rois on grid added: 110580\nNumber of rectangles before filtering = 111170\nNumber of rectangles after filtering = 1345\nNumber of rois for evaluation: 100\n"
]
],
[
[
"### 6. Evaluate the sample\nHere, we prepare the data to be in CNTK's expected arguments format and run it through the model using the model's **eval** method.\n\nWe then process the result by trimming the padded ROIs part, and calculate the predicted labels and their probabilities.",
"_____no_output_____"
]
],
[
[
"from cntk_helpers import softmax2D\n\n# a dummy variable for labels that will be given as an input to the network but will be ignored\ndummy_labels = np.zeros((2000,17))\n\n#Index the names of the arguments so we can get them by name\nargs_indices = {}\nfor i,arg in enumerate(frcnn_model.arguments):\n args_indices[arg.name] = i\n \n# prepare the arguments\narguments = {\n frcnn_model.arguments[args_indices['features']]: [test_img_model_arg],\n frcnn_model.arguments[args_indices['rois']]: [test_rois],\n}\n\n# run it through the model\noutput = frcnn_model.eval(arguments)\n\n# we now extract the \"z\" values from the output, which are the values of the layer that is just before\n# the softmax layer.\n# we take just the relevant part from that array \nrois_values = output[0][0][:roi_padding_index]\n\n# get the prediction for each roi by taking the index with the maximal value in each row \nrois_labels_predictions = np.argmax(rois_values, axis=1)\n\n# calculate the probabilities using softmax\nrois_probs = softmax2D(rois_values) \n\n# print the number of ROIs that were detected as non-background\nprint(\"Number of detections: %d\"%np.sum(rois_labels_predictions > 0))",
"/home/nadavbar/anaconda3/envs/cntk-py34/lib/python3.4/site-packages/cntk/core.py:330: UserWarning: your data is of type \"float64\", but your input variable (uid \"Input225\") expects \"<class 'numpy.float32'>\". Please convert your data beforehand to speed up training.\n (sample.dtype, var.uid, str(var.dtype)))\n"
]
],
[
[
"### 7. Merge overlapping regions using Non-Maxima-Suppression\nBefore inspecting the predictions, we need to merge overlapping detected regions using the Non-Maxima-Suppression algorithm implemented in the cntk_helpers module.",
"_____no_output_____"
]
],
[
[
"from cntk_helpers import applyNonMaximaSuppression\nnms_threshold = 0.1\nnon_padded_rois = test_rois[:roi_padding_index]\nmax_probs = np.amax(rois_probs, axis=1).tolist()\nrois_prediction_indices = applyNonMaximaSuppression(nms_threshold, rois_labels_predictions, max_probs, non_padded_rois)\nprint(\"Indices of selected regions:\",rois_prediction_indices)",
"Indices of selected regions: [0, 67, 24, 10, 75, 71, 82, 60, 39, 91]\n"
]
],
[
[
"### 8. Visualize the results\n\nAs a final step, we use the OpenCV **rectangle** and **putText** methods in order to draw the selected regions on the original image alongside their corresponding predicted labels.",
"_____no_output_____"
]
],
[
[
"rois_with_prediction = test_rois[rois_prediction_indices]\nrois_prediction_labels = rois_labels_predictions[rois_prediction_indices]\nrois_prediction_scores = rois_values[rois_prediction_indices]\noriginal_rois_predictions = original_rois[rois_prediction_indices]\n\n# class names taken from PARAMETERS.py:\nclasses = ('__background__', # always index 0\n 'avocado', 'orange', 'butter', 'champagne', 'eggBox', 'gerkin', 'joghurt', 'ketchup',\n 'orangeJuice', 'onion', 'pepper', 'tomato', 'water', 'milk', 'tabasco', 'mustard')\n\noriginal_img_cpy = original_img.copy()\n\nfor roi,label in zip(original_rois_predictions, rois_prediction_labels):\n (x1,y1,x2,y2) = roi\n cv2.rectangle(original_img_cpy, (x1, y1), (x2, y2), (0, 255, 0), 5)\n cv2.putText(original_img_cpy,classes[label],(x1,y2 + 30), cv2.FONT_HERSHEY_DUPLEX, 2,(200,0,255),3,cv2.LINE_AA)\n\nprint(\"Evaluation result:\")\nplt.figure(figsize=(10, 10)) \nplt.imshow(cv2.cvtColor(original_img_cpy, cv2.COLOR_BGR2RGB), interpolation='nearest')\n\nplt.axis(\"off\")",
"Evaluation result:\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
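The Fast-RCNN notebook above feeds ROIs as relative [x, y, w, h] coordinates on the 1000x1000 padded image. A self-contained sketch of that conversion (the helper `to_relative_roi` is hypothetical):

```python
def to_relative_roi(x1, y1, x2, y2, pad_width=1000, pad_height=1000):
    """Convert an absolute [x1, y1, x2, y2] box on the padded image into
    the relative [x, y, w, h] encoding (values in [0, 1]) fed to the model."""
    return [x1 / pad_width, y1 / pad_height,
            (x2 - x1) / pad_width, (y2 - y1) / pad_height]

print(to_relative_roi(100, 200, 300, 500))  # [0.1, 0.2, 0.2, 0.3]
```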
4a7dc42f3ef5643b43ef60e71ba4b6a3c261d2a1
| 108,327 |
ipynb
|
Jupyter Notebook
|
rating prediction models/Naive Bayes Classifier.ipynb
|
anjanakethineni/Restaurant-Recommendations-with-Yelp
|
bef5a8d123a1c13678a922555533066a61c0f654
|
[
"MIT"
] | 23 |
2018-11-17T09:35:22.000Z
|
2021-11-24T11:27:13.000Z
|
rating prediction models/Naive Bayes Classifier.ipynb
|
anjanakethineni/Restaurant-Recommendations-with-Yelp
|
bef5a8d123a1c13678a922555533066a61c0f654
|
[
"MIT"
] | 1 |
2020-10-22T14:30:05.000Z
|
2020-10-22T14:30:05.000Z
|
rating prediction models/Naive Bayes Classifier.ipynb
|
anjanakethineni/Restaurant-Recommendations-with-Yelp
|
bef5a8d123a1c13678a922555533066a61c0f654
|
[
"MIT"
] | 9 |
2019-11-20T17:44:22.000Z
|
2022-03-19T09:21:25.000Z
| 138.525575 | 27,700 | 0.876365 |
[
[
[
"# Predicting Review rating from review text",
"_____no_output_____"
],
[
"# <span style=\"color:dodgerblue\"> Naive Bayes Classifier Using 5 Classes (1,2,3,4 and 5 Rating)</span>",
"_____no_output_____"
]
],
[
[
"%pylab inline\nimport warnings\nwarnings.filterwarnings('ignore')",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"from IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport nltk\nfrom nltk.corpus import stopwords",
"_____no_output_____"
],
[
"# Importing the reviews dataset\nreviews_dataset = pd.read_csv('reviews_restaurants_text.csv')",
"_____no_output_____"
],
[
"# Creating X and Y for the classifier. X is the review text and Y is the rating\nx = reviews_dataset['text']\ny = reviews_dataset['stars']",
"_____no_output_____"
],
[
"# Text preprocessing\nimport string\ndef text_preprocessing(text):\n no_punctuation = [ch for ch in text if ch not in string.punctuation]\n no_punctuation = ''.join(no_punctuation)\n return [w for w in no_punctuation.split() if w.lower() not in stopwords.words('english')]\n",
"_____no_output_____"
],
[
"%%time\n# Estimated time: 30 min\n\n# Vectorization\n# Converting each review into a vector using bag-of-words approach\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nvector = CountVectorizer(analyzer=text_preprocessing).fit(x)\nx = vector.transform(x)",
"Wall time: 43min 59s\n"
],
[
"# Splitting data into training and test set\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=0.20, random_state=0, shuffle =False)\n\n# Building a Multinomial Naive Bayes model and fitting it to our training set\nfrom sklearn.naive_bayes import MultinomialNB\nclassifier = MultinomialNB()\nclassifier.fit(X_train, Y_train)",
"_____no_output_____"
],
[
"# Using our trained classifier to predict the ratings from text\n# Testing our model on the test set\n\npreds = classifier.predict(X_test)\nprint(\"Actual Ratings(Stars): \",end = \"\")\ndisplay(Y_test[:15])\nprint(\"Predicted Ratings: \",end = \"\")\nprint(preds[:15])",
"Actual Ratings(Stars): "
]
],
[
[
"## Evaluating the model",
"_____no_output_____"
],
[
"## <span style=\"color:orangered\"> Accuracy </span>",
"_____no_output_____"
]
],
[
[
"# Accuracy of the model\n\nfrom sklearn.metrics import accuracy_score\naccuracy_score(Y_test, preds)",
"_____no_output_____"
]
],
[
[
"## <span style=\"color:orangered\"> Precision and Recall of the model</span>",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\nprint ('Precision: ' + str(precision_score(Y_test, preds, average='weighted')))\nprint ('Recall: ' + str(recall_score(Y_test,preds, average='weighted')))",
"Precision: 0.624972643164\nRecall: 0.664210934471\n"
]
],
[
[
"## <span style=\"color:orangered\"> Classification Report </span>",
"_____no_output_____"
]
],
[
[
"# Evaluating the model\nfrom sklearn.metrics import confusion_matrix, classification_report\nprint(confusion_matrix(Y_test, preds))\nprint('\\n')\nprint(classification_report(Y_test, preds))",
"[[ 2111 60 193 203 246]\n [ 572 76 389 422 232]\n [ 229 39 623 1237 494]\n [ 116 19 168 2420 3865]\n [ 151 38 70 1649 15326]]\n\n\n precision recall f1-score support\n\n 1 0.66 0.75 0.70 2813\n 2 0.33 0.04 0.08 1691\n 3 0.43 0.24 0.31 2622\n 4 0.41 0.37 0.39 6588\n 5 0.76 0.89 0.82 17234\n\navg / total 0.62 0.66 0.63 30948\n\n"
]
],
[
[
"## <span style=\"color:orangered\">Confusion Matrix of the model</span>",
"_____no_output_____"
]
],
[
[
"# citation: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py\nimport itertools \ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')",
"_____no_output_____"
],
[
"from sklearn import metrics\nclass_names = ['1','2','3','4','5']\n\n# Compute confusion matrix\ncnf_matrix = metrics.confusion_matrix(Y_test, preds\n )\nnp.set_printoptions(precision=2)\n\n# Plot non-normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=class_names,\n title='Confusion matrix, without normalization')\n\n# Plot normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,\n title='Normalized confusion matrix')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"# <span style=\"color:dodgerblue\"> Naive Bayes Classifier Using 2 Classes (1 and 5 Rating: Positive & Negative Reviews)</span>",
"_____no_output_____"
]
],
[
[
"# Importing the datasets\nreviews = pd.read_csv('reviews_restaurants_text.csv')\nreviews['text'] = reviews['text'].str[2:-2]\n\n# Reducing the dataset to 2 classes i.e 1 and 5 star rating\nreviews['stars'][reviews.stars == 3] = 1\nreviews['stars'][reviews.stars == 2] = 1\nreviews['stars'][reviews.stars == 4] = 5\n\n#Undersampling of the dataset to get a balanced dataset\nreview1 = reviews[reviews['stars'] == 1]\nreview5 = reviews[reviews['stars'] == 5][0:34062]\nframes = [review1, review5]\nreviews = pd.concat(frames)",
"_____no_output_____"
],
[
"# Creating X and Y for the classifier. X is the review text and Y is the rating\nx2 = reviews['text']\ny2 = reviews['stars']",
"_____no_output_____"
],
[
"# Vectorization\n# Converting each review into a vector using bag-of-words approach\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nvector2 = CountVectorizer(analyzer=text_preprocessing).fit(x2)\nx2 = vector2.transform(x2)",
"_____no_output_____"
],
[
"# Splitting data into training and test set\nfrom sklearn.model_selection import train_test_split\nX2_train, X2_test, Y2_train, Y2_test = train_test_split(x2, y2, test_size=0.20, random_state=0)",
"_____no_output_____"
],
[
"# Building a Multinomial Naive Bayes model and fitting it to our training set\nfrom sklearn.naive_bayes import MultinomialNB\nclassifier2 = MultinomialNB()\nclassifier2.fit(X2_train, Y2_train)",
"_____no_output_____"
],
[
"# Testing our model on the test set\nY2_pred = classifier2.predict(X2_test)",
"_____no_output_____"
]
],
[
[
"## <span style=\"color:orangered\"> Classification Report </span>",
"_____no_output_____"
]
],
[
[
"# Evaluating the model\nfrom sklearn.metrics import confusion_matrix, classification_report\nprint(confusion_matrix(Y2_test, Y2_pred))\nprint('\\n')\nprint(classification_report(Y2_test, Y2_pred))",
"[[6232 821]\n [ 815 6112]]\n\n\n precision recall f1-score support\n\n 1 0.88 0.88 0.88 7053\n 5 0.88 0.88 0.88 6927\n\navg / total 0.88 0.88 0.88 13980\n\n"
]
],
[
[
"## <span style=\"color:orangered\"> Accuracy of the model </span>",
"_____no_output_____"
]
],
[
[
"\n# Accuracy of the model\nfrom sklearn.metrics import accuracy_score\naccuracy_score(Y2_test, Y2_pred)",
"_____no_output_____"
]
],
[
[
"## <span style=\"color:orangered\"> Precision and Recall of the model</span>",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\nprint ('Precision: ' + str(precision_score(Y2_test, Y2_pred, average='weighted')))\nprint ('Recall: ' + str(recall_score(Y2_test, Y2_pred, average='weighted')))",
"Precision: 0.882976867141\nRecall: 0.882975679542\n"
]
],
[
[
"## <span style=\"color:orangered\"> Confusion Matrix of the model </span>",
"_____no_output_____"
]
],
[
[
"class_names = ['Negative','Positive']\n\n# Compute confusion matrix\ncnf_matrix = metrics.confusion_matrix(Y2_test, Y2_pred)\nnp.set_printoptions(precision=2)\n\n# Plot non-normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=class_names,\n title='Confusion matrix, without normalization')\n\n# Plot normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,\n title='Normalized confusion matrix')\n\nplt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
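The vectorizer/classifier pair in the Naive Bayes notebook above can also be wrapped in a single scikit-learn Pipeline, which keeps the vectorizer and model fitted together. A minimal sketch on toy data (the toy reviews stand in for the Yelp dataset):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Tiny stand-in for the Yelp review texts and star ratings.
texts = ["great food and friendly service", "terrible food", "loved it", "awful place"]
stars = [5, 1, 5, 1]

model = Pipeline([
    ("vectorizer", CountVectorizer()),  # bag-of-words features
    ("classifier", MultinomialNB()),    # multinomial Naive Bayes
])
model.fit(texts, stars)
print(model.predict(["great place", "awful food"]))
```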
4a7dc4dc07f35789636586bb4f4f83ea20c947e6
| 9,784 |
ipynb
|
Jupyter Notebook
|
tutorial-ja/307_exactcover_ja.ipynb
|
gyu-don/blueqat-tutorials
|
4f32fff7dede61eedbcadc5281a61ceb15ae253d
|
[
"Apache-2.0"
] | 1 |
2022-02-09T02:10:48.000Z
|
2022-02-09T02:10:48.000Z
|
tutorial-ja/307_exactcover_ja.ipynb
|
gyu-don/blueqat-tutorials-japanese
|
4f32fff7dede61eedbcadc5281a61ceb15ae253d
|
[
"Apache-2.0"
] | null | null | null |
tutorial-ja/307_exactcover_ja.ipynb
|
gyu-don/blueqat-tutorials-japanese
|
4f32fff7dede61eedbcadc5281a61ceb15ae253d
|
[
"Apache-2.0"
] | null | null | null | 29.558912 | 561 | 0.467702 |
[
[
[
"# The Exact Cover problem\n\nFirst, we explain the Exact Cover problem.\n\nConsider a set U of natural numbers and a number of groups $V_{1}, V_{2}, \\ldots, V_{N}$ containing those numbers. A single natural number may belong to several groups. The Exact Cover problem asks us to pick some of the groups $V_{i}$ such that no natural number is contained more than once among the picked groups and together they equal the set of natural numbers in U.\nThe variant that additionally minimizes the number of picked groups is called Smallest Exact Cover.",
"_____no_output_____"
],
[
"## Preparation",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport blueqat.wq as wq\nfrom blueqat import vqe",
"_____no_output_____"
]
],
[
[
"## Building the QUBO",
"_____no_output_____"
],
[
"We build the QUBO matrix for the problem we want to solve.\n\nLet the set of natural numbers be $U = \\{1, \\ldots, n\\}$ and the groups be $V_{i} \\subseteq U(i=1, \\ldots, N)$. Let $x_{i} \\in \\{1, 0\\}$ indicate whether the i-th group is picked: 1 if picked, 0 otherwise. We now consider a cost function $E_{A}$ that is minimized when each natural number (call it α) is contained in exactly one picked group.\n\nWith\n\n$E_{A} = A \\sum _ { \\alpha = 1 } ^ { n } \\left( 1 - \\sum _ { i : \\alpha \\in V _ { i } } x _ { i } \\right) ^ { 2 }$\n\nwe get $E_{A} = 0$ when exactly one group is picked for every natural number α.\n\nWe convert this to QUBO form. First, expand the parentheses:\n\n$E_{A} = A \\sum _ { \\alpha = 1 } ^ { n } \\{ 1 - 2\\sum _ { i : \\alpha \\in V _ { i } } x _ { i } + ( \\sum _ { i : \\alpha \\in V _ { i } } x _ { i } ) ^ { 2 } \\} $\n\nSince we are minimizing $E_{A}$, the constant first term inside the braces can be ignored.\nUsing $x_{i} \\in \\{1,0\\}$, the second term can be rewritten as\n\n$ - 2\\sum _ { i : \\alpha \\in V _ { i } } x _ { i } = - 2\\sum _ { i = j, i : \\alpha \\in V _ { i }, j : \\alpha \\in V _ { j } } x _ { i } x _ {j}$\n\nSplitting the third term into the cases i = j and $i \\neq j$ gives\n\n$ ( \\sum _ { i : \\alpha \\in V _ { i } } x _ { i } ) ^ { 2 } = \\sum _ { i = j, i : \\alpha \\in V _ { i }, j : \\alpha \\in V _ { j } } x _ { i } x _ {j} + 2 \\sum _ { i \\neq j, i : \\alpha \\in V _ { i }, j : \\alpha \\in V _ { j } } x _ { i } x _ {j} $\n\nPutting these together,\n\n$E_{A} = A \\sum _ { \\alpha = 1 } ^ { n } ( - \\sum _ { i = j, i : \\alpha \\in V _ { i }, j : \\alpha \\in V _ { j } } x _ { i } x _ {j} + 2 \\sum _ { i \\neq j, i : \\alpha \\in V _ { i }, j : \\alpha \\in V _ { j } } x _ { i } x _ {j} )$\n\nwhich is in QUBO form.",
"_____no_output_____"
]
],
[
[
"U = [1,2,3,4,5,6,7,8,9,10]\nA = 1\n\ndef get_qubo(V):\n Q = np.zeros( (len(V), len(V)) )\n\n for i in range(len(V)):\n for j in range(len(V)):\n for k in range(len(U)):\n alpha = U[k]\n in_Vi = V[i].count(alpha) > 0 # whether alpha is in V[i]\n in_Vj = V[j].count(alpha) > 0 # whether alpha is in V[j]\n if i == j and in_Vi:\n Q[i][j] += -1\n elif i < j and in_Vi and in_Vj:\n Q[i][j] += 2\n\n return Q * A",
"_____no_output_____"
]
],
[
[
"We also define a function for displaying the results.",
"_____no_output_____"
]
],
[
[
"def display_answer(list_x, energies = None, show_graph = False):\n print(\"Result x:\", list_x)\n text = \"\"\n for i in range(len(list_x)):\n if(list_x[i]):\n text += str(V[i])\n print(\"Picked {} group(s): {}\".format(sum(list_x), text))\n if energies is not None:\n print(\"Energy:\", energies[-1])\n if show_graph:\n plt.plot(energies)\n plt.show()",
"_____no_output_____"
]
],
[
[
"Running the following, we can see that the correct answer is obtained.",
"_____no_output_____"
]
],
[
[
"V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10] ]\nqubo = get_qubo(V)\nresult = vqe.Vqe(vqe.QaoaAnsatz(wq.pauli(qubo), step=4)).run()\nanswer = result.most_common(12)\nprint(answer)\ndisplay_answer(answer[0][0])",
"(((1, 1, 1, 0, 0), 0.3783998933018464), ((0, 0, 1, 1, 0), 0.19080753539598078), ((1, 0, 1, 1, 0), 0.10904143237775482), ((1, 1, 1, 0, 1), 0.0890989364939838), ((0, 0, 1, 1, 1), 0.0449279949063271), ((0, 1, 1, 1, 0), 0.030382807525070173), ((1, 1, 0, 0, 1), 0.028032010700668703), ((1, 0, 1, 1, 1), 0.025675154329096672), ((0, 0, 1, 0, 0), 0.022072489455340405), ((1, 0, 1, 0, 0), 0.014930464403380785), ((0, 0, 0, 1, 1), 0.014135096147402253), ((1, 1, 1, 1, 0), 0.0095426809095386))\nResult x: (1, 1, 1, 0, 0)\nPicked 3 group(s): [1, 2][3, 4, 5, 6][7, 8, 9, 10]\n"
]
],
[
[
"## Making V a little more complex",
"_____no_output_____"
],
[
"Let us make V a little more complex (by adding two groups) and run it again.",
"_____no_output_____"
]
],
[
[
"V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10], [7,9], [2,4,6,8] ]\nqubo = get_qubo(V)\nresult = vqe.Vqe(vqe.QaoaAnsatz(wq.pauli(qubo), step=2)).run()\nanswer = result.most_common(12)\nprint(answer)\ndisplay_answer(answer[0][0])",
"(((1, 1, 1, 0, 0, 0, 0), 0.0700844494957699), ((0, 0, 0, 1, 1, 1, 1), 0.0700844494957699), ((1, 1, 0, 0, 1, 1, 0), 0.0695151945834572), ((0, 0, 1, 1, 0, 0, 1), 0.0695151945834572), ((1, 1, 0, 0, 0, 0, 0), 0.03942041130713249), ((0, 0, 1, 1, 1, 1, 1), 0.03942041130713249), ((1, 0, 0, 0, 1, 1, 0), 0.031317165076161634), ((0, 1, 1, 1, 0, 0, 1), 0.031317165076161634), ((1, 1, 1, 0, 1, 0, 0), 0.02802224137437182), ((0, 0, 0, 1, 0, 1, 1), 0.02802224137437182), ((1, 0, 1, 0, 0, 0, 0), 0.02728514456746815), ((0, 1, 0, 1, 1, 1, 1), 0.02728514456746815))\nResult x: (1, 1, 1, 0, 0, 0, 0)\nPicked 3 group(s): [1, 2][3, 4, 5, 6][7, 8, 9, 10]\n"
]
],
[
[
"We can see that the correct answer is obtained.",
"_____no_output_____"
],
[
"### An adversarial case\nFinally, we try an adversarial case.\nThe correct answer is to pick {1,2}{3}{4}{5}{6}{7}{8}{9}{10}.\n\nLooking at the results, the correct answer is chosen most of the time, but occasionally an incorrect answer with slightly higher energy is chosen instead.",
"_____no_output_____"
]
],
[
[
"V = [ [1,2], [3], [4], [5], [6], [7], [8], [9], [10], [2,3,4,5,6,7,8,9,10]]\nfor i in range(5):\n print(\"---Run {}\".format(i+1))\n qubo = get_qubo(V)\n result = vqe.Vqe(vqe.QaoaAnsatz(wq.pauli(qubo), step=6)).run()\n answer = result.most_common(12)\n display_answer(answer[0][0])",
"---Run 1\nResult x: (1, 1, 1, 1, 1, 1, 1, 1, 1, 0)\nPicked 9 group(s): [1, 2][3][4][5][6][7][8][9][10]\n---Run 2\nResult x: (1, 1, 1, 1, 1, 1, 1, 1, 1, 0)\nPicked 9 group(s): [1, 2][3][4][5][6][7][8][9][10]\n---Run 3\nResult x: (1, 0, 0, 0, 0, 0, 0, 0, 0, 1)\nPicked 2 group(s): [1, 2][2, 3, 4, 5, 6, 7, 8, 9, 10]\n---Run 4\nResult x: (1, 1, 1, 1, 1, 1, 1, 1, 1, 0)\nPicked 9 group(s): [1, 2][3][4][5][6][7][8][9][10]\n---Run 5\nResult x: (1, 0, 0, 0, 0, 0, 0, 0, 0, 1)\nPicked 2 group(s): [1, 2][2, 3, 4, 5, 6, 7, 8, 9, 10]\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
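A classical brute-force check is useful for validating the bitstrings sampled by QAOA in the Exact Cover notebook above; the helper `is_exact_cover` below is a hypothetical sketch:

```python
from itertools import chain

U = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
V = [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10], [1, 3, 5], [10]]

def is_exact_cover(x, U, V):
    # True iff the groups selected by bitstring x cover each element of U exactly once.
    picked = list(chain.from_iterable(v for v, xi in zip(V, x) if xi))
    return sorted(picked) == sorted(U)

print(is_exact_cover((1, 1, 1, 0, 0), U, V))  # True
print(is_exact_cover((1, 1, 1, 0, 1), U, V))  # False: 10 is covered twice
```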
4a7de3d89684a0a8a70e41b0c8c7f20ade032dbf
| 31,068 |
ipynb
|
Jupyter Notebook
|
GMC/00_Preprocessing.ipynb
|
m-konopka/ml-scoring
|
35f534578849834f9259e235ed677e2cc1e5d413
|
[
"MIT"
] | null | null | null |
GMC/00_Preprocessing.ipynb
|
m-konopka/ml-scoring
|
35f534578849834f9259e235ed677e2cc1e5d413
|
[
"MIT"
] | null | null | null |
GMC/00_Preprocessing.ipynb
|
m-konopka/ml-scoring
|
35f534578849834f9259e235ed677e2cc1e5d413
|
[
"MIT"
] | null | null | null | 36.041763 | 149 | 0.335136 |
[
[
[
"# Preprocessing",
"_____no_output_____"
],
[
"Source: https://www.kaggle.com/c/GiveMeSomeCredit/",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\nimport pandas as pd\nimport config as cfg\n\nfrom sklearn.model_selection import train_test_split\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom pandas_profiling import ProfileReport\n\npd.set_option(\"display.max_columns\", None)",
"_____no_output_____"
]
],
[
[
"### Train/test split",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(os.path.join(\"Data\", \"data_original\", \"cs-training.csv\")).drop(['Unnamed: 0'], axis=1)\ndf[\"BAD\"] = df[\"SeriousDlqin2yrs\"]\ndf = df.drop([\"SeriousDlqin2yrs\"], axis=1)\ndf",
"_____no_output_____"
],
[
"print(\"Bad rate:\", df[\"BAD\"].mean())",
"Bad rate: 0.06684\n"
],
[
"X = df.drop(['BAD'], axis=1)\ny = df['BAD']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=cfg.TEST_SIZE, random_state=cfg.SEED, stratify=y)\n\nX_train = pd.get_dummies(X_train)\nX_test = pd.get_dummies(X_test)\n\nrus = RandomUnderSampler(sampling_strategy=cfg.SAMPLING_STRATEGY)\nX_train, y_train = rus.fit_resample(X_train, y_train)\n\nX_train.to_csv(os.path.join(\"Data\", \"data_preprocessed\", \"X_train.csv\"), index=False)\nX_test.to_csv(os.path.join(\"Data\", \"data_preprocessed\", \"X_test.csv\"), index=False)\ny_train.to_csv(os.path.join(\"Data\", \"data_preprocessed\", \"y_train.csv\"), index=False)\ny_test.to_csv(os.path.join(\"Data\", \"data_preprocessed\", \"y_test.csv\"), index=False)\n\nProfileReport(X_train, minimal=True).to_file(os.path.join(\"Results\", \"X_train.html\"))",
"_____no_output_____"
],
[
"print(\"X_train:\", X_train.shape)\nprint(\"X_test:\", X_test.shape)\nprint(\"Bad rate:\", y_train.mean())",
"X_train: (15875, 10)\nX_test: (37500, 10)\nBad rate: 0.4737007874015748\n"
]
],
[
[
"### Train/test split with binning",
"_____no_output_____"
]
],
[
[
"df_binned = df.copy()\n\ndf_binned['age'] = pd.qcut(df['age'], 10)\ndf_binned['RevolvingUtilizationOfUnsecuredLines'] = pd.qcut(df['RevolvingUtilizationOfUnsecuredLines'], 10)\ndf_binned['NumberOfTime30-59DaysPastDueNotWorse'] = pd.cut(df_binned['NumberOfTime30-59DaysPastDueNotWorse'], bins=[0, 1, 100], right=False)\ndf_binned['DebtRatio'] = pd.qcut(df_binned['DebtRatio'], 10)\ndf_binned['MonthlyIncome'] = pd.qcut(df_binned['MonthlyIncome'], 10)\ndf_binned['NumberOfOpenCreditLinesAndLoans'] = pd.qcut(df_binned['NumberOfOpenCreditLinesAndLoans'], 10)\ndf_binned['NumberOfTimes90DaysLate'] = pd.cut(df_binned['NumberOfTimes90DaysLate'], bins=[0, 1, 100], right=False)\ndf_binned['NumberRealEstateLoansOrLines'] = pd.cut(df_binned['NumberRealEstateLoansOrLines'], bins=[0, 1, 2, 100], right=False)\ndf_binned['NumberOfTime60-89DaysPastDueNotWorse'] = pd.cut(df_binned['NumberOfTime60-89DaysPastDueNotWorse'], bins=[0, 1, 100], right=False)\ndf_binned['NumberOfDependents'] = pd.cut(df_binned['NumberOfDependents'], bins=[0, 1, 2, 3, 100], right=False)\n \ndf_binned",
"_____no_output_____"
],
[
"print(\"Bad rate:\", df_binned[\"BAD\"].mean())",
"Bad rate: 0.06684\n"
],
[
"X = df_binned.drop(['BAD'], axis=1)\ny = df_binned['BAD']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=cfg.TEST_SIZE, random_state=cfg.SEED, stratify=y)\n\nrus = RandomUnderSampler(sampling_strategy=cfg.SAMPLING_STRATEGY)\nX_train, y_train = rus.fit_resample(X_train, y_train)\n\nX_train.to_csv(os.path.join(\"Data\", \"data_preprocessed_binned\", \"X_train.csv\"), index=False)\nX_test.to_csv(os.path.join(\"Data\", \"data_preprocessed_binned\", \"X_test.csv\"), index=False)\ny_train.to_csv(os.path.join(\"Data\", \"data_preprocessed_binned\", \"y_train.csv\"), index=False)\ny_test.to_csv(os.path.join(\"Data\", \"data_preprocessed_binned\", \"y_test.csv\"), index=False)",
"_____no_output_____"
],
[
"print(\"X_train:\", X_train.shape)\nprint(\"X_test:\", X_test.shape)\nprint(\"Bad rate:\", y_train.mean())",
"X_train: (15875, 10)\nX_test: (37500, 10)\nBad rate: 0.4737007874015748\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
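The decile binning applied in the preprocessing notebook above can be illustrated in isolation; a small sketch with synthetic ages (the data here is made up, not the GMC dataset):

```python
import numpy as np
import pandas as pd

# pd.qcut with 10 quantiles, as used for 'age' above:
# each resulting bin holds roughly 10% of the observations.
ages = pd.Series(np.random.RandomState(0).randint(21, 90, size=1000))
binned = pd.qcut(ages, 10)
print(binned.value_counts().sort_index())
```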
4a7dfe9de5e45c4cfd9e5b739702a48ad4cd7822
| 521,722 |
ipynb
|
Jupyter Notebook
|
bayesian_optimization.ipynb
|
hbcbh1999/bayesian-machine-learning
|
fd17ff2ea04cd42a477b70bd557f280567752077
|
[
"Apache-2.0"
] | null | null | null |
bayesian_optimization.ipynb
|
hbcbh1999/bayesian-machine-learning
|
fd17ff2ea04cd42a477b70bd557f280567752077
|
[
"Apache-2.0"
] | null | null | null |
bayesian_optimization.ipynb
|
hbcbh1999/bayesian-machine-learning
|
fd17ff2ea04cd42a477b70bd557f280567752077
|
[
"Apache-2.0"
] | 1 |
2019-01-10T17:07:44.000Z
|
2019-01-10T17:07:44.000Z
| 776.372024 | 264,768 | 0.945059 |
[
[
[
"# Bayesian optimization\n\n## Introduction \n\nMany optimization problems in machine learning are black box optimization problems where the objective function $f(\\mathbf{x})$ is a black box function<sup>[1][2]</sup>. We do not have an analytical expression for $f$ nor do we know its derivatives. Evaluation of the function is restricted to sampling at a point $\\mathbf{x}$ and getting a possibly noisy response. \n\nIf $f$ is cheap to evaluate we could sample at many points e.g. via grid search, random search or numeric gradient estimation. However, if function evaluation is expensive e.g. tuning hyperparameters of a deep neural network, probe drilling for oil at given geographic coordinates or evaluating the effectiveness of a drug candidate taken from a chemical search space then it is important to minimize the number of samples drawn from the black box function $f$.\n\nThis is the domain where Bayesian optimization techniques are most useful. They attempt to find the global optimum in a minimum number of steps. Bayesian optimization incorporates prior belief about $f$ and updates the prior with samples drawn from $f$ to get a posterior that better approximates $f$. The model used for approximating the objective function is called the *surrogate model*. Bayesian optimization also uses an *acquisition function* that directs sampling to areas where an improvement over the current best observation is likely.\n\n### Surrogate model\n\nA popular surrogate model for Bayesian optimization is the [Gaussian process](https://en.wikipedia.org/wiki/Gaussian_process) (GP). I wrote about Gaussian processes in a [previous post](https://krasserm.github.io/2018/03/19/gaussian-processes/). If you are not familiar with GPs I recommend reading it first. GPs define a prior over functions and we can use them to incorporate prior beliefs about the objective function (smoothness, ...). The GP posterior is cheap to evaluate and is used to propose points in the search space where sampling is likely to yield an improvement. \n\n### Acquisition functions\n\nProposing sampling points in the search space is done by acquisition functions. They trade off exploitation and exploration. Exploitation means sampling where the surrogate model predicts a high objective and exploration means sampling at locations where the prediction uncertainty is high. Both correspond to high acquisition function values and the goal is to maximize the acquisition function to determine the next sampling point. \n\n\n\nMore formally, the objective function $f$ will be sampled at $\\mathbf{x}_t = \\mathrm{argmax}_{\\mathbf{x}} u(\\mathbf{x} \\lvert \\mathcal{D}_{1:t-1})$ where $u$ is the acquisition function and $\\mathcal{D}_{1:t-1} = \\{(\\mathbf{x}_1, y_1),...,(\\mathbf{x}_{t-1}, y_{t-1})\\}$ are the $t-1$ samples drawn from $f$ so far. Popular acquisition functions are *maximum probability of improvement* (MPI), *expected improvement* (EI) and *upper confidence bound* (UCB)<sup>[1]</sup>. In the following, we will use the expected improvement (EI) which is most widely used and described further below. \n\n### Optimization algorithm\n\nThe Bayesian optimization procedure is as follows. For $t = 1,2,...$ repeat:\n\n- Find the next sampling point $\\mathbf{x}_{t}$ by optimizing the acquisition function over the GP: $\\mathbf{x}_t = \\mathrm{argmax}_{\\mathbf{x}} u(\\mathbf{x} \\lvert \\mathcal{D}_{1:t-1})$\n- Obtain a possibly noisy sample $y_t = f(\\mathbf{x}_t) + \\epsilon_t$ from the objective function $f$.\n- Add the sample to previous samples $\\mathcal{D}_{1:t} = \\{\\mathcal{D}_{1:t-1}, (\\mathbf{x}_t,y_t)\\}$ and update the GP.\n\n### Expected improvement\n\nExpected improvement is defined as\n\n$$\\mathrm{EI}(\\mathbf{x}) = \\mathbb{E}\\max(f(\\mathbf{x}) - f(\\mathbf{x}^+), 0)\\tag{1}$$\n\nwhere $f(\\mathbf{x}^+)$ is the value of the best sample so far and $\\mathbf{x}^+$ is the location of that sample i.e. $\\mathbf{x}^+ = \\mathrm{argmax}_{\\mathbf{x}_i \\in \\mathbf{x}_{1:t}} f(\\mathbf{x}_i)$. The expected improvement can be evaluated analytically under the GP model<sup>[3]</sup>:\n\n$$\n\\mathrm{EI}(\\mathbf{x}) =\n\\begin{cases}\n(\\mu(\\mathbf{x}) - f(\\mathbf{x}^+) - \\xi)\\Phi(Z) + \\sigma(\\mathbf{x})\\phi(Z) &\\text{if}\\ \\sigma(\\mathbf{x}) > 0 \\\\\n0 & \\text{if}\\ \\sigma(\\mathbf{x}) = 0\n\\end{cases}\\tag{2}\n$$\n\nwhere\n\n$$\nZ =\n\\begin{cases}\n\\frac{\\mu(\\mathbf{x}) - f(\\mathbf{x}^+) - \\xi}{\\sigma(\\mathbf{x})} &\\text{if}\\ \\sigma(\\mathbf{x}) > 0 \\\\\n0 & \\text{if}\\ \\sigma(\\mathbf{x}) = 0\n\\end{cases}\n$$\n\nwhere $\\mu(\\mathbf{x})$ and $\\sigma(\\mathbf{x})$ are the mean and the standard deviation of the GP posterior predictive at $\\mathbf{x}$, respectively. $\\Phi$ and $\\phi$ are the CDF and PDF of the standard normal distribution, respectively. The first summation term in Equation (2) is the exploitation term and the second summation term is the exploration term.\n\nParameter $\\xi$ in Equation (2) determines the amount of exploration during optimization and higher $\\xi$ values lead to more exploration. In other words, with increasing $\\xi$ values, the importance of improvements predicted by the GP posterior mean $\\mu(\\mathbf{x})$ decreases relative to the importance of potential improvements in regions of high prediction uncertainty, represented by large $\\sigma(\\mathbf{x})$ values. A recommended default value for $\\xi$ is $0.01$.\n\nWith this minimum of theory we can start implementing Bayesian optimization. The next section shows a basic implementation with plain NumPy and SciPy, later sections demonstrate how to use existing libraries. Finally, Bayesian optimization is used to tune the hyperparameters of a tree-based regression model.",
"_____no_output_____"
],
[
"## Implementation with NumPy and SciPy\n\nIn this section, we will implement the acquisition function and its optimization in plain NumPy and SciPy and use scikit-learn for the Gaussian process implementation. Although we have an analytical expression of the optimization objective `f` in the following example, we treat it as a black box and iteratively approximate it with a Gaussian process during Bayesian optimization. Furthermore, samples drawn from the objective function are noisy and the noise level is given by the `noise` variable. Optimization is done within given `bounds`. We also assume that there exist two initial samples in `X_init` and `Y_init`.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n%matplotlib inline\n\nbounds = np.array([[-1.0, 2.0]])\nnoise = 0.2\n\ndef f(X, noise=noise):\n return -np.sin(3*X) - X**2 + 0.7*X + noise * np.random.randn(*X.shape)\n\nX_init = np.array([[-0.9], [1.1]])\nY_init = f(X_init)",
"_____no_output_____"
]
],
[
[
"The following plot shows the noise-free objective function, the amount of noise by plotting a large number of samples and the two initial samples.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\n# Dense grid of points within bounds\nX = np.arange(bounds[:, 0], bounds[:, 1], 0.01).reshape(-1, 1)\n\n# Noise-free objective function values at X \nY = f(X,0)\n\n# Plot optimization objective with noise level \nplt.plot(X, Y, 'y--', lw=2, label='Noise-free objective')\nplt.plot(X, f(X), 'bx', lw=1, alpha=0.1, label='Noisy samples')\nplt.plot(X_init, Y_init, 'kx', mew=3, label='Initial samples')\nplt.legend();",
"_____no_output_____"
]
],
[
[
"The goal is to find the global optimum on the left in a small number of steps. The next step is to implement the acquisition function defined in Equation (2) as the `expected_improvement` function. ",
"_____no_output_____"
]
],
[
[
"from scipy.stats import norm\n\ndef expected_improvement(X, X_sample, Y_sample, gpr, xi=0.01):\n '''\n Computes the EI at points X based on existing samples X_sample\n and Y_sample using a Gaussian process surrogate model.\n \n Args:\n X: Points at which EI shall be computed (m x d).\n X_sample: Sample locations (n x d).\n Y_sample: Sample values (n x 1).\n gpr: A GaussianProcessRegressor fitted to samples.\n xi: Exploitation-exploration trade-off parameter.\n \n Returns:\n Expected improvements at points X.\n '''\n mu, sigma = gpr.predict(X, return_std=True)\n mu_sample = gpr.predict(X_sample)\n\n sigma = sigma.reshape(-1, X_sample.shape[1])\n \n # Needed for noise-based model,\n # otherwise use np.max(Y_sample).\n # See also section 2.4 in [...]\n mu_sample_opt = np.max(mu_sample)\n\n with np.errstate(divide='warn'):\n imp = mu - mu_sample_opt - xi\n Z = imp / sigma\n # Equation (2): Phi is the standard normal CDF, phi the PDF\n ei = imp * norm.cdf(Z) + sigma * norm.pdf(Z)\n ei[sigma == 0.0] = 0.0\n\n return ei",
"_____no_output_____"
]
],
[
[
"We also need a function that proposes the next sampling point by computing the location of the acquisition function maximum. Optimization is restarted `n_restarts` times to avoid local optima.",
"_____no_output_____"
]
],
[
[
"from scipy.optimize import minimize\n\ndef propose_location(acquisition, X_sample, Y_sample, gpr, bounds, n_restarts=25):\n '''\n Proposes the next sampling point by optimizing the acquisition function.\n \n Args:\n acquisition: Acquisition function.\n X_sample: Sample locations (n x d).\n Y_sample: Sample values (n x 1).\n gpr: A GaussianProcessRegressor fitted to samples.\n\n Returns:\n Location of the acquisition function maximum.\n '''\n dim = X_sample.shape[1]\n min_val = 1\n min_x = None\n \n def min_obj(X):\n # Minimization objective is the negative acquisition function\n return -acquisition(X.reshape(-1, dim), X_sample, Y_sample, gpr)\n \n # Find the best optimum by starting from n_restart different random points.\n for x0 in np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_restarts, dim)):\n res = minimize(min_obj, x0=x0, bounds=bounds, method='L-BFGS-B') \n if res.fun < min_val:\n min_val = res.fun[0]\n min_x = res.x \n \n return min_x.reshape(-1, 1)",
"_____no_output_____"
]
],
[
[
"Now we have all components needed to run Bayesian optimization with the [algorithm](#Optimization-algorithm) outlined above. The Gaussian process in the following example is configured with a [Matérn kernel](http://scikit-learn.org/stable/modules/gaussian_process.html#matern-kernel) which is a generalization of the squared exponential kernel or RBF kernel. The known noise level is configured with the `alpha` parameter. \n\nBayesian optimization runs for 10 iterations. In each iteration, a row with two plots is produced. The left plot shows the noise-free objective function, the surrogate function which is the GP posterior predictive mean, the 95% confidence interval of the mean and the noisy samples obtained from the objective function so far. The right plot shows the acquisition function. The vertical dashed line in both plots shows the proposed sampling point for the next iteration which corresponds to the maximum of the acquisition function.",
"_____no_output_____"
]
],
[
[
"from sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import ConstantKernel, Matern\nfrom bayesian_optimization_util import plot_approximation, plot_acquisition\n\n# Gaussian process with Matérn kernel as surrogate model\nm52 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)\ngpr = GaussianProcessRegressor(kernel=m52, alpha=noise**2)\n\n# Initialize samples\nX_sample = X_init\nY_sample = Y_init\n\n# Number of iterations\nn_iter = 10\n\nplt.figure(figsize=(12, n_iter * 3))\nplt.subplots_adjust(hspace=0.4)\n\nfor i in range(n_iter):\n # Update Gaussian process with existing samples\n gpr.fit(X_sample, Y_sample)\n\n # Obtain next sampling point from the acquisition function (expected_improvement)\n X_next = propose_location(expected_improvement, X_sample, Y_sample, gpr, bounds)\n \n # Obtain next noisy sample from the objective function\n Y_next = f(X_next, noise)\n \n # Plot samples, surrogate function, noise-free objective and next sampling location\n plt.subplot(n_iter, 2, 2 * i + 1)\n plot_approximation(gpr, X, Y, X_sample, Y_sample, X_next, show_legend=i==0)\n plt.title(f'Iteration {i+1}')\n\n plt.subplot(n_iter, 2, 2 * i + 2)\n plot_acquisition(X, expected_improvement(X, X_sample, Y_sample, gpr), X_next, show_legend=i==0)\n \n # Add sample to previous samples\n X_sample = np.vstack((X_sample, X_next))\n Y_sample = np.vstack((Y_sample, Y_next))",
"_____no_output_____"
]
],
[
[
"Note how the two initial samples initially drive search into the direction of the local maximum on the right side but exploration allows the algorithm to escape from that local optimum and find the global optimum on the left side. Also note how sampling point proposals often fall within regions of high uncertainty (exploration) and are not only driven by the highest surrogate function values (exploitation).\n\nA convergence plot reveals how many iterations are needed the find a maximum and if the sampling point proposals stay around that maximum i.e. converge to small proposal differences between consecutive steps. ",
"_____no_output_____"
]
],
[
[
"from bayesian_optimization_util import plot_convergence\n\nplot_convergence(X_sample, Y_sample)",
"_____no_output_____"
]
],
[
[
"## Bayesian optimization libraries\n\nThere are numerous Bayesian optimization libraries out there and giving a comprehensive overview is not the goal of this article. Instead, I'll pick two that I used in the past and show the minimum setup needed to get the previous example running.\n\n### Scikit-optimize\n\n[Scikit-optimize](https://scikit-optimize.github.io/) is a library for sequential model-based optimization that is based on [scikit-learn](http://scikit-learn.org/). It also supports Bayesian optimization using Gaussian processes. The API is designed around minimization, hence, we have to provide negative objective function values. The results obtained here slightly differ from previous results because of non-deterministic optimization behavior and different noisy samples drawn from the objective function.",
"_____no_output_____"
]
],
[
[
"from sklearn.base import clone\nfrom skopt import gp_minimize\nfrom skopt.learning import GaussianProcessRegressor\nfrom skopt.learning.gaussian_process.kernels import ConstantKernel, Matern\n\n# Use custom kernel and estimator to match previous example\nm52 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)\ngpr = GaussianProcessRegressor(kernel=m52, alpha=noise**2)\n\nr = gp_minimize(lambda x: -f(np.array(x))[0], \n bounds.tolist(),\n base_estimator=gpr,\n acq_func='EI', # expected improvement\n xi=0.01, # exploitation-exploration trade-off\n n_calls=10, # number of iterations\n n_random_starts=0, # initial samples are provided\n x0=X_init.tolist(), # initial samples\n y0=-Y_init.ravel())\n\n# Fit GP model to samples for plotting results\ngpr.fit(r.x_iters, -r.func_vals)\n\n# Plot the fitted model and the noisy samples\nplot_approximation(gpr, X, Y, r.x_iters, -r.func_vals, show_legend=True)",
"_____no_output_____"
],
[
"plot_convergence(np.array(r.x_iters), -r.func_vals)",
"_____no_output_____"
]
],
[
[
"## GPyOpt\n\n[GPyOpt](http://sheffieldml.github.io/GPyOpt/) is a Bayesian optimization library based on [GPy](https://sheffieldml.github.io/GPy/). The abstraction level of the API is comparable to that of scikit-optimize. The `BayesianOptimization` API provides a `maximize` parameter to configure whether the objective function shall be maximized or minimized (default). In version 1.2.1, this seems to be ignored when providing initial samples, so we have to negate their target values manually in the following example. Also, the built-in `plot_acquisition` and `plot_convergence` methods display the minimization result in any case. Again, the results obtained here slightly differ from previous results because of non-deterministic optimization behavior and different noisy samples drawn from the objective function. ",
"_____no_output_____"
]
],
[
[
"import GPy\nimport GPyOpt\n\nfrom GPyOpt.methods import BayesianOptimization\n\nkernel = GPy.kern.Matern52(input_dim=1, variance=1.0, lengthscale=1.0)\nbds = [{'name': 'X', 'type': 'continuous', 'domain': bounds.ravel()}]\n\noptimizer = BayesianOptimization(f=f, \n domain=bds,\n model_type='GP',\n kernel=kernel,\n acquisition_type ='EI',\n acquisition_jitter = 0.01,\n X=X_init,\n Y=-Y_init,\n noise_var = noise**2,\n exact_feval=False,\n normalize_Y=False,\n maximize=True)\n\noptimizer.run_optimization(max_iter=10)\noptimizer.plot_acquisition()",
"_____no_output_____"
],
[
"optimizer.plot_convergence()",
"_____no_output_____"
]
],
[
[
"## Application\n\nThis section demonstrates how to optimize the hyperparameters of an `XGBRegressor` with GPyOpt and how Bayesian optimization performance compares to random search. `XGBRegressor` is part of [XGBoost](https://xgboost.readthedocs.io/), a flexible and scalable gradient boosting library. `XGBRegressor` implements the scikit-learn estimator API and can be applied to regression problems. Regression is performed on a small [toy dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes) that is part of scikit-learn.",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets\nfrom sklearn.model_selection import RandomizedSearchCV, cross_val_score\n\nfrom scipy.stats import uniform\nfrom xgboost import XGBRegressor\n\n# Load the diabetes dataset (for regression)\nX, Y = datasets.load_diabetes(return_X_y=True)\n\n# Instantiate an XGBRegressor with default hyperparameter settings\nxgb = XGBRegressor()\n\n# and compute a baseline to beat with hyperparameter optimization \nbaseline = cross_val_score(xgb, X, Y, scoring='neg_mean_squared_error').mean()",
"_____no_output_____"
]
],
[
[
"### Hyperparameter tuning with random search\n\nFor hyperparameter tuning with random search, we use `RandomSearchCV` of scikit-learn and compute a cross-validation score for each randomly selected point in hyperparameter space. Results will be discussed below.",
"_____no_output_____"
]
],
[
[
"# Hyperparameters to tune and their ranges\nparam_dist = {\"learning_rate\": uniform(0, 1),\n \"gamma\": uniform(0, 5),\n \"max_depth\": range(1,50),\n \"n_estimators\": range(1,300),\n \"min_child_weight\": range(1,10)}\n\nrs = RandomizedSearchCV(xgb, param_distributions=param_dist, \n scoring='neg_mean_squared_error', n_iter=25)\n\n# Run random search for 25 iterations\nrs.fit(X, Y);",
"_____no_output_____"
]
],
[
[
"### Hyperparameter tuning with Bayesian optimization\n\nTo tune hyperparameters with Bayesian optimization we implement an objective function `cv_score` that takes hyperparameters as input and returns a cross-validation score. Here, we assume that cross-validation at a given point in hyperparameter space is deterministic and therefore set the `exact_feval` parameter of `BayesianOptimization` to `True`. Depending on model fitting and cross-validation details this might not be the case but we ignore that here.",
"_____no_output_____"
]
],
[
[
"bds = [{'name': 'learning_rate', 'type': 'continuous', 'domain': (0, 1)},\n {'name': 'gamma', 'type': 'continuous', 'domain': (0, 5)},\n {'name': 'max_depth', 'type': 'discrete', 'domain': (1, 50)},\n {'name': 'n_estimators', 'type': 'discrete', 'domain': (1, 300)},\n {'name': 'min_child_weight', 'type': 'discrete', 'domain': (1, 10)}]\n\n# Optimization objective \ndef cv_score(parameters):\n parameters = parameters[0]\n score = cross_val_score(\n XGBRegressor(learning_rate=parameters[0],\n gamma=int(parameters[1]),\n max_depth=int(parameters[2]),\n n_estimators=int(parameters[3]),\n min_child_weight = parameters[4]), \n X, Y, scoring='neg_mean_squared_error').mean()\n score = np.array(score)\n return score\n\noptimizer = BayesianOptimization(f=cv_score, \n domain=bds,\n model_type='GP',\n acquisition_type ='EI',\n acquisition_jitter = 0.05,\n exact_feval=True, \n maximize=True)\n\n# Only 20 iterations because we have 5 initial random points\noptimizer.run_optimization(max_iter=20)",
"_____no_output_____"
]
],
[
[
"### Results\n\nOn average, Bayesian optimization finds a better optimium in a smaller number of steps than random search and beats the baseline in almost every run. This trend becomes even more prominent in higher-dimensional search spaces. Here, the search space is 5-dimensional which is rather low to substantially profit from Bayesian optimization. One advantage of random search is that it is trivial to parallelize. Parallelization of Bayesian optimization is much harder and subject to research (see \\[4\\], for example).",
"_____no_output_____"
]
],
[
[
"y_rs = np.maximum.accumulate(rs.cv_results_['mean_test_score'])\ny_bo = np.maximum.accumulate(-optimizer.Y).ravel()\n\nprint(f'Baseline neg. MSE = {baseline:.2f}')\nprint(f'Random search neg. MSE = {y_rs[-1]:.2f}')\nprint(f'Bayesian optimization neg. MSE = {y_bo[-1]:.2f}')\n\nplt.plot(y_rs, 'ro-', label='Random search')\nplt.plot(y_bo, 'bo-', label='Bayesian optimization')\nplt.xlabel('Iteration')\nplt.ylabel('Neg. MSE')\nplt.ylim(-5000, -3000)\nplt.title('Value of the best sampled CV score');\nplt.legend();",
"Baseline neg. MSE = -3498.95\nRandom search neg. MSE = -3678.77\nBayesian optimization neg. MSE = -3185.50\n"
]
],
[
[
"## References\n\n\\[1\\] Eric Brochu, Vlad M. Cora, Nando de Freitas, [A Tutorial on Bayesian Optimization of Expensive Cost Functions](https://arxiv.org/abs/1012.2599). \n\\[2\\] Jonas Mockus, [Application of Bayesian approach to numerical methods of global and stochastic optimization](https://link.springer.com/article/10.1007/BF01099263). \n\\[3\\] Donald R. JonesMatthias SchonlauWilliam J. Welch, [Efficient Global Optimization of Expensive Black-Box Functions](https://link.springer.com/article/10.1023/A:1008306431147). \n\\[4\\] Jialei Wang, Scott C. Clark, Eric Liu, Peter I. Frazier, [Parallel Bayesian Global Optimization of Expensive Functions](https://arxiv.org/abs/1602.05149). ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a7e03b2f4c04d5d7574ee019d7bad67763be092
| 1,630 |
ipynb
|
Jupyter Notebook
|
Demo/fonctions.ipynb
|
paulber/IntroductionAMatlab
|
37476ed0c2363c3cbfe71604cc524d1c19c9ffba
|
[
"MIT"
] | null | null | null |
Demo/fonctions.ipynb
|
paulber/IntroductionAMatlab
|
37476ed0c2363c3cbfe71604cc524d1c19c9ffba
|
[
"MIT"
] | null | null | null |
Demo/fonctions.ipynb
|
paulber/IntroductionAMatlab
|
37476ed0c2363c3cbfe71604cc524d1c19c9ffba
|
[
"MIT"
] | null | null | null | 18.314607 | 91 | 0.488957 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a7e0e3e9b3d6962f265313e65da952c4110c713
| 14,484 |
ipynb
|
Jupyter Notebook
|
simple-ga.ipynb
|
romanak/swarm-intelligence
|
c76e00542be3448b2068ff2f483592561872783c
|
[
"MIT"
] | null | null | null |
simple-ga.ipynb
|
romanak/swarm-intelligence
|
c76e00542be3448b2068ff2f483592561872783c
|
[
"MIT"
] | null | null | null |
simple-ga.ipynb
|
romanak/swarm-intelligence
|
c76e00542be3448b2068ff2f483592561872783c
|
[
"MIT"
] | null | null | null | 25.321678 | 118 | 0.508354 |
[
[
[
"# Simple genetic algorithm",
"_____no_output_____"
],
[
"## Step-by-step implementation",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# initiate random number generator\nseed = 1\nrng = np.random.default_rng(seed)\n\n# population number\npopulation_size = 4",
"_____no_output_____"
],
[
"# initialize the population\npopulation = list()\nfor i in range(population_size):\n gene = rng.integers(low=0, high=2, size=5, dtype=np.uint8)\n population.append(gene)\n\npopulation",
"_____no_output_____"
],
[
"# gene decoding function\ndef gene_decode(gene):\n n = gene.shape[0]\n b = 2**np.arange(n)\n x = np.sum(b * np.flip(gene))\n return x",
"_____no_output_____"
],
[
"# decode the genotype\ngenotype_decode = [gene_decode(g) for g in population]\n\ngenotype_decode",
"_____no_output_____"
],
[
"# calculate fitness for each individual\nfitness = np.square(genotype_decode)\n\nfitness",
"_____no_output_____"
],
[
"# calculate the probability that an individual will be chosen to become a parent\nparenting_probability = fitness/np.sum(fitness)\n\nparenting_probability",
"_____no_output_____"
],
[
"# calculate the expected number of copies of each individual after selection\nexpected_count = fitness/np.mean(fitness)\n\nexpected_count",
"_____no_output_____"
],
[
"# calculate the actual number of copies in the mating pool\nactual_count = np.around(expected_count, decimals=0).astype(int)\nwhile sum(actual_count) < population_size:\n actual_count[np.argmax(expected_count)] += 1\n\nactual_count",
"_____no_output_____"
],
[
"# form the mating pool\nmating_pool = list()\nfor i, count in enumerate(actual_count):\n for j in range(count):\n mating_pool.append(population[i])\n\nmating_pool",
"_____no_output_____"
],
[
"# form pairs at random\narranged_mates = list(rng.permutation(mating_pool))\nformed_pairs = [(arranged_mates[i], arranged_mates[i+1]) for i in range(len(arranged_mates)) if i%2 == 0]\n\nformed_pairs",
"_____no_output_____"
],
[
"# select the crossover point at random\nchildren = list()\nfor pair in formed_pairs:\n xover = rng.integers(1, 5)\n print(xover)\n child1 = np.concatenate((pair[0][:xover], pair[1][xover:]))\n child2 = np.concatenate((pair[1][:xover], pair[0][xover:]))\n children.append(child1)\n children.append(child2)\n\nchildren",
"2\n2\n"
],
[
"# mutate the genes with mutation rate 0.001\nmutation_rate = 0.001\nfor child in children:\n for i, gene in enumerate(child):\n if rng.uniform() < mutation_rate:\n print('Flipped in', child, i)\n if gene == 0:\n child[i] = 1\n else:\n child[i] = 0\n\nchildren",
"_____no_output_____"
],
[
"# replace the population with descendants\npopulation = children\n\npopulation",
"_____no_output_____"
]
],
[
[
"## Putting it all together",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# initiate random number generator\nseed = 1\nrng = np.random.default_rng(seed)\n\n# population number\npopulation_size = 4\n\noverall_fitness = list()\nmaximum_fitness = list()\n\n# initialize the population\npopulation = list()\nfor i in range(population_size):\n gene = rng.integers(low=0, high=2, size=5, dtype=np.uint8)\n population.append(gene)",
"_____no_output_____"
],
[
"def sga(population): \n # gene decoding function\n def gene_decode(gene):\n n = gene.shape[0]\n b = 2**np.arange(n)\n x = np.sum(b * np.flip(gene))\n return x\n\n # decode the genotype\n genotype_decode = [gene_decode(g) for g in population]\n\n # calculate fitness for each individual\n fitness = np.square(genotype_decode)\n\n # calculate the probability that an individual will be chosen to become a parent\n parenting_probability = fitness/np.sum(fitness)\n\n # calculate the expected number of copies of each individual after selection\n expected_count = fitness/np.mean(fitness)\n\n # calculate the actual number of copies in the mating pool\n actual_count = np.around(expected_count, decimals=0).astype(int)\n while sum(actual_count) < population_size:\n actual_count[np.argmax(expected_count)] += 1\n\n # form the mating pool\n mating_pool = list()\n for i, count in enumerate(actual_count):\n for j in range(count):\n mating_pool.append(population[i])\n\n # form pairs at random\n arranged_mates = list(rng.permutation(mating_pool))\n formed_pairs = [(arranged_mates[i], arranged_mates[i+1]) for i in range(len(arranged_mates)) if i%2 == 0]\n\n # select the crossover point at random\n children = list()\n for pair in formed_pairs:\n xover = rng.integers(1, 5)\n child1 = np.concatenate((pair[0][:xover], pair[1][xover:]))\n child2 = np.concatenate((pair[1][:xover], pair[0][xover:]))\n children.append(child1)\n children.append(child2)\n\n # mutate the genes with mutation rate 0.001\n mutation_rate = 0.001\n for child in children:\n for i, gene in enumerate(child):\n if rng.uniform() < mutation_rate:\n if gene == 0:\n child[i] = 1\n else:\n child[i] = 0\n\n # replace the population with descendants\n population = children\n\n # decode the new genotype\n genotype_decode = [gene_decode(g) for g in population]\n\n # calculate fitness for each new individual\n fitness = np.square(genotype_decode)\n\n return (population, fitness)",
"_____no_output_____"
],
[
"for i in range(10):\n population, fitness = sga(population)\n overall_fitness.append(fitness.sum())\n maximum_fitness.append(fitness.max())",
"_____no_output_____"
],
[
"overall_fitness",
"_____no_output_____"
],
[
"maximum_fitness",
"_____no_output_____"
],
[
"population",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7e15e10293dfb8180fb538f527eb8d547b1f11
| 188,902 |
ipynb
|
Jupyter Notebook
|
tutmom-master/intro.ipynb
|
sunny2309/scipy_conf_notebooks
|
30a85d5137db95e01461ad21519bc1bdf294044b
|
[
"MIT"
] | 2 |
2021-01-09T15:57:26.000Z
|
2021-11-29T01:44:21.000Z
|
tutmom-master/intro.ipynb
|
sunny2309/scipy_conf_notebooks
|
30a85d5137db95e01461ad21519bc1bdf294044b
|
[
"MIT"
] | 5 |
2019-11-15T02:00:26.000Z
|
2021-01-06T04:26:40.000Z
|
tutmom-master/intro.ipynb
|
sunny2309/scipy_conf_notebooks
|
30a85d5137db95e01461ad21519bc1bdf294044b
|
[
"MIT"
] | null | null | null | 104.250552 | 45,402 | 0.802411 |
[
[
[
"# Introduction to optimization",
"_____no_output_____"
],
[
"The basic components",
"_____no_output_____"
],
[
"* The objective function (also called the 'cost' function)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nobjective = np.poly1d([1.3, 4.0, 0.6])\nprint(objective)",
" 2\n1.3 x + 4 x + 0.6\n"
]
],
[
[
"* The \"optimizer\"",
"_____no_output_____"
]
],
[
[
"import scipy.optimize as opt\nx_ = opt.fmin(objective, [3])\nprint(\"solved: x={}\".format(x_))",
"Optimization terminated successfully.\n Current function value: -2.476923\n Iterations: 20\n Function evaluations: 40\nsolved: x=[-1.53845215]\n"
],
[
"%matplotlib notebook",
"_____no_output_____"
],
[
"x = np.linspace(-4,1,101)\nimport matplotlib.pylab as mpl\nmpl.plot(x, objective(x))\nmpl.plot(x_, objective(x_), 'ro')",
"_____no_output_____"
]
],
[
[
"Additional components",
"_____no_output_____"
],
[
"* \"Box\" constraints",
"_____no_output_____"
]
],
[
[
"import scipy.special as ss\nimport scipy.optimize as opt\nimport numpy as np\nimport matplotlib.pylab as mpl\n\nx = np.linspace(2, 7, 200)\n\n# 1st order Bessel\nj1x = ss.j1(x)\nmpl.plot(x, j1x)\n\n# use scipy.optimize's more modern \"results object\" interface\nresult = opt.minimize_scalar(ss.j1, method=\"bounded\", bounds=[2, 4])\n\nj1_min = ss.j1(result.x)\nmpl.plot(result.x, j1_min,'ro')",
"_____no_output_____"
]
],
[
[
"* The gradient and/or hessian",
"_____no_output_____"
]
],
[
[
"import mystic.models as models\nprint(models.rosen.__doc__)",
"evaluates an N-dimensional Rosenbrock saddle for a list of coeffs\n\nf(x) = \\sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)\n\nInspect with mystic_model_plotter using::\n mystic.models.rosen -b \"-3:3:.1, -1:5:.1, 1\" -d -x 1\n\nThe minimum is f(x)=0.0 at x_i=1.0 for all i\n"
],
[
"import mystic\nmystic.model_plotter(mystic.models.rosen, kwds='-f -d -x 1 -b \"-3:3:.1, -1:5:.1, 1\"')",
"_____no_output_____"
],
[
"import scipy.optimize as opt\nimport numpy as np\n\n# initial guess\nx0 = [1.3, 1.6, -0.5, -1.8, 0.8]\n\nresult = opt.minimize(opt.rosen, x0)\nprint(result.x)\n\n# number of function evaluations\nprint(result.nfev)\n\n# again, but this time provide the derivative\nresult = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)\nprint(result.x)\n\n# number of function evaluations and derivative evaluations\nprint(result.nfev, result.njev)\nprint('')\n\n# however, note for a different x0...\nfor i in range(5):\n x0 = np.random.randint(-20,20,5)\n result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)\n print(\"{} @ {} evals\".format(result.x, result.nfev))",
"[-0.9620502 0.9357378 0.88071063 0.77787245 0.60508554]\n385\n[-0.96205103 0.9357394 0.88071361 0.77787768 0.60509369]\n54 54\n\n[ 0.99999996 0.99999991 0.99999983 0.99999967 0.99999934] @ 100 evals\n[ 1. 1. 1. 1.00000001 1.00000001] @ 145 evals\n[ 1. 1. 1. 1.00000001 1.00000002] @ 61 evals\n[ 1.00000001 1.00000003 1.00000007 1.00000013 1.00000026] @ 122 evals\n[ 1.00000001 1.00000002 1.00000003 1.00000007 1.00000014] @ 120 evals\n"
]
],
[
[
"* The penalty functions\n\n$\\psi(x) = f(x) + k*p(x)$",
"_____no_output_____"
]
],
[
[
"# http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#tutorial-sqlsp\n'''\n Maximize: f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2\n \n Subject to: x0**3 - x1 == 0\n x1 >= 1\n'''\nimport numpy as np\n\ndef objective(x, sign=1.0):\n return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)\n\ndef derivative(x, sign=1.0):\n dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)\n dfdx1 = sign*(2*x[0] - 4*x[1])\n return np.array([ dfdx0, dfdx1 ])\n\n# unconstrained\nresult = opt.minimize(objective, [-1.0,1.0], args=(-1.0,),\n jac=derivative, method='SLSQP', options={'disp': True})\nprint(\"unconstrained: {}\".format(result.x))\n\n\ncons = ({'type': 'eq',\n 'fun' : lambda x: np.array([x[0]**3 - x[1]]),\n 'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},\n {'type': 'ineq',\n 'fun' : lambda x: np.array([x[1] - 1]),\n 'jac' : lambda x: np.array([0.0, 1.0])})\n\n# constrained\nresult = opt.minimize(objective, [-1.0,1.0], args=(-1.0,), jac=derivative,\n constraints=cons, method='SLSQP', options={'disp': True})\n\nprint(\"constrained: {}\".format(result.x))",
"Optimization terminated successfully. (Exit mode 0)\n Current function value: -2.0\n Iterations: 4\n Function evaluations: 5\n Gradient evaluations: 4\nunconstrained: [ 2. 1.]\nOptimization terminated successfully. (Exit mode 0)\n Current function value: -1.0000001831052137\n Iterations: 9\n Function evaluations: 14\n Gradient evaluations: 9\nconstrained: [ 1.00000009 1. ]\n"
]
],
[
[
"Optimizer classifications",
"_____no_output_____"
],
[
"* Constrained versus unconstrained (and importantly LP and QP)",
"_____no_output_____"
]
],
[
[
"# from scipy.optimize.minimize documentation\n'''\n **Unconstrained minimization**\n \n Method *Nelder-Mead* uses the Simplex algorithm [1]_, [2]_. This\n algorithm has been successful in many applications but other algorithms\n using the first and/or second derivatives information might be preferred\n for their better performances and robustness in general.\n \n Method *Powell* is a modification of Powell's method [3]_, [4]_ which\n is a conjugate direction method. It performs sequential one-dimensional\n minimizations along each vector of the directions set (`direc` field in\n `options` and `info`), which is updated at each iteration of the main\n minimization loop. The function need not be differentiable, and no\n derivatives are taken.\n \n Method *CG* uses a nonlinear conjugate gradient algorithm by Polak and\n Ribiere, a variant of the Fletcher-Reeves method described in [5]_ pp.\n 120-122. Only the first derivatives are used.\n \n Method *BFGS* uses the quasi-Newton method of Broyden, Fletcher,\n Goldfarb, and Shanno (BFGS) [5]_ pp. 136. It uses the first derivatives\n only. BFGS has proven good performance even for non-smooth\n optimizations. This method also returns an approximation of the Hessian\n inverse, stored as `hess_inv` in the OptimizeResult object.\n \n Method *Newton-CG* uses a Newton-CG algorithm [5]_ pp. 168 (also known\n as the truncated Newton method). It uses a CG method to the compute the\n search direction. See also *TNC* method for a box-constrained\n minimization with a similar algorithm.\n \n Method *Anneal* uses simulated annealing, which is a probabilistic\n metaheuristic algorithm for global optimization. It uses no derivative\n information from the function being optimized.\n \n Method *dogleg* uses the dog-leg trust-region algorithm [5]_\n for unconstrained minimization. This algorithm requires the gradient\n and Hessian; furthermore the Hessian is required to be positive definite.\n \n Method *trust-ncg* uses the Newton conjugate gradient trust-region\n algorithm [5]_ for unconstrained minimization. This algorithm requires\n the gradient and either the Hessian or a function that computes the\n product of the Hessian with a given vector.\n\n **Constrained minimization**\n \n Method *L-BFGS-B* uses the L-BFGS-B algorithm [6]_, [7]_ for bound\n constrained minimization.\n \n Method *TNC* uses a truncated Newton algorithm [5]_, [8]_ to minimize a\n function with variables subject to bounds. This algorithm uses\n gradient information; it is also called Newton Conjugate-Gradient. It\n differs from the *Newton-CG* method described above as it wraps a C\n implementation and allows each variable to be given upper and lower\n bounds.\n \n Method *COBYLA* uses the Constrained Optimization BY Linear\n Approximation (COBYLA) method [9]_, [10]_, [11]_. The algorithm is\n based on linear approximations to the objective function and each\n constraint. The method wraps a FORTRAN implementation of the algorithm.\n \n Method *SLSQP* uses Sequential Least SQuares Programming to minimize a\n function of several variables with any combination of bounds, equality\n and inequality constraints. The method wraps the SLSQP Optimization\n subroutine originally implemented by Dieter Kraft [12]_. Note that the\n wrapper handles infinite values in bounds by converting them into large\n floating values.\n'''",
"_____no_output_____"
]
],
[
[
"The typical optimization algorithm (local or global) is unconstrained. Constrained algorithms tend strongly to be local, and also often use LP/QP approximations. Hence, most optimization algorithms are good either for quick linear/quadratic approximation under some constraints, or are intended for nonlinear functions without constraints. Any information about the problem that impacts the potential solution can be seen as constraining information. Constraining information is typically applied as a penatly, or as a box constraint on an input. The user is thus typically forced to pick whether they want to apply constraints but treat the problem as a LP/QP approximation, or to ignore the constraining information in exchange for a nonliear solver.",
"_____no_output_____"
]
],
[
[
"import scipy.optimize as opt\n\n# constrained: linear (i.e. A*x + b)\nprint(opt.cobyla.fmin_cobyla)\nprint(opt.linprog)\n\n# constrained: quadratic programming (i.e. up to x**2)\nprint(opt.fmin_slsqp)",
"<function fmin_cobyla at 0x10dba79d8>\n<function linprog at 0x10dd1d730>\n<function fmin_slsqp at 0x10dba7bf8>\n"
],
[
"# http://cvxopt.org/examples/tutorial/lp.html\n'''\nminimize: f = 2*x0 + x1\n\nsubject to:\n -x0 + x1 <= 1\n x0 + x1 >= 2\n x1 >= 0\n x0 - 2*x1 <= 4\n'''\n\nimport cvxopt as cvx\nfrom cvxopt import solvers as cvx_solvers\n\nA = cvx.matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])\nb = cvx.matrix([ 1.0, -2.0, 0.0, 4.0 ])\ncost = cvx.matrix([ 2.0, 1.0 ])\nsol = cvx_solvers.lp(cost, A, b)\n\nprint(sol['x'])",
" pcost dcost gap pres dres k/t\n 0: 2.6471e+00 -7.0588e-01 2e+01 8e-01 2e+00 1e+00\n 1: 3.0726e+00 2.8437e+00 1e+00 1e-01 2e-01 3e-01\n 2: 2.4891e+00 2.4808e+00 1e-01 1e-02 2e-02 5e-02\n 3: 2.4999e+00 2.4998e+00 1e-03 1e-04 2e-04 5e-04\n 4: 2.5000e+00 2.5000e+00 1e-05 1e-06 2e-06 5e-06\n 5: 2.5000e+00 2.5000e+00 1e-07 1e-08 2e-08 5e-08\nOptimal solution found.\n[ 5.00e-01]\n[ 1.50e+00]\n\n"
],
[
"# http://cvxopt.org/examples/tutorial/qp.html\n'''\nminimize: f = 2*x1**2 + x2**2 + x1*x2 + x1 + x2\n\nsubject to:\n x1 >= 0\n x2 >= 0\n x1 + x2 == 1\n'''\n\nimport cvxopt as cvx\nfrom cvxopt import solvers as cvx_solvers\n\nQ = 2*cvx.matrix([ [2, .5], [.5, 1] ])\np = cvx.matrix([1.0, 1.0])\nG = cvx.matrix([[-1.0,0.0],[0.0,-1.0]])\nh = cvx.matrix([0.0,0.0])\nA = cvx.matrix([1.0, 1.0], (1,2))\nb = cvx.matrix(1.0)\nsol = cvx_solvers.qp(Q, p, G, h, A, b)\n\nprint(sol['x'])",
" pcost dcost gap pres dres\n 0: 1.8889e+00 7.7778e-01 1e+00 3e-16 2e+00\n 1: 1.8769e+00 1.8320e+00 4e-02 2e-16 6e-02\n 2: 1.8750e+00 1.8739e+00 1e-03 2e-16 5e-04\n 3: 1.8750e+00 1.8750e+00 1e-05 1e-16 5e-06\n 4: 1.8750e+00 1.8750e+00 1e-07 1e-16 5e-08\nOptimal solution found.\n[ 2.50e-01]\n[ 7.50e-01]\n\n"
]
],
[
[
"Notice how much nicer it is to see the optimizer \"trajectory\". Now, instead of a single number, we have the path the optimizer took in finding the solution. `scipy.optimize` has a version of this, with `options={'retall':True}`, which returns the solver trajectory.",
"_____no_output_____"
],
[
"**EXERCISE:** Solve the constrained programming problem by any of the means above.\n\nMinimize: f = -1*x[0] + 4*x[1]\n\nSubject to: <br>\n-3*x[0] + 1*x[1] <= 6 <br>\n1*x[0] + 2*x[1] <= 4 <br>\nx[1] >= -3 <br>\n\nwhere: -inf <= x[0] <= inf",
"_____no_output_____"
],
[
"* Local versus global",
"_____no_output_____"
]
],
[
[
"import scipy.optimize as opt\n\n# probabilstic solvers, that use random hopping/mutations\nprint(opt.differential_evolution)\nprint(opt.basinhopping)",
"<function differential_evolution at 0x10dd1dea0>\n<function basinhopping at 0x10dd10510>\n"
],
[
"import scipy.optimize as opt\n\n# bounds instead of an initial guess\nbounds = [(-10., 10)]*5\n\nfor i in range(10):\n result = opt.differential_evolution(opt.rosen, bounds)\n # result and number of function evaluations\n print(result.x, '@ {} evals'.format(result.nfev))",
"[ 1. 1. 1. 1. 1.] @ 45006 evals\n[-0.96205104 0.93573948 0.88071398 0.77787799 0.60509392] @ 5595 evals\n[ 1. 1. 1. 1. 1.] @ 41406 evals\n[ 1. 1. 1. 1. 1.] @ 42906 evals\n[ 1. 1. 1. 1. 1.] @ 43806 evals\n[ 1. 1. 1. 1. 1.] @ 44931 evals\n[ 1. 1. 1. 1. 1.] @ 42906 evals\n[ 1. 1. 1. 1. 1.] @ 44931 evals\n[ 1. 1. 1. 1. 1.] @ 43206 evals\n[ 1. 1. 1. 1. 1.] @ 42081 evals\n"
]
],
[
[
"Global optimizers tend to be much slower than local optimizers, and often use randomness to pick points within some box constraints instead of starting with an initial guess. The choice then is between algorithms that are non-deterministic and algorithms that are deterministic but depend very strongly on the selected starting point.\n\nLocal optimization algorithms have names like \"gradient descent\" and \"steepest descent\", while global optimizations tend to use things like \"stocastic\" and \"genetic\" algorithms.",
"_____no_output_____"
],
[
"* Not covered: other exotic types",
"_____no_output_____"
],
[
"Other important special cases:",
"_____no_output_____"
],
[
"* Least-squares fitting",
"_____no_output_____"
]
],
[
[
"import scipy.optimize as opt\nimport scipy.stats as stats\nimport numpy as np\n\n# Define the function to fit.\ndef function(x, a, b, f, phi):\n result = a * np.exp(-b * np.sin(f * x + phi))\n return result\n\n# Create a noisy data set around the actual parameters\ntrue_params = [3, 2, 1, np.pi/4]\nprint(\"target parameters: {}\".format(true_params))\nx = np.linspace(0, 2*np.pi, 25)\nexact = function(x, *true_params)\nnoisy = exact + 0.3*stats.norm.rvs(size=len(x))\n\n# Use curve_fit to estimate the function parameters from the noisy data.\ninitial_guess = [1,1,1,1]\nestimated_params, err_est = opt.curve_fit(function, x, noisy, p0=initial_guess)\nprint(\"solved parameters: {}\".format(estimated_params))\n\n# err_est is an estimate of the covariance matrix of the estimates\nprint(\"covarance: {}\".format(err_est.diagonal()))\n\nimport matplotlib.pylab as mpl\nmpl.plot(x, noisy, 'ro')\nmpl.plot(x, function(x, *estimated_params)) ",
"target parameters: [3, 2, 1, 0.7853981633974483]\nsolved parameters: [ 2.90411388 2.04288133 1.00695954 0.74444564]\ncovarance: [ 0.04703666 0.00550453 0.00048609 0.00755947]\n"
]
],
[
[
"Least-squares tends to be chosen when the user wants a measure of the covariance, typically as an error estimate.",
"_____no_output_____"
],
[
"* Integer programming",
"_____no_output_____"
],
[
"Integer programming (IP) or Mixed-integer programming (MIP) requires special optimizers that only select parameter values from the set of integers. These optimizers are typically used for things like cryptography, or other optimizations over a discrete set of possible solutions.",
"_____no_output_____"
],
[
"Typical uses",
"_____no_output_____"
],
[
"* Function minimization",
"_____no_output_____"
],
[
"* Data fitting",
"_____no_output_____"
],
[
"* Root finding",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport scipy.optimize as opt\n\ndef system(x,a,b,c):\n x0, x1, x2 = x\n eqs= [\n 3 * x0 - np.cos(x1*x2) + a, # == 0\n x0**2 - 81*(x1+0.1)**2 + np.sin(x2) + b, # == 0\n np.exp(-x0*x1) + 20*x2 + c # == 0\n ]\n return eqs\n\n\n# coefficients\na = -0.5\nb = 1.06\nc = (10 * np.pi - 3.0) / 3\n\n# initial guess\nx0 = [0.1, 0.1, -0.1]\n\n# Solve the system of non-linear equations.\nresult = opt.root(system, x0, args=(a, b, c))\nprint(\"root:\", result.x)\nprint(\"solution:\", result.fun)",
"root: [ 5.00000000e-01 1.38102142e-13 -5.23598776e-01]\nsolution: [ 0.00000000e+00 -2.23110419e-12 7.46069873e-14]\n"
]
],
[
[
"* Parameter estimation",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport scipy.stats as stats\n\n# Create clean data.\nx = np.linspace(0, 4.0, 100)\ny = 1.5 * np.exp(-0.2 * x) + 0.3\n\n# Add a bit of noise.\nnoise = 0.1 * stats.norm.rvs(size=100)\nnoisy_y = y + noise\n\n# Fit noisy data with a linear model.\nlinear_coef = np.polyfit(x, noisy_y, 1)\nlinear_poly = np.poly1d(linear_coef)\nlinear_y = linear_poly(x)\n\n# Fit noisy data with a quadratic model.\nquad_coef = np.polyfit(x, noisy_y, 2)\nquad_poly = np.poly1d(quad_coef)\nquad_y = quad_poly(x)\n\nimport matplotlib.pylab as mpl\nmpl.plot(x, noisy_y, 'ro')\nmpl.plot(x, linear_y)\nmpl.plot(x, quad_y)\n#mpl.plot(x, y)",
"_____no_output_____"
]
],
[
[
"Standard diagnostic tools",
"_____no_output_____"
],
[
"* Eyeball the plotted solution against the objective",
"_____no_output_____"
],
[
"* Run several times and take the best result",
"_____no_output_____"
],
[
"* Analyze a log of intermediate results, per iteration",
"_____no_output_____"
],
[
"* Rare: look at the covariance matrix",
"_____no_output_____"
],
[
"* Issue: how can you really be sure you have the results you were looking for?",
"_____no_output_____"
],
[
"**EXERCISE:** Use any of the solvers we've seen thus far to find the minimum of the `zimmermann` function (i.e. use `mystic.models.zimmermann` as the objective). Use the bounds suggested below, if your choice of solver allows it.",
"_____no_output_____"
]
],
[
[
"import mystic.models as models\nprint(models.zimmermann.__doc__)",
"evaluates a Zimmermann function for a list of coeffs\n\nf(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3\n\nWhere:\nf_0(x) = 9 - x_0 - x_1\nwith for x_0 < 0:\np_0(x) = -100 * x_0\nand for x_1 < 0:\np_1(x) = -100 * x_1\nand for c_2(x) > 16 and c_3(x) > 14:\np_i(x) = 100 * c_i(x), with i = 2,3\nc_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2\nc_3(x) = x_0 * x_1\nOtherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.\n\nInspect with mystic_model_plotter using::\n mystic.models.zimmermann -b \"-5:10:.1, -5:10:.1\" -d -x 1\n\nThe minimum is f(x)=0.0 at x=(7.0,2.0)\n"
]
],
[
[
"**EXERCISE:** Do the same for the `fosc3d` function found at `mystic.models.fosc3d`, using the bounds suggested by the documentation, if your chosen solver accepts bounds or constraints.",
"_____no_output_____"
],
[
"More to ponder: what about high-dimenstional and nonlinear constraints?",
"_____no_output_____"
],
[
"Let's look at optimization \"redesigned\" in [mystic](mystic.ipynb)...",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a7e1b9677f55edb821c311d630328b58c0272b3
| 13,878 |
ipynb
|
Jupyter Notebook
|
A/Other_programming_languages.ipynb
|
AskerNC/lectures-2021
|
d152450b2fee7be775892dde1a467639aa5e35ea
|
[
"MIT"
] | 9 |
2020-11-30T22:25:38.000Z
|
2021-10-05T12:17:11.000Z
|
A/Other_programming_languages.ipynb
|
AskerNC/lectures-2021
|
d152450b2fee7be775892dde1a467639aa5e35ea
|
[
"MIT"
] | 1 |
2021-04-12T14:15:49.000Z
|
2021-04-12T15:03:55.000Z
|
A/Other_programming_languages.ipynb
|
AskerNC/lectures-2021
|
d152450b2fee7be775892dde1a467639aa5e35ea
|
[
"MIT"
] | 30 |
2021-02-08T16:18:01.000Z
|
2022-02-05T17:02:35.000Z
| 26.688462 | 308 | 0.557429 |
[
[
[
"# Other programming languages",
"_____no_output_____"
],
[
"**Today we talk about various programming languages:** If you have learned one programming language, it is easy to learn the next.\n\n**Different kinds** of programming languages:\n\n1. **Low-level, compiled (C/C++, Fortran):** You are in full control, but need to specify types, allocate memory and clean up after your-self\n2. **High-level, interpreted (MATLAB, Python, Julia, R):** Types are inferred, memory is allocated automatically, and there is automatic garbage collection",
"_____no_output_____"
],
[
"**Others:** \n\n1. **[Wolfram Mathematica](https://www.wolfram.com/mathematica/)**: A mathematical programming langauge. The inspiration for **sympy**.\n2. **[STATA](https://www.stata.com/)**: For many economists still the prefered statistical program, because it is so good at panel data and provides standard errors for a lot of the commonly used estimators. \n\n> **Note:** Data cleaning and structuring is increasingly done in **R** or **Python**, and **STATA** is then only used for estimation. ",
"_____no_output_____"
],
[
"**Comparison:** We solve the same Simulated Minimum Distance (SMD) problem in MATLAB, Python and Julia.",
"_____no_output_____"
],
[
"**Observations:**\n\n1. Any language can typically be used to solve a task. But some have a **comparative advantage**.\n2. If a **syntax** in a language irritates you, you will write worse code.\n3. A **community** in your field around a language is important.\n4. **No language is the best at everything**.",
"_____no_output_____"
],
[
"**Comparisons:**\n\n- Coleman et al. (2020): MATLAB, [Python and Julia: What to choose in economics?](https://lmaliar.ws.gc.cuny.edu/files/2019/01/CEPR-DP13210.pdf)\n- Fernández-Villaverde and Valencia (2019): [A Practical Guide to Parallization in Economics](https://www.sas.upenn.edu/~jesusfv/Guide_Parallel.pdf)\n",
"_____no_output_____"
],
[
"# High-level programming languages",
"_____no_output_____"
],
[
"## MATLAB",
"_____no_output_____"
],
[
"The **godfather** of high-level scientific programming. *The main source of inspiration for numpy and Julia*.\n\nThe **good** things:\n\n1. Full scientific programming langauge \n2. Especially good at optimization and (sparse) matrix algebra \n3. Well-developed interface (IDE) and debugger \n4. Integration with C++ through mex functions\n\nThe **bad** things:\n\n1. Not open source and costly outside of academia\n2. Not always easy to parallelize natively\n3. Not complete programming langauge\n4. Not in JupyterLab",
"_____no_output_____"
],
[
"**Download:** Available in the Absalon software library.\n\n**Example:** `SMD_MATLAB.mlx`\n\n**More:** \n\n1. **Mini-course in MATLAB:** See the folder `\\MATLAB_course`\n2. [NumPy for Matlab users](https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html)",
"_____no_output_____"
],
[
"## Python",
"_____no_output_____"
],
[
"The **swiss-knife** of programming languages.\n\nThe **good** things:\n\n1. Allround programming language\n2. Full scientific programming (numpy+scipy)\n3. Good at statistics (in particular data handling and machine learning)\n4. Just-in-time (jit) compilation availible (numba)\n4. Easy to integrate with C++ (ctypes, cffi)\n\nThe **bad** things:\n\n1. Messy package system at times\n2. Sometimes hard to jit-compile and parallelize",
"_____no_output_____"
],
[
"**Example:** `SMD_Python.ipynb`",
"_____no_output_____"
],
[
"## Julia",
"_____no_output_____"
],
[
"The **newcomer** of scientific programming languages.\n\n1. All-round programming language\n2. Automatic just-in-time compilation with native parallization - almost as fast as C++\n3. Focused on scientific computing and high performance computing\n\nThe **bad** things:\n\n1. Young language, with smallish, but growing, community\n2. Sometimes hard to ensure that the just-in-time compliation works efficiently",
"_____no_output_____"
],
[
"**Example:** `SMD_Julia.ipynb`",
"_____no_output_____"
],
[
"**Download Julia:**\n\n- [Open source version](https://julialang.org/downloads/)\n- [JuliaPro from Julia Computing (bundled with IDE and notebook support)](https://juliacomputing.com/products/juliapro)\n- [Documentation (language and about 1900 packages)](https://pkg.julialang.org/docs/)",
"_____no_output_____"
],
[
"**Julia community:**\n- [Discourse](https://discourse.julialang.org)\n- [Slack](https://julialang.slack.com)",
"_____no_output_____"
],
[
"For **introductory material on Julia for economists**, see [https://lectures.quantecon.org/jl/](https://lectures.quantecon.org/jl/).",
"_____no_output_____"
],
[
"## R",
"_____no_output_____"
],
[
"The **statistician favorite choice** of programming language.\n\n1. Great package system\n2. The best statistical packages\n3. Well-developed interface (IDE) (Rstudio) \n4. Easy to integrate with C++ (Rcpp)\n\nThe **bad** things:\n\n1. Not designed to be a scientific programming langauge\n2. Not a complete programming langauge\n\n**Download:** https://www.rstudio.com/",
"_____no_output_____"
],
[
"# Low-level programming languages",
"_____no_output_____"
],
[
"## Fortran",
"_____no_output_____"
],
[
"What I have nightmares about...\n\nIn the old days, it was a bit faster than C++. This is no longer true.",
"_____no_output_____"
],
[
"## C/C++",
"_____no_output_____"
],
[
"**The fastest you can get.** A very powerfull tool, but hard to learn, and impossible to master.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport ctypes as ct\nimport callcpp # local library",
"_____no_output_____"
],
[
"import psutil\nCPUs = psutil.cpu_count()\nCPUs_list = set(np.sort([1,2,4,*np.arange(8,CPUs+1,4)])) \nprint(f'this computer has {CPUs} CPUs')",
"this computer has 8 CPUs\n"
]
],
[
[
"## Calling C++ from Python",
"_____no_output_____"
],
[
"> **Note I:** This section can only be run on a Windows computer with the free **Microsoft Visual Studio 2017 Community Edition** ([download here](https://visualstudio.microsoft.com/downloads/)) installed.\n>\n> **Note II:** Learning C++ is somewhat hard. These [tutorials](http://www.cplusplus.com/doc/tutorial/) are helpful.\n\n\nPyton contains multiple ways of calling functions written in C++. Here I use **ctypes**.\n\n**C++ file:** example.cpp in the current folder.",
"_____no_output_____"
],
[
"**Step 1:** Compile C++ to a .dll file",
"_____no_output_____"
]
],
[
[
"callcpp.compile_cpp('example') # compiles example.cpp",
"cpp files compiled\n"
]
],
[
[
"> **Details:** Write a file called ``compile.bat`` and run it in a terminal under the hood.",
"_____no_output_____"
],
[
"**Step 2:** Link to .dll file",
"_____no_output_____"
]
],
[
[
"# funcs (list): list of functions with elements (functionname,[argtype1,argtype2,etc.])\nfuncs = [('myfun_cpp',[ct.POINTER(ct.c_double),ct.POINTER(ct.c_double),ct.POINTER(ct.c_double),\n ct.c_long,ct.c_long,ct.c_long])]\n\n# ct.POINTER(ct.c_double) to a double\n# ct.c_long interger\n\ncppfile = callcpp.link_cpp('example',funcs)",
"cpp files loaded\n"
]
],
[
[
"**Step 3:** Call function",
"_____no_output_____"
]
],
[
[
"def myfun_numpy_vec(x1,x2):\n y = np.empty((1,x1.size))\n I = x1 < 0.5\n y[I] = np.sum(np.exp(x2*x1[I]),axis=0)\n y[~I] = np.sum(np.log(x2*x1[~I]),axis=0)\n return y\n\n# setup\nx1 = np.random.uniform(size=10**6)\nx2 = np.random.uniform(size=np.int(100*CPUs/8)) # adjust the size of the problem\nx1_np = x1.reshape((1,x1.size))\nx2_np = x2.reshape((x2.size,1))\n\n# timing\n%timeit myfun_numpy_vec(x1_np,x2_np)",
"2.51 s ± 842 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
],
[
"def myfun_cpp(x1,x2,threads):\n y = np.empty(x1.size)\n p_x1 = np.ctypeslib.as_ctypes(x1) # pointer to x1\n p_x2 = np.ctypeslib.as_ctypes(x2) # pointer to x2\n p_y = np.ctypeslib.as_ctypes(y) # pointer to y\n cppfile.myfun_cpp(p_x1,p_x2,p_y,x1.size,x2.size,threads)\n return y\n\nassert np.allclose(myfun_numpy_vec(x1_np,x2_np),myfun_cpp(x1,x2,1))\nfor threads in CPUs_list:\n print(f'threads = {threads}')\n %timeit myfun_cpp(x1,x2,threads)\n print('')",
"threads = 8\n271 ms ± 7.85 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nthreads = 1\n1.05 s ± 60.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nthreads = 2\n668 ms ± 52.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nthreads = 4\n388 ms ± 9.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n"
]
],
[
[
"**Observation:** Compare with results in lecture 12. Numba is roughly as fast as C++ here (I get different results across different computers). In larger problems, C++ is usually faster, and while Numba is limited in terms of which Python and Numpy features it supports, everything can be coded in C++.",
"_____no_output_____"
],
[
"**Step 4:** Delink .dll file",
"_____no_output_____"
]
],
[
[
"callcpp.delink_cpp(cppfile,'example')",
"cpp files delinked\n"
]
],
[
[
"**More information:** See the folder \"Numba and C++\" in the [ConsumptionSavingNotebooks](https://github.com/NumEconCopenhagen/ConsumptionSavingNotebooks) repository. Incudes, an explanation on how to use the **NLopt optimizers** in C++.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a7e2311913308aa4f7c74f46416ef591b55f5bb
| 114,365 |
ipynb
|
Jupyter Notebook
|
src/src.ipynb
|
nidhintk/NeuralBasedDocumentSummarisation
|
c6a06364c54a8573cc243c762f514e283c1eac0f
|
[
"MIT"
] | null | null | null |
src/src.ipynb
|
nidhintk/NeuralBasedDocumentSummarisation
|
c6a06364c54a8573cc243c762f514e283c1eac0f
|
[
"MIT"
] | null | null | null |
src/src.ipynb
|
nidhintk/NeuralBasedDocumentSummarisation
|
c6a06364c54a8573cc243c762f514e283c1eac0f
|
[
"MIT"
] | null | null | null | 55.328979 | 601 | 0.551917 |
[
[
[
"import pickle\nimport codecs\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\nimport time\nfrom nltk.corpus import stopwords\nfrom os import listdir\nimport re",
"_____no_output_____"
],
[
"class BasePreprocessor:\n \"\"\"The abstract class for a preprocessor. You should subclass\n this and implement the methods actions and result, and possibly\n __init__, goal_test, and path_cost. Then you will create instances\n of your subclass and solve them with the various search functions.\"\"\"\n \n # List of contractions.\n CONTRACTION_LIST = {\n \"ain't\": \"is not\",\n \"aren't\": \"are not\",\n \"can't\": \"cannot\",\n \"can't've\": \"cannot have\",\n \"'cause\": \"because\",\n \"could've\": \"could have\",\n \"couldn't\": \"could not\",\n \"couldn't've\": \"could not have\",\n \"didn't\": \"did not\",\n \"doesn't\": \"does not\",\n \"don't\": \"do not\",\n \"hadn't\": \"had not\",\n \"hadn't've\": \"had not have\",\n \"hasn't\": \"has not\",\n \"haven't\": \"have not\",\n \"he'd\": \"he would\",\n \"he'd've\": \"he would have\",\n \"he'll\": \"he will\",\n \"he'll've\": \"he he will have\",\n \"he's\": \"he is\",\n \"how'd\": \"how did\",\n \"how'd'y\": \"how do you\",\n \"how'll\": \"how will\",\n \"how's\": \"how is\",\n \"I'd\": \"I would\",\n \"I'd've\": \"I would have\",\n \"I'll\": \"I will\",\n \"I'll've\": \"I will have\",\n \"I'm\": \"I am\",\n \"I've\": \"I have\",\n \"i'd\": \"i would\",\n \"i'd've\": \"i would have\",\n \"i'll\": \"i will\",\n \"i'll've\": \"i will have\",\n \"i'm\": \"i am\",\n \"i've\": \"i have\",\n \"isn't\": \"is not\",\n \"it'd\": \"it would\",\n \"it'd've\": \"it would have\",\n \"it'll\": \"it will\",\n \"it'll've\": \"it will have\",\n \"it's\": \"it is\",\n \"let's\": \"let us\",\n \"ma'am\": \"madam\",\n \"mayn't\": \"may not\",\n \"might've\": \"might have\",\n \"mightn't\": \"might not\",\n \"mightn't've\": \"might not have\",\n \"must've\": \"must have\",\n \"mustn't\": \"must not\",\n \"mustn't've\": \"must not have\",\n \"needn't\": \"need not\",\n \"needn't've\": \"need not have\",\n \"o'clock\": \"of the clock\",\n \"oughtn't\": \"ought not\",\n \"oughtn't've\": \"ought not have\",\n \"shan't\": \"shall not\",\n \"sha'n't\": \"shall not\",\n \"shan't've\": \"shall not have\",\n \"she'd\": \"she would\",\n \"she'd've\": \"she would have\",\n \"she'll\": \"she will\",\n \"she'll've\": \"she will have\",\n \"she's\": \"she is\",\n \"should've\": \"should have\",\n \"shouldn't\": \"should not\",\n \"shouldn't've\": \"should not have\",\n \"so've\": \"so have\",\n \"so's\": \"so as\",\n \"that'd\": \"that would\",\n \"that'd've\": \"that would have\",\n \"that's\": \"that is\",\n \"there'd\": \"there would\",\n \"there'd've\": \"there would have\",\n \"there's\": \"there is\",\n \"they'd\": \"they would\",\n \"they'd've\": \"they would have\",\n \"they'll\": \"they will\",\n \"they'll've\": \"they will have\",\n \"they're\": \"they are\",\n \"they've\": \"they have\",\n \"to've\": \"to have\",\n \"wasn't\": \"was not\",\n \"we'd\": \"we would\",\n \"we'd've\": \"we would have\",\n \"we'll\": \"we will\",\n \"we'll've\": \"we will have\",\n \"we're\": \"we are\",\n \"we've\": \"we have\",\n \"weren't\": \"were not\",\n \"what'll\": \"what will\",\n \"what'll've\": \"what will have\",\n \"what're\": \"what are\",\n \"what's\": \"what is\",\n \"what've\": \"what have\",\n \"when's\": \"when is\",\n \"when've\": \"when have\",\n \"where'd\": \"where did\",\n \"where's\": \"where is\",\n \"where've\": \"where have\",\n \"who'll\": \"who will\",\n \"who'll've\": \"who will have\",\n \"who's\": \"who is\",\n \"who've\": \"who have\",\n \"why's\": \"why is\",\n \"why've\": \"why have\",\n \"will've\": \"will have\",\n \"won't\": \"will not\",\n 
\"won't've\": \"will not have\",\n \"would've\": \"would have\",\n \"wouldn't\": \"would not\",\n \"wouldn't've\": \"would not have\",\n \"y'all\": \"you all\",\n \"y'all'd\": \"you all would\",\n \"y'all'd've\": \"you all would have\",\n \"y'all're\": \"you all are\",\n \"y'all've\": \"you all have\",\n \"you'd\": \"you would\",\n \"you'd've\": \"you would have\",\n \"you'll\": \"you will\",\n \"you'll've\": \"you will have\",\n \"you're\": \"you are\",\n \"you've\": \"you have\"\n }\n\n def __init__(self):\n \"\"\"The constructor. Your subclass's constructor can add\n other arguments.\"\"\"\n \n def cleanData(self, text, removeStopwords = True):\n \"\"\"\n This method is a standard implementation to clean any text that are\n passed in as parameter. Here the text is split into sentences and each\n sentence is in turn cleaned by invoking the cleanSentence() method.\n \n Any custom cleaning needs to be done at the subclass Preprocessor and\n the invoke this method.\n\n Parameters\n ----------\n text : string\n The text to be cleaned.\n\n Returns\n -------\n string\n The cleaned text.\n punctuationsToBeExcluded : list\n List of any particular punctuations to be ignored when cleaning \n the sentence.\n\n \"\"\"\n cleanedSentences = list()\n sentences = text.split('\\n')\n for sentence in sentences:\n # Cleaning the sentence here\n sentence = self.cleanSentence(sentence, removeStopwords)\n if len(sentence) > 0:\n cleanedSentences.append(sentence)\n return ' '.join(cleanedSentences).lower()\n \n def cleanSentence(self, sentence, removeStopwords):\n \"\"\"\n The method cleans a passed in sentence parameter by:\n i. removing all whitespace characters.\n ii. removing all punctuations.\n\n Parameters\n ----------\n sentence : string\n The sentence to be cleaned.\n\n Returns\n -------\n string\n The cleaned sentence.\n\n \"\"\"\n sentence = sentence.lower()\n sentence = self.fixContractions(sentence)\n sentence = self.removeUnwantedCharacters(sentence)\n if removeStopwords:\n sentence = self.removeStopWords(sentence)\n return sentence\n \n def fixContractions(self, text, contractionList=CONTRACTION_LIST):\n \"\"\"\n # Expands the contractions by finding a match in the Contraction list \n Regular expression pattern matching.\n\n Parameters\n ----------\n text : string\n The text where contractions need to be fixed.\n contraction_list : dictionary, optional\n The dictionary which tells the mapping for different types of \n contractions. 
The default is CONTRACTION_LIST.\n\n Returns\n -------\n string\n The expanded text.\n\n \"\"\"\n text = re.findall(r\"[\\w']+\", text)\n new_text = []\n for word in text:\n if word in contractionList:\n new_text.append(contractionList[word])\n else:\n new_text.append(word)\n return ' '.join(new_text)\n \n def removeUnwantedCharacters(self, text):\n \"\"\"\n Removes all unwanted characters from the text.\n This includes any URLs, HTML tags, punctuations, line breaks.\n\n Parameters\n ----------\n text : string\n The text that needs to be cleaned.\n\n Returns\n -------\n text : string\n The cleaned text.\n\n \"\"\"\n text = text.strip()\n text = re.sub(r'https?:\\/\\/.*[\\r\\n]*', '', text, flags=re.MULTILINE)# remove links\n text = re.sub(r'\\<a href', ' ', text)# remove html link tag\n text = re.sub(r'&', '', text) \n text = re.sub(r'[_\"\\-;%()|+&=*%.,!?:#$@\\[\\]/]', ' ', text)\n text = re.sub(r'<br />', ' ', text)\n text = re.sub(r'\\'', ' ', text)\n return text\n \n def removeStopWords(self, text):\n \"\"\"\n Removes the stop words.\n\n Parameters\n ----------\n text : string\n The text where the stop words need to be removed.\n\n Returns\n -------\n string\n The stop words removed text.\n\n \"\"\"\n text = text.split()\n stops = set(stopwords.words(\"english\"))\n text = [w for w in text if not w in stops]\n return ' '.join(text)",
"_____no_output_____"
],
[
"class CnnPreprocessor(BasePreprocessor):\n \"\"\"This is a preprocessor class which implements CNN dataset specific\n cleaning methods.\"\"\"\n\n def __init__(self):\n \"\"\"\n The constructor method to do any initial value setting.\n\n Returns\n -------\n CnnProcessor class object.\n\n \"\"\"\n super().__init__()\n \n def stripOffNewsSource(self, text):\n \"\"\"\n This method helps to strip off the news source from the text.\n\n Parameters\n ----------\n text : string\n The news text.\n\n Returns\n -------\n text : string\n The news text with any news source stripped off.\n\n \"\"\"\n closingBracketIndex = text.find(')')\n firstWord = ''\n if closingBracketIndex > -1:\n firstWordToBeExcluded = False\n countOfSpaceChar = 0\n for i in range(closingBracketIndex-1,-1,-1):\n if text[i] == ' ':\n if countOfSpaceChar < 4:\n countOfSpaceChar += 1\n continue\n else:\n firstWordToBeExcluded = False\n break\n elif text[i] == '(' and not firstWordToBeExcluded:\n countOfSpaceChar = 0\n firstWordToBeExcluded = True\n \n if firstWordToBeExcluded:\n firstWord = text[:closingBracketIndex + 1]\n text = text[len(firstWord):].strip()\n return text\n \n def cleanData(self, text, isSummary):\n \"\"\"\n This method helps to clean any text by calling the cleanData from the base\n class. \n \n The CNN dataset files can have the source of the news at the start of\n the file in brackets. It iss wise to remove this as part of the cleaning\n as this source name doesn't help with the actual summarisation task.\n Hence another method called stripOffNewsSource() is invoked before\n before calling the cleanData() method in the base class.\n \n Parameters\n ----------\n text : string\n The text to be cleaned.\n isSummary : boolean\n Denotes whether the text to be cleaned is actual News text or \n the summary.\n \n Returns\n -------\n string\n The cleaned text.\n\n \"\"\"\n # If the text is not a summary, then strip of the news source from\n # the text\n if not isSummary:\n text = self.stripOffNewsSource(text)\n \n # Invoking the standard cleanData method.\n return super().cleanData(text, not isSummary)",
"_____no_output_____"
],
[
"\"\"\"\nImplementation of base class for the data loader.\n\"\"\"\nclass DataLoader:\n \"\"\"\n Class to help with the loading of data\n \"\"\"\n \n def __init__(self, cleanDataOp):\n \"\"\"\n The constructor method to do any initial value setting.\n \n\n Returns\n -------\n DataLoader class object.\n\n \"\"\"\n self.cleanDataOp = cleanDataOp\n \n def loadSourceDocument(self, filePath):\n \"\"\"\n Loads the contents of a single source document\n\n Parameters\n ----------\n filePath : string\n The file path of the source document.\n\n Returns\n -------\n text : string\n The loaded text.\n\n \"\"\"\n file = open(filePath, encoding='utf-8')\n text = file.read()\n file.close()\n return text\n \n def loadSourceDocuments(self, sourceDirectoryPath, refreshSourceDocs):\n \"\"\"\n This method helps to load the source documents.\n\n Parameters\n ----------\n sourceDirectoryPath : string\n Directory path where the source files reside.\n refreshSourceDocs : bool\n If this parameter is true, all the source files are read fresh else\n already pickled file is loaded.\n\n Returns\n -------\n List of dictionaries holding the loaded text and summaries.\n\n \"\"\"\n all_text = {}\n all_text['Text'] = []\n all_text['Summary'] = []\n if refreshSourceDocs:\n fileIndex = 1\n for name in listdir(sourceDirectoryPath):\n if not name.startswith('._'):\n filePath = sourceDirectoryPath + '/' + name\n # load document\n doc = self.loadSourceDocument(filePath)\n text, summary = self.retrieveTextAndSummary(doc)\n all_text['Text'].append(self.cleanDataOp(text, False))\n all_text['Summary'].append(self.cleanDataOp(summary, True))\n print('Extracted and cleaned file number', fileIndex, '=>', name)\n fileIndex += 1\n return all_text\n \n def retrieveTextAndSummary(self, document):\n \"\"\"\n This method helps separate the actual text and summary from the whole\n CNN news document.\n\n Parameters\n ----------\n document : string\n The content of the news story file from which the actual text and\n summary needs to be separated.\n\n Returns\n -------\n string\n The text and a list of summaries.\n\n \"\"\"\n # All the summaries in the document are starting with the '@highlight'\n # phrase.\n textIndex = document.find('@highlight')\n \n # Splitting the actual text content and the summary lines\n text, summaries = document[:textIndex], document[textIndex:].split('@highlight')\n \n # Stripping all the whitespaces from each of the summary lines.\n summaries = [s.strip() for s in summaries if len(s) > 0]\n \n # Returning the actual text and the list of summaries\n return text, ' '.join(summaries)",
"_____no_output_____"
],
[
"\"\"\"\nImplementation of base class for the Word Embedding framework.\n\"\"\"\nclass WordEmbeddingBase:\n \"\"\"The base class for Word Embedding framework.\n \"\"\"\n \n def __init__(self, embeddingsDimension, specialTokens):\n \"\"\"The constructor. Your subclass's constructor can add\n other arguments.\n\n Returns\n -------\n WordEmbeddingBase object.\n\n \"\"\"\n self.embeddingsDimension = embeddingsDimension\n self.specialTokens = specialTokens\n \n def constructEmbeddingsIndex(self):\n \"\"\"\n The method to build the embedding index using the vector file\n\n Returns\n -------\n embedding_index : dictionary\n The word to vector data mapping.\n\n \"\"\"\n embeddingsIndex = {}\n with codecs.open(self.vectorFilePath, 'r', 'utf-8') as f:\n for i, line in enumerate(f):\n sr = line.split()\n word = sr[0]\n embedding = np.asarray(sr[1:], dtype='float32')\n embeddingsIndex[word] = embedding\n return embeddingsIndex\n \n def buildEmbeddingsVectorMatrix(self, wordToIntDict, embeddingsIndex):\n \"\"\"\n The method to build the embedding index using the vector file\n\n Parameters\n ----------\n embeddingDimension : number\n The dimension of embedding used.\n\n Returns\n -------\n embeddingMatrix : dictionary\n The mapping from integer representation of the word to the \n embedding vector.\n\n \"\"\"\n embeddingsMatrix = np.zeros((len(wordToIntDict), self.embeddingsDimension), dtype=np.float32)\n for word, i in wordToIntDict.items():\n embeddingsVector = embeddingsIndex.get(word)\n if embeddingsVector is not None:\n # words not found in embedding index will be all-zeros.\n embeddingsMatrix[i] = embeddingsVector\n else:\n randomGeneratedEmbeddingsVector = np.array(np.random.uniform(-1.0, 1.0, self.embeddingsDimension))\n embeddingsIndex[word] = randomGeneratedEmbeddingsVector\n embeddingsMatrix[i] = randomGeneratedEmbeddingsVector\n return embeddingsMatrix",
"_____no_output_____"
],
[
"\"\"\"\nImplementation of custom class for the Glove Word Embedding framework.\n\"\"\"\nclass GloveEmbedding(WordEmbeddingBase):\n \"\"\"The custom class for Glove Word Embedding framework.\n \"\"\"\n \n def __init__(self, embeddingsDimension, specialTokens):\n \"\"\"\n The constructor to do any initial value setting.\n\n Returns\n -------\n GloveEmbedding class object.\n\n \"\"\"\n self.vectorFilePath = 'embeddings/frameworks/glove.6B.50d.txt'\n super().__init__(embeddingsDimension, specialTokens)\n ",
"_____no_output_____"
],
[
"\"\"\"\nImplementation of custom class for the Conceptnet Numberbatch's Embedding framework.\n\"\"\"\nclass ConceptNetEmbedding(WordEmbeddingBase):\n \"\"\"The custom class for Coneptnet Numberbatch's Embedding framework.\n \"\"\"\n \n def __init__(self, embeddingsDimension, specialTokens):\n \"\"\"\n The constructor to do any initial value setting.\n\n Returns\n -------\n GloveEmbedding class object.\n\n \"\"\"\n self.vectorFilePath = 'embeddings/frameworks/numberbatch-en-19.08.txt'\n super().__init__(embeddingsDimension, specialTokens)",
"_____no_output_____"
],
[
"class Utils:\n \"\"\"A Utility class for some static helper methods\"\"\"\n \n @staticmethod\n def pickle(filename, contents):\n \"\"\"\n This method pickles the contents to a file\n\n Parameters\n ----------\n filename : string\n The pickle file location.\n contents : string\n The contents to be pickled.\n\n Returns\n -------\n None.\n\n \"\"\"\n file = open(filename, \"wb\")\n pickle.dump(contents, file)\n file.close()\n\n @staticmethod\n def unPickle(filename):\n \"\"\"\n This method loads the contents from a pickled file\n\n Parameters\n ----------\n filename : string\n The pickle file location.\n\n Returns\n -------\n The contents from a pickled file.\n\n \"\"\"\n file = open(filename,\"rb\")\n contents = pickle.load(file)\n file.close()\n return contents\n \n @staticmethod\n def countWords(wordsCountDict, text):\n \"\"\"\n This method returns a dictionary with the words to number of occurrences\n mapping.\n\n Parameters\n ----------\n wordsCountDict : dictionary\n Word to number of occurrences mapping.\n text : string\n The text.\n\n Returns\n -------\n None.\n\n \"\"\"\n for sentence in text:\n for word in sentence.split():\n if word not in wordsCountDict:\n wordsCountDict[word] = 1\n else:\n wordsCountDict[word] += 1\n \n @staticmethod\n def buildWordToNumberRepresentations(wordsCountDict, specialTokens, embeddingsIndex, thresholdForRareWordsCount):\n \"\"\"\n This method returns two dictionaries with a word to number mapping and another one with number to word \n mapping.\n\n Parameters\n ----------\n wordsCountDict : dictionary\n Word to number of occurrences mapping.\n specialTokens: dictionary\n Special tokens to number mapping\n embeddingsIndex: dictionary\n The dictionary which has the mapping from a word to corresponding embedding vector. This dictionary\n is normally constructed from a word embeddings vector file.\n thresholdForRareWordsCount : int\n Only those words with frequencies above this threshold are considered if they are not part of\n the embeddings index dictionary.\n\n Returns\n -------\n Two dictionaries:\n i. Word to Number mapping\n ii. Number to Word mapping\n\n \"\"\"\n wordToIntDict = {}\n intToWordDict = {}\n wordIndex = 0\n for word, count in wordsCountDict.items():\n if count >= thresholdForRareWordsCount or word in embeddingsIndex:\n wordToIntDict[word] = wordIndex\n intToWordDict[wordIndex] = word\n wordIndex += 1\n \n for token in specialTokens.values():\n wordToIntDict[token] = wordIndex\n intToWordDict[wordIndex] = token\n wordIndex += 1\n \n return wordToIntDict, intToWordDict\n \n @staticmethod\n def convertTextToNumberSequence(text, wordToIntDict, unknownToken, eosToken = None, applyEos = False):\n \"\"\"\n This method converts a text to a sequence of numbers based on the word to integer mapping dictionary.\n If a word does not exist in the word to integer mapping dictionary, a number representation of 'Unknown'\n special token is used instead.\n \n Parameters\n ----------\n wordToIntDict : dictionary\n Word to number of mapping.\n unknownToken: string\n The 'Unknown' specal token string.\n eosToken: number\n The 'End of Sequence' special token string.\n applyEos : boolean\n If true, at the end of the number sequence the number corresponding to 'End of Sequence' special token\n shall be appended. \n \n Returns\n -------\n i. The sequence of numbers\n ii. Total words count\n iii. 
Total unknown words count\n \"\"\"\n numberSequenceForText = []\n wordsCount = 0\n unknownWordsCount = 0\n for sentence in text:\n numberSequenceForSentence = []\n for word in sentence.split():\n wordsCount += 1\n if word in wordToIntDict:\n numberSequenceForSentence.append(wordToIntDict[word])\n else:\n numberSequenceForSentence.append(wordToIntDict[unknownToken])\n unknownWordsCount += 1\n \n if applyEos and eosToken is not None:\n numberSequenceForSentence.append(wordToIntDict[eosToken])\n numberSequenceForText.append(numberSequenceForSentence)\n return numberSequenceForText, wordsCount, unknownWordsCount\n \n @staticmethod \n def applyFilterAndSort(summariesAndTextZippedList, summaryAndTextAttributes):\n \"\"\"\n Filter method to filter out summary and text zipped entry based on maximum Summary Length, \n maximum Text length, unknown word limit in summaries and unknown word limit in text.\n \n Parameters\n ----------\n summariesAndTextZippedList: list\n List of zipped version of Summary and Text\n summaryAndTextAttributes : dictionary\n Carries:\n i. The maximum number of words allowed in a Summary\n ii. The maximum number of words allowed in a Text\n i. The minimum number of words required in a Summary\n ii. The minimum number of words required in a Text\n iii. The maximum number of unknown words allowed in a Summary\n iv. The maximum number of unknown words allowed in a Text\n \n Returns\n -------\n i. The sequence of numbers\n ii. Total words count\n iii. Total unknown words count\n \"\"\"\n maximumSummaryLength = summaryAndTextAttributes['maximumSummaryLength']\n maximumTextLength = summaryAndTextAttributes['maximumTextLength']\n minimumSummaryLength = summaryAndTextAttributes['minimumSummaryLength']\n minimumTextLength = summaryAndTextAttributes['minimumTextLength']\n unknownsInSummaryLimit = summaryAndTextAttributes['unknownsInSummaryLimit']\n unknownsInTextLimit = summaryAndTextAttributes['unknownsInTextLimit']\n unknownTokenNumberRepresentation = summaryAndTextAttributes['unknownTokenNumberRepresentation']\n \n def countUnknowns(sentence, unknownTokenNumberRepresentation):\n '''Counts the number of time UNK appears in a sentence.'''\n unknownsCount = 0\n for word in sentence:\n if word == unknownTokenNumberRepresentation:\n unknownsCount += 1\n return unknownsCount\n \n def filterCondition(item):\n \"\"\"\n Filters an item based on certain conditions.\n \"\"\"\n summarySeq = item[0]\n textSeq = item[1]\n if(len(summarySeq) <= maximumSummaryLength and\n len(textSeq) <= maximumTextLength and \n len(summarySeq) >= minimumSummaryLength and\n len(textSeq) >= minimumTextLength and\n countUnknowns(summarySeq, unknownTokenNumberRepresentation) <= unknownsInSummaryLimit and \n countUnknowns(textSeq, unknownTokenNumberRepresentation) <= unknownsInTextLimit):\n return True\n else:\n return False\n \n filteredSummariesAndText = list(filter(filterCondition, summariesAndTextZippedList))\n summariesAndTextSorted = sorted(filteredSummariesAndText, key=lambda entry: len(entry[1]))\n summariesAndTextSorted = list(zip(*summariesAndTextSorted))\n return list(summariesAndTextSorted[0]), list(summariesAndTextSorted[1])\n \n @staticmethod\n def computeSequenceLengthsIntoDataFrame(textToNumberSequences):\n '''Create a data frame of the sentence lengths from a text'''\n lengths = []\n for textToNumberSequence in textToNumberSequences:\n lengths.append(len(textToNumberSequence))\n return pd.DataFrame(lengths, columns=['counts'])",
"_____no_output_____"
],
[
"class Seq2SeqModel:\n \"\"\"\n The implementation for Sequence to sequence modelling\n \"\"\"\n def __init__(self):\n \"\"\"The constructor. Your subclass's constructor can add\n other arguments.\"\"\"\n \n def createModelInputsPlaceholders(self):\n inputData = tf.placeholder(tf.int32, [None, None], name='inputData')\n targetData = tf.placeholder(tf.int32, [None, None], name='targetData')\n learningRate = tf.placeholder(tf.float32, name='learningRate')\n dropoutRate = tf.placeholder(tf.float32, name='dropoutRate')\n inputSummaryLengths = tf.placeholder(tf.int32, (None,), name='inputSummaryLengths')\n maximumSummaryLength = tf.reduce_max(inputSummaryLengths, name='maximumSummaryLength')\n inputTextLengths = tf.placeholder(tf.int32, (None,), name='inputTextLengths')\n\n return inputData, targetData, learningRate, dropoutRate, inputSummaryLengths, maximumSummaryLength, inputTextLengths\n \n def createLSTMCell(self, rnnPerCellUnitsCount, requireDropoutLayer = False, dropoutRate = 0.95):\n # Creating the RNN cell\n cell = tf.contrib.rnn.LSTMCell(rnnPerCellUnitsCount,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n \n # Attaching a dropout layer for the cell if required\n if requireDropoutLayer:\n cell = tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob = dropoutRate)\n return cell\n \n def doEncoding(self, rnnPerCellUnitsCount, inputTextLengths, rnnCellsCount, embeddedEncoderInput, dropoutRate):\n \"\"\"\n This is the implementation of an encoding process.\n \"\"\"\n for rnnCellIndex in range(rnnCellsCount):\n with tf.variable_scope('encoder_{}'.format(rnnCellIndex)):\n # Creating the forward RNN cell for the Bi-directional RNN\n forwardCell = self.createLSTMCell(rnnPerCellUnitsCount, \n requireDropoutLayer = True, \n dropoutRate = dropoutRate)\n \n # Creating the backward RNN cell for the Bi-directional RNN\n backwardCell = self.createLSTMCell(rnnPerCellUnitsCount, \n requireDropoutLayer = True, \n dropoutRate = dropoutRate)\n \n # Connecting the forward and backward cells to create a Bi-directional RNN\n encoderOutput, encoderStates = tf.nn.bidirectional_dynamic_rnn(forwardCell, \n backwardCell, \n embeddedEncoderInput,\n inputTextLengths,\n dtype=tf.float32)\n encoderOutput = tf.concat(encoderOutput, 2)\n # The current layer's output is being fed into next layer's input\n embeddedEncoderInput = encoderOutput\n return encoderOutput, encoderStates\n \n def processDecoderInput(self, targetData, wordToIntDict, batchSize, startToken):\n \"\"\"\n Remove the last word id from each batch and concatenate the id of the STARTOFSEQUENCE to the \n begining of each batch.\n \"\"\"\n ending = tf.strided_slice(targetData, [0, 0], [batchSize, -1], [1, 1])\n decoderInput = tf.concat([tf.fill([batchSize, 1], wordToIntDict[startToken]), ending], 1)\n return decoderInput\n \n def processTrainingLayerForDecoder(self, embeddedDecoderInput, inputSummaryLengths, decoderCell,\n outputLayer, totalWordsCountInVocab, maximumSummaryLength,\n batchSize):\n \"\"\"\n This is the implementation for a Training decoding layer.\n \"\"\"\n trainingHelper = tf.contrib.seq2seq.TrainingHelper(inputs = embeddedDecoderInput,\n sequence_length = inputSummaryLengths,\n time_major = False)\n \n trainingDecoder = tf.contrib.seq2seq.BasicDecoder(cell = decoderCell,\n helper = trainingHelper,\n initial_state = decoderCell.zero_state(\n dtype=tf.float32, batch_size=batchSize),\n output_layer = outputLayer)\n \n trainingLogits = tf.contrib.seq2seq.dynamic_decode(trainingDecoder,\n output_time_major = False,\n 
impute_finished = True,\n maximum_iterations = maximumSummaryLength)\n return trainingLogits\n \n def processInferenceLayerForDecoder(self, embeddingsMatrix, startOfSequenceToken, endOfSequenceToken,\n decoderCell, outputLayer, maximumSummaryLength, batchSize):\n \"\"\"\n This is the implementation for an Inference decoding layer.\n \"\"\"\n startTokens = tf.tile(tf.constant([startOfSequenceToken], dtype=tf.int32), \n [batchSize], \n name='start_tokens')\n \n inferenceHelper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embeddingsMatrix,\n startTokens,\n endOfSequenceToken)\n \n inferenceDecoder = tf.contrib.seq2seq.BasicDecoder(decoderCell,\n inferenceHelper,\n decoderCell.zero_state(\n dtype=tf.float32, batch_size=batchSize),\n outputLayer)\n \n inferenceLogits = tf.contrib.seq2seq.dynamic_decode(inferenceDecoder,\n output_time_major=False,\n impute_finished=True,\n maximum_iterations=maximumSummaryLength)\n \n return inferenceLogits\n \n def doDecoding(self, embeddedDecoderInput, embeddingsMatrix, encoderOutput, encoderStates,\n totalWordsCountInVocab, inputTextLengths, inputSummaryLengths, maximumSummaryLength, \n rnnPerCellUnitsCount, wordToIntDict, dropoutRate, batchSize, rnnCellsCount, \n enableAttention = True):\n # Creating the RNN cell for the decoder\n decoderCell = tf.contrib.rnn.MultiRNNCell([self.createLSTMCell(rnnPerCellUnitsCount, requireDropoutLayer = True, dropoutRate = dropoutRate) for _ in range(rnnCellsCount)])\n\n # If an additional Attention layer needs to be applied\n if enableAttention:\n attentionMechanism = tf.contrib.seq2seq.BahdanauAttention(rnnPerCellUnitsCount,\n encoderOutput,\n inputTextLengths,\n normalize = False,\n name = 'BahdanauAttention')\n decoderCell = tf.contrib.seq2seq.AttentionWrapper(decoderCell, attentionMechanism, rnnPerCellUnitsCount)\n \n outputLayer = Dense(totalWordsCountInVocab, \n kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))\n with tf.variable_scope(\"decode\"):\n trainingLogits = self.processTrainingLayerForDecoder(embeddedDecoderInput,\n inputSummaryLengths,\n decoderCell,\n outputLayer,\n totalWordsCountInVocab,\n maximumSummaryLength,\n batchSize)\n with tf.variable_scope(\"decode\", reuse=True):\n inferenceLogits = self.processInferenceLayerForDecoder(embeddingsMatrix,\n wordToIntDict[embedding.specialTokens['STARTOFSEQUENCE']],\n wordToIntDict[embedding.specialTokens['ENDOFSEQUENCE']],\n decoderCell,\n outputLayer,\n maximumSummaryLength,\n batchSize)\n return trainingLogits, inferenceLogits\n \n def process(self, inputData, targetData, dropoutRate, inputTextLengths, inputSummaryLengths, \n maximumSummaryLength, totalWordsCountInVocab, rnnPerCellUnitsCount, \n rnnCellsCount, wordToIntDict, batchSize, embeddingsMatrix):\n \n # Performing parallel lookups of inputData on the embeddingMatrix\n embeddedEncoderInput = tf.nn.embedding_lookup(embeddingsMatrix, inputData)\n \n # Performing the encoding\n encoderOutput, encoderStates = self.doEncoding(rnnPerCellUnitsCount,\n inputTextLengths,\n rnnCellsCount,\n embeddedEncoderInput,\n dropoutRate)\n \n # Process the decoder input before passing to decoding layer\n decoderInput = self.processDecoderInput(targetData, \n wordToIntDict, \n batchSize, \n embedding.specialTokens['STARTOFSEQUENCE'])\n \n # Performing parallel lookups of decoder input on the embeddingMatrix\n embeddedDecoderInput = tf.nn.embedding_lookup(embeddingsMatrix, decoderInput)\n \n # Performing the encoding\n trainingLogits, inferenceLogits = self.doDecoding(embeddedDecoderInput,\n 
embeddingsMatrix,\n encoderOutput,\n encoderStates,\n totalWordsCountInVocab,\n inputTextLengths,\n inputSummaryLengths,\n maximumSummaryLength,\n rnnPerCellUnitsCount,\n wordToIntDict,\n dropoutRate,\n batchSize,\n rnnCellsCount)\n \n return trainingLogits, inferenceLogits",
"_____no_output_____"
],
[
"class BatchDataGenerator:\n \"\"\"\n A class which helps in the generation of batches of data\n \"\"\"\n @staticmethod\n def generateBatches(summaries, texts, batchSize, paddingToken):\n def padBatchContents(contents, paddingToken):\n maxContentLength = max([len(content) for content in contents])\n return [content + [paddingToken] * (maxContentLength - len(content)) for content in contents]\n possibleBatchCount = len(texts)//batchSize\n for batchIndex in range(0, possibleBatchCount):\n batchStartPoint = batchIndex * batchSize\n summariesBatch = summaries[batchStartPoint: batchStartPoint + batchSize]\n textBatch = texts[batchStartPoint: batchStartPoint + batchSize]\n paddedSummariesBatch = np.array(padBatchContents(summariesBatch, paddingToken))\n paddedTextBatch = np.array(padBatchContents(textBatch, paddingToken))\n \n # Need the lengths for the lengths parameters\n paddedSummariesLength = []\n for summary in paddedSummariesBatch:\n paddedSummariesLength.append(len(summary))\n\n paddedTextLength = []\n for text in paddedTextBatch:\n paddedTextLength.append(len(text))\n\n yield paddedSummariesBatch, paddedTextBatch, paddedSummariesLength, paddedTextLength",
"_____no_output_____"
],
[
"# load text\nsourceDirectoryPath = '../data/cnn/stories'\nrefreshSourceDocs = False\npickledFilePath = '../data/cnn_dataset.pkl'",
"_____no_output_____"
],
[
"if refreshSourceDocs:\n preprocessor = CnnPreprocessor()\n dataLoader = DataLoader(preprocessor.cleanData)\n loadedContent = dataLoader.loadSourceDocuments(sourceDirectoryPath, refreshSourceDocs)\n \n # save to file\n Utils.pickle(pickledFilePath, loadedContent)\n print('Pickled the cleaned data into the file:', pickledFilePath)\n\n# load from file\nnews = Utils.unPickle(pickledFilePath)\nprint('Loaded Texts %d' % len(news['Text']))",
"Loaded Texts 92579\n"
],
[
"cleanedText = news['Text']\ncleanedSummaries = news['Summary']",
"_____no_output_____"
],
[
"# Creating the word embedding class\nembeddingsDimension = 50\nspecialTokens = {\n 'UNKNOWN': '<UNK>',\n 'PADDING': '<PAD>',\n 'ENDOFSEQUENCE': '<EOS>',\n 'STARTOFSEQUENCE': '<GO>'\n}\nembedding = GloveEmbedding(embeddingsDimension, specialTokens)",
"_____no_output_____"
],
[
"# Creating a dictionary with word to frequency mapping\nwordsCountDict = {}\nUtils.countWords(wordsCountDict, cleanedText)\nUtils.countWords(wordsCountDict, cleanedSummaries)\nprint(\"Size of Vocabulary:\", len(wordsCountDict))",
"Size of Vocabulary: 238749\n"
],
[
"# Constructing a word embeddings index\n# This is simply a word to word vector mapping dictionary\nembeddingsIndex = embedding.constructEmbeddingsIndex()",
"_____no_output_____"
],
[
"print(len(embeddingsIndex))",
"400000\n"
],
[
"# This value defines the threshold of the minimum number of occurrences of an Unknown word for that word\n# to be included in the word to number representation dictionary.\nthresholdForRareWordsCount = 10\n\n# Building the word to number representation dictionary for representing a big text as a sequence of numbers\n# when passed to the RNN\n# Alone with this a number to word representation dictionary is also built which helps in the conversion of final\n# output of sequence of numbers to corresponding Predicted summary text.\nwordToIntDict, intToWordDict = Utils.buildWordToNumberRepresentations(\n wordsCountDict, embedding.specialTokens, embeddingsIndex, thresholdForRareWordsCount\n)",
"_____no_output_____"
],
[
"# Building the Embeddings vector matrix which is basically a two dimensional matrix with\n# Number of rows = number of words in wordtoIntDict above\n# Number of columns = the dimensionality of chosen word embedding framework (each word vector will be of this size)\n# Also if there are any unknown words in workToIntDict which are not there in the embeddingsIndex constructured from\n# the word embedding file, a random word vector shall be generated and inserted as a row in the \n# Embeddings vector matrix.\nembeddingsMatrix = embedding.buildEmbeddingsVectorMatrix(wordToIntDict, embeddingsIndex)\nprint('Total number of embeddings:', len(embeddingsMatrix))",
"Total number of embeddings: 158180\n"
],
[
"# Converting all the summaries to corresponding number sequences\nsummariesToNumberSequence, summaryWordsCount, summaryUnknownWordsCount = Utils.convertTextToNumberSequence(\n cleanedSummaries, \n wordToIntDict, \n embedding.specialTokens['UNKNOWN']\n)\n\n# Converting all the text to corresponding number sequences\ntextToNumberSequence, textWordsCount, textUnknownWordsCount = Utils.convertTextToNumberSequence(\n cleanedText, \n wordToIntDict, \n embedding.specialTokens['UNKNOWN'], \n eosToken = embedding.specialTokens['ENDOFSEQUENCE'],\n applyEos = True\n)\n\ntotalWordsCount = summaryWordsCount + textWordsCount\ntotalUnknownWordsCount = summaryUnknownWordsCount + textUnknownWordsCount\nunknownPercentage = round(totalUnknownWordsCount/totalWordsCount,4) * 100\n\nprint(\"Total number of words:\", totalWordsCount)\nprint(\"Total number of UNKs:\", totalUnknownWordsCount)\nprint(\"Percent of words that are UNK: {}%\".format(unknownPercentage))",
"Total number of words: 39010724\nTotal number of UNKs: 150476\nPercent of words that are UNK: 0.38999999999999996%\n"
],
[
"lengthSummaries = Utils.computeSequenceLengthsIntoDataFrame(summariesToNumberSequence)\nlengthText = Utils.computeSequenceLengthsIntoDataFrame(textToNumberSequence)\n\n# Inspect the length of texts\nprint(np.percentile(lengthText.counts, 70))\nprint(np.percentile(lengthText.counts, 90))\nprint(np.percentile(lengthText.counts, 95))\nprint(np.percentile(lengthText.counts, 99))\n\n# Inspect the length of summaries\nprint(np.percentile(lengthSummaries.counts, 70))\nprint(np.percentile(lengthSummaries.counts, 90))\nprint(np.percentile(lengthSummaries.counts, 95))\nprint(np.percentile(lengthSummaries.counts, 99.5))",
"464.0\n645.0\n748.0\n927.2200000000012\n49.0\n56.0\n59.0\n67.0\n"
],
[
"maximumTextLength = 464\nmaximumSummaryLength = 67\nminimumTextLength = 2\nminimumSummaryLength = 2\nunknownsInSummaryLimit = 4\nunknownsInTextLimit = 10\n \nsummariesAndTextSequence = list(zip(summariesToNumberSequence, textToNumberSequence))\nsortedSummaries, sortedText = Utils.applyFilterAndSort(summariesAndTextSequence, {\n 'maximumTextLength': maximumTextLength,\n 'maximumSummaryLength': maximumSummaryLength,\n 'minimumTextLength': minimumTextLength,\n 'minimumSummaryLength': minimumSummaryLength,\n 'unknownsInSummaryLimit': unknownsInSummaryLimit,\n 'unknownsInTextLimit': unknownsInTextLimit,\n 'unknownTokenNumberRepresentation': embedding.specialTokens['UNKNOWN']\n})\n\n# Compare lengths to ensure they match\nprint(len(sortedSummaries))\nprint(len(sortedText))",
"64603\n64603\n"
],
[
"Utils.pickle(\"../data/sorted_summaries.pkl\",sortedSummaries)\nUtils.pickle(\"../data/sorted_text.pkl\",sortedText)\nUtils.pickle(\"../data/embeddings_matrix.pkl\",embeddingsMatrix)\nUtils.pickle(\"../data/word_to_int.pkl\",wordToIntDict)\nUtils.pickle(\"../data/int_to_word.pkl\",intToWordDict)",
"_____no_output_____"
],
[
"sortedSummaries = Utils.unPickle(\"../data/sorted_summaries.pkl\")\nsortedText = Utils.unPickle(\"../data/sorted_text.pkl\")\nembeddingsMatrix = Utils.unPickle(\"../data/embeddings_matrix.pkl\")\nwordToIntDict = Utils.unPickle(\"../data/word_to_int.pkl\")\nintToWordDict = Utils.unPickle(\"../data/int_to_word.pkl\")",
"_____no_output_____"
],
[
"# Set the Hyperparameters\nepochs = 100\nbatchSize = 15\nrnnPerCellUnitsCount = 128\nrnnCellsCount = 2\nlearningRate = 0.001\ndropoutRate = 0.95",
"_____no_output_____"
],
[
"seq2seqModel = Seq2SeqModel()\n# Build the graph\ntrain_graph = tf.Graph()\n# Set the graph to default to ensure that it is ready for training\nwith train_graph.as_default():\n \n # Load the model inputs \n input_data, targets, lr, dropout_rate, summary_length, max_summary_length, text_length = seq2seqModel.createModelInputsPlaceholders()\n\n # Create the training and inference logits\n trainingLogits, inferenceLogits = seq2seqModel.process(tf.reverse(input_data, [-1]),\n targets, \n dropout_rate, \n text_length,\n summary_length,\n max_summary_length,\n len(wordToIntDict)+1,\n rnnPerCellUnitsCount, \n rnnCellsCount, \n wordToIntDict,\n batchSize,\n embeddingsMatrix)\n \n # Create tensors for the training logits and inference logits\n trainingLogits = tf.identity(trainingLogits[0].rnn_output, 'logits')\n inferenceLogits = tf.identity(inferenceLogits[0].sample_id, name='predictions')\n \n # Create the weights for sequence_loss, the sould be all True across since each batch is padded\n masks = tf.sequence_mask(summary_length, max_summary_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n trainingLogits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(learningRate)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\nprint(\"Graph is built.\")\ngraph_location = \"../modelrun/graph\"\nprint(graph_location)\ntrain_writer = tf.summary.FileWriter(graph_location)\ntrain_writer.add_graph(train_graph)",
"WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81c50>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81c50>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81c50>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81c50>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81f28>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81f28>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81f28>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81f28>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81358>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81358>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81358>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f9b7cf81358>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e914eb8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e914eb8>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e914eb8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e914eb8>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f974ef2c5f8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f974ef2c5f8>>: AssertionError: Bad argument number for Name: 3, expecting 4\nWARNING: Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f974ef2c5f8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f974ef2c5f8>>: AssertionError: Bad argument number for Name: 3, expecting 4\nWARNING:tensorflow:Entity <bound method AttentionWrapper.call of <tensorflow.contrib.seq2seq.python.ops.attention_wrapper.AttentionWrapper object at 0x7f974ef6cf60>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method AttentionWrapper.call of <tensorflow.contrib.seq2seq.python.ops.attention_wrapper.AttentionWrapper object at 0x7f974ef6cf60>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method AttentionWrapper.call of <tensorflow.contrib.seq2seq.python.ops.attention_wrapper.AttentionWrapper object at 0x7f974ef6cf60>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method AttentionWrapper.call of <tensorflow.contrib.seq2seq.python.ops.attention_wrapper.AttentionWrapper object at 0x7f974ef6cf60>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f974eea5780>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: converting <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f974eea5780>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f974eea5780>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f974eea5780>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e8d2a20>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e8d2a20>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING: Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e8d2a20>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974e8d2a20>>: AttributeError: module 'gast' has no attribute 'Num'\nWARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974eb7a240>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f974eb7a240>>: AttributeError: module 'gast' has no attribute 'Num'\n"
],
[
"# Subset the data for training\nstart = 150\nend = start + 45000\nprint(len(sortedSummaries))\nsampledSortedSummaries = sortedSummaries[start:end:15]\nsampledSortedText = sortedText[start:end:15]\nprint(len(sampledSortedSummaries))\nprint(\"The shortest text length:\", len(sampledSortedText[0]))\nprint(\"The longest text length:\",len(sampledSortedText[-1]))",
"64603\n3000\nThe shortest text length: 44\nThe longest text length: 345\n"
],
[
"# Train the Model\nlearning_rate_decay = 0.95\nmin_learning_rate = 0.0005\ndisplay_step = 10 # Check training loss after every 10 batches\nstop_early = 0 \nstop = 2 # If the update loss does not decrease in 3 consecutive update checks, stop training\nper_epoch = 2 # Make 2 update checks per epoch\nupdate_check = (len(sampledSortedText)//batchSize//per_epoch)\n\nupdate_loss = 0 \nbatch_loss = 0\nsummary_update_loss = [] # Record the update losses for saving improvements in the model\npaddingToken = wordToIntDict[embedding.specialTokens['PADDING']]\ncheckpoint = \"../modelrun/best_model.ckpt\" \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n \n # If we want to continue training a previous session\n #loader = tf.train.import_meta_graph(\"./\" + checkpoint + '.meta')\n #loader.restore(sess, checkpoint)\n \n for epoch_i in range(1, epochs+1):\n update_loss = 0\n batch_loss = 0\n for batch_i, (summaries_batch, texts_batch, summaries_lengths, texts_lengths) in enumerate(\n BatchDataGenerator.generateBatches(sampledSortedSummaries, sampledSortedText, batchSize, paddingToken)):\n start_time = time.time()\n _, loss = sess.run(\n [train_op, cost],\n {input_data: texts_batch,\n targets: summaries_batch,\n lr: learningRate,\n summary_length: summaries_lengths,\n text_length: texts_lengths,\n dropout_rate: dropoutRate})\n\n batch_loss += loss\n update_loss += loss\n end_time = time.time()\n batch_time = end_time - start_time\n\n if (batch_i+1) % display_step == 0 and batch_i > 0:\n print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f}, Seconds: {:>4.2f}'\n .format(epoch_i,\n epochs, \n batch_i+1, \n len(sampledSortedText) // batchSize, \n batch_loss / display_step, \n batch_time*display_step))\n batch_loss = 0\n\n if (batch_i+1) % update_check == 0 and batch_i > 0:\n print(\"Average loss for this update:\", round(update_loss/update_check,3))\n summary_update_loss.append(update_loss)\n \n # If the update loss is at a new minimum, save the model\n if update_loss <= min(summary_update_loss):\n print('New Record!') \n stop_early = 0\n saver = tf.train.Saver() \n saver.save(sess, checkpoint)\n\n else:\n print(\"No Improvement.\")\n stop_early += 1\n if stop_early == stop:\n break\n update_loss = 0\n \n \n # Reduce learning rate, but not below its minimum value\n learningRate *= learning_rate_decay\n if learningRate < min_learning_rate:\n learningRate = min_learning_rate\n \n if stop_early == stop:\n print(\"Stopping Training.\")\n break",
"Epoch 1/100 Batch 10/200 - Loss: 9.555, Seconds: 89.22\nEpoch 1/100 Batch 20/200 - Loss: 7.760, Seconds: 63.90\nEpoch 1/100 Batch 30/200 - Loss: 6.529, Seconds: 91.88\nEpoch 1/100 Batch 40/200 - Loss: 6.179, Seconds: 105.31\nEpoch 1/100 Batch 50/200 - Loss: 6.292, Seconds: 106.16\nEpoch 1/100 Batch 60/200 - Loss: 6.132, Seconds: 88.31\nEpoch 1/100 Batch 70/200 - Loss: 5.881, Seconds: 107.43\nEpoch 1/100 Batch 80/200 - Loss: 6.283, Seconds: 101.75\nEpoch 1/100 Batch 90/200 - Loss: 5.723, Seconds: 110.45\nEpoch 1/100 Batch 100/200 - Loss: 6.071, Seconds: 92.14\nAverage loss for this update: 6.64\nNew Record!\nEpoch 1/100 Batch 110/200 - Loss: 5.923, Seconds: 101.99\nEpoch 1/100 Batch 120/200 - Loss: 6.130, Seconds: 101.59\nEpoch 1/100 Batch 130/200 - Loss: 5.955, Seconds: 109.54\nEpoch 1/100 Batch 140/200 - Loss: 5.913, Seconds: 118.32\nEpoch 1/100 Batch 150/200 - Loss: 5.749, Seconds: 125.52\nEpoch 1/100 Batch 160/200 - Loss: 6.161, Seconds: 112.63\nEpoch 1/100 Batch 170/200 - Loss: 5.824, Seconds: 120.98\nEpoch 1/100 Batch 180/200 - Loss: 5.989, Seconds: 123.04\nEpoch 1/100 Batch 190/200 - Loss: 6.203, Seconds: 114.08\nEpoch 1/100 Batch 200/200 - Loss: 6.149, Seconds: 123.71\nAverage loss for this update: 6.0\nNew Record!\nEpoch 2/100 Batch 10/200 - Loss: 4.969, Seconds: 96.73\nEpoch 2/100 Batch 20/200 - Loss: 5.266, Seconds: 66.96\nEpoch 2/100 Batch 30/200 - Loss: 5.103, Seconds: 89.47\nEpoch 2/100 Batch 40/200 - Loss: 5.273, Seconds: 93.73\nEpoch 2/100 Batch 50/200 - Loss: 5.580, Seconds: 97.60\nEpoch 2/100 Batch 60/200 - Loss: 5.553, Seconds: 85.43\nEpoch 2/100 Batch 70/200 - Loss: 5.386, Seconds: 106.08\nEpoch 2/100 Batch 80/200 - Loss: 5.784, Seconds: 101.63\nEpoch 2/100 Batch 90/200 - Loss: 5.316, Seconds: 109.03\nEpoch 2/100 Batch 100/200 - Loss: 5.668, Seconds: 91.04\nAverage loss for this update: 5.39\nNew Record!\nEpoch 2/100 Batch 110/200 - Loss: 5.570, Seconds: 101.38\nEpoch 2/100 Batch 120/200 - Loss: 5.755, Seconds: 98.75\nEpoch 2/100 Batch 130/200 - Loss: 5.637, Seconds: 107.89\nEpoch 2/100 Batch 140/200 - Loss: 5.612, Seconds: 116.92\nEpoch 2/100 Batch 150/200 - Loss: 5.459, Seconds: 122.62\nEpoch 2/100 Batch 160/200 - Loss: 5.853, Seconds: 110.96\nEpoch 2/100 Batch 170/200 - Loss: 5.564, Seconds: 118.66\nEpoch 2/100 Batch 180/200 - Loss: 5.710, Seconds: 122.42\nEpoch 2/100 Batch 190/200 - Loss: 5.914, Seconds: 111.56\nEpoch 2/100 Batch 200/200 - Loss: 5.878, Seconds: 120.75\nAverage loss for this update: 5.695\nNo Improvement.\nEpoch 3/100 Batch 10/200 - Loss: 4.876, Seconds: 95.00\nEpoch 3/100 Batch 20/200 - Loss: 5.162, Seconds: 65.63\nEpoch 3/100 Batch 30/200 - Loss: 4.982, Seconds: 88.34\nEpoch 3/100 Batch 40/200 - Loss: 5.124, Seconds: 93.50\nEpoch 3/100 Batch 50/200 - Loss: 5.428, Seconds: 96.70\nEpoch 3/100 Batch 60/200 - Loss: 5.398, Seconds: 82.24\nEpoch 3/100 Batch 70/200 - Loss: 5.214, Seconds: 109.68\nEpoch 3/100 Batch 80/200 - Loss: 5.582, Seconds: 98.60\nEpoch 3/100 Batch 90/200 - Loss: 5.128, Seconds: 107.03\nEpoch 3/100 Batch 100/200 - Loss: 5.465, Seconds: 90.62\nAverage loss for this update: 5.236\nNew Record!\nEpoch 3/100 Batch 110/200 - Loss: 5.359, Seconds: 101.09\nEpoch 3/100 Batch 120/200 - Loss: 5.491, Seconds: 98.22\nEpoch 3/100 Batch 130/200 - Loss: 5.406, Seconds: 109.45\nEpoch 3/100 Batch 140/200 - Loss: 5.378, Seconds: 115.93\nEpoch 3/100 Batch 150/200 - Loss: 5.207, Seconds: 123.30\nEpoch 3/100 Batch 160/200 - Loss: 5.572, Seconds: 111.42\nEpoch 3/100 Batch 170/200 - Loss: 5.307, Seconds: 118.18\nEpoch 3/100 Batch 180/200 - Loss: 5.431, 
Seconds: 120.35\nEpoch 3/100 Batch 190/200 - Loss: 5.622, Seconds: 112.43\nEpoch 3/100 Batch 200/200 - Loss: 5.594, Seconds: 121.38\nAverage loss for this update: 5.437\nNo Improvement.\nEpoch 4/100 Batch 10/200 - Loss: 4.680, Seconds: 93.97\nEpoch 4/100 Batch 20/200 - Loss: 4.951, Seconds: 65.41\nEpoch 4/100 Batch 30/200 - Loss: 4.744, Seconds: 87.78\nEpoch 4/100 Batch 40/200 - Loss: 4.876, Seconds: 90.86\nEpoch 4/100 Batch 50/200 - Loss: 5.166, Seconds: 96.13\nEpoch 4/100 Batch 60/200 - Loss: 5.148, Seconds: 82.27\nEpoch 4/100 Batch 70/200 - Loss: 4.968, Seconds: 104.71\nEpoch 4/100 Batch 80/200 - Loss: 5.309, Seconds: 100.57\nEpoch 4/100 Batch 90/200 - Loss: 4.890, Seconds: 108.15\nEpoch 4/100 Batch 100/200 - Loss: 5.210, Seconds: 89.69\nAverage loss for this update: 4.994\nNew Record!\nEpoch 4/100 Batch 110/200 - Loss: 5.114, Seconds: 99.76\nEpoch 4/100 Batch 120/200 - Loss: 5.211, Seconds: 98.74\nEpoch 4/100 Batch 130/200 - Loss: 5.148, Seconds: 106.64\nEpoch 4/100 Batch 140/200 - Loss: 5.122, Seconds: 116.57\nEpoch 4/100 Batch 150/200 - Loss: 4.963, Seconds: 121.47\nEpoch 4/100 Batch 160/200 - Loss: 5.278, Seconds: 110.99\nEpoch 4/100 Batch 170/200 - Loss: 5.044, Seconds: 118.86\nEpoch 4/100 Batch 180/200 - Loss: 5.159, Seconds: 120.84\nEpoch 4/100 Batch 190/200 - Loss: 5.342, Seconds: 112.43\nEpoch 4/100 Batch 200/200 - Loss: 5.301, Seconds: 120.54\nAverage loss for this update: 5.168\nNo Improvement.\nEpoch 5/100 Batch 10/200 - Loss: 4.436, Seconds: 94.30\nEpoch 5/100 Batch 20/200 - Loss: 4.694, Seconds: 65.34\nEpoch 5/100 Batch 30/200 - Loss: 4.475, Seconds: 87.84\nEpoch 5/100 Batch 40/200 - Loss: 4.633, Seconds: 90.69\nEpoch 5/100 Batch 50/200 - Loss: 4.899, Seconds: 95.01\nEpoch 5/100 Batch 60/200 - Loss: 4.893, Seconds: 81.03\nEpoch 5/100 Batch 70/200 - Loss: 4.735, Seconds: 104.56\nEpoch 5/100 Batch 80/200 - Loss: 5.044, Seconds: 97.43\nEpoch 5/100 Batch 90/200 - Loss: 4.652, Seconds: 106.11\nEpoch 5/100 Batch 100/200 - Loss: 4.968, Seconds: 89.61\nAverage loss for this update: 4.743\nNew Record!\nEpoch 5/100 Batch 110/200 - Loss: 4.887, Seconds: 100.80\nEpoch 5/100 Batch 120/200 - Loss: 4.969, Seconds: 98.70\nEpoch 5/100 Batch 130/200 - Loss: 4.904, Seconds: 106.26\nEpoch 5/100 Batch 140/200 - Loss: 4.887, Seconds: 116.11\nEpoch 5/100 Batch 150/200 - Loss: 4.750, Seconds: 121.19\nEpoch 5/100 Batch 160/200 - Loss: 5.026, Seconds: 109.41\nEpoch 5/100 Batch 170/200 - Loss: 4.819, Seconds: 116.81\nEpoch 5/100 Batch 180/200 - Loss: 4.925, Seconds: 119.77\nEpoch 5/100 Batch 190/200 - Loss: 5.100, Seconds: 109.71\nEpoch 5/100 Batch 200/200 - Loss: 5.057, Seconds: 117.94\nAverage loss for this update: 4.932\nNo Improvement.\nEpoch 6/100 Batch 10/200 - Loss: 4.211, Seconds: 93.56\nEpoch 6/100 Batch 20/200 - Loss: 4.458, Seconds: 65.17\nEpoch 6/100 Batch 30/200 - Loss: 4.263, Seconds: 86.62\nEpoch 6/100 Batch 40/200 - Loss: 4.440, Seconds: 89.71\nEpoch 6/100 Batch 50/200 - Loss: 4.691, Seconds: 94.30\nEpoch 6/100 Batch 60/200 - Loss: 4.686, Seconds: 82.15\nEpoch 6/100 Batch 70/200 - Loss: 4.554, Seconds: 104.23\nEpoch 6/100 Batch 80/200 - Loss: 4.837, Seconds: 98.04\nEpoch 6/100 Batch 90/200 - Loss: 4.455, Seconds: 109.74\nEpoch 6/100 Batch 100/200 - Loss: 4.749, Seconds: 82.74\nAverage loss for this update: 4.535\nNew Record!\nEpoch 6/100 Batch 110/200 - Loss: 4.689, Seconds: 95.97\nEpoch 6/100 Batch 120/200 - Loss: 4.766, Seconds: 101.98\nEpoch 6/100 Batch 130/200 - Loss: 4.695, Seconds: 106.55\nEpoch 6/100 Batch 140/200 - Loss: 4.682, Seconds: 113.48\nEpoch 6/100 Batch 150/200 - 
Loss: 4.565, Seconds: 120.73\nEpoch 6/100 Batch 160/200 - Loss: 4.821, Seconds: 108.53\nEpoch 6/100 Batch 170/200 - Loss: 4.625, Seconds: 116.99\nEpoch 6/100 Batch 180/200 - Loss: 4.729, Seconds: 120.05\nEpoch 6/100 Batch 190/200 - Loss: 4.906, Seconds: 109.87\nEpoch 6/100 Batch 200/200 - Loss: 4.884, Seconds: 121.79\nAverage loss for this update: 4.736\nNo Improvement.\nEpoch 7/100 Batch 10/200 - Loss: 4.029, Seconds: 93.00\nEpoch 7/100 Batch 20/200 - Loss: 4.243, Seconds: 65.69\nEpoch 7/100 Batch 30/200 - Loss: 4.086, Seconds: 85.74\nEpoch 7/100 Batch 40/200 - Loss: 4.256, Seconds: 94.20\nEpoch 7/100 Batch 50/200 - Loss: 4.508, Seconds: 93.67\nEpoch 7/100 Batch 60/200 - Loss: 4.507, Seconds: 83.99\n"
],
[
"newsIndex = 165\ntotalNewsCount = len(textToNumberSequence)\ntestNews = [textToNumberSequence[newsIndex]]\nmaxSummaryLength = len(news['Summary'][newsIndex])\nprint(testNews)",
"_____no_output_____"
],
[
"checkpoint = \"./best_model.ckpt\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(checkpoint + '.meta')\n loader.restore(sess, checkpoint)\n input_data = loaded_graph.get_tensor_by_name('inputData:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n text_length = loaded_graph.get_tensor_by_name('inputTextLengths:0')\n summary_length = loaded_graph.get_tensor_by_name('inputSummaryLengths:0')\n dropout_rate = loaded_graph.get_tensor_by_name('dropoutRate:0')\n #Multiply by batch_size to match the model's input parameters\n for i, text in enumerate(testNews):\n answer_logits = sess.run(logits, {input_data: [text]*batchSize, \n summary_length: [maxSummaryLength], #summary_length: [np.random.randint(5,8)], \n text_length: [len(text)]*batchSize,\n dropout_rate: 1.0})[0] \n # Remove the padding from the summaries\n pad = wordToIntDict[\"<PAD>\"] \n #print('- News:\\n\\r {}\\n\\r\\n\\r'.format(\" \".join([intToWordDict[j] for j in testNews[i] if j != pad])))\n print('- News:\\n\\r {}\\n\\r\\n\\r'.format(news['Text'][newsIndex]))\n print('- Actual Summary:\\n\\r {}\\n\\r\\n\\r'.format(news['Summary'][newsIndex]))\n print('- Predicted Summary:\\n\\r {}\\n\\r\\n\\r'.format(\" \".join([intToWordDict[j] for j in answer_logits if j != pad])))",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7e25c18bcb0b0bc9e13849e78f10c2c813ba46
| 9,479 |
ipynb
|
Jupyter Notebook
|
archive/6-bipartite-graphs-instructor.ipynb
|
ChrisKeefe/Network-Analysis-Made-Simple
|
98644f0d03aa3c1ece4aa2d4147835fa10a0fcf8
|
[
"MIT"
] | 1 |
2017-08-19T15:03:49.000Z
|
2017-08-19T15:03:49.000Z
|
archive/6-bipartite-graphs-instructor.ipynb
|
a1ip/Network-Analysis-Made-Simple
|
7404c35cab8cdc9c119961ba33baef0398a20adc
|
[
"MIT"
] | null | null | null |
archive/6-bipartite-graphs-instructor.ipynb
|
a1ip/Network-Analysis-Made-Simple
|
7404c35cab8cdc9c119961ba33baef0398a20adc
|
[
"MIT"
] | 2 |
2022-02-09T15:41:33.000Z
|
2022-02-11T07:47:40.000Z
| 25.143236 | 352 | 0.579703 |
[
[
[
"import networkx as nx\nfrom custom import load_data as cf\nfrom networkx.algorithms import bipartite\nfrom nxviz import CircosPlot\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
]
],
[
[
"# Introduction\n\nBipartite graphs are graphs that have two (bi-) partitions (-partite) of nodes. Nodes within each partition are not allowed to be connected to one another; rather, they can only be connected to nodes in the other partition.\n\nBipartite graphs can be useful for modelling relations between two sets of entities. We will explore the construction and analysis of bipartite graphs here.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"Let's load a [crime data](http://konect.uni-koblenz.de/networks/moreno_crime) bipartite graph and quickly explore it.\n\n> This bipartite network contains persons who appeared in at least one crime case as either a suspect, a victim, a witness or both a suspect and victim at the same time. A left node represents a person and a right node represents a crime. An edge between two nodes shows that the left node was involved in the crime represented by the right node.",
"_____no_output_____"
]
],
[
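[
"# Added hedged sketch, not part of the original notebook: a tiny hand-built\n# bipartite graph to make the two-partition idea above concrete before loading\n# the real crime data. The node names here are illustrative assumptions only.\nB = nx.Graph()\nB.add_nodes_from(['person_a', 'person_b'], bipartite='person')  # one partition\nB.add_nodes_from(['crime_1', 'crime_2'], bipartite='crime')     # the other partition\n# Edges may only cross partitions, never stay within one\nB.add_edges_from([('person_a', 'crime_1'), ('person_b', 'crime_1'), ('person_b', 'crime_2')])\nbipartite.is_bipartite(B)  # expected: True",
"_____no_output_____"
],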
[
"G = cf.load_crime_network()\nlist(G.edges(data=True))[0:5]",
"_____no_output_____"
],
[
"list(G.nodes(data=True))[0:10]",
"_____no_output_____"
]
],
[
[
"# Projections\n\nBipartite graphs can be projected down to one of the projections. For example, we can generate a person-person graph from the person-crime graph, by declaring that two nodes that share a crime node are in fact joined by an edge.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"## Exercise\n\nFind the bipartite projection function in the NetworkX `bipartite` module [docs](https://networkx.github.io/documentation/networkx-1.10/reference/algorithms.bipartite.html), and use it to obtain the `unipartite` projection of the bipartite graph. (5 min.)",
"_____no_output_____"
]
],
[
[
"person_nodes = [n for n in G.nodes() if G.nodes[n]['bipartite'] == 'person']\npG = bipartite.projection.projected_graph(G, person_nodes)\nlist(pG.nodes(data=True))[0:5]",
"_____no_output_____"
]
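,
[
"# Added hedged aside, not part of the original exercise: NetworkX also offers a\n# weighted projection, in which each person-person edge records how many crime\n# nodes the two people share. A minimal sketch, reusing the same person_nodes:\nwpG = bipartite.weighted_projected_graph(G, person_nodes)\nlist(wpG.edges(data=True))[0:5]",
"_____no_output_____"
]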
],
[
[
"## Exercise\n\nTry visualizing the person-person crime network by using a Circos plot. Ensure that the nodes are grouped by gender and then by number of connections. (5 min.)\n\nAgain, recapping the Circos Plot API:\n\n```python\nc = CircosPlot(graph_object, node_color='metadata_key1', node_grouping='metadata_key2', node_order='metadat_key3')\nc.draw()\nplt.show() # or plt.savefig('...')\n```",
"_____no_output_____"
]
],
[
[
"for n, d in pG.nodes(data=True):\n pG.nodes[n]['connectivity'] = len(list(pG.neighbors(n)))\nc = CircosPlot(pG, node_color='gender', node_grouping='gender', node_order='connectivity')\nc.draw()\nplt.savefig('images/crime-person.png', dpi=300)",
"_____no_output_____"
]
],
[
[
"## Exercise\n\nUse a similar logic to extract crime links. (2 min.)",
"_____no_output_____"
]
],
[
[
"crime_nodes = [n for n in G.nodes() if G.nodes[n]['bipartite'] == 'crime']\ncG = bipartite.projection.projected_graph(G, crime_nodes)",
"_____no_output_____"
]
],
[
[
"## Exercise\n\nCan you plot how the crimes are connected, using a Circos plot? Try ordering it by number of connections. (5 min.)",
"_____no_output_____"
]
],
[
[
"for n in cG.nodes():\n cG.nodes[n]['connectivity'] = float(len(list(cG.neighbors(n))))\nc = CircosPlot(cG, node_order='connectivity', node_color='connectivity')\nc.draw()\nplt.savefig('images/crime-crime.png', dpi=300)",
"_____no_output_____"
]
],
[
[
"## Exercise\n\nNetworkX also implements centrality measures for bipartite graphs, which allows you to obtain their metrics without first converting to a particular projection. This is useful for exploratory data analysis. \n\nTry the following challenges, referring to the [API documentation](https://networkx.github.io/documentation/networkx-1.9/reference/algorithms.bipartite.html) to help you:\n\n1. Which crimes have the most number of people involved?\n1. Which people are involved in the most number of crimes?\n\nExercise total: 5 min.",
"_____no_output_____"
]
],
[
[
"# Degree Centrality\nbpdc = bipartite.degree_centrality(G, person_nodes)\nsorted(bpdc.items(), key=lambda x: x[1], reverse=True)[0:5]",
"_____no_output_____"
],
[
"bpdc['p1']",
"_____no_output_____"
],
[
"nx.degree_centrality(G)['p1']",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
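"code",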
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
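"code",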
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a7e2c6ea6f44ae9061e6083aa4fbad8ab5a4bd6
| 8,782 |
ipynb
|
Jupyter Notebook
|
Solutions/Chapter1/1.5. Binomial distribution.ipynb
|
sichu91/ISP_examples
|
af5e3fceb559c383805cddbd4f8ac6fab249521e
|
[
"MIT"
] | null | null | null |
Solutions/Chapter1/1.5. Binomial distribution.ipynb
|
sichu91/ISP_examples
|
af5e3fceb559c383805cddbd4f8ac6fab249521e
|
[
"MIT"
] | null | null | null |
Solutions/Chapter1/1.5. Binomial distribution.ipynb
|
sichu91/ISP_examples
|
af5e3fceb559c383805cddbd4f8ac6fab249521e
|
[
"MIT"
] | 1 |
2020-11-29T14:42:58.000Z
|
2020-11-29T14:42:58.000Z
| 22.403061 | 102 | 0.445912 |
[
[
[
"import numpy as np\nfrom scipy.stats import binom, norm, multinomial\nfrom scipy.special import comb ",
"_____no_output_____"
]
],
[
[
"### Solution 1",
"_____no_output_____"
]
],
[
[
"# 변수 초기화\nn = 25\np = 0.1",
"_____no_output_____"
],
[
"## 직접 계산\n\n# a)\nprobs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in range(4)]\nprob = 1 - sum(probs)\nprint(f\"a) 적어도 4대가 검은 색: {prob:.4f}\")\n\n# b)\nprobs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in range(7)]\nprob = sum(probs)\nprint(f\"b) 최대 6대가 검은 색 : {prob:.4f}\")\n\n# c)\nprobs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in range(4)]\nprob = 1 - sum(probs)\nprint(f\"c) 4대 이상이 검은 색 : {prob:.4f}\")\n\n# d)\nprob = comb(n, 4) * (p**4) * ((1-p)**(n-4))\nprint(f\"d) 정확히 4대가 검은색 : {prob:.4f}\")\n\n# e)\nprobs = [(comb(n, i) * (p**i) * ((1-p)**(n-i))) for i in (3, 4)]\nprob = sum(probs)\nprint(f\"d) 3대~4대의 자동차가 검은 색 : {prob:.4f}\")",
"a) 적어도 4대가 검은 색: 0.2364\nb) 최대 6대가 검은 색 : 0.9905\nc) 4대 이상이 검은 색 : 0.2364\nd) 정확히 4대가 검은색 : 0.1384\nd) 3대~4대의 자동차가 검은 색 : 0.3649\n"
],
[
"## scipy를 이용해 pmf 함수 이용\n\n# a)\nprob = 1 - binom.cdf(3, 25, 0.1)\nprint(f\"a) 적어도 4대가 검은 색: {prob:.4f}\")\n\n# b)\nprob = binom.cdf(6, 25, 0.1)\nprint(f\"b) 최대 6대가 검은 색 : {prob:.4f}\")\n\n# c)\nprob = 1 - binom.cdf(3, 25, 0.1)\nprint(f\"c) 4대 이상이 검은 색 : {prob:.4f}\")\n\n# d)\nprob = binom.pmf(4, 25, 0.1)\nprint(f\"d) 정확히 4대가 검은색 : {prob:.4f}\")\n\n# e)\nprob = binom.pmf(3, 25, 0.1) + binom.pmf(4, 25, 0.1)\nprint(f\"d) 3대~4대의 자동차가 검은 색 : {prob:.4f}\")",
"a) 적어도 4대가 검은 색: 0.2364\nb) 최대 6대가 검은 색 : 0.9905\nc) 4대 이상이 검은 색 : 0.2364\nd) 정확히 4대가 검은색 : 0.1384\nd) 3대~4대의 자동차가 검은 색 : 0.3649\n"
]
],
[
[
"* * *",
"_____no_output_____"
],
[
"### Solution 2",
"_____no_output_____"
]
],
[
[
"## 직접 계산\n\n# a)\nprob = 0.25**5\nprint(f\"a) 어떤 학생이 모든 문제의 답을 맞출 확률은 {prob:.4f}\")\n# b)\nprob = (1 - 0.25)**5\nprint(f\"b) 어떤 학생이 모든 문제를 틀릴 확률은 {prob:.4f}\")",
"a) 어떤 학생이 모든 문제의 답을 맞출 확률은 0.0010\nb) 어떤 학생이 모든 문제를 틀릴 확률은 0.2373\n"
],
[
"## scipy를 이용해 pmf 함수 이용\n\n# a)\nprob = binom.pmf(5, 5, 0.25)\nprint(f\"a) 어떤 학생이 모든 문제의 답을 맞출 확률은 {prob:.4f}\")\n# b)\nprob = binom.pmf(0, 5, 0.25)\nprint(f\"b) 어떤 학생이 모든 문제를 틀릴 확률은 {prob:.4f}\")",
"a) 어떤 학생이 모든 문제의 답을 맞출 확률은 0.0010\nb) 어떤 학생이 모든 문제를 틀릴 확률은 0.2373\n"
]
],
[
[
"* * *",
"_____no_output_____"
],
[
"### Solution 3",
"_____no_output_____"
]
],
[
[
"## 직접 계산\n\n# a)\nprob = 1 - (0.5**3)\nprint(f\"a) 적어도 한 명이 딸일 확률은 {prob:.4f}\")\n\n# b)\ndaughter_2 = (0.5**3) * comb(3, 2)\ndaughter_3 = (0.5**3) * comb(3, 3)\nprob = daughter_2 + daughter_3\nprint(f\"b) 적어도 두 명이 딸일 확률은 {prob:.4f}\")",
"a) 적어도 한 명이 딸일 확률은 0.8750\nb) 적어도 두 명이 딸일 확률은 0.5000\n"
],
[
"## scipy의 pmf 함수 이용\n\n# b)\nprob = 1 - binom.cdf(1, 3, 0.5)\nprint(f\"b) 적어도 두 명이 딸일 확률은 {prob:.4f}\")",
"b) 적어도 두 명이 딸일 확률은 0.5000\n"
],
[
"## 시뮬레이션을 통해 계산\ntwo_more = 0\nn = 100000\nfor _ in range(n):\n daughter_count = np.random.binomial(3, 0.5)\n if daughter_count >= 2:\n two_more += 1\np = two_more / n\nprint(f\"적어도 두 명이 딸일 확률은? {p:.4f}\")",
"적어도 두 명이 딸일 확률은? 0.5004\n"
]
],
[
[
"* * *",
"_____no_output_____"
],
[
"### Solution 4",
"_____no_output_____"
]
],
[
[
"## scipy를 이용\n\nmean_kor = binom.mean(100, 0.3)\nvar_kor = binom.var(100, 0.3)\nprint(f\"a)100명의 학생 중 국어를 선택할 사람 수에 대한 평균: {mean_kor}, 분산: {var_kor}\")\n\nmean_not_math = binom.mean(100, (1-0.5))\nvar_not_math = binom.var(100, (1-0.5))\nprint(f\"b)100명의 학생 중 수학이 아닌 과목을 선택할 사람 수에 대한 평균: {mean_not_math}, 분산: {var_not_math}\")",
"a)100명의 학생 중 국어를 선택할 사람 수에 대한 평균: 30.0, 분산: 21.0\nb)100명의 학생 중 수학이 아닌 과목을 선택할 사람 수에 대한 평균: 50.0, 분산: 25.0\n"
],
[
"## binom의 평균과 분산 이용: np, np(1-p)\n\nmean_kor = 100*0.3\nvar_kor = 100*0.3*(1-0.3)\nprint(f\"a)100명의 학생 중 국어를 선택할 사람 수에 대한 평균: {mean_kor}, 분산: {var_kor}\")\n\nmean_not_math = 100*0.5\nvar_not_math = 100*0.5*(1-0.5)\nprint(f\"b)100명의 학생 중 수학이 아닌 과목을 선택할 사람 수에 대한 평균: {mean_not_math}, 분산: {var_not_math}\")",
"a)100명의 학생 중 국어를 선택할 사람 수에 대한 평균: 30.0, 분산: 21.0\nb)100명의 학생 중 수학이 아닌 과목을 선택할 사람 수에 대한 평균: 50.0, 분산: 25.0\n"
],
[
"## 시뮬레이션을 통해 계산\n\nkor = []\nmath = []\nfor _ in range(10000):\n samples = multinomial.rvs(1, [0.3, 0.2, 0.5], 100)\n kor.append(sum([(sample[0] == 1).all() for sample in samples]))\n math.append(sum([(sample[2] == 0).all() for sample in samples]))\n\nmean_kor = np.mean(kor)\nvar_kor = np.var(kor)\nprint(f\"a)100명의 학생 중 국어를 선택할 사람 수에 대한 평균: {mean_kor:.2f}, 분산: {var_kor:.2f}\")\n\nmean_not_math = np.mean(math)\nvar_not_math = np.var(math)\nprint(f\"b)100명의 학생 중 수학이 아닌 과목을 선택할 사람 수에 대한 평균: {mean_not_math:.2f}, 분산: {var_not_math:.2f}\")",
"a)100명의 학생 중 국어를 선택할 사람 수에 대한 평균: 29.97, 분산: 20.99\nb)100명의 학생 중 수학이 아닌 과목을 선택할 사람 수에 대한 평균: 49.96, 분산: 25.28\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a7e4fe5407be146aa252bedbb759a3cc01bd6bd
| 370,826 |
ipynb
|
Jupyter Notebook
|
County Clustering.ipynb
|
mthelm85/CountyClustering
|
1acaf79c61596651039a80b8239c971bd53a8d67
|
[
"MIT"
] | null | null | null |
County Clustering.ipynb
|
mthelm85/CountyClustering
|
1acaf79c61596651039a80b8239c971bd53a8d67
|
[
"MIT"
] | null | null | null |
County Clustering.ipynb
|
mthelm85/CountyClustering
|
1acaf79c61596651039a80b8239c971bd53a8d67
|
[
"MIT"
] | null | null | null | 174.506353 | 110,101 | 0.635791 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a7e5cf30ed88f5f0e963299def6b523c2ce581e
| 10,152 |
ipynb
|
Jupyter Notebook
|
notebooks/sql_advanced/raw/tut1.ipynb
|
guesswhohaha/learntools
|
c1bd607ade5227f8c8977ff05bf9d04d0a8b7732
|
[
"Apache-2.0"
] | 359 |
2018-03-23T15:57:52.000Z
|
2022-03-25T21:56:28.000Z
|
notebooks/sql_advanced/raw/tut1.ipynb
|
guesswhohaha/learntools
|
c1bd607ade5227f8c8977ff05bf9d04d0a8b7732
|
[
"Apache-2.0"
] | 84 |
2018-06-14T00:06:52.000Z
|
2022-02-08T17:25:54.000Z
|
notebooks/sql_advanced/raw/tut1.ipynb
|
guesswhohaha/learntools
|
c1bd607ade5227f8c8977ff05bf9d04d0a8b7732
|
[
"Apache-2.0"
] | 213 |
2018-05-02T19:06:31.000Z
|
2022-03-20T15:40:34.000Z
| 41.436735 | 360 | 0.608353 |
[
[
[
"# Introduction\n\nIn the [Intro to SQL micro-course](https://www.kaggle.com/learn/intro-to-sql), you learned how to use [**INNER JOIN**](https://www.kaggle.com/dansbecker/joining-data) to consolidate information from two different tables. Now you'll learn about a few more types of **JOIN**, along with how to use **UNIONs** to pull information from multiple tables. \n\nAlong the way, we'll work with two imaginary tables, called `owners` and `pets`. \n\n\n\nEach row of the `owners` table identifies a different pet owner, where the `ID` column is a unique identifier. The `Pet_ID` column (in the `owners` table) contains the ID for the pet that belongs to the owner (this number matches the ID for the pet from the `pets` table).\n\nFor example, \n- the `pets` table shows that Dr. Harris Bonkers is the pet with ID 1.\n- The `owners` table shows that Aubrey Little is the owner of the pet with ID 1.\n\nPutting these two facts together, Dr. Harris Bonkers is owned by Aubrey Little. Likewise, since Veronica Dunn does not have a corresponding `Pet_ID`, she does not have a pet. And, since 5 does not appear in the `Pet_ID` column, Maisie does not have an owner.\n\n# JOINs\n\nRecall that we can use an **INNER JOIN** to pull rows from both tables where the value in the `Pet_ID` column in the `owners` table has a match in the `ID` column of the `pets` table.\n\n\n\nIn this case, Veronica Dunn and Maisie are not included in the results. But what if we instead want to create a table containing all pets, regardless of whether they have owners? Or, what if we want to combine all of the rows in both tables? In these cases, we need only use a different type of **JOIN**.\n\nFor instance, to create a table containing all rows from the `owners` table, we use a **LEFT JOIN**. In this case, \"left\" refers to the table that appears before the **JOIN** in the query. (\"Right\" refers to the table that is after the **JOIN**.)\n\n\n\nReplacing **INNER JOIN** in the query above with **LEFT JOIN** returns all rows where the two tables have matching entries, along with all of the rows in the left table (whether there is a match or not). \n\nIf we instead use a **RIGHT JOIN**, we get the matching rows, along with all rows in the right table (whether there is a match or not).\n\nFinally, a **FULL JOIN** returns all rows from both tables. Note that in general, any row that does not have a match in both tables will have NULL entries for the missing values. You can see this in the image below.\n\n\n\n\n# UNIONs\n\nAs you've seen, **JOINs** horizontally combine results from different tables. If you instead would like to vertically concatenate columns, you can do so with a **UNION**. The example query below combines the `Age` columns from both tables.\n\n\n\nNote that with a **UNION**, the data types of both columns must be the same, but the column names can be different. (So, for instance, we cannot take the **UNION** of the `Age` column from the `owners` table and the `Pet_Name` column from the `pets` table.) \n\nWe use **UNION ALL** to include duplicate values - you'll notice that `9` appears in both the `owners` table and the `pets` table, and shows up twice in the concatenated results. If you'd like to drop duplicate values, you need only change **UNION ALL** in the query to **UNION DISTINCT**.\n\n# Example\n\nWe'll work with the [Hacker News](https://www.kaggle.com/hacker-news/hacker-news) dataset. We begin by reviewing the first several rows of the `comments` table. 
(_The corresponding code is hidden, but you can un-hide it by clicking on the \"Code\" button below._)",
"_____no_output_____"
]
],
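[
[
"# A sketch only (this cell is an addition, not part of the original tutorial): `owners` and\n# `pets` are the imaginary tables from the figures above, so these queries cannot actually be\n# run against a real BigQuery dataset; they simply illustrate the syntax described above.\n\n# all owners, with their pet's name where one exists (NULL otherwise)\nleft_join_sketch = \"\"\"\n    SELECT o.Name AS owner, p.Pet_Name AS pet\n    FROM owners AS o\n    LEFT JOIN pets AS p\n        ON o.Pet_ID = p.ID\n    \"\"\"\n\n# vertically concatenate the two Age columns, keeping duplicates\nunion_sketch = \"\"\"\n    SELECT Age FROM owners\n    UNION ALL\n    SELECT Age FROM pets\n    \"\"\"",
"_____no_output_____"
]
],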
[
[
"#$HIDE_INPUT$\nfrom google.cloud import bigquery\n\n# Create a \"Client\" object\nclient = bigquery.Client()\n\n# Construct a reference to the \"hacker_news\" dataset\ndataset_ref = client.dataset(\"hacker_news\", project=\"bigquery-public-data\")\n\n# API request - fetch the dataset\ndataset = client.get_dataset(dataset_ref)\n\n# Construct a reference to the \"comments\" table\ntable_ref = dataset_ref.table(\"comments\")\n\n# API request - fetch the table\ntable = client.get_table(table_ref)\n\n# Preview the first five lines of the table\nclient.list_rows(table, max_results=5).to_dataframe()",
"_____no_output_____"
]
],
[
[
"You'll also work with the `stories` table.",
"_____no_output_____"
]
],
[
[
"# Construct a reference to the \"stories\" table\ntable_ref = dataset_ref.table(\"stories\")\n\n# API request - fetch the table\ntable = client.get_table(table_ref)\n\n# Preview the first five lines of the table\nclient.list_rows(table, max_results=5).to_dataframe()",
"_____no_output_____"
]
],
[
[
"Since you are already familiar with **JOINs** from the [Intro to SQL micro-course](https://www.kaggle.com/learn/intro-to-sql), we'll work with a relatively complex example of a JOIN that uses a [common table expression (CTE)](https://www.kaggle.com/dansbecker/as-with).\n\nThe query below pulls information from the `stories` and `comments` tables to create a table showing all stories posted on January 1, 2012, along with the corresponding number of comments. We use a **LEFT JOIN** so that the results include stories that didn't receive any comments.",
"_____no_output_____"
]
],
[
[
"# Query to select all stories posted on January 1, 2012, with number of comments\njoin_query = \"\"\"\n WITH c AS\n (\n SELECT parent, COUNT(*) as num_comments\n FROM `bigquery-public-data.hacker_news.comments` \n GROUP BY parent\n )\n SELECT s.id as story_id, s.by, s.title, c.num_comments\n FROM `bigquery-public-data.hacker_news.stories` AS s\n LEFT JOIN c\n ON s.id = c.parent\n WHERE EXTRACT(DATE FROM s.time_ts) = '2012-01-01'\n ORDER BY c.num_comments DESC\n \"\"\"\n\n# Run the query, and return a pandas DataFrame\njoin_result = client.query(join_query).result().to_dataframe()\njoin_result.head()",
"_____no_output_____"
]
],
[
[
"Since the results are ordered by the `num_comments` column, stories without comments appear at the end of the DataFrame. (Remember that **NaN** stands for \"not a number\".)",
"_____no_output_____"
]
],
[
[
"# None of these stories received any comments\njoin_result.tail()",
"_____no_output_____"
]
],
[
[
"Next, we write a query to select all usernames corresponding to users who wrote stories or comments on January 1, 2014. We use **UNION DISTINCT** (instead of **UNION ALL**) to ensure that each user appears in the table at most once.",
"_____no_output_____"
]
],
[
[
"# Query to select all users who posted stories or comments on January 1, 2014\nunion_query = \"\"\"\n SELECT c.by\n FROM `bigquery-public-data.hacker_news.comments` AS c\n WHERE EXTRACT(DATE FROM c.time_ts) = '2014-01-01'\n UNION DISTINCT\n SELECT s.by\n FROM `bigquery-public-data.hacker_news.stories` AS s\n WHERE EXTRACT(DATE FROM s.time_ts) = '2014-01-01'\n \"\"\"\n\n# Run the query, and return a pandas DataFrame\nunion_result = client.query(union_query).result().to_dataframe()\nunion_result.head()",
"_____no_output_____"
]
],
[
[
"To get the number of users who posted on January 1, 2014, we need only take the length of the DataFrame.",
"_____no_output_____"
]
],
[
[
"# Number of users who posted stories or comments on January 1, 2014\nlen(union_result)",
"_____no_output_____"
]
],
[
[
"# Your turn \n\nUse what you've learned to **[pull information from multiple tables](#$NEXT_NOTEBOOK_URL$)**.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a7e72ea432df778f33dd498a90701851e99b783
| 61,069 |
ipynb
|
Jupyter Notebook
|
image_classifier.ipynb
|
sheetalreddy/scene_classification
|
0f2ebf5b258ec2a19885394d134c27000978341d
|
[
"MIT"
] | null | null | null |
image_classifier.ipynb
|
sheetalreddy/scene_classification
|
0f2ebf5b258ec2a19885394d134c27000978341d
|
[
"MIT"
] | null | null | null |
image_classifier.ipynb
|
sheetalreddy/scene_classification
|
0f2ebf5b258ec2a19885394d134c27000978341d
|
[
"MIT"
] | null | null | null | 42.350208 | 609 | 0.588564 |
[
[
[
"<a name='main'></a>\n# **AI IN PRACTICE : HOW TO TRAIN AN IMAGE CLASSIFIER**\n### **Author: Sheetal Reddy**\n### **Contact : [email protected]**\n\n---\n**Introduction** \n\nThe training \"AI in Practice\" will give you, at a basic level, knowledge about how to train a pre-trained model, the pre-requisites, what techniques that are used and how to continue experimenting in the finetuning of the model. \n\nIn this training we are going to use image classification, open data-set, a pre-trained model and Colab* to train your model. \n\nA pre-trained model gives you the possibility to finetune an existing model trained on a large amount of data to better fit your purposes and by that also save you time. We will go through the more of the advantages later in the training. \n\nThere are many pre-trained models available for different purposes you can find some of them here: https://pytorch.org/docs/stable/torchvision/models.html\n\n**The objective**\n\nThe objective of this training is to give you enough knowledge to feel confident when entering an AI project. \nBy understanding the steps requerired to train a model you will an advantage when working in AI related project. \nThis by giving you both an theoretical knowledge but also by you being able to practice hands on - how to train a model. \n\n**Learning objectives**\n\nAfter the training you will be able to: \n* Describe the necessary steps to train a model\n* Use a Jupyter Notebook - Google Colab\n* Be able to train a model \n - Prepare datasets\n - Finetune pre-trained models\n - Visualize and quantify results\n\n**Pre-requisites**\n\nTo be able to get the most out of this training we expect you to be aware of: \n\n* The subject of AI \n* The importance of data \n\n**Training instructions**\n\nThe training is primarly performed individially but you will be placed in a group.\n\nThere will be some group questions and exercises but you are expected to performe the tasks your-self.\n\nThere is a Common Terminology section in the end of your Colab document. The concepts or wording available in the Common Terminology section will be marked with an (*) \n\nThere are also some links in the document if you want to learn more in the different sections\n\nLet us know if you have any questions or your group members – **but first google it!** \"Googling \" is one of the most common ways that data scientists work with understanding new techniques and ways of working.\n\n**Duration**\n* Expected time to finish the training is in total 3 hours. \n\n**The challenge** \n\n* The challenge in this training, is to finetune the pre-trained model to the use case and dataset - capable of **image classification**, see below for explanation. We will also later on in this training go through more on the benefits of working with a pre-trained model. \n* In this case you will work with improving/training the model using a data set containing different images including scenes.\n* The outcome of your work will result in a model that can classify \"nature scenes\" with a higher accuracy.\n\n**Image classification**\n\n\nSo why did we choose image classification for this training? \n* Image classification is a technique that is used to classify or predict the class of a specific object in an image. Image classification is one of the most important applications of computer vision. The main goal of this technique is to accurately identify the features in an image. 
Its applications range from classifying objects in self-driving cars to identifying blood cells in the healthcare industry, from identifying defective items in the manufacturing industry to build a system that can classify persons wearing masks or not.\n\n* Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do. To learn more: \nhttps://en.wikipedia.org/wiki/Computer_vision#:~:text=Computer%20vision%20is%20an%20interdisciplinary,human%20visual%20system%20can%20do.\n\n\n\n\n\n\n",
"_____no_output_____"
],
[
"# **Lets start the training !** \n\n**How to train a pre-trained model**\n\nTo train a model you usually need to plan according to the following steps below. The first three steps will set the foundation for what you will be able to train your model on and what results you will be able to expect. \n\nWe will use this structure and go through the steps in the training one by one.\n\n1. [Define adequately our problem (objective, desired outputs…).](#main) \n2. [Setup the computing environment](#computing_env)\n3. [Gather data](#computing_env) \n4. [Prepare the data](#data_preparation)\n5. [Train the model and choose a measure of success.](#training) - In this training the measure of succes is to have a model with a low error rate.\n6. [An overview of how a model learns](#results).",
"_____no_output_____"
],
[
"\n## **1.Define the problem**\n\nA problem well defined is a problem half-solved. \n\nUnderstanding the problem and developing the requirements isn't something you typically get right on the first attempt; this is often an iterative process where we initially define a set of rough requirements and refine the detail as we gain more information. \n\nBy asking and aswering the five questions you are in a good way to be able to define a problem. \n\n1. What is the nature of the problem that requires solving?\n2. Why does the problem require a solution?\n3. How should solutions to the problem be approached?\n4. What aspect of the problem will a deep learning model solve?\n5. How is the solution to the problem intended to be interacted with?\n\nSince we already have a defined problem or in this case a challenge: **To finetune a pre-trained model with the aim to classify \"scenes\" with a high accuracy.**\n\nWe will go ahead with setting the computing environment\n\n<a name='computing_env'></a>\n## **2. Setting up the computing environment**\n\n**Change the runtime setting of your colab notebook to GPU*:**\nGraphics Processing Units (GPUs), computing power, can significantly accelerate the training process for many deep learning models. Training models for tasks like image classification, video analysis, and natural language processing involves compute-intensive matrix multiplication and other operations that can take advantage of a GPU's massively parallel architecture.\n\nTraining a deep learning model that involves intensive compute tasks on extremely large datasets can take days to run on a single processor. However, if you design your program to offload those tasks to one or more GPUs, you can reduce training time to hours instead of days.\n\n**How to change your runtime setting to GPU* in your environment**\n\nThe first thing you want to do is to in this Colab page go to the menubar and follow the following steps \"Körning > Ändra körningstyp > Välj \"GPU\". This will set the google colab environment up with a free GPU that will be used to train your models. If you have CPU selected it will still work, only much slower.\n\n## **3. Gather the Dataset**\n\nGathering and preparing data requires great care. It usally involves taking below steps into considaration. \n\n1. Determine what information you want or need to Collect to solve the problem\n2. Set a timeframe for data collection\n3. Determine your data collection method\n4. Collect the data\n5. Analyze the data and implement your findings\n\nThe correct gathering of data is completely dependent on the problem you would like or need to solve. \n\n**Domain of the problem**\n\nDepending upon the domain of your problem, you may either use standard datasets collected by others or start collecting your own data. As you intend to use neural networks, then you should be aware that your dataset should be large, or else those techniques may not be very useful.\nWhat is the domain of your problem? Is it related to Computer Vision, Natural Language Processing, Sensor data, or some XYZ?\n\nIn our case its related to Computer Vision for that reason we need to gather a large set of images. There are various ways to gather image data and you need to specify what images that are relevant for solving the problem. \n\nIt is important to plan ahead on how much data one may acquire. You cannot just store in a hard-disk and save it in directories and assume you are ready to go. A lot of effort goes in data storage, organization, annotation and pre-processing. 
\n\n**Data Privacy** \n\nData privacy is an important part if individual people’s personal information is to be stored. Some data can be stored in simple text files but for other you may want to develop a database (or a light version) for faster access. If the data is too big to fit in memory, then big data techniques may need to be adopted (e.g. Hadoop framework). \n\nFor this training we chosen not to include any personal data and we have also chosen to a pretty small dataset so its possible to store in a laptop. You will learn more about the data for this training as we go along the training. \n\n\n**Instructions to add the dataset to your drive**\n\n1. Download the dataset from the dropbox folder by clicking here\nhttps://www.dropbox.com/s/gf6d2t1zbogjjgg/AI_IN_PRACTICE.zip?dl=1\n2. Upload the **AI_IN_PRACTICE.zip** file to your google drive. \n3. Make sure you have a file called **AI_IN_PRACTICE.zip** in your **Mydrive** (In swedish **Min enhet**) in google drive \n\nYou will learn about the data traits later in the training. \n\nNow you are all set to start running the code cells one by one ! The cells are they grey \"boxes\" that you will find throughout the Colab document. The fast and cool way to run a cell is to press shift+enter/ctrl + enter. \n\n\n\n",
"_____no_output_____"
]
],
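[
[
"#Optional sanity check (this cell is an addition, not part of the original material): confirm\n#that the GPU runtime is active before you continue. PyTorch comes preinstalled in Colab, so\n#this import works even before the setup cell below runs.\nimport torch\nprint('GPU available:', torch.cuda.is_available())",
"_____no_output_____"
]
],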
[
[
"#The code in this cell connects your google drive space to the jupyter notebook and sets up fastai in your colab environment.\n#This will enable the code in your jupyter notebook to access the dataset in your google drive. \n\n#Install fastbook(contains fastai setup) in the colab environment. \n!pip install -Uqq torchtext==0.8.1\n!pip install -Uqq fastbook\n\n#Importing fastai into the jupyter notebook\nimport fastbook\n\n#setup fastai and mounts your google drive space in /content/gdrive \nfastbook.setup_book()\n\nprint('Setup complete')",
"_____no_output_____"
]
],
[
[
"Now your google drive is mounted at /content/gdrive/MyDrive. It is only accesable through your Jupyter notebook for your view. \nClick on the above link to make sure your drive is mounted in the right location.\n\nIf you experince any error, let the organizer know.\n\n\nNow you should run the next cell to unzip/extract the dataset. ",
"_____no_output_____"
]
],
[
[
"#When pressing the run button the code in this cell will unzip the AI_IN_PRACTICE.zip dataset and create a scenes folder in your google drive in MyDrive.\n\n#The code below Unzips the AI_IN_PRACTICE.zip file\n!unzip -q '/content/gdrive/MyDrive/AI_IN_PRACTICE.zip' -d '/content/gdrive/MyDrive/'\nprint('The unzip is complete now and you can move to the next cell !')\n#This might take a while - Do not rerun the cell in between\n#When the code is executed correctly you will see this message \"The unzip is complete now and you can move to the next cell !\"\n#If you still do a rerun you will get the following message: \"replace /content/gdrive/MyDrive/AI_IN_PRACTICE/scenes/train/sea/1.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename:\" press \"A\" and press Enter",
"_____no_output_____"
]
],
[
[
"Now we have the unziped dataset in the location /content/gdrive/MyDrive/AI_IN_PRACTICE/scenes\n\nClick on the link above to make sure you have scenes folder in your MyDrive. You should be able to see the different folders in the scenes dataset such as models, train, train_medium and valid.\n\nIf you experience any error when you click the link, it means that the dataset is not at the right location.",
"_____no_output_____"
],
[
"**Import the necessary packages**\n\nIn python, which fastai* uses as a building block, we import packages (containing code) to our code using import statement as shown below for eg : import os \n\nIt is a convinient way to import all the open source packages that are interesting and important for solving the challenge. There are many open source packages being produced and which ones to use for the specific problem needs to be explored. \n\nThe importance of the packages we are using are described below in the code cell. We are going to work with the fastai libary which sits on top of PyTorch*. The fastai libary provides many useful functions that enable us to quickly and easily build neural networks (NN) and train our models. To learn more about NN please watch the move through this link: https://www.youtube.com/watch?v=bfmFfD2RIcg\n",
"_____no_output_____"
]
],
[
[
"\n#The code in this cell imports all the necesssary packages useful for training your model.\n\nfrom fastbook import *\n# imports fastai vision package to work with images \nfrom fastai.vision.all import *\n\n# imports fastai metrics like error_rate\nfrom fastai.metrics import error_rate # 1-accuracy\n\n#import numpy libraries for matrix manipulations \nimport numpy as np \n\nimport os\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.utils import shuffle\n\n#import plotting and visualization libraries\nimport matplotlib.pyplot as plt\n\n#import libraries to read and write images\nimport cv2 \n \nmatplotlib.rc('image', cmap='Greys')\nprint('Good Job ! You are on the right track')",
"_____no_output_____"
]
],
[
[
"<a name='data_preparation'></a>\n# **4. Data Preparation**\n\nData preparation is the process of cleaning and transforming raw data prior to processing and analysis. It is an important step prior to processing and often involves reformatting data, making corrections to data and the combining of data sets to enrich data.\n\nData preparation is often a lengthy undertaking for data professionals or business users, but it is essential as a prerequisite to put data in context in order to turn it into insights and eliminate bias resulting from poor data quality.\n\nFor example, the data preparation process usually includes standardizing data formats, enriching source data, and/or removing outliers.\n\nDataset preparation can be divided into five steps \n\n1. [Data Exploration](#data_exploration)\n2. [Data Cleaning](#data_cleaning) \n3. [Data Augmentation](#data_augmentation)\n4. [Data Splitting](#data_splitting)\n5. [Visualize data](#data_visualization) \n\n\n\n",
"_____no_output_____"
],
[
"<a name='data_exploration'></a>\n## **4.1. Data Exploration**\n\nIn the data exploration stage, we understand and try to answer some basic questions about the dataset. The question are listed below and are there for you to get a fast overview of the dataset you're handeling. In some cases this will give you enough information to understand if your dataset will be able to solve your problem or not. \n\n1. How big is the dataset? \n2. How many train files and validation/test files do we have?\n3. How many classes are there in the dataset ?\n4. How many data samples are there per class ? ",
"_____no_output_____"
],
[
"To be able to answer the above questions, we need to let our code know where our dataset is located. We do that by running the below code cell.",
"_____no_output_____"
],
[
"**Location of Scenes Dataset**",
"_____no_output_____"
]
],
[
[
"#The code in this cell adds the location where the data exists to a path variable.\npath = '/content/gdrive/MyDrive/AI_IN_PRACTICE/scenes'\nprint('Cell execution Completed')",
"_____no_output_____"
],
[
"#The code in this cell stores all the locations of the train and test images in the dataset. \n\n#gets image locations from scenes/train folder and save them to train_files \ntrain_files=get_image_files(path+'/train')\n\n#get image locations from scenes/valid folder and save them to test_files \ntest_files=get_image_files(path+'/valid')\n\nprint('Cell execution Completed')",
"_____no_output_____"
]
],
[
[
"If you need more information about the code in the code cells, Use doc() for more documentation. An example of how to use doc() is given below.",
"_____no_output_____"
]
],
[
[
"doc(get_image_files)",
"_____no_output_____"
]
],
[
[
"**Amount of files in the scenes dataset**",
"_____no_output_____"
]
],
[
[
"#The code in this cell prints the number of images used for training and test/validation. The numbers are fixed to the dataset.\nprint('Number of images used for training '+ str(len(train_files)))\nprint('Number of images used for validation '+ str(len(test_files)))",
"_____no_output_____"
]
],
[
[
"**Amount of Classes**",
"_____no_output_____"
]
],
[
[
"#The code in this cell prints the classes in our dataset\nlabels = os.listdir(path+'/train')\nprint(labels)",
"_____no_output_____"
],
[
"#The code in this cell counts the number of samples per class in the train dataset. Plotted blow in the chart. \ncounts = [0]*len(labels)\nfor i in train_files:\n for j in range(0,len(labels)):\n if labels[j] in str(i):\n counts[j]= counts[j]+1\nprint('Counts extracted')",
"_____no_output_____"
],
[
"#The code below defines a function for plotting the number of samples per class\ndef plot_bar_counts():\n # this is for plotting purpose\n index = np.arange(len(labels))\n plt.bar(labels, counts)\n plt.xlabel('labels', fontsize=5)\n plt.ylabel('No of data samples', fontsize=15)\n plt.xticks(index, labels, fontsize=15, rotation=30)\n plt.title('Train data analysis')\n plt.show()",
"_____no_output_____"
],
[
"#Plots the bar code of the training samples\nplot_bar_counts()",
"_____no_output_____"
]
],
[
[
"\n#[**4.2. Data Cleaning**](#data_cleaning) \n\nIn this training , we will do the data cleaning in the next pilot session. ",
"_____no_output_____"
],
[
"<a name='data_augmentation'></a>\n## **4.3. Data Augmentation**\n\nData augmentation is the technique of increasing the size of data used for training a model but also to create real life situations. For reliable predictions, the deep learning models often require a lot of training data, which is not always available. Therefore, the existing data is augmented in order to make a better generalized model.\n\nAlthough data augmentation can be applied in various domains, it's commonly used in computer vision. Some of the most common data augmentation techniques used for images are:\n\n**Position augmentation**\n* Scaling\n* Cropping\n* Flipping\n* Padding\n* Rotation \n* Translation \n* Affine tranformation (ex:warping)\n\n**Color augmentation**\n* Brightness\n* Contrast \n* Saturation \n* Hue\n\n**Fun fact**: Color augmentations are the basis for the **Instagram filters** we use to make us look picture perfect :) \n\nBelow we go through some of the techniques and visualize different augmentations using one sample image\n\n",
"_____no_output_____"
]
],
[
[
"import random\n\nnum = random.randint(0, len(train_files)-1)\n\n#Load a random image to visiaulize the image augmentations\nimg = PILImage(PILImage.create(train_files[num]))\n\n#show the image\nshow_image(img)",
"_____no_output_____"
]
],
[
[
"## **Random Crop Augmentaion**\n\nRandom crop is a data augmentation technique wherein we create a random subset of an original image. This helps our model generalize better because the object(s) of interest we want our models to learn are not always wholly visible in the image or the same scale in our training data.",
"_____no_output_____"
]
],
[
[
"# The code in this cell applies Randomized crop to the image loaded above\n'''\nRandomResizedCrop(n): Randomly crops an image to size (nxn)\n'''\nn=224\ncrop = RandomResizedCrop(n)\n_,axs = plt.subplots(3,3,figsize=(9,9))\nfor ax in axs.flatten():\n cropped = crop(img)\n show_image(cropped, ctx=ax);\n ",
"_____no_output_____"
]
],
[
[
"## **Crop pad**\n\nCrop Pad is an additional augmentaqtion technique to increase the scenes data set by padding an image.\n",
"_____no_output_____"
]
],
[
[
"# The code in this cell applies crop_pad to the image loaded above \n_,axs = plt.subplots(1,3,figsize=(12,4))\nfor ax,sz in zip(axs.flatten(), [150, 300, 500]):\n show_image(img.crop_pad(sz), ctx=ax, title=f'Size {sz}');",
"_____no_output_____"
]
],
[
[
"## **Rotation Augmentation**\n\nA source image is random rotated clockwise or counterclockwise by some number of degrees, changing the position of the object in frame. \nRandom Rotate is a useful augmentation in particular because it changes the angles that objects appear in your dataset during training. Random rotation can improve your model without you having to collect and label more data.",
"_____no_output_____"
]
],
[
[
"# The code in this cell applies given rotations the image.\n\ntimg = TensorImage(array(img)).permute(2,0,1).float()/255.\ndef _batch_ex(bs): return TensorImage(timg[None].expand(bs, *timg.shape).clone())\n\n'''\n\nthetas - Angles which the original image is rotated to.\n\nFor ex: thetas = [-15,0,15]\n\nDisplays three images rotated to -15 degrees, 0 degrees and 15 degrees respectively\n\n'''\nthetas = [-30,-15,0,15,30]\nimgs = _batch_ex(5)\ndeflt = Rotate()\nlisty = Rotate(p=1.,draw=thetas)\nshow_images( listy(imgs) ,suptitle='Manual List Rotate',titles=[f'{i} Degrees' for i in thetas])",
"_____no_output_____"
]
],
[
[
"## **Warping Augmentation**\n\nAppling warping technique adds distorted images to the scenes dataset. ",
"_____no_output_____"
]
],
[
[
"scales = [-0.4, -0.2, 0., 0.2, 0.4]\nimgs=_batch_ex(5)\nvert_warp = Warp(p=1., draw_y=scales, draw_x=0.)\nhorz_warp = Warp(p=1., draw_x=scales, draw_y=0.)\nshow_images( vert_warp(imgs) ,suptitle='Vertical warping', titles=[f'magnitude {i}' for i in scales])\nshow_images( horz_warp(imgs) ,suptitle='Horizontal warping', titles=[f'magnitude {i}' for i in scales])",
"_____no_output_____"
]
],
[
[
"**Flip**\n\nFlips a batch of images.",
"_____no_output_____"
]
],
[
[
"with no_random(32):\n imgs = _batch_ex(2)\n deflt = Flip()\n show_images( deflt(imgs) ,suptitle='Default Flip')\n ",
"_____no_output_____"
]
],
[
[
"Let's now batch all these augmentation/transformation together and apply them in the code cell below. \n\nWe also change the size of the images to make sure every image is of the same shape and size (normalize). This allows the GPU to apply the same instructions on all the images. \n\nWhen we normalize the images, the pixel channels standard deviations are reduced to help train models. If you do have problems training your model, one thing to do is check if you have normalized it. \n\n***NOTE: The types of data augmentations are very specific to the dataset. In our case we only rotate the image by a smaller degree to maintain representability of the real world. If we consider Medical images (Ex:cell Images), It is okay to rotate them by a larger degree(ex: 180 degrees)***\n",
"_____no_output_____"
]
],
[
[
"#tfms = None\n#The code in this cell collects all the data augmentations into one variable which can be applied to our dataset in the later stages.\ntfms =[*aug_transforms(size=224, min_scale=0.75, max_rotate=10, max_zoom=1.05, max_warp=.1, do_flip=True), Normalize.from_stats(*imagenet_stats)]\n",
"_____no_output_____"
],
[
"#if you are running on GPU instance , this code cell will work, Otherwise it will throw an error ! \n#if you are not running on GPU, comment the second line (y = y.to(device=torch.device(\"cuda:0\"))). \ny = _batch_ex(9)\ny = y.to(device=torch.device(\"cuda:0\"))\nfor t in tfms: y = t(y, split_idx=0)\n_,axs = plt.subplots(1,5, figsize=(12,3))\nfor i,ax in enumerate(axs.flatten()):\n show_image(y[i], ctx=ax)\n",
"_____no_output_____"
]
],
[
[
"<a name='data_splitting'></a>\n## **4.4 Data Splitting**\n\nNow its time to split your data for training and validation. The training data usually contains 70% of the image dataset and the trainingValidation dataset the remaining 30%. \n\nRun the code below to perform the splitting. ",
"_____no_output_____"
]
],
[
[
"#The code in this cell loads the whole train and valid images into a data variable. Also applies the tfms variable that we created in the previous cells. \nnp.random.seed(42)\n\n'''\nThe method below loads train and valid subfolders in the code (data =)\n\ntrain : name of the train subfolder \nvalid : name of the valid subfolder\nitem_tfms : transforms performed on the individual image\nbatch_tfms : transforms performed on the batch \nbs : batch size\n \n'''\ndata = ImageDataLoaders.from_folder(path,train='train', valid ='valid', item_tfms=Resize(224), batch_tfms=tfms, bs=10)",
"_____no_output_____"
]
],
[
[
"Before we move on to next code cell, we need to be clear with the below question. Believe me, Its Important ! \n\n**What Is Batch Size?** \n\nTo refresh you menemory please look at the video explaining NN here: https://www.youtube.com/watch?v=bfmFfD2RIcg\n\n* The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.\n\n* Think of a batch as a for-loop iterating over one or more samples and making predictions. At the end of the batch, the predictions are compared to the expected output variables and an error is calculated. From this error, the update algorithm is used to improve the model, e.g. move down along the error gradient.\n\n* A training dataset can be divided into one or more batches.\n* Batch Size(bs) can be changed in this code cell [here](#data_splitting). It's value is currently set to 10.\n\nTo get more information about a Batch Size please follow the link: https://www.youtube.com/watch?v=U4WB9p6ODjM\n",
"_____no_output_____"
],
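[
"A small arithmetic sketch (an addition to the original material, assuming the `train_files` list from the data exploration step and the `bs=10` used above): one epoch consists of roughly this many batches, and therefore this many weight updates:\n\n```python\nimport math\nbs = 10  # the batch size passed to ImageDataLoaders above\nprint(math.ceil(len(train_files) / bs), 'batches per epoch (roughly)')\n```",
"_____no_output_____"
],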
[
"## **4.5 Visualize Data**\n\nBy Visualizing the data you can confirm that you are on the right track e.g. regarding the labeling \n\nDo your images match the correct labels? \nIf yes, then you have succeeded ! ",
"_____no_output_____"
]
],
[
[
"#The below line of code shows a random batch of images \ndata.show_batch(figsize=(10,10))",
"_____no_output_____"
]
],
[
[
"<a name='training'></a>\n\n# **5.Training/Fine-Tuning the model using Transfer Learning**\n\n**Welcome back!**\n\nNow we will start with fine-tuning of our pretrained model. This means that we are building a model which will take images as input and will output the predicted probability for each of the categories, in this case, it will get 6 probabilities and class with the maximun probability is chosen as the label. For this task we will use a technique called Transfer Learning. To learn more about transfer learning please follow this link: https://www.youtube.com/watch?v=5T-iXNNiwIs\n\n\n**What is Transfer Learning?**\n\n\n* Transfer learning is a technique where you use a model trained on a very large dataset (usually ImageNet in computer vision) and then adapt it to your own dataset.\n\n* The idea is that the model has learned to recognize many features on all of this data, like ImageNet, and that you will benefit from this knowledge, especially if your dataset is small. \n\n* In practice, you need to change the last part of the model to be adapted to your own number of classes. \n\n* Most convolutional models end with a few linear layers (a part we will call the head).\n\n* The last convolutional layer will have analyzed features in the image that went through the model, and the job of the head is to convert those in predictions for each of your classes. \n\n* In transfer learning one keeps all the convolutional layers (called the body or the backbone of the model) with their weights pretrained on ImageNet but will define a new head initialized randomly.\n\n**Two-Phase Training of the model**\n* We will train the model in two phases: first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data). In the second phase we unfreeze the layers of the backbone (gradually if necessary) and fine-tune the whole model (possibly using differential learning rates).\n\n\n\n\n ",
"_____no_output_____"
],
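[
"The snippet below is only a schematic summary of the two-phase recipe described above; the same fastai calls are executed for real further down in this notebook, and the learning rates shown here are placeholders:\n\n```python\n# phase 0: build a learner with a pretrained body and a randomly initialized head\nlearn = cnn_learner(data, models.resnet34, loss_func=CrossEntropyLossFlat(), metrics=[error_rate, accuracy])\n\n# phase 1: body frozen (the default), train only the head\nlearn.fit_one_cycle(6)\n\n# phase 2: unfreeze everything and fine-tune with discriminative learning rates\nlearn.unfreeze()\nlearn.fit_one_cycle(5, lr_max=slice(1e-6, 1e-4))\n```",
"_____no_output_____"
],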
[
"For this training we have chosen a pretrained model called resnet34, it has previously been trained on 1,5 million of images. This means that we don't have to start with a model that knows nothing, we start with a model that knows something about recognizing images already. The 34 stands for the number of layers in the network, a smaller model trains faster. There is a bigger version is called resnet50. \n\nWith below code our model will be able to train with the resnet34.",
"_____no_output_____"
]
],
[
[
"#The code in this cell will use a cnn_learner method. With this line of code we tell a learner to create a cnn model for us, in this case it's a resnet34. \n \n#The cnn_learner method helps you to automatically get a pretrained model from a given architecture, in this case resnet34\nlearn = cnn_learner(data, models.resnet34, loss_func=CrossEntropyLossFlat(), metrics=[error_rate, accuracy])\n",
"_____no_output_____"
]
],
[
[
"### **Wait !!!**\n\nThere seems to be a lot of terms in the code that are complicated in the previous cell. Let's review each of them a bit \n\n\n* **CNN** : Convolutional Neural Networks are a class of neural networks that are widely used in the areas of images recognition, images classifications. Objects detections, recognition faces etc. A convolution is the basic operation of a CNN. For more explanation, watch the below video.\nhttps://www.youtube.com/watch?v=YRhxdVk_sIs&t=419s\n\n* **Cross-Entropy Loss** : Cross-entropy loss is a loss function used for this dataset. It has two benefits:\n\n> 1. It works even when our dependent variable has more than two categories.\n> 2. It results in faster and more reliable training.\n\n* **Error rate**: \n error_rate = 1 - accuracy \n accuracy = no of correctly classified samples / all samples\n\n ",
"_____no_output_____"
],
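[
"To make the two definitions above concrete, here is a minimal numeric sketch (an addition with made-up scores, not part of the original notebook):\n\n```python\nimport torch\nfrom fastai.vision.all import CrossEntropyLossFlat, error_rate\n\npreds = torch.tensor([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])  # raw model scores for 2 samples, 3 classes\ntargets = torch.tensor([0, 2])                            # the true classes\nprint(CrossEntropyLossFlat()(preds, targets).item())  # cross-entropy loss on the raw scores\nprint(error_rate(preds, targets).item())              # 1 - accuracy: one of two argmax predictions is wrong, so 0.5\n```",
"_____no_output_____"
],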
[
"The below code-cell shows the detailed architecture of the deep neural network model(in our case resnet34) we are training. Knowing the architecture of a DNN(deep neural network) is useful in designing better neural network architectures for more advanced usecases.",
"_____no_output_____"
]
],
[
[
"#The code in this cell shows the architecture of the model( in our case CNN) that is being trained.\nlearn.model",
"_____no_output_____"
]
],
[
[
"## **Phase 1: Finetune the head of the model**\n\nNow we enter the first phase of the training which means that we first we freeze the body weights and only train the head (to convert those analyzed features into predictions for our own data). We will train our models by letting it cycle through all our data 6 times. The number 6 is the number of times we let the model go through all the data. We can see the training loss which is telling us how much is the model learning from the data. The validation loss tells us how generalizable is the model.\n\nIn both the cases, training and validation loss, it's good to have a decreasing trend.\n\n1 cycle = 1 epoch\n\nIt will take sometime to train your model.\n\nSit and relax after running the below cell ! :) You did a great job !\n\nOr you can read [here](#cycles) on how to choose the number of cycles/epochs.",
"_____no_output_____"
]
],
[
[
"#The code in this cell will run the training job for 6 epochs.\nlearn.fit_one_cycle(6)",
"_____no_output_____"
]
],
[
[
"Ideally if your model is learning something, you should see a certain trend. Your train_loss and valid_loss and error_rate should be decreasing while accuracy should be increasing.\n\n",
"_____no_output_____"
]
],
[
[
"#Plots the loss for both training and validation dataset\nlearn.recorder.plot_loss()",
"_____no_output_____"
],
[
"#The code in this code cell is saving the model to the disk with name stage-1\nlearn.save('stage-1')",
"_____no_output_____"
]
],
[
[
"Observe the decreasing trend in the plots above !!",
"_____no_output_____"
],
[
"\n<a name='cycles'></a>\n### **How do we select the number of epochs?**\n\n\n* Often you will find that you are limited by time, rather than generalization and accuracy, when choosing how many epochs to train for. So your first approach to training should be to simply pick a number of epochs that will train in the amount of time that you are happy to wait for. Then look at the training and validation loss plots, as shown above, and in particular your metrics, and if you see that they are still getting better even in your final epochs, then you know that you have not trained for too long. In this situation you can increase the number of epochs you are training for.\n\n* If you have the time to train for more epochs, you may also want to instead use that time to train more parameters—that is, use a deeper architecture.\n\nNow we successfully finetuned our model. In order not to lose our progress, let's save our trained model in preset location. The model will be saved on your google drive at /content/gdrive/MyDrive/scenes/models",
"_____no_output_____"
],
[
"## **Phase 2: Unfreezing and fine-tuning**",
"_____no_output_____"
],
[
"As mentioned above, training is a two-phase process. In the first training, we train only last layer of the model. It’ll never overfit and will give good results, but to really make the best use of the model, we unfreeze and fine tune all the layers in the model to train it better.\n\nFinetuning all the layers of the model let's the model weights of all the layers finetuned to the features of the scenes dataset. This makes the model perform better on the scenes dataset.",
"_____no_output_____"
]
],
[
[
"#The code in this code cell unfreezes and trains the whole resnet34 model. We now allow for the whole model to be trained, not just the last layer. \nlearn.unfreeze()",
"_____no_output_____"
]
],
[
[
"**Finding the best learning rate**\n\nFinding a good learning rate is one important problem faced by the machine learning community. Learning rate decides how fast should the model weights be updated. It is mostly trial and error based but fastai has come up with a tool called learning rate finder which can give us the most appropriate learning rate. \n\nFor a more intuitive explanation on how the learning rate finder works, refer to the below link\n(https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html)\n\nThe below cell plots a curve showing the learning late versus loss. \n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"#The code in the code cell runs the learning rate finder provided by fastai\nlearn.lr_find()",
"_____no_output_____"
]
],
[
[
"Now you see a text above the plot which suggests a learning rate range. Change the lr_min value in the code cell below to the suggested lr_min value in the plot.\n\nFor example if you the Suggested LRs are given as below :\n\nSuggestedLRs(lr_min=0.004786301031708717, lr_steep=0.0014454397605732083)\n\nThen, change the lr_min value to 0.0047 below in the code cell.\n",
"_____no_output_____"
]
],
[
[
"#Change the value of lr_min to the value suggested in the previous plot.\nlr_min = 1e-4",
"_____no_output_____"
]
],
[
[
"Now, we train the model again after unfreezing all the layers of the pretrained model and also using the learning rate from the learning rate finder.",
"_____no_output_____"
]
],
[
[
"#The code in the code cell here runs a training for 5 epochs.\nlearn.fit_one_cycle(5, lr_max=slice(1e-6,lr_min))",
"_____no_output_____"
]
],
[
[
"Now we successfully finished phase-2 training of our model.\n\n In order not to lose our progress, let's save our trained model in preset location. The model will be saved on your google drive at /content/gdrive/MyDrive/scenes/models",
"_____no_output_____"
]
],
[
[
"#The code in this cell is saving the model to the disk with name stage-2\nlearn.save('stage-2')",
"_____no_output_____"
]
],
[
[
"<a name='results'></a>\n# **Results Intepretation and Analysis**\n\n*Now comes the most interesting part!*\n\n\n\nWe will first see which were the categories that the model was most confused with. We will try to see if what the model predicted is reasonable or not. Furthermore, we will plot a confusion matrix where we can see and learn more about the mistakes that the model made. We will explain the confusion matrix a bit further down. ",
"_____no_output_____"
]
],
[
[
"#The code in this cell when exected performs an analysis of the model performance on all the classes. The results of the analysis are shown in the next code cells.\ninterp = ClassificationInterpretation.from_learner(learn)\n\nlosses,idxs = interp.top_losses()\n\nprint('Interpretation and Analysis of Results done ! ')",
"_____no_output_____"
],
[
"# The code in this code cell shows some sample images, actual ground truth used for training and the predicted label.\n# If the predicted label and the ground truth match, the labels are shown in green.\n# If the predicted label and the ground truth do not match, the labels are shown in red.\nlearn.show_results()",
"_____no_output_____"
]
],
[
[
"So, one of the most interesting things we can do is called plot top losses. What this does is plot out when the model was very certain about a certain class, but was wrong. This means you are going to have a high loss. In other words; the model was confident about an answer, but answered wrong. The title of each image shows: prediction, actual, loss, probability of actual class. ",
"_____no_output_____"
]
],
[
[
"#The code in this cell shows the images the model is most confused on.\n\n'''For every image, it shows \n1. Prediction: The label predicted by the model.\n2. Actual: The actual label in the dataset.\n3. Loss : The cross entropy loss of the image. More loss means the model is very certain about a wrong prediction.\n4. Probability : How certain is the model's prediction\n\n'''\ninterp.plot_top_losses(9, figsize=(15,11))",
"_____no_output_____"
]
],
[
[
"\n\nThe confusion matrix is a way to visualuize your results and get an understanding for where your model makes mistakes and how frequent they are.\n\n\nThe confusion matrix so interesting that we want everyone to understand it properly. We gather in the main group to discuss it.\n \n\n If you see that people are still working, grab a coffee and come back ! :)",
"_____no_output_____"
]
],
[
[
"interp.plot_confusion_matrix(figsize=(12,12), dpi=60)",
"_____no_output_____"
]
],
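[
[
"#A tiny illustration (an addition, with hypothetical labels) of how a confusion matrix is built.\n#confusion_matrix was already imported from sklearn.metrics at the top of this notebook.\ny_true = ['glacier', 'mountain', 'glacier', 'sea']\ny_pred = ['mountain', 'mountain', 'glacier', 'sea']\nprint(confusion_matrix(y_true, y_pred, labels=['glacier', 'mountain', 'sea']))\n#rows = actual class, columns = predicted class; off-diagonal entries are the mistakes",
"_____no_output_____"
]
],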
[
[
"The most confused grabs the most common wrong predictions out of the confusion matrix. This can allow you for example as a domain expert, to understand based on your expertise, is this something that the model should be confused about. We can all understand that a glacier in many cases may be easy to confuse with mountains as glaciers many times exist in mountains. \n",
"_____no_output_____"
]
],
[
[
"#The code in this code cell gives us the classes on which the model is confused in descending order.\n'''\nFor example:\n('glacier', 'mountain', 131)\n\nWhat we can infer from the above line is that 131 glacier images have been predicted as mountain images.\n\n'''\ninterp.most_confused(min_val=10)",
"_____no_output_____"
]
],
[
[
"## **Let's see if you can get better Accuracy ! Try it out**\n\n\n<a name='data_cleaning'></a>\n## **Data Cleaning**\n\nOops ! seems like the organizers have mixed up two datasets in rush :P. \n\n Can you try to clean it and see if that gives any accuracy gains ?\n\n **TIP** : The mix up happened with mostly the glacier and building classes.\n\n Other suggestions which might help in the accuracy gain:\n\n* Use the train_medium dataset in /content/gdrive/MyDrive/AI_IN_PRACTICE/scenes provided which has more data\n* Increase the batch size ",
"_____no_output_____"
],
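[
"One possible starting point for the cleaning step is sketched below (an addition to the original material: `ImageClassifierCleaner` ships with fastai and shows the highest-loss images per class so you can relabel or delete mislabeled ones interactively):\n\n```python\nfrom fastai.vision.widgets import ImageClassifierCleaner\ncleaner = ImageClassifierCleaner(learn)  # builds an interactive widget from the trained learner\ncleaner                                  # display it, then act on cleaner's selections\n```",
"_____no_output_____"
],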
[
"\n\n---\n\n**Congratulations!!**\n\n---\n\n\n\nYou have completed the training :) \n\nPlease return to the main group. \nPlease be ready to let us know your error rate.",
"_____no_output_____"
],
[
"# **Common Terminology used in this training**\n\n* **CPU**: A central processing unit, also called a central processor, main processor or just processor, is the electronic circuitry within a computer that executes instructions that make up a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.\n\n* **GPU**: A graphics processing unit, is a specialized, electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel.\n\n* **fastai** : is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. To learn more follow this link: https://docs.fast.ai/\n\n* **PyTorch** : Is a Python-based scientific computing and deep learning framework. It's a replacement for NumPy to use the power of GPUs. It's a deep learning research platform that provides maximum flexibility and speed.\n\n* **Google Colab** : At this moment you are in a google colab environment and you will be using this platform to run code and start learning about AI. Colab is a cloud based working environment that allows to to collaborate and train your models. A great environment to try things out and test. \n\n* **Python** : Python is an interpreted programming language currently being used for any machine learning projects. Many of the open source Machine learning packages are extensively available in python only because of which it became a go-to language for Machine learning prototyping.\n\n* **epoch** : An epoch refers to one cycle of training through the full training dataset.\n\n* **Imagenet**: ImageNet is a dataset consisting of 1.3 million images of various sizes around 500 pixels across, in 1,000 categories, which took a few days to train\n\n* **Pretrained model** : The model that has been trained from scratch on a very large dataset(usually ImageNet in computer vision) is called the pretrained model. To learn more about pretrained models, check the link below.\nhttps://towardsdatascience.com/how-do-pretrained-models-work-11fe2f64eaa2",
"_____no_output_____"
],
[
"## **Acknowledgements**\n\n1. A huge thanks to Fastai for providing a framework for fast prototyping.\n2. Thanks to Kaggle and Intel for proving the scenes classification dataset",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a7e7435cd941423c92da4300bd2f5cbfd904ba9
| 48,610 |
ipynb
|
Jupyter Notebook
|
models/BertV3 (1).ipynb
|
mayankshouche/DSLabFinalProject
|
ee42f0f8294e69d5d0bb4716486d4eb4242f401e
|
[
"MIT"
] | 1 |
2021-04-16T19:13:11.000Z
|
2021-04-16T19:13:11.000Z
|
models/BertV3 (1).ipynb
|
sunnykharel/DSLabFinalProject
|
ee42f0f8294e69d5d0bb4716486d4eb4242f401e
|
[
"MIT"
] | null | null | null |
models/BertV3 (1).ipynb
|
sunnykharel/DSLabFinalProject
|
ee42f0f8294e69d5d0bb4716486d4eb4242f401e
|
[
"MIT"
] | 1 |
2021-04-16T19:12:50.000Z
|
2021-04-16T19:12:50.000Z
| 48,610 | 48,610 | 0.643756 |
[
[
[
"!pip install transformers datasets tweet-preprocessor ray[tune] hyperopt",
"Requirement already satisfied: transformers in /usr/local/lib/python3.6/dist-packages (4.0.0)\nRequirement already satisfied: datasets in /usr/local/lib/python3.6/dist-packages (1.1.3)\nRequirement already satisfied: tweet-preprocessor in /usr/local/lib/python3.6/dist-packages (0.6.0)\nRequirement already satisfied: ray[tune] in /usr/local/lib/python3.6/dist-packages (1.0.1.post1)\nRequirement already satisfied: hyperopt in /usr/local/lib/python3.6/dist-packages (0.1.2)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)\nRequirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.8)\nRequirement already satisfied: tokenizers==0.9.4 in /usr/local/lib/python3.6/dist-packages (from transformers) (0.9.4)\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages (from transformers) (0.0.43)\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from datasets) (1.1.4)\nRequirement already satisfied: multiprocess in /usr/local/lib/python3.6/dist-packages (from datasets) (0.70.11.1)\nRequirement already satisfied: pyarrow>=0.17.1 in /usr/local/lib/python3.6/dist-packages (from datasets) (2.0.0)\nRequirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from datasets) (0.3.3)\nRequirement already satisfied: xxhash in /usr/local/lib/python3.6/dist-packages (from datasets) (2.0.0)\nRequirement already satisfied: prometheus-client>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.9.0)\nRequirement already satisfied: click>=7.0 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (7.1.2)\nRequirement already satisfied: jsonschema in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (2.6.0)\nRequirement already satisfied: aioredis in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (1.3.1)\nRequirement already satisfied: opencensus in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.7.11)\nRequirement already satisfied: aiohttp in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (3.7.3)\nRequirement already satisfied: msgpack<2.0.0,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (1.0.0)\nRequirement already satisfied: grpcio>=1.28.1 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (1.33.2)\nRequirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (3.13)\nRequirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (3.12.4)\nRequirement already satisfied: redis<3.5.0,>=3.3.2 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (3.4.1)\nRequirement already satisfied: colorama in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.4.4)\nRequirement already satisfied: colorful in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.5.4)\nRequirement 
already satisfied: aiohttp-cors in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.7.0)\nRequirement already satisfied: gpustat in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.6.0)\nRequirement already satisfied: google in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (2.0.3)\nRequirement already satisfied: py-spy>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.3.3)\nRequirement already satisfied: tensorboardX; extra == \"tune\" in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (2.1)\nRequirement already satisfied: tabulate; extra == \"tune\" in /usr/local/lib/python3.6/dist-packages (from ray[tune]) (0.8.7)\nRequirement already satisfied: networkx in /usr/local/lib/python3.6/dist-packages (from hyperopt) (2.5)\nRequirement already satisfied: pymongo in /usr/local/lib/python3.6/dist-packages (from hyperopt) (3.11.1)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from hyperopt) (0.16.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from hyperopt) (1.4.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from hyperopt) (1.15.0)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.11.8)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.17.0)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas->datasets) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->datasets) (2018.9)\nRequirement already satisfied: hiredis in /usr/local/lib/python3.6/dist-packages (from aioredis->ray[tune]) (1.1.0)\nRequirement already satisfied: async-timeout in /usr/local/lib/python3.6/dist-packages (from aioredis->ray[tune]) (3.0.1)\nRequirement already satisfied: google-api-core<2.0.0,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from opencensus->ray[tune]) (1.16.0)\nRequirement already satisfied: opencensus-context==0.1.2 in /usr/local/lib/python3.6/dist-packages (from opencensus->ray[tune]) (0.1.2)\nRequirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.6/dist-packages (from aiohttp->ray[tune]) (1.6.3)\nRequirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.6/dist-packages (from aiohttp->ray[tune]) (5.1.0)\nRequirement already satisfied: typing-extensions>=3.6.5 in /usr/local/lib/python3.6/dist-packages (from aiohttp->ray[tune]) (3.7.4.3)\nRequirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.6/dist-packages (from aiohttp->ray[tune]) (20.3.0)\nRequirement already satisfied: idna-ssl>=1.0; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from aiohttp->ray[tune]) (1.1.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.8.0->ray[tune]) 
(50.3.2)\nRequirement already satisfied: nvidia-ml-py3>=7.352.0 in /usr/local/lib/python3.6/dist-packages (from gpustat->ray[tune]) (7.352.0)\nRequirement already satisfied: blessings>=1.6 in /usr/local/lib/python3.6/dist-packages (from gpustat->ray[tune]) (1.7)\nRequirement already satisfied: psutil in /usr/local/lib/python3.6/dist-packages (from gpustat->ray[tune]) (5.4.8)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from google->ray[tune]) (4.6.3)\nRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx->hyperopt) (4.4.2)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[tune]) (1.52.0)\nRequirement already satisfied: google-auth<2.0dev,>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2.0.0,>=1.0.0->opencensus->ray[tune]) (1.17.2)\nRequirement already satisfied: contextvars; python_version >= \"3.6\" and python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from opencensus-context==0.1.2->opencensus->ray[tune]) (2.4)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus->ray[tune]) (4.1.1)\nRequirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus->ray[tune]) (4.6)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus->ray[tune]) (0.2.8)\nRequirement already satisfied: immutables>=0.9 in /usr/local/lib/python3.6/dist-packages (from contextvars; python_version >= \"3.6\" and python_version < \"3.7\"->opencensus-context==0.1.2->opencensus->ray[tune]) (0.14)\nRequirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= \"3\"->google-auth<2.0dev,>=0.4.0->google-api-core<2.0.0,>=1.0.0->opencensus->ray[tune]) (0.4.8)\n"
],
[
"import pandas as pd\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport wordcloud\r\nimport preprocessor as p # tweet-preprocessor\r\nimport nltk\r\nimport re\r\nimport seaborn as sns\r\nimport torch\r\n\r\nfrom transformers import BertTokenizer, BertForSequenceClassification, AdamW, get_linear_schedule_with_warmup\r\nfrom sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix\r\nfrom sklearn.model_selection import train_test_split, StratifiedKFold\r\nfrom scipy.special import softmax\r\nfrom torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler\r\nfrom tqdm.notebook import tqdm\r\nfrom ray import tune\r\nfrom ray.tune import CLIReporter\r\nfrom ray.tune.schedulers import ASHAScheduler\r\nfrom ray.tune.suggest.hyperopt import HyperOptSearch",
"_____no_output_____"
],
[
"from google.colab import drive\r\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"# dataset_dem = pd.read_csv('/content/drive/MyDrive/democrat_tweets_v2.csv')\r\n# dataset_gop = pd.read_csv('/content/drive/MyDrive/republican_tweets_v2.csv')\r\n\r\n# dataset_dem[\"label\"] = \"Democrat\"\r\n# dataset_gop[\"label\"] = \"Republican\"\r\n\r\n# dataset_final = pd.concat([dataset_dem, dataset_gop])\r\n# dataset_final.reset_index(drop=True, inplace=True)\r\ndataset_final = pd.read_csv(\"/content/drive/MyDrive/Copy of 2020_labled_political_tweets.csv.zip\")\r\n# dataset_final=dataset_final[(dataset_final[\"party\"].any()==\"D\")]\r\ndataset_final = dataset_final.iloc[0:2000]\r\nfor index, row in dataset_final.iterrows():\r\n if str(row['party']) !=\"D\":\r\n if str(row[\"party\"])!=\"R\":\r\n dataset_final.drop(index, inplace=True)\r\ndataset_final.head()\r\n",
"_____no_output_____"
],
[
"dataset_final.count",
"_____no_output_____"
],
[
"# dataset=pd.read_csv(\"/content/drive/MyDrive/Copy of 2020_labled_political_tweets.csv.zip\")\r\n# X=dataset.drop([\"party\"],axis=1)\r\n# y = dataset[[\"party\"]]\r\n# X_train, X_val, y_train, y_val = train_test_split(X, \r\n# y, \r\n# test_size=0.20, \r\n# random_state=42)",
"_____no_output_____"
],
[
"LABEL_MAP = {\r\n \"D\": 0,\r\n \"R\": 1\r\n}\r\n\r\ndef buildLabels(row):\r\n return LABEL_MAP.get(row[\"party\"])\r\n\r\n# def cleanTweet(row):\r\n# tweet = row[\"text\"]\r\n# tweet = str(p.clean(tweet))\r\n# tweet = re.sub(r'[^\\w\\s]', '', tweet) # punctuation\r\n# tweet = re.sub(\"^\\d+\\s|\\s\\d+\\s|\\s\\d+$\", \" \", tweet) # numbers\r\n# return tweet\r\n\r\n \r\ndataset_final[\"party\"] = dataset_final.apply(lambda row: buildLabels(row), axis=1)\r\n# dataset_final[\"clean_text\"] = dataset_final.apply(lambda row: cleanTweet(row), \r\n # axis=1)",
"_____no_output_____"
],
[
"dataset_final.head()",
"_____no_output_____"
],
[
"dataset_clf = dataset_final[[\"text\", \"party\"]]\r\ndataset_clf.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"X_train, X_val, y_train, y_val = train_test_split(dataset_clf.index.values, \r\n dataset_clf.party.values, \r\n test_size=0.20, \r\n random_state=42, \r\n stratify=dataset_clf.party.values)\r\n\r\ndataset_clf['data_type'] = ['not_set']*dataset_final.shape[0]\r\n\r\ndataset_clf.loc[X_train, 'data_type'] = 'train'\r\ndataset_clf.loc[X_val, 'data_type'] = 'test'\r\n\r\ndataset_train = dataset_clf.loc[dataset_clf.data_type == 'train']\r\ndataset_test = dataset_clf.loc[dataset_clf.data_type == 'test']\r\n",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n import sys\n/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py:1763: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n isetter(loc, value)\n"
],
[
"dataset_train.head()",
"_____no_output_____"
],
[
"def get_dataloaders(data, batch_size):\r\n tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', \r\n do_lower_case=True)\r\n # tokenize train and test data so BERT can understand it\r\n encoded_data_train = tokenizer.batch_encode_plus(\r\n data[data.data_type=='train'].text.values, \r\n add_special_tokens=True, \r\n return_attention_mask=True, \r\n padding=True,\r\n max_length=64, \r\n return_tensors='pt'\r\n )\r\n\r\n encoded_data_test = tokenizer.batch_encode_plus(\r\n data[data.data_type=='test'].text.values, \r\n add_special_tokens=True, \r\n return_attention_mask=True, \r\n padding=True, \r\n max_length=64, \r\n return_tensors='pt'\r\n )\r\n\r\n\r\n # destructure out the input_ids, attention masks, and labels from tokenizer & encoder output\r\n input_ids_train = encoded_data_train['input_ids']\r\n attention_masks_train = encoded_data_train['attention_mask']\r\n labels_train = torch.tensor(data[data.data_type=='train'].party.values)\r\n\r\n input_ids_test = encoded_data_test['input_ids']\r\n attention_masks_test = encoded_data_test['attention_mask']\r\n labels_test = torch.tensor(data[data.data_type=='test'].party.values)\r\n\r\n train_data = TensorDataset(input_ids_train, attention_masks_train, labels_train)\r\n test_data = TensorDataset(input_ids_test, attention_masks_test, labels_test)\r\n\r\n train_dataloader = DataLoader(train_data, \r\n sampler=RandomSampler(train_data), \r\n batch_size=batch_size)\r\n\r\n test_dataloader = DataLoader(test_data,\r\n sampler=SequentialSampler(test_data),\r\n batch_size=batch_size)\r\n \r\n return train_dataloader, test_dataloader",
"_____no_output_____"
],
[
"def auc_score(preds, labels):\r\n soft_preds = softmax(preds, axis=1) # logit -> probability\r\n if np.shape(preds)[1] > 2: # check for multi-class\r\n return roc_auc_score(labels, soft_preds, multi_class='ovr')\r\n else:\r\n soft_preds = soft_preds[:,1]\r\n return roc_auc_score(labels, soft_preds)\r\n\r\ndef acc_score_by_class(preds, labels):\r\n label_dict_inverse = {v: k for k, v in LABEL_MAP.items()} \r\n\r\n preds_flat = np.argmax(preds, axis=1).flatten()\r\n labels_flat = labels.flatten()\r\n\r\n for label in np.unique(labels_flat):\r\n y_preds = preds_flat[labels_flat==label]\r\n y_true = labels_flat[labels_flat==label]\r\n print(f'Class: {label_dict_inverse[label]}')\r\n print(f'Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\\n')",
"_____no_output_____"
],
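[
"# A tiny sanity check of auc_score above, on toy logits; for this perfectly separable\r\n# toy case the expected AUC is 1.0. The values here are illustrative assumptions.\r\nimport numpy as np\r\n_toy_preds = np.array([[2.0, -1.0], [0.5, 1.5], [-1.0, 2.0], [1.0, 0.0]])\r\n_toy_labels = np.array([0, 1, 1, 0])\r\nprint(auc_score(_toy_preds, _toy_labels))",
"_____no_output_____"
],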
[
"def evaluate(model, dataloader, device):\r\n model.eval()\r\n\r\n loss_val_total = 0\r\n predictions, true_vals = [], []\r\n \r\n for batch in dataloader:\r\n \r\n # convert data to CUDA\r\n batch = tuple(b.to(device) for b in batch)\r\n \r\n inputs = {\r\n 'input_ids': batch[0],\r\n 'attention_mask': batch[1],\r\n 'labels': batch[2],\r\n }\r\n\r\n with torch.no_grad(): \r\n outputs = model(**inputs) # get predictions\r\n \r\n loss = outputs[0]\r\n logits = outputs[1]\r\n loss_val_total += loss.item()\r\n\r\n logits = logits.detach().cpu().numpy()\r\n label_ids = inputs['labels'].cpu().numpy()\r\n predictions.append(logits)\r\n true_vals.append(label_ids)\r\n \r\n loss_val_avg = loss_val_total/len(dataloader) \r\n \r\n predictions = np.concatenate(predictions, axis=0)\r\n true_vals = np.concatenate(true_vals, axis=0)\r\n \r\n return loss_val_avg, predictions, true_vals",
"_____no_output_____"
],
[
"def train_and_hyperparam_search(config,\r\n model_init, # function to init a clean version of the net\r\n data, # data as Pandas array\r\n cv # rounds of cross-validation\r\n ):\r\n losses = []\r\n aucs = []\r\n skf = StratifiedKFold(n_splits=cv, shuffle=True)\r\n for train_idx, test_idx in skf.split(data.text, data.party):\r\n model = model_init()\r\n device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\n model.to(device)\r\n print(f\"Device: {device}\")\r\n\r\n optimizer = AdamW(model.parameters(),\r\n lr=config['lr'],\r\n eps=config['eps'],\r\n weight_decay=config['weight_decay'])\r\n \r\n data.loc[train_idx, 'data_type'] = 'train'\r\n data.loc[test_idx, 'data_type'] = 'test'\r\n \r\n train_dataloader, test_dataloader = get_dataloaders(data,\r\n config['batch_size'])\r\n\r\n for epoch in range(1, config['epochs']+1):\r\n model.train() # enter training mode\r\n loss_train_total = 0\r\n\r\n for batch in train_dataloader:\r\n model.zero_grad()\r\n \r\n # get CUDA data\r\n batch = tuple(b.to(device) for b in batch)\r\n \r\n inputs = {\r\n 'input_ids': batch[0],\r\n 'attention_mask': batch[1],\r\n 'labels': batch[2],\r\n }\r\n\r\n outputs = model(**inputs) # evaluate\r\n \r\n # for reference, we are using cross-entropy loss here,\r\n # as implemented in https://huggingface.co/transformers/_modules/transformers/modeling_bert.html\r\n loss = outputs[0]\r\n loss_train_total += loss.item()\r\n loss.backward() # do backprop\r\n\r\n torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)\r\n\r\n optimizer.step()\r\n \r\n \r\n loss_train_avg = loss_train_total/len(train_dataloader) \r\n print(f\"Training loss for epoch {epoch}: {loss_train_avg}\") \r\n \r\n val_loss, predictions, true_vals = evaluate(model, test_dataloader, device)\r\n auc = auc_score(predictions, true_vals)\r\n\r\n losses.append(val_loss)\r\n aucs.append(auc)\r\n\r\n tune.report(loss=np.mean(losses), auc=np.mean(aucs))",
"_____no_output_____"
],
[
"from functools import partial\r\n\r\ndef model_init():\r\n return BertForSequenceClassification.from_pretrained('bert-base-uncased',\r\n num_labels=2,\r\n output_attentions=False,\r\n output_hidden_states=False) \r\n\r\n \r\nconfig = {\r\n \"lr\": tune.choice([5e-5,3e-5,2e-5]),\r\n \"eps\": tune.loguniform(1e-10, 1e-7),\r\n \"weight_decay\": tune.loguniform(1e-10, 1e-5),\r\n \"batch_size\": tune.choice([4,8,16, 32]),\r\n \"epochs\": tune.choice([2, 3, 4])\r\n}\r\n\r\nscheduler = ASHAScheduler(\r\n metric=\"auc\",\r\n mode=\"max\",\r\n max_t=10,\r\n grace_period=1,\r\n reduction_factor=2\r\n)\r\n\r\nreporter = CLIReporter(metric_columns=[\"loss\", \"auc\", \"training_iteration\"])\r\nhyperopt_search = HyperOptSearch(metric=\"auc\", mode=\"max\")\r\n\r\nresult = tune.run(\r\n partial(train_and_hyperparam_search, model_init=model_init, data=dataset_clf, cv=3),\r\n resources_per_trial={\"cpu\": 2, \"gpu\": 1},\r\n config=config,\r\n num_samples=8,\r\n scheduler=scheduler,\r\n search_alg=hyperopt_search,\r\n progress_reporter=reporter\r\n)",
"2020-12-09 07:16:38,246\tWARNING experiment.py:274 -- No name detected on trainable. Using DEFAULT.\n2020-12-09 07:16:38,250\tINFO registry.py:65 -- Detected unknown callable for trainable. Converting to class.\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7e7abc591c3998a48a89d7331c7f9250953bae
| 6,188 |
ipynb
|
Jupyter Notebook
|
flaskweather.ipynb
|
erikanfox/WeatherAdvisor
|
f463611172b07886dbe7cc10f9c402ad6a49ab83
|
[
"CC0-1.0"
] | 1 |
2022-02-10T19:29:05.000Z
|
2022-02-10T19:29:05.000Z
|
flaskweather.ipynb
|
erikanfox/WeatherAdvisor
|
f463611172b07886dbe7cc10f9c402ad6a49ab83
|
[
"CC0-1.0"
] | null | null | null |
flaskweather.ipynb
|
erikanfox/WeatherAdvisor
|
f463611172b07886dbe7cc10f9c402ad6a49ab83
|
[
"CC0-1.0"
] | null | null | null | 37.50303 | 1,126 | 0.57256 |
[
[
[
"import pandas as pd\nfrom sklearn.tree import DecisionTreeClassifier # Import Decision Tree Classifier\nfrom sklearn.model_selection import train_test_split # Import train_test_split function\nfrom sklearn import metrics #Import scikit-learn metrics module for accuracy calculation\nfrom fastapi import FastAPI\nimport uvicorn\n\ndata = pd.read_csv(\"clothing_weather.csv\")\ndata\n\napp = FastAPI()",
"_____no_output_____"
],
[
"@app.get(\"/\")\nasync def root():\n \"\"\"Weather Advisor Welcome\"\"\"\n return {\"message\": \"Hello, welcome to Weather advisor! Enter a word to calculate it's score.\"}\n",
"_____no_output_____"
],
[
"@app.get(\"/weatheradvisor/{temp}/{rain}/{snow}\")\nasync def weatheradvisor(temp: int,rain:int,snow:int):\n y=predict(temp,rain,snow)\n message=getMessage(y[0], rain, snow)\n return \"You should wear {0}\".format(message))",
"_____no_output_____"
],
[
"async def predict(temp: int,rain:int,snow:int): \n data[\"rain\"] = data[\"rain\"].replace(\"no\", 0)\n data[\"rain\"] = data[\"rain\"].replace(\"yes\", 1)\n data[\"snow\"] = data[\"rain\"].replace(\"no\", 0)\n data[\"snow\"] = data[\"rain\"].replace(\"yes\", 1)\n\n feature_cols = ['temp_f','rain','snow']\n X = data[feature_cols] # Features\n y = data.overall # Target variabley\n\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)\n\n clf = DecisionTreeClassifier(criterion=\"entropy\", max_depth=4)\n\n # Train Decision Tree Classifer\n clf = clf.fit(X_train,y_train)\n\n #Predict the response for test dataset\n y_pred = clf.predict(X_test)\n\n print(\"Accuracy:\",metrics.accuracy_score(y_test, y_pred))\n\n y_pred = clf.predict([temp,rain,snow])\n #print the predicted outfit code\n return y_pred",
"_____no_output_____"
],
[
"def getMessage(pred, rain, snow):\n ans=\"\"\n outfit_code = {\n 1: \"a short sleeve shirt and shorts.\",\n 2: \"a short sleeve shirt and long pants.\",\n 3: \"a short sleeve shirt, shorts and a light jacket or sweatshirt.\",\n 4: \"a short sleeve shirt, long pants, and a light jacket or sweatshirt.\",\n 5: \"a long sleeve shirt, long pants, and a light jacket or sweatshirt.\",\n 6: \"a short sleeve shirt, long pants, and a heavy jacket.\",\n 7: \"a long sleeve shirt or sweater, long pants, and a heavy jacket.\",\n 8: \"a long sleeve shirt and shorts.\"\n }\n\n if pred in outfit_code:\n ans=ans+outfit_code[pred]\n else:\n return \"an error occurred\"\n \n if rain == 1:\n ans=ans+ \" You may also want a rain jacket, rain boots, and/or an umbrella.\"\n\n if snow == 1:\n ans=ans+ \" You should also bring a scarf, gloves, and snow boots!\"\n \n return ans\n",
"_____no_output_____"
]
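,
[
"# A minimal sketch of serving and querying this API; the module name 'flaskweather' and\n# the port are assumptions, not taken from this notebook.\n# Run from a shell: uvicorn flaskweather:app --reload --port 8000\n# Then, for example:\n# import requests\n# print(requests.get('http://127.0.0.1:8000/weatheradvisor/45/1/0').json())",
"_____no_output_____"
]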
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a7e7da871aa3d136383dfd1e950509f3d289e64
| 151,068 |
ipynb
|
Jupyter Notebook
|
nb/script.ipynb
|
gem763/qit
|
ad2559f012c9f91637dc5232ff4a4acc749c5d6a
|
[
"MIT"
] | null | null | null |
nb/script.ipynb
|
gem763/qit
|
ad2559f012c9f91637dc5232ff4a4acc749c5d6a
|
[
"MIT"
] | null | null | null |
nb/script.ipynb
|
gem763/qit
|
ad2559f012c9f91637dc5232ff4a4acc749c5d6a
|
[
"MIT"
] | null | null | null | 168.602679 | 27,308 | 0.893948 |
[
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"fin = pd.read_pickle('fin.pkl')\nmc = pd.read_pickle('mc.pkl')\ninfo = pd.read_pickle('info.pkl')",
"_____no_output_____"
]
],
[
[
"# 전략\n\n* input = 날짜\n* output = 종목별 투자비중",
"_____no_output_____"
]
],
[
[
"date = '2018-12-31' # input\nfisyear = 2017\nposition = fin['매출액'].xs(fisyear, level=1).nlargest(10)\nposition[:] = 1/len(position); position # output; [:] 빼면 될까 안될까",
"_____no_output_____"
]
],
[
[
"### date → fisyear",
"_____no_output_____"
]
],
[
[
"date = pd.Timestamp(date)\nif date.month >=6: \n fisyear = date.year - 1\nelse:\n fisyear = date.year - 2",
"_____no_output_____"
],
[
"def get_fisyear(date):\n date = pd.Timestamp(date)\n \n if date.month >=6: \n return date.year - 1\n else:\n return date.year - 2",
"_____no_output_____"
],
[
"def 매출상위(date, fin, n=10):\n fisyear = get_fisyear(date)\n position = fin['매출액'].xs(fisyear, level=1).nlargest(n)\n position[:] = 1/len(position)\n return position",
"_____no_output_____"
]
],
[
[
"# 벡테스터\n매 리밸런싱 마다 포지션을 잡고, 주기적으로 포지션 가치를 계산하면 된다",
"_____no_output_____"
]
],
[
[
"dates = mc.index[8:]",
"_____no_output_____"
],
[
"for date in dates:\n# print(date)\n pass",
"_____no_output_____"
],
[
"pos = {}\nnav = {}\nfor date in dates:\n pos[date] = 매출상위(date, fin)\n # nav[date] = 내 계좌의 가치기록, HOW?",
"_____no_output_____"
]
],
[
[
"### 내 계좌의 가치(NAV) 계산은 어떻게?\n* 전 리밸일 기준 포지션 전체가치 = nav_prev (given)\n* 전 리밸일 포지션별 = pos_prev (given)\n* 전 리밸일 포지션별 가치 = nav_prev * pos_prev\n* 전 리밸일 이후 포지션 가치변화 = 현재시총 / 전 리밸일 시총\n* 전 리밸일 포지션의 현재가치 = nav_prev * pos_prev * 현재시총 / 전 리밸일 시총\n* 현재 리밸일 기준 포지션 전체가치 = sum(전 리밸일 포지션의 현재가치)",
"_____no_output_____"
]
],
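[
[
"# A tiny worked example of the NAV update above, using two hypothetical assets A and B.\nimport pandas as pd\nnav_prev = 1.0\npos_prev = pd.Series({'A': 0.5, 'B': 0.5}) # weights at the previous rebalancing date\nmc_chg = pd.Series({'A': 1.10, 'B': 0.90}) # current mcap / mcap at previous rebalancing date\nnav_now = (nav_prev * pos_prev * mc_chg).sum() # 0.55 + 0.45 = 1.00\nnav_now",
"_____no_output_____"
]
],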
[
[
"pos = {}\nnav = {}\nfor i, date in enumerate(dates):\n pos[date] = 매출상위(date, fin)\n \n date_prev = dates[i-1]\n nav_prev = nav[date_prev]\n pos_prev = pos[date_prev]\n assets_prev = pos_prev.index\n mc_chg = mc.loc[date, assets_prev] / mc.loc[date_prev, assets_prev]\n nav_pos_prev = nav_prev * pos_prev * mc_chg\n nav[date] = nav_pos_prev.sum()",
"_____no_output_____"
],
[
"pos = {}\nnav = {}\nfor i, date in enumerate(dates):\n pos[date] = 매출상위(date, fin)\n \n if i==0:\n nav[date] = 1\n \n else:\n date_prev = dates[i-1]\n nav_prev = nav[date_prev]\n pos_prev = pos[date_prev]\n assets_prev = pos_prev.index\n mc_chg = mc.loc[date, assets_prev] / mc.loc[date_prev, assets_prev]\n nav_pos_prev = nav_prev * pos_prev * mc_chg\n nav[date] = nav_pos_prev.sum()",
"_____no_output_____"
],
[
"nav;",
"_____no_output_____"
],
[
"from IPython.core.debugger import set_trace",
"_____no_output_____"
],
[
"pos = {}\nnav = {}\n\ndef 내계좌는얼마(nav, pos, date, date_prev, mc):\n nav_prev = nav[date_prev]\n pos_prev = pos[date_prev]\n assets_prev = pos_prev.index\n mc_chg = mc.loc[date, assets_prev] / mc.loc[date_prev, assets_prev]\n nav_pos_prev = nav_prev * pos_prev * mc_chg\n return nav_pos_prev.sum()\n\nfor i, date in enumerate(dates):\n# set_trace()\n pos[date] = 매출상위(date, fin)\n \n if i==0:\n nav[date] = 1\n \n else:\n date_prev = dates[i-1]\n nav[date] = 내계좌는얼마(nav, pos, date, date_prev, mc)",
"_____no_output_____"
],
[
"pd.DataFrame({'Model':nav}).plot()",
"_____no_output_____"
]
],
[
[
"### BM 전략도 만들어보자\n시총상위 200개 종목을 시총가중",
"_____no_output_____"
]
],
[
[
"position = mc.loc[date].nlargest(200)\nposition / position.sum();",
"_____no_output_____"
],
[
"def BM(date, fin=None, mc=None, n=200):\n position = mc.loc[date].nlargest(n)\n position = position / position.sum()\n return position\n\ndef 매출상위(date, fin=None, mc=None, n=10):\n fisyear = get_fisyear(date)\n position = fin['매출액'].xs(fisyear, level=1).nlargest(n)\n position[:] = 1/len(position)\n return position",
"_____no_output_____"
],
[
"BM('2018-12-31', mc=mc);",
"_____no_output_____"
],
[
"pos = {}\nnav = {}\npos_bm = {}\nnav_bm = {}\nn = 10\n\nfor i, date in enumerate(dates):\n pos[date] = 매출상위(date, fin=fin, mc=mc, n=n)\n pos_bm[date] = BM(date, fin=fin, mc=mc)\n \n if i==0:\n nav[date] = 1\n nav_bm[date] = 1\n \n else:\n date_prev = dates[i-1]\n nav[date] = 내계좌는얼마(nav, pos, date, date_prev, mc)\n nav_bm[date] = 내계좌는얼마(nav_bm, pos_bm, date, date_prev, mc)",
"_____no_output_____"
],
[
"pd.DataFrame({'Model':nav, 'BM':nav_bm}).plot()",
"_____no_output_____"
]
],
[
[
"# 좀더 고급진 백테스터\n\n* 전략 = 설계도\n* 벡테스터 = Backtest(어떤전략, BM, DB, 기타옵션들...)\n* 백테스터.run()\n* 백테스터.plot() ...",
"_____no_output_____"
]
],
[
[
"class Backtest:\n def __init__(self, model=None, bm=None, fin=None, mc=None, n=10):\n self.model = model\n self.bm = bm\n self.fin = fin\n self.mc = mc\n self.n = n",
"_____no_output_____"
],
[
"bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)",
"_____no_output_____"
],
[
"bt.bm",
"_____no_output_____"
],
[
"import inspect\ninspect.getsource(bt.bm)",
"_____no_output_____"
],
[
"class Backtest:\n def __init__(self, model=None, bm=None, fin=None, mc=None, n=10):\n self.model = model\n self.bm = bm\n self.fin = fin\n self.mc = mc\n self.n = n\n \n # fin, mc, n, 내계좌는 얼마.를 self. 로\n # pos, nav, pos_bm, nav_bm를 self 저장\n def run(self):\n dates = mc.index[8:]\n pos = {}\n nav = {}\n pos_bm = {}\n nav_bm = {}\n\n for i, date in enumerate(tqdm_notebook(dates)):\n pos[date] = self.model(date, fin=self.fin, mc=self.mc, n=self.n)\n pos_bm[date] = self.bm(date, fin=self.fin, mc=self.mc)\n\n if i==0:\n nav[date] = 1\n nav_bm[date] = 1\n\n else:\n date_prev = dates[i-1]\n nav[date] = self.내계좌는얼마(nav, pos, date, date_prev)\n nav_bm[date] = self.내계좌는얼마(nav_bm, pos_bm, date, date_prev)\n \n self.pos = pos\n self.nav = nav\n self.pos_bm = pos_bm\n self.nav_bm = nav_bm\n \n \n # mc를 self.mc로\n def 내계좌는얼마(self, nav, pos, date, date_prev):\n nav_prev = nav[date_prev]\n pos_prev = pos[date_prev]\n assets_prev = pos_prev.index\n mc_chg = self.mc.loc[date, assets_prev] / self.mc.loc[date_prev, assets_prev]\n nav_pos_prev = nav_prev * pos_prev * mc_chg\n return nav_pos_prev.sum()\n \n \n def navs(self):\n return pd.DataFrame({'Model':self.nav, 'BM':self.nav_bm})\n \n # plot_perf() 추가\n def plot_perf(self):\n self.navs().plot()\n \n # 성과평가 추가\n def stats(self):\n _navs = self.navs()\n ndays = (_navs.index[-1]-_navs.index[0]).days\n ann_rtn = (_navs.iloc[-1]**(365/ndays)) - 1\n vol = _navs.pct_change().std() * (4**0.5)\n return pd.DataFrame({\n 'Annual return': ann_rtn, \n 'Volatility': vol, \n 'Sharpe': ann_rtn/vol\n })",
"_____no_output_____"
]
],
[
[
"### plot_perf() 추가",
"_____no_output_____"
]
],
[
[
"bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)\nbt.run()\nbt.plot_perf()",
"_____no_output_____"
]
],
[
[
"### tqdm 추가",
"_____no_output_____"
]
],
[
[
"from tqdm import tqdm_notebook",
"_____no_output_____"
],
[
"bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)\nbt.run()\nbt.plot_perf()",
"_____no_output_____"
]
],
[
[
"### 성과평가\n* (1+R)^(총연수) = 최종nav\n* R = (최종nav)^(1/총연수) - 1",
"_____no_output_____"
]
],
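[
[
"# A quick numeric check of the annualization formula above: a final nav of 1.21 after\n# exactly 2 years implies R = 1.21**(1/2) - 1 = 0.1, i.e. a 10% annual return.\nfinal_nav, years = 1.21, 2\nR = final_nav ** (1 / years) - 1\nprint(R)",
"_____no_output_____"
]
],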
[
[
"bt = Backtest(model=매출상위, bm=BM, fin=fin, mc=mc, n=10)\nbt.run()\nbt.stats()",
"_____no_output_____"
]
],
[
[
"# 새로운 전략 생성",
"_____no_output_____"
]
],
[
[
"def 시총높고PB저평가(date, fin=None, mc=None, n=10):\n fisyear = get_fisyear(date)\n marketcap = mc.loc[date].nlargest(100)\n univ = marketcap.index\n bv = fin['자본총계'].xs(fisyear, level=1).loc[univ]\n bp = bv / marketcap\n position = bp.nlargest(n)\n position[:] = 1/len(position)\n return position",
"_____no_output_____"
],
[
"bt = Backtest(model=시총높고PB저평가, bm=BM, fin=fin, mc=mc, n=10)\nbt.run()\nbt.plot_perf()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a7e7ddfafd24daa20b6b737d5a548a8592a1ac1
| 55,936 |
ipynb
|
Jupyter Notebook
|
docs/examples/semantic_segmentation_superpixel_approach.ipynb
|
msk-mind/HistomicsTK
|
737a84b561ced20a311969e4c92a0e1a966a2dba
|
[
"Apache-2.0"
] | 249 |
2016-04-04T12:00:54.000Z
|
2022-03-31T15:46:50.000Z
|
docs/examples/semantic_segmentation_superpixel_approach.ipynb
|
msk-mind/HistomicsTK
|
737a84b561ced20a311969e4c92a0e1a966a2dba
|
[
"Apache-2.0"
] | 616 |
2016-01-13T21:06:01.000Z
|
2022-03-19T00:06:28.000Z
|
docs/examples/semantic_segmentation_superpixel_approach.ipynb
|
msk-mind/HistomicsTK
|
737a84b561ced20a311969e4c92a0e1a966a2dba
|
[
"Apache-2.0"
] | 109 |
2016-01-21T16:14:34.000Z
|
2022-03-10T00:59:06.000Z
| 89.92926 | 28,048 | 0.796178 |
[
[
[
"# Finding cellular regions with superpixel analysis\n\n**Overview:**\n\nWhole-slide images often contain artifacts like marker or acellular regions that\nneed to be avoided during analysis. In this example we show how HistomicsTK can\nbe used to develop saliency detection algorithms that segment the slide at low\nmagnification to generate a map to guide higher magnification analyses. Here we\nshow how superpixel analysis can be used to locate hypercellular regions that\ncorrespond to tumor-rich content.\n\nThis uses Simple Linear Iterative Clustering (SLIC) to get superpixels at a low\nslide magnification to detect cellular regions. The first step of this pipeline\ndetects tissue regions (i.e. individual tissue pieces) using the `get_tissue_mask`\n method of the `histomicstk.saliency` module. Then, each tissue piece is processed\n separately for accuracy and disk space efficiency. It is important to keep in\n mind that this does NOT rely on a tile iterator, but loads the entire tissue\n region (but NOT the whole slide) in memory and passes it on to\n `skimage.segmentation.slic` method. Not using a tile iterator helps keep the\n superpixel sizes large enough to correspond to tissue boundaries.\n\nOnce superpixels are segmented, the image is deconvolved and features are extracted from the hematoxylin channel. Features include intensity and possibly also texture features. Then, a mixed component Gaussian mixture model is fit to the features, and median intensity is used to rank superpixel clusters by 'cellularity' (since we are working with the hematoxylin channel).\n\nNote that the decison to fit a gaussian mixture model instead of using K-means clustering is a design choice. If you'd like to experiment, feel free to try other methods of classifying superpixels into clusters using other approaches.\n\nAdditional functionality includes contour extraction to get the final segmentation boundaries of cellular regions and to visualize them in HistomicsUI using one's preferred colormap.\n\n**Here are some sample results:**\n\nFrom left to right: Slide thumbnail, superpixel classifications, contiguous cellular/acellular regions\n\n\n\n**Where to look?**\n\n```\n|_ histomicstk/\n |_saliency/\n |_cellularity_detection.py \n |_tests/\n |_test_saliency.py\n```",
"_____no_output_____"
]
],
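[
[
"# A standalone sketch of the SLIC superpixel step described above, run on a synthetic RGB\n# image; the n_segments and compactness values here are illustrative assumptions, not the\n# detector's defaults.\nimport numpy as np\nfrom skimage.segmentation import slic\n\nrgb = (np.random.rand(256, 256, 3) * 255).astype(np.uint8) # stand-in for a tissue region\nspixel_mask = slic(rgb, n_segments=64, compactness=0.1, start_label=1)\nprint('no of superpixels:', spixel_mask.max())",
"_____no_output_____"
]
],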
[
[
"import tempfile\nimport girder_client\nimport numpy as np\nfrom histomicstk.annotations_and_masks.annotation_and_mask_utils import (\n delete_annotations_in_slide)\nfrom histomicstk.saliency.cellularity_detection_superpixels import (\n Cellularity_detector_superpixels)\n\nimport matplotlib.pylab as plt\nfrom matplotlib.colors import ListedColormap\n%matplotlib inline\n\n# color map\nvals = np.random.rand(256,3)\nvals[0, ...] = [0.9, 0.9, 0.9]\ncMap = ListedColormap(1 - vals)",
"_____no_output_____"
]
],
[
[
"## Prepwork",
"_____no_output_____"
]
],
[
[
"APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'\nSAMPLE_SLIDE_ID = \"5d586d76bd4404c6b1f286ae\"\n# SAMPLE_SLIDE_ID = \"5d8c296cbd4404c6b1fa5572\"\n\ngc = girder_client.GirderClient(apiUrl=APIURL)\ngc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')\n\n# This is where the run logs will be saved\nlogging_savepath = tempfile.mkdtemp()\n\n# color normalization values from TCGA-A2-A3XS-DX1\ncnorm_thumbnail = {\n 'mu': np.array([9.24496373, -0.00966569, 0.01757247]),\n 'sigma': np.array([0.35686209, 0.02566772, 0.02500282]),\n}\n# from the ROI in Amgad et al, 2019\ncnorm_main = {\n 'mu': np.array([8.74108109, -0.12440419, 0.0444982]),\n 'sigma': np.array([0.6135447, 0.10989545, 0.0286032]),\n}",
"_____no_output_____"
],
[
"# deleting existing annotations in target slide (if any)\ndelete_annotations_in_slide(gc, SAMPLE_SLIDE_ID)",
"_____no_output_____"
]
],
[
[
"## Initialize the cellularity detector",
"_____no_output_____"
]
],
[
[
"print(Cellularity_detector_superpixels.__init__.__doc__)",
"Init Cellularity_Detector_Superpixels object.\n\n Arguments:\n -----------\n gc : object\n girder client object\n slide_id : str\n girder ID of slide\n verbose : int\n 0 - Do not print to screen\n 1 - Print only key messages\n 2 - Print everything to screen\n 3 - print everything including from inner functions\n monitorPrefix : str\n text to prepend to printed statements\n logging_savepath : str or None\n where to save run logs\n suppress_warnings : bool\n whether to suppress warnings\n cnorm_params : dict\n Reinhard color normalization parameters. Accepted keys: thumbnail\n and main (since thumbnail normalization is different from color\n normalization of tissue at target magnification. Each entry is a\n dict containing values for mu and sigma. This is either given\n here or can be set using self.set_color_normalization_values().\n May be left unset if you do not want to normalize.\n get_tissue_mask_kwargs : dict\n kwargs for the get_tissue_mask() method.\n MAG : float\n magnification at which to detect cellularity\n spixel_size_baseMag : int\n approximate superpixel size at base (scan) magnification\n compactness : float\n compactness parameter for the SLIC method. Higher values result\n in more regular superpixels while smaller values are more likely\n to respect tissue boundaries.\n deconvolve : bool\n Whether to deconvolve and use hematoxylin channel for feature\n extraction. Must be True to ranks spixel clusters by cellularity.\n use_grayscale : bool\n If True, grayscale image is used with SLIC. May be more robust to\n color variations from slide to slide and more efficient.\n use_intensity : bool\n Whether to extract intensity features from the hematoxylin channel.\n This must be True to rank spuerpixel clusters by cellularity.\n use_texture : bool\n Whether to extract Haralick texture features from Htx channel. May\n not necessarily improve results when used in conjunction with\n intensity features.\n keep_feats : list\n Name of intensity features to use. See\n histomicstk.features.compute_intensity_features.\n Using fewer informative features may result in better\n gaussian mixture modeling results.\n n_gaussian_components : int\n no of gaussian mixture model components\n max_cellularity : int\n Range [0, 100] or None. If None, normalize visualization RGB values\n for each tissue piece separately, else normalize by given number.\n opacity : float\n opacity of superpixel polygons when posted to DSA.\n 0 (no opacity) is more efficient to render.\n opacity_contig : float\n opacity of contiguous region polygons when posted to DSA.\n 0 (no opacity) is more efficient to render.\n lineWidth : float\n width of line when displaying superpixel boundaries.\n cMap : object\n matplotlib color map to use when visualizing cellularity\n visualize_tissue_boundary : bool\n whether to visualize result from tissue detection component\n visualize_spixels : bool\n whether to visualize superpixels, color-coded by cellularity\n visualize_contiguous : bool\n whether to visualize contiguous cellular regions\n\n \n"
]
],
[
[
"In this example, and as the default behavior, we use a handful of informative intensity features extracted from the hematoxylin channel after color deconvolution to fit a gaussian mixture model. Empirically (on a few test slides), this seems to give better results than using the full suite of intensity and texture features available. Feel free to experiment with this and find the optimum combination of features for your application. ",
"_____no_output_____"
]
],
[
[
"# init cellularity detector\ncds = Cellularity_detector_superpixels(\n gc, slide_id=SAMPLE_SLIDE_ID,\n MAG=3.0, compactness=0.1, spixel_size_baseMag=256 * 256,\n max_cellularity=40,\n visualize_spixels=True, visualize_contiguous=True,\n get_tissue_mask_kwargs={\n 'deconvolve_first': False,\n 'n_thresholding_steps': 2,\n 'sigma': 1.5,\n 'min_size': 500, },\n verbose=2, monitorPrefix='test',\n logging_savepath=logging_savepath)",
"Saving logs to: /tmp/tmpt7dygwhf/2019-09-29_18-04.log\n"
]
],
[
[
"## Set the color normalization values\n\nYou can choose to reinhard color normalize the slide thumbnail and/or the tissue image at target magnificaion. You can either provide the mu and sigma values directly or provide the path to an image from which to infer these values. Please refer to the *color_normalization* module for reinhard normalization implementation details. In this example, we use a \"high-sensitivity, low-specificity\" strategy to detect tissue, followed by the more specific cellularity detection module. In other words, the *tissue_detection* module is used to detect all tissue, and only exclude whitespace and marker. Here we do NOT perform color normalization before tissue detection (empirically gives worse results), but we do normalize when detecting the cellular regions within the tissue. ",
"_____no_output_____"
]
],
[
[
"# set color normalization for thumbnail\n# cds.set_color_normalization_values(\n# mu=cnorm_thumbnail['mu'],\n# sigma=cnorm_thumbnail['sigma'], what='thumbnail')\n\n# set color normalization values for main tissue\ncds.set_color_normalization_values(\n mu=cnorm_main['mu'], sigma=cnorm_main['sigma'], what='main')",
"_____no_output_____"
]
],
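[
[
"# A minimal sketch of the Reinhard transform that these normalization values feed into;\n# it assumes the reinhard() helper in histomicstk.preprocessing.color_normalization and\n# uses a random stand-in RGB tile instead of real tissue.\nimport numpy as np\nfrom histomicstk.preprocessing.color_normalization import reinhard\n\nim = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)\nim_nmzd = reinhard(im, target_mu=cnorm_main['mu'], target_sigma=cnorm_main['sigma'])\nprint(im_nmzd.shape, im_nmzd.dtype)",
"_____no_output_____"
]
],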
[
[
"## Run the detector",
"_____no_output_____"
]
],
[
[
"print(cds.run.__doc__)",
"Run cellularity detection and optionally visualize result.\n\n This runs the cellularity detection +/- visualization pipeline and\n returns a list of CD_single_tissue_piece objects. Each object has\n the following attributes\n\n tissue_mask : np array\n mask of where tissue is at target magnification\n ymin : int\n min y coordinate at base (scan) magnification\n xmin : int\n min x coordinate at base (scan) magnification\n ymax : int\n max y coordinate at base (scan) magnification\n xmax : int\n max x coordinate at base (scan) magnification\n spixel_mask : np array\n np array where each unique value represents one superpixel\n fdata : pandas DataFrame\n features extracted for each superpixel. Index corresponds to\n values in the spixel_mask. This includes a 'cluster' column\n indicatign which cluster this superpixel belongs to.\n cluster_props : dict\n properties of each superpixel cluster, including its assigned\n cellularity score.\n\n \n"
],
[
"tissue_pieces = cds.run()",
"test: set_slide_info_and_get_tissue_mask()\ntest: Tissue piece 1 of 2\ntest: Tissue piece 1 of 2: set_tissue_rgb()\ntest: Tissue piece 1 of 2: set_superpixel_mask()\ntest: Tissue piece 1 of 2: set_superpixel_features()\ntest: Tissue piece 1 of 2: set_superpixel_assignment()\ntest: Tissue piece 1 of 2: assign_cellularity_scores()\ntest: Tissue piece 1 of 2: visualize_individual_superpixels()\ntest: Tissue piece 1 of 2: Posting doc 1 of 5\ntest: Tissue piece 1 of 2: Posting doc 2 of 5\ntest: Tissue piece 1 of 2: Posting doc 3 of 5\ntest: Tissue piece 1 of 2: Posting doc 4 of 5\ntest: Tissue piece 1 of 2: Posting doc 5 of 5\ntest: Tissue piece 1 of 2: visualize_contiguous_superpixels()\ntest: Tissue piece 1 of 2: Posting doc 1 of 5\ntest: Tissue piece 1 of 2: Posting doc 2 of 5\ntest: Tissue piece 1 of 2: Posting doc 3 of 5\ntest: Tissue piece 1 of 2: Posting doc 4 of 5\ntest: Tissue piece 1 of 2: Posting doc 5 of 5\ntest: Tissue piece 2 of 2\ntest: Tissue piece 2 of 2: set_tissue_rgb()\ntest: Tissue piece 2 of 2: set_superpixel_mask()\ntest: Tissue piece 2 of 2: set_superpixel_features()\ntest: Tissue piece 2 of 2: set_superpixel_assignment()\ntest: Tissue piece 2 of 2: assign_cellularity_scores()\ntest: Tissue piece 2 of 2: visualize_individual_superpixels()\ntest: Tissue piece 2 of 2: Posting doc 1 of 5\ntest: Tissue piece 2 of 2: Posting doc 2 of 5\ntest: Tissue piece 2 of 2: Posting doc 3 of 5\ntest: Tissue piece 2 of 2: Posting doc 4 of 5\ntest: Tissue piece 2 of 2: Posting doc 5 of 5\ntest: Tissue piece 2 of 2: visualize_contiguous_superpixels()\ntest: Tissue piece 2 of 2: Posting doc 1 of 5\ntest: Tissue piece 2 of 2: Posting doc 2 of 5\ntest: Tissue piece 2 of 2: Posting doc 3 of 5\ntest: Tissue piece 2 of 2: Posting doc 4 of 5\ntest: Tissue piece 2 of 2: Posting doc 5 of 5\n"
]
],
[
[
"## Check the results\n\nThe resultant list of objects correspond to the results for each \"tissue piece\" detected in the slide. You may explore various attributes like the offset coordinates, tissue mask, superpixel labeled mask, superpixel feature data, and superpixel cluster properties.",
"_____no_output_____"
]
],
[
[
"plt.imshow(tissue_pieces[0].tissue_mask, cmap=cMap)",
"_____no_output_____"
],
[
"plt.imshow(tissue_pieces[0].spixel_mask, cmap=cMap)",
"_____no_output_____"
],
[
"tissue_pieces[0].fdata.head()",
"_____no_output_____"
],
[
"tissue_pieces[0].cluster_props",
"_____no_output_____"
]
],
[
[
"## Check the visualization on HistomicsUI\n\nNow you may go to the slide on Digital Slide Archive and check the posted annotations.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a7ea530b1b3f2adcd25fd2d6ddfcb1903caf2f4
| 771,644 |
ipynb
|
Jupyter Notebook
|
ImageClassification.ipynb
|
ashutosh1919/tissues-image-classification
|
be6ab522c3606ad57a04529e06120e8a4d1f5e9c
|
[
"MIT"
] | 1 |
2021-12-13T03:46:32.000Z
|
2021-12-13T03:46:32.000Z
|
ImageClassification.ipynb
|
ashutosh1919/tissues-image-classification
|
be6ab522c3606ad57a04529e06120e8a4d1f5e9c
|
[
"MIT"
] | null | null | null |
ImageClassification.ipynb
|
ashutosh1919/tissues-image-classification
|
be6ab522c3606ad57a04529e06120e8a4d1f5e9c
|
[
"MIT"
] | 2 |
2021-12-13T05:03:57.000Z
|
2021-12-14T06:41:48.000Z
| 557.546243 | 518,752 | 0.935969 |
[
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport PIL\nimport tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nimport pathlib\nfrom tqdm import tqdm\nfrom abc import ABCMeta, abstractmethod",
"_____no_output_____"
]
],
[
[
"I have downloaded the dataset locally and mentioned paths below. Since dataset is huge (~30 GB), I am not pushing it to the repository. You can put the `data` dir inside dataset adjacent to this jupyter notebook in order to run it successfully.",
"_____no_output_____"
]
],
[
[
"train_data_dir = 'data/train'\ntest_data_dir = 'data/test'\ntrain_data_path = pathlib.Path(train_data_dir)\ntest_data_path = pathlib.Path(test_data_dir)",
"_____no_output_____"
]
],
[
[
"Below are all the classes given for tissue samples in `train` and `test` dataset.",
"_____no_output_____"
]
],
[
[
"tissue_classes = [\n 'spleen',\n 'skin_1',\n 'skin_2',\n 'pancreas',\n 'lymph_node',\n 'small_intestine',\n 'endometrium_1',\n 'endometrium_2',\n 'liver',\n 'kidney',\n 'lung',\n 'colon'\n]",
"_____no_output_____"
]
],
[
[
"Let us display an example image from each of the `12` classes of tissues in our dataset.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(nrows=4, ncols=3, figsize=(10, 10))\n\ncounter = 0\nfor row in ax:\n for col in row:\n images = list(train_data_path.glob(tissue_classes[counter] + '/*'))\n image = np.array(PIL.Image.open(str(images[0])))\n col.set_title(tissue_classes[counter])\n col.imshow(image)\n counter += 1\n\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"From dataset, we have **1119** unique images for **training** and **600** unique images for **testing** data. \n\nSince we are working with very large dataset, it is not advisable to load all the data at once. It is not possible to do that since the data is huge. That is why, we have created data generator which will generate training/testing examples on demand. It will only generate a batch of examples at a time. \n\nBelow class is the custom data generator we have created in order to ingest images into ML pipeline.",
"_____no_output_____"
]
],
[
[
"class TissueDataGenerator(tf.keras.utils.Sequence):\n def __init__(self,\n data_dir,\n batch_size,\n class_labels,\n img_height=128,\n img_width=128,\n img_channels=3,\n preprocess_func=None,\n shuffle=True):\n self.file_ds = tf.data.Dataset.list_files(str(data_dir + '/*/*'))\n self.batch_size = batch_size\n self.class_labels = class_labels\n self.n_classes = len(class_labels)\n self.img_size = (img_height, img_width)\n self.img_n_channels = img_channels\n self.shuffle = shuffle\n self.preprocess_func = preprocess_func\n self.label_mapping = self.find_label_mappings()\n self.labeled_ds = self.file_ds.map(lambda f: tf.py_function(func=self.process_example,\n inp=[f],\n Tout=[tf.float32, tf.int32]))\n self.labeled_ds = self.labeled_ds.batch(self.batch_size)\n self.on_epoch_end()\n \n def find_label_mappings(self):\n mp = {}\n for i, label in enumerate(self.class_labels):\n mp[label] = i\n return mp\n \n def process_example(self, file_path):\n label = tf.strings.split(file_path, os.sep)[-2]\n label_map = self.label_mapping[str(label.numpy().decode('utf-8'))]\n label_encode = tf.keras.utils.to_categorical(label_map, self.n_classes)\n image = np.array(PIL.Image.open(str(file_path.numpy().decode('utf-8'))))\n image = tf.image.resize(image, self.img_size)\n if self.preprocess_func is not None:\n image = self.preprocess_func(image)\n return image, label_encode\n \n def __getitem__(self, index):\n 'Generate one batch of data'\n batch = next(self.iterator, None)\n if batch is None:\n self.on_epoch_end()\n batch = next(self.iterator)\n return batch\n \n def on_epoch_end(self):\n self.iterator = iter(self.labeled_ds)\n \n def __len__(self):\n return len(self.file_ds) // self.batch_size",
"_____no_output_____"
]
],
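[
[
"# A quick sanity check of the generator above, assuming the training data directory from\n# earlier exists on disk; batch_size=2 here is an illustrative choice.\nsample_gen = TissueDataGenerator(train_data_dir,\n batch_size=2,\n class_labels=tissue_classes)\nimages, labels = sample_gen[0]\nprint(images.shape, labels.shape) # expected: (2, 128, 128, 3) (2, 12)\nprint('batches per epoch:', len(sample_gen))",
"_____no_output_____"
]
],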
[
[
"During our research of finding best model for image classification, we usually experiment on various different kinds of models. Because of that, we usually rewrite some of the code redundantly. To prevent that, we have created abstract model class below. Whatever models we want to experiment on can inherit this class to get access to some of the common features we will use for all the model classes like compiling & training model, testing model, plotting metrics etc.",
"_____no_output_____"
]
],
[
[
"class ModifiedModel:\n __metaclass__ = ABCMeta\n \n def __init__(self,\n input_shape,\n num_classes,\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'],\n verbose=True):\n if not isinstance(input_shape, list) and not isinstance(input_shape, tuple):\n raise TypeError('input_shape must be of type list or tuple.')\n input_shape = tuple(input_shape)\n if len(input_shape) != 3:\n raise TypeError('input_shape must contain exactly 3 dimensions.')\n \n self.input_shape = input_shape\n self.num_classes = num_classes\n self.optimizer = optimizer\n self.loss = loss\n self.metrics = metrics\n self.verbose = verbose\n self.history = None\n self.model = None\n \n @abstractmethod\n def build_model(self):\n pass\n \n def compile_model(self, **kwargs):\n self.raise_if_not_built()\n self.model.compile(optimizer=self.optimizer,\n loss=self.loss,\n metrics=self.metrics, **kwargs)\n \n def raise_if_not_built(self):\n if self.model is None:\n raise ValueError('object of model class has not created instance yet.')\n \n def train(self, train_generator, epochs, **kwargs):\n self.raise_if_not_built()\n self.history = self.model.fit(train_generator, epochs=epochs, **kwargs)\n \n def test(self, test_generator, **kwargs):\n self.raise_if_not_built()\n return self.model.evaluate(test_generator, **kwargs)\n \n def plot_metrics(self):\n if self.history is None:\n raise ValueError('model must be trained to generate metric plot.')\n if 'loss' not in self.history.history:\n raise ValueError('history must contain loss information.')\n if 'accuracy' not in self.history.history:\n raise ValueError('history must contain accuracy information')\n fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))\n attrs = ['loss', 'accuracy']\n counter = 0\n for col in ax:\n info = self.history.history[attrs[counter]]\n col.plot(range(len(info)), info)\n col.set_title(attrs[counter])\n col.set_xlabel('Epochs')\n col.set_ylabel(attrs[counter])\n counter += 1\n fig.tight_layout()\n plt.show()\n \n def display_score(self, score):\n if len(score) < 2:\n raise ValueError('score must have atleast 2 values')\n print('Loss: {}, Accuracy: {}'.format(score[0], score[1]))",
"_____no_output_____"
]
],
[
[
"Below are some of the parameters which will be common across all the experiments and that is why we have decided to initialize them at the top and all other experiments will consume these three parameters.\n\n**Note:** We haven't fixed shape of input images because the input image shape may differ based on the model we experiment on. Also, We haven't used original dimension `(3000, 3000, 3)` because of computational power restrictions. We are using smaller shapes of images as input as per the model requirements",
"_____no_output_____"
]
],
[
[
"batch_size = 4\nnum_channels = 3\nepochs = 15",
"_____no_output_____"
]
],
[
[
"## Training Custom CNN model for image classification\n\nCustom model inherits the `ModifiedModel` class defined above. We have used multiple Conv - Max pooling blocks following softmax output. The input images resized to shape `(128, 128, 3)`.",
"_____no_output_____"
]
],
[
[
"custom_img_height = 128\ncustom_img_width = 128\n\ncustom_train_gen = TissueDataGenerator(train_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=custom_img_height,\n img_width=custom_img_width)\ncustom_test_gen = TissueDataGenerator(test_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=custom_img_height,\n img_width=custom_img_width)",
"_____no_output_____"
],
[
"class CustomModel(ModifiedModel):\n def __init__(self,\n input_shape,\n num_classes,\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'],\n verbose=True):\n super().__init__(input_shape,\n num_classes,\n optimizer,\n loss,\n metrics,\n verbose)\n self.build_model()\n self.compile_model()\n \n def build_model(self):\n self.model = Sequential([\n layers.Rescaling(1./255, input_shape=self.input_shape),\n layers.Conv2D(16, 3, padding='same', activation='relu'),\n layers.MaxPooling2D(),\n layers.Conv2D(32, 3, padding='same', activation='relu'),\n layers.MaxPooling2D(),\n layers.Conv2D(64, 3, padding='same', activation='relu'),\n layers.MaxPooling2D(),\n layers.Flatten(),\n layers.Dense(128, activation='relu'),\n layers.Dense(self.num_classes, activation = 'softmax')\n ])",
"_____no_output_____"
],
[
"customModel = CustomModel(input_shape=(custom_img_height, custom_img_width, num_channels),\n num_classes=len(tissue_classes))",
"_____no_output_____"
],
[
"customModel.model.summary()",
"Model: \"sequential_10\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nrescaling_8 (Rescaling) (None, 128, 128, 3) 0 \n_________________________________________________________________\nconv2d_115 (Conv2D) (None, 128, 128, 16) 448 \n_________________________________________________________________\nmax_pooling2d_25 (MaxPooling (None, 64, 64, 16) 0 \n_________________________________________________________________\nconv2d_116 (Conv2D) (None, 64, 64, 32) 4640 \n_________________________________________________________________\nmax_pooling2d_26 (MaxPooling (None, 32, 32, 32) 0 \n_________________________________________________________________\nconv2d_117 (Conv2D) (None, 32, 32, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_27 (MaxPooling (None, 16, 16, 64) 0 \n_________________________________________________________________\nflatten_10 (Flatten) (None, 16384) 0 \n_________________________________________________________________\ndense_23 (Dense) (None, 128) 2097280 \n_________________________________________________________________\ndense_24 (Dense) (None, 12) 1548 \n=================================================================\nTotal params: 2,122,412\nTrainable params: 2,122,412\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"customModel.train(custom_train_gen, epochs=epochs)",
"Epoch 1/15\n279/279 [==============================] - 40s 142ms/step - loss: 2.4728 - accuracy: 0.0897\nEpoch 2/15\n279/279 [==============================] - 38s 138ms/step - loss: 2.3455 - accuracy: 0.1676\nEpoch 3/15\n279/279 [==============================] - 40s 143ms/step - loss: 1.9991 - accuracy: 0.2894\nEpoch 4/15\n279/279 [==============================] - 38s 137ms/step - loss: 1.8241 - accuracy: 0.3486\nEpoch 5/15\n279/279 [==============================] - 39s 139ms/step - loss: 1.7336 - accuracy: 0.3898\nEpoch 6/15\n279/279 [==============================] - 38s 137ms/step - loss: 1.6471 - accuracy: 0.4220\nEpoch 7/15\n279/279 [==============================] - 38s 137ms/step - loss: 1.4953 - accuracy: 0.4767\nEpoch 8/15\n279/279 [==============================] - 38s 138ms/step - loss: 1.3462 - accuracy: 0.5385\nEpoch 9/15\n279/279 [==============================] - 38s 136ms/step - loss: 1.2123 - accuracy: 0.5690\nEpoch 10/15\n279/279 [==============================] - 39s 140ms/step - loss: 1.0002 - accuracy: 0.6505\nEpoch 11/15\n279/279 [==============================] - 39s 141ms/step - loss: 0.8511 - accuracy: 0.7097\nEpoch 12/15\n279/279 [==============================] - 39s 141ms/step - loss: 0.7023 - accuracy: 0.7572\nEpoch 13/15\n279/279 [==============================] - 39s 139ms/step - loss: 0.5228 - accuracy: 0.8118\nEpoch 14/15\n279/279 [==============================] - 39s 139ms/step - loss: 0.4305 - accuracy: 0.8432\nEpoch 15/15\n279/279 [==============================] - 39s 141ms/step - loss: 0.3327 - accuracy: 0.9005\n"
],
[
"customModel.plot_metrics()",
"_____no_output_____"
],
[
"custom_score = customModel.test(custom_test_gen)\ncustomModel.display_score(custom_score)",
"150/150 [==============================] - 21s 140ms/step - loss: 3.0714 - accuracy: 0.3767\nLoss: 3.071406126022339, Accuracy: 0.3766666650772095\n"
]
],
[
[
"Now, we also are experimenting on some of the pretrained models like VGG, InceptionNet and EfficientNet. We have defined single class `PretrainedModel` below which will take instance of pretrained model and define it as functional unit in the classification model followed by multiple fully connected layers and softmax output.",
"_____no_output_____"
]
],
[
[
"class PretrainedModel(ModifiedModel):\n def __init__(self,\n input_shape,\n num_classes,\n pretrainedModel,\n optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'],\n verbose=True):\n super().__init__(input_shape,\n num_classes,\n optimizer,\n loss,\n metrics,\n verbose)\n self.pretrained = pretrainedModel\n self.build_model()\n self.compile_model()\n \n def build_model(self):\n for layer in self.pretrained.layers:\n layer.trainable = False\n self.model = Sequential([\n self.pretrained,\n layers.Flatten(),\n layers.Dense(512, activation='relu'),\n layers.Dense(128, activation='relu'),\n layers.Dense(self.num_classes, activation = 'softmax')\n ])",
"_____no_output_____"
]
],
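[
[
"Note that `build_model` freezes every pretrained layer, so only the newly added head is trained. A common follow-up (sketched below for illustration only; we do not use it in our runs) is to unfreeze the top of the backbone with a small learning rate once the head has converged.",
"_____no_output_____"
]
],
[
[
"# Hypothetical fine-tuning sketch for a PretrainedModel instance `m`:\n# for layer in m.pretrained.layers[-4:]:\n#     layer.trainable = True\n# m.model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),\n#                 loss=m.loss, metrics=m.metrics)\n# m.train(train_gen, epochs=5)",
"_____no_output_____"
]
],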
[
[
"## Transfer Learning on VGG16\n\nWe are using pretrained `VGG16` model as the first layer in our model and retraing only the layers which are added. The input images resized to shape `(224, 224, 3)`.",
"_____no_output_____"
]
],
[
[
"vgg_img_height = 224\nvgg_img_width = 224\n\nvgg_train_gen = TissueDataGenerator(train_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=vgg_img_height,\n img_width=vgg_img_width,\n preprocess_func=tf.keras.applications.vgg16.preprocess_input)\nvgg_test_gen = TissueDataGenerator(test_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=vgg_img_height,\n img_width=vgg_img_width,\n preprocess_func=tf.keras.applications.vgg16.preprocess_input)",
"_____no_output_____"
],
[
"vggModel = PretrainedModel(input_shape=(vgg_img_height, vgg_img_width, num_channels),\n num_classes=len(tissue_classes),\n pretrainedModel=tf.keras.applications.vgg16.VGG16())",
"_____no_output_____"
],
[
"vggModel.model.summary()",
"Model: \"sequential_2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nvgg16 (Functional) (None, 1000) 138357544 \n_________________________________________________________________\nflatten_2 (Flatten) (None, 1000) 0 \n_________________________________________________________________\ndense_4 (Dense) (None, 512) 512512 \n_________________________________________________________________\ndense_5 (Dense) (None, 128) 65664 \n_________________________________________________________________\ndense_6 (Dense) (None, 12) 1548 \n=================================================================\nTotal params: 138,937,268\nTrainable params: 579,724\nNon-trainable params: 138,357,544\n_________________________________________________________________\n"
],
[
"vggModel.train(vgg_train_gen, epochs=epochs)",
"Epoch 1/15\n279/279 [==============================] - 145s 519ms/step - loss: 2.0517 - accuracy: 0.3049\nEpoch 2/15\n279/279 [==============================] - 136s 487ms/step - loss: 1.4991 - accuracy: 0.4624\nEpoch 3/15\n279/279 [==============================] - 149s 533ms/step - loss: 1.3079 - accuracy: 0.5323\nEpoch 4/15\n279/279 [==============================] - 138s 496ms/step - loss: 1.1703 - accuracy: 0.5780\nEpoch 5/15\n279/279 [==============================] - 139s 496ms/step - loss: 1.0654 - accuracy: 0.6237\nEpoch 6/15\n279/279 [==============================] - 145s 517ms/step - loss: 0.9883 - accuracy: 0.6407\nEpoch 7/15\n279/279 [==============================] - 138s 493ms/step - loss: 0.9150 - accuracy: 0.6774\nEpoch 8/15\n279/279 [==============================] - 140s 501ms/step - loss: 0.8467 - accuracy: 0.7052\nEpoch 9/15\n279/279 [==============================] - 139s 495ms/step - loss: 0.7986 - accuracy: 0.7177\nEpoch 10/15\n279/279 [==============================] - 147s 527ms/step - loss: 0.7487 - accuracy: 0.7330\nEpoch 11/15\n279/279 [==============================] - 148s 530ms/step - loss: 0.7043 - accuracy: 0.7437\nEpoch 12/15\n279/279 [==============================] - 139s 499ms/step - loss: 0.6717 - accuracy: 0.7545\nEpoch 13/15\n279/279 [==============================] - 148s 530ms/step - loss: 0.6315 - accuracy: 0.7778\nEpoch 14/15\n279/279 [==============================] - 140s 501ms/step - loss: 0.5993 - accuracy: 0.7903\nEpoch 15/15\n279/279 [==============================] - 144s 515ms/step - loss: 0.5672 - accuracy: 0.7966\n"
],
[
"vggModel.plot_metrics()",
"_____no_output_____"
],
[
"vgg_score = vggModel.test(vgg_test_gen)\nvggModel.display_score(vgg_score)",
"150/150 [==============================] - 76s 505ms/step - loss: 1.5615 - accuracy: 0.5283\nLoss: 1.561493992805481, Accuracy: 0.528333306312561\n"
]
],
[
[
"## Transfer Learning on InceptionV3\n\nWe are using pretrained `InceptionV3` model as the first layer in our model and retraing only the layers which are added. The input images resized to shape `(299, 299, 3)`.",
"_____no_output_____"
]
],
[
[
"inception_img_height = 299\ninception_img_width = 299\n\ninception_train_gen = TissueDataGenerator(train_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=inception_img_height,\n img_width=inception_img_width,\n preprocess_func=tf.keras.applications.inception_v3.preprocess_input)\ninception_test_gen = TissueDataGenerator(test_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=inception_img_height,\n img_width=inception_img_width,\n preprocess_func=tf.keras.applications.inception_v3.preprocess_input)",
"_____no_output_____"
],
[
"inceptionModel = PretrainedModel(input_shape=(inception_img_height, inception_img_width, num_channels),\n num_classes=len(tissue_classes),\n pretrainedModel=tf.keras.applications.inception_v3.InceptionV3())",
"_____no_output_____"
],
[
"inceptionModel.model.summary()",
"Model: \"sequential_3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninception_v3 (Functional) (None, 1000) 23851784 \n_________________________________________________________________\nflatten_3 (Flatten) (None, 1000) 0 \n_________________________________________________________________\ndense_7 (Dense) (None, 512) 512512 \n_________________________________________________________________\ndense_8 (Dense) (None, 128) 65664 \n_________________________________________________________________\ndense_9 (Dense) (None, 12) 1548 \n=================================================================\nTotal params: 24,431,508\nTrainable params: 579,724\nNon-trainable params: 23,851,784\n_________________________________________________________________\n"
],
[
"inceptionModel.train(inception_train_gen, epochs=epochs)",
"Epoch 1/15\n279/279 [==============================] - 61s 214ms/step - loss: 2.1148 - accuracy: 0.2771\nEpoch 2/15\n279/279 [==============================] - 61s 219ms/step - loss: 1.5451 - accuracy: 0.4516\nEpoch 3/15\n279/279 [==============================] - 64s 230ms/step - loss: 1.3226 - accuracy: 0.5376\nEpoch 4/15\n279/279 [==============================] - 62s 222ms/step - loss: 1.1796 - accuracy: 0.6004\nEpoch 5/15\n279/279 [==============================] - 66s 235ms/step - loss: 1.0533 - accuracy: 0.6192\nEpoch 6/15\n279/279 [==============================] - 66s 236ms/step - loss: 0.9659 - accuracy: 0.6532\nEpoch 7/15\n279/279 [==============================] - 63s 225ms/step - loss: 0.8871 - accuracy: 0.6900\nEpoch 8/15\n279/279 [==============================] - 67s 240ms/step - loss: 0.8228 - accuracy: 0.7016s - loss: 0.8187 - accu\nEpoch 9/15\n279/279 [==============================] - 67s 240ms/step - loss: 0.7821 - accuracy: 0.7088\nEpoch 10/15\n279/279 [==============================] - 61s 218ms/step - loss: 0.7272 - accuracy: 0.7410\nEpoch 11/15\n279/279 [==============================] - 60s 215ms/step - loss: 0.6687 - accuracy: 0.7545\nEpoch 12/15\n279/279 [==============================] - 59s 211ms/step - loss: 0.6109 - accuracy: 0.7841\nEpoch 13/15\n279/279 [==============================] - 58s 208ms/step - loss: 0.6297 - accuracy: 0.7697\nEpoch 14/15\n279/279 [==============================] - 59s 210ms/step - loss: 0.5548 - accuracy: 0.7975\nEpoch 15/15\n279/279 [==============================] - 64s 229ms/step - loss: 0.5279 - accuracy: 0.8163\n"
],
[
"inceptionModel.plot_metrics()",
"_____no_output_____"
],
[
"inception_score = inceptionModel.test(inception_test_gen)\ninceptionModel.display_score(inception_score)",
"150/150 [==============================] - 32s 210ms/step - loss: 1.8996 - accuracy: 0.5350\nLoss: 1.8995614051818848, Accuracy: 0.5350000262260437\n"
]
],
[
[
"## Transfer Learning on EfficientNetB7\n\nWe are using pretrained `EfficientNetB7` model as the first layer in our model and retraing only the layers which are added. The input images resized to shape `(128, 128, 3)`.",
"_____no_output_____"
]
],
[
[
"effnet_img_height = 128\neffnet_img_width = 128\n\neffnet_train_gen = TissueDataGenerator(train_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=effnet_img_height,\n img_width=effnet_img_width,\n preprocess_func=tf.keras.applications.efficientnet.preprocess_input)\neffnet_test_gen = TissueDataGenerator(test_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=effnet_img_height,\n img_width=effnet_img_width,\n preprocess_func=tf.keras.applications.efficientnet.preprocess_input)",
"_____no_output_____"
],
[
"effnetModel = PretrainedModel(input_shape=(effnet_img_height, effnet_img_width, num_channels),\n num_classes=len(tissue_classes),\n pretrainedModel=tf.keras.applications.efficientnet.EfficientNetB7())",
"_____no_output_____"
],
[
"effnetModel.model.summary()",
"Model: \"sequential_4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nefficientnetb7 (Functional) (None, 1000) 66658687 \n_________________________________________________________________\nflatten_4 (Flatten) (None, 1000) 0 \n_________________________________________________________________\ndense_10 (Dense) (None, 512) 512512 \n_________________________________________________________________\ndense_11 (Dense) (None, 128) 65664 \n_________________________________________________________________\ndense_12 (Dense) (None, 12) 1548 \n=================================================================\nTotal params: 67,238,411\nTrainable params: 579,724\nNon-trainable params: 66,658,687\n_________________________________________________________________\n"
],
[
"effnetModel.train(effnet_train_gen, epochs=epochs)",
"Epoch 1/15\n279/279 [==============================] - 66s 215ms/step - loss: 2.3305 - accuracy: 0.1937\nEpoch 2/15\n279/279 [==============================] - 65s 231ms/step - loss: 2.0011 - accuracy: 0.2984\nEpoch 3/15\n279/279 [==============================] - 63s 226ms/step - loss: 1.8177 - accuracy: 0.3477\nEpoch 4/15\n279/279 [==============================] - 61s 216ms/step - loss: 1.7248 - accuracy: 0.3853\nEpoch 5/15\n279/279 [==============================] - 63s 227ms/step - loss: 1.6270 - accuracy: 0.4337\nEpoch 6/15\n279/279 [==============================] - 65s 233ms/step - loss: 1.5408 - accuracy: 0.4731\nEpoch 7/15\n279/279 [==============================] - 61s 217ms/step - loss: 1.5156 - accuracy: 0.4561\nEpoch 8/15\n279/279 [==============================] - 66s 237ms/step - loss: 1.4667 - accuracy: 0.4937\nEpoch 9/15\n279/279 [==============================] - 64s 228ms/step - loss: 1.4579 - accuracy: 0.4857\nEpoch 10/15\n279/279 [==============================] - 60s 213ms/step - loss: 1.4030 - accuracy: 0.5045\nEpoch 11/15\n279/279 [==============================] - 61s 218ms/step - loss: 1.3866 - accuracy: 0.5188\nEpoch 12/15\n279/279 [==============================] - 64s 229ms/step - loss: 1.3447 - accuracy: 0.5251\nEpoch 13/15\n279/279 [==============================] - 66s 235ms/step - loss: 1.3665 - accuracy: 0.5332\nEpoch 14/15\n279/279 [==============================] - 65s 232ms/step - loss: 1.3179 - accuracy: 0.5502s - loss: 1.3\nEpoch 15/15\n279/279 [==============================] - 60s 215ms/step - loss: 1.2703 - accuracy: 0.5565\n"
],
[
"effnetModel.plot_metrics()",
"_____no_output_____"
],
[
"effnet_score = effnetModel.test(effnet_test_gen)\neffnetModel.display_score(effnet_score)",
"150/150 [==============================] - 38s 238ms/step - loss: 1.6277 - accuracy: 0.4533\nLoss: 1.6276763677597046, Accuracy: 0.4533333480358124\n"
]
],
[
[
"Note that above three pretrained model accuracy will improve on training for more epochs but we were not able to do that because of less computational power and time constraint.",
"_____no_output_____"
],
[
"## t-SNE plot for visualizing data distributions",
"_____no_output_____"
],
[
"Let us draw t-SNE plot of image features w.r.t. `customModel` that we created.",
"_____no_output_____"
]
],
[
[
"img_height = 128\nimg_width = 128\n\nmodel = customModel\nlabel2int = {}\nfor i, t in enumerate(tissue_classes):\n label2int[t] = i\n\ndef process_path(file_path):\n label = tf.strings.split(file_path, os.sep)[-2]\n label_map = label2int[str(label.numpy().decode('utf-8'))]\n image = np.array(PIL.Image.open(str(file_path.numpy().decode('utf-8'))))\n image = tf.image.resize(image, (img_height, img_width))\n feature = model.model(np.array([image]))\n return feature.numpy()[0], label_map\n\ntrain_gen = TissueDataGenerator(train_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=img_height,\n img_width=img_width)\ntrain_ds = train_gen.file_ds.map(lambda f: tf.py_function(func=process_path,\n inp=[f],\n Tout=[tf.float32, tf.int32]))\n\ntest_gen = TissueDataGenerator(test_data_dir,\n batch_size=batch_size,\n class_labels=tissue_classes,\n img_height=img_height,\n img_width=img_width)\ntest_ds = test_gen.file_ds.map(lambda f: tf.py_function(func=process_path,\n inp=[f],\n Tout=[tf.float32, tf.int32]))",
"_____no_output_____"
],
[
"def extract_data(ds):\n images = None\n labels = None\n for img, lab in tqdm(ds):\n if images is None:\n images = np.array([img])\n labels = np.array([lab])\n else:\n images = np.append(images, [img], axis=0)\n labels = np.append(labels, [lab], axis=0)\n return images, labels\n\ntrain_images, train_labels = extract_data(train_ds)\ntest_images, test_labels = extract_data(test_ds)",
"100%|███████████████████████████████████████| 1119/1119 [00:41<00:00, 27.02it/s]\n100%|█████████████████████████████████████████| 600/600 [00:22<00:00, 27.06it/s]\n"
],
[
"from sklearn.manifold import TSNE\nimport seaborn as sns\nimport matplotlib.patheffects as PathEffects",
"_____no_output_____"
],
[
"train_tsne = TSNE(n_components=2, random_state=41).fit_transform(train_images)\ntest_tsne = TSNE(n_components=2, random_state=41).fit_transform(test_images)",
"/Users/ashutosh1919/miniforge3/envs/tissue/lib/python3.8/site-packages/sklearn/manifold/_t_sne.py:780: FutureWarning: The default initialization in TSNE will change from 'random' to 'pca' in 1.2.\n warnings.warn(\n/Users/ashutosh1919/miniforge3/envs/tissue/lib/python3.8/site-packages/sklearn/manifold/_t_sne.py:790: FutureWarning: The default learning rate in TSNE will change from 200.0 to 'auto' in 1.2.\n warnings.warn(\n/Users/ashutosh1919/miniforge3/envs/tissue/lib/python3.8/site-packages/sklearn/manifold/_t_sne.py:780: FutureWarning: The default initialization in TSNE will change from 'random' to 'pca' in 1.2.\n warnings.warn(\n/Users/ashutosh1919/miniforge3/envs/tissue/lib/python3.8/site-packages/sklearn/manifold/_t_sne.py:790: FutureWarning: The default learning rate in TSNE will change from 200.0 to 'auto' in 1.2.\n warnings.warn(\n"
],
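[
"# The FutureWarnings above are about changing TSNE defaults. Pinning\n# init and learning_rate explicitly would keep today's behaviour and\n# silence them -- illustrative only, the embeddings were computed above:\n# TSNE(n_components=2, init='random', learning_rate=200.0, random_state=41)",
"_____no_output_____"
],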
[
"def tissue_scatter(x, colors):\n num_classes = len(np.unique(colors))\n palette = np.array(sns.color_palette(\"hls\", num_classes))\n\n # create a scatter plot.\n f = plt.figure(figsize=(8, 8))\n ax = plt.subplot(aspect='equal')\n sc = ax.scatter(x[:,0], x[:,1], lw=0, s=40, c=palette[colors.astype(np.int)])\n plt.xlim(-25, 25)\n plt.ylim(-25, 25)\n ax.axis('off')\n ax.axis('tight')\n\n # add the labels for each digit corresponding to the label\n txts = []\n\n for i in range(num_classes):\n\n # Position of each label at median of data points.\n\n xtext, ytext = np.median(x[colors == i, :], axis=0)\n txt = ax.text(xtext, ytext, str(i), fontsize=24)\n txt.set_path_effects([\n PathEffects.Stroke(linewidth=5, foreground=\"w\"),\n PathEffects.Normal()])\n txts.append(txt)\n\n return f, ax, sc, txts",
"_____no_output_____"
],
[
"tissue_scatter(train_tsne, train_labels)",
"_____no_output_____"
],
[
"tissue_scatter(test_tsne, test_labels)",
"_____no_output_____"
]
],
[
[
"## Reasons behind missclassification\n\n- One possible reason might be mixed pixels. The composition of the various objects in a single pixel makes identification of genuine class more difficult.\n- Original size of images are `(3000, 3000, 3)` but we have resized them down to very small size `(128, 128, 3)` for the model because which many details in image data might be lost.\n- We trained image only for 15 epochs becuase of limited time and computational power restriction. ",
"_____no_output_____"
]
]
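,
[
[
"The second and third points are compute constraints. As an illustrative sketch (not run here), training could be extended safely with Keras callbacks: `EarlyStopping` halts once validation loss stops improving, so a large epoch budget does not force overfitting. `val_gen` is a placeholder name for a held-out validation generator.",
"_____no_output_____"
]
],
[
[
"# Hypothetical sketch: extend training with a validation-based stop.\n# early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',\n#                                               patience=5,\n#                                               restore_best_weights=True)\n# customModel.model.fit(custom_train_gen, validation_data=val_gen,\n#                       epochs=100, callbacks=[early_stop])",
"_____no_output_____"
]
]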
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a7eadd8c3cf1e64b2ccc2859f31b9c680661191
| 200,012 |
ipynb
|
Jupyter Notebook
|
model3/lgb_bystore_final3.ipynb
|
QingweiMeng1234/Kaggle_M5_Accuracy_Report
|
2409b57271fcf6894852cdb388a892434bf32624
|
[
"MIT"
] | null | null | null |
model3/lgb_bystore_final3.ipynb
|
QingweiMeng1234/Kaggle_M5_Accuracy_Report
|
2409b57271fcf6894852cdb388a892434bf32624
|
[
"MIT"
] | null | null | null |
model3/lgb_bystore_final3.ipynb
|
QingweiMeng1234/Kaggle_M5_Accuracy_Report
|
2409b57271fcf6894852cdb388a892434bf32624
|
[
"MIT"
] | null | null | null | 37.759487 | 1,461 | 0.393811 |
[
[
[
"# General imports\nimport numpy as np\nimport pandas as pd\nimport os, sys, gc, time, warnings, pickle, psutil, random\n\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"# :seed to make all processes deterministic # type: int\ndef seed_everything(seed=0):\n random.seed(seed)\n np.random.seed(seed)",
"_____no_output_____"
],
[
"# Read data\ndef get_data_by_store(store):\n \n # Read and contact basic feature\n df = pd.concat([pd.read_pickle(BASE),\n pd.read_pickle(PRICE).iloc[:,2:],\n pd.read_pickle(CALENDAR).iloc[:,2:]],\n axis=1)\n \n # Leave only relevant store\n df = df[df['store_id']==store]\n\n # With memory limits we have to read \n # lags and mean encoding features\n # separately and drop items that we don't need.\n # As our Features Grids are aligned \n # we can use index to keep only necessary rows\n # Alignment is good for us as concat uses less memory than merge.\n df2 = pd.read_pickle(MEAN_ENC)[mean_features]\n df2 = df2[df2.index.isin(df.index)]\n \n df3 = pd.read_pickle(LAGS).iloc[:,3:]\n df3 = df3[df3.index.isin(df.index)]\n \n df = pd.concat([df, df2], axis=1)\n del df2 # to not reach memory limit \n \n df = pd.concat([df, df3], axis=1)\n del df3 # to not reach memory limit \n \n if store_id in ['CA_1', 'CA_2', 'CA_3','CA_4','TX_1','TX_2','TX_3']:\n remove_features = ['id','state_id','store_id','date','wm_yr_wk','d',TARGET,'cluster','snow_m',\n 'rolling_quantile_97_28', 'rolling_quantile_87.5_28', 'rolling_quantile_50_28', 'rolling_quantile_22.5_28', 'rolling_quantile_3_28', 'rolling_quantile_97_56', 'rolling_quantile_87.5_56', 'rolling_quantile_50_56', 'rolling_quantile_22.5_56', 'rolling_quantile_3_56', 'rolling_quantile_97_168', 'rolling_quantile_87.5_168', 'rolling_quantile_50_168', 'rolling_quantile_22.5_168', 'rolling_quantile_3_168']\n else:\n remove_features = ['id','state_id','store_id','date','wm_yr_wk','d',TARGET,'cluster',\n 'rolling_quantile_97_28', 'rolling_quantile_87.5_28', 'rolling_quantile_50_28', 'rolling_quantile_22.5_28', 'rolling_quantile_3_28', 'rolling_quantile_97_56', 'rolling_quantile_87.5_56', 'rolling_quantile_50_56', 'rolling_quantile_22.5_56', 'rolling_quantile_3_56', 'rolling_quantile_97_168', 'rolling_quantile_87.5_168', 'rolling_quantile_50_168', 'rolling_quantile_22.5_168', 'rolling_quantile_3_168']\n \n # Create features list\n features = [col for col in list(df) if col not in remove_features]\n df = df[['id','d',TARGET]+features]\n \n # Skipping first n rows\n df = df[df['d']>=START_TRAIN].reset_index(drop=True)\n \n return df, features\n\n# Recombine Test set after training\ndef get_base_test():\n base_test = pd.DataFrame()\n\n for store_id in STORES_IDS:\n temp_df = pd.read_pickle('test_'+store_id+str(VER)+'.pkl')\n temp_df['store_id'] = store_id\n base_test = pd.concat([base_test, temp_df]).reset_index(drop=True)\n \n return base_test\n\n\n########################### Helper to make dynamic rolling lags\n#################################################################################\ndef make_lag(LAG_DAY):\n lag_df = base_test[['id','d',TARGET]]\n col_name = 'sales_lag_'+str(LAG_DAY)\n lag_df[col_name] = lag_df.groupby(['id'])[TARGET].transform(lambda x: x.shift(LAG_DAY)).astype(np.float16)\n return lag_df[[col_name]]",
"_____no_output_____"
],
[
"def make_lag_roll(LAG_DAY,lag_df_new):\n \n lag_df = base_test[['id','d',TARGET]]\n \n lag_df=lag_df.sort_values(by=[\"d\"])\n \n for i in range(0,len(LAG_DAY)):\n\n shift_day = LAG_DAY[i][0]\n roll_wind = LAG_DAY[i][1]\n col_name = 'rolling_mean_tmp_'+str(shift_day)+'_'+str(roll_wind)\n lag_df[col_name] = (lag_df.groupby(['id'])[TARGET]).transform(lambda x: x.shift(shift_day).rolling(roll_wind).mean())\n lag_df_new=lag_df.drop(columns=[\"sales\"])\n return lag_df_new",
"_____no_output_____"
],
[
"import lightgbm as lgb\nlgb_params = {\n 'boosting_type': 'gbdt',\n 'objective': 'tweedie',\n 'tweedie_variance_power': 1.1,\n 'metric': 'rmse',\n 'subsample': 0.5,\n 'subsample_freq': 1,\n 'learning_rate': 0.03,\n \"lambda\":0.1,\n 'num_leaves': 2**11-1,\n 'min_data_in_leaf': 2**12-1,\n 'feature_fraction': 0.5,\n 'max_bin': 100,\n 'n_estimators': 1400,\n 'boost_from_average': False,\n 'verbose': -1,\n } \n\n\n\n# lgb_params ={\n# \"objective\" : \"tweedie\",\n# \"metric\" :\"rmse\",\n# \"force_row_wise\" : True,\n# \"learning_rate\" : 0.075,\n# \"sub_feature\" : 0.8,\n# \"sub_row\" : 0.75,\n# \"bagging_freq\" : 1,\n# \"lambda_l2\" : 0.1,\n# \"metric\": [\"rmse\"],\n# \"nthread\": -1,\n# \"tweedie_variance_power\":1.1,\n# 'verbosity': 1,\n# # 'num_iterations' : 1500,\n# 'num_leaves': 128,\n# \"min_data_in_leaf\": 104,\n# }\n\n\n\n\n# Let's look closer on params\n\n## 'boosting_type': 'gbdt'\n# we have 'goss' option for faster training\n# but it normally leads to underfit.\n# Also there is good 'dart' mode\n# but it takes forever to train\n# and model performance depends \n# a lot on random factor \n# https://www.kaggle.com/c/home-credit-default-risk/discussion/60921\n\n## 'objective': 'tweedie'\n# Tweedie Gradient Boosting for Extremely\n# Unbalanced Zero-inflated Data\n# https://arxiv.org/pdf/1811.10192.pdf\n# and many more articles about tweediie\n#\n# Strange (for me) but Tweedie is close in results\n# to my own ugly loss.\n# My advice here - make OWN LOSS function\n# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/140564\n# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/143070\n# I think many of you already using it (after poisson kernel appeared) \n# (kagglers are very good with \"params\" testing and tuning).\n# Try to figure out why Tweedie works.\n# probably it will show you new features options\n# or data transformation (Target transformation?).\n\n## 'tweedie_variance_power': 1.1\n# default = 1.5\n# set this closer to 2 to shift towards a Gamma distribution\n# set this closer to 1 to shift towards a Poisson distribution\n# my CV shows 1.1 is optimal \n# but you can make your own choice\n\n## 'metric': 'rmse'\n# Doesn't mean anything to us\n# as competition metric is different\n# and we don't use early stoppings here.\n# So rmse serves just for general \n# model performance overview.\n# Also we use \"fake\" validation set\n# (as it makes part of the training set)\n# so even general rmse score doesn't mean anything))\n# https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/133834\n\n## 'subsample': 0.5\n# Serves to fight with overfit\n# this will randomly select part of data without resampling\n# Chosen by CV (my CV can be wrong!)\n# Next kernel will be about CV\n\n##'subsample_freq': 1\n# frequency for bagging\n# default value - seems ok\n\n## 'learning_rate': 0.03\n# Chosen by CV\n# Smaller - longer training\n# but there is an option to stop \n# in \"local minimum\"\n# Bigger - faster training\n# but there is a chance to\n# not find \"global minimum\" minimum\n\n## 'num_leaves': 2**11-1\n## 'min_data_in_leaf': 2**12-1\n# Force model to use more features\n# We need it to reduce \"recursive\"\n# error impact.\n# Also it leads to overfit\n# that's why we use small \n\n# 'max_bin': 100\n## l1, l2 regularizations\n# https://towardsdatascience.com/l1-and-l2-regularization-methods-ce25e7fc831c\n# Good tiny explanation\n# l2 can work with bigger num_leaves\n# but my CV doesn't show boost\n \n## 'n_estimators': 1400\n# CV shows that there should be\n# 
different values for each state/store.\n# Current value was chosen \n# for general purpose.\n# As we don't use any early stopping,\n# be careful not to overfit the Public LB.\n\n##'feature_fraction': 0.5\n# LightGBM will randomly select \n# part of features on each iteration (tree).\n# We have maaaany features\n# and many of them are \"duplicates\"\n# and many just \"noise\"\n# good values here - 0.5-0.7 (by CV)\n\n## 'boost_from_average': False\n# There is some \"problem\"\n# to code boost_from_average for \n# custom loss\n# 'True' makes training faster\n# BUT use it carefully\n# https://github.com/microsoft/LightGBM/issues/1514",
"_____no_output_____"
],
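[
"# The notes above suggest experimenting with your OWN loss instead of\n# tweedie. Purely illustrative sketch (not used in this run): a custom\n# objective for lgb.train must return the gradient and hessian of the\n# loss w.r.t. the raw predictions; the 1.2 asymmetry factor is an\n# assumption, not a tuned value.\ndef asymmetric_mse(preds, train_data):\n    y = train_data.get_label()\n    residual = preds - y\n    factor = 1.2  # penalise under-forecasting a bit more\n    grad = np.where(residual < 0, 2.0 * factor * residual, 2.0 * residual)\n    hess = np.where(residual < 0, 2.0 * factor, 2.0)\n    return grad, hess\n# usage sketch: lgb.train(lgb_params, train_data, fobj=asymmetric_mse)",
"_____no_output_____"
],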
[
"VER = 5 # Our model version\nSEED = 42 # We want all things\nseed_everything(SEED) # to be as deterministic \nlgb_params['seed'] = SEED # as possible\nN_CORES = psutil.cpu_count() # Available CPU cores\n\n\n#LIMITS and const\nTARGET = 'sales' # Our target\nSTART_TRAIN = 0 # We can skip some rows (Nans/faster training)\nEND_TRAIN = 1941 # End day of our train set, change this part for final\nP_HORIZON = 28 # Prediction horizon\n\n#FEATURES to remove\n## These features lead to overfit\n## or values not present in test set\nmean_features = ['enc_cat_id_mean','enc_cat_id_std',\n 'enc_dept_id_mean','enc_dept_id_std',\n 'enc_item_id_mean','enc_item_id_std'] \n\n#PATHS for Features\nBASE = 'grid_part_1.pkl'\nPRICE = 'grid_part_2.pkl'\nCALENDAR = 'grid_part_3.pkl'\nLAGS = 'lags_df_28_v3.pkl'\nMEAN_ENC = 'mean_encoding_df.pkl'\n\n\n# AUX(pretrained) Models paths\n\n#STORES ids\nSTORES_IDS = pd.read_csv('sales_train_evaluation.csv')['store_id']#change this part for final\nSTORES_IDS = list(STORES_IDS.unique())\n\n#SPLITS for lags creation\nSHIFT_DAY = 28\nN_LAGS = 15\nLAGS_SPLIT = [col for col in range(SHIFT_DAY,SHIFT_DAY+N_LAGS)]\nROLS_SPLIT = []\nfor i in [1,7,14]:\n for j in [7,14,28,56]:\n ROLS_SPLIT.append([i,j])",
"_____no_output_____"
],
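[
"# Quick sanity check of the lag configuration built above: LAGS_SPLIT\n# holds the 15 shift days 28..42, ROLS_SPLIT the 12 (shift, window)\n# pairs used later for the recursive rolling means.\nprint(LAGS_SPLIT)\nprint(ROLS_SPLIT)",
"_____no_output_____"
],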
[
"for store_id in STORES_IDS:\n print('Train', store_id)\n \n # Get grid for current store\n grid_df, features_columns = get_data_by_store(store_id)\n \n print(features_columns)\n # Masks for \n # Train (All data less than 1913)\n # \"Validation\" (Last 28 days - not real validation set)\n # Test (All data greater than 1913 day, \n # with some gap for recursive features)\n train_mask = grid_df['d']<=END_TRAIN\n valid_mask = train_mask&(grid_df['d']>(END_TRAIN-P_HORIZON))\n preds_mask = grid_df['d']>(END_TRAIN-100)\n \n # Apply masks and save lgb dataset as bin\n # to reduce memory spikes during dtype convertations\n # https://github.com/Microsoft/LightGBM/issues/1032\n # \"To avoid any conversions, you should always use np.float32\"\n # or save to bin before start training\n # https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/53773\n train_data = lgb.Dataset(grid_df[train_mask][features_columns], \n label=grid_df[train_mask][TARGET],\n weight=grid_df[train_mask]['sell_price'])\n \n valid_data = lgb.Dataset(grid_df[valid_mask][features_columns], \n label=grid_df[valid_mask][TARGET],\n weight=grid_df[valid_mask]['sell_price'])\n \n # Saving part of the dataset for later predictions\n # Removing features that we need to calculate recursively \n grid_df = grid_df[preds_mask].reset_index(drop=True)\n keep_cols = [col for col in list(grid_df) if '_tmp_' not in col]\n grid_df = grid_df[keep_cols]\n grid_df.to_pickle('test_'+store_id+str(VER)+'.pkl')\n del grid_df\n gc.collect()\n \n # Launch seeder again to make lgb training 100% deterministic\n # with each \"code line\" np.random \"evolves\" \n # so we need (may want) to \"reset\" it\n seed_everything(SEED)\n estimator = lgb.train(lgb_params,\n train_data,\n valid_sets = [valid_data],\n verbose_eval = 100,\n )\n imp_type = \"gain\"\n features = estimator.feature_name()\n importances = estimator.feature_importance(imp_type)\n importance_df=pd.DataFrame(features,columns=['features'])\n importance_df['importances']=importances\n importance_df=importance_df.sort_values(by='importances', ascending=False)\n importance_df.to_csv(store_id+'_fe_imp_'+str(VER)+'.csv',index=False)\n del importance_df\n gc.collect()\n \n # Save model - it's not real '.bin' but a pickle file\n # estimator = lgb.Booster(model_file='model.txt')\n # can only predict with the best iteration (or the saving iteration)\n # pickle.dump gives us more flexibility\n # like estimator.predict(TEST, num_iteration=100)\n # num_iteration - number of iteration want to predict with, \n # NULL or <= 0 means use best iteration\n model_name = 'lgb_model_'+store_id+'_v'+str(VER)+'.bin'\n pickle.dump(estimator, open(model_name, 'wb'))\n\n # Remove temporary files and objects \n # to free some hdd space and ram memory\n # !rm train_data.bin\n del train_data, valid_data, estimator\n gc.collect()",
"Train CA_1\n['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept', 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend', 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d', 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36', 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', 'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14', 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n[100]\tvalid_0's rmse: 1.49312\n[200]\tvalid_0's rmse: 1.46575\n[300]\tvalid_0's rmse: 1.45727\n[400]\tvalid_0's rmse: 1.45121\n[500]\tvalid_0's rmse: 1.44489\n[600]\tvalid_0's rmse: 1.43941\n[700]\tvalid_0's rmse: 1.43439\n[800]\tvalid_0's rmse: 1.42959\n[900]\tvalid_0's rmse: 1.42524\n[1000]\tvalid_0's rmse: 1.42119\n[1100]\tvalid_0's rmse: 1.41712\n[1200]\tvalid_0's rmse: 1.41319\n[1300]\tvalid_0's rmse: 1.40926\n[1400]\tvalid_0's rmse: 1.40524\nTrain CA_2\n['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept', 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend', 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d', 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36', 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', 'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14', 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n[100]\tvalid_0's rmse: 1.45277\n[200]\tvalid_0's rmse: 1.41661\n[300]\tvalid_0's rmse: 1.40561\n[400]\tvalid_0's rmse: 1.39897\n[500]\tvalid_0's rmse: 1.39348\n[600]\tvalid_0's rmse: 
1.38842\n[700]\tvalid_0's rmse: 1.38382\n[800]\tvalid_0's rmse: 1.37969\n[900]\tvalid_0's rmse: 1.37604\n[1000]\tvalid_0's rmse: 1.37209\n[1100]\tvalid_0's rmse: 1.36818\n[1200]\tvalid_0's rmse: 1.36465\n[1300]\tvalid_0's rmse: 1.36164\n[1400]\tvalid_0's rmse: 1.35823\nTrain CA_3\n['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept', 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend', 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d', 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36', 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', 'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14', 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n[100]\tvalid_0's rmse: 1.78301\n[200]\tvalid_0's rmse: 1.76433\n[300]\tvalid_0's rmse: 1.74959\n[400]\tvalid_0's rmse: 1.73924\n[500]\tvalid_0's rmse: 1.73245\n[600]\tvalid_0's rmse: 1.72577\n[700]\tvalid_0's rmse: 1.71937\n[800]\tvalid_0's rmse: 1.71448\n[900]\tvalid_0's rmse: 1.70931\n[1000]\tvalid_0's rmse: 1.70515\n[1100]\tvalid_0's rmse: 1.70073\n[1200]\tvalid_0's rmse: 1.69679\n[1300]\tvalid_0's rmse: 1.6933\n[1400]\tvalid_0's rmse: 1.68883\nTrain CA_4\n['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept', 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend', 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d', 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36', 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', 'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14', 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', 'rolling_mean_tmp_14_7', 
'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n[100]\tvalid_0's rmse: 1.02317\n[200]\tvalid_0's rmse: 1.01604\n[300]\tvalid_0's rmse: 1.01224\n[400]\tvalid_0's rmse: 1.00903\n[500]\tvalid_0's rmse: 1.0062\n[600]\tvalid_0's rmse: 1.00382\n[700]\tvalid_0's rmse: 1.0013\n[800]\tvalid_0's rmse: 0.998891\n[900]\tvalid_0's rmse: 0.996419\n[1000]\tvalid_0's rmse: 0.994247\n[1100]\tvalid_0's rmse: 0.992246\n[1200]\tvalid_0's rmse: 0.990175\n[1300]\tvalid_0's rmse: 0.988148\n[1400]\tvalid_0's rmse: 0.986162\nTrain TX_1\n['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept', 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend', 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d', 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36', 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', 'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14', 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n"
],
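[
"# The comments in the training loop above mention saving the lgb\n# Dataset in binary form to avoid dtype-conversion memory spikes.\n# Illustrative sketch only (not executed in this run):\n# train_data.save_binary('train_data.bin')\n# train_data = lgb.Dataset('train_data.bin')\n# ...train as usual, then clean up with: !rm train_data.bin",
"_____no_output_____"
],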
[
"# Create Dummy DataFrame to store predictions\nall_preds = pd.DataFrame()\n\n# Join back the Test dataset with \n# a small part of the training data \n# to make recursive features\nbase_test = get_base_test()\n\n# Timer to measure predictions time \nmain_time = time.time()\n\n# Loop over each prediction day\n# As rolling lags are the most timeconsuming\n# we will calculate it for whole day\n\n\nfor PREDICT_DAY in range(1,29): \n print('Predict | Day:', PREDICT_DAY)\n start_time = time.time()\n\n # Make temporary grid to calculate rolling lags\n grid_df = base_test.copy()\n \n \n lag_df_new = pd.DataFrame()\n\n lag_df_new=make_lag_roll(ROLS_SPLIT,lag_df_new)\n\n\n grid_df = grid_df.merge(lag_df_new, on=['id','d'], how='left')\n\n\n for store_id in STORES_IDS:\n \n if store_id in ['CA_1', 'CA_2', 'CA_3','CA_4','TX_1','TX_2','TX_3']:\n MODEL_FEATURES = ['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', \n 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept',\n 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', \n 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', \n 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', \n 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend',\n 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d',\n 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', \n 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', \n 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', \n 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36',\n 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', \n 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', \n 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', \n 'rolling_mean_168', 'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14',\n 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', \n 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', \n 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n else:\n MODEL_FEATURES = ['item_id', 'dept_id', 'cat_id', 'release', 'sell_price', 'price_max', \n 'price_min', 'price_std', 'price_mean', 'price_norm', 'price_rank_dept',\n 'price_nunique', 'item_nunique', 'price_momentum', 'price_momentum_m', \n 'price_momentum_y', 'temperature_high', 'temperature_con', 'rainfall_m', 'snow_m',\n 'event_name_1', 'event_type_1', 'event_name_2', 'event_type_2', 'snap_CA', \n 'snap_TX', 'snap_WI', 'is_first_half_month', 'event_bef_weekend', 'event_after_weekend',\n 'NBA', 'event_attention_after', 'event_attention_bef', 'event_attention_sum', 'tm_d',\n 'tm_w', 'tm_m', 'tm_q', 'tm_y', 'tm_wm', 'tm_dw', 'tm_w_end', 'enc_cat_id_mean', \n 'enc_cat_id_std', 'enc_dept_id_mean', 'enc_dept_id_std', 'enc_item_id_mean', \n 'enc_item_id_std', 'sales_lag_28', 'sales_lag_29', 'sales_lag_30', 'sales_lag_31', \n 'sales_lag_32', 'sales_lag_33', 'sales_lag_34', 'sales_lag_35', 'sales_lag_36',\n 'sales_lag_37', 'sales_lag_38', 'sales_lag_39', 'sales_lag_40', 'sales_lag_41', \n 'sales_lag_42', 'rolling_mean_7', 'rolling_std_7', 'rolling_mean_14', 'rolling_std_14', \n 'rolling_mean_28', 'rolling_std_28', 'rolling_mean_56', 'rolling_std_56', \n 'rolling_mean_168', 
'rolling_std_168', 'rolling_mean_tmp_1_7', 'rolling_mean_tmp_1_14',\n 'rolling_mean_tmp_1_28', 'rolling_mean_tmp_1_56', 'rolling_mean_tmp_7_7', \n 'rolling_mean_tmp_7_14', 'rolling_mean_tmp_7_28', 'rolling_mean_tmp_7_56', \n 'rolling_mean_tmp_14_7', 'rolling_mean_tmp_14_14', 'rolling_mean_tmp_14_28', 'rolling_mean_tmp_14_56']\n # Read all our models and make predictions\n # for each day/store pairs\n model_path = 'lgb_model_'+store_id+'_v'+str(VER)+'.bin' \n\n estimator = pickle.load(open(model_path, 'rb'))\n\n day_mask = base_test['d']==(END_TRAIN+PREDICT_DAY)\n store_mask = base_test['store_id']==store_id\n\n mask = (day_mask)&(store_mask)\n base_test[TARGET][mask] = estimator.predict(grid_df[mask][MODEL_FEATURES])\n\n # Make good column naming and add \n # to all_preds DataFrame\n temp_df = base_test[day_mask][['id',TARGET]]\n temp_df.columns = ['id','F'+str(PREDICT_DAY)]\n if 'id' in list(all_preds):\n all_preds = all_preds.merge(temp_df, on=['id'], how='left')\n else:\n all_preds = temp_df.copy()\n\n print('#'*10, ' %0.2f min round |' % ((time.time() - start_time) / 60),\n ' %0.2f min total |' % ((time.time() - main_time) / 60),\n ' %0.2f day sales |' % (temp_df['F'+str(PREDICT_DAY)].sum()))\n \n del temp_df, lag_df_new\n\nall_preds = all_preds.reset_index(drop=True)\nall_preds.head()",
"Predict | Day: 1\n########## 4.70 min round | 4.70 min total | 40022.45 day sales |\nPredict | Day: 2\n########## 4.69 min round | 9.38 min total | 37571.07 day sales |\nPredict | Day: 3\n########## 4.66 min round | 14.04 min total | 36975.99 day sales |\nPredict | Day: 4\n########## 4.63 min round | 18.67 min total | 37245.93 day sales |\nPredict | Day: 5\n########## 4.64 min round | 23.31 min total | 42771.22 day sales |\nPredict | Day: 6\n########## 4.66 min round | 27.97 min total | 50664.98 day sales |\nPredict | Day: 7\n########## 4.63 min round | 32.60 min total | 51184.89 day sales |\nPredict | Day: 8\n########## 4.65 min round | 37.25 min total | 45971.93 day sales |\nPredict | Day: 9\n########## 4.66 min round | 41.90 min total | 38775.92 day sales |\nPredict | Day: 10\n########## 4.63 min round | 46.53 min total | 43869.28 day sales |\nPredict | Day: 11\n########## 4.63 min round | 51.16 min total | 45208.85 day sales |\nPredict | Day: 12\n########## 4.74 min round | 55.90 min total | 51592.11 day sales |\nPredict | Day: 13\n########## 4.68 min round | 60.57 min total | 55155.09 day sales |\nPredict | Day: 14\n########## 4.65 min round | 65.23 min total | 56457.26 day sales |\nPredict | Day: 15\n########## 4.65 min round | 69.88 min total | 47595.92 day sales |\nPredict | Day: 16\n########## 4.64 min round | 74.52 min total | 43760.72 day sales |\nPredict | Day: 17\n########## 4.68 min round | 79.20 min total | 42460.21 day sales |\nPredict | Day: 18\n########## 4.77 min round | 83.97 min total | 44890.22 day sales |\nPredict | Day: 19\n########## 4.92 min round | 88.89 min total | 46626.42 day sales |\nPredict | Day: 20\n########## 4.79 min round | 93.68 min total | 58424.60 day sales |\nPredict | Day: 21\n########## 4.70 min round | 98.38 min total | 59082.47 day sales |\nPredict | Day: 22\n########## 4.65 min round | 103.03 min total | 45810.14 day sales |\nPredict | Day: 23\n########## 4.62 min round | 107.65 min total | 42874.91 day sales |\nPredict | Day: 24\n########## 4.65 min round | 112.30 min total | 45103.61 day sales |\nPredict | Day: 25\n########## 4.65 min round | 116.95 min total | 41880.29 day sales |\nPredict | Day: 26\n########## 4.63 min round | 121.58 min total | 47040.62 day sales |\nPredict | Day: 27\n########## 4.64 min round | 126.22 min total | 54261.00 day sales |\nPredict | Day: 28\n########## 4.62 min round | 130.84 min total | 50097.28 day sales |\n"
],
[
"all_preds.tail()",
"_____no_output_____"
],
[
"all_preds.shape",
"_____no_output_____"
],
[
"all_preds.describe()",
"_____no_output_____"
],
[
"# all the following is changed",
"_____no_output_____"
],
[
"# replace validation part\ntrain_df = pd.read_csv('sales_train_evaluation.csv')\ntrain_df=train_df[['id','d_1914','d_1915','d_1916','d_1917','d_1918','d_1919','d_1920','d_1921','d_1922','d_1923',\n 'd_1924','d_1925','d_1926','d_1927','d_1928','d_1929','d_1930','d_1931','d_1932','d_1933',\n 'd_1934','d_1935','d_1936','d_1937','d_1938','d_1939','d_1940','d_1941']]",
"_____no_output_____"
],
[
"train_df.head()",
"_____no_output_____"
],
[
"submission = pd.read_csv('sample_submission.csv')",
"_____no_output_____"
],
[
"submission.head()",
"_____no_output_____"
],
[
"submission.tail()",
"_____no_output_____"
],
[
"train_df['id']=train_df['id'].str.replace('evaluation','validation')",
"_____no_output_____"
],
[
"train_df.head()",
"_____no_output_____"
],
[
"train_df.columns=submission.columns",
"_____no_output_____"
],
[
"train_df.head()",
"_____no_output_____"
],
[
"train_df.tail()",
"_____no_output_____"
],
[
"train_df.shape",
"_____no_output_____"
],
[
"submission.shape",
"_____no_output_____"
],
[
"submission = submission[['id']]\nsub1 = submission.merge(train_df, on=['id'], how='left')",
"_____no_output_____"
],
[
"sub1.head()",
"_____no_output_____"
],
[
"sub1.tail()",
"_____no_output_____"
],
[
"sub1=sub1[:30490]",
"_____no_output_____"
],
[
"sub1.head()",
"_____no_output_____"
],
[
"sub1.tail()",
"_____no_output_____"
],
[
"sub2 = submission.merge(all_preds, on=['id'], how='left')",
"_____no_output_____"
],
[
"sub2.head()",
"_____no_output_____"
],
[
"sub2.tail()",
"_____no_output_____"
],
[
"sub2=sub2[30490:]",
"_____no_output_____"
],
[
"sub2.head()",
"_____no_output_____"
],
[
"sub2.tail()",
"_____no_output_____"
],
[
"final_sub=pd.concat([sub1,sub2],axis=0)",
"_____no_output_____"
],
[
"final_sub.head()",
"_____no_output_____"
],
[
"final_sub.tail()",
"_____no_output_____"
],
[
"final_sub.describe()",
"_____no_output_____"
],
[
"final_sub.to_csv('lgb_bystore_final3.csv',index=False)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7ec03ea8cc752c23fe41e1961c76d287bf6574
| 140,062 |
ipynb
|
Jupyter Notebook
|
9. NLP/1. Lexical Processing/2. Basic Lexical Processing/.ipynb_checkpoints/6. final+bag+of+words-checkpoint.ipynb
|
vshubh24/machineLearning
|
eedd4432ce7c58e20503611c3d801a53b74e3c2a
|
[
"Apache-2.0"
] | null | null | null |
9. NLP/1. Lexical Processing/2. Basic Lexical Processing/.ipynb_checkpoints/6. final+bag+of+words-checkpoint.ipynb
|
vshubh24/machineLearning
|
eedd4432ce7c58e20503611c3d801a53b74e3c2a
|
[
"Apache-2.0"
] | null | null | null |
9. NLP/1. Lexical Processing/2. Basic Lexical Processing/.ipynb_checkpoints/6. final+bag+of+words-checkpoint.ipynb
|
vshubh24/machineLearning
|
eedd4432ce7c58e20503611c3d801a53b74e3c2a
|
[
"Apache-2.0"
] | null | null | null | 39.299102 | 4,961 | 0.302966 |
[
[
[
"### Bag of words model",
"_____no_output_____"
]
],
[
[
"# load all necessary libraries\nimport pandas as pd\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nfrom sklearn.feature_extraction.text import CountVectorizer\n\npd.set_option('max_colwidth', 100)",
"_____no_output_____"
]
],
[
[
"#### Let's build a basic bag of words model on three sample documents",
"_____no_output_____"
]
],
[
[
"documents = [\"Gangs of Wasseypur is a great movie.\", \"The success of a movie depends on the performance of the actors.\", \"There are no new movies releasing this week.\"]\nprint(documents)",
"['Gangs of Wasseypur is a great movie.', 'The success of a movie depends on the performance of the actors.', 'There are no new movies releasing this week.']\n"
],
[
"def preprocess(document):\n 'changes document to lower case and removes stopwords'\n\n # change sentence to lower case\n document = document.lower()\n\n # tokenize into words\n words = word_tokenize(document)\n\n # remove stop words\n words = [word for word in words if word not in stopwords.words(\"english\")]\n\n # join words to make sentence\n document = \" \".join(words)\n \n return document\n\ndocuments = [preprocess(document) for document in documents]\nprint(documents)\n",
"['gangs wasseypur great movie .', 'success movie depends performance actors .', 'new movies releasing week .']\n"
]
],
[
[
"#### Creating bag of words model using count vectorizer function",
"_____no_output_____"
]
],
[
[
"vectorizer = CountVectorizer()\nbow_model = vectorizer.fit_transform(documents)\nprint(bow_model) # returns the row number and column number of the cells which have 1 as value",
" (0, 4)\t1\n (0, 3)\t1\n (0, 10)\t1\n (0, 2)\t1\n (1, 0)\t1\n (1, 7)\t1\n (1, 1)\t1\n (1, 9)\t1\n (1, 4)\t1\n (2, 11)\t1\n (2, 8)\t1\n (2, 5)\t1\n (2, 6)\t1\n"
],
[
"# print the full sparse matrix\nprint(bow_model.toarray())",
"[[0 0 1 1 1 0 0 0 0 0 1 0]\n [1 1 0 0 1 0 0 1 0 1 0 0]\n [0 0 0 0 0 1 1 0 1 0 0 1]]\n"
],
[
"print(bow_model.shape)\nprint(vectorizer.get_feature_names())",
"(3, 12)\n['actors', 'depends', 'gangs', 'great', 'movie', 'movies', 'new', 'performance', 'releasing', 'success', 'wasseypur', 'week']\n"
]
],
[
[
"### Let's create a bag of words model on the spam dataset.",
"_____no_output_____"
]
],
[
[
"# load data\nspam = pd.read_csv(\"SMSSpamCollection.txt\", sep = \"\\t\", names=[\"label\", \"message\"])\nspam.head()",
"_____no_output_____"
]
],
[
[
"##### Let's take a subset of data (first 50 rows only) and create bag of word model on that.",
"_____no_output_____"
]
],
[
[
"spam = spam.iloc[0:50,:]\nprint(spam)",
" label \\\n0 ham \n1 ham \n2 spam \n3 ham \n4 ham \n5 spam \n6 ham \n7 ham \n8 spam \n9 spam \n10 ham \n11 spam \n12 spam \n13 ham \n14 ham \n15 spam \n16 ham \n17 ham \n18 ham \n19 spam \n20 ham \n21 ham \n22 ham \n23 ham \n24 ham \n25 ham \n26 ham \n27 ham \n28 ham \n29 ham \n30 ham \n31 ham \n32 ham \n33 ham \n34 spam \n35 ham \n36 ham \n37 ham \n38 ham \n39 ham \n40 ham \n41 ham \n42 spam \n43 ham \n44 ham \n45 ham \n46 ham \n47 ham \n48 ham \n49 ham \n\n message \n0 Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there g... \n1 Ok lar... Joking wif u oni... \n2 Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive ... \n3 U dun say so early hor... U c already then say... \n4 Nah I don't think he goes to usf, he lives around here though \n5 FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for ... \n6 Even my brother is not like to speak with me. They treat me like aids patent. \n7 As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your call... \n8 WINNER!! As a valued network customer you have been selected to receivea £900 prize reward! To c... \n9 Had your mobile 11 months or more? U R entitled to Update to the latest colour mobiles with came... \n10 I'm gonna be home soon and i don't want to talk about this stuff anymore tonight, k? I've cried ... \n11 SIX chances to win CASH! From 100 to 20,000 pounds txt> CSH11 and send to 87575. Cost 150p/day, ... \n12 URGENT! You have won a 1 week FREE membership in our £100,000 Prize Jackpot! Txt the word: CLAIM... \n13 I've been searching for the right words to thank you for this breather. I promise i wont take yo... \n14 I HAVE A DATE ON SUNDAY WITH WILL!! \n15 XXXMobileMovieClub: To use your credit, click the WAP link in the next txt message or click here... \n16 Oh k...i'm watching here:) \n17 Eh u remember how 2 spell his name... Yes i did. He v naughty make until i v wet. \n18 Fine if thats the way u feel. Thats the way its gota b \n19 England v Macedonia - dont miss the goals/team news. Txt ur national team to 87077 eg ENGLAND to... \n20 Is that seriously how you spell his name? \n21 I‘m going to try for 2 months ha ha only joking \n22 So ü pay first lar... Then when is da stock comin... \n23 Aft i finish my lunch then i go str down lor. Ard 3 smth lor. U finish ur lunch already? \n24 Ffffffffff. Alright no way I can meet up with you sooner? \n25 Just forced myself to eat a slice. I'm really not hungry tho. This sucks. Mark is getting worrie... \n26 Lol your always so convincing. \n27 Did you catch the bus ? Are you frying an egg ? Did you make a tea? Are you eating your mom's le... \n28 I'm back & we're packing the car now, I'll let you know if there's room \n29 Ahhh. Work. I vaguely remember that! What does it feel like? Lol \n30 Wait that's still not all that clear, were you not sure about me being sarcastic or that that's ... \n31 Yeah he got in at 2 and was v apologetic. n had fallen out and she was actin like spoilt child a... \n32 K tell me anything about you. \n33 For fear of fainting with the of all that housework you just did? Quick have a cuppa \n34 Thanks for your subscription to Ringtone UK your mobile will be charged £5/month Please confirm ... \n35 Yup... Ok i go home look at the timings then i msg ü again... Xuhui going to learn on 2nd may to... \n36 Oops, I'll let you know when my roommate's done \n37 I see the letter B on my car \n38 Anything lor... 
U decide... \n39 Hello! How's you and how did saturday go? I was just texting to see if you'd decided to do anyth... \n40 Pls go ahead with watts. I just wanted to be sure. Do have a great weekend. Abiola \n41 Did I forget to tell you ? I want you , I need you, I crave you ... But most of all ... I love y... \n42 07732584351 - Rodger Burns - MSG = We tried to call you re your reply to our sms for a free noki... \n43 WHO ARE YOU SEEING? \n44 Great! I hope you like your man well endowed. I am <#> inches... \n45 No calls..messages..missed calls \n46 Didn't you get hep b immunisation in nigeria. \n47 Fair enough, anything going on? \n48 Yeah hopefully, if tyler can't do it I could maybe ask around a bit \n49 U don't know how stubborn I am. I didn't even want to go to the hospital. I kept telling Mark I'... \n"
],
[
"# extract the messages from the dataframe\nmessages = spam.message\nprint(messages)",
"0 Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there g...\n1 Ok lar... Joking wif u oni...\n2 Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive ...\n3 U dun say so early hor... U c already then say...\n4 Nah I don't think he goes to usf, he lives around here though\n5 FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for ...\n6 Even my brother is not like to speak with me. They treat me like aids patent.\n7 As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your call...\n8 WINNER!! As a valued network customer you have been selected to receivea £900 prize reward! To c...\n9 Had your mobile 11 months or more? U R entitled to Update to the latest colour mobiles with came...\n10 I'm gonna be home soon and i don't want to talk about this stuff anymore tonight, k? I've cried ...\n11 SIX chances to win CASH! From 100 to 20,000 pounds txt> CSH11 and send to 87575. Cost 150p/day, ...\n12 URGENT! You have won a 1 week FREE membership in our £100,000 Prize Jackpot! Txt the word: CLAIM...\n13 I've been searching for the right words to thank you for this breather. I promise i wont take yo...\n14 I HAVE A DATE ON SUNDAY WITH WILL!!\n15 XXXMobileMovieClub: To use your credit, click the WAP link in the next txt message or click here...\n16 Oh k...i'm watching here:)\n17 Eh u remember how 2 spell his name... Yes i did. He v naughty make until i v wet.\n18 Fine if thats the way u feel. Thats the way its gota b\n19 England v Macedonia - dont miss the goals/team news. Txt ur national team to 87077 eg ENGLAND to...\n20 Is that seriously how you spell his name?\n21 I‘m going to try for 2 months ha ha only joking\n22 So ü pay first lar... Then when is da stock comin...\n23 Aft i finish my lunch then i go str down lor. Ard 3 smth lor. U finish ur lunch already?\n24 Ffffffffff. Alright no way I can meet up with you sooner?\n25 Just forced myself to eat a slice. I'm really not hungry tho. This sucks. Mark is getting worrie...\n26 Lol your always so convincing.\n27 Did you catch the bus ? Are you frying an egg ? Did you make a tea? Are you eating your mom's le...\n28 I'm back & we're packing the car now, I'll let you know if there's room\n29 Ahhh. Work. I vaguely remember that! What does it feel like? Lol\n30 Wait that's still not all that clear, were you not sure about me being sarcastic or that that's ...\n31 Yeah he got in at 2 and was v apologetic. n had fallen out and she was actin like spoilt child a...\n32 K tell me anything about you.\n33 For fear of fainting with the of all that housework you just did? Quick have a cuppa\n34 Thanks for your subscription to Ringtone UK your mobile will be charged £5/month Please confirm ...\n35 Yup... Ok i go home look at the timings then i msg ü again... Xuhui going to learn on 2nd may to...\n36 Oops, I'll let you know when my roommate's done\n37 I see the letter B on my car\n38 Anything lor... U decide...\n39 Hello! How's you and how did saturday go? I was just texting to see if you'd decided to do anyth...\n40 Pls go ahead with watts. I just wanted to be sure. Do have a great weekend. Abiola\n41 Did I forget to tell you ? I want you , I need you, I crave you ... But most of all ... I love y...\n42 07732584351 - Rodger Burns - MSG = We tried to call you re your reply to our sms for a free noki...\n43 WHO ARE YOU SEEING?\n44 Great! I hope you like your man well endowed. 
I am <#> inches...\n45 No calls..messages..missed calls\n46 Didn't you get hep b immunisation in nigeria.\n47 Fair enough, anything going on?\n48 Yeah hopefully, if tyler can't do it I could maybe ask around a bit\n49 U don't know how stubborn I am. I didn't even want to go to the hospital. I kept telling Mark I'...\nName: message, dtype: object\n"
],
[
"# convert messages into list\nmessages = [message for message in messages]\nprint(messages)",
"['Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...', 'Ok lar... Joking wif u oni...', \"Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's\", 'U dun say so early hor... U c already then say...', \"Nah I don't think he goes to usf, he lives around here though\", \"FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv\", 'Even my brother is not like to speak with me. They treat me like aids patent.', \"As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your callertune for all Callers. Press *9 to copy your friends Callertune\", 'WINNER!! As a valued network customer you have been selected to receivea £900 prize reward! To claim call 09061701461. Claim code KL341. Valid 12 hours only.', 'Had your mobile 11 months or more? U R entitled to Update to the latest colour mobiles with camera for Free! Call The Mobile Update Co FREE on 08002986030', \"I'm gonna be home soon and i don't want to talk about this stuff anymore tonight, k? I've cried enough today.\", 'SIX chances to win CASH! From 100 to 20,000 pounds txt> CSH11 and send to 87575. Cost 150p/day, 6days, 16+ TsandCs apply Reply HL 4 info', 'URGENT! You have won a 1 week FREE membership in our £100,000 Prize Jackpot! Txt the word: CLAIM to No: 81010 T&C www.dbuk.net LCCLTD POBOX 4403LDNW1A7RW18', \"I've been searching for the right words to thank you for this breather. I promise i wont take your help for granted and will fulfil my promise. You have been wonderful and a blessing at all times.\", 'I HAVE A DATE ON SUNDAY WITH WILL!!', 'XXXMobileMovieClub: To use your credit, click the WAP link in the next txt message or click here>> http://wap. xxxmobilemovieclub.com?n=QJKGIGHJJGCBL', \"Oh k...i'm watching here:)\", 'Eh u remember how 2 spell his name... Yes i did. He v naughty make until i v wet.', 'Fine if that\\x92s the way u feel. That\\x92s the way its gota b', 'England v Macedonia - dont miss the goals/team news. Txt ur national team to 87077 eg ENGLAND to 87077 Try:WALES, SCOTLAND 4txt/ú1.20 POBOXox36504W45WQ 16+', 'Is that seriously how you spell his name?', 'I‘m going to try for 2 months ha ha only joking', 'So ü pay first lar... Then when is da stock comin...', 'Aft i finish my lunch then i go str down lor. Ard 3 smth lor. U finish ur lunch already?', 'Ffffffffff. Alright no way I can meet up with you sooner?', \"Just forced myself to eat a slice. I'm really not hungry tho. This sucks. Mark is getting worried. He knows I'm sick when I turn down pizza. Lol\", 'Lol your always so convincing.', \"Did you catch the bus ? Are you frying an egg ? Did you make a tea? Are you eating your mom's left over dinner ? Do you feel my Love ?\", \"I'm back & we're packing the car now, I'll let you know if there's room\", 'Ahhh. Work. I vaguely remember that! What does it feel like? Lol', \"Wait that's still not all that clear, were you not sure about me being sarcastic or that that's why x doesn't want to live with us\", \"Yeah he got in at 2 and was v apologetic. n had fallen out and she was actin like spoilt child and he got caught up in that. Till 2! But we won't go there! Not doing too badly cheers. You? \", 'K tell me anything about you.', 'For fear of fainting with the of all that housework you just did? 
Quick have a cuppa', 'Thanks for your subscription to Ringtone UK your mobile will be charged £5/month Please confirm by replying YES or NO. If you reply NO you will not be charged', 'Yup... Ok i go home look at the timings then i msg ü again... Xuhui going to learn on 2nd may too but her lesson is at 8am', \"Oops, I'll let you know when my roommate's done\", 'I see the letter B on my car', 'Anything lor... U decide...', \"Hello! How's you and how did saturday go? I was just texting to see if you'd decided to do anything tomo. Not that i'm trying to invite myself or anything!\", 'Pls go ahead with watts. I just wanted to be sure. Do have a great weekend. Abiola', 'Did I forget to tell you ? I want you , I need you, I crave you ... But most of all ... I love you my sweet Arabian steed ... Mmmmmm ... Yummy', '07732584351 - Rodger Burns - MSG = We tried to call you re your reply to our sms for a free nokia mobile + free camcorder. Please call now 08000930705 for delivery tomorrow', 'WHO ARE YOU SEEING?', 'Great! I hope you like your man well endowed. I am <#> inches...', 'No calls..messages..missed calls', \"Didn't you get hep b immunisation in nigeria.\", 'Fair enough, anything going on?', \"Yeah hopefully, if tyler can't do it I could maybe ask around a bit\", \"U don't know how stubborn I am. I didn't even want to go to the hospital. I kept telling Mark I'm not a weak sucker. Hospitals are for weak suckers.\"]\n"
],
[
"# preprocess messages using the preprocess function\nmessages = [preprocess(message) for message in messages]\nprint(messages)",
"['go jurong point , crazy.. available bugis n great world la e buffet ... cine got amore wat ...', 'ok lar ... joking wif u oni ...', \"free entry 2 wkly comp win fa cup final tkts 21st may 2005. text fa 87121 receive entry question ( std txt rate ) & c 's apply 08452810075over18 's\", 'u dun say early hor ... u c already say ...', \"nah n't think goes usf , lives around though\", \"freemsg hey darling 's 3 week 's word back ! 'd like fun still ? tb ok ! xxx std chgs send , £1.50 rcv\", 'even brother like speak . treat like aids patent .', \"per request 'melle melle ( oru minnaminunginte nurungu vettam ) ' set callertune callers . press *9 copy friends callertune\", 'winner ! ! valued network customer selected receivea £900 prize reward ! claim call 09061701461. claim code kl341 . valid 12 hours .', 'mobile 11 months ? u r entitled update latest colour mobiles camera free ! call mobile update co free 08002986030', \"'m gon na home soon n't want talk stuff anymore tonight , k ? 've cried enough today .\", 'six chances win cash ! 100 20,000 pounds txt > csh11 send 87575. cost 150p/day , 6days , 16+ tsandcs apply reply hl 4 info', 'urgent ! 1 week free membership £100,000 prize jackpot ! txt word : claim : 81010 & c www.dbuk.net lccltd pobox 4403ldnw1a7rw18', \"'ve searching right words thank breather . promise wont take help granted fulfil promise . wonderful blessing times .\", 'date sunday ! !', 'xxxmobilemovieclub : use credit , click wap link next txt message click > > http : //wap . xxxmobilemovieclub.com ? n=qjkgighjjgcbl', \"oh k ... 'm watching : )\", 'eh u remember 2 spell name ... yes . v naughty make v wet .', 'fine that\\x92s way u feel . that\\x92s way gota b', 'england v macedonia - dont miss goals/team news . txt ur national team 87077 eg england 87077 try : wales , scotland 4txt/ú1.20 poboxox36504w45wq 16+', 'seriously spell name ?', '‘ going try 2 months ha ha joking', 'ü pay first lar ... da stock comin ...', 'aft finish lunch go str lor . ard 3 smth lor . u finish ur lunch already ?', 'ffffffffff . alright way meet sooner ?', \"forced eat slice . 'm really hungry tho . sucks . mark getting worried . knows 'm sick turn pizza . lol\", 'lol always convincing .', \"catch bus ? frying egg ? make tea ? eating mom 's left dinner ? feel love ?\", \"'m back & amp ; 're packing car , 'll let know 's room\", 'ahhh . work . vaguely remember ! feel like ? lol', \"wait 's still clear , sure sarcastic 's x n't want live us\", \"yeah got 2 v apologetic . n fallen actin like spoilt child got caught . till 2 ! wo n't go ! badly cheers . ?\", 'k tell anything .', 'fear fainting housework ? quick cuppa', 'thanks subscription ringtone uk mobile charged £5/month please confirm replying yes . reply charged', 'yup ... ok go home look timings msg ü ... xuhui going learn 2nd may lesson 8am', \"oops , 'll let know roommate 's done\", 'see letter b car', 'anything lor ... u decide ...', \"hello ! 's saturday go ? texting see 'd decided anything tomo . 'm trying invite anything !\", 'pls go ahead watts . wanted sure . great weekend . abiola', 'forget tell ? want , need , crave ... ... love sweet arabian steed ... mmmmmm ... yummy', '07732584351 - rodger burns - msg = tried call reply sms free nokia mobile + free camcorder . please call 08000930705 delivery tomorrow', 'seeing ?', 'great ! hope like man well endowed . 
& lt ; # & gt ; inches ...', 'calls..messages..missed calls', \"n't get hep b immunisation nigeria .\", 'fair enough , anything going ?', \"yeah hopefully , tyler ca n't could maybe ask around bit\", \"u n't know stubborn . n't even want go hospital . kept telling mark 'm weak sucker . hospitals weak suckers .\"]\n"
],
[
"# bag of words model\nvectorizer = CountVectorizer()\nbow_model = vectorizer.fit_transform(messages)\nprint(bow_model.toarray())",
"[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]\n"
],
[
"print(bow_model.shape)\nprint(vectorizer.get_feature_names())",
"(50, 381)\n['000', '07732584351', '08000930705', '08002986030', '08452810075over18', '09061701461', '100', '11', '12', '150p', '16', '20', '2005', '21st', '2nd', '4403ldnw1a7rw18', '4txt', '50', '6days', '81010', '87077', '87121', '87575', '8am', '900', 'abiola', 'actin', 'aft', 'ahead', 'ahhh', 'aids', 'already', 'alright', 'always', 'amore', 'amp', 'anymore', 'anything', 'apologetic', 'apply', 'arabian', 'ard', 'around', 'ask', 'available', 'back', 'badly', 'bit', 'blessing', 'breather', 'brother', 'buffet', 'bugis', 'burns', 'bus', 'ca', 'call', 'callers', 'callertune', 'calls', 'camcorder', 'camera', 'car', 'cash', 'catch', 'caught', 'chances', 'charged', 'cheers', 'chgs', 'child', 'cine', 'claim', 'clear', 'click', 'co', 'code', 'colour', 'com', 'comin', 'comp', 'confirm', 'convincing', 'copy', 'cost', 'could', 'crave', 'crazy', 'credit', 'cried', 'csh11', 'cup', 'cuppa', 'customer', 'da', 'darling', 'date', 'day', 'dbuk', 'decide', 'decided', 'delivery', 'dinner', 'done', 'dont', 'dun', 'early', 'eat', 'eating', 'eg', 'egg', 'eh', 'endowed', 'england', 'enough', 'entitled', 'entry', 'even', 'fa', 'fainting', 'fair', 'fallen', 'fear', 'feel', 'ffffffffff', 'final', 'fine', 'finish', 'first', 'forced', 'forget', 'free', 'freemsg', 'friends', 'frying', 'fulfil', 'fun', 'get', 'getting', 'go', 'goals', 'goes', 'going', 'gon', 'got', 'gota', 'granted', 'great', 'gt', 'ha', 'hello', 'help', 'hep', 'hey', 'hl', 'home', 'hope', 'hopefully', 'hor', 'hospital', 'hospitals', 'hours', 'housework', 'http', 'hungry', 'immunisation', 'inches', 'info', 'invite', 'jackpot', 'joking', 'jurong', 'kept', 'kl341', 'know', 'knows', 'la', 'lar', 'latest', 'lccltd', 'learn', 'left', 'lesson', 'let', 'letter', 'like', 'link', 'live', 'lives', 'll', 'lol', 'look', 'lor', 'love', 'lt', 'lunch', 'macedonia', 'make', 'man', 'mark', 'may', 'maybe', 'meet', 'melle', 'membership', 'message', 'messages', 'minnaminunginte', 'miss', 'missed', 'mmmmmm', 'mobile', 'mobiles', 'mom', 'month', 'months', 'msg', 'na', 'nah', 'name', 'national', 'naughty', 'need', 'net', 'network', 'news', 'next', 'nigeria', 'nokia', 'nurungu', 'oh', 'ok', 'oni', 'oops', 'oru', 'packing', 'patent', 'pay', 'per', 'pizza', 'please', 'pls', 'pobox', 'poboxox36504w45wq', 'point', 'pounds', 'press', 'prize', 'promise', 'qjkgighjjgcbl', 'question', 'quick', 'rate', 'rcv', 're', 'really', 'receive', 'receivea', 'remember', 'reply', 'replying', 'request', 'reward', 'right', 'ringtone', 'rodger', 'room', 'roommate', 'sarcastic', 'saturday', 'say', 'scotland', 'searching', 'see', 'seeing', 'selected', 'send', 'seriously', 'set', 'sick', 'six', 'slice', 'sms', 'smth', 'soon', 'sooner', 'speak', 'spell', 'spoilt', 'std', 'steed', 'still', 'stock', 'str', 'stubborn', 'stuff', 'subscription', 'sucker', 'suckers', 'sucks', 'sunday', 'sure', 'sweet', 'take', 'talk', 'tb', 'tea', 'team', 'tell', 'telling', 'text', 'texting', 'thank', 'thanks', 'that', 'think', 'tho', 'though', 'till', 'times', 'timings', 'tkts', 'today', 'tomo', 'tomorrow', 'tonight', 'treat', 'tried', 'try', 'trying', 'tsandcs', 'turn', 'txt', 'tyler', 'uk', 'update', 'ur', 'urgent', 'us', 'use', 'usf', 'vaguely', 'valid', 'valued', 've', 'vettam', 'wait', 'wales', 'want', 'wanted', 'wap', 'wat', 'watching', 'watts', 'way', 'weak', 'week', 'weekend', 'well', 'wet', 'wif', 'win', 'winner', 'wkly', 'wo', 'wonderful', 'wont', 'word', 'words', 'work', 'world', 'worried', 'www', 'xuhui', 'xxx', 'xxxmobilemovieclub', 'yeah', 'yes', 'yummy', 'yup', 'ú1']\n"
]
],
[
[
"* A lot of duplicate tokens such as 'win'and 'winner'; 'reply' and 'replying'; 'want' and 'wanted' etc. ",
"_____no_output_____"
],
[
"## Stemming and lemmatising",
"_____no_output_____"
]
],
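[
[
"# A quick look at how the stemmer and lemmatizer treat such near-duplicate tokens\n# (a minimal sketch; assumes the NLTK 'wordnet' corpus has been downloaded, e.g. via nltk.download('wordnet'))\nfrom nltk.stem.porter import PorterStemmer\nfrom nltk.stem import WordNetLemmatizer\n\ndemo_stemmer = PorterStemmer()\ndemo_lemmatizer = WordNetLemmatizer()\nfor token in ['win', 'winner', 'reply', 'replying', 'want', 'wanted']:\n    print(token, '->', demo_stemmer.stem(token), '/', demo_lemmatizer.lemmatize(token, pos='v'))",
"_____no_output_____"
]
],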
[
[
"from nltk.stem.porter import PorterStemmer\nfrom nltk.stem import WordNetLemmatizer\n\nstemmer = PorterStemmer()\nwordnet_lemmatizer = WordNetLemmatizer()\n\n# add stemming and lemmatisation in the preprocess function\ndef preprocess(document, stem=True):\n 'changes document to lower case and removes stopwords'\n\n # change sentence to lower case\n document = document.lower()\n\n # tokenize into words\n words = word_tokenize(document)\n\n # remove stop words\n words = [word for word in words if word not in stopwords.words(\"english\")]\n \n if stem:\n words = [stemmer.stem(word) for word in words]\n else:\n words = [wordnet_lemmatizer.lemmatize(word, pos='v') for word in words]\n\n # join words to make sentence\n document = \" \".join(words)\n \n return document",
"_____no_output_____"
]
],
[
[
"### Bag of words model on stemmed messages",
"_____no_output_____"
]
],
[
[
"# stem messages\nmessages = [preprocess(message, stem=True) for message in spam.message]\n\n# bag of words model\nvectorizer = CountVectorizer()\nbow_model = vectorizer.fit_transform(messages)",
"_____no_output_____"
],
[
"# look at the dataframe\npd.DataFrame(bow_model.toarray(), columns = vectorizer.get_feature_names())",
"_____no_output_____"
],
[
"# token names\nprint(vectorizer.get_feature_names())",
"['000', '07732584351', '08000930705', '08002986030', '08452810075over18', '09061701461', '100', '11', '12', '150p', '16', '20', '2005', '21st', '2nd', '4403ldnw1a7rw18', '4txt', '50', '6day', '81010', '87077', '87121', '87575', '8am', '900', 'abiola', 'actin', 'aft', 'ahead', 'ahhh', 'aid', 'alreadi', 'alright', 'alway', 'amor', 'amp', 'anymor', 'anyth', 'apologet', 'appli', 'arabian', 'ard', 'around', 'ask', 'avail', 'back', 'badli', 'bit', 'bless', 'breather', 'brother', 'bu', 'buffet', 'bugi', 'burn', 'ca', 'call', 'caller', 'callertun', 'calls', 'camcord', 'camera', 'car', 'cash', 'catch', 'caught', 'chanc', 'charg', 'cheer', 'chg', 'child', 'cine', 'claim', 'clear', 'click', 'co', 'code', 'colour', 'com', 'comin', 'comp', 'confirm', 'convinc', 'copi', 'cost', 'could', 'crave', 'crazy', 'credit', 'cri', 'csh11', 'cup', 'cuppa', 'custom', 'da', 'darl', 'date', 'day', 'dbuk', 'decid', 'deliveri', 'dinner', 'done', 'dont', 'dun', 'earli', 'eat', 'eg', 'egg', 'eh', 'endow', 'england', 'enough', 'entitl', 'entri', 'even', 'fa', 'faint', 'fair', 'fallen', 'fear', 'feel', 'ffffffffff', 'final', 'fine', 'finish', 'first', 'forc', 'forget', 'free', 'freemsg', 'fri', 'friend', 'fulfil', 'fun', 'get', 'go', 'goals', 'goe', 'gon', 'got', 'gota', 'grant', 'great', 'gt', 'ha', 'hello', 'help', 'hep', 'hey', 'hl', 'home', 'hope', 'hor', 'hospit', 'hour', 'housework', 'http', 'hungri', 'immunis', 'inch', 'info', 'invit', 'jackpot', 'joke', 'jurong', 'kept', 'kl341', 'know', 'la', 'lar', 'latest', 'lccltd', 'learn', 'left', 'lesson', 'let', 'letter', 'like', 'link', 'live', 'll', 'lol', 'look', 'lor', 'love', 'lt', 'lunch', 'macedonia', 'make', 'man', 'mark', 'may', 'mayb', 'meet', 'mell', 'membership', 'messag', 'messages', 'minnaminungint', 'miss', 'mmmmmm', 'mobil', 'mom', 'month', 'msg', 'na', 'nah', 'name', 'nation', 'naughti', 'need', 'net', 'network', 'news', 'next', 'nigeria', 'nokia', 'nurungu', 'oh', 'ok', 'oni', 'oop', 'oru', 'pack', 'patent', 'pay', 'per', 'pizza', 'pl', 'pleas', 'pobox', 'poboxox36504w45wq', 'point', 'pound', 'press', 'prize', 'promis', 'qjkgighjjgcbl', 'question', 'quick', 'rate', 'rcv', 're', 'realli', 'receiv', 'receivea', 'rememb', 'repli', 'request', 'reward', 'right', 'rington', 'rodger', 'room', 'roommat', 'sarcast', 'saturday', 'say', 'scotland', 'search', 'see', 'select', 'send', 'serious', 'set', 'sick', 'six', 'slice', 'sm', 'smth', 'soon', 'sooner', 'speak', 'spell', 'spoilt', 'std', 'steed', 'still', 'stock', 'str', 'stubborn', 'stuff', 'subscript', 'suck', 'sucker', 'sunday', 'sure', 'sweet', 'take', 'talk', 'tb', 'tea', 'team', 'tell', 'text', 'thank', 'that', 'think', 'tho', 'though', 'till', 'time', 'tkt', 'today', 'tomo', 'tomorrow', 'tonight', 'treat', 'tri', 'tsandc', 'turn', 'txt', 'tyler', 'uk', 'updat', 'ur', 'urgent', 'us', 'use', 'usf', 'vagu', 'valid', 'valu', 've', 'vettam', 'wait', 'wale', 'want', 'wap', 'wat', 'watch', 'watt', 'way', 'weak', 'week', 'weekend', 'well', 'wet', 'wif', 'win', 'winner', 'wkli', 'wo', 'wonder', 'wont', 'word', 'work', 'world', 'worri', 'www', 'xuhui', 'xxx', 'xxxmobilemovieclub', 'ye', 'yeah', 'yummi', 'yup', 'ú1']\n"
]
],
[
[
"### 359 tokens after stemming the messages as compared to 381 tokens without stemming.\n\n### Let's try lemmatizing the messages.",
"_____no_output_____"
]
],
[
[
"# lemmatise messages\nmessages = [preprocess(message, stem=False) for message in spam.message]\n\n# bag of words model\nvectorizer = CountVectorizer()\nbow_model = vectorizer.fit_transform(messages)",
"_____no_output_____"
],
[
"# look at the dataframe\npd.DataFrame(bow_model.toarray(), columns = vectorizer.get_feature_names())",
"_____no_output_____"
],
[
"# token names\nprint(vectorizer.get_feature_names())",
"['000', '07732584351', '08000930705', '08002986030', '08452810075over18', '09061701461', '100', '11', '12', '150p', '16', '20', '2005', '21st', '2nd', '4403ldnw1a7rw18', '4txt', '50', '6days', '81010', '87077', '87121', '87575', '8am', '900', 'abiola', 'actin', 'aft', 'ahead', 'ahhh', 'aid', 'already', 'alright', 'always', 'amore', 'amp', 'anymore', 'anything', 'apologetic', 'apply', 'arabian', 'ard', 'around', 'ask', 'available', 'back', 'badly', 'bite', 'bless', 'breather', 'brother', 'buffet', 'bugis', 'burn', 'bus', 'ca', 'call', 'callers', 'callertune', 'calls', 'camcorder', 'camera', 'car', 'cash', 'catch', 'chance', 'charge', 'cheer', 'chgs', 'child', 'cine', 'claim', 'clear', 'click', 'co', 'code', 'colour', 'com', 'comin', 'comp', 'confirm', 'convince', 'copy', 'cost', 'could', 'crave', 'crazy', 'credit', 'cry', 'csh11', 'cup', 'cuppa', 'customer', 'da', 'darling', 'date', 'day', 'dbuk', 'decide', 'delivery', 'dinner', 'do', 'dont', 'dun', 'early', 'eat', 'eg', 'egg', 'eh', 'endow', 'england', 'enough', 'entitle', 'entry', 'even', 'fa', 'faint', 'fair', 'fall', 'fear', 'feel', 'ffffffffff', 'final', 'fine', 'finish', 'first', 'force', 'forget', 'free', 'freemsg', 'friends', 'fry', 'fulfil', 'fun', 'get', 'go', 'goals', 'gon', 'gota', 'grant', 'great', 'gt', 'ha', 'hello', 'help', 'hep', 'hey', 'hl', 'home', 'hope', 'hopefully', 'hor', 'hospital', 'hospitals', 'hours', 'housework', 'http', 'hungry', 'immunisation', 'inch', 'info', 'invite', 'jackpot', 'joke', 'jurong', 'keep', 'kl341', 'know', 'la', 'lar', 'latest', 'lccltd', 'learn', 'leave', 'lesson', 'let', 'letter', 'like', 'link', 'live', 'll', 'lol', 'look', 'lor', 'love', 'lt', 'lunch', 'macedonia', 'make', 'man', 'mark', 'may', 'maybe', 'meet', 'melle', 'membership', 'message', 'messages', 'minnaminunginte', 'miss', 'missed', 'mmmmmm', 'mobile', 'mobiles', 'mom', 'month', 'months', 'msg', 'na', 'nah', 'name', 'national', 'naughty', 'need', 'net', 'network', 'news', 'next', 'nigeria', 'nokia', 'nurungu', 'oh', 'ok', 'oni', 'oops', 'oru', 'pack', 'patent', 'pay', 'per', 'pizza', 'please', 'pls', 'pobox', 'poboxox36504w45wq', 'point', 'pound', 'press', 'prize', 'promise', 'qjkgighjjgcbl', 'question', 'quick', 'rate', 'rcv', 're', 'really', 'receive', 'receivea', 'remember', 'reply', 'request', 'reward', 'right', 'ringtone', 'rodger', 'room', 'roommate', 'sarcastic', 'saturday', 'say', 'scotland', 'search', 'see', 'select', 'send', 'seriously', 'set', 'sick', 'six', 'slice', 'sms', 'smth', 'soon', 'sooner', 'speak', 'spell', 'spoil', 'std', 'steed', 'still', 'stock', 'str', 'stubborn', 'stuff', 'subscription', 'suck', 'sucker', 'suckers', 'sunday', 'sure', 'sweet', 'take', 'talk', 'tb', 'tea', 'team', 'tell', 'text', 'texting', 'thank', 'that', 'think', 'tho', 'though', 'till', 'time', 'tkts', 'today', 'tomo', 'tomorrow', 'tonight', 'treat', 'try', 'tsandcs', 'turn', 'txt', 'tyler', 'uk', 'update', 'ur', 'urgent', 'us', 'use', 'usf', 'vaguely', 'valid', 'value', 've', 'vettam', 'wait', 'wales', 'want', 'wap', 'wat', 'watch', 'watts', 'way', 'weak', 'week', 'weekend', 'well', 'wet', 'wif', 'win', 'winner', 'wkly', 'wo', 'wonderful', 'wont', 'word', 'work', 'world', 'worry', 'www', 'xuhui', 'xxx', 'xxxmobilemovieclub', 'yeah', 'yes', 'yummy', 'yup', 'ú1']\n"
]
],
[
[
"### 363 tokens after lemmatizing the messages as compared to 381 tokens without lemmatising. But, on the other hand, stemmer reduces the token count to 359. Lemmatization doesn't work as expected because the data is very unclean.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a7ec901aeaf8d27ce0740a4728e4ab3d8c55711
| 3,938 |
ipynb
|
Jupyter Notebook
|
notebooks/11_Storage/ebs-delete.ipynb
|
dalbhanj/eks-kubeflow-workshop
|
c72ff6f2658db570c0f666aa4395f66c59429fad
|
[
"Apache-2.0"
] | null | null | null |
notebooks/11_Storage/ebs-delete.ipynb
|
dalbhanj/eks-kubeflow-workshop
|
c72ff6f2658db570c0f666aa4395f66c59429fad
|
[
"Apache-2.0"
] | null | null | null |
notebooks/11_Storage/ebs-delete.ipynb
|
dalbhanj/eks-kubeflow-workshop
|
c72ff6f2658db570c0f666aa4395f66c59429fad
|
[
"Apache-2.0"
] | 1 |
2021-05-17T13:32:47.000Z
|
2021-05-17T13:32:47.000Z
| 27.929078 | 96 | 0.537836 |
[
[
[
"import kfp\nimport kfp.dsl as dsl\nfrom kfp import compiler\n\n#from irml_tim.kubeflow import transformers\nfrom kubernetes import client as k8s_client\nfrom kubernetes.client.models import V1EnvVar, V1SecretKeySelector",
"_____no_output_____"
],
[
"def node_selector(op):\n if isinstance(op, dsl.ContainerOp):\n # op.add_node_selector_constraint('compute-size', 'cpu-small')\n # op.add_node_selector_constraint('single-az', 'true')\n op.add_node_selector_constraint('spot', 'false')\n op.container.set_memory_request(\"2G\")\n op.container.set_cpu_request(\"1\")",
"_____no_output_____"
],
[
"@dsl.pipeline(\n name=\"VolumeOp Basic\",\n description=\"A Basic Example on VolumeOp Usage.\"\n)\ndef ebs_pipeline(size=\"1Gi\"):\n\n vop = dsl.VolumeOp(\n name=\"create_pvc\",\n resource_name=\"my-pvc\",\n modes=dsl.VOLUME_MODE_RWO,\n size=size\n )\n\n cop = dsl.ContainerOp(\n name=\"Component1\",\n image=\"library/bash:4.4.23\",\n command=[\"sh\", \"-c\"],\n arguments=[\"sleep 1m && echo foo > /mnt/file1\"],\n pvolumes={\"/mnt\": vop.volume}\n )\n\n cop2 = dsl.ContainerOp(\n name=\"Component2\",\n image=\"library/bash:4.4.23\",\n command=[\"sh\", \"-c\"],\n arguments=[\"cat /mnt/file1\"],\n pvolumes={\"/mnt\": vop.volume}\n ).after(cop)\n \n vop.delete().after(cop2)\n\n pipeline_conf = dsl.get_pipeline_conf()\n# pipeline_conf.add_op_transformer(transformers.irml_defaults)\n# pipeline_conf.add_op_transformer(node_selector)\n\n\nsample_input = {'size': '1Gi'} ",
"_____no_output_____"
],
[
"# Get or create an experiment and submit a pipeline run\nEXPERIMENT_NAME='ebs-delete'\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)",
"_____no_output_____"
],
[
"pipeline_func = ebs_pipeline\npipeline_filename = pipeline_func.__name__ + '.zip'\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)",
"_____no_output_____"
],
[
"# Specify pipeline argument values\narguments = {'size': '1Gi'}\n\n# Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)\n\n# This link leads to the run information page. \n# Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7ece42a9aa06c0448fbbbc51ace220c1e24508
| 77,293 |
ipynb
|
Jupyter Notebook
|
notebooks/M2-cross_validation_sol_01.ipynb
|
datagistips/scikit-learn-mooc
|
9eb67c53173218b5cd3061712c827c6a663e425a
|
[
"CC-BY-4.0"
] | 1 |
2021-07-14T09:41:21.000Z
|
2021-07-14T09:41:21.000Z
|
notebooks/M2-cross_validation_sol_01.ipynb
|
datagistips/scikit-learn-mooc
|
9eb67c53173218b5cd3061712c827c6a663e425a
|
[
"CC-BY-4.0"
] | null | null | null |
notebooks/M2-cross_validation_sol_01.ipynb
|
datagistips/scikit-learn-mooc
|
9eb67c53173218b5cd3061712c827c6a663e425a
|
[
"CC-BY-4.0"
] | null | null | null | 174.083333 | 32,360 | 0.891154 |
[
[
[
"# 📃 Solution for Exercise M2.01\n\nThe aim of this exercise is to make the following experiments:\n\n* train and test a support vector machine classifier through\n cross-validation;\n* study the effect of the parameter gamma of this classifier using a\n validation curve;\n* study if it would be useful in term of classification if we could add new\n samples in the dataset using a learning curve.\n\nTo make these experiments we will first load the blood transfusion dataset.",
"_____no_output_____"
],
[
"<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">If you want a deeper overview regarding this dataset, you can refer to the\nAppendix - Datasets description section at the end of this MOOC.</p>\n</div>",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nblood_transfusion = pd.read_csv(\"../datasets/blood_transfusion.csv\")\ndata = blood_transfusion.drop(columns=\"Class\")\ntarget = blood_transfusion[\"Class\"]",
"_____no_output_____"
]
],
[
[
"We will use a support vector machine classifier (SVM). In its most simple\nform, a SVM classifier is a linear classifier behaving similarly to a\nlogistic regression. Indeed, the optimization used to find the optimal\nweights of the linear model are different but we don't need to know these\ndetails for the exercise.\n\nAlso, this classifier can become more flexible/expressive by using a\nso-called kernel making the model becomes non-linear. Again, no requirement\nregarding the mathematics is required to accomplish this exercise.\n\nWe will use an RBF kernel where a parameter `gamma` allows to tune the\nflexibility of the model.\n\nFirst let's create a predictive pipeline made of:\n\n* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)\n with default parameter;\n* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)\n where the parameter `kernel` could be set to `\"rbf\"`. Note that this is the\n default.",
"_____no_output_____"
]
],
[
[
"from sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.svm import SVC\n\nmodel = make_pipeline(StandardScaler(), SVC())",
"_____no_output_____"
]
],
[
[
"Evaluate the statistical performance of your model by cross-validation with a\n`ShuffleSplit` scheme. Thus, you can use\n[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)\nand pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)\nto the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`\nand let the other parameters to the default.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import cross_validate, ShuffleSplit\n\ncv = ShuffleSplit(random_state=0)\ncv_results = cross_validate(model, data, target, cv=cv, n_jobs=-1)\ncv_results = pd.DataFrame(cv_results)\ncv_results",
"_____no_output_____"
],
[
"print(\n f\"Accuracy score of our model:\\n\"\n f\"{cv_results['test_score'].mean():.3f} +/- \"\n f\"{cv_results['test_score'].std():.3f}\"\n)",
"Accuracy score of our model:\n0.765 +/- 0.043\n"
]
],
[
[
"As previously mentioned, the parameter `gamma` is one of the parameter\ncontrolling under/over-fitting in support vector machine with an RBF kernel.\n\nCompute the validation curve to evaluate the effect of the parameter `gamma`.\nYou can vary its value between `10e-3` and `10e2` by generating samples on a\nlogarithmic scale. Thus, you can use `np.logspace(-3, 2, num=30)`.\n\nSince we are manipulating a `Pipeline` the parameter name will be set to\n`svc__gamma` instead of only `gamma`. You can retrieve the parameter name\nusing `model.get_params().keys()`. We will go more into details regarding\naccessing and setting hyperparameter in the next section.",
"_____no_output_____"
]
],
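[
[
"# Peek at the pipeline's parameter names to confirm the 'svc__gamma' key mentioned above (a small sketch)\nprint(sorted(model.get_params().keys()))",
"_____no_output_____"
]
],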
[
[
"import numpy as np\nfrom sklearn.model_selection import validation_curve\n\ngammas = np.logspace(-3, 2, num=30)\nparam_name = \"svc__gamma\"\ntrain_scores, test_scores = validation_curve(\n model, data, target, param_name=param_name, param_range=gammas, cv=cv,\n n_jobs=-1)",
"_____no_output_____"
]
],
[
[
"Plot the validation curve for the train and test scores.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.errorbar(gammas, train_scores.mean(axis=1),\n yerr=train_scores.std(axis=1), label='Training error')\nplt.errorbar(gammas, test_scores.mean(axis=1),\n yerr=test_scores.std(axis=1), label='Testing error')\nplt.legend()\n\nplt.xscale(\"log\")\nplt.xlabel(r\"Value of hyperparameter $\\gamma$\")\nplt.ylabel(\"Accuracy score\")\n_ = plt.title(\"Validation score of support vector machine\")",
"_____no_output_____"
]
],
[
[
"Looking at the curve, we can clearly identify the over-fitting regime of\nthe SVC classifier when `gamma > 1`.\nThe best setting is around `gamma = 1` while for `gamma < 1`,\nit is not very clear if the classifier is under-fitting but the\ntesting score is worse than for `gamma = 1`.",
"_____no_output_____"
],
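[
"# A quick numeric check of the sweet spot seen on the curve\n# (a small sketch; reuses 'gammas' and 'test_scores' from the validation-curve cells above)\nbest_gamma = gammas[test_scores.mean(axis=1).argmax()]\nprint('Best gamma by mean test accuracy:', best_gamma)",
"_____no_output_____"
],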
[
"Now, you can perform an analysis to check whether adding new samples to the\ndataset could help our model to better generalize. Compute the learning curve\n(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))\nby computing the train and test scores for different training dataset size.\nPlot the train and test scores with respect to the number of samples.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import learning_curve\n\ntrain_sizes = np.linspace(0.1, 1, num=10)\nresults = learning_curve(\n model, data, target, train_sizes=train_sizes, cv=cv, n_jobs=-1)\ntrain_size, train_scores, test_scores = results[:3]",
"_____no_output_____"
],
[
"plt.errorbar(train_size, train_scores.mean(axis=1),\n yerr=train_scores.std(axis=1), label='Training error')\nplt.errorbar(train_size, test_scores.mean(axis=1),\n yerr=test_scores.std(axis=1), label='Testing error')\nplt.legend()\n\nplt.xlabel(\"Number of samples in the training set\")\nplt.ylabel(\"Accuracy\")\n_ = plt.title(\"Learning curve for support vector machine\")",
"_____no_output_____"
]
],
[
[
"We observe that adding new samples in the dataset does not improve the\ntesting score. We can only conclude that the standard deviation of\nthe training error is decreasing when adding more samples which is not a\nsurprise.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a7edf2d9cc06d547f5c00170c138c46b206216e
| 26,514 |
ipynb
|
Jupyter Notebook
|
chapter16-pdf-batch-processing/Improve-accuracy-of-pdf-processing-with-Amazon-Textract-and-Amazon-A2I-forGitHub.ipynb
|
premranga/Natural-Language-Processing-with-AWS-AI-Services
|
12a0334287304fc62c673c374d1e333a5beef723
|
[
"MIT"
] | 1 |
2021-09-27T12:05:00.000Z
|
2021-09-27T12:05:00.000Z
|
chapter16-pdf-batch-processing/Improve-accuracy-of-pdf-processing-with-Amazon-Textract-and-Amazon-A2I-forGitHub.ipynb
|
premranga/Natural-Language-Processing-with-AWS-AI-Services
|
12a0334287304fc62c673c374d1e333a5beef723
|
[
"MIT"
] | null | null | null |
chapter16-pdf-batch-processing/Improve-accuracy-of-pdf-processing-with-Amazon-Textract-and-Amazon-A2I-forGitHub.ipynb
|
premranga/Natural-Language-Processing-with-AWS-AI-Services
|
12a0334287304fc62c673c374d1e333a5beef723
|
[
"MIT"
] | null | null | null | 36.876217 | 381 | 0.551633 |
[
[
[
"# Improve accuracy of pdf batch processing with Amazon Textract and Amazon A2I\n\nIn this chapter and this accompanying notebook learn with an example on how you can use Amazon Textract in asynchronous mode by extracting content from multiple PDF files in batch, and sending specific content from these PDF documents to an Amazon A2I human review loop to review and modify the values, and send them to an Amazon DynamoDB table for downstream processing. \n\n**Important Note:** This is an accompanying notebook for Chapter 16 - Improve accuracy of pdf batch processing with Amazon Textract and Amazon A2I from the Natural Language Processing with AWS AI Services book. Please make sure to read the instructions provided in the book prior to attempting this notebook. ",
"_____no_output_____"
],
[
"### Step 0 - Create a private human review workforce\n\nThis step requires you to use the AWS Console. However, we highly recommend that you follow it, especially when creating your own task with a custom template we will use for this notebook. We will create a private workteam and add only one user (you) to it.\n\nTo create a private team:\n\n 1. Go to AWS Console > Amazon SageMaker > Labeling workforces\n 1. Click \"Private\" and then \"Create private team\".\n 1. Enter the desired name for your private workteam.\n 1. Enter your own email address in the \"Email addresses\" section.\n 1. Enter the name of your organization and a contact email to administer the private workteam.\n 1. Click \"Create Private Team\".\n 1. The AWS Console should now return to AWS Console > Amazon SageMaker > Labeling workforces. Your newly created team should be visible under \"Private teams\". Next to it you will see an ARN which is a long string that looks like arn:aws:sagemaker:region-name-123456:workteam/private-crowd/team-name. Please copy this ARN to paste in the cell below.\n 1. You should get an email from [email protected] that contains your workforce username and password.\n 1. In AWS Console > Amazon SageMaker > Labeling workforces, click on the URL in Labeling portal sign-in URL. Use the email/password combination from Step 8 to log in (you will be asked to create a new, non-default password).\n 1. This is your private worker's interface. When we create a verification task in Verify your task using a private team below, your task should appear in this window. You can invite your colleagues to participate in the labeling job by clicking the \"Invite new workers\" button.\n\nPlease refer to the [Amazon SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html) if you need more details.",
"_____no_output_____"
],
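[
"# Optional sanity check: list the private workteams visible to this account to confirm the ARN from Step 0\n# (a minimal, self-contained sketch; assumes your AWS credentials and region are configured for this notebook)\nimport boto3\nfor team in boto3.client('sagemaker').list_workteams()['Workteams']:\n    print(team['WorkteamName'], '->', team['WorkteamArn'])",
"_____no_output_____"
],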
[
"### Step 1 - Import libraries and initiliaze variables",
"_____no_output_____"
]
],
[
[
"# Step 1 - Cell 1\nimport urllib\nimport boto3\nimport os\nimport json\nimport time\nimport uuid\nimport sagemaker\nimport pandas as pd\nfrom sagemaker import get_execution_role\nfrom sagemaker.s3 import S3Uploader, S3Downloader\n\ntextract = boto3.client('textract')\ns3 = boto3.resource('s3')\nbucket = \"<S3-bucket-name>\"\nprefix = 'chapter16/input'\n# Enter the Workteam ARN you created from point 7 in Step 0 above\nWORKTEAM_ARN= '<your-private-workteam-arn>'",
"_____no_output_____"
],
[
"# Step 1 - Cell 2\n# Upload the SEC registration documents\ns3_client = boto3.client('s3')\nfor secfile in os.listdir():\n if secfile.endswith('pdf'):\n response = s3_client.upload_file(secfile, bucket, prefix+'/'+secfile)\n print(\"Uploaded {} to S3 bucket {} in folder {}\".format(secfile, bucket, prefix))",
"_____no_output_____"
]
],
[
[
"### Step 2 - Start Amazon Textract Text Detection Job",
"_____no_output_____"
]
],
[
[
"# Step 2 - Cell 1\ninput_bucket = s3.Bucket(bucket)\njobids = {}",
"_____no_output_____"
],
[
"# Step 2 - Cell 2\nfor doc in input_bucket.objects.all():\n if doc.key.startswith(prefix) and doc.key.endswith('pdf'): \n tres = textract.start_document_text_detection(\n DocumentLocation={\n \"S3Object\": {\n \"Bucket\": bucket,\n \"Name\": doc.key\n }\n }\n )\n jobids[doc.key.split('/')[2]] = tres['JobId']",
"_____no_output_____"
],
[
"# Step 2 - Cell 3\nfor j in jobids:\n print(\"Textract detection Job ID for {} is {}\".format(j,str(jobids[j])))",
"_____no_output_____"
]
],
[
[
"### Step 3 - Get Amazon Textract Text Detection Results",
"_____no_output_____"
]
],
[
[
"# Step 3 - Cell 1\n\nclass TextExtractor():\n def extract_text(self, jobId):\n \"\"\" Extract text from document corresponding to jobId and\n generate a list of pages containing the text\n \"\"\"\n\n textract_result = self.__get_textract_result(jobId)\n pages = {}\n self.__extract_all_pages(jobId, textract_result, pages, [])\n return pages\n\n def __get_textract_result(self, jobId):\n \"\"\" retrieve textract result with job Id \"\"\"\n\n result = textract.get_document_text_detection(\n JobId=jobId\n )\n return result\n\n def __extract_all_pages(self, jobId, textract_result, pages, page_numbers):\n \"\"\" extract page content: build the pages array,\n recurse if response is too big (when NextToken is provided by textract)\n \"\"\"\n blocks = [x for x in textract_result['Blocks'] if x['BlockType'] == \"LINE\"]\n content = {}\n line = 0\n for block in blocks:\n line += 1\n content['Text'+str(line)] = block['Text']\n content['Confidence'+str(line)] = block['Confidence']\n if block['Page'] not in page_numbers:\n page_numbers.append(block['Page'])\n pages[block['Page']] = {\n \"Number\": block['Page'],\n \"Content\": content\n }\n else:\n pages[block['Page']]['Content'] = content\n nextToken = textract_result.get(\"NextToken\", \"\")\n if nextToken != '':\n textract_result = textract.get_document_text_detection(\n JobId=jobId,\n NextToken=nextToken\n )\n self.__extract_all_pages(jobId,\n textract_result,\n pages,\n page_numbers)",
"_____no_output_____"
],
[
"# Step 3 - Cell 2\ntext_extractor = TextExtractor()\nindoc = {}\ndf_indoc = pd.DataFrame(columns = ['DocName','LineNr','DetectedText','Confidence', 'CorrectedText', 'Comments'])\nfor x in jobids:\n pages = text_extractor.extract_text(jobids[x])\n contdict =pages[1]['Content']\n for row in range(1,(int(len(contdict)/2))+1):\n df_indoc.loc[len(df_indoc.index)] = [x, row, contdict['Text'+str(row)], round(contdict['Confidence'+str(row)],1),'','']\n# Uncomment the line below if you want to review the contents of this dataframe\n#df_indoc.to_csv('extract.csv')",
"_____no_output_____"
],
[
"# Step 3 - Cell 3\n# The lines in each document that are of importance for the human loop to review\nbounding_dict = {'lines': '9:11:12:13:15:16:17:18:19:20:21:22:23:24:25'}",
"_____no_output_____"
],
[
"# Step 3 - Cell 4\n# Let us now create a new dataframe that only contains the subset of lines we need from the bounding_dict\ndf_newdoc = pd.DataFrame(columns = ['DocName','LineNr','DetectedText','Confidence','CorrectedText','Comments'])\nfor idx, row in df_indoc.iterrows():\n if str(row['LineNr']) in bounding_dict['lines'].split(':'):\n df_newdoc.loc[len(df_newdoc.index)] = [row['DocName'],row['LineNr'], row['DetectedText'], row['Confidence'], row['CorrectedText'],row['Comments']]\ndf_newdoc",
"_____no_output_____"
]
],
[
[
"### Step 4 - Create the Amazon A2I human review Task UI\nWe will customize a sample tabular template from the Amazon A2I sample Task UI template page - https://github.com/aws-samples/amazon-a2i-sample-task-uis",
"_____no_output_____"
]
],
[
[
"# Step 4 - Cell 1\n# Initialize A2I variables\na2i_prefix = \"chapter16/a2i-results\"\n\n# Define IAM role\nrole = get_execution_role()\nprint(\"RoleArn: {}\".format(role))\n\ntimestamp = time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n# Amazon SageMaker client\nsagemaker_client = boto3.client('sagemaker')\n\n# Amazon Augment AI (A2I) client\na2i = boto3.client('sagemaker-a2i-runtime')\n\n# Flow definition name - this value is unique per account and region. You can also provide your own value here.\nflowDefinitionName = 'fd-pdf-docs-' + timestamp\n\n# Task UI name - this value is unique per account and region. You can also provide your own value here.\ntaskUIName = 'ui-pdf-docs-' + timestamp\n\n# Flow definition outputs\nOUTPUT_PATH = f's3://' + bucket + '/' + a2i_prefix",
"_____no_output_____"
],
[
"# Step 4 - Cell 2\n# We will use the tabular liquid template and customize it for our requirements\n\ntemplate = r\"\"\"\n<script src=\"https://assets.crowd.aws/crowd-html-elements.js\"></script>\n\n<style>\n table, tr, th, td {\n border: 1px solid black;\n border-collapse: collapse;\n padding: 5px;\n }\n</style>\n\n<crowd-form>\n <div>\n <h1>Instructions</h1>\n <p>Please review the SEC registration form inputs, and make corrections where appropriate. </p>\n </div>\n <div>\n <h3>Original Registration Form - Page 1</h3>\n <classification-target>\n <img style=\"width: 70%; max-height: 40%; margin-bottom: 10px\" src=\"{{ task.input.image | grant_read_access }}\"/> \n </classification-target> \n </div>\n <br>\n <h1> Please enter your modifications below </h1>\n <table>\n <tr>\n <th>Line Nr</th>\n <th style=\"width:500px\">Detected Text</th>\n <th style=\"width:500px\">Confidence</th>\n <th>Change Required</th>\n <th style=\"width:500px\">Corrected Text</th>\n <th>Comments</th>\n </tr>\n {% for pair in task.input.document %}\n\n <tr>\n <td>{{ pair.linenr }}</td>\n <td><crowd-text-area name=\"predicteddoc{{ pair.linenr }}\" value=\"{{ pair.detectedtext }}\"></crowd-text-area></td>\n <td><crowd-text-area name=\"confidence{{ pair.linenr }}\" value=\"{{ pair.confidence }}\"></crowd-text-area></td>\n <td>\n <p>\n <input type=\"radio\" id=\"agree{{ pair.linenr }}\" name=\"rating{{ pair.linenr }}\" value=\"agree\" required>\n <label for=\"agree{{ pair.linenr }}\">Correct</label>\n </p>\n <p>\n <input type=\"radio\" id=\"disagree{{ pair.linenr }}\" name=\"rating{{ pair.linenr }}\" value=\"disagree\" required>\n <label for=\"disagree{{ pair.linenr }}\">Incorrect</label>\n </p>\n </td>\n <td>\n <p>\n <input style=\"width:500px\" rows=\"3\" type=\"text\" name=\"correcteddoc{{ pair.linenr }}\" value=\"{{pair.detectedtext}}\" required/>\n </p>\n </td>\n <td>\n <p>\n <input style=\"width:500px\" rows=\"3\" type=\"text\" name=\"comments{{ pair.linenr }}\" placeholder=\"Explain why you changed the value\"/>\n </p>\n </td>\n </tr>\n\n {% endfor %}\n </table>\n <br>\n <br>\n</crowd-form>\n\"\"\"",
"_____no_output_____"
],
[
"# Step 4 - Cell 2\n# Define the method to initialize and create the Task UI\ndef create_task_ui():\n response = sagemaker_client.create_human_task_ui(\n HumanTaskUiName=taskUIName,\n UiTemplate={'Content': template})\n return response",
"_____no_output_____"
],
[
"# Step 4 - Cell 3\n# Execute the method to create the Task UI\nhumanTaskUiResponse = create_task_ui()\nhumanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']\nprint(humanTaskUiArn)",
"_____no_output_____"
]
],
[
[
"### Step 5 - Create the Amazon A2I flow definition\n\nIn this section, we're going to create a flow definition definition. Flow Definitions allow us to specify:\n\n* The workforce that your tasks will be sent to.\n* The instructions that your workforce will receive. This is called a worker task template.\n* Where your output data will be stored.\n\nThis notebook is going to use the API, but you can optionally create this workflow definition in the console as well.\nFor more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.",
"_____no_output_____"
]
],
[
[
"# Step 5 - Cell 1\ncreate_workflow_definition_response = sagemaker_client.create_flow_definition(\n FlowDefinitionName=flowDefinitionName,\n RoleArn=role,\n HumanLoopConfig= {\n \"WorkteamArn\": WORKTEAM_ARN,\n \"HumanTaskUiArn\": humanTaskUiArn,\n \"TaskCount\": 1,\n \"TaskDescription\": \"Review the contents and correct values as indicated\",\n \"TaskTitle\": \"SEC Registration Form Review\"\n },\n OutputConfig={\n \"S3OutputPath\" : OUTPUT_PATH\n }\n )\nflowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use",
"_____no_output_____"
],
[
"# Step 5 - Cell 2\nfor x in range(60):\n describeFlowDefinitionResponse = sagemaker_client.describe_flow_definition(FlowDefinitionName=flowDefinitionName)\n print(describeFlowDefinitionResponse['FlowDefinitionStatus'])\n if (describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active'):\n print(\"Flow Definition is active\")\n break\n time.sleep(2)",
"_____no_output_____"
]
],
[
[
"### Step 6 - Activate the Amazon A2I flow definition",
"_____no_output_____"
]
],
[
[
"# Step 6 - Cell 1\n# We will display the PDF first page for reference on what is being edited by the human loop\nreg_images = {}\nfor image in os.listdir():\n if image.endswith('png'):\n reg_images[image.split('_')[0]] = S3Uploader.upload(image, 's3://{}/{}'.format(bucket, prefix))",
"_____no_output_____"
],
[
"# Step 6 - Cell 2\n# Activate human loops for all the three documents. These will be delivered for review sequentially in the Task UI.\n# We will also send only low confidence detections to A2I so the human team can update the text for what is should actually be\nhumanLoopName = {}\ndocs = df_newdoc.DocName.unique()\n# confidence threshold\nconfidence_threshold = 95\nfor doc in docs:\n doc_list = []\n humanLoopName[doc] = str(uuid.uuid4())\n for idx, line in df_newdoc.iterrows():\n # Send only those lines whose confidence score is less than threshold\n if line['DocName'] == doc and line['Confidence'] <= confidence_threshold:\n doc_list.append({'linenr': line['LineNr'], 'detectedtext': line['DetectedText'], 'confidence':line['Confidence']})\n ip_content = {\"document\": doc_list,\n 'image': reg_images[doc.split('.')[0]]\n } \n start_loop_response = a2i.start_human_loop(\n HumanLoopName=humanLoopName[doc],\n FlowDefinitionArn=flowDefinitionArn,\n HumanLoopInput={\n \"InputContent\": json.dumps(ip_content)\n }\n )\n",
"_____no_output_____"
],
[
"# Step 6 - Cell 3\ncompleted_human_loops = []\nfor doc in humanLoopName:\n resp = a2i.describe_human_loop(HumanLoopName=humanLoopName[doc])\n print(f'HumanLoop Name: {humanLoopName[doc]}')\n print(f'HumanLoop Status: {resp[\"HumanLoopStatus\"]}')\n print(f'HumanLoop Output Destination: {resp[\"HumanLoopOutput\"]}')\n print('\\n')",
"_____no_output_____"
],
[
"# Step 6 - Cell 4\nworkteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]\nprint(\"Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!\")\nprint('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])",
"_____no_output_____"
],
[
"# Step 6 - Cell 5\ncompleted_human_loops = []\nfor doc in humanLoopName:\n resp = a2i.describe_human_loop(HumanLoopName=humanLoopName[doc])\n print(f'HumanLoop Name: {humanLoopName[doc]}')\n print(f'HumanLoop Status: {resp[\"HumanLoopStatus\"]}')\n print(f'HumanLoop Output Destination: {resp[\"HumanLoopOutput\"]}')\n print('\\n')\n if resp[\"HumanLoopStatus\"] == \"Completed\":\n completed_human_loops.append(resp)",
"_____no_output_____"
],
[
"# Step 6 - Cell 7\nimport re\nimport pandas as pd\n\nfor resp in completed_human_loops:\n splitted_string = re.split('s3://' + bucket + '/', resp['HumanLoopOutput']['OutputS3Uri'])\n output_bucket_key = splitted_string[1]\n response = s3_client.get_object(Bucket=bucket, Key=output_bucket_key)\n content = response[\"Body\"].read()\n json_output = json.loads(content)\n loop_name = json_output['humanLoopName']\n for i in json_output['humanAnswers']:\n x = i['answerContent']\n docname = list(humanLoopName.keys())[list(humanLoopName.values()).index(loop_name)]\n for i, r in df_newdoc.iterrows():\n if r['DocName'] == docname:\n df_newdoc.at[i,'CorrectedText'] = x['correcteddoc'+str(r['LineNr'])] if 'correcteddoc'+str(r['LineNr']) in x else ''\n df_newdoc.at[i,'Comments'] = x['comments'+str(r['LineNr'])] if 'comments'+str(r['LineNr']) in x else ''",
"_____no_output_____"
],
[
"# Step 6 - Cell 8\ndf_newdoc.head(30)",
"_____no_output_____"
]
],
[
[
"### Step 7 - Save changes to Amazon DynamoDB",
"_____no_output_____"
]
],
[
[
"# Step 7 - Cell 1\n# Create the Amazon DynamoDB table - note that a new DynamoDB table is created everytime you execute this cell\n\n# Get the service resource.\ndynamodb = boto3.resource('dynamodb')\ntablename = \"SEC-registration-\"+str(uuid.uuid4())\n\n# Create the DynamoDB table.\ntable = dynamodb.create_table(\n TableName=tablename,\n KeySchema=[\n {\n 'AttributeName': 'row_nr',\n 'KeyType': 'HASH'\n }\n ],\n AttributeDefinitions=[\n {\n 'AttributeName': 'row_nr',\n 'AttributeType': 'N'\n },\n ],\n ProvisionedThroughput={\n 'ReadCapacityUnits': 5,\n 'WriteCapacityUnits': 5\n }\n)\n\n# Wait until the table exists, this will take a minute or so\ntable.meta.client.get_waiter('table_exists').wait(TableName=tablename)\n\n# Print out some data about the table.\nprint(\"Table successfully created\")",
"_____no_output_____"
],
[
"# Step 7 - Cell 2\n# Load the Amazon DynamoDB table\n\nfor idx, row in df_newdoc.iterrows():\n table.put_item(\n Item={\n 'row_nr': idx,\n 'doc_name': str(row['DocName']) ,\n 'line_nr': str(row['LineNr']),\n 'detected_line': str(row['DetectedText']),\n 'confidence': str(row['Confidence']), \n 'corrected_line': str(row['CorrectedText']),\n 'change_comments': str(row['Comments']) \n }\n )\n\nprint(\"Items were successfully created in DynamoDB table\")",
"_____no_output_____"
]
],
[
[
"### End of Notebook\nPlease go back to Chapter 16 - Improve accuracy of pdf batch processing with Amazon Textract and Amazon A2I from the Natural Language Processing with AWS AI Services book to proceed further. ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a7ef3d622b0a742452d391ee5f15f08790fec79
| 381,142 |
ipynb
|
Jupyter Notebook
|
examples/correlation.ipynb
|
zalandoresearch/probrnn
|
3f3e792189cce72450d1de287fbd977816f95bdd
|
[
"MIT"
] | 44 |
2017-11-21T10:00:18.000Z
|
2020-07-23T11:07:38.000Z
|
examples/correlation.ipynb
|
zalandoresearch/probrnn
|
3f3e792189cce72450d1de287fbd977816f95bdd
|
[
"MIT"
] | null | null | null |
examples/correlation.ipynb
|
zalandoresearch/probrnn
|
3f3e792189cce72450d1de287fbd977816f95bdd
|
[
"MIT"
] | 17 |
2018-06-18T10:54:50.000Z
|
2020-06-05T03:44:46.000Z
| 905.325416 | 50,666 | 0.937435 |
[
[
[
"import modules and get command-line parameters if running as script",
"_____no_output_____"
]
],
[
[
"from probrnn import models, data, inference\nimport numpy as np\nimport json \nfrom matplotlib import pyplot as plt\nfrom IPython.display import clear_output",
"_____no_output_____"
]
],
[
[
"parameters for the model and training",
"_____no_output_____"
]
],
[
[
"params = \\\n {\n \"N_ITERATIONS\": 10 ** 5,\n \"VALIDATE_EACH\": 100,\n \"SAVE_EACH\": 1000,\n \"LOG_EVERY\": 50,\n \"LEARNING_RATE\": 0.0001,\n \"N_HIDDEN\": 256,\n \"N_BINS\": 50,\n \"BATCH_SIZE\": 50,\n }",
"_____no_output_____"
]
],
[
[
"Get some correlated toy data",
"_____no_output_____"
]
],
[
[
"datastruct = data.CoupledToyData(n_bins=params[\"N_BINS\"])\n\nx, _ = datastruct._gen(1).next()\nx = datastruct.get_readable(x)\n\nplt.figure()\nplt.plot(x)\nplt.show()",
"_____no_output_____"
]
],
[
[
"do some training",
"_____no_output_____"
]
],
[
[
"model = models.NADE(datastruct, params=params)\n\ntraining = models.Training(\n model, \n \"../models/toy_nade_bivariate\", \n \"../models/toy_nade_bivariate_training.json\",\n )\n\ndef print_function(trer, i, batch):\n if i % 10 == 0:\n clear_output()\n print \"loss: {}; iteration {}\".format(np.mean(trer[-100:]), i)\n\ntraining.train(print_function)",
"loss: 3.69458389282; iteration 50\n"
]
],
[
[
"visualize the training errors",
"_____no_output_____"
]
],
[
[
"with open(\"../models/toy_nade_bivariate_training.json\") as f:\n errs = json.load(f)\n\nplt.figure()\nplt.plot(np.array(errs[\"training_error\"])[:, 0], \n np.array(errs[\"training_error\"])[:, 1])\nplt.plot(np.array(errs[\"validation_error\"])[:, 0], \n np.array(errs[\"validation_error\"])[:, 1], 'r')\nplt.legend([\"training\", \"validation\"])\nplt.show()",
"_____no_output_____"
]
],
[
[
"plot some weight traces",
"_____no_output_____"
]
],
[
[
"for x in errs.keys():\n if x != \"training_error\" and x != \"validation_error\" and \"train\" not in x:\n plt.figure()\n for key in errs[x].keys():\n if key == \"mean\":\n plt.plot(errs[x][key], 'b', linewidth=5.0)\n elif key == \"random\":\n plt.plot(errs[x][key], 'c')\n else:\n plt.plot(errs[x][key], 'b', linestyle='--')\n\n plt.title(\"variable: {}\".format(x))\nplt.show()",
"_____no_output_____"
]
],
[
[
"load trained model",
"_____no_output_____"
]
],
[
[
"load_name = \"../models/toy_nade_bivariate_12000\"\nmodel = models.NADE(datastruct, fn=load_name)\nprint json.dumps(model.params, indent=4)",
"restoring...\nINFO:tensorflow:Restoring parameters from ../models/toy_nade_bivariate_12000.ckpt\n{\n \"N_BINS\": 50, \n \"LEARNING_RATE\": 0.0001, \n \"BATCH_SIZE\": 50, \n \"SAVE_EACH\": 1000, \n \"N_ITERATIONS\": 1000000, \n \"VALIDATE_EACH\": 100, \n \"LOG_EVERY\": 50, \n \"N_HIDDEN\": 256\n}\n"
]
],
[
[
"try some sampling",
"_____no_output_____"
]
],
[
[
"x = model.sample(200)\n\nplt.plot(x[::2])\nplt.plot(x[1::2])\nplt.show()",
"_____no_output_____"
]
],
[
[
"try some imputation",
"_____no_output_____"
]
],
[
[
"x = datastruct.simulate()\n\nx_missing = np.zeros(x.shape[0] * 2)\nx_missing[::2] = x[:, 0]\nx_missing[1::2] = np.nan\n\nestimate = inference.NaiveSIS(model, x_missing, 1000, binned=False, quiet=False).estimate()\n\nplt.figure()\nplt.plot(estimate[::2])\nplt.plot(estimate[1::2])\nplt.show()",
"100%|██████████| 200/200 [00:09<00:00, 19.23it/s]\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a7ef61ba356aa5102e0b7dfa879b159cdb17f39
| 14,507 |
ipynb
|
Jupyter Notebook
|
DisplayImagesInJupyterNoteBook/ImageInJNB.ipynb
|
go2sree/Jan2019
|
832fd05abd0804437b051b8d3199258403c539cc
|
[
"MIT"
] | null | null | null |
DisplayImagesInJupyterNoteBook/ImageInJNB.ipynb
|
go2sree/Jan2019
|
832fd05abd0804437b051b8d3199258403c539cc
|
[
"MIT"
] | null | null | null |
DisplayImagesInJupyterNoteBook/ImageInJNB.ipynb
|
go2sree/Jan2019
|
832fd05abd0804437b051b8d3199258403c539cc
|
[
"MIT"
] | null | null | null | 97.362416 | 11,515 | 0.869304 |
[
[
[
"# ",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(\"img/Jupiter\")",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(\"img/Jupiter.jpg\")",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nfrom IPython.core.display import HTML \nImage(url= \"http://my_site.com/my_picture.jpg\")",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\n# from IPython.core.display import HTML \nImage(url= \"http://www.thetimenow.com/img/astronomy/all/solarsystemwiki-Jupiter-compared-with-other-planets1.png\")",
"_____no_output_____"
]
],
[
[
"<img src=\"img/Jupiter.jpg\" width= 200, height=200> # use this in markdown",
"_____no_output_____"
]
],
[
[
"<img src=\"img/Jupiter.jpg\" width= 200, height=200>",
"_____no_output_____"
]
],
[
[
"%%html\n<img src=\"img/Jupiter.jpg\" width= 200, height=200> #Code",
"_____no_output_____"
]
],
[
[
"%%html\n<img src=\"img/Jupiter.jpg\" width= 200, height=200> ",
"_____no_output_____"
]
]
] |
[
"raw",
"markdown",
"raw",
"code",
"markdown",
"code",
"raw",
"markdown",
"raw",
"code"
] |
[
[
"raw"
],
[
"markdown"
],
[
"raw"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"raw"
],
[
"markdown"
],
[
"raw"
],
[
"code"
]
] |
4a7ef920688314c22bccb000dd84454b4acfedbd
| 263,101 |
ipynb
|
Jupyter Notebook
|
analysis/analyze_overfit.ipynb
|
DrugowitschLab/motion-structure-identification
|
908f084b36c7387daf0cbfe75f16bab70cf96db9
|
[
"MIT"
] | null | null | null |
analysis/analyze_overfit.ipynb
|
DrugowitschLab/motion-structure-identification
|
908f084b36c7387daf0cbfe75f16bab70cf96db9
|
[
"MIT"
] | null | null | null |
analysis/analyze_overfit.ipynb
|
DrugowitschLab/motion-structure-identification
|
908f084b36c7387daf0cbfe75f16bab70cf96db9
|
[
"MIT"
] | null | null | null | 1,105.466387 | 22,619 | 0.954337 |
[
[
[
"from analysis.data_exp1 import DataExp1\nimport analysis.models as models\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfor pid in DataExp1.pids:\n print(pid)\n data = DataExp1(pid)\n _, ax = plt.subplots(1, 3)\n data.plot_confusion_matrix(ax[0])\n model = data.build_model(models.ChoiceModel4Param)\n res = model.fit()\n model.plot_confusion_matrix(model.predict(res), ax[1])\n print(res.log_likelihood)\n cv_pred = data.cross_validate(models.ChoiceModel4Param)\n model.plot_confusion_matrix(cv_pred, ax[2])\n cv_pred['choice'] = data.df['choice']\n print(np.log(cv_pred.apply(lambda row: row[row['choice']], axis=1)).sum())\n plt.show()\n print()\n",
"1\n-134.15953429826516\n-140.07478130121942\n\n2\n-178.28060195147813\n-182.74164899310506\n\n3\n-154.75538678945418\n-159.37350756628325\n\n4\n-203.52670123813732\n-207.95776441422927\n\n5\n-165.74267660334178\n-170.7555834385773\n\n6\n-184.64416875615373\n-188.98442210610466\n\n7\n-122.14211169027614\n-127.90321455890141\n\n8\n-209.1045897617091\n-213.66927625426968\n\n9\n-154.06618856751385\n-160.6499215151079\n\n10\n-199.8743926999934\n-208.08358600112\n\n11\n-154.44918237463554\n-158.77523213201368\n\n12\n-177.35785680444897\n-182.50529880054324\n\n"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a7f082b29c8a1477c2259219df68417e2507186
| 159,043 |
ipynb
|
Jupyter Notebook
|
analyses/mrna-electroporation/GFP mRNA electroporation.ipynb
|
hammerlab/t-cell-guide
|
3b95f3abebed6e6042b353fb58639729fe8823d3
|
[
"Apache-2.0"
] | 14 |
2018-06-13T16:55:42.000Z
|
2022-02-25T00:03:59.000Z
|
analyses/mrna-electroporation/GFP mRNA electroporation.ipynb
|
hammerlab/t-cell-guide
|
3b95f3abebed6e6042b353fb58639729fe8823d3
|
[
"Apache-2.0"
] | 9 |
2018-06-10T11:30:21.000Z
|
2019-03-14T19:40:02.000Z
|
analyses/mrna-electroporation/GFP mRNA electroporation.ipynb
|
hammerlab/t-cell-guide
|
3b95f3abebed6e6042b353fb58639729fe8823d3
|
[
"Apache-2.0"
] | null | null | null | 1,916.180723 | 157,288 | 0.951931 |
[
[
[
"library('readr')\nlibrary('magrittr')\nlibrary('dplyr')\nlibrary('tidyr')\nlibrary('ggplot2')",
"_____no_output_____"
],
[
"tsubsets <- read_csv(\n \"data.csv\",\n col_types=cols(\n `Donor`=col_factor(levels=paste(\"Donor\", c(10, 12, 14, 15))),\n `Day`=col_factor(levels=1:6),\n `Condition`=col_factor(levels=c(\"No RNA\", \"1600 ng\", \"800 ng\", \"400 ng\")),\n `Statistic`=col_factor(levels=c(\"GFP+\")),\n `Value`=col_double()\n )\n)",
"_____no_output_____"
],
[
"tsubsets %>%\n ggplot(aes(x=`Day`, y=`Value`, group=`Condition`, color=`Condition`)) + \n geom_point() + \n geom_line() +\n ylab('GFP Positive Percent (%)') +\n facet_wrap(~Donor, ncol=4)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4a7f09dd10e18adb31fa0703f81b12843820c6c9
| 131,569 |
ipynb
|
Jupyter Notebook
|
merge_video/preprocessing_videos.ipynb
|
gjustin40/workspace
|
5391abf33db3b4b861b8fa74d8da41117bc011fe
|
[
"MIT"
] | null | null | null |
merge_video/preprocessing_videos.ipynb
|
gjustin40/workspace
|
5391abf33db3b4b861b8fa74d8da41117bc011fe
|
[
"MIT"
] | null | null | null |
merge_video/preprocessing_videos.ipynb
|
gjustin40/workspace
|
5391abf33db3b4b861b8fa74d8da41117bc011fe
|
[
"MIT"
] | null | null | null | 91.877793 | 1,398 | 0.731274 |
[
[
[
"import cv2\nimport glob\nimport os\nimport numpy as np\nfrom moviepy.editor import VideoFileClip, concatenate_videoclips\n\nimport pandas as pd\n\nfrom tqdm import tqdm\nimport time",
"_____no_output_____"
],
[
"weather_list = ['foggy', 'sunny', 'rainy']\nbase_url = 'T:/public_data/video_classification/used_video/'\nfoggy_list = glob.glob(base_url + 'foggy/*')\nsunny_list = glob.glob(base_url + 'sunny/*')\nrainy_list = glob.glob(base_url + 'rainy/*')\n\nprint('foggy num', len(foggy_list))\nprint('sunny num', len(sunny_list))\nprint('rainy num', len(rainy_list))",
"foggy num 2208\nsunny num 2292\nrainy num 2321\n"
],
[
"filenames = foggy_list\npresets = list(map(lambda x: x[-7:-4], filenames))\ndf_foggy = pd.DataFrame({'filename': filenames, 'preset': presets})\ndf_foggy['weather'] = 'foggy'\n\ndf_foggy",
"_____no_output_____"
],
[
"preset_list = ['p01', 'p02', 'p03', 'p04', 'p05', 'p06', 'p07', 'p08']\nfor preset in preset_list:\n df_preset = df_foggy.loc[df_foggy['preset'] == preset]\n filenames = sorted(df_preset.filename.tolist(), key=lambda x: os.path.basename(x).split('_')[1:3])\n \n clip_list = []\n for filename in tqdm(filenames):\n clip_list.append(VideoFileClip(filename).subclip(2, -2))\n \n final_clip = concatenate_videoclips(clip_list)\n final_clip.write_videofile(f'T:/public_data/video_classification/used_video/foggy_merge/{preset}_foggy_merge.mp4')",
" 4%|███ | 4/105 [00:11<04:49, 2.87s/it]\n"
],
[
"final_clip = concatenate_videoclips(clip_clean)\nfinal_clip.write_videofile('T:/public_data/video_classification/used_video/foggy_merge/foggy_01_merge.mp4')",
"Moviepy - Building video T:/public_data/video_classification/used_video/foggy_merge/foggy_01_merge.mp4.\nMoviepy - Writing video T:/public_data/video_classification/used_video/foggy_merge/foggy_01_merge.mp4\n\n"
],
[
"sorted(list(df.filename), key=lambda x: os.path.basename(x).split('_')[0])",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7f0fb293a821d504d03c54d11d5246e7f907df
| 29,607 |
ipynb
|
Jupyter Notebook
|
notebooks/arc_solver.ipynb
|
aysjajohnson/ARC
|
3650acfa040a2249ea8f1acebf0dc3bf6d21bf6f
|
[
"Apache-2.0"
] | null | null | null |
notebooks/arc_solver.ipynb
|
aysjajohnson/ARC
|
3650acfa040a2249ea8f1acebf0dc3bf6d21bf6f
|
[
"Apache-2.0"
] | 32 |
2020-04-21T18:43:36.000Z
|
2020-07-27T18:19:34.000Z
|
notebooks/arc_solver.ipynb
|
aysjajohnson/ARC
|
3650acfa040a2249ea8f1acebf0dc3bf6d21bf6f
|
[
"Apache-2.0"
] | null | null | null | 64.785558 | 3,484 | 0.644273 |
[
[
[
"import numpy as np\nimport pandas as pd\n\nimport os\nimport json\nimport time\n\nfrom IPython.display import clear_output\nfrom IPython.display import HTML\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import animation\nfrom matplotlib import colors\nimport numpy as np\n\nfrom skimage.segmentation import flood, flood_fill\n\n%matplotlib inline",
"_____no_output_____"
],
[
"# define solver\nclass ARCSolver:\n def __init__(self, task_filename):\n # load task and extract input and output pairs\n self.task_filename = task_filename\n self.task = self.load_task(task_filename)\n self.train_inputs, self.train_outputs, self.test_inputs, self.test_outputs = \\\n self.extract_io_pairs()\n self.test_pred = np.zeros((5, 5))\n self.test_pred_height, self.test_pred_width = self.test_pred.shape\n \n self.solved = False # have we solved the task yet?\n self.selected_colour = 0\n self.clipboard = None\n self.description = ''\n \n # variables for plotting\n self.cmap = colors.ListedColormap(\n ['#000000', '#0074D9','#FF4136','#2ECC40','#FFDC00',\n '#AAAAAA', '#F012BE', '#FF851B', '#7FDBFF', '#870C25'])\n self.colour_to_num = {'black': 0, 'blue': 1, 'red': 2, 'green': 3, 'yellow': 4,\n 'grey': 5, 'magenta': 6, 'orange': 7, 'light_blue': 8, \n 'maroon': 9}\n self.num_to_colour = {0: 'black', 1: 'blue', 2: 'red', 3: 'green', 4: 'yellow',\n 5: 'grey', 6: 'magneta', 7: 'orange', 8: 'light_blue',\n 9: 'maroon'}\n\n \n def load_task(self, task_filename):\n with open(task_filename, 'r') as f:\n task = json.load(f) \n return task\n\n def plot_task(self):\n \"\"\"\n Plots the first train and test pairs of a specified task,\n using same color scheme as the ARC app\n \"\"\"\n norm = colors.Normalize(vmin=0, vmax=9)\n n_train = len(self.task['train'])\n fig, axs = plt.subplots(n_train+1, 2, figsize=(10, 10))\n for i in range(n_train):\n axs[i, 0].imshow(self.task['train'][i]['input'], cmap=self.cmap, norm=norm)\n axs[i, 0].axis('off')\n axs[i, 0].set_title('Train Input')\n axs[i, 1].imshow(self.task['train'][i]['output'], cmap=self.cmap, norm=norm)\n axs[i, 1].axis('off')\n axs[i, 1].set_title('Train Output')\n axs[n_train, 0].imshow(self.task['test'][0]['input'], cmap=self.cmap, norm=norm)\n axs[n_train, 0].axis('off')\n axs[n_train, 0].set_title('Test Input')\n axs[n_train, 1].imshow(self.task['test'][0]['output'], cmap=self.cmap, norm=norm)\n axs[n_train, 1].axis('off')\n axs[n_train, 1].set_title('Test Output')\n plt.tight_layout()\n plt.show()\n \n def plot_grid(self, grid):\n \"\"\"\n Plots a single grid\n \"\"\"\n #plt.clf()\n\n #plt.draw()\n #display(plt)\n \n\n def plot_grids(self, grids):\n \"\"\"\n Plots a list of grids\n \"\"\"\n n_grids = len(grids)\n norm = colors.Normalize(vmin=0, vmax=9)\n fig, axs = plt.subplots(1, n_grids, figsize=(6, 6), squeeze=False)\n for i in range(n_grids):\n axs[0, i].imshow(grids[i], cmap=self.cmap, norm=norm)\n axs[0, i].axis('off')\n plt.tight_layout()\n plt.show()\n \n def extract_io_pairs(self):\n train = self.task['train']\n test = self.task['test']\n n_train = len(train)\n n_test = len(test)\n\n train_inputs = np.array([train[i]['input'] for i in range(n_train)])\n train_outputs = np.array([train[i]['output'] for i in range(n_train)])\n test_inputs = np.array([test[i]['input'] for i in range(n_test)])\n test_outputs = np.array([test[i]['output'] for i in range(n_test)])\n\n return train_inputs, train_outputs, test_inputs, test_outputs\n \n def copy_from_input(self):\n # copy over the first test input\n self.test_pred = self.test_inputs[0].copy()\n self.test_pred_height, self.test_pred_width = self.test_inputs[0].shape\n self.description = 'copy from input'\n \n def reset(self):\n # resets grid to all zeros with size of the grid based on current settings\n self.test_pred = np.zeros((self.test_pred_height, self.test_pred_width))\n self.description = 'reset'\n \n def resize(self):\n # resizes the grid\n prev_test_pred = self.test_pred.copy()\n 
prev_test_pred_width = self.test_pred_width\n prev_test_pred_height = self.test_pred_height\n\n # sample new grid size\n new_test_pred_width = np.random.choice(np.arange(1, 5))\n new_test_pred_height = np.random.choice(np.arange(1, 5))\n new_test_pred = np.zeros((new_test_pred_height, new_test_pred_width))\n \n # copy over values\n for i in range(min(prev_test_pred_height, new_test_pred_height)):\n for j in range(min(prev_test_pred_width, new_test_pred_width)):\n new_test_pred[i, j] = prev_test_pred[i, j]\n \n self.test_pred = new_test_pred\n self.test_pred_width = new_test_pred_width\n self.test_pred_height = new_test_pred_height\n self.description = f'resize: ({new_test_pred_height}, {new_test_pred_width})'\n \n def change_colour(self):\n self.selected_colour = np.random.choice(np.arange(10))\n self.description = f'change colour: {self.num_to_colour[self.selected_colour]}'\n \n def edit(self):\n # select a random location\n x = np.random.choice(np.arange(self.test_pred_width))\n y = np.random.choice(np.arange(self.test_pred_height))\n self.test_pred[y, x] = self.selected_colour\n self.description = f'edit: ({y}, {x})'\n \n def edit_rectangle(self):\n # selects a randomly selected region and changes the colour of all of the cells\n x_start = np.random.choice(np.arange(self.test_pred_width))\n x_end = np.random.choice(np.arange(x_start+1, self.test_pred_width+1))\n y_start = np.random.choice(np.arange(self.test_pred_height))\n y_end = np.random.choice(np.arange(y_start+1, self.test_pred_height+1))\n \n # select a new colour\n self.selected_colour = np.random.choice(np.arange(10))\n self.test_pred[y_start:y_end, x_start:x_end] = self.selected_colour\n self.description = f'edit rectangle from ({y_start}:{y_end}, {x_start}:{x_end}) to {self.selected_colour}'\n \n def copy(self):\n # copies a randomly selected region\n x_start = np.random.choice(np.arange(self.test_pred_width))\n x_end = np.random.choice(np.arange(x_start+1, self.test_pred_width+1))\n y_start = np.random.choice(np.arange(self.test_pred_height))\n y_end = np.random.choice(np.arange(y_start+1, self.test_pred_height+1))\n \n self.clipboard = self.test_pred[y_start:y_end, x_start:x_end].copy()\n self.description = f'copy from ({y_start}:{y_end}, {x_start}:{x_end})'\n #print(f'clipboard: {self.clipboard}')\n \n def paste(self):\n # pastes clipboard value into randomly selected location\n clipboard_height, clipboard_width = self.clipboard.shape\n x_start = np.random.choice(np.arange(self.test_pred_width))\n x_width = min(clipboard_width, self.test_pred_width - x_start) \n y_start = np.random.choice(np.arange(self.test_pred_height))\n y_height = min(clipboard_height, self.test_pred_height - y_start)\n \n self.test_pred[y_start:y_start+y_height, x_start:x_start+x_width] = self.clipboard[:y_height, :x_width] \n self.description = f'pasting from ({y_start}:{y_start+y_height}, {x_start}:{x_start+x_width})'\n \n def flood_fill(self):\n # flood fill at a random location\n x = np.random.choice(self.test_pred_width)\n y = np.random.choice(self.test_pred_height)\n self.test_pred = flood_fill(self.test_pred, (y, x), \n self.selected_colour)\n self.description = f'flood fill from: ({y}, {x})'\n \n def solve(self):\n fig = plt.figure(figsize=(6, 6))\n plt.ion()\n plt.show()\n norm = colors.Normalize(vmin=0, vmax=9)\n \n while not self.solved:\n clear_output()\n # randomly select available function\n if np.random.choice([0, 1]) == 0:\n self.change_colour()\n else:\n self.edit()\n\n plt.imshow(self.test_pred, cmap=self.cmap, norm=norm)\n 
plt.axis('off')\n plt.tight_layout()\n plt.pause(1)\n \n # check accuracy\n \n ",
"_____no_output_____"
],
[
"training_path = \"/Users/aysjajohnson/Desktop/ARC-master/data/training/\"\nsolver = ARCSolver(task_filename=os.path.join(training_path, '6e02f1e3.json'))\nsolver.plot_grids(solver.train_inputs)\nsolver.plot_grids(solver.train_outputs)",
"_____no_output_____"
],
[
"solver = ARCSolver(task_filename=os.path.join(training_path, '6e02f1e3.json'))\n\nfig = plt.figure(figsize=(5, 5))\nax = plt.axes(xlim=(-.5, 4.5), ylim=(-0.5, 4.5))\nnorm = colors.Normalize(vmin=0, vmax=9)\nim = plt.imshow(solver.test_pred, cmap=solver.cmap, norm=norm)\nplt.gca().invert_yaxis()\nplt.xticks([])\nplt.yticks([])\n\n# initialization function: plot the background of each frame\ndef init():\n # TODO: modify initialization\n im.set_data(solver.test_pred)\n return [im]\n\n# animation function. This is called sequentially\ndef animate(i):\n # TODO: replace the two function calls below with a generic next() function\n # or something like that\n r = np.random.choice([0, 1, 2, 3, 4, 5, 6, 7, 8])\n if r == 0:\n solver.change_colour()\n elif r == 1:\n solver.edit()\n elif r == 2:\n solver.resize()\n elif r == 3:\n solver.reset()\n elif r == 4:\n solver.flood_fill()\n elif r == 5:\n solver.copy()\n elif r == 6:\n if solver.clipboard is not None:\n solver.paste()\n elif r == 7:\n solver.copy_from_input()\n elif r == 8:\n solver.edit_rectangle()\n \n #print(solver.description)\n #print(solver.test_pred.shape)\n #plt.gcf().set_size_inches(solver.test_pred_height, solver.test_pred_width, forward=True)\n plt.rcParams[\"figure.figsize\"] = (solver.test_pred_height, solver.test_pred_width)\n\n im.set_data(solver.test_pred)\n ax.set_title(solver.description)\n return [im]\n\n# call the animator. blit=True means only re-draw the parts that have changed.\nanim = animation.FuncAnimation(fig, animate, init_func=init,\n frames=100, interval=200, blit=False)\n\n# save the animation as an mp4. This requires ffmpeg or mencoder to be\n# installed. The extra_args ensure that the x264 codec is used, so that\n# the video can be embedded in html5. You may need to adjust this for\n# your system: for more information, see\n# http://matplotlib.sourceforge.net/api/animation_api.html\nanim.save('basic_animation.mp4', fps=5, extra_args=['-vcodec', 'libx264'])\n\nHTML(anim.to_html5_video())",
"MovieWriter ffmpeg unavailable; trying to use <class 'matplotlib.animation.PillowWriter'> instead.\n"
],
[
"np.zeros((3, 2)).shape",
"_____no_output_____"
],
[
"for i in range(1):\n print(i)",
"0\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7f1bcc39b2b77c5969f5313b398f8e8e30b6c5
| 120,196 |
ipynb
|
Jupyter Notebook
|
intro-to-pytorch/Part 5 - Inference and Validation (Solution).ipynb
|
CangelosiQ/deep-learning-v2-pytorch
|
2d9a38ae2f6f2bc340f9dc5997144e933776915c
|
[
"MIT"
] | 1 |
2021-03-26T00:11:39.000Z
|
2021-03-26T00:11:39.000Z
|
intro-to-pytorch/Part 5 - Inference and Validation (Solution).ipynb
|
CangelosiQ/deep-learning-v2-pytorch
|
2d9a38ae2f6f2bc340f9dc5997144e933776915c
|
[
"MIT"
] | null | null | null |
intro-to-pytorch/Part 5 - Inference and Validation (Solution).ipynb
|
CangelosiQ/deep-learning-v2-pytorch
|
2d9a38ae2f6f2bc340f9dc5997144e933776915c
|
[
"MIT"
] | null | null | null | 216.960289 | 51,824 | 0.893715 |
[
[
[
"# Inference and Validation\n\nNow that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch. \n\nAs usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:\n\n```python\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\n```\n\nThe test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torchvision import datasets, transforms\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n# Download and load the training data\ntrainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Download and load the test data\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)",
"_____no_output_____"
]
],
[
[
"Here I'll create a model like normal, using the same one from my solution for part 4.",
"_____no_output_____"
]
],
[
[
"from torch import nn, optim\nimport torch.nn.functional as F\n\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = F.relu(self.fc3(x))\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x",
"_____no_output_____"
]
],
[
[
"The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.",
"_____no_output_____"
]
],
[
[
"model = Classifier()\n\nimages, labels = next(iter(testloader))\n# Get the class probabilities\nps = torch.exp(model(images))\n# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples\nprint(ps.shape)",
"_____no_output_____"
]
],
[
[
"With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.",
"_____no_output_____"
]
],
[
[
"top_p, top_class = ps.topk(1, dim=1)\n# Look at the most likely classes for the first 10 examples\nprint(top_class[:10,:])",
"_____no_output_____"
]
],
[
[
"Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.\n\nIf we do\n\n```python\nequals = top_class == labels\n```\n\n`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.",
"_____no_output_____"
]
],
[
[
"equals = top_class == labels.view(*top_class.shape)",
"_____no_output_____"
]
],
[
[
"Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error\n\n```\nRuntimeError: mean is not implemented for type torch.ByteTensor\n```\n\nThis happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implement for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.",
"_____no_output_____"
]
],
[
[
"accuracy = torch.mean(equals.type(torch.FloatTensor))\nprint(f'Accuracy: {accuracy.item()*100}%')",
"_____no_output_____"
]
],
[
[
"The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up the by turning off gradients using `torch.no_grad()`:\n\n```python\n# turn off gradients\nwith torch.no_grad():\n # validation pass here\n for images, labels in testloader:\n ...\n```\n\n>**Exercise:** Implement the validation loop below. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting.",
"_____no_output_____"
]
],
[
[
"model = Classifier()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nepochs = 30\nsteps = 0\n\ntrain_losses, test_losses = [], []\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n \n optimizer.zero_grad()\n \n log_ps = model(images)\n loss = criterion(log_ps, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n else:\n test_loss = 0\n accuracy = 0\n \n # Turn off gradients for validation, saves memory and computations\n with torch.no_grad():\n for images, labels in testloader:\n log_ps = model(images)\n test_loss += criterion(log_ps, labels)\n \n ps = torch.exp(log_ps)\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy += torch.mean(equals.type(torch.FloatTensor))\n \n train_losses.append(running_loss/len(trainloader))\n test_losses.append(test_loss/len(testloader))\n\n print(\"Epoch: {}/{}.. \".format(e+1, epochs),\n \"Training Loss: {:.3f}.. \".format(running_loss/len(trainloader)),\n \"Test Loss: {:.3f}.. \".format(test_loss/len(testloader)),\n \"Test Accuracy: {:.3f}\".format(accuracy/len(testloader)))",
"_____no_output_____"
],
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.plot(train_losses, label='Training loss')\nplt.plot(test_losses, label='Validation loss')\nplt.legend(frameon=False)",
"_____no_output_____"
]
],
[
[
"## Overfitting\n\nIf we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.\n\n<img src='assets/overfitting.png' width=450px>\n\nThe network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.\n\nThe most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing it's ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.\n\n```python\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n # Dropout module with 0.2 drop probability\n self.dropout = nn.Dropout(p=0.2)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n # Now with dropout\n x = self.dropout(F.relu(self.fc1(x)))\n x = self.dropout(F.relu(self.fc2(x)))\n x = self.dropout(F.relu(self.fc3(x)))\n \n # output so no dropout here\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x\n```\n\nDuring training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.\n\n```python\n# turn off gradients\nwith torch.no_grad():\n \n # set model to evaluation mode\n model.eval()\n \n # validation pass here\n for images, labels in testloader:\n ...\n\n# set model back to train mode\nmodel.train()\n```",
"_____no_output_____"
],
[
"> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss.",
"_____no_output_____"
]
],
[
[
"class Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n\n # Dropout module with 0.2 drop probability\n self.dropout = nn.Dropout(p=0.2)\n\n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n\n # Now with dropout\n x = self.dropout(F.relu(self.fc1(x)))\n x = self.dropout(F.relu(self.fc2(x)))\n x = self.dropout(F.relu(self.fc3(x)))\n\n # output so no dropout here\n x = F.log_softmax(self.fc4(x), dim=1)\n\n return x",
"_____no_output_____"
],
[
"model = Classifier()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\nepochs = 30\nsteps = 0\n\ntrain_losses, test_losses = [], []\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n \n optimizer.zero_grad()\n \n log_ps = model(images)\n loss = criterion(log_ps, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n else:\n test_loss = 0\n accuracy = 0\n \n # Turn off gradients for validation, saves memory and computations\n with torch.no_grad():\n model.eval()\n for images, labels in testloader:\n log_ps = model(images)\n test_loss += criterion(log_ps, labels)\n \n ps = torch.exp(log_ps)\n top_p, top_class = ps.topk(1, dim=1)\n equals = top_class == labels.view(*top_class.shape)\n accuracy += torch.mean(equals.type(torch.FloatTensor))\n \n model.train()\n \n train_losses.append(running_loss/len(trainloader))\n test_losses.append(test_loss/len(testloader))\n\n print(\"Epoch: {}/{}.. \".format(e+1, epochs),\n \"Training Loss: {:.3f}.. \".format(train_losses[-1]),\n \"Test Loss: {:.3f}.. \".format(test_losses[-1]),\n \"Test Accuracy: {:.3f}\".format(accuracy/len(testloader)))",
"_____no_output_____"
],
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.plot(train_losses, label='Training loss')\nplt.plot(test_losses, label='Validation loss')\nplt.legend(frameon=False)",
"_____no_output_____"
]
],
[
[
"## Inference\n\nNow that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.",
"_____no_output_____"
]
],
[
[
"# Import helper module (should be in the repo)\nimport helper\n\n# Test out your network!\n\nmodel.eval()\n\ndataiter = iter(testloader)\nimages, labels = dataiter.next()\nimg = images[0]\n# Convert 2D image to 1D vector\nimg = img.view(1, 784)\n\n# Calculate the class probabilities (softmax) for img\nwith torch.no_grad():\n output = model.forward(img)\n\nps = torch.exp(output)\n\n# Plot the image and probabilities\nhelper.view_classify(img.view(1, 28, 28), ps, version='Fashion')",
"_____no_output_____"
]
],
[
[
"## Next Up!\n\nIn the next part, I'll show you how to save your trained models. In general, you won't want to train a model everytime you need it. Instead, you'll train once, save it, then load the model when you want to train more or use if for inference.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a7f3e31fcf109b19817f9c9c794c7856cc0bd66
| 2,604 |
ipynb
|
Jupyter Notebook
|
3/1_docker_install.ipynb
|
rymzt/aaic_gathering
|
5ce8fa130257e6c53efb431ebd84c009c9d4b641
|
[
"MIT"
] | 3 |
2017-09-05T10:29:26.000Z
|
2019-02-08T08:01:38.000Z
|
3/1_docker_install.ipynb
|
rymzt/aaic_gathering
|
5ce8fa130257e6c53efb431ebd84c009c9d4b641
|
[
"MIT"
] | null | null | null |
3/1_docker_install.ipynb
|
rymzt/aaic_gathering
|
5ce8fa130257e6c53efb431ebd84c009c9d4b641
|
[
"MIT"
] | 2 |
2018-07-09T01:04:06.000Z
|
2018-08-09T09:46:24.000Z
| 24.111111 | 142 | 0.610983 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a7f426875646ccbb27b225b0872cd48e69d05fd
| 29,544 |
ipynb
|
Jupyter Notebook
|
aas229_workshop/Tutorial_Notebooks/photutils/gaussian_psf_photometry.ipynb
|
astropy/astropy-workshops
|
2c35a2775b5926e1bcbffadd5934591d0acb989f
|
[
"BSD-3-Clause"
] | 1 |
2019-12-10T19:45:03.000Z
|
2019-12-10T19:45:03.000Z
|
aas229_workshop/Tutorial_Notebooks/photutils/gaussian_psf_photometry.ipynb
|
astropy/astropy-workshops
|
2c35a2775b5926e1bcbffadd5934591d0acb989f
|
[
"BSD-3-Clause"
] | null | null | null |
aas229_workshop/Tutorial_Notebooks/photutils/gaussian_psf_photometry.ipynb
|
astropy/astropy-workshops
|
2c35a2775b5926e1bcbffadd5934591d0acb989f
|
[
"BSD-3-Clause"
] | 2 |
2019-09-30T01:37:34.000Z
|
2019-10-31T18:19:54.000Z
| 30.332649 | 463 | 0.62236 |
[
[
[
"# Point Spread Function Photometry with Photutils",
"_____no_output_____"
],
[
"The PSF photometry module of photutils is intended to be a fully modular tool such that users are able to completly customise the photometry procedure, e.g., by using different source detection algorithms, background estimators, PSF models, etc. Photutils provides implementations for each subtask involved in the photometry process, however, users are still able to include their own implementations without having to touch into the photutils core classes!",
"_____no_output_____"
],
[
"This modularity characteristic is accomplished by using the object oriented programming approach which provides a more convient user experience while at the same time allows the developers to think in terms of classes and objects rather than isolated functions.",
"_____no_output_____"
],
[
"Photutils provides three basic classes to perform PSF photometry: `BasicPSFPhotometry`, `IterativelySubtractedPSFPhotometry`, and `DAOPhotPSFPhotometry`. In this notebook, we will go through them, explaining their differences and particular uses.",
"_____no_output_____"
],
[
"# Artificial Starlist",
"_____no_output_____"
],
[
"First things first! Let's create an artifical list of stars using photutils in order to explain the PSF procedures through examples.",
"_____no_output_____"
]
],
[
[
"from photutils.datasets import make_random_gaussians\nfrom photutils.datasets import make_noise_image\nfrom photutils.datasets import make_gaussian_sources\n\nnum_sources = 150\nmin_flux = 500\nmax_flux = 5000\nmin_xmean = 16\nmax_xmean = 240\nsigma_psf = 2.0\n\nstarlist = make_random_gaussians(num_sources, [min_flux, max_flux],\n [min_xmean, max_xmean],\n [min_xmean, max_xmean],\n [sigma_psf, sigma_psf],\n [sigma_psf, sigma_psf],\n random_state=1234)\n\nshape = (256, 256)\nimage = (make_gaussian_sources(shape, starlist) +\n make_noise_image(shape, type='poisson', mean=6., random_state=1234) + \n make_noise_image(shape, type='gaussian', mean=0., stddev=2., random_state=1234))",
"_____no_output_____"
]
],
[
[
"Note that we also added Poisson and Gaussian background noises with the function `make_noise_image`.",
"_____no_output_____"
],
[
"Let's keep in mind this fact:",
"_____no_output_____"
]
],
[
[
"type(starlist)",
"_____no_output_____"
],
[
"starlist",
"_____no_output_____"
]
],
[
[
"Pretty much all lists of sources in `photutils` are returned or passed in as `astropy` `Table` objects, so this is something to get used to.",
"_____no_output_____"
],
[
"Let's also plot our list of stars.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib import rcParams\nimport matplotlib.pyplot as plt\nrcParams['image.cmap'] = 'magma'\nrcParams['image.aspect'] = 1 # to get images with square pixels\nrcParams['figure.figsize'] = (20,10)\nrcParams['image.interpolation'] = 'nearest'\nrcParams['image.origin'] = 'lower'\nrcParams['font.size'] = 14\n\nplt.imshow(image)\nplt.title('Simulated data')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)",
"_____no_output_____"
]
],
[
[
"# The `BasicPSFPhotometry` class",
"_____no_output_____"
],
[
"As the name suggests, this is a basic class which provides the minimum tools necessary to perform photometry in crowded fields (or non crowded fields). Let's take a look into its attributes and methods.",
"_____no_output_____"
],
[
"BasicPSFPhotometry has the following mandatory attributes:\n * group_maker : callable or instance of any GroupStarsBase subclass\n * bkg_estimator : callable, instance of any BackgroundBase subclass, or None\n * psf_model : astropy.modeling.Fittable2DModel instance\n * fitshape : integer or length-2 array-like",
"_____no_output_____"
],
[
"And the following optional attributes:\n * finder : callable or instance of any StarFinderBase subclasses or None\n * fitter : Astropy Fitter instance\n * aperture_radius : float or int",
"_____no_output_____"
],
[
"## Group Maker",
"_____no_output_____"
],
[
"`group_maker` can be instantiated using any GroupStarBase subclass, such as `photutils.psf.DAOGroup` or `photutils.psf.DBSCANGroup`, or even using a `callable` provided by the user.",
"_____no_output_____"
],
[
"`photutils.psf.DAOGroup` is a class which implements the `GROUP` algorithm proposed by Stetson which is used in DAOPHOT. This class takes one attribute to be initialized namely:\n * crit_separation : int or float\n Distance, in units of pixels, such that any two stars separated by less than this distance will be placed in the same group.",
"_____no_output_____"
],
[
"As it is shown in its description, `crit_separation` plays a crucial role in deciding whether or not a given star belong to some group of stars. Usually, `crit_separation` is set to be a positive real number multiplied by the FWHM of the PSF.",
"_____no_output_____"
],
[
"`photutils.psf.DBSCANGroup` is a generalized case of `photutils.psf.DAOGroup`, in fact, it is a wrapper around the `sklearn.cluster.DBSCAN` class. Its usage is very similar to `photutils.psf.DAOGroup` and we refer the photutils API doc page for more information: https://photutils.readthedocs.io/en/latest/api/photutils.psf.DBSCANGroup.html#photutils.psf.DBSCANGroup",
"_____no_output_____"
],
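[
"As a quick illustration of the interface, here is a minimal sketch of instantiating a `DBSCANGroup` with the same critical separation we will use for `DAOGroup` below (this object is not used in the rest of the notebook):\n\n```python\nfrom astropy.stats import gaussian_sigma_to_fwhm\nfrom photutils.psf import DBSCANGroup\n\n# group stars separated by less than two FWHMs of our Gaussian PSF\n# (sigma_psf was defined when we built the star list)\ndbscan_group = DBSCANGroup(crit_separation=2.*sigma_psf*gaussian_sigma_to_fwhm)\n```",
"_____no_output_____"
],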
[
"The user is welcome to check the narrative docs on the photutils RTD webpage: https://photutils.readthedocs.io/en/latest/photutils/grouping.html",
"_____no_output_____"
],
[
"Now, let's instantiate a `group_maker` from `DAOGroup`:",
"_____no_output_____"
]
],
[
[
"from photutils import psf\nfrom astropy.stats import gaussian_sigma_to_fwhm",
"_____no_output_____"
],
[
"daogroup = psf.DAOGroup(crit_separation=2.*sigma_psf*gaussian_sigma_to_fwhm)",
"_____no_output_____"
]
],
[
[
"Now, the object `daogroup` is ready to be passed to `BasicPSFPhotometry`.",
"_____no_output_____"
],
[
"## Background Estimation",
"_____no_output_____"
],
[
"Background estimation is needed in the photometry process in order to reduce the bias added primarily by Poisson noise background into the flux estimation.",
"_____no_output_____"
],
[
"Photutils provides several classes to perform both scalar background estimation, i.e., when the background is flat and does not vary strongly across the image, and spatial varying background estimation, i.e., when there exist a gradient field associated with the background.",
"_____no_output_____"
],
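[
"Our simulated background is flat, so a scalar estimator suffices here, but for the spatially varying case photutils provides the `Background2D` class. A minimal sketch (the box size and filter size below are illustrative choices, not tuned values):\n\n```python\nfrom astropy.stats import SigmaClip\nfrom photutils import Background2D, MedianBackground\n\n# estimate the background on a grid of 32x32 pixel boxes and\n# median-filter the resulting low-resolution map with a 3x3 window\nbkg2d = Background2D(image, (32, 32), filter_size=(3, 3),\n                     sigma_clip=SigmaClip(sigma=3.),\n                     bkg_estimator=MedianBackground())\nimage_minus_bkg = image - bkg2d.background\n```",
"_____no_output_____"
],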
[
"The user is welcome to refer to the Background Esimation narrative docs in the photutils webpage for a detailed explanation. https://photutils.readthedocs.io/en/latest/photutils/background.html",
"_____no_output_____"
],
[
"In this notebook, we will use the class `MMMBackground` which is intended to estimate scalar background. This class is based on the background estimator used in `DAOPHOT`.",
"_____no_output_____"
],
[
"`MMMBackground` gets a `SigmaClip` object as an attribute. It's basically used to perform sigma clip on the image before performing background estimation. For our scenario, we will just instatiate a object of `MMMBackground` with default attribute values:",
"_____no_output_____"
]
],
[
[
"from photutils import MMMBackground\nmmm_bkg = MMMBackground()",
"_____no_output_____"
],
[
"mmm_bkg.sigma_clip.sigma",
"_____no_output_____"
],
[
"mmm_bkg.sigma_clip.iters",
"_____no_output_____"
]
],
[
[
"## PSF Models",
"_____no_output_____"
],
[
"The attribute ``psf_model`` represents an analytical function with unkwon parameters (e.g., peak center and flux) which describes the underlying point spread function. ``psf_model`` is usually a subclass of `astropy.modeling.Fittable2DModel`. In this notebook, we will use `photutils.psf.IntegratedGaussianPRF` as our underlying PSF model.",
"_____no_output_____"
],
[
"Note that the underlying PSF model has to have parameters with the following names ``x_0``, ``y_0``, and ``flux``, to describe the center peak position and the flux, respectively.",
"_____no_output_____"
]
],
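[
[
"For instance, here is a minimal sketch of a custom PSF model following this naming convention, written by subclassing `astropy.modeling.Fittable2DModel` (this class is purely illustrative and is not used in the rest of this notebook):\n\n```python\nimport numpy as np\nfrom astropy.modeling import Fittable2DModel, Parameter\n\nclass SimpleGaussianPSF(Fittable2DModel):\n    # parameter names expected by the photutils photometry classes\n    flux = Parameter(default=1)\n    x_0 = Parameter(default=0)\n    y_0 = Parameter(default=0)\n    sigma = Parameter(default=1, fixed=True)\n\n    @staticmethod\n    def evaluate(x, y, flux, x_0, y_0, sigma):\n        # normalized circular Gaussian evaluated at pixel positions (x, y)\n        return (flux / (2 * np.pi * sigma**2) *\n                np.exp(-((x - x_0)**2 + (y - y_0)**2) / (2 * sigma**2)))\n```",
"_____no_output_____"
]
],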
[
[
"from photutils.psf import IntegratedGaussianPRF\ngaussian_psf = IntegratedGaussianPRF(sigma=2.0)",
"_____no_output_____"
]
],
[
[
"## Finder",
"_____no_output_____"
],
[
"Finder is an optional attribute, meaning that if it is `None`, then the user should provide a table with the center positions of each star when calling the `BasicPSFPhotometry` object.\nLater, we will see examples of both cases, i.e., when Finder is `None` and when it is not.",
"_____no_output_____"
],
[
"The finder attribute is used to perform source detection. It can be any subclass of `photutils.StarFinderBase` such as `photutils.DAOStarFinder` or `photutils.IRAFStarFinder`, which implement a DAOPHOT-like or IRAF-like source detection algorithms, respectively. The user can also set her/his own source detection algorithm as long as the input/output formats are compatible with `photutils.StarFinderBase`.",
"_____no_output_____"
],
[
"`photutils.DAOStarFinder`, for instance, receives the following mandatory attributes: \n * threshold : float\n The absolute image value above which to select sources.\n * fwhm : float\n The full-width half-maximum (FWHM) of the major axis of the Gaussian kernel in units of pixels.",
"_____no_output_____"
],
[
"Now, let's instantiate our `DAOStarFinder` object:",
"_____no_output_____"
]
],
[
[
"from photutils.detection import DAOStarFinder\n\ndaofinder = DAOStarFinder(threshold=2.5*mmm_bkg(image), fwhm=sigma_psf*gaussian_sigma_to_fwhm)",
"_____no_output_____"
]
],
[
[
"Note that we choose the `threshold` to be a multiple of the background level and we assumed the `fwhm` to be known from our list of stars.",
"_____no_output_____"
],
[
"More details about source detection can be found on the `photutils.detection` narrative docs: https://photutils.readthedocs.io/en/latest/photutils/detection.html",
"_____no_output_____"
],
[
"## Fitter",
"_____no_output_____"
],
[
"Fitter should be an instance of a fitter implemented in `astropy.modeling.fitting`. Since the PSF model is almost always nonlinear, the fitter should be able to handle nonlinear optimization problems. In this notebook, we will use the `LevMarLSQFitter`, which combines the Levenberg-Marquardt optimization algorithm with the least-squares statistic. The default value for fitter is `LevMarLSQFitter()`. ",
"_____no_output_____"
],
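[
"As a minimal sketch, the default fitter can also be created explicitly and passed in:\n\n```python\nfrom astropy.modeling.fitting import LevMarLSQFitter\n\n# Levenberg-Marquardt least-squares fitter (the default used by the photometry classes)\nfitter = LevMarLSQFitter()\n```",
"_____no_output_____"
],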
[
"Look at http://docs.astropy.org/en/stable/modeling/index.html for more details on fitting.",
"_____no_output_____"
],
[
"NOTE: At this point it should be stated tha photutils do not have a standard way to compute uncertainties on the fitted parameters. However, this will change in the near future with the addition of a new affiliated package to the Astropy environment, namely, `SABA: Sherpa-Astropy Bridge` which made possible to use astropy models together with Sherpa Fitters.",
"_____no_output_____"
],
[
"## Fitshape and Aperture Radius",
"_____no_output_____"
],
[
"There are two attributes left: `fitshape` (mandatory) and `aperture_radius` (optional).\n`fitshape` corresponds to the size of the rectangular region necessary to enclose one single source. The pixels inside that region will be used in the fitting process. `fitshape` should be an odd integer or a tuple of odd integers.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfitshape = 11",
"_____no_output_____"
]
],
[
[
"The aperture radius corresponds to the radius used to compute initial guesses for the fluxes of the sources. If this value is `None`, then one fwhm will be used if it can be determined by the `psf_model`.",
"_____no_output_____"
],
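[
"As a minimal sketch (the radius value is an assumption for illustration, and we will rely on the default behavior in the examples below), an explicit aperture radius in pixels could be supplied when creating the photometry object:\n\n```python\nfrom photutils.psf import BasicPSFPhotometry\n\n# hypothetical choice: use one FWHM of our Gaussian PSF as the aperture radius\nphotometry_with_radius = BasicPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,\n                                            psf_model=gaussian_psf, fitshape=11,\n                                            aperture_radius=sigma_psf*gaussian_sigma_to_fwhm)\n```",
"_____no_output_____"
],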
[
"## Example with unknown positions and unknown fluxes",
"_____no_output_____"
],
[
"Now we are ready to take a look at an actual example. Let's first create our `BasicPSFPhotometry` object putting together the pieces that we defined along the way:",
"_____no_output_____"
]
],
[
[
"from photutils.psf import BasicPSFPhotometry\nbasic_photometry = BasicPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,\n psf_model=gaussian_psf, fitshape=fitshape,\n finder=daofinder)",
"_____no_output_____"
]
],
[
[
"To actually perform photometry on our image that we defined previously, we should use `basic_photometry` as a function call:",
"_____no_output_____"
]
],
[
[
"photometry_results = basic_photometry(image)\nphotometry_results",
"_____no_output_____"
]
],
[
[
"Let's plot the residual image along with the original image:",
"_____no_output_____"
]
],
[
[
"fig, (ax1, ax2) = plt.subplots(1,2)\nim1 = ax1.imshow(basic_photometry.get_residual_image())\nax1.set_title('Residual Image')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04,\n ax=ax1, mappable=im1)\n\nim2 = ax2.imshow(image)\nax2.set_title('Simulated data')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04,\n ax=ax2, mappable=im2)",
"_____no_output_____"
]
],
[
[
"Looking at the residual image we observe that the photometry process was able to fit many stars but not all. This is probably due to inability of the source detection algorithm to decide the number of sources in every crowded group. Therefore, let's play with the source detection classes to see whether we can improve the photometry process.",
"_____no_output_____"
],
[
"Let's use the `IRAFStarFinder` and play with the optional parameters. A complete description of these parameters can be seen at the `photutils.dection` API documentation: https://photutils.readthedocs.io/en/latest/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder",
"_____no_output_____"
]
],
[
[
"from photutils.detection import IRAFStarFinder\niraffind = IRAFStarFinder(threshold=2.5*mmm_bkg(image),\n fwhm=sigma_psf*gaussian_sigma_to_fwhm,\n minsep_fwhm=0.01, roundhi=5.0, roundlo=-5.0,\n sharplo=0.0, sharphi=2.0)",
"_____no_output_____"
]
],
[
[
"Now let's set the `finder` attribute of our `BasicPSFPhotometry` object with `iraffind`:",
"_____no_output_____"
]
],
[
[
"basic_photometry.finder = iraffind",
"_____no_output_____"
]
],
[
[
"Let's repeat the photometry process:",
"_____no_output_____"
]
],
[
[
"photometry_results = basic_photometry(image)\nphotometry_results",
"_____no_output_____"
],
[
"plt.subplot(1,2,1)\nplt.imshow(basic_photometry.get_residual_image())\nplt.title('Residual Image')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)\n\nplt.subplot(1,2,2)\nplt.imshow(image)\nplt.title('Simulated data')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)",
"_____no_output_____"
]
],
[
[
"As we can see, the residual now looks much more Gaussian, with only three groups that were not fitted well. The likely reason is that those sources are too close together to be distinguished by the source detection algorithm.",
"_____no_output_____"
],
[
"## Example with known positions and unknown fluxes",
"_____no_output_____"
],
[
"Let's assume that somehow we know the true positions of the stars and we only would like to perform fitting on the fluxes. Then we should use the optional argument `positions` when calling the photometry object:",
"_____no_output_____"
]
],
[
[
"from astropy.table import Table\npositions = Table(names=['x_0', 'y_0'], data=[starlist['x_mean'], starlist['y_mean']])",
"_____no_output_____"
],
[
"photometry_results = basic_photometry(image=image, positions=positions)",
"_____no_output_____"
],
[
"plt.subplot(1,2,1)\nplt.imshow(basic_photometry.get_residual_image())\nplt.title('Residual Image')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)\n\nplt.subplot(1,2,2)\nplt.imshow(image)\nplt.title('Simulated data')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)",
"_____no_output_____"
]
],
[
[
"Let's do a scatter plot between ground-truth fluxes and estimated fluxes:",
"_____no_output_____"
]
],
[
[
"photometry_results.sort('id')\nplt.scatter(starlist['flux'], photometry_results['flux_fit'])\nplt.xlabel('Ground-truth fluxes')\nplt.ylabel('Estimated fluxes')",
"_____no_output_____"
]
],
[
[
"Let's also plot the relative error of the flux estimates as a function of the ground-truth fluxes.",
"_____no_output_____"
]
],
[
[
"plt.scatter(starlist['flux'], (photometry_results['flux_fit'] - starlist['flux'])/starlist['flux'])\nplt.xlabel('Ground-truth flux')\nplt.ylabel('Estimate Relative Error')",
"_____no_output_____"
]
],
[
[
"As we can see, the relative error becomes smaller as the flux increases.",
"_____no_output_____"
],
[
"# `IterativelySubtractedPSFPhotometry`",
"_____no_output_____"
],
[
"`IterativelySubtractedPSFPhotometry` is a subclass of `BasicPSFPhotometry` which adds iteration functionality to the photometry procedure. It has the same attributes as `BasicPSFPhotometry`, except that it includes an additional attribute, `niters`, which represents the number of times to loop through the photometry process, subtracting the best-fit stars each time.",
"_____no_output_____"
],
[
"Hence, the process implemented in `IterativelySubtractedPSFPhotometry` resembles the loop used by DAOPHOT: `FIND`, `GROUP`, `NSTAR`, `SUBTRACT`, `FIND`. On its own, `IterativelySubtractedPSFPhotometry` doesn't implement the specific algorithms used in DAOPHOT, but it does implement the *structure* to enable this (and `DAOPhotPSFPhotometry`, discussed below, does).",
"_____no_output_____"
],
[
"The attribute `niters` can be `None`, which means that the photometry procedure will continue until no more sources are detected.",
"_____no_output_____"
],
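[
"For example, a variant that keeps iterating until no new sources are found could be built as in the hedged sketch below (`daogroup`, `mmm_bkg`, `gaussian_psf`, `fitshape` and `iraffind` are assumed from earlier cells):",
"_____no_output_____"
],
[
"# a sketch of the niters=None variant: iterate until no more sources are detected\nfrom photutils.psf import IterativelySubtractedPSFPhotometry\nitr_phot_exhaustive = IterativelySubtractedPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,\n                                                         psf_model=gaussian_psf, fitshape=fitshape,\n                                                         finder=iraffind, niters=None)",
"_____no_output_____"
],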
[
"One final detail: the attribute `finder` (specifying the star-finder algorithm) for `IterativelySubtractedPSFPhotometry` cannot be `None` (as it can be for `BasicPSFPhotometry`). This is because it would not make sense to have an iterative process where the star finder changes completely at each step. If you want to do that you're better off manually looping over a series of calls to different `BasicPSFPhotometry` objects.",
"_____no_output_____"
],
[
"## Example with unknown positions and unknown fluxes",
"_____no_output_____"
],
[
"Let's instantiate an object of `IterativelySubtractedPSFPhotometry`:",
"_____no_output_____"
]
],
[
[
"from photutils.psf import IterativelySubtractedPSFPhotometry\nitr_phot = IterativelySubtractedPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,\n psf_model=gaussian_psf, fitshape=fitshape,\n finder=iraffind, niters=2)",
"_____no_output_____"
]
],
[
[
"Let's now perform photometry on our artificial image:",
"_____no_output_____"
]
],
[
[
"photometry_results = itr_phot(image)\nphotometry_results",
"_____no_output_____"
]
],
[
[
"Observe that there is a new column, `iter_detected`, which shows the iteration in which each source was detected.",
"_____no_output_____"
],
[
"Let's plot the residual image:",
"_____no_output_____"
]
],
[
[
"plt.subplot(1,2,1)\nplt.imshow(itr_phot.get_residual_image())\nplt.title('Residual Image')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)\n\nplt.subplot(1,2,2)\nplt.imshow(image)\nplt.title('Simulated data')\nplt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)",
"_____no_output_____"
]
],
[
[
"# `DAOPhotPSFPhotometry` ",
"_____no_output_____"
],
[
"There is also a class called `DAOPhotPSFPhotometry` that is a subclass of `IterativelySubtractedPSFPhotometry`. `DAOPhotPSFPhotometry` essentially implements the DAOPHOT photometry algorithm using `IterativelySubtractedPSFPhotometry`. So instead of giving it arguments like `finder`, you provide parameters specific to the DAOPhot-like sub-tasks (e.g., the FWHM the star finder is optimized for).\n\nWe leave the use of this class as an exercise for the reader: play with the parameters to optimize the photometry procedure.",
"_____no_output_____"
]
],
[
[
"from photutils.psf import DAOPhotPSFPhotometry\n\ndao_phot = DAOPhotPSFPhotometry(...)\n\nphotometry_results = dao_phot(image)\nphotometry_results",
"_____no_output_____"
]
],
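[
[
"One possible (untuned) starting point is sketched below; every numeric value is an illustrative guess rather than a recommendation, and `sigma_psf`, `mmm_bkg`, `gaussian_psf`, `fitshape` and `image` are assumed from earlier cells.",
"_____no_output_____"
]
],
[
[
"# an illustrative, untuned configuration -- all values below are guesses to experiment from\nfrom photutils.psf import DAOPhotPSFPhotometry\nfrom astropy.stats import gaussian_sigma_to_fwhm\n\ndao_phot = DAOPhotPSFPhotometry(crit_separation=2.0*sigma_psf*gaussian_sigma_to_fwhm,\n                                threshold=2.5*mmm_bkg(image),\n                                fwhm=sigma_psf*gaussian_sigma_to_fwhm,\n                                psf_model=gaussian_psf, fitshape=fitshape, niters=2)\n\nphotometry_results = dao_phot(image)\nphotometry_results",
"_____no_output_____"
]
],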
[
[
"## Documentation",
"_____no_output_____"
],
[
"Narrative and API docs for the classes used here can be found at https://photutils.readthedocs.io/en/latest/",
"_____no_output_____"
],
[
"# Future Work\nThe PSF photometry module in photutils is still under development, and feedback from users is much appreciated. Please open an issue on the photutils GitHub issue tracker with any suggestions for improvements, desired functionality, bugs, etc. \n\nNear-future additions to the photutils.psf module include:\n\n* FWHM estimation: a Python equivalent of DAOPHOT psfmeasure.\n* Uncertainty computation: uncertainties are critical, and it is very likely that the astropy SABA package will be used to integrate uncertainty computation into photutils.psf.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a7f4d1648c6a9fbfd02e4c76f767714a9a553ee
| 824,249 |
ipynb
|
Jupyter Notebook
|
M51 Stuff.ipynb
|
tobin-wainer/mass_function_fitting
|
70128e48742f21e38f2e7c08c52f4cf5c2149b7e
|
[
"MIT"
] | null | null | null |
M51 Stuff.ipynb
|
tobin-wainer/mass_function_fitting
|
70128e48742f21e38f2e7c08c52f4cf5c2149b7e
|
[
"MIT"
] | null | null | null |
M51 Stuff.ipynb
|
tobin-wainer/mass_function_fitting
|
70128e48742f21e38f2e7c08c52f4cf5c2149b7e
|
[
"MIT"
] | null | null | null | 934.522676 | 129,696 | 0.94788 |
[
[
[
"import numpy as np\nfrom astropy.table import Table, join, MaskedColumn, vstack, Column\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport scipy\nfrom astropy.time import Time\nimport pandas as pd\nimport re\nimport seaborn as sns\nimport datetime\nfrom datetime import datetime\nfrom datetime import timedelta\nimport math\nfrom math import e\nfrom math import pi\nfrom math import sqrt\nimport emcee\nfrom astropy.io import fits\nimport corner\nfrom numpy import exp\nfrom scipy import integrate\nfrom scipy.integrate import quad\nimport pdb\nimport powerlaw\nimport random",
"_____no_output_____"
],
[
"#Reading in data file\nM51_raw=Table.read('M51_Messa_2018_CSV.csv')\nM51_raw\n\n#Messa+ 2018 only used masses greater than 5000 solar masses\nM51_used_masses_ind=np.where(M51_raw['Best_Mass_Msolar']>5000)\nM51_used_masses=M51_raw[M51_used_masses_ind]\nM51_used_masses\n\n#Only used Ages Less than 200 Myr\nM51_age_cut=np.where(M51_used_masses['Best_Age_yr']<200000000)\nM51_used_ages_masses=M51_used_masses[M51_age_cut]\nM51_used_ages_masses\n\nlog_masses=np.log10(M51_used_ages_masses['Best_Mass_Msolar'])\nlog_max_mass=np.log10(M51_used_ages_masses['Max_Mass_Msolar'])\nlog_min_mass=np.log10(M51_used_ages_masses['Min_Mass_Msolar'])\n\n#20 Clusters with no upper or lower Estimates\nno_max_min_estimate=np.where(log_max_mass<0)\nM51_used_ages_masses.remove_rows([no_max_min_estimate])\nM51_use=M51_used_ages_masses\nM51_use",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: RuntimeWarning: divide by zero encountered in log10\n app.launch_new_instance()\n/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:17: RuntimeWarning: divide by zero encountered in log10\n/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/astropy/table/table.py:2226: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n keep_mask[row_specifier] = False\n"
],
[
"#Making The Histogram Anil wanted to see\n\nlog_masses=np.log10(M51_use['Best_Mass_Msolar'])\nlog_max_mass=np.log10(M51_use['Max_Mass_Msolar'])\nlog_min_mass=np.log10(M51_use['Min_Mass_Msolar'])\n\nmass_error=[]\nfor i in range(len(log_max_mass)):\n mass_error.append((log_max_mass[i]-log_min_mass[i])/2)\n\nplt.hist(log_max_mass-log_masses, color='b', histtype='step', bins=20, label='Upper-Est')\nplt.hist(log_masses-log_min_mass, color='r', histtype='step', bins=20, label='Est-Lower')\nplt.yscale('log')\nplt.legend()\nplt.show()\n\nplt.hist((log_max_mass-log_masses)-(log_masses-log_min_mass), color='k', histtype='step', bins=20)\n#plt.hist(log_masses-log_min_mass, color='r', histtype='step', bins=20, label='Est-Lower')\nplt.yscale('log')\n",
"_____no_output_____"
],
[
"plt.hist(log_masses, histtype='step', color='k')\nplt.yscale('log')",
"_____no_output_____"
],
[
"#Running their Sample\n\ndef lnZ(theta, M):\n alpha, M_c = theta\n lin_M_c= 10**M_c\n def f(M):\n return (M**alpha)*exp(-M/lin_M_c)\n ans, err = quad(f, 5000, np.inf)\n return np.log(ans)\n\ndef lnlike(theta, M):\n alpha, M_c = theta\n lin_M= 10**M\n lin_M_c= 10**M_c\n return (np.sum(-lin_M/lin_M_c + alpha*np.log(lin_M) - lnZ(theta, lin_M)))\n\ndef lnprior(theta):\n alpha, M_c = theta\n if -3 <= alpha <= -1 and 3 <= M_c <= 8:\n return 0.0\n return -np.inf\n\ndef lnprob(theta, M):\n lp = lnprior(theta)\n if not np.isfinite(lp):\n return -np.inf\n return lp + lnlike(theta, M)\n\nstarting_point=np.array([-1.99, 5.00])\n\nndim, nwalkers = 2, 500\nnsteps= 600\nburnin=100\npos = starting_point + 1e-2*np.random.randn(nwalkers, ndim)\n\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=([log_masses]))\nsampler.run_mcmc(pos, nsteps)\n\n#plot chain\nplt.plot(np.transpose(sampler.chain[:,:,1]))\nplt.show()\nsampler.get_chain(thin=5)\nsamples = sampler.chain[:, burnin:, :].reshape((-1, ndim))\nfig = corner.corner(samples, labels=[\"Alpha\", \"Log(M_c)\"], label_kwargs={\"fontsize\": 18},\n quantiles=[0.16, 0.5, 0.84], show_titles=True, title_kwargs={\"fontsize\": 18})\n\nfig.show()",
"_____no_output_____"
],
[
"# Trying to generate random samples that follow a power-law distribution with the upper mass truncation published\n# in Messa+ 2018\n\ntheoretical_distribution = powerlaw.Power_Law(xmin=5000, xmax=100000, parameters = [2], discrete=True)\nsimulated_data=theoretical_distribution.generate_random(3200)\nfake_M_l=[]\nfor i in range(len(simulated_data)):\n    fake_M_l.append(simulated_data[i])\n\nA3_fml=[]\nfor i in range(len(fake_M_l)):\n    if fake_M_l[i] >=5000 and fake_M_l[i] < 10**6.2:\n        A3_fml.append(fake_M_l[i])\n\nA3_fml.sort()\n\nfake_M=np.array(A3_fml)\nfake_M\n\nprint(np.where(fake_M>100000))\n\nrandom_ints = np.array(random.sample(range(2991, 3200), 190))\n#random_ints2 = np.array(random.sample(range(2940, 3180), 150))\nnew_fake_M=np.delete(fake_M, [random_ints])\n#new_fake_M2=np.delete(new_fake_M, [random_ints2])\n\nlog_FMl=[3.7 for i in range(93)]\nfor i in range(len(new_fake_M)):\n    log_FMl.append(np.log10(new_fake_M[i]))\n\nlog_FM= np.array(log_FMl)\nlog_FM\nprint(len(log_FM))\n\n#x=[3,5.3]\n#y=[461,1]\nplt.hist(log_FM, histtype='step', bins=10)\n#plt.plot(x,y, c='r', label='alpha= -2')\n#plt.xlim(2.99,5)\nplt.yscale('log')\nplt.ylim(1)\nplt.xlabel('logM')\nplt.ylabel('N Clusters')\nplt.legend()",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:24: DeprecationWarning: in the future out of bounds indices will raise an error instead of being ignored by `numpy.delete`.\nWARNING:matplotlib.legend:No handles with labels found to put in legend.\n"
],
[
"def lnZ(theta, M):\n alpha, M_c = theta\n lin_M_c= 10**M_c\n def f(M):\n return (M**alpha)*exp(-M/lin_M_c)\n ans, err = quad(f, 5000, np.inf)\n return np.log(ans)\n\ndef lnlike(theta, M):\n alpha, M_c = theta\n lin_M= 10**M\n lin_M_c= 10**M_c\n return (np.sum(-lin_M/lin_M_c + alpha*np.log(lin_M) - lnZ(theta, lin_M)))\n\ndef lnprior(theta):\n alpha, M_c = theta\n if -3 <= alpha <= -1 and 3 <= M_c <= 8:\n return 0.0\n return -np.inf\n\ndef lnprob(theta, M):\n lp = lnprior(theta)\n if not np.isfinite(lp):\n return -np.inf\n return lp + lnlike(theta, M)\n\nstarting_point=np.array([-2.00, 5.00])\n\nndim, nwalkers = 2, 500\nnsteps= 600\nburnin=100\npos = starting_point + 1e-2*np.random.randn(nwalkers, ndim)\n\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=([log_FM]))\nsampler.run_mcmc(pos, nsteps)\n\n#plot chain\nplt.plot(np.transpose(sampler.chain[:,:,1]))\nplt.show()\nsampler.get_chain(thin=5)\nsamples = sampler.chain[:, burnin:, :].reshape((-1, ndim))\nfig = corner.corner(samples, labels=[\"Alpha\", \"Log(M_c)\"], label_kwargs={\"fontsize\": 18},\n quantiles=[0.16, 0.5, 0.84], show_titles=True, title_kwargs={\"fontsize\": 18})\n\nfig.show()",
"_____no_output_____"
],
[
"def uncertainty(mass_error, log_FM):\n spread_masses=[]\n for i in range(len(mass_error)):\n rand_spread=(np.random.normal(0, mass_error[i]))\n spread_masses.append(log_FM[i]+rand_spread)\n \n spread_masses=np.array(spread_masses)\n \n sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=([spread_masses]))\n sampler.run_mcmc(pos, nsteps)\n\n #plot chain\n# plt.plot(np.transpose(sampler.chain[:,:,1]))\n# plt.show()\n sampler.get_chain(thin=5)\n samples = sampler.chain[:, burnin:, :].reshape((-1, ndim))\n fig = corner.corner(samples, labels=[\"Alpha\", \"Log(M_c)\"], label_kwargs={\"fontsize\": 18},\n quantiles=[0.16, 0.5, 0.84], show_titles=True, title_kwargs={\"fontsize\": 18})\n \n fig.show()\n \n alpha=[i[0] for i in samples]\n Mc= [i[1] for i in samples]\n \n med_a=np.median(alpha)\n upper_sig_a= np.percentile(alpha, 84) \n lower_sig_a= np.percentile(alpha, 16)\n med_Mc=np.median(Mc)\n upper_sig_Mc= np.percentile(Mc, 84)\n lower_sig_Mc= np.percentile(Mc, 16)\n \n return np.array((med_a, lower_sig_a, upper_sig_a, med_Mc, lower_sig_Mc, upper_sig_Mc))\n",
"_____no_output_____"
],
[
"round1=uncertainty(mass_error, log_FM)",
"WARNING:root:Too few points to create valid contours\n/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round2=uncertainty(mass_error, log_FM)",
"WARNING:root:Too few points to create valid contours\n/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round3=uncertainty(mass_error, log_FM)",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round4=uncertainty(mass_error, log_FM)",
"WARNING:root:Too few points to create valid contours\n/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round5=uncertainty(mass_error, log_FM)",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round6=uncertainty(mass_error, log_FM)",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round7=uncertainty(mass_error, log_FM)",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round8=uncertainty(mass_error, log_FM)",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round9=uncertainty(mass_error, log_FM)",
"WARNING:root:Too few points to create valid contours\n/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"round10=uncertainty(mass_error, log_FM)",
"/Users/Tobin/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:20: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n"
],
[
"alphas=[round1[0], round2[0], round3[0], round4[0], round5[0], round6[0], round7[0], round8[0], round9[0], round10[0]]\nMcs= [round1[3], round2[3], round3[3], round4[3], round5[3], round6[3], round7[3], round8[3], round9[3], round10[3],]\n\nprint(\"Median:\", np.median(Mcs))\nprint(\"1 Sigma:\", np.percentile(Mcs, 16))\nprint(\"1 Sigma:\", np.percentile(Mcs, 84))",
"Median: 5.701392119786052\n1 Sigma: 5.6645783912585115\n1 Sigma: 5.742840808132371\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7f4f3ad96f42c4e926fb5b4d3f7d4a954b07e2
| 15,563 |
ipynb
|
Jupyter Notebook
|
4_2_Robot_Localization/7_2. Inexact Move Function, solution.ipynb
|
sids07/CVND_Exercise
|
eb4812af3bc715e512d7d67fbb87a864583b4663
|
[
"MIT"
] | null | null | null |
4_2_Robot_Localization/7_2. Inexact Move Function, solution.ipynb
|
sids07/CVND_Exercise
|
eb4812af3bc715e512d7d67fbb87a864583b4663
|
[
"MIT"
] | null | null | null |
4_2_Robot_Localization/7_2. Inexact Move Function, solution.ipynb
|
sids07/CVND_Exercise
|
eb4812af3bc715e512d7d67fbb87a864583b4663
|
[
"MIT"
] | null | null | null | 76.289216 | 9,600 | 0.800488 |
[
[
[
"# Inexact Move Function\n\nLet's see how we can incorporate **uncertain** motion into our motion update. We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. \n\nNext, you're tasked with modifying the `move` function so that it incorporates uncertainty in motion.\n\n<img src='images/uncertain_motion.png' width=50% height=50% />\n",
"_____no_output_____"
],
[
"First let's include our usual resource imports and display function.",
"_____no_output_____"
]
],
[
[
"# importing resources\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"A helper function for visualizing a distribution.",
"_____no_output_____"
]
],
[
[
"def display_map(grid, bar_width=1):\n if(len(grid) > 0):\n x_labels = range(len(grid))\n plt.bar(x_labels, height=grid, width=bar_width, color='b')\n plt.xlabel('Grid Cell')\n plt.ylabel('Probability')\n plt.ylim(0, 1) # range of 0-1 for probability values \n plt.title('Probability of the robot being at each cell in the grid')\n plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))\n plt.show()\n else:\n print('Grid is empty')\n",
"_____no_output_____"
]
],
[
[
"You are given the initial variables and the complete `sense` function, below.",
"_____no_output_____"
]
],
[
[
"# given initial variables\np=[0, 1, 0, 0, 0]\n# the color of each grid cell in the 1D world\nworld=['green', 'red', 'red', 'green', 'green']\n# Z, the sensor reading ('red' or 'green')\nZ = 'red'\npHit = 0.6\npMiss = 0.2\n\n# You are given the complete sense function\ndef sense(p, Z):\n ''' Takes in a current probability distribution, p, and a sensor reading, Z.\n Returns a *normalized* distribution after the sensor measurement has been made, q.\n This should be accurate whether Z is 'red' or 'green'. '''\n q=[]\n # loop through all grid cells\n for i in range(len(p)):\n # check if the sensor reading is equal to the color of the grid cell\n # if so, hit = 1\n # if not, hit = 0\n hit = (Z == world[i])\n q.append(p[i] * (hit * pHit + (1-hit) * pMiss))\n \n # sum up all the components\n s = sum(q)\n # divide all elements of q by the sum to normalize\n for i in range(len(p)):\n q[i] = q[i] / s\n return q\n\n# Commented out code for measurements\n# for k in range(len(measurements)):\n# p = sense(p, measurements)\n",
"_____no_output_____"
]
],
[
[
"### QUIZ: Modify the move function to accommodate the added probabilities of overshooting or undershooting the intended destination.\n\nThis function should shift a distribution with the motion, U, with some probability of under/overshooting. For the given initial `p`, you should see the result for U = 1 and incorporated uncertainties: `[0.0, 0.1, 0.8, 0.1, 0.0]`.",
"_____no_output_____"
]
],
[
[
"## Modify the move function to accommodate the added probabilities of overshooting or undershooting\npExact = 0.8\npOvershoot = 0.1\npUndershoot = 0.1\n\n# Complete the move function\ndef move(p, U):\n    q=[]\n    # iterate through all values in p\n    for i in range(len(p)):\n        # use the modulo operator to find the new location for a p value\n        # this finds an index that is shifted by the correct amount\n        index = (i-U) % len(p)\n        nextIndex = (index+1) % len(p)\n        prevIndex = (index-1) % len(p)\n        s = pExact * p[index]\n        # a robot that overshoots by one cell started one cell further back (prevIndex),\n        # while one that undershoots started one cell further ahead (nextIndex)\n        s = s + pOvershoot * p[prevIndex]\n        s = s + pUndershoot * p[nextIndex]\n        # append the correct, modified value of p to q\n        q.append(s)\n    return q\n\n## move once with U = 1; a follow-up cell below tries U = 2\np = move(p,1)\nprint(p)\ndisplay_map(p)",
"[0.0, 0.1, 0.8, 0.1, 0.0]\n"
]
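,
[
"Following up on the note above, the same inexact move applied with `U = 2` to a fresh distribution shifts the probability mass two cells, with 0.1 leaking to each neighbour:",
"_____no_output_____"
],
[
"# follow-up: apply the inexact move with U = 2 to a fresh distribution\np2 = move([0, 1, 0, 0, 0], 2)\nprint(p2)  # expected: [0.0, 0.0, 0.1, 0.8, 0.1]\ndisplay_map(p2)",
"_____no_output_____"
]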
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a7f5521a931478e7cab38963fdbf5db0f3cea32
| 295,449 |
ipynb
|
Jupyter Notebook
|
jupyters/Rudolph_tune_i2t_pl.ipynb
|
sberbank-ai/ru-dolph
|
d2944bc2aca27a42dac6b5622141f0116bea2082
|
[
"Apache-2.0"
] | 215 |
2022-01-07T23:05:19.000Z
|
2022-03-19T21:27:07.000Z
|
jupyters/Rudolph_tune_i2t_pl.ipynb
|
sberbank-ai/ru-dolph
|
d2944bc2aca27a42dac6b5622141f0116bea2082
|
[
"Apache-2.0"
] | 4 |
2022-01-09T01:26:49.000Z
|
2022-02-10T01:13:31.000Z
|
jupyters/Rudolph_tune_i2t_pl.ipynb
|
sberbank-ai/ru-dolph
|
d2944bc2aca27a42dac6b5622141f0116bea2082
|
[
"Apache-2.0"
] | 19 |
2022-01-08T15:48:25.000Z
|
2022-03-10T21:08:50.000Z
| 103.231656 | 135,534 | 0.797799 |
[
[
[
"# 🦌 RuDOLPH 350M\n\n<b><font color=\"white\" size=\"+2\">Official colab of [RuDOLPH: One Hyper-Modal Transformer can be creative as DALL-E and smart as CLIP](https://github.com/sberbank-ai/ru-dolph)</font></b>\n\n\n<font color=\"white\" size=\"-0.75.\"><b>RuDOLPH</b> is a fast and light text-image-text transformer (350M GPT-3) for generating text like <b>GPT</b>, generating images (e.g. an image from text, or an image from an image prompt) like <b>DALL-E</b>, generating image captions, and zero-shot image classification and image ranking like <b>CLIP</b>. \n\n<b>RuDOLPH 350M</b> is designed for a quick and easy fine-tuning setup for the solution of various tasks: from generating images from text descriptions and image classification, to visual question answering and more. This colab demonstrates the power of Hyper-Modal Transformers.</font>\n\nHyper-modality means generalized multi-modality, e.g., a model that consists of two multi-modal parts (text-2-image and image-2-text) becomes a text-and-image hyper-modality model.\n\n<font color=\"white\" size=\"-0.75.\"><b>RuDOLPH for fast zero-shot text-to-image generation.</b> In the first phase we generate 288 images from text in 5 minutes! The diffusion decoder is based on [Jack000](https://github.com/Jack000/)'s solution, and Real-ESRGAN is used for high-quality image rendering.</font>",
"_____no_output_____"
],
[
"# Install all",
"_____no_output_____"
]
],
[
[
"!pip install rudolph==0.0.1rc8 > /dev/null\n!pip install bitsandbytes-cuda111 > /dev/null\n!pip install wandb > /dev/null\n!pip install pytorch-lightning > /dev/null",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ndatascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\n"
]
],
[
[
"# Download data",
"_____no_output_____"
]
],
[
[
"!pip install --upgrade gdown",
"Requirement already satisfied: gdown in /usr/local/lib/python3.7/dist-packages (4.4.0)\nRequirement already satisfied: requests[socks] in /usr/local/lib/python3.7/dist-packages (from gdown) (2.23.0)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from gdown) (4.6.3)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from gdown) (4.64.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gdown) (1.15.0)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from gdown) (3.6.0)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown) (2021.10.8)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown) (2.10)\nRequirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.7/dist-packages (from requests[socks]->gdown) (1.7.1)\n"
],
[
"import gdown\n\n# a file\nurl = \"http://drive.google.com/uc?id=17bPt7G3N_vGKCCxppIOPbPlhv1qUnv0o\"\noutput = \"food.zip\"\ngdown.download(url, output, quiet=False)",
"Downloading...\nFrom: http://drive.google.com/uc?id=17bPt7G3N_vGKCCxppIOPbPlhv1qUnv0o\nTo: /content/food.zip\n100%|██████████| 34.8M/34.8M [00:01<00:00, 26.0MB/s]\n"
],
[
"!unzip /content/food.zip",
"_____no_output_____"
]
],
[
[
"# Train this deer 🦌🦌🦌",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport random\nfrom collections import Counter\n\nimport PIL\nimport torch\nimport numpy as np\nimport pandas as pd\nimport bitsandbytes as bnb\nimport torchvision.transforms as T\nimport torchvision.transforms.functional as TF\nfrom tqdm import tqdm\nfrom wordcloud import WordCloud\nfrom matplotlib import pyplot as plt\nfrom torch.utils.data import Dataset, DataLoader\nfrom rudalle import get_tokenizer, get_vae\nfrom rudalle.utils import seed_everything\nimport pytorch_lightning as pl\nfrom rudolph.model.utils import get_attention_mask\nfrom rudolph.model import get_rudolph_model, ruDolphModel, FP16Module\nfrom rudolph.pipelines import generate_codebooks, self_reranking_by_image, self_reranking_by_text, show, generate_captions, generate_texts, zs_clf\nfrom rudolph import utils",
"_____no_output_____"
],
[
"device = 'cuda'\n\nmodel = get_rudolph_model('350M', fp16=True, device=device)\ntokenizer = get_tokenizer()\nvae = get_vae(dwt=False).to(device)",
"_____no_output_____"
],
[
"class Args():\n def __init__(self, model):\n self.device = model.get_param('device')\n self.l_text_seq_length = model.get_param('l_text_seq_length')\n self.r_text_seq_length = model.get_param('r_text_seq_length')\n self.image_tokens_per_dim = model.get_param('image_tokens_per_dim')\n self.image_seq_length = model.get_param('image_seq_length')\n self.epochs = 5\n self.save_path='checkpoints/'\n self.model_name = 'awesomemodel_'\n self.save_every = 500\n self.bs = 2\n self.clip = 1.0\n self.lr = 2e-5\n self.freeze = False\n self.wandb = False\n self.train_steps = 10\n self.lt_loss_weight = 0.01\n self.img_loss_weight = 1\n self.rt_loss_weight = 7\n self.image_size = self.image_tokens_per_dim * 8\n\nargs = Args(model)\nif not os.path.exists(args.save_path):\n os.makedirs(args.save_path)",
"_____no_output_____"
],
[
"class FoodDataset(Dataset):\n def __init__(self, file_path, csv_path, tokenizer, shuffle=True):\n self.tokenizer = tokenizer\n self.samples = []\n self.image_transform = T.Compose([\n T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),\n T.RandomResizedCrop(args.image_size, scale=(1., 1.), ratio=(1., 1.)),\n T.ToTensor()\n ])\n\n df = pd.read_csv(csv_path)\n df.columns = ['index', 'belok', 'fats', 'uglevod', 'kkal', 'name', 'path']\n\n for belok, fats, uglevod, kkal, caption, f_path in zip(\n df['belok'],df['fats'], df['uglevod'], df['kkal'], df['name'], df['path']\n ):\n caption = f'блюдо: {caption}; белков: {belok}; жиров: {fats}; углеводов: {uglevod}; ккал: {kkal};'\n if len(caption)>10 and len(caption)<100 and os.path.isfile(f'{file_path}/{f_path}'):\n self.samples.append([file_path, f_path, caption.lower()])\n if shuffle:\n np.random.shuffle(self.samples)\n print('Shuffled')\n\n def __len__(self):\n return len(self.samples)\n\n def load_image(self, file_path, img_name):\n return PIL.Image.open(f'{file_path}/{img_name}')\n\n def __getitem__(self, item):\n item = item % len(self.samples)\n file_path, img_name, text = self.samples[item]\n\n try:\n image = self.load_image(file_path, img_name)\n image = self.image_transform(image)\n except Exception as err: \n print(err)\n random_item = random.randint(0, len(self.samples) - 1)\n return self.__getitem__(random_item)\n \n text = text.lower().strip()\n encoded = self.tokenizer.encode_text(text, text_seq_length=args.r_text_seq_length) \n return encoded, image",
"_____no_output_____"
]
],
[
[
"# Let's look at what is inside the food dataset 🤔",
"_____no_output_____"
]
],
[
[
"dataset = FoodDataset(file_path='/content/food', csv_path='/content/food/food.csv', tokenizer=tokenizer)\nargs.train_steps = len(dataset)//args.bs",
"Shuffled\n"
],
[
"class FoodDataModule(pl.LightningDataModule):\n\n    def __init__(self, file_path, csv_path, tokenizer):\n        super().__init__()\n        # store the constructor arguments instead of ignoring them\n        self.file_path = file_path\n        self.csv_path = csv_path\n        self.tokenizer = tokenizer\n\n    def setup(self, stage=None):\n        self.train_dataset = FoodDataset(file_path=self.file_path,\n                                         csv_path=self.csv_path,\n                                         tokenizer=self.tokenizer)\n\n    def train_dataloader(self):\n        return DataLoader(\n            self.train_dataset,\n            batch_size=args.bs,\n            shuffle=True,\n        )\n\ndata_module = FoodDataModule(file_path='/content/food', csv_path='/content/food/food.csv', tokenizer=tokenizer)",
"_____no_output_____"
],
[
"idx = random.randint(0, len(dataset)-1)\nencoded, image = dataset[idx]\n\nprint(tokenizer.decode_text(encoded))\n\nplt.imshow(image.permute(1,2,0).cpu().numpy());",
"_____no_output_____"
],
[
"idx = random.randint(0, len(dataset)-1)\nencoded, image = dataset[idx]\n\nprint(tokenizer.decode_text(encoded))\n\nplt.imshow(image.permute(1,2,0).cpu().numpy());",
"_____no_output_____"
],
[
"df = pd.read_csv('/content/food/food.csv')\nwc, c = WordCloud(), Counter()\n\nfor text in df['name']:\n try:\n c.update(wc.process_text(text)) \n except:\n continue",
"_____no_output_____"
],
[
"wc.fit_words(c)\nplt.figure(figsize=(7,7));\nplt.imshow(wc, interpolation='bilinear');\nplt.axis(\"off\");",
"_____no_output_____"
],
[
"import seaborn as sns\ntext_value_counts = pd.DataFrame(df['name'].value_counts())\nax = sns.histplot(data=text_value_counts, x=\"name\");\nax.set_title('Duplicated text count histogram');\nax.set_xlabel('duplicates count');",
"_____no_output_____"
]
],
[
[
"# Train this deer 🦌🎄☃️",
"_____no_output_____"
]
],
[
[
"class Rudolph_(pl.LightningModule):\n\n    def __init__(self, args, vae):\n        super().__init__()\n        self.model = get_rudolph_model('350M', fp16=False, device=self.device)\n        print(self.device)\n\n    def forward(self,\n                input_ids,\n                lt_loss_weight=0.1,\n                img_loss_weight=0.8,\n                rt_loss_weight=0.1,\n                return_loss=True):\n        # the right-text tokens are always visible, hence a mask of ones\n        masks = torch.ones(args.bs, args.r_text_seq_length, dtype=torch.int32)\n        attention_mask = get_attention_mask(masks, args.bs, args.l_text_seq_length, args.image_tokens_per_dim,\n                                            args.r_text_seq_length, self.device)\n        loss, loss_values = self.model.forward(input_ids,\n                                               attention_mask,\n                                               lt_loss_weight=lt_loss_weight,\n                                               img_loss_weight=img_loss_weight,\n                                               rt_loss_weight=rt_loss_weight,\n                                               return_loss=return_loss)\n        return loss\n\n    def training_step(self, batch, batch_idx):\n        text, images = batch[0], batch[1]\n        # note: `vae` is the module-level VAE created above; it is deliberately not an attribute\n        image_input_ids = vae.get_codebook_indices(images).to(self.device)\n        r_text = text.to(self.device)\n        l_text = torch.zeros((args.bs, args.l_text_seq_length), dtype=torch.long).to(self.device)\n\n        input_ids = torch.cat((l_text, image_input_ids, r_text), dim=1)\n\n        loss = self.forward(input_ids,\n                            lt_loss_weight=args.lt_loss_weight,\n                            img_loss_weight=args.img_loss_weight,\n                            rt_loss_weight=args.rt_loss_weight,\n                            return_loss=True)\n\n        self.log(\"train_loss\", loss, prog_bar=True, logger=True)\n\n        return {\"loss\": loss}\n\n    def training_epoch_end(self, outputs):\n        pass\n\n    def _freeze(self,\n                named_params,\n                freeze_emb=False,\n                freeze_ln=False,\n                freeze_attn=True,\n                freeze_ff=True,\n                freeze_other=False):\n        # toggle requires_grad per parameter group and return the parameters that stay trainable\n        trainable = []\n        for name, p in named_params:\n            name = name.lower()\n            if 'ln' in name or 'norm' in name:\n                p.requires_grad = not freeze_ln\n            elif 'embeddings' in name:\n                p.requires_grad = not freeze_emb\n            elif 'mlp' in name:\n                p.requires_grad = not freeze_ff\n            elif 'attn' in name:\n                p.requires_grad = not freeze_attn\n            else:\n                p.requires_grad = not freeze_other\n            if p.requires_grad:\n                trainable.append(p)\n        return trainable\n\n    def configure_optimizers(self):\n        if args.freeze:\n            optimizer = torch.optim.Adam(self._freeze(self.named_parameters()), lr=args.lr)\n        else:\n            optimizer = torch.optim.Adam(self.parameters(), lr=args.lr)\n            # bnb.optim.Adam8bit(self.parameters(), lr=args.lr) is a drop-in low-memory alternative\n\n        scheduler = torch.optim.lr_scheduler.OneCycleLR(\n            optimizer,\n            max_lr=args.lr,\n            final_div_factor=500,\n            steps_per_epoch=args.train_steps,\n            epochs=args.epochs\n        )\n        # step the OneCycle schedule once per optimizer step rather than per epoch\n        return [optimizer], [{\"scheduler\": scheduler, \"interval\": \"step\"}]",
"_____no_output_____"
],
[
"from pytorch_lightning.loggers import WandbLogger\n# wandb is used as the logger here; swap in TensorBoard if you prefer\nwandb_logger = WandbLogger(project=\"rudolf\")",
"_____no_output_____"
],
[
"from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping\nfrom pytorch_lightning.loggers import TensorBoardLogger\ncheckpoint_callback = ModelCheckpoint(\n    dirpath=\"checkpoints\",\n    filename=\"best-checkpoint\",\n    save_top_k=1,\n    verbose=True,\n    monitor=\"train_loss\",\n    mode=\"min\"\n)\n",
"_____no_output_____"
],
[
"model = Rudolph_(args,vae)\n\ndata_module = FoodDataModule(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)\n\ntrainer = pl.Trainer(\n logger=wandb_logger,\n checkpoint_callback=checkpoint_callback,\n max_epochs=2,\n accelerator=\"gpu\",\n progress_bar_refresh_rate=30\n)",
"/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/callback_connector.py:152: LightningDeprecationWarning: Setting `Trainer(checkpoint_callback=<pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint object at 0x7fb3f1dea450>)` is deprecated in v1.5 and will be removed in v1.7. Please consider using `Trainer(enable_checkpointing=<pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint object at 0x7fb3f1dea450>)`.\n f\"Setting `Trainer(checkpoint_callback={checkpoint_callback})` is deprecated in v1.5 and will \"\n/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/callback_connector.py:97: LightningDeprecationWarning: Setting `Trainer(progress_bar_refresh_rate=30)` is deprecated in v1.5 and will be removed in v1.7. Please pass `pytorch_lightning.callbacks.progress.TQDMProgressBar` with `refresh_rate` directly to the Trainer's `callbacks` argument instead. Or, to disable the progress bar pass `enable_progress_bar = False` to the Trainer.\n f\"Setting `Trainer(progress_bar_refresh_rate={progress_bar_refresh_rate})` is deprecated in v1.5 and\"\nGPU available: True, used: True\nTPU available: False, using: 0 TPU cores\nIPU available: False, using: 0 IPUs\nHPU available: False, using: 0 HPUs\n"
],
[
"trainer.fit(model,data_module)",
"Shuffled\n"
],
[
"trainer.save_checkpoint('/rudolf')",
"_____no_output_____"
]
],
[
[
"# 🖼2✍ Let's test the trained model",
"_____no_output_____"
]
],
[
[
"def _fix_pl(path):\n d = torch.load(path)[\"state_dict\"]\n checkpoint = {}\n for key in d.keys():\n checkpoint[key.replace('model.','')] = d[key]\n torch.save(checkpoint,'fixed.pt')",
"_____no_output_____"
],
[
"template = 'блюдо:'\n\nimport requests\nfrom PIL import Image\nimport torch\n\ndevice = 'cuda'\n\nmodel = get_rudolph_model('350M', fp16=True, device=device)\ntokenizer = get_tokenizer()\nvae = get_vae(dwt=False).to(device)\n\n# the exact checkpoint path may differ per run, since PyTorch Lightning generates it\n_fix_pl('/content/rudolf/1033wc66/checkpoints/epoch=1-step=474-v1.ckpt')\n\nmodel.load_state_dict(torch.load('fixed.pt'))\n\nimg_by_url = 'https://kulinarenok.ru/img/steps/31445/1-7.jpg' #@param {type:\"string\"}\n# img_by_url = 'https://img.delo-vcusa.ru/2020/11/Borshh-s-yablokami.jpg'\n\nimg_by_url = Image.open(requests.get(img_by_url, stream=True).raw).resize((128, 128))\n#@markdown number of images\ncaptions_num = 4 #@param{type:'slider'}\ndisplay(img_by_url)\n\ntexts = generate_captions(img_by_url, tokenizer, model, vae, template=template, \n                          top_k=16, captions_num=captions_num, bs=16, top_p=0.6, seed=43, \n                          temperature=0.8, limit_eos=False)\nppl_text, ppl_image = self_reranking_by_image(texts, img_by_url, tokenizer, model, vae, bs=16, seed=42)\nfor idx in ppl_image.argsort()[:8]:\n    print(texts[idx])",
"Russian Diffusion On Language Picture Hyper-modality (RuDOLPH 🦌🎄☃️) 350M is a fast and light text-image-text transformer (350M GPT-3) designed for a quick and easy fine-tuning setup for the solution of various tasks: from generating images by text description and image classification to visual question answering and more. \nThis model demonstrates the power of Hyper-modality Transformers.\ntokenizer --> ready\nWorking with z of shape (1, 256, 32, 32) = 262144 dimensions.\nvae --> ready\n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a7f6a0dc11fe2db1728a0423e6a2b54908396e8
| 4,336 |
ipynb
|
Jupyter Notebook
|
docs/systems/pentalanine.ipynb
|
dprada/OpenMolecularSystems
|
5787fc159f87091ec498cf23abd07c1c2aec6138
|
[
"MIT"
] | 1 |
2021-07-02T14:42:08.000Z
|
2021-07-02T14:42:08.000Z
|
docs/systems/pentalanine.ipynb
|
dprada/OpenMolecularSystems
|
5787fc159f87091ec498cf23abd07c1c2aec6138
|
[
"MIT"
] | 1 |
2021-07-25T02:28:07.000Z
|
2021-07-25T02:28:07.000Z
|
docs/systems/pentalanine.ipynb
|
dprada/OpenMolecularSystems
|
5787fc159f87091ec498cf23abd07c1c2aec6138
|
[
"MIT"
] | 1 |
2021-06-17T18:56:55.000Z
|
2021-06-17T18:56:55.000Z
| 22.010152 | 129 | 0.494696 |
[
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import simtk.unit as unit",
"_____no_output_____"
]
],
[
[
"# Alanine pentapeptide",
"_____no_output_____"
],
[
"## Alanine pentapeptide in vacuum",
"_____no_output_____"
]
],
[
[
"from uibcdf_test_systems.systems import AlaninePentapeptideVacuum",
"_____no_output_____"
],
[
"pentalanine=AlaninePentapeptideVacuum()",
"_____no_output_____"
]
],
[
[
"## Alanine pentapeptide in implicit solvent",
"_____no_output_____"
]
],
[
[
"from uibcdf_test_systems.systems import AlaninePentapeptideImplicitSolvent",
"_____no_output_____"
],
[
"pentalanine=AlaninePentapeptideImplicitSolvent()",
"_____no_output_____"
],
[
"from uibcdf_test_systems.simulation import langevin_NVT",
"_____no_output_____"
],
[
"time, position, velocity, kinetic_energy, potential_energy = langevin_NVT (pentalanine,\n temperature = 300*unit.kelvin,\n friction = 1.0/unit.picoseconds,\n integration_timestep = 2.0*unit.femtoseconds,\n saving_timestep = 10.0*unit.picoseconds,\n total_time = 5.0*unit.nanoseconds)",
"_____no_output_____"
],
[
"langevin_NVT (pentalanine,\n temperature = 300*unit.kelvin,\n friction = 1.0/unit.picoseconds,\n integration_timestep = 2.0*unit.femtoseconds,\n saving_timestep = 10.0*unit.picoseconds,\n total_time = 50.0*unit.nanoseconds,\n output='pentalanine.h5')",
"_____no_output_____"
],
[
"import mdtraj as md",
"_____no_output_____"
],
[
"aa=md.load('pentalanine.h5')",
"_____no_output_____"
]
],
[
[
"## Alanine pentapeptide in explicit solvent",
"_____no_output_____"
]
],
[
[
"from uibcdf_test_systems.systems import AlaninePentapeptideExplicitSolvent",
"_____no_output_____"
],
[
"pentalanine=AlaninePentapeptideExplicitSolvent()",
"_____no_output_____"
]
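,
[
"As an illustrative sketch, the same `langevin_NVT` call used for the implicit-solvent system above can be applied here; the parameter values below simply mirror that earlier cell and are not tuned for the explicit-solvent system:",
"_____no_output_____"
],
[
"# a sketch mirroring the implicit-solvent run above; parameter values are illustrative only\ntime, position, velocity, kinetic_energy, potential_energy = langevin_NVT (pentalanine,\n                                                             temperature = 300*unit.kelvin,\n                                                             friction = 1.0/unit.picoseconds,\n                                                             integration_timestep = 2.0*unit.femtoseconds,\n                                                             saving_timestep = 10.0*unit.picoseconds,\n                                                             total_time = 1.0*unit.nanoseconds)",
"_____no_output_____"
]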
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a7f707196c7ca62d4f14e0b57123b7644c90fd8
| 3,434 |
ipynb
|
Jupyter Notebook
|
00_core.ipynb
|
lukemshepherd/iplots
|
dacc4365f67139456e34260177fdf501c9d317ae
|
[
"Apache-2.0"
] | null | null | null |
00_core.ipynb
|
lukemshepherd/iplots
|
dacc4365f67139456e34260177fdf501c9d317ae
|
[
"Apache-2.0"
] | null | null | null |
00_core.ipynb
|
lukemshepherd/iplots
|
dacc4365f67139456e34260177fdf501c9d317ae
|
[
"Apache-2.0"
] | null | null | null | 25.437037 | 285 | 0.49039 |
[
[
[
"# default_exp core",
"_____no_output_____"
],
[
"%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"# Country\n> Contains basic population and colour values",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
],
[
"class Country:\n    window = 7\n    list_of = []\n\n    def __init__(self, name, population, color):\n        assert isinstance(name, str), 'name must be a string'\n        self.name = name\n        self.pop = population\n        self.color = color\n        self.linewidth = 4\n        self.alpha = 0.7\n\n        # Prevents duplicate countries\n        names = [getattr(nation, 'name') for nation in Country.list_of]\n\n        if self.name not in names:\n            Country.list_of.append(self)\n\n    def __str__(self):\n        return self.name\n\n    def __repr__(self):\n        return f'Country_cls_Object:{self.name}'\n\n    def set_data(self, df):\n        self.data = df\n\n    @classmethod\n    def get_data(cls, df, df_name):\n        setattr(cls, f'data_{df_name}', df)\n\n    @classmethod\n    def set_time(cls, df_name, time1, time2):\n\n        cls.time1 = time1\n        cls.time2 = time2\n\n        df = getattr(cls, f'data_{df_name}')\n        df_date_range = df.loc[time1:time2]\n\n        setattr(cls, f'data_{df_name}_From_Date', df_date_range)\n\n#         self.df_index = df_date_range[df_date_range['location'] == self.name].index",
"_____no_output_____"
],
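[
"A quick hypothetical usage sketch (the country names, populations and colours below are invented for illustration):",
"_____no_output_____"
],
[
"# hypothetical usage sketch: names, populations and colours are invented for illustration\nuk = Country('United Kingdom', 67_000_000, 'tab:blue')\nfrance = Country('France', 65_000_000, 'tab:red')\n\nprint(uk)               # United Kingdom\nprint(Country.list_of)  # e.g. [Country_cls_Object:United Kingdom, Country_cls_Object:France]",
"_____no_output_____"
]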
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a7f716e16991aefa17b625d7e70deca0aecd2d6
| 63,629 |
ipynb
|
Jupyter Notebook
|
my_test/wdl_estimator_with_tfrecord.ipynb
|
MC-Zealot/DeepCTR
|
b61b406232c9b37a06d7e83bb26db35ab2cd60b0
|
[
"Apache-2.0"
] | null | null | null |
my_test/wdl_estimator_with_tfrecord.ipynb
|
MC-Zealot/DeepCTR
|
b61b406232c9b37a06d7e83bb26db35ab2cd60b0
|
[
"Apache-2.0"
] | null | null | null |
my_test/wdl_estimator_with_tfrecord.ipynb
|
MC-Zealot/DeepCTR
|
b61b406232c9b37a06d7e83bb26db35ab2cd60b0
|
[
"Apache-2.0"
] | null | null | null | 223.259649 | 9,300 | 0.741753 |
[
[
[
"### Getting started: 4 steps to DeepCTR Estimator with TFRecord\n\n\n### Step 1: Import model",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom deepctr.estimator.inputs import input_fn_tfrecord\nfrom deepctr.estimator.models import WDLEstimator\n",
"_____no_output_____"
]
],
[
[
"### Step 2: Generate feature columns for the linear part and the DNN part",
"_____no_output_____"
]
],
[
[
"sparse_features = ['C' + str(i) for i in range(1, 27)]\ndense_features = ['I' + str(i) for i in range(1, 14)]\n\ndnn_feature_columns = []\nlinear_feature_columns = []",
"_____no_output_____"
],
[
"for i, feat in enumerate(sparse_features):\n dnn_feature_columns.append(\n tf.feature_column.embedding_column(tf.feature_column.categorical_column_with_identity(feat, 1000), 4)\n )\n linear_feature_columns.append(tf.feature_column.categorical_column_with_identity(feat, 1000))\nfor feat in dense_features:\n dnn_feature_columns.append(tf.feature_column.numeric_column(feat))\n linear_feature_columns.append(tf.feature_column.numeric_column(feat))",
"_____no_output_____"
],
[
"print(dnn_feature_columns)",
"[EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C1', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144cb10>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C2', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c210>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C3', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c550>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C4', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144cc90>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C5', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c390>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C6', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c4d0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C7', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c3d0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C8', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144cd90>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C9', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144ca10>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C10', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c150>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C11', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c750>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C12', number_buckets=1000, default_value=None), dimension=4, combiner='mean', 
initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c5d0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C13', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c910>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C14', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144cb50>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C15', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c190>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C16', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c410>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C17', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c1d0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C18', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c0d0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C19', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c510>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C20', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c850>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C21', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c050>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C22', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144cd10>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C23', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c890>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, 
max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C24', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c090>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C25', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c450>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), EmbeddingColumn(categorical_column=IdentityCategoricalColumn(key='C26', number_buckets=1000, default_value=None), dimension=4, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x64144c310>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True), NumericColumn(key='I1', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I2', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I3', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I4', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I5', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I6', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I7', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I8', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I9', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I10', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I11', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I12', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None), NumericColumn(key='I13', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)]\n"
]
],
[
[
"### Step 3: Generate the training samples with TFRecord format",
"_____no_output_____"
]
],
[
[
"feature_description = {k: tf.io.FixedLenFeature(dtype=tf.int64, shape=1) for k in sparse_features}\nfeature_description.update(\n {k: tf.io.FixedLenFeature(dtype=tf.float32, shape=1) for k in dense_features}\n)\nfeature_description['label'] = tf.io.FixedLenFeature(dtype=tf.float32, shape=1)\n\ntrain_model_input = input_fn_tfrecord('./criteo_sample.tr.tfrecords', feature_description, 'label', batch_size=256,\n num_epochs=1, shuffle_factor=10)\ntest_model_input = input_fn_tfrecord('./criteo_sample.te.tfrecords', feature_description, 'label',\n batch_size=2 ** 14, num_epochs=1, shuffle_factor=0)",
"_____no_output_____"
]
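,
[
"# --- Added sketch (not from the original notebook) ---\n# The cell above only *parses* './criteo_sample.tr.tfrecords'. For completeness,\n# this is one way such a TFRecord file could be written from a preprocessed\n# DataFrame. Assumptions: `df` holds the Criteo sample with the sparse/dense\n# columns already label-encoded and scaled in the earlier steps; the names here\n# are illustrative, not part of the original workflow.\nimport tensorflow as tf\n\ndef serialize_row(row):\n    feature = {k: tf.train.Feature(int64_list=tf.train.Int64List(value=[int(row[k])]))\n               for k in sparse_features}\n    feature.update({k: tf.train.Feature(float_list=tf.train.FloatList(value=[float(row[k])]))\n                    for k in dense_features})\n    feature['label'] = tf.train.Feature(float_list=tf.train.FloatList(value=[float(row['label'])]))\n    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()\n\n# with tf.io.TFRecordWriter('./criteo_sample.tr.tfrecords') as writer:\n#     for _, row in df.iterrows():\n#         writer.write(serialize_row(row))",
"_____no_output_____"
]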
],
[
[
"### Step 4: Train and evaluate the model",
"_____no_output_____"
]
],
[
[
"model = WDLEstimator(linear_feature_columns, dnn_feature_columns, dnn_hidden_units=[4, 4], dnn_dropout=0.5)",
"INFO:tensorflow:Using default config.\nWARNING:tensorflow:Using temporary folder as model directory: /var/folders/n2/6bkx5wwj5zn_dpld14gmpk0c0000gp/T/tmpvcmb4056\nINFO:tensorflow:Using config: {'_model_dir': '/var/folders/n2/6bkx5wwj5zn_dpld14gmpk0c0000gp/T/tmpvcmb4056', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\ngraph_options {\n rewrite_options {\n meta_optimizer_iterations: ONE\n }\n}\n, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x641261dd0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n"
],
[
"model.train(train_model_input)",
"WARNING:tensorflow:From /Users/taoyizhou/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nWARNING:tensorflow:From /Users/taoyizhou/miniconda3/lib/python3.7/site-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.\n"
],
[
"eval_result = model.evaluate(test_model_input)\n\nprint(eval_result)",
"INFO:tensorflow:Could not find trained model in model_dir: /var/folders/n2/6bkx5wwj5zn_dpld14gmpk0c0000gp/T/tmpf377vlry, running initialization to evaluate.\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a7f735f2af3b429ccca9b46fb3b014ade23e049
| 3,835 |
ipynb
|
Jupyter Notebook
|
cnn.ipynb
|
beidongjiedeguang/my-deep-learning
|
3076e46469bf4027163ae05674266b3db258bb0d
|
[
"MIT"
] | 17 |
2019-02-27T06:27:45.000Z
|
2019-08-01T02:54:51.000Z
|
cnn.ipynb
|
beidongjiedeguang/my-deep-learning
|
3076e46469bf4027163ae05674266b3db258bb0d
|
[
"MIT"
] | null | null | null |
cnn.ipynb
|
beidongjiedeguang/my-deep-learning
|
3076e46469bf4027163ae05674266b3db258bb0d
|
[
"MIT"
] | null | null | null | 22.558824 | 88 | 0.525424 |
[
[
[
"# Run on CPU only (hide all GPUs from TensorFlow)\nimport os\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"",
"_____no_output_____"
],
[
"import tensorflow as tf\nfrom tensorflow.keras import Model, layers\nimport numpy as np",
"_____no_output_____"
],
[
"num_classes = 10 # total classes (0-9 digits).\n\n# Training parameters.\nlearning_rate = 0.001\ntraining_steps = 200\nbatch_size = 128\ndisplay_step = 10\n\n# Network parameters.\nconv1_filters = 32 # number of filters for 1st conv layer.\nconv2_filters = 64 # number of filters for 2nd conv layer.\nfc1_units = 1024 # number of neurons for 1st fully-connected layer.",
"_____no_output_____"
],
[
"# Prepare MNIST data.\nfrom tensorflow.keras.datasets import mnist\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\n# Convert to float32.\nx_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)\n\n# Normalize images value from [0, 255] to [0, 1].\nx_train, x_test = x_train / 255., x_test / 255.\n\nx_train.shape, y_train.shape, x_test.shape, y_test.shape",
"_____no_output_____"
],
[
"# Use tf.data API to shuffle and batch data.\nDataset = tf.data.Dataset\ntrain_data = Dataset.from_tensor_slices((x_train, y_train))\ntrain_data",
"_____no_output_____"
],
[
"train_data = train_data.repeat()\ntrain_data\n\ntrain_data = train_data.shuffle(5000).batch(batch_size)\ntrain_data",
"_____no_output_____"
]
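,
[
"# --- Added sketch (assumption, not part of the original notebook) ---\n# A minimal ConvNet wired from the parameters defined above (conv1_filters,\n# conv2_filters, fc1_units, num_classes). Kernel sizes, pooling and the final\n# softmax are illustrative choices, not taken from this notebook.\nclass ConvNet(Model):\n    def __init__(self):\n        super(ConvNet, self).__init__()\n        self.conv1 = layers.Conv2D(conv1_filters, kernel_size=5, activation=tf.nn.relu)\n        self.pool1 = layers.MaxPool2D(2, strides=2)\n        self.conv2 = layers.Conv2D(conv2_filters, kernel_size=3, activation=tf.nn.relu)\n        self.pool2 = layers.MaxPool2D(2, strides=2)\n        self.flatten = layers.Flatten()\n        self.fc1 = layers.Dense(fc1_units, activation=tf.nn.relu)\n        self.out = layers.Dense(num_classes)\n\n    def call(self, x, is_training=False):\n        # MNIST images arrive flat/2-D; give them a channel dimension first\n        x = tf.reshape(x, [-1, 28, 28, 1])\n        x = self.pool1(self.conv1(x))\n        x = self.pool2(self.conv2(x))\n        x = self.fc1(self.flatten(x))\n        x = self.out(x)\n        return x if is_training else tf.nn.softmax(x)",
"_____no_output_____"
]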
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7f73adc20e234045c093282098ce149d4eabb2
| 4,041 |
ipynb
|
Jupyter Notebook
|
Categorical_Data_Vis/.ipynb_checkpoints/GEOSNAP2NAM2-checkpoint.ipynb
|
suhanmappingideas/CyberGIS-Vis
|
6df96811d101629c36a6cfbfe9ebf54c7cb89430
|
[
"Apache-2.0"
] | 3 |
2021-12-02T06:38:21.000Z
|
2022-03-09T19:27:58.000Z
|
Categorical_Data_Vis/.ipynb_checkpoints/GEOSNAP2NAM2-checkpoint.ipynb
|
suhanmappingideas/CyberGIS-Vis
|
6df96811d101629c36a6cfbfe9ebf54c7cb89430
|
[
"Apache-2.0"
] | 6 |
2020-07-19T06:14:25.000Z
|
2021-05-10T23:36:55.000Z
|
Categorical_Data_Vis/.ipynb_checkpoints/GEOSNAP2NAM2-checkpoint.ipynb
|
suhanmappingideas/CyberGIS-Vis
|
6df96811d101629c36a6cfbfe9ebf54c7cb89430
|
[
"Apache-2.0"
] | 3 |
2020-12-11T21:30:11.000Z
|
2021-02-28T18:18:15.000Z
| 32.328 | 245 | 0.562979 |
[
[
[
"import pandas as pd\nfrom GEOSNAP2NAM import Aspatial_Clustering_viz\nfrom GEOSNAP2NAM import Aspatial_Clustering_log",
"_____no_output_____"
],
[
"#sample = \"downloads/LTDB_Std_All_Sample.zip\"\n#full = \"downloads/LTDB_Std_All_fullcount.zip\"\n#store_ltdb(sample=sample, fullcount=full)\n#store_census()",
"_____no_output_____"
],
[
"param = {\n 'title': \"Neighborhood Analysis: Kmeans, San Diego\",\n 'filename_suffix': \"San Diego\", # \"Albertville\"\n 'state_fips': None,\n 'msa_fips': \"41740\", # \"10700\"\n 'county_fips': None,\n 'years': [1980, 1990, 2000, 2010], # Available years: 1970, 1980, 1990, 2000 and 2010\n 'method': \"kmeans\", # affinity_propagation, gaussian_mixture, hdbscan, kmeans, spectral, ward \n 'nClusters': 8, # This option should be commented out for affinity_propagation and hdbscan\n 'variables': [\"p_nonhisp_white_persons\", \n \"p_nonhisp_black_persons\", \n \"p_hispanic_persons\", \n \"p_native_persons\", \n \"p_asian_persons\",\n ],\n 'Sequence': False,\n # optional visualization below.\n 'Index_of_neighborhood_change': True, #choropleth map: Maps representing index of neighborhood Change\n 'Maps_of_neighborhood': True, #choropleth map: Maps representing clustering result\t\t\n 'Distribution_INC1': True, #density chart: INC changes as the map extent changes \n 'Distribution_INC2_different_period': True, #density chart: INC changes by different years\n 'Distribution_INC2_different_cluster': True, #density chart: INC changes by different clusters\n #'Temporal_change_in_neighborhoods': True, #stacked chart: Temporal Change in Neighborhoods over years\t\t\n #'Parallel_Categories_Diagram_in_neighborhoods': True,\n #'Chord_Diagram_in_neighborhoods': True, \n}",
"_____no_output_____"
],
[
"Aspatial_Clustering_viz(param)",
"/home/hshan/anaconda3/envs/geosnap2/lib/python3.7/site-packages/scipy/cluster/hierarchy.py:830: ClusterWarning: scipy.cluster: The symmetric non-negative hollow observation matrix looks suspiciously like an uncondensed distance matrix\n return linkage(y, method='ward', metric='euclidean')\n"
],
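[
"# --- Added sketch (assumption) ---\n# Variant of the run above with a density-based method. As the comment in the\n# `param` dict notes, 'nClusters' must be left out for hdbscan (and\n# affinity_propagation).\nparam_hdbscan = dict(param)\nparam_hdbscan['method'] = 'hdbscan'\nparam_hdbscan.pop('nClusters', None)\nparam_hdbscan['title'] = 'Neighborhood Analysis: HDBSCAN, San Diego'\n# Aspatial_Clustering_viz(param_hdbscan)  # uncomment to run",
"_____no_output_____"
],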
[
"Aspatial_Clustering_log()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a7f7601fecae6717b12176b6f9de1720882ad56
| 381,336 |
ipynb
|
Jupyter Notebook
|
E_Biostatistics_with_R/Playground/Quantitative Omics.ipynb
|
oercompbiomed/CBM101
|
20010dcb99fbf218c4789eb5918dcff8ceb94898
|
[
"MIT"
] | 7 |
2019-07-03T07:41:55.000Z
|
2022-02-06T20:25:37.000Z
|
E_Biostatistics_with_R/Playground/Quantitative Omics.ipynb
|
oercompbiomed/CBM101
|
20010dcb99fbf218c4789eb5918dcff8ceb94898
|
[
"MIT"
] | 9 |
2019-03-14T15:15:09.000Z
|
2019-08-01T14:18:21.000Z
|
E_Biostatistics_with_R/Playground/Quantitative Omics.ipynb
|
oercompbiomed/CBM101
|
20010dcb99fbf218c4789eb5918dcff8ceb94898
|
[
"MIT"
] | 11 |
2019-03-12T10:43:11.000Z
|
2021-10-05T12:15:00.000Z
| 224.051704 | 134,290 | 0.882201 |
[
[
[
"# Quantitative omics \nThe exercises of this notebook correspond to different steps of the data analysis of quantitative omics data. We use data from transcriptomics and proteomics experiments.\n",
"_____no_output_____"
],
[
"## Installation of libraries and necessary software\n\nCopy the files *me_bestprobes.csv* and _AllQuantProteinsInAllSamples.csv_ into the folder that contains this jupyter notebook or upload them to http://localhost:8888/tree\n\n\nInstall the necessary libraries (only needed once) by executing (shift-enter) the following cell:\n",
"_____no_output_____"
]
],
[
[
"install.packages(\"DAAG\", repos='http://cran.us.r-project.org')\ninstall.packages(\"MASS\", repos='http://cran.us.r-project.org')\ninstall.packages(\"matrixStats\", repos='http://cran.us.r-project.org')\nif (!requireNamespace(\"BiocManager\", quietly = TRUE))\n install.packages(\"BiocManager\")\nBiocManager::install(c(\"Biobase\",\"preprocessCore\",\"qvalue\",\"limma\"))",
"_____no_output_____"
]
],
[
[
"## Loading data and libraries\nThis requires that the installation above has finished without errors",
"_____no_output_____"
]
],
[
[
"library(\"MASS\")\nlibrary(\"DAAG\")\nlibrary(\"matrixStats\")\nlibrary(\"Biobase\")\nlibrary(\"preprocessCore\")\nlibrary(\"qvalue\")\nlibrary(\"limma\")\n\nme_Kalinka <- read.csv(\"me_bestprobes.csv\",row.names=1)\nCanceriTRAQ <- read.csv(\"AllQuantProteinsInAllSamples.csv\",row.names=1)",
"Loading required package: lattice\n\n\nAttaching package: ‘DAAG’\n\n\nThe following object is masked from ‘package:MASS’:\n\n hills\n\n\nLoading required package: BiocGenerics\n\nLoading required package: parallel\n\n\nAttaching package: ‘BiocGenerics’\n\n\nThe following objects are masked from ‘package:parallel’:\n\n clusterApply, clusterApplyLB, clusterCall, clusterEvalQ,\n clusterExport, clusterMap, parApply, parCapply, parLapply,\n parLapplyLB, parRapply, parSapply, parSapplyLB\n\n\nThe following objects are masked from ‘package:stats’:\n\n IQR, mad, sd, var, xtabs\n\n\nThe following objects are masked from ‘package:base’:\n\n anyDuplicated, append, as.data.frame, basename, cbind, colnames,\n dirname, do.call, duplicated, eval, evalq, Filter, Find, get, grep,\n grepl, intersect, is.unsorted, lapply, Map, mapply, match, mget,\n order, paste, pmax, pmax.int, pmin, pmin.int, Position, rank,\n rbind, Reduce, rownames, sapply, setdiff, sort, table, tapply,\n union, unique, unsplit, which, which.max, which.min\n\n\nWelcome to Bioconductor\n\n Vignettes contain introductory material; view with\n 'browseVignettes()'. To cite Bioconductor, see\n 'citation(\"Biobase\")', and for packages 'citation(\"pkgname\")'.\n\n\n\nAttaching package: ‘Biobase’\n\n\nThe following objects are masked from ‘package:matrixStats’:\n\n anyMissing, rowMedians\n\n\n\nAttaching package: ‘limma’\n\n\nThe following object is masked from ‘package:BiocGenerics’:\n\n plotMA\n\n\n"
]
],
[
[
"### Exercise 1\n\nWe apply different ways of normalizing a typical microarray data set. \n\nGet the data ```geneData``` from the ```Biobase``` package. Normalize the columns (by division on normal scale or subtraction on log-scale) by a) mean, b) median, c) mean of log-values, and d) median of log-values. Inspect the results thoroughly by comparing the multiple distributions in histograms, density plots, ranked plots and ```qqnorm```. Also compare replicates directly using scatter plots.\n\n\n",
"_____no_output_____"
]
],
[
[
"data(geneData)\ngeneData[geneData<=0] <- NA\nlogDat <- log2(geneData)\n\n",
"_____no_output_____"
]
],
[
[
"##### Question I: <u>Would you plot the data on log-scale or on normal scale?</u>\n\n_Answer_\n\n##### Question II: <u>What does qqnorm tell us?</u>\n\n_Answer_\n\n##### Question III: <u>What is the problem when normalizing by the mean on normal scale?</u>\n\n_Answer_\n\n##### Question IV: <u>What is the difference between normalization b) and d)?</u>\n\n_Answer_\n\n",
"_____no_output_____"
],
[
"### Exercise 2\n\nHere, we will determine differentially regulated genes from the comparison between different sample groups of geneData. \n\na) Take the log-transformed ```geneData``` set and perform t-tests for all genes between sample groups (B, I, K, N, P, T) and (C, G, J, O, R, U, V). You can copy and modifiy the code from the lecture. Do not forget to correct for multiple testing. Plot a histogram of the p-values and generate a volcano plot.\n\nb) In order to see whether the t-tests also provide results for any comparison, take randomly chosen samples of 6 versus 6 groups and redo the statistical tests.\n\nc) Carry out a principal component analysis on the entire data set and look for the groups that you tested for significantly different genes (loading plot) in a).\n",
"_____no_output_____"
]
],
[
[
"data(geneData)\ngeneData[geneData<=0] <- NA\nlogDat <- log2(geneData)\nlogDat <- logDat[complete.cases(logDat),]\npvals <- vector(,nrow(logDat))\nfor(i in 1:nrow(logDat)) {\n pvals[i] <- t.test(logDat[i, c(\"B\", \"I\", \"K\", \"N\", \"P\", \"T\")], logDat[i, c(\"C\", \"G\", \"J\", \"O\", \"R\", \"U\", \"V\")])$p.value\n}\n\npvals2 <- apply(logDat, 1, function(x) t.test(x[c(\"B\", \"I\", \"K\", \"N\", \"P\", \"T\")] , x[c(\"C\", \"G\", \"J\", \"O\", \"R\", \"U\", \"V\")])$p.value)\n\nhist(pvals, 100)\nfdrs <- p.adjust(pvals, method = \"BH\")\n\nplot(rowMeans(logDat[, c(\"B\", \"I\", \"K\", \"N\", \"P\", \"T\")]) - \n rowMeans(logDat[, c(\"C\", \"G\", \"J\", \"O\", \"R\", \"U\", \"V\")]), \n -log10(fdrs)) \nabline(h=1)\nabline(v=c(-2,2))\n\n \nsamples <- sample(LETTERS, 12)\ng1 <- samples[1:6]\ng2 <- samples[7:12]\npvals <- vector(,nrow(logDat))\nfor(i in 1:nrow(logDat)) {\n pvals[i] <- t.test(logDat[i, g1], logDat[i, g2])$p.value\n}\n\npvals2 <- apply(logDat, 1, function(x) t.test(x[g1] , x[g2])$p.value)\n\nhist(pvals, 100)\nfdrs <- p.adjust(pvals, method = \"BH\")\n\nplot(rowMeans(logDat[, g1]) - \n rowMeans(logDat[, g2]), \n -log10(fdrs)) \nabline(h=1)\nabline(v=c(-2,2))\n \n \npca.out <- princomp(logDat)\nplot(pca.out$loadings)\ntext(pca.out$loadings, colnames(logDat), pos=2)\n \n \n \n \n# ...\n \n ",
"_____no_output_____"
]
],
[
[
"##### Question I: <u>How many differentially regulated genes do you find in a) and in b) (p-value below 0.01)?</u>\n\n_Answer_\n\n##### Question II: <u>Why does a volcano plot look like a volcano?</u>\n\n_Answer_\n\n##### Question III: <u>What does the PCA tell you about part a) of this exercise?</u>\n\n_Answer_\n\n",
"_____no_output_____"
],
[
"### Exercise 3\n\nIn bottom-up LC-MS experiments, the output consists of peptides, which can be shared between different proteins. This is why the results usually report protein groups instead of single proteins. Here, you will apply different operations to the reported protein groups. \n\nRead the file _ExampleFile.csv_ and extract the column with the protein accession numbers. \n\na) Pick out one of the values and apply ```strsplit``` to separate the database name (e.g. TREMBL, SWISS-PROT) from the accession id.\n\nb) Take a value with multiple protein accessions and extract only the accession ids.\n\nc) Apply ```strsplit``` to the entire column and try to extract the accession ids.\n\nd) Count the number of proteins per protein group and plot their distribution as a histogram.\n",
"_____no_output_____"
]
],
[
[
"A <- read.csv(\"ExampleFile.csv\")\nprotaccs <- A$Protein.Accessions\nprotaccs[60:65]\n# a)\nexample_str <- strsplit(as.character(protaccs[63]),\":\",fixed = T)\nexample_str[[1]][2]\n# b)\nunlist(strsplit(strsplit(as.character(protaccs[63]),\":\",fixed = T)[[1]][2],\";\",fixed=T))\n# c) Still some SWISS-PROT in the array though\nallprots <- list()\nfor (i in 1:length(protaccs)) {\n    str1 <- strsplit(as.character(protaccs[i]),\":\",fixed = T)\n#     print(str1[[1]])\n    if (length(str1[[1]])>1)\n        allprots[[i]] <- unlist(strsplit(str1[[1]][2],\";\",fixed=T))\n}\n \n# d) This one is on you\nhist(sapply(allprots, length), 50)\ntable(sapply(allprots, length))\n",
"_____no_output_____"
]
],
[
[
"##### Question I: <u>What is the difference between TREMBL and SWISS-PROT annotations?</u>\n\n_Answer_\n\n##### Question II: <u>What is the advantage of measuring multiple peptides of a protein?</u>\n\n_Answer_\n\n##### Question III: <u>How many proteins does the largest protein group contain?</u>\n\n_Answer_\n\n",
"_____no_output_____"
],
[
"### Exercise 4\n\nWe will test different normalization methods on micro-array data from _Drosophila melanogaster_ development (https://www.nature.com/articles/nature09634). \n\na) Make a boxplot and compare the different developmental stages. \n\nMake a scatter plot and change sample numbers to see how they compare quantitatively.\n\nLook at the MA plot and understand what it shows\n\nb) Carry out median normalization and look at the plots of the normalized data\n\nc) Carry out quantile normalization ```normalize.quantiles(microarray)``` and look at the plots again\n\n",
"_____no_output_____"
]
],
[
[
"microarray <- me_Kalinka[,2:ncol(me_Kalinka)]\n\n#boxplot(microarray)\n\nsample1 <- 1\nsample2 <- 7\n\n# MA plot: average intensity (A) versus the difference (M) between two samples\nplot(rowMeans(microarray,na.rm=T),microarray[,sample2]-microarray[,sample1],cex=0.5,pch=15, col=\"#00000033\", \n     xlab=\"A: mean over samples\", ylab=paste(\"M: sample\",sample2,\"- sample\",sample1))\nabline(h=0)\n\n# add different normalizations here\n\n\n# plot again",
"_____no_output_____"
]
],
[
[
"##### Question I: <u>Can you spot the difference between the developmental stages in the boxplot?</u>\n\n_Answer_\n\n##### Question II: <u>What complicates normalization of such a data set with large differences?</u>\n\n_Answer_\n\n\n##### Question III: <u>What causes the sometimes rather drastic changes in the data when using quantile normalization?</u>\n\n_Answer_\n\n\n##### Question IV: <u>Which normalization would you recommend?</u>\n\n_Answer_\n\n",
"_____no_output_____"
],
[
"### Exercise 5\n\nIn this exercise, you will apply statistical tests to proteomics data.\n\nCarry out t-tests between the two cancer subtypes of the ```CanceriTRAQ``` data (from https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0137048). Plot the p-values (corrected for multiple testing) in a volcano plot and compare the results to the ones in the _IsoProt_ paper (https://pubs.acs.org/doi/10.1021/acs.jproteome.8b00968)\n\nCompare the results for the two types of correction for multiple testing \"Benjamini-Hochberg\" and the ```qvalue``` library (\"Storey\" method). You can make a scatter plot of the FDRs (corrected p-values) on log-scale and also compare by making two volcano plots.\n",
"_____no_output_____"
]
],
[
[
"CanceriTRAQRed <- CanceriTRAQ[rowSums(is.na(CanceriTRAQ))<3,]\n# Add your code here:\n",
"_____no_output_____"
]
],
[
[
"##### Question I: <u>What does the first line of code do?</u>\n\n_Answer_\n\n##### Question II: <u>How many p-values <0.05 and 0.1 do you get? How many after correction for multiple testing?</u>\n\n_Answer_\n\n##### Question III: <u>What would be needed to increase the number of significantly changing proteins?</u>\n\n_Answer_\n\n##### Question IV: <u>How many p-values below 0.05 would a randomized data set of the same size give without correction for multiple testing?</u>\n\n_Answer_\n\n##### Question V: <u>Name the difference you observe when comparing the two methods (\"Benjamini-Hochberg\" and \"Storey\")</u>\n\n_Answer_\n\n",
"_____no_output_____"
],
[
"### Exercise 6\n\nThe ```limma``` package provides better estimates of the p-values by adjusting the observed variances of the features to the generally observed trends in the data. We will further use different tools for biological interpretation.\n\nCarry out limma testing on the cancer data and compare the results to the ones from the t-tests.\n\nTake the 50 most regulated proteins and upload them to the following two web services for biological interpretation:\n\n- DAVID: http://david.ncifcrf.gov\n- GOrilla http://cbl-gorilla.cs.technion.ac.il/\n\n",
"_____no_output_____"
]
],
[
[
"## limma\n# Set replicate numbers\nReps <- c(1,1,1,1,2,2,2,2)\nData <- CanceriTRAQ\nNumCond <- max(Reps)\ndesign <- model.matrix(~0+factor(Reps-1))\ncolnames(design) <- paste(\"i\", c(1:NumCond), sep=\"\")\ncontrasts <- NULL\nFirst <- 1\nfor (i in (1:NumCond)[-First]) contrasts <- append(contrasts, paste(colnames(design)[i], \"-\", colnames(design)[First], sep=\"\"))\ncontrast.matrix <- makeContrasts(contrasts=contrasts, levels=design)\nprint(dim(Data))\nlm.fitted <- lmFit(Data, design)\nlm.contr <- contrasts.fit(lm.fitted, contrast.matrix)\nlm.bayes <- eBayes(lm.contr)\n#topTable(lm.bayes)\n# These are the (uncorrected) p-values from the moderated t-test from the limma package:\nplvalues <- lm.bayes$p.value\nhead(sort(p.adjust(plvalues, method=\"BH\")))\n\n",
"_____no_output_____"
]
],
[
[
"##### Question I: <u>How many regulated proteins do you find this time (FDR < 0.05)?</u>\n\n_Answer_\n\n##### Question II: <u>Which are the most enriched Gene ontology terms (GO terms, BP) in both web sites?</u>\n\n_Answer_\n\n##### Question III: <u>Which pathways are likely to distinguish the two cancer subtypes?</u>\n\n_Answer_\n\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a7f8056f0ddd835e3d61836ae55b0999725ff29
| 5,036 |
ipynb
|
Jupyter Notebook
|
bronze/B00_Credits.ipynb
|
QPoland/basics-of-quantum-computing-pl
|
543ada7311c0146f41d2e68da6784bb4e635a8e9
|
[
"Apache-2.0",
"CC-BY-4.0"
] | 1 |
2021-04-08T16:12:21.000Z
|
2021-04-08T16:12:21.000Z
|
bronze/B00_Credits.ipynb
|
QPoland/basics-of-quantum-computing-pl
|
543ada7311c0146f41d2e68da6784bb4e635a8e9
|
[
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null |
bronze/B00_Credits.ipynb
|
QPoland/basics-of-quantum-computing-pl
|
543ada7311c0146f41d2e68da6784bb4e635a8e9
|
[
"Apache-2.0",
"CC-BY-4.0"
] | 3 |
2021-02-05T14:13:48.000Z
|
2021-09-14T09:13:51.000Z
| 44.964286 | 333 | 0.576251 |
[
[
[
"<table><tr>\n <td style=\"background-color:#ffffff;text-align:left;\"><a href=\"http://qworld.lu.lv\" target=\"_blank\"><img src=\"../images/qworld.jpg\" width=\"30%\" align=\"left\"></a></td>\n <td style=\"background-color:#ffffff;\"> </td>\n <td style=\"background-color:#ffffff;vertical-align:text-middle;text-align:right;\">\n <table><tr style=\"background-color:white;\">\n <td> Visit</td>\n <td><a href=\"http://qworld.lu.lv\" target=\"_blank\"><img src=\"../images/web-logo.png\" width=\"35px\"></a></td>\n <td width=\"10pt\"></td>\n <td> Join</td>\n <td><a href=\"https://qworldworkspace.slack.com/\" target=\"_blank\"><img src=\"../images/slack-icon.png\" width=\"80px\"></a></td>\n <td width=\"10pt\"></td>\n <td>Follow</td>\n <td><a href=\"https://www.facebook.com/qworld19/\" target=\"_blank\"><img src=\"../images/facebook-icon.png\" width=\"40px\"></a></td>\n <td><a href=\"https://twitter.com/QWorld19\" target=\"_blank\"><img src=\"../images/twitter-icon.png\" width=\"40px\"></a></td>\n </tr></table>\n </td> \n</tr></table>",
"_____no_output_____"
],
[
"<h2> Credits </h2>",
"_____no_output_____"
],
[
"<font style=\"color: #cd7f32;\"><b>Bronze</b></font> was created by <a href=\"http://abu.lu.lv\" target=\"_blank\"><b>Dr. Abuzer Yakaryilmaz</b></a> (<a href=\"http://qworld.lu.lv/index.php/qlatvia/\" target=\"_blank\">QLatvia</a>) in October 2018, and most of it has been developed by him.\n\n<b>Dr. Maksims Dimitrijevs</b> (<a href=\"http://qworld.lu.lv/index.php/qlatvia/\" target=\"_blank\">QLatvia</a>) and <b>Dr. Özlem Salehi Köken</b> (<a href=\"http://qworld.lu.lv/index.php/qturkey/\" target=\"_blank\">QTurkey</a>) have revised all notebooks, proposed certain changes, and prepared a couple of new notebooks.\n\nThe first recorded lectures were prepared by <b>Dr. Abuzer Yakaryilmaz</b>, <b>Dr. Özlem Salehi Köken</b>, and <b>Anastasija Trizna</b> (<a href=\"http://qworld.lu.lv/index.php/qlatvia/\" target=\"_blank\">QLatvia</a>).\n\nSince <b>July 7, 2019</b>, Bronze has been on a public GitLab repository (https://gitlab.com/qkitchen/basics-of-quantum-computing) and it is expected to receive contributions from the public as well. ",
"_____no_output_____"
],
[
"<hr>\n<h3>Bronze 2020</h3>\n\nBronze has been revised throughout 2020. \n\nWe thank the participants of the [QTraining for Bronze program](https://qworld.lu.lv/index.php/qtraining-for-bronze-2020/) for their corrections and suggestions.",
"_____no_output_____"
],
[
"<hr>\n<h3>Bronze 2019</h3>\n\nWe thank the <b><i><a href=\"https://qworld.lu.lv/index.php/qdrive/\" target=\"_blank\">QDrive</a> mentors and participants</i></b> for their very helpful corrections and suggestions.\n\nWe thank <b><i><a href=\"https://pl.linkedin.com/in/adamglos92\" target=\"_blank\">Adam Glos</a></i></b> (<a href=\"http://qworld.lu.lv/index.php/qpoland/\" target=\"_blank\">QPoland</a>) for his comments on Bronze 2018.",
"_____no_output_____"
],
[
"<hr>\n<h3>Bronze 2018</h3>\n\nWe thank <b><i>Katrina Kizenbaha</i></b> from Riga TechGirls for her revisions of our Python notebooks.\n\nWe thank <b><i>Martins Kalis</i></b> (QLatvia) for his technical comments on Python, Qiskit, and our notebooks.\n\nWe thank <b><i>Maksims Dimitrijevs</i></b> (QLatvia) for his careful reading of and corrections to our notebooks.\n\nWe thank QLatvia members and former members <b><i>Martins Kalis</i></b>, <b><i>Maksims Dimitrijevs</i></b>, <b><i>Aleksejs Naumovs</i></b>, <b><i>Andis Draguns</i></b>, and <b><i>Matiss Apinis</i></b> for their help and support.\n\nWe thank <b><i>the students (<a href=\"https://www.df.lu.lv\">DF@LU</a>) who attended the quantum programming meetings</i></b> every Friday (Fall 2018) for their comments while working with our notebooks.\n<hr>",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a7fad602cd19df006a8c4182262ef28a5c05d6f
| 147,114 |
ipynb
|
Jupyter Notebook
|
2sem/BERT/BERT_for_text_classification.ipynb
|
Temprain-ops/Deep_Learning_School_MIPT
|
c42b6077aa8ab8e4792c5b4f1dcd145619c2d3b4
|
[
"MIT"
] | null | null | null |
2sem/BERT/BERT_for_text_classification.ipynb
|
Temprain-ops/Deep_Learning_School_MIPT
|
c42b6077aa8ab8e4792c5b4f1dcd145619c2d3b4
|
[
"MIT"
] | null | null | null |
2sem/BERT/BERT_for_text_classification.ipynb
|
Temprain-ops/Deep_Learning_School_MIPT
|
c42b6077aa8ab8e4792c5b4f1dcd145619c2d3b4
|
[
"MIT"
] | null | null | null | 147,114 | 147,114 | 0.86329 |
[
[
[
"<img src=\"https://s8.hostingkartinok.com/uploads/images/2018/08/308b49fcfbc619d629fe4604bceb67ac.jpg\" width=500, height=450>\n<h3 style=\"text-align: center;\"><b>Физтех-Школа Прикладной математики и информатики (ФПМИ) МФТИ</b></h3>",
"_____no_output_____"
],
[
"***Some parts of this notebook are almost exact copies of the [ML-MIPT course](https://github.com/girafe-ai/ml-mipt) materials. Special thanks to the ML-MIPT team for making them publicly available. [Original notebook](https://github.com/girafe-ai/ml-mipt/blob/advanced_f20/week1_05_BERT_and_GPT/week05_BERT_for_text_classification.ipynb).***",
"_____no_output_____"
],
[
"## Practice: A Visual Notebook to Using BERT for the First Time\n\n\n*Credits: the first part of this notebook belongs to Jay Alammar and his [great blog post](http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/) (with minor changes). His blog is a great way to dive into DL and NLP concepts.*\n\n<img src=\"https://jalammar.github.io/images/distilBERT/bert-distilbert-sentence-classification.png\" />\n\nIn this notebook, we will use a pre-trained deep learning model to process some text. We will then use the output of that model to classify the text. The text is a list of sentences from film reviews. We will classify each sentence as speaking either \"positively\" or \"negatively\" about its subject.\n\n### Models: Sentence Sentiment Classification\nOur goal is to create a model that takes a sentence (just like the ones in our dataset) and produces either 1 (indicating the sentence carries a positive sentiment) or a 0 (indicating the sentence carries a negative sentiment). We can think of it as looking like this:\n\n<img src=\"https://jalammar.github.io/images/distilBERT/sentiment-classifier-1.png\" />\n\nUnder the hood, the model is actually made up of two models.\n\n* DistilBERT processes the sentence and passes along some information it extracted from it on to the next model. DistilBERT is a smaller version of BERT developed and open sourced by the team at HuggingFace. It’s a lighter and faster version of BERT that roughly matches its performance.\n* The next model, a basic Logistic Regression model from scikit-learn, will take in the result of DistilBERT’s processing, and classify the sentence as either positive or negative (1 or 0, respectively).\n\nThe data we pass between the two models is a vector of size 768. We can think of this vector as an embedding for the sentence that we can use for classification.\n\n\n<img src=\"https://jalammar.github.io/images/distilBERT/distilbert-bert-sentiment-classifier.png\" />\n\n## Dataset\nThe dataset we will use in this example is [SST2](https://nlp.stanford.edu/sentiment/index.html), which contains sentences from movie reviews, each labeled as either positive (has the value 1) or negative (has the value 0):\n\n\n<table class=\"features-table\">\n  <tr>\n    <th class=\"mdc-text-light-green-600\">\n    sentence\n    </th>\n    <th class=\"mdc-text-purple-600\">\n    label\n    </th>\n  </tr>\n  <tr>\n    <td class=\"mdc-bg-light-green-50\" style=\"text-align:left\">\n      a stirring , funny and finally transporting re imagining of beauty and the beast and 1930s horror films\n    </td>\n    <td class=\"mdc-bg-purple-50\">\n      1\n    </td>\n  </tr>\n  <tr>\n    <td class=\"mdc-bg-light-green-50\" style=\"text-align:left\">\n      apparently reassembled from the cutting room floor of any given daytime soap\n    </td>\n    <td class=\"mdc-bg-purple-50\">\n      0\n    </td>\n  </tr>\n  <tr>\n    <td class=\"mdc-bg-light-green-50\" style=\"text-align:left\">\n      they presume their audience won't sit still for a sociology lesson\n    </td>\n    <td class=\"mdc-bg-purple-50\">\n      0\n    </td>\n  </tr>\n  <tr>\n    <td class=\"mdc-bg-light-green-50\" style=\"text-align:left\">\n      this is a visually stunning rumination on love , memory , history and the war between art and commerce\n    </td>\n    <td class=\"mdc-bg-purple-50\">\n      1\n    </td>\n  </tr>\n  <tr>\n    <td class=\"mdc-bg-light-green-50\" style=\"text-align:left\">\n      jonathan parker 's bartleby should have been the be all end all of the modern office anomie films\n    </td>\n    <td class=\"mdc-bg-purple-50\">\n      1\n    </td>\n  </tr>\n</table>\n\n## Installing the 
transformers library\nLet's start by installing the huggingface transformers library so we can load our deep learning NLP model.",
"_____no_output_____"
]
],
[
[
"!pip install transformers",
"Collecting transformers\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/3a/83/e74092e7f24a08d751aa59b37a9fc572b2e4af3918cb66f7766c3affb1b4/transformers-3.5.1-py3-none-any.whl (1.3MB)\n\u001b[K |████████████████████████████████| 1.3MB 13.0MB/s \n\u001b[?25hRequirement already satisfied: protobuf in /usr/local/lib/python3.6/dist-packages (from transformers) (3.12.4)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)\nCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)\n\u001b[K |████████████████████████████████| 890kB 43.6MB/s \n\u001b[?25hRequirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.8)\nCollecting sentencepiece==0.1.91\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)\n\u001b[K |████████████████████████████████| 1.1MB 55.3MB/s \n\u001b[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)\nCollecting tokenizers==0.9.3\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/4c/34/b39eb9994bc3c999270b69c9eea40ecc6f0e97991dba28282b9fd32d44ee/tokenizers-0.9.3-cp36-cp36m-manylinux1_x86_64.whl (2.9MB)\n\u001b[K |████████████████████████████████| 2.9MB 53.4MB/s \n\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) (50.3.2)\nRequirement already satisfied: six>=1.9 in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) (1.15.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.17.0)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.11.8)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)\nBuilding wheels for collected packages: sacremoses\n Building wheel for sacremoses (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893257 sha256=97133869f29b79204ac3f8543a0d11b819024225fb8126d8a08cc656e49d738c\n Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45\nSuccessfully built sacremoses\nInstalling collected packages: sacremoses, sentencepiece, tokenizers, transformers\nSuccessfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.9.3 transformers-3.5.1\n"
]
],
[
[
"[Transformers library doc](https://huggingface.co/transformers/)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import cross_val_score\nimport torch\nimport transformers as ppb\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"## Using BERT for text classification.\n\n### Importing the dataset\nWe'll use pandas to read the dataset and load it into a dataframe.",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\n 'https://github.com/clairett/pytorch-sentiment-classification/raw/master/data/SST2/train.tsv',\n delimiter='\\t',\n header=None\n)",
"_____no_output_____"
]
],
[
[
"For performance reasons, we'll only use 2,000 sentences from the dataset",
"_____no_output_____"
]
],
[
[
"batch_1 = df[:2000]",
"_____no_output_____"
],
[
"batch_1.head()",
"_____no_output_____"
]
],
[
[
"We can ask pandas how many sentences are labeled as \"positive\" (value 1) and how many are labeled \"negative\" (having the value 0)",
"_____no_output_____"
]
],
[
[
"batch_1[1].value_counts()",
"_____no_output_____"
]
],
[
[
"## Loading the Pre-trained BERT model\nLet's now load a pre-trained BERT model. ",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# For DistilBERT:\nmodel_class, tokenizer_class, pretrained_weights = (ppb.DistilBertModel, ppb.DistilBertTokenizer, 'distilbert-base-uncased')\n\n## Want BERT instead of distilBERT? Uncomment the following line:\n#model_class, tokenizer_class, pretrained_weights = (ppb.BertModel, ppb.BertTokenizer, 'bert-base-uncased')\n\n# Load pretrained model/tokenizer\ntokenizer = tokenizer_class.from_pretrained(pretrained_weights)\nmodel = model_class.from_pretrained(pretrained_weights)",
"_____no_output_____"
]
],
[
[
"Right now, the variable `model` holds a pretrained [distilBERT](https://medium.com/huggingface/distilbert-8cf3380435b5) model -- a version of BERT that is smaller, but much faster and requires a lot less memory.\n\n### Step #1: Preparing the Dataset\nBefore we can hand our sentences to BERT, we need to do some minimal processing to put them in the format it requires.\n\n### Tokenization\nOur first step is to tokenize the sentences -- break them up into words and subwords in the format BERT is comfortable with.",
"_____no_output_____"
]
],
[
[
"tokenized = batch_1[0].apply((lambda x: tokenizer.encode(x, add_special_tokens=True)))\nprint(tokenized[0])",
"[101, 1037, 18385, 1010, 6057, 1998, 2633, 18276, 2128, 16603, 1997, 5053, 1998, 1996, 6841, 1998, 5687, 5469, 3152, 102]\n"
],
[
"text = batch_1.loc[1, 0]\nprint(text)\nprint(tokenizer.encode(text))\ntext_encode = tokenizer.encode(text)\nprint(tokenizer.encode(text, add_special_tokens=False))\nprint(tokenizer.decode(text_encode))\nprint(' '.join([tokenizer.ids_to_tokens[i] for i in text_encode]))",
"apparently reassembled from the cutting room floor of any given daytime soap\n[101, 4593, 2128, 27241, 23931, 2013, 1996, 6276, 2282, 2723, 1997, 2151, 2445, 12217, 7815, 102]\n[4593, 2128, 27241, 23931, 2013, 1996, 6276, 2282, 2723, 1997, 2151, 2445, 12217, 7815]\n[CLS] apparently reassembled from the cutting room floor of any given daytime soap [SEP]\n[CLS] apparently re ##asse ##mbled from the cutting room floor of any given daytime soap [SEP]\n"
],
[
"tokenizer.cls_token_id, tokenizer.sep_token_id, tokenizer.pad_token_id",
"_____no_output_____"
],
[
"tokenizer.vocab_size",
"_____no_output_____"
]
],
[
[
"<img src=\"https://jalammar.github.io/images/distilBERT/bert-distilbert-tokenization-2-token-ids.png\" />\n\n\n### Padding\nAfter tokenization, `tokenized` is a list of sentences -- each sentence is represented as a list of tokens. We want BERT to process our examples all at once (as one batch). It's just faster that way. For that reason, we need to pad all lists to the same size, so we can represent the input as one 2-d array, rather than a list of lists (of different lengths).",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nplt.hist(list(map(len, tokenized.values)))\nplt.show()",
"_____no_output_____"
],
[
"max_len = 0\nfor i in tokenized.values:\n if len(i) > max_len:\n max_len = len(i)\n\npadded = np.array([i + [0]*(max_len-len(i)) for i in tokenized.values])",
"_____no_output_____"
]
],
[
[
"Our dataset is now in the `padded` variable, we can view its dimensions below:",
"_____no_output_____"
]
],
[
[
"np.array(padded).shape",
"_____no_output_____"
]
],
[
[
"### Masking\nIf we directly send `padded` to BERT, that would slightly confuse it. We need to create another variable to tell it to ignore (mask) the padding we've added when it's processing its input. That's what attention_mask is:",
"_____no_output_____"
]
],
[
[
"attention_mask = np.where(padded != 0, 1, 0)\nattention_mask.shape",
"_____no_output_____"
]
],
[
[
"### Step #2: And Now, Deep Learning!\nNow that we have our model and inputs ready, let's run our model!\n\n<img src=\"https://jalammar.github.io/images/distilBERT/bert-distilbert-tutorial-sentence-embedding.png\" />\n\nThe `model()` function runs our sentences through BERT. The results of the processing will be returned into `last_hidden_states`.",
"_____no_output_____"
]
],
[
[
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\ndevice",
"_____no_output_____"
],
[
"model.eval()\nmodel = model.to(device)",
"_____no_output_____"
],
[
"padded.shape, attention_mask.shape",
"_____no_output_____"
],
[
"from tqdm.notebook import tqdm\n\ninput_ids = torch.tensor(padded) \nattention_mask = torch.tensor(attention_mask)\n\nbatch_size = 20\noutput = []\n\nfor idx in tqdm(range(0, len(padded), batch_size)):\n \n batch = input_ids[idx:idx + batch_size].to(device)\n print(batch.shape)\n part_attention_mask = attention_mask[idx:idx + batch_size].to(device)\n print(part_attention_mask.shape)\n with torch.no_grad():\n last_hidden_states = model(batch, attention_mask=part_attention_mask)\n output.append(last_hidden_states[0].cpu())",
"_____no_output_____"
]
],
[
[
"Let's slice only the part of the output that we need. That is the output corresponding to the first token of each sentence. The way BERT does sentence classification is that it adds a token called `[CLS]` (for classification) at the beginning of every sentence. The output corresponding to that token can be thought of as an embedding for the entire sentence.\n\n<img src=\"https://jalammar.github.io/images/distilBERT/bert-output-tensor-selection.png\" />\n\nWe'll save those in the `features` variable, as they'll serve as the features to our logistic regression model.",
"_____no_output_____"
],
[
"$Z_{[CLS]} = \sum\limits_{token \: \in \: \text{input}} \text{similarity}(Q_{[CLS]} \cdot K_{token}) \cdot V_{token}$\n----------",
"_____no_output_____"
]
],
[
[
"input_ids.shape",
"_____no_output_____"
],
[
"# concatenate the per-batch outputs back into a single tensor\noutput = torch.cat(output, dim=0)",
"_____no_output_____"
],
[
"output.shape",
"_____no_output_____"
],
[
"features = output[:,0,:].numpy()",
"_____no_output_____"
]
],
[
[
"The labels indicating which sentence is positive and negative now go into the `labels` variable",
"_____no_output_____"
]
],
[
[
"labels = batch_1[1]",
"_____no_output_____"
],
[
"features.shape, labels.shape",
"_____no_output_____"
]
],
[
[
"### Step #3: Train/Test Split\nLet's now split our dataset into a training set and a testing set (even though we're using 2,000 sentences from the SST2 training set).",
"_____no_output_____"
]
],
[
[
"train_features, test_features, train_labels, test_labels = train_test_split(features, labels, random_state=0)",
"_____no_output_____"
]
],
[
[
"<img src=\"https://jalammar.github.io/images/distilBERT/bert-distilbert-train-test-split-sentence-embedding.png\" />\n\n### [Extra] Grid Search for Parameters\nWe can dive into Logistic regression directly with the Scikit Learn default parameters, but sometimes it's worth searching for the best value of the C parameter, which determines regularization strength.",
"_____no_output_____"
]
],
[
[
"parameters = {'C': np.linspace(0.0001, 10, 20)}\ngrid_search = GridSearchCV(LogisticRegression(), parameters)\ngrid_search.fit(train_features, train_labels)\n\nprint('best parameters: ', grid_search.best_params_)\nprint('best score: ', grid_search.best_score_)",
"best parameters:  {'C': 1.052721052631579}\nbest score:  0.8486666666666667\n"
]
],
[
[
"We now train the LogisticRegression model. If you've chosen to do the gridsearch, you can plug the value of C into the model declaration (e.g. `LogisticRegression(C=5.2)`).",
"_____no_output_____"
]
],
[
[
"lr_clf = LogisticRegression(C=1.052721052631579)\nlr_clf.fit(train_features, train_labels)",
"_____no_output_____"
]
],
[
[
"<img src=\"https://jalammar.github.io/images/distilBERT/bert-training-logistic-regression.png\" />\n\n### Step #4: Evaluating Model\nSo how well does our model do in classifying sentences? One way is to check the accuracy against the testing dataset:",
"_____no_output_____"
]
],
[
[
"lr_clf.score(test_features, test_labels)",
"_____no_output_____"
]
],
[
[
"How good is this score? What can we compare it against? Let's first look at a dummy classifier:",
"_____no_output_____"
]
],
[
[
"from sklearn.dummy import DummyClassifier\nclf = DummyClassifier()\n\nscores = cross_val_score(clf, train_features, train_labels)\nprint(\"Dummy classifier score: %0.3f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))",
"Dummy classifier score: 0.516 (+/- 0.06)\n"
]
],
[
[
"So our model clearly does better than a dummy classifier. But how does it compare against the best models?\n\n### Proper SST2 scores\nFor reference, the [highest accuracy score](http://nlpprogress.com/english/sentiment_analysis.html) for this dataset is currently **96.8**. DistilBERT can be trained to improve its score on this task – a process called **fine-tuning** which updates BERT’s weights to make it achieve a better performance in this sentence classification task (which we can call the downstream task). The fine-tuned DistilBERT turns out to achieve an accuracy score of **90.7**. The full size BERT model achieves **94.9**.\n\n\n\nAnd that’s it! That’s a good first contact with BERT. The next step would be to head over to the documentation and try your hand at [fine-tuning](https://huggingface.co/transformers/examples.html#glue). You can also go back and switch from distilBERT to BERT and see how that works.",
"_____no_output_____"
],
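[
"# --- Added sketch (assumption, untrained/unrun) ---\n# A rough outline of the fine-tuning step described above: update DistilBERT's\n# weights end-to-end for sentence classification instead of freezing them.\n# Hyperparameters and the (loss, logits) tuple indexing are illustrative and\n# may need adjustment for your transformers version.\nclf_model = ppb.DistilBertForSequenceClassification.from_pretrained(\n    pretrained_weights, num_labels=2).to(device)\noptimizer = torch.optim.AdamW(clf_model.parameters(), lr=2e-5)\nclf_model.train()\nfor idx in range(0, len(padded), batch_size):\n    batch = input_ids[idx:idx + batch_size].to(device)\n    mask = attention_mask[idx:idx + batch_size].to(device)\n    target = torch.tensor(labels.values[idx:idx + batch_size]).to(device)\n    loss = clf_model(batch, attention_mask=mask, labels=target)[0]\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()",
"_____no_output_____"
],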
[
"So, how does it look? Did we achieve better results? \n\nHere come some further ideas:\n\n* Try using the larger BERT (e.g. BERT-base or BERT-large) and compare the results (be careful, they require more memory).\n\n* Using BERT output for translation? Why not ;)",
"_____no_output_____"
]
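,
[
"# --- Added sketch (assumption) ---\n# One of the ideas above: swap in the full-size BERT. This mirrors the\n# commented-out line in the model-loading cell (needs noticeably more memory).\nmodel_class, tokenizer_class, pretrained_weights = (\n    ppb.BertModel, ppb.BertTokenizer, 'bert-base-uncased')\ntokenizer = tokenizer_class.from_pretrained(pretrained_weights)\nmodel = model_class.from_pretrained(pretrained_weights).to(device).eval()\n# ...then re-run the tokenization, padding and feature-extraction cells above.",
"_____no_output_____"
]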
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a7faf63acb32735c9750fccbeba3ea7d8d71ab6
| 11,603 |
ipynb
|
Jupyter Notebook
|
Sistemas_Cognitivos/Trabajos/Laboratorio+1.ipynb
|
mapa17/UNIR-IA
|
d8c899e21c8cf8ddeb78803cd4c633c6dce2b121
|
[
"MIT"
] | null | null | null |
Sistemas_Cognitivos/Trabajos/Laboratorio+1.ipynb
|
mapa17/UNIR-IA
|
d8c899e21c8cf8ddeb78803cd4c633c6dce2b121
|
[
"MIT"
] | null | null | null |
Sistemas_Cognitivos/Trabajos/Laboratorio+1.ipynb
|
mapa17/UNIR-IA
|
d8c899e21c8cf8ddeb78803cd4c633c6dce2b121
|
[
"MIT"
] | null | null | null | 44.799228 | 579 | 0.637938 |
[
[
[
"# Lab: Convolutional Neural Networks\n\nIn this lab we will work with Convolutional Neural Networks to solve an image classification problem. In particular, we will classify images of characters from the well-known TV series The Simpsons.\n\nSince deep CNNs are a fairly advanced and computationally expensive kind of model, we recommend doing this lab in Google Colaboratory with GPU support. [This link](https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d) explains how to enable a GPU runtime. *Note: the opencv library is used to read the images and resize them to a common size. It is already installed in the Colab environment, but if you work locally you will have to install it.*\n\n<center><img src=\"https://i.imgur.com/i8zIGqX.jpg\" style=\"text-align: center\" height=\"300px\"></center>\n\nThe dataset consists of images of Simpsons characters extracted directly from episodes of the series. It was collected by [Alexandre Attia](http://www.alexattia.fr/) and is more complex than the Fashion MNIST dataset we have used so far. Besides having more classes (we will use the 18 characters with the most images), the characters can appear in different poses, in different positions in the image, or together with other characters on screen (although the character to classify always appears in the predominant position).\n\nThe training set can be downloaded from here:\n\n[Training data](https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219337&authkey=AMzI92bJPx8Sd60) (~500MB)\n\nThe test set can be downloaded from here:\n\n[Test data](https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219341&authkey=ANnjK3Uq1FhuAe8) (~10MB)\n\nBefore starting the lab, we recommend downloading the images and taking a look at them.\n",
"_____no_output_____"
],
[
"## Loading the data",
"_____no_output_____"
]
],
[
[
"import cv2\nimport os\nimport numpy as np \nimport keras\nimport matplotlib.pyplot as plt\nimport glob\n\n\n# First, download the training data\nkeras.utils.get_file(fname=\"simpsons_train.tar.gz\", \n                     origin=\"https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219337&authkey=AMzI92bJPx8Sd60\")\n\n# Unpack the archive\n!tar -xzf /root/.keras/datasets/simpsons_train.tar.gz -C /root/.keras/datasets\n\n# Do the same with the test data\nkeras.utils.get_file(fname=\"simpsons_test.tar.gz\", \n                     origin=\"https://onedrive.live.com/download?cid=C506CF0A4F373B0F&resid=C506CF0A4F373B0F%219341&authkey=ANnjK3Uq1FhuAe8\")\n!tar -xzf /root/.keras/datasets/simpsons_test.tar.gz -C /root/.keras/datasets",
"_____no_output_____"
],
[
"# This variable maps class numbers to characters.\n# We only use the 18 characters in the dataset that have the most images.\nMAP_CHARACTERS = {\n    0: 'abraham_grampa_simpson', 1: 'apu_nahasapeemapetilon', 2: 'bart_simpson',\n    3: 'charles_montgomery_burns', 4: 'chief_wiggum', 5: 'comic_book_guy', 6: 'edna_krabappel', \n    7: 'homer_simpson', 8: 'kent_brockman', 9: 'krusty_the_clown', 10: 'lisa_simpson', \n    11: 'marge_simpson', 12: 'milhouse_van_houten', 13: 'moe_szyslak', \n    14: 'ned_flanders', 15: 'nelson_muntz', 16: 'principal_skinner', 17: 'sideshow_bob'\n}\n\n# We will standardize all images to size 64x64\nIMG_SIZE = 64",
"_____no_output_____"
],
[
"def load_train_set(dirname, map_characters, verbose=True):\n    \"\"\"Loads the training data as images.\n    \n    Since the images come in different sizes, we use the opencv library\n    to resize them all to IMG_SIZE x IMG_SIZE.\n    \n    Args:\n        dirname: full directory to read the data from\n        map_characters: mapping between labels and characters\n        verbose: if True, prints information about the loaded images\n    \n    Returns:\n        X, y: X is an array with all loaded images, each of size\n            IMG_SIZE x IMG_SIZE;\n            y is an array with the label corresponding to each image\n    \"\"\"\n    X_train = []\n    y_train = []\n    for label, character in map_characters.items(): \n        files = os.listdir(os.path.join(dirname, character))\n        images = [file for file in files if file.endswith(\"jpg\")]\n        if verbose:\n            print(\"Reading {} images found for {}\".format(len(images), character))\n        for image_name in images:\n            image = cv2.imread(os.path.join(dirname, character, image_name))\n            X_train.append(cv2.resize(image,(IMG_SIZE, IMG_SIZE)))\n            y_train.append(label)\n    return np.array(X_train), np.array(y_train)",
"_____no_output_____"
],
[
"def load_test_set(dirname, map_characters, verbose=True):\n    \"\"\"Works like load_train_set, but loads the test data.\"\"\"\n    X_test = []\n    y_test = []\n    reverse_dict = {v: k for k, v in map_characters.items()}\n    for filename in glob.glob(dirname + '/*.*'):\n        char_name = \"_\".join(filename.split('/')[-1].split('_')[:-1])\n        if char_name in reverse_dict:\n            image = cv2.imread(filename)\n            image = cv2.resize(image, (IMG_SIZE, IMG_SIZE))\n            X_test.append(image)\n            y_test.append(reverse_dict[char_name])\n    if verbose:\n        print(\"Read {} test images\".format(len(X_test)))\n    return np.array(X_test), np.array(y_test)\n",
"_____no_output_____"
],
[
"# Load the data. If you are not working in Colab, change the paths to the\n# files where you downloaded the data.\nDATASET_TRAIN_PATH_COLAB = \"/root/.keras/datasets/simpsons\"\nDATASET_TEST_PATH_COLAB = \"/root/.keras/datasets/simpsons_testset\"\n\nX, y = load_train_set(DATASET_TRAIN_PATH_COLAB, MAP_CHARACTERS)\nX_t, y_t = load_test_set(DATASET_TEST_PATH_COLAB, MAP_CHARACTERS)",
"_____no_output_____"
],
[
"# Shuffle the data randomly. This is important: if we did not do it and, for\n# example, took the final 20% of the data as the validation set, we would be\n# using only a small number of characters, since the images are read\n# sequentially character by character.\nperm = np.random.permutation(len(X))\nX, y = X[perm], y[perm]",
"_____no_output_____"
]
],
[
[
"## Deliverable\n\nUsing Convolutional Neural Networks with Keras, train a classifier able to recognize Simpsons characters in images with a test-set accuracy of **85%**. Write a report analyzing several of the alternatives you tried and the results obtained.\n\nBelow are some suggested aspects that could be covered in your report (you need not address all of them, far from it; they are just ideas you may explore):\n\n* Analysis of the data to be used.\n* Analysis of the results, computing per-class *precision* and *recall* metrics and discussing which classes get better or worse results.\n* Visual analysis of the network's errors. Which kinds of images, or which characters, give our model the most trouble?\n* Comparison of CNN models against a fully connected model for this problem.\n* Use of different CNN architectures, commenting on aspects such as depth, hyperparameters, optimizer, use of regularization techniques, *batch normalization*, etc.\n* [ *somewhat harder* ] Use of *data augmentation*. This can be done with Keras's [ImageDataGenerator](https://keras.io/preprocessing/image/#imagedatagenerator-class) class.\n\nNotes: \n* Remember to split the data into training/validation to get a good estimate of the values our model will reach on the test data, and to check that we are not overfitting. A possible split is 80 / 20 (an illustrative starting point is sketched in the cell below).\n* It is not necessary to show the training traces of every model trained in the notebook, although keeping plots of those trainings for the analysis is a good idea. However, **the complete training of the best model obtained and the evaluation of the test data with that model must be shown**.\n* The images are **not normalized**. They must be normalized as we have done in previous assignments.\n* The test set of this problem has somewhat \"easier\" images, so you may well see considerably better metrics on the test set than on the training set.",
"_____no_output_____"
]
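,
[
"A minimal sketch of the two steps called out in the notes above (normalization and an 80/20 split), reusing the variable names from the earlier cells:\n\n```python\n# scale pixel values to [0, 1]\nX_norm = X.astype('float32') / 255.0\n\n# 80/20 training/validation split (the data was already shuffled above)\nsplit = int(0.8 * len(X_norm))\nX_train, X_val = X_norm[:split], X_norm[split:]\ny_train, y_val = y[:split], y[split:]\n```",
"_____no_output_____"
]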
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a7fbcde62293d8ae8ad7b8e908faf84e631c295
| 8,785 |
ipynb
|
Jupyter Notebook
|
EXPERIMENTS/develop_data_iterator.ipynb
|
Mithrillion/BiQA
|
f61bea95521f5b2ffd838aa60aecaad568de6564
|
[
"MIT"
] | null | null | null |
EXPERIMENTS/develop_data_iterator.ipynb
|
Mithrillion/BiQA
|
f61bea95521f5b2ffd838aa60aecaad568de6564
|
[
"MIT"
] | null | null | null |
EXPERIMENTS/develop_data_iterator.ipynb
|
Mithrillion/BiQA
|
f61bea95521f5b2ffd838aa60aecaad568de6564
|
[
"MIT"
] | null | null | null | 30.085616 | 112 | 0.435857 |
[
[
[
"import numpy as np\nimport pandas as pd\nimport re\nimport spacy\nimport torch.utils.data as tud\nnlp = spacy.load('es')\nwith open(\"../wordvecs/wiki.es/wiki.es.nospace.vec\") as f:\n nlp.vocab.load_vectors(f)",
"_____no_output_____"
],
[
"class QADataset(tud.Dataset):\n def __init__(self, data_df):\n self.data_df = data_df\n\n def __len__(self):\n return self.data_df.shape[0]\n\n def __getitem__(self, i):\n s = np.zeros((2000, 300))\n s_mask = np.zeros(2000, dtype=np.int32)\n s_var = np.zeros(2000, dtype=np.int32)\n q = np.zeros((50, 300))\n q_mask = np.zeros(50, dtype=np.int32)\n q_var = np.zeros(50, dtype=np.int32)\n q_ph = np.zeros(50, dtype=np.int32)\n\n story = nlp(self.data_df['story'].iloc[i].lower(), parse=False, tag=False, entity=False)\n s_len = len(story)\n s_mask[:s_len] = [not w.has_vector for w in story]\n s_var[np.where([x.text[:7] == '@entity' for x in story])[0]] =\\\n [int(re.search(r'\\d+', x.text).group(0)) + 1 for x in story if x.text[:7] == '@entity']\n s[:s_len, :] = np.stack([x.vector for x in story])\n\n question = nlp(self.data_df['question'].iloc[i].lower(), parse=False, tag=False, entity=False)\n q_len = len(question)\n q_mask[:q_len] = [not w.has_vector for w in question]\n s_var[np.where([x.text[:7] == '@entity' for x in question])[0]] =\\\n [int(re.search(r'\\d+', x.text).group(0)) + 1 for x in question if x.text[:7] == '@entity']\n q_ph[np.where([x.text == '@placeholder' for x in question])[0]] = 1\n q[:q_len, :] = np.stack([x.vector for x in question])\n\n answer = int(re.search(r'\\d+', self.data_df['answer'].iloc[i]).group(0))\n\n return s, q, s_len, q_len, s_mask, q_mask, s_var, q_var, q_ph, answer",
"_____no_output_____"
],
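[
"# A toy illustration of the fixed-size buffers used in __getitem__ above:\n# each story is written into a zero matrix of shape (2000, 300), so any\n# story shorter than 2000 tokens is implicitly zero-padded at the end.\ntoy = np.zeros((6, 3))\ntoy[:2, :] = np.ones((2, 3))  # pretend the story has only 2 tokens\nprint(toy)",
"_____no_output_____"
],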
[
"train = pd.read_pickle(\"../input_data/input.pkl\")\ntrain.head()",
"_____no_output_____"
],
[
"ds = QADataset(train)\nprint(ds.__len__())\ns, q, sl, ql, sm, qm, sv, qv, qph, a = ds.__getitem__(2)\nprint(s.shape)\nprint(q.shape)\nprint(a)\nprint(sl)\nprint(ql)\nprint(sm.shape)\nprint(qm.shape)\nprint(sv.shape)\nprint(qv.shape)\nprint(qph.shape)",
"56361\n(2000, 300)\n(50, 300)\n2\n1998\n11\n(2000,)\n(50,)\n(2000,)\n(50,)\n(50,)\n"
],
[
"qa_loader = tud.DataLoader(ds, batch_size=20)\ns, q, sl, ql, sm, qm, sv, qv, qph, a = next(iter(qa_loader))\nprint(a)\nprint(s.shape)\nprint(q.shape)",
"\n 80\n 212\n 2\n 170\n 126\n 101\n 136\n 91\n 68\n 496\n 255\n 471\n 420\n 51\n 464\n 337\n 483\n 362\n 237\n 460\n[torch.LongTensor of size 20]\n\ntorch.Size([20, 2000, 300])\ntorch.Size([20, 50, 300])\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a7fc2948159e00663499bcf00a8e5e77b0f7a2c
| 13,225 |
ipynb
|
Jupyter Notebook
|
Examples/Experiment_network/.ipynb_checkpoints/PPO Actor Critic Discrete Acrobat-checkpoint.ipynb
|
akashkmr27089/ReinforcementLearning_Udacity_Deep_Reinforcemnt_Learning
|
b7dc13b0116898848d8d0b8a95b7af182982bd6b
|
[
"MIT"
] | null | null | null |
Examples/Experiment_network/.ipynb_checkpoints/PPO Actor Critic Discrete Acrobat-checkpoint.ipynb
|
akashkmr27089/ReinforcementLearning_Udacity_Deep_Reinforcemnt_Learning
|
b7dc13b0116898848d8d0b8a95b7af182982bd6b
|
[
"MIT"
] | null | null | null |
Examples/Experiment_network/.ipynb_checkpoints/PPO Actor Critic Discrete Acrobat-checkpoint.ipynb
|
akashkmr27089/ReinforcementLearning_Udacity_Deep_Reinforcemnt_Learning
|
b7dc13b0116898848d8d0b8a95b7af182982bd6b
|
[
"MIT"
] | null | null | null | 32.735149 | 137 | 0.511531 |
[
[
[
"import torch\nimport gym\nimport time\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n%matplotlib inline \ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")",
"_____no_output_____"
],
[
"env = gym.make('Acrobot-v1')\nenv.seed(0)\nprint('State shape: ', env.observation_space.shape)\nprint('Number of actions: ', env.action_space.n)",
"_____no_output_____"
],
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass Critic(nn.Module): #gives score of how bad or good the action is \n \"\"\"Actor (Policy) Model.\"\"\"\n\n def __init__(self, state_size, action_size, seed= 12):\n \n super(Critic, self).__init__()\n self.seed = torch.manual_seed(seed)\n \"*** YOUR CODE HERE ***\"\n self.fc1 = nn.Linear(state_size, 32)\n self.fc2 = nn.Linear(32, 32)\n self.fc3 = nn.Linear(32, action_size)\n\n# def forward(self, state):\n# \"\"\"Build a network that maps state -> action values.\"\"\"\n# x = self.fc1(state)\n# x = torch.tanh(x)\n# x = self.fc2(x)\n# x = torch.tanh(x)\n# x = self.fc3(x)\n# x = torch.tanh(x) #using tanh for giving score of how good is action \n# return x\n\n def forward(self, state):\n \"\"\"Build a network that maps state -> action values.\"\"\"\n x = self.fc1(state)\n x = F.relu(x)\n x = self.fc2(x)\n x = F.relu(x)\n x = self.fc3(x)\n x = F.relu(x) #using tanh for giving score of how good is action \n return x\n\n \nclass Actor(nn.Module): #Policy Network\n \"\"\"Actor (Policy) Model.\"\"\"\n\n def __init__(self, state_size, action_size, seed= 12):\n \n super(Actor, self).__init__()\n self.seed = torch.manual_seed(seed)\n \"*** YOUR CODE HERE ***\"\n self.fc1 = nn.Linear(state_size, 32)\n self.fc2 = nn.Linear(32, 32)\n self.fc3 = nn.Linear(32,action_size)\n self.final = nn.Sigmoid()\n\n def forward(self, state):\n \"\"\"Build a network that maps state -> action values.\"\"\"\n x = self.fc1(state)\n x = F.relu(x)\n x = self.fc2(x)\n x = F.relu(x)\n x = self.fc3(x)\n x = self.final(x) #using sigmoid in an action \n return x \n \ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\nactor = Actor(6,3,12).to(device)\ncritic = Critic(6,3,12).to(device)\n\nimport torch.optim as optim\noptimizer = optim.Adam(actor.parameters(), lr=1e-4)\noptimizer_critic = optim.Adam(critic.parameters(), lr=1e-4)\nprint(actor)\nprint(critic)",
"_____no_output_____"
],
[
"# Testing the network\nfor _ in range(5):\n state = env.reset()\n for i in range(100):\n env.render()\n state_tensor = torch.from_numpy(state).float().to(device)\n prob = actor.forward(state_tensor)\n action = prob.argmax()\n prob = max(prob)\n action_baseline = critic.forward(state_tensor)\n next_state, reward, done, _ = env.step(action)\n state = next_state\n print('\\rReward {} with action {} with score {}'.format(reward, action, action_baseline), end = ' ')\n if done:\n break",
"_____no_output_____"
]
],
[
[
"### Actual Making of Network using ppo Policy Network",
"_____no_output_____"
]
],
[
[
"def clipped_surrogate(policy, old_probs, states, actions, rewards, next_states,\n discount=0.995,\n epsilon=0.1, beta=0.01,\n gamma = 0.1):\n\n states = torch.from_numpy(np.array(states)).float().to(device)\n next_states = torch.from_numpy(np.array(next_states)).float().to(device)\n \n discount = discount**np.arange(len(rewards))\n rewards_te = np.multiply(rewards, discount).reshape(len(rewards),1)\n rewards_future = rewards_te[::-1].cumsum(axis=0)[::-1]\n actions = np.array(actions, dtype=np.int8)\n actions_final = torch.LongTensor(actions.reshape(len(actions),1))\n \n# # adding contribution of actor\n# f1 = critic.forward(next_states).argmax(1).reshape(len(next_states),1)\n# f2 = torch.LongTensor(f1.cpu().reshape(f1.size()[0],1))\n# f3 = torch.gather(f1,1,f2.to(device))\n \n# # f1 = critic.forward(states).argmax(1).reshape(len(next_states),1)\n# # f2 = torch.LongTensor(f1.cpu().reshape(f1.size()[0],1))\n# # f4 = torch.gather(f1,1,f2.to(device))\n# f1 = critic.forward(states)\n# f4 = torch.gather(f1,1,actions_final.to(device))\n \n# rewards_future = rewards_future + gamma*f3.detach().cpu().numpy() - f4.detach().cpu().numpy()\n# ##end\n mean = np.mean(rewards_future, axis = 0)\n std = np.std(rewards_future, axis = 0)\n rewards_normalized = (rewards_future - mean)/std\n \n old_probs = torch.tensor(old_probs, dtype=torch.float, device=device).reshape(len(old_probs),1)\n rewards = torch.tensor(rewards_normalized, dtype=torch.float, device=device)\n \n g = actor.forward(states)\n new_probs = torch.gather(g,1,actions_final.to(device))\n \n ratio = new_probs/old_probs\n# # clipped function\n clip = torch.clamp(ratio, 1-epsilon, 1+epsilon)\n clipped_surrogate = torch.min(ratio*rewards, clip*rewards)\n\n \n # include a regularization term\n # this steers new_policy towards 0.5\n # add in 1.e-10 to avoid log(0) which gives nan\n entropy = -(new_probs*torch.log(old_probs+1.e-10)+ \\\n (1.0-new_probs)*torch.log(1.0-old_probs+1.e-10))\n \n return torch.mean(clipped_surrogate + beta*entropy)",
"_____no_output_____"
],
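[
"# A small numeric illustration of the clipping inside clipped_surrogate:\n# probability ratios outside [1 - epsilon, 1 + epsilon] are clamped before\n# taking the elementwise minimum with the unclipped objective.\nratio = torch.tensor([0.5, 0.95, 1.05, 1.4])\nprint(torch.clamp(ratio, 1 - 0.1, 1 + 0.1))  # -> tensor([0.9000, 0.9500, 1.0500, 1.1000])",
"_____no_output_____"
],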
[
"def update_baseline(next_state, reward, state):\n next_state = torch.from_numpy(np.array(next_state)).to(device).float()\n reward = torch.from_numpy(np.array(reward)).to(device)\n state = torch.from_numpy(np.array(state)).to(device).float()\n Loss = F.mse_loss(critic.forward(state), reward + critic.forward(next_state))\n optimizer_critic.zero_grad()\n Loss.backward()\n optimizer_critic.step()",
"_____no_output_____"
],
[
"def collect_trajectories(envs, policy, tmax=200):\n state = env.reset()\n states = []\n actions = []\n rewards = []\n probs = []\n next_states = []\n \n for _ in range(tmax):\n prob = actor(torch.from_numpy(state).float().to(device)) #for converting state to torch variable \n prob_new = max(prob)\n probs.append(prob_new)\n states.append(state)\n action = prob.argmax()\n next_state, reward, done , _ = env.step(action)\n# update_baseline(next_state, reward,state)\n next_states.append(next_state)\n rewards.append(reward)\n actions.append(action)\n state = next_state\n if done:\n break\n \n return probs, states, actions, rewards, next_states",
"_____no_output_____"
],
[
"probs, states, actions, rewards, next_states = collect_trajectories(env, actor, tmax=200)",
"_____no_output_____"
],
[
"discount_rate = .99\nepsilon = 0.1\nbeta = .01\ntmax = 200\nSGD_epoch = 4\nepisode = 1000",
"_____no_output_____"
],
[
"import progressbar as pb\n\nwidget = ['training loop: ', pb.Percentage(), ' ', \n pb.Bar(), ' ', pb.ETA() ]\ntimer = pb.ProgressBar(widgets=widget, maxval=episode).start()\n#following generate sim_nos instance of simulation \nenvs = gym.make('Acrobot-v1')\nmean_rewards = []\nfor e in range(episode):\n\n # collect trajectories\n old_probs, states, actions, rewards, next_states = \\\n collect_trajectories(envs, actor, tmax=tmax) \n total_rewards = np.sum(rewards, axis=0)\n \n # this is the SOLUTION!\n # use your own surrogate function\n # L = -surrogate(policy, old_probs, states, actions, rewards, beta=beta)\n for _ in range(SGD_epoch):\n L = -1*clipped_surrogate(actor, old_probs, states, actions, rewards, next_states, epsilon=epsilon, beta=beta)\n optimizer.zero_grad()\n L.backward()\n optimizer.step()\n del L\n\n epsilon*=0.999\n # the regulation term also reduces\n # this reduces exploration in later runs\n beta*=.995\n \n # get the average reward of the parallel environments\n mean_rewards.append(np.mean(total_rewards))\n \n # display some progress every 20 iterations\n if (e+1)%20 ==0 :\n print(\"Episode: {0:d}, score: {1:f}\".format(e+1,np.mean(total_rewards)))\n print(total_rewards)\n \n # update progress widget bar\n timer.update(e+1)\n \n if(np.mean(total_rewards) == 200):\n break\n \ntimer.finish()\nplt.plot(mean_rewards)\n ",
"_____no_output_____"
]
],
[
[
"### Testing ",
"_____no_output_____"
]
],
[
[
"actor.forward(state_tensor)",
"_____no_output_____"
],
[
"# Testing the network\nfor _ in range(5):\n state = env.reset()\n for i in range(100):\n env.render()\n state_tensor = torch.from_numpy(state).float().to(device)\n prob = actor.forward(state_tensor)\n action_baseline = critic.forward(state_tensor)\n action = prob.argmax()\n next_state, reward, done, _ = env.step(action)\n state = next_state\n print('\\rReward {} with action {} with critic baseline {} {}'.format(reward, action, action_baseline, prob), end = ' ')\n if done:\n break",
"_____no_output_____"
],
[
"env.close()",
"_____no_output_____"
],
[
"torch.save(a)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a7fc53fb1022521d7bdd1ea1ea8cb55844aeeb9
| 4,823 |
ipynb
|
Jupyter Notebook
|
andrew_ng/machine_learning/cnn/concepts/transfer_learning.ipynb
|
anaconda121/Machine-Learning-Projects
|
4b79b5f794f88626cd311de10ebed22170b98db5
|
[
"MIT"
] | 1 |
2021-03-11T03:28:24.000Z
|
2021-03-11T03:28:24.000Z
|
andrew_ng/machine_learning/cnn/concepts/transfer_learning.ipynb
|
anaconda121/Machine-Learning-Projects
|
4b79b5f794f88626cd311de10ebed22170b98db5
|
[
"MIT"
] | null | null | null |
andrew_ng/machine_learning/cnn/concepts/transfer_learning.ipynb
|
anaconda121/Machine-Learning-Projects
|
4b79b5f794f88626cd311de10ebed22170b98db5
|
[
"MIT"
] | null | null | null | 69.898551 | 1,520 | 0.673647 |
[
[
[
"import os\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import Model",
"_____no_output_____"
],
[
"model = keras.models.load_model(\"data/model.h5\")",
"_____no_output_____"
]
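,
[
"# A minimal transfer-learning sketch, assuming the loaded model is used as a\n# frozen base. The new 10-class head below is illustrative only, not taken\n# from the saved model.\nfor layer in model.layers:\n    layer.trainable = False  # freeze the pre-trained weights\n\nnew_output = layers.Dense(10, activation='softmax')(model.layers[-2].output)\ntransfer_model = Model(inputs=model.input, outputs=new_output)\ntransfer_model.compile(optimizer='adam', loss='categorical_crossentropy',\n                       metrics=['accuracy'])",
"_____no_output_____"
]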
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
4a7fcf39f0018103e0ac2911d1778058848eac95
| 32,930 |
ipynb
|
Jupyter Notebook
|
w2v_neighbors.ipynb
|
lopuhin/WSI-LDA
|
b1b5d788dd515c36de57201aa6a8cca8c80bb151
|
[
"MIT"
] | 2 |
2017-11-12T16:56:16.000Z
|
2018-01-13T19:50:24.000Z
|
w2v_neighbors.ipynb
|
lopuhin/WSI
|
b1b5d788dd515c36de57201aa6a8cca8c80bb151
|
[
"MIT"
] | null | null | null |
w2v_neighbors.ipynb
|
lopuhin/WSI
|
b1b5d788dd515c36de57201aa6a8cca8c80bb151
|
[
"MIT"
] | null | null | null | 148.333333 | 27,938 | 0.888643 |
[
[
[
"from gensim.models import Word2Vec\nw2v = Word2Vec.load(\"model.pkl\")",
"_____no_output_____"
],
[
"import numpy as np\n\nword = 'горшок'\n\nsimilar = w2v.most_similar(positive=[word], topn=100)\nfor w, wt in similar[:10]:\n print(w, wt)\n \nwords = np.array([w for w, _ in similar])\nword_vectors = np.array([w2v[w] for w in words])",
"горшочек 0.7105925679206848\nкастрюля 0.657353401184082\nглиняный 0.6550846099853516\nваза 0.6483527421951294\nкувшин 0.6478919982910156\nмиска 0.6423501372337341\nкадка 0.6408426761627197\nкорчага 0.6137997508049011\nчерепок 0.6068799495697021\nтаз 0.6067258715629578\n"
],
[
"import kmeans \nn_senses = 6\nkm = kmeans.KMeans(word_vectors, k=n_senses, metric='cosine')",
"kmeans: X (60, 300) centres (6, 300) delta=0.001 maxiter=10 metric=cosine\nkmeans: 6 iterations cluster sizes: [ 7 21 7 9 7 9]\nkmeans: X (100, 300) centres (6, 300) delta=0.001 maxiter=10 metric=cosine\nkmeans: 5 iterations cluster sizes: [ 9 33 7 21 10 20]\n"
],
[
"for sense in range(n_senses):\n #sense_words = [w for w, _sense in zip(words, clf_res) if _sense == sense]\n sense_words = list(words[km.Xtocentre == sense])\n sense_words.sort(key=lambda w: w2v.vocab[w].count, reverse=True)\n print(sense, ' '.join(sense_words[:5]))",
"0 печь котел сковорода сковородка жаровня\n1 корзина кувшин ваза глиняный тыква\n2 цветочный клумба герань вазон кашпо\n3 ложка тарелка миска поднос фарфоровый\n4 посуда черепок\n5 ведро раковина чайник кастрюля таз\n"
],
[
"from sklearn.cluster import KMeans\n\nn_senses = 6\nclf = KMeans(n_clusters=n_senses, random_state=42)\nclf_res = clf.fit_predict(word_vectors)\n\nfrom collections import Counter\nCounter(clf_res)",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA\n\npca = PCA(n_components=2)\npca_res = pca.fit_transform(word_vectors)\n\nprint(pca_res[:2])",
"[[-0.08377539 -0.88522649]\n [-1.32236671 0.41424409]]\n"
],
[
"%matplotlib inline\n\nimport pandas as pd\n\ndf = pd.DataFrame.from_dict({\n \"x\": pca_res[:,0],\n \"y\": pca_res[:,1],\n \"clf\": clf_res,\n})\ndf.plot.scatter(x=\"x\", y=\"y\", c=\"clf\")",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7fd0350aa76a06a47c6ad0390ce4d0a666cf6a
| 224,996 |
ipynb
|
Jupyter Notebook
|
PLS_Final .ipynb
|
Juncheng-Dong/Partial-Least-Square
|
1e30217e56ce5baab07cf341ec46c641102b42c0
|
[
"MIT"
] | null | null | null |
PLS_Final .ipynb
|
Juncheng-Dong/Partial-Least-Square
|
1e30217e56ce5baab07cf341ec46c641102b42c0
|
[
"MIT"
] | null | null | null |
PLS_Final .ipynb
|
Juncheng-Dong/Partial-Least-Square
|
1e30217e56ce5baab07cf341ec46c641102b42c0
|
[
"MIT"
] | null | null | null | 124.859046 | 44,328 | 0.840944 |
[
[
[
"#STA 663 Final Project\n\n#Juncheng Dong, Xiaoqiao Xing\n\n#May 2020\n\nimport numpy as np \nimport pandas as pd\nimport math\nimport matplotlib.pyplot as plt\nfrom numpy import linalg as la\nimport random \nfrom sklearn.cross_decomposition import PLSRegression\nfrom sklearn.linear_model import Ridge\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn import model_selection\nfrom sklearn.model_selection import KFold\nfrom sklearn.decomposition import PCA\nfrom sklearn.metrics import mean_squared_error",
"_____no_output_____"
]
],
[
[
"# Partial Least Square Function ",
"_____no_output_____"
]
],
[
[
"def Normalize(X):\n '''func to centerize and normalize the dataset,dataset should be numpy array'''\n \n return (X - np.mean(X, axis = 0))/(np.std(X, axis = 0))",
"_____no_output_____"
],
[
"def norm(x):\n sum=0\n for i in x:\n sum = sum + i**2\n \n return np.sqrt(sum)",
"_____no_output_____"
],
[
"def PLS(X,Y,ncomponents,tol=1e-6):\n E,F=X,Y\n T = []\n W = []\n Q = []\n U = []\n P = []\n B = []\n rY, cY = Y.shape\n rX, cX = X.shape\n for i in range(ncomponents):\n index=np.random.choice(range(Y.shape[1]))\n #u=Y[:,index]\n u=np.random.rand(rY)\n counter = 0\n while(True):\n w = E.T@u\n w = w/norm(w)\n t = E@w\n t = t/norm(t)\n q = F.T@t\n q = q/norm(q)\n u = F@q\n \n if counter==0:\n tlast=t\n elif norm(tlast-t)<tol:\n break\n else:\n tlast=t\n \n counter=counter+1\n \n b = t.T@u\n p = E.T@t\n \n B.append(b)\n T.append(t)\n P.append(p)\n W.append(w)\n Q.append(q)\n U.append(u)\n E = E-t.reshape(-1,1)@p.reshape(1,-1)\n F = F-b*t.reshape(-1,1)@q.reshape(1,-1)\n \n \n return (np.array(T),np.array(P),np.array(W),np.array(Q),np.array(U),np.diag(B))",
"_____no_output_____"
]
],
[
[
"# Test Function on Wine Data ",
"_____no_output_____"
]
],
[
[
"#Example1 Data : Wine\nX1 = np.array([[7, 7, 13, 7], \n [4, 3, 14, 7],\n [10, 5, 12, 5],\n [16, 7, 11, 3],\n [13, 3, 10, 3]])\n\nY1 = np.array([[14, 7, 8], \n [10, 7, 6],\n [8, 5, 5],\n [2, 4, 7],\n [6, 2, 4]])",
"_____no_output_____"
],
[
"X1=Normalize(X1)\nY1=Normalize(Y1)\n[T, P, W, Q, U, B] = PLS(X1,Y1,2)\nP = P.T\nQ = Q.T",
"_____no_output_____"
],
[
"P",
"_____no_output_____"
],
[
"BPLS = la.pinv(P.T)@[email protected]\nBPLS",
"_____no_output_____"
]
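,
[
"# A quick follow-up sketch: with the coefficient matrix BPLS, fitted\n# responses are simply X1 @ BPLS; the residuals against the normalized Y1\n# show how much of Y1 the two extracted components explain.\nY1_hat = X1 @ BPLS\nprint(np.round(Y1 - Y1_hat, 3))",
"_____no_output_____"
]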
],
[
[
"# Compare OLS and PLS when there is only one solution",
"_____no_output_____"
]
],
[
[
"# OLS vs PLS\nX_sim = np.random.randn(5, 5)\nX_sim",
"_____no_output_____"
],
[
"Y_sim = np.random.randn(5,1)\nY_sim",
"_____no_output_____"
],
[
"X_sim = Normalize(X_sim)\nY_sim = Normalize(Y_sim)",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\n\nOLS = LinearRegression()\nB_O = OLS.fit(X_sim, Y_sim).coef_.T\nB_O",
"_____no_output_____"
],
[
"[T, P, W, Q, U, B] = PLS(X_sim,Y_sim,5)\n\nP = P.T\nQ = Q.T\nB_P = la.pinv(P.T)@[email protected]\nB_P",
"_____no_output_____"
],
[
"np.allclose(B_O, B_P)",
"_____no_output_____"
],
[
"pls = PLSRegression(n_components=5)\n\npls.fit(X_sim, Y_sim).coef_",
"/Applications/anaconda3/lib/python3.7/site-packages/sklearn/cross_decomposition/pls_.py:292: UserWarning: Y residual constant at iteration 4\n warnings.warn('Y residual constant at iteration %s' % k)\n"
]
],
[
[
"# PLS Application & Comparison",
"_____no_output_____"
]
],
[
[
"#Import cars data\ndf = pd.read_excel(\"/Users/rachelxing/Desktop/STA663/cars_pls_regression.xls\")\ndf.head()",
"_____no_output_____"
],
[
"X = df.iloc[:,:-3].to_numpy()\nY = df.iloc[:, -3:].to_numpy()\nX.shape, Y.shape",
"_____no_output_____"
],
[
"#normalize X and Y\nX = Normalize(X)\nY = Normalize(Y)",
"_____no_output_____"
],
[
"#PLS + leave one out (20 fold)\nkf = KFold(n_splits=20, random_state=None, shuffle=False)\nkf.get_n_splits(X)\n",
"_____no_output_____"
],
[
"y_predict_pls = []\ny_test_pls = []\n\nfor train_index, test_index in kf.split(X):\n \n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n \n [T, P, W, Q, U, B] = PLS(X_train,y_train,7)\n \n P = P.T\n Q = Q.T\n BPLS = la.pinv(P.T)@[email protected]\n \n y_test_pls.append(y_test)\n y_predict_pls.append(X_test@BPLS)",
"_____no_output_____"
],
[
"y_predict_pls = np.array(y_predict_pls).reshape(20,3)\ny_test_pls = np.array(y_test_pls).reshape(20,3)",
"_____no_output_____"
],
[
"mean_squared_error(y_test_pls, y_predict_pls)",
"_____no_output_____"
],
[
"#OLS on cars data + leave one out \ny_predict_ols = []\ny_test_ols = []\n\n\nfor train_index, test_index in kf.split(X):\n \n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n \n reg1 = LinearRegression().fit(X_train, y_train[:,0])\n reg2 = LinearRegression().fit(X_train, y_train[:,1])\n reg3 = LinearRegression().fit(X_train, y_train[:,2])\n \n p1 = reg1.predict(X_test)\n p2 = reg2.predict(X_test)\n p3 = reg3.predict(X_test)\n \n y_test_ols.append(y_test)\n y_predict_ols.append([p1 ,p2, p3])\n\n\n",
"_____no_output_____"
],
[
"y_predict_ols = np.array(y_predict_ols).reshape(20,3)\ny_test_ols = np.array(y_test_ols).reshape(20,3)",
"_____no_output_____"
],
[
"mean_squared_error(y_test_ols, y_predict_ols)",
"_____no_output_____"
],
[
"#Ridge Regression\n\n#Select best parameter alpha\nridge = Ridge()\nparameters = {'alpha' : [1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]}\nridge_reg = GridSearchCV(ridge, parameters, scoring = 'neg_mean_squared_error', cv = 20)\nridge_reg.fit(X, Y)\nprint(ridge_reg.best_params_)\nprint(ridge_reg.best_score_)\n",
"{'alpha': 5}\n-0.4260849040371636\n"
],
[
"#Ridge Regression\n\ny_predict_ridge = []\ny_test_ridge = []\n\nfor train_index, test_index in kf.split(X):\n \n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n \n reg = Ridge(alpha=5)\n reg.fit(X_train, y_train)\n \n \n \n \n y_test_ridge.append(y_test)\n y_predict_ridge.append(reg.predict(X_test))\n\n",
"_____no_output_____"
],
[
"y_predict_ridge = np.array(y_predict_ridge).reshape(20,3)\ny_test_ridge = np.array(y_test_ridge).reshape(20,3)",
"_____no_output_____"
],
[
"mean_squared_error(y_test_ridge, y_predict_ridge)",
"_____no_output_____"
],
[
"#Principal Component Regression \npca = PCA(n_components=7)\npca.fit(X.T)\nprint(pca.explained_variance_ratio_)",
"[0.42125265 0.29217237 0.11760887 0.0978641 0.04417708 0.01401474\n 0.00561174]\n"
],
[
"Z = pca.components_.T",
"_____no_output_____"
],
[
"X.shape, pca.components_.T.shape",
"_____no_output_____"
],
[
"#Regress on Principal components\n\ny_predict_pcr = []\ny_test_pcr = []\n\n\nfor train_index, test_index in kf.split(Z):\n \n X_train, X_test = Z[train_index], Z[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n \n reg1 = LinearRegression().fit(X_train, y_train[:,0])\n reg2 = LinearRegression().fit(X_train, y_train[:,1])\n reg3 = LinearRegression().fit(X_train, y_train[:,2])\n \n p1 = reg1.predict(X_test)\n p2 = reg2.predict(X_test)\n p3 = reg3.predict(X_test)\n \n y_test_pcr.append(y_test)\n y_predict_pcr.append([p1 ,p2, p3])\n\n \ny_predict_pcr = np.array(y_predict_pcr).reshape(20,3)\ny_test_pcr = np.array(y_test_pcr).reshape(20,3)\n",
"_____no_output_____"
],
[
"mean_squared_error(y_test_pcr, y_predict_pcr)",
"_____no_output_____"
]
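,
[
"# Convenience sketch: recompute the four models' test MSEs side by side\n# from the predictions gathered above.\nfor name, (yt, yp) in {'PLS': (y_test_pls, y_predict_pls),\n                       'OLS': (y_test_ols, y_predict_ols),\n                       'Ridge': (y_test_ridge, y_predict_ridge),\n                       'PCR': (y_test_pcr, y_predict_pcr)}.items():\n    print(name, mean_squared_error(yt, yp))",
"_____no_output_____"
]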
],
[
[
"# Visualization",
"_____no_output_____"
]
],
[
[
"df_test =pd.DataFrame(Y, columns=['N_conscity', 'N_price', 'N_symboling'] )",
"_____no_output_____"
],
[
"df_test.head()",
"_____no_output_____"
],
[
"df_test[['PLS_conscity', 'PLS_price', 'PLS_symboling']] = pd.DataFrame(y_predict_pls)",
"_____no_output_____"
],
[
"df_test.head()",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(1,3, figsize = (15, 5))\nfig.suptitle('PLS Performance', fontsize=20)\naxs[0].scatter(df_test[\"N_conscity\"], df_test[\"PLS_conscity\"] , c = 'black')\naxs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'black', linestyle='dashed')\naxs[0].set_xlabel('Conscity (test)')\naxs[0].set_ylabel('Conscity (predict)')\n\n\n\naxs[1].scatter(df_test[\"N_price\"], df_test[\"PLS_price\"] , c = 'black')\naxs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'black', linestyle='dashed')\naxs[1].set_xlabel('Price (test)')\naxs[1].set_ylabel('Price (predict)')\n\n\naxs[2].scatter(df_test[\"N_symboling\"], df_test[\"PLS_symboling\"] , c = 'black')\naxs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'black', linestyle='dashed')\naxs[2].set_xlabel('Symboling (test)')\naxs[2].set_ylabel('Symboling (predict)')",
"_____no_output_____"
],
[
"df_test[['OLS_conscity', 'OLS_price', 'OLS_symboling']] = pd.DataFrame(y_predict_ols)\ndf_test.head()",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(1,3, figsize = (15, 5))\nfig.suptitle('OLS Performance', fontsize=20)\naxs[0].scatter(df_test[\"N_conscity\"], df_test[\"OLS_conscity\"])\naxs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, linestyle='dashed')\naxs[0].set_xlabel('Conscity (test)')\naxs[0].set_ylabel('Conscity (predict)')\n\n\naxs[1].scatter(df_test[\"N_price\"], df_test[\"OLS_price\"] )\naxs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, linestyle='dashed')\naxs[1].set_xlabel('Price (test)')\naxs[1].set_ylabel('Price (predict)')\n\n\naxs[2].scatter(df_test[\"N_symboling\"], df_test[\"OLS_symboling\"] )\naxs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, linestyle='dashed')\naxs[2].set_xlabel('Symboling (test)')\naxs[2].set_ylabel('Symboling (predict)')",
"_____no_output_____"
],
[
"df_test[['Ridge_conscity', 'Ridge_price', 'Ridge_symboling']] = pd.DataFrame(y_predict_ridge)\ndf_test.head()",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(1,3, figsize = (15, 5))\nfig.suptitle('Ridge Performance', fontsize=20)\naxs[0].scatter(df_test[\"N_conscity\"], df_test[\"Ridge_conscity\"], c = 'orange' )\naxs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'orange', linestyle='dashed')\naxs[0].set_xlabel('Conscity (test)')\naxs[0].set_ylabel('Conscity (predict)')\n\n\naxs[1].scatter(df_test[\"N_price\"], df_test[\"Ridge_price\"], c = 'orange' )\naxs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'orange', linestyle='dashed')\naxs[1].set_xlabel('Price (test)')\naxs[1].set_ylabel('Price (predict)')\n\n\naxs[2].scatter(df_test[\"N_symboling\"], df_test[\"Ridge_symboling\"], c = 'orange' )\naxs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'orange', linestyle='dashed')\naxs[2].set_xlabel('Symboling (test)')\naxs[2].set_ylabel('Symboling (predict)')",
"_____no_output_____"
],
[
"df_test[['PCR_conscity', 'PCR_price', 'PCR_symboling']] = pd.DataFrame(y_predict_pcr)\ndf_test.head()",
"_____no_output_____"
],
[
"fig, axs = plt.subplots(1,3, figsize = (15, 5))\nfig.suptitle('PCR Performance', fontsize=20)\naxs[0].scatter(df_test[\"N_conscity\"], df_test[\"PCR_conscity\"], c = 'navy' )\naxs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'navy', linestyle='dashed')\naxs[0].set_xlabel('Conscity (test)')\naxs[0].set_ylabel('Conscity (predict)')\n\n\naxs[1].scatter(df_test[\"N_price\"], df_test[\"PCR_price\"], c = 'navy' )\naxs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'navy', linestyle='dashed')\naxs[1].set_xlabel('Price (test)')\naxs[1].set_ylabel('Price (predict)')\n\n\naxs[2].scatter(df_test[\"N_symboling\"], df_test[\"PCR_symboling\"], c = 'navy' )\naxs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'navy', linestyle='dashed')\naxs[2].set_xlabel('Symboling (test)')\naxs[2].set_ylabel('Symboling (predict)')",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7fd6abe681beb0d3981b4955d677a405e868aa
| 25,805 |
ipynb
|
Jupyter Notebook
|
examples/ch15/IMDB_RNN.ipynb
|
germanngc/PythonFundamentals
|
14d22baa30d7c3c5404fc11362709669e92474b8
|
[
"Apache-2.0"
] | null | null | null |
examples/ch15/IMDB_RNN.ipynb
|
germanngc/PythonFundamentals
|
14d22baa30d7c3c5404fc11362709669e92474b8
|
[
"Apache-2.0"
] | null | null | null |
examples/ch15/IMDB_RNN.ipynb
|
germanngc/PythonFundamentals
|
14d22baa30d7c3c5404fc11362709669e92474b8
|
[
"Apache-2.0"
] | null | null | null | 30.430425 | 331 | 0.564503 |
[
[
[
"## 15.9.1 Loading the IMDb Movie Reviews Dataset (1 of 2)\n* Contains **25,000 training samples** and **25,000 testing samples**, each **labeled** with its positive (1) or negative (0) sentiment",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.datasets import imdb",
"_____no_output_____"
]
],
[
[
"* **Over 88,000 unique words** in the dataset\n* Can specify **number of unique words to import** when loading **training and testing data**\n* We'll use top **10,000 most frequently occurring words** \n * Due to **system memory limitations** and **training on a CPU** (intentionally)\n * Most people don't have systems with Tensorflow-compatible **GPUs** or **TPUs**\n* **More data** takes **longer to train**, but may produce **better models**",
"_____no_output_____"
],
[
"## 15.9.1 Loading the IMDb Movie Reviews Dataset (1 of 2)\n* **`load_data`** **replaces** any words **outside the top 10,000** with a **placeholder** value (discussed shortly)",
"_____no_output_____"
]
],
[
[
"number_of_words = 10000",
"_____no_output_____"
]
],
[
[
"**NOTE:** Following cell was added to work around a **known issue with TensorFlow/Keras and NumPy**—this issue is already fixed in a forthcoming version. [See this cell's code on StackOverflow.](https://stackoverflow.com/questions/55890813/how-to-fix-object-arrays-cannot-be-loaded-when-allow-pickle-false-for-imdb-loa)",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# save np.load\nnp_load_old = np.load\n\n# modify the default parameters of np.load\nnp.load = lambda *a,**k: np_load_old(*a, allow_pickle=True, **k)",
"_____no_output_____"
],
[
"(X_train, y_train), (X_test, y_test) = imdb.load_data(\n num_words=number_of_words)",
"_____no_output_____"
],
[
"# This cell completes the workaround mentioned above\n# restore np.load for future normal usage\nnp.load = np_load_old",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"## 15.9.2 Data Exploration (1 of 2)\n* Check sample and target dimensions\n* **Note that `X_train` and `X_test` appear to be one-dimensional**\n * They're actually **NumPy arrays of objects** (lists of integers)",
"_____no_output_____"
]
],
[
[
"X_train.shape",
"_____no_output_____"
],
[
"y_train.shape",
"_____no_output_____"
],
[
"X_test.shape",
"_____no_output_____"
],
[
"y_test.shape",
"_____no_output_____"
]
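,
[
"# The samples really are ragged lists of integers; a quick illustrative check:\nprint(type(X_train[0]), len(X_train[0]), len(X_train[1]))",
"_____no_output_____"
]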
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"## 15.9.2 Data Exploration (2 of 2)\n* The **arrays `y_train` and `y_test`** are **one-dimensional** arrays containing **1s and 0s**, indicating whether each review is **positive** or **negative**\n* `X_train` and `X_test` are **lists** of integers, each representing one review’s contents\n* **Keras models require numeric data** — **IMDb dataset is preprocessed for you**",
"_____no_output_____"
]
],
[
[
"%pprint # toggle pretty printing, so elements don't display vertically",
"_____no_output_____"
],
[
"X_train[123]",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Movie Review Encodings (1 of 2)\n* Because the **movie reviews** are **numerically encoded**, to view their original text, you need to know the word to which each number corresponds\n* **Keras’s IMDb dataset** provides a **dictionary** that **maps the words to their indexes**\n* **Each word’s value** is its **frequency ranking** among all words in the dataset\n * **Ranking 1** is the **most frequently occurring word**\n * **Ranking 2** is the **second most frequently occurring word**\n * ...",
"_____no_output_____"
],
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Movie Review Encodings (2 of 2)\n* Ranking values are **offset by 3** in the training/testing samples\n * **Most frequently occurring word has the value 4** wherever it appears in a review\n* **0, 1 and 2** in each encoded review are **reserved**:\n * **padding (0)** \n * All training/testing samples **must have same dimensions**\n * Some reviews may need to be padded with **0** and some shortened\n * **start of a sequence (1)** — a **token** that Keras uses internally for learning purposes\n * **unknown word (2)** — typically a word that was **not loaded**\n * **`load_data`** uses **2** for words with **frequency rankings greater than `num_words`** ",
"_____no_output_____"
],
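[
"A small worked example of the offset: `word_to_index['great']` is 84 (as shown below), so the word `'great'` appears in an encoded review as `84 + 3 = 87`.",
"_____no_output_____"
],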
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Decoding a Movie Review (1 of 3)\n* Must account for offset when **decoding reviews**\n* Get the **word-to-index dictionary**",
"_____no_output_____"
]
],
[
[
"word_to_index = imdb.get_word_index()",
"_____no_output_____"
]
],
[
[
"* The word `'great'` might appear in a positive movie review:",
"_____no_output_____"
]
],
[
[
"word_to_index['great'] # 84th most frequent word",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Decoding a Movie Review (2 of 3)\n* **Reverse `word_to_index` mapping**, so we can **look up words** by **frequency rating**",
"_____no_output_____"
]
],
[
[
"index_to_word = {index: word for (word, index) in word_to_index.items()}",
"_____no_output_____"
]
],
[
[
"* **Top 50 words**—**most frequent word** has the key **1** in the **new dictionary**",
"_____no_output_____"
]
],
[
[
"[index_to_word[i] for i in range(1, 51)]",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Decoding a Movie Review (3 of 3)\n* Now, we can **decode a review**\n* **`i - 3`** accounts for the **frequency ratings offsets** in the encoded reviews \n* For `i` values `0`–`2`, `get` returns `'?'`; otherwise, `get` returns the word with the **key `i - 3`** in the **`index_to_word` dictionary**",
"_____no_output_____"
]
],
[
[
"' '.join([index_to_word.get(i - 3, '?') for i in X_train[123]])",
"_____no_output_____"
]
],
[
[
"* Can see from **`y_train[123]`** that this **review** is **classified as positive**",
"_____no_output_____"
]
],
[
[
"y_train[123]",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"## 15.9.3 Data Preparation (1 of 2)\n* Number of words per review varies\n* Keras **requires all samples to have the same dimensions**\n* **Prepare data** for learning\n\t* Restrict every review to the **same number of words**\n\t* **Pad** some with **0s**, **truncate** others\n* **`pad_sequences` function** reshapes samples and **returns a 2D array**",
"_____no_output_____"
]
],
[
[
"words_per_review = 200 ",
"_____no_output_____"
],
[
"from tensorflow.keras.preprocessing.sequence import pad_sequences",
"_____no_output_____"
],
[
"X_train = pad_sequences(X_train, maxlen=words_per_review)",
"_____no_output_____"
],
[
"X_train.shape",
"_____no_output_____"
]
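,
[
"# Reviews shorter than 200 words are now left-padded with 0s\n# (pad_sequences pads at the start by default); an illustrative peek:\nprint(X_train[123][:10])",
"_____no_output_____"
]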
],
[
[
"## 15.9.3 Data Preparation (2 of 2)\n* Must also **reshape `X_test`** for evaluating the model later",
"_____no_output_____"
]
],
[
[
"X_test = pad_sequences(X_test, maxlen=words_per_review) ",
"_____no_output_____"
],
[
"X_test.shape",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Splitting the Test Data into Validation and Test Data\n* Split the **25,000 test samples** into **20,000 test samples** and **5,000 validation samples**\n* We'll pass validation samples to the model’s `fit` method via **`validation_data`** argument\n* Use **Scikit-learn’s `train_test_split` function** ",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X_test, X_val, y_test, y_val = train_test_split(\n X_test, y_test, random_state=11, test_size=0.20) ",
"_____no_output_____"
]
],
[
[
"* Confirm the split by checking `X_test`’s and `X_val`’s shapes:",
"_____no_output_____"
]
],
[
[
"X_test.shape",
"_____no_output_____"
],
[
"X_val.shape",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"## 15.9.4 Creating the Neural Network\n* Begin with a **`Sequential` model** and import the other layers",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential",
"_____no_output_____"
],
[
"rnn = Sequential()",
"_____no_output_____"
],
[
"from tensorflow.keras.layers import Dense, LSTM, Embedding",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Adding an Embedding Layer (1 of 2)\n* RNNs that process **text sequences** typically begin with an **embedding layer** \n* Encodes each word in a **dense-vector representation**\n* These capture the **word’s context**—how a given word **relates to words around it**\n* Help **RNN learn word relationships** \n* **Predefined word embeddings**, such as **Word2Vec** and **GloVe**\n\t* Can **load** into neural networks to **save training time**\n\t* Sometimes used to **add basic word relationships** to a model when **smaller amounts of training data** are available\n\t* **Improve model accuracy** by **building upon previously learned word relationships**, rather than trying to learn those relationships with insufficient data",
"_____no_output_____"
],
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Adding an `Embedding` Layer (2 of 2)",
"_____no_output_____"
]
],
[
[
"rnn.add(Embedding(input_dim=number_of_words, output_dim=128,\n input_length=words_per_review))",
"_____no_output_____"
]
],
[
[
"* **`input_dim=number_of_words`**—Number of **unique words**\n* **`output_dim=128`**—Size of each word embedding\n * If you [load pre-existing embeddings](https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html) like **Word2Vec** and **GloVe**, you must set this to **match the size of the word embeddings you load**\n* **`input_length=words_per_review`**—Number of words in each input sample",
"_____no_output_____"
],
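[
"A quick sketch of the resulting shapes, given the settings above:\n\n```python\n# each review: 200 word ids -> 200 embedding vectors of 128 dimensions\n# (batch_size, 200)  --Embedding-->  (batch_size, 200, 128)\n```",
"_____no_output_____"
],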
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Adding an LSTM Layer",
"_____no_output_____"
]
],
[
[
"rnn.add(LSTM(units=128, dropout=0.2, recurrent_dropout=0.2))",
"_____no_output_____"
]
],
[
[
"* **`units`**—**number of neurons** in the layer\n\t* **More neurons** means **network can remember more**\n\t* [**Guideline**](https://towardsdatascience.com/choosing-the-right-hyperparameters-for-a-simple-lstm-using-keras-f8e9ed76f046): Value between **length of the sequences** (200 in this example) and **number of classes to predict** (2 in this example)\n* **`dropout`**—**percentage of neurons to randomly disable** when processing the layer’s input and output\n\t* Like **pooling layers** in a **convnet**, **dropout** is a proven technique that **reduces overfitting**\n * Yarin, Ghahramani, and Zoubin. “A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.” October 05, 2016. https://arxiv.org/abs/1512.05287\n * Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” _Journal of Machine Learning Research_ 15 (June 14, 2014): 1929-1958. http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf\n\t* Keras also provides a **`Dropout`** layer that you can add to your models \n* **`recurrent_dropout`**—**percentage of neurons to randomly disable** when the **layer’s output** is **fed back into the layer** again to allow the network to **learn from what it has seen previously**\n * **Mechanics of how the LSTM layer performs its task are beyond scope**.\n * Chollet says: “you don’t need to understand anything about the specific architecture of an LSTM cell; **as a human, it shouldn’t be your job to understand it**. Just keep in mind what the LSTM cell is meant to do: allow past information to be reinjected at a later time.”\n\t\t* Chollet, François. _Deep Learning with Python_. p. 204. Shelter Island, NY: Manning Publications, 2018.",
"_____no_output_____"
],
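[
"For comparison, a standalone `Dropout` layer (mentioned above, but not used in this model) would be added like this, as a minimal sketch:\n\n```python\nfrom tensorflow.keras.layers import Dropout\nrnn.add(Dropout(0.2))  # randomly disables 20% of the previous layer's outputs during training\n```",
"_____no_output_____"
],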
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Adding a Dense Output Layer \n* Reduce the **LSTM layer’s output** to **one result** indicating whether a review is **positive** or **negative**, thus the value **`1` for the `units` argument**\n* **`'sigmoid`' activation function** is preferred for **binary classification**\n\t* Chollet, François. _Deep Learning with Python_. p.114. Shelter Island, NY: Manning Publications, 2018.\n\t* Reduces arbitrary values into the range **0.0–1.0**, producing a probability",
"_____no_output_____"
]
],
[
[
"rnn.add(Dense(units=1, activation='sigmoid'))",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"### Compiling the Model and Displaying the Summary\n* **Two possible outputs**, so we use the **`binary_crossentropy` loss function**:",
"_____no_output_____"
]
],
[
[
"rnn.compile(optimizer='adam',\n loss='binary_crossentropy', \n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"* **Fewer layers** than our **convnet**, but nearly **three times as many parameters** (the network’s **weights**) \n\t* **More parameters means more training time**\n\t* The large number of parameters primarily comes from the **number of words in the vocabulary** (we loaded 10,000) **times the number of neurons in the `Embedding` layer’s output (128)**",
"_____no_output_____"
]
],
[
[
"rnn.summary()",
"_____no_output_____"
]
],
[
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
],
[
"## 15.9.5 Training and Evaluating the Model (1 of 2)\n* For each **epoch** the **RNN model** takes **significantly longer to train** than our **convnet**\n * Due to the **larger numbers of parameters** (weights) our **RNN model** needs to learn",
"_____no_output_____"
]
],
[
[
"rnn.fit(X_train, y_train, epochs=10, batch_size=32, \n validation_data=(X_val, y_val))",
"_____no_output_____"
]
],
[
[
"<!--\n```\nTrain on 25000 samples, validate on 20000 samples\nWARNING:tensorflow:From /Users/pauldeitel/anaconda3/envs/tf_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\nEpoch 1/10\n25000/25000 [==============================] - 297s 12ms/sample - loss: 0.4827 - acc: 0.7673 - val_loss: 0.3925 - val_acc: 0.8324\nEpoch 2/10\n25000/25000 [==============================] - 291s 12ms/sample - loss: 0.3327 - acc: 0.8618 - val_loss: 0.3614 - val_acc: 0.8461\nEpoch 3/10\n25000/25000 [==============================] - 272s 11ms/sample - loss: 0.2662 - acc: 0.8937 - val_loss: 0.3503 - val_acc: 0.8492\nEpoch 4/10\n25000/25000 [==============================] - 272s 11ms/sample - loss: 0.2066 - acc: 0.9198 - val_loss: 0.3695 - val_acc: 0.8623\nEpoch 5/10\n25000/25000 [==============================] - 271s 11ms/sample - loss: 0.1612 - acc: 0.9403 - val_loss: 0.3802 - val_acc: 0.8587\nEpoch 6/10\n25000/25000 [==============================] - 291s 12ms/sample - loss: 0.1218 - acc: 0.9556 - val_loss: 0.4103 - val_acc: 0.8421\nEpoch 7/10\n25000/25000 [==============================] - 295s 12ms/sample - loss: 0.1023 - acc: 0.9634 - val_loss: 0.4634 - val_acc: 0.8582\nEpoch 8/10\n25000/25000 [==============================] - 273s 11ms/sample - loss: 0.0789 - acc: 0.9732 - val_loss: 0.5103 - val_acc: 0.8555\nEpoch 9/10\n25000/25000 [==============================] - 273s 11ms/sample - loss: 0.0676 - acc: 0.9775 - val_loss: 0.5071 - val_acc: 0.8526\nEpoch 10/10\n25000/25000 [==============================] - 273s 11ms/sample - loss: 0.0663 - acc: 0.9787 - val_loss: 0.5156 - val_acc: 0.8536\n<tensorflow.python.keras.callbacks.History object at 0x141462e48>\n```\n-->",
"_____no_output_____"
],
[
"## 15.9.5 Training and Evaluating the Model (2 of 2)\n* Function **`evaluate`** returns the **loss and accuracy values**",
"_____no_output_____"
]
],
[
[
"results = rnn.evaluate(X_test, y_test)",
"_____no_output_____"
],
[
"results",
"_____no_output_____"
]
],
[
[
"* **Accuracy seems low** compared to our **convnet**, but this is a **much more difficult problem**\n * Many **IMDb sentiment-analysis binary-classification studies** show results **in the high 80s**\n* We did **reasonably well** with our **small recurrent neural network** of only **three layers**\n * We have not tried to tune our model",
"_____no_output_____"
],
[
"<hr style=\"height:2px; border:none; color:black; background-color:black;\">",
"_____no_output_____"
]
],
[
[
"##########################################################################\n# (C) Copyright 2019 by Deitel & Associates, Inc. and #\n# Pearson Education, Inc. All Rights Reserved. #\n# #\n# DISCLAIMER: The authors and publisher of this book have used their #\n# best efforts in preparing the book. These efforts include the #\n# development, research, and testing of the theories and programs #\n# to determine their effectiveness. The authors and publisher make #\n# no warranty of any kind, expressed or implied, with regard to these #\n# programs or to the documentation contained in these books. The authors #\n# and publisher shall not be liable in any event for incidental or #\n# consequential damages in connection with, or arising out of, the #\n# furnishing, performance, or use of these programs. #\n##########################################################################",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a7fe1b5c21b95a62bfede7f7120efb0ed81c223
| 4,670 |
ipynb
|
Jupyter Notebook
|
Open3D/Basic/File_IO.ipynb
|
MikoyChinese/learn
|
c482b1e84496279935b5bb2cfc1e6d78e2868c63
|
[
"Apache-2.0"
] | null | null | null |
Open3D/Basic/File_IO.ipynb
|
MikoyChinese/learn
|
c482b1e84496279935b5bb2cfc1e6d78e2868c63
|
[
"Apache-2.0"
] | null | null | null |
Open3D/Basic/File_IO.ipynb
|
MikoyChinese/learn
|
c482b1e84496279935b5bb2cfc1e6d78e2868c63
|
[
"Apache-2.0"
] | null | null | null | 26.99422 | 291 | 0.571949 |
[
[
[
"import open3d as o3d",
"_____no_output_____"
]
],
[
[
"### File IO\n\nThis shows how basic geometries are read and written by Open3D.\n\n1. IO for point cloud:\n\n```python\n# Read\npcd = o3d.io.read_point_cloud(pcd_file)\n# Write\no3d.io.write_point_cloud('copy_file', pcd)\n```\n\n2. IO for meshes:\n\n```python\n# Read\nmesh = o3d.io.read_triangle_mesh('.ply')\n# Write\no3d.io.write_triangle_mesh('', mesh)\n```\n\n3. IO for images\n\n```python\n# Read\nimg = o3d.io.read_image(img_file)\n# Write\no3d.io.write_image('', img)\n```",
"_____no_output_____"
],
[
"### PointCloud\n\n#### 1. Visualize point cloud\n\n```python\npcd = o3d.io.read_point_cloud(pcd_file)\nprint(pcd)\nprint(np.asarray(pcd.points))\n\n# Visualization:\no3d.visualization.draw_geometries([pcd])\n```\n\n#### 2. Voxel downsampling: 体素下采样\n\nVoxel downsampling uses a regular voxel grid to create a uniformly downsampled point cloud from an input point cloud. It is often used as a pre-processing step for many point cloud processing tasks. The algorithm operates in two steps:\n\n1. Points are bucketed into voxels.\n\n2. Each occupied voxel generates exact one point by averaging all points inside.\n\n```python\ndownpcd = pcd.voxel_down_sample(voxel_size=0.05)\no3d.visualization.draw_geometries([downpcd])\n```\n\n#### 3. Vertex normal estimation\n\nestimate_normals computes normal for every point. The function finds adjacent points and calculate the principal axis of the adjacent points using covariance analysis.\n\n```python\ndownpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(\n radius=0.1, max_nn=30))\no3d.visualization.draw_geometries([downpcd])\n```\n\nThe function takes an instance of KDTreeSearchParamHybrid class as an argument. The two key arguments `radius = 0.1` and `max_nn = 30` specifies search radius and maximum nearest neighbor. It has 10cm of search radius, and only considers up to 30 neighbors to save computation time.\n\n#### 4. Access estimated vertex normal\n\nEstimated normal vectors can be retrieved by normals variable of downpcd.\n\n```python\nprint(downpcd.normals[0]) # Normal vector of the 0th point.\n```\n\n#### 5. Crop point cloud\n\n`read_selection_polygon_volume` reads a json file that specifies polygon selection area. `vol.crop_point_cloud(pcd) filters out points.\n\n```python\nvol = o3d.visualization.read_selection_polygon_volume(crop.json)\ncrop = vol.crop_point_cloud(pcd)\no3d.visualization.draw_geometries([crop])\n\n# Paint point cloud\n\ncrop.paint_uniform_color([0,0,0])\no3d.visualization.draw_geometries([crop])\n```",
"_____no_output_____"
]
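,
[
"Putting the pieces together, a minimal end-to-end sketch ('fragment.pcd' is a placeholder path):\n\n```python\nimport open3d as o3d\n\npcd = o3d.io.read_point_cloud('fragment.pcd')\ndownpcd = pcd.voxel_down_sample(voxel_size=0.05)\ndownpcd.estimate_normals(\n    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))\no3d.visualization.draw_geometries([downpcd])\n```",
"_____no_output_____"
]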
]
] |
[
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a7fe39832c2a1ba33fa439940ced0d078d24fa6
| 143,911 |
ipynb
|
Jupyter Notebook
|
ch02git/02Solo.ipynb
|
MitchellAcoustics/rsd-engineeringcourse
|
43769a849e02983f3fb334eb10d6d8e9ec259eac
|
[
"CC-BY-3.0"
] | null | null | null |
ch02git/02Solo.ipynb
|
MitchellAcoustics/rsd-engineeringcourse
|
43769a849e02983f3fb334eb10d6d8e9ec259eac
|
[
"CC-BY-3.0"
] | null | null | null |
ch02git/02Solo.ipynb
|
MitchellAcoustics/rsd-engineeringcourse
|
43769a849e02983f3fb334eb10d6d8e9ec259eac
|
[
"CC-BY-3.0"
] | null | null | null | 120.427615 | 44,280 | 0.818103 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a7fe75a84e54bbc357d947cb40af566c629beea
| 18,328 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/machine_translation-checkpoint.ipynb
|
anhtu96/machine-translation
|
df0d0a0b201c93ef4b39156b498f4db2ce98dc37
|
[
"MIT"
] | 5 |
2020-07-11T10:41:08.000Z
|
2022-01-20T09:08:24.000Z
|
.ipynb_checkpoints/machine_translation-checkpoint.ipynb
|
anhtu96/machine-translation
|
df0d0a0b201c93ef4b39156b498f4db2ce98dc37
|
[
"MIT"
] | 1 |
2021-11-29T00:41:22.000Z
|
2021-12-07T08:52:06.000Z
|
.ipynb_checkpoints/machine_translation-checkpoint.ipynb
|
anhtu96/machine-translation
|
df0d0a0b201c93ef4b39156b498f4db2ce98dc37
|
[
"MIT"
] | null | null | null | 23.71022 | 477 | 0.527226 |
[
[
[
"from google.colab import drive\ndrive.mount('/content/drive', force_remount=True)",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
],
[
"cd 'drive/My Drive/Colab Notebooks/machine_translation'",
"/content/drive/My Drive/Colab Notebooks/machine_translation\n"
],
[
"from dataset import MTDataset\nfrom model import Encoder, Decoder\nfrom language import Language\nfrom utils import preprocess\nfrom train import train\nfrom eval import validate\nfrom translate import translate",
"_____no_output_____"
],
[
"sentences_inp_train, sentences_trg_train = preprocess('datasets/train/train.en', 'datasets/train/train.vi', max_len=20)\nsentences_inp_val, sentences_trg_val = preprocess('datasets/dev/tst2012.en', 'datasets/dev/tst2012.vi', max_len=20)",
"_____no_output_____"
],
[
"train_inp = Language(sentences_inp_train)\ntrain_trg = Language(sentences_trg_train)",
"_____no_output_____"
],
[
"val_inp = Language(sentences_inp_val, train=False, word2id=train_inp.word2id, id2word=train_inp.id2word)\nval_trg = Language(sentences_trg_val, train=False, word2id=train_trg.word2id, id2word=train_trg.id2word)",
"_____no_output_____"
],
[
"train_set = MTDataset(train_inp.wordvec, train_trg.wordvec)\nval_set = MTDataset(val_inp.wordvec, val_trg.wordvec)",
"_____no_output_____"
],
[
"from torch.utils.data import DataLoader\nimport torch\nimport torch.nn as nn\nfrom torch.optim.lr_scheduler import StepLR",
"_____no_output_____"
],
[
"train_loader = DataLoader(train_set, batch_size=64, shuffle=True)\nval_loader = DataLoader(val_set, batch_size=64)",
"_____no_output_____"
],
[
"Tx, Ty = train_inp.max_len, train_trg.max_len\nvocab_size_inp, vocab_size_trg = train_inp.vocab_size, train_trg.vocab_size\nembedding_dim = 256\nhidden_size = 1024",
"_____no_output_____"
],
[
"if torch.cuda.is_available():\n device='cuda'\nelse:\n device='cpu'",
"_____no_output_____"
],
[
"encoder = Encoder(vocab_size_inp, embedding_dim, hidden_size).to(device=device)\ndecoder = Decoder(hidden_size, vocab_size_trg, embedding_dim).to(device=device)",
"_____no_output_____"
],
[
"optimizer = torch.optim.Adam(params=list(encoder.parameters()) + list(decoder.parameters()))\ncriterion = nn.CrossEntropyLoss()\nscheduler = StepLR(optimizer, step_size=2, gamma=0.5)",
"_____no_output_____"
],
[
"train(encoder, decoder, train_loader, val_loader, optimizer, criterion, train_trg.id2word, scheduler, 10, 200, device)",
"Epoch 1\nIter 0, loss = 9.005056\nIter 200, loss = 2.832219\nIter 400, loss = 2.161599\nIter 600, loss = 2.072055\nIter 800, loss = 2.183336\nIter 1000, loss = 1.953040\nValidation BLEU score: 0.110472\n\nEpoch 2\nIter 0, loss = 1.553245\nIter 200, loss = 1.450432\nIter 400, loss = 1.463465\nIter 600, loss = 1.509131\nIter 800, loss = 1.661497\nIter 1000, loss = 1.494548\nValidation BLEU score: 0.138041\n\nEpoch 3\nIter 0, loss = 1.073714\nIter 200, loss = 0.982473\nIter 400, loss = 0.938613\nIter 600, loss = 1.045970\nIter 800, loss = 1.044115\nIter 1000, loss = 1.120787\nValidation BLEU score: 0.157240\n\nEpoch 4\nIter 0, loss = 0.747102\nIter 200, loss = 0.649947\nIter 400, loss = 0.648416\nIter 600, loss = 0.731301\nIter 800, loss = 0.729189\nIter 1000, loss = 0.672074\nValidation BLEU score: 0.157583\n\nEpoch 5\nIter 0, loss = 0.427079\nIter 200, loss = 0.404889\nIter 400, loss = 0.375883\nIter 600, loss = 0.342018\nIter 800, loss = 0.421010\nIter 1000, loss = 0.396873\nValidation BLEU score: 0.154613\n\nEpoch 6\nIter 0, loss = 0.248889\nIter 200, loss = 0.210372\nIter 400, loss = 0.238896\nIter 600, loss = 0.240338\nIter 800, loss = 0.273934\nIter 1000, loss = 0.279463\nValidation BLEU score: 0.155281\n\nEpoch 7\nIter 0, loss = 0.123383\nIter 200, loss = 0.131666\nIter 400, loss = 0.136402\nIter 600, loss = 0.142925\nIter 800, loss = 0.133448\nIter 1000, loss = 0.155486\nValidation BLEU score: 0.153768\n\nEpoch 8\nIter 0, loss = 0.086204\nIter 200, loss = 0.077821\nIter 400, loss = 0.076827\nIter 600, loss = 0.081094\nIter 800, loss = 0.106093\nIter 1000, loss = 0.102409\nValidation BLEU score: 0.153652\n\nEpoch 9\nIter 0, loss = 0.063651\nIter 200, loss = 0.048124\nIter 400, loss = 0.051989\nIter 600, loss = 0.057838\nIter 800, loss = 0.056169\nIter 1000, loss = 0.069033\nValidation BLEU score: 0.150720\n\nEpoch 10\nIter 0, loss = 0.033596\nIter 200, loss = 0.041813\nIter 400, loss = 0.043144\nIter 600, loss = 0.036493\nIter 800, loss = 0.043718\nIter 1000, loss = 0.053935\nValidation BLEU score: 0.148495\n\n"
],
[
"torch.save(encoder.state_dict(), 'encoder.pth')\ntorch.save(decoder.state_dict(), 'decoder.pth')",
"_____no_output_____"
],
[
"import string\nexclude = list(string.punctuation) + list(string.digits)\ntest_sen = 'hello i am a student'\ntest_sen = ''.join([char for char in test_sen if char not in exclude]).strip().lower()\ntest_sen = '<START> ' + test_sen + ' <END>'\nlength = len(test_sen.split())\ndiff = train_inp.max_len -length\ntest_sen = test_sen + ''.join([' <PAD>']*diff)",
"_____no_output_____"
],
[
"test_vec = [train_inp.word2id[s] for s in test_sen.split()]\ntest_tensor = torch.Tensor(test_vec).to(device='cuda', dtype=torch.long).unsqueeze(0)",
"_____no_output_____"
],
[
"with torch.no_grad():\n encoder.eval()\n decoder.eval()\n enc_out, enc_hidden_backward, enc_hidden_forward = encoder(test_tensor)\n dec_hidden = enc_hidden_backward\n dec_input = torch.Tensor([train_trg.word2id['<START>']]).to(device='cuda', dtype=torch.long)\n for t in range(1, Ty):\n out, dec_hidden = decoder(dec_input, dec_hidden, enc_out)\n dec_input = torch.max(out, dim=-1)[1].squeeze(1)\n next_id = dec_input.squeeze().clone().cpu().numpy()\n next_word = train_trg.id2word[next_id]\n if next_word == '<END>':\n break\n print(next_word)",
"xin\nchào\ntôi\nlà\nmột\nsinh\nviên\n"
],
[
"translate('i am a student', train_inp.word2id, train_trg.word2id, train_trg.id2word, encoder, decoder, 20, device)",
"_____no_output_____"
],
[
"decoder.load_state_dict(torch.load('decoder.pth'))",
"_____no_output_____"
],
[
"train_inp.id2word[4112]",
"_____no_output_____"
],
[
"train_trg.sentences[0]",
"_____no_output_____"
],
[
"from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction",
"_____no_output_____"
],
[
"ref, hyp, bleu = validate()",
"_____no_output_____"
],
[
"hyp[0]",
"_____no_output_____"
],
[
"ref1 = 'the cat is on the mat'.split()\nref2 = 'there is a cat on the mat'.split()\nhyp = 'the cat the cat on the mat'.split()",
"_____no_output_____"
],
[
"corpus_bleu([[ref1, ref2]], [hyp])",
"_____no_output_____"
],
[
"ref3 = 'i am student ngo anh tu'.split()\nref4 = 'my name is student ngo anh tu'.split()\nhyp2 = 'there is a student ngo anh tu'.split()",
"_____no_output_____"
],
[
"corpus_bleu([[ref1, ref2], [ref3, ref4]], [hyp, hyp2])",
"_____no_output_____"
],
[
"sentence_bleu([ref1, ref2], hyp)",
"_____no_output_____"
],
[
"sentence_bleu([ref3, ref4], hyp2)",
"_____no_output_____"
],
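[
"# A minimal sketch, assuming we want smoothed sentence-level BLEU: with short hypotheses,\n# higher-order n-gram matches are often zero and BLEU collapses toward 0, so NLTK's\n# SmoothingFunction (imported above) can be passed in. method1 (add-epsilon to zero\n# counts) is an arbitrary choice here, reusing ref1, ref2 and hyp from the cells above.\nsmoothie = SmoothingFunction().method1\nsentence_bleu([ref1, ref2], hyp, smoothing_function=smoothie)",
"_____no_output_____"
],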
[
"validate()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a7ff89e04b89fc697327710da42bbb44c8bb519
| 500,711 |
ipynb
|
Jupyter Notebook
|
Hypthesis_Testing_Redo.ipynb
|
gathoni/hypothesis_testing
|
17861b3b9155a2366a17580b0cf006f73cbdbd91
|
[
"MIT"
] | null | null | null |
Hypthesis_Testing_Redo.ipynb
|
gathoni/hypothesis_testing
|
17861b3b9155a2366a17580b0cf006f73cbdbd91
|
[
"MIT"
] | null | null | null |
Hypthesis_Testing_Redo.ipynb
|
gathoni/hypothesis_testing
|
17861b3b9155a2366a17580b0cf006f73cbdbd91
|
[
"MIT"
] | null | null | null | 87.644145 | 85,200 | 0.656688 |
[
[
[
"<a href=\"https://colab.research.google.com/github/gathoni/hypothesis_testing/blob/master/Hypthesis_Testing_Redo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# **Autolib Dataset**",
"_____no_output_____"
],
[
"## **1.1 INTRODUCTION**",
"_____no_output_____"
],
[
"### **1.1.1 Defining the question**\n\nInvestigating the electric (bluecars) car usage in Paris during weekdays.\n\nTest a Hypothesis: whether there is difference in the means of blue cars taken in two different postal codes selected randomly on weekdays.",
"_____no_output_____"
],
[
"### **1.1.2 Metric of Success**\nOur metric for success will be based on the analysis of the number bluecars taken in different stations. \n\nWe will get two postal code areas using simple random samplinga and then compare their usage.",
"_____no_output_____"
],
[
"### **1.1.3 Understanding the context**\nIn this project we will seek to understand electric car usage by solving for another research question.\n\nWe will work as a Data Scientist for the Autolib electric car-sharing service company to investigate a claim about the blue cars from the provided Autolib dataset.\n\nTo do this, we need to identify some areas and periods of interest via sampling stating the reason to the choice of method, then perform hypothesis testing with regards to the claim that we will have made.\n\nAn example of claim to test would be \"Is the number of Bluecars taken in area X different than in area Y? Is it greater in area X than in area Z? Etc”. The selected periods of interest be either weekdays or weekends but not a mix of both. We can also consider postal codes as some of the areas of interest.",
"_____no_output_____"
],
[
"### **1.1.4 Experimental Design**\nExploratory Data Analysis\n\nData Cleaning\n\nUnivariate, Bivariate Analysis\n\nVisualizations\n\nTesting a Hypothesis\n\nChallenge our solution by providing insights on how we can make improvements.",
"_____no_output_____"
],
[
"### **1.1.5 Appropriateness of Data**\nThe dataset and glossary to use for this project can be found here [http://bit.ly/DSCoreAutolibDataset].\n\nThe provided dataset is a daily aggregation, by date and postal code, of the number of events on the Autolib network (car-sharing and recharging)",
"_____no_output_____"
],
[
"## **1.2 EXPLORATORY DATA ANALYSIS**",
"_____no_output_____"
],
[
"### **1.2.1 Importing Libraries**",
"_____no_output_____"
]
],
[
[
"# Import Libraries\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport pandas_profiling as pp\nfrom scipy import stats",
"_____no_output_____"
]
],
[
[
"### **1.2.2 Loading the Dataset**",
"_____no_output_____"
]
],
[
[
"# call our dataset autolib\nautolib = pd.read_csv(\"http://bit.ly/DSCoreAutolibDataset\")",
"_____no_output_____"
]
],
[
[
"### **1.2.3 Viewing the dataset**",
"_____no_output_____"
]
],
[
[
"# Viewing the first 5 rows\nautolib.head()",
"_____no_output_____"
],
[
"# Viewing the last 5 rows\nautolib.tail()",
"_____no_output_____"
],
[
"# Checking the dataset shape i.e. number of rows and columns\nprint('The Autolib dataset has ' + str(autolib.shape[0]) + \n ' rows and ' + str(autolib.shape[1]) + ' columns' )",
"The Autolib dataset has 16085 rows and 13 columns\n"
],
[
"# Check the data types of each column\nautolib.dtypes",
"_____no_output_____"
],
[
"# Checking the dataset information\nautolib.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 16085 entries, 0 to 16084\nData columns (total 13 columns):\nPostal code 16085 non-null int64\ndate 16085 non-null object\nn_daily_data_points 16085 non-null int64\ndayOfWeek 16085 non-null int64\nday_type 16085 non-null object\nBlueCars_taken_sum 16085 non-null int64\nBlueCars_returned_sum 16085 non-null int64\nUtilib_taken_sum 16085 non-null int64\nUtilib_returned_sum 16085 non-null int64\nUtilib_14_taken_sum 16085 non-null int64\nUtilib_14_returned_sum 16085 non-null int64\nSlots_freed_sum 16085 non-null int64\nSlots_taken_sum 16085 non-null int64\ndtypes: int64(11), object(2)\nmemory usage: 1.6+ MB\n"
],
[
"# Checking number of unique items in each column\nautolib.nunique()",
"_____no_output_____"
],
[
"# Summary description of our dataset\nautolib.describe()",
"_____no_output_____"
],
[
"# Using Pandas Profiling to get a detailed summary report of our dataset\npp.ProfileReport(autolib)",
"/usr/local/lib/python3.6/dist-packages/pandas_profiling/describe.py:392: FutureWarning: The join_axes-keyword is deprecated. Use .reindex or .reindex_like on the result to achieve the same functionality.\n variable_stats = pd.concat(ldesc, join_axes=pd.Index([names]), axis=1)\n"
]
],
[
[
"## **1.3 DATA CLEANING**",
"_____no_output_____"
],
[
"### **1.3.1 Fixing column names**",
"_____no_output_____"
]
],
[
[
"# Removing spaces in the columns names\nautolib.columns = autolib.columns.str.lower().str.replace(\" \", \"\")\n# confirming the columns names\nautolib.columns",
"_____no_output_____"
],
[
"# Dropping columns we do not need for this analysis\n# We are only dealing with Blue cars only for this analysis. \n\nautolib.drop(['utilib_taken_sum', 'utilib_returned_sum', 'utilib_14_taken_sum', \n 'utilib_14_returned_sum'], axis = 1, inplace = True)\n\n# confirming that we only have the relevant columns\nautolib.head()",
"_____no_output_____"
]
],
[
[
"### **1.3.2 Missing values**",
"_____no_output_____"
]
],
[
[
"# Missing values\nautolib.isnull().sum()",
"_____no_output_____"
]
],
[
[
"We have no mising values in our dataset",
"_____no_output_____"
],
[
"### **1.3.3 Anomalies**",
"_____no_output_____"
]
],
[
[
"# Checking for Anomalies\n# duplicates \n\nautolib_duplicate = autolib[autolib.duplicated()]\nautolib_duplicate.shape",
"_____no_output_____"
]
],
[
[
"There are no duplicated rows in the dataset",
"_____no_output_____"
],
[
"## **1.4 UNIVARIATE ANALYSIS**",
"_____no_output_____"
]
],
[
[
"#Description of all the numerical data columns\nautolib.describe()",
"_____no_output_____"
],
[
"# mean,std,min,max and the IQR of Blue cars taken and returned\nauto= autolib[['postalcode','bluecars_taken_sum', 'bluecars_returned_sum','day_type']].describe()\nauto",
"_____no_output_____"
],
[
"# Variance, Kurtosis and Skewness\nprint('Variance, Kurtosis and Skewness for Blue cars taken')\nprint(\"The Variance: \",autolib.bluecars_taken_sum.var())\nprint(\"The Kurtosis: \",autolib.bluecars_taken_sum.kurt())\nprint(\"The Skewness: \",autolib.bluecars_taken_sum.skew())\n",
"Variance, Kurtosis and Skewness for Blue cars taken\nThe Variance: 34383.01611333789\nThe Kurtosis: 6.172692305510042\nThe Skewness: 2.4063548974959086\n"
],
[
"print('Variance, Kurtosis and Skewness for Blue cars returned')\nprint(\"The Variance: \",autolib.bluecars_returned_sum.var())\nprint(\"The Kurtosis: \",autolib.bluecars_returned_sum.kurt())\nprint(\"The Skewness: \",autolib.bluecars_returned_sum.skew())",
"Variance, Kurtosis and Skewness for Blue cars returned\nThe Variance: 34410.819413706275\nThe Kurtosis: 6.1862880957582345\nThe Skewness: 2.412084978838923\n"
]
],
[
[
"### **1.4.1 Visualizations**",
"_____no_output_____"
],
[
"#### **1.4.1.1 Boxplots**",
"_____no_output_____"
]
],
[
[
"# Boxplots\na = sns.boxplot(autolib['bluecars_taken_sum'],showmeans = True)",
"_____no_output_____"
],
[
"b = sns.boxplot(autolib['bluecars_returned_sum'],showmeans = True)",
"_____no_output_____"
]
],
[
[
"#### **1.4.1.1 Histogram**",
"_____no_output_____"
]
],
[
[
"#Plot histogram showing distribution of the BlueCars taken column\nsns.set(style='ticks', color_codes=True)\nbt_hist = sns.FacetGrid(autolib)\nbt_hist.map(plt.hist, 'bluecars_taken_sum', bins=20)",
"_____no_output_____"
],
[
"#Plot histogram showing distribution of the BlueCars taken column\nsns.set(style='ticks', color_codes=True)\nbt_hist = sns.FacetGrid(autolib)\nbt_hist.map(plt.hist, 'bluecars_returned_sum', bins=20)",
"_____no_output_____"
]
],
[
[
"## **1.5 BIVARIATE ANALYSIS**",
"_____no_output_____"
]
],
[
[
"sns.pairplot(autolib,hue = 'day_type')",
"_____no_output_____"
],
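[
"# A quick numeric check of the relationship visualized here (a minimal sketch using only\n# columns already in `autolib`): the Pearson correlation coefficient between blue cars\n# taken and returned. A value close to +1 indicates a strong positive linear relationship.\nautolib['bluecars_taken_sum'].corr(autolib['bluecars_returned_sum'])",
"_____no_output_____"
],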
[
"# Using Matplotlib: Plotting our scatterplot to compare two numerical the variables\nplt.figure(dpi = 100)\nplt.scatter(autolib['bluecars_taken_sum'], autolib['bluecars_returned_sum'], color = 'purple')\nplt.title('A scatter plot of Bluecars returned vs Bluecars taken', color = 'black')\nplt.xlabel('bluecars_taken_sum')\nplt.ylabel('bluecars_returned_sum')\nplt.show()",
"_____no_output_____"
]
],
[
[
"There is strong positive correlation between Bluecars returned vs taken.\n\nAs the blue cars taken increases, the bluecar returned also increases.",
"_____no_output_____"
],
[
"## **1.7 MULTIVARIATE ANALYSIS**",
"_____no_output_____"
],
[
"Here, model will try to predict station type given ('postalcode', 'bluecars_taken_sum', 'bluecars_returned_sum' and 'day_type')\n\n",
"_____no_output_____"
]
],
[
[
"p=['postalcode','bluecars_taken_sum', 'bluecars_returned_sum','day_type']\nt=[i for i in p]\ndf=pd.DataFrame(autolib[t])\ndf.head()",
"_____no_output_____"
],
[
"# label encoding\nfrom sklearn.preprocessing import LabelEncoder\nlabel_encoder= LabelEncoder()\ndf['postalcode']=label_encoder.fit_transform(df['postalcode'])\ndf['bluecars_taken_sum']=label_encoder.fit_transform(df['bluecars_taken_sum'])\ndf['bluecars_returned_sum']=label_encoder.fit_transform(df['bluecars_returned_sum'])\ndf.head()",
"_____no_output_____"
],
[
"#Separating features and labels\nX = df.drop('postalcode', 1)\ny = df['postalcode']",
"_____no_output_____"
],
[
"#Split the data into a training set and testing set.\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)",
"_____no_output_____"
]
],
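[
[
"# A minimal sketch completing the multivariate step, assuming a RandomForestClassifier\n# is an acceptable baseline (the cells above stop after train_test_split). The\n# hyperparameters are illustrative, and accuracy will likely be modest given the many\n# postal-code classes.\nfrom sklearn.ensemble import RandomForestClassifier\n\nrf = RandomForestClassifier(n_estimators=100, random_state=0)\nrf.fit(X_train, y_train)\nrf.score(X_test, y_test)  # mean accuracy on the held-out 20% split",
"_____no_output_____"
]
],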
[
[
"## **1.6 HYPOTHESIS TESTING**",
"_____no_output_____"
],
[
"### **Hypothesis Testing**\n",
"_____no_output_____"
],
[
"We would like to test and see whether there is a day on the weekend where more blue cars are taken. \n\n**Null Hypothesis**\n\n**Ho:** No of blue cars taken on *Saturday* are more than Sunday\n\n**Alternative Hypothesis**\n\n**Ha:** No of cars taken on Saturday are not more than the cars taken on Sunday\n\nOur level of significance shall be 0.05 \n\nResearch allows a 5% error This means there is a 5% risk that we will be rejecting null when its true.",
"_____no_output_____"
],
[
"### **Sampling**",
"_____no_output_____"
],
[
"Separate data into weekend entries",
"_____no_output_____"
]
],
[
[
"weekend=autolib[(autolib['day_type']=='weekend')]\nweekend\n",
"_____no_output_____"
],
[
"# Simple Random Sampling \nweekend_sample = weekend.sample(n = 10, replace=\"False\")\nweekend_sample",
"_____no_output_____"
],
[
"for i in weekend_sample[\"dayofweek\"]:\n if i == 5:\n weekend_sample[\"day_5\"]=weekend_sample['dayofweek']==5\n else:\n weekend_sample[\"day_6\"]=weekend_sample['dayofweek']==6\nweekend_sample",
"_____no_output_____"
],
[
"# Find sum of the blue cars taken for the different days\ndf2 = weekend_sample.groupby(weekend_sample[\"dayofweek\"]).bluecars_taken_sum.sum()\ndf2",
"_____no_output_____"
],
[
"# Sum of blur cars returned\ndf2 = weekend_sample.groupby(weekend_sample[\"dayofweek\"]).bluecars_returned_sum.sum()\ndf2",
"_____no_output_____"
],
[
"# Mean of blue cars taken\ndf2 = weekend_sample.groupby(weekend_sample[\"dayofweek\"]).bluecars_taken_sum.mean()\ndf2",
"_____no_output_____"
],
[
"# Mean of blue cars returned\ndf2 = weekend_sample.groupby(weekend_sample[\"dayofweek\"]).bluecars_returned_sum.mean()\ndf2",
"_____no_output_____"
],
[
"# Std deviation of blue cars taken\ndf2 = weekend_sample.groupby(weekend_sample[\"dayofweek\"]).bluecars_taken_sum.std()\ndf2",
"_____no_output_____"
],
[
"# Std deviation of blue cars returned\ndf2 = weekend_sample.groupby(weekend_sample[\"dayofweek\"]).bluecars_returned_sum.std()\ndf2",
"_____no_output_____"
]
],
[
[
"### **Test Statistics**\nThe sample we are working with is less than 30. T-test will be used.",
"_____no_output_____"
]
],
[
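[
"# A minimal sketch of the same comparison using scipy's built-in two-sample t-test,\n# assuming both days appear in the random sample and that we compare Saturday\n# (dayofweek == 5) against Sunday (dayofweek == 6) bluecars_taken_sum values.\n# equal_var=False (Welch's t-test) is an assumption made here because the two groups\n# showed different standard deviations above; the cells below reproduce the notebook's\n# original manual calculation.\nfrom scipy import stats\n\nsat = weekend_sample[weekend_sample['dayofweek'] == 5]['bluecars_taken_sum']\nsun = weekend_sample[weekend_sample['dayofweek'] == 6]['bluecars_taken_sum']\nstats.ttest_ind(sat, sun, equal_var=False)",
"_____no_output_____"
],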
[
"# Saturday Blue cars taken\nx = (107.2 - 110)/89.510893",
"_____no_output_____"
],
[
"# Saturday blue cars returned\nt = (106.6-110)/88.455073",
"_____no_output_____"
],
[
"# Sunday blue cars taken\ny = (172.2 - 175)/ 244.060853",
"_____no_output_____"
],
[
"# Sunday blue cars returned\nh = (173.0 - 175)/ 247.313566",
"_____no_output_____"
]
],
[
[
"### **P Value**",
"_____no_output_____"
]
],
[
[
"#Blue cars taken\nfrom scipy import stats\nfrom scipy.stats import norm\nprob = stats.norm.cdf(x)\nprob",
"_____no_output_____"
],
[
"prob = stats.norm.cdf(t)\nprob",
"_____no_output_____"
]
],
[
[
"The p value is less than the level of significance. Therefore, we reject the null hypothesis",
"_____no_output_____"
]
],
[
[
"## P value \nprob = stats.norm.cdf(y)\nprob",
"_____no_output_____"
],
[
"prob = stats.norm.cdf(h)\nprob",
"_____no_output_____"
]
],
[
[
"The p value is less than the level of significance. Therefore, we reject the null hypothesis",
"_____no_output_____"
],
[
"### **CONCLUSION**\n\nWe therefore reject the null hypothesis. We also agree that most blue cars are used on Sunday as compared to Saturday.",
"_____no_output_____"
],
[
"### **RECOMMENDATION**\nThe company should make the blue cars readily available for consumers on this day.This shall increase the profit margin for the company",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
4a8009577bb20a84176e398f68d3f6733136e91a
| 131,164 |
ipynb
|
Jupyter Notebook
|
results/smmart_protein_rna_tissue10_result_1.ipynb
|
bgruening/Galaxy-ML-1
|
47514940c7ac39d6ca1d595b58b5d1311b3f3840
|
[
"MIT"
] | null | null | null |
results/smmart_protein_rna_tissue10_result_1.ipynb
|
bgruening/Galaxy-ML-1
|
47514940c7ac39d6ca1d595b58b5d1311b3f3840
|
[
"MIT"
] | null | null | null |
results/smmart_protein_rna_tissue10_result_1.ipynb
|
bgruening/Galaxy-ML-1
|
47514940c7ac39d6ca1d595b58b5d1311b3f3840
|
[
"MIT"
] | null | null | null | 50.081711 | 6,193 | 0.499604 |
[
[
[
"import warnings\nimport pprint\nimport skrebate\nimport imblearn\nfrom imblearn import under_sampling, over_sampling, combine\nfrom imblearn.pipeline import Pipeline as imbPipeline\nfrom sklearn import (preprocessing, svm, linear_model, ensemble, naive_bayes,\n tree, neighbors, decomposition, kernel_approximation, cluster)\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.base import clone\n\nfrom sklearn.compose import TransformedTargetRegressor\nfrom sklearn.model_selection import (KFold, GroupKFold, StratifiedKFold,\n LeaveOneGroupOut, cross_validate,\n cross_val_predict, learning_curve,\n GridSearchCV)\nfrom sklearn.feature_selection import SelectKBest, f_regression, SelectFromModel, VarianceThreshold, f_classif\nfrom sklearn.metrics import (r2_score, auc, roc_auc_score, balanced_accuracy_score, \n average_precision_score, confusion_matrix, roc_curve,\n precision_recall_curve)\nfrom sklearn.metrics.scorer import roc_auc_scorer\nfrom sklearn.preprocessing import QuantileTransformer, quantile_transform, StandardScaler, MinMaxScaler\nfrom sklearn.utils.class_weight import compute_class_weight, compute_sample_weight\nfrom sklearn.utils.validation import check_memory\nfrom xgboost import XGBRegressor, XGBClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\nwarnings.simplefilter('ignore')",
"/Users/guq/galaxy/.venv/lib/python2.7/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file \"/Users/guq/galaxy/.venv/lib/python2.7/site-packages/matplotlib/mpl-data/matplotlibrc\", line #620\n (fname, cnt))\n"
],
[
"import os\nimport sys",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport re\n\n\nimport plotly.plotly as py\nimport plotly.graph_objs as go\nfrom plotly import __version__\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\n\ninit_notebook_mode(connected=True)",
"_____no_output_____"
]
],
[
[
"## result",
"_____no_output_____"
]
],
[
[
"work_dir = './drug_respond/results/smmart_proten_rna_tissue10/'\nsub1 = 'Hyperparameter Search on collection 18 _ randomforest/'\nsub2 = 'Hyperparameter Search on collection 18 _xgboost_2/'\nsub3 = 'Hyperparameter Search on collection 19 _iraps/'\nsub4 = 'Hyperparameter Search on collection 19 _xgbregressor'\n",
"_____no_output_____"
],
[
"def concate_best_result(folder, file_name, scorer, classifier, results):\n path = os.path.join(folder, file_name)\n res = pd.read_csv(path, sep='\\t')\n \n res_sort = res.sort_values(['mean_test_'+scorer, 'std_test_'+scorer], ascending=[False, True])\n res_best = res_sort[['mean_test_'+scorer, 'std_test_'+scorer,'params']].head(1).reset_index(drop=True)\n res_best.insert(loc=0, column='dataset', value=file_name[:-11])\n res_best.insert(loc=0, column='classifier', value=classifier)\n if results is None:\n results = res_best\n else:\n results = results.append(res_best, ignore_index=True)\n return results",
"_____no_output_____"
],
[
"# best AP scores\nfiles1 = os.listdir(work_dir+sub1)\nfiles2 = os.listdir(work_dir+sub2)\nfiles3 = os.listdir(work_dir+sub3)\nfiles4 = os.listdir(work_dir+sub4)\nresults = None\nscorer = 'binarize_average_precision_scorer'\nfor fl in files1:\n results = concate_best_result(work_dir+sub1, fl, scorer, 'RandomForestClassifier', results)\n\nfor fl in files2:\n results = concate_best_result(work_dir+sub2, fl, scorer, 'XGBClassifier', results)\n \nfor fl in files3:\n results = concate_best_result(work_dir+sub3, fl, scorer, 'IRAPSClassifier', results)\n\nfor fl in files4:\n results = concate_best_result(work_dir+sub4, fl, scorer, 'XGBRegressor', results)\n\n\nresults = results.sort_values(['classifier', 'dataset'])\nresults",
"_____no_output_____"
],
[
"# best AP scores\nfiles1 = os.listdir(work_dir+sub1)\nfiles2 = os.listdir(work_dir+sub2)\nfiles3 = os.listdir(work_dir+sub3)\nfiles4 = os.listdir(work_dir+sub4)\nresults_auc = None\nscorer = 'binarize_auc_scorer'\nfor fl in files1:\n results_auc = concate_best_result(work_dir+sub1, fl, scorer, 'RandomForestClassifier', results_auc)\n\nfor fl in files2:\n results_auc = concate_best_result(work_dir+sub2, fl, scorer, 'XGBClassifier', results_auc)\n \nfor fl in files3:\n results_auc = concate_best_result(work_dir+sub3, fl, scorer, 'IRAPSClassifier', results_auc)\n\nfor fl in files4:\n results_auc = concate_best_result(work_dir+sub4, fl, scorer, 'XGBRegressor', results_auc)\n\nresults_auc = results_auc.sort_values(['classifier', 'dataset'])\nresults_auc",
"_____no_output_____"
],
[
"data1 = go.Bar(\n x = results[results['classifier'] == 'IRAPSClassifier']['dataset'],\n y = results[results['classifier'] == 'IRAPSClassifier']['mean_test_binarize_average_precision_scorer'],\n name = 'IRAPS_AP'\n)\ndata2 = go.Bar(\n x = results[results['classifier'] == 'RandomForestClassifier']['dataset'],\n y = results[results['classifier'] == 'RandomForestClassifier']['mean_test_binarize_average_precision_scorer'],\n name = 'RF_AP'\n)\ndata3 = go.Bar(\n x = results[results['classifier'] == 'XGBClassifier']['dataset'],\n y = results[results['classifier'] == 'XGBClassifier']['mean_test_binarize_average_precision_scorer'],\n name = 'XGBC_AP'\n)\n\ndata4 = go.Bar(\n x = results[results['classifier'] == 'XGBRegressor']['dataset'],\n y = results[results['classifier'] == 'XGBRegressor']['mean_test_binarize_average_precision_scorer'],\n name = 'XGBRegr_AP'\n)\n\ndata5 = go.Bar(\n x = results_auc[results_auc['classifier'] == 'IRAPSClassifier']['dataset'],\n y = results_auc[results_auc['classifier'] == 'IRAPSClassifier']['mean_test_binarize_auc_scorer'],\n name = 'IRAPS_ROC-AUC'\n)\ndata6 = go.Bar(\n x = results_auc[results_auc['classifier'] == 'RandomForestClassifier']['dataset'],\n y = results_auc[results_auc['classifier'] == 'RandomForestClassifier']['mean_test_binarize_auc_scorer'],\n name = 'RF_ROC-AUC'\n)\n\ndata7 = go.Bar(\n x = results_auc[results_auc['classifier'] == 'XGBClassifier']['dataset'],\n y = results_auc[results_auc['classifier'] == 'XGBClassifier']['mean_test_binarize_auc_scorer'],\n name = 'XGBC_ROC-AUC'\n)\ndata8 = go.Bar(\n x = results_auc[results_auc['classifier'] == 'XGBRegressor']['dataset'],\n y = results_auc[results_auc['classifier'] == 'XGBRegressor']['mean_test_binarize_auc_scorer'],\n name = 'XGBRegr_ROC-AUC'\n)\n\nlayout = go.Layout(\n xaxis=dict(\n title='Dataset'\n ),\n yaxis=dict(\n title='Performance score'\n ),\n barmode = 'group'\n)\nfig = go.Figure(data=[data1, data2, data3, data4], layout=layout)\niplot(fig)\n\nfig = go.Figure(data=[data5, data6, data7,data8], layout=layout)\niplot(fig)\n# To show plot, paste the link to this GitHub notebook into http://nbviewer.jupyter.org/",
"_____no_output_____"
],
[
"trace1 = {\n \"type\": 'violin',\n \"x\": results['classifier'],\n \"y\": results['mean_test_binarize_average_precision_scorer'],\n \"legendgroup\": 'AP',\n \"scalegroup\": 'AP',\n \"name\": 'AP',\n \"box\": {\n \"visible\": True\n },\n \"meanline\": {\n \"visible\": True\n },\n \"line\": {\n \"color\": 'blue'\n }\n}\n\ntrace2 = {\n \"type\": 'violin',\n \"x\": results_auc['classifier'],\n \"y\": results_auc['mean_test_binarize_auc_scorer'],\n \"legendgroup\": 'ROC-AUC',\n \"scalegroup\": 'ROC-AUC',\n \"name\": 'ROC-AUC',\n \"box\": {\n \"visible\": True\n },\n \"meanline\": {\n \"visible\": True\n },\n \"line\": {\n \"color\": 'pink'\n }\n}\n\nlayout = {\n \"yaxis\": {\n \"zeroline\": False,\n },\n \"violinmode\": 'group'\n}\nfig = go.Figure(data=[trace1, trace2], layout=layout)\niplot(fig)\n# To show plot, paste the link to this GitHub notebook into http://nbviewer.jupyter.org/",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a8012c013eec64bf15c8faf30bc6aacf5e6bce5
| 25,791 |
ipynb
|
Jupyter Notebook
|
_build/jupyter_execute/curriculum-notebooks/Mathematics/FractionMultiplication/fraction-multiplication.ipynb
|
BryceHaley/curriculum-jbook
|
d1246799ddfe62b0cf5c389394a18c2904383437
|
[
"CC-BY-4.0"
] | 1 |
2022-03-18T18:19:40.000Z
|
2022-03-18T18:19:40.000Z
|
_build/jupyter_execute/curriculum-notebooks/Mathematics/FractionMultiplication/fraction-multiplication.ipynb
|
callysto/curriculum-jbook
|
ffb685901e266b0ae91d1250bf63e05a87c456d9
|
[
"CC-BY-4.0"
] | null | null | null |
_build/jupyter_execute/curriculum-notebooks/Mathematics/FractionMultiplication/fraction-multiplication.ipynb
|
callysto/curriculum-jbook
|
ffb685901e266b0ae91d1250bf63e05a87c456d9
|
[
"CC-BY-4.0"
] | null | null | null | 43.346218 | 438 | 0.550774 |
[
[
[
"\n\n<a href=\"https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Mathematics/FractionMultiplication/FractionMultiplication.ipynb&depth=1\" target=\"_parent\"><img src=\"https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true\" width=\"123\" height=\"24\" alt=\"Open in Callysto\"/></a>",
"_____no_output_____"
]
],
[
[
"import uiButtons\n%uiButtons",
"_____no_output_____"
]
],
[
[
"# Fractions and Multiplication\n\n## Visualizing Fraction Multiplication\n\n## Introduction\nAn important skill to have when it comes to fractions is knowing how to multiply them together.<br>\n\nAs we know, fractions are of the form $\\frac{a}{b}$ with $a$ and $b$ integers and $b\\neq 0$. <br>\n\nYou can think of $\\frac{a}{b}$ as the number you get when you do $a\\div b$. <br>\nIf we think of a fraction as a division problem then it makes sense that it works well with multiplication.<br>\nUnlike addition, multiplying fractions is easy and straightforward. <br>\n\nIn this notebook we will look into two forms of fraction multiplication:\n- multiplying two fractions together (e.g. $\\dfrac{4}{7} \\times \\dfrac{2}{3}$ )\n- multiplying a fraction by an integer (e.g. $\\dfrac{4}{7} \\times 3$ )",
"_____no_output_____"
],
[
"## Procedure\nAs mentioned earlier, multiplying two fractions together is simple.<br>\nLet's say we want to multiply the fractions $\\dfrac{4}{7}$ and $\\dfrac{2}{3}$.<br>\nAll we have to do is multiply the numerators (top numbers) together, then multiply the denominators (bottom numbers) together. Let's take a look: \n\n$$\n\\frac{4}{7} \\times \\frac{2}{3}=\\frac{4\\times 2}{7\\times 3}=\\frac{8}{21}\n$$ \n\nLet's try another example. Take the fractions $\\dfrac{3}{5}$ and $\\dfrac{2}{3}$. To multiply them we multiply the numerators together and the denominators together: \n\n$$\n\\frac{3\\times 2}{5\\times 3}=\\frac{6}{15}\n$$ \n\nIn this example, you might notice that the result is not in lowest terms: both 6 and 15 are divisible by 3, so we get $\\dfrac{6}{15} = \\dfrac25$. In a later notebook, we'll focus on mechanics like this. For now, we want to focus on a visual understanding of the problem.\n\nNow that we know how to multiply two fractions, let's think about what it actually means.<br>\nRecall that a fraction simply represents a part of something. We can think of multiplying fractions together as taking a part of another part. In other words $\\dfrac{1}{2}\\times\\dfrac{1}{2}$ is like saying $\\dfrac{1}{2}$ of $\\dfrac{1}{2}$ (one half **of** one half). If we have $\\dfrac{1}{2}$ of a pizza and we want $\\dfrac{1}{2}$ of that half what do we end up with?<br>\n\n<img src=\"./images/pizza.png\" width=\"400px\">\n\nWe get $\\dfrac{1}{4}$ because $\\dfrac{1}{2}\\times\\dfrac{1}{2}=\\dfrac{1}{4}$.<br>\n\nWatch the video below to help us further visualize this concept.",
"_____no_output_____"
]
],
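[
[
"# A minimal sketch verifying the worked examples above with Python's built-in fractions\n# module (added for illustration; the notebook's own interactive widgets follow).\n# Fraction multiplies numerators and denominators and reduces to lowest terms.\nfrom fractions import Fraction\n\nprint(Fraction(4, 7) * Fraction(2, 3))  # 8/21\nprint(Fraction(3, 5) * Fraction(2, 3))  # 6/15 in lowest terms: 2/5\nprint(Fraction(1, 2) * Fraction(1, 2))  # 1/4, the pizza example",
"_____no_output_____"
]
],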
[
[
"%%html\n<div align=\"middle\">\n<iframe id=\"vid1\" width=\"640\" height=\"360\" src=\"https://www.youtube.com/embed/hr_mTd-oJ-M\" frameborder=\"0\" allow=\"autoplay; encrypted-media\" allowfullscreen></iframe> \n<p><a href=\"https://www.youtube.com/channel/UC4a-Gbdw7vOaccHmFo40b9g\" target=\"_blank\">Click here</a> for more videos by Khan Academy</p>\n</div>\n<script>\n $(function() {\n var reachable = false;\n var myFrame = $('#vid1');\n var videoSrc = myFrame.attr(\"src\");\n myFrame.attr(\"src\", videoSrc)\n .on('load', function(){reachable = true;});\n setTimeout(function() {\n if(!reachable) {\n var ifrm = myFrame[0];\n ifrm = (ifrm.contentWindow) ? ifrm.contentWindow : (ifrm.contentDocument.document) ? ifrm.contentDocument.document : ifrm.contentDocument;\n ifrm.document.open();\n ifrm.document.write('If the video does not start click <a href=\"' + videoSrc + '\" target=\"_blank\">here</a>');\n ifrm.document.close();\n }\n }, 2000)\n });\n</script>",
"_____no_output_____"
]
],
[
[
"## Interactive visualization\n\nThe widget below allows you to visualize fraction multiplication as shown in the video. To begin, enter a fraction in the boxes below.",
"_____no_output_____"
]
],
[
[
"%%html\n<script src=\"./d3/d3.min.js\"></script>\n<!-- <script src=\"https://d3js.org/d3.v3.min.js\"></script> -->\n<script type=\"text/x-mathjax-config\">\n MathJax.Hub.Config({\n tex2jax: {inlineMath: [['$','$'], ['\\\\(','\\\\)']]}\n });\n</script>\n<script src=\"https://code.jquery.com/jquery-1.10.2.js\"></script>\n<style>\n .fractionInput {\n max-width: 40px;\n }\n \n .fractionBar {\n width: 40px;\n height: 3px;\n background-color: #000000;\n }\n \n .ingredientsInput {\n margin-left: 10px;\n margin-right: 10px;\n max-width: 40px;\n /* float: right; */\n }\n \n #speech {\n margin: 50px;\n font-size: 150%;\n }\n \n li {\n margin-bottom: 15px;\n }\n</style>",
"_____no_output_____"
],
[
"%%html\n<div class=\"fractionInputs\" style=\"margin:20px\">\n <h1 id=\"leftInputFractionText\" style=\"float: left; display: none\"></h1>\n <div id=\"opperandInput\" style=\"float: left; display: block\">\n <input type=\"text\" class=\"fractionInput form-control form-control-sm\" id=\"oppNumerator\" placeholder=\"0\" style=\"margin-bottom: -10px;\">\n <hr align=\"left\" class=\"fractionBar\">\n <input type=\"text\" class=\"fractionInput form-control form-control-sm\" id=\"oppDenominator\" placeholder=\"1\" style=\"margin-top: -10px;\">\n </div>\n <button type=\"button\" id=\"continueBtn\" class=\"btn btn-primary buttons\" style=\"margin: 30px\">Continue</button>\n\n </div>\n\n <div class=\"canvasDiv\" style=\"clear: left\">\n <svg height=\"500\" width=\"500\" viewbox=\"0 0 500 500\" mlns=\"http://www.w3.org/2000/svg\" id=\"mainCanvas\" style=\"float: left\">\n <rect id=\"mainBox\" height=\"480\" width=\"480\" x=\"10\" y=\"10\" style=\"outline: solid #000000 3px; fill:#ffffff\"></rect>\n <rect id=\"leftOpperand\" height=\"480\" width=\"0\" x=\"10\" y=\"10\"></rect>\n <rect id=\"rightOpperand\" height=\"0\" width=\"480\" x=\"10\" y=\"10\"></rect>\n </svg>\n </div>\n <div>\n <p id=\"speech\">Enter a fraction inside the boxes provided then click continue.</p>\n </div>\n \n <div style=\"clear: left; margin-left: 10px\">\n <button type=\"button\" id=\"resetFractionBoxBtn\" class=\"btn btn-primary buttons\">Reset</button>\n </div>\n",
"_____no_output_____"
]
],
[
[
"## Multiplying a fraction by an integer\n\nIn this section we will talk about multiplying a fraction like $\\dfrac{4}{7}$, with an integer such as $3$. A good example of when this could be useful is when you need to double a recipe. <br>\n\nDoing multiplication of this form is simply a special case of multiplying two fractions together since any integer, such as $3$ in this case, can be rewritten as $\\dfrac{3}{1}$. On a calculator, try inputting any number divided by $1$, and you will always get back the original number. <br>\n\nLet's demonstrate this with an example. To multiply the fraction $\\dfrac{4}{7}$ and the integer $3$, remember that we can write $3$ as $\\dfrac31$. We get\n\n$$\n\\frac{4}{7}\\times\\frac{3}{1} = \\frac{4\\times 3}{7\\times 1}= \\frac{12}{7} \n$$\n\n**Note that $\\dfrac{3}{1}$ is an improper fraction. Improper fractions follow all the same rules for multiplication as proper fractions.**\n\nThe big take away from this is that the denominator does not change as it is simply multiplied by $1$. This means we did not change the \"whole\", we only changed how many parts of the \"whole\" we have (the numerator). In effect all we did was triple our fraction, since our constant was 3. <br>\n\nLet's practice what we just learned with a recipe example. Below you will see the ingredient list for the famous **Fresh Tomato and Basil Pasta Salad** recipe. This recipe makes enough for 4 servings, but we would like to double the recipe in order to serve 8 people. Apply what we have learned so far to double the ingredients list for the **tomato and basil pasta salad** in order to make 8 servings. \n\n(Enter your answer in the provided boxes. Fractions should be written using the _forward slash_ key \"/\" eg. 5/8. When your done click _check answer_ to see if you are correct!)",
"_____no_output_____"
]
],
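[
[
"# A minimal sketch of the fraction-times-integer rule above, again using the built-in\n# fractions module for illustration: the denominator stays the same, and doubling the\n# recipe's 1/3 cup of basil is the same computation.\nfrom fractions import Fraction\n\nprint(Fraction(4, 7) * 3)  # 12/7, an improper fraction\nprint(Fraction(1, 3) * 2)  # 2/3 cup of basil for 8 servings",
"_____no_output_____"
]
],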
[
[
"%%html\n<div class=\"ingredientsList\">\n <h1>Fresh Tomato and Basil Pasta Salad</h1>\n <img src=\"./images/pastaSalad.jpg\" width=250 style=\"float: left; margin-right: 50px; box-shadow: 5px 6px 25px 3px grey\">\n\n <ul style=\"max-width: 700px; margin-bottom\">\n <li><label>3 medium ripe tomatoes, chopped --></label><input id=\"tomatoes\" class=\"ingredientsInput\"></input><label>tomatoes</label></li>\n <li><label>1/3 cup thinly sliced fresh basil --></label><input id=\"basil\" class=\"ingredientsInput\"></input><label>cup</label></li>\n <li><label>2 Tbsp. olive oil --></label><input id=\"olivOil\" class=\"ingredientsInput\"></input><label>Tbsp.</label></li>\n <li><label>1 clove garlic, minced --></label><input id=\"garlic\" class=\"ingredientsInput\"></input><label>clove</label></li>\n <li><label>1/2 tsp. salt --></label><input id=\"salt\" class=\"ingredientsInput\"></input><label>tsp.</label></li>\n <li><label>1/4 tsp. pepper --></label><input id=\"pepper\" class=\"ingredientsInput\"></input><label>tsp.</label></li>\n <li><label>8 oz. rotini pasta pasta, uncooked --></label><input id=\"pasta\" class=\"ingredientsInput\"></input><label>oz.</label></li>\n <li><label>3/4 cup Parmesan Style Grated Topping --></label><input id=\"parmesan\" class=\"ingredientsInput\"></input><label>cup</label></li>\n </ul>\n <button type=\"button\" id=\"checkAnswerBtn\">Check Answers</button>\n <button type=\"button\" id=\"resetBtn\">Reset</button>\n</div>\n<div>\n <h2 id=\"answerStatus\"></h2>\n</div>",
"_____no_output_____"
]
],
[
[
"## Conclusion\nThroughout this notebook we looked at how easy multiplying fractions together really is. We also looked at how to work with a fraction multiplied by a constant. Lets recap what we have learned:\n\n- When multiplying two fractions together we multiply the numerators together and the denominators together: $\\dfrac{a}{b}\\times\\dfrac{c}{d}=\\dfrac{a \\times c}{b \\times d} = \\dfrac{ac}{bd}$\n\n- A constant can always be rewritten as the constant over 1: $c = \\dfrac{c}{1}$\n\n- Multiplying a fraction with a constant, multiply the numerator by the constant and keep the denominator the same: $\\dfrac{a}{b}\\times c=\\dfrac{a\\times c}{b}=\\dfrac{ac}{b}$\n\n- Multiplying two fractions together is the same as saying _a part of a part_: $\\dfrac{a}{b}\\times\\dfrac{c}{d}$ is like saying $\\dfrac{a}{b}$ **of** $\\dfrac{c}{d}$ (The equation $\\dfrac{3}{5}\\times\\dfrac{1}{4}$ is the same as _three fifths **of** one quarter_)",
"_____no_output_____"
]
],
[
[
"%%html\n<script>\nvar leftOpperand = {\n id: 'leftOpperand',\n numerator: Number(0),\n denominator: Number(0),\n colour: '#ff0066'\n};\n\nvar rightOpperand = {\n id: 'rightOpperand',\n numerator: Number(0),\n denominator: Number(0),\n colour: '#0000ff'\n};\n\nvar currentState = 0;\n\nvar getOpperandInput = function(numeratorInput, denominatorInput, opperand) {\n opperand.numerator = document.getElementById(numeratorInput).value;\n opperand.denominator = document.getElementById(denominatorInput).value;\n\n}\n\nvar verticalDivide = function(xVal, lineNum) {\n var i = xVal;\n while(lineNum > 0){\n addLine(Number(i + 10), Number(i + 10), 10, Number(d3.select('#mainBox').attr('height')) + 10);\n i += xVal;\n lineNum --;\n }\n};\n\nvar horizontalDivide = function(xVal, lineNum) {\n var i = Number(xVal);\n while(lineNum > 0){\n addLine(10, Number(d3.select('#mainBox').attr('width')) + 10, Number(i + 10), Number(i +10));\n i += xVal;\n lineNum --;\n }\n};\n\nvar addLine = function (x1, x2, y1, y2,) {\n var dashed = '0,0';\n var stroke = 2;\n\n d3.select('#mainCanvas').append('line')\n .attr('class', 'divLine ')\n .attr('x1', x1)\n .attr('x2', x2)\n .attr('y1', y1)\n .attr('y2', y2)\n .style('stroke', 'black')\n .style('stroke-width', stroke);\n};\n\nvar fillBox = function(box, width, height, colour, opacity) {\n d3.select('#' + box.id)\n .style('fill', colour)\n .style('opacity', opacity)\n .transition().delay(function (d, i) {\n return i * 300;\n }).duration(500)\n .attr('width', width)\n .attr('height', height);\n};\n\nvar changeOpacity = function(box, opacity) {\n d3.select('#' + box.id).transition().delay(function (d, i) {\n return i * 300;\n }).duration(500)\n .style('opacity', opacity);\n\n d3.selectAll('.divLine').transition().delay(function (d, i) {\n return i * 100;\n }).duration(200)\n .style('opacity', opacity);\n};\n\nvar resetInputs = function() {\n d3.select('#continueBtn').attr('disabled', null);\n d3.selectAll('.divLine').remove();\n d3.select('#leftOpperand').attr('width', 0);\n d3.select('#rightOpperand').attr('height', 0);\n d3.select('#leftInputFractionText').text('').style('display', 'none');\n clearInput('oppNumerator');\n clearInput('oppDenominator');\n leftOpperand.numerator = Number(0);\n leftOpperand.denominator = Number(0);\n rightOpperand.numerator = Number(0);\n rightOpperand.denominator = Number(0);\n\n};\n\nvar isValid = function(numerator, denominator) {\n if (numerator < 0 || numerator > 12) {\n return false;\n }\n if (denominator <= 0 || denominator > 12) {\n return false;\n }\n return (numerator < denominator);\n};\n\nvar updateMathJax = function() {\n MathJax.Hub.Queue([\"Typeset\",MathJax.Hub]);\n};\n\nvar showInputBox = function(inputId) {\n d3.select('#' + inputId).style('display', 'block');\n};\n\nvar hideInputBox = function(inputId) {\n d3.select('#' + inputId).style('display', 'none');\n};\n\nvar clearInput = function(inputId) {\n document.getElementById(inputId).value = '';\n}\n\nvar stateControler = function(state) {\n currentState = state;\n setSpeech(state);\n\n switch(state) {\n case 0 :\n resetInputs();\n showInputBox('opperandInput');\n break;\n case 1 :\n getOpperandInput('oppNumerator', 'oppDenominator', leftOpperand);\n d3.select('#leftInputFractionText')\n .text('$\\\\frac{'+leftOpperand.numerator+'}{'+leftOpperand.denominator+'} \\\\times$')\n .style('display', 'block');\n updateMathJax();\n verticalDivide(Number(d3.select('#mainBox').attr('width')/leftOpperand.denominator), Number(leftOpperand.denominator - 1));\n 
hideInputBox('opperandInput');\n break;\n case 2 :\n fillBox(leftOpperand, Number(d3.select('#mainBox').attr('width')/leftOpperand.denominator) * leftOpperand.numerator, Number(d3.select('#mainBox').attr('height')), leftOpperand.colour, 1);\n clearInput('oppNumerator');\n clearInput('oppDenominator');\n showInputBox('opperandInput');\n break;\n case 3 :\n getOpperandInput('oppNumerator', 'oppDenominator', rightOpperand);\n d3.select('#leftInputFractionText')\n .text('$\\\\frac{'+leftOpperand.numerator+'}{'+leftOpperand.denominator+'} \\\\times$' + '$\\\\frac{'+rightOpperand.numerator+'}{'+rightOpperand.denominator+'}$');\n updateMathJax();\n changeOpacity(leftOpperand, 0);\n horizontalDivide(Number(d3.select('#mainBox').attr('height')/rightOpperand.denominator), Number(rightOpperand.denominator - 1));\n hideInputBox('opperandInput');\n break;\n case 4 :\n fillBox(rightOpperand, Number(d3.select('#mainBox').attr('width')), Number(d3.select('#mainBox').attr('height')/rightOpperand.denominator) * rightOpperand.numerator, rightOpperand.colour, 0.5);\n break;\n case 5 :\n changeOpacity(leftOpperand, 1);\n d3.select('#continueBtn').attr('disabled', true);\n break;\n default:\n console.log('not a valid of state, returning to state 0');\n stateControler(0);\n }\n};\n\nvar speech = [\n \"Enter a fraction in the boxes provided, then click continue.\",\n \"Great! Now we see that the square has been divided into rectangles of equal size. The number of rectangles is given by the denominator. Click continue when ready.\",\n \"Some of the equal parts have been filled in with pink. The numerator equals the number of pink rectangles. The ratio of the area in pink to the total area is our fraction. Enter another fraction to multiply then click continue.\",\n \"Let’s focus on the second fraction. The first one is temporarily hidden for clarity. As before, the number of rectangles we see equals the denominator. Click continue when ready.\",\n \"Now we have a blue section representing the numerator of the second fraction. Click continue to multiply these two fractions.\",\n \"Awesome! The first fraction is back and overlaid with the second fraction. The number of rectangles in the purple section is the numerator of our answer. Notice that this is the product of the numerators. The total number of rectangles is the denominator of the product, and this is just the product of the two denominators!\"\n];\n\nfunction setSpeech(state) {\n d3.select('#speech').text(speech[state]);\n};\n\ndocument.getElementById('continueBtn').onclick = function() {\n if(!isValid(Number(document.getElementById('oppNumerator').value), Number(document.getElementById('oppDenominator').value))){\n alert('Make sure your factions are proper and the denominators less than or equal to 12');\n }\n else {\n stateControler(currentState + 1);\n }\n};\n\ndocument.getElementById('resetFractionBoxBtn').onclick = function() {\n console.log(\"hello\");\n resetInputs();\n stateControler(0);\n};\n</script>",
"_____no_output_____"
],
[
"%%html\n<script type=\"text/javascript\">\n var x = 2; //Recipie multiplyer\n\n getInput('checkAnswerBtn').onclick = function() {\n if(checkAnswers()) {\n d3.select('#answerStatus').text('Correct!! Good job.');\n } else {\n d3.select('#answerStatus').text('Not quite, keep trying!'); \n }\n };\n \n getInput('resetBtn').onclick = function() {\n var inputs = document.getElementsByClassName('ingredientsInput');\n for(var i = 0; i < inputs.length; i++) {\n inputs[i].value = '';\n }\n d3.selectAll('.ingredientsInput').style('background-color', '#ffffff');\n d3.select('#answerStatus').text('');\n };\n\n function checkAnswers() {\n var isCorrect = true;\n if(!checkAnswer('tomatoes', x*3))\n isCorrect = false;\n if(!checkAnswer('basil', x*(1/3)))\n isCorrect = false;\n if(!checkAnswer('olivOil', x*2))\n isCorrect = false;\n if(!checkAnswer('garlic', x*1))\n isCorrect = false;\n if(!checkAnswer('salt', x*(1/2)))\n isCorrect = false;\n if(!checkAnswer('pepper', x*(1/4)))\n isCorrect = false;\n if(!checkAnswer('pasta', x*8))\n isCorrect = false;\n if(!checkAnswer('parmesan', x*(3/4)))\n isCorrect = false;\n\n return isCorrect;\n };\n\n function checkAnswer(id, ans) {\n if(eval(getInput(id).value) === ans) {\n return answerCorrect(id);\n }\n return answerIncorrect(id);\n };\n\n function answerCorrect(id) {\n d3.select('#' + id).style('background-color', '#76D177');\n return true;\n }\n\n function answerIncorrect(id) {\n d3.select('#' + id).style('background-color', '#BB4646');\n return false;\n }\n\n function getInput(id) {\n return document.getElementById(id);\n };\n</script>",
"_____no_output_____"
]
],
[
[
"[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a801bc0fbd096800e3361c8c64aa4284b10f96d
| 241,686 |
ipynb
|
Jupyter Notebook
|
experiments/tl_3v2/jitter1/cores-oracle.run1.framed/trials/14/trial.ipynb
|
stevester94/csc500-notebooks
|
4c1b04c537fe233a75bed82913d9d84985a89177
|
[
"MIT"
] | null | null | null |
experiments/tl_3v2/jitter1/cores-oracle.run1.framed/trials/14/trial.ipynb
|
stevester94/csc500-notebooks
|
4c1b04c537fe233a75bed82913d9d84985a89177
|
[
"MIT"
] | null | null | null |
experiments/tl_3v2/jitter1/cores-oracle.run1.framed/trials/14/trial.ipynb
|
stevester94/csc500-notebooks
|
4c1b04c537fe233a75bed82913d9d84985a89177
|
[
"MIT"
] | null | null | null | 84.62395 | 72,764 | 0.741648 |
[
[
[
"# Transfer Learning Template",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\n \nimport os, json, sys, time, random\nimport numpy as np\nimport torch\nfrom torch.optim import Adam\nfrom easydict import EasyDict\nimport matplotlib.pyplot as plt\n\nfrom steves_models.steves_ptn import Steves_Prototypical_Network\n\nfrom steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper\nfrom steves_utils.iterable_aggregator import Iterable_Aggregator\nfrom steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig\nfrom steves_utils.torch_sequential_builder import build_sequential\nfrom steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader\nfrom steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)\nfrom steves_utils.PTN.utils import independent_accuracy_assesment\n\nfrom torch.utils.data import DataLoader\n\nfrom steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory\n\nfrom steves_utils.ptn_do_report import (\n get_loss_curve,\n get_results_table,\n get_parameters_table,\n get_domain_accuracies,\n)\n\nfrom steves_utils.transforms import get_chained_transform",
"_____no_output_____"
]
],
[
[
"# Allowed Parameters\nThese are allowed parameters, not defaults\nEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)\n\nPapermill uses the cell tag \"parameters\" to inject the real parameters below this cell.\nEnable tags to see what I mean",
"_____no_output_____"
]
],
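[
[
"# A minimal sketch of how this template is driven, assuming papermill is used as\n# described above; the notebook filenames and example parameter values are illustrative\n# assumptions only, so the call is left commented out.\n# import papermill as pm\n# pm.execute_notebook(\n#     'trial.ipynb',           # this template\n#     'trial.output.ipynb',    # executed copy with parameters injected\n#     parameters={'experiment_name': 'demo', 'lr': 1e-4, 'seed': 1337},\n# )",
"_____no_output_____"
]
],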
[
[
"required_parameters = {\n \"experiment_name\",\n \"lr\",\n \"device\",\n \"seed\",\n \"dataset_seed\",\n \"n_shot\",\n \"n_query\",\n \"n_way\",\n \"train_k_factor\",\n \"val_k_factor\",\n \"test_k_factor\",\n \"n_epoch\",\n \"patience\",\n \"criteria_for_best\",\n \"x_net\",\n \"datasets\",\n \"torch_default_dtype\",\n \"NUM_LOGS_PER_EPOCH\",\n \"BEST_MODEL_PATH\",\n \"x_shape\",\n}",
"_____no_output_____"
],
[
"from steves_utils.CORES.utils import (\n ALL_NODES,\n ALL_NODES_MINIMUM_1000_EXAMPLES,\n ALL_DAYS\n)\n\nfrom steves_utils.ORACLE.utils_v2 import (\n ALL_DISTANCES_FEET_NARROWED,\n ALL_RUNS,\n ALL_SERIAL_NUMBERS,\n)\n\nstandalone_parameters = {}\nstandalone_parameters[\"experiment_name\"] = \"STANDALONE PTN\"\nstandalone_parameters[\"lr\"] = 0.001\nstandalone_parameters[\"device\"] = \"cuda\"\n\nstandalone_parameters[\"seed\"] = 1337\nstandalone_parameters[\"dataset_seed\"] = 1337\n\nstandalone_parameters[\"n_way\"] = 8\nstandalone_parameters[\"n_shot\"] = 3\nstandalone_parameters[\"n_query\"] = 2\nstandalone_parameters[\"train_k_factor\"] = 1\nstandalone_parameters[\"val_k_factor\"] = 2\nstandalone_parameters[\"test_k_factor\"] = 2\n\n\nstandalone_parameters[\"n_epoch\"] = 50\n\nstandalone_parameters[\"patience\"] = 10\nstandalone_parameters[\"criteria_for_best\"] = \"source_loss\"\n\nstandalone_parameters[\"datasets\"] = [\n {\n \"labels\": ALL_SERIAL_NUMBERS,\n \"domains\": ALL_DISTANCES_FEET_NARROWED,\n \"num_examples_per_domain_per_label\": 100,\n \"pickle_path\": os.path.join(get_datasets_base_path(), \"oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl\"),\n \"source_or_target_dataset\": \"source\",\n \"x_transforms\": [\"unit_mag\", \"minus_two\"],\n \"episode_transforms\": [],\n \"domain_prefix\": \"ORACLE_\"\n },\n {\n \"labels\": ALL_NODES,\n \"domains\": ALL_DAYS,\n \"num_examples_per_domain_per_label\": 100,\n \"pickle_path\": os.path.join(get_datasets_base_path(), \"cores.stratified_ds.2022A.pkl\"),\n \"source_or_target_dataset\": \"target\",\n \"x_transforms\": [\"unit_power\", \"times_zero\"],\n \"episode_transforms\": [],\n \"domain_prefix\": \"CORES_\"\n } \n]\n\nstandalone_parameters[\"torch_default_dtype\"] = \"torch.float32\" \n\n\n\nstandalone_parameters[\"x_net\"] = [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\":[-1, 1, 2, 256]}},\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":1, \"out_channels\":256, \"kernel_size\":(1,7), \"bias\":False, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":256, \"out_channels\":80, \"kernel_size\":(2,7), \"bias\":True, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 80*256, \"out_features\": 256}}, # 80 units per IQ pair\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n]\n\n# Parameters relevant to results\n# These parameters will basically never need to change\nstandalone_parameters[\"NUM_LOGS_PER_EPOCH\"] = 10\nstandalone_parameters[\"BEST_MODEL_PATH\"] = \"./best_model.pth\"\n\n\n\n\n",
"_____no_output_____"
],
[
"# Parameters\nparameters = {\n \"experiment_name\": \"tl_3-jitter1v2:cores -> oracle.run1.framed\",\n \"device\": \"cuda\",\n \"lr\": 0.0001,\n \"x_shape\": [2, 256],\n \"n_shot\": 3,\n \"n_query\": 2,\n \"train_k_factor\": 3,\n \"val_k_factor\": 2,\n \"test_k_factor\": 2,\n \"torch_default_dtype\": \"torch.float32\",\n \"n_epoch\": 50,\n \"patience\": 3,\n \"criteria_for_best\": \"target_accuracy\",\n \"x_net\": [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\": [-1, 1, 2, 256]}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 1,\n \"out_channels\": 256,\n \"kernel_size\": [1, 7],\n \"bias\": False,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 256}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 256,\n \"out_channels\": 80,\n \"kernel_size\": [2, 7],\n \"bias\": True,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 20480, \"out_features\": 256}},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\": 256}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n ],\n \"NUM_LOGS_PER_EPOCH\": 10,\n \"BEST_MODEL_PATH\": \"./best_model.pth\",\n \"n_way\": 16,\n \"datasets\": [\n {\n \"labels\": [\n \"1-10.\",\n \"1-11.\",\n \"1-15.\",\n \"1-16.\",\n \"1-17.\",\n \"1-18.\",\n \"1-19.\",\n \"10-4.\",\n \"10-7.\",\n \"11-1.\",\n \"11-14.\",\n \"11-17.\",\n \"11-20.\",\n \"11-7.\",\n \"13-20.\",\n \"13-8.\",\n \"14-10.\",\n \"14-11.\",\n \"14-14.\",\n \"14-7.\",\n \"15-1.\",\n \"15-20.\",\n \"16-1.\",\n \"16-16.\",\n \"17-10.\",\n \"17-11.\",\n \"17-2.\",\n \"19-1.\",\n \"19-16.\",\n \"19-19.\",\n \"19-20.\",\n \"19-3.\",\n \"2-10.\",\n \"2-11.\",\n \"2-17.\",\n \"2-18.\",\n \"2-20.\",\n \"2-3.\",\n \"2-4.\",\n \"2-5.\",\n \"2-6.\",\n \"2-7.\",\n \"2-8.\",\n \"3-13.\",\n \"3-18.\",\n \"3-3.\",\n \"4-1.\",\n \"4-10.\",\n \"4-11.\",\n \"4-19.\",\n \"5-5.\",\n \"6-15.\",\n \"7-10.\",\n \"7-14.\",\n \"8-18.\",\n \"8-20.\",\n \"8-3.\",\n \"8-8.\",\n ],\n \"domains\": [1, 2, 3, 4, 5],\n \"num_examples_per_domain_per_label\": -1,\n \"pickle_path\": \"/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl\",\n \"source_or_target_dataset\": \"source\",\n \"x_transforms\": [\"jitter_256_1\", \"lowpass_+/-10MHz\", \"take_200\"],\n \"episode_transforms\": [],\n \"domain_prefix\": \"C_\",\n },\n {\n \"labels\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n \"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"domains\": [32, 38, 8, 44, 14, 50, 20, 26],\n \"num_examples_per_domain_per_label\": 2000,\n \"pickle_path\": \"/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl\",\n \"source_or_target_dataset\": \"target\",\n \"x_transforms\": [\"jitter_256_1\", \"take_200\", \"resample_20Msps_to_25Msps\"],\n \"episode_transforms\": [],\n \"domain_prefix\": \"O_\",\n },\n ],\n \"seed\": 500,\n \"dataset_seed\": 500,\n}\n",
"_____no_output_____"
],
[
"# Set this to True if you want to run this template directly\nSTANDALONE = False\nif STANDALONE:\n print(\"parameters not injected, running with standalone_parameters\")\n parameters = standalone_parameters\n\nif not 'parameters' in locals() and not 'parameters' in globals():\n raise Exception(\"Parameter injection failed\")\n\n#Use an easy dict for all the parameters\np = EasyDict(parameters)\n\nif \"x_shape\" not in p:\n p.x_shape = [2,256] # Default to this if we dont supply x_shape\n\n\nsupplied_keys = set(p.keys())\n\nif supplied_keys != required_parameters:\n print(\"Parameters are incorrect\")\n if len(supplied_keys - required_parameters)>0: print(\"Shouldn't have:\", str(supplied_keys - required_parameters))\n if len(required_parameters - supplied_keys)>0: print(\"Need to have:\", str(required_parameters - supplied_keys))\n raise RuntimeError(\"Parameters are incorrect\")",
"_____no_output_____"
],
[
"###################################\n# Set the RNGs and make it all deterministic\n###################################\nnp.random.seed(p.seed)\nrandom.seed(p.seed)\ntorch.manual_seed(p.seed)\n\ntorch.use_deterministic_algorithms(True) ",
"_____no_output_____"
],
[
"###########################################\n# The stratified datasets honor this\n###########################################\ntorch.set_default_dtype(eval(p.torch_default_dtype))",
"_____no_output_____"
],
[
"###################################\n# Build the network(s)\n# Note: It's critical to do this AFTER setting the RNG\n###################################\nx_net = build_sequential(p.x_net)",
"_____no_output_____"
],
[
"start_time_secs = time.time()",
"_____no_output_____"
],
[
"p.domains_source = []\np.domains_target = []\n\n\ntrain_original_source = []\nval_original_source = []\ntest_original_source = []\n\ntrain_original_target = []\nval_original_target = []\ntest_original_target = []",
"_____no_output_____"
],
[
"# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), \"unit_power\") # unit_power, unit_mag\n# global_x_transform_func = lambda x: normalize(x, \"unit_power\") # unit_power, unit_mag",
"_____no_output_____"
],
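[
"# EDITORIAL SKETCH: the commented transforms above use a `normalize(x, \"unit_power\")`\n# helper from steves_utils whose implementation is not shown in this notebook. As a\n# hedged illustration (the (2, N) I/Q layout and the function name are assumptions),\n# unit-power normalization typically rescales a signal so its mean sample power is 1:\nimport numpy as np\n\ndef normalize_unit_power_sketch(x):\n    \"\"\"Scale a (2, N) I/Q array so that mean(I^2 + Q^2) == 1.\"\"\"\n    power = np.mean(np.sum(np.square(x), axis=0))  # average per-sample power\n    return x / np.sqrt(power)\n\nprint(normalize_unit_power_sketch(np.ones((2, 8))))  # every sample now has power 1",
"_____no_output_____"
],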
[
"def add_dataset(\n labels,\n domains,\n pickle_path,\n x_transforms,\n episode_transforms,\n domain_prefix,\n num_examples_per_domain_per_label,\n source_or_target_dataset:str,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n):\n \n if x_transforms == []: x_transform = None\n else: x_transform = get_chained_transform(x_transforms)\n \n if episode_transforms == []: episode_transform = None\n else: raise Exception(\"episode_transforms not implemented\")\n \n episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])\n\n\n eaf = Episodic_Accessor_Factory(\n labels=labels,\n domains=domains,\n num_examples_per_domain_per_label=num_examples_per_domain_per_label,\n iterator_seed=iterator_seed,\n dataset_seed=dataset_seed,\n n_shot=n_shot,\n n_way=n_way,\n n_query=n_query,\n train_val_test_k_factors=train_val_test_k_factors,\n pickle_path=pickle_path,\n x_transform_func=x_transform,\n )\n\n train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()\n train = Lazy_Iterable_Wrapper(train, episode_transform)\n val = Lazy_Iterable_Wrapper(val, episode_transform)\n test = Lazy_Iterable_Wrapper(test, episode_transform)\n\n if source_or_target_dataset==\"source\":\n train_original_source.append(train)\n val_original_source.append(val)\n test_original_source.append(test)\n\n p.domains_source.extend(\n [domain_prefix + str(u) for u in domains]\n )\n elif source_or_target_dataset==\"target\":\n train_original_target.append(train)\n val_original_target.append(val)\n test_original_target.append(test)\n p.domains_target.extend(\n [domain_prefix + str(u) for u in domains]\n )\n else:\n raise Exception(f\"invalid source_or_target_dataset: {source_or_target_dataset}\")\n ",
"_____no_output_____"
],
[
"for ds in p.datasets:\n add_dataset(**ds)",
"_____no_output_____"
],
[
"# from steves_utils.CORES.utils import (\n# ALL_NODES,\n# ALL_NODES_MINIMUM_1000_EXAMPLES,\n# ALL_DAYS\n# )\n\n# add_dataset(\n# labels=ALL_NODES,\n# domains = ALL_DAYS,\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"cores.stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"target\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"cores_{u}\"\n# )",
"_____no_output_____"
],
[
"# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n\n\n# add_dataset(\n# labels=ALL_SERIAL_NUMBERS,\n# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"source\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"oracle1_{u}\"\n# )\n",
"_____no_output_____"
],
[
"# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n\n\n# add_dataset(\n# labels=ALL_SERIAL_NUMBERS,\n# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"source\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"oracle2_{u}\"\n# )",
"_____no_output_____"
],
[
"# add_dataset(\n# labels=list(range(19)),\n# domains = [0,1,2],\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"metehan.stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"target\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"met_{u}\"\n# )",
"_____no_output_____"
],
[
"# # from steves_utils.wisig.utils import (\n# # ALL_NODES_MINIMUM_100_EXAMPLES,\n# # ALL_NODES_MINIMUM_500_EXAMPLES,\n# # ALL_NODES_MINIMUM_1000_EXAMPLES,\n# # ALL_DAYS\n# # )\n\n# import steves_utils.wisig.utils as wisig\n\n\n# add_dataset(\n# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,\n# domains = wisig.ALL_DAYS,\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"wisig.node3-19.stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"target\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"wisig_{u}\"\n# )",
"_____no_output_____"
],
[
"###################################\n# Build the dataset\n###################################\ntrain_original_source = Iterable_Aggregator(train_original_source, p.seed)\nval_original_source = Iterable_Aggregator(val_original_source, p.seed)\ntest_original_source = Iterable_Aggregator(test_original_source, p.seed)\n\n\ntrain_original_target = Iterable_Aggregator(train_original_target, p.seed)\nval_original_target = Iterable_Aggregator(val_original_target, p.seed)\ntest_original_target = Iterable_Aggregator(test_original_target, p.seed)\n\n# For CNN We only use X and Y. And we only train on the source.\n# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader\n\ntransform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only\n\ntrain_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)\nval_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)\ntest_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)\n\ntrain_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)\nval_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)\ntest_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)\n\ndatasets = EasyDict({\n \"source\": {\n \"original\": {\"train\":train_original_source, \"val\":val_original_source, \"test\":test_original_source},\n \"processed\": {\"train\":train_processed_source, \"val\":val_processed_source, \"test\":test_processed_source}\n },\n \"target\": {\n \"original\": {\"train\":train_original_target, \"val\":val_original_target, \"test\":test_original_target},\n \"processed\": {\"train\":train_processed_target, \"val\":val_processed_target, \"test\":test_processed_target}\n },\n})",
"_____no_output_____"
],
[
"from steves_utils.transforms import get_average_magnitude, get_average_power\n\nprint(set([u for u,_ in val_original_source]))\nprint(set([u for u,_ in val_original_target]))\n\ns_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))\nprint(s_x)\n\n# for ds in [\n# train_processed_source,\n# val_processed_source,\n# test_processed_source,\n# train_processed_target,\n# val_processed_target,\n# test_processed_target\n# ]:\n# for s_x, s_y, q_x, q_y, _ in ds:\n# for X in (s_x, q_x):\n# for x in X:\n# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)\n# assert np.isclose(get_average_power(x.numpy()), 1.0)\n ",
"{'C_1', 'C_3', 'C_2', 'C_4', 'C_5'}\n"
],
[
"###################################\n# Build the model\n###################################\n# easfsl only wants a tuple for the shape\nmodel = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))\noptimizer = Adam(params=model.parameters(), lr=p.lr)",
"(2, 256)\n"
],
[
"###################################\n# train\n###################################\njig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)\n\njig.train(\n train_iterable=datasets.source.processed.train,\n source_val_iterable=datasets.source.processed.val,\n target_val_iterable=datasets.target.processed.val,\n num_epochs=p.n_epoch,\n num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,\n patience=p.patience,\n optimizer=optimizer,\n criteria_for_best=p.criteria_for_best,\n)",
"epoch: 1, [batch: 1 / 6317], examples_per_second: 33.6170, train_label_loss: 2.2146, \n"
],
[
"total_experiment_time_secs = time.time() - start_time_secs",
"_____no_output_____"
],
[
"###################################\n# Evaluate the model\n###################################\nsource_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)\ntarget_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)\n\nsource_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)\ntarget_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)\n\nhistory = jig.get_history()\n\ntotal_epochs_trained = len(history[\"epoch_indices\"])\n\nval_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))\n\nconfusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)\nper_domain_accuracy = per_domain_accuracy_from_confusion(confusion)\n\n# Add a key to per_domain_accuracy for if it was a source domain\nfor domain, accuracy in per_domain_accuracy.items():\n per_domain_accuracy[domain] = {\n \"accuracy\": accuracy,\n \"source?\": domain in p.domains_source\n }\n\n# Do an independent accuracy assesment JUST TO BE SURE!\n# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)\n# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)\n# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)\n# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)\n\n# assert(_source_test_label_accuracy == source_test_label_accuracy)\n# assert(_target_test_label_accuracy == target_test_label_accuracy)\n# assert(_source_val_label_accuracy == source_val_label_accuracy)\n# assert(_target_val_label_accuracy == target_val_label_accuracy)\n\nexperiment = {\n \"experiment_name\": p.experiment_name,\n \"parameters\": dict(p),\n \"results\": {\n \"source_test_label_accuracy\": source_test_label_accuracy,\n \"source_test_label_loss\": source_test_label_loss,\n \"target_test_label_accuracy\": target_test_label_accuracy,\n \"target_test_label_loss\": target_test_label_loss,\n \"source_val_label_accuracy\": source_val_label_accuracy,\n \"source_val_label_loss\": source_val_label_loss,\n \"target_val_label_accuracy\": target_val_label_accuracy,\n \"target_val_label_loss\": target_val_label_loss,\n \"total_epochs_trained\": total_epochs_trained,\n \"total_experiment_time_secs\": total_experiment_time_secs,\n \"confusion\": confusion,\n \"per_domain_accuracy\": per_domain_accuracy,\n },\n \"history\": history,\n \"dataset_metrics\": get_dataset_metrics(datasets, \"ptn\"),\n}",
"_____no_output_____"
],
[
"ax = get_loss_curve(experiment)\nplt.show()",
"_____no_output_____"
],
[
"get_results_table(experiment)",
"_____no_output_____"
],
[
"get_domain_accuracies(experiment)",
"_____no_output_____"
],
[
"print(\"Source Test Label Accuracy:\", experiment[\"results\"][\"source_test_label_accuracy\"], \"Target Test Label Accuracy:\", experiment[\"results\"][\"target_test_label_accuracy\"])\nprint(\"Source Val Label Accuracy:\", experiment[\"results\"][\"source_val_label_accuracy\"], \"Target Val Label Accuracy:\", experiment[\"results\"][\"target_val_label_accuracy\"])",
"Source Test Label Accuracy: 0.9992977528089888 Target Test Label Accuracy: 0.5085286458333333\nSource Val Label Accuracy: 0.9993214285714286 Target Val Label Accuracy: 0.5037109375\n"
],
[
"json.dumps(experiment)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a8032a8f91962e4fe4a593548af0bf6ff1f8594
| 56,650 |
ipynb
|
Jupyter Notebook
|
Model Comparisons/beta_vae_no_c/.ipynb_checkpoints/Dentate_Graphs_losses-checkpoint.ipynb
|
theislab/disent
|
0dff9534b3ff902e0f1642dd96d41bbcaecf1b9d
|
[
"BSD-3-Clause"
] | 4 |
2021-04-01T06:49:35.000Z
|
2022-02-24T15:13:58.000Z
|
Model Comparisons/beta_vae_no_c/Dentate_Graphs_losses.ipynb
|
theislab/disent
|
0dff9534b3ff902e0f1642dd96d41bbcaecf1b9d
|
[
"BSD-3-Clause"
] | null | null | null |
Model Comparisons/beta_vae_no_c/Dentate_Graphs_losses.ipynb
|
theislab/disent
|
0dff9534b3ff902e0f1642dd96d41bbcaecf1b9d
|
[
"BSD-3-Clause"
] | null | null | null | 236.041667 | 20,220 | 0.918429 |
[
[
[
"## Comparing Loss Components for Dentate Gyrus - Beta Vae ",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\nimport scanpy as sc\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport glob, os\nimport matplotlib\nimport re",
"_____no_output_____"
],
[
"os.getcwd()",
"_____no_output_____"
],
[
"#path='./models_dentate_beta_vae_noc/'\nfolders = os.listdir()\nfolders.sort(reverse=True)\nprint(folders)",
"['latent5_alpha50', 'latent5_alpha5', 'latent5_alpha20', 'latent5_alpha100', 'latent5_alpha1', 'Dentate_Graphs_losses.ipynb', 'Dentate_Graphs_dis-score.ipynb', '.ipynb_checkpoints']\n"
]
],
[
[
"# Latent 5",
"_____no_output_____"
]
],
[
[
"\nz_dims = []\nalphas = []\nkl_loss_last = []\nrecon_loss_last = []\nvae_loss_last = []\nactive_units = []\n\nfor folder_name in folders:\n if \"latent\" in folder_name:\n z = re.search('latent(\\d+)', folder_name)\n if int(z[1]) == 10:\n continue\n al = re.search('alpha(\\d+)', folder_name)\n z_dims.append(int(z[1]))\n alphas.append(int(al[1]))\n df = pd.read_csv(folder_name+\"/csv_logger.log\")\n kl_loss_last.append(round(df.loc[df.index[-1],\"kl_loss\"],2))\n recon_loss_last.append(round(df.loc[df.index[-1],\"recon_loss\"],2))\n vae_loss_last.append(round(df.loc[df.index[-1],\"loss\"],2))\n count = 0\n for i in range(int(z[1])):\n col_name = \"kl_loss_monitor\"+str(i)\n if df.loc[df.index[-1],col_name] > 0.99:\n count = count + 1\n active_units.append(count)",
"_____no_output_____"
],
[
"print(active_units)\nprint(alphas)\n",
"[0, 1, 0, 0, 5]\n[50, 5, 20, 100, 1]\n"
],
[
"sns.set(font_scale=1)\nsns.set_style(\"darkgrid\")\nfig, ax = plt.subplots(figsize=(5,5))\nscatter1 = sns.scatterplot(alphas,kl_loss_last,linewidth=0)\n\nscatter1.set_ylabel(\"KL Loss\")\nscatter1.set_xlabel(\"Beta Values\")\n\nscatter1.set_title(\"Total KL loss over all dimensions(5)\", weight=\"bold\")\n\nplt.savefig(\"KL_loss_all_5.png\",bbox_inches=\"tight\")\n",
"_____no_output_____"
],
[
"sns.set(font_scale=1)\nsns.set_style(\"darkgrid\")\nfig, ax = plt.subplots(figsize=(5,5))\nscatter1 = sns.scatterplot(alphas,recon_loss_last,linewidth=0)\n\nscatter1.set_ylabel(\"Reconstruction Loss\")\nscatter1.set_xlabel(\"Beta Values\")\n\nscatter1.set_title(\"Total Reconstruction loss over all dimensions(5)\", weight=\"bold\")\n\nplt.savefig(\"Recon_loss_all_5.png\",bbox_inches=\"tight\")\n",
"_____no_output_____"
],
[
"sns.set(font_scale=1)\nsns.set_style(\"darkgrid\")\nfig, ax = plt.subplots(figsize=(5,5))\nscatter1 = sns.scatterplot(alphas,vae_loss_last,linewidth=0)\n\nscatter1.set_ylabel(\"VAE Loss\")\nscatter1.set_xlabel(\"Beta Values\")\nscatter1.set_title(\"Total VAE loss over all dimensions(5)\", weight=\"bold\")\nplt.savefig(\"Vae_loss_all_5.png\",bbox_inches=\"tight\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a803d02d4b2c33758fce14b1fed11cc9d993677
| 409,053 |
ipynb
|
Jupyter Notebook
|
Web_Crawling_Project_presentation.ipynb
|
LeeJaeKeun14/Web_Crawling
|
4ee9ee6b9af304f050d1068384ef6328a7db07b0
|
[
"MIT"
] | null | null | null |
Web_Crawling_Project_presentation.ipynb
|
LeeJaeKeun14/Web_Crawling
|
4ee9ee6b9af304f050d1068384ef6328a7db07b0
|
[
"MIT"
] | null | null | null |
Web_Crawling_Project_presentation.ipynb
|
LeeJaeKeun14/Web_Crawling
|
4ee9ee6b9af304f050d1068384ef6328a7db07b0
|
[
"MIT"
] | 1 |
2019-12-11T02:15:41.000Z
|
2019-12-11T02:15:41.000Z
| 1,358.980066 | 402,440 | 0.946611 |
[
[
[
"# Web_Crawling",
"_____no_output_____"
],
[
"## 하루 시작을 알리는 크롤링 프로젝트\n- 하루를 시작하면서 자동으로 내가 원하는 정보를 모아서 메세지로 보내주는 서비스가 있으면 좋겠다 생각했습니다. 기존의 서비스는 제가 원하지 않는 정보가 있어 더이상 찾거나 결재를 하지 않았지만 이번 기회로 직접 만들자 생각이 들어 시작하게 되었습니다.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"## 크롤링 사이트",
"_____no_output_____"
],
[
"### 다음 뉴스\n1. [media.daum.net](https://media.daum.net/)\n",
"_____no_output_____"
],
[
"### 케이웨더\n2. [www.kweather.co.kr](http://www.kweather.co.kr/main/main.html)\n",
"_____no_output_____"
],
[
"### 다음 사전\n3. [dic.daum.net/word](https://dic.daum.net/word/view.do?wordid=ekw000132285&q=project)\n",
"_____no_output_____"
],
[
"## GitHub\n[Web-Crawling-repo](https://github.com/LeeJaeKeun14/Web_Crawling#need-install)",
"_____no_output_____"
],
[
"## 웹 크롤링\n\n### - 다음 뉴스\n- scrapy\n\n\n### - 케이웨더\n- selenium\n\n\n### - 다음 사전\n- selenium\n\n",
"_____no_output_____"
],
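[
"A hedged sketch of what a scrapy spider for the news crawl could look like (the class name, CSS selectors, and item fields below are illustrative assumptions, not the project's actual `spider.py`):\n\n```python\nimport scrapy\n\nclass NewsSpider(scrapy.Spider):\n    name = \"News\"\n    start_urls = [\"https://media.daum.net/\"]\n\n    def parse(self, response):\n        # Hypothetical selectors: yield the top 5 title/link pairs per category.\n        for category in response.css(\"div.category\"):\n            cat_name = category.css(\"h2::text\").get()\n            for article in category.css(\"a.link_txt\")[:5]:\n                yield {\n                    \"category\": cat_name,\n                    \"title\": article.css(\"::text\").get(),\n                    \"link\": article.attrib.get(\"href\"),\n                }\n```",
"_____no_output_____"
],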
[
"## 패키지 구성\n",
"_____no_output_____"
],
[
"##### Web_Crawling",
"_____no_output_____"
],
[
 Make_Module.ipynb">
"> Make_Module.ipynb : Jupyter notebook that builds the module files\n> \n> \\_\\_init__.py : \n>\n>>```python\n>>__all__ = [\"weather\", \"slack_file\", \"slack_msg_1\", \"diction\", \"mongodb\", \"make_msg\"]\n>>```\n> \n> \n> **\\_\\_pycache__** : cache data saved when the package is run\n> \n> diction.csv : CSV file of the crawled English words \n>\n> diction.py : \n>\n>>```python\n>> import os\n>> import pandas as pd\n>> from selenium import webdriver\n>> def open_driver(driver, name): ## navigate the driver to the URL for the given word\n>> def find_dic(driver, dic): ## store each piece of the word's information from the driver into dic\n>>```\n>\n> eng.csv : CSV file of the English words to be crawled\n>\n> make_msg.py :\n>>```python\n>> def make_msg(df, col): ## convert the crawled word information into a single printable str and return it\n>>```\n>\n> mongodb.py : \n>>```python\n>> import pymongo\n>>\n>> client = pymongo.MongoClient(\"mongodb:// server : ip\")\n>> db = client.diction\n>> collection = db.english\n>>```\n>\n> **news** : scrapy startproject\n>> items.py : news category, news title, link\n>>\n>> settings.py :\n>>> The Daum News robots.txt is Disallow, so\n>>> change to ROBOTSTXT_OBEY = False\n>>\n>> spider.py : collects the top 5 news titles and links for each category\n>>\n>\n> slack_file.py :\n>>```python\n>> import os\n>> import slack\n>>\n>> def send_file(): ## send the crawled weather image data to Slack\n>>```\n>\n> slack_msg.py :\n>>```python\n>> import requests\n>> import json\n>>\n>> def send_msg(msg): ## send the crawled str information to Slack\n>>```\n>\n> weather.png : PNG file storing the crawled weather image\n>\n> weather.py :\n>>```python\n>> from selenium import webdriver\n>> import time\n>> import os\n>> \n>> def weather(): ## fetch the weather information as an image and save it\n>>```\n>\n>\n>",
"_____no_output_____"
],
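[
"A hedged sketch of how `send_msg` in `slack_msg.py` could post a message through a Slack incoming webhook with `requests` and `json` (the webhook URL is a placeholder, and the exact payload used in the project is an assumption):\n\n```python\nimport json\nimport requests\n\nWEBHOOK_URL = \"https://hooks.slack.com/services/<your-webhook-path>\"  # placeholder\n\ndef send_msg(msg):\n    ## post a plain-text message to the Slack incoming webhook\n    payload = {\"text\": msg}\n    response = requests.post(\n        WEBHOOK_URL,\n        data=json.dumps(payload),\n        headers={\"Content-Type\": \"application/json\"},\n    )\n    return response.status_code\n```",
"_____no_output_____"
],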
[
"## 프로젝트 진행하면서의 문제점",
"_____no_output_____"
],
[
"### crontab 에서의 경로 문제\n#### run.sh\n- run.sh\\\n> rm -rf ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/news.csv\\\n> cd ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/\\\n> scrapy crawl News -o news.csv\\\n\n/home/ubuntu/python3/notebook/Web_Crawling_Project/run.sh: 3: /home/ubuntu/python3/notebook/Web_Crawling_Project/run.sh: scrapy: not found\n",
"_____no_output_____"
],
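[
"#### A possible fix\ncron runs `run.sh` with a minimal environment, so `scrapy` (installed in the user's local bin) is not on `PATH`. One hedged fix, assuming a pip `--user` install location, is to export `PATH` at the top of `run.sh`:\\\n> export PATH=$PATH:/home/ubuntu/.local/bin\\\n> rm -rf ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/news.csv\\\n> cd ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/\\\n> scrapy crawl News -o news.csv\\\n",
"_____no_output_____"
],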
[
"# 느낀점",
"_____no_output_____"
],
[
"인터넷에 있는 데이터를 사져오고, 가공한 다음,\\\n이 데이터를 데이터 베이스에 저장한 뒤,\\\n데이터를 자신에게 직접 제공하는 패키지를 자동화 하는 프로젝트를 진행하였습니다.\\\n이러한 프로젝트를 처음부터 끝까지 했다는 완성감과 성취감을 느끼고 평소 유용한 데이터를 사용하며 지금까지 진행했던 프로젝트중 가장 재미있는 프로젝트라 생각합니다.",
"_____no_output_____"
],
[
"## 이후 진행 계획",
"_____no_output_____"
],
[
"### 1. AWS Lambda 서비스 이용\n- boto3를 이용하여 항상 서버를 이용하지 않고 특정한 시간대에만 서버를 열어서 크롤링 한다\n- Lambda에 시간 트리거를 설정하여 boto3 함수를 사용한다\n- 최종적으로 매일 아침 자동으로 메일을 보내주는 서비스를 완성한다\n\n### 2. 워드 클라우드 또는 분석 모형 사용\n- 웹 크롤링 한 데이터를 필요한 데이터를 한번 더 가공하여 보내는 서비스 까지 계발",
"_____no_output_____"
]
],
[
[
"import WC\nWC.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
4a8050932fe6ed69c1aba5046161b2e5bbab77fe
| 66,656 |
ipynb
|
Jupyter Notebook
|
content/HW/HW7/cs109b_hw7_students.ipynb
|
DavidAssaraf106/2021-CS109B
|
502970bf8bb653222eade0ff42d1fc2c54d3fa74
|
[
"MIT"
] | null | null | null |
content/HW/HW7/cs109b_hw7_students.ipynb
|
DavidAssaraf106/2021-CS109B
|
502970bf8bb653222eade0ff42d1fc2c54d3fa74
|
[
"MIT"
] | null | null | null |
content/HW/HW7/cs109b_hw7_students.ipynb
|
DavidAssaraf106/2021-CS109B
|
502970bf8bb653222eade0ff42d1fc2c54d3fa74
|
[
"MIT"
] | null | null | null | 40.594397 | 922 | 0.604987 |
[
[
[
"# <img style=\"float: left; padding-right: 10px; width: 45px\" src=\"https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png\"> CS109B Data Science 2: Advanced Topics in Data Science \n\n## Homework 7: Generative Models - Variational Autoencoders and GANs [100 pts]\n\n\n**Harvard University**<br/>\n**Spring 2021**<br/>\n**Instructors**: Pavlos Protopapas, Mark Glickman and Chris Tanner<br/>\n\n**DISCLAIMER**: No public reproduction of this homework nor its solution is allowed without the explicit consent of their authors.\n\n**Due Date**: <font color=\"red\">April 21 (11:59pm EST), 2021</font><br/>\n\n<hr style=\"height:2pt\">\n\n---\n\n",
"_____no_output_____"
]
],
[
[
"#RUN THIS CELL \nimport requests\nfrom IPython.core.display import HTML, display\nstyles = requests.get(\"https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css\").text\nHTML(styles)",
"_____no_output_____"
]
],
[
[
"### INSTRUCTIONS\n\n- To submit your assignment follow the instructions given in Canvas.\n\n- Please restart the kernel and run the entire notebook again before you submit. \n\n- Running cells out of order is a common pitfall in Jupyter Notebooks. To make sure your code works restart the kernel and run the whole notebook again before you submit. \n\n- We have tried to include all the libraries you may need to do the assignment in the imports cell provided below. **Please use only the libraries provided in those imports.**\n\n- Please use .head() when viewing data. Do not submit a notebook that is **excessively long**. \n\n- In questions that require code to answer, such as \"calculate the $R^2$\", do not just output the value from a cell. Write a `print()` function that clearly labels the output, includes a reference to the calculated value, and rounds it to a reasonable number of digits. **Do not hard code values in your printed output**. For example, this is an appropriate print statement:\n```python\nprint(f\"The R^2 is {R:.4f}\")\n```\n- Your plots should be clearly labeled, including clear labels for the $x$ and $y$ axes as well as a descriptive title (\"MSE plot\" is NOT a descriptive title; \"95% confidence interval of coefficients of polynomial degree 5\" on the other hand is descriptive).\n\n<hr style=\"height:2pt\">\n\n\n\n<a id=\"contents\"></a>\n\n## Notebook Contents \n\n- [**Part 0 (Set Up Notebook)**](#part0)\n\n\n- [**PART 1 [ 20 pts ]: Preprocess and Visualize data**](#part1)\n - [Overview](#part1intro)\n - [Questions](#part1questions)\n - [Solutions](#part1solutions)\n\n\n- [**PART 2 [ 20 pts ]: Set-up an AutoEncoder**](#part2)\n - [Overview](#part2intro)\n - [Questions](#part2questions)\n - [Solutions](#part2solutions)\n\n\n- [**PART 3 [ 20 pts ]: Set-up a Convolutional Variational Autoencoder**](#part3)\n - [Overview](#part3intro)\n - [Questions](#part3questions)\n - [Solutions](#part3solutions)\n \n \n- [**PART 4 [ 20 pts ]: Set-up a Conditional VAE**](#part4)\n - [Overview](#part4intro)\n - [Questions](#part4questions)\n - [Solutions](#part4solutions)\n \n \n- [**PART 5 [ 20 pts ]: GANs**](#part5)\n - [Overview](#part5intro)\n - [Questions](#part5questions)\n - [Solutions](#part5solutions)\n ",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"<a id=\"part0\"></a>\n## Overview \n\nWe are going to compare autoencoders (AEs), variational autoencoders (VAEs) and generative adversarial networks (GANs). The goal is to understand the particularities of each model and to learn how to build them. \n\nIn addition to standard VAEs, we will also study conditional VAEs. Conditional VAEs incorporate input attributes on the latent representation of an input, providing some structure in the latent space. We will analyze how conditional VAEs are capable of generating new photos that depend on specified attributes. \n\nWe are going to train our networks using [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html), which is a large-scale face attributes dataset with more than 200K celebrity images and 40 different attribute annotations.\n\nRun the following cell to load important libraries. ",
"_____no_output_____"
]
],
[
[
"# DO NOT DELETE THIS CELL\n# Load useful libraries\nimport numpy as np\nimport pandas as pd\nimport zipfile\nimport os\nimport tqdm\nimport pathlib\nimport time\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.decomposition import PCA\nfrom sklearn.manifold import TSNE\n\n# TensorFlow and tf.keras\nimport tensorflow as tf\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import models\nfrom tensorflow.keras import losses\nfrom tensorflow.keras import optimizers\nfrom tensorflow.keras import initializers\nfrom tensorflow.keras.metrics import Accuracy\n\n# Plotting libraries\nfrom matplotlib import pyplot as plt\nfrom matplotlib.colors import ListedColormap\n\nplt.gray() #set colormap to gray ",
"_____no_output_____"
]
],
[
[
"**Check availability of GPU**\n\nRun this line to verify your environment lists an available GPU.",
"_____no_output_____"
]
],
[
[
"# DO NOT DELETE THIS CELL\ntf.config.experimental.list_physical_devices('GPU')",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"# DO NOT DELETE THIS CELL\n# Run this cell to define our download_celeb function\n\ndef download_celeb(\n url, \n filename,\n filepath,\n dirname,\n dirpath,\n chunk_size=1204,\n overwrite=False,\n):\n \"\"\"Downloads and extracts CelebA dataset from CS109B S3 bucket\"\"\"\n \n # Do not download if data already exists and overwrite==False\n if not overwrite and os.path.isdir(os.path.join(dirpath, \"2.0.1\")):\n print(\n \"Congratulations...the CelebA dataset already exists \"\n \"locally!\\nNo new downloads are required :o)\\n\"\n )\n # Download and extract CelebA if it doesn't already exist\n else:\n print(\"Downloading CelebA dataset to {}\\n\".format(filepath))\n\n with requests.get(url, stream=True) as r:\n chunk_size = 1024\n length = int(r.headers['content-length'])\n print(\n \"...downloading a {:.2f} GB file.\"\n \"This is going to take a while!\".format(length/1e9)\n )\n time.sleep(0.5)\n with open(filepath, 'wb') as f:\n for chunk in tqdm.tqdm(\n r.iter_content(chunk_size=chunk_size),\n total=int(length/chunk_size),\n unit=\"KB\"\n ):\n f.write(chunk)\n\n print(\"...{} download complete :o)\".format(filename))\n\n if not os.path.isdir(dirpath):\n os.makedirs(dirpath)\n\n print(\n \"...extracting {}. This will take a while too :o(\\n\"\n \"\".format(filename)\n )\n\n with zipfile.ZipFile(filepath, 'r') as zipobj:\n zipobj.extractall(dirpath)\n\n print(\n \"The CelebA dataset has been extracted to:\"\n \"\\n\\n\\t{}\\n\".format(dirpath)\n )",
"_____no_output_____"
],
[
"# DO NOT DELETE THIS CELL\n# RUN THIS CELL\n\nworking_dir = pathlib.Path().absolute()\n# Uncomment line below to debug if images don't show\n#print(working_dir)\nos.chdir(working_dir)",
"_____no_output_____"
],
[
"%%time\n# DO NOT DELETE THIS CELL\n# Download the CelebA dataset from the CS109B S3 bucket\nurl = \"https://cs109b-course-data.s3.amazonaws.com/CelebA/2.0.1.zip\"\nfilename = \"2.0.1.zip\"\ndirname = \"data/celeb_a\"\ndirpath = os.path.join(working_dir, dirname)\nfilepath = os.path.join(working_dir, filename)\n\ndownload_celeb(url, filename, filepath, dirname, dirpath)",
"Congratulations...the CelebA dataset already exists locally!\nNo new downloads are required :o)\n\nCPU times: user 348 µs, sys: 55 µs, total: 403 µs\nWall time: 261 µs\n"
],
[
"# DO NOT DELETE THIS CELL\n# Run this cell\n# Assumes CelebA has been manually downloaded and is available in `~/tensorflow_datasets/celeb_a/2.0.1/`.\n\nimport tensorflow_datasets as tfds\n\ntrain_celeb, val_celeb = tfds.load('celeb_a', \n split=['train', 'validation'], \n shuffle_files=False,\n data_dir = os.path.join(working_dir, \"data\"), \n download=False)\n",
"_____no_output_____"
],
[
"# DO NOT DELETE THIS CELL\n# Global variables to define training/loading models. \n# Modify as required. These are only suggested parameters.\n\ntrain = True\nepochs = 5 # number of epochs to train models\nbatch_size = 32\ninput_size = (64, 64, 3) # images will be cropped and resized to `input_size`.",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"<a id=\"part1\"></a>\n\n# PART 1. Preprocess and Visualize the data [20 pts]\n\n\n[Return to contents](#contents)\n\n\n\n<a id=\"part1intro\"></a>\n\n## Overview\n\nCelebA has 202,599 face images of various celebrities and training on the whole set requires large computational resources to fit your models. For this reason we recommend cropping the images and resizing them to reduce the computational costs. Feel free to adjust the image resolution depending on your computation capabilities. We recommend using `image_size = (64,64,3)`, but feel free to use a larger resolution, or smaller (but no smaller than `image_size = (32,32,3))`.\n\n\nWe provide the function `tf_norm_crop_resize_image` to normalize image pixels between `[0,1]`, to crop the height and width of images to `150x150` pixels, and to [resize](https://www.tensorflow.org/api_docs/python/tf/image/resize) images to the indicated size in the function call. Follow the intructions below to format your data for the different models you will need to train:\n\n<a id=\"part1questions\"></a>\n\n## PART 1: Questions\n\n<a id=\"q11\"></a>\n**[1.1:](#s11)** Create training and validation Dataset pipelines `train_ds` and `val_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `(image, image)` which you will use to train your models with an MSE loss criteria: the first element is the input fed to the model, the second element is used to compute the loss of the model.\n\nMake sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)\n\n\n\n\n<a id=\"q12\"></a>\n**[1.2:](#s12)** Create training and validation Dataset pipelines `train_cond_ds` and `val_cond_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `((image, attributes), image)` to train your conditional VAE model. The first element of the tuple corresponds to the input of the model and consists of two tensors: the image and 2 selected attributes of your choice (for example, `Male` and `Smiling` attributes). You can choose your attributes from the ones [available](https://www.tensorflow.org/datasets/catalog/celeb_a). Make sure the attributes you use are easily identifiable in the images because you will need to alter them and expect visual changes (see Question 4.3). Convert the boolean attributes to `tf.float32` using [`tf.cast`](https://www.tensorflow.org/api_docs/python/tf/cast).\n\nMake sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)\n\n\n<a id=\"q13\"></a>\n**[1.3:](#s13)** Pick 5 random images from the train dataset and plot them. Clearly label each plot with your chosen attributes and provide a written confirmation that they are correct.",
"_____no_output_____"
]
],
[
[
"# DO NOT DELETE THIS CELL\n# Use this function to normalize, crop and resize your images.\ndef tf_norm_crop_resize_image(image, resize_dim):\n \"\"\"Normalizes image to [0.,1.], crops to dims (150, 150, 3)\n and resizes to `resize_dim`, returning an image tensor.\"\"\"\n image = tf.cast(image, tf.float32)/255.\n image = tf.image.resize_with_crop_or_pad(image, 150, 150)\n image = tf.image.resize(image, resize_dim)\n image.set_shape(resize_dim + (3,))\n return image",
"_____no_output_____"
]
],
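[
[
"# EDITORIAL SKETCH (not part of the assignment scaffold): one hedged example of\n# mapping the helper above over a tf.data pipeline to produce the `(image, image)`\n# tuples described in Question 1.1. The shuffle buffer size here is an illustrative\n# assumption, not a required value.\nexample_ds = (\n    train_celeb\n    .map(lambda d: tf_norm_crop_resize_image(d['image'], input_size[:2]))\n    .map(lambda img: (img, img))\n    .shuffle(1024)\n    .batch(batch_size)\n    .prefetch(tf.data.AUTOTUNE)\n)",
"_____no_output_____"
]
],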
[
[
"<a id=\"part1solutions\"></a>\n\n## PART 1: Solutions\n\n[Return to contents](#contents)\n\n",
"_____no_output_____"
],
[
"<a id=\"s11\"></a>\n<div class='exercise-r'>\n\n**[1.1:](#q11)** \nCreate training and validation Dataset pipelines `train_ds` and `val_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `(image, image)` which you will use to train your models with an MSE loss criteria: the first element is the input fed to the model, the second element is used to compute the loss of the model.\n\nMake sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)\n\n \n</div>\n",
"_____no_output_____"
]
],
[
[
"# 1.1\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s12\"></a>\n<div class='exercise-r'>\n\n**[1.2:](#q12)** \n\nCreate training and validation Dataset pipelines `train_cond_ds` and `val_cond_ds` from `train_celeb` and `val_celeb`, respectively. The Dataset pipelines you create have to return a tuple `((image, attributes), image)` to train your conditional VAE model. The first element of the tuple corresponds to the input of the model and consists of two tensors: the image and 2 selected attributes of your choice (for example, `Male` and `Smiling` attributes). You can choose your attributes from the ones [available](https://www.tensorflow.org/datasets/catalog/celeb_a). Make sure the attributes you use are easily identifiable in the images because you will need to alter them and expect visual changes (see Question 4.3). Convert the boolean attributes to `tf.float32` using [`tf.cast`](https://www.tensorflow.org/api_docs/python/tf/cast).\n\n\nMake sure the Datasets follow the format: 1) In this order, normalize, crop, and resize using [map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), 2) [shuffle](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) the data, 3) apply [batching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), 4) optionally use [prefetch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch)\n\n \n</div>\n",
"_____no_output_____"
]
],
[
[
"# 1.2\n# your code here \n",
"_____no_output_____"
]
],
[
[
"<a id=\"s13\"></a>\n<div class='exercise-r'>\n\n**[1.3:](#q13)** \n\nPick 5 random images from the train dataset and plot them. Clearly label each plot with your chosen attributes and provide a written confirmation that they are correct.\n\n</div>",
"_____no_output_____"
]
],
[
[
"# 1.3\n# your code here \n",
"_____no_output_____"
]
],
[
[
"*your answer here*",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"<a id=\"part2\"></a>\n\n# PART 2. Set-up an AutoEncoder [20 points] \n\n\n[Return to contents](#contents)\n\n\n\n<a id=\"part2intro\"></a>\n## Overview\n\n**Define custom convolutional layers**\n\nFor the following models, you will need to utilize custom Keras layers. Below we have provided a class skeleton which you must first complete.You should read the Keras [guidelines](https://www.tensorflow.org/guide/keras/custom_layers_and_models) on how to build custom layers. You are required to fill the specific methods indicated below on each part.\n\nYou will then construct an autoencoder using both custom layers, and visualize the AE image reconstruction and latent spaces.\n\n<a id=\"part2questions\"></a>\n\n## PART 2: Questions\n\n<a id=\"q21\"></a>\n**[2.1:](#s21)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvEncoder` class. We recommend to use 4 convolutional layers; 9, 18, 32, and 64 filters in each consecutive layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 2x2. The intention is to halve the spatial dimensions on each convolutional layer while augmenting the number of filters on deeper layers.\n\nYou will use this layer repeatedly when building your subsequent models.\n\n\n<a id=\"q22\"></a>\n**[2.2:](#s22)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvDecoder` class. We will refer to the input dimension of this layer as `latent_dim`. Make sure the output dimension of this layer is equal to the input dimension of your images, i.e., `(64,64,3)` if you followed our recommendation.\n\nWe recommend to use 4 `UpSampling2D` layers; each followed by a `Conv2D` layer with 64, 32, 18, and 3 filters in each consecutive convolutional layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 1x1. Adjust activations as appropriate.\n\n<a id=\"q23\"></a>\n**[2.3:](#s23)** Create a Keras model `AE`. Use the previously defined `ConvEncoder` and `ConvDecoder` layer classes you just completed to build your autoencoder. Between these layers, [flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the input and incorporate two intermediate [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense), and [reshape](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape) layers. More precisely, use the following architecture:\n- `Input` image\n- `ConvEncoder` layer\n- `Flatten` layer\n- **`Dense` layer with linear activation** and `bottleneck_dim=128` units (recommended dimension)\n- **`Dense` layer with ReLU activation**\n- `Reshape` layer to `latent_dim`\n- `ConvDecoder` layer\n\n\n<a id=\"q24\"></a>\n**[2.4:](#s24)** Why do we suggest that the first dense layer after the `ConvEncoder` layer use linear activation in the `AE` model? Is this a necessary requirement or not? Explain your answer.\n\n<a id=\"q25\"></a>\n**[2.5:](#s25)** Train the `AE` model using MSE as the loss and an optimizer of your choice. We found 5 epochs sufficient for training, but feel free to adjust this value. Print a summary of the model. 
\n\n**We recommend [saving](https://www.tensorflow.org/tutorials/keras/save_and_load) the trained model**.\n\n<a id=\"q26\"></a> \n**[2.6:](#s26)** Visualize 5 random original and reconstructed images fed to the autoencoder from the validation data.\n\n<a id=\"q27\"></a> \n**[2.7:](#s27)** Visualize the first 2 [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) components and a 2-dimensional [t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) projection onto the plane of the latent representation for the validation images. Use the representation after the first dense layer where `bottleneck_dim=128` to compute the PCA and t-SNE projections. Retrieve at least `1024` images and color each input by class type (for example, `Male` and `Smiling` if these where your chosen attributes), for **each scatter plot visualization** and attribute. You need to present 4 scatter plots in total. Explain your results.\n\n",
"_____no_output_____"
],
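[
"# EDITORIAL SKETCH for Question 2.7: one hedged way to read out an intermediate\n# representation in Keras is a sub-model over the trained autoencoder. The layer\n# name 'bottleneck' below is an assumption -- use whatever name you give that\n# Dense layer when you build `AE`.\n# feature_extractor = models.Model(\n#     inputs=AE.input,\n#     outputs=AE.get_layer('bottleneck').output,\n# )\n# codes = feature_extractor.predict(val_ds)  # (n_images, 128) codes for PCA / t-SNE",
"_____no_output_____"
],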
[
"<a id=\"part2solutions\"></a>\n\n## PART 2: Solutions\n\n[Return to contents](#contents)",
"_____no_output_____"
],
[
"<a id=\"s21\"></a>\n<div class='exercise-r'>\n\n**[2.1:](#q21)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvEncoder` class. We recommend to use 4 convolutional layers; 9, 18, 32, and 64 filters in each consecutive layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 2x2. The intention is to halve the spatial dimensions on each convolutional layer while augmenting the number of filters on deeper layers.\n\nYou will use this layer repeatedly when building your subsequent models.\n\n\n</div>",
"_____no_output_____"
]
],
[
[
"# 2.1\n\nclass ConvEncoder(layers.Layer):\n \"\"\"\n Convolutional Encoder Layer Class.\n Converts an input into a latent representation.\n \"\"\"\n\n def __init__(self, input_shape, dropout_rate=0.0, name='encoder', **kwargs):\n \"\"\"\n Initializes the encoder layers and saves them as local attribute.\n \n Input:\n -input_dim: 3D-tuple with (rows, cols, channels) input image dimensions.\n -dropout_rate: if dropout layers present.\n \n Returns nothing.\n \"\"\"\n super(ConvEncoder, self).__init__(name=name, input_shape=input_shape, **kwargs)\n \n ## your code here\n \n \n # end of your code here\n \n\n def call(self, inputs, training=None):\n \"\"\"\n Runs the encoding inference for `inputs`.\n \n Inputs:\n -inputs: 4D-tensor with dimension (batch_size, self.input_dim).\n \"\"\"\n ## your code here\n \n # end of your code here\n return z",
"_____no_output_____"
]
],
[
[
"<a id=\"s22\"></a>\n<div class='exercise-r'>\n\n**[2.2:](#q22)** Set up a custom layer consisting of convolutional layers and complete the `__init__` and `call` methods of the `ConvDecoder` class. We will refer to the input dimension of this layer as `latent_dim`. Make sure the output dimension of this layer is equal to the input dimension of your images, i.e., `(64,64,3)` if you followed our recommendation.\n\nWe recommend to use 4 `UpSampling2D` layers; each followed by a `Conv2D` layer with 64, 32, 18, and 3 filters in each consecutive convolutional layer, kernels of size 5x5, `relu` activations, `same` padding, and strides of 1x1. Adjust activations as appropriate.\n \n</div>",
"_____no_output_____"
]
],
[
[
"# 2.2\n\nclass ConvDecoder(layers.Layer):\n \"\"\"\n Convolutional Decoder Layer Class.\n Converts z, the encoded digit vector, back into a readable digit.\n \"\"\"\n\n def __init__(self, input_shape, dropout_rate=0.0, name='decoder', **kwargs):\n \"\"\"\n Initializes the decoder architecture and saves it as a local attribute.\n \n Input:\n -input_shape: 3D-tuple with (rows, cols, channels) input representation.\n -dropout_rate: if dropout layers present.\n \n Returns nothing.\n \"\"\"\n super(ConvDecoder, self).__init__(name=name, input_shape=input_shape, **kwargs)\n self.dropout_rate = dropout_rate\n \n # your code here\n\n # end your code here\n \n def call(self, z, training=None):\n # your code here\n \n # end your code here\n return x",
"_____no_output_____"
]
],
[
[
"<a id=\"s23\"></a>\n<div class='exercise-r'>\n\n**[2.3:](#q23)** Create a Keras model `AE`. Use the previously defined `ConvEncoder` and `ConvDecoder` layer classes you just completed to build your autoencoder. Between these layers, [flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the input and incorporate two intermediate [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense), and [reshape](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape) layers. More precisely, use the following architecture:\n\n- `Input` image\n- `ConvEncoder` layer\n- `Flatten` layer\n- **`Dense` layer with linear activation** and `bottleneck_dim=128` units (recommended dimension)\n- **`Dense` layer with ReLU activation**\n- `Reshape` layer to `latent_dim`\n- `ConvDecoder` layer\n\n</div>\n",
"_____no_output_____"
]
],
[
[
"# 2.3\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s24\"></a>\n<div class='exercise-r'>\n\n**[2.4:](#q24)** \nWhy do we suggest that the first dense layer after the `ConvEncoder` layer use linear activation in the `AE` model? Is this a necessary requirement or not? Explain your answer.\n \n</div>",
"_____no_output_____"
],
[
"*Your answer here*\n\n",
"_____no_output_____"
],
[
"<a id=\"s25\"></a>\n<div class='exercise-r'>\n\n**[2.5:](#q25)** Train the `AE` model using MSE as the loss and an optimizer of your choice. We found 5 epochs sufficient for training, but feel free to adjust this value. Print a summary of the model. \n\n**We recommend [saving](https://www.tensorflow.org/tutorials/keras/save_and_load) the trained model**.\n</div>",
"_____no_output_____"
]
],
[
[
"# 2.5\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s26\"></a>\n<div class='exercise-r'>\n\n**[2.6:](#q26)** \nVisualize 5 random original and reconstructed images fed to the autoencoder from the validation data.\n\n</div>",
"_____no_output_____"
]
],
[
[
"# 2.6\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s27\"></a>\n<div class='exercise-r'>\n\n**[2.7:](#q27)** \nVisualize the first 2 [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) components and a 2-dimensional [t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) projection onto the plane of the latent representation for the validation images. Use the representation after the first dense layer where `bottleneck_dim=128` to compute the PCA and t-SNE projections. Retrieve at least `1024` images and color each input by class type (for example, `Male` and `Smiling` if these where your chosen attributes), for **each scatter plot visualization** and attribute. You need to present 4 scatter plots in total. Explain your results.\n\n</div>",
"_____no_output_____"
]
],
[
[
"# 2.7 (PCA visualization)\n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Explanation of PCA:** \n*your answer here*\n",
"_____no_output_____"
]
],
[
[
"# 2.7 (t-SNE visualization)\n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Explanation of t-SNE:** \n*your answer here*\n",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"<a id=\"part3\"></a>\n\n\n# PART 3. Set-up a Convolutional Variational Autoencoder [20 points]\n\n[Return to contents](#contents)\n\n\n<a id=\"part3intro\"></a>\n## Overview\n\n\nIn this exercise you will code a standard Convolutional Variational Autoencoder. You will first create a custom layer `Sampling` that takes the mean and log-variance of a Gaussian distribution as inputs, and returns a sample from that distribution. You will use this sample as a latent representation of your probabilistic encoder conditioned on the input image, and use it to reconstruct an image. You will build the complete VAE architecture and study its properties.\n\nYou will need to minimize the negative ELBO function formed by a reconstruction loss and a regularization term over the mean and variance of the probabilistic encoder. You will train two VAE models, one with no regularization, and a second with regularization.\n\n<a id=\"part3questions\"></a>\n\n## PART 3: Questions\n\n<a id=\"q31\"></a>\n**[3.1:](#s31)** Complete the `call` method of our `Sampling` keras layer class. This method takes as input the mean and log-variance vectors of a multivariate Gaussian distribution and returns a sampled tensor from this distribution.\n\n\n<a id=\"q32\"></a>\n**[3.2:](#s32)** Create two Variational AutoEncoder models named `VAE1` and `VAE2`. Use the `ConvEncoder` and `ConvDecoder` layer classes you completed in Question 2 and the `Sampling` layer class from 3.1. Both VAEs should have the following architecture:\n\n- `Input` image\n- `ConvEncoder`\n- `Flatten` layer\n- `Dense` layer with linear activation and 128 units to predict the mean of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Dense` layer with linear activation and 128 units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Sampling` layer you completed in Question 3.1\n- `Dense` layer with ReLU activation\n- `Reshape` layer: reshapes the output of the `Dense` layer into `latent_dim`\n- `ConvDecoder`\n\nFinally, `VAE1` should not use any regularization of the probabilistic encoder (from the prior). \n\nInstead, `VAE2` should incorporate a KL loss to regularize the probabilistic encoder to normal Gaussian of zero mean and unit variance acting as prior, as explained in class. \nYou may use the following expression: `kl_loss = - reg * 0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)`, where a reasonable value for `reg = 0.1` (feel free to adjust).\nTo include the intermediate loss in `VAE2`, you may use the function `add_loss` from keras models/layers as explained in the [documentation](https://www.tensorflow.org/guide/keras/train_and_evaluate). \n\n**We recommend saving your trained models.**\n\n<a id=\"q33\"></a>\n**[3.3:](#s33)** Why do we use linear activation values to encode the mean and log-variance of the probabilistic encoder? Explain your answer.\n\n<a id=\"q34\"></a>\n**[3.4:](#s34)** Visualize 1 original image from the validation data and 5 reconstructions of that image using `VAE1` and 5 using `VAE2`. Comment on the 10 reconstructed images. Notice that you may need to tune the penalty regularization term to observe differences between `VAE1` and `VAE2` (there should be differences!).\n\n<a id=\"q35\"></a>\n**[3.5:](#s35)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data on both `VAE1` and `VAE2` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for a given input). 
Color the datapoints depending on the input's attributes of your choice (e.g. `Male` and `Smiling` if these were your choice). Draw 8 separate scatterplots in total (4 with PCA and 4 with t-SNE). Explain what you observe.\n ",
"_____no_output_____"
],
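[
"# EDITORIAL SKETCH: the reparameterization trick behind the `Sampling` layer, shown\n# standalone on dummy tensors. This is one common formulation, not necessarily the\n# exact code expected as the answer to Question 3.1.\nz_mean_demo = tf.zeros((4, 128))\nz_log_var_demo = tf.zeros((4, 128))\neps = tf.random.normal(shape=tf.shape(z_mean_demo))\nz_demo = z_mean_demo + tf.exp(0.5 * z_log_var_demo) * eps  # z ~ N(z_mean, exp(z_log_var))\nprint(z_demo.shape)",
"_____no_output_____"
],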
[
"<a id=\"part3solutions\"></a>\n\n## PART 3: Solutions\n\n[Return to contents](#contents)",
"_____no_output_____"
],
[
"<a id=\"s31\"></a>\n<div class='exercise-r'>\n\n**[3.1:](#q31)** Complete the `call` method of our `Sampling` keras layer class. This method takes as input the mean and log-variance vectors of a multivariate Gaussian distribution and returns a sampled tensor from this distribution.\n\n</div>",
"_____no_output_____"
]
],
[
[
"# 3.1\nclass Sampling(layers.Layer):\n \"\"\"\n Sampling layer in latent space.\n Uses (z_mean, z_log_var) to sample z.\n \"\"\"\n\n def call(self, inputs):\n \"\"\"Rturns a random sample from a Gaussian with mean and \n log-variance indicated in inputs.\n \n Inputs:\n -inputs: tuple (z_mean, z_log_var)\n \n Returns a sample z drawn from Gaussian.\n \"\"\"\n z_mean, z_log_var = inputs\n \n # your code here\n \n # end your code here\n return z",
"_____no_output_____"
]
],
[
[
"<a id=\"s32\"></a>\n<div class='exercise-r'>\n\n**[3.2:](#q32)** Create two Variational AutoEncoder models named `VAE1` and `VAE2`. Use the `ConvEncoder` and `ConvDecoder` layer classes you completed in Question 2 and the `Sampling` layer class from 3.1. Both VAEs should have the following architecture:\n\n- `Input` image\n- `ConvEncoder`\n- `Flatten` layer\n- `Dense` layer with linear activation and 128 units to predict the mean of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Dense` layer with linear activation and 128 units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Sampling` layer you completed in Question 3.1\n- `Dense` layer with ReLU activation\n- `Reshape` layer: reshapes the output of the `Dense` layer into `latent_dim`\n- `ConvDecoder`\n\nFinally, `VAE1` should not use any regularization of the probabilistic encoder (from the prior). \n\nInstead, `VAE2` should incorporate a KL loss to regularize the probabilistic encoder to normal Gaussian of zero mean and unit variance acting as prior, as explained in class. \nYou may use the following expression: `kl_loss = - reg * 0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)`, where a reasonable value for `reg = 0.1` (feel free to adjust).\nTo include the intermediate loss in `VAE2`, you may use the function `add_loss` from keras models/layers as explained in the [documentation](https://www.tensorflow.org/guide/keras/train_and_evaluate). \n\n**We recommend saving your trained models.**\n \n \n</div>",
"_____no_output_____"
]
],
[
[
"# 3.2 \n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s33\"></a>\n<div class='exercise-r'>\n\n**[3.3:](#q33)** Why do we use linear activation values to encode the mean and log-variance of the probabilistic encoder? Explain your answer.\n \n \n</div>",
"_____no_output_____"
],
[
"*your answer here*\n\n",
"_____no_output_____"
],
[
"<a id=\"s34\"></a>\n<div class='exercise-r'>\n\n**[3.4:](#q34)** Visualize 1 original image from the validation data and 5 reconstructions of that image using `VAE1` and 5 using `VAE2`. Comment on the 10 reconstructed images. Notice that you may need to tune the penalty regularization term to observe differences between `VAE1` and `VAE2` (there should be differences!).\n \n</div>",
"_____no_output_____"
]
],
[
[
"# 3.4\n# your code here\n\n",
"_____no_output_____"
]
],
[
[
"**Explanation:** \n\n*your answer here*\n",
"_____no_output_____"
],
[
"<a id=\"s35\"></a>\n<div class='exercise-r'>\n\n**[3.5:](#q35)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data on both `VAE1` and `VAE2` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for a given input). Color the datapoints depending on the input's attributes of your choice (e.g. `Male` and `Smiling` if these were your choice). Draw 8 separate scatterplots in total (4 with PCA and 4 with t-SNE). Explain what you observe.\n \n</div>",
"_____no_output_____"
]
],
[
[
"# 3.5 \n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Explanation of PCA visualization:** \n\n*your answer here*\n",
"_____no_output_____"
]
],
[
[
"# 3.5\n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Explanation of t-SNE decomposition:** \n\n*your answer here*\n",
"_____no_output_____"
],
[
"<a id=\"part4\"></a>\n\n# PART 4. Set-up a Conditional VAE [20 points]\n\n[Return to contents](#contents)\n\n\n<a id=\"part4intro\"></a>\n## Overview\n\nConditional VAEs are similar to standard VAEs, except they allow us to also incorporate an attribute label into the latent space. When the model is trained in this form, the model learns to distinguish between the specific features associated with that label. This allows you to then \"activate\" labeled attributes in the latent space manually and explore the space of those representations in an explicit manner. We point you to [one](https://wiseodd.github.io/techblog/2016/12/17/conditional-vae/) and [two](https://ijdykeman.github.io/ml/2016/12/21/cvae.html) short tutorials on conditional VAEs. Additionally, you may be interested in reading the [original paper](http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf) and the [continuation paper](https://papers.nips.cc/paper/7880-learning-latent-subspaces-in-variational-autoencoders.pdf). \n\nIn this exercise you are going to build a conditional VAE, and reconstruct images by altering their attributes. For example, you could pick a set of non-smiling men and transform them by changing the label conditions in the latent space associated with 'Smiling' and/or 'Male'. You can choose whatever attributes you want, as long as the reconstructed latent space shows reasonable success when changing the attribute labels.\n\n\n<a id=\"part4questions\"></a>\n\n## PART 4: Questions\n\n<a id=\"q41\"></a>\n**[4.1:](#s41)** Create a conditional VAE keras model named `CVAE`. The conditional VAE should have the following architecture:\n\n- `Input` for image\n- `Input` for attributes\n- `ConvEncoder` layer\n- `Flatten` layer: flattens the output of the `ConvEncoder`\n- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: concatenates the latent representation of dimension `latent_dim[0]*latent_dim[1]*latent_dim[2]` with two attribute codes of your choice (`tf.float32` representations)\n- `Dense` layer with linear activation and `bottleneck_dim` units to predict the mean of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Dense` layer with linear activation and `bottleneck_dim` units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Sampling` layer you completed in Question 3.1\n- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: that combines your sample with the two attribute codes of your choice (`tf.float32` representations)\n- `Dense` layer with ReLU activation\n- `Reshape` layer\n- `ConvDecoder`\n- Output image of same size as input image\n\n\n<a id=\"q42\"></a>\n**[4.2:](#s42)** Train the model using the data generator you completed in Question 1.2 (use mean squared error loss and an optimizer of your choice). Print a summary of your model.\n\n**We recommend saving your trained models**.\n\n<a id=\"q43\"></a>\n**[4.3:](#s43)** Select 5 photos with common attributes from the validation data and reconstruct these images after feeding them to the conditional variational autoencoder `CVAE`. Change the attributes to form the other three possible combinations and visualize all compositions. 
Comment on your compositions.\n\nFor example, if your choice of attributes were 'Male' and 'Smiling', you should reconstruct these images with all possible attribute combinations.\n\n<a id=\"q44\"></a>\n**[4.4:](#s44)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data of `CVAE` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for at least 1024 input images). Color the datapoints depending on the input's attributes (e.g. `Male` and `Smiling` if these were your choice). Draw 4 separate scatterplots in total. Explain what you observe.",
"_____no_output_____"
],
[
"<a id=\"part4solutions\"></a>\n\n## PART 4: Solutions\n\n[Return to contents](#contents)",
"_____no_output_____"
],
[
"<a id=\"s41\"></a>\n<div class='exercise-r'>\n\n**[4.1:](#q41)** Create a conditional VAE keras model named `CVAE`. The conditional VAE should have the following architecture:\n\n- `Input` for image\n- `Input` for attributes\n- `ConvEncoder` layer\n- `Flatten` layer: flattens the output of the `ConvEncoder`\n- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: concatenates the latent representation of dimension `latent_dim[0]*latent_dim[1]*latent_dim[2]` with two attribute codes of your choice (`tf.float32` representations)\n- `Dense` layer with linear activation and `bottleneck_dim` units to predict the mean of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Dense` layer with linear activation and `bottleneck_dim` units to predict the log-variance of the encoder conditional distribution $q_x(z)=N(\\mu,\\sigma)$\n- `Sampling` layer you completed in Question 3.1\n- [`Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/concatenate) layer: that combines your sample with the two attribute codes of your choice (`tf.float32` representations)\n- `Dense` layer with ReLU activation\n- `Reshape` layer\n- `ConvDecoder`\n- Output image of same size as input image\n\n\n</div>",
"_____no_output_____"
]
],
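[
[
"A minimal sketch of one possible wiring, not a reference solution: it assumes the `ConvEncoder`, `ConvDecoder`, and `Sampling` classes from Part 3 can be instantiated with default arguments, and the image/latent dimensions shown are only illustrative. The KL term from Part 3 would still need to be added (e.g. via `add_loss`).\n\n```python\nimport tensorflow as tf\nfrom tensorflow.keras import layers, Model\n\ndef build_cvae(img_shape=(64, 64, 3), n_attrs=2, latent_dim=(8, 8, 64), bottleneck_dim=128):\n    # two inputs: the image and its attribute codes\n    x_in = layers.Input(shape=img_shape, name='image')\n    a_in = layers.Input(shape=(n_attrs,), dtype=tf.float32, name='attributes')\n    # encode, flatten, and condition on the attributes\n    h = layers.Flatten()(ConvEncoder()(x_in))\n    h = layers.Concatenate()([h, a_in])\n    # mean and log-variance of the encoder conditional distribution\n    z_mean = layers.Dense(bottleneck_dim, activation='linear')(h)\n    z_log_var = layers.Dense(bottleneck_dim, activation='linear')(h)\n    z = Sampling()([z_mean, z_log_var])  # reparameterization trick from Q3.1\n    # condition the decoder on the same attributes\n    d = layers.Concatenate()([z, a_in])\n    d = layers.Dense(latent_dim[0] * latent_dim[1] * latent_dim[2], activation='relu')(d)\n    d = layers.Reshape(latent_dim)(d)\n    x_out = ConvDecoder()(d)  # assumed to upsample back to img_shape\n    return Model([x_in, a_in], x_out, name='CVAE')\n```",
"_____no_output_____"
]
],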
[
[
"# 4.1\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s42\"></a>\n<div class='exercise-r'>\n\n**[4.2:](#q42)** Train the model using the data generator you completed in Question 1.2 (use mean squared error loss and an optimizer of your choice). Print a summary of your model.\n\n**We recommend saving your trained models**.\n</div>",
"_____no_output_____"
]
],
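[
[
"A hedged sketch of the training call, reusing `build_cvae` from the sketch above; `train_generator` and `val_generator` are hypothetical stand-ins for the Question 1.2 generator, assumed to yield `((images, attributes), images)` batches.\n\n```python\ncvae = build_cvae()\ncvae.compile(optimizer='adam', loss='mse')  # mean squared error reconstruction loss\ncvae.summary()\ncvae.fit(train_generator, validation_data=val_generator, epochs=10)\ncvae.save_weights('cvae_weights.h5')  # saving the trained model is recommended\n```",
"_____no_output_____"
]
],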
[
[
"# 4.2\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s43\"></a>\n<div class='exercise-r'>\n\n**[4.3:](#q43)** Select 5 photos with common attributes from the validation data and reconstruct these images after feeding them to the conditional variational autoencoder `CVAE`. Change the attributes to form the other three possible combinations and visualize all compositions. Comment on your compositions.\n\nFor example, if your choice of attributes were 'Male' and 'Smiling', you should reconstruct these images with all possible attribute combinations.\n</div>",
"_____no_output_____"
]
],
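[
[
"One hedged way to lay out the 4 x 5 grid of compositions; `val_images` and the `cvae` name are assumptions carried over from the sketches above.\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimgs = val_images[:5]  # assumed: 5 validation photos with common attributes\ncombos = [(0, 0), (0, 1), (1, 0), (1, 1)]  # e.g. (Male, Smiling) off/on\nfig, axes = plt.subplots(len(combos), len(imgs), figsize=(10, 8))\nfor r, combo in enumerate(combos):\n    attrs = np.tile(combo, (len(imgs), 1)).astype('float32')\n    recon = cvae.predict([imgs, attrs])  # reconstruct under the altered attributes\n    for c in range(len(imgs)):\n        axes[r, c].imshow(recon[c])\n        axes[r, c].axis('off')\nplt.show()\n```",
"_____no_output_____"
]
],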
[
[
"# 4.3\n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Comments on generated images:**\n\n*your answer here*\n",
"_____no_output_____"
],
[
"<a id=\"s44\"></a>\n<div class='exercise-r'>\n\n**[4.4:](#q44)** Visualize the first 2 PCA components and the 2-dimensional t-SNE decomposition of the validation data of `CVAE` obtained from the latent space (i.e. a sample drawn from the probabilistic encoder for at least 1024 input images). Color the datapoints depending on the input's attributes (e.g. `Male` and `Smiling` if these were your choice). Draw 4 separate scatterplots in total. Explain what you observe.\n</div>",
"_____no_output_____"
]
],
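[
[
"A sketch of the four scatterplots; `latent_sampler` is a hypothetical sub-model ending at the `Sampling` layer, and `val_images`/`val_attrs` are assumed arrays of at least 1024 validation examples and their attribute codes.\n\n```python\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\nfrom sklearn.manifold import TSNE\n\nz = latent_sampler.predict([val_images[:1024], val_attrs[:1024]])  # draw latent samples\nprojections = [('PCA', PCA(n_components=2).fit_transform(z)),\n               ('t-SNE', TSNE(n_components=2).fit_transform(z))]\nfor name, proj in projections:\n    for k, attr in enumerate(['Male', 'Smiling']):  # example attribute choice\n        plt.figure(figsize=(5, 4))\n        plt.scatter(proj[:, 0], proj[:, 1], c=val_attrs[:1024, k], s=4, cmap='coolwarm')\n        plt.title(f'{name} of CVAE latents, colored by {attr}')\n        plt.show()\n```",
"_____no_output_____"
]
],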
[
[
"# 4.4\n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Explanation of PCA visualization:** \n\n*your answer here*\n",
"_____no_output_____"
]
],
[
[
"# 4.4\n# your code here\n",
"_____no_output_____"
]
],
[
[
"**Explanation of t-SNE visualization:** \n\n*your answer here*\n",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"<a id=\"part5\"></a>\n\n# PART 5. Generative Adversarial Networks [20 points]\n\n[Return to contents](#contents)\n\n\n<a id=\"part5intro\"></a>\n## Overview\nFor the final exercise we are going to create a standard GAN composed of a generator network, and a discriminator network. GANs are tricky to train, so we encourage you to follow the given instructions for the deep convolutional GAN (DCGAN) when building your architecture and training your models.\n\nHowever, feel completely free to explore and present other architectures if they present better results. For instance, you can instead build a Wasserstein GAN (WGAN), as was illustrated in section. Just be certain to split the different components of your GAN (i.e. generator, discriminator, composition, and training) among the appropriate parts of Question 5 below. \n\n<a id=\"part5questions\"></a>\n\n## PART 5: Questions\n\n<a id=\"q51\"></a>\n**[5.1:](#s51)** Create a convolutional keras generator model. We recommend the follow architecture.\n\n- Input to the generator is a noise vector of dimension `bottleneck_dim` (you can rename to `noise_dim` for more corresponding terminology if you prefer)\n- `Dense` layer with `latent_dim[0]*latent_dim[1]*latent_dim[2]` units, and `LeakyReLU`\n- `Reshape` to `latent_dim`\n- 3 `UpSampling2D` layers each followed by a `Conv2D` layer with 128 filters, 4x4 kernels, 1x1 strides, `'same'` padding, followed by `LeakyReLU`. Adjust the `Conv2D` parameters and activation appropriately in the final layer.\n \nPrint a summary of your model.\n\n<a id=\"q52\"></a>\n**[5.2:](#s52)** Create a convolutional discriminator model. Our recommended setup is to use 3 `Conv2D` layers with filters of size `(4,4)`, `'same'` padding, strides 2x2, and `LeakyReLU` activations. Compile the model with binary cross-entropy loss and an optimizer of your choice. Print a summary of the model.\n\n<a id=\"q53\"></a>\n**[5.3:](#s53)** Create a DCGAN model that is a composition of the generator and the discriminator. The DCGAN model takes a Gaussian vector as input into the generator, and then the discriminator decides whether the output comes from the generator or from the true distribution. The DCGAN is composed of the trainable weights of the generator, and fixed discriminator weights. You can accomplish this behavior by fixing the discriminator training weights using `discriminator.trainable = False` before constructing the model. Once you have instantiated the DCGAN model, compile it with a binary cross-entropy loss and optimizer of your choice.\n\n<a id=\"q54\"></a>\n**[5.4:](#s54)** Train your model (both DCGAN and discriminator) on the train images of the CelebA dataset. We recommend you display images after every train epoch to visualize performance. You should observe \"sensible\" images at 5 or fewer epochs, specially if you train on the full dataset. Consider training on a subset of the full dataset if it takes too long. \n\nTo train your DCGAN model, you will not be able to use the model's [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit) function. Instead, you should consider using [`train_on_batch`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#train_on_batch) method, where you can manually feed an input and training labels, and alternate between the DCGAN and the discriminator. Datasets are iterable, so you can use them directly in a for-loop to obtain mini-batches. You need to run these three steps inside the for-loop: \n\n1. 
`train_on_batch` the discriminator on real images with labels equal to 1 (optionally, minus a small smoother) The smoother may help the generator train faster than the discriminator.\n2. `train_on_batch` the discriminator on generated images obtained from random Gaussian input and labels equal to 0\n3. `train_on_batch` the DCGAN by feeding noise inputs and labels of 1\n\n**Show at least 8 generated images from your final trained DCGAN model for submission**. How do these images compare in quality to the faces generated via VAE? Explain.\n\n<a id=\"q55\"></a>\n**[5.5:](#s55)** Standard GANs are composed as a generator and discriminator, as you just coded them. Could we substitute the discriminator with something else, like a KL loss with the empirical distribution? Why or why not? Explain your answer.",
"_____no_output_____"
],
[
"<a id=\"part5solutions\"></a>\n\n## PART 5: Solutions\n\n[Return to contents](#contents)",
"_____no_output_____"
],
[
"<a id=\"s51\"></a>\n<div class='exercise-r'>\n\n**[5.1:](#q51)** Create a convolutional keras generator model. We recommend the follow architecture.\n\n- Input to the generator is a noise vector of dimension `bottleneck_dim` (you can rename to `noise_dim` for more corresponding terminology if you prefer)\n- `Dense` layer with `latent_dim[0]*latent_dim[1]*latent_dim[2]` units, and `LeakyReLU`\n- `Reshape` to `latent_dim`\n- 3 `UpSampling2D` layers each followed by a `Conv2D` layer with 128 filters, 4x4 kernels, 1x1 strides, `'same'` padding, followed by `LeakyReLU`. Adjust the `Conv2D` parameters and activation appropriately in the final layer.\n \nPrint a summary of your model.\n</div>",
"_____no_output_____"
]
],
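[
[
"A minimal sketch along the recommended lines, not a reference solution; the `noise_dim`/`latent_dim` values are illustrative (three upsamplings from an 8x8 map reach 64x64), and the final `tanh` assumes images scaled to [-1, 1] (`sigmoid` would suit [0, 1]).\n\n```python\nfrom tensorflow.keras import Sequential, layers\n\nnoise_dim = 128\nlatent_dim = (8, 8, 128)\n\ngenerator = Sequential([\n    layers.InputLayer(input_shape=(noise_dim,)),\n    layers.Dense(latent_dim[0] * latent_dim[1] * latent_dim[2]),\n    layers.LeakyReLU(),\n    layers.Reshape(latent_dim),\n    layers.UpSampling2D(), layers.Conv2D(128, (4, 4), strides=(1, 1), padding='same'), layers.LeakyReLU(),\n    layers.UpSampling2D(), layers.Conv2D(128, (4, 4), strides=(1, 1), padding='same'), layers.LeakyReLU(),\n    layers.UpSampling2D(), layers.Conv2D(3, (4, 4), strides=(1, 1), padding='same', activation='tanh'),  # adjusted final layer\n], name='generator')\ngenerator.summary()\n```",
"_____no_output_____"
]
],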
[
[
"# 5.1\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s52\"></a>\n<div class='exercise-r'>\n\n**[5.2:](#q52)** Create a convolutional discriminator model. Our recommended setup is to use 3 `Conv2D` layers with filters of size `(4,4)`, `'same'` padding, strides 2x2, and `LeakyReLU` activations. Compile the model with binary cross-entropy loss and an optimizer of your choice. Print a summary of the model.\n</div>",
"_____no_output_____"
]
],
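[
[
"A matching hedged sketch for the discriminator (the 64x64x3 input and the filter counts are assumptions).\n\n```python\nfrom tensorflow.keras import Sequential, layers\n\ndiscriminator = Sequential([\n    layers.InputLayer(input_shape=(64, 64, 3)),\n    layers.Conv2D(64, (4, 4), strides=(2, 2), padding='same'), layers.LeakyReLU(),\n    layers.Conv2D(128, (4, 4), strides=(2, 2), padding='same'), layers.LeakyReLU(),\n    layers.Conv2D(256, (4, 4), strides=(2, 2), padding='same'), layers.LeakyReLU(),\n    layers.Flatten(),\n    layers.Dense(1, activation='sigmoid'),  # real-vs-fake probability\n], name='discriminator')\ndiscriminator.compile(optimizer='adam', loss='binary_crossentropy')\ndiscriminator.summary()\n```",
"_____no_output_____"
]
],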
[
[
"# 5.2\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s53\"></a>\n<div class='exercise-r'>\n\n**[5.3:](#q53)** Create a DCGAN model that is a composition of the generator and the discriminator. The DCGAN model takes a Gaussian vector as input into the generator, and then the discriminator decides whether the output comes from the generator or from the true distribution. The DCGAN is composed of the trainable weights of the generator, and fixed discriminator weights. You can accomplish this behavior by fixing the discriminator training weights using `discriminator.trainable = False` before constructing the model. Once you have instantiated the DCGAN model, compile it with a binary cross-entropy loss and optimizer of your choice.\n</div>",
"_____no_output_____"
]
],
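[
[
"A sketch of the composition, reusing the `generator`, `discriminator`, and `noise_dim` names from the sketches above; note the discriminator is frozen only inside the combined model.\n\n```python\nfrom tensorflow.keras import Model, layers\n\ndiscriminator.trainable = False  # fix D's weights within the combined model\nnoise_in = layers.Input(shape=(noise_dim,))\ndcgan = Model(noise_in, discriminator(generator(noise_in)), name='dcgan')\ndcgan.compile(optimizer='adam', loss='binary_crossentropy')\n```",
"_____no_output_____"
]
],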
[
[
"# 5.3\n# your code here\n",
"_____no_output_____"
]
],
[
[
"<a id=\"s54\"></a>\n<div class='exercise-r'>\n\n**[5.4:](#q54)** Train your model (both DCGAN and discriminator) on the train images of the CelebA dataset. We recommend you display images after every train epoch to visualize performance. You should observe \"sensible\" images at 5 or fewer epochs, specially if you train on the full dataset. Consider training on a subset of the full dataset if it takes too long. \n\nTo train your DCGAN model, you will not be able to use the model's [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit) function. Instead, you should consider using [`train_on_batch`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#train_on_batch) method, where you can manually feed an input and training labels, and alternate between the DCGAN and the discriminator. Datasets are iterable, so you can use them directly in a for-loop to obtain mini-batches. You need to run these three steps inside the for-loop: \n\n1. `train_on_batch` the discriminator on real images with labels equal to 1 (optionally, minus a small smoother). The smoother may help the generator train faster than the discriminator\n2. `train_on_batch` the discriminator on generated images obtained from random Gaussian input and labels equal to 0\n3. `train_on_batch` the DCGAN by feeding noise inputs and labels of 1\n\n**Show at least 8 generated images from your final trained DCGAN model for submission**. How do these images compare in quality to the faces generated via VAE? Explain.\n</div>",
"_____no_output_____"
]
],
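[
[
"A hedged sketch of the alternating `train_on_batch` updates, reusing the names above; `train_ds` is assumed to yield image batches scaled to match the generator output, and the `0.1` label smoother on the real labels is optional.\n\n```python\nimport numpy as np\n\nfor epoch in range(5):\n    for real_imgs in train_ds:\n        bs = real_imgs.shape[0]\n        noise = np.random.normal(size=(bs, noise_dim)).astype('float32')\n        fake_imgs = generator.predict(noise)\n        # 1) discriminator on real images (labels 1, minus a small smoother)\n        d_real = discriminator.train_on_batch(real_imgs, np.ones((bs, 1)) - 0.1)\n        # 2) discriminator on generated images (labels 0)\n        d_fake = discriminator.train_on_batch(fake_imgs, np.zeros((bs, 1)))\n        # 3) the DCGAN on noise inputs with labels of 1\n        g_loss = dcgan.train_on_batch(noise, np.ones((bs, 1)))\n    print(f'epoch {epoch}: d_real={d_real:.3f}, d_fake={d_fake:.3f}, g={g_loss:.3f}')\n```",
"_____no_output_____"
]
],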
[
[
"# 5.4\n# your code here\n",
"_____no_output_____"
]
],
[
[
"*your answer here*",
"_____no_output_____"
],
[
"<a id=\"s55\"></a>\n<div class='exercise-r'>\n\n**[5.5:](#q55)** Standard GANs are composed as a generator and discriminator, as you just coded them. Could we substitute the discriminator with something else, like a KL loss with the empirical distribution? Why or why not? Explain your answer.\n</div>",
"_____no_output_____"
],
[
"*your answer here*\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a806404541480e139fffca725cde2dfa0cf8418
| 23,691 |
ipynb
|
Jupyter Notebook
|
notebooks/scott/scott_gensim.ipynb
|
codeup-nlp-capstone/nlp-capstone
|
c8b4b2398be55a74438b2484d823b49694543b9e
|
[
"MIT"
] | 2 |
2022-03-04T21:56:13.000Z
|
2022-03-17T22:16:10.000Z
|
notebooks/scott/scott_gensim.ipynb
|
codeup-nlp-capstone/nlp-capstone
|
c8b4b2398be55a74438b2484d823b49694543b9e
|
[
"MIT"
] | null | null | null |
notebooks/scott/scott_gensim.ipynb
|
codeup-nlp-capstone/nlp-capstone
|
c8b4b2398be55a74438b2484d823b49694543b9e
|
[
"MIT"
] | null | null | null | 35.045858 | 691 | 0.485205 |
[
[
[
"### testing new file structure setup and imports",
"_____no_output_____"
],
[
"#### check for imports",
"_____no_output_____"
]
],
[
[
"import sys\ndirectory_path = \"/Users/dragonzord/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/src/\"\nfor path in os.listdir(directory_path):\n if path[-2:] == \"py\":\n module_path = f\"{directory_path}{path}\" \n if module_path in sys.path: print(\"added successfully\")\n else: sys.path.append(module_path)\nprint(\"\\nAFTER\")\nfor p in sys.path:\n print(p)",
"\nAFTER\n/Users/dragonzord/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/notebooks/scott\n/Users/dragonzord/.vscode/extensions/ms-toolsai.jupyter-2022.2.1030672458/pythonFiles\n/Users/dragonzord/.vscode/extensions/ms-toolsai.jupyter-2022.2.1030672458/pythonFiles/lib/python\n/opt/miniconda3/envs/tf/lib/python37.zip\n/opt/miniconda3/envs/tf/lib/python3.7\n/opt/miniconda3/envs/tf/lib/python3.7/lib-dynload\n\n/opt/miniconda3/envs/tf/lib/python3.7/site-packages\n/opt/miniconda3/envs/tf/lib/python3.7/site-packages/IPython/extensions\n/Users/dragonzord/.ipython\n/Users/dragonzord/.local/lib/python3.7/site-packages\n~/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/src\n/Users/dragonzord/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/src/acquire.py\n/Users/dragonzord/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/src/scott.py\n/Users/dragonzord/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/src/casenums.py\n/Users/dragonzord/Documents/SchoolDocs/codeup-data-science/exercises/methodologies2/capstone/nlp-capstone/src/prepare_jag.py\n"
],
[
"from casenums import UniqueDataFrames",
"_____no_output_____"
],
[
"import pandas as pd\n#import prepare_jag as pg\nfrom prepare_jag import basic_clean3",
"_____no_output_____"
],
[
"patient_notes = pd.read_csv(\"../../data/patient_notes.csv\")\nfeatures = pd.read_csv(\"../../data/features.csv\")",
"_____no_output_____"
],
[
"d = {}\nfor i in features[\"case_num\"].unique():\n d[i] = {a: features[\"feature_text\"][features[\"feature_num\"] == a].to_string()[5:].strip() for a in features[\"feature_num\"][features[\"case_num\"] == i]}",
"_____no_output_____"
],
[
"df = pd.DataFrame(data=[d.keys(), d.values()]).T\ndf.columns = [\"case_num\", \"features\"]\ndf",
"_____no_output_____"
],
[
"notes_features = patient_notes.merge(right=df, left_on=\"case_num\", right_on=\"case_num\")\nnotes_features\nprint(notes_features.shape)\nnotes_features[\"clean_pn_history\"] = notes_features[\"pn_history\"].apply(prepare_jag.basic_clean3).apply(prepare_jag.remove_stopwords, exclude_words=[\"no\"])\nnotes_features.head()",
"(42146, 4)\n"
],
[
"def word_list(text):\n return [word for word in text.split()]",
"_____no_output_____"
],
[
"notes_features[\"word_list\"] = notes_features[\"clean_pn_history\"].apply(word_list)",
"_____no_output_____"
],
[
"notes_features.head()",
"_____no_output_____"
],
[
"from gensim.models.word2vec import Word2Vec,KeyedVectors ",
"/opt/miniconda3/envs/tf/lib/python3.7/site-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.\n warnings.warn(msg)\n"
],
[
"gens_w2v_input = notes_features[\"word_list\"].to_list()",
"_____no_output_____"
],
[
"# using CBOW Architecture for trainnig\ncbow_w2v = Word2Vec(gens_w2v_input, window=5, min_count=5, workers=3, sg=0)\n",
"_____no_output_____"
],
[
"type(cbow_w2v)",
"_____no_output_____"
],
[
"cbow_w2v.wv.most_similar(\"nausea\")",
"_____no_output_____"
],
[
"sgram_w2v = Word2Vec(gens_w2v_input, window=3, min_count=5, workers=3, sg=1)",
"_____no_output_____"
],
[
"sgram_w2v.wv.most_similar(\"nausea\")",
"_____no_output_____"
],
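[
"# a working alternative (sketch): the trained Word2Vec model already exposes a\n# KeyedVectors object via .wv, which can be saved and reloaded without the full\n# model; the filename below is only illustrative\nsgram_w2v.wv.save(\"sgram_vectors.kv\")\nloaded_kv = KeyedVectors.load(\"sgram_vectors.kv\")\nloaded_kv.most_similar(\"nausea\")",
"_____no_output_____"
],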
[
"# not working\n# cbow_keyed_vec = KeyedVectors(gens_w2v_input)",
"_____no_output_____"
],
[
"# not working\n# cbow_keyed_vec.WordEmbeddingsKeyedVectors.most_similar(\"\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a8065227cd5bcdeec4f33ca9e6a7687b6c2c086
| 11,889 |
ipynb
|
Jupyter Notebook
|
nbs/dl2/07a_lsuv.ipynb
|
deepenp/course-v3
|
f1854e88b92f2184e66684b0f9a106c5b7e505f1
|
[
"Apache-2.0"
] | null | null | null |
nbs/dl2/07a_lsuv.ipynb
|
deepenp/course-v3
|
f1854e88b92f2184e66684b0f9a106c5b7e505f1
|
[
"Apache-2.0"
] | null | null | null |
nbs/dl2/07a_lsuv.ipynb
|
deepenp/course-v3
|
f1854e88b92f2184e66684b0f9a106c5b7e505f1
|
[
"Apache-2.0"
] | null | null | null | 25.622845 | 543 | 0.542771 |
[
[
[
"%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"#export\nfrom exp.nb_07 import *",
"_____no_output_____"
]
],
[
[
"## Layerwise Sequential Unit Variance (LSUV)\n### paper: https://arxiv.org/pdf/1511.06422.pdf",
"_____no_output_____"
],
[
"Getting the MNIST data and a CNN",
"_____no_output_____"
],
[
"[Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=235)",
"_____no_output_____"
]
],
[
[
"x_train,y_train,x_valid,y_valid = get_data()\n\nx_train,x_valid = normalize_to(x_train,x_valid)\ntrain_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)\n\nnh,bs = 50,512\nc = y_train.max().item()+1\nloss_func = F.cross_entropy\n\ndata = DataBunch(*get_dls(train_ds, valid_ds, bs), c)",
"_____no_output_____"
],
[
"mnist_view = view_tfm(1,28,28)\ncbfs = [Recorder,\n partial(AvgStatsCallback,accuracy),\n CudaCallback,\n partial(BatchTransformXCallback, mnist_view)]",
"_____no_output_____"
],
[
"nfs = [8,16,32,64,64]",
"_____no_output_____"
],
[
"class ConvLayer(nn.Module):\n def __init__(self, ni, nf, ks=3, stride=2, sub=0., **kwargs):\n super().__init__()\n self.conv = nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True)\n self.relu = GeneralRelu(sub=sub, **kwargs)\n \n def forward(self, x): return self.relu(self.conv(x))\n \n @property\n def bias(self): return -self.relu.sub\n @bias.setter\n def bias(self,v): self.relu.sub = -v\n @property\n def weight(self): return self.conv.weight",
"_____no_output_____"
],
[
"learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)",
"_____no_output_____"
]
],
[
[
"Now we're going to look at the paper [All You Need is a Good Init](https://arxiv.org/pdf/1511.06422.pdf), which introduces *Layer-wise Sequential Unit-Variance* (*LSUV*). We initialize our neural net with the usual technique, then we pass a batch through the model and check the outputs of the linear and convolutional layers. We can then rescale the weights according to the actual variance we observe on the activations, and subtract the mean we observe from the initial bias. That way we will have activations that stay normalized.\n\nWe repeat this process until we are satisfied with the mean/variance we observe.\n\nLet's start by looking at a baseline:",
"_____no_output_____"
]
],
[
[
"run.fit(2, learn)",
"train: [1.73625, tensor(0.3975, device='cuda:0')]\nvalid: [1.68747265625, tensor(0.5652, device='cuda:0')]\ntrain: [0.356792578125, tensor(0.8880, device='cuda:0')]\nvalid: [0.13243565673828125, tensor(0.9588, device='cuda:0')]\n"
]
],
[
[
"Now we recreate our model and we'll try again with LSUV. Hopefully, we'll get better results!",
"_____no_output_____"
]
],
[
[
"learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)",
"_____no_output_____"
]
],
[
[
"Helper function to get one batch of a given dataloader, with the callbacks called to preprocess it.",
"_____no_output_____"
]
],
[
[
"#export\ndef get_batch(dl, run):\n run.xb,run.yb = next(iter(dl))\n for cb in run.cbs: cb.set_runner(run)\n run('begin_batch')\n return run.xb,run.yb",
"_____no_output_____"
],
[
"xb,yb = get_batch(data.train_dl, run)",
"_____no_output_____"
]
],
[
[
"We only want the outputs of convolutional or linear layers. To find them, we need a recursive function. We can use `sum(list, [])` to concatenate the lists the function finds (`sum` applies the + operate between the elements of the list you pass it, beginning with the initial state in the second argument).",
"_____no_output_____"
]
],
[
[
"#export\ndef find_modules(m, cond):\n if cond(m): return [m]\n return sum([find_modules(o,cond) for o in m.children()], [])\n\ndef is_lin_layer(l):\n lin_layers = (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear, nn.ReLU)\n return isinstance(l, lin_layers)",
"_____no_output_____"
],
[
"mods = find_modules(learn.model, lambda o: isinstance(o,ConvLayer))",
"_____no_output_____"
],
[
"mods",
"_____no_output_____"
]
],
[
[
"This is a helper function to grab the mean and std of the output of a hooked layer.",
"_____no_output_____"
]
],
[
[
"def append_stat(hook, mod, inp, outp):\n d = outp.data\n hook.mean,hook.std = d.mean().item(),d.std().item()",
"_____no_output_____"
],
[
"mdl = learn.model.cuda()",
"_____no_output_____"
]
],
[
[
"So now we can look at the mean and std of the conv layers of our model.",
"_____no_output_____"
]
],
[
[
"with Hooks(mods, append_stat) as hooks:\n mdl(xb)\n for hook in hooks: print(hook.mean,hook.std)",
"0.3813672363758087 0.6907835006713867\n0.3570525348186493 0.651114284992218\n0.28284627199172974 0.5356632471084595\n0.2487572282552719 0.42617663741111755\n0.15965904295444489 0.2474386990070343\n"
]
],
[
[
"We first adjust the bias terms to make the means 0, then we adjust the standard deviations to make the stds 1 (with a threshold of 1e-3). The `mdl(xb) is not None` clause is just there to pass `xb` through `mdl` and compute all the activations so that the hooks get updated. ",
"_____no_output_____"
]
],
[
[
"#export\ndef lsuv_module(m, xb):\n h = Hook(m, append_stat)\n\n while mdl(xb) is not None and abs(h.mean) > 1e-3: m.bias -= h.mean\n while mdl(xb) is not None and abs(h.std-1) > 1e-3: m.weight.data /= h.std\n\n h.remove()\n return h.mean,h.std",
"_____no_output_____"
]
],
[
[
"We execute that initialization on all the conv layers in order:",
"_____no_output_____"
]
],
[
[
"for m in mods: print(lsuv_module(m, xb))",
"(0.17071205377578735, 1.0)\n(0.08888687938451767, 1.0000001192092896)\n(0.1499888300895691, 0.9999999403953552)\n(0.15749432146549225, 1.0)\n(0.3106708824634552, 1.0)\n"
]
],
[
[
"Note that the mean doesn't exactly stay at 0. since we change the standard deviation after by scaling the weight.",
"_____no_output_____"
],
[
"Then training is beginning on better grounds.",
"_____no_output_____"
]
],
[
[
"%time run.fit(2, learn)",
"train: [0.42438078125, tensor(0.8629, device='cuda:0')]\nvalid: [0.14604696044921875, tensor(0.9548, device='cuda:0')]\ntrain: [0.128675537109375, tensor(0.9608, device='cuda:0')]\nvalid: [0.09168212280273437, tensor(0.9733, device='cuda:0')]\nCPU times: user 4.09 s, sys: 504 ms, total: 4.6 s\nWall time: 4.61 s\n"
]
],
[
[
"LSUV is particularly useful for more complex and deeper architectures that are hard to initialize to get unit variance at the last layer.",
"_____no_output_____"
],
[
"## Export",
"_____no_output_____"
]
],
[
[
"!python notebook2script.py 07a_lsuv.ipynb",
"Converted 07a_lsuv.ipynb to exp/nb_07a.py\r\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a8077f2d8ec888ba94aae5a5011bd886e72b2cb
| 52,945 |
ipynb
|
Jupyter Notebook
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
Jiacheng98/pytorch-seq2seq
|
8c6b314b21750adae3b3fcd5d32ec7b7bfbd678a
|
[
"MIT"
] | 1 |
2021-04-20T01:51:40.000Z
|
2021-04-20T01:51:40.000Z
|
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
soohyunee/pytorch-seq2seq
|
8c6b314b21750adae3b3fcd5d32ec7b7bfbd678a
|
[
"MIT"
] | null | null | null |
1 - Sequence to Sequence Learning with Neural Networks.ipynb
|
soohyunee/pytorch-seq2seq
|
8c6b314b21750adae3b3fcd5d32ec7b7bfbd678a
|
[
"MIT"
] | null | null | null | 48.263446 | 915 | 0.609047 |
[
[
[
"# 1 - Sequence to Sequence Learning with Neural Networks\n\nIn this series we'll be building a machine learning model to go from once sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.\n\nIn this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper. \n\n## Introduction\n\nThe most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.\n\n\n\nThe above image shows an example translation. The input/source sentence, \"guten morgen\", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$:\n\n$$h_t = \\text{EncoderRNN}(e(x_t), h_{t-1})$$\n\nWe're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit). \n\nHere, we have $X = \\{x_1, x_2, ..., x_T\\}$, where $x_1 = \\text{<sos>}, x_2 = \\text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.\n\nOnce the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.\n\nNow we have our context vector, $z$, we can start decoding it to get the output/target sentence, \"good morning\". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. 
Thus, similar to the encoder, we can represent the decoder as:\n\n$$s_t = \\text{DecoderRNN}(d(y_t), s_{t-1})$$\n\nAlthough the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters.\n\nIn the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\\hat{y}_t$. \n\n$$\\hat{y}_t = f(s_t)$$\n\nThe words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/). \n\nWhen training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.\n\nOnce we have our predicted target sentence, $\\hat{Y} = \\{ \\hat{y}_1, \\hat{y}_2, ..., \\hat{y}_T \\}$, we compare it against our actual target sentence, $Y = \\{ y_1, y_2, ..., y_T \\}$, to calculate our loss. We then use this loss to update all of the parameters in our model.\n\n## Preparing Data\n\nWe'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.optim as optim\n\nfrom torchtext.datasets import Multi30k\nfrom torchtext.data import Field, BucketIterator\n\nimport spacy\nimport numpy as np\n\nimport random\nimport math\nimport time",
"_____no_output_____"
]
],
[
[
"We'll set the random seeds for deterministic results.",
"_____no_output_____"
]
],
[
[
"SEED = 1234\n\nrandom.seed(SEED)\nnp.random.seed(SEED)\ntorch.manual_seed(SEED)\ntorch.cuda.manual_seed(SEED)\ntorch.backends.cudnn.deterministic = True",
"_____no_output_____"
]
],
[
[
"Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. \"good morning!\" becomes [\"good\", \"morning\", \"!\"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, \"good\" and \"morning\" are both words and tokens, but \"!\" is a token, not a word. \n\nspaCy has model for each language (\"de_core_news_sm\" for German and \"en_core_web_sm\" for English) which need to be loaded so we can access the tokenizer of each model. \n\n**Note**: the models must first be downloaded using the following on the command line: \n```\npython -m spacy download en_core_web_sm\npython -m spacy download de_core_news_sm\n```\n\nWe load the models as such:",
"_____no_output_____"
]
],
[
[
"spacy_de = spacy.load('de_core_news_sm')\nspacy_en = spacy.load('en_core_web_sm')",
"_____no_output_____"
]
],
[
[
"Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.\n\nIn the paper we are implementing, they find it beneficial to reverse the order of the input which they believe \"introduces many short term dependencies in the data that make the optimization problem much easier\". We copy this by reversing the German sentence after it has been transformed into a list of tokens.",
"_____no_output_____"
]
],
[
[
"def tokenize_de(text):\n \"\"\"\n Tokenizes German text from a string into a list of strings (tokens) and reverses it\n \"\"\"\n return [tok.text for tok in spacy_de.tokenizer(text)][::-1]\n\ndef tokenize_en(text):\n \"\"\"\n Tokenizes English text from a string into a list of strings (tokens)\n \"\"\"\n return [tok.text for tok in spacy_en.tokenizer(text)]",
"_____no_output_____"
]
],
[
[
"torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61). \n\nWe set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the \"start of sequence\" and \"end of sequence\" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.",
"_____no_output_____"
]
],
[
[
"SRC = Field(tokenize = tokenize_de, \n init_token = '<sos>', \n eos_token = '<eos>', \n lower = True)\n\nTRG = Field(tokenize = tokenize_en, \n init_token = '<sos>', \n eos_token = '<eos>', \n lower = True)",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/field.py:150: UserWarning: Field class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)\n"
]
],
[
[
"Next, we download and load the train, validation and test data. \n\nThe dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence. \n\n`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.",
"_____no_output_____"
]
],
[
[
"train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'), \n fields = (SRC, TRG))",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/example.py:78: UserWarning: Example class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('Example class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.', UserWarning)\n"
]
],
[
[
"We can double check that we've loaded the right number of examples:",
"_____no_output_____"
]
],
[
[
"print(f\"Number of training examples: {len(train_data.examples)}\")\nprint(f\"Number of validation examples: {len(valid_data.examples)}\")\nprint(f\"Number of testing examples: {len(test_data.examples)}\")",
"Number of training examples: 29000\nNumber of validation examples: 1014\nNumber of testing examples: 1000\n"
]
],
[
[
"We can also print out an example, making sure the source sentence is reversed:",
"_____no_output_____"
]
],
[
[
"print(vars(train_data.examples[0]))",
"{'src': ['.', 'büsche', 'vieler', 'nähe', 'der', 'in', 'freien', 'im', 'sind', 'männer', 'weiße', 'junge', 'zwei'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}\n"
]
],
[
[
"The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.\n\nNext, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.\n\nUsing the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.\n\nIt is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents \"information leakage\" into our model, giving us artifically inflated validation/test scores.",
"_____no_output_____"
]
],
[
[
"SRC.build_vocab(train_data, min_freq = 2)\nTRG.build_vocab(train_data, min_freq = 2)",
"_____no_output_____"
],
[
"print(f\"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}\")\nprint(f\"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}\")",
"Unique tokens in source (de) vocabulary: 7853\nUnique tokens in target (en) vocabulary: 5893\n"
]
],
[
[
"The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary. \n\nWe also need to define a `torch.device`. This is used to tell torchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.\n\nWhen we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchText iterators handle this for us! \n\nWe use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences. ",
"_____no_output_____"
]
],
[
[
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')",
"_____no_output_____"
],
[
"BATCH_SIZE = 128\n\ntrain_iterator, valid_iterator, test_iterator = BucketIterator.splits(\n (train_data, valid_data, test_data), \n batch_size = BATCH_SIZE, \n device = device)",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/iterator.py:48: UserWarning: BucketIterator class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)\n"
]
],
[
[
"## Building the Seq2Seq Model\n\nWe'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.\n\n### Encoder\n\nFirst, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers. \n\nFor a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\\{h_1, h_2, ..., h_T\\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:\n\n$$h_t^1 = \\text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$\n\nThe hidden states in the second layer are given by:\n\n$$h_t^2 = \\text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$\n\nUsing a multi-layer RNN also means we'll also need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$.\n\nWithout going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.\n\n$$\\begin{align*}\nh_t &= \\text{RNN}(e(x_t), h_{t-1})\\\\\n(h_t, c_t) &= \\text{LSTM}(e(x_t), h_{t-1}, c_{t-1})\n\\end{align*}$$\n\nWe can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.\n\nExtending our multi-layer equations to LSTMs, we get:\n\n$$\\begin{align*}\n(h_t^1, c_t^1) &= \\text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\\\\n(h_t^2, c_t^2) &= \\text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))\n\\end{align*}$$\n\nNote how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.\n\nSo our encoder looks something like this: \n\n\n\nWe create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:\n- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.\n- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions. \n- `hid_dim` is the dimensionality of the hidden and cell states.\n- `n_layers` is the number of layers in the RNN.\n- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.\n\nWe aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. 
To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/). \n\nThe embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.\n\nOne thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.\n\nIn the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), that if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros. \n\nThe RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).\n\nAs we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`. \n\nThe sizes of each of the tensors is left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.",
"_____no_output_____"
]
],
[
[
"class Encoder(nn.Module):\n def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):\n super().__init__()\n \n self.hid_dim = hid_dim\n self.n_layers = n_layers\n \n self.embedding = nn.Embedding(input_dim, emb_dim)\n \n self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)\n \n self.dropout = nn.Dropout(dropout)\n \n def forward(self, src):\n \n #src = [src len, batch size]\n \n embedded = self.dropout(self.embedding(src))\n \n #embedded = [src len, batch size, emb dim]\n \n outputs, (hidden, cell) = self.rnn(embedded)\n \n #outputs = [src len, batch size, hid dim * n directions]\n #hidden = [n layers * n directions, batch size, hid dim]\n #cell = [n layers * n directions, batch size, hid dim]\n \n #outputs are always from the top hidden layer\n \n return hidden, cell",
"_____no_output_____"
]
],
[
[
"### Decoder\n\nNext, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.\n\n\n\nThe `Decoder` class does a single step of decoding, i.e. it ouputs single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.\n\n$$\\begin{align*}\n(s_t^1, c_t^1) = \\text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\\\\n(s_t^2, c_t^2) = \\text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))\n\\end{align*}$$\n\nRemember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.\n\nWe then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\\hat{y}_{t+1}$. \n\n$$\\hat{y}_{t+1} = f(s_t^L)$$\n\nThe arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.\n\nWithin the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.\n\n**Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.",
"_____no_output_____"
]
],
[
[
"class Decoder(nn.Module):\n def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):\n super().__init__()\n \n self.output_dim = output_dim\n self.hid_dim = hid_dim\n self.n_layers = n_layers\n \n self.embedding = nn.Embedding(output_dim, emb_dim)\n \n self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)\n \n self.fc_out = nn.Linear(hid_dim, output_dim)\n \n self.dropout = nn.Dropout(dropout)\n \n def forward(self, input, hidden, cell):\n \n #input = [batch size]\n #hidden = [n layers * n directions, batch size, hid dim]\n #cell = [n layers * n directions, batch size, hid dim]\n \n #n directions in the decoder will both always be 1, therefore:\n #hidden = [n layers, batch size, hid dim]\n #context = [n layers, batch size, hid dim]\n \n input = input.unsqueeze(0)\n \n #input = [1, batch size]\n \n embedded = self.dropout(self.embedding(input))\n \n #embedded = [1, batch size, emb dim]\n \n output, (hidden, cell) = self.rnn(embedded, (hidden, cell))\n \n #output = [seq len, batch size, hid dim * n directions]\n #hidden = [n layers * n directions, batch size, hid dim]\n #cell = [n layers * n directions, batch size, hid dim]\n \n #seq len and n directions will always be 1 in the decoder, therefore:\n #output = [1, batch size, hid dim]\n #hidden = [n layers, batch size, hid dim]\n #cell = [n layers, batch size, hid dim]\n \n prediction = self.fc_out(output.squeeze(0))\n \n #prediction = [batch size, output dim]\n \n return prediction, hidden, cell",
"_____no_output_____"
]
],
[
[
"### Seq2Seq\n\nFor the final part of the implemenetation, we'll implement the seq2seq model. This will handle: \n- receiving the input/source sentence\n- using the encoder to produce the context vectors \n- using the decoder to produce the predicted output/target sentence\n\nOur full model will look like this:\n\n\n\nThe `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).\n\nFor this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case, we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the decoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.\n\nOur `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teaching forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence. \n\nThe first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\\hat{Y}$.\n\nWe then feed the input/source sentence, `src`, into the encoder and receive out final hidden and cell states.\n\nThe first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder. \n\nDuring each iteration of the loop, we:\n- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder\n- receive a prediction, next hidden state and next cell state ($\\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder\n- place our prediction, $\\hat{y}_{t+1}$/`output` in our tensor of predictions, $\\hat{Y}$/`outputs`\n- decide if we are going to \"teacher force\" or not\n - if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`\n - if we don't, the next `input` is the predicted next token in the sequence, $\\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor\n \nOnce we've made all of our predictions, we return our tensor full of predictions, $\\hat{Y}$/`outputs`.\n\n**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. 
So our `trg` and `outputs` look something like:\n\n$$\\begin{align*}\n\\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [0, &\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nLater on when we calculate the loss, we cut off the first element of each tensor to get:\n\n$$\\begin{align*}\n\\text{trg} = [&y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [&\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$",
"_____no_output_____"
]
],
[
[
"class Seq2Seq(nn.Module):\n def __init__(self, encoder, decoder, device):\n super().__init__()\n \n self.encoder = encoder\n self.decoder = decoder\n self.device = device\n \n assert encoder.hid_dim == decoder.hid_dim, \\\n \"Hidden dimensions of encoder and decoder must be equal!\"\n assert encoder.n_layers == decoder.n_layers, \\\n \"Encoder and decoder must have equal number of layers!\"\n \n def forward(self, src, trg, teacher_forcing_ratio = 0.5):\n \n #src = [src len, batch size]\n #trg = [trg len, batch size]\n #teacher_forcing_ratio is probability to use teacher forcing\n #e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time\n \n batch_size = trg.shape[1]\n trg_len = trg.shape[0]\n trg_vocab_size = self.decoder.output_dim\n \n #tensor to store decoder outputs\n outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)\n \n #last hidden state of the encoder is used as the initial hidden state of the decoder\n hidden, cell = self.encoder(src)\n \n #first input to the decoder is the <sos> tokens\n input = trg[0,:]\n \n for t in range(1, trg_len):\n \n #insert input token embedding, previous hidden and previous cell states\n #receive output tensor (predictions) and new hidden and cell states\n output, hidden, cell = self.decoder(input, hidden, cell)\n \n #place predictions in a tensor holding predictions for each token\n outputs[t] = output\n \n #decide if we are going to use teacher forcing or not\n teacher_force = random.random() < teacher_forcing_ratio\n \n #get the highest predicted token from our predictions\n top1 = output.argmax(1) \n \n #if teacher forcing, use actual next token as next input\n #if not, use predicted token\n input = trg[t] if teacher_force else top1\n \n return outputs",
"_____no_output_____"
]
],
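Before training, a quick shape check can confirm the model wiring. The sketch below is not part of the original tutorial; the dimensions are made up purely for the test, and it assumes the `Encoder` and `Decoder` classes defined earlier in the notebook (called with the same positional arguments as in the training setup):

```python
import torch

# Tiny, made-up dimensions purely for a shape test
enc = Encoder(100, 8, 16, 2, 0.0)   # input_dim, emb_dim, hid_dim, n_layers, dropout
dec = Decoder(120, 8, 16, 2, 0.0)   # output_dim, emb_dim, hid_dim, n_layers, dropout
toy_model = Seq2Seq(enc, dec, torch.device('cpu'))

src = torch.randint(0, 100, (7, 4))  # [src len, batch size]
trg = torch.randint(0, 120, (9, 4))  # [trg len, batch size]

out = toy_model(src, trg)
assert out.shape == (9, 4, 120)      # [trg len, batch size, output dim]
```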
[
[
"# Training the Seq2Seq Model\n\nNow we have our model implemented, we can begin training it. \n\nFirst, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimesions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same. \n\nWe then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.",
"_____no_output_____"
]
],
[
[
"INPUT_DIM = len(SRC.vocab)\nOUTPUT_DIM = len(TRG.vocab)\nENC_EMB_DIM = 256\nDEC_EMB_DIM = 256\nHID_DIM = 512\nN_LAYERS = 2\nENC_DROPOUT = 0.5\nDEC_DROPOUT = 0.5\n\nenc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)\ndec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)\n\nmodel = Seq2Seq(enc, dec, device).to(device)",
"_____no_output_____"
]
],
[
[
"Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\\mathcal{U}(-0.08, 0.08)$.\n\nWe initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.",
"_____no_output_____"
]
],
[
[
"def init_weights(m):\n for name, param in m.named_parameters():\n nn.init.uniform_(param.data, -0.08, 0.08)\n \nmodel.apply(init_weights)",
"_____no_output_____"
]
],
[
[
"We also define a function that will calculate the number of trainable parameters in the model.",
"_____no_output_____"
]
],
[
[
"def count_parameters(model):\n return sum(p.numel() for p in model.parameters() if p.requires_grad)\n\nprint(f'The model has {count_parameters(model):,} trainable parameters')",
"The model has 13,898,501 trainable parameters\n"
]
],
[
[
"We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.",
"_____no_output_____"
]
],
[
[
"optimizer = optim.Adam(model.parameters())",
"_____no_output_____"
]
],
[
[
"Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions. \n\nOur loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token. ",
"_____no_output_____"
]
],
[
[
"TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]\n\ncriterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)",
"_____no_output_____"
]
],
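To see the effect of `ignore_index` concretely, the toy example below (not from the original tutorial) compares the masked loss against a manual computation over the non-pad positions only:

```python
import torch
import torch.nn as nn

pad_idx = 0
masked_crit = nn.CrossEntropyLoss(ignore_index=pad_idx)

logits = torch.randn(4, 5)                  # 4 positions, vocabulary of 5
targets = torch.tensor([2, 4, pad_idx, 1])  # the third position is padding

keep = targets != pad_idx
manual = nn.CrossEntropyLoss()(logits[keep], targets[keep])

# The padded position contributes nothing to the average
assert torch.isclose(masked_crit(logits, targets), manual)
```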
[
[
"Next, we'll define our training loop. \n\nFirst, we'll set the model into \"training mode\" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.\n\nAs stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:\n\n$$\\begin{align*}\n\\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [0, &\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nHere, when we calculate the loss, we cut off the first element of each tensor to get:\n\n$$\\begin{align*}\n\\text{trg} = [&y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [&\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nAt each iteration:\n- get the source and target sentences from the batch, $X$ and $Y$\n- zero the gradients calculated from the last batch\n- feed the source and target into the model to get the output, $\\hat{Y}$\n- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`\n - we slice off the first column of the output and target tensors as mentioned above\n- calculate the gradients with `loss.backward()`\n- clip the gradients to prevent them from exploding (a common issue in RNNs)\n- update the parameters of our model by doing an optimizer step\n- sum the loss value to a running total\n\nFinally, we return the loss that is averaged over all batches.",
"_____no_output_____"
]
],
[
[
"def train(model, iterator, optimizer, criterion, clip):\n \n model.train()\n \n epoch_loss = 0\n \n for i, batch in enumerate(iterator):\n \n src = batch.src\n trg = batch.trg\n \n optimizer.zero_grad()\n \n output = model(src, trg)\n \n #trg = [trg len, batch size]\n #output = [trg len, batch size, output dim]\n \n output_dim = output.shape[-1]\n \n output = output[1:].view(-1, output_dim)\n trg = trg[1:].view(-1)\n \n #trg = [(trg len - 1) * batch size]\n #output = [(trg len - 1) * batch size, output dim]\n \n loss = criterion(output, trg)\n \n loss.backward()\n \n torch.nn.utils.clip_grad_norm_(model.parameters(), clip)\n \n optimizer.step()\n \n epoch_loss += loss.item()\n \n return epoch_loss / len(iterator)",
"_____no_output_____"
]
],
[
[
"Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.\n\nWe must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).\n\nWe use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up. \n\nThe iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use it's own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.",
"_____no_output_____"
]
],
[
[
"def evaluate(model, iterator, criterion):\n \n model.eval()\n \n epoch_loss = 0\n \n with torch.no_grad():\n \n for i, batch in enumerate(iterator):\n\n src = batch.src\n trg = batch.trg\n\n output = model(src, trg, 0) #turn off teacher forcing\n\n #trg = [trg len, batch size]\n #output = [trg len, batch size, output dim]\n\n output_dim = output.shape[-1]\n \n output = output[1:].view(-1, output_dim)\n trg = trg[1:].view(-1)\n\n #trg = [(trg len - 1) * batch size]\n #output = [(trg len - 1) * batch size, output dim]\n\n loss = criterion(output, trg)\n \n epoch_loss += loss.item()\n \n return epoch_loss / len(iterator)",
"_____no_output_____"
]
],
[
[
"Next, we'll create a function that we'll use to tell us how long an epoch takes.",
"_____no_output_____"
]
],
[
[
"def epoch_time(start_time, end_time):\n elapsed_time = end_time - start_time\n elapsed_mins = int(elapsed_time / 60)\n elapsed_secs = int(elapsed_time - (elapsed_mins * 60))\n return elapsed_mins, elapsed_secs",
"_____no_output_____"
]
],
[
[
"We can finally start training our model!\n\nAt each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss. \n\nWe'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.",
"_____no_output_____"
]
],
[
[
"N_EPOCHS = 10\nCLIP = 1\n\nbest_valid_loss = float('inf')\n\nfor epoch in range(N_EPOCHS):\n \n start_time = time.time()\n \n train_loss = train(model, train_iterator, optimizer, criterion, CLIP)\n valid_loss = evaluate(model, valid_iterator, criterion)\n \n end_time = time.time()\n \n epoch_mins, epoch_secs = epoch_time(start_time, end_time)\n \n if valid_loss < best_valid_loss:\n best_valid_loss = valid_loss\n torch.save(model.state_dict(), 'tut1-model.pt')\n \n print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')\n print(f'\\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')\n print(f'\\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/batch.py:23: UserWarning: Batch class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)\n"
]
],
[
[
"We'll load the parameters (`state_dict`) that gave our model the best validation loss and run it the model on the test set.",
"_____no_output_____"
]
],
[
[
"model.load_state_dict(torch.load('tut1-model.pt'))\n\ntest_loss = evaluate(model, test_iterator, criterion)\n\nprint(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')",
"| Test Loss: 3.951 | Test PPL: 52.001 |\n"
]
],
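The test loss tells us little about what the translations actually look like. Below is a minimal greedy-decoding sketch, not part of the original tutorial: it assumes the legacy torchtext `SRC`/`TRG` fields defined earlier (with their `tokenize`, `init_token`, `eos_token` and `vocab` attributes) and decodes one token at a time until `<eos>`:

```python
def translate_sentence(sentence, src_field, trg_field, model, device, max_len=50):
    model.eval()
    tokens = [src_field.init_token] + src_field.tokenize(sentence) + [src_field.eos_token]
    src_indexes = [src_field.vocab.stoi[token] for token in tokens]
    src_tensor = torch.LongTensor(src_indexes).unsqueeze(1).to(device)  # [src len, 1]
    with torch.no_grad():
        hidden, cell = model.encoder(src_tensor)
        trg_indexes = [trg_field.vocab.stoi[trg_field.init_token]]
        for _ in range(max_len):
            trg_tensor = torch.LongTensor([trg_indexes[-1]]).to(device)
            output, hidden, cell = model.decoder(trg_tensor, hidden, cell)
            pred_token = output.argmax(1).item()
            trg_indexes.append(pred_token)
            if pred_token == trg_field.vocab.stoi[trg_field.eos_token]:
                break
    return [trg_field.vocab.itos[i] for i in trg_indexes]

# e.g. translate_sentence('ein mann geht die straße entlang .', SRC, TRG, model, device)
```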
[
[
"In the following notebook we'll implement a model that achieves improved test perplexity, but only uses a single layer in the encoder and the decoder.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a80853fa0a011e63c3fcf3c9eb8b90df547fda4
| 109,082 |
ipynb
|
Jupyter Notebook
|
ASS 9.ipynb
|
Ann-ah/ADS-Assignment-9
|
f54784c7e2d6cb50c0d91027e20a451cd6f7b5fc
|
[
"MIT"
] | null | null | null |
ASS 9.ipynb
|
Ann-ah/ADS-Assignment-9
|
f54784c7e2d6cb50c0d91027e20a451cd6f7b5fc
|
[
"MIT"
] | null | null | null |
ASS 9.ipynb
|
Ann-ah/ADS-Assignment-9
|
f54784c7e2d6cb50c0d91027e20a451cd6f7b5fc
|
[
"MIT"
] | null | null | null | 54.925478 | 7,760 | 0.593608 |
[
[
[
"conda install pandas",
"Collecting package metadata (current_repodata.json): ...working... done\nSolving environment: ...working... done\n\n# All requested packages already installed.\n\n\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"conda install numpy",
"Collecting package metadata (current_repodata.json): ...working... done\nSolving environment: ...working... done\n\n# All requested packages already installed.\n\n\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"conda install matplotlib",
"Collecting package metadata (current_repodata.json): ...working... done\nSolving environment: ...working... done\n\n# All requested packages already installed.\n\n\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"pip install plotly",
"Requirement already satisfied: plotly in c:\\users\\admin\\anaconda3\\lib\\site-packages (5.1.0)Note: you may need to restart the kernel to use updated packages.\nRequirement already satisfied: six in c:\\users\\admin\\anaconda3\\lib\\site-packages (from plotly) (1.15.0)\nRequirement already satisfied: tenacity>=6.2.0 in c:\\users\\admin\\anaconda3\\lib\\site-packages (from plotly) (8.0.1)\n\n"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport plotly.express as px\nfrom scipy import stats\nimport warnings\n%matplotlib inline\n\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV\nfrom sklearn.preprocessing import StandardScaler, LabelEncoder\nfrom sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score, confusion_matrix, classification_report, accuracy_score\nfrom sklearn.linear_model import ElasticNet, LogisticRegression\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.ensemble import BaggingRegressor, AdaBoostRegressor, RandomForestClassifier\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.feature_selection import RFE\nfrom scipy.stats import chi2_contingency\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nimport pickle\n\n\n",
"_____no_output_____"
],
[
"df = pd.read_csv(\"heart.csv\")\ndf",
"_____no_output_____"
]
],
[
[
"1. Load in the data. The target column should be considered as whether a patient will develop heart disease or not.",
"_____no_output_____"
]
],
[
[
"X_df = df.drop(\"target\", axis=1)\nX_df.shape",
"_____no_output_____"
],
[
"y_df = df[\"target\"]\ny_df.shape",
"_____no_output_____"
]
],
[
[
"2. Explore the data. Notice all columns are numerical. Therefore separate the continuous from the discrete features.",
"_____no_output_____"
]
],
[
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 303 entries, 0 to 302\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 303 non-null int64 \n 1 sex 303 non-null int64 \n 2 cp 303 non-null int64 \n 3 trestbps 303 non-null int64 \n 4 chol 303 non-null int64 \n 5 fbs 303 non-null int64 \n 6 restecg 303 non-null int64 \n 7 thalach 303 non-null int64 \n 8 exang 303 non-null int64 \n 9 oldpeak 303 non-null float64\n 10 slope 303 non-null int64 \n 11 ca 303 non-null int64 \n 12 thal 303 non-null int64 \n 13 target 303 non-null int64 \ndtypes: float64(1), int64(13)\nmemory usage: 33.2 KB\n"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df.nunique()",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"numerical_continuous = []\nfor column in df.columns:\n if df[column].dtypes != \"object\":\n if df[column].nunique() >= 10:\n numerical_continuous.append(column)\nnumerical_continuous",
"_____no_output_____"
],
[
"numerical_discreet = []\nfor column in df.columns:\n if df[column].dtypes != \"object\":\n if df[column].nunique() < 10:\n numerical_discreet.append(column)\nnumerical_discreet",
"_____no_output_____"
]
],
[
[
"3. Identify any presence of outliers in the continuous features and resolve them using the IQR method.",
"_____no_output_____"
]
],
[
[
"for column in numerical_continuous:\n (df[column].value_counts()/df.shape[0]).plot(kind = \"box\")\n plt.title(column)\n plt.show()",
"_____no_output_____"
],
[
"def remove_outlier(df, numerical_continuous):\n q1 = df[numerical_continuous].quantile(0.25)\n q3 = df[numerical_continuous].quantile(0.75)\n iqr = q3-q1\n fence_low = q1-1.5 * iqr\n fence_high = q3+1.5 * iqr\n df = df.loc[(df[numerical_continuous] > fence_low) & (df[numerical_continuous] < fence_high)]\n return df\n re_dat = remove_outlier(stepframe, stepframe.columns)",
"_____no_output_____"
],
[
"for column in numerical_continuous:\n lower, upper = remove_outlier(df[column])\n df = df.loc[(df[column] > lower) & (df[column] < upper)]",
"_____no_output_____"
]
],
[
[
"4. Binned the continuous column values apart from the column ‘oldpeak’.",
"_____no_output_____"
]
],
[
[
"le = LabelEncoder()\nfor column in numerical_continuous[:-1]: \n df[column] = pd.qcut(df[column], q = [0, 0.25, 0.50, 0.75, 1])\n df[column] = le.fit_transform(df[column])",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"5.Separate the features from the labels and use the most appropriate feature selection technique(s).",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2\nfeature_sel_df = df.drop([\"target\"], axis = 1)\nfeature_sel_df[numerical_continuous] = feature_sel_df[numerical_continuous]\nselector = SelectKBest(score_func=chi2, k=3)\nselected_df = selector.fit_transform(feature_sel_df, df[\"target\"])\nselected_df",
"_____no_output_____"
]
],
[
[
"6. Slice the data and scale the features.",
"_____no_output_____"
]
],
[
[
"scaled_df = df[[numerical_continuous]]\nprint(\"mean:\", scaled_df[numerical_continuous].mean())\nprint(\"standard deviation:\", scaled_df[numerical_continuous].std())",
"_____no_output_____"
]
],
[
[
"7. Identify the data if the data is balanced. If not, sample the data using the most appropriate method keeping the size of the data in mind.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import plot_confusion_matrix\nplot_confusion_matrix(clf, X, y)\nfrom sklearn.metrics import roc_curve\ny_prob = clf.predict_proba (X_test)\ny_probs = y_probs[:,1]\nFpr, tpr, thresholds = roc_curve(y_test, y_prob)\nFpr\nimport matplotlib.pyplot as plt\ndef plot_roc_curve(Fpr, tpr)\n\n",
"_____no_output_____"
]
],
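If the split were heavily skewed, one option that suits a dataset this small (~300 rows) is random oversampling of the minority class. A hedged sketch using `sklearn.utils.resample`, an extra import beyond those loaded above:

```python
from sklearn.utils import resample

majority_label = df['target'].value_counts().idxmax()
majority = df[df['target'] == majority_label]
minority = df[df['target'] != majority_label]

# Sample the minority class with replacement up to the majority size
minority_upsampled = resample(minority, replace=True,
                              n_samples=len(majority), random_state=42)
balanced_df = pd.concat([majority, minority_upsampled])
print(balanced_df['target'].value_counts())
```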
[
[
"8. Using at least 4 classification methods, identify the best machine learning model using their training and testing accuracy scores.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.33, random_state=42)",
"_____no_output_____"
],
[
"log_reg = LogisticRegression(random_state = 0)\nsvm_clf = SVC(random_state = 0)\nknn_clf = KNeighborsClassifier()\nrf_clf = RandomForestClassifier(random_state = 0)\n\nmodels = {'LogisticRegression': log_reg, 'SVC': svm_clf, 'KNeighborsClassifier': knn_clf, 'RandomForestClassifier': rf_clf}",
"_____no_output_____"
],
[
"def model_training_testing(models):\n for model_name, model in models.items():\n model.fit(X_train, y_train)\n y_predict_trian = model.predict(X_train)\n y_predict_test = model.predict(X_test)\n print(f'{model_name} Training Accuracy:', accuracy_score(y_train, np.round(y_predict_trian)))\n print(f'{model_name} Testing Accuracy:', accuracy_score(y_test, np.round(y_predict_test)))\n print('\\n')",
"_____no_output_____"
],
[
"model_training_testing(models)",
"_____no_output_____"
]
],
[
[
"9. Hyper parameter tune the best model using grid search to identify the best performing model.",
"_____no_output_____"
]
],
[
[
"params = {'n_estimators': np.arange(10, 100, 10), 'random_state': [0], 'n_jobs': [1, -1]} \ngrid_search = GridSearchCV(RandomForestClassifier(), params, n_jobs = -1, cv = 5)\n\ngrid_search.fit(X_train, y_train)\n\ngrid_search.best_estimator_",
"_____no_output_____"
]
],
[
[
"10. Redefine the model instance based on the grid search results, train it and evaluate it using:\na. A classification report.\nb. A visual representation and well labelled confusion matrix.\nc. AUC score. (Explain the score in a markdown cell.)\nd. ROC curve.",
"_____no_output_____"
]
],
[
[
"def model_evaluation(model, X, y, model_name):\n y_predict = model.predict(X)\n print(f'Model: {model_name} \\n \\n Classification Report: {classification_report(y, y_predict)}')\n\n cnf_matrix = confusion_matrix(y, y_predict)\n \n class_names = [0, 1]\n tick_marks = np.arange(len(class_names))\n plt.figure(figsize = (9, 7))\n\n sns.heatmap(pd.DataFrame(cnf_matrix), annot = True, cmap = \"YlGnBu\", fmt = 'g')\n\n plt.title(f'{model_name} Confusion Matrix', y = 1.1, fontsize = 22)\n plt.ylabel('Actual Label', fontsize = 15)\n plt.xlabel('Predicted Label', fontsize = 15)",
"_____no_output_____"
],
[
"model_evaluation(rf_clf_tuned, X_test, y_test, model_name = 'Random Forest Classifier Tuned')",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_auc_score, roc_curve\n\ny_pred_prob = rf_clf_tuned.predict_proba(X_test)[:, 1]\nprint(f'Area Under the Curve Score: {roc_auc_score(y_test, y_pred_prob)}')",
"_____no_output_____"
],
[
"fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)\n\ndf_roc = pd.DataFrame([fpr, tpr]).T\ndf_roc.columns = ['False Positive Ratio', 'True Positive Ratio']\n\nimport plotly.express as px\n\nfig = px.line(df_roc, x = 'False Positive Ratio', y = 'True Positive Ratio')\nfig.update_layout(title = dict(text = \"ROC Curve.\", y = 0.95, x = 0.5, \n xanchor = 'center', yanchor = 'top', font = dict(size = 20)))",
"_____no_output_____"
]
],
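To answer question 11 quantitatively, one common rule is Youden's J statistic (TPR − FPR): pick the threshold that maximizes it. A sketch reusing the `fpr`, `tpr`, and `thresholds` arrays computed above; given the medical context, a threshold somewhat below this optimum may be preferable so that fewer heart-disease cases are missed:

```python
j_scores = tpr - fpr
best_idx = np.argmax(j_scores)
print(f"Threshold maximising Youden's J: {thresholds[best_idx]:.3f}")
print(f"TPR: {tpr[best_idx]:.3f}, FPR: {fpr[best_idx]:.3f}")
```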
[
[
"11. Based on the results on the ROC curve, which threshold would be ideal given the nature of the data? (Explain in a markdown cell.)",
"_____no_output_____"
],
[
"12. Save the model as ‘classification_model’.",
"_____no_output_____"
]
],
[
[
"pickle.dump(rf_clf_tuned, open(\"classification_model.pkl\", \"wb\"))",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a808a3e49ecea2983305a6a01fc14e639405885
| 2,323 |
ipynb
|
Jupyter Notebook
|
python_learning/Leetcode examples/.ipynb_checkpoints/InsertionSort-checkpoint.ipynb
|
sssssch/jupyter-examples
|
cf9e26e22dcfa263bcd26323527911cdbcc2cd61
|
[
"MIT"
] | 2 |
2020-07-29T13:07:52.000Z
|
2021-01-15T09:22:07.000Z
|
python_learning/Leetcode examples/Sort/InsertionSort.ipynb
|
sssssch/jupyter-examples
|
cf9e26e22dcfa263bcd26323527911cdbcc2cd61
|
[
"MIT"
] | null | null | null |
python_learning/Leetcode examples/Sort/InsertionSort.ipynb
|
sssssch/jupyter-examples
|
cf9e26e22dcfa263bcd26323527911cdbcc2cd61
|
[
"MIT"
] | null | null | null | 28.679012 | 114 | 0.507103 |
[
[
[
"# %load /Users/mac/Downloads/深度学习资料/Leetcode/2/InsertionSort.py\n#!/usr/bin/env python3\n# File: /Users/king/Python初级算法/code/2/insertionSort.py\n# Project: /Users/king/Python初级算法/code/2\n# Created Date: 2018/10/26\n# Author: hstking [email protected]\n\n\nimport random\ndef randomList(n):\n '''返回一个长度为n的整数列表,数据范围[0,1000) '''\n iList = []\n for i in range(n):\n iList.append(random.randrange(1000))\n return iList\nimport timeit\n\niList = randomList(20)\n\ndef insertionSort(iList):\n if len(iList) <= 1:\n return iList\n for right in range(1, len(iList)):\n target = iList[right]\n for left in range(0, right):\n if target <= iList[left]:\n iList[left+1:right+1] = iList[left:right] #使用Python的切片赋值\n iList[left] = target\n break\n # print(\"第 %d 轮排序结果:\" %(right), end=\"\")\n # print(iList)\n return iList\n\nif __name__ == \"__main__\":\n print(iList)\n print(insertionSort(iList))\n print(timeit.timeit(\"insertionSort(iList)\", \"from __main__ import insertionSort,iList\", number=100))",
"[654, 15, 991, 671, 177, 339, 707, 355, 241, 112, 305, 120, 389, 751, 114, 157, 921, 260, 326, 206]\n[15, 112, 114, 120, 157, 177, 206, 241, 260, 305, 326, 339, 355, 389, 654, 671, 707, 751, 921, 991]\n0.0016987839990179054\n"
]
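For contrast, the standard library's `bisect` module finds each insertion point with binary search, so comparisons drop to O(n log n) even though element shifting is still O(n) per insert. A short sketch of the same algorithm:

```python
import bisect

def insertion_sort_bisect(items):
    result = []
    for x in items:
        bisect.insort(result, x)  # binary-search the position, then insert
    return result

print(insertion_sort_bisect([654, 15, 991, 671, 177]))  # [15, 177, 654, 671, 991]
```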
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a809a71fcabe28f780c2b418d3540566631c3b3
| 77,807 |
ipynb
|
Jupyter Notebook
|
reviews/Jupyter_Widgets/Continuous Updating.ipynb
|
Aquaveo/jupyter_to_tethys
|
b2fc246d31ef0526666bc5db856551ec9704f315
|
[
"MIT"
] | null | null | null |
reviews/Jupyter_Widgets/Continuous Updating.ipynb
|
Aquaveo/jupyter_to_tethys
|
b2fc246d31ef0526666bc5db856551ec9704f315
|
[
"MIT"
] | null | null | null |
reviews/Jupyter_Widgets/Continuous Updating.ipynb
|
Aquaveo/jupyter_to_tethys
|
b2fc246d31ef0526666bc5db856551ec9704f315
|
[
"MIT"
] | 1 |
2019-02-12T19:17:54.000Z
|
2019-02-12T19:17:54.000Z
| 160.096708 | 53,713 | 0.710322 |
[
[
[
"This IPython Notebook contains simple examples of the line function. \n\nTo clear all previously rendered cell outputs, select from the menu:\n\n Cell -> All Output -> Clear",
"_____no_output_____"
]
],
[
[
"import time\n\nimport numpy as np\nfrom bokeh.io import push_notebook, show, output_notebook\nfrom bokeh.models import HoverTool\nfrom bokeh.plotting import figure \noutput_notebook()",
"_____no_output_____"
],
[
"N = 1000\nx = np.random.random(size=N) * 100\ny = np.random.random(size=N) * 100\nradii = np.random.random(size=N) * 2\ncolors = [\"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)]",
"_____no_output_____"
],
[
"TOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,box_select,lasso_select\"\n\np = figure(tools=TOOLS)\np.axis.major_label_text_font_size = \"18pt\"\nhover = HoverTool(tooltips=None, mode=\"vline\")\np.add_tools(hover)\nr = p.circle(x,y, radius=radii, \n fill_color=colors, fill_alpha=0.6, line_color=None, \n hover_fill_color=\"black\", hover_fill_alpha=0.7, hover_line_color=None)\n",
"_____no_output_____"
],
[
"# get and explicit handle to update the next show cell with\ntarget = show(p, notebook_handle=True)",
"_____no_output_____"
],
[
"i = 0\nwhile True:\n i +=1 \n p.title.text = str(i)\n \n r.data_source.data['radius'] = radii * (2 + np.sin(i/5))\n \n x = r.data_source.data['x']\n y = r.data_source.data['y']\n d = np.sqrt((x-50)**2 + (y-50)**2)/100\n rand = 2 * (np.random.random(size=N) - 0.5)\n r.data_source.data['x'] = x + 2 * np.sin(d) * rand\n r.data_source.data['y'] = y + np.cos(d**2) * rand\n \n p.axis.major_label_text_color = r.data_source.data['fill_color'][int(i%N)]\n\n # push updates to the plot continuously using the handle (intererrupt the notebook kernel to stop)\n push_notebook(handle=target)\n time.sleep(0.1)",
"_____no_output_____"
],
[
"# Update the hover glyph propertes using the explicit handle (go hover over the plot)\nr.hover_glyph.fill_color = \"white\"\nr.hover_glyph.fill_alpha = 0.5\nhover.mode = \"vline\"\npush_notebook(handle=target)",
"_____no_output_____"
]
]
] |
[
"raw",
"code"
] |
[
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a809d8a1ec74b5a4865c93b41e34df2ea574fd8
| 4,318 |
ipynb
|
Jupyter Notebook
|
notebooks/date/since_epoch.ipynb
|
hubert-thieriot/gee_tools
|
e4907d6a6b536326fdd8495e86d42d72694efede
|
[
"MIT"
] | 369 |
2017-05-22T22:13:20.000Z
|
2022-03-31T09:40:12.000Z
|
notebooks/date/since_epoch.ipynb
|
yqx674834119/gee_tools
|
d7b35174933739f3aeee439d622e5fab57b6dd2d
|
[
"MIT"
] | 58 |
2017-10-31T13:15:32.000Z
|
2022-03-18T16:06:01.000Z
|
notebooks/date/since_epoch.ipynb
|
yqx674834119/gee_tools
|
d7b35174933739f3aeee439d622e5fab57b6dd2d
|
[
"MIT"
] | 109 |
2017-08-09T09:02:07.000Z
|
2022-01-08T10:27:48.000Z
| 21.270936 | 229 | 0.537749 |
[
[
[
"import ee\nee.Initialize()",
"_____no_output_____"
],
[
"from geetools import tools",
"_____no_output_____"
],
[
"import ipygee as ui",
"_____no_output_____"
],
[
"test_image = ee.Image('LANDSAT/LT05/C01/T1_SR/LT05_226087_20000102')",
"_____no_output_____"
]
],
[
[
"# `get_date_band`\nGet the date of an image, compute how many `units` (for example `day`) has ellpsed since the epoch (1970-01-01) and set it to a band (called `date`) and a property (called `unit_since_epoch`, for example, `day_since_epoch`)",
"_____no_output_____"
]
],
[
[
"date_band = tools.date.getDateBand(test_image, 'day')",
"_____no_output_____"
],
[
"ui.eprint(date_band)",
"_____no_output_____"
]
],
[
[
"# `date_since_epoch`\nGiven an ellapsed time since epoch (for example the result of `get_date_band`) compute what day it is",
"_____no_output_____"
]
],
[
[
"image_date = date_band.get('day_since_epoch')\ndate_since_epoch = tools.date.dateSinceEpoch(image_date)",
"_____no_output_____"
],
[
"ui.eprint(date_since_epoch)",
"_____no_output_____"
]
],
[
[
"# `unit_since_epoch`\nReturn the number of `unit` (for example, `day`) since the epoch (1970-1-1)",
"_____no_output_____"
]
],
[
[
"date = ee.Date('2000-01-02')",
"_____no_output_____"
],
[
"days = tools.date.unitSinceEpoch(date)",
"_____no_output_____"
],
[
"ui.eprint(days)",
"_____no_output_____"
]
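As a local sanity check that needs no Earth Engine call, the same day count can be reproduced with plain `datetime` arithmetic:

```python
from datetime import date

# Days elapsed from the Unix epoch to 2000-01-02
print((date(2000, 1, 2) - date(1970, 1, 1)).days)  # 10958
```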
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a80a6b580fb316ebe07d1cc30bdd492f1549501
| 5,732 |
ipynb
|
Jupyter Notebook
|
CLF_Train.ipynb
|
bxtkezhan/TextClassifierAPI
|
5a76ec3ec3b330b63ae6f32108b2751b4ddbe808
|
[
"Apache-2.0"
] | null | null | null |
CLF_Train.ipynb
|
bxtkezhan/TextClassifierAPI
|
5a76ec3ec3b330b63ae6f32108b2751b4ddbe808
|
[
"Apache-2.0"
] | null | null | null |
CLF_Train.ipynb
|
bxtkezhan/TextClassifierAPI
|
5a76ec3ec3b330b63ae6f32108b2751b4ddbe808
|
[
"Apache-2.0"
] | null | null | null | 30.168421 | 1,173 | 0.505408 |
[
[
[
"from fasttext import train_supervised",
"Using TensorFlow backend.\n"
],
[
"from glob import glob\nimport random",
"_____no_output_____"
],
[
"spam_paths = glob('./datasets/spam_train/*.txt')\nham_paths = glob('./datasets/ham_train/*.txt')\n# random.shuffle(spam_paths)\n# random.shuffle(ham_paths)\n\nlen(spam_paths), len(ham_paths)",
"_____no_output_____"
],
[
"spam_list = []\nfor path in spam_paths:\n try:\n with open(path) as f:\n text = f.read()\n except UnicodeDecodeError as e:\n continue\n text = text.replace('\\n', ' ')\n text = text.strip()\n text = '__label__spam , ' + text\n spam_list.append(text)\n \nham_list = []\nfor path in ham_paths:\n try:\n with open(path) as f:\n text = f.read()\n except UnicodeDecodeError as e:\n continue\n text = text.replace('\\n', ' ')\n text = text.strip()\n text = '__label__ham , ' + text\n ham_list.append(text)\n\ntrain_list = spam_list + ham_list\nrandom.shuffle(train_list)\n\nprint(spam_list[0])\nprint(ham_list[0])",
"__label__spam , Subject: paramagnet isaac , ( 75 % off for all new softwares . windowxp , photoshop , window 2003 . . . etcmore gregarious , she was gently .\n__label__ham , Subject: lng - europe darren , thanks for the lead . i ' ll give sam a call . - - - - - - - - - - - - - - - - - - - - - - forwarded by brad hitch / eu / enron on 08 / 03 / 2001 09 : 11 - - - - - - - - - - - - - - - - - - - - - - - - - - - eric gonzales @ ect 07 / 03 / 2001 20 : 15 to : brad hitch / eu / enron @ enron cc : daren j farmer / hou / ect @ ect subject : lng - europe please follow up . eric - - - - - - - - - - - - - - - - - - - - - - forwarded by eric gonzales / lon / ect on 07 / 03 / 2001 21 : 18 - - - - - - - - - - - - - - - - - - - - - - - - - - - daren j farmer 07 / 03 / 2001 19 : 54 to : eric gonzales / lon / ect @ ect cc : subject : lng - europe eric , i recieved a call from a guy with pacific interlink ( ? ) . he is looking to market lng in europe . since i have very little knowledge in this area , i didn ' t get much specific information . but , i told him i would find someone for him to talk with . his name is sam kovacevich . phone : 847 - 971 - 3369 . i would appreciate it if you would give sam a call . if you aren ' t the person he needs to talk to , please let me know . thanks . daren farmer texas desk - gas\n"
],
[
"data_path = './datasets/train.txt'\nwith open(data_path, 'w') as f:\n f.write('\\n'.join(train_list))",
"_____no_output_____"
],
[
"model_path = './FEmail.bin.gz'\nmodel = train_supervised(data_path, wordNgrams=2, lr=0.01, epoch=3, minCount=5)\nmodel.save_model(model_path)",
"Adding 2-gram features\nEpoch 1/3\n4958/4958 [==============================] - 13s 3ms/step - loss: 0.1968 - precision: 0.9157 - recall: 0.9157\nEpoch 2/3\n4958/4958 [==============================] - 12s 2ms/step - loss: 0.0249 - precision: 0.9913 - recall: 0.9913\nEpoch 3/3\n4958/4958 [==============================] - 12s 2ms/step - loss: 0.0121 - precision: 0.9972 - recall: 0.9972\n"
],
[
"text = '''\nSubject: re : milf neighbors lookin for . . . fbs\ns - e - x - y local singles inside !\npetticoat is isocline conduce but strove not prosecute litigate .\nhere decimal holstein may platen and fiske die ,\ndefrock not ash .\n. n . o . t . h . a . n . k . z . z . z\n'''.replace('\\n', ' ')\nmodel.predict(text, k=2)",
"_____no_output_____"
]
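The model is only spot-checked on a single message above. fastText's `test` method reports precision and recall at k for a whole file in the same `__label__` format; the validation path below is hypothetical and would need to be prepared the same way as `train.txt`:

```python
# Hypothetical held-out file, built like train.txt
n, p_at_1, r_at_1 = model.test('./datasets/valid.txt', k=1)
print(f'samples={n}  P@1={p_at_1:.3f}  R@1={r_at_1:.3f}')
```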
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a80bdec7f57aa97ad6a6c659b62c60e6bf06605
| 1,095 |
ipynb
|
Jupyter Notebook
|
csv_to_html.ipynb
|
rbourdeau1/rbourdeau1.github.io
|
0f4d97a70a9b2c2f63a1f0516480ca2d61d05971
|
[
"Apache-2.0"
] | null | null | null |
csv_to_html.ipynb
|
rbourdeau1/rbourdeau1.github.io
|
0f4d97a70a9b2c2f63a1f0516480ca2d61d05971
|
[
"Apache-2.0"
] | null | null | null |
csv_to_html.ipynb
|
rbourdeau1/rbourdeau1.github.io
|
0f4d97a70a9b2c2f63a1f0516480ca2d61d05971
|
[
"Apache-2.0"
] | null | null | null | 18.87931 | 59 | 0.530594 |
[
[
[
"# import dependencies \nimport os\nimport pandas as pd",
"_____no_output_____"
],
[
"# path to csv\ncsv_path = os.path.join('Resources', 'cities.csv')\ncities_csv = pd.read_csv(csv_path)",
"_____no_output_____"
],
[
"# convert to html table and save to table.html\ncities_csv.to_html('table.html', index=False)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4a80e1c341b584164566ccc61c10456bfb27f7a8
| 86,029 |
ipynb
|
Jupyter Notebook
|
academy_awards_sqlite.ipynb
|
rjegankumar/SQLite_data_prep
|
6bc6c1abfc162f62b1b620cdd9749d3a8cab1759
|
[
"MIT"
] | null | null | null |
academy_awards_sqlite.ipynb
|
rjegankumar/SQLite_data_prep
|
6bc6c1abfc162f62b1b620cdd9749d3a8cab1759
|
[
"MIT"
] | null | null | null |
academy_awards_sqlite.ipynb
|
rjegankumar/SQLite_data_prep
|
6bc6c1abfc162f62b1b620cdd9749d3a8cab1759
|
[
"MIT"
] | null | null | null | 51.391278 | 343 | 0.293203 |
[
[
[
"# Cleaning up the academy awards dataset and creating a SQLite table",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nacademy_awards = pd.read_csv(\"academy_awards.csv\", encoding = \"ISO-8859-1\")\n\nacademy_awards.head()",
"_____no_output_____"
],
[
"for column in academy_awards.columns:\n print(\"No. of unique values in '{0}' are\".format(column),len(academy_awards[column].value_counts()),\"\\n\")",
"No. of unique values in 'Year' are 83 \n\nNo. of unique values in 'Category' are 40 \n\nNo. of unique values in 'Nominee' are 6001 \n\nNo. of unique values in 'Additional Info' are 6424 \n\nNo. of unique values in 'Won?' are 16 \n\nNo. of unique values in 'Unnamed: 5' are 5 \n\nNo. of unique values in 'Unnamed: 6' are 4 \n\nNo. of unique values in 'Unnamed: 7' are 3 \n\nNo. of unique values in 'Unnamed: 8' are 2 \n\nNo. of unique values in 'Unnamed: 9' are 1 \n\nNo. of unique values in 'Unnamed: 10' are 1 \n\n"
],
[
"won_not_yes_no = academy_awards[(academy_awards['Won?'] != 'YES') & (academy_awards['Won?'] != 'NO')]\nprint(won_not_yes_no)",
" Year Category \\\n510 2007 (80th) Scientific and Technical (Technical Achievemen... \n511 2007 (80th) Scientific and Technical (Technical Achievemen... \n764 2005 (78th) Scientific and Technical (Scientific and Engin... \n773 2005 (78th) Scientific and Technical (Technical Achievemen... \n905 2004 (77th) Scientific and Technical (Technical Achievemen... \n1024 2003 (76th) Scientific and Technical (Scientific and Engin... \n1269 2001 (74th) Scientific and Technical (Scientific and Engin... \n1287 2001 (74th) Scientific and Technical (Technical Achievemen... \n1289 2001 (74th) Scientific and Technical (Technical Achievemen... \n1293 2001 (74th) Scientific and Technical (Special Awards) \n1690 1998 (71st) Scientific and Technical (Technical Achievemen... \n6669 1957 (30th) Honorary Award \n9550 1937 (10th) Honorary Award \n9785 1935 (8th) Dance Direction (archaic category) \n\n Nominee \\\n510 To CHRISTIEN TINSLEY for the creation of the t... \n511 To JÖRG PÖHLER and RÜDIGER KLEINKE of OTTEC Te... \n764 To DAVID BARAFF, MICHAEL KASS and ANDREW WITKI... \n773 To JOHN PLATT and DEMETRI TERZOPOULOS for thei... \n905 To ALAN KAPLER for the design and development ... \n1024 To STEPHEN REGELOUS for the design and develop... \n1269 To JOHN M. EARGLE, D.B. DON\" KEELE and MARK E.... \n1287 To DR. UWE SASSENBERG and ROLF SCHNEIDER for t... \n1289 To MIC RODGERS and MATT SWEENEY for the concep... \n1293 To the American Society of Cinematographers (A... \n1690 To CARY PHILLIPS for the design and developmen... \n6669 To Gilbert M. (Broncho Billy\") Anderson \n9550 To Mack Sennett, for his lasting contribution ... \n9785 Hermes Pan -- Piccolino\" and \"Top Hat \n\n Additional Info \\\n510 bruises and birthmarks \n511 well-engineered and remote-controllable packa... \n764 providing the key in demonstrating to the ind... \n773 introducing the concept of physically-based t... \n905 a software toolkit for artistic control of vo... \n1024 over 200 \n1269 design and engineering of the modern constant... \n1287 an advanced and robust camera and object matc... \n1289 low bed picture car carrier and camera platfo... \n1293 first published by the ASC in 1930, the Ameri... \n1690 and adding an expressive multi-target shape i... \n6669 motion picture pioneer \n9550 the basic principles of which are as importan... \n9785 White Tie \n\n Won? \\\n510 as well as 3D prosthetic appliances ranging i... \n511 more conventional fog units. [Stage Operations]\" \n764 complex cloth could be achieved efficiently a... \n773 deforming objects. [Digital Imaging Technology]\" \n905 water and avalanches with familiar operators ... \n1024 000 agents were controlled in several scenes. ... \n1269 direct radiator style motion picture loudspea... \n1287 which significantly reduces the need for pain... \n1289 economic and realistic filming of action sequ... \n1293 this premier reference manual has had a signi... \n1690 the \"Caricature\" system provides a degree of ... \n6669 for his contributions to the development of m... \n9550 the Academy presents a Special Award to that ... \n9785 and Tails\" numbers from Top Hat [came in 2nd]\" \n\n Unnamed: 5 \\\n510 resilience \n511 NaN \n764 NaN \n773 NaN \n905 NaN \n1024 NaN \n1269 D.B. \"Don\" Keele and Mark E. Engebretson has ... \n1287 error-prone measurements on sets. [Digital Im... 
\n1289 NaN \n1293 NaN \n1690 NaN \n6669 NaN \n9550 discoverer of stars \n9785 NaN \n\n Unnamed: 6 \\\n510 flexibility and water resistance \n511 * \n764 * \n773 * \n905 * \n1024 * \n1269 direct radiator bass style cinema loudspeaker... \n1287 NaN \n1289 * \n1293 * \n1690 * \n6669 * \n9550 sympathetic \n9785 NaN \n\n Unnamed: 7 \\\n510 while requiring no dangerous solvents. [Syste... \n511 NaN \n764 NaN \n773 NaN \n905 NaN \n1024 NaN \n1269 NaN \n1287 * \n1289 NaN \n1293 NaN \n1690 NaN \n6669 NaN \n9550 kindly \n9785 NaN \n\n Unnamed: 8 Unnamed: 9 Unnamed: 10 \n510 NaN * NaN \n511 NaN NaN NaN \n764 NaN NaN NaN \n773 NaN NaN NaN \n905 NaN NaN NaN \n1024 NaN NaN NaN \n1269 * NaN NaN \n1287 NaN NaN NaN \n1289 NaN NaN NaN \n1293 NaN NaN NaN \n1690 NaN NaN NaN \n6669 NaN NaN NaN \n9550 understanding comedy genius - Mack Sennett.\"\" NaN * \n9785 NaN NaN NaN \n"
],
[
"for i in range(5,11):\n print(academy_awards.iloc[:,i].value_counts())",
"* 7\n discoverer of stars 1\n error-prone measurements on sets. [Digital Imaging Technology]\" 1\n D.B. \"Don\" Keele and Mark E. Engebretson has resulted in the over 20-year dominance of constant-directivity 1\n resilience 1\nName: Unnamed: 5, dtype: int64\n* 9\n flexibility and water resistance 1\n direct radiator bass style cinema loudspeaker systems. [Sound]\" 1\n sympathetic 1\nName: Unnamed: 6, dtype: int64\n kindly 1\n while requiring no dangerous solvents. [Systems]\" 1\n* 1\nName: Unnamed: 7, dtype: int64\n understanding comedy genius - Mack Sennett.\"\" 1\n* 1\nName: Unnamed: 8, dtype: int64\n* 1\nName: Unnamed: 9, dtype: int64\n* 1\nName: Unnamed: 10, dtype: int64\n"
],
[
"academy_awards = academy_awards.iloc[:,:5]\nacademy_awards.head()",
"_____no_output_____"
],
[
"for index in won_not_yes_no.index:\n academy_awards.loc[index,'Won?'] = 'YES'\nprint(academy_awards['Won?'].value_counts())",
"NO 7168\nYES 2969\nName: Won?, dtype: int64\n"
],
[
"academy_awards['Year'].value_counts()",
"_____no_output_____"
],
[
"import re\n\nsplit_year = academy_awards['Year'].map(lambda x: (re.search(\"[/]\", x)) is not None)\nprint(academy_awards[split_year]['Year'].unique())",
"['1932/33 (6th)' '1931/32 (5th)' '1930/31 (4th)' '1929/30 (3rd)'\n '1928/29 (2nd)' '1927/28 (1st)']\n"
],
[
"academy_awards[academy_awards['Year'].map(lambda x: (re.search(\"1934\", x)) is not None)].head()",
"_____no_output_____"
],
[
"split_year_dict = {\n '1932/33 (6th)': '1933 (6th)',\n '1931/32 (5th)': '1932 (5th)',\n '1930/31 (4th)': '1931 (4th)',\n '1929/30 (3rd)': '1930 (3rd)',\n '1928/29 (2nd)': '1929 (2nd)',\n '1927/28 (1st)': '1928 (1st)'\n}\n\nfor key, value in split_year_dict.items():\n academy_awards[academy_awards['Year']==key].loc[:,'Year'] = value\n\nprint(academy_awards[split_year].loc[:,'Year'].unique())",
"['1932/33 (6th)' '1931/32 (5th)' '1930/31 (4th)' '1929/30 (3rd)'\n '1928/29 (2nd)' '1927/28 (1st)']\n"
],
[
"academy_awards['Category'].value_counts()",
"_____no_output_____"
],
[
"academy_awards['Nominee'].value_counts()",
"_____no_output_____"
],
[
"academy_awards['Additional Info'].value_counts()",
"_____no_output_____"
],
[
"academy_awards[\"Year\"] = academy_awards[\"Year\"].str[0:4].astype(\"int64\")\nacademy_awards[\"Year\"]",
"_____no_output_____"
],
[
"later_than_2000 = academy_awards[academy_awards[\"Year\"] > 2000]\nlater_than_2000['Year'].value_counts()",
"_____no_output_____"
],
[
"award_categories = [\"Actor -- Leading Role\",\"Actor -- Supporting Role\",\"Actress -- Leading Role\",\\\n \"Actress -- Supporting Role\"]\nnominations = later_than_2000[later_than_2000[\"Category\"].isin(award_categories)]\nnominations[\"Category\"].value_counts()",
"_____no_output_____"
],
[
"replace_dict = { \"YES\": 1, \"NO\": 0 }\nnominations[\"Won?\"] = nominations[\"Won?\"].map(replace_dict)\nnominations[\"Won?\"].value_counts()",
"/Users/jeganram/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n from ipykernel import kernelapp as app\n"
],
[
"nominations[\"Won\"] = nominations[\"Won?\"]\nfinal_nominations = nominations.drop(\"Won?\", axis=1)\nfinal_nominations.head()",
"/Users/jeganram/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n if __name__ == '__main__':\n"
],
[
"additional_info_1 = final_nominations[\"Additional Info\"].str.rstrip(\"'}\")\nadditional_info_2 = additional_info_1.str.split(\" {'\")\nmovie_names = additional_info_2.str[0]\ncharacters = additional_info_2.str[1]\nfinal_nominations[\"Movie\"] = movie_names\nfinal_nominations[\"Character\"] = characters\nfinal_nominations.head()",
"_____no_output_____"
],
[
"final_nominations = final_nominations.drop(\"Additional Info\", axis=1)\nfinal_nominations.head()",
"_____no_output_____"
],
[
"import sqlite3",
"_____no_output_____"
],
[
"conn = sqlite3.connect(\"nominations.db\")",
"_____no_output_____"
],
[
"final_nominations.to_sql(\"nominations\", conn, index = False)",
"_____no_output_____"
],
[
"def query(query_str):\n result = conn.execute(query_str).fetchall()\n return result",
"_____no_output_____"
],
[
"query(\"PRAGMA table_info(nominations);\")",
"_____no_output_____"
],
[
"query(\"SELECT * FROM nominations LIMIT 10;\")",
"_____no_output_____"
],
[
"conn.close()",
"_____no_output_____"
]
],
[
[
"## Next Steps \nExplore the rest of our original dataset academy_awards.csv and brainstorm how to fix the rest of the dataset:\n* The awards categories in older ceremonies were different than the ones we have today. What relevant information should we keep from older ceremonies?\n* What are all the different formatting styles that the Additional Info column contains. Can we use tools like regular expressions to capture these patterns and clean them up?\n* The nominations for the Art Direction category have lengthy values for Additional Info. What information is useful and how do we extract it?\n* Many values in Additional Info don't contain the character name the actor or actress played. Should we toss out character name altogether as we expand our data? What tradeoffs do we make by doing so?\n* What's the best way to handle awards ceremonies that included movies from 2 years?\nE.g. see 1927/28 (1st) in the Year column.",
"_____no_output_____"
]
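For the regular-expression brainstorm in the second bullet, the `Movie {'Character'}` pattern handled with string methods earlier can be captured in one regex that also tolerates rows without a character part. A sketch (the sample strings are illustrative):

```python
import re

pattern = re.compile(r"^(?P<movie>.*?)(?: \{'(?P<character>.*)'\})?$")

def parse_additional_info(value):
    match = pattern.match(value)
    return match.group('movie'), match.group('character')

print(parse_additional_info("Biutiful {'Uxbal'}"))  # ('Biutiful', 'Uxbal')
print(parse_additional_info("Biutiful"))            # ('Biutiful', None)
```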
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a80e3e72b64873392eac9b246dff2afe0bf94e6
| 41,157 |
ipynb
|
Jupyter Notebook
|
notebooks/3_Feature_Selection.ipynb
|
corralm/project_5_sf311
|
20a3e5d587ce30ae6add97400706215ac529d949
|
[
"MIT"
] | null | null | null |
notebooks/3_Feature_Selection.ipynb
|
corralm/project_5_sf311
|
20a3e5d587ce30ae6add97400706215ac529d949
|
[
"MIT"
] | null | null | null |
notebooks/3_Feature_Selection.ipynb
|
corralm/project_5_sf311
|
20a3e5d587ce30ae6add97400706215ac529d949
|
[
"MIT"
] | null | null | null | 33.433794 | 123 | 0.426294 |
[
[
[
"import pandas as pd\nimport numpy as np\n\n# Tools\nfrom collections import Counter\nimport pickle\n\n# Preprocessing & Selections\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.feature_selection import SelectKBest, chi2, f_classif\nfrom sklearn.model_selection import train_test_split\n\n# Sampling\nfrom imblearn.over_sampling import SMOTE\nfrom imblearn.under_sampling import NearMiss\nfrom imblearn.under_sampling import RandomUnderSampler",
"Using TensorFlow backend.\n"
],
[
"# Load dataframe\ndf = pd.read_pickle('../data/02_df_pre_model_2018.pkl')\n\n# # Convert to Dask dataframe\n# df = dd.from_pandas(df_pd, npartitions=16)\ndf.head()",
"_____no_output_____"
],
[
"# Train and test splitting\n\n# Columns to exclude\nexclude_cols = [\n 'target', # Target variable\n 'case_id',\n 'opened', # Feature Eng\n 'closed', # Feature Eng\n 'status',\n 'status_notes', # Needs NLP\n 'request_details', # Needs NLP\n 'address', # Needs NLP\n# 'street',\n 'point',\n\n # New items\n 'responsible_agency',\n 'category', # Need to choose 'category' or 'request_type' NOT BOTH\n# 'request_type', # Needs NLP\n 'opened_year',\n# 'opened_month_sin',\n# 'opened_month_cos',\n# 'opened_week_sin',\n# 'opened_week_cos',\n# 'opened_day_sin',\n# 'opened_day_cos',\n# 'opened_hour_sin',\n# 'opened_hour_cos',\n 'police_district',\n 'supervisor_district',\n# 'latitude',\n 'longitude',\n]",
"_____no_output_____"
],
[
"# Predictor variables\nX = df.drop(columns=exclude_cols, axis=0, inplace=False)\n\n# Get dummies for categorical variables\nX = pd.get_dummies(X, drop_first=True)\n\n# Target variable\ny = df['target']\n\n# Split train and test\nX_train, X_test, y_train, y_test = train_test_split(X, y,\n test_size=0.2, \n random_state=2020, \n stratify=y, # Stratify to keep same class ratios\n shuffle=True # Shuffle data since it's ordered chronologically\n )\nX_train.head()",
"_____no_output_____"
],
[
"scaler = MinMaxScaler()\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\n#Medium\n# scaler = StandardScaler()\n# scaler.fit(X_train)\n# X_train = scaler.transform(X_train)\n# X_test = scaler.transform(X_test)\n\n\n\n# StackOverflow\n# scaler = MinMaxScaler()\n# X_train_scaled = scaler.fit_transform(X_train)\n\n# model = SVC()\n# model.fit(X_train_scaled, y_train)\n\n# X_test_scaled = scaler.transform(X_test)\n# y_pred = model.predict(X_test_scaled)",
"_____no_output_____"
],
[
"# # Pickle for later use\n# with open('../data/03_X.pkl', 'wb') as f:\n# pickle.dump(X, f)\n# f.close()\n\n# X_train.to_pickle('../data/X_train.pkl')\n# X_test.to_pickle('../data/X_test.pkl')\n# y_train.to_pickle('../data/y_train.pkl')\n# y_test.to_pickle('../data/y_test.pkl')",
"_____no_output_____"
]
],
[
[
"# Feature Selection",
"_____no_output_____"
]
],
[
[
"def select_features(X_train, y_train, X_test):\n '''Returns X_train, X_test, and feature selection function'''\n fs = SelectKBest(score_func=chi2, k='all')\n fs.fit(X_train, y_train)\n# X_train_fs = fs.transform(X_train)\n# X_test_fs = fs.transform(X_test)\n# return X_train_fs, X_test_fs, fs\n return fs\n\n# Feature selection\n# X_train_fs, X_test_fs, fs = select_features(X_train, y_train, X_test)\nfs = select_features(X_train_scaled, y_train, X_test_scaled)",
"_____no_output_____"
],
[
"# # Feature scores\n# features_df = pd.DataFrame(data=[X_train.columns, fs.scores_.astype(int)]).transpose()\n# features_df.rename(columns={0: 'Feature', 1: 'ANOVA F-Value'}, inplace=True)\n# features_df.sort_values(by='ANOVA F-Value', ascending=False, inplace=True)\n# features_df.reset_index(drop=True, inplace=True)\n# features_df",
"_____no_output_____"
],
[
"# Feature scores\nfeatures_df = pd.DataFrame(data=[X_train.columns, fs.scores_.astype(int)]).transpose()\nfeatures_df.rename(columns={0: 'Feature', 1: 'Chi2'}, inplace=True)\nfeatures_df.sort_values(by='Chi2', ascending=False, inplace=True)\nfeatures_df.reset_index(drop=True, inplace=True)\nfeatures_df",
"_____no_output_____"
],
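`chi2` only accepts non-negative inputs (one reason for the `MinMaxScaler` step above) and measures a particular kind of dependence. A sketch of an alternative score function under the same `SelectKBest` API — mutual information, which handles arbitrary scaling and nonlinear relationships:

```python
from sklearn.feature_selection import mutual_info_classif

fs_mi = SelectKBest(score_func=mutual_info_classif, k='all')
fs_mi.fit(X_train_scaled, y_train)

mi_scores = pd.Series(fs_mi.scores_, index=X_train.columns)
print(mi_scores.sort_values(ascending=False).head(10))
```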
[
"# Select features above threshold\nthreshold = 50\n# best_features_df = features_df[(features_df['ANOVA F-Value'] > threshold)]\nbest_features_df = features_df[(features_df['Chi2'] > threshold)]\nbest_features_df",
"_____no_output_____"
],
[
"# best_features_df.to_pickle('../data/best_features_df.pkl')\n# best_features_df = pd.read_pickle('../data/best_features_df.pkl')",
"_____no_output_____"
],
[
"# # Filter X_train & X_test with selected features\n# X_train = X_train.filter(items=best_features_df['Feature'])\n# X_test = X_test.filter(items=best_features_df['Feature'])\n\n# # Clean column names\n# X_train.columns = X_train.columns.str.strip().str.lower().str.replace(\n# ' ', '_').str.replace('(', '').str.replace(')', '')\n\n# X_test.columns = X_test.columns.str.strip().str.lower().str.replace(\n# ' ', '_').str.replace('(', '').str.replace(')', '')",
"_____no_output_____"
],
[
"# Filter X_train & X_test with selected features\nX_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns).filter(items=best_features_df['Feature'])\nX_test_scaled = pd.DataFrame(X_test_scaled, columns=X_train.columns).filter(items=best_features_df['Feature'])\n\n# Clean column names\nX_train_scaled.columns = X_train_scaled.columns.str.strip().str.lower().str.replace(\n ' ', '_').str.replace('(', '').str.replace(')', '')\n\nX_test_scaled.columns = X_test_scaled.columns.str.strip().str.lower().str.replace(\n ' ', '_').str.replace('(', '').str.replace(')', '')",
"_____no_output_____"
],
[
"print('df\\t', df.shape)\nprint('X_train\\t', X_train_scaled.shape)\nprint('X_test\\t', X_test_scaled.shape)\nprint('y_train\\t', y_train.shape)\nprint('y_test\\t', y_test.shape)",
"df\t (529769, 36)\nX_train\t (423815, 193)\nX_test\t (105954, 193)\ny_train\t (423815,)\ny_test\t (105954,)\n"
]
],
[
[
"# Class Balancing",
"_____no_output_____"
]
],
[
[
"# Target variable\ntarget_count = df['target'].value_counts()\n\n# Print class balance\nprint(f'Class 0: {target_count[0]}')\nprint(f'Class 1: {target_count[1]}')\nprint(f'Proportion: {round(target_count[0] / target_count[1], 2)} : 1')\nprint(f'Percentage of Majority Class: {round(target_count[0] / sum(target_count), 3)*100}')",
"Class 0: 418265\nClass 1: 111504\nProportion: 3.75 : 1\nPercentage of Majority Class: 79.0\n"
]
],
[
[
"## Oversampling",
"_____no_output_____"
]
],
[
[
"# # Define the oversampling method – SMOTE\n# smote = SMOTE(random_state=2020)\n# X_train_smote, y_train_smote = smote.fit_sample(X_train, y_train)\n\n# # Summarize the new class distribution\n# Counter(y_train_smote)",
"_____no_output_____"
]
],
[
[
"## Undersampling",
"_____no_output_____"
]
],
[
[
"# Define the undersampling method – RandomUnderSampler\nrndm_under = RandomUnderSampler(random_state=2020)\n\n# Transform the dataset\n# X_train_under, y_train_under = rndm_under.fit_sample(X_train, y_train)\nX_train_under, y_train_under = rndm_under.fit_sample(X_train_scaled, y_train)\n\n# New class distribution\nCounter(y_train_under)",
"_____no_output_____"
],
[
"# Pickle dataframes\ndf.to_pickle('../data/df.pkl')\nX_train_under.to_pickle('../data/03_X_train_under.pkl')\nX_test_scaled.to_pickle('../data/03_X_test.pkl')\ny_train_under.to_pickle('../data/03_y_train_under.pkl')\ny_test.to_pickle('../data/03_y_test.pkl')\n\n# # Transform to Dask dataframes\n# X_train_under = dd.from_pandas(X_train_under, npartitions=16)\n# X_test = dd.from_pandas(X_test, npartitions=16)\n# y_train_under = dd.from_pandas(y_train_under, npartitions=16)\n# y_test = dd.from_pandas(y_test, npartitions=16)",
"_____no_output_____"
]
],
[
[
"# Appendix",
"_____no_output_____"
]
],
[
[
"# Dask\n# cat /proc/cpuinfo\n# from dask.distributed import Client, progress\n# from sklearn.externals.joblib import parallel_backend\n\n# client = Client(processes=False)\n# # client = Client(processes=False, n_workers=4, threads_per_worker=8)\n# client\n# # client.close()",
"_____no_output_____"
],
[
"# # Define the undersampling method – NearMiss\n# # Selects the closest examples from the majority class for each minority class.\n# undersample = NearMiss(version=3, n_neighbors_ver3=3)\n\n# # Transform the dataset\n# X_train_under, y_train_under = undersample.fit_resample(X_train, y_train)\n\n# # Summarize the new class distribution\n# Counter(y_train_under)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a80f4755a2480ff531d77538c6548ee6d6a1163
| 42,646 |
ipynb
|
Jupyter Notebook
|
matrix_one/Dzien_3.ipynb
|
apalkamu/dw_matrix
|
f62a71651bd85da062b974f35cc8726982dc384f
|
[
"MIT"
] | null | null | null |
matrix_one/Dzien_3.ipynb
|
apalkamu/dw_matrix
|
f62a71651bd85da062b974f35cc8726982dc384f
|
[
"MIT"
] | null | null | null |
matrix_one/Dzien_3.ipynb
|
apalkamu/dw_matrix
|
f62a71651bd85da062b974f35cc8726982dc384f
|
[
"MIT"
] | null | null | null | 42,646 | 42,646 | 0.729072 |
[
[
[
"#!pip install datadotworld\n#!pip install datadotworld[pandas]",
"_____no_output_____"
],
[
"#!dw configure",
"_____no_output_____"
],
[
"from google.colab import drive\nimport pandas as pd\nimport numpy as np\n\nimport datadotworld as dw",
"_____no_output_____"
],
[
"#drive.mount(\"/content/drive\")",
"_____no_output_____"
],
[
"ls",
"\u001b[0m\u001b[01;34mdrive\u001b[0m/ \u001b[01;34msample_data\u001b[0m/\n"
],
[
"cd \"drive/My Drive/Colab Notebooks/dw_matrix\"",
"/content/drive/My Drive/Colab Notebooks/dw_matrix\n"
],
[
"ls matrix_one",
"Dzien_3.ipynb\n"
],
[
"!echo 'data' >.gitignore",
"_____no_output_____"
],
[
"!git add .gitignore",
"_____no_output_____"
],
[
"data = dw.load_dataset(\"datafiniti/mens-shoe-prices\")",
"_____no_output_____"
],
[
"df = data.dataframes['7004_1']\ndf.shape",
"_____no_output_____"
],
[
"df.sample(5)",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.prices_currency.unique()",
"_____no_output_____"
],
[
"df.prices_currency.value_counts(normalize=True)",
"_____no_output_____"
],
[
"df_usd =df[df.prices_currency == 'USD'].copy()",
"_____no_output_____"
],
[
"df_usd.shape",
"_____no_output_____"
],
[
"df_usd[\"prices_amountmin\"] = df_usd.prices_amountmin.astype(np.float)\ndf_usd[\"prices_amountmin\"].hist()",
"_____no_output_____"
],
[
"filter_max = np.percentile(df_usd['prices_amountmin'], 99)\nfilter_max",
"_____no_output_____"
],
[
"df_usd_filter = df_usd[df_usd['prices_amountmin']< filter_max]",
"_____no_output_____"
],
[
"df_usd_filter.prices_amountmin.hist(bins=100)",
"_____no_output_____"
],
[
"ls",
"Hello_Github.ipynb LICENSE \u001b[0m\u001b[01;34mmatrix_one\u001b[0m/ README.md\n"
],
[
"ls",
"Hello_Github.ipynb LICENSE \u001b[0m\u001b[01;34mmatrix_one\u001b[0m/ README.md\n"
],
[
"!git add matrix_one/Dzien_3.ipynb",
"_____no_output_____"
],
[
"!git commit -m \"Read Men's Shoe Prices dataset from data.world\"",
"\n*** Please tell me who you are.\n\nRun\n\n git config --global user.email \"[email protected]\"\n git config --global user.name \"Your Name\"\n\nto set your account's default identity.\nOmit --global to set the identity only in this repository.\n\nfatal: unable to auto-detect email address (got 'root@b58409cd9002.(none)')\n"
],
[
"!git config --global user.email \"[email protected]\"\n!git config --global user.name \"apalkamu\"",
"_____no_output_____"
],
[
"!git commit -m \"Read Men's Shoe Prices dataset from data.world\"",
"[master 3265f9e] Read Men's Shoe Prices dataset from data.world\n 2 files changed, 2 insertions(+), 129 deletions(-)\n rewrite .gitignore (100%)\n create mode 100644 matrix_one/Dzien_3.ipynb\n"
],
[
"!git push -u origin master",
"Counting objects: 5, done.\nDelta compression using up to 2 threads.\nCompressing objects: 33% (1/3) \rCompressing objects: 66% (2/3) \rCompressing objects: 100% (3/3) \rCompressing objects: 100% (3/3), done.\nWriting objects: 20% (1/5) \rWriting objects: 40% (2/5) \rWriting objects: 60% (3/5) \rWriting objects: 80% (4/5) \rWriting objects: 100% (5/5) \rWriting objects: 100% (5/5), 16.68 KiB | 2.38 MiB/s, done.\nTotal 5 (delta 0), reused 0 (delta 0)\nTo https://github.com/apalkamu/dw_matrix.git\n 51ae2e4..3265f9e master -> master\nBranch 'master' set up to track remote branch 'master' from 'origin'.\n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a80f90da713e3af4e0686daf28a30a505fb97f4
| 8,868 |
ipynb
|
Jupyter Notebook
|
nbs/index.ipynb
|
Nixtla/mlforecast
|
bf3c574e1a538a06a228bd5e505a835a88488043
|
[
"Apache-2.0"
] | 38 |
2021-04-26T23:07:23.000Z
|
2022-03-30T22:23:42.000Z
|
nbs/index.ipynb
|
Nixtla/mlforecast
|
bf3c574e1a538a06a228bd5e505a835a88488043
|
[
"Apache-2.0"
] | 16 |
2021-05-20T04:32:03.000Z
|
2021-10-02T01:20:49.000Z
|
nbs/index.ipynb
|
Nixtla/mlforecast
|
bf3c574e1a538a06a228bd5e505a835a88488043
|
[
"Apache-2.0"
] | 9 |
2021-05-17T18:29:43.000Z
|
2022-03-12T05:59:13.000Z
| 29.658863 | 357 | 0.609044 |
[
[
[
"# mlforecast\n\n> Scalable machine learning based time series forecasting.\n\n**mlforecast** is a framework to perform time series forecasting using machine learning models, with the option to scale to massive amounts of data using remote clusters.",
"_____no_output_____"
],
[
"[](https://github.com/Nixtla/mlforecast/actions/workflows/ci.yaml)\n[](https://github.com/Nixtla/mlforecast/actions/workflows/lint.yaml)\n[](https://pypi.org/project/mlforecast/)\n[](https://pypi.org/project/mlforecast/)\n[](https://anaconda.org/conda-forge/mlforecast)\n[](https://codecov.io/gh/Nixtla/mlforecast)\n[](https://github.com/Nixtla/mlforecast/blob/main/LICENSE)",
"_____no_output_____"
],
[
"## Install",
"_____no_output_____"
],
[
"### PyPI\n\n`pip install mlforecast`\n\n#### Optional dependencies\nIf you want more functionality you can instead use `pip install mlforecast[extra1,extra2,...]`. The current extra dependencies are:\n\n* **aws**: adds the functionality to use S3 as the storage in the CLI.\n* **cli**: includes the validations necessary to use the CLI.\n* **distributed**: installs [dask](https://dask.org/) to perform distributed training. Note that you'll also need to install either [LightGBM](https://github.com/microsoft/LightGBM/tree/master/python-package) or [XGBoost](https://xgboost.readthedocs.io/en/latest/install.html#python).\n\nFor example, if you want to perform distributed training through the CLI using S3 as your storage you'll need all three extras, which you can get using: `pip install mlforecast[aws,cli,distributed]`.",
"_____no_output_____"
],
[
"### conda-forge\n`conda install -c conda-forge mlforecast`\n\nNote that this installation comes with the required dependencies for the local interface. If you want to:\n* Use s3 as storage: `conda install -c conda-forge s3path`\n* Perform distributed training: `conda install -c conda-forge dask` and either [LightGBM](https://github.com/microsoft/LightGBM/tree/master/python-package) or [XGBoost](https://xgboost.readthedocs.io/en/latest/install.html#python).",
"_____no_output_____"
],
[
"## How to use\nThe following provides a very basic overview, for a more detailed description see the [documentation](https://nixtla.github.io/mlforecast/).",
"_____no_output_____"
],
[
"### Programmatic API",
"_____no_output_____"
]
],
[
[
"#hide\nimport os\nimport shutil\nfrom pathlib import Path\n\nfrom IPython.display import display, Markdown\n\n\nos.chdir('..')\n\n\ndef display_df(df):\n display(Markdown(df.to_markdown()))",
"_____no_output_____"
]
],
[
[
"Store your time series in a pandas dataframe with an index named **unique_id** that identifies each time serie, a column **ds** that contains the datestamps and a column **y** with the values.",
"_____no_output_____"
]
],
[
[
"from mlforecast.utils import generate_daily_series\n\nseries = generate_daily_series(20)\ndisplay_df(series.head())",
"_____no_output_____"
]
],
[
[
"Then create a `TimeSeries` object with the features that you want to use. These include lags, transformations on the lags and date features. The lag transformations are defined as [numba](http://numba.pydata.org/) *jitted* functions that transform an array, if they have additional arguments you supply a tuple (`transform_func`, `arg1`, `arg2`, ...).",
"_____no_output_____"
]
],
[
[
"from mlforecast.core import TimeSeries\nfrom window_ops.expanding import expanding_mean\nfrom window_ops.rolling import rolling_mean\n\nts = TimeSeries(\n lags=[7, 14],\n lag_transforms={\n 1: [expanding_mean],\n 7: [(rolling_mean, 7), (rolling_mean, 14)]\n },\n date_features=['dayofweek', 'month']\n)\nts",
"_____no_output_____"
]
],
[
[
"Next define a model. If you want to use the local interface this can be any regressor that follows the scikit-learn API. For distributed training there are `LGBMForecast` and `XGBForecast`.",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestRegressor\n\nmodel = RandomForestRegressor(random_state=0)",
"_____no_output_____"
]
],
[
[
"Now instantiate your forecast object with the model and the time series. There are two types of forecasters, `Forecast` which is local and `DistributedForecast` which performs the whole process in a distributed way.",
"_____no_output_____"
]
],
[
[
"from mlforecast.forecast import Forecast\n\nfcst = Forecast(model, ts)",
"_____no_output_____"
]
],
[
[
"To compute the features and train the model using them call `.fit` on your `Forecast` object.",
"_____no_output_____"
]
],
[
[
"fcst.fit(series)",
"_____no_output_____"
]
],
[
[
"To get the forecasts for the next 14 days call `.predict(14)` on the forecaster. This will update the target with each prediction and recompute the features to get the next one.",
"_____no_output_____"
]
],
[
[
"predictions = fcst.predict(14)\n\ndisplay_df(predictions.head())",
"_____no_output_____"
]
],
[
[
"### CLI",
"_____no_output_____"
],
[
"If you're looking for computing quick baselines, want to avoid some boilerplate or just like using CLIs better then you can use the `mlforecast` binary with a configuration file like the following:",
"_____no_output_____"
]
],
[
[
"!cat sample_configs/local.yaml",
"_____no_output_____"
]
],
[
[
"The configuration is validated using `FlowConfig`.\n\nThis configuration will use the data in `data.prefix/data.input` to train and write the results to `data.prefix/data.output` both with `data.format`.",
"_____no_output_____"
]
],
[
[
"data_path = Path('data')\ndata_path.mkdir()\nseries.to_parquet(data_path/'train')",
"_____no_output_____"
],
[
"!mlforecast sample_configs/local.yaml",
"_____no_output_____"
],
[
"list((data_path/'outputs').iterdir())",
"_____no_output_____"
],
[
"#hide\nshutil.rmtree(data_path)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a80fb75894196c7644972ae80e467b38f7b8035
| 47,572 |
ipynb
|
Jupyter Notebook
|
voila_map.ipynb
|
groegercesg/CovidEnforcementScotland
|
473b5f5f3f3d7d3955201115ae8a9d0ec894b6ef
|
[
"MIT"
] | 1 |
2021-07-05T15:36:28.000Z
|
2021-07-05T15:36:28.000Z
|
voila_map.ipynb
|
groegercesg/CovidEnforcementScotland
|
473b5f5f3f3d7d3955201115ae8a9d0ec894b6ef
|
[
"MIT"
] | null | null | null |
voila_map.ipynb
|
groegercesg/CovidEnforcementScotland
|
473b5f5f3f3d7d3955201115ae8a9d0ec894b6ef
|
[
"MIT"
] | null | null | null | 67.382436 | 2,087 | 0.592954 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport ipywidgets as widgets\nfrom IPython.display import HTML\nfrom datetime import datetime\n\n# General\nimport os\n\n# Drawing\nimport cartopy\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\nfrom cartopy.io import shapereader\nfrom matplotlib.cm import get_cmap\nimport matplotlib.cm as cm\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nfrom math import floor\nfrom matplotlib import patheffects\nimport matplotlib\nif os.name == 'nt':\n matplotlib.rc('font', family='Arial')\nelse: # might need tweaking, must support black triangle for N arrow\n matplotlib.rc('font', family='DejaVu Sans')\n\nfrom datetime import date\n\nplt.ioff()",
"_____no_output_____"
],
[
"from IPython.display import display, Javascript\nJavascript('document.title=\"{}\"'.format(\"Coronavirus Enforcement\"))",
"_____no_output_____"
],
[
"DATA_URL = 'https://www.scotland.police.uk/spa-media/ewloducq/coronavirus-enforcement-information-to-30-june-2021.xlsx'",
"_____no_output_____"
],
[
"def datesFromData(url):\n raw_data = pd.read_excel(url, sheet_name=1)\n earlyDate = (min(raw_data[\"Date\"]).strftime(\"%d %B %Y\"))\n lateDate = (max(raw_data[\"Date\"]).strftime(\"%d %B %Y\"))\n return earlyDate, lateDate",
"_____no_output_____"
],
[
"today = date.today()\ndate_formatted = today.strftime(\"%d %B %Y\")\n\nearliestDate, latestDate = datesFromData(DATA_URL)\n\nEXPLANATION = \"\"\"\\\n<div class=\"app-sidebar\">\n<p><em>Compare the prevalence of different intervention results - geospatially.</em><p>\n\n<p>As a result of the 2020 introduction of the: <a href=\"https://www.legislation.gov.uk/ssi/2020/103/contents/made\">The Health Protection (Coronavirus) (Restrictions) (Scotland) Regulations 2020</a>\nand <a href=\"https://www.legislation.gov.uk/ukpga/2020/7/contents/enacted\">Coronavirus Act 2020</a>, \nPolice Scotland were mandated to develop a ‘Coronavirus Interventions’ (CVI) recording system.</p>\n\n<p>Police Scotland gather data in reference to the public co-operation levels with the new legislation.\nHowever, <b>it should be noted</b>, the system relies on Police officers manually updating the system - with the specific co-operation level they <i>\"experienced\"</i> when they encounter a contravention of the legislation.</p>\n\n<p>As such, the CVI data is indicative only and actual figures may be higher. CVI data is published <a href=\"https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/\">weekly</a>\nand broken down by date, Police Scotland division, subdivision and the following five categories of CVI:\n<ul>\n <li>Total number of people dispersed when informed</li>\n <li>Total number of people dispersed but only when instructed</li>\n <li>Total number of people removed from place or premise</li>\n <li>Total number of people issued a fixed penalty notice (FPN)</li>\n <li>Total number of people arrested</li>\n</ul></p>\n\n<p> The map can display CVI data from \"\"\" + earliestDate + \"\"\" to \"\"\" + latestDate + \"\"\", for each of the above categories, \nin terms of: total numbers, numbers per 100,000 people, <a href=\"https://github.com/groegercesg/CovidEnforcementScotland#officer-numbers\">numbers per 100 officers*</a> and average daily arrests within a Police Scotland division.</p>\n\n</div>\n\"\"\"\n\nCREATED = \"\"\" \\\n<em>Created by: <a href=\"https://callumgroeger.com\">Callum Groeger</a> | \"\"\" + date_formatted + \"\"\" </em>\n<br>\n\"\"\"\n\nPROJECTION = \"\"\" \\\n<em>Projection: British National Grid (BNG) | License: MIT </em>\n<br>\n\"\"\"\n\nDATA = \"\"\" \\\n<em>Data: Coronavirus Interventions (<a href=\"https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/\">Police Scotland</a>), \nPopulation Estimates 2019 (<a href=\"https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019\">National Records of Scotland</a>),\nPolice Divisions (<a href=\"https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231\">Scottish Government</a>),\nPolice Staffing Q1 2021 (<a href=\"https://www.scotland.police.uk/about-us/police-scotland/police-scotland-officer-numbers/\">Police Scotland</a>)\n</em>\n\"\"\"\n\nGIF_ADDRESS = 'gif.gif'",
"_____no_output_____"
],
[
"HTML(\"\"\"\\\n<style>\n.app-title {\n font-size: 2.5em;\n}\n\n.app-subtitle {\n font-size: 1.5em;\n}\n\n.app-subtitle a {\n color: #106ba3;\n}\n\n.app-subtitle a:hover {\n text-decoration: underline;\n}\n\n.app-sidebar p {\n margin-bottom: 1em;\n line-height: 1.7;\n}\n\n.app-sidebar a {\n color: #106ba3;\n}\n\n.app-sidebar a:hover {\n text-decoration: underline;\n}\n</style>\n\"\"\")",
"_____no_output_____"
],
[
"class App:\n def __init__(self, df):\n self._df = df\n self._dfBASE = df.copy(deep=True)\n\n # Get dropdown options, cut out the first five - as this is just Divisions\n available_indicators = list(self._df)\n del available_indicators[0:4]\n\n # Loading GIF\n with open(GIF_ADDRESS, 'rb') as f:\n img = f.read()\n # create loading bar widget, ready to display when running long function\n self.loading_bar = widgets.Image(value=img)\n self.loading_bar.layout.object_fit = 'contain'\n\n self._dropdown1 = self._create_indicator_dropdown(available_indicators, 0)\n self._dropdown2 = self._create_indicator_dropdown([(\"Total\", 0), (\"Per 100,000\", 1), (\"Per 100 officers\", 2), (\"Daily Average\", 3)], 0)\n self._plot_container = widgets.Output()\n\n self._date_slider, date_slider_box = self._create_date_slider(\n df, 'Date'\n )\n\n self._app_container = widgets.VBox([\n widgets.HBox([\n self._dropdown1,\n self._dropdown2\n ]),\n self._plot_container,\n date_slider_box\n ], layout=widgets.Layout(align_items='center', flex='3 0 auto'))\n\n # flex: https://minrk-ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html#Properties-of-the-items\n self.container = widgets.VBox([\n widgets.HTML(\n (\n '<h1 class=\"app-title\">Police Scotland Coronavirus Interventions 2020-1</h1>'\n '<h2 class=\"app-subtitle\"><a href=\"https://github.com/groegercesg/CovidEnforcementScotland\">Link to Github</a></h2>'\n ), \n layout=widgets.Layout(margin='0 0 2em 0')\n # margin: https://minrk-ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html#Shorthand-CSS-properties\n ),\n widgets.HBox([\n self._app_container,\n widgets.HTML(EXPLANATION, layout=widgets.Layout(margin='0 0 0 2em')) # 0\n ], layout=widgets.Layout(margin='0 0 2em 0')),\n # layout options for center: align_items='center', align_content='center'\n widgets.HTML(\n (\n '<hr>'\n )),\n widgets.HBox([\n widgets.HTML(CREATED),\n widgets.HTML(PROJECTION),\n widgets.HTML(DATA)\n ], layout=widgets.Layout(display='flex', flex_flow='column', align_items='center', width='100%'))\n ], layout=widgets.Layout(flex='1 1 auto', margin='0 auto 0 auto', max_width='1024px'))\n self._update_app()\n\n\n def _create_date_slider(self, df, column_name):\n dates = df[column_name]\n\n options = [(date.strftime(' %d %b %Y '), date) for date in dates]\n index = (0, len(options)-1)\n\n date_slider_label = widgets.Label('Date range: ')\n date_slider = widgets.SelectionRangeSlider(\n options=options,\n index=index,\n orientation='horizontal',\n continuous_update=False,\n layout=widgets.Layout(width='500px')\n )\n date_slider.observe(self._on_change, names=['value'])\n date_slider_box = widgets.HBox([date_slider_label, date_slider], \n layout=widgets.Layout(flex='1 1 auto', width='auto'))\n\n # We need to manually set the description of our SelectionRangeSlider\n\n # We can do this physically with Inspect Element\n # .widget-inline-hbox .widget-readout {\n # text-align: center;\n # max-width: 200px;\n\n # Discussion at: https://github.com/jupyter-widgets/ipywidgets/issues/2318\n \n return date_slider, date_slider_box\n\n def groupByDailyAverage(self, df, days):\n df['Daily Average Asked / Informed'] = df.apply (lambda row: row['Asked / Informed']/days if days > 0 else 0, axis=1)\n df['Daily Average Warned / Instructed'] = df.apply (lambda row: row['Warned / Instructed']/days if days > 0 else 0, axis=1)\n df['Daily Average Removed from Place or Premises'] = df.apply (lambda row: row['Removed from Place or Premises']/days if days > 0 else 0, axis=1)\n df['Daily Average 
FPN'] = df.apply (lambda row: row['FPN']/days if days > 0 else 0, axis=1)\n df['Daily Average Arrested'] = df.apply (lambda row: row['Arrested']/days if days > 0 else 0, axis=1)\n \n return df\n\n def groupByDivision(self, df):\n division_grouped = df.groupby('Division Letter', as_index=False\n ).agg(\n {\"Asked / Informed\": \"sum\",\n \"Warned / Instructed\": \"sum\",\n \"Removed from Place or Premises\": \"sum\",\n \"FPN\": \"sum\",\n \"Arrested\": \"sum\",\n })\n\n return division_grouped\n\n def groupByOfficerNumber(self, df):\n # Process data of police numbers\n # Data from: https://www.scotland.police.uk/about-us/police-scotland/police-scotland-officer-numbers/\n\n officer_dict = {'A': 1115,\n 'C': 626,\n 'D': 919,\n 'E': 1099,\n 'G': 2434,\n 'J': 902,\n 'K': 613,\n 'L': 553,\n 'N': 661,\n 'P': 759,\n 'Q': 1388,\n 'U': 818,\n 'V': 382\n }\n\n div_officer_data = pd.DataFrame(officer_dict.items(), columns=['Division Letter', 'Officer Numbers'])\n\n # Merge Data\n dfMerge = pd.merge(df, div_officer_data, on='Division Letter')\n\n dfMerge['Asked / Informed per 100 officers'] = dfMerge.apply (lambda row: row['Asked / Informed']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)\n dfMerge['Warned / Instructed per 100 officers'] = dfMerge.apply (lambda row: row['Warned / Instructed']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)\n dfMerge['Removed from Place or Premises per 100 officers'] = dfMerge.apply (lambda row: row['Removed from Place or Premises']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)\n dfMerge['FPN per 100 officers'] = dfMerge.apply (lambda row: row['FPN']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)\n dfMerge['Arrested per 100 officers'] = dfMerge.apply (lambda row: row['Arrested']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)\n \n return dfMerge\n\n\n def groupByPopulation(self, df):\n # Process Population Data\n # Data from: https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019\n\n raw_pop_data = pd.read_csv(os.path.join(os.getcwd(), 'datasets', 'Population', 'mid-year-pop-est-19-data_Table 2.csv'))\n # Keep only the specific columns\n raw_pop_data = raw_pop_data[['Unnamed: 1','Unnamed: 2']]\n # Rename them inplace\n raw_pop_data.rename(columns={'Unnamed: 1': 'Council areas', 'Unnamed: 2': 'Population'}, inplace=True)\n # Drop upper rows that are bad\n raw_pop_data = raw_pop_data.drop(raw_pop_data.index[[0,1,2,3,4]]).reset_index(drop=True)\n # Drop from certain row, minus 1 for the row above position\n raw_pop_data = raw_pop_data[:(raw_pop_data[raw_pop_data['Council areas'] == 'NHS Board areas'].index[0] - 1)]\n # Strip out all the commas in Objects of the Population column\n raw_pop_data[\"Population\"].replace(',','', regex=True, inplace=True)\n # Convert string to int\n raw_pop_data[\"Population\"] = raw_pop_data[\"Population\"].astype(str).astype(int)\n\n # Group Pop Data\n\n # We group the council areas into our police divisions\n # First, set our index\n raw_pop_data.set_index('Council areas')\n # Create our division dictionary\n div_dict = {'A': [\"Moray\", \"Aberdeenshire\", \"Aberdeen City\"],\n 'C': [\"Stirling\", \"Clackmannanshire\", \"Falkirk\"],\n 'D': [\"Angus\", \"Dundee City\", \"Perth and Kinross\"],\n 'E': [\"City of Edinburgh\"],\n 'G': [\"East Renfrewshire\", \"Glasgow City\", \"East Dunbartonshire\"],\n 
'J': [\"Scottish Borders\", \"East Lothian\", \"Midlothian\", \"West Lothian\"],\n 'K': [\"Inverclyde\", \"Renfrewshire\"],\n 'L': [\"Argyll and Bute\", \"West Dunbartonshire\"],\n 'N': [\"Na h-Eileanan Siar\", \"Orkney Islands\", \"Highland\", \"Shetland Islands\"],\n 'P': [\"Fife\"],\n 'Q': [\"South Lanarkshire\", \"North Lanarkshire\"],\n 'U': [\"South Ayrshire\", \"East Ayrshire\", \"North Ayrshire\"],\n 'V': [\"Dumfries and Galloway\"]\n }\n\n div_pop = {}\n\n def divisionPopulation(row):\n incomingRow = row.tolist()\n\n for div, councils in div_dict.items():\n for council in councils:\n if (council == incomingRow[0]):\n if div in div_pop:\n div_pop[div] += incomingRow[1]\n else:\n div_pop[div] = incomingRow[1]\n\n raw_pop_data.apply(lambda row: divisionPopulation(row), axis=1)\n\n div_pop_data = pd.DataFrame(div_pop.items(), columns=['Division Letter', 'Population'])\n\n # Merge Data\n dfMerge = pd.merge(df, div_pop_data, on='Division Letter')\n\n dfMerge['Asked / Informed per 100k'] = dfMerge.apply (lambda row: row['Asked / Informed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)\n dfMerge['Warned / Instructed per 100k'] = dfMerge.apply (lambda row: row['Warned / Instructed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)\n dfMerge['Removed from Place or Premises per 100k'] = dfMerge.apply (lambda row: row['Removed from Place or Premises']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)\n dfMerge['FPN per 100k'] = dfMerge.apply (lambda row: row['FPN']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)\n dfMerge['Arrested per 100k'] = dfMerge.apply (lambda row: row['Arrested']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)\n \n return dfMerge\n \n # The class method, we use this to gather the data then pre-process it\n @classmethod\n def from_url(cls, url):\n raw_data = pd.read_excel(url, sheet_name=1)\n raw_data.drop(['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13', 'Unnamed: 14', 'Unnamed: 15', 'Unnamed: 16', 'Unnamed: 17'], axis=1, inplace=True)\n\n # Taking account of NaNs\n # Explanation:\n # The xlsx to pandas dataframe conversion seems to have taken \"NA\" for a division \"N\" and an Area Command \"Inverness\"\n # and interpret that \"NA\" as actually: \"NaN\". Which is very annoying. 
So the below overwrites the SD letter of area commands\n # that are inverness and turns them back to \"NA\"\n raw_data.loc[raw_data[\"Area Commands\"] == \"Inverness\", \"SD Letter\"] = raw_data[\"SD Letter\"].fillna(\"NA\")\n\n if (raw_data.isnull().sum().sum() != 0):\n raise ValueError(\"We have NaNs in our dataframe\")\n\n return cls(raw_data)\n \n def _create_indicator_dropdown(self, indicators, initial_index):\n # Handling for the two different types of Dropdown options storage\n if isinstance(indicators[initial_index], tuple):\n valuePos = initial_index\n elif isinstance(indicators[initial_index], str):\n valuePos = indicators[initial_index]\n else:\n raise ValueError(\"Unknown dropdown input type\")\n \n dropdown = widgets.Dropdown(options=indicators, value=valuePos)\n dropdown.observe(self._on_change, names=['value'])\n return dropdown\n \n\n def utm_from_lon(self, lon):\n \"\"\"\n utm_from_lon - UTM zone for a longitude\n\n Not right for some polar regions (Norway, Svalbard, Antartica)\n\n :param float lon: longitude\n :return: UTM zone number\n :rtype: int\n \"\"\"\n return floor( ( lon + 180 ) / 6) + 1\n\n def scale_bar(self, ax, proj, length, location=(0.5, 0.05), linewidth=3,\n units='km', m_per_unit=1000):\n \"\"\"\n http://stackoverflow.com/a/35705477/1072212\n ax is the axes to draw the scalebar on.\n proj is the projection the axes are in\n location is center of the scalebar in axis coordinates ie. 0.5 is the middle of the plot\n length is the length of the scalebar in km.\n linewidth is the thickness of the scalebar.\n units is the name of the unit\n m_per_unit is the number of meters in a unit\n \"\"\"\n # find lat/lon center to find best UTM zone\n x0, x1, y0, y1 = ax.get_extent(proj.as_geodetic())\n # Projection in metres\n utm = ccrs.UTM(self.utm_from_lon((x0+x1)/2))\n # Get the extent of the plotted area in coordinates in metres\n x0, x1, y0, y1 = ax.get_extent(utm)\n # Turn the specified scalebar location into coordinates in metres\n sbcx, sbcy = x0 + (x1 - x0) * location[0], y0 + (y1 - y0) * location[1]\n # Generate the x coordinate for the ends of the scalebar\n bar_xs = [sbcx - length * m_per_unit/2, sbcx + length * m_per_unit/2]\n # buffer for scalebar\n buffer = [patheffects.withStroke(linewidth=5, foreground=\"w\")]\n # Plot the scalebar with buffer\n ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',\n linewidth=linewidth, path_effects=buffer)\n # buffer for text\n buffer = [patheffects.withStroke(linewidth=3, foreground=\"w\")]\n # Plot the scalebar label\n t0 = ax.text(sbcx, sbcy, str(length) + ' ' + units, transform=utm,\n horizontalalignment='center', verticalalignment='bottom',\n path_effects=buffer, zorder=2)\n left = x0+(x1-x0)*0.05\n # Plot the N arrow\n t1 = ax.text(left, sbcy, u'\\u25B2\\nN', transform=utm,\n horizontalalignment='center', verticalalignment='bottom',\n path_effects=buffer, zorder=2)\n # Plot the scalebar without buffer, in case covered by text buffer\n ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',\n linewidth=linewidth, zorder=3)\n \n def _create_plot(self, indicator, scaling):\n fig = plt.figure(figsize=(6,8), dpi=100)\n projectionPARAM = ccrs.TransverseMercator(central_longitude=-2.0, central_latitude=49.0, false_easting=400000.0, false_northing=-100000.0, scale_factor=0.9996012717, approx=False)\n ax = fig.add_subplot(1, 1, 1, projection=projectionPARAM)\n ax.set_extent([-8, 0, 54.5, 61]) # Ideal coordinate map range for plotting Scotland\n\n # Process the input from the second dropdown\n if scaling == 0:\n 
indicator = indicator\n elif scaling == 1:\n indicator = indicator + \" per 100k\"\n elif scaling == 2:\n indicator = indicator + \" per 100 officers\"\n elif scaling == 3:\n indicator = \"Daily Average \" + indicator\n else:\n raise ValueError(\"Bizarre dropdown option achieved, investigation needed!\")\n\n police_dict = (self._df[['Division Letter', indicator]].set_index('Division Letter').T.to_dict('records'))[0]\n\n # Downloaded from: https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231\n area_file = os.path.join(os.getcwd(), 'datasets', 'ScottishPoliceDivisions', 'SG_ScottishPoliceDivisions_2019.shp')\n police_divisions = shapereader.Reader(area_file)\n\n norm = colors.Normalize(vmin=0., vmax=max(police_dict.values()))\n cmap = get_cmap('PuBu')\n\n for record in police_divisions.records():\n code = record.attributes['AdminCode']\n police_entry = police_dict.get(code, -1)\n if police_entry == -1:\n police_color = \"Silver\"\n else:\n police_color = cmap(police_entry/max(police_dict.values()))\n ax.add_geometries(\n [record.geometry],\n #facecolor=numpy.random.rand(3,),\n facecolor=police_color,\n linewidth=0,\n crs=projectionPARAM,\n )\n\n # following https://matplotlib.org/2.0.2/mpl_toolkits/axes_grid/users/overview.html#colorbar-whose-height-or-width-in-sync-with-the-master-axes\n # we need to set axes_class=plt.Axes, else it attempts to create\n # a GeoAxes as colorbar\n\n divider = make_axes_locatable(ax)\n ax_cb = divider.new_horizontal(size=\"5%\", pad=0.1, axes_class=plt.Axes)\n\n fig.add_axes(ax_cb)\n\n sm = plt.cm.ScalarMappable(norm=norm, cmap=cmap)\n cb = plt.colorbar(sm, cax=ax_cb)\n cb.set_label(indicator)\n\n #self.scale_bar(ax, projectionPARAM, 100, location=(0.85, 0.05)) # 100 km scale bar\n\n plt.plot()\n \n def _on_change(self, _):\n self._update_app()\n\n def trimToDateRange(self, df, date_range):\n # We want to trim the data, so that it's range is inline with date range\n # First we replace _df with our base df, so we can then correctly apply the range\n self._df = self._dfBASE.copy(deep=True)\n\n # Then we cut it to only within our date range\n df = self._df[self._df['Date'].between(*date_range)]\n\n return df\n\n def _process_data(self, date_range):\n numberOfDays = (date_range[1] - date_range[0]).days\n self._df = self.trimToDateRange(self._df, date_range)\n self._df = self.groupByDivision(self._df)\n self._df = self.groupByPopulation(self._df)\n self._df = self.groupByOfficerNumber(self._df)\n self._df = self.groupByDailyAverage(self._df, numberOfDays)\n \n def _update_app(self):\n # Pull in widget attributes for passing to plot function\n indicator = self._dropdown1.value\n scaling = self._dropdown2.value\n date_range = self._date_slider.value\n\n # Process data\n self._process_data(date_range)\n\n self._plot_container.clear_output()\n # wait=True\n with self._plot_container:\n #self.loading_bar.layout.visibility = 'visible'\n self.loading_bar.layout.display = 'block' \n display(self.loading_bar)\n self._create_plot(indicator, scaling)\n plt.show()\n #self.loading_bar.layout.visibility = 'hidden'\n self.loading_bar.layout.display = 'none'\n",
"_____no_output_____"
],
[
"app = App.from_url(DATA_URL)\n\napp.container",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a80fc4eabb8e697ebc40bb695c33a76ec18ba0e
| 732,971 |
ipynb
|
Jupyter Notebook
|
notebooks/Intro_to_neural_networks.ipynb
|
seismic-shift/geoml
|
e3c14f1a0b9a7df8fa200747feeb1612b1e44d1d
|
[
"Apache-2.0"
] | null | null | null |
notebooks/Intro_to_neural_networks.ipynb
|
seismic-shift/geoml
|
e3c14f1a0b9a7df8fa200747feeb1612b1e44d1d
|
[
"Apache-2.0"
] | null | null | null |
notebooks/Intro_to_neural_networks.ipynb
|
seismic-shift/geoml
|
e3c14f1a0b9a7df8fa200747feeb1612b1e44d1d
|
[
"Apache-2.0"
] | null | null | null | 361.069458 | 139,064 | 0.933501 |
[
[
[
"# Intro to neural networks: Regression\n\nThis notebook is based on the SEG Geophysical Tutorial from August 2018 by Graham Ganssle: https://github.com/seg/tutorials-2018.\n\nThe idea is to introduce the based components of an artificial neural network and implement a simple version of one using Numpy.\n\nWe'll use a regression task — predicting a DT log from other logs.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm # gives progress bar on iterable",
"_____no_output_____"
],
[
"# demonstrate TDQM\nfor n in tqdm(range(5_000_000)):\n pass",
"100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5000000/5000000 [00:01<00:00, 4977366.14it/s]\n"
]
],
[
[
"## Activation functions",
"_____no_output_____"
],
[
"A neural network is nothing but a nonlinear system of equations like $\\mathbf{y} = \\sigma(\\mathbf{W}\\mathbf{x} + \\mathbf{b})$.\n\nThere are multiple functions $\\sigma$ that are used to introduce the non-linear component. One of the earliest was the *sigmoid* (aka *logistic*) function is given by:\n\n$$ \\sigma(z) = \\frac{1}{1 + \\operatorname{e}^{-z}} $$\n\nIts derivative is:\n\n$$ \\frac{\\mathrm{d} \\sigma(z)}{\\mathrm{d} z} = \\sigma(z) (1 - \\sigma(z)) $$\n\nWe need the derivative for the _backpropagation_ process that enables neural networks to learn efficiently. Backpropagation adjusts the parameters of the neural network by injecting an error signal backwards through the network's layers, from the last to the first.\n\nWe can implement the logistic function like this in Python:",
"_____no_output_____"
]
],
[
[
"def logistic(z, derivative=False):\n if not derivative:\n return 1 / (1 + np.exp(-z))\n else:\n return z * (1 - z) # In the implementation, 'z' will actually be sigma(z).",
"_____no_output_____"
]
],
[
[
"The function transforms, or 'squeezes', numbers into the range [0, 1] and looks like this:",
"_____no_output_____"
]
],
[
[
"# function is in cyan and derivative is in red",
"_____no_output_____"
],
[
"from utils import plot_activation\n\nplot_activation(logistic)",
"_____no_output_____"
]
],
[
[
"In practice, while this function is sometimes useful for handling probabilities, there are some problems with it.\n\n- The maximum value of the derivative is 0.25, which tends to reduce the learning rate, especially in deeper layers.\n- Large activations input result in 'saturation' and a gradient of 0 ('vanishing gradient'), which will halt learning.\n- The exponentials are expensive to compute.\n\nThe $\\operatorname{tanh}$ function solves some of these issues — for example, it has a maximum gradient of 1.",
"_____no_output_____"
]
],
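[
[
"A quick sanity check of those claims (added for illustration, using the `logistic` function defined above): the gradient peaks at $z = 0$, where $\\sigma(0) = 0.5$, so the maximum slope is $0.5 \\times (1 - 0.5) = 0.25$; for large inputs the function saturates towards 0 or 1 and the gradient all but vanishes.",
"_____no_output_____"
]
],
[
[
"# Sanity check: note that our `logistic(z, derivative=True)` expects sigma(z), not z.\nsigma_0 = logistic(0)\nprint(sigma_0) # 0.5\nprint(logistic(sigma_0, derivative=True)) # 0.25, the maximum gradient\nprint(logistic(10)) # ~1.0: saturated, so the gradient here is ~0",
"_____no_output_____"
]
],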
[
[
"def tanh(z, derivative=False):\n \"\"\"\n Compute a tanh transformation for a given input.\n \"\"\"\n if not derivative:\n return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))\n else:\n return 1 - z**2 # In the implementation, we'll get tanh(z) coming at us.",
"_____no_output_____"
],
[
"plot_activation(tanh)",
"_____no_output_____"
]
],
[
[
"But it still suffers from the saturation issue, and the expense of computation.\n\nBoth of these issues are solved by the ReLU, or rectified linear unit, function.",
"_____no_output_____"
],
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>EXERCISE</h3>\n\nThe **rectified linear unit** (ReLU) and its derivative are given by:\n\n$$ f(z) = \\begin{cases}\n z & \\text{if } z > 0, \\\\\n 0 & \\text{otherwise}.\n\\end{cases} $$\n\n$$ \\frac{\\mathrm{d}f(z)}{\\mathrm{d}z} = \\begin{cases}\n 1 & \\text{if } z > 0, \\\\\n 0 & \\text{otherwise}.\n\\end{cases} $$\n\nThe main problem with the ReLU is that, depending on how the weights are initialized, some units in the network might 'die' as they get into negative activations and never fire. Accordingly, a common variant of the ReLU is the 'parametric' ReLU, which has $f(z) = \\alpha z$, when $Z \\leq 0$ (the corresponding derivative is then just $\\alpha$). The parameter $\\alpha$ can be tuned like other hyperparameters. A typical value is 0.01.\n\nThe parametric ReLU is also called a 'leaky' ReLU, but that term implies that the value of $\\alpha$ is not being considered as a hyperparameter or tuned in any way.\n\nCan you implement a ReLU? (Or, if you prefer, a parametric ReLU?)\n</div>",
"_____no_output_____"
]
],
[
[
"# Note that if you use `if z > 0` in your code, then\n# the plot_activation function won't work, because it\n# defines z as an array to make its plots. In general,\n# it's a good idea to write functions that work for\n# both scalars and arrays, where possible.\n\n# YOUR CODE HERE\ndef relu(z, derivative=False):\n \"\"\"\n compute RELU\n \"\"\"\n if not derivative:\n return z * (z > 0)\n else:\n return 1 * (z > 0)\n",
"_____no_output_____"
],
[
"assert (relu(-1), relu(0), relu(1)) == (0, 0, 1)",
"_____no_output_____"
],
[
"# matt solution for both\n\ndef prelu(z, derivative=False, alpha=0.1):\n \"\"\"A parametric ReLU.\"\"\"\n if not derivative:\n return np.maximum(alpha * z, z) # alpha must be < 1\n else:\n return alpha * (z <= 0) + (z > 0)\ndef relu(z, derivative=False):\n \"\"\"\n Compute a ReLU transformation for a given input.\n \"\"\"\n return prelu(z, derivative=derivative, alpha=0)",
"_____no_output_____"
],
[
"plot_activation(relu)",
"_____no_output_____"
]
],
[
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>Stretch exercise</h3>\n\nSome people prefer the exponential linear unit, because it has a smooth derivative. Can you implement it?\n\n$$ f(z) = \\begin{cases} z & \\text{if } z > 0 \\\\ \\alpha(e^z-1) & \\text{otherwise} \\end{cases} $$\n\nThe derivative is given by:\n\n$$ \\frac{\\mathrm{d} f}{\\mathrm{d} z} = \\begin{cases} 1 & \\text{if } z > 0 \\\\ \\alpha e^z & \\text{otherwise} \\end{cases} $$\n\nAgain, $\\alpha$ is a hyperparameter.\n</div>",
"_____no_output_____"
]
],
[
[
"# YOUR CODE HERE\ndef prelu(z, derivative=False, alpha=0.1):\n \"\"\"\n A parametric RELU\n \"\"\"\n if not derivative:\n return np.maximum(alpha * z, z) # alpha must be less than one\n else:\n return alpha * (z <= 0) + (z > 0)\n \n",
"_____no_output_____"
]
],
[
[
"Check the [Intro_to_neural_network_regression.ipynb](../master/Intro_to_neural_network_regression.ipynb) master notebook for a solution to this problem. ",
"_____no_output_____"
],
[
"There are still other rectifiers — e.g. the GELU and SiLU — read about them [on Wikipedia](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)). Why not try implementing some of them?",
"_____no_output_____"
],
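[
"As an illustration (not part of the original tutorial), here is a minimal sketch of the SiLU, which is the 'swish' function with $\\beta = 1$, written in the same style as the activations above. Its forward pass is $f(z) = z \\, \\sigma(z)$; note that, unlike the other functions here, the derivative below expects the raw input $z$ rather than the activation:\n\n```python\ndef silu(z, derivative=False):\n s = 1 / (1 + np.exp(-z)) # the logistic sigmoid\n if not derivative:\n return z * s\n return s * (1 + z * (1 - s)) # f'(z) = sigma(z)(1 + z(1 - sigma(z)))\n```",
"_____no_output_____"
],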
[
"## Loss\n\nWe're going to need a way to tell when we're doing well. The **loss function** is some measure of error. We'll use the mean squared error, where the error is the difference between a known value of the target and our estimate of the target.\n\nWe're going to need a function for that too:",
"_____no_output_____"
]
],
[
[
"def loss(y, y_hat):\n \"\"\"\n Compute half the mean squared error. The factor of 0.5 gets cancelled by the\n squared term in the derivative, so it's common to see it in the loss function.\n \"\"\"\n return 0.5 * np.mean((np.array(y_hat) - np.array(y))**2)",
"_____no_output_____"
]
],
[
[
"## Defining a network",
"_____no_output_____"
],
[
"A typical neural network consist of three or more *layers*: an input layer, one or more _hidden_ layers, and an output layer.\n\nLet's implement a network with one hidden layer. The layers are as follows:\n\n$$ \\text{Input layer:}\\ \\ \\mathbf{x}^{(i)} $$\n\n$$ \\text{Hidden layer:}\\ \\ \\mathbf{a}_1^{(i)} = \\sigma ( \\mathbf{W}_1 \\mathbf{x}^{(i)} + \\mathbf{b}_1) $$\n\n$$ \\text{Output layer:}\\ \\ \\hat{\\mathbf{y}}^{(i)} = \\mathbf{W}_2 \\mathbf{a}_1^{(i)} + \\mathbf{b}_2 $$\n\nwhere $\\mathbf{x}^{(i)}$ is the $i$-th sample of the input data $\\mathbf{X}$. $\\mathbf{W}_1, \\mathbf{b}_1, \\mathbf{W}_2, \\mathbf{b}_2$ are the weight matrices and bias vectors for layers 1 and 2 respectively, and $\\sigma$ is our nonlinear function. Applying the nonlinearity to $\\mathbf{W}_1 \\mathbf{x}^{(i)} + \\mathbf{b}_1$ in layer 1 results in the _activation_ $\\mathbf{a}_1$. The output layer yields $\\hat{\\mathbf{y}}^{(i)}$, the $i$-th estimate of the desired output. We're not going to apply the nonlinearity to the output, but people often do. The weights are randomly initialized and the biases start at zero; during training they will be iteratively updated to encourage the network to converge on an optimal approximation to the expected output.\n\n\nNote that these are vector operations. In `Numpy` we can easily deal with this because the library understands proper matrix operations. For example, matrix multiplication is done through the `@` operator.\n\nA forward pass of the data through the network looks like this:",
"_____no_output_____"
]
],
[
[
"def forward(xi, W1, b1, W2, b2, activation):\n z1 = W1 @ xi + b1\n a1 = activation(z1)\n z2 = W2 @ a1 + b2 # n.b. z2 is y_hat\n return z2, a1",
"_____no_output_____"
]
],
[
[
"Below is a picture of a neural network similar to the one we're building:\n\n",
"_____no_output_____"
],
[
"## How does a neural net learn?\n\nThe short version is that we show the system a bunch of corresponding input/output pairs we want it to learn, and we show it these pairs thousands of times. Every time we do so, we move the **W**'s and **b**'s in whatever direction makes the outputs of the network more similar to the known output we're trying to teach it. ",
"_____no_output_____"
],
[
" For each training example:\n For each layer:\n - Calculate the error.\n - Calculate weight gradient.\n - Update weights.\n - Calculate the bias gradient.\n - Update biases.",
"_____no_output_____"
],
[
"What's all this about gradients?\n\nIn order to learn, the network will have to find the parameters (weights and biases) that result in the smallest loss. We'll use gradient descent for this. \n\n<img src=\"../images/gradient_descent.png\" width=\"800px\" />",
"_____no_output_____"
],
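[
"To make the idea concrete, here is a minimal sketch (added for illustration) of gradient descent on a one-dimensional loss $E(w) = (w - 3)^2$, whose gradient is $\\mathrm{d}E/\\mathrm{d}w = 2(w - 3)$. Each update moves $w$ a small step downhill:\n\n```python\nw = 0.0 # initial guess\nlearning_rate = 0.1\nfor step in range(50):\n grad = 2 * (w - 3) # dE/dw at the current w\n w -= learning_rate * grad # step against the gradient\nprint(w) # very close to 3, the minimum\n```",
"_____no_output_____"
],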
[
"This is straightforward for the output layer. That's why we needed the derivative in the activation functions, and we need to know the derivative for the `loss()` function.\n\nThe error on the output layer for a given instance (data record) looks like this:\n\n$$ E = \\frac{1}{2} \\left[ \\hat{y}^{(i)} - y^{(i)} \\right]^2 $$\n\nwhere\n\n$$ \\hat{y}^{(i)} = \\mathbf{w}_2 \\mathbf{a}_1^{(i)} + b_2 $$\n\nThe derivative (gradient, or slope) of this function, with respect to the weight **w**<sub>2</sub>, is:\n\n$$ \\frac{\\mathrm{d}E}{\\mathrm{d}\\mathbf{w_2}} = \\frac{\\mathrm{d}E}{\\mathrm{d}\\hat{y}}\\frac{\\mathrm{d}\\hat{y}}{\\mathrm{d}\\mathbf{w_2}} = (\\hat{y} - y) \\ \\mathbf{a}_1$$\n\nTo calculate the gradient at the hidden layer, we need to compute the gradient of the error with respect to the weights and biases of the hidden layer.\n\nLet's implement this as a Python function:",
"_____no_output_____"
]
],
[
[
"def backward(xi, yi,\n a1, z2,\n params,\n learning_rate,\n activation\n ):\n\n err_output = z2 - yi # Derivative of loss function\n grad_W2 = err_output * a1\n params['W2'] -= learning_rate * grad_W2\n\n grad_b2 = err_output\n params['b2'] -= learning_rate * grad_b2\n\n derivative = activation(a1, derivative=True)\n err_hidden = err_output * derivative * params['W2']\n grad_W1 = err_hidden[:, None] @ xi[None, :]\n params['W1'] -= learning_rate * grad_W1\n \n grad_b1 = err_hidden\n params['b1'] -= learning_rate * grad_b1\n \n return params",
"_____no_output_____"
]
],
[
[
"The trick with the `None` indexing is the same as reshaping the array. We have to do this to produce a 2D array for the `W1` gradients.",
"_____no_output_____"
],
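[
"Here is a minimal illustration (added; not from the original notebook) of what that `None` indexing does to the shapes:\n\n```python\nerr = np.arange(5.) # stands in for err_hidden, shape (5,)\nxi = np.arange(3.) # stands in for xi, shape (3,)\nouter = err[:, None] @ xi[None, :] # (5, 1) @ (1, 3) -> (5, 3)\nprint(outer.shape) # matches W1's (units, features)\n```",
"_____no_output_____"
],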
[
"To demonstrate this backpropagation workflow, and thus that our system can learn, let's try to get the above neural network to learn the relationship between a DT log and some other logs. We're going to need some data.",
"_____no_output_____"
],
[
"## Get some data",
"_____no_output_____"
]
],
[
[
"import welly\n\nw = welly.Well.from_las('../data/R-90.las', index='original')\n\ndata = w.data_as_matrix(keys=['GR', 'NPHISS', 'RHOB', 'DT'], start=1000, stop=3500, step=0.2)",
"_____no_output_____"
],
[
"data[:10]",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\n\nX_val = data[6500:6750, :3].reshape(-1, 3)\nX_train = data[6750:7750, :3].reshape(-1, 3)\n\nscaler = StandardScaler().fit(X_train)\n\nX_train = scaler.transform(X_train)\nX_val = scaler.transform(X_val)",
"_____no_output_____"
],
[
"X_train.shape, X_val.shape",
"_____no_output_____"
],
[
"fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(15, 5))\n\nax0.plot(X_train)\nax1.plot(X_val)",
"_____no_output_____"
],
[
"import seaborn as sns\n\nsns.displot(X_train)",
"_____no_output_____"
]
],
[
[
"In many situations, we do not need to scale the target variable. But when using gradient descent for optimization — essentially in all neural nets — we might need to worry about it. \n\nVery large errors may lead to exploding gradients in training and/or result in floating point overflows — especially if you're using GPUs, which use single-precision floats.",
"_____no_output_____"
]
],
[
[
"y_val_ = data[6500:6750, -1] # Keep the unscaled data.\ny_train_ = data[6750:7750, -1]\n\ntarget_scaler = StandardScaler().fit(y_train_.reshape(-1, 1))\n\ny_train = target_scaler.transform(y_train_.reshape(-1, 1))\ny_val = target_scaler.transform(y_val_.reshape(-1, 1))",
"_____no_output_____"
],
[
"X_train.shape, y_train.shape",
"_____no_output_____"
],
[
"fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(15, 5))\n\nax0.plot(y_train)\nax1.plot(y_val)",
"_____no_output_____"
]
],
[
[
"## Initialize network parameters",
"_____no_output_____"
],
[
"Now we can initialize the weights and biases for our network. A common approach is to initialize the weights with small random numbers (with NumPy's `randn()` function) and the biases with zeros.",
"_____no_output_____"
],
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>EXERCISE</h3>\n\nFinish the `initialize_params()` function:\n</div>",
"_____no_output_____"
]
],
[
[
"def initialize_params(features, units, seed=42):\n np.random.seed(seed)\n params = {\n \"W1\": np.random.randn(units, features),\n \"b1\": np.zeros(shape=units),\n\n # YOUR CODE HERE\n # Initialize W2 (shape is just `units`) and b2 (shape is `1`)\n \"W2\": np.random.randn(units),\n \"b2\": np.zeros(shape=1)\n \n # ===============\n }\n return params",
"_____no_output_____"
],
[
"features = X_train.shape[-1]\nunits = 5 # Units in hidden layer.\n\nparams = initialize_params(features, units, seed=33)",
"_____no_output_____"
],
[
"params",
"_____no_output_____"
]
],
[
[
"Now we have a network! It just doesn't know anything.",
"_____no_output_____"
],
[
"## Prediction\n\nTo apply this (untrained) network to some data, we're going to need a `predict` function, to make inferences from the trained network. This mode of application is called **inference**.",
"_____no_output_____"
],
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>EXERCISE</h3>\n\nFinish the `predict()` function.\n</div>",
"_____no_output_____"
]
],
[
[
"def predict(X, forward, params, activation):\n \"\"\"\n Make a prediction for a given 2D input ``X``,\n using function ``forward``.\n \"\"\"\n y_hats = []\n for xi in X:\n # YOUR CODE HERE\n # You need to call `forward` to set a value for `y_hat`.\n y_hat, _ = forward(xi, **params, activation=activation)\n # ==============\n y_hats.append(y_hat.item()) # gets floating point number out\n return np.array(y_hats)",
"_____no_output_____"
]
],
[
[
"Let's make a prediction for our untrained network — it should be essentially random:",
"_____no_output_____"
]
],
[
[
"y_pred = predict(X_train, forward, params, activation=relu)",
"_____no_output_____"
],
[
"plt.plot(y_train[:200])\nplt.plot(y_pred[:200])",
"_____no_output_____"
]
],
[
[
"## Training\n\nDuring training, we expose the network to the input/output pairs one at a time. These pairs are called `xi` and `yi` respectively in the code. According to our diagram above, the input goes into the green slots and we adjust the orange neurons to make the red slot output from the network a tiny bit closer to the true DT result.",
"_____no_output_____"
],
[
"We do this many times. Every time we do, we calculate the mean squared error between the network's prediction and the ground-truth output. After many iterations, or *epochs*, we draw a plot which shows the total error, or loss, at each step. If the network is learning anything, we expect the loss to decrease, as the predictions are getting closer to the ground truth.",
"_____no_output_____"
]
],
[
[
"# Hyperparameters.\nnum_epochs = 30\nlearning_rate = 0.001\nactivation = relu\n\n# Intitialize.\ndata = list(zip(X_train, y_train, y_train_)) #y_train_ is unscaled data. helps us get to more meaningful error. TW.\nparams = initialize_params(features, units)\nloss_history = []\n\nfor i in tqdm(range(num_epochs)):\n\n # Shuffle and prepare.\n np.random.shuffle(data)\n y_, y_hat = [], []\n \n for xi, yi, y_raw in data:\n \n # Optionally do a pass for validation (omitted here).\n \n # Forward pass.\n z2, a1 = forward(xi, **params, activation=activation)\n\n # Back propagation.\n params = backward(xi, yi,\n a1, z2.item(),\n params,\n learning_rate,\n activation=activation\n )\n \n # Capture actual prediction at correct scale.\n y_.append(y_raw)\n y_hat.append(target_scaler.inverse_transform(z2))\n\n # Compute training loss for this epoch.\n loss_history.append(loss(y_, y_hat))",
"100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:01<00:00, 18.84it/s]\n"
]
],
[
[
"The parameters of the model are now no longer random.",
"_____no_output_____"
]
],
[
[
"params",
"_____no_output_____"
]
],
[
[
"They do look kind of random though. It's usually hard to 'see' what neural networks have learned. Let's look at the W1 weights only:",
"_____no_output_____"
]
],
[
[
"W1 = params['W1']\n\nplt.figure(figsize=(5, 3))\n_ = plt.imshow(W1.T, aspect='auto', vmin=0.1)",
"_____no_output_____"
]
],
[
[
"If the network learned anything **useful** then the loss should have decreased during training. The loss is our measure of whatever it is we care about.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(10,3))\n\nax.semilogy(loss_history, label='Training loss')\n\nax.set_title('Mean squared error vs epoch number', fontsize=16)\nax.tick_params(axis='both', which='major', labelsize=14)\nax.grid()\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"y_pred = predict(X_val, forward, params, activation)",
"_____no_output_____"
]
],
[
[
"The loss decreased dramatically over the course of relatively few epochs, so presumably the network has learned something. To test this theory, let's plot the outputs after training (orange) and compare them to the expected result (blue):",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(15, 3))\nplt.plot(y_val)\nplt.plot(y_pred)\nplt.grid(c='k', alpha=0.2)",
"_____no_output_____"
]
],
[
[
"## Compare using RMS error\n\nIt's fine for the network to learn using MSE, but it's easier for humans to understand RMS error, because it has the same units as the target.",
"_____no_output_____"
],
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>EXERCISE</h3>\n\nImplement an equation for the RMS error.\n\n$$ E_\\mathrm{RMS} = \\sqrt{ \\frac{1}{N} \\sum_{i=0}^{N} (\\hat{y} - y)^2 } $$\n</div>",
"_____no_output_____"
]
],
[
[
"def rmse(y_true, y_pred):\n\n mse = np.sum((y_pred - y_true)**2) / y_true.size\n rmse = np.sqrt(mse)\n \n return rmse",
"_____no_output_____"
],
[
"rmse(y_val_, target_scaler.inverse_transform(y_pred))",
"_____no_output_____"
],
[
"plt.figure(figsize=(15, 3))\nplt.plot(y_val_)\nplt.plot(target_scaler.inverse_transform(y_pred))\nplt.grid(c='k', alpha=0.2)",
"_____no_output_____"
]
],
[
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>Exercise: how does this network look in `scikit-learn`?</h3>\n\nReplicate this neural network with `sklearn.neural_network.MLPRegressor`.\n\nYou will have to read the documentation carefully. In particular, pay attention to `solver`, `activation`, `max_iter`, and `batch_size`.\n\nGet started with this:\n</div>",
"_____no_output_____"
]
],
[
[
"from sklearn.neural_network import MLPRegressor\n\nmlp = MLPRegressor(hidden_layer_sizes=(5,),\n tol=1e-12, # Turn off early stopping.\n momentum=0, # Turn off momentum \n activation='relu',\n solver='sgd',\n learning_rate_init=0.001,\n max_iter=30,\n random_state=33,\n alpha=0,\n batch_size=1\n # YOUR CODE HERE\n \n )\n\nmlp.fit(X_train, y_train)\n\ny_pred_skl = mlp.predict(X_val)",
"/glb/ams/pt.sgs/data/tacit_ssw/nltwag/miniconda3/envs/geoml/lib/python3.9/site-packages/sklearn/utils/validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n return f(*args, **kwargs)\n/glb/ams/pt.sgs/data/tacit_ssw/nltwag/miniconda3/envs/geoml/lib/python3.9/site-packages/sklearn/neural_network/_multilayer_perceptron.py:614: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (30) reached and the optimization hasn't converged yet.\n warnings.warn(\n"
],
[
"plt.figure(figsize=(15, 3))\nplt.plot(y_val_)\nplt.plot(target_scaler.inverse_transform(y_pred))\nplt.plot(target_scaler.inverse_transform(y_pred_skl))\nplt.grid(c='k', alpha=0.2)",
"_____no_output_____"
],
[
"print(\"Scratch NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred)))\nprint()\nprint(\"Sklearn NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))",
"Scratch NN\n7.824764825548954\n\nSklearn NN\n7.936733084096812\n"
]
],
[
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>EXERCISE</h3>\n\nCan you change the hyperparameters to get a better result?\n</div>",
"_____no_output_____"
]
],
[
[
"# Copy the solution from the last example here.\n# Then change some of the parameters and see how it affects the result.\n\nmlp = MLPRegressor(hidden_layer_sizes=(10,),\n tol=1e-12, # Turn off early stopping.\n momentum=0, # Turn off momentum \n activation='relu',\n solver='adam',\n learning_rate_init=0.001,\n max_iter=500,\n random_state=33,\n alpha=0,\n batch_size=1 # this will really speed things up by increasing, and fine to do. more efficient way of feeding the model data rather than sample by sample\n # YOUR CODE HERE\n \n )\n\nmlp.fit(X_train, y_train)\n\ny_pred_skl = mlp.predict(X_val)",
"/glb/ams/pt.sgs/data/tacit_ssw/nltwag/miniconda3/envs/geoml/lib/python3.9/site-packages/sklearn/utils/validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n return f(*args, **kwargs)\n"
],
[
"print(\"Scratch NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred)))\nprint()\nprint(\"Sklearn NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))",
"Scratch NN\n7.824764825548954\n\nSklearn NN\n7.430958481921306\n"
]
],
[
[
"## Compare with PyTorch",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torch import nn\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")",
"_____no_output_____"
],
[
"X_train_pt = torch.tensor(X_train, dtype=torch.float32).to(device) \ny_train_pt = torch.tensor(y_train.reshape(-1, 1), dtype=torch.float32).to(device)\n\ntraindata = torch.utils.data.TensorDataset(X_train_pt, y_train_pt)\ntrainloader = torch.utils.data.DataLoader(traindata)",
"_____no_output_____"
],
[
"# tensor is pretty much the pytorch equivalent of an ndarray. difference is 'they remember where they came from'",
"_____no_output_____"
]
],
[
[
"There's a high-level approach:",
"_____no_output_____"
]
],
[
[
"net = nn.Sequential(\n nn.Linear(3, 5),\n nn.ELU(), \n nn.Linear(5, 1),\n).to(device)",
"_____no_output_____"
]
],
[
[
"And a low-level approach that gives you fine-tuned control:",
"_____no_output_____"
]
],
[
[
"class Net(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.hidden = nn.Linear(3, 5) # aka \"Fully-connected\"\n self.output = nn.Linear(5, 1)\n\n # Optional.\n nn.init.xavier_uniform_(self.hidden.weight)\n nn.init.zeros_(self.hidden.bias)\n nn.init.xavier_uniform_(self.output.weight)\n nn.init.zeros_(self.output.bias)\n\n def forward(self, x):\n z1 = self.hidden(x)\n a1 = torch.nn.functional.elu(z1)\n z2 = self.output(a1)\n return z2\n \nnet = Net().to(device)",
"_____no_output_____"
]
],
[
[
"Training the network:",
"_____no_output_____"
]
],
[
[
"lr = 0.005\nweight_decay = 0.0 # L2 regularization\noptimizer = torch.optim.SGD(net.parameters(), lr=lr, weight_decay=weight_decay)\n\ncriterion = nn.MSELoss()\n\nnet.train()\n\nepochs = 100\nfor epoch in range(epochs):\n epoch_loss = 0.0\n for xi, yi in trainloader:\n optimizer.zero_grad()\n y_hat = net(xi) # Forward pass\n loss = criterion(y_hat, yi) # get the loss\n loss.backward() # backprop\n optimizer.step() # step optimmizer, wont do anything in this example\n epoch_loss += loss.item() # capture loss\n print(f\"# {epoch+1} Loss {epoch_loss}\")\nprint('Finished training')",
"# 1 Loss 157.44785002464909\n# 2 Loss 50.08588698047578\n# 3 Loss 47.8970574281325\n# 4 Loss 46.797818975588726\n# 5 Loss 46.160646719485925\n# 6 Loss 45.753433484214156\n# 7 Loss 45.472471797158676\n# 8 Loss 45.2650603209956\n# 9 Loss 45.10220108370163\n# 10 Loss 44.967412030759704\n# 11 Loss 44.85098707415514\n# 12 Loss 44.7468052282443\n# 13 Loss 44.650658158630506\n# 14 Loss 44.55946513369638\n# 15 Loss 44.47149756035037\n# 16 Loss 44.38531609147242\n# 17 Loss 44.299825369742564\n# 18 Loss 44.214822768990786\n# 19 Loss 44.13074112433456\n# 20 Loss 44.046243998263876\n# 21 Loss 43.9606400629162\n# 22 Loss 43.87325568477823\n# 23 Loss 43.78417326936621\n# 24 Loss 43.69310607220922\n# 25 Loss 43.60007451301982\n# 26 Loss 43.50594941563733\n# 27 Loss 43.40957576016286\n# 28 Loss 43.310306193405836\n# 29 Loss 43.20742901375996\n# 30 Loss 43.10099777098847\n# 31 Loss 42.99098068240605\n# 32 Loss 42.87728916134739\n# 33 Loss 42.760903683220704\n# 34 Loss 42.642490376375804\n# 35 Loss 42.52106776318654\n# 36 Loss 42.39631355240705\n# 37 Loss 42.26832563291775\n# 38 Loss 42.13751771176604\n# 39 Loss 42.00406038236511\n# 40 Loss 41.86810772142074\n# 41 Loss 41.73070457389345\n# 42 Loss 41.591920445924245\n# 43 Loss 41.452529795587935\n# 44 Loss 41.31338257980565\n# 45 Loss 41.17516258203099\n# 46 Loss 41.038109861941905\n# 47 Loss 40.90319913763602\n# 48 Loss 40.77082447803086\n# 49 Loss 40.641362862973466\n# 50 Loss 40.51493619539204\n# 51 Loss 40.39172750724701\n# 52 Loss 40.27201170034648\n# 53 Loss 40.15597752349686\n# 54 Loss 40.0444740847488\n# 55 Loss 39.937454590862586\n# 56 Loss 39.835126593767626\n# 57 Loss 39.737866584648984\n# 58 Loss 39.64481257721798\n# 59 Loss 39.55621597753429\n# 60 Loss 39.4722244553878\n# 61 Loss 39.39261653780751\n# 62 Loss 39.31640309564614\n# 63 Loss 39.243487777995085\n# 64 Loss 39.173795060590024\n# 65 Loss 39.1070554330346\n# 66 Loss 39.04263377237375\n# 67 Loss 38.981029611314696\n# 68 Loss 38.92183274888448\n# 69 Loss 38.865525829945405\n# 70 Loss 38.81244949271222\n# 71 Loss 38.76261436470188\n# 72 Loss 38.71492958536709\n# 73 Loss 38.66891151979531\n# 74 Loss 38.62475069480949\n# 75 Loss 38.58229360226427\n# 76 Loss 38.541553187786796\n# 77 Loss 38.50237395130074\n# 78 Loss 38.46494247123351\n# 79 Loss 38.42923706607661\n# 80 Loss 38.39497936491831\n# 81 Loss 38.36195772147562\n# 82 Loss 38.33019168366161\n# 83 Loss 38.29960964550881\n# 84 Loss 38.270198099955806\n# 85 Loss 38.24167536038654\n# 86 Loss 38.21431001221481\n# 87 Loss 38.188245573170434\n# 88 Loss 38.16341749091748\n# 89 Loss 38.13948867118938\n# 90 Loss 38.11638183700097\n# 91 Loss 38.09411651718335\n# 92 Loss 38.07252271552579\n# 93 Loss 38.05158137159185\n# 94 Loss 38.031277368864664\n# 95 Loss 38.011662737255065\n# 96 Loss 37.992656504756795\n# 97 Loss 37.9740027387019\n# 98 Loss 37.95617544978626\n# 99 Loss 37.939595241688295\n# 100 Loss 37.92376268381349\nFinished training\n"
]
],
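[
[
"The `DataLoader` above uses its default `batch_size=1`, so every sample costs a full Python-level step. A hedged variation (it assumes `traindata`, `net`, `criterion` and `optimizer` from above, and note that running it trains the network for one further epoch) using mini-batches:",
"_____no_output_____"
]
],
[
[
"# Mini-batches amortize per-step overhead compared with batch_size=1.\nbatched_loader = torch.utils.data.DataLoader(traindata, batch_size=32, shuffle=True)\nepoch_loss = 0.0\nfor xb, yb in batched_loader:\n    optimizer.zero_grad()\n    loss = criterion(net(xb), yb)\n    loss.backward()\n    optimizer.step()\n    epoch_loss += loss.item()\nprint(f\"One batched epoch, loss {epoch_loss:.3f}\")",
"_____no_output_____"
]
],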
[
[
"### Evaluate the model",
"_____no_output_____"
]
],
[
[
"X_val_pt = torch.tensor(X_val, dtype=torch.float).to(device)\ny_val_pt = torch.tensor(y_val.reshape(-1, 1), dtype=torch.float).to(device)\n\nvaldata = torch.utils.data.TensorDataset(X_val_pt, y_val_pt)\nvalloader = torch.utils.data.DataLoader(valdata)",
"_____no_output_____"
],
[
"net.eval()\n\nwith torch.no_grad():\n y_pred_torch = [float(net(xi)) for xi, yi in valloader]",
"_____no_output_____"
],
[
"plt.figure(figsize=(15, 3))\nplt.plot(y_val_)\nplt.plot(target_scaler.inverse_transform(y_pred))\nplt.plot(target_scaler.inverse_transform(y_pred_skl))\nplt.plot(target_scaler.inverse_transform(y_pred_torch))\nplt.grid(c='k', alpha=0.2)",
"_____no_output_____"
],
[
"print(\"Scratch NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred)))\nprint()\nprint(\"Sklearn NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))\nprint()\nprint(\"PyTorch NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred_torch)))",
"Scratch NN\n7.824764825548954\n\nSklearn NN\n7.430958481921306\n\nPyTorch NN\n11.430781625429763\n"
]
],
[
[
"### Saving a PyTorch model\n\nIt is possible to save the mode with `torch.save(model, PATH)`, but this is not recommended because it depends on the exact structure of the project (files, directories, etc). Instead, PyTorch docs recommend saving the model class \n\nWe can save the model's parameters to disk:",
"_____no_output_____"
]
],
[
[
"fname = \"dt_model.pth\"\ntorch.save(net.state_dict(), fname)",
"_____no_output_____"
]
],
[
[
"...and read them into a new model:",
"_____no_output_____"
]
],
[
[
"saved_net = Net()\nsaved_net.load_state_dict(torch.load(fname))\n\nnet.eval()\n\nwith torch.no_grad():\n y_pred_torch_ = [float(saved_net(xi)) for xi, yi in valloader]\n\n# Check it's the same as before.\nnp.all(y_pred_torch == y_pred_torch_)",
"_____no_output_____"
]
],
[
[
"## Compare with linear regression",
"_____no_output_____"
],
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>EXERCISE</h3>\n\nMake a prediction using `sklearn.linear_model.Ridge`. How does it compare to the neural networks?\n</div>",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import Ridge\n\n# YOUR CODE HERE\nest = Ridge()\n\nest.fit(X_train, y_train_)\n# End with...\ny_pred_linreg = est.predict(X_val)",
"_____no_output_____"
],
[
"plt.figure(figsize=(15, 5))\nplt.plot(y_val_)\nplt.plot(target_scaler.inverse_transform(y_pred))\nplt.plot(target_scaler.inverse_transform(y_pred_skl))\nplt.plot(target_scaler.inverse_transform(y_pred_torch))\nplt.plot(y_pred_linreg)\nplt.grid(c='k', alpha=0.2)",
"_____no_output_____"
],
[
"print(\"Scratch NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred)))\nprint()\nprint(\"Sklearn NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred_skl)))\nprint()\nprint(\"PyTorch NN\")\nprint(rmse(y_val_, target_scaler.inverse_transform(y_pred_torch)))\nprint()\nprint(\"Linear regression\")\nprint(rmse(y_val_, y_pred_linreg))",
"Scratch NN\n7.824764825548954\n\nSklearn NN\n7.430958481921306\n\nPyTorch NN\n11.430781625429763\n\nLinear regression\n7.4307912121804724\n"
]
],
[
[
"---",
"_____no_output_____"
],
[
"<div style=\"background: #e0ffe0; border: solid 2px #d0f0d0; border-radius:3px; padding: 1em; color: darkgreen\">\n<h3>Optional exercises</h3>\n\nTry to do these exercises on the NumPy implementation. But if that proves too difficult, use the `sklearn` implementation.\n\n- Try changing the model parameters, for example using fewer units in the hidden layer. Does this help?\n- Add another layer to the model. Does this help?\n- Try using other activation functions than the logistic function we're currently using.\n- Implement batches, RMSprop, and momentum.\n\n<h3>Stretch</h3>\n\nIf you've taken the Mastery class, or know about object oriented programming, write a Python `class` to hold the NumPy implementation. Copy the `keras`/`sklearn` interface as closely as possible. Related: [this awesome video from Joel Grus](https://www.youtube.com/watch?v=o64FV-ez6Gw).\n</div>",
"_____no_output_____"
],
[
"## Other types of neural networks\n\n",
"_____no_output_____"
],
[
"---\n\n© 2021 Agile Scientific and Graham Ganssle — Content is CC-BY-SA",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a80fd5982336ec3b17833e8cc20d1d27f86a37c
| 188,266 |
ipynb
|
Jupyter Notebook
|
Baseball Stats/baseball_analysis.ipynb
|
akrakowsky/baseball_analysis
|
0ea514048e86dd42c7027726b31ab1244b848f37
|
[
"MIT"
] | null | null | null |
Baseball Stats/baseball_analysis.ipynb
|
akrakowsky/baseball_analysis
|
0ea514048e86dd42c7027726b31ab1244b848f37
|
[
"MIT"
] | null | null | null |
Baseball Stats/baseball_analysis.ipynb
|
akrakowsky/baseball_analysis
|
0ea514048e86dd42c7027726b31ab1244b848f37
|
[
"MIT"
] | null | null | null | 180.158852 | 32,416 | 0.876329 |
[
[
[
"# Baseball Analysis",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport scipy.stats as st\nimport numpy as np\n\n# Study data files\nplayer_path = \"./player.csv\"\nbatting_path = \"./batting.csv\"\npitching_path = \"./pitching.csv\"\nfielding_path = \"./fielding.csv\"\n\n# Read the baseball data and the study results\nplayer_data = pd.read_csv(player_path)\nplayer_data.head()\n\n# Clean player data\nplayer_clean = player_data[[\"player_id\", \"birth_country\", \"birth_state\",\n \"birth_city\", \"name_given\", \"weight\", \"height\",\n \"bats\", \"throws\", \"debut\", \"final_game\"]]\nplayer_clean.head()",
"_____no_output_____"
]
],
[
[
"## Bar Graph of Player Birth States ",
"_____no_output_____"
]
],
[
[
"# Generate a bar graph of players born in each state(exclude non-US born athletes).\nplayer_us = player_data[player_data[\"birth_country\"] == \"USA\"]\nplayer_us_year = player_us[player_us[\"birth_year\"] >= 1950]\nplayer_us_year\n\n# Filter the DataFrame down only to those columns to chart\nplayer_us_state = player_us_year[[\"birth_state\",\"player_id\"]]\nplayer_us_state\n\n# Groupby State\nplayer_state = player_us_state.groupby(\"birth_state\").count()\nplayer_state\n\n# Create a list indicating where to write x labels and set figure size to adjust for space\nplayer_state.plot(kind=\"bar\", figsize=(20,3))\n\n# Set a Title and labels\nplt.title(\"Baseball Players Born per State\")\nplt.xlabel(\"State\")\nplt.ylabel(\"Amount of Baseball Players\")\nplt.tight_layout()\nplt.savefig(\"./birth_state.png\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Should the NL adopt the DH rule?",
"_____no_output_____"
]
],
[
[
"batting_data = pd.read_csv(batting_path)\nbatting_data.head()",
"_____no_output_____"
],
[
"# DH rule was adopted by the AL league in 1973. \nbatting_data = batting_data[batting_data[\"year\"] >= 1973]\nbatting_data\n\n# Find the batting average and add a new column\nbatting_data[\"ba\"] = \"\"\nba = batting_data[\"h\"]/batting_data[\"ab\"]\nbatting_data[\"ba\"] = ba\nbatting_data\n\n# Remove NAN\nbatting_data.dropna()\n\n# Get the mean batting average per year for the AL\nbatting_al = batting_data[batting_data[\"league_id\"] == \"AL\"]\nbatting_al\n# Group by year\nbatting_al = batting_al.groupby(\"year\").mean()[\"ba\"]\nbatting_al\n\n# Get the mean batting average per year for the NL\nbatting_nl = batting_data[batting_data[\"league_id\"] == \"NL\"]\nbatting_nl\n# Group by year\nbatting_nl = batting_nl.groupby(\"year\").mean()[\"ba\"]\nbatting_nl\n\n# Plot as a line graph\nx_axis = np.arange(1973,2016,1)\n# print(x_axis)\nal_ba, = plt.plot(x_axis, batting_al, color=\"red\", label=\"AL\")\nnl_ba, = plt.plot(x_axis, batting_nl, color=\"blue\", label=\"NL\")\nplt.title(\"Batting Average Comparison for NL and AL from 1973-2015\")\nplt.xlabel(\"Years\")\nplt.ylabel(\"Batting average\")\nplt.legend(handles=[al_ba, nl_ba], loc=\"best\")\nplt.show()\nplt.savefig(\"./al_vs_nl.png\")",
"_____no_output_____"
]
],
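[
[
"To put a rough number on the gap between the two leagues (an added sketch, not part of the original analysis), we can run a paired t-test over the yearly league means. It assumes `batting_al`, `batting_nl` and the `st` (scipy.stats) import from the cells above:",
"_____no_output_____"
]
],
[
[
"# Hedged sketch: paired t-test on yearly AL vs NL mean batting averages.\ntstat, pval = st.ttest_rel(batting_al, batting_nl)\nprint(f\"t = {tstat:.2f}, p = {pval:.4f}\")",
"_____no_output_____"
]
],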
[
[
"### Observation: Overall batting average for the National League is lower then the American League. The American League uses desginated hitters in place of their pitchers batting. This could show the impact of having a hitting focused player in the line-up who replaces the pitcher, who tends to be the weaker batter.",
"_____no_output_____"
],
[
"## Pitching",
"_____no_output_____"
],
[
"## Has ERA improved over the years?",
"_____no_output_____"
]
],
[
[
"pitching_data = pd.read_csv(pitching_path)\npitching_data.head()",
"_____no_output_____"
],
[
"# Show only more recent pitching starting at 1970 and games played more than 5\npitching_clean = pitching_data[pitching_data[\"year\"] >= 1970]\npitching_games = pitching_clean[pitching_clean[\"g\"] > 5]\npitching_games\n\n# Group pitching records by year and average era\npitching_era = pitching_games.groupby(\"year\").mean()[\"era\"]\npitching_era\n\n# Plot as line\nxaxis = np.arange(1970, 2016, 1)\nplt.plot(xaxis, pitching_era)\nplt.title(\"Ptiching ERA from 1970-2015\")\nplt.xlabel(\"Years\")\nplt.ylabel(\"Earned Run Average\")\nplt.savefig(\"./pitching_era.png\")",
"_____no_output_____"
]
],
[
[
"## What caused the increase in ERA and batting average?",
"_____no_output_____"
]
],
[
[
"# Change in player size over the years\nimport warnings\nwarnings.filterwarnings('ignore')\n\nplayer_data\nplayer_data[\"final\"] = pd.to_datetime(player_data['final_game'], format='%Y-%m-%d').dt.year\nplayer_data.dropna()\n\n# Create scatter plot\nx_values = player_data[\"final\"]\ny_values = player_data[\"weight\"]\n\nplt.scatter(x_values,y_values)\nplt.xlabel(\"Year\")\nplt.ylabel(\"Weight of Player (lbs)\")\nplt.title(\"Baseball Player Weight Over the Years\")\nplt.show()\nplt.savefig(\"./player_weight.png\")",
"_____no_output_____"
],
[
"# Create scatter plot\nx_values = player_data[\"final\"]\ny_values2 = player_data[\"height\"]\n\nplt.scatter(x_values,y_values2)\nplt.xlabel(\"Year\")\nplt.ylabel(\"Height of Player (inches)\")\nplt.title(\"Baseball Player Height Over the Years\")\nplt.show()\nplt.savefig(\"./player_height.png\")\n",
"_____no_output_____"
]
],
[
[
"### Observation: Pitching ERA spike in the late 90's. Baseball began to focus on strangth training in the 80s and this could show an benefit of trianing as pitchers gave up more hits. The 90's became known as the \"Steroid Era\" where players used performance enhancing drugs to improve their power. There appears to be a trend for increased height and weight in baseball players over the years, but this can be a result of general population size increase.\n",
"_____no_output_____"
],
[
"## What position has the most fielding errors?",
"_____no_output_____"
]
],
[
[
"# Import the fielding data\nfielding_data = pd.read_csv(fielding_path)\nfielding_data\n\n# Only show data from 1970 and remove position DH(hitter only)\nfielding_data = fielding_data [fielding_data[\"year\"] >= 1970]\nfielding_data = fielding_data[fielding_data[\"pos\"] != \"DH\"]\nfielding_data\n\n# Combine by position and find the most errors\nerr_data = fielding_data[fielding_data[\"g\"] != 0]\nerr_data = err_data.groupby(\"pos\").sum()[\"e\"]\nerr_data.sort_values()\n\n# Plot the data as a bar chart\nerr_data.plot(kind = \"bar\", color = \"blue\", alpha = 0.8, align =\"center\")\n\n# Add labels\nplt.title(\"Total Errors by Position from 1970-2015\")\nplt.xlabel(\"Position\")\nplt.ylabel(\"Total Errors\")\nplt.savefig(\"./position_errors.png\")",
"_____no_output_____"
]
],
[
[
"### Observation: Shortstops and 3rd Base have the most errors and individual outfield positions(RF, LF, CF) have the least. This is expected as SS and 3rd base have more attempts at fielding than other positions.",
"_____no_output_____"
],
[
"## Do players with more years played have better fielding percentage?",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\n# Find the total years played\nplayer_clean[\"started\"] = pd.to_datetime(player_clean['debut'], format='%Y-%m-%d').dt.year\nplayer_clean[\"final\"] = pd.to_datetime(player_clean['final_game'], format='%Y-%m-%d').dt.year\nyears_played = player_clean[\"final\"] - player_clean[\"started\"]\nplayer_clean[\"years_played\"] = years_played\nplayer_clean\n\n# Merge the data\nnew_field = pd.merge(fielding_data, player_clean, on=\"player_id\")\nnew_field\n\n# Find the fielding percentage(FP = (put out + attempts)/(put outs + attempts + errors))\n# Create a new column for fielding percentage\nnew_field[\"FP\"] = \"\"\nnew_field\nnew_field[\"FP\"] = (new_field[\"po\"] + new_field[\"a\"])/(new_field[\"po\"] + new_field[\"a\"] + new_field[\"e\"])\n\n# Remove players that had less than 5 games played\nnew_field = new_field[new_field[\"g\"]> 5]\nnew_field = new_field.groupby([\"pos\", \"years_played\"]).mean()[[\"FP\"]]\nnew_field = new_field.reset_index(level=['pos', 'years_played'])\npositions = ['1B', '2B', '3B', 'C', 'CF', 'LF', 'P', 'RF', 'SS']\nnew_field\n\n# Create a scatter plot\nfig, axes = plt.subplots(nrows=3, ncols=3, sharex = True, sharey = True, figsize=(9, 9))\nfig.text(0.5, 0.04, 'Years Played', ha='center')\nfig.text(0.04, 0.5, 'Fielding Percentage', va='center', rotation='vertical')\naxes = axes.ravel()\nfor i in range(9):\n data = new_field[new_field[\"pos\"] == positions[i]]\n axes[i].scatter(data[\"years_played\"], data[\"FP\"])\n axes[i].set_xlim(0,30)\n axes[i].set_ylim(0.75, 1.03)\n axes[i].set_title(\"Position: \" + positions[i])\nplt.savefig(\"./position_errs.png\")",
"_____no_output_____"
]
],
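[
[
"To back the scatter plots with a number (an added sketch; it assumes `new_field` and `positions` from the cell above), we can compute the Pearson correlation between years played and mean fielding percentage for each position:",
"_____no_output_____"
]
],
[
[
"# Hedged sketch: correlation between career length and mean FP per position.\n# NaNs, if any remain in FP, will propagate into the coefficient.\nfor pos in positions:\n    d = new_field[new_field[\"pos\"] == pos]\n    r = np.corrcoef(d[\"years_played\"], d[\"FP\"])[0, 1]\n    print(pos, round(r, 3))",
"_____no_output_____"
]
],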
[
[
"### Observation: At this level of competition, the average fielding percentage does not vary much for amount of time spent playing professional baseball. Fielding percentages remain fairly consistent across all positions.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a8108d03df10dbd1b6a328e3de32e17824be301
| 4,501 |
ipynb
|
Jupyter Notebook
|
community/aqua/general/eoh.ipynb
|
Chibikuri/qiskit-tutorials
|
15c121b95249de17e311c869fbc455210b2fcf5e
|
[
"Apache-2.0"
] | 2 |
2017-11-09T16:33:14.000Z
|
2018-02-26T00:42:17.000Z
|
community/aqua/general/eoh.ipynb
|
Chibikuri/qiskit-tutorials
|
15c121b95249de17e311c869fbc455210b2fcf5e
|
[
"Apache-2.0"
] | 1 |
2019-04-12T07:43:25.000Z
|
2020-02-07T13:32:18.000Z
|
community/aqua/general/eoh.ipynb
|
Chibikuri/qiskit-tutorials
|
15c121b95249de17e311c869fbc455210b2fcf5e
|
[
"Apache-2.0"
] | 2 |
2019-03-24T21:00:25.000Z
|
2019-03-24T21:57:10.000Z
| 27.278788 | 231 | 0.580315 |
[
[
[
"## _*The EOH (Evolution of Hamiltonian) Algorithm*_\n\nThis notebook demonstrates how to use the `Qiskit Aqua` library to invoke the EOH algorithm and process the result.\n\nFurther information may be found for the algorithms in the online [Aqua documentation](https://qiskit.org/documentation/aqua/algorithms.html).\n\nFor this particular demonstration, we illustrate the `EOH` algorithm. First, two `Operator` instances we created are randomly generated Hamiltonians.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom qiskit import LegacySimulators\nfrom qiskit.transpiler import PassManager\nfrom qiskit_aqua import run_algorithm\nfrom qiskit_aqua.operator import Operator, QuantumInstance\nfrom qiskit_aqua.algorithms import EOH\nfrom qiskit_aqua.components.initial_states import Custom\nfrom qiskit_aqua.input import EnergyInput\n\nnum_qubits = 2\ntemp = np.random.random((2 ** num_qubits, 2 ** num_qubits))\nqubit_op = Operator(matrix=temp + temp.T)\ntemp = np.random.random((2 ** num_qubits, 2 ** num_qubits))\nevo_op = Operator(matrix=temp + temp.T)",
"_____no_output_____"
]
],
[
[
"For EOH, we would like to evolve some initial state (e.g. the uniform superposition state) with `evo_op` and do a measurement using `qubit_op`. Below, we illustrate how such an example dynamics process can be easily prepared.",
"_____no_output_____"
]
],
[
[
"evo_time = 1\nnum_time_slices = 1\nstate_in = Custom(qubit_op.num_qubits, state='uniform')\neoh = EOH(qubit_op, state_in, evo_op, 'paulis', evo_time, num_time_slices)",
"_____no_output_____"
]
],
[
[
"We can then configure the quantum backend and execute our `EOH` instance:",
"_____no_output_____"
]
],
[
[
"backend = LegacySimulators.get_backend('statevector_simulator')\nquantum_instance = QuantumInstance(backend, pass_manager=PassManager())\n\nret = eoh.run(quantum_instance)\nprint('The result is\\n{}'.format(ret))",
"The result is\n{'avg': (2.722036822009398-5.381265357255164e-17j), 'std_dev': 0.0}\n"
]
],
[
[
"The above programmatic approach can also be achieved via a declarative manner using json dictionary configuration:",
"_____no_output_____"
]
],
[
[
"params = {\n 'problem': {\n 'name': 'eoh'\n },\n 'algorithm': {\n 'name': 'EOH',\n 'num_time_slices': 1\n },\n 'initial_state': {\n 'name': 'CUSTOM',\n 'state': 'uniform'\n }\n}\nalgo_input = EnergyInput(qubit_op)\nalgo_input.add_aux_op(evo_op)",
"_____no_output_____"
]
],
[
[
"With all the necessary pieces prepared, we can then proceed to run the algorithm and examine the result.",
"_____no_output_____"
]
],
[
[
"ret = run_algorithm(params, algo_input, backend=backend)\nprint('The result is\\n{}'.format(ret))",
"The result is\n{'avg': (2.722036822009398-5.381265357255164e-17j), 'std_dev': 0.0}\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a8108f827e68f9339ccf6bcb2a03c0e51a0e498
| 4,084 |
ipynb
|
Jupyter Notebook
|
ClassMaterial/02 - Tools/02 code/02.3_WSC_Exercise.ipynb
|
gruberpeter/smartcontractscourse
|
b228c3476a9701e93b45fadd81d137dff86ee593
|
[
"MIT"
] | 1 |
2022-03-28T20:57:37.000Z
|
2022-03-28T20:57:37.000Z
|
ClassMaterial/02 - Tools/02 code/02.3_WSC_Exercise.ipynb
|
gruberpeter/smartcontractscourse
|
b228c3476a9701e93b45fadd81d137dff86ee593
|
[
"MIT"
] | null | null | null |
ClassMaterial/02 - Tools/02 code/02.3_WSC_Exercise.ipynb
|
gruberpeter/smartcontractscourse
|
b228c3476a9701e93b45fadd81d137dff86ee593
|
[
"MIT"
] | null | null | null | 24.309524 | 190 | 0.548482 |
[
[
[
"# Python Exercises\n#### 02.3 Writing Smart Contracts\n##### Peter Gruber ([email protected])\n2022-01-22\n* Exercises for basic Python operations",
"_____no_output_____"
],
[
"### Exercise 1\nPerform the follwoing calculations, where $\\ln()$ stands for the natural logarithm and $e$ is the Euler number.\n\n$(a)\\; e^{2} \\qquad (b)\\; \\sqrt[3]{17} \\qquad (c)\\; 12^{2\\cdot3} \\qquad (d)\\; \\frac{16\\cdot\\ln(2)}{8-\\sqrt{2}}$",
"_____no_output_____"
]
],
[
[
"# Python code goes here",
"_____no_output_____"
]
],
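[
[
"One possible solution sketch (added for illustration; yours may differ; it uses the standard `math` module):",
"_____no_output_____"
]
],
[
[
"# One possible solution (hypothetical):\nimport math\nprint(math.exp(2))  # (a) e**2\nprint(17 ** (1/3))  # (b) cube root of 17\nprint(12 ** (2 * 3))  # (c) 12**(2*3)\nprint(16 * math.log(2) / (8 - math.sqrt(2)))  # (d)",
"_____no_output_____"
]
],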
[
[
"### Exercise 2\nFormulate the following conditions in Python\n* `a` is smaller than 2 and `b` is larger than 5\n* `a` is between (including) -2 and (including) 5\n* either `a` is positive or `b` is between (excluding) -1 and (excluding) 1\n* the sum of `a`, `b` and `c` is smaller than the product of `a`, `b` and `c`",
"_____no_output_____"
]
],
[
[
"# Python code goes here",
"_____no_output_____"
]
],
[
[
"### Exercise 3\nCreate a new list `mylogarithms` that contains the logarithm of every element of the list `mynumbers`",
"_____no_output_____"
]
],
[
[
"mynumbers = [1, 2.5, 7, 18, 1E7]\n# Python code goes here",
"_____no_output_____"
]
],
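[
[
"A possible sketch (added for illustration) using a list comprehension with `math.log`:",
"_____no_output_____"
]
],
[
[
"# Hypothetical solution sketch:\nimport math\nmynumbers = [1, 2.5, 7, 18, 1E7]\nmylogarithms = [math.log(x) for x in mynumbers]\nprint(mylogarithms)",
"_____no_output_____"
]
],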
[
[
"### Exercise 4\nYou want to split 16 ALGOS like this: 1/2 for Alice, 1/4 for Bob and 1/8 each for Craig and Dan. Perform the divisions and store the results as a variable that expresses *entire* ALGOs",
"_____no_output_____"
]
],
[
[
"# Python code goes here",
"_____no_output_____"
]
],
[
[
"### Exercise 5\nAnswer the following questions using (clever) iteger arithmetic. If you change the unit of measurement, add it as a comment.\n* Two kilos of oranges are split among 5 people. How much does everyone get?\n* Seven people pay CHF 265 for dinner. How much does everyone have to pay? ",
"_____no_output_____"
]
],
[
[
"# Python code goes here",
"_____no_output_____"
]
],
[
[
"### Exercise 6\nExpress 1 Million as integer number. Do not use the `0`",
"_____no_output_____"
]
],
[
[
"# Python code goes here",
"_____no_output_____"
]
],
[
[
"### Exercise 7\nThe list `mywords` contains several words. Create a new list `shortwords` that contains only the first four letters of each word in `mywords`.\n\n*Hint* you know everything to get the \"first four letters\" of a word. Maybe try out this part without a list first.",
"_____no_output_____"
]
],
[
[
"mywords = ['apple','orange','banana','tomatoe','salad']\n# Python code goes here",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a812fd7009cdb44788c3f1f8330d1701e073a10
| 1,613 |
ipynb
|
Jupyter Notebook
|
Sherlock.ipynb
|
Miike728/sherlock
|
521754319c4f0d93dd22a2c284785a90e875e8ac
|
[
"Unlicense"
] | null | null | null |
Sherlock.ipynb
|
Miike728/sherlock
|
521754319c4f0d93dd22a2c284785a90e875e8ac
|
[
"Unlicense"
] | null | null | null |
Sherlock.ipynb
|
Miike728/sherlock
|
521754319c4f0d93dd22a2c284785a90e875e8ac
|
[
"Unlicense"
] | null | null | null | 22.402778 | 222 | 0.480471 |
[
[
[
"<a href=\"https://colab.research.google.com/github/Miike728/sherlock/blob/main/Sherlock.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/sherlock-project/sherlock.git",
"_____no_output_____"
],
[
"cd sherlock",
"_____no_output_____"
],
[
"!python3 -m pip install -r requirements.txt",
"_____no_output_____"
],
[
"!python3 sherlock \"USUARIO (SIN COMILLAS)\"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a813f08c389b0c957c5ce4d3499beae123c42e4
| 87,027 |
ipynb
|
Jupyter Notebook
|
notebook/1_topic_model_BERT_scratch.ipynb
|
jakartaresearch/topic-modeling
|
175110905c345f641b37d8081221e03fe1020370
|
[
"MIT"
] | null | null | null |
notebook/1_topic_model_BERT_scratch.ipynb
|
jakartaresearch/topic-modeling
|
175110905c345f641b37d8081221e03fe1020370
|
[
"MIT"
] | null | null | null |
notebook/1_topic_model_BERT_scratch.ipynb
|
jakartaresearch/topic-modeling
|
175110905c345f641b37d8081221e03fe1020370
|
[
"MIT"
] | null | null | null | 168.005792 | 73,852 | 0.883117 |
[
[
[
"## Topic Modeling using BERT",
"_____no_output_____"
],
[
"following https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6",
"_____no_output_____"
],
[
"Reference\n- https://www.sbert.net/index.html\n- https://github.com/UKPLab/sentence-transformers",
"_____no_output_____"
]
],
[
[
"import os\nimport re\n\nimport pandas as pd\n\nfrom sentence_transformers import SentenceTransformer",
"_____no_output_____"
],
[
"d_dataset = pd.read_json(\"../data/dataset/jan_sep_2020.json\")",
"_____no_output_____"
],
[
"d_dataset.head()",
"_____no_output_____"
],
[
"d_dataset.shape",
"_____no_output_____"
],
[
"d_dataset = d_dataset.loc[:9999, ]",
"_____no_output_____"
],
[
"d_dataset.shape",
"_____no_output_____"
],
[
"model = SentenceTransformer('stsb-xlm-r-multilingual')",
"_____no_output_____"
],
[
"embeddings = model.encode(d_dataset.title, show_progress_bar=True)",
"_____no_output_____"
],
[
"import umap\nimport umap.plot",
"_____no_output_____"
],
[
"umap_embeddings = umap.UMAP(n_neighbors=15, \n n_components=5, \n metric='cosine').fit_transform(embeddings)",
"_____no_output_____"
],
[
"umap_embeddings.shape",
"_____no_output_____"
],
[
"import hdbscan",
"_____no_output_____"
],
[
"cluster = hdbscan.HDBSCAN(min_cluster_size=15,\n metric='euclidean', \n cluster_selection_method='eom').fit(umap_embeddings)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\n# Prepare data\numap_data = umap.UMAP(n_neighbors=15,\n n_components=2,\n min_dist=0.0,\n metric='cosine').fit_transform(embeddings)\n\nresult = pd.DataFrame(umap_data, columns=['x', 'y'])\nresult['labels'] = cluster.labels_",
"_____no_output_____"
],
[
"result",
"_____no_output_____"
],
[
"# Visualize clusters\nfig, ax = plt.subplots(figsize=(20, 10))\noutliers = result.loc[result.labels == -1, :]\nclustered = result.loc[result.labels != -1, :]\nplt.scatter(outliers.x, outliers.y, color='#BDBDBD', s=0.05)\nplt.scatter(clustered.x, clustered.y, c=clustered.labels, s=0.05, cmap='hsv_r')\nplt.colorbar()",
"_____no_output_____"
]
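,
[
"# Added follow-up sketch (not in the original notebook): peek at the most\n# frequent words per HDBSCAN cluster as rough topic labels.\n# Assumes `d_dataset` and `cluster` from the cells above.\n# (On sklearn >= 1.2, use get_feature_names_out instead of get_feature_names.)\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ndocs = pd.DataFrame({'title': d_dataset.title, 'label': cluster.labels_})\nfor label, group in docs[docs.label != -1].groupby('label'):\n    vec = CountVectorizer(stop_words='english').fit(group.title)\n    counts = vec.transform(group.title).sum(axis=0).A1\n    top = sorted(zip(vec.get_feature_names(), counts), key=lambda t: -t[1])[:5]\n    print(label, [w for w, _ in top])",
"_____no_output_____"
]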
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a814846a5e6bbb240a3edef75fee565177f090d
| 198,111 |
ipynb
|
Jupyter Notebook
|
harmonic_oscillator.ipynb
|
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-xren307
|
2ae5c64257c4cd01f2f04d7594c9d0522104bee2
|
[
"MIT"
] | null | null | null |
harmonic_oscillator.ipynb
|
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-xren307
|
2ae5c64257c4cd01f2f04d7594c9d0522104bee2
|
[
"MIT"
] | null | null | null |
harmonic_oscillator.ipynb
|
sju-chem264-2019/10-24-19-introduction-to-harmonic-oscillator-xren307
|
2ae5c64257c4cd01f2f04d7594c9d0522104bee2
|
[
"MIT"
] | null | null | null | 166.90059 | 18,316 | 0.894817 |
[
[
[
"# Introduction to the Harmonic Oscillator",
"_____no_output_____"
],
[
"*Note:* Much of this is adapted/copied from https://flothesof.github.io/harmonic-oscillator-three-methods-solution.html",
"_____no_output_____"
],
[
"This week week we are going to begin studying molecular dynamics, which uses classical mechanics to study molecular systems. Our \"hydrogen atom\" in this section will be the 1D harmomic oscillator. \n\n ",
"_____no_output_____"
],
[
"The harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x:\n\n$$F=-kx$$\n\nThe potential energy of this system is \n\n$$V = {1 \\over 2}k{x^2}$$",
"_____no_output_____"
],
[
"These are sometime rewritten as\n\n$$ F=- \\omega_0^2 m x, \\text{ } V(x) = {1 \\over 2} \\omega_0^2 m {x^2}$$\n\nWhere $\\omega_0 = \\sqrt {{k \\over m}} $",
"_____no_output_____"
],
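[
"As a quick worked example (added for concreteness): with $k=1$ N/m and $m=1$ kg, $\\omega_0 = \\sqrt {{k \\over m}} = 1$ rad/s, so the period of oscillation is $T = {2\\pi \\over \\omega_0} \\approx 6.28$ s.",
"_____no_output_____"
],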
[
"In classical mechanics, our goal is to determine the equations of motion, $x(t),y(t)$, that describe our system. \n\nIn this notebook we will use sympy to solve an second order, ordinary differential equation.",
"_____no_output_____"
],
[
"## 1. Solving differential equations with sympy",
"_____no_output_____"
],
[
"Soliving differential equations can be tough, and there is not always a set plan on how to proceed. Luckily for us, the harmonic osscillator is the classic second order diffferential eqations.",
"_____no_output_____"
],
[
"Consider the following second order differential equation\n\n$$ay(t)''+by(t)'=c$$\n\nwhere $y(t)'' = {{{d^2}y} \\over {dt^2}}$, and $y(t)' = {{{d}y} \\over {dt}}$",
"_____no_output_____"
],
[
"We can rewrite this as a homogeneous linear differential equations\n\n$$ay(t)''+by(t)'-c=0$$",
"_____no_output_____"
],
[
"The goal here is to find $y(t)$, similar to our classical mechanics problems. Lets use sympy to solve this equation",
"_____no_output_____"
],
[
"### Second order ordinary differential equation",
"_____no_output_____"
],
[
"First we import the sympy library",
"_____no_output_____"
]
],
[
[
"import sympy as sym",
"_____no_output_____"
]
],
[
[
"Next we initialize pretty printing",
"_____no_output_____"
]
],
[
[
"sym.init_printing()",
"_____no_output_____"
]
],
[
[
"Next we will set our symbols",
"_____no_output_____"
]
],
[
[
"t,a,b,c=sym.symbols(\"t,a,b,c\")",
"_____no_output_____"
]
],
[
[
"Now for somehting new. We can define functions using `sym.Function(\"f\")`",
"_____no_output_____"
]
],
[
[
"y=sym.Function(\"y\")\ny(t)",
"_____no_output_____"
]
],
[
[
"Now, If I want to define a first or second derivative, I can use `sym.diff`",
"_____no_output_____"
]
],
[
[
"sym.diff(y(t),(t,1)),sym.diff(y(t),(t,2))",
"_____no_output_____"
]
],
[
[
"My differential equation can be written as follows",
"_____no_output_____"
]
],
[
[
"dfeq=a*sym.diff(y(t),(t,2))+b*sym.diff(y(t),(t,1))-c\ndfeq",
"_____no_output_____"
],
[
"sol = sym.dsolve(dfeq)\nsol",
"_____no_output_____"
]
],
[
[
"The two constants $C_1$ and $C_2$ can be determined by setting boundry conditions.\nFirst, we can set the condition $y(t=0)=y_0$\n\nThe next intial condition we will set is $y'(t=0)=v_0$\n\nTo setup the equality we want to solve, we are using `sym.Eq`. This function sets up an equaility between a lhs aand rhs of an equation",
"_____no_output_____"
]
],
[
[
"# sym.Eq example\nalpha,beta=sym.symbols(\"alpha,beta\")\nsym.Eq(alpha+2,beta)",
"_____no_output_____"
]
],
[
[
"Back to the actual problem",
"_____no_output_____"
]
],
[
[
"x0,v0=sym.symbols(\"x_0,v_0\")\nics=[sym.Eq(sol.args[1].subs(t, 0), x0),\n sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]\nics",
"_____no_output_____"
]
],
[
[
"We can use this result to first solve for $C_2$ and then solve for $C_1$.\nOr we can use sympy to solve this for us.",
"_____no_output_____"
]
],
[
[
"solved_ics=sym.solve(ics)\nsolved_ics",
"_____no_output_____"
]
],
[
[
"Substitute the result back into $y(t)$",
"_____no_output_____"
]
],
[
[
"full_sol = sol.subs(solved_ics[0])\nfull_sol",
"_____no_output_____"
]
],
[
[
"We can plot this result too. Assume that $a,b,c=1$ and that the starting conditions are $y_0=0,v_0=0$\n\n\nWe will use two sample problems:\n\n* case 1 : initial position is nonzero and initial velocity is zero\n* case 2 : initial position is zero and initialvelocity is nonzero\n",
"_____no_output_____"
]
],
[
[
"# Print plots\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"#### Initial velocity set to zero",
"_____no_output_____"
]
],
[
[
"case1 = sym.simplify(full_sol.subs({y0:0, v0:0, a:1, b:1, c:1}))\ncase1",
"_____no_output_____"
],
[
"sym.plot(case1.rhs)\nsym.plot(case1.rhs,(t,-2,2))",
"_____no_output_____"
]
],
[
[
"#### Initial velocity set to one",
"_____no_output_____"
]
],
[
[
"case2 = sym.simplify(full_sol.subs({y0:0, v0:1, a:1, b:1, c:1}))\ncase2",
"_____no_output_____"
],
[
"sym.plot(case2.lhs,(t,-2,2))",
"_____no_output_____"
]
],
[
[
"## Calculate the phase space",
"_____no_output_____"
],
[
"As we will see in lecture, the state of our classical systems are defined as points in phase space, a hyperspace defined by ${{\\bf{r}}^N},{{\\bf{p}}^N}$. We will convert our sympy expression into a numerical function so that we can plot the path of $y(t)$ in phase space $y,y'$.",
"_____no_output_____"
]
],
[
[
"case1",
"_____no_output_____"
],
[
"# Import numpy library\nimport numpy as np\n\n# Make numerical functions out of symbolic expressions\nyfunc=sym.lambdify(t,case1.rhs,'numpy')\nvfunc=sym.lambdify(t,case1.rhs.diff(t),'numpy')\n\n# Make list of numbers\ntlst=np.linspace(-2,2,100)\n\n# Import pyplot\nimport matplotlib\nimport matplotlib.pyplot as plt\n# Make plot\nplt.plot(yfunc(tlst),vfunc(tlst))\nplt.xlabel('$y$')\nplt.ylabel(\"$y'$\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Exercise 1.1 \n\nChange the initial starting conditions and see how that changes the plots. Make three different plots with different starting conditions",
"_____no_output_____"
]
],
[
[
"# Change starting velocity to 5\ncase3 = sym.simplify(full_sol.subs({y0:0, v0:5, a:1, b:1, c:1}))\ntlst=np.linspace(-2,2,100)\nsym.plot(case3.rhs,(t,-2,2))\n\n# Change starting position to 3\ncase4 = sym.simplify(full_sol.subs({y0:3, v0:0, a:1, b:1, c:1}))\ntlst=np.linspace(-2,2,100)\nsym.plot(case4.rhs,(t,-2,2))\n\n\n# Change starting velocity to 0.5\ncase5 = sym.simplify(full_sol.subs({y0:0, v0:0.5, a:1, b:1, c:1}))\ntlst=np.linspace(-2,2,100)\nsym.plot(case5.rhs,(t,-2,2))",
"_____no_output_____"
],
[
"#",
"_____no_output_____"
]
],
[
[
"## 2. Harmonic oscillator ",
"_____no_output_____"
],
[
"Applying the harmonic oscillator force to Newton's second law leads to the following second order differential equation\n\n$$ F = m a $$\n\n$$ F= - \\omega_0^2 m x $$\n\n$$ a = - \\omega_0^2 x $$\n\n$$ x(t)'' = - \\omega_0^2 x $$",
"_____no_output_____"
],
[
"The final expression can be rearranged into a second order homogenous differential equation, and can be solved using the methods we used above",
"_____no_output_____"
],
[
"Your goal is determine and plot the equations of motion of a 1D harmomnic oscillator",
"_____no_output_____"
],
[
"### Exercise 2.1 ",
"_____no_output_____"
],
[
"1. Use the methodology above to determine the equations of motion $x(t), v(t)$ for a harmonic ocillator\n1. Solve for any constants by using the following initial conditions: $x(0)=x_0, v(0)=v_0$\n1. Show expressions for and plot the equations of motion for the following cases:\n 1. $x(0)=0, v(0)=0$\n 1. $x(0)=0, v(0)>0$\n 1. $x(0)>0, v(0)=0$\n 1. $x(0)<0, v(0)=0$\n1. Plot the phasespace diagram for the harmonic oscillator",
"_____no_output_____"
]
],
[
[
"# Equations of motion x(t). a= dv/dt\nt,a,b,c,x,k,z, w0=sym.symbols(\"t,a,b,c,x,k,z,w0\")\nx=sym.Function(\"x\")\nx(t)\nsym.diff(x(t),(t,1)),sym.diff(x(t),(t,2))\ndfeq=a*sym.diff(x(t),(t,2))+b*sym.diff(x(t),(t,1))-c\ndfeq\n",
"_____no_output_____"
],
[
"sol = sym.dsolve(dfeq)\nsol",
"_____no_output_____"
],
[
"# sym.Eq example. limitation on C1 and C2 \nalpha,beta=sym.symbols(\"alpha,beta\")\nsym.Eq(alpha+2,beta)",
"_____no_output_____"
],
[
"#x= x0, z'= v\nx0,v0=sym.symbols(\"x_0,v_0\")\nics=[sym.Eq(sol.args[1].subs(t, 0), x0),\n sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]\nics\nfull_sol = sol.subs(solved_ics[0])\nfull_sol",
"_____no_output_____"
],
[
"dfeq=sym.diff(x(t),(t,2))+w0**2*x(t)\ndfeq\n",
"_____no_output_____"
],
[
"sol = sym.dsolve(dfeq)\nsol\n\nalpha,beta=sym.symbols(\"alpha,beta\")\nsym.Eq(alpha+2,beta)\n\nx0,v0=sym.symbols(\"x_0,v_0\")\nics=[sym.Eq(sol.args[1].subs(t, 0), x0),\n sym.Eq(sol.args[1].diff(t).subs(t, 0), v0)]\nics\n\nsolved_ics=sym.solve(ics)\nsolved_ics",
"_____no_output_____"
],
[
"full_sol= sol.subs(solved_ics[0])\nfull_sol",
"_____no_output_____"
],
[
"#1. 𝑥(0)=0,𝑣(0)=0\ncasea = sym.simplify(full_sol.subs({x0:0, v0:0, w0:1}))\ntlst=np.linspace(-2,2,100)\nsym.plot(casea.rhs,(t,-2,2))\n\n",
"_____no_output_____"
],
[
"#2. 𝑥(0)=0,𝑣(0)>0\ncaseb = sym.simplify(full_sol.subs({x0:0, v0:2, w0:1}))\ntlst=np.linspace(-2,2,100)\nsym.plot(caseb.rhs,(t,-2,2))",
"_____no_output_____"
],
[
"#3. 𝑥(0)>0,𝑣(0)=0\ncasec = sym.simplify(full_sol.subs({x0:2, v0:0, wo:1}))\ntlst=np.linspace(-2,2,100)\nsym.plot(caseb.rhs,(t,-2,2))\n ",
"_____no_output_____"
],
[
"#4. 𝑥(0)<0,𝑣(0)=0\ncased = sym.simplify(full_sol.subs({x0:-2, v0:0, w0:1))\ntlst=np.linspace(-2,2,100)\nsym.plot(caseb.rhs,(t,-2,2))",
"_____no_output_____"
],
[
"# Phase Plot\nimport numpy as np\n\n# Make numerical functions out of symbolic expressions\nxfunc=sym.lambdify(t,caseb.rhs,'numpy')\nvfunc=sym.lambdify(t,caseb.rhs.diff(t),'numpy')\n\n# Make list of numbers\ntlst=np.linspace(-10,10,100)\n\n# Import pyplot\nimport matplotlib\nimport matplotlib.pyplot as plt\n# Make plot\nplt.plot(xfunc(tlst),vfunc(tlst))\nplt.xlabel('$x$')\nplt.ylabel(\"$x'$\")\nplt.show()",
"_____no_output_____"
]
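,
[
"# Added sanity check (not part of the original exercise): along the phase-space\n# orbit the energy E = v**2/2 + w0**2 * x**2 / 2 should be constant.\n# Assumes xfunc, vfunc and tlst (with w0 = 1) from the cells above.\nenergy = 0.5 * vfunc(tlst)**2 + 0.5 * xfunc(tlst)**2\nprint(energy.min(), energy.max())  # should agree to numerical precision",
"_____no_output_____"
]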
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a81491bb5030530d8f2a4b64d5cb64aba3dffed
| 386,165 |
ipynb
|
Jupyter Notebook
|
data.world_gapminer_experiment.ipynb
|
brianray/data.world-scripts
|
b72770cb49e45772d75a58f48485777e969ee848
|
[
"Apache-2.0"
] | 11 |
2017-08-03T12:17:56.000Z
|
2021-04-30T21:33:25.000Z
|
data.world_gapminer_experiment.ipynb
|
brianray/data.world-scripts
|
b72770cb49e45772d75a58f48485777e969ee848
|
[
"Apache-2.0"
] | null | null | null |
data.world_gapminer_experiment.ipynb
|
brianray/data.world-scripts
|
b72770cb49e45772d75a58f48485777e969ee848
|
[
"Apache-2.0"
] | 5 |
2019-05-13T13:20:11.000Z
|
2021-09-01T13:23:57.000Z
| 100.485298 | 424 | 0.643999 |
[
[
[
"import re\nfrom lxml import html\ndata = open(\"Data.htm\", \"r\").read() # Data.htm from https://www.gapminder.org/data/\ntree = html.fromstring(data)\ntitles = [x.text for x in tree.xpath('//*[@id=\"indicators-table\"]/tbody/tr[*]/td[1]/a')]\nsources = [x.text for x in tree.xpath('//*[@id=\"indicators-table\"]/tbody/tr[*]/td[2]/a')]\ntags = [[str(x.text).lower(), 'gapminder'] for x in tree.xpath('//*[@id=\"indicators-table\"]/tbody/tr[*]/td[3]')]\nlinks = [link.values()[1] for link in tree.xpath('//*[@id=\"indicators-table\"]/tbody/tr[*]/td[5]/a[1]')]\n\n\ndata = tuple(zip(titles, links, sources, tags))",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"import os.path\nimport urllib.request\nimport datadotworld as dw\nimport re\n\nclient = dw.api_client()\n\nfor title, link, based, tags in data:\n orglink = link\n link = link.strip().replace(\"&\", \"&\").replace(\"xls\", \"csv\")\n title = re.sub(r'\\W+', ' ', title)\n filename = \"{}.csv\".format(title.strip())\n title = \"gapminder {}\".format(title)[:30]\n data_args = dict(owner_id=\"brianray\",\n title=title,\n description=\"{} based on {}\".format(title, based)[:120],\n tags=tags,\n license='CC-BY',\n visibility=\"OPEN\",\n files={filename: link})\n print(data_args)\n try:\n client.create_dataset(**data_args)\n except Exception as e:\n print(e)\n continue\n print(\"---\")\n \n \n ",
"{'description': 'gapminder Adults with HIV age based on Based on UNAIDS', 'files': {'Adults with HIV age 15 49.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEfbZyl0qjbiRQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Adults with HIV age ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"7bfb5940-b3da-4e76-81eb-9f7c62a18b6e\"}\n\n{'description': 'gapminder Age at 1st marriage based on Various sources', 'files': {'Age at 1st marriage women.csv': 'http://spreadsheets.google.com/pub?key=t4eF8H_jq_xyKCUHAX6VT1g&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Age at 1st marriage ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"2e7b6da9-a00a-4222-af03-2c2045a76eeb\"}\n\n{'description': 'gapminder Aged 15 employment r based on International Labour Organization', 'files': {'Aged 15 employment rate.csv': 'http://spreadsheets.google.com/pub?key=rV0ksExNqh6V_h40f0_nFjg&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 15 employment r', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"96dbbc8f-81b1-4d47-bda7-fccb96651524\"}\n\n{'description': 'gapminder Aged 15 labour force based on International Labour Organization', 'files': {'Aged 15 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=ryyQX_1TXlohXWOSUswhIKg&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 15 labour force', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"18b5c50a-1ff0-4384-be1a-2bc6431574bb\"}\n\n{'description': 'gapminder Aged 15 unemployment based on International Labour Organization', 'files': {'Aged 15 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rlD36wGmkwFt3ED558waCTQ&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 15 unemployment', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: 
{\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"f0934fca-4d07-4e47-b0fb-f0a74a4d3415\"}\n\n{'description': 'gapminder Aged 15 24 employmen based on International Labour Organization', 'files': {'Aged 15 24 employment rate.csv': 'http://spreadsheets.google.com/pub?key=rfHz_nx27dDQo4dUoIeVT3A&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 15 24 employmen', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"04577123-0f34-4791-b7fb-d9d057cbe747\"}\n\n{'description': 'gapminder Aged 15 24 unemploym based on International Labour Organization', 'files': {'Aged 15 24 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rb0oP4d1BREXa8xMIUf4NZg&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 15 24 unemploym', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"50c2befa-3beb-43e4-a91d-557f2434b20b\"}\n\n{'description': 'gapminder Aged 15 64 labour fo based on International Labour Organization', 'files': {'Aged 15 64 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=rx1TECfEnGlnomonxCCO-Aw&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 15 64 labour fo', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"1168b430-d406-4c98-9729-b188feb413aa\"}\n\n{'description': 'gapminder Aged 25 54 labour fo based on International Labour Organization', 'files': {'Aged 25 54 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=rTrB-PY0sfM_gdAQ20XovfA&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 25 54 labour fo', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:48 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"541f5acd-2b8a-4e83-94fc-ba4e1a3e6a5f\"}\n\n{'description': 'gapminder Aged 25 54 unemploym based on International Labour Organization', 'files': {'Aged 25 54 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rEMA-cbNPaOtpDyxTcwugnw&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 25 54 unemploym', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: 
HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:49 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})
HTTP response body: {"code":400,"message":"Attempted to create an entity that already exists.","details":"8fca2b3b-b8d7-4599-be72-017d9bc91bbb"}

{'description': 'gapminder Aged 55 unemployment based on International Labour Organization', 'files': {'Aged 55 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rNn0y3e0bCpaqTM_8BVZBdg&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Aged 55 unemployment', 'owner_id': 'brianray'}
(400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:32:49 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})
HTTP response body: {"code":400,"message":"Attempted to create an entity that already exists.","details":"63e18883-f1d5-4346-8253-ffe2e4a0e3e7"}

{'description': 'gapminder All forms of TB deat based on World Health Organization', 'files': {'All forms of TB deaths per 100 000 estimated.csv': 'http://spreadsheets.google.com/pub?key=rWM9yEzjpGJvcJlUAIm35tA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder All forms of TB deat', 'owner_id': 'brianray'}
---
[output truncated: the same pattern repeats for the remaining gapminder indicators (health, economy, environment, education, work, energy, infrastructure, population, and society tags); each dataset descriptor is printed, followed either by "---" when creation succeeds or by a 400 Bad Request with "Attempted to create an entity that already exists." when the dataset was created in an earlier run]
International Labour Organization', 'files': {'Females aged 15 employment rate.csv': 'http://spreadsheets.google.com/pub?key=rOXvRa2ZC2oXqBn7gz62IMg&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 15 empl', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 15 labo based on International Labour Organization', 'files': {'Females aged 15 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=rZyHDNFsPBn7cqZCIzDQtIg&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 15 labo', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 15 unem based on International Labour Organization', 'files': {'Females aged 15 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rcHjAQAzF2e1yR1R-hywCEw&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 15 unem', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 15 24 e based on International Labour Organization', 'files': {'Females aged 15 24 employment rate.csv': 'http://spreadsheets.google.com/pub?key=rRS0FbArN8jYsY25X-ZiU9A&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 15 24 e', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 15 24 u based on International Labour Organization', 'files': {'Females aged 15 24 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rMf--YMvuEKf2LVppT63Xvw&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 15 24 u', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 15 64 l based on International Labour Organization', 'files': {'Females aged 15 64 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=rLRScmH2JZmjxsCGW2LB1cA&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 15 64 l', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 25 54 l based on International Labour Organization', 'files': {'Females aged 25 54 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=rgdYcit5cC0wxcLAQf9kJ_Q&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 25 54 l', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 25 54 u based on International Labour Organization', 'files': {'Females aged 25 54 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=r9StWVETzyX9Lv-r4-2sh6w&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 25 54 u', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 55 unem based on International Labour Organization', 'files': {'Females aged 55 unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rz8kJ7CIyckuQAWgHUHe4sA&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 55 unem', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Females aged 65 labo based on International Labour Organization', 'files': {'Females aged 65 labour force participation rate.csv': 'http://spreadsheets.google.com/pub?key=rEZ0xOSmU7UuX7iOyL0Xp3g&output=csv'}, 'license': 
'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Females aged 65 labo', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Fixed line and mobil based on World Development Indicators', 'files': {'Fixed line and mobile phone subscribers per 100 people.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcfLoOcU6GAfg&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Fixed line and mobil', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Flood affected annua based on EM-DAT: The OFDA/CRED International Disaster Database', 'files': {'Flood affected annual number.csv': 'http://spreadsheets.google.com/pub?key=rsCDusOObseaoBUdarzw7Kw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Flood affected annua', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Flood deaths annual based on EM-DAT: The OFDA/CRED International Disaster Database', 'files': {'Flood deaths annual number.csv': 'http://spreadsheets.google.com/pub?key=rtESPUlrTyLEoHpURqE8RAg&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Flood deaths annual ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Food supply kilocalo based on FAO modified', 'files': {'Food supply kilocalories person day.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdGlYVVpXX20tbU13STZyVG0yNkRrZnc&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Food supply kilocalo', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Foreign direct inves based on World Bank', 'files': {'Foreign direct investment net inflows of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdE03VFhRMnBpMGZhQ19Vbk9pMGU5VUE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Foreign direct inves', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Foreign direct inves based on Not Found.', 'files': {'Foreign direct investment net outflows of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdHQtSzBhVXA2WTNrVDFleUZvZ0doTUE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Foreign direct inves', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:06 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"900c4039-dc38-4725-824f-c726eec725af\"}\n\n{'description': 'gapminder Forest area sq km based on World Bank', 'files': {'Forest area sq km.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdFRuaV91Mm9JeUhwR1hHRXJhV3ZBQkE&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Forest area sq km ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Forest coverage based on FAO - Food and Agriculture Organization', 'files': {'Forest coverage.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWfRGgfhjf8FBQ&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Forest coverage ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder 
Forest land total ar based on FAO - Food and Agriculture Organization', 'files': {'Forest land total area ha.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWeB1N1HlpFQVQ&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Forest land total ar', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Forest products remo based on FAO - Food and Agriculture Organization', 'files': {'Forest products removal per ha.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWd9CVdfFx1dEw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Forest products remo', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Forest products remo based on FAO - Food and Agriculture Organization', 'files': {'Forest products removal total.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWf66stZ2oNUAA&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Forest products remo', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:06 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"1f07335b-0ff1-4652-a070-f702fa9afaaa\"}\n\n{'description': 'gapminder GDP capita US inflat based on World Bank', 'files': {'GDP capita US inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdHo5S0J6ekhVOF9QaVhod05QSGV4T3c&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GDP capita US inflat', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GDP capita growth pe based on World Bank', 'files': {'GDP capita growth per year.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdEdDWHhBcFpjMUo4MGE2X2Q4WXFQRGc&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GDP capita growth pe', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GDP capita growth ov based on Various sources', 'files': {'GDP capita growth over next 10 years.csv': 'http://spreadsheets.google.com/pub?key=tvllZwGIbhwxLD7EXFhPeXQ&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GDP capita growth ov', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GDP employee US infl based on International Labour Organization', 'files': {'GDP employee US inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=rcTO3doih5lvJCjgLSvlajA&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GDP employee US infl', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GDP working hour US based on International Labour Organization', 'files': {'GDP working hour US inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=r6kTHMinnVedj8gPsUtfZ0g&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GDP working hour US ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GNI capita Atlas met based on World Bank ', 'files': {'GNI capita Atlas method current US.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdFVrVDZQUnRwZ2lqT2lPMXcySXZwRmc&output=csv'}, 'license': 'CC-BY', 
'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GNI capita Atlas met', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GNI capita constant based on World Bank ', 'files': {'GNI capita constant 2000 US.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdFdqZ0NOdjluMmoyUTBTWTRjWWQzQVE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GNI capita constant ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder GNI per capita PPP c based on World Bank ', 'files': {'GNI per capita PPP current international.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdGhJcHAwanc2aFdZeXl1WTVZQnJjb1E&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder GNI per capita PPP c', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Government and socie based on OECD QWIDS', 'files': {'Government and society aid given of aid.csv': 'http://spreadsheets.google.com/pub?key=t3IAEOsfHK-z6rvGLCDR74g&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Government and socie', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Government health sp based on WHO Global Health Expenditure Database', 'files': {'Government health spending of total gov spending.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEd7K-YgYOkGFQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Government health sp', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Government health sp based on WHO Global Health Expenditure Database', 'files': {'Government health spending per person international.csv': 'http://spreadsheets.google.com/pub?key=tZ3uHUdw0H__Siyj78GXsGg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Government health sp', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:07 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"1896741a-08a5-4477-9d54-bc260bd9a8e5\"}\n\n{'description': 'gapminder Government health sp based on WHO Global Health Expenditure Database', 'files': {'Government health spending per person US.csv': 'http://spreadsheets.google.com/pub?key=tBwBBkViOJoycBhLnWHqwSQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Government health sp', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:07 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"90b082b4-4bbb-42bd-b3d0-4ed0ed4e1026\"}\n\n{'description': 'gapminder Government share of based on WHO Global Health Expenditure Database', 'files': {'Government share of total health spending.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcJI3KBJnrlDQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Government share of ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder HDI 
Human Developmen based on UNDP', 'files': {'HDI Human Development Index.csv': 'http://spreadsheets.google.com/pub?key=tyadrylIpQ1K_iHP407374Q&output=csv'}, 'license': 'CC-BY', 'tags': ['society', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder HDI Human Developmen', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Health aid given of based on OECD QWIDS', 'files': {'Health aid given of aid.csv': 'http://spreadsheets.google.com/pub?key=tRybjVoG5Ah9yhKcEx16u5Q&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Health aid given of ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder HepB3 immunized of o based on UNICEF Childinfo', 'files': {'HepB3 immunized of one year olds.csv': 'http://spreadsheets.google.com/pub?key=t7pU8fR9_ZzRFIMF3FX47YQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder HepB3 immunized of o', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Hib3 immunized of on based on UNICEF Childinfo', 'files': {'Hib3 immunized of one year olds.csv': 'http://spreadsheets.google.com/pub?key=thClNiXoQqfJDzTv0SYIHZg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Hib3 immunized of on', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder High technology expo based on World Bank', 'files': {'High technology exports of manufactured exports.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdEZkTFJZR2RNMVFuRmUzbktyTkoxREE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder High technology expo', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder HIV deaths in childr based on Lancet', 'files': {'HIV deaths in children 1 59 months per 1 000 births.csv': 'http://spreadsheets.google.com/pub?key=t4C4M_ynK9Ho8tGRj6a5U5w&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder HIV deaths in childr', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder HIV deaths in childr based on Lancet', 'files': {'HIV deaths in children 1 59 months total deaths.csv': 'http://spreadsheets.google.com/pub?key=tQe6yinBBauXLvBFroZEL3Q&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder HIV deaths in childr', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:08 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"bb3c2c68-a149-4071-bdb7-e5c93b884506\"}\n\n{'description': 'gapminder Hourly compensation based on International Labour Organization', 'files': {'Hourly compensation US.csv': 'http://spreadsheets.google.com/pub?key=rEF20Sw6Sy7tn4DKsKSDDMQ&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Hourly compensation ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder How far to the north based on Various sources', 'files': {'How far to the north.csv': 'http://spreadsheets.google.com/pub?key=rAIffGKCmiCdzTl1C0AR2nw&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder How far to the north', 'owner_id': 'brianray'}\n---\n{'description': 
'gapminder Hydroelectric electr based on World Bank', 'files': {'Hydroelectric electricity production per person.csv': 'http://spreadsheets.google.com/pub?key=tSjVrGemv30eCh3jPZkXYCQ&output=csv'}, 'license': 'CC-BY', 'tags': ['energy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Hydroelectric electr', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Hydroelectric electr based on World Bank', 'files': {'Hydroelectric electricity production total.csv': 'http://spreadsheets.google.com/pub?key=t1MShlv870O6LmFNEHazdEg&output=csv'}, 'license': 'CC-BY', 'tags': ['energy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Hydroelectric electr', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:09 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"0d4afaa1-2b82-44cc-8889-55d46dfffe0c\"}\n\n{'description': 'gapminder IFPRI Underweight ch based on World Bank', 'files': {'IFPRI Underweight children.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdHFkZlc5WkhVQmVmeU0tR0RsSUdTU0E&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder IFPRI Underweight ch', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Imports of GDP based on World Bank', 'files': {'Imports of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdEhLMVdnUjZ0d05WWkhjT0FjSDIwQmc&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Imports of GDP ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Imports unit value i based on World Bank', 'files': {'Imports unit value index 2000 100.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcL6zpB3Sj1Wg&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Imports unit value i', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Improved sanitation based on World Bank', 'files': {'Improved sanitation overall access.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdE4tekJPYkR4WmJqYTRPWjc3OTl4WUE&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Improved sanitation ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Improved sanitation based on World Bank', 'files': {'Improved sanitation rural access.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdFNPMTE3d3FHTHdYaGFMXzJyNDBGd3c&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Improved sanitation ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:10 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"5567dc7b-32a1-4225-b609-325318fa54b6\"}\n\n{'description': 'gapminder Improved sanitation based on World Bank', 'files': {'Improved sanitation urban access.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEfLbPu48DrKfQ&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 
'OPEN', 'title': 'gapminder Improved sanitation ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:10 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"fa0fcb37-2d4f-4cdb-bb7c-68e9f3354419\"}\n\n{'description': 'gapminder Improved water sourc based on MDG indicators', 'files': {'Improved water source overall access.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEd98lRwrU3gIg&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Improved water sourc', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Improved water sourc based on MDG indicators', 'files': {'Improved water source rural access.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdFhhVzhXUEh0U0hlQ3M3TTZIQTFySUE&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Improved water sourc', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:10 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"510098c8-483c-4de5-8d4e-e0fb399dcc59\"}\n\n{'description': 'gapminder Improved water sourc based on MDG indicators', 'files': {'Improved water source urban access.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdDlJNzNjcVc5Sm9memNuVHRzY1FsOXc&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Improved water sourc', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:10 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"88debaa8-6bad-404a-958a-5079391a6ea6\"}\n\n{'description': 'gapminder Income per person GD based on Various sources', 'files': {'Income per person GDP capita PPP inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1jiMAkmq1iMg&output=csv'}, 'license': 'CC-BY', 'tags': ['none', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income per person GD', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Income per person wi based on Various sources', 'files': {'Income per person with projections.csv': 'http://spreadsheets.google.com/pub?key=rX3Jfop_ebuY-chuMpCgmRg&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income per person wi', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Income share of 2nd based on The World Bank', 'files': {'Income share of 2nd poorest 20.csv': 'http://spreadsheets.google.com/pub?key=tXRyZGCfHsWMmr53VFxrqTw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of 2nd ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Income share of 2nd based on The World Bank', 'files': {'Income share of 2nd richest 20.csv': 
'http://spreadsheets.google.com/pub?key=twSOUYrIFh2W2snDUt7VaQg&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of 2nd ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:10 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"387f12d8-156d-4247-ab62-78c0521b8f02\"}\n\n{'description': 'gapminder Income share of midd based on The World Bank', 'files': {'Income share of middle 20.csv': 'http://spreadsheets.google.com/pub?key=t_-14NtXH6xZX48xHG75z5w&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of midd', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Income share of poor based on The World Bank', 'files': {'Income share of poorest 10.csv': 'http://spreadsheets.google.com/pub?key=trzLWJQU4SZMDpeVg3XnL5A&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of poor', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Income share of poor based on The World Bank', 'files': {'Income share of poorest 20.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEdIyrBS31XAaw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of poor', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:10 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"b41817f0-a417-4cde-9810-849598017535\"}\n\n{'description': 'gapminder Income share of rich based on The World Bank', 'files': {'Income share of richest 10.csv': 'http://spreadsheets.google.com/pub?key=tmKvydPl_roGIQBrMYA6C4g&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of rich', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Income share of rich based on The World Bank', 'files': {'Income share of richest 20.csv': 'http://spreadsheets.google.com/pub?key=tLnCxItXzRSu9gH-5PyEFDw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Income share of rich', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:11 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"6948db66-6546-41ac-a2c5-596214695ed5\"}\n\n{'description': 'gapminder Industrial water wit based on FAO aquastat database', 'files': {'Industrial water withdrawal of total.csv': 'http://spreadsheets.google.com/pub?key=rGKP-BBylLOM11iGahW1lxA&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Industrial water wit', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Industry of GDP based on World Bank', 'files': {'Industry of GDP.csv': 
'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdHA2UEFOYTlUTWtzV29xbHFuMU00SFE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Industry of GDP ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Industry workers of based on International Labour Organization', 'files': {'Industry workers of labour force.csv': 'http://spreadsheets.google.com/pub?key=rqcJTExcUqNdolB-7flqebQ&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Industry workers of ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Inequality index Gin based on The World Bank', 'files': {'Inequality index Gini.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcjeKHnZq6RIg&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Inequality index Gin', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Infant mortality rat based on Various sources', 'files': {'Infant mortality rate per 1 000 births.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj0NpF2PTov2Cw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infant mortality rat', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Infectious TB detect based on World Health Organization', 'files': {'Infectious TB detection rate.csv': 'http://spreadsheets.google.com/pub?key=rDb6EYc4YUTBRfXlBvjHYlg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB detect', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Infectious TB detect based on World Health Organization', 'files': {'Infectious TB detection rate DOTS only.csv': 'http://spreadsheets.google.com/pub?key=rjGHot8B6YSt3kPYEG8nANA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB detect', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:11 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"fb28eb1f-8a58-4fd9-9666-a3002e28fbfa\"}\n\n{'description': 'gapminder Infectious TB new ca based on World Health Organization', 'files': {'Infectious TB new cases per 100 000 estimated.csv': 'http://spreadsheets.google.com/pub?key=rVyfxaPK4dJ9B6ZdgG34F-g&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB new ca', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Infectious TB new ca based on World Health Organization', 'files': {'Infectious TB new cases per 100 000 reported.csv': 'http://spreadsheets.google.com/pub?key=r0pD5wznwEUJ0ipdxAWQjiA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB new ca', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:12 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"839b5817-9570-41cb-9ce8-c06581ff60d0\"}\n\n{'description': 
'gapminder Infectious TB number based on World Health Organization', 'files': {'Infectious TB number of new cases estimated.csv': 'http://spreadsheets.google.com/pub?key=rOPfJcbTTIyS-vxDWbkfNLA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB number', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Infectious TB number based on World Health Organization', 'files': {'Infectious TB number of new cases reported.csv': 'http://spreadsheets.google.com/pub?key=rcbx0R-TXbkqRCyvKzn08fg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB number', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:12 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"518a2f14-d3c0-4bae-928d-021c16c7023a\"}\n\n{'description': 'gapminder Infectious TB treatm based on World Health Organization', 'files': {'Infectious TB treatment DOTS completed.csv': 'http://spreadsheets.google.com/pub?key=rewICFMTvBuer8UoJIK0yUg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Infectious TB treatm', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Inflation annual based on World Bank', 'files': {'Inflation annual.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdGJoOUJXalk3STFYUG85MkxlbnQxMmc&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Inflation annual ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Injury deaths in chi based on Lancet', 'files': {'Injury deaths in children 1 59 months per 1 000 births.csv': 'http://spreadsheets.google.com/pub?key=tfOi0Ji7pJDJbxJVqwJXj9g&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Injury deaths in chi', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Injury deaths in chi based on Lancet', 'files': {'Injury deaths in children 1 59 months total deaths.csv': 'http://spreadsheets.google.com/pub?key=tnRIpcH0InZUFz7f2ziXKog&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Injury deaths in chi', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:12 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"486bd3d5-8508-41ac-9bd2-71ef38d9ad16\"}\n\n{'description': 'gapminder Internal renewable w based on FAO aquastat database', 'files': {'Internal renewable water cu meters per person.csv': 'http://spreadsheets.google.com/pub?key=riLRFECHMsTq7OTa2KYZCWA&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Internal renewable w', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Internet users per 1 based on World Bank', 'files': {'Internet users per 100 people.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdGwzSGV5OE9FOGhURlhTdEQtMW1TNkE&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 
'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Internet users per 1', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Internet users total based on World Bank', 'files': {'Internet users total.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdC1PcWJUZldDelFyQXdaOEtDUG9HSUE&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Internet users total', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Investments of GDP based on World Bank', 'files': {'Investments of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdG9sVVF6dHpGdnhQU3BkMlAtNHFwVkE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Investments of GDP ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Life expectancy year based on Various sources', 'files': {'Life expectancy years.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2tPLxKvvnNPA&output=csv'}, 'license': 'CC-BY', 'tags': ['none', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Life expectancy year', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Life expectancy at b based on Various sources', 'files': {'Life expectancy at birth temporary update.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdG9jSHA0WklHU0dqUnBCVUpVOXFzQUE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Life expectancy at b', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Life expectancy at b based on Various sources', 'files': {'Life expectancy at birth with projections.csv': 'http://spreadsheets.google.com/pub?key=tiAiXcrneZrUnnJ9dBU-PAw&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Life expectancy at b', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:13 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"560bbebb-84b7-4a71-b1ec-265a1683269f\"}\n\n{'description': 'gapminder Literacy rate adult based on UNESCO Institute for Statistics', 'files': {'Literacy rate adult female of females ages 15 and above.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEc96gAEE60-Zg&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Literacy rate adult ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Literacy rate adult based on UNESCO Institute for Statistics', 'files': {'Literacy rate adult male of males ages 15 and above.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEd4fn4YYOvuOg&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Literacy rate adult ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:13 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"a7f17603-49d6-4e50-aa4e-d7f48c6f3066\"}\n\n{'description': 'gapminder Literacy rate adult based on UNESCO Institute for Statistics', 
'files': {'Literacy rate adult total of people ages 15 and above.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEdrsBnj2ROXAg&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Literacy rate adult ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:13 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"51d07fe0-6817-4d0e-9af1-715dec4b577b\"}\n\n{'description': 'gapminder Literacy rate youth based on UNESCO Institute for Statistics', 'files': {'Literacy rate youth female of females ages 15 24.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEf96wv_abR0OA&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Literacy rate youth ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Literacy rate youth based on UNESCO Institute for Statistics', 'files': {'Literacy rate youth male of males ages 15 24.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEe7OxrqKcSWfw&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Literacy rate youth ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:13 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"3bdc4713-d388-40a4-9aae-81b0a813a84a\"}\n\n{'description': 'gapminder Literacy rate youth based on UNESCO Institute for Statistics', 'files': {'Literacy rate youth total of people ages 15 24.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEepmgV0TLjBag&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Literacy rate youth ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:13 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"7f58c3fb-6ae0-4160-85f0-66e2d498fb49\"}\n\n{'description': 'gapminder Liver cancer deaths based on Based on IARC and WHO data', 'files': {'Liver cancer deaths per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj3rojF8TmZtOw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer deaths ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Liver cancer deaths based on Based on IARC and WHO data', 'files': {'Liver cancer deaths per 100 000 women.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2ItBsVpK9VBA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer deaths ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:13 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: 
{\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"c8067ceb-ed03-478a-83fb-c5381f82c8ea\"}\n\n{'description': 'gapminder Liver cancer new cas based on Based on IARC data', 'files': {'Liver cancer new cases per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1u0KpZbsopCA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer new cas', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Liver cancer new cas based on Based on IARC data', 'files': {'Liver cancer new cases per 100 000 women.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2xhaKENmyRKw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer new cas', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:14 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"fa13e802-ad74-4d65-b50d-28ba613c1cd1\"}\n\n{'description': 'gapminder Liver cancer number based on IARC', 'files': {'Liver cancer number of female deaths.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2LwNOwMSnJQA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer number ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Liver cancer number based on IARC', 'files': {'Liver cancer number of male deaths.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1RD88c3w1vNg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer number ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:14 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"7775a929-5c6b-43d1-974d-1c95c50a33a6\"}\n\n{'description': 'gapminder Liver cancer number based on IARC', 'files': {'Liver cancer number of new female cases.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1_IYQtrqQCKQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer number ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:14 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"9124b1e1-ac23-490e-864c-3c0e6b5bed5f\"}\n\n{'description': 'gapminder Liver cancer number based on IARC', 'files': {'Liver cancer number of new male cases.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2LIYJXVW9EVw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Liver cancer number ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:14 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 
{'description': 'gapminder Long term unemployme based on International Labour Organization', 'files': {'Long term unemployment rate.csv': 'http://spreadsheets.google.com/pub?key=rCRqVXC95LeKm_EvLrFNXKw&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Long term unemployme', 'owner_id': 'brianray'}
---
{'description': 'gapminder Lung cancer deaths p based on Based on IARC and WHO data', 'files': {'Lung cancer deaths per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2_ibAjsuNgYA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Lung cancer deaths p', 'owner_id': 'brianray'}
(400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:14 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})
HTTP response body: {"code":400,"message":"Attempted to create an entity that already exists.","details":"5eb76300-4031-40bf-b7a7-15e5df1d7560"}

[... output truncated: the same request/response pattern repeats for the remaining gapminder indicators (Lung cancer, Malaria, Maternal mortality, Math achievement, Mean years in school, Measles, Murder, Natural gas, Oil, Patents, Pneumonia, Population aged 0-4 through 60+, Population total, Poverty, Prematurity, Primary school completion, ...). Each iteration prints the dataset spec dict — license 'CC-BY', visibility 'OPEN', owner_id 'brianray', tags such as 'health', 'work', 'population', 'education', 'energy', 'environment', 'economy', and a Google Spreadsheets CSV URL — followed either by '---' when the create call succeeds, or by an HTTP 400 Bad Request with body "Attempted to create an entity that already exists." and a unique details UUID when the dataset was already created in an earlier run.]
{'Primary school completion of girls.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdFVxSEVZVWE1b0l6NWo5NzNTZ2IzWVE&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Primary school compl', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:27 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"43755d43-cb1f-4d15-9b54-323329e35e44\"}\n\n{'description': 'gapminder Private share of tot based on WHO Global Health Expenditure Database', 'files': {'Private share of total health spending.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcXBFxQw8cFaw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Private share of tot', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Privately owned fore based on FAO - Food and Agriculture Organization', 'files': {'Privately owned forest land.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWdFemmS_iN5fw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Privately owned fore', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Privately owned othe based on FAO - Food and Agriculture Organization', 'files': {'Privately owned other wooded land.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWdtCylhQOQiXw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Privately owned othe', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Production sector ai based on OECD QWIDS', 'files': {'Production sector aid given of aid.csv': 'http://spreadsheets.google.com/pub?key=tMjW0fdVf9VJaxVk_VFSUhg&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Production sector ai', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Prostate cancer deat based on Based on IARC and WHO data', 'files': {'Prostate cancer deaths per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2S9phBhTP3dw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Prostate cancer deat', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Prostate cancer new based on Based on IARC data', 'files': {'Prostate cancer new cases per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj3qX39HWaQjEg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Prostate cancer new ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Prostate cancer numb based on IARC', 'files': {'Prostate cancer number of male deaths.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1ImYURLRHPRA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Prostate cancer numb', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Prostate cancer numb based on IARC', 'files': {'Prostate cancer number of new male cases.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2vXUZJKI0XHA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Prostate cancer numb', 
'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:28 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"39ef4ab5-a5a5-4c57-8296-b6c503c521c8\"}\n\n{'description': 'gapminder Pump price for gasol based on World Development Indicators', 'files': {'Pump price for gasoline US per liter.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEdz8B4njtoHPA&output=csv'}, 'license': 'CC-BY', 'tags': ['energy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Pump price for gasol', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Ratio of girls to bo based on World Bank', 'files': {'Ratio of girls to boys in primary and secondary education.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcWM3hb0x-BZA&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Ratio of girls to bo', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Ratio of young liter based on World Bank', 'files': {'Ratio of young literate females to males ages 15 24.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdE8xR0dUWDI4ME02SjQ5bi1NYnFHN0E&output=csv'}, 'license': 'CC-BY', 'tags': ['education', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Ratio of young liter', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Renewable water cu m based on FAO aquastat database', 'files': {'Renewable water cu meters per person.csv': 'http://spreadsheets.google.com/pub?key=rPN9VekxwpUzwowMaxg9Ybw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Renewable water cu m', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Residential electric based on IEA (International Energy Agency)', 'files': {'Residential electricity use per person.csv': 'http://spreadsheets.google.com/pub?key=t7SFNscT9Ex0s9i3av7PxRQ&output=csv'}, 'license': 'CC-BY', 'tags': ['energy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Residential electric', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Residential electric based on IEA (International Energy Agency)', 'files': {'Residential electricity use total.csv': 'http://spreadsheets.google.com/pub?key=teUZEfKw52HewO3D0YrQ5HA&output=csv'}, 'license': 'CC-BY', 'tags': ['energy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Residential electric', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:28 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"3b6f085f-fc81-4d1c-bb0d-c71d9b323822\"}\n\n{'description': 'gapminder Residential energy u based on Not Found.', 'files': {'Residential energy use.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdEV1RkJqTEItQnJYVXJlZzVuc3Y3Mmcnn&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Residential energy u', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Roads paved of total based on World Bank', 'files': {'Roads paved of total roads.csv': 
'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdDBKd2V5VmxkYlJuUHAtOURzUkZzNEE&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Roads paved of total', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Rural poverty rural based on The World Bank', 'files': {'Rural poverty rural people below national rural poverty line.csv': 'http://spreadsheets.google.com/pub?key=trbzCrl1eb6QJG5D8j1-qQw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Rural poverty rural ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Salaried workers of based on International Labour Organization', 'files': {'Salaried workers of labour force.csv': 'http://spreadsheets.google.com/pub?key=rcO6CXqmEjV-wS-29qejCpw&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Salaried workers of ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Self employed of lab based on International Labour Organization', 'files': {'Self employed of labour force.csv': 'http://spreadsheets.google.com/pub?key=rSrvaPPzWvOyTMb9_dfJDtQ&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Self employed of lab', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sepsis deaths in new based on Lancet', 'files': {'Sepsis deaths in newborn per 1 000 births.csv': 'http://spreadsheets.google.com/pub?key=tGVRSoAJtdwQ30CqCSexKJA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sepsis deaths in new', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sepsis deaths in new based on Lancet', 'files': {'Sepsis deaths in newborn total deaths.csv': 'http://spreadsheets.google.com/pub?key=tRA1VmW2ZQ7sCsoD7AHIilg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sepsis deaths in new', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:29 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"1289095e-3b0f-403f-b14f-9d9fc73e09d2\"}\n\n{'description': 'gapminder Service workers of l based on International Labour Organization', 'files': {'Service workers of labour force.csv': 'http://spreadsheets.google.com/pub?key=r4orIwujZpT-z3Exd_9ARpQ&output=csv'}, 'license': 'CC-BY', 'tags': ['work', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Service workers of l', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Services of GDP based on World Bank', 'files': {'Services of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdHk4eXd4RG5Rb1gtUTB0cUJ3M21qdGc&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Services of GDP ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sex ratio 0 14 years based on UN Population Division', 'files': {'Sex ratio 0 14 years.csv': 'http://spreadsheets.google.com/pub?key=tfWSVJPJHn3u7e_7MUaCbnw&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sex ratio 0 14 years', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sex ratio 15 24 year based on 
UN Population Division', 'files': {'Sex ratio 15 24 years.csv': 'http://spreadsheets.google.com/pub?key=ta-Da73B_Z7lKOZo8o-Ykvw&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sex ratio 15 24 year', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sex ratio 15 49 year based on UN Population Division', 'files': {'Sex ratio 15 49 years.csv': 'http://spreadsheets.google.com/pub?key=tF_P_4G0g5bR3lYmQT9Tv4w&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sex ratio 15 49 year', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sex ratio above 50 y based on UN Population Division', 'files': {'Sex ratio above 50 years.csv': 'http://spreadsheets.google.com/pub?key=tQP1KnoWcjjtz3wmq0bnGNA&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sex ratio above 50 y', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sex ratio all age gr based on UN Population Division', 'files': {'Sex ratio all age groups.csv': 'http://spreadsheets.google.com/pub?key=tAQ31_cAELrHqNc2qa13uHw&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sex ratio all age gr', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Smoking adults of po based on Based on WHOSIS data 2005', 'files': {'Smoking adults of population over age 15.csv': 'http://spreadsheets.google.com/pub?key=tRccVp7QMaCXMv19CcxERaA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Smoking adults of po', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Smoking men of men o based on Based on WHOSIS data 2005', 'files': {'Smoking men of men over age 15.csv': 'http://spreadsheets.google.com/pub?key=t60tpjxpWq3Bm-nBOvSm3og&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Smoking men of men o', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Smoking women of wom based on Based on WHOSIS data 2005', 'files': {'Smoking women of women over age 15.csv': 'http://spreadsheets.google.com/pub?key=thortPEzDn2xc_5bU255mPA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Smoking women of wom', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Stillbirths per 1 00 based on Stanton etal with additions', 'files': {'Stillbirths per 1 000 births.csv': 'http://spreadsheets.google.com/pub?key=tgJHpDEY4S7hxJpELGJueWA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stillbirths per 1 00', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Stomach cancer death based on Based on IARC and WHO data', 'files': {'Stomach cancer deaths per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj3ky4_oAkatBw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer death', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Stomach cancer death based on Based on IARC and WHO data', 'files': {'Stomach cancer deaths per 100 000 women.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj0RpUEQPgGcZQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer death', 'owner_id': 'brianray'}\n(400)\nReason: Bad 
Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:30 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"be2b74fc-2d2f-4d10-a8b4-43d0584d1113\"}\n\n{'description': 'gapminder Stomach cancer new c based on Based on IARC data', 'files': {'Stomach cancer new cases per 100 000 men.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1XKvT6zwrMPw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer new c', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Stomach cancer new c based on Based on IARC data', 'files': {'Stomach cancer new cases per 100 000 women.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj0je8zzeM4WXQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer new c', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:30 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"e2de2d02-777c-46fb-998d-79994a9567ce\"}\n\n{'description': 'gapminder Stomach cancer numbe based on IARC', 'files': {'Stomach cancer number of female deaths.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1o1rJNFHpQZw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer numbe', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Stomach cancer numbe based on IARC', 'files': {'Stomach cancer number of male deaths.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2NmCvOcsjpag&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer numbe', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:30 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"505c76ab-0444-4c5d-b912-02fcad0a0577\"}\n\n{'description': 'gapminder Stomach cancer numbe based on IARC', 'files': {'Stomach cancer number of new female cases.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1aXfw3aV83TA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Stomach cancer numbe', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:30 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"c8b61241-39ac-4373-936b-b450656669f1\"}\n\n{'description': 'gapminder Stomach cancer numbe based on IARC', 'files': {'Stomach cancer number of new male cases.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj3XF3cD2lbecA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder 
Stomach cancer numbe', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:30 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"a085a911-ca02-4535-824c-9524e7c07453\"}\n\n{'description': 'gapminder Storm affected annua based on EM-DAT: The OFDA/CRED International Disaster Database', 'files': {'Storm affected annual number.csv': 'http://spreadsheets.google.com/pub?key=rAxnmm4ZL2HrYjIqJX0Ch-w&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Storm affected annua', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Storm deaths annual based on EM-DAT: The OFDA/CRED International Disaster Database', 'files': {'Storm deaths annual number.csv': 'http://spreadsheets.google.com/pub?key=r0JePjBgBQqtuh5wh1Wz9CA&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Storm deaths annual ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Subsistence incomes based on Various sources', 'files': {'Subsistence incomes per person.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdGlGLVd4OGVfcVdScVBSS0JLVHpiMlE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Subsistence incomes ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sugar per person g p based on FAO modified', 'files': {'Sugar per person g per day.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2sdmdhX9zuKg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sugar per person g p', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide per 100 000 based on WHO modified', 'files': {'Suicide per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=troMumuI0Y6Phpwnj6qXa_A&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide per 100 000 ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide total deaths based on WHO Global Burden of Disease', 'files': {'Suicide total deaths.csv': 'http://spreadsheets.google.com/pub?key=tOS388dWYzO1_FANUaiwKuA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide total deaths', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide age 0 14 per based on WHO', 'files': {'Suicide age 0 14 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldGhJdkhTSHNEYTFKQjRrMlBwZXk1TkE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide age 0 14 per', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide age 15 29 pe based on WHO', 'files': {'Suicide age 15 29 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldHNWNkNVR2Zwalc2U04zTjE5MDZlUkE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide age 15 29 pe', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide age 30 44 pe based on WHO', 'files': {'Suicide age 30 44 per 100 000 people.csv': 
'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldG9MeHpzRkNHQmZ4MmtxSnd2Y0o2UFE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide age 30 44 pe', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide age 45 59 pe based on WHO', 'files': {'Suicide age 45 59 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldGh2OWd2eVJiUnhScW9tOEtNTFkyQUE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide age 45 59 pe', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide age 60 per 1 based on WHO', 'files': {'Suicide age 60 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldEVRNFBZS2wzRmtZOWZEZDVZVG05dHc&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide age 60 per 1', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide men per 100 based on WHO modified', 'files': {'Suicide men per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=tB8ge4cxd8TL7yIV4ALm5NA&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide men per 100 ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Suicide women per 10 based on WHO modified', 'files': {'Suicide women per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=tUD6kmYmB_Bp85SRdEn1Krg&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Suicide women per 10', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Sulfur emissions per based on Stern modified', 'files': {'Sulfur emissions per person kg.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj0uBndTxZAXNQ&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Sulfur emissions per', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Surface area sq km based on World Bank', 'files': {'Surface area sq km.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdFFWcWdEM0RXT1lRZ0wwRVNsakZCaWc&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Surface area sq km ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Surviving kids per w based on Gapminder', 'files': {'Surviving kids per woman.csv': 'http://spreadsheets.google.com/pub?key=0ArtujvvFrPjVdGdFWmhqOEVXcUZha1hJWXAtWHlDSFE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Surviving kids per w', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Tax revenue of GDP based on World Bank', 'files': {'Tax revenue of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdFgzT1ZJWW4tdDB4Q2NETTVoTG1ZYlE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Tax revenue of GDP ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder TB programme DOTS po based on World Health Organization', 'files': {'TB programme DOTS population coverage.csv': 'http://spreadsheets.google.com/pub?key=rKfjGaPxqirPDe8gnTVKuIw&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB programme DOTS po', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder TB with 
HIV deaths p based on World Health Organization', 'files': {'TB with HIV deaths per 100 000 estimated.csv': 'http://spreadsheets.google.com/pub?key=rUBCConMMLm9CxPXUGm325A&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB with HIV deaths p', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder TB with HIV existing based on World Health Organization', 'files': {'TB with HIV existing cases per 100 000 estimated.csv': 'http://spreadsheets.google.com/pub?key=rQV47xgPGa3qOPHoLiVon-w&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB with HIV existing', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder TB with HIV new case based on World Health Organization', 'files': {'TB with HIV new cases per 100 000 estimated.csv': 'http://spreadsheets.google.com/pub?key=rRCxDI3hB9E9zvc8qSe11qg&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB with HIV new case', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder TB with HIV number o based on World Health Organization', 'files': {'TB with HIV number of deaths estimated.csv': 'http://spreadsheets.google.com/pub?key=rFAkC0Ae7oXxrVqosJ4NWUA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB with HIV number o', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder TB with HIV number o based on World Health Organization', 'files': {'TB with HIV number of existing cases estimated.csv': 'http://spreadsheets.google.com/pub?key=reiGJwoabnMOrPeFima_9ng&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB with HIV number o', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:33 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"170a0e4f-6832-4088-ac47-49bc64256143\"}\n\n{'description': 'gapminder TB with HIV number o based on World Health Organization', 'files': {'TB with HIV number of new cases estimated.csv': 'http://spreadsheets.google.com/pub?key=rERPF4iYruK0DhAw_0tb5nA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder TB with HIV number o', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:33 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"33658e97-63aa-44a8-9276-a4174870ef7c\"}\n\n{'description': 'gapminder Teen fertility rate based on Various sources', 'files': {'Teen fertility rate births per 1 000 women ages 15 19.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEdIphYUHxcdLg&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Teen fertility rate ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Tetanus deaths in ne based on Lancet', 'files': {'Tetanus deaths in newborn per 1 000 births.csv': 'http://spreadsheets.google.com/pub?key=t1E7e32tlIxtJU9UhnR9nJg&output=csv'}, 'license': 'CC-BY', 
'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Tetanus deaths in ne', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Tetanus deaths in ne based on Lancet', 'files': {'Tetanus deaths in newborn total deaths.csv': 'http://spreadsheets.google.com/pub?key=tB6Gkh4rLC9yB2TXfHSApIA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Tetanus deaths in ne', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:33 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"02a24165-0433-4633-a4d6-810ad3f34ad5\"}\n\n{'description': 'gapminder Total GDP PPP inflat based on Various sources', 'files': {'Total GDP PPP inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=0Asm_G8nr4TCSdDh2NWQtVDJhYlVsTElFRjJIYkNlSnc&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total GDP PPP inflat', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total GDP US inflati based on World Bank', 'files': {'Total GDP US inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEfI4sLVvEQtHw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total GDP US inflati', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total GNI PPP curren based on World Bank ', 'files': {'Total GNI PPP current international.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdFl6cDkxcmZxM0pVNXBUYjE1ZmNqVUE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total GNI PPP curren', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total health spendin based on Global Health Expenditure Database modified', 'files': {'Total health spending of GDP.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj3XYThRy0yJMA&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total health spendin', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total health spendin based on WHO Global Health Expenditure Database', 'files': {'Total health spending per person international.csv': 'http://spreadsheets.google.com/pub?key=tR3MM-UTZ0B44BKxxWeAZaQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total health spendin', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:34 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"980bfb92-af77-4458-a300-efc45e184bc4\"}\n\n{'description': 'gapminder Total health spendin based on WHO Global Health Expenditure Database', 'files': {'Total health spending per person US.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEeL79qOoKtofQ&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total health spendin', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 
04:33:34 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"ac324bb3-8125-4cc3-a7ab-8f6d94b19675\"}\n\n{'description': 'gapminder Total number of doll based on Forbes', 'files': {'Total number of dollar billionaires.csv': 'http://spreadsheets.google.com/pub?key=tNWhbu-1UIPPxtmRHtnINOQ&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total number of doll', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total population wit based on Various sources', 'files': {'Total population with projections.csv': 'http://spreadsheets.google.com/pub?key=tL0jLxFBF9TbXIN_39b1qcQ&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total population wit', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total reserves of de based on World Bank', 'files': {'Total reserves of debt to foreigners.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdC1iMVRuVUFUd08tVDM0ZDF0cnFtekE&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total reserves of de', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total sulfur emissio based on Stern modified', 'files': {'Total sulfur emission kilotonnes.csv': 'http://spreadsheets.google.com/pub?key=t9SYWh7siLJDzyZYN1R4HfQ&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total sulfur emissio', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Total water withdraw based on FAO aquastat database', 'files': {'Total water withdrawal billion cu meters.csv': 'http://spreadsheets.google.com/pub?key=rIG3ZWxv381t2bIL2BNaIVw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Total water withdraw', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Trade balance of GDP based on World Bank', 'files': {'Trade balance of GDP.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdFpGU185SkpmZ2V4ajNPZHFaaEwtU1E&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Trade balance of GDP', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Trade balance US not based on World Bank', 'files': {'Trade balance US not inflation adjusted.csv': 'http://spreadsheets.google.com/pub?key=0AkBd6lyS3EmpdEF6VzlKTzNCNjRnT0ZzMDg5a1d1Z3c&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Trade balance US not', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic total deaths based on WHO with additions', 'files': {'Traffic total deaths.csv': 'http://spreadsheets.google.com/pub?key=tW9t1f9EQpvS4U04kWnk-og&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic total deaths', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic deaths per 1 based on WHO modified', 'files': {'Traffic deaths per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=tK87SOy-oZlfW99UDD7L3hw&output=csv'}, 'license': 'CC-BY', 'tags': ['infrastructure', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic deaths per 1', 'owner_id': 'brianray'}\n---\n{'description': 
'gapminder Traffic deaths men p based on WHO modified', 'files': {'Traffic deaths men per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=tUaaG6Pu9zT_BVIsLvGLQdA&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic deaths men p', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic deaths women based on WHO modified', 'files': {'Traffic deaths women per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=t4pYrpNzP-JeR7zSjOyDofQ&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic deaths women', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic mortality 15 based on WHO', 'files': {'Traffic mortality 15 29 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldEVPSG5qYzBfS0llQ1RnTl9wWXZodkE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic mortality 15', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic mortality 30 based on WHO', 'files': {'Traffic mortality 30 44 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldGRqbENsQm5VMWFLdnRXV0w1S0tVSEE&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic mortality 30', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic mortality 45 based on WHO', 'files': {'Traffic mortality 45 59 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldFJrcW9wdlJITlBDYU9IbnRKdVllVGc&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic mortality 45', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic mortality 60 based on WHO', 'files': {'Traffic mortality 60 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldEw5RXhuckZuU1V2aVAzNDFZaDUxa2c&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic mortality 60', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Traffic mortality ch based on WHO', 'files': {'Traffic mortality children 0 14 per 100 000 people.csv': 'http://spreadsheets.google.com/pub?key=0AgogXXPMARyldHB2MkhmVDcyMG1Oa3Y5eEhRQ0VlUWc&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Traffic mortality ch', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Tsunami affected ann based on EM-DAT: The OFDA/CRED International Disaster Database', 'files': {'Tsunami affected annual number.csv': 'http://spreadsheets.google.com/pub?key=rskN46tpbe6Iy3K_ULk1_cQ&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Tsunami affected ann', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Tsunami deaths annua based on EM-DAT: The OFDA/CRED International Disaster Database', 'files': {'Tsunami deaths annual number.csv': 'http://spreadsheets.google.com/pub?key=rdBew79hTeXcIXhB1VCTPfg&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Tsunami deaths annua', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Under five mortality based on CME (Child Mortality Estimates Info)', 'files': {'Under five mortality from CME per 
1 000 born.csv': 'http://spreadsheets.google.com/pub?key=p8SIY47PNEw6pJRPAS1tXPQ&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Under five mortality', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Under five mortality based on Institute for Health Metrics and Evaluation', 'files': {'Under five mortality from IHME per 1 000 born.csv': 'http://spreadsheets.google.com/pub?key=p8SIY47PNEw4TgTkrmIVIXA&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Under five mortality', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:35 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"7d222bdf-245f-4b4a-9d15-6651a264b92b\"}\n\n{'description': 'gapminder Underweight children based on World Bank', 'files': {'Underweight children.csv': 'http://spreadsheets.google.com/pub?key=0ArfEDsV3bBwCdDdTQUtvNEJhb0RjRjU0WUtET1R0Vnc&output=csv'}, 'license': 'CC-BY', 'tags': ['health', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Underweight children', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Urban population based on World Bank', 'files': {'Urban population.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEfH89V6UQhpZA&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Urban population', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Urban population of based on World Bank', 'files': {'Urban population of total.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj0-LE4StzCsEw&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Urban population of ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Urban population gro based on World Bank', 'files': {'Urban population growth annual.csv': 'http://spreadsheets.google.com/pub?key=pyj6tScZqmEcRJEN8MyV3PQ&output=csv'}, 'license': 'CC-BY', 'tags': ['population', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Urban population gro', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Urban poverty urban based on The World Bank', 'files': {'Urban poverty urban people below national urban poverty line.csv': 'http://spreadsheets.google.com/pub?key=tublssyj-uqIY25OoRupbCw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Urban poverty urban ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Water and sanitation based on OECD QWIDS', 'files': {'Water and sanitation aid given of aid.csv': 'http://spreadsheets.google.com/pub?key=tXn3DSfvsYujaBP9bvH6acg&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Water and sanitation', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Water withdrawal cu based on FAO aquastat database', 'files': {'Water withdrawal cu meters per person.csv': 'http://spreadsheets.google.com/pub?key=rezAT4nYhKc2Loe6CxWSPWw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Water withdrawal cu ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Wood removal cubic m 
based on FAO - Food and Agriculture Organization', 'files': {'Wood removal cubic meters.csv': 'http://spreadsheets.google.com/pub?key=pp59adS3CHWe8O-N9RgxzDw&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Wood removal cubic m', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Working hours per we based on International Labour Organization', 'files': {'Working hours per week.csv': 'http://spreadsheets.google.com/pub?key=rdCufG2vozTpKw7TBGbyoWw&output=csv'}, 'license': 'CC-BY', 'tags': ['economy', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Working hours per we', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Year categorization based on Various sources', 'files': {'Year categorization 1820 2010.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj2t4ep52YXjSg&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Year categorization ', 'owner_id': 'brianray'}\n---\n{'description': 'gapminder Year categorization based on Various sources', 'files': {'Year categorization 1950.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj02SA7cGjnRbA&output=csv'}, 'license': 'CC-BY', 'tags': ['for advanced users', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Year categorization ', 'owner_id': 'brianray'}\n(400)\nReason: Bad Request\nHTTP response headers: HTTPHeaderDict({'Date': 'Fri, 31 Mar 2017 04:33:36 GMT', 'Content-Length': '124', 'Server': 'nginx/1.8.1', 'Connection': 'keep-alive', 'Content-Type': 'application/json'})\nHTTP response body: {\"code\":400,\"message\":\"Attempted to create an entity that already exists.\",\"details\":\"da4d39dd-9f99-4fcc-987d-03e53923ced5\"}\n\n{'description': 'gapminder Yearly CO2 emissions based on CDIAC (Carbon Dioxide Information Analysis Center)', 'files': {'Yearly CO2 emissions 1000 tonnes.csv': 'http://spreadsheets.google.com/pub?key=phAwcNAVuyj1NHPC9MyZ9SQ&output=csv'}, 'license': 'CC-BY', 'tags': ['environment', 'gapminder'], 'visibility': 'OPEN', 'title': 'gapminder Yearly CO2 emissions', 'owner_id': 'brianray'}\n---\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
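The HTTP 400 responses in the preceding output come from re-running a dataset-upload loop against an API that rejects duplicate creation. A minimal sketch of making such a loop idempotent, assuming a generic REST endpoint and the `requests` library (the URL, token, and payload shape below are illustrative assumptions, not taken from the notebook):

```python
import requests

API_URL = "https://api.example.com/v0/datasets/brianray"  # hypothetical endpoint
API_TOKEN = "..."  # placeholder; a real token would be required

def create_dataset(payload):
    """POST a dataset definition, treating 'already exists' errors as a no-op."""
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": "Bearer " + API_TOKEN},
    )
    # The API in the log above answers duplicate creation with HTTP 400 and an
    # "Attempted to create an entity that already exists." message body.
    if resp.status_code == 400 and "already exists" in resp.text:
        print("skipping existing dataset:", payload["title"])
        return None
    resp.raise_for_status()
    return resp.json()
```

Wrapping each call this way lets the whole loop be re-run without producing the wall of Bad Request responses seen above.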
4a816d559809959b5d0a6822fb29fa0c1ba3838f
| 318,114 |
ipynb
|
Jupyter Notebook
|
notebooks/munging/ChangeoQC.ipynb
|
michael-swift/seqclone
|
f139872a31fb328dd519ac9432b15f1ffc04f47b
|
[
"MIT"
] | null | null | null |
notebooks/munging/ChangeoQC.ipynb
|
michael-swift/seqclone
|
f139872a31fb328dd519ac9432b15f1ffc04f47b
|
[
"MIT"
] | 2 |
2021-03-25T01:13:42.000Z
|
2021-03-25T01:14:14.000Z
|
notebooks/munging/ChangeoQC.ipynb
|
michael-swift/seqclone
|
f139872a31fb328dd519ac9432b15f1ffc04f47b
|
[
"MIT"
] | null | null | null | 375.577332 | 157,896 | 0.903899 |
[
[
[
"#utils I made to look at this data\nimport switchy.util as ut\nimport pandas as pd\nimport numpy as np\nimport scipy\nimport sys\nimport os\nimport time\nimport random\nimport copy\nimport math\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport matplotlib as mpl\nimport scanpy as sc\nimport seaborn as sns\nimport autoreload\nparams = {\n 'font.size': 12,\n 'axes.titlesize': 12,\n 'axes.labelsize': 12,\n 'legend.fontsize': 12,\n 'xtick.labelsize': 12,\n 'ytick.labelsize': 12,\n 'font.family': \"Helvetica\",\n 'pdf.fonttype': 42,\n 'ps.fonttype': 42,\n 'figure.dpi': 300\n }\n\nmpl.rcParams.update(params)\n\nsns.set_style(\"ticks\")\n\nsavefig_args = {\"dpi\": 300, \"bbox_inches\": \"tight\", \"pad_inches\": 0, \"transparent\": True}\nmpl.rc('savefig', dpi=300)\noutput_dir = \"outs\"\noutput_suffix = \"\"\noutput_formats = [\".png\", \".pdf\"]\n\ndef save_figure(fig, name, output_dir=output_dir, output_suffix=output_suffix, output_formats=output_formats, savefig_args=savefig_args):\n for output_format in output_formats:\n fig.savefig(output_dir + \"/\" + name + output_suffix + output_format, **savefig_args)\n return None\n\npd.set_option('display.max_rows', 500)\npd.set_option('display.max_columns', 500)\npd.set_option('display.width', 1000)\n\ndata_dir = \"../../../SharedData/\"\n#import auto reload\n%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"# MUNGE THE sequence id to get a Cell column and a Donor Column\nchangeodb = pd.read_csv('../../../ImmTrinity/ShazamTrinity.tab', index_col = None, sep = '\\t')\n\n_x = changeodb.SEQUENCE_ID.str.split('_')[0][1:-7]\n\ncell_list = []\nfor SeqID in changeodb.SEQUENCE_ID:\n _x = SeqID.split('_')[1:-7]\n cell_list.append('_'.join(_x))\n\nchangeodb['CELL'] = cell_list\nchangeodb['Donor'] = changeodb.SEQUENCE_ID.str.split(' ', expand = True)[1]",
"_____no_output_____"
],
[
"df_contig_aggr = changeodb.groupby([\"Donor\", \"CELL\", \"LOCUS\"]).size().unstack(fill_value=0)\n",
"_____no_output_____"
],
[
"# Examine number of high-quality contigs assembled per cell (joint distribution of IGH, IGK/L)\n# full range\n\nx = df_contig_aggr[\"IGH\"]\ny = df_contig_aggr[\"IGL\"] + df_contig_aggr[\"IGK\"]\n\nprint(max(x), max(y))\nxbins = np.array(range(0,max(x)+2))-0.5\nybins = np.array(range(0,max(y)+2))-0.5\n\nfig, ax = plt.subplots(1, 1, figsize=(5,4))\n\ncounts, xedges, yedges, im = ax.hist2d(x, y, bins=(xbins, ybins),\n cmap=\"magma\",\n norm=mpl.colors.LogNorm(1, 1e5))\n\nax.set_xlabel(\"Productive heavy chain contigs\")\nax.set_ylabel(\"Productive light chain contigs (K + L)\")\nplt.colorbar(im, ax=ax, label=\"Cells\")\n\nax.set_ylim(top=8)\n\n# show counts\ndx = xedges[2]-xedges[1]\ndy = yedges[2]-yedges[1]\nfor i in range(xedges.size-1):\n for j in range(yedges.size-1):\n xb = xedges[i] + 0.5*dx\n yb = yedges[j] + 0.5*dy \n ax.text(xb, yb, str(int(counts[i,j])), fontsize=4, ha=\"center\", va=\"center\", color=\"w\")\n\n# show count of 1H+1L in black\nxb = xedges[1] + 0.5*dx\nyb = yedges[1] + 0.5*dy \nax.text(xb, yb, str(int(counts[1,1])), fontsize=4, ha=\"center\", va=\"center\", color=\"k\")",
"2 4\n"
],
[
"# Filter for cells having exactly 1H+1L\ndf_contig_aggr_filtered = df_contig_aggr\ndf = df_contig_aggr_filtered.loc[(df_contig_aggr_filtered[\"IGH\"] == 1) &\n (((df_contig_aggr_filtered[\"IGL\"] == 1) & (df_contig_aggr_filtered[\"IGK\"] == 0)) |\n ((df_contig_aggr_filtered[\"IGL\"] == 0) & (df_contig_aggr_filtered[\"IGK\"] == 1)))]\ndf.head()\n# Filter orginal Changeodb to include only singlets\ndf_all_contig_annotations_valid = changeodb.set_index([\"Donor\", \"CELL\"]).loc[df.index]\nprint(df_all_contig_annotations_valid.shape)\n\n# Filter contigs for only IGH, IGL, or IGK\ndf_all_contig_annotations_valid = df_all_contig_annotations_valid.loc[df_all_contig_annotations_valid[\"LOCUS\"].isin([\"IGH\", \"IGL\", \"IGK\"])]\nprint (df_all_contig_annotations_valid.shape)\ndf_all_contig_annotations_valid.head()\n\n# Filter contigs for only productive\ndf_all_contig_annotations_valid = df_all_contig_annotations_valid.loc[df_all_contig_annotations_valid[\"FUNCTIONAL\"] == True]\nprint (df_all_contig_annotations_valid.shape)\ndf_all_contig_annotations_valid.head()",
"(3468, 60)\n(3468, 60)\n(3468, 60)\n"
],
[
"## Add Isotype column by using splice junctions ",
"_____no_output_____"
],
[
"df = df_all_contig_annotations_valid",
"_____no_output_____"
],
[
"ab_tx, switch_tx = ut.loadSJoutIGH(data_dir + 'CombinedSJouts_chr14_IGH.fthr')",
"filtering SJout to just IGH locus\nmaking SJTable human readable\n"
],
[
"isotype_calls = ut.callIsotypeBySJout(ab_tx, plot=True)",
"/home/mswift/local/anaconda3/envs/singlecell/lib/python3.6/site-packages/pandas/core/indexing.py:845: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self.obj[key] = _infer_fill_value(value)\n/home/mswift/local/anaconda3/envs/singlecell/lib/python3.6/site-packages/pandas/core/indexing.py:966: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self.obj[item] = s\n"
],
[
"\n\n_df = isotype_calls[['ISOTYPE_by_splice', 'cell']]\n\ndf = pd.merge(df, _df, left_on='CELL', right_on='cell')\n\ndf['ISOTYPE'] = df['ISOTYPE_by_splice']",
"_____no_output_____"
],
[
"df.to_csv(data_dir + 'ShazamQCed.tab', sep = '\\t')",
"_____no_output_____"
],
[
"df = df[df.LOCUS == 'IGH']",
"_____no_output_____"
],
[
"df.to_csv(data_dir + 'ShazamQCedIGH.tab', sep = '\\t')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a817d81244aaf1eb6656b0ddc5ff3b118ba096d
| 11,104 |
ipynb
|
Jupyter Notebook
|
Day 9 Assignment - Batch 7.ipynb
|
KumaranMurugan/Letsupgrade-python-batch7
|
106a3a47cf9de0d75539e1a18566345898336fe4
|
[
"Apache-2.0"
] | null | null | null |
Day 9 Assignment - Batch 7.ipynb
|
KumaranMurugan/Letsupgrade-python-batch7
|
106a3a47cf9de0d75539e1a18566345898336fe4
|
[
"Apache-2.0"
] | null | null | null |
Day 9 Assignment - Batch 7.ipynb
|
KumaranMurugan/Letsupgrade-python-batch7
|
106a3a47cf9de0d75539e1a18566345898336fe4
|
[
"Apache-2.0"
] | null | null | null | 41.432836 | 4,897 | 0.53089 |
[
[
[
"# Python",
"_____no_output_____"
],
[
"# Day 9 Assignment",
"_____no_output_____"
],
[
"# Question 1 ",
"_____no_output_____"
]
],
[
[
"! pip install pylint",
"Requirement already satisfied: pylint in c:\\users\\john\\anaconda3\\lib\\site-packages (2.5.3)\nRequirement already satisfied: toml>=0.7.1 in c:\\users\\john\\anaconda3\\lib\\site-packages (from pylint) (0.10.1)\nRequirement already satisfied: isort<5,>=4.2.5 in c:\\users\\john\\anaconda3\\lib\\site-packages (from pylint) (4.3.21)\nRequirement already satisfied: astroid<=2.5,>=2.4.0 in c:\\users\\john\\anaconda3\\lib\\site-packages (from pylint) (2.4.2)\nRequirement already satisfied: colorama; sys_platform == \"win32\" in c:\\users\\john\\anaconda3\\lib\\site-packages (from pylint) (0.4.3)\nRequirement already satisfied: mccabe<0.7,>=0.6 in c:\\users\\john\\anaconda3\\lib\\site-packages (from pylint) (0.6.1)\nRequirement already satisfied: wrapt~=1.11 in c:\\users\\john\\anaconda3\\lib\\site-packages (from astroid<=2.5,>=2.4.0->pylint) (1.11.2)\nRequirement already satisfied: lazy-object-proxy==1.4.* in c:\\users\\john\\anaconda3\\lib\\site-packages (from astroid<=2.5,>=2.4.0->pylint) (1.4.3)\nRequirement already satisfied: six~=1.12 in c:\\users\\john\\anaconda3\\lib\\site-packages (from astroid<=2.5,>=2.4.0->pylint) (1.15.0)\n"
],
[
"%%writefile prime.py\n'''its a pylint'''\ndef is_prime(number):\n \"\"\"Returnn True if *number* is prime.\"\"\"\n for element in range(number):\n if number % element == 0:\n return False\n return True\ndef print_next_prime(number):\n \"\"\"Print the closest prime number larger than *number*.\"\"\"\n index = number\n while True:\n index += 1\n if is_prime(index):\n print(index)",
"Overwriting prime.py\n"
],
[
"! pylint \"prime.py\"",
"\n-------------------------------------------------------------------\n\nYour code has been rated at 10.00/10 (previous run: 9.09/10, +0.91)\n\n\n\n"
],
[
"%%writefile primes.py\nimport unittest\nfrom primes import is_prime\nclass PrimesTestCase(unittest.testcase):\n \"\"\"Tests for primes.py.\"\"\"\n def test_is_five_prime(self):\n \"\"\"Is five successfully determine to be detremined to be prime?\"\"\"\n self.assertTrue(is_prime(5))\nif __name__ == '__main__':\n unittest.main()",
"Overwriting primes.py\n"
]
],
[
[
"# Question 2",
"_____no_output_____"
]
],
[
[
"def ArmstrongNumber(lst):\n for item in lst:\n temp=item\n sum=0\n while temp>0:\n digit=temp%10\n sum=sum+digit**3\n temp=temp//10\n if sum==item:\n print(item)",
"_____no_output_____"
],
[
"lst=list(range(1,1000))",
"_____no_output_____"
],
[
"print(lst)",
"[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 
733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999]\n"
],
[
"def ArmstrongNumberGen(lst):\n for item in lst:\n temp=item\n sum=0\n while temp>0:\n digit=temp%10\n sum=sum+digit**3\n temp=temp//10\n if sum==item:\n yield item",
"_____no_output_____"
],
[
"ArmstrongNumber(lst)",
"1\n153\n370\n371\n407\n"
],
[
"print(list(ArmstrongNumberGen(lst)))",
"[1, 153, 370, 371, 407]\n"
],
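[
"# (Added sketch, not part of the original assignment.) A digit-count-general variant:\n# raising each digit to the number of digits, instead of hard-coding cubes, also finds\n# Armstrong numbers with more than three digits (e.g. 1634, 8208, 9474).\ndef is_armstrong(n):\n    digits = str(n)\n    return n == sum(int(d) ** len(digits) for d in digits)\nprint([n for n in range(1, 10000) if is_armstrong(n)])",
"_____no_output_____"
],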
[
"#Day 9 Assignment is succeessfully completed",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a8180ba55d50c9cbd988e476f34b7287a8bf7a4
| 44,705 |
ipynb
|
Jupyter Notebook
|
notebooks/lessons/Supplemental - One-Way ANOVA.ipynb
|
prof-groff/evns-462
|
072b3d7b2e145bd9423e35a18b1b5cbcf4d11914
|
[
"MIT"
] | null | null | null |
notebooks/lessons/Supplemental - One-Way ANOVA.ipynb
|
prof-groff/evns-462
|
072b3d7b2e145bd9423e35a18b1b5cbcf4d11914
|
[
"MIT"
] | null | null | null |
notebooks/lessons/Supplemental - One-Way ANOVA.ipynb
|
prof-groff/evns-462
|
072b3d7b2e145bd9423e35a18b1b5cbcf4d11914
|
[
"MIT"
] | 1 |
2020-03-12T15:14:05.000Z
|
2020-03-12T15:14:05.000Z
| 49.343267 | 13,472 | 0.590493 |
[
[
[
"## One-Way Analysis of Variance (ANOVA)\n\nTo compare the means of two independent samples of internval or ratio data (assuming the samples are from normally distributed populations having equal variance) we can do a t-test. But what if you have more than two groups that you want to compare? You could do multiple t-tests, one for each pairing of groups. But this approach would increase the likelhood of experiencing a type-1 error, that is, of rejecting the null hypothesis when you should not have done so (false positive). The practice of doing repeated t-tests between multiple variables in the search for p-values less than 0.05 is sometimes called p-hacking or data dredging. A better approach is to do an analysis of variance (ANOVA) test. Think of ANOVA as testing all groups simultaneously and looking for statistical evidence that at least one of the groups is different than any of the others. We will focus on one-way ANOVA were there is only one factor that is different between groups. If ANOVA reveals that at least one of the groups is different than the others, a follow up test or post-hoc test is necessary to uncover which group or groups are different from the others. A popular post-hoc test demonstrated here is called Tukey's range test.\n\nAfter this notebook you will know:\n* how to conduct one-way ANOVA (analysis of variance) between multiple groups of interval or ratio data.\n* how to do a Tukey's range test.",
"_____no_output_____"
],
[
"### About the Data\n\nThe dataset we will work with is from an experiment testing the connection between red dye no. 40 and the occurance of cancer in mice. There are three treatment groups receiving different size doses and a control. Here are some more details about the data.\n\nName: reddye40.csv\n\nTitle: Red Dye 40 and Cancer in Mice\n\nSource: Journal Natl. Cancer Inst., Vol. 66, p 197-212\n\nDescription: S.W. Laagakos and F. Mosteller of Harvard University fed mice different doses of red dye number 40 and recorded the time of death in weeks. Results for female mice, dosage and time of death are shown in the data:\n* X1 = time of death for control group\n* X2 = time of death for group with low dosage\n* X3 = time of death for group with medium dosage\n* X4 = time of death for group with high dosage\n\nThe following cell will import the red dye 40 cancer data using pandas. The data is formated as a CSV (comma-separated-values) file. Such a file could be opend in Excel but here we simply load the file into a pandas data structure called a dataframe and print out the first couple rows of the dataframe.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats # some useful stuff\nurl = \"https://raw.githubusercontent.com/prof-groff/evns462/master/data/reddye40.csv\"\nreddye = pd.read_csv(url)",
"_____no_output_____"
],
[
"reddye",
"_____no_output_____"
]
],
[
[
"### One-Way ANOVA Hypothesis Testing\n\nANOVA allows us to test the null hypothesis that their is no difference between the means of different groups in a study. For the red dye 40 data the null hypothesis would be that there is no difference between the mean time of death in weeks between mice receiving no dose (control), a low dose, a medium dose, or a high dose of red dye 40.\n* H<sub>0</sub>: x1_bar = x2_bar = x3_bar = x4_bar at α = 0.05\n* H<sub>A</sub>: The means are not all equal. That is, at least one of the means is different from the rest. \n\nThe test statistic for ANOVA is called the F-statistic and is defined as the ratio of mean squared error between groups divided by the mean squared error within groups. \n\nF = MSB/MSE\n\nwhere\n\nMSB = SUM(nj(xj_bar - x_bar)^2) / (k-1) \n\n* The sum is taken over all k groups, nj is the number of data values in group j, xj_bar is the mean of group j, x_bar is the grand mean, which is the mean of all data values in all groups. The degrees of freedom between groups is k-1\n\nMSE = SUM(SUM(x-xj_bar)^2) / (N-k)\n\n* The inner sum is taken over all data values in group j and the other sum is taken over all k groups. The degrees of freedom within groups is N-k, where N is the total number of data values in all groups.\n\nThe F-critical value for the stated significance level can be found in F-tables like [these](http://www.socr.ucla.edu/applets.dir/f_table.html) or using a calculator like [this one](https://www.danielsoper.com/statcalc/calculator.aspx?id=4). There is a different F-table for each significance level. The columns are for different between group degrees of freedom (k-1) and the rows are for different within group degrees of freedom (N-k). For the red dye data dfB = k-1 = 3 and dfW = N - k = 38 giving a F-critical value of 2.85174134.\n\nBelow the F-statistic is calculated using the formulas above and again using a built in python function that is much easier to use.\n\n**NOTE: ANOVA assumes that the data in each group is normally distributed and the various groups have uniform variance. In practice, the ANOVA test works well if the data is decently normal and the smallest group variance is no more than 3 times smaller than the largest group variance. (More arbitrary rules?)**",
"_____no_output_____"
]
],
[
[
"# FIRST LET'S PULL OUT THE FOUR GROUPS. Notice that the number of mice in each sample is different.\ngroups = ['X1 control', 'X2 low dose', 'X3 medium dose', 'X4 high dose']\nx1 = reddye[reddye[groups[0]]>0][groups[0]]\nx2 = reddye[reddye[groups[1]]>0][groups[1]]\nx3 = reddye[reddye[groups[2]]>0][groups[2]]\nx4 = reddye[reddye[groups[3]]>0][groups[3]]\n\n# NOW LET'S FIND THE SIZE OF EACH GROUP ...\nn1 = len(x1)\nn2 = len(x2)\nn3 = len(x3)\nn4 = len(x4)\nN = n1+n2+n3+n4 # 38 data values in all groups\n\n# AND CALCULATE dfB and dfW\nk = 4 # 4 groups\ndfB = k-1\ndfW = N-k",
"_____no_output_____"
],
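[
"# (Added sketch, not part of the original lesson.) ANOVA assumes roughly normal groups with\n# comparable variances; scipy's shapiro and levene tests give a quick check of both assumptions.\nfor name, g in zip(groups, [x1, x2, x3, x4]):\n    w, p_norm = stats.shapiro(g)\n    print(name, '| Shapiro-Wilk p =', round(p_norm, 3), '| variance =', round(g.var(), 1))\nstat, p_var = stats.levene(x1, x2, x3, x4)\nprint('Levene test for equal variances: p =', round(p_var, 3))",
"_____no_output_____"
],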
[
"# NOW CALCULATE THE GRAND MEAN ...\nx_bar = (n1*x1.mean() + n2*x2.mean() + n3*x3.mean() + n4*x4.mean())/N\nprint(x_bar)",
"75.55263157894737\n"
],
[
"# THE SUM OF SQUARES BETWEEN GROUPS ...\nSSB = n1*(x1.mean()-x_bar)**2 + n2*(x2.mean()-x_bar)**2 + n3*(x3.mean()-x_bar)**2 + n4*(x4.mean()-x_bar)**2\nprint(SSB)\n# AND THE SUM OF SQUARES WITHIN GROUPS ...\nSSW = sum(((x1 - x1.mean()))**2) + sum(((x2 - x2.mean()))**2) + sum(((x3 - x3.mean()))**2) + sum(((x4 - x4.mean()))**2)\nprint(SSW)\n\n# NOW CALCULATE THE F-STATISTIC\nF = (SSB/dfB)/(SSW/dfW)\nprint(\"F = \", F)",
"4051.9603934077604\n12937.434343434343\nF = 3.5495614178911623\n"
],
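[
"# (Added sketch.) Instead of reading an F-table, scipy can compute the critical value directly:\n# reject H0 at alpha = 0.05 when the F-statistic exceeds the 95th percentile of F(dfB, dfW).\nF_crit = stats.f.ppf(0.95, dfB, dfW)\nprint('F-critical for dfB =', dfB, 'and dfW =', dfW, ':', F_crit)\nprint('Reject H0:', F > F_crit)",
"_____no_output_____"
],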
[
"# Now, let's do this the easy way using a stats function.\nF, p = stats.f_oneway(x1, x2, x3, x4)\nprint(\"F = \", F, \" p = \", p)",
"F = 3.5495614178911628 p = 0.024471844533744118\n"
]
],
[
[
"### Repeat the Analysis with \"Flattened\" Data\n\nPerhaps you notice that the dataframe used here is a bit strange in that not all of the columns have the same number of elements. This is because the different columns represent different treatment groups with different sample sizes. A better way to organize the data may be to make the treatment group a dimension of the data set. Many of you will have data sets structured like this. Let's repeat the analysis with the data reformated in this way. ",
"_____no_output_____"
]
],
[
[
"# import the flattend data and view it\nurl = \"https://raw.githubusercontent.com/prof-groff/evns-462/master/data/reddye40_flat.csv\"\nreddye2 = pd.read_csv(url)\nreddye2",
"_____no_output_____"
],
[
"# group by treatment group\ngroups = reddye2.groupby('treatment group')\nx1 = groups.get_group('X1 control')\nx2 = groups.get_group('X2 low dose')\nx3 = groups.get_group('X3 medium dose')\nx4 = groups.get_group('X4 high dose')\n\n# each of these groups are now different data frames with two columns\n# we only want the \"weeks till death\" column though\nx1 = x1['weeks till death']\nx2 = x2['weeks till death']\nx3 = x3['weeks till death']\nx4 = x4['weeks till death']",
"_____no_output_____"
],
[
"# now do ANOVA, observe the same result as before\nF, p = stats.f_oneway(x1,x2,x3,x4)\nprint(\"F = \", F, \" p = \", p)",
"F = 3.5495614178911628 p = 0.024471844533744118\n"
]
],
[
[
"### Intepreting the Result\nSince the F-statistic is greater than F-critical we reject the null and accept the alternative hypothesis. The means of the groups are not the same. But this doesn't tell us which mean or means are different. To determine this we could proceed to do independent sample t-tests or explore the data some other way. Let's do a test called Tukey's range test.",
"_____no_output_____"
]
],
[
[
"# LET'S JUST LOOK AT SOME SUMMARY STATISTICS FOR EACH GROUP\ngroups.describe()",
"_____no_output_____"
],
[
"# Now let's do the Tukey test\nfrom statsmodels.stats.multicomp import pairwise_tukeyhsd\ntukey = pairwise_tukeyhsd(endog=reddye2['weeks till death'], groups=reddye2['treatment group'], alpha=0.05)\ntukey.summary() # See test summary",
"_____no_output_____"
],
[
"# and plot the group confidence intervals\ntukey.plot_simultaneous()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Intepreting the Results\n\nThe results of the Tukey test show that the only statistically significant difference is group X1 (the control) compared to group X4 (high dose). ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a8181064353ace1ba643d4e0c39addefa6c0553
| 43,884 |
ipynb
|
Jupyter Notebook
|
modeling Corona.ipynb
|
MiguelHeCa/master_big_data
|
61934100bcd10dc32f3c49d56e67847585608495
|
[
"MIT"
] | null | null | null |
modeling Corona.ipynb
|
MiguelHeCa/master_big_data
|
61934100bcd10dc32f3c49d56e67847585608495
|
[
"MIT"
] | null | null | null |
modeling Corona.ipynb
|
MiguelHeCa/master_big_data
|
61934100bcd10dc32f3c49d56e67847585608495
|
[
"MIT"
] | null | null | null | 113.689119 | 35,420 | 0.868563 |
[
[
[
"import pandas as pd\nfrom sklearn.metrics import mean_squared_error\nfrom scipy.optimize import curve_fit\nfrom scipy.optimize import fsolve\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom datetime import datetime, timedelta\n\ndef logistic_model(x, a, b, c):\n return c / (1 + np.exp(-(x - b) / a))\ndef exponential_model(x, a, b, c):\n return a * np.exp( b * (x - c))\n\n# Datos\ndf_original = pd.read_csv(\"https://covid.ourworldindata.org/data/total_cases.csv\")\n\n# arguments\ncountry = \"Spain\"\nfirst_day = datetime.strptime('2020-01-01', '%Y-%m-%d')\np0_log = [5, 20, 40000]\np0_exp = [0.5, 0.5, 0.5]",
"_____no_output_____"
]
],
[
[
"Se selecciona el país puesto en `country` y se crea una columna `days`que indica los días que han transcurrido desde el 1 de enero. Luego se crean `x` e `y` como listas de las columnas `days` y los casos del país, respectivamente.",
"_____no_output_____"
]
],
[
[
"df = df_original\ndf = df[['date', country]]\ndf = df[True != df[country].isna()]\ndf = df.assign(days = df['date'].map(lambda x : (datetime.strptime(x, '%Y-%m-%d') - first_day).days))\nx = list(df.iloc[:, 2])\ny = list(df.iloc[:, 1])",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
]
],
[
[
"Luego se utiliza la función creada de `logistic_curve` con `curve_fit`, pero lo que no en entiendo es realmente qué hace.\n\nDeduzco que en el parámetro `p0`, el contenido de `p0_log` equivale a:\n\n$$\n\\frac{40000}{1 + e^{-(x - 20)/5}}\n$$\n\npero no entiendo bien por qué esos parámetros.",
"_____no_output_____"
]
],
[
[
"fit = curve_fit(logistic_model, xdata=x, ydata=y, p0=p0_log, maxfev=2000)\na, b, c = fit[0]\nerrors = np.sqrt(np.diag(fit[1]))",
"_____no_output_____"
],
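[
"# (Added sketch.) What curve_fit returns: popt holds the least-squares estimates of the\n# logistic_model parameters (a, b, c) and pcov their covariance matrix; the standard errors\n# are the square roots of its diagonal. p0 is only the initial guess the optimizer starts from.\npopt, pcov = fit\nprint('fitted a, b, c =', popt)\nprint('standard errors =', np.sqrt(np.diag(pcov)))",
"_____no_output_____"
],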
[
"a",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
],
[
"c",
"_____no_output_____"
]
],
[
[
"Luego con la función `fsolve` no sé realmente qué hace porque le indica que resuelva `logistic_model` como argumento principal `b`, de tal forma que queda\n\n$$\n\\frac{17189.69}{1 + e^{-(75.60 - 75.60)/2.37}} - 17189.69 = \\frac{17189.69}{1 + e^{0}} - 17189.69\n$$\n\npero no entiendo por qué se tiene que resolver a través de `b`.",
"_____no_output_____"
]
],
[
[
"sol = int(fsolve(lambda z : logistic_model(z, a, b, c) - int(c), b))\nlast_day = datetime.strftime(first_day + timedelta(days=sol), '%Y-%m-%d')",
"_____no_output_____"
]
],
[
[
"Al final, con `sol` ya se determinan los días de predicción. Supongo que la clave está de hecho en que `b` corresponda a los días, pero no estoy seguro.",
"_____no_output_____"
]
],
[
[
"print(\"Last day of infections : \", last_day , \" (approximately)\")",
"Last day of infections : 2020-04-09 (approximately)\n"
],
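[
"# (Added sketch.) A quick check of what fsolve produced: sol is the day where the fitted\n# logistic reaches its (rounded) plateau c, with the root search started from the midpoint b.\nprint('cases predicted by the model on day', sol, ':', logistic_model(sol, a, b, c))\nprint('plateau c =', c)",
"_____no_output_____"
],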
[
"exp_fit = curve_fit(exponential_model, x, y, p0=p0_exp)\npred_x = list(range(max(x), sol))",
"_____no_output_____"
],
[
"fig = plt.figure(figsize = (10, 10))\nplt.scatter(df.iloc[:, 2], df.iloc[:, 1], label='Actual data')\nplt.plot(x+pred_x, [logistic_model(i,fit[0][0],fit[0][1],fit[0][2]) for i in x+pred_x], label=\"Logistic curve\", alpha=0.7, color=\"green\")\nplt.plot(x+pred_x, [exponential_model(i,exp_fit[0][0],exp_fit[0][1],exp_fit[0][2]) for i in x+pred_x], label=\"Exponential curve\",alpha=0.6, color = \"red\")\nplt.legend()\nplt.xlabel(\"Days from 1 January 2020\")\nplt.ylabel(\"Amount of infected people\")\nplt.ylim((min(y)*0.9,c*1.1))\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a818cfec942959065f8567e4fd4b85b0202e7f4
| 10,940 |
ipynb
|
Jupyter Notebook
|
notebooks/forecast-like-observations.ipynb
|
crim-ca/crims2s
|
0392fe320b819cf71b22522ea1d6b6e3cddf5142
|
[
"MIT"
] | 7 |
2021-11-06T03:42:04.000Z
|
2022-03-22T00:48:24.000Z
|
notebooks/forecast-like-observations.ipynb
|
crim-ca/crims2s
|
0392fe320b819cf71b22522ea1d6b6e3cddf5142
|
[
"MIT"
] | 1 |
2021-12-03T18:54:12.000Z
|
2021-12-03T18:54:12.000Z
|
notebooks/forecast-like-observations.ipynb
|
crim-ca/crims2s
|
0392fe320b819cf71b22522ea1d6b6e3cddf5142
|
[
"MIT"
] | 5 |
2021-11-06T02:08:19.000Z
|
2022-03-31T02:48:37.000Z
| 21.88 | 159 | 0.564351 |
[
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"# Forecast like observations\n\nUse observation files to produce new files that fit the shape of a forecast file.\nThat makes them easier to use for ML purposes.\nAt the core of this task is the forecast_like_observations provided by the organizers.\nThis notebooks loads the appropriate forecasts and calls this function to generate corresponding obs, from our own set of obs files.\nThe obs files were modified to make them more consisten w/r to nans, see *land-mask-investigate.ipybn*.",
"_____no_output_____"
]
],
[
[
"import climetlab as cml\nimport climetlab_s2s_ai_challenge\nimport dask\nimport dask.array as da\nimport dask.distributed\nimport dask_jobqueue\nimport pathlib\nimport xarray as xr\n\nfrom crims2s.util import fix_dataset_dims",
"_____no_output_____"
],
[
"DATA_PATH = '***BASEDIR***'\ndata_path = pathlib.Path(DATA_PATH)",
"_____no_output_____"
]
],
[
[
"## Boot dask cluster",
"_____no_output_____"
]
],
[
[
"cluster = dask_jobqueue.SLURMCluster(env_extra=['source ***HOME***.bash_profile','conda activate s2s'])",
"_____no_output_____"
],
[
"cluster.scale(jobs=4)",
"_____no_output_____"
],
[
"client = dask.distributed.Client(cluster)",
"_____no_output_____"
],
[
"client",
"_____no_output_____"
]
],
[
[
"## Temperature",
"_____no_output_____"
]
],
[
[
"forecast_dir = data_path / 'training-input'",
"_____no_output_____"
],
[
"forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 't2m' in f.stem]",
"_____no_output_____"
],
[
"forecast_files[:10]",
"_____no_output_____"
],
[
"forecast = xr.open_mfdataset(forecast_files, preprocess=fix_dataset_dims)",
"_____no_output_____"
],
[
"obs = xr.open_dataset(data_path / 'obs_t2m_interp_remask.nc')",
"_____no_output_____"
],
[
"forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)",
"_____no_output_____"
],
[
"forecast_shaped_t2m",
"_____no_output_____"
],
[
"sample = forecast_shaped_t2m.isel(forecast_dayofyear=0, forecast_year=10, lead_time=40)",
"_____no_output_____"
],
[
"sample.valid_time.item()",
"_____no_output_____"
],
[
"(sample == obs.sel(time=sample.valid_time)).t2m.plot()",
"_____no_output_____"
]
],
[
[
"Seems legit!",
"_____no_output_____"
]
],
[
[
"forecast_shaped_t2m.isel(forecast_year=0).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_2000.nc')",
"_____no_output_____"
],
[
"forecast_shaped_t2m.isel(forecast_year=[0])",
"_____no_output_____"
],
[
"forecast_files[:10]",
"_____no_output_____"
],
[
"for f in forecast_files:\n print(f)\n forecast = fix_dataset_dims(xr.open_dataset(f))\n forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)\n\n day_of_year = forecast_shaped_t2m.forecast_time.dt.dayofyear[0].item()\n \n forecast_shaped_t2m = forecast_shaped_t2m.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])\n forecast_shaped_t2m.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{day_of_year:03}.nc')",
"_____no_output_____"
],
[
"for y in forecast_shaped_t2m.forecast_year:\n print(y.item())",
"_____no_output_____"
],
[
"for y in forecast_shaped_t2m.forecast_year:\n print(y.item())\n forecast_shaped_t2m.sel(forecast_year=[y]).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{y.item()}.nc')",
"_____no_output_____"
],
[
"forecast_shaped_t2m.to_netcdf(data_path / 'obs_t2m_forecast_shape.nc')",
"_____no_output_____"
],
[
"forecast_shaped_t2m.to_netcdf('***BASEDIR***obs_t2m_forecast_shape.nc')",
"_____no_output_____"
],
[
"del obs\ndel forecast\ndel forecast_shaped_t2m",
"_____no_output_____"
]
],
[
[
"## Precipitation",
"_____no_output_____"
]
],
[
[
"forecast_dir = data_path / 'training-input'",
"_____no_output_____"
],
[
"forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 'tp' in f.stem]",
"_____no_output_____"
],
[
"forecast_files[:10]",
"_____no_output_____"
],
[
"obs = xr.open_dataset(data_path / 'obs_pr_interp_remask.nc')\n",
"_____no_output_____"
],
[
"for f in forecast_files:\n forecast = fix_dataset_dims(xr.open_dataset(f))\n forecast_shaped_tp = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)\n\n day_of_year = forecast_shaped_tp.forecast_time.dt.dayofyear[0].item()\n \n forecast_shaped_tp = forecast_shaped_tp.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])\n forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')",
"_____no_output_____"
],
[
"forecast_shaped_tp.forecast_time.dt.day[0].item()",
"_____no_output_____"
],
[
"day_of_year = 289\nforecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')",
"_____no_output_____"
],
[
"forecast_shaped_tp",
"_____no_output_____"
],
[
"sample = forecast.isel(forecast_year=10, lead_time=10)",
"_____no_output_____"
],
[
"sample",
"_____no_output_____"
],
[
"obs",
"_____no_output_____"
],
[
"forecast_shaped_tp",
"_____no_output_____"
],
[
"sample = forecast_shaped_tp.isel(forecast_year=10, lead_time=15)",
"_____no_output_____"
],
[
"sample",
"_____no_output_____"
],
[
"obs_of_sample = obs.sel(time=slice(sample.forecast_time, sample.forecast_time + sample.lead_time)).isel(time=slice(None, -1))",
"_____no_output_____"
],
[
"obs_of_sample",
"_____no_output_____"
],
[
"(obs_of_sample.sum(dim='time').pr == sample.tp).plot()",
"_____no_output_____"
]
],
[
[
"seems legit! don't forget to exclude the last day when computing the cumsum",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a8196d3cfea4dcc6dfb383ad3c184e326f804cb
| 241,918 |
ipynb
|
Jupyter Notebook
|
9_etl/transform.ipynb
|
edrmonteiro/DataSciencePython
|
0a35fb085bc0b98b33e083d0e1b113a04caa3aac
|
[
"MIT"
] | null | null | null |
9_etl/transform.ipynb
|
edrmonteiro/DataSciencePython
|
0a35fb085bc0b98b33e083d0e1b113a04caa3aac
|
[
"MIT"
] | null | null | null |
9_etl/transform.ipynb
|
edrmonteiro/DataSciencePython
|
0a35fb085bc0b98b33e083d0e1b113a04caa3aac
|
[
"MIT"
] | null | null | null | 36.155732 | 152 | 0.329331 |
[
[
[
"import pandas as pd\nimport pandera as pa",
"_____no_output_____"
],
[
"valores_ausentes = ['**','###!','####','****','*****','NULL']\ndf = pd.read_csv(\"data.csv\", sep=\";\", parse_dates=['ocorrencia_dia'], dayfirst=True, na_values=valores_ausentes)\ndf.head(10)",
"_____no_output_____"
],
[
"schema = pa.DataFrameSchema(\n columns = {\n \"codigo_ocorrencia\": pa.Column(pa.Int),\n \"codigo_ocorrencia2\": pa.Column(pa.Int),\n \"ocorrencia_classificacao\": pa.Column(pa.String),\n \"ocorrencia_cidade\": pa.Column(pa.String),\n \"ocorrencia_uf\": pa.Column(pa.String, pa.Check.str_length(2,2), nullable=True),\n \"ocorrencia_aerodromo\": pa.Column(pa.String, nullable=True),\n \"ocorrencia_dia\": pa.Column(pa.DateTime),\n \"ocorrencia_hora\": pa.Column(pa.String, pa.Check.str_matches(r'^([0-1]?[0-9]|[2][0-3]):([0-5][0-9])(:[0-5][0-9])?$'), nullable=True),\n \"total_recomendacoes\": pa.Column(pa.Int) \n }\n)",
"_____no_output_____"
],
[
"schema.validate(df)",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df.loc[1]",
"_____no_output_____"
],
[
"df.iloc[1]",
"_____no_output_____"
],
[
"df.iloc[-1]",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"df.iloc[10:15]",
"_____no_output_____"
],
[
"df.loc[10:15]",
"_____no_output_____"
],
[
"df.loc[:,'ocorrencia_uf']",
"_____no_output_____"
],
[
"df['ocorrencia_uf']",
"_____no_output_____"
],
[
"df.isna().sum()",
"_____no_output_____"
],
[
"df.isnull().sum()",
"_____no_output_____"
],
[
"filtro = df.ocorrencia_uf.isnull()\ndf.loc[filtro]",
"_____no_output_____"
],
[
"filtro = df.ocorrencia_aerodromo.isnull()\ndf.loc[filtro]",
"_____no_output_____"
],
[
"filtro = df.ocorrencia_hora.isnull()\ndf.loc[filtro]",
"_____no_output_____"
],
[
"df.count()",
"_____no_output_____"
],
[
"#ocorrências com mais de 10 recomendações\nfiltro = df.total_recomendacoes > 10\ndf.loc[filtro]",
"_____no_output_____"
],
[
"#ocorrências com mais de 10 recomendações\nfiltro = df.total_recomendacoes > 10\ndf.loc[filtro, ['ocorrencia_cidade', 'total_recomendacoes']]",
"_____no_output_____"
],
[
"#ocorrências cuja classificação == INCIDENTE GRAVE\t\nfiltro = df.ocorrencia_classificacao == 'INCIDENTE GRAVE'\ndf.loc[filtro]",
"_____no_output_____"
],
[
"#ocorrências cuja classificação == INCIDENTE GRAVE e o estado == SP\nfiltro1 = df.ocorrencia_classificacao == 'INCIDENTE GRAVE'\nfiltro2 = df.ocorrencia_uf == 'SP'\ndf.loc[filtro1 & filtro2]",
"_____no_output_____"
],
[
"#ocorrências cuja classificação == INCIDENTE GRAVE ou o estado == SP\nfiltro1 = df.ocorrencia_classificacao == 'INCIDENTE GRAVE'\nfiltro2 = df.ocorrencia_uf == 'SP'\ndf.loc[filtro1 | filtro2]",
"_____no_output_____"
],
[
"#ocorrências cuja (classificação == INCIDENTE GRAVE ou classificação == INCIDENTE) e o estado == SP\nfiltro1 = df.ocorrencia_classificacao.isin(['INCIDENTE GRAVE', 'INCIDENTE'])\nfiltro2 = df.ocorrencia_uf == 'SP'\ndf.loc[filtro1 & filtro2]",
"_____no_output_____"
],
[
"#ocorrências cuja cidade comecem com a letra C\nfiltro = df.ocorrencia_cidade.str[0] == 'C'\ndf.loc[filtro]",
"_____no_output_____"
],
[
"#ocorrências cuja cidade terminam com a letra A\nfiltro = df.ocorrencia_cidade.str[-1] == 'A'\ndf.loc[filtro]",
"_____no_output_____"
],
[
"#ocorrências cuja cidade terminam com os caracteres MA\nfiltro = df.ocorrencia_cidade.str[-2:] == 'MA'\ndf.loc[filtro]",
"_____no_output_____"
],
[
"#ocorrências cuja cidade contém (em qualquer parte do conteúdo) os caracteres MA ou AL\nfiltro = df.ocorrencia_cidade.str.contains('MA|AL')\ndf.loc[filtro]",
"_____no_output_____"
],
[
"#ocorrências do ano de 2015\nfiltro = df.ocorrencia_dia.dt.year == 2015\ndf.loc[filtro]",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"#ocorrências do ano de 2015 e mês 12 e dias entre 3 e 8\nfiltro_ano = df.ocorrencia_dia.dt.year == 2015\nfiltro_mes = df.ocorrencia_dia.dt.month == 12\nfiltro_dia_inicio = df.ocorrencia_dia.dt.day > 2 \nfiltro_dia_fim = df.ocorrencia_dia.dt.day < 9\ndf.loc[filtro_ano & filtro_mes & filtro_dia_inicio & filtro_dia_fim]",
"_____no_output_____"
],
[
"df['ocorrencia_dia_hora'] = pd.to_datetime(df.ocorrencia_dia.astype(str) + ' ' + df.ocorrencia_hora)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"#ocorrências do ano de 2015 e mês 12 e dias entre 3 e 8\nfiltro_ano = df.ocorrencia_dia_hora.dt.year == 2015\nfiltro_mes = df.ocorrencia_dia_hora.dt.month == 12\nfiltro_dia_inicio = df.ocorrencia_dia_hora.dt.day > 2 \nfiltro_dia_fim = df.ocorrencia_dia_hora.dt.day < 9\ndf.loc[filtro_ano & filtro_mes & filtro_dia_inicio & filtro_dia_fim]",
"_____no_output_____"
],
[
"filtro1 = df.ocorrencia_dia_hora >= '2015-12-03 11:00:00'\nfiltro2 = df.ocorrencia_dia_hora <= '2015-12-08 14:30:00'\ndf.loc[filtro1 & filtro2]",
"_____no_output_____"
],
[
"#ocorrências do ano de 2015 e mês 03\nfiltro1 = df.ocorrencia_dia.dt.year == 2015\nfiltro2 = df.ocorrencia_dia.dt.month == 3\ndf201503 = df.loc[filtro1 & filtro2]\ndf201503",
"_____no_output_____"
],
[
"df201503.count()",
"_____no_output_____"
],
[
"df201503.groupby(['ocorrencia_classificacao']).codigo_ocorrencia.count()",
"_____no_output_____"
],
[
"df201503.groupby(['ocorrencia_classificacao']).ocorrencia_aerodromo.count()",
"_____no_output_____"
],
[
"df201503.groupby(['ocorrencia_classificacao']).size()",
"_____no_output_____"
],
[
"df201503.groupby(['ocorrencia_classificacao']).size().sort_values()",
"_____no_output_____"
],
[
"df201503.groupby(['ocorrencia_classificacao']).size().sort_values(ascending=False)",
"_____no_output_____"
],
[
"filtro1 = df.ocorrencia_dia.dt.year == 2010\nfiltro2 = df.ocorrencia_uf.isin(['SP','MG','ES','RJ'])\ndfsudeste2010 = df.loc[filtro1 & filtro2]\ndfsudeste2010",
"_____no_output_____"
],
[
"dfsudeste2010.groupby(['ocorrencia_classificacao']).size()",
"_____no_output_____"
],
[
"dfsudeste2010.count()",
"_____no_output_____"
],
[
"dfsudeste2010.groupby(['ocorrencia_uf', 'ocorrencia_classificacao']).size()",
"_____no_output_____"
],
[
"dfsudeste2010.groupby(['ocorrencia_cidade']).size().sort_values(ascending=False)",
"_____no_output_____"
],
[
"filtro1 = dfsudeste2010.ocorrencia_cidade == 'RIO DE JANEIRO'\nfiltro2 = dfsudeste2010.total_recomendacoes > 0\ndfsudeste2010.loc[filtro1 & filtro2]",
"_____no_output_____"
],
[
"filtro = dfsudeste2010.ocorrencia_cidade == 'RIO DE JANEIRO'\ndfsudeste2010.loc[filtro].total_recomendacoes.sum()",
"_____no_output_____"
],
[
"dfsudeste2010.groupby(['ocorrencia_aerodromo'], dropna=False).total_recomendacoes.sum()",
"_____no_output_____"
],
[
"dfsudeste2010.groupby(['ocorrencia_cidade']).total_recomendacoes.sum()",
"_____no_output_____"
],
[
"filtro = dfsudeste2010.total_recomendacoes > 0\ndfsudeste2010.loc[filtro].groupby(['ocorrencia_cidade']).total_recomendacoes.sum().sort_values()",
"_____no_output_____"
],
[
"dfsudeste2010.loc[filtro].groupby(['ocorrencia_cidade', dfsudeste2010.ocorrencia_dia.dt.month]).total_recomendacoes.sum()",
"_____no_output_____"
],
[
"filtro1 = dfsudeste2010.total_recomendacoes > 0\nfiltro2 = dfsudeste2010.ocorrencia_cidade == 'SÃO PAULO'\ndfsudeste2010.loc[filtro1 & filtro2]",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a8197cdd50e10eb173e7eef4d3719c661686814
| 59,469 |
ipynb
|
Jupyter Notebook
|
multilayer_perceptrons/mlp_pytorch_basic.ipynb
|
ximingxing/Deep-Learning-in-Action
|
38d5d3d6990553ff9d3ea771d8e83f8b47241b9a
|
[
"MIT"
] | 1 |
2020-09-16T09:17:37.000Z
|
2020-09-16T09:17:37.000Z
|
multilayer_perceptrons/mlp_pytorch_basic.ipynb
|
ximingxing/Deep-Learning-in-Action
|
38d5d3d6990553ff9d3ea771d8e83f8b47241b9a
|
[
"MIT"
] | 1 |
2021-05-13T05:20:07.000Z
|
2021-05-13T05:20:07.000Z
|
multilayer_perceptrons/mlp_pytorch_basic.ipynb
|
ximingxing/Deep-Learning-in-Action
|
38d5d3d6990553ff9d3ea771d8e83f8b47241b9a
|
[
"MIT"
] | null | null | null | 113.925287 | 15,875 | 0.850258 |
[
[
[
"Author: Xi Ming.\n\n## Build a Multilayer Perceptron from Scratch based on PyTorch.\n\nPyTorch's automatic differentiation mechanism can help quickly implement multilayer perceptrons.",
"_____no_output_____"
],
[
"### Import Packages.",
"_____no_output_____"
]
],
[
[
"import torch\nimport torchvision\nimport torch.nn as nn\nfrom torchvision import datasets,transforms\nfrom torch.utils.data import DataLoader\n\nimport numpy as np\n\nprint('pytorch version:',torch.__version__,'\\ntorchvision version: ',torchvision.__version__,'\\nnumpy version:' ,np.__version__)",
"pytorch version: 1.7.1+cu101 \ntorchvision version: 0.8.2+cu101 \nnumpy version: 1.18.2\n"
]
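,
[
"# (Added sketch.) The automatic differentiation mentioned above, in one line: gradients of\n# y = x**2 flow back to x automatically once requires_grad is set.\nx = torch.tensor(3.0, requires_grad=True)\ny = x ** 2\ny.backward()\nprint(x.grad)  # tensor(6.)",
"_____no_output_____"
]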
],
[
[
"### Settings",
"_____no_output_____"
]
],
[
[
"# model runs on GPU or CPU\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\n# Hyperparameters\nlearning_rate = 1e-2\nmomentum = 0.9\nnum_epochs = 10\nbatch_size = 128\n\n# Architecture\nnum_features = 784\nnum_hidden_1 = 400\nnum_hidden_2 = 200\nnum_classes = 10",
"_____no_output_____"
]
],
[
[
"### Dataset: MNIST",
"_____no_output_____"
]
],
[
[
"train_dataset = datasets.MNIST(root='data', \n train=True, \n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))]),\n download=True)\n\ntest_dataset = datasets.MNIST(root='data', \n train=False, \n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,), (0.3081,))]))\n\n\ntrain_loader = DataLoader(dataset=train_dataset, \n batch_size=batch_size, shuffle=True)\n\ntest_loader = DataLoader(dataset=test_dataset, \n batch_size=batch_size, shuffle=False)\n\n# Checking the dataset\nfor images, labels in train_loader: \n print('Image batch dimensions:', images.shape)\n print('Image label dimensions:', labels.shape)\n break\n ",
"Image batch dimensions: torch.Size([128, 1, 28, 28])\nImage label dimensions: torch.Size([128])\n"
]
],
[
[
"### Define model",
"_____no_output_____"
]
],
[
[
"class MultilayerPerceptron(nn.Module):\n\n def __init__(self, num_features, num_classes):\n super(MultilayerPerceptron, self).__init__()\n \n self.model = nn.Sequential(\n nn.Linear(num_features, num_hidden_1),\n nn.Sigmoid(),\n nn.Linear(num_hidden_1, num_hidden_2),\n nn.Sigmoid(),\n nn.Linear(num_hidden_2, num_classes)\n )\n\n def forward(self, x):\n x = self.model(x)\n return x",
"_____no_output_____"
]
],
[
[
"### Init model, define optimizer and loss function",
"_____no_output_____"
]
],
[
[
"model = MultilayerPerceptron(num_features=num_features,\n num_classes=num_classes)\nmodel = model.to(device)\n\noptimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)\n\ncriterion = nn.CrossEntropyLoss()",
"_____no_output_____"
]
],
[
[
"### Training model",
"_____no_output_____"
]
],
[
[
"train_loss_list = []\ntest_acc_list = []\n\nfor epoch in range(num_epochs):\n\n model.train()\n for batch_idx, (data, target) in enumerate(train_loader):\n data, target = data.to(device), target.to(device)\n data = data.view(-1, 28*28)\n \n # forward\n logits = model(data)\n loss = criterion(logits, target)\n\n # backprop\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n if batch_idx % 100 == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. * batch_idx / len(train_loader), loss.data.item()))\n\n train_loss_list.append(loss.data.item())\n \n test_loss = 0\n correct = 0 \n model.eval()\n with torch.no_grad():\n # test\n total_correct = 0\n total_num = 0\n for data, target in test_loader:\n data, target = data.to(device), target.to(device)\n data = data.view(-1, 28*28)\n\n logits = model(data)\n test_loss += criterion(logits, target).item()\n\n pred = logits.data.max(1)[1]\n correct += pred.eq(target.data).sum()\n\n test_loss /= len(test_loader.dataset)\n test_acc = 100. * correct / len(test_loader.dataset)\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset), test_acc))\n\n test_acc_list.append(test_acc)",
"Train Epoch: 0 [0/60000 (0%)]\tLoss: 2.321454\nTrain Epoch: 0 [12800/60000 (21%)]\tLoss: 2.224704\nTrain Epoch: 0 [25600/60000 (43%)]\tLoss: 1.826435\nTrain Epoch: 0 [38400/60000 (64%)]\tLoss: 1.124504\nTrain Epoch: 0 [51200/60000 (85%)]\tLoss: 0.841719\n\nTest set: Average loss: 0.0053, Accuracy: 8232/10000 (82%)\n\nTrain Epoch: 1 [0/60000 (0%)]\tLoss: 0.670159\nTrain Epoch: 1 [12800/60000 (21%)]\tLoss: 0.455997\nTrain Epoch: 1 [25600/60000 (43%)]\tLoss: 0.508148\nTrain Epoch: 1 [38400/60000 (64%)]\tLoss: 0.435971\nTrain Epoch: 1 [51200/60000 (85%)]\tLoss: 0.537225\n\nTest set: Average loss: 0.0030, Accuracy: 8909/10000 (89%)\n\nTrain Epoch: 2 [0/60000 (0%)]\tLoss: 0.446308\nTrain Epoch: 2 [12800/60000 (21%)]\tLoss: 0.325904\nTrain Epoch: 2 [25600/60000 (43%)]\tLoss: 0.334509\nTrain Epoch: 2 [38400/60000 (64%)]\tLoss: 0.268637\nTrain Epoch: 2 [51200/60000 (85%)]\tLoss: 0.335764\n\nTest set: Average loss: 0.0025, Accuracy: 9083/10000 (91%)\n\nTrain Epoch: 3 [0/60000 (0%)]\tLoss: 0.275971\nTrain Epoch: 3 [12800/60000 (21%)]\tLoss: 0.428632\nTrain Epoch: 3 [25600/60000 (43%)]\tLoss: 0.270322\nTrain Epoch: 3 [38400/60000 (64%)]\tLoss: 0.231103\nTrain Epoch: 3 [51200/60000 (85%)]\tLoss: 0.235235\n\nTest set: Average loss: 0.0022, Accuracy: 9179/10000 (92%)\n\nTrain Epoch: 4 [0/60000 (0%)]\tLoss: 0.423989\nTrain Epoch: 4 [12800/60000 (21%)]\tLoss: 0.320259\nTrain Epoch: 4 [25600/60000 (43%)]\tLoss: 0.301397\nTrain Epoch: 4 [38400/60000 (64%)]\tLoss: 0.392316\nTrain Epoch: 4 [51200/60000 (85%)]\tLoss: 0.354596\n\nTest set: Average loss: 0.0020, Accuracy: 9258/10000 (93%)\n\nTrain Epoch: 5 [0/60000 (0%)]\tLoss: 0.221529\nTrain Epoch: 5 [12800/60000 (21%)]\tLoss: 0.187045\nTrain Epoch: 5 [25600/60000 (43%)]\tLoss: 0.152846\nTrain Epoch: 5 [38400/60000 (64%)]\tLoss: 0.288960\nTrain Epoch: 5 [51200/60000 (85%)]\tLoss: 0.191972\n\nTest set: Average loss: 0.0019, Accuracy: 9322/10000 (93%)\n\nTrain Epoch: 6 [0/60000 (0%)]\tLoss: 0.173000\nTrain Epoch: 6 [12800/60000 (21%)]\tLoss: 0.392608\nTrain Epoch: 6 [25600/60000 (43%)]\tLoss: 0.281637\nTrain Epoch: 6 [38400/60000 (64%)]\tLoss: 0.210830\nTrain Epoch: 6 [51200/60000 (85%)]\tLoss: 0.221315\n\nTest set: Average loss: 0.0017, Accuracy: 9358/10000 (94%)\n\nTrain Epoch: 7 [0/60000 (0%)]\tLoss: 0.231126\nTrain Epoch: 7 [12800/60000 (21%)]\tLoss: 0.294496\nTrain Epoch: 7 [25600/60000 (43%)]\tLoss: 0.140917\nTrain Epoch: 7 [38400/60000 (64%)]\tLoss: 0.216424\nTrain Epoch: 7 [51200/60000 (85%)]\tLoss: 0.200036\n\nTest set: Average loss: 0.0016, Accuracy: 9427/10000 (94%)\n\nTrain Epoch: 8 [0/60000 (0%)]\tLoss: 0.207002\nTrain Epoch: 8 [12800/60000 (21%)]\tLoss: 0.198266\nTrain Epoch: 8 [25600/60000 (43%)]\tLoss: 0.248742\nTrain Epoch: 8 [38400/60000 (64%)]\tLoss: 0.226041\nTrain Epoch: 8 [51200/60000 (85%)]\tLoss: 0.263913\n\nTest set: Average loss: 0.0014, Accuracy: 9463/10000 (95%)\n\nTrain Epoch: 9 [0/60000 (0%)]\tLoss: 0.169967\nTrain Epoch: 9 [12800/60000 (21%)]\tLoss: 0.198549\nTrain Epoch: 9 [25600/60000 (43%)]\tLoss: 0.164222\nTrain Epoch: 9 [38400/60000 (64%)]\tLoss: 0.175347\nTrain Epoch: 9 [51200/60000 (85%)]\tLoss: 0.148149\n\nTest set: Average loss: 0.0013, Accuracy: 9492/10000 (95%)\n\n"
]
],
[
[
"### Plot Training index curve",
"_____no_output_____"
]
],
[
[
"import matplotlib\nimport matplotlib.pyplot as plt\n\nx = np.arange(0, num_epochs)\n\nplt.title(\"Training index curve\")\nplt.plot(x, train_loss_list, label='train loss')\nplt.xlabel('epochs')\nplt.ylabel('train loss')\nplt.show()\n\nplt.title(\"Training index curve\")\nplt.plot(x, test_acc_list, label='test accuracy')\nplt.xlabel('epochs')\nplt.ylabel('train acc')\n\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"### Visual Inspection",
"_____no_output_____"
]
],
[
[
"for features, targets in test_loader:\n break\n\nfig, ax = plt.subplots(1, 4)\ndata = data.to('cpu')\nfor i in range(4):\n ax[i].imshow(data[i].view(28, 28), cmap=matplotlib.cm.binary)\nplt.show()\n\ndata = data.to(device)\npredictions = model.forward(data[:4].view(-1, 28*28))\npredictions = torch.argmax(predictions, dim=1)\nprint('Predicted labels', predictions)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a81ae10dadd5f6a64ee02de6a5f189c0c916b0a
| 46,802 |
ipynb
|
Jupyter Notebook
|
pretrained-model/tts/fastspeech2/calculate-alignment-tacotron2-female-train.ipynb
|
ishine/malaya-speech
|
fd34afc7107af1656dff4b3201fa51dda54fde18
|
[
"MIT"
] | 111 |
2020-08-31T04:58:54.000Z
|
2022-03-29T15:44:18.000Z
|
pretrained-model/tts/fastspeech2/calculate-alignment-tacotron2-female-train.ipynb
|
ishine/malaya-speech
|
fd34afc7107af1656dff4b3201fa51dda54fde18
|
[
"MIT"
] | 14 |
2020-12-16T07:27:22.000Z
|
2022-03-15T17:39:01.000Z
|
pretrained-model/tts/fastspeech2/calculate-alignment-tacotron2-female-train.ipynb
|
ishine/malaya-speech
|
fd34afc7107af1656dff4b3201fa51dda54fde18
|
[
"MIT"
] | 29 |
2021-02-09T08:57:15.000Z
|
2022-03-12T14:09:19.000Z
| 26.263749 | 258 | 0.477629 |
[
[
[
"import os\n\nos.environ['CUDA_VISIBLE_DEVICES'] = '0'",
"_____no_output_____"
],
[
"os.system('rm -rf tacotron2-female-alignment')\nos.system('mkdir tacotron2-female-alignment')",
"_____no_output_____"
],
[
"import tensorflow as tf\nimport numpy as np\nfrom glob import glob\nimport tensorflow as tf\nimport malaya_speech\nimport malaya_speech.train\nfrom malaya_speech.train.model import tacotron2_nvidia as tacotron2\nimport malaya_speech.config\nimport numpy as np\nimport json\nimport malaya_speech.train as train",
"WARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:38: The name tf.train.AdagradOptimizer is deprecated. Please use tf.compat.v1.train.AdagradOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:39: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:40: The name tf.train.FtrlOptimizer is deprecated. Please use tf.compat.v1.train.FtrlOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:42: The name tf.train.RMSPropOptimizer is deprecated. Please use tf.compat.v1.train.RMSPropOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:43: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.\n\nWARNING:tensorflow:\nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/openseq2seq/layer.py:6: The name tf.layers.Conv1D is deprecated. Please use tf.compat.v1.layers.Conv1D instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/openseq2seq/attention.py:4: The name tf.layers.Layer is deprecated. Please use tf.compat.v1.layers.Layer instead.\n\n"
],
[
"def norm_mean_std(x, mean, std):\n zero_idxs = np.where(x == 0.0)[0]\n x = (x - mean) / std\n x[zero_idxs] = 0.0\n return x\n\ndef average_by_duration(x, durs):\n mel_len = durs.sum()\n durs_cum = np.cumsum(np.pad(durs, (1, 0)))\n \n x_char = np.zeros((durs.shape[0],), dtype=np.float32)\n for idx, start, end in zip(range(mel_len), durs_cum[:-1], durs_cum[1:]):\n values = x[start:end][np.where(x[start:end] != 0.0)[0]]\n x_char[idx] = np.mean(values) if len(values) > 0 else 0.0\n\n return x_char.astype(np.float32)",
"_____no_output_____"
],
[
"f0_stat = np.load('../speech-bahasa/female-stats/stats_f0.npy')\nenergy_stat = np.load('../speech-bahasa/female-stats/stats_energy.npy')",
"_____no_output_____"
],
[
"with open('mels-female.json') as fopen:\n files = json.load(fopen)\n \nreduction_factor = 1\nmaxlen = 904\nminlen = 32\npad_to = 8\ndata_min = 1e-2\n\n_pad = 'pad'\n_start = 'start'\n_eos = 'eos'\n_punctuation = \"!'(),.:;? \"\n_special = '-'\n_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'\n\nMALAYA_SPEECH_SYMBOLS = (\n [_pad, _start, _eos] + list(_special) + list(_punctuation) + list(_letters)\n)\n\ndef generate(files):\n for f in files:\n f = f.decode()\n mel = np.load(f)\n mel_length = len(mel)\n if mel_length > maxlen or mel_length < minlen:\n continue\n\n stop_token_target = np.zeros([len(mel)], dtype = np.float32)\n\n text_ids = np.load(f.replace('mels', 'text_ids'), allow_pickle = True)[\n 0\n ]\n text_input = np.array(\n [\n MALAYA_SPEECH_SYMBOLS.index(c)\n for c in text_ids\n if c in MALAYA_SPEECH_SYMBOLS\n ]\n )\n num_pad = pad_to - ((len(text_input) + 2) % pad_to)\n text_input = np.pad(\n text_input, ((1, 1)), 'constant', constant_values = ((1, 2))\n )\n text_input = np.pad(\n text_input, ((0, num_pad)), 'constant', constant_values = 0\n )\n num_pad = pad_to - ((len(mel) + 1) % pad_to) + 1\n pad_value_mel = np.log(data_min)\n mel = np.pad(\n mel,\n ((0, num_pad), (0, 0)),\n 'constant',\n constant_values = pad_value_mel,\n )\n stop_token_target = np.pad(\n stop_token_target, ((0, num_pad)), 'constant', constant_values = 1\n )\n len_mel = [len(mel)]\n len_text_ids = [len(text_input)]\n \n \n f0 = np.load(f.replace('mels', 'f0s'))\n num_pad = pad_to - ((len(f0) + 1) % pad_to) + 1\n f0 = np.pad(\n f0,\n ((0, num_pad)),\n 'constant',\n )\n f0 = norm_mean_std(f0, f0_stat[0], f0_stat[1])\n len_f0 = [len(f0)]\n \n energy = np.load(f.replace('mels', 'energies'))\n num_pad = pad_to - ((len(energy) + 1) % pad_to) + 1\n energy = np.pad(\n energy,\n ((0, num_pad)),\n 'constant',\n )\n energy = norm_mean_std(energy, energy_stat[0], energy_stat[1])\n len_energy = [len(energy)]\n \n \n yield {\n 'mel': mel,\n 'text_ids': text_input,\n 'len_mel': len_mel,\n 'len_text_ids': len_text_ids,\n 'stop_token_target': stop_token_target,\n 'f0': f0,\n 'len_f0': len_f0,\n 'energy': energy,\n 'len_energy': len_energy,\n 'f': [f]\n }\n\ndef parse(example):\n mel_len = example['len_mel'][0]\n input_len = example['len_text_ids'][0]\n g = tacotron2.generate_guided_attention(mel_len, input_len, reduction_factor = reduction_factor)\n example['g'] = g\n return example\n \n \ndef get_dataset(files, batch_size = 32, shuffle_size = 32, thread_count = 24):\n def get():\n dataset = tf.data.Dataset.from_generator(\n generate,\n {\n 'mel': tf.float32,\n 'text_ids': tf.int32,\n 'len_mel': tf.int32,\n 'len_text_ids': tf.int32,\n 'stop_token_target': tf.float32,\n 'f0': tf.float32,\n 'len_f0': tf.int32,\n 'energy': tf.float32,\n 'len_energy': tf.int32,\n 'f': tf.string\n },\n output_shapes = {\n 'mel': tf.TensorShape([None, 80]),\n 'text_ids': tf.TensorShape([None]),\n 'len_mel': tf.TensorShape([1]),\n 'len_text_ids': tf.TensorShape([1]),\n 'stop_token_target': tf.TensorShape([None]),\n 'f0': tf.TensorShape([None]),\n 'len_f0': tf.TensorShape([1]),\n 'energy': tf.TensorShape([None]),\n 'len_energy': tf.TensorShape([1]),\n 'f': tf.TensorShape([1]),\n },\n args = (files,),\n )\n dataset = dataset.map(parse, num_parallel_calls = thread_count)\n dataset = dataset.padded_batch(\n shuffle_size,\n padded_shapes = {\n 'mel': tf.TensorShape([None, 80]),\n 'text_ids': tf.TensorShape([None]),\n 'len_mel': tf.TensorShape([1]),\n 'len_text_ids': tf.TensorShape([1]),\n 'g': tf.TensorShape([None, None]),\n 
'stop_token_target': tf.TensorShape([None]),\n 'f0': tf.TensorShape([None]),\n 'len_f0': tf.TensorShape([1]),\n 'energy': tf.TensorShape([None]),\n 'len_energy': tf.TensorShape([1]),\n 'f': tf.TensorShape([1]),\n },\n padding_values = {\n 'mel': tf.constant(0, dtype = tf.float32),\n 'text_ids': tf.constant(0, dtype = tf.int32),\n 'len_mel': tf.constant(0, dtype = tf.int32),\n 'len_text_ids': tf.constant(0, dtype = tf.int32),\n 'g': tf.constant(-1.0, dtype = tf.float32),\n 'stop_token_target': tf.constant(0, dtype = tf.float32),\n 'f0': tf.constant(0, dtype = tf.float32),\n 'len_f0': tf.constant(0, dtype = tf.int32),\n 'energy': tf.constant(0, dtype = tf.float32),\n 'len_energy': tf.constant(0, dtype = tf.int32),\n 'f': tf.constant('', dtype = tf.string),\n },\n )\n return dataset\n\n return get",
"_____no_output_____"
],
[
"features = get_dataset(files['train'])()\nfeatures = features.make_one_shot_iterator().get_next()",
"WARNING:tensorflow:From <ipython-input-7-8632c4afc50b>:2: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`.\n"
],
[
"input_ids = features['text_ids']\ninput_lengths = features['len_text_ids'][:, 0]\nspeaker_ids = tf.constant([0], dtype = tf.int32)\nmel_outputs = features['mel']\nmel_lengths = features['len_mel'][:, 0]\nguided = features['g']\nstop_token_target = features['stop_token_target']\nbatch_size = tf.shape(guided)[0]",
"_____no_output_____"
],
[
"model = tacotron2.Model(\n [input_ids, input_lengths],\n [mel_outputs, mel_lengths],\n len(MALAYA_SPEECH_SYMBOLS),\n)",
"WARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/openseq2seq/abstract.py:143: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/encoder.py:60: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/openseq2seq/layer.py:340: conv1d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.keras.layers.Conv1D` instead.\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/layers/convolutional.py:218: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `layer.__call__` method instead.\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/openseq2seq/layer.py:358: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/encoder.py:129: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.dropout instead.\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/openseq2seq/rnn.py:111: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/encoder.py:205: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/encoder.py:236: bidirectional_dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn.py:464: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `keras.layers.RNN(cell)`, which is equivalent to this API\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn_cell_impl.py:958: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `layer.add_weight` method instead.\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn_cell_impl.py:962: calling 
Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn.py:244: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/encoder.py:255: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/decoder.py:496: The name tf.layers.Dense is deprecated. Please use tf.compat.v1.layers.Dense instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/tacotron2_nvidia/decoder.py:412: The name tf.get_variable_scope is deprecated. Please use tf.compat.v1.get_variable_scope instead.\n\n"
],
[
"r = model.decoder_logits['outputs']\ndecoder_output, post_mel_outputs, alignment_histories, _, _, _ = r\nstop_token_predictions = model.decoder_logits['stop_token_prediction']",
"_____no_output_____"
],
[
"sess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())",
"_____no_output_____"
],
[
"saver = tf.train.Saver()\nsaver.restore(sess, 'tacotron2-female/model.ckpt-54000')",
"INFO:tensorflow:Restoring parameters from tacotron2-female/model.ckpt-54000\n"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"def decode(x):\n return ''.join([MALAYA_SPEECH_SYMBOLS[i] for i in x])",
"_____no_output_____"
],
[
"def get_duration_from_alignment(alignment):\n D = np.array([0 for _ in range(np.shape(alignment)[0])])\n\n for i in range(np.shape(alignment)[1]):\n max_index = list(alignment[:, i]).index(alignment[:, i].max())\n D[max_index] = D[max_index] + 1\n\n return D",
"_____no_output_____"
],
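[
"# Hedged sanity check (added; toy values, not from the training data): each mel\n# frame (column) is assigned to the argmax text position (row), so the durations\n# returned by get_duration_from_alignment must sum to the number of mel frames.\ntoy_alignment = np.array([[0.9, 0.1, 0.0],\n                          [0.05, 0.7, 0.3],\n                          [0.05, 0.2, 0.7]])\ntoy_d = get_duration_from_alignment(toy_alignment)\nassert toy_d.sum() == toy_alignment.shape[1]\nprint(toy_d)",
"_____no_output_____"
],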
[
"count = 0\nwhile True:\n try:\n o = sess.run([decoder_output, post_mel_outputs, stop_token_predictions, alignment_histories, features])\n f = o[-1]\n for i in range(len(f['f'])):\n file = f['f'][i,0].decode().split('/')[-1]\n file = f'tacotron2-female-alignment/{file}'\n len_mel = f['len_mel'][i, 0]\n len_text_ids = f['len_text_ids'][i, 0]\n d = get_duration_from_alignment(o[3][i, :len_text_ids, :len_mel])\n assert d.sum() == len_mel\n np.save(file, d)\n print('done', count)\n count += 1\n except:\n break",
"done 0\ndone 1\ndone 2\ndone 3\ndone 4\ndone 5\ndone 6\ndone 7\ndone 8\ndone 9\ndone 10\ndone 11\ndone 12\ndone 13\ndone 14\ndone 15\ndone 16\ndone 17\ndone 18\ndone 19\ndone 20\ndone 21\ndone 22\ndone 23\ndone 24\ndone 25\ndone 26\ndone 27\ndone 28\ndone 29\ndone 30\ndone 31\ndone 32\ndone 33\ndone 34\ndone 35\ndone 36\ndone 37\ndone 38\ndone 39\ndone 40\ndone 41\ndone 42\ndone 43\ndone 44\ndone 45\ndone 46\ndone 47\ndone 48\ndone 49\ndone 50\ndone 51\ndone 52\ndone 53\ndone 54\ndone 55\ndone 56\ndone 57\ndone 58\ndone 59\ndone 60\ndone 61\ndone 62\ndone 63\ndone 64\ndone 65\ndone 66\ndone 67\ndone 68\ndone 69\ndone 70\ndone 71\ndone 72\ndone 73\ndone 74\ndone 75\ndone 76\ndone 77\ndone 78\ndone 79\ndone 80\ndone 81\ndone 82\ndone 83\ndone 84\ndone 85\ndone 86\ndone 87\ndone 88\ndone 89\ndone 90\ndone 91\ndone 92\ndone 93\ndone 94\ndone 95\ndone 96\ndone 97\ndone 98\ndone 99\ndone 100\ndone 101\ndone 102\ndone 103\ndone 104\ndone 105\ndone 106\ndone 107\ndone 108\ndone 109\ndone 110\ndone 111\ndone 112\ndone 113\ndone 114\ndone 115\ndone 116\ndone 117\ndone 118\ndone 119\ndone 120\ndone 121\ndone 122\ndone 123\ndone 124\ndone 125\ndone 126\ndone 127\ndone 128\ndone 129\ndone 130\ndone 131\ndone 132\ndone 133\ndone 134\ndone 135\ndone 136\ndone 137\ndone 138\ndone 139\ndone 140\ndone 141\ndone 142\ndone 143\ndone 144\ndone 145\ndone 146\ndone 147\ndone 148\ndone 149\ndone 150\ndone 151\ndone 152\ndone 153\ndone 154\ndone 155\ndone 156\ndone 157\ndone 158\ndone 159\ndone 160\ndone 161\ndone 162\ndone 163\ndone 164\ndone 165\ndone 166\ndone 167\ndone 168\ndone 169\ndone 170\ndone 171\ndone 172\ndone 173\ndone 174\ndone 175\ndone 176\ndone 177\ndone 178\ndone 179\ndone 180\ndone 181\ndone 182\ndone 183\ndone 184\ndone 185\ndone 186\ndone 187\ndone 188\ndone 189\ndone 190\ndone 191\ndone 192\ndone 193\ndone 194\ndone 195\ndone 196\ndone 197\ndone 198\ndone 199\ndone 200\ndone 201\ndone 202\ndone 203\ndone 204\ndone 205\ndone 206\ndone 207\ndone 208\ndone 209\ndone 210\ndone 211\ndone 212\ndone 213\ndone 214\ndone 215\ndone 216\ndone 217\ndone 218\ndone 219\ndone 220\ndone 221\ndone 222\ndone 223\ndone 224\ndone 225\ndone 226\ndone 227\ndone 228\ndone 229\ndone 230\ndone 231\ndone 232\ndone 233\ndone 234\ndone 235\ndone 236\ndone 237\ndone 238\ndone 239\ndone 240\ndone 241\ndone 242\ndone 243\ndone 244\ndone 245\ndone 246\ndone 247\ndone 248\ndone 249\ndone 250\ndone 251\ndone 252\ndone 253\ndone 254\ndone 255\ndone 256\ndone 257\ndone 258\ndone 259\ndone 260\ndone 261\ndone 262\ndone 263\ndone 264\ndone 265\ndone 266\ndone 267\ndone 268\ndone 269\ndone 270\ndone 271\ndone 272\ndone 273\ndone 274\ndone 275\ndone 276\ndone 277\ndone 278\ndone 279\ndone 280\ndone 281\ndone 282\ndone 283\ndone 284\ndone 285\ndone 286\ndone 287\ndone 288\ndone 289\ndone 290\ndone 291\ndone 292\ndone 293\ndone 294\ndone 295\ndone 296\ndone 297\ndone 298\ndone 299\ndone 300\ndone 301\ndone 302\ndone 303\ndone 304\ndone 305\ndone 306\ndone 307\ndone 308\ndone 309\ndone 310\ndone 311\ndone 312\ndone 313\ndone 314\ndone 315\ndone 316\ndone 317\ndone 318\ndone 319\ndone 320\ndone 321\ndone 322\ndone 323\ndone 324\ndone 325\ndone 326\ndone 327\ndone 328\ndone 329\ndone 330\ndone 331\ndone 332\ndone 333\ndone 334\ndone 335\ndone 336\ndone 337\ndone 338\ndone 339\ndone 340\ndone 341\ndone 342\ndone 343\ndone 344\ndone 345\ndone 346\ndone 347\ndone 348\ndone 349\ndone 350\ndone 351\ndone 352\ndone 353\ndone 354\ndone 355\ndone 356\ndone 357\ndone 358\ndone 359\ndone 360\ndone 361\ndone 362\ndone 363\ndone 364\ndone 
365\ndone 366\ndone 367\ndone 368\ndone 369\ndone 370\ndone 371\ndone 372\ndone 373\ndone 374\ndone 375\ndone 376\ndone 377\ndone 378\ndone 379\ndone 380\ndone 381\ndone 382\ndone 383\ndone 384\ndone 385\ndone 386\ndone 387\ndone 388\ndone 389\ndone 390\ndone 391\ndone 392\ndone 393\ndone 394\ndone 395\ndone 396\ndone 397\ndone 398\ndone 399\ndone 400\ndone 401\ndone 402\ndone 403\ndone 404\ndone 405\ndone 406\ndone 407\ndone 408\ndone 409\ndone 410\ndone 411\ndone 412\ndone 413\ndone 414\ndone 415\ndone 416\ndone 417\ndone 418\ndone 419\ndone 420\ndone 421\ndone 422\ndone 423\ndone 424\ndone 425\ndone 426\ndone 427\ndone 428\ndone 429\ndone 430\ndone 431\ndone 432\ndone 433\ndone 434\ndone 435\ndone 436\ndone 437\ndone 438\ndone 439\ndone 440\ndone 441\ndone 442\ndone 443\ndone 444\ndone 445\ndone 446\ndone 447\ndone 448\ndone 449\ndone 450\ndone 451\ndone 452\ndone 453\ndone 454\ndone 455\ndone 456\ndone 457\ndone 458\ndone 459\ndone 460\ndone 461\ndone 462\ndone 463\ndone 464\ndone 465\ndone 466\ndone 467\ndone 468\ndone 469\ndone 470\ndone 471\ndone 472\ndone 473\ndone 474\ndone 475\ndone 476\ndone 477\ndone 478\ndone 479\ndone 480\ndone 481\ndone 482\ndone 483\ndone 484\ndone 485\ndone 486\ndone 487\ndone 488\ndone 489\ndone 490\ndone 491\ndone 492\ndone 493\ndone 494\ndone 495\ndone 496\ndone 497\ndone 498\ndone 499\ndone 500\ndone 501\ndone 502\ndone 503\ndone 504\ndone 505\ndone 506\ndone 507\ndone 508\ndone 509\ndone 510\ndone 511\ndone 512\ndone 513\ndone 514\ndone 515\ndone 516\ndone 517\ndone 518\ndone 519\ndone 520\ndone 521\ndone 522\ndone 523\ndone 524\ndone 525\ndone 526\ndone 527\ndone 528\ndone 529\ndone 530\ndone 531\ndone 532\ndone 533\ndone 534\ndone 535\ndone 536\ndone 537\ndone 538\ndone 539\ndone 540\ndone 541\ndone 542\ndone 543\ndone 544\ndone 545\ndone 546\ndone 547\ndone 548\ndone 549\ndone 550\ndone 551\ndone 552\ndone 553\ndone 554\ndone 555\ndone 556\ndone 557\ndone 558\ndone 559\ndone 560\ndone 561\ndone 562\ndone 563\ndone 564\ndone 565\ndone 566\ndone 567\ndone 568\ndone 569\ndone 570\ndone 571\ndone 572\ndone 573\ndone 574\ndone 575\ndone 576\ndone 577\ndone 578\ndone 579\ndone 580\ndone 581\ndone 582\ndone 583\ndone 584\ndone 585\ndone 586\ndone 587\ndone 588\ndone 589\ndone 590\ndone 591\ndone 592\ndone 593\ndone 594\ndone 595\ndone 596\ndone 597\ndone 598\ndone 599\ndone 600\ndone 601\ndone 602\ndone 603\ndone 604\ndone 605\ndone 606\ndone 607\ndone 608\ndone 609\ndone 610\ndone 611\ndone 612\ndone 613\ndone 614\ndone 615\ndone 616\ndone 617\ndone 618\ndone 619\ndone 620\ndone 621\ndone 622\ndone 623\ndone 624\ndone 625\ndone 626\ndone 627\ndone 628\ndone 629\ndone 630\ndone 631\ndone 632\ndone 633\ndone 634\ndone 635\ndone 636\ndone 637\ndone 638\ndone 639\ndone 640\ndone 641\ndone 642\ndone 643\ndone 644\ndone 645\ndone 646\ndone 647\ndone 648\ndone 649\ndone 650\ndone 651\ndone 652\ndone 653\ndone 654\ndone 655\ndone 656\ndone 657\ndone 658\ndone 659\ndone 660\ndone 661\ndone 662\ndone 663\ndone 664\ndone 665\ndone 666\ndone 667\ndone 668\ndone 669\ndone 670\ndone 671\ndone 672\ndone 673\ndone 674\ndone 675\ndone 676\ndone 677\ndone 678\ndone 679\ndone 680\ndone 681\ndone 682\ndone 683\ndone 684\ndone 685\ndone 686\ndone 687\ndone 688\ndone 689\ndone 690\ndone 691\ndone 692\ndone 693\ndone 694\ndone 695\ndone 696\ndone 697\ndone 698\ndone 699\ndone 700\ndone 701\ndone 702\ndone 703\ndone 704\ndone 705\ndone 706\ndone 707\ndone 708\ndone 709\ndone 710\ndone 711\ndone 712\ndone 713\ndone 714\ndone 715\ndone 716\ndone 717\ndone 718\ndone 719\ndone 
720\ndone 721\ndone 722\ndone 723\ndone 724\ndone 725\ndone 726\ndone 727\ndone 728\ndone 729\ndone 730\ndone 731\ndone 732\ndone 733\ndone 734\ndone 735\ndone 736\ndone 737\ndone 738\ndone 739\ndone 740\ndone 741\ndone 742\ndone 743\ndone 744\ndone 745\ndone 746\ndone 747\ndone 748\ndone 749\ndone 750\ndone 751\ndone 752\ndone 753\ndone 754\ndone 755\ndone 756\ndone 757\ndone 758\ndone 759\ndone 760\ndone 761\ndone 762\ndone 763\ndone 764\ndone 765\ndone 766\ndone 767\ndone 768\ndone 769\ndone 770\ndone 771\ndone 772\ndone 773\ndone 774\ndone 775\ndone 776\ndone 777\ndone 778\ndone 779\ndone 780\ndone 781\ndone 782\ndone 783\ndone 784\ndone 785\ndone 786\ndone 787\ndone 788\ndone 789\ndone 790\ndone 791\ndone 792\ndone 793\ndone 794\ndone 795\ndone 796\ndone 797\ndone 798\ndone 799\ndone 800\ndone 801\ndone 802\ndone 803\ndone 804\ndone 805\ndone 806\ndone 807\ndone 808\ndone 809\ndone 810\ndone 811\ndone 812\ndone 813\ndone 814\ndone 815\ndone 816\ndone 817\ndone 818\ndone 819\ndone 820\ndone 821\ndone 822\ndone 823\ndone 824\ndone 825\ndone 826\ndone 827\ndone 828\ndone 829\ndone 830\ndone 831\ndone 832\ndone 833\ndone 834\ndone 835\ndone 836\ndone 837\ndone 838\ndone 839\ndone 840\ndone 841\ndone 842\ndone 843\ndone 844\ndone 845\ndone 846\ndone 847\ndone 848\ndone 849\ndone 850\ndone 851\ndone 852\ndone 853\ndone 854\ndone 855\ndone 856\ndone 857\ndone 858\ndone 859\ndone 860\ndone 861\ndone 862\ndone 863\ndone 864\ndone 865\ndone 866\ndone 867\ndone 868\ndone 869\ndone 870\ndone 871\ndone 872\ndone 873\ndone 874\ndone 875\ndone 876\ndone 877\ndone 878\ndone 879\ndone 880\ndone 881\ndone 882\ndone 883\ndone 884\ndone 885\ndone 886\ndone 887\ndone 888\ndone 889\ndone 890\ndone 891\ndone 892\ndone 893\ndone 894\ndone 895\ndone 896\ndone 897\ndone 898\ndone 899\ndone 900\ndone 901\ndone 902\ndone 903\ndone 904\ndone 905\ndone 906\ndone 907\ndone 908\ndone 909\ndone 910\ndone 911\ndone 912\ndone 913\ndone 914\ndone 915\ndone 916\ndone 917\ndone 918\ndone 919\ndone 920\ndone 921\ndone 922\n"
],
[
"# import pickle\n\n# with open('dataset-mel.pkl', 'wb') as fopen:\n# pickle.dump([o[-1], d], fopen)",
"_____no_output_____"
],
[
"# import pickle\n\n# with open('a.pkl', 'wb') as fopen:\n# pickle.dump([np.reshape(o[0][0], [-1, 80]), np.reshape(o[1][0], [-1, 80]), o[-1]['mel'][0]], fopen)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a81af242934057630e88faac672ae38d19875d3
| 2,924 |
ipynb
|
Jupyter Notebook
|
02_overview_of_hdfs/02_using_hdfs_cli.ipynb
|
itversity/spark-sql
|
017181d9976e39848c5e46fc628a7ba9cbc38ec0
|
[
"MIT"
] | 9 |
2020-12-26T11:03:45.000Z
|
2022-03-03T14:12:30.000Z
|
02_overview_of_hdfs/02_using_hdfs_cli.ipynb
|
itversity/spark-sql
|
017181d9976e39848c5e46fc628a7ba9cbc38ec0
|
[
"MIT"
] | null | null | null |
02_overview_of_hdfs/02_using_hdfs_cli.ipynb
|
itversity/spark-sql
|
017181d9976e39848c5e46fc628a7ba9cbc38ec0
|
[
"MIT"
] | 17 |
2020-12-26T20:23:45.000Z
|
2022-03-10T06:10:55.000Z
| 28.666667 | 224 | 0.584131 |
[
[
[
"## Using HDFS CLI\n\nLet us understand how to use HDFS CLI to interact with HDFS.\n* Typically the cluster contain 3 types of nodes.\n * Gateway nodes or client nodes or edge nodes\n * Master nodes\n * Worker nodes\n* Developers like us will typically have access to Gateway nodes or Client nodes.\n* We can connect to Gateway nodes or Client nodes using SSH.\n* Once login, we can interact with HDFS either by using `hadoop fs` or `hdfs dfs`. Both of them are aliases to each other.\n* `hadoop` have other subcommands than `fs` and is typically used to interact with HDFS or Map Reduce as developers.\n* `hdfs` have other subcommands than `dfs`. It is typically used to not only manage files in HDFS but also administrative tasks related HDFS components such as **Namenode**, **Secondary Namenode**, **Datanode** etc.\n* As deveopers, our scope will be limited to use `hdfs dfs` or `hadoop fs` to interact with HDFS.\n* Both have sub commands and each of the sub command take additional control arguments. Let us understand the structure by taking the example of `hdfs dfs -ls -l -S -r /public`.\n * `hdfs` is the main command to manage all the components of HDFS.\n * `dfs` is the sub command to manage files in HDFS.\n * `-ls` is the file system command to list files in HDFS.\n * `-l -S -r` are control arguments for `-ls` to control the run time behavior of the command.\n * `/public` is the argument for the `-ls` command. It is path in HDFS. You will understad as you get into the details.",
"_____no_output_____"
]
],
[
[
"%%sh\n\nhadoop",
"_____no_output_____"
],
[
"%%sh\n\nhadoop fs -usage",
"_____no_output_____"
],
[
"%%sh\n\nhdfs",
"_____no_output_____"
],
[
"%%sh\n\nhdfs dfs -usage",
"_____no_output_____"
],
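[
"%%sh\n\n# Hedged example of the command structure described above (assumes the /public\n# directory exists on this cluster): list files with human readable sizes (-h),\n# sorted by size (-S), in reverse order (-r).\nhdfs dfs -ls -h -S -r /public",
"_____no_output_____"
]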
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a81bf2cffaf3b03ad5a080c22ca9046a97f7b0f
| 80,614 |
ipynb
|
Jupyter Notebook
|
lectures/16_lecture/airline.ipynb
|
edmund735/companion
|
95ef489b6f79f8c375c4ac060c3c1a1506332f8c
|
[
"MIT"
] | 6 |
2021-01-12T08:56:54.000Z
|
2022-02-09T01:01:34.000Z
|
lectures/16_lecture/airline.ipynb
|
edmund735/companion
|
95ef489b6f79f8c375c4ac060c3c1a1506332f8c
|
[
"MIT"
] | null | null | null |
lectures/16_lecture/airline.ipynb
|
edmund735/companion
|
95ef489b6f79f8c375c4ac060c3c1a1506332f8c
|
[
"MIT"
] | 25 |
2021-01-31T00:44:40.000Z
|
2022-03-25T12:43:56.000Z
| 321.171315 | 72,876 | 0.926229 |
[
[
[
"import numpy as np\nimport cvxpy as cp\nimport networkx as nx\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# Problem data\nreservations = np.array([110, 118, 103, 161, 140])\nflight_capacities = np.array([100, 100, 100, 150, 150])\ncost_per_hour = 50\ncost_external_company = 75",
"_____no_output_____"
],
[
"# Build transportation grah\nG = nx.DiGraph()\n\n# Add nodes\nG.add_node(0, supply=reservations[0], label=\"10am\")\nG.add_node(1, supply=reservations[1], label=\"12pm\")\nG.add_node(2, supply=reservations[2], label=\"2pm\")\nG.add_node(3, supply=reservations[3], label=\"4pm\")\nG.add_node(4, supply=reservations[4], label=\"6pm\")\nG.add_node(5, supply=0, label=\"9pm\")\nG.add_node(6, supply=-np.sum(reservations), label=\"NY\")\n\n# Edges\nM = 1000\n\n# From 10am\nG.add_edge(0, 1, cost=2 * cost_per_hour, capacity=M)\nG.add_edge(0, 2, cost=4 * cost_per_hour, capacity=M)\nG.add_edge(0, 3, cost=6 * cost_per_hour, capacity=M)\nG.add_edge(0, 4, cost=8 * cost_per_hour, capacity=M)\nG.add_edge(0, 5, cost=11 * cost_per_hour + cost_external_company, capacity=M)\nG.add_edge(0, 6, cost=0, capacity=flight_capacities[0])\n\n# From 12pm\nG.add_edge(1, 2, cost=2 * cost_per_hour, capacity=M)\nG.add_edge(1, 3, cost=4 * cost_per_hour, capacity=M)\nG.add_edge(1, 4, cost=6 * cost_per_hour, capacity=M)\nG.add_edge(1, 5, cost=9 * cost_per_hour + cost_external_company, capacity=M)\nG.add_edge(1, 6, cost=0, capacity=flight_capacities[1])\n\n# From 2pm\nG.add_edge(2, 3, cost=2 * cost_per_hour, capacity=M)\nG.add_edge(2, 4, cost=4 * cost_per_hour, capacity=M)\nG.add_edge(2, 5, cost=7 * cost_per_hour + cost_external_company, capacity=M)\nG.add_edge(2, 6, cost=0, capacity=flight_capacities[2])\n\n# From 4pm\nG.add_edge(3, 4, cost=2 * cost_per_hour, capacity=M)\nG.add_edge(3, 5, cost=5 * cost_per_hour + cost_external_company, capacity=M)\nG.add_edge(3, 6, cost=0, capacity=flight_capacities[3])\n\n# From 6pm\nG.add_edge(4, 5, cost=3 * cost_per_hour + cost_external_company, capacity=M)\nG.add_edge(4, 6, cost=0, capacity=flight_capacities[4])\n\n# From 9pm\nG.add_edge(5, 6, cost=0, capacity=M)",
"_____no_output_____"
],
[
"# Note minus sign for convention\n# In our formulation:\n# -> 1 means arc exits node\n# -> -1 means arc enters node\nA = -nx.linalg.graphmatrix.incidence_matrix(G, oriented=True)\nprint(\"A =\\n\", A.todense())",
"A =\n [[ 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0.]\n [-1. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0.]\n [ 0. -1. 0. 0. 0. 0. -1. 0. 0. 0. 0. 1. 1. 1. 1. 0. 0. 0.\n 0. 0. 0.]\n [ 0. 0. -1. 0. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. 0. 1. 1. 1.\n 0. 0. 0.]\n [ 0. 0. 0. -1. 0. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. -1. 0. 0.\n 1. 1. 0.]\n [ 0. 0. 0. 0. -1. 0. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. -1. 0.\n -1. 0. 1.]\n [ 0. 0. 0. 0. 0. -1. 0. 0. 0. 0. -1. 0. 0. 0. -1. 0. 0. -1.\n 0. -1. -1.]]\n"
],
[
"# Get weights, capacities, and supply vectors\nc = np.array([G[u][v]['cost'] for u,v in G.edges])\nu = np.array([G[u][v]['capacity'] for u,v in G.edges])\nb = np.array([G.nodes[u]['supply'] for u in G.nodes])",
"_____no_output_____"
],
[
"# Solve airline problem\n# Note: you need to install GLPK. It is part of CVXOPT.\n# Just run:\n# pip install cvxopt\n# \n# GLPK runs a simple method, which, as you know, returns exactly integral \n# solutions at vertices. Other solvers such as ECOS use interior-point methods \n# and they return slightly imprecise solutions that are not exactly integral.\nx = cp.Variable(len(G.edges))\nobjective = cp.Minimize(c @ x)\nconstraints = [A @ x == b, 0 <= x, x <= u]\nproblem = cp.Problem(objective, constraints)\nproblem.solve(solver=cp.GLPK)\nprint(\"Optimal cost = $\", problem.objective.value)",
"Optimal cost = $ 18300.0\n"
],
[
"# Show solution\n# Note: some bounds/capacities are not integral -> Solution not integral\nprint(\"x = \", x.value)",
"x = [ 0. 0. 0. 10. 0. 100. 0. 0. 0. 18. 100. 0. 0. 3.\n 100. 0. 11. 150. 0. 150. 32.]\n"
],
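[
"# Readability aid (added; not in the original notebook): pair each edge of G\n# with its optimal flow from the GLPK solve above and print nonzero assignments.\n# u_, v_ avoid clobbering the capacity vector u defined earlier.\nfor (u_, v_), flow in zip(G.edges, x.value):\n    if flow > 0:\n        print('%s -> %s: %d passengers' % (G.nodes[u_]['label'], G.nodes[v_]['label'], flow))",
"_____no_output_____"
],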
[
"fig, ax = plt.subplots(1, 1, figsize=(15, 10))\ncmap = plt.cm.Blues\n\n# Positions in 2d plot\nlayout = {0: np.array([0.0, 0.0]),\n 1: np.array([1.0, 0.5]),\n 2: np.array([2.0, 1.0]),\n 3: np.array([3.0, 0.5]),\n 4: np.array([4.0, 0.0]),\n 5: np.array([1.6, -0.3]),\n 6: np.array([2.0, -2.0]),\n }\nnx.draw_networkx_nodes(G, layout, node_color='w', edgecolors='k', node_size=2000)\nnx.draw_networkx_edges(G, layout, edge_cmap=cmap, edge_color=x.value, \n width=2, arrowsize=30, min_target_margin=20)\n\nlabels = {u: G.nodes[u]['label'] for u in G.nodes}\nnx.draw_networkx_labels(G,layout,labels,font_size=14)\n\n# Print colormap\nsm = plt.cm.ScalarMappable(cmap=cmap, \n norm=plt.Normalize(vmin=0, vmax=200)\n )\ncbar = plt.colorbar(sm)\n\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a81e6fab77e294cf7d034104636041fdd759f7b
| 21,075 |
ipynb
|
Jupyter Notebook
|
ml/jupyter/exploratory/learning-thermostat-state.ipynb
|
deep-santani/machine-learnt-air-conditioning
|
8acd583d7f273170c06fb5489759b5d42c6e2152
|
[
"Apache-2.0"
] | 1 |
2020-03-27T02:56:58.000Z
|
2020-03-27T02:56:58.000Z
|
ml/jupyter/exploratory/learning-thermostat-state.ipynb
|
deep-santani/machine-learnt-air-conditioning
|
8acd583d7f273170c06fb5489759b5d42c6e2152
|
[
"Apache-2.0"
] | null | null | null |
ml/jupyter/exploratory/learning-thermostat-state.ipynb
|
deep-santani/machine-learnt-air-conditioning
|
8acd583d7f273170c06fb5489759b5d42c6e2152
|
[
"Apache-2.0"
] | null | null | null | 68.872549 | 1,697 | 0.640854 |
[
[
[
"import tempfile\nimport urllib.request\ntrain_file = \"datasets/thermostat/sample-training-data.csv\"\ntest_file = \"datasets/thermostat/test-data.csv\"",
"_____no_output_____"
],
[
"import pandas as pd\nCOLUMNS = [\"month\", \"day\", \"hour\", \"min\", \"pirstatus\",\n \"isDay\", \"extTemp\", \"extHumidity\", \"loungeTemp\", \"loungeHumidity\",\n \"state\", \"temperature\", \"label\"]\ndf_train = pd.read_csv(train_file, names=COLUMNS, skipinitialspace=True)\ndf_test = pd.read_csv(test_file, names=COLUMNS, skipinitialspace=True)",
"_____no_output_____"
],
[
"CATEGORICAL_COLUMNS = []\nCONTINUOUS_COLUMNS = [\"month\",\"day\", \"hour\", \"min\", \"pirstatus\",\n \"isDay\", \"extTemp\", \"extHumidity\", \"loungeTemp\", \"loungeHumidity\"\n ]\nLABEL_COLUMN=\"label\"",
"_____no_output_____"
],
[
"df_train[LABEL_COLUMN] = df_train[\"state\"]\ndf_test[LABEL_COLUMN] = df_test[\"state\"]\nprint(df_test)",
" month day hour min pirstatus isDay extTemp extHumidity loungeTemp \\\n0 12 12 0 0 1 0 19 52 19.5 \n1 12 12 9 0 0 1 19 52 18.0 \n2 12 12 10 0 0 1 21 52 19.0 \n3 12 12 14 0 0 1 24 52 22.0 \n4 12 12 16 0 1 1 23 52 22.0 \n\n loungeHumidity state temperature label \n0 53 0 26 0 \n1 53 0 26 0 \n2 53 0 26 0 \n3 53 1 24 1 \n4 53 1 24 1 \n"
],
[
"import tensorflow as tf\n\ndef input_fn(df):\n # Creates a dictionary mapping from each continuous feature column name (k) to\n # the values of that column stored in a constant Tensor.\n continuous_cols = {k: tf.constant(df[k].values)\n for k in CONTINUOUS_COLUMNS}\n # Creates a dictionary mapping from each categorical feature column name (k)\n # to the values of that column stored in a tf.SparseTensor.\n categorical_cols = {k: tf.SparseTensor(\n indices=[[i, 0] for i in range(df[k].size)],\n values=df[k].values.astype(str),\n dense_shape=[df[k].size, 1])\n for k in CATEGORICAL_COLUMNS}\n # Merges the two dictionaries into one.\n feature_cols = dict()\n feature_cols.update(continuous_cols.copy())\n feature_cols.update(categorical_cols.copy())\n #feature_cols = dict(continuous_cols.items() + categorical_cols.items())\n # Converts the label column into a constant Tensor.\n label = tf.constant(df[LABEL_COLUMN].values)\n # Returns the feature columns and the label.\n return feature_cols, label\n\ndef train_input_fn():\n return input_fn(df_train)\n\ndef eval_input_fn():\n return input_fn(df_test)\n\n",
"_____no_output_____"
],
[
"month = tf.contrib.layers.real_valued_column(\"month\")\nday = tf.contrib.layers.real_valued_column(\"day\")\nhour = tf.contrib.layers.real_valued_column(\"hour\")\nminute = tf.contrib.layers.real_valued_column(\"min\")\npirstatus = tf.contrib.layers.real_valued_column(\"pirstatus\")\nisDay = tf.contrib.layers.real_valued_column(\"isDay\")\nextTemp = tf.contrib.layers.real_valued_column(\"extTemp\")\nextHumidity = tf.contrib.layers.real_valued_column(\"extHumidity\")\nloungeTemp = tf.contrib.layers.real_valued_column(\"loungeTemp\")\nloungeHumidity = tf.contrib.layers.real_valued_column(\"loungeHumidity\")",
"_____no_output_____"
],
[
"model_dir = tempfile.mkdtemp()\nm = tf.contrib.learn.LinearClassifier(feature_columns=[\n month, day, hour, minute, pirstatus, isDay,\n extTemp, extHumidity, loungeTemp, loungeHumidity],\n optimizer=tf.train.FtrlOptimizer(\n learning_rate=0.1,\n l1_regularization_strength=1.0,\n l2_regularization_strength=1.0),\n model_dir=model_dir)",
"INFO:tensorflow:Using default config.\nINFO:tensorflow:Using config: {'_task_id': 0, '_save_checkpoints_secs': 600, '_num_worker_replicas': 0, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_master': '', '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x0000025184738B38>, '_tf_config': gpu_options {\n per_process_gpu_memory_fraction: 1\n}\n, '_environment': 'local', '_task_type': None, '_model_dir': 'C:\\\\Users\\\\faisal.t\\\\AppData\\\\Local\\\\Temp\\\\tmpueabmsxd', '_session_config': None, '_is_chief': True, '_tf_random_seed': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_evaluation_master': '', '_num_ps_replicas': 0, '_save_summary_steps': 100}\n"
],
[
"m.fit(input_fn=train_input_fn, steps=500)",
"WARNING:tensorflow:From d:\\dump\\ml\\learn\\lib\\site-packages\\tensorflow\\contrib\\learn\\python\\learn\\estimators\\linear.py:173: get_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease switch to tf.train.get_global_step\nWARNING:tensorflow:Casting <dtype: 'int64'> labels to bool.\nWARNING:tensorflow:Casting <dtype: 'int64'> labels to bool.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Saving checkpoints for 1 into C:\\Users\\faisal.t\\AppData\\Local\\Temp\\tmpueabmsxd\\model.ckpt.\nINFO:tensorflow:loss = 0.693147, step = 1\nINFO:tensorflow:global_step/sec: 730.026\nINFO:tensorflow:loss = 0.2893, step = 101 (0.139 sec)\nINFO:tensorflow:global_step/sec: 1063.84\nINFO:tensorflow:loss = 0.219083, step = 201 (0.094 sec)\nINFO:tensorflow:global_step/sec: 1063.84\nINFO:tensorflow:loss = 0.182496, step = 301 (0.095 sec)\nINFO:tensorflow:global_step/sec: 1041.63\nINFO:tensorflow:loss = 0.159112, step = 401 (0.095 sec)\nINFO:tensorflow:Saving checkpoints for 500 into C:\\Users\\faisal.t\\AppData\\Local\\Temp\\tmpueabmsxd\\model.ckpt.\nINFO:tensorflow:Loss for final step: 0.142792.\n"
],
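[
"# Hedged illustration (added; not part of the original flow): tf.contrib.learn\n# estimators expose their trained parameters, so we can peek at the learned\n# linear weights to see which sensor features drive the predicted state.\nfor name in m.get_variable_names():\n    if 'weight' in name:\n        print(name, m.get_variable_value(name))",
"_____no_output_____"
],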
[
"results = m.evaluate(input_fn=eval_input_fn, steps=1)\nprint(\"printin results\")\nfor key in sorted(results):\n print(\"%s: %s\" % (key, results[key]))",
"_____no_output_____"
],
[
"def predict_input_fn():\n test_data = {\n \"month\":[12],\n \"day\":[12],\n \"hour\":[22],\n \"min\":[0],\n \"pirstatus\":[0],\n \"isDay\":[1],\n \"extTemp\":[35],\n \"extHumidity\":[20],\n \"loungeTemp\":[12],\n \"loungeHumidity\":[30],\n }\n \n continuous_cols = {k: tf.constant(test_data[k])\n for k in test_data}\n return continuous_cols",
"_____no_output_____"
],
[
"predictions = list(m.predict(input_fn=predict_input_fn, as_iterable=True))\nprint('Predictions: {}'.format(str(predictions)))",
"INFO:tensorflow:Restoring parameters from C:\\Users\\faisal.t\\AppData\\Local\\Temp\\tmpueabmsxd\\model.ckpt-500\nPredictions: [1]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a8201717e6acadb47206c68e01434e664211ef2
| 70,663 |
ipynb
|
Jupyter Notebook
|
Trafikverket.ipynb
|
danjo133/JupyterNotebooks
|
531a6d0d1293710f1b77a9d1f801b3e99589e5b7
|
[
"Apache-2.0"
] | null | null | null |
Trafikverket.ipynb
|
danjo133/JupyterNotebooks
|
531a6d0d1293710f1b77a9d1f801b3e99589e5b7
|
[
"Apache-2.0"
] | null | null | null |
Trafikverket.ipynb
|
danjo133/JupyterNotebooks
|
531a6d0d1293710f1b77a9d1f801b3e99589e5b7
|
[
"Apache-2.0"
] | null | null | null | 39.990379 | 288 | 0.399672 |
[
[
[
"%matplotlib inline\n\nfrom datetime import date\nimport pandas as pd\nimport urllib.request\nimport xmltodict\nfrom ipywidgets import HTML\nfrom ipyleaflet import *\nimport configparser",
"_____no_output_____"
],
[
"config = configparser.ConfigParser()\nconfig.read(\"config.cfg\")",
"_____no_output_____"
],
[
"import math\n\naxis = None # Semi-major axis of the ellipsoid.\nflattening = None # Flattening of the ellipsoid.\ncentral_meridian = None # Central meridian for the projection.\nlat_of_origin = None # Latitude of origin.\nscale = None # Scale on central meridian.\nfalse_northing = None # Offset for origo.\nfalse_easting = None # Offset for origo.\n\n# Parameters for RT90 and SWEREF99TM.\n# Note: Parameters for RT90 are choosen to eliminate the \n# differences between Bessel and GRS80-ellipsoides.\n# Bessel-iants should only be used if lat/long are given as\n# RT90-lat/long based on the Bessel ellipsoide (from old maps).\n# Parameter: projection (string). Must match if-statement.\ndef swedish_params(projection) :\n\n global central_meridian \n global scale \n global false_northing \n global false_easting \n global lat_of_origin\n\n # RT90 parameters, GRS 80 ellipsoid.\n if (projection == \"rt90_7.5_gon_v\") :\n grs80_params()\n central_meridian = 11.0 + 18.375/60.0\n scale = 1.000006000000\n false_northing = -667.282\n false_easting = 1500025.141\n\n elif (projection == \"rt90_5.0_gon_v\") :\n grs80_params()\n central_meridian = 13.0 + 33.376/60.0\n scale = 1.000005800000\n false_northing = -667.130\n false_easting = 1500044.695\n\n elif (projection == \"rt90_2.5_gon_v\") :\n grs80_params()\n central_meridian = 15.0 + 48.0/60.0 + 22.624306/3600.0\n scale = 1.00000561024\n false_northing = -667.711\n false_easting = 1500064.274\n\n elif (projection == \"rt90_0.0_gon_v\") :\n grs80_params()\n central_meridian = 18.0 + 3.378/60.0\n scale = 1.000005400000\n false_northing = -668.844\n false_easting = 1500083.521\n\n elif (projection == \"rt90_2.5_gon_o\") :\n grs80_params()\n central_meridian = 20.0 + 18.379/60.0\n scale = 1.000005200000\n false_northing = -670.706\n false_easting = 1500102.765\n\n elif (projection == \"rt90_5.0_gon_o\") :\n grs80_params()\n central_meridian = 22.0 + 33.380/60.0\n scale = 1.000004900000\n false_northing = -672.557\n false_easting = 1500121.846\n\n\n # RT90 parameters, Bessel 1841 ellipsoid.\n elif (projection == \"bessel_rt90_7.5_gon_v\") :\n bessel_params()\n central_meridian = 11.0 + 18.0/60.0 + 29.8/3600.0\n\n elif (projection == \"bessel_rt90_5.0_gon_v\") :\n bessel_params()\n central_meridian = 13.0 + 33.0/60.0 + 29.8/3600.0\n\n elif (projection == \"bessel_rt90_2.5_gon_v\") :\n bessel_params()\n central_meridian = 15.0 + 48.0/60.0 + 29.8/3600.0\n\n elif (projection == \"bessel_rt90_0.0_gon_v\") :\n bessel_params()\n central_meridian = 18.0 + 3.0/60.0 + 29.8/3600.0\n\n elif (projection == \"bessel_rt90_2.5_gon_o\") :\n bessel_params()\n central_meridian = 20.0 + 18.0/60.0 + 29.8/3600.0\n\n elif (projection == \"bessel_rt90_5.0_gon_o\") :\n bessel_params()\n central_meridian = 22.0 + 33.0/60.0 + 29.8/3600.0\n\n\n # SWEREF99TM and SWEREF99ddmm parameters.\n elif (projection == \"sweref_99_tm\") :\n sweref99_params()\n central_meridian = 15.00\n lat_of_origin = 0.0\n scale = 0.9996\n false_northing = 0.0\n false_easting = 500000.0\n\n elif (projection == \"sweref_99_1200\") :\n sweref99_params()\n central_meridian = 12.00\n\n elif (projection == \"sweref_99_1330\") :\n sweref99_params()\n central_meridian = 13.50\n\n elif (projection == \"sweref_99_1500\") :\n sweref99_params()\n central_meridian = 15.00\n\n elif (projection == \"sweref_99_1630\") :\n sweref99_params()\n central_meridian = 16.50\n\n elif (projection == \"sweref_99_1800\") :\n sweref99_params()\n central_meridian = 18.00\n\n elif (projection == \"sweref_99_1415\") :\n sweref99_params()\n 
central_meridian = 14.25\n\n elif (projection == \"sweref_99_1545\") :\n sweref99_params()\n central_meridian = 15.75\n\n elif (projection == \"sweref_99_1715\") :\n sweref99_params()\n central_meridian = 17.25\n\n elif (projection == \"sweref_99_1845\") :\n sweref99_params()\n central_meridian = 18.75\n\n elif (projection == \"sweref_99_2015\") :\n sweref99_params()\n central_meridian = 20.25\n\n elif (projection == \"sweref_99_2145\") :\n sweref99_params()\n central_meridian = 21.75\n\n elif (projection == \"sweref_99_2315\") :\n sweref99_params()\n central_meridian = 23.25\n\n\n # Test-case:\n # Lat: 66 0'0\", lon: 24 0'0\".\n # X:1135809.413803 Y:555304.016555.\n elif (projection == \"test_case\") :\n axis = 6378137.0\n flattening = 1.0 / 298.257222101\n central_meridian = 13.0 + 35.0/60.0 + 7.692000/3600.0\n lat_of_origin = 0.0\n scale = 1.000002540000\n false_northing = -6226307.8640\n false_easting = 84182.8790\n\n # Not a valid projection. \n else :\n central_meridian = None\n\n\n# Sets of default parameters.\ndef grs80_params() :\n\n global axis \n global flattening \n global central_meridian \n global lat_of_origin \n\n axis = 6378137.0 # GRS 80.\n flattening = 1.0 / 298.257222101 # GRS 80.\n central_meridian = None\n lat_of_origin = 0.0\n\ndef bessel_params() :\n\n global axis \n global flattening \n global central_meridian \n global lat_of_origin \n global scale \n global false_northing \n global false_easting \n\n axis = 6377397.155 # Bessel 1841.\n flattening = 1.0 / 299.1528128 # Bessel 1841.\n central_meridian = None\n lat_of_origin = 0.0\n scale = 1.0\n false_northing = 0.0\n false_easting = 1500000.0\n\ndef sweref99_params() :\n\n global axis \n global flattening \n global central_meridian \n global lat_of_origin \n global scale \n global false_northing \n global false_easting \n\n axis = 6378137.0 # GRS 80.\n flattening = 1.0 / 298.257222101 # GRS 80.\n central_meridian = None\n lat_of_origin = 0.0\n scale = 1.0\n false_northing = 0.0\n false_easting = 150000.0\n\n\n# Conversion from geodetic coordinates to grid coordinates.\ndef geodetic_to_grid(latitude, longitude) :\n x_y = [0] * 2\n if (central_meridian == None) :\n return x_y\n\n # Prepare ellipsoid-based stuff.\n e2 = flattening * (2.0 - flattening)\n n = flattening / (2.0 - flattening)\n a_roof = axis / (1.0 + n) * (1.0 + n*n/4.0 + n*n*n*n/64.0)\n A = e2\n B = (5.0*e2*e2 - e2*e2*e2) / 6.0\n C = (104.0*e2*e2*e2 - 45.0*e2*e2*e2*e2) / 120.0\n D = (1237.0*e2*e2*e2*e2) / 1260.0\n beta1 = n/2.0 - 2.0*n*n/3.0 + 5.0*n*n*n/16.0 + 41.0*n*n*n*n/180.0\n beta2 = 13.0*n*n/48.0 - 3.0*n*n*n/5.0 + 557.0*n*n*n*n/1440.0\n beta3 = 61.0*n*n*n/240.0 - 103.0*n*n*n*n/140.0\n beta4 = 49561.0*n*n*n*n/161280.0\n\n # Convert.\n deg_to_rad = math.pi / 180.0\n phi = latitude * deg_to_rad\n lambda_ = longitude * deg_to_rad\n lambda_zero = central_meridian * deg_to_rad\n\n phi_star = phi - math.sin(phi) * math.cos(phi) * (A + B*math.pow(math.sin(phi), 2) + C*math.pow(math.sin(phi), 4) + D*math.pow(math.sin(phi), 6))\n delta_lambda = lambda_ - lambda_zero\n xi_prim = math.atan(math.tan(phi_star) / math.cos(delta_lambda))\n eta_prim = math_atanh(math.cos(phi_star) * math.sin(delta_lambda))\n x = scale * a_roof * (xi_prim +beta1 * math.sin(2.0*xi_prim) * math_cosh(2.0*eta_prim) +beta2 * math.sin(4.0*xi_prim) * math_cosh(4.0*eta_prim) +beta3 * math.sin(6.0*xi_prim) * math_cosh(6.0*eta_prim) +beta4 * math.sin(8.0*xi_prim) * math_cosh(8.0*eta_prim)) + false_northing\n y = scale * a_roof * (eta_prim +beta1 * math.cos(2.0*xi_prim) * 
math_sinh(2.0*eta_prim) +beta2 * math.cos(4.0*xi_prim) * math_sinh(4.0*eta_prim) +beta3 * math.cos(6.0*xi_prim) * math_sinh(6.0*eta_prim) +beta4 * math.cos(8.0*xi_prim) * math_sinh(8.0*eta_prim)) + false_easting\n    # round, not math.round (which does not exist), to millimetre precision\n    x_y[0] = round(x * 1000.0) / 1000.0\n    x_y[1] = round(y * 1000.0) / 1000.0\n#    x_y[0] = x\n#    x_y[1] = y\n    return x_y\n\n\n# Conversion from grid coordinates to geodetic coordinates.\ndef grid_to_geodetic(x, y) :\n    lat_lon = [0] * 2\n    if (central_meridian == None) :\n        return lat_lon\n\n    # Prepare ellipsoid-based stuff.\n    e2 = flattening * (2.0 - flattening)\n    n = flattening / (2.0 - flattening)\n    a_roof = axis / (1.0 + n) * (1.0 + n*n/4.0 + n*n*n*n/64.0)\n    delta1 = n/2.0 - 2.0*n*n/3.0 + 37.0*n*n*n/96.0 - n*n*n*n/360.0\n    delta2 = n*n/48.0 + n*n*n/15.0 - 437.0*n*n*n*n/1440.0\n    delta3 = 17.0*n*n*n/480.0 - 37*n*n*n*n/840.0\n    delta4 = 4397.0*n*n*n*n/161280.0\n\n    Astar = e2 + e2*e2 + e2*e2*e2 + e2*e2*e2*e2\n    Bstar = -(7.0*e2*e2 + 17.0*e2*e2*e2 + 30.0*e2*e2*e2*e2) / 6.0\n    Cstar = (224.0*e2*e2*e2 + 889.0*e2*e2*e2*e2) / 120.0\n    Dstar = -(4279.0*e2*e2*e2*e2) / 1260.0\n\n    # Convert.\n    deg_to_rad = math.pi / 180\n    lambda_zero = central_meridian * deg_to_rad\n    xi = (x - false_northing) / (scale * a_roof) \n    eta = (y - false_easting) / (scale * a_roof)\n    xi_prim = xi - delta1*math.sin(2.0*xi) * math_cosh(2.0*eta) - delta2*math.sin(4.0*xi) * math_cosh(4.0*eta) - delta3*math.sin(6.0*xi) * math_cosh(6.0*eta) - delta4*math.sin(8.0*xi) * math_cosh(8.0*eta)\n    eta_prim = eta - delta1*math.cos(2.0*xi) * math_sinh(2.0*eta) - delta2*math.cos(4.0*xi) * math_sinh(4.0*eta) - delta3*math.cos(6.0*xi) * math_sinh(6.0*eta) - delta4*math.cos(8.0*xi) * math_sinh(8.0*eta)\n    phi_star = math.asin(math.sin(xi_prim) / math_cosh(eta_prim))\n    delta_lambda = math.atan(math_sinh(eta_prim) / math.cos(xi_prim))\n    lon_radian = lambda_zero + delta_lambda\n    lat_radian = phi_star + math.sin(phi_star) * math.cos(phi_star) * (Astar + Bstar*math.pow(math.sin(phi_star), 2) + Cstar*math.pow(math.sin(phi_star), 4) + Dstar*math.pow(math.sin(phi_star), 6)) \n    lat_lon[0] = lat_radian * 180.0 / math.pi\n    lat_lon[1] = lon_radian * 180.0 / math.pi\n    return lat_lon\n\n\n# Missing defs in the math library.\ndef math_sinh(value) :\n    return 0.5 * (math.exp(value) - math.exp(-value))\n\ndef math_cosh(value) :\n    return 0.5 * (math.exp(value) + math.exp(-value))\n\ndef math_atanh(value) :\n    return 0.5 * math.log((1.0 + value) / (1.0 - value))",
"_____no_output_____"
],
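[
"# Hedged round-trip sanity check (added for illustration): geodetic_to_grid and\n# grid_to_geodetic should be near-inverses once a projection is selected.\n# Linkoping city centre (~58.41N, 15.6E) is used as a hypothetical test point.\nswedish_params(\"sweref_99_tm\")\nx, y = geodetic_to_grid(58.41, 15.6)\nlat, lon = grid_to_geodetic(x, y)\nassert abs(lat - 58.41) < 1e-6 and abs(lon - 15.6) < 1e-6\nprint(x, y, lat, lon)",
"_____no_output_____"
],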
[
"from IPython.core.display import HTML\ncss = open('style-table.css').read() + open('style-notebook.css').read()\nHTML('<style>{}</style>'.format(css))",
"_____no_output_____"
],
[
"key = config[\"tokens\"][\"vaginfo\"]\nurlstr = \"http://opendata.linkoping.se/ws_opendata/main.asmx/VagarbeteAlla?CustomKey=\" + key\n\ndata = {}\nwith urllib.request.urlopen(urlstr) as url:\n data = xmltodict.parse(url.read().decode())\n",
"_____no_output_____"
],
[
"def timeconv(item):\n start = item['STARTTID'].replace(\"MAJ\",\"MAY\")\n start = start.replace('OKT', 'OCT')\n slut = item['SLUTTID'].replace(\"MAJ\",\"MAY\")\n slut = slut.replace('OKT', 'OCT')\n item['STARTTID'] = start\n item['SLUTTID'] = slut\n return item\ndata2 = [timeconv(item) for item in data['ResponseVagarbete']['ListaVagarbeten']['Vagarbete'] ]\ndf = pd.DataFrame.from_dict(data2)\ndf2 = df.sort_values('IDNR')",
"_____no_output_____"
],
[
"def make_clickable(val):\n return '<a target=\"_blank\" href=\"{}\">{}</a>'.format(val,val)",
"_____no_output_____"
],
[
"m = Map(center=(58.41, 15.6), zoom=13, basemap=basemaps.OpenStreetMap.Mapnik)\nm",
"_____no_output_____"
],
[
"df2['SLUTTID'] = pd.to_datetime(df2['SLUTTID'])\ndf2['STARTTID'] = pd.to_datetime(df2['STARTTID'])\ndf3 = df2.loc[df2['SLUTTID'] > date.today(),:]\ndf3",
"/home/daniel/python_virtualenvs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:3: FutureWarning: Comparing Series of datetimes with 'datetime.date'. Currently, the\n'datetime.date' is coerced to a datetime. In the future pandas will\nnot coerce, and a TypeError will be raised. To retain the current\nbehavior, convert the 'datetime.date' to a datetime with\n'pd.Timestamp'.\n This is separate from the ipykernel package so we can avoid doing imports until\n"
],
[
"#def get_location(swerefx, swerefy):\n# return [58.41, 15.6]\nswedish_params(\"sweref_99_1500\")\ndef create_marker(datapoint):\n loc = grid_to_geodetic(float(datapoint.Y_SWEREF991500), float(datapoint.X_SWEREF991500))\n roadcondition = \"\"\n if datapoint.FRAMKOMLIGHET_BIL:\n roadcondition = \"Avstängd\" if float(datapoint.FRAMKOMLIGHET_BIL.replace(',','.'))<0.1 else \"Begränsad\"\n htmltext = \"\"\"\n <div>{}\n <ul class='list-group'>\n <li class='list-group-item'>Loc: {}</li>\n <li class='list-group-item'>Start: {}, Slut: {}</li>\n <li class='list-group-item'>Framkomlighet: {}</li>\n </ul></div>\"\"\".format(\n datapoint.BESKRIVNING,\n datapoint.PLATS,\n datapoint.STARTTID,\n datapoint.SLUTTID,\n roadcondition)\n html_widget = HTML(\n value=htmltext,\n placeholder='',\n description=''\n )\n return Marker(location=loc, popup=html_widget)\n\nfor item in range(0,len(df3)):\n mark = create_marker(df3.iloc[item,:])\n m += mark\nm",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a820aac41686e3cfe54a8c34d9ffd0125d8116b
| 74,801 |
ipynb
|
Jupyter Notebook
|
docs/notebooks/APIFuzzer.ipynb
|
vrthra-forks/fuzzingbook
|
15319dcd7c213559cfe992c2e5936dab52929658
|
[
"MIT"
] | 1 |
2022-02-09T22:01:26.000Z
|
2022-02-09T22:01:26.000Z
|
docs/notebooks/APIFuzzer.ipynb
|
vrthra-forks/fuzzingbook
|
15319dcd7c213559cfe992c2e5936dab52929658
|
[
"MIT"
] | null | null | null |
docs/notebooks/APIFuzzer.ipynb
|
vrthra-forks/fuzzingbook
|
15319dcd7c213559cfe992c2e5936dab52929658
|
[
"MIT"
] | null | null | null | 28.648411 | 562 | 0.553054 |
[
[
[
"# Fuzzing APIs\n\nSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. However, we can also generate inputs that go directly into individual functions, gaining flexibility and speed in the process. In this chapter, we explore the use of grammars to synthesize code for function calls, which allows you to generate _program code that very efficiently invokes functions directly._ ",
"_____no_output_____"
]
],
[
[
"from bookutils import YouTubeVideo\nYouTubeVideo('U842dC2R3V0')",
"_____no_output_____"
]
],
[
[
"**Prerequisites**\n\n* You have to know how grammar fuzzing work, e.g. from the [chapter on grammars](Grammars.ipynb).\n* We make use of _generator functions_, as discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb).\n* We make use of probabilities, as discussed in the [chapter on fuzzing with probabilities](ProbabilisticGrammarFuzzer.ipynb).",
"_____no_output_____"
],
[
"## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.APIFuzzer import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter provides *grammar constructors* that are useful for generating _function calls_.\n\nThe grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.\n\n```python\n>>> from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer\n```\n`INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively:\n\n```python\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)\n>>> [fuzzer.fuzz() for i in range(10)]\n['-51', '9', '0', '0', '0', '0', '32', '0', '0', '0']\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)\n>>> [fuzzer.fuzz() for i in range(10)]\n['0e0',\n '-9.43e34',\n '-7.3282e0',\n '-9.5e-9',\n '0',\n '-30.840386e-5',\n '3',\n '-4.1e0',\n '-9.7',\n '413']\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)\n>>> [fuzzer.fuzz() for i in range(10)]\n['\"#vYV*t@I%KNTT[q~}&-v+[zAzj[X-z|RzC$(g$Br]1tC\\':5<F-\"',\n '\"\"',\n '\"^S/\"',\n '\"y)QDs_9\"',\n '\")dY~?WYqMh,bwn3\\\\\"A!02Pk`gx\"',\n '\"01n|(dd$-d.sx\\\\\"83\\\\\"h/]qx)d9LPNdrk$}$4t3zhC.%3VY@AZZ0wCs2 N\"',\n '\"D\\\\6\\\\xgw#TQ}$\\'3\"',\n '\"LaM{\"',\n '\"\\\\\"ux\\'1H!=%;2T$.=l\"',\n '\"=vkiV~w.Ypt,?JwcEr}Moc>!5<U+DdYAup\\\\\"N 0V?h3x~jFN3\"']\n```\n`int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`:\n\n```python\n>>> int_grammar = int_grammar_with_range(100, 200)\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)\n>>> [fuzzer.fuzz() for i in range(10)]\n['154', '149', '185', '117', '182', '154', '131', '194', '147', '192']\n```\n`float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.\n\n```python\n>>> float_grammar = float_grammar_with_range(100, 200)\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)\n>>> [fuzzer.fuzz() for i in range(10)]\n['121.8092479227325',\n '187.18037169119634',\n '127.9576486784452',\n '125.47768739781723',\n '151.8091820472274',\n '117.864410860742',\n '187.50918008379483',\n '119.29335112884749',\n '149.2637029583114',\n '126.61818995939146']\n```\nAll such values can be immediately used for testing function calls:\n\n```python\n>>> from math import sqrt\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)\n>>> call = \"sqrt(\" + fuzzer.fuzz() + \")\"\n>>> call\n'sqrt(143)'\n>>> eval(call)\n11.958260743101398\n```\nThese grammars can also be composed to form more complex grammars. `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.\n\n```python\n>>> int_list_grammar = list_grammar(int_grammar)\n>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)\n>>> [fuzzer.fuzz() for i in range(5)]\n['[118, 111, 188, 137, 129]',\n '[170, 172]',\n '[171, 161, 117, 191, 175, 183, 164]',\n '[189]',\n '[129, 110, 178]']\n>>> some_list = eval(fuzzer.fuzz())\n>>> some_list\n[172, 120, 106, 192, 124, 191, 161, 100, 117]\n>>> len(some_list)\n9\n```\nIn a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.\n\n",
"_____no_output_____"
],
[
"## Fuzzing a Function\n\nLet us start with our first problem: How do we fuzz a given function? For an interpreted language like Python, this is pretty straight-forward. All we need to do is to generate _calls_ to the function(s) we want to test. This is something we can easily do with a grammar.",
"_____no_output_____"
],
[
"As an example, consider the `urlparse()` function from the Python library. `urlparse()` takes a URL and decomposes it into its individual components.",
"_____no_output_____"
]
],
[
[
"import bookutils",
"_____no_output_____"
],
[
"from urllib.parse import urlparse",
"_____no_output_____"
],
[
"urlparse('https://www.fuzzingbook.com/html/APIFuzzer.html')",
"_____no_output_____"
]
],
[
[
"You see how the individual elements of the URL – the _scheme_ (`\"http\"`), the _network location_ (`\"www.fuzzingbook.com\"`), or the path (`\"//html/APIFuzzer.html\"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input.",
"_____no_output_____"
],
[
"To test `urlparse()`, we'd want to feed it a large set of different URLs. We can obtain these from the URL grammar we had defined in the [\"Grammars\"](Grammars.ipynb) chapter.",
"_____no_output_____"
]
],
[
[
"from Grammars import URL_GRAMMAR, is_valid_grammar, START_SYMBOL\nfrom Grammars import opts, extend_grammar, Grammar\nfrom GrammarFuzzer import GrammarFuzzer",
"_____no_output_____"
],
[
"url_fuzzer = GrammarFuzzer(URL_GRAMMAR)",
"_____no_output_____"
],
[
"for i in range(10):\n url = url_fuzzer.fuzz()\n print(urlparse(url))",
"ParseResult(scheme='https', netloc='user:[email protected]:8080', path='/', params='', query='', fragment='')\nParseResult(scheme='http', netloc='cispa.saarland:1', path='/', params='', query='', fragment='')\nParseResult(scheme='https', netloc='fuzzingbook.com:7', path='', params='', query='', fragment='')\nParseResult(scheme='https', netloc='user:[email protected]:80', path='', params='', query='', fragment='')\nParseResult(scheme='ftps', netloc='user:[email protected]', path='', params='', query='', fragment='')\nParseResult(scheme='ftp', netloc='fuzzingbook.com', path='/abc', params='', query='abc=x31&def=x20', fragment='')\nParseResult(scheme='ftp', netloc='user:[email protected]', path='', params='', query='', fragment='')\nParseResult(scheme='https', netloc='www.google.com:80', path='/', params='', query='', fragment='')\nParseResult(scheme='http', netloc='fuzzingbook.com:52', path='/', params='', query='', fragment='')\nParseResult(scheme='ftps', netloc='user:[email protected]', path='', params='', query='', fragment='')\n"
]
],
[
[
"This way, we can easily test any Python function – by setting up a scaffold that runs it. How would we proceed, though, if we wanted to have a test that can be re-run again and again, without having to generate new calls every time?",
"_____no_output_____"
],
[
"## Synthesizing Code\n\nThe \"scaffolding\" method, as sketched above, has an important downside: It couples test generation and test execution into a single unit, disallowing running both at different times, or for different languages. To decouple the two, we take another approach: Rather than generating inputs and immediately feeding this input into a function, we _synthesize code_ instead that invokes functions with a given input.",
"_____no_output_____"
],
[
"For instance, if we generate the string",
"_____no_output_____"
]
],
[
[
"call = \"urlparse('http://www.example.com/')\"",
"_____no_output_____"
]
],
[
[
"we can execute this string as a whole (and thus run the test) at any time:",
"_____no_output_____"
]
],
[
[
"eval(call)",
"_____no_output_____"
]
],
[
[
"To systematically generate such calls, we can again use a grammar:",
"_____no_output_____"
]
],
[
[
"URLPARSE_GRAMMAR: Grammar = {\n \"<call>\":\n ['urlparse(\"<url>\")']\n}\n\n# Import definitions from URL_GRAMMAR\nURLPARSE_GRAMMAR.update(URL_GRAMMAR)\nURLPARSE_GRAMMAR[\"<start>\"] = [\"<call>\"]\n\nassert is_valid_grammar(URLPARSE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"This grammar creates calls in the form `urlparse(<url>)`, where `<url>` comes from the \"imported\" URL grammar. The idea is to create many of these calls and to feed them into the Python interpreter.",
"_____no_output_____"
]
],
[
[
"URLPARSE_GRAMMAR",
"_____no_output_____"
]
],
[
[
"We can now use this grammar for fuzzing and synthesizing calls to `urlparse)`:",
"_____no_output_____"
]
],
[
[
"urlparse_fuzzer = GrammarFuzzer(URLPARSE_GRAMMAR)\nurlparse_fuzzer.fuzz()",
"_____no_output_____"
]
],
[
[
"Just as above, we can immediately execute these calls. To better see what is happening, we define a small helper function:",
"_____no_output_____"
]
],
[
[
"# Call function_name(arg[0], arg[1], ...) as a string\ndef do_call(call_string):\n print(call_string)\n result = eval(call_string)\n print(\"\\t= \" + repr(result))\n return result",
"_____no_output_____"
],
[
"call = urlparse_fuzzer.fuzz()\ndo_call(call)",
"urlparse(\"http://www.google.com?abc=def\")\n\t= ParseResult(scheme='http', netloc='www.google.com', path='', params='', query='abc=def', fragment='')\n"
]
],
[
[
"If `urlparse()` were a C function, for instance, we could embed its call into some (also generated) C function:",
"_____no_output_____"
]
],
[
[
"URLPARSE_C_GRAMMAR: Grammar = {\n \"<cfile>\": [\"<cheader><cfunction>\"],\n \"<cheader>\": ['#include \"urlparse.h\"\\n\\n'],\n \"<cfunction>\": [\"void test() {\\n<calls>}\\n\"],\n \"<calls>\": [\"<call>\", \"<calls><call>\"],\n \"<call>\": [' urlparse(\"<url>\");\\n']\n}",
"_____no_output_____"
],
[
"URLPARSE_C_GRAMMAR.update(URL_GRAMMAR)",
"_____no_output_____"
],
[
"URLPARSE_C_GRAMMAR[\"<start>\"] = [\"<cfile>\"]",
"_____no_output_____"
],
[
"assert is_valid_grammar(URLPARSE_C_GRAMMAR)",
"_____no_output_____"
],
[
"urlparse_fuzzer = GrammarFuzzer(URLPARSE_C_GRAMMAR)\nprint(urlparse_fuzzer.fuzz())",
"#include \"urlparse.h\"\n\nvoid test() {\n urlparse(\"http://user:[email protected]:99/x69?x57=abc\");\n}\n\n"
]
],
[
[
"## Synthesizing Oracles\n\nIn our `urlparse()` example, both the Python as well as the C variant only check for _generic_ errors in `urlparse()`; that is, they only detect fatal errors and exceptions. For a full test, we need to set up a specific *oracle* as well that checks whether the result is valid.",
"_____no_output_____"
],
[
"Our plan is to check whether specific parts of the URL reappear in the result – that is, if the scheme is `http:`, then the `ParseResult` returned should also contain a `http:` scheme. As discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb), equalities of strings such as `http:` across two symbols cannot be expressed in a context-free grammar. We can, however, use a _generator function_ (also introduced in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb)) to automatically enforce such equalities.",
"_____no_output_____"
],
[
"Here is an example. Invoking `geturl()` on a `urlparse()` result should return the URL as originally passed to `urlparse()`.",
"_____no_output_____"
]
],
[
[
"from GeneratorGrammarFuzzer import GeneratorGrammarFuzzer, ProbabilisticGeneratorGrammarFuzzer",
"_____no_output_____"
],
[
"URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,\n{\n \"<call>\": [(\"assert urlparse('<url>').geturl() == '<url>'\",\n opts(post=lambda url_1, url_2: [None, url_1]))]\n})",
"_____no_output_____"
],
[
"urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)\ntest = urlparse_oracle_fuzzer.fuzz()\nprint(test)",
"assert urlparse('https://user:[email protected]/abc?abc=abc').geturl() == 'https://user:[email protected]/abc?abc=abc'\n"
],
[
"exec(test)",
"_____no_output_____"
]
],
[
[
"In a similar way, we can also check individual components of the result:",
"_____no_output_____"
]
],
[
[
"URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,\n{\n \"<call>\": [(\"result = urlparse('<scheme>://<host><path>?<params>')\\n\"\n # + \"print(result)\\n\"\n + \"assert result.scheme == '<scheme>'\\n\"\n + \"assert result.netloc == '<host>'\\n\"\n + \"assert result.path == '<path>'\\n\"\n + \"assert result.query == '<params>'\",\n opts(post=lambda scheme_1, authority_1, path_1, params_1,\n scheme_2, authority_2, path_2, params_2:\n [None, None, None, None,\n scheme_1, authority_1, path_1, params_1]))]\n})\n\n# Get rid of unused symbols\ndel URLPARSE_ORACLE_GRAMMAR[\"<url>\"]\ndel URLPARSE_ORACLE_GRAMMAR[\"<query>\"]\ndel URLPARSE_ORACLE_GRAMMAR[\"<authority>\"]\ndel URLPARSE_ORACLE_GRAMMAR[\"<userinfo>\"]\ndel URLPARSE_ORACLE_GRAMMAR[\"<port>\"]",
"_____no_output_____"
],
[
"urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)\ntest = urlparse_oracle_fuzzer.fuzz()\nprint(test)",
"result = urlparse('https://www.google.com/?def=18&abc=abc')\nassert result.scheme == 'https'\nassert result.netloc == 'www.google.com'\nassert result.path == '/'\nassert result.query == 'def=18&abc=abc'\n"
],
[
"exec(test)",
"_____no_output_____"
]
],
[
[
"The use of generator functions may feel a bit cumbersome. Indeed, if we uniquely stick to Python, we could also create a _unit test_ that directly invokes the fuzzer to generate individual parts:",
"_____no_output_____"
]
],
[
[
"def fuzzed_url_element(symbol):\n return GrammarFuzzer(URLPARSE_GRAMMAR, start_symbol=symbol).fuzz()",
"_____no_output_____"
],
[
"scheme = fuzzed_url_element(\"<scheme>\")\nauthority = fuzzed_url_element(\"<authority>\")\npath = fuzzed_url_element(\"<path>\")\nquery = fuzzed_url_element(\"<params>\")\nurl = \"%s://%s%s?%s\" % (scheme, authority, path, query)\nresult = urlparse(url)\n# print(result)\nassert result.geturl() == url\nassert result.scheme == scheme\nassert result.path == path\nassert result.query == query",
"_____no_output_____"
]
],
[
[
"Using such a unit test makes it easier to express oracles. However, we lose the ability to systematically cover individual URL elements and alternatives as with [`GrammarCoverageFuzzer`](GrammarCoverageFuzzer.ipynb) as well as the ability to guide generation towards specific elements as with [`ProbabilisticGrammarFuzzer`](ProbabilisticGrammarFuzzer.ipynb). Furthermore, a grammar allows us to generate tests for arbitrary programming languages and APIs.",
"_____no_output_____"
],
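[
"With the grammar-based approach, by contrast, such systematic coverage comes essentially for free. A minimal sketch, assuming the `GrammarCoverageFuzzer` interface from the chapter linked above:",
"_____no_output_____"
],
[
"from GrammarCoverageFuzzer import GrammarCoverageFuzzer\n\n# Sketch: systematically cover individual URL elements and alternatives.\ncoverage_fuzzer = GrammarCoverageFuzzer(URLPARSE_GRAMMAR)\n[coverage_fuzzer.fuzz() for i in range(3)]",
"_____no_output_____"
],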
[
"## Synthesizing Data\n\nFor `urlparse()`, we have used a very specific grammar for creating a very specific argument. Many functions take basic data types as (some) arguments, though; we therefore define grammars that generate precisely those arguments. Even better, we can define functions that _generate_ grammars tailored towards our specific needs, returning values in a particular range, for instance.",
"_____no_output_____"
],
[
"### Integers\n\nWe introduce a simple grammar to produce integers.",
"_____no_output_____"
]
],
[
[
"from Grammars import convert_ebnf_grammar, crange",
"_____no_output_____"
],
[
"from ProbabilisticGrammarFuzzer import ProbabilisticGrammarFuzzer",
"_____no_output_____"
],
[
"INT_EBNF_GRAMMAR: Grammar = {\n \"<start>\": [\"<int>\"],\n \"<int>\": [\"<_int>\"],\n \"<_int>\": [\"(-)?<leaddigit><digit>*\", \"0\"],\n \"<leaddigit>\": crange('1', '9'),\n \"<digit>\": crange('0', '9')\n}\n\nassert is_valid_grammar(INT_EBNF_GRAMMAR)",
"_____no_output_____"
],
[
"INT_GRAMMAR = convert_ebnf_grammar(INT_EBNF_GRAMMAR)\nINT_GRAMMAR",
"_____no_output_____"
],
[
"int_fuzzer = GrammarFuzzer(INT_GRAMMAR)\nprint([int_fuzzer.fuzz() for i in range(10)])",
"['699', '-44', '321', '-7', '-6', '67', '0', '0', '57', '0']\n"
]
],
[
[
"If we need integers in a specific range, we can add a generator function that does right that:",
"_____no_output_____"
]
],
[
[
"from Grammars import set_opts",
"_____no_output_____"
],
[
"import random",
"_____no_output_____"
],
[
"def int_grammar_with_range(start, end):\n int_grammar = extend_grammar(INT_GRAMMAR)\n set_opts(int_grammar, \"<int>\", \"<_int>\",\n opts(pre=lambda: random.randint(start, end)))\n return int_grammar",
"_____no_output_____"
],
[
"int_fuzzer = GeneratorGrammarFuzzer(int_grammar_with_range(900, 1000))\n[int_fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
]
],
[
[
"### Floats\n\nThe grammar for floating-point values closely resembles the integer grammar.",
"_____no_output_____"
]
],
[
[
"FLOAT_EBNF_GRAMMAR: Grammar = {\n \"<start>\": [\"<float>\"],\n \"<float>\": [(\"<_float>\", opts(prob=0.9)), \"inf\", \"NaN\"],\n \"<_float>\": [\"<int>(.<digit>+)?<exp>?\"],\n \"<exp>\": [\"e<int>\"]\n}\nFLOAT_EBNF_GRAMMAR.update(INT_EBNF_GRAMMAR)\nFLOAT_EBNF_GRAMMAR[\"<start>\"] = [\"<float>\"]\n\nassert is_valid_grammar(FLOAT_EBNF_GRAMMAR)",
"_____no_output_____"
],
[
"FLOAT_GRAMMAR = convert_ebnf_grammar(FLOAT_EBNF_GRAMMAR)\nFLOAT_GRAMMAR",
"_____no_output_____"
],
[
"float_fuzzer = ProbabilisticGrammarFuzzer(FLOAT_GRAMMAR)\nprint([float_fuzzer.fuzz() for i in range(10)])",
"['0', '-4e0', '-3.3', '0.55e0', '0e2', '0.2', '-48.6e0', '0.216', '-4.844', '-6.100']\n"
],
[
"def float_grammar_with_range(start, end):\n float_grammar = extend_grammar(FLOAT_GRAMMAR)\n set_opts(float_grammar, \"<float>\", \"<_float>\", opts(\n pre=lambda: start + random.random() * (end - start)))\n return float_grammar",
"_____no_output_____"
],
[
"float_fuzzer = ProbabilisticGeneratorGrammarFuzzer(\n float_grammar_with_range(900.0, 900.9))\n[float_fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
]
],
[
[
"### Strings",
"_____no_output_____"
],
[
"Finally, we introduce a grammar for producing strings.",
"_____no_output_____"
]
],
[
[
"ASCII_STRING_EBNF_GRAMMAR: Grammar = {\n \"<start>\": [\"<ascii-string>\"],\n \"<ascii-string>\": ['\"<ascii-chars>\"'],\n \"<ascii-chars>\": [\n (\"\", opts(prob=0.05)),\n \"<ascii-chars><ascii-char>\"\n ],\n \"<ascii-char>\": crange(\" \", \"!\") + [r'\\\"'] + crange(\"#\", \"~\")\n}\n\nassert is_valid_grammar(ASCII_STRING_EBNF_GRAMMAR)",
"_____no_output_____"
],
[
"ASCII_STRING_GRAMMAR = convert_ebnf_grammar(ASCII_STRING_EBNF_GRAMMAR)",
"_____no_output_____"
],
[
"string_fuzzer = ProbabilisticGrammarFuzzer(ASCII_STRING_GRAMMAR)\nprint([string_fuzzer.fuzz() for i in range(10)])",
"['\"BgY)\"', '\"j[-64Big65wso(f:wg|}w&*D9JthLX}0@PT^]mr[`69Cq8H713ITYx<#jpml)\\\\\"\"', '\"{);XWZJ@d`\\'[h#F{1)C9M?%C`=\"', '\"Y\"', '\"C4gh`?uzJzD~$\\\\\\\\\"=|j)jj=SrBLIJ@0IbYiwIvNf5#pT4QUR}[g,35?Wg4i?3TdIsR0|eq3r;ZKuyI\\'<\\\\\"[p/x$<$B!\\\\\"_\"', '\"J0HG33+E(p8JQtKW.;G7 ^?.\"', '\"7r^B:Jf*J.@sqfED|M)3,eJ&OD\"', '\"c3Hcx^&*~3\\\\\"Jvac}cX\"', '\"\\'IHBQ:N+U:w(OAFn0pHLzX\"', '\"x4agH>H-2{Q|\\\\kpYF\"']\n"
]
],
[
[
"## Synthesizing Composite Data\n\nFrom basic data, as discussed above, we can also produce _composite data_ in data structures such as sets or lists. We illustrate such generation on lists.",
"_____no_output_____"
],
[
"### Lists",
"_____no_output_____"
]
],
[
[
"LIST_EBNF_GRAMMAR: Grammar = {\n \"<start>\": [\"<list>\"],\n \"<list>\": [\n (\"[]\", opts(prob=0.05)),\n \"[<list-objects>]\"\n ],\n \"<list-objects>\": [\n (\"<list-object>\", opts(prob=0.2)),\n \"<list-object>, <list-objects>\"\n ],\n \"<list-object>\": [\"0\"],\n}\n\nassert is_valid_grammar(LIST_EBNF_GRAMMAR)",
"_____no_output_____"
],
[
"LIST_GRAMMAR = convert_ebnf_grammar(LIST_EBNF_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"Our list generator takes a grammar that produces objects; it then instantiates a list grammar with the objects from these grammars.",
"_____no_output_____"
]
],
[
[
"def list_grammar(object_grammar, list_object_symbol=None):\n obj_list_grammar = extend_grammar(LIST_GRAMMAR)\n if list_object_symbol is None:\n # Default: Use the first expansion of <start> as list symbol\n list_object_symbol = object_grammar[START_SYMBOL][0]\n\n obj_list_grammar.update(object_grammar)\n obj_list_grammar[START_SYMBOL] = [\"<list>\"]\n obj_list_grammar[\"<list-object>\"] = [list_object_symbol]\n\n assert is_valid_grammar(obj_list_grammar)\n\n return obj_list_grammar",
"_____no_output_____"
],
[
"int_list_fuzzer = ProbabilisticGrammarFuzzer(list_grammar(INT_GRAMMAR))\n[int_list_fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
],
[
"string_list_fuzzer = ProbabilisticGrammarFuzzer(\n list_grammar(ASCII_STRING_GRAMMAR))\n[string_list_fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
],
[
"float_list_fuzzer = ProbabilisticGeneratorGrammarFuzzer(list_grammar(\n float_grammar_with_range(900.0, 900.9)))\n[float_list_fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
]
],
[
[
"Generators for dictionaries, sets, etc. can be defined in a similar fashion. By plugging together grammar generators, we can produce data structures with arbitrary elements.",
"_____no_output_____"
],
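[
"As an illustration, here is a sketch of a dictionary constructor built in the same style as `list_grammar()`; the grammar skeleton and the symbol names `<dict>`, `<dict-entries>`, `<key>`, and `<value>` are our own choices:",
"_____no_output_____"
],
[
"DICT_GRAMMAR: Grammar = {\n    \"<start>\": [\"<dict>\"],\n    \"<dict>\": [\"{}\", \"{<dict-entries>}\"],\n    \"<dict-entries>\": [\"<dict-entry>\", \"<dict-entry>, <dict-entries>\"],\n    \"<dict-entry>\": [\"<key>: <value>\"],\n    \"<key>\": [\"0\"],    # placeholder, replaced by dict_grammar()\n    \"<value>\": [\"0\"]   # placeholder, replaced by dict_grammar()\n}\n\nassert is_valid_grammar(DICT_GRAMMAR)\n\ndef dict_grammar(key_grammar, value_grammar):\n    # Plug key and value grammars into the dictionary skeleton,\n    # mirroring the approach of list_grammar() above.\n    grammar = extend_grammar(DICT_GRAMMAR)\n    grammar.update(key_grammar)\n    grammar.update(value_grammar)\n    grammar[\"<key>\"] = [key_grammar[START_SYMBOL][0]]\n    grammar[\"<value>\"] = [value_grammar[START_SYMBOL][0]]\n    grammar[START_SYMBOL] = [\"<dict>\"]\n    assert is_valid_grammar(grammar)\n    return grammar\n\ndict_fuzzer = GrammarFuzzer(dict_grammar(INT_GRAMMAR, ASCII_STRING_GRAMMAR))\n[dict_fuzzer.fuzz() for i in range(3)]",
"_____no_output_____"
],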
[
"## Synopsis\n\nThis chapter provides *grammar constructors* that are useful for generating _function calls_.\n\nThe grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.",
"_____no_output_____"
]
],
[
[
"from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer",
"_____no_output_____"
]
],
[
[
"`INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively:",
"_____no_output_____"
]
],
[
[
"fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)\n[fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
],
[
"fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)\n[fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
],
[
"fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)\n[fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
]
],
[
[
"`int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`:",
"_____no_output_____"
]
],
[
[
"int_grammar = int_grammar_with_range(100, 200)\nfuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)\n[fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
]
],
[
[
"`float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.",
"_____no_output_____"
]
],
[
[
"float_grammar = float_grammar_with_range(100, 200)\nfuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)\n[fuzzer.fuzz() for i in range(10)]",
"_____no_output_____"
]
],
[
[
"All such values can be immediately used for testing function calls:",
"_____no_output_____"
]
],
[
[
"from math import sqrt",
"_____no_output_____"
],
[
"fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)\ncall = \"sqrt(\" + fuzzer.fuzz() + \")\"\ncall",
"_____no_output_____"
],
[
"eval(call)",
"_____no_output_____"
]
],
[
[
"These grammars can also be composed to form more complex grammars. `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.",
"_____no_output_____"
]
],
[
[
"int_list_grammar = list_grammar(int_grammar)\nfuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)\n[fuzzer.fuzz() for i in range(5)]",
"_____no_output_____"
],
[
"some_list = eval(fuzzer.fuzz())",
"_____no_output_____"
],
[
"some_list",
"_____no_output_____"
],
[
"len(some_list)",
"_____no_output_____"
]
],
[
[
"In a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.",
"_____no_output_____"
],
[
"## Lessons Learned\n\n* To fuzz individual functions, one can easily set up grammars that produce function calls.\n* Fuzzing at the API level can be much faster than fuzzing at the system level, but brings the risk of false alarms by violating implicit preconditions.",
"_____no_output_____"
],
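[
"To illustrate the second point, consider `sqrt()` and its implicit precondition that the argument be non-negative; a minimal sketch:",
"_____no_output_____"
],
[
"from math import sqrt\n\n# A fuzzer unaware of the implicit precondition x >= 0 would report\n# this ValueError as a crash, although sqrt() behaves as documented.\ntry:\n    sqrt(-1)\nexcept ValueError as exc:\n    print('Precondition violated, not a bug:', exc)",
"_____no_output_____"
],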
[
"## Next Steps\n\nThis chapter was all about manually writing test and controlling which data gets generated. [In the next chapter](Carver.ipynb), we will introduce a much higher level of automation:\n\n* _Carving_ automatically records function calls and arguments from program executions.\n* We can turn these into _grammars_, allowing to test these functions with various combinations of recorded values.\n\nWith these techniques, we automatically obtain grammars that already invoke functions in application contexts, making our work of specifying them much easier. ",
"_____no_output_____"
],
[
"## Background\n\nThe idea of using generator functions to generate input structures was first explored in QuickCheck \\cite{Claessen2000}. A very nice implementation for Python is the [hypothesis package](https://hypothesis.readthedocs.io/en/latest/) which allows to write and combine data structure generators for testing APIs.\n\n",
"_____no_output_____"
],
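[
"As a quick taste of that style, here is a minimal property-based test of `sqrt()` written with hypothesis (a sketch; it assumes the `hypothesis` package is installed, and the property shown is our own illustration):",
"_____no_output_____"
],
[
"from math import sqrt\n\nfrom hypothesis import given\nimport hypothesis.strategies as st\n\n@given(st.integers(min_value=0, max_value=10000))\ndef test_sqrt_inverts_square(n):\n    # Property: squaring the root recovers the input (within float tolerance).\n    assert abs(sqrt(n) ** 2 - n) < 1e-6\n\ntest_sqrt_inverts_square()  # hypothesis generates and runs many inputs",
"_____no_output_____"
],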
[
"## Exercises\n\nThe exercises for this chapter combine the above techniques with fuzzing techniques introduced earlier.",
"_____no_output_____"
],
[
"### Exercise 1: Deep Arguments\n\nIn the example generating oracles for `urlparse()`, important elements such as `authority` or `port` are not checked. Enrich `URLPARSE_ORACLE_GRAMMAR` with post-expansion functions that store the generated elements in a symbol table, such that they can be accessed when generating the assertions.",
"_____no_output_____"
],
[
"**Solution.** Left to the reader.",
"_____no_output_____"
],
[
"### Exercise 2: Covering Argument Combinations\n\nIn the chapter on [configuration testing](ConfigurationFuzzer.ipynb), we also discussed _combinatorial testing_ – that is, systematic coverage of _sets_ of configuration elements. Implement a scheme that by changing the grammar, allows all _pairs_ of argument values to be covered.",
"_____no_output_____"
],
[
"**Solution.** Left to the reader.",
"_____no_output_____"
],
[
"### Exercise 3: Mutating Arguments\n\nTo widen the range of arguments to be used during testing, apply the _mutation schemes_ introduced in [mutation fuzzing](MutationFuzzer.ipynb) – for instance, flip individual bytes or delete characters from strings. Apply this either during grammar inference or as a separate step when invoking functions.",
"_____no_output_____"
],
[
"**Solution.** Left to the reader.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a820bf22d37b5caeeacf6fb2a5236c21746973f
| 31,890 |
ipynb
|
Jupyter Notebook
|
clean/notebooks/original_Random_Forest.ipynb
|
ahhuisg/ML-Data-Prep-Zoo
|
195733b5767d69c9992456f1380e6c646e30a5ae
|
[
"Apache-2.0"
] | 1 |
2022-03-19T03:29:49.000Z
|
2022-03-19T03:29:49.000Z
|
clean/notebooks/original_Random_Forest.ipynb
|
ahhuisg/ML-Data-Prep-Zoo
|
195733b5767d69c9992456f1380e6c646e30a5ae
|
[
"Apache-2.0"
] | null | null | null |
clean/notebooks/original_Random_Forest.ipynb
|
ahhuisg/ML-Data-Prep-Zoo
|
195733b5767d69c9992456f1380e6c646e30a5ae
|
[
"Apache-2.0"
] | null | null | null | 41.631854 | 261 | 0.586924 |
[
[
[
"#Copyright 2020 Vraj Shah, Arun Kumar\n#\n#Licensed under the Apache License, Version 2.0 (the \"License\");\n#you may not use this file except in compliance with the License.\n#You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n#Unless required by applicable law or agreed to in writing, software\n#distributed under the License is distributed on an \"AS IS\" BASIS,\n#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#See the License for the specific language governing permissions and\n#limitations under the License.\n\nimport pandas as pd\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import KFold\nfrom sklearn import metrics\n\nimport joblib\nimport numpy as np\n\nnp.random.seed(512)",
"_____no_output_____"
],
[
"xtrain = pd.read_csv('../data/ml/data_train.csv')\nxtest = pd.read_csv('../data/ml/data_test.csv')\n\n\nxtrain = xtrain.sample(frac=1,random_state=100).reset_index(drop=True)\nprint(len(xtrain))\n\ny_train = xtrain.loc[:,['y_act']]\ny_test = xtest.loc[:,['y_act']]",
"7936\n"
],
[
"dict_label = {\n 'numeric': 0,\n 'categorical': 1,\n 'datetime': 2,\n 'sentence': 3,\n 'url': 4,\n 'embedded-number': 5,\n 'list': 6,\n 'not-generalizable': 7,\n 'context-specific': 8\n}\n\ny_train['y_act'] = [dict_label[i] for i in y_train['y_act']]\ny_test['y_act'] = [dict_label[i] for i in y_test['y_act']]\ny_train",
"_____no_output_____"
],
[
"useStats = 1\nuseAttributeName = 1\nuseSample1 = 0\nuseSample2 = 0\n## Using descriptive stats and attribute name",
"_____no_output_____"
],
[
"def ProcessStats(data,y):\n\n data1 = data[['total_vals', 'num_nans', '%_nans', 'num_of_dist_val', '%_dist_val', 'mean', 'std_dev', 'min_val', 'max_val','has_delimiters', 'has_url', 'has_email', 'has_date', 'mean_word_count',\n 'std_dev_word_count', 'mean_stopword_total', 'stdev_stopword_total',\n 'mean_char_count', 'stdev_char_count', 'mean_whitespace_count',\n 'stdev_whitespace_count', 'mean_delim_count', 'stdev_delim_count',\n 'is_list', 'is_long_sentence']]\n data1 = data1.reset_index(drop=True)\n data1 = data1.fillna(0)\n\n y.y_act = y.y_act.astype(float)\n \n return data1\n\n\nvectorizerName = CountVectorizer(ngram_range=(2, 2), analyzer='char')\nvectorizerSample = CountVectorizer(ngram_range=(2, 2), analyzer='char')\n\ndef FeatureExtraction(data,data1,flag):\n\n arr = data['Attribute_name'].values\n arr = [str(x) for x in arr]\n \n arr1 = data['sample_1'].values\n arr1 = [str(x) for x in arr1]\n arr2 = data['sample_2'].values\n arr2 = [str(x) for x in arr2]\n arr3 = data['sample_3'].values\n arr3 = [str(x) for x in arr3] \n print(len(arr1),len(arr2))\n if flag:\n X = vectorizerName.fit_transform(arr)\n X1 = vectorizerSample.fit_transform(arr1)\n X2 = vectorizerSample.transform(arr2) \n \n else:\n X = vectorizerName.transform(arr)\n X1 = vectorizerSample.transform(arr1)\n X2 = vectorizerSample.transform(arr2) \n \n# print(f\"> Length of vectorized feature_names: {len(vectorizer.get_feature_names())}\")\n\n attr_df = pd.DataFrame(X.toarray())\n sample1_df = pd.DataFrame(X1.toarray())\n sample2_df = pd.DataFrame(X2.toarray())\n print(len(data1),len(attr_df),len(sample1_df),len(sample2_df))\n\n if useSample1: data2 = sample1_df\n if useSample2: data2 = sample2_df \n \n data2 = pd.concat([data1, attr_df], axis=1, sort=False)\n print(len(data2))\n return data2",
"_____no_output_____"
],
[
"xtrain1 = ProcessStats(xtrain,y_train)\nxtest1 = ProcessStats(xtest,y_test)\n\n\nX_train = FeatureExtraction(xtrain,xtrain1,1)\nX_test = FeatureExtraction(xtest,xtest1,0)\n\n\nX_train_new = X_train.reset_index(drop=True)\ny_train_new = y_train.reset_index(drop=True)\nX_train_new = X_train_new.values\ny_train_new = y_train_new.values\n\n\nk = 5\nkf = KFold(n_splits=k,random_state = 100,shuffle=True)\navg_train_acc,avg_test_acc = 0,0\n\nn_estimators_grid = [5,25,50,75,100,500]\nmax_depth_grid = [5,10,25,50,100,250]\n\n# n_estimators_grid = [25,50,75,100]\n# max_depth_grid = [50,100]\n\navgsc_lst,avgsc_train_lst,avgsc_hld_lst = [],[],[]\navgsc,avgsc_train,avgsc_hld = 0,0,0\n\nbest_param_count = {'n_estimator': {}, 'max_depth': {}}\ni=0\nfor train_index, test_index in kf.split(X_train_new):\n# if i==1: break\n i=i+1\n X_train_cur, X_test_cur = X_train_new[train_index], X_train_new[test_index]\n y_train_cur, y_test_cur = y_train_new[train_index], y_train_new[test_index]\n X_train_train, X_val,y_train_train,y_val = train_test_split(X_train_cur,y_train_cur, test_size=0.25,random_state=100)\n\n bestPerformingModel = RandomForestClassifier(n_estimators=10,max_depth=5,random_state=100)\n bestscore = 0\n print('='*10)\n for ne in n_estimators_grid:\n for md in max_depth_grid:\n clf = RandomForestClassifier(n_estimators=ne,max_depth=md,random_state=100)\n clf.fit(X_train_train, y_train_train.ravel())\n sc = clf.score(X_val, y_val)\n print(f\"[n_estimator: {ne}, max_depth: {md}, accuracy: {sc}]\")\n if bestscore < sc:\n bestne = ne\n bestmd = md\n bestscore = sc\n bestPerformingModel = clf\n\n if str(bestne) in best_param_count['n_estimator']:\n best_param_count['n_estimator'][str(bestne)] += 1\n else:\n best_param_count['n_estimator'][str(bestne)] = 1\n\n if str(bestmd) in best_param_count['max_depth']:\n best_param_count['max_depth'][str(bestmd)] += 1\n else:\n best_param_count['max_depth'][str(bestmd)] = 1\n\n bscr_train = bestPerformingModel.score(X_train_cur, y_train_cur)\n bscr = bestPerformingModel.score(X_test_cur, y_test_cur)\n bscr_hld = bestPerformingModel.score(X_test, y_test)\n\n avgsc_train_lst.append(bscr_train)\n avgsc_lst.append(bscr)\n avgsc_hld_lst.append(bscr_hld)\n\n avgsc_train = avgsc_train + bscr_train \n avgsc = avgsc + bscr\n avgsc_hld = avgsc_hld + bscr_hld\n\n print()\n print(f\"> Best n_estimator: {bestne} || Best max_depth: {bestmd}\")\n print(f\"> Best training score: {bscr_train}\")\n print(f\"> Best test score: {bscr}\")\n print(f\"> Best held score: {bscr_hld}\")\nprint('='*10)\n\nprint(avgsc_train_lst)\nprint(avgsc_lst)\nprint(avgsc_hld_lst)\n\nprint(avgsc_train/k)\nprint(avgsc/k)\nprint(avgsc_hld/k)\n\ny_pred = bestPerformingModel.predict(X_test)\nbscr_hld = bestPerformingModel.score(X_test, y_test)\nprint(bscr_hld)",
"7936 7936\n7936 7936 7936 7936\n7936\n1985 1985\n1985 1985 1985 1985\n1985\n==========\n[n_estimator: 5, max_depth: 5, accuracy: 0.6086956521739131]\n[n_estimator: 5, max_depth: 10, accuracy: 0.7296786389413988]\n[n_estimator: 5, max_depth: 25, accuracy: 0.8538122243226213]\n[n_estimator: 5, max_depth: 50, accuracy: 0.8827977315689981]\n[n_estimator: 5, max_depth: 100, accuracy: 0.8834278512917454]\n[n_estimator: 5, max_depth: 250, accuracy: 0.8834278512917454]\n[n_estimator: 25, max_depth: 5, accuracy: 0.6906112161310649]\n[n_estimator: 25, max_depth: 10, accuracy: 0.798361688720857]\n[n_estimator: 25, max_depth: 25, accuracy: 0.9042218021424071]\n[n_estimator: 25, max_depth: 50, accuracy: 0.9224952741020794]\n[n_estimator: 25, max_depth: 100, accuracy: 0.9193446754883428]\n[n_estimator: 25, max_depth: 250, accuracy: 0.9193446754883428]\n[n_estimator: 50, max_depth: 5, accuracy: 0.6962822936357907]\n[n_estimator: 50, max_depth: 10, accuracy: 0.7889098928796471]\n[n_estimator: 50, max_depth: 25, accuracy: 0.9067422810333964]\n[n_estimator: 50, max_depth: 50, accuracy: 0.921865154379332]\n[n_estimator: 50, max_depth: 100, accuracy: 0.9224952741020794]\n[n_estimator: 50, max_depth: 250, accuracy: 0.9224952741020794]\n[n_estimator: 75, max_depth: 5, accuracy: 0.6836798991808444]\n[n_estimator: 75, max_depth: 10, accuracy: 0.7914303717706365]\n[n_estimator: 75, max_depth: 25, accuracy: 0.9073724007561437]\n[n_estimator: 75, max_depth: 50, accuracy: 0.923755513547574]\n[n_estimator: 75, max_depth: 100, accuracy: 0.923755513547574]\n[n_estimator: 75, max_depth: 250, accuracy: 0.923755513547574]\n[n_estimator: 100, max_depth: 5, accuracy: 0.6931316950220542]\n[n_estimator: 100, max_depth: 10, accuracy: 0.7863894139886578]\n[n_estimator: 100, max_depth: 25, accuracy: 0.9054820415879017]\n[n_estimator: 100, max_depth: 50, accuracy: 0.9206049149338374]\n[n_estimator: 100, max_depth: 100, accuracy: 0.925645872715816]\n[n_estimator: 100, max_depth: 250, accuracy: 0.925645872715816]\n[n_estimator: 500, max_depth: 5, accuracy: 0.7013232514177694]\n[n_estimator: 500, max_depth: 10, accuracy: 0.7800882167611847]\n[n_estimator: 500, max_depth: 25, accuracy: 0.906112161310649]\n[n_estimator: 500, max_depth: 50, accuracy: 0.9269061121613107]\n[n_estimator: 500, max_depth: 100, accuracy: 0.9294265910523]\n[n_estimator: 500, max_depth: 250, accuracy: 0.9294265910523]\n"
],
[
"bestPerformingModel.score(X_test, y_test)",
"C:\\Users\\Admin\\Anaconda3\\envs\\zoo-test1\\lib\\site-packages\\sklearn\\utils\\validation.py:1692: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['int', 'str']. An error will be raised in 1.2.\n FutureWarning,\n"
],
[
"joblib.dump(bestPerformingModel, 'rf.joblib')\njoblib.dump(vectorizerName, 'vectorizerName.joblib')\njoblib.dump(vectorizerSample, 'vectorizerSample.joblib')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |