hexsha (stringlengths 40) | size (int64, 6-14.9M) | ext (stringclasses 1) | lang (stringclasses 1) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (list) | max_stars_count (int64, 1-191k, ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24, ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24, ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (list) | max_issues_count (int64, 1-67k, ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24, ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24, ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (list) | max_forks_count (int64, 1-105k, ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24, ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24, ⌀) | avg_line_length (float64, 2-1.04M) | max_line_length (int64, 2-11.2M) | alphanum_fraction (float64, 0-1) | cells (list) | cell_types (list) | cell_type_groups (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cb823305bd62f06dd467a5cb79a418209c568ca0
| 891,071 |
ipynb
|
Jupyter Notebook
|
All Exercise Notebooks/Fourier Transform.ipynb
|
malekaburaddaha/Computer_Vision_Nanodegree
|
da3309cd65bb09864fc2f62e1d011d55f1a09d67
|
[
"MIT"
] | null | null | null |
All Exercise Notebooks/Fourier Transform.ipynb
|
malekaburaddaha/Computer_Vision_Nanodegree
|
da3309cd65bb09864fc2f62e1d011d55f1a09d67
|
[
"MIT"
] | null | null | null |
All Exercise Notebooks/Fourier Transform.ipynb
|
malekaburaddaha/Computer_Vision_Nanodegree
|
da3309cd65bb09864fc2f62e1d011d55f1a09d67
|
[
"MIT"
] | null | null | null | 3,807.995726 | 529,904 | 0.964401 |
[
[
[
"## Fourier Transforms\n\nThe frequency components of an image can be displayed after doing a Fourier Transform (FT). An FT looks at the components of an image (edges that are high-frequency, and areas of smooth color as low-frequency), and plots the frequencies that occur as points in spectrum.\n\nIn fact, an FT treats patterns of intensity in an image as sine waves with a particular frequency, and you can look at an interesting visualization of these sine wave components [on this page](https://plus.maths.org/content/fourier-transforms-images).\n\nIn this notebook, we'll first look at a few simple image patterns to build up an idea of what image frequency components look like, and then transform a more complex image to see what it looks like in the frequency domain.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport cv2\n\n%matplotlib inline\n\n# Read in the images\nimage_stripes = cv2.imread('images/stripes.jpg')\n# Change color to RGB (from BGR)\nimage_stripes = cv2.cvtColor(image_stripes, cv2.COLOR_BGR2RGB)\n\n# Read in the images\nimage_solid = cv2.imread('images/pink_solid.jpg')\n# Change color to RGB (from BGR)\nimage_solid = cv2.cvtColor(image_solid, cv2.COLOR_BGR2RGB)\n\n\n# Display the images\nf, (ax1,ax2) = plt.subplots(1, 2, figsize=(10,5))\n\nax1.imshow(image_stripes)\nax2.imshow(image_solid)",
"_____no_output_____"
],
[
"# convert to grayscale to focus on the intensity patterns in the image\ngray_stripes = cv2.cvtColor(image_stripes, cv2.COLOR_RGB2GRAY)\ngray_solid = cv2.cvtColor(image_solid, cv2.COLOR_RGB2GRAY)\n\n# normalize the image color values from a range of [0,255] to [0,1] for further processing\nnorm_stripes = gray_stripes/255.0\nnorm_solid = gray_solid/255.0\n\n# perform a fast fourier transform and create a scaled, frequency transform image\ndef ft_image(norm_image):\n '''This function takes in a normalized, grayscale image\n and returns a frequency spectrum transform of that image. '''\n f = np.fft.fft2(norm_image)\n fshift = np.fft.fftshift(f)\n frequency_tx = 20*np.log(np.abs(fshift))\n \n return frequency_tx\n",
"_____no_output_____"
],
[
"# Call the function on the normalized images\n# and display the transforms\nf_stripes = ft_image(norm_stripes)\nf_solid = ft_image(norm_solid)\n\n# display the images\n# original images to the left of their frequency transform\nf, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20,10))\n\nax1.set_title('original image')\nax1.imshow(image_stripes)\nax2.set_title('frequency transform image')\nax2.imshow(f_stripes, cmap='gray')\n\nax3.set_title('original image')\nax3.imshow(image_solid)\nax4.set_title('frequency transform image')\nax4.imshow(f_solid, cmap='gray')\n",
"_____no_output_____"
]
],
[
[
"Low frequencies are at the center of the frequency transform image. \n\nThe transform images for these example show that the solid image has most low-frequency components (as seen by the center bright spot). \n\nThe stripes tranform image contains low-frequencies for the areas of white and black color and high frequencies for the edges in between those colors. The stripes transform image also tells us that there is one dominating direction for these frequencies; vertical stripes are represented by a horizontal line passing through the center of the frequency transform image.\n\nNext, let's see what this looks like applied to a real-world image.",
"_____no_output_____"
]
],
[
[
"# Read in an image\nimage = cv2.imread('images/birds.jpg')\n# Change color to RGB (from BGR)\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\n# convert to grayscale\ngray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n# normalize the image\nnorm_image = gray/255.0\n\nf_image = ft_image(norm_image)\n\n# Display the images\nf, (ax1,ax2) = plt.subplots(1, 2, figsize=(20,10))\n\nax1.imshow(image)\nax2.imshow(f_image, cmap='gray')",
"_____no_output_____"
]
],
[
[
"Notice that this image has components of all frequencies. You can see a bright spot in the center of the transform image, which tells us that a large portion of the image is low-frequency; this makes sense since the body of the birds and background are solid colors. The transform image also tells us that there are **two** dominating directions for these frequencies; vertical edges (from the edges of birds) are represented by a horizontal line passing through the center of the frequency transform image, and horizontal edges (from the branch and tops of the birds' heads) are represented by a vertical line passing through the center.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
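The notebook embedded in the row above builds its frequency-spectrum displays around an `ft_image` helper. Below is a minimal standalone sketch of that same transform; the random 64×64 array is only a stand-in for the normalized grayscale images (`stripes.jpg`, `pink_solid.jpg`) loaded in the notebook, which are not part of this dump.

```python
import numpy as np

def ft_image(norm_image):
    """Log-scaled magnitude spectrum of a normalized grayscale image,
    mirroring the ft_image helper in the notebook above."""
    f = np.fft.fft2(norm_image)           # 2D fast Fourier transform
    fshift = np.fft.fftshift(f)           # shift zero frequency to the center
    return 20 * np.log(np.abs(fshift))    # log scale so the spectrum is displayable

# Stand-in for a normalized grayscale image with values in [0, 1]
norm_image = np.random.rand(64, 64)
spectrum = ft_image(norm_image)
print(spectrum.shape)   # (64, 64); low frequencies sit at the center of this array
```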
cb823783c29ac1f0f5b474d82c7c3aa295159fff
| 13,719 |
ipynb
|
Jupyter Notebook
|
books/model_analysis.ipynb
|
CarlosPena00/pytorch-unet
|
8365bace23e4b04b9c5b75cd6720807ea8cac5ab
|
[
"MIT"
] | null | null | null |
books/model_analysis.ipynb
|
CarlosPena00/pytorch-unet
|
8365bace23e4b04b9c5b75cd6720807ea8cac5ab
|
[
"MIT"
] | null | null | null |
books/model_analysis.ipynb
|
CarlosPena00/pytorch-unet
|
8365bace23e4b04b9c5b75cd6720807ea8cac5ab
|
[
"MIT"
] | null | null | null | 36.978437 | 161 | 0.501057 |
[
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport os\nimport sys\nimport numpy as np\nimport pandas as pd\nimport csv\nimport cv2\n\nimport torch\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms, utils\nimport torchvision\nfrom skimage import io, transform\nfrom skimage import color\nimport scipy.misc\nimport scipy.ndimage as ndi\nfrom glob import glob\nfrom pathlib import Path\nfrom pytvision import visualization as view\nfrom pytvision.transforms import transforms as mtrans\nfrom tqdm import tqdm\nsys.path.append('../')\nfrom torchlib.datasets import dsxbdata\nfrom torchlib.datasets.dsxbdata import DSXBExDataset, DSXBDataset\nfrom torchlib.datasets import imageutl as imutl\nfrom torchlib import utils\nfrom torchlib.models import unetpad\nfrom torchlib.metrics import get_metrics\nimport matplotlib\nimport matplotlib.pyplot as plt\n#matplotlib.style.use('fivethirtyeight')\n\n# Ignore warnings\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nplt.ion() # interactive mode\nfrom pytvision.transforms import transforms as mtrans\nfrom torchlib import metrics\n\nfrom torchlib.segneuralnet import SegmentationNeuralNet\nfrom torchlib import post_processing_func",
"_____no_output_____"
],
[
"\nmap_post = post_processing_func.MAP_post()\nth_post = post_processing_func.TH_post()\nwts_post = post_processing_func.WTS_post()\n\nnormalize = mtrans.ToMeanNormalization(\n mean = (0.485, 0.456, 0.406), \n std = (0.229, 0.224, 0.225), \n )\n\nclass NormalizeInverse(torchvision.transforms.Normalize):\n \"\"\"\n Undoes the normalization and returns the reconstructed images in the input domain.\n \"\"\"\n\n def __init__(self, mean = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225)):\n mean = torch.as_tensor(mean)\n std = torch.as_tensor(std)\n std_inv = 1 / (std + 1e-7)\n mean_inv = -mean * std_inv\n super().__init__(mean=mean_inv, std=std_inv)\n\n def __call__(self, tensor):\n return super().__call__(tensor.clone())\n\nn = NormalizeInverse()\n\ndef get_simple_transforms(pad=0):\n return transforms.Compose([\n #mtrans.CenterCrop( (1008, 1008) ),\n mtrans.ToPad( pad, pad, padding_mode=cv2.BORDER_CONSTANT ),\n mtrans.ToTensor(),\n normalize, \n ])\n\n\ndef get_flip_transforms(pad=0):\n return transforms.Compose([\n #mtrans.CenterCrop( (1008, 1008) ),\n mtrans.ToRandomTransform( mtrans.VFlip(), prob=0.5 ),\n mtrans.ToRandomTransform( mtrans.HFlip(), prob=0.5 ),\n \n mtrans.ToPad( pad, pad, padding_mode=cv2.BORDER_CONSTANT ),\n mtrans.ToTensor(),\n normalize, \n ])\n\ndef tensor2image(tensor, norm_inverse=True):\n if tensor.dim() == 4:\n tensor = tensor[0]\n if norm_inverse:\n tensor = n(tensor)\n img = tensor.cpu().numpy().transpose(1,2,0)\n img = (img * 255).clip(0, 255).astype(np.uint8)\n return img\n\ndef show(src, titles=[], suptitle=\"\", \n bwidth=4, bheight=4, save_file=False,\n show_axis=True, show_cbar=False, last_max=0):\n\n num_cols = len(src)\n \n plt.figure(figsize=(bwidth * num_cols, bheight))\n plt.suptitle(suptitle)\n\n for idx in range(num_cols):\n plt.subplot(1, num_cols, idx+1)\n if not show_axis: plt.axis(\"off\")\n if idx < len(titles): plt.title(titles[idx])\n \n if idx == num_cols-1 and last_max:\n plt.imshow(src[idx]*1, vmax=last_max, vmin=0)\n else:\n plt.imshow(src[idx]*1)\n if type(show_cbar) is bool:\n if show_cbar: plt.colorbar()\n elif idx < len(show_cbar) and show_cbar[idx]:\n plt.colorbar()\n \n plt.tight_layout()\n if save_file:\n plt.savefig(save_file)\n \ndef show2(src, titles=[], suptitle=\"\", \n bwidth=4, bheight=4, save_file=False,\n show_axis=True, show_cbar=False, last_max=0):\n\n num_cols = len(src)//2\n \n plt.figure(figsize=(bwidth * num_cols, bheight*2))\n plt.suptitle(suptitle)\n\n for idx in range(num_cols*2):\n plt.subplot(2, num_cols, idx+1)\n if not show_axis: plt.axis(\"off\")\n if idx < len(titles): plt.title(titles[idx])\n \n if idx == num_cols-1 and last_max:\n plt.imshow(src[idx]*1, vmax=last_max, vmin=0)\n else:\n plt.imshow(src[idx]*1)\n if type(show_cbar) is bool:\n if show_cbar: plt.colorbar()\n elif idx < len(show_cbar) and show_cbar[idx]:\n plt.colorbar()\n \n plt.tight_layout()\n if save_file:\n plt.savefig(save_file)\n \ndef get_diversity_map(preds, gt_predictionlb, th=0.5):\n max_iou = 0\n diversity_map = np.zeros_like(gt_predictionlb)\n for idx_gt in range(1, gt_predictionlb.max()):\n roi = (gt_predictionlb==idx_gt)\n max_iou = 0\n\n for predlb in preds:\n for idx_pred in range(1, predlb.max()):\n roi_pred = (predlb==idx_pred)\n union = roi.astype(int) + roi_pred.astype(int)\n val, freq = np.unique(union, return_counts=True)\n\n if len(val)==3:\n iou = freq[2]/(freq[1]+freq[2])\n if iou > max_iou:\n max_iou = iou\n if max_iou > th: break\n if max_iou >th:\n diversity_map += roi\n return diversity_map",
"_____no_output_____"
],
[
"pathdataset = os.path.expanduser( '/home/chcp/Datasets' )\nnamedataset = 'Seg33_1.0.4'\nnamedataset = 'Seg1009_0.3.2'\n#namedataset = 'Bfhsc_1.0.0'\n#'Segments_Seg1009_0.3.2_unetpad_jreg__adam_map_ransac2_1_7_1'\n\n#namedataset = 'FluoC2DLMSC_0.0.1'\nsub_folder = 'test'\nfolders_images = 'images'\nfolders_contours = 'touchs'\nfolders_weights = 'weights'\nfolders_segment = 'outputs'\nnum_classes = 2\nnum_channels = 3\npad = 0\npathname = pathdataset + '//' + namedataset\nsubset = 'test'",
"_____no_output_____"
],
[
"def ransac_step2(net, inputs, targets, tag=None, max_deep=3, verbose=False):\n srcs = inputs[:, :3]\n segs = inputs[:, 3:]\n lv_segs = segs#.clone()\n\n first = True\n final_loss = 0.0\n for lv in range(max_deep):\n n_segs = segs.shape[1]\n new_segs = []\n actual_c = 7 ** (max_deep - lv)\n if verbose: print(segs.shape, actual_c)\n actual_seg_ids = np.random.choice(range(n_segs), size=actual_c)\n step_segs = segs[:, actual_seg_ids]\n\n for idx in range(0, actual_c, 7):\n mini_inp = torch.cat((srcs, step_segs[:, idx:idx+7]), dim=1)\n\n\n mini_out = net(mini_inp)\n new_segs.append(mini_out.argmax(1, keepdim=True))\n\n segs = torch.cat(new_segs, dim=1).float()\n\n return final_loss, mini_out",
"_____no_output_____"
],
[
"model_list = [Path(url).name for url in glob(r'/home/chcp/Code/pytorch-unet/out/SEG1009/Segments_Seg1009_0.3.2_unetpad_jreg__adam_map_ransac2_1_7_1*')]\nfor model_url_base in tqdm(model_list):\n pathmodel = r'/home/chcp/Code/pytorch-unet/out/SEG1009/'\n ckpt = r'/models/model_best.pth.tar'\n\n net = SegmentationNeuralNet(\n patchproject=pathmodel, \n nameproject=model_url_base, \n no_cuda=True, parallel=False,\n seed=2021, print_freq=False,\n gpu=True\n )\n\n if net.load( pathmodel+model_url_base+ckpt ) is not True:\n assert(False)\n Path(f\"extra/{model_url_base}\").mkdir(exist_ok=True, parents=True)\n\n for subset in ['test']:\n \n test_data = dsxbdata.ISBIDataset(\n pathname, \n subset, \n folders_labels=f'labels{num_classes}c',\n count=None,\n num_classes=num_classes,\n num_channels=num_channels,\n transform=get_simple_transforms(pad=0),\n use_weight=False,\n weight_name='',\n load_segments=True,\n shuffle_segments=True,\n use_ori=1\n )\n \n\n\n test_loader = DataLoader(test_data, batch_size=1, shuffle=False, \n num_workers=0, pin_memory=True, drop_last=False)\n\n softmax = torch.nn.Softmax(dim=0)\n \n wpq, wsq, wrq, total_cells = 0, 0, 0, 0\n\n for idx, sample in enumerate(test_loader):\n inputs, labels = sample['image'], sample['label']\n \n _, outputs = ransac_step2(net, inputs, labels)\n amax = outputs[0].argmax(0)\n view_inputs = tensor2image(inputs[0, :3])\n view_labels = labels[0].argmax(0)\n prob = outputs[0] / outputs[0].sum(0)\n \n \n results, n_cells, preds = get_metrics(labels, outputs, post_label='map')\n predictionlb, prediction, region, output = preds\n \n wpq += results['pq'] * n_cells\n wsq += results['sq'] * n_cells\n wrq += results['rq'] * n_cells\n total_cells += n_cells\n \n res_str = f\"Nreal {n_cells} | Npred {results['n_cells']} | PQ {results['pq']:0.2f} \" + \\\n f\"| SQ {results['sq']:0.2f} | RQ {results['rq']:0.2f}\"\n \n show2([view_inputs, view_labels, amax, predictionlb, prob[0], prob[1]], show_axis=False, suptitle=res_str,\n show_cbar=[False, False, False, False, True, True, True, True], save_file=f\"extra/{model_url_base}/{namedataset}_{subset}_{idx}.png\",\n titles=['Original', 'Label', 'MAP', 'Cells', 'Prob 0', 'Prob 1'], bheight=4.5)\n \n\n row = [namedataset, subset, model_url_base, wpq/total_cells, wsq/total_cells, wrq/total_cells, total_cells]\n row = list(map(str, row))\n header = [\"dataset\", 'subset', 'model', 'WPQ', 'WSQ', \"WRQ\", \"Cells\"]\n save_file=f\"extra/{model_url_base}\"\n \n summary_log = \"extra/summary.csv\"\n \n write_header = not Path(summary_log).exists()\n with open(summary_log, 'a') as f:\n if write_header:\n f.writelines(','.join(header)+'\\n')\n f.writelines(','.join(row)+'\\n')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb827aafb5eaa528cf65db789a3c6bd0a861e318
| 4,939 |
ipynb
|
Jupyter Notebook
|
mytests/visualize_train.ipynb
|
charithmu/Open3D-ML
|
8f11ef7416b8f3437d9735603b6e18ac7f8e7c96
|
[
"MIT"
] | 3 |
2021-03-18T17:09:32.000Z
|
2021-06-26T20:58:12.000Z
|
mytests/visualize_train.ipynb
|
charithmu/Open3D-ML
|
8f11ef7416b8f3437d9735603b6e18ac7f8e7c96
|
[
"MIT"
] | null | null | null |
mytests/visualize_train.ipynb
|
charithmu/Open3D-ML
|
8f11ef7416b8f3437d9735603b6e18ac7f8e7c96
|
[
"MIT"
] | 1 |
2021-06-26T11:04:29.000Z
|
2021-06-26T11:04:29.000Z
| 27.747191 | 114 | 0.582304 |
[
[
[
"import argparse\nimport copy\nimport os\nimport os.path as osp\nimport pprint\nimport sys\nimport time\nfrom pathlib import Path\n\nimport open3d.ml as _ml3d\nimport open3d.ml.tf as ml3d\nimport yaml\nfrom open3d.ml.datasets import S3DIS, SemanticKITTI, SmartLab\nfrom open3d.ml.tf.models import RandLANet\nfrom open3d.ml.tf.pipelines import SemanticSegmentation\nfrom open3d.ml.utils import Config, get_module",
"_____no_output_____"
],
[
"randlanet_smartlab_cfg = \"/home/threedee/repos/Open3D-ML/ml3d/configs/randlanet_smartlab.yml\"\nrandlanet_semantickitti_cfg = \"/home/threedee/repos/Open3D-ML/ml3d/configs/randlanet_semantickitti.yml\"\nrandlanet_s3dis_cfg = \"/home/threedee/repos/Open3D-ML/ml3d/configs/randlanet_s3dis.yml\"",
"_____no_output_____"
],
[
"cfg = _ml3d.utils.Config.load_from_file(randlanet_smartlab_cfg)\n\n# construct a dataset by specifying dataset_path\ndataset = ml3d.datasets.SmartLab(**cfg.dataset)\n\n# get the 'all' split that combines training, validation and test set\nsplit = dataset.get_split(\"training\")\n\n# print the attributes of the first datum\nprint(split.get_attr(0))\n\n# print the shape of the first point cloud\nprint(split.get_data(0)[\"point\"].shape)\n\n# for idx in range(split.__len__()):\n# print(split.get_data(idx)[\"point\"].shape[0])",
"_____no_output_____"
],
[
"# show the first 100 frames using the visualizer\nvis = ml3d.vis.Visualizer()\nvis.visualize_dataset(dataset, \"training\") # , indices=range(100)",
"_____no_output_____"
],
[
"cfg = _ml3d.utils.Config.load_from_file(randlanet_s3dis_cfg)",
"_____no_output_____"
],
[
"dataset = S3DIS(\"/home/charith/datasets/S3DIS/\", use_cache=True)\n\nmodel = RandLANet(**cfg.model)\n\npipeline = SemanticSegmentation(model=model, dataset=dataset, max_epoch=100)\n\npipeline.cfg_tb = {\n \"readme\": \"readme\",\n \"cmd_line\": \"cmd_line\",\n \"dataset\": pprint.pformat(\"S3DIS\", indent=2),\n \"model\": pprint.pformat(\"RandLANet\", indent=2),\n \"pipeline\": pprint.pformat(\"SemanticSegmentation\", indent=2),\n}",
"_____no_output_____"
],
[
"pipeline.run_train()",
"_____no_output_____"
],
[
"# Inference and test example\nfrom open3d.ml.tf.models import RandLANet\nfrom open3d.ml.tf.pipelines import SemanticSegmentation\n\nPipeline = get_module(\"pipeline\", \"SemanticSegmentation\", \"tf\")\nModel = get_module(\"model\", \"RandLANet\", \"tf\")\nDataset = get_module(\"dataset\", \"SemanticKITTI\")\n\nRandLANet = Model(ckpt_path=args.path_ckpt_randlanet)\n\n# Initialize by specifying config file path\nSemanticKITTI = Dataset(args.path_semantickitti, use_cache=False)\n\npipeline = Pipeline(model=RandLANet, dataset=SemanticKITTI)\n\n# inference\n# get data\ntrain_split = SemanticKITTI.get_split(\"train\")\ndata = train_split.get_data(0)\n# restore weights\n\n# run inference\nresults = pipeline.run_inference(data)\nprint(results)\n\n# test\npipeline.run_test()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb827ef1ddd74e13d4d76d93a055466cd389b363
| 5,643 |
ipynb
|
Jupyter Notebook
|
Pynq-ZU/base/notebooks/pmod/pmod_pwm.ipynb
|
cathalmccabe/PYNQ-ZU
|
31f8ddae58cff33fbcdb48f9b43c797d1ae452ef
|
[
"BSD-3-Clause"
] | 6 |
2021-03-02T20:47:07.000Z
|
2022-03-20T01:11:11.000Z
|
Pynq-ZU/base/notebooks/pmod/pmod_pwm.ipynb
|
cathalmccabe/PYNQ-ZU
|
31f8ddae58cff33fbcdb48f9b43c797d1ae452ef
|
[
"BSD-3-Clause"
] | 3 |
2021-04-30T13:04:18.000Z
|
2021-10-07T07:31:53.000Z
|
Pynq-ZU/base/notebooks/pmod/pmod_pwm.ipynb
|
cathalmccabe/PYNQ-ZU
|
31f8ddae58cff33fbcdb48f9b43c797d1ae452ef
|
[
"BSD-3-Clause"
] | 8 |
2021-04-24T12:05:17.000Z
|
2022-03-18T09:05:53.000Z
| 28.790816 | 207 | 0.557682 |
[
[
[
"# Pmod PWM\n\nIn this notebook, The Pmod PWM driver is exercised. Specifically, An AXI Timer is used to a generate pulse width modulated (PWM) signal. \n\nTo see the results of this notebook, you will need a [Digilent Analog Discovery 2](http://store.digilentinc.com/analog-discovery-2-100msps-usb-oscilloscope-logic-analyzer-and-variable-power-supply/)\n\n<td> <img src=\"https://reference.digilentinc.com/_media/reference/instrumentation/analog-discovery-2/analog-discovery-2-3.png\n\" alt=\"Drawing\" style=\"width: 250px;\"/> </td>\n\nand [WaveForms 2015](https://reference.digilentinc.com/waveforms3#newest)\n\n<td> <img src=\"https://reference.digilentinc.com/_media/reference/software/waveforms/waveforms-3/waveforms3-0.png\" alt=\"Drawing\" style=\"width: 250px;\"/> </td>",
"_____no_output_____"
],
[
"## 1. Instantiation\nImport overlay and instantiate Pmod_PWM class. ",
"_____no_output_____"
]
],
[
[
"from pynq.overlays.base import BaseOverlay\nbase = BaseOverlay(\"base.bit\")",
"_____no_output_____"
]
],
[
[
"## 2. Connect the Analog Discovery\nIn this example, we choose the Digilent Analog Discovery 2 as the logic monitor. \n\n* Connect channel 0 of the Analog Discovery to pin 0 on PMODA interface. \n* Connect ground of the Analog Discovery to `GND` on PMODA interface. \n\nThis example uses PMODA interface. In order to use PMODB interface, users can replace PMODA to PMODB in the examples below. Similarly, users can change the pin number.",
"_____no_output_____"
]
],
[
[
"from pynq.lib import Pmod_PWM\n\npwm = Pmod_PWM(base.PMODA,0)",
"_____no_output_____"
]
],
[
[
"## 3. Generate a clock of $50\\%$ duty cycle and $10\\,\\mu$s period\n\nIn this example, we generate a $10\\,\\mu$s clocks with $50\\%$ duty cycle for 4 seconds and the stop. Issuing stop command stops both timer sub-modules.\n\nUsers have to choose channel 1 for waveform display in the scope. Make sure that the triggering level is about 100 mV. \n\nThe output would look like this:\n\n<img src=\"data/pwm_50_duty_cycle.jpg\" width=\"791px\"/>",
"_____no_output_____"
]
],
[
[
"import time\n\n# Generate a 10 us clocks with 50% duty cycle\nperiod=10\nduty=50\npwm.generate(period,duty)\n\n# Sleep for 4 seconds and stop the timer\ntime.sleep(4)\npwm.stop()",
"_____no_output_____"
]
],
[
[
"## 4. Generate a clock of $25\\%$ duty cycle and $20\\,\\mu$s period\n\nRepeating the above test for another set of parameters. The output would look like this:\n\n<img src=\"data/pwm_25_duty_cycle.jpg\" width=\"791px\"/>",
"_____no_output_____"
]
],
[
[
"import time\n\n# Generate a 20 us clocks with 25% duty cycle\nperiod=20\nduty=25\npwm.generate(period,duty)\n\n# Sleep for 5 seconds and stop the timer\ntime.sleep(5)\npwm.stop()",
"_____no_output_____"
]
],
[
[
"Copyright (C) 2020 Xilinx, Inc",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb8281375c525d35e7776c9abc9377d35c633fea
| 8,122 |
ipynb
|
Jupyter Notebook
|
RockPaperScissors.ipynb
|
TopHatBird/RockPaperScissors
|
0e0d7c86fcdf28c7d7400dadf6b0cae07ab31a7d
|
[
"MIT"
] | null | null | null |
RockPaperScissors.ipynb
|
TopHatBird/RockPaperScissors
|
0e0d7c86fcdf28c7d7400dadf6b0cae07ab31a7d
|
[
"MIT"
] | null | null | null |
RockPaperScissors.ipynb
|
TopHatBird/RockPaperScissors
|
0e0d7c86fcdf28c7d7400dadf6b0cae07ab31a7d
|
[
"MIT"
] | null | null | null | 41.438776 | 192 | 0.522408 |
[
[
[
"import random\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"valid_RPS_actions = [0, 1, 2] # Signifying Rock, Paper, or Scissors\n\ndef playRPS(action_p1, action_p2):\n if (action_p1 not in valid_RPS_actions) or (action_p2 not in valid_RPS_actions):\n raise Exception(\"Invalid Move Detected.\")\n return -100\n \n if action_p1 == action_p2: # If there is a draw, issue the agent a small penalty.\n return (-2, 0)\n \n if (action_p1 == 0 and action_p2 == 1) or (action_p1 == 1 and action_p2 == 2) or (action_p1 == 2 and action_p2 == 0): # The ways the agent could lose against player 2.\n return (-10, 10)\n else:\n return (10, -10)",
"_____no_output_____"
],
[
"# Agent (a.k.a. player 1 settings)\nagnt_hist = []\n\npseudo_probs = [[333334, 333333, 333333]]\n\n# The agent essentially has 'pseudo probabilites' attached to the actions it can take.\n# These pseudo probabilies are adjusted through the incentives, but they are separate from the rewards.\n# The sum of the pseudo probabilities at any point should be the sum of the pseudo probabilities array.\nagnt_preferences = np.concatenate((np.array(valid_RPS_actions, ndmin=2), pseudo_probs), axis = 0)\n\n# This value affects the pseudo-probability distribution upon wins, losses, and draws.\nincentive_factor = 10;\n\n# Easier to use argmax by stuffing the eps_greedy method here.\ndef eps_greedy(prob):\n if prob >= random.random():\n return random.randint(0, len(valid_RPS_actions) - 1)\n else:\n return np.argmax(agnt_preferences[1, :])\n\n# Player 2 Settings\n# p2_history = [] <-- May be important provided \np2_preferences = [0] # A list that describes the possible options available to player 2. p2_pref = [0] means player 2 would only choose rock. [0, 2] would imply rock and scissors.\nadjusted_p2_preferences = [2] # This was used to swap p2's behavior part of the way through training.\n\n# Exploration rate of algorithm starting out when using epsilon greedy\ninitial_greed_rate = 0.999\nrunning_gr = initial_greed_rate\n\n# Minimum value for exploration rate\nmin_greed_rate = 0.0001\n\n# The number of sets of rounds of games that will be played.\nepisodes = 10000\n\n# How many times to play the game per episode.\nrounds = 100\n\nfor epis in range(episodes):\n reward_total = 0\n running_gr = 0.999 ** (epis + 1)\n\n # After so many episodes, p2 changes its behavior.\n if epis >= 2500:\n p2_preferences = adjusted_p2_preferences\n\n # The agent uses an epsilon greedy approach to pick its next move.\n for rnds in range(rounds):\n if running_gr >= min_greed_rate:\n agnt_pick = eps_greedy(running_gr)\n else:\n agnt_pick = eps_greedy(min_greed_rate)\n \n # Player 2 picks randomly - A good future implementation would be to have it choose systematically.\n p2_pick = random.choice(p2_preferences)\n results = playRPS(agnt_pick, p2_pick)\n\n # This section updates the agent's probabilities for selecting a winning action.\n\n # Disincentivize losing actions.\n if results == (-10, 10):\n if agnt_pick == 0:\n selection = random.choice([1, 2])\n agnt_preferences[1, 0] = agnt_preferences[1, 0] - incentive_factor\n elif agnt_pick == 1:\n selection = random.choice([0, 2])\n agnt_preferences[1, 1] = agnt_preferences[1, 1] - incentive_factor\n elif agnt_pick == 2:\n selection = random.choice([0, 1])\n agnt_preferences[1, 2] = agnt_preferences[1, 2] - incentive_factor\n else:\n raise Exception(\"Invalid pick happened somewhere...\") \n \n agnt_preferences[1, selection] = agnt_preferences[1, selection] + incentive_factor\n\n # Incentivize winning actions.\n if results == (10, -10):\n if agnt_pick == 0:\n selection = random.choice([1, 2])\n agnt_preferences[1, 0] = agnt_preferences[1, 0] + incentive_factor\n elif agnt_pick == 1:\n selection = random.choice([0, 2])\n agnt_preferences[1, 1] = agnt_preferences[1, 1] + incentive_factor\n elif agnt_pick == 2:\n selection = random.choice([0, 1])\n agnt_preferences[1, 2] = agnt_preferences[1, 2] + incentive_factor\n else:\n raise Exception(\"Invalid pick happened somewhere...\")\n\n agnt_preferences[1, selection] = agnt_preferences[1, selection] - incentive_factor\n\n # Disincentivize actions that lead to a draw.\n if results == (-2, 0):\n if p2_pick == 0:\n selection = 
random.choice([1, 2])\n agnt_preferences[1, 0] = agnt_preferences[1, 0] - incentive_factor\n elif p2_pick == 1:\n selection = random.choice([0, 2])\n agnt_preferences[1, 1] = agnt_preferences[1, 1] - incentive_factor\n elif p2_pick == 2:\n selection = random.choice([0, 1])\n agnt_preferences[1, 2] = agnt_preferences[1, 2] - incentive_factor\n else:\n raise Exception(\"Invalid pick happened somewhere...\")\n\n agnt_preferences[1, selection] = agnt_preferences[1, selection] + incentive_factor\n\n \n reward_total += results[0]\n\n agnt_hist.append(reward_total)\n \n",
"_____no_output_____"
],
[
"# Analytics Section\nplt.plot(agnt_hist)\nplt.xlabel('Episode Number', fontsize = 24)\nplt.ylabel('Reward Value', fontsize = 24)\nplt.title('Trials with Rock-Paper-Scissors\\nReward Value vs. Episode\\n[ Opponent Picks Rock for Some Time then Scissors ]', fontsize = 24)\nfig = plt.gcf()\nfig.set_size_inches(18.5, 10.5)\nprint('Distribution of Agent Preferences\\n[Rock Paper Scissors]: ',agnt_preferences[1, :])",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
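The RockPaperScissors notebook in the row above drives exploration with an epsilon-greedy rule over "pseudo-probability" preferences, decaying epsilon as 0.999^episode with a floor of 0.0001. A self-contained sketch of just that selection rule follows (slightly rewritten for clarity; the uniform preference vector is a placeholder assumption, not the agent's learned values).

```python
import random
import numpy as np

valid_RPS_actions = [0, 1, 2]                # rock, paper, scissors
preferences = np.array([1/3, 1/3, 1/3])      # placeholder for the agent's pseudo-probabilities

def eps_greedy(preferences, epsilon):
    """With probability epsilon pick a random action, otherwise the currently preferred one."""
    if random.random() < epsilon:
        return random.randint(0, len(valid_RPS_actions) - 1)
    return int(np.argmax(preferences))

# Decay schedule used in the notebook: 0.999 ** (episode + 1), floored at 0.0001
for episode in range(5):
    epsilon = max(0.999 ** (episode + 1), 0.0001)
    print(episode, eps_greedy(preferences, epsilon))
```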
cb828ac989f5a4ba2774609c03fb8eea363c04ca
| 19,633 |
ipynb
|
Jupyter Notebook
|
notebooks/classification.ipynb
|
hejj16/Landscape-StyleGAN
|
a93cd32b588ab21da9d7589e705ca6f09db18408
|
[
"MIT"
] | 1 |
2022-01-04T17:05:20.000Z
|
2022-01-04T17:05:20.000Z
|
notebooks/classification.ipynb
|
hejj16/Landscape-StyleGAN
|
a93cd32b588ab21da9d7589e705ca6f09db18408
|
[
"MIT"
] | null | null | null |
notebooks/classification.ipynb
|
hejj16/Landscape-StyleGAN
|
a93cd32b588ab21da9d7589e705ca6f09db18408
|
[
"MIT"
] | 1 |
2022-03-28T02:08:58.000Z
|
2022-03-28T02:08:58.000Z
| 82.491597 | 14,228 | 0.83217 |
[
[
[
"import pickle\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"with open('train_x', 'rb') as file:\n s = pickle.load(file)\nz = pickle.loads(s)\nwith open('train_label', 'rb') as file:\n s = pickle.load(file)\nlabel = pickle.loads(s)",
"_____no_output_____"
],
[
"# pictures are labeled manually, 0:night with Aurora, 1:night, 2:dawn/dust, 3:daytime.\n# build 3 classfiers, 1. classify pictures into day/night, 2. classify night pictures by aurora or not, and 3. classify into dawn/dust or daytime\n\n# 1.\nlabel[label == \"0\"] = \"0\"\nlabel[label == \"1\"] = \"0\"\nlabel[label == \"2\"] = \"1\"\nlabel[label == \"3\"] = \"1\"\n\n# 2.\n# label = label[np.logical_or(label == \"0\", label == \"1\")] \n\n# 3.\n# label = label[np.logical_or(label == \"2\", label == \"3\")] \n# label[label == \"2\"] = \"0\"\n# label[label == \"3\"] = \"1\"",
"_____no_output_____"
],
[
"train_x, test_x, train_label, test_label = train_test_split(z, label[:, 1].reshape(-1, 1), test_size=0.1)",
"_____no_output_____"
],
[
"class MLP(nn.Module):\n def __init__(self):\n super(MLP,self).__init__()\n self.lr1=nn.Linear(512,50) \n self.act=nn.ReLU()\n self.lr2=nn.Linear(50,1) \n self.sm=nn.Sigmoid()\n \n def forward(self, x):\n x=self.lr1(x)\n x=self.act(x)\n x=self.lr2(x)\n x=self.sm(x)\n return x",
"_____no_output_____"
],
[
"model = MLP()\ncriterion=nn.BCELoss() \noptimizer=torch.optim.Adam(model.parameters(),lr=5e-4, weight_decay=0.25)",
"_____no_output_____"
],
[
"losses = []",
"_____no_output_____"
],
[
"for epoch in range(3000):\n pred = model(torch.tensor(train_x).float()[train_label[:, 0] != \"4\"])\n loss = criterion(pred, torch.tensor(train_label.astype(np.float)).float()[train_label[:, 0] != \"4\"])\n loss.backward()\n optimizer.step()\n losses.append(loss.item())",
"_____no_output_____"
],
[
"test_p = model(torch.tensor(test_x).float()[test_label[:, 0] != \"4\"])\ntest_p = np.where(test_p > 0.5, 1, 0).reshape(-1)",
"_____no_output_____"
],
[
"test_t = torch.tensor(test_label.astype(np.float)).float()[test_label[:, 0] != \"4\"].reshape(-1).numpy()",
"_____no_output_____"
],
[
"np.mean(test_t == test_p)",
"_____no_output_____"
],
[
"plt.plot(losses)\nplt.show()",
"_____no_output_____"
],
[
"torch.save(model.state_dict(), \"mlp_for_ALL.pkl\") # mlp_for_Night.pkl / mlp_for_Day.pkl",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
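The classification notebook in the row above trains a 512→50→1 sigmoid MLP with `BCELoss`, but its training loop never calls `optimizer.zero_grad()`, so gradients accumulate across epochs. A minimal version of the loop with that reset included is sketched below; the random tensors only stand in for the `train_x` latent vectors and labels, which are not available here.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(512, 50), nn.ReLU(), nn.Linear(50, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=0.25)

# Stand-in data: 256 latent vectors of width 512 with binary labels
x = torch.randn(256, 512)
y = torch.randint(0, 2, (256, 1)).float()

for epoch in range(100):
    optimizer.zero_grad()              # reset gradients each step (omitted in the cell above)
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())
```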
cb82908ee1c46ff0011976562c47925bc89e6930
| 38,412 |
ipynb
|
Jupyter Notebook
|
etl.ipynb
|
Davidcparrar/nanodegree-datamodelingpostgres
|
86d984b072869ba44189fa9ec795e516b42d899c
|
[
"MIT"
] | null | null | null |
etl.ipynb
|
Davidcparrar/nanodegree-datamodelingpostgres
|
86d984b072869ba44189fa9ec795e516b42d899c
|
[
"MIT"
] | null | null | null |
etl.ipynb
|
Davidcparrar/nanodegree-datamodelingpostgres
|
86d984b072869ba44189fa9ec795e516b42d899c
|
[
"MIT"
] | null | null | null | 31.536946 | 402 | 0.447178 |
[
[
[
"# ETL Processes\nUse this notebook to develop the ETL process for each of your tables before completing the `etl.py` file to load the whole datasets.",
"_____no_output_____"
]
],
[
[
"import os\nimport glob\nimport psycopg2\nimport pandas as pd\nfrom sql_queries import *",
"_____no_output_____"
],
[
"conn = psycopg2.connect(\"host=127.0.0.1 dbname=sparkifydb user=student password=student\")\ncur = conn.cursor()",
"_____no_output_____"
],
[
"def get_files(filepath):\n all_files = []\n for root, dirs, files in os.walk(filepath):\n files = glob.glob(os.path.join(root,'*.json'))\n for f in files :\n all_files.append(os.path.abspath(f))\n \n return all_files",
"_____no_output_____"
]
],
[
[
"# Process `song_data`\nIn this first part, you'll perform ETL on the first dataset, `song_data`, to create the `songs` and `artists` dimensional tables.\n\nLet's perform ETL on a single song file and load a single record into each table to start.\n- Use the `get_files` function provided above to get a list of all song JSON files in `data/song_data`\n- Select the first song in this list\n- Read the song file and view the data",
"_____no_output_____"
]
],
[
[
"song_files = \"data/song_data\"",
"_____no_output_____"
],
[
"filepath = get_files(song_files)[20]",
"_____no_output_____"
],
[
"df = pd.read_json(filepath, lines=True)\ndf.head()",
"_____no_output_____"
]
],
[
[
"## #1: `songs` Table\n#### Extract Data for Songs Table\n- Select columns for song ID, title, artist ID, year, and duration\n- Use `df.values` to select just the values from the dataframe\n- Index to select the first (only) record in the dataframe\n- Convert the array to a list and set it to `song_data`",
"_____no_output_____"
]
],
[
[
"song_data = df[['song_id','title','artist_id','year','duration']].values[0]\nsong_data",
"_____no_output_____"
]
],
[
[
"#### Insert Record into Song Table\nImplement the `song_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song into the `songs` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `songs` table in the sparkify database.",
"_____no_output_____"
]
],
[
[
"cur.execute(song_table_insert, song_data)\nconn.commit()",
"_____no_output_____"
]
],
[
[
"Run `test.ipynb` to see if you've successfully added a record to this table.",
"_____no_output_____"
],
[
"## #2: `artists` Table\n#### Extract Data for Artists Table\n- Select columns for artist ID, name, location, latitude, and longitude\n- Use `df.values` to select just the values from the dataframe\n- Index to select the first (only) record in the dataframe\n- Convert the array to a list and set it to `artist_data`",
"_____no_output_____"
]
],
[
[
"artist_data = df[['artist_id','artist_name','artist_location','artist_latitude','artist_longitude']].values[0]\nartist_data",
"_____no_output_____"
]
],
[
[
"#### Insert Record into Artist Table\nImplement the `artist_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song's artist into the `artists` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `artists` table in the sparkify database.",
"_____no_output_____"
]
],
[
[
"cur.execute(artist_table_insert, artist_data)\nconn.commit()",
"_____no_output_____"
]
],
[
[
"Run `test.ipynb` to see if you've successfully added a record to this table.",
"_____no_output_____"
],
[
"# Process `log_data`\nIn this part, you'll perform ETL on the second dataset, `log_data`, to create the `time` and `users` dimensional tables, as well as the `songplays` fact table.\n\nLet's perform ETL on a single log file and load a single record into each table.\n- Use the `get_files` function provided above to get a list of all log JSON files in `data/log_data`\n- Select the first log file in this list\n- Read the log file and view the data",
"_____no_output_____"
]
],
[
[
"log_files = \"data/log_data\"",
"_____no_output_____"
],
[
"filepath = get_files(log_files)[0]",
"_____no_output_____"
],
[
"df = pd.read_json(filepath, lines=True)\ndf.head()",
"_____no_output_____"
]
],
[
[
"## #3: `time` Table\n#### Extract Data for Time Table\n- Filter records by `NextSong` action\n- Convert the `ts` timestamp column to datetime\n - Hint: the current timestamp is in milliseconds\n- Extract the timestamp, hour, day, week of year, month, year, and weekday from the `ts` column and set `time_data` to a list containing these values in order\n - Hint: use pandas' [`dt` attribute](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.html) to access easily datetimelike properties.\n- Specify labels for these columns and set to `column_labels`\n- Create a dataframe, `time_df,` containing the time data for this file by combining `column_labels` and `time_data` into a dictionary and converting this into a dataframe",
"_____no_output_____"
]
],
[
[
"df = df[df.page=='NextSong']\ndf['ts'] = pd.to_datetime(df['ts'],unit='ms')\ndf.head()",
"_____no_output_____"
],
[
"t = df[\"ts\"]\nt.head()",
"_____no_output_____"
],
[
"time_data = (t.values, t.dt.hour.values, t.dt.day.values, t.dt.weekofyear.values, t.dt.month.values, t.dt.year.values,t.dt.weekday.values)\ncolumn_labels = ('start_time', 'hour', 'day', 'week', 'month', 'year', 'weekday')",
"_____no_output_____"
],
[
"data = {label:data for label, data in zip(column_labels, time_data)}\ntime_df = pd.DataFrame(data)\ntime_df.head()",
"_____no_output_____"
]
],
[
[
"#### Insert Records into Time Table\nImplement the `time_table_insert` query in `sql_queries.py` and run the cell below to insert records for the timestamps in this log file into the `time` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `time` table in the sparkify database.",
"_____no_output_____"
]
],
[
[
"for i, row in time_df.iterrows():\n cur.execute(time_table_insert, list(row))\n conn.commit()",
"_____no_output_____"
]
],
[
[
"Run `test.ipynb` to see if you've successfully added records to this table.",
"_____no_output_____"
],
[
"## #4: `users` Table\n#### Extract Data for Users Table\n- Select columns for user ID, first name, last name, gender and level and set to `user_df`",
"_____no_output_____"
]
],
[
[
"user_df = df[['userId','firstName','lastName','gender','level']]\nuser_df.head()",
"_____no_output_____"
]
],
[
[
"#### Insert Records into Users Table\nImplement the `user_table_insert` query in `sql_queries.py` and run the cell below to insert records for the users in this log file into the `users` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `users` table in the sparkify database.",
"_____no_output_____"
]
],
[
[
"for i, row in user_df.iterrows():\n cur.execute(user_table_insert, row)\n conn.commit()",
"_____no_output_____"
]
],
[
[
"Run `test.ipynb` to see if you've successfully added records to this table.",
"_____no_output_____"
],
[
"## #5: `songplays` Table\n#### Extract Data and Songplays Table\nThis one is a little more complicated since information from the songs table, artists table, and original log file are all needed for the `songplays` table. Since the log file does not specify an ID for either the song or the artist, you'll need to get the song ID and artist ID by querying the songs and artists tables to find matches based on song title, artist name, and song duration time.\n- Implement the `song_select` query in `sql_queries.py` to find the song ID and artist ID based on the title, artist name, and duration of a song.\n- Select the timestamp, user ID, level, song ID, artist ID, session ID, location, and user agent and set to `songplay_data`\n\n#### Insert Records into Songplays Table\n- Implement the `songplay_table_insert` query and run the cell below to insert records for the songplay actions in this log file into the `songplays` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `songplays` table in the sparkify database.",
"_____no_output_____"
]
],
[
[
"# song_select = \"\"\"\n# SELECT song_id, songs.artist_id FROM (\n# songs JOIN artists ON songs.artist_id = artists.artist_id)\n# WHERE title = %s AND name = %s AND duration = %s\n# \"\"\"",
"_____no_output_____"
],
[
"for index, row in df.iterrows():\n\n # get songid and artistid from song and artist tables\n cur.execute(song_select, (row.song, row.artist, row.length))\n results = cur.fetchone()\n if results:\n songid, artistid = results\n else:\n songid, artistid = None, None\n\n # insert songplay record\n songplay_data = (row[\"ts\"], row[\"userId\"], row[\"level\"], songid, artistid, row[\"sessionId\"], row[\"location\"],row[\"userAgent\"])\n cur.execute(songplay_table_insert, songplay_data)\n conn.commit()",
"_____no_output_____"
]
],
[
[
"Run `test.ipynb` to see if you've successfully added records to this table.",
"_____no_output_____"
],
[
"# Close Connection to Sparkify Database",
"_____no_output_____"
]
],
[
[
"conn.close()",
"_____no_output_____"
]
],
[
[
"# Implement `etl.py`\nUse what you've completed in this notebook to implement `etl.py`.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
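The ETL notebook in the row above converts the log files' millisecond `ts` column with `pd.to_datetime(..., unit='ms')` and then splits out the datetime components for the `time` table. A compact sketch of that step is below, using two made-up millisecond timestamps; note that `Series.dt.weekofyear`, which the notebook uses, has been removed from recent pandas releases, so `isocalendar().week` is used instead.

```python
import pandas as pd

# Hypothetical millisecond epoch timestamps standing in for the log files' ts column
ts = pd.Series([1541903636796, 1541910841796])
t = pd.to_datetime(ts, unit='ms')

time_df = pd.DataFrame({
    'start_time': t,
    'hour': t.dt.hour,
    'day': t.dt.day,
    'week': t.dt.isocalendar().week,   # replacement for the removed dt.weekofyear
    'month': t.dt.month,
    'year': t.dt.year,
    'weekday': t.dt.weekday,
})
print(time_df)
```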
cb82bf0f6825ddcd0d29c4d64fb1e0e0a88be27a
| 7,305 |
ipynb
|
Jupyter Notebook
|
models/model_{3,4}/misc-src/split-imdb-train-dataset.ipynb
|
generic-github-user/STVM_NLP_Research
|
f55553dbee52c05e546fc6798163928f9e446959
|
[
"CNRI-Python",
"Xnet",
"X11",
"RSA-MD"
] | 1 |
2021-07-02T15:00:47.000Z
|
2021-07-02T15:00:47.000Z
|
models/model_{3,4}/misc-src/split-imdb-train-dataset.ipynb
|
generic-github-user/STVM_NLP_Research
|
f55553dbee52c05e546fc6798163928f9e446959
|
[
"CNRI-Python",
"Xnet",
"X11",
"RSA-MD"
] | 32 |
2021-06-25T18:20:08.000Z
|
2021-08-20T20:41:19.000Z
|
models/model_{3,4}/misc-src/split-imdb-train-dataset.ipynb
|
generic-github-user/STVM_NLP_Research
|
f55553dbee52c05e546fc6798163928f9e446959
|
[
"CNRI-Python",
"Xnet",
"X11",
"RSA-MD"
] | null | null | null | 22.828125 | 143 | 0.52115 |
[
[
[
"**Problem**\n\nSplit IMDB training dataset into two: 20K and 5K and create two .csv files accordingly.\nBefore spliting, the training data must be shuffled to ensure class balance.",
"_____no_output_____"
]
],
[
[
"# Enable intellisense\n%config IPCompleter.greedy=True",
"_____no_output_____"
],
[
"# Import modules\nimport pandas as pd\nimport numpy as np\nimport bert\nimport tensorflow as tf\nimport tensorflow_hub as hub\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Input, Dense, Dropout\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\nprint(\"TensorFlow Version:\",tf.__version__)\nprint(\"Hub version: \",hub.__version__)\nprint(\"GPU is\", \"available\" if tf.config.list_physical_devices('GPU') else \"NOT AVAILABLE\")\npd.set_option('display.max_colwidth',1000)\npd.options.display.max_rows = 10000",
"TensorFlow Version: 2.2.0\nHub version: 0.9.0\nGPU is available\n"
],
[
"import sys\nsys.path.append(\"../helpers\")\nimport imdb_preprocess_functions as nist_imdb",
"_____no_output_____"
],
[
"dir(nist_imdb)",
"_____no_output_____"
],
[
"# Now load the Stanford IMDB training and test dataset.\n[df_train, df_test] = nist_imdb.get_imdb_df_data('../data/imdb_master.csv')",
"The number of rows and columns in the training dataset is: (25000, 5)\nMissing values in train dataset:\nUnnamed: 0 0\ntype 0\nreview 0\nlabel 0\nfile 0\ndtype: int64\nCheck train class balance\n1.0 12500\n0.0 12500\nName: label, dtype: int64\nThe number of rows and columns in the test dataset is: (25000, 5)\nMissing values in test dataset:\nUnnamed: 0 0\ntype 0\nreview 0\nlabel 0\nfile 0\ndtype: int64\nCheck test class balance\n1.0 12500\n0.0 12500\nName: label, dtype: int64\n"
],
[
"# Shuffle the dataset\ndf = df_train.sample(frac=1, random_state=0)\ndf.shape",
"_____no_output_____"
],
[
"SPLIT_TRAIN_SIZE = 17500",
"_____no_output_____"
],
[
"df_train_train = df[:SPLIT_TRAIN_SIZE]\ndf_train_test = df[SPLIT_TRAIN_SIZE:]",
"_____no_output_____"
],
[
"df_train_train.shape",
"_____no_output_____"
],
[
"df_train_test.shape",
"_____no_output_____"
],
[
"# Check the target class balance\ndf_train_train[nist_imdb.label_column].value_counts()",
"_____no_output_____"
],
[
"# Check the target class balance\ndf_train_test[nist_imdb.label_column].value_counts()",
"_____no_output_____"
],
[
"file_train_train = 'imdb_train_split_' + str(SPLIT_TRAIN_SIZE) + '.csv'\nfile_train_test = 'imdb_train_split_' + str(25000 - SPLIT_TRAIN_SIZE) + '.csv'\ndf_train_train.to_csv(file_train_train, index=False, columns = [nist_imdb.text_column, nist_imdb.label_column, nist_imdb.file_column])\ndf_train_test.to_csv(file_train_test, index=False, columns = [nist_imdb.text_column, nist_imdb.label_column, nist_imdb.file_column])",
"_____no_output_____"
]
]
] |
[
"raw",
"code"
] |
[
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
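The split notebook in the row above shuffles the 25,000 IMDB training rows and slices off the first 17,500, relying on the shuffle alone to keep the classes roughly balanced. A stratified split guarantees that balance instead; the tiny synthetic DataFrame below is only a stand-in for the real review data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the IMDB training frame: a text column and a binary label
df = pd.DataFrame({
    'review': [f'review {i}' for i in range(1000)],
    'label': [i % 2 for i in range(1000)],
})

# Stratifying on the label keeps both classes exactly balanced in the 70/30-style split
train_df, holdout_df = train_test_split(
    df, test_size=7500 / 25000, stratify=df['label'], random_state=0
)
print(train_df['label'].value_counts())
print(holdout_df['label'].value_counts())
```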
cb82c459e689b1575933c28151893cbaa8dc6284
| 93,286 |
ipynb
|
Jupyter Notebook
|
image_example/image_toy_ex_v1.ipynb
|
zhao-david/CDE-conformal
|
f140f767f905d1509e333dcf94fdbbe90674e665
|
[
"MIT"
] | 1 |
2022-01-14T09:41:53.000Z
|
2022-01-14T09:41:53.000Z
|
image_example/image_toy_ex_v1.ipynb
|
benjaminleroy/CDE-conformal
|
f140f767f905d1509e333dcf94fdbbe90674e665
|
[
"MIT"
] | null | null | null |
image_example/image_toy_ex_v1.ipynb
|
benjaminleroy/CDE-conformal
|
f140f767f905d1509e333dcf94fdbbe90674e665
|
[
"MIT"
] | 1 |
2021-04-26T22:10:27.000Z
|
2021-04-26T22:10:27.000Z
| 177.34981 | 38,332 | 0.912323 |
[
[
[
"import numpy as np\nimport pandas as pd\nimport scipy\nimport pickle\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport ipdb",
"_____no_output_____"
]
],
[
[
"# generate data",
"_____no_output_____"
],
[
"## 4 types of GalSim images",
"_____no_output_____"
]
],
[
[
"#### 1000 training images\n\nwith open(\"data/galsim_simulated_2500gals_lambda0.4_theta3.14159_2021-05-20-17-01.pkl\", 'rb') as handle:\n group1 = pickle.load(handle)\nwith open(\"data/galsim_simulated_2500gals_lambda0.4_theta2.3562_2021-05-20-17-42.pkl\", 'rb') as handle:\n group2 = pickle.load(handle)\nwith open(\"data/galsim_simulated_2500gals_lambda0.4_theta1.5708_2021-05-20-17-08.pkl\", 'rb') as handle:\n group3 = pickle.load(handle)\nwith open(\"data/galsim_simulated_2500gals_lambda0.4_theta0.7854_2021-05-20-17-44.pkl\", 'rb') as handle:\n group4 = pickle.load(handle)",
"_____no_output_____"
],
[
"sns.heatmap(group1['galaxies_generated'][0])\nplt.show()",
"_____no_output_____"
],
[
"sns.heatmap(group2['galaxies_generated'][0])\nplt.show()",
"_____no_output_____"
],
[
"sns.heatmap(group3['galaxies_generated'][0])\nplt.show()",
"_____no_output_____"
],
[
"sns.heatmap(group4['galaxies_generated'][0])\nplt.show()",
"_____no_output_____"
],
[
"#### 1000 test images\n\nwith open(\"data/galsim_simulated_250gals_lambda0.4_theta3.14159_2021-05-20-18-14.pkl\", 'rb') as handle:#\n test1 = pickle.load(handle)\nwith open(\"data/galsim_simulated_250gals_lambda0.4_theta2.3562_2021-05-20-18-14.pkl\", 'rb') as handle:\n test2 = pickle.load(handle)\nwith open(\"data/galsim_simulated_250gals_lambda0.4_theta1.5708_2021-05-20-18-14.pkl\", 'rb') as handle:\n test3 = pickle.load(handle)\nwith open(\"data/galsim_simulated_250gals_lambda0.4_theta0.7854_2021-05-20-18-14.pkl\", 'rb') as handle:\n test4 = pickle.load(handle)",
"_____no_output_____"
],
[
"gal_img1 = group1['galaxies_generated']\ngal_img2 = group2['galaxies_generated']\ngal_img3 = group3['galaxies_generated']\ngal_img4 = group4['galaxies_generated']\nall_gal_imgs = np.vstack([gal_img1, gal_img2, gal_img3, gal_img4])\nall_gal_imgs.shape",
"_____no_output_____"
],
[
"test_img1 = test1['galaxies_generated']\ntest_img2 = test2['galaxies_generated']\ntest_img3 = test3['galaxies_generated']\ntest_img4 = test4['galaxies_generated']\nall_test_imgs = np.vstack([test_img1, test_img2, test_img3, test_img4])\nall_test_imgs.shape",
"_____no_output_____"
],
[
"all_train_test_imgs = np.vstack([all_gal_imgs, all_test_imgs])\nall_train_test_imgs.shape",
"_____no_output_____"
],
[
"#with open('galsim_conformal_imgs_20210520.pkl', 'wb') as handle:\n# pickle.dump(all_train_test_imgs, handle, protocol=pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
]
],
[
[
"## 4 distributions with same mean and variance (gaussian, uniform, exponential, bimodal)",
"_____no_output_____"
]
],
[
[
"# N(1,1)\nz1 = np.random.normal(1, 1, size=2500)",
"_____no_output_____"
],
[
"# Unif(1-sqrt(3),1+sqrt(3))\nz2 = np.random.uniform(1-np.sqrt(3), 1+np.sqrt(3), size=2500)",
"_____no_output_____"
],
[
"# Expo(1)\nz3 = np.random.exponential(1, size=2500)",
"_____no_output_____"
],
[
"# 0.5N(0.25,0.4375) + 0.5N(1.75,0.4375)\nz4_ind = np.random.binomial(n=1, p=0.5, size=2500)\nz4 = z4_ind*np.random.normal(0.25, 0.4375, size=2500) + (1-z4_ind)*np.random.normal(1.75, 0.4375, size=2500)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(7,6))\nsns.distplot(z1, color='green', label='N(1,1)', ax=ax)\nsns.distplot(z2, label='Uniform(-0.732,2.732)', ax=ax)\nsns.distplot(z3, label='Expo(1)', ax=ax)\nsns.distplot(z4, color='purple', label='0.5N(0.25,0.4375) + 0.5N(1.75,0.4375)', bins=50, ax=ax)\n\nplt.legend(fontsize=13)\nplt.xlabel('Y', fontsize=14)\nplt.ylabel('Density', fontsize=14)\nplt.tick_params(axis='both', which='major', labelsize=12)\n\nplt.savefig('z_dists_v1.pdf')",
"C:\\Users\\dzhao\\Anaconda3\\lib\\site-packages\\seaborn\\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\nC:\\Users\\dzhao\\Anaconda3\\lib\\site-packages\\seaborn\\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\nC:\\Users\\dzhao\\Anaconda3\\lib\\site-packages\\seaborn\\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\nC:\\Users\\dzhao\\Anaconda3\\lib\\site-packages\\seaborn\\distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n"
],
[
"all_zs = np.hstack([z1, z2, z3, z4])",
"_____no_output_____"
],
[
"test_z1 = np.random.normal(1, 1, size=250)\ntest_z2 = np.random.uniform(1-np.sqrt(3), 1+np.sqrt(3), size=250)\ntest_z3 = np.random.exponential(1, size=250)\ntest_z4_ind = np.random.binomial(n=1, p=0.5, size=250)\ntest_z4 = test_z4_ind*np.random.normal(0.25, 0.4375, size=250) + (1-test_z4_ind)*np.random.normal(1.75, 0.4375, size=250)",
"_____no_output_____"
],
[
"all_test_zs = np.hstack([test_z1, test_z2, test_z3, test_z4])",
"_____no_output_____"
],
[
"all_train_test_zs = np.hstack([all_zs, all_test_zs])",
"_____no_output_____"
],
[
"#with open('z_conformal_20210520.pkl', 'wb') as handle:\n# pickle.dump(all_train_test_zs, handle, protocol=pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
]
],
[
[
"# fit neural density model",
"_____no_output_____"
],
[
"# run CDE diagnostics",
"_____no_output_____"
],
[
"# conformal approach",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
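In the image toy-example notebook above, the bimodal target 0.5·N(0.25, 0.4375) + 0.5·N(1.75, 0.4375) only matches the other three distributions' unit variance if 0.4375 is read as a component variance (0.4375 + 0.75² = 1); `np.random.normal`'s scale argument is a standard deviation, so the sketch below uses `sqrt(0.4375)` for the components. This is a quick check under that reading, not a claim about the authors' intent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

z1 = rng.normal(1, 1, n)                              # N(1, 1)
z2 = rng.uniform(1 - np.sqrt(3), 1 + np.sqrt(3), n)   # uniform: mean 1, variance 1
z3 = rng.exponential(1, n)                            # Expo(1): mean 1, variance 1
ind = rng.binomial(1, 0.5, n)
sigma = np.sqrt(0.4375)                               # component std so total variance = 0.4375 + 0.75**2 = 1
z4 = ind * rng.normal(0.25, sigma, n) + (1 - ind) * rng.normal(1.75, sigma, n)

for name, z in [('normal', z1), ('uniform', z2), ('exponential', z3), ('bimodal', z4)]:
    print(f'{name:12s} mean={z.mean():.3f} var={z.var():.3f}')
```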
cb82cd02d119b0f1d3df75f4d792847794912701
| 1,997 |
ipynb
|
Jupyter Notebook
|
ColonizingMars/AccessingData/library-indicators.ipynb
|
BryceHaley/hackathon
|
47de43b626b429dff9983add201a6bbdc6c974b2
|
[
"CC-BY-4.0"
] | 3 |
2019-12-23T14:27:17.000Z
|
2020-10-16T23:00:06.000Z
|
ColonizingMars/AccessingData/library-indicators.ipynb
|
BryceHaley/hackathon
|
47de43b626b429dff9983add201a6bbdc6c974b2
|
[
"CC-BY-4.0"
] | 22 |
2019-12-11T16:58:11.000Z
|
2021-02-25T05:42:07.000Z
|
ColonizingMars/AccessingData/library-indicators.ipynb
|
BryceHaley/hackathon
|
47de43b626b429dff9983add201a6bbdc6c974b2
|
[
"CC-BY-4.0"
] | 6 |
2019-11-07T22:14:41.000Z
|
2021-03-16T04:26:14.000Z
| 31.203125 | 417 | 0.638958 |
[
[
[
"\n\n<a href=\"https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fhackathon&branch=master&subPath=ColonizingMars/AccessingData/library-indicators.ipynb&depth=1\" target=\"_parent\"><img src=\"https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true\" width=\"123\" height=\"24\" alt=\"Open in Callysto\"/></a>",
"_____no_output_____"
],
[
"# Public Library Performance Indicators\n\nUsing [data from the Strathcona County Open Data Portal](https://data.strathcona.ca/Recreation-Culture/Library-Key-Performance-Indicators/ep8g-4kxs) we can see which public library services might be necessary.",
"_____no_output_____"
]
],
[
[
"csv_url = 'https://data.strathcona.ca/api/views/ep8g-4kxs/rows.csv'\n\nimport pandas as pd\ndf = pd.read_csv(csv_url)\ndf",
"_____no_output_____"
]
],
[
[
"[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb82d975ea463781e6b268426fa28a1483808837
| 29,501 |
ipynb
|
Jupyter Notebook
|
ds challenge/notebooks/1_2.ipynb
|
Anthonyive/DSCI-551-Project
|
b0dd4e56b168b150aacdeff83c686eec0dfbedcf
|
[
"MIT"
] | 4 |
2021-08-04T01:37:06.000Z
|
2022-02-07T07:39:08.000Z
|
ds challenge/notebooks/1_2.ipynb
|
Anthonyive/DSCI-551-Project
|
b0dd4e56b168b150aacdeff83c686eec0dfbedcf
|
[
"MIT"
] | null | null | null |
ds challenge/notebooks/1_2.ipynb
|
Anthonyive/DSCI-551-Project
|
b0dd4e56b168b150aacdeff83c686eec0dfbedcf
|
[
"MIT"
] | null | null | null | 35.372902 | 190 | 0.398732 |
[
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf_praw = pd.read_csv('../data/PRAW.csv')\ndf_reddit_api = pd.read_csv('../data/Reddit_Api.csv')",
"_____no_output_____"
]
],
[
[
"# 1. Merge two csv files by joining PRAW to Reddit Api (deduplicate the repetitive posts) ",
"_____no_output_____"
],
[
"## Inner join two data frames bases on their creation time and title",
"_____no_output_____"
]
],
[
[
"pd.merge(df_praw, df_reddit_api, how='inner', left_on=['created','title'], right_on = ['create_time','title'])",
"_____no_output_____"
]
],
[
[
"Looks like they have nothing in common.",
"_____no_output_____"
],
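One quick, hedged follow-up to the observation above: compare titles alone, in case the two pulls do overlap but their creation timestamps never match exactly. This sketch assumes the `df_praw` and `df_reddit_api` frames and their `title` columns from the cells above.

```python
# Titles shared by the two pulls, ignoring timestamps entirely
common_titles = set(df_praw['title']) & set(df_reddit_api['title'])
print(len(common_titles))
```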
[
"## Deduplicate repetitive posts",
"_____no_output_____"
]
],
[
[
"len(df_praw)",
"_____no_output_____"
],
[
"len(df_praw.drop_duplicates())",
"_____no_output_____"
]
],
[
[
"`df_praw` has no duplicates.",
"_____no_output_____"
]
],
[
[
"len(df_reddit_api)",
"_____no_output_____"
],
[
"len(df_reddit_api.drop_duplicates())",
"_____no_output_____"
]
],
[
[
"`df_reddit_api` also has no duplicates.",
"_____no_output_____"
],
[
"# 2. Use datetime to transform the UTC time to readable timestamps",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\ndef convertTime(row):\n if 'created' in row:\n ts = int(row['created'])\n elif 'create_time' in row:\n ts = int(row['create_time'])\n else:\n return None\n\n # if you encounter a \"year is out of range\" error the timestamp\n # may be in milliseconds, try `ts /= 1000` in that case\n return datetime.utcfromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')",
"_____no_output_____"
],
[
"df_praw['readable timestamp'] = df_praw.apply(convertTime, axis=1)",
"_____no_output_____"
],
[
"df_reddit_api['readable timestamp'] = df_reddit_api.apply(convertTime, axis=1)",
"_____no_output_____"
],
[
"df_praw",
"_____no_output_____"
],
[
"df_reddit_api",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb82eb127908e9787089667ba5d94b887b202ddd
| 11,990 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/wikipedia_companies-checkpoint.ipynb
|
alandgmendes/business_atlas
|
d73847781dfacee16517a708b11a771bf57b81e9
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/wikipedia_companies-checkpoint.ipynb
|
alandgmendes/business_atlas
|
d73847781dfacee16517a708b11a771bf57b81e9
|
[
"MIT"
] | 4 |
2021-04-14T19:18:46.000Z
|
2021-11-02T16:11:36.000Z
|
.ipynb_checkpoints/wikipedia_companies-checkpoint.ipynb
|
alandgmendes/business_atlas
|
d73847781dfacee16517a708b11a771bf57b81e9
|
[
"MIT"
] | 3 |
2021-09-01T03:05:21.000Z
|
2021-11-01T16:54:26.000Z
| 36.006006 | 215 | 0.538866 |
[
[
[
"import pandas as pd\n\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container {width:90% !important;}</style>\"))\n# Don't wrap repr(DataFrame) across additional lines\npd.set_option(\"display.expand_frame_repr\", True)\n\n# Set max rows displayed in output to 25\npd.set_option(\"display.max_rows\", 25)\n%matplotlib inline\n%matplotlib widget",
"_____no_output_____"
],
[
"# ASK WIKIPEDIA FOR LIST OF COMPANIES\n# pip install sparqlwrapper\n# https://rdflib.github.io/sparqlwrapper/\n\nimport sys\nfrom SPARQLWrapper import SPARQLWrapper, JSON\n\nendpoint_url = \"https://query.wikidata.org/sparql\"\n\nquery = \"\"\"#List of `instances of` \"business enterprise\"\nSELECT ?com ?comLabel ?inception ?industry ?industryLabel ?coordinate ?country ?countryLabel WHERE {\n ?com (wdt:P31/(wdt:P279*)) wd:Q4830453;\n wdt:P625 ?coordinate.\n SERVICE wikibase:label { bd:serviceParam wikibase:language \"en\". }\n OPTIONAL { ?com wdt:P571 ?inception. }\n OPTIONAL { ?com wdt:P452 ?industry. }\n OPTIONAL { ?com wdt:P17 ?country. }\n}\"\"\"\n\ndef get_results(endpoint_url, query):\n user_agent = \"WDQS-example Python/%s.%s\" % (sys.version_info[0], sys.version_info[1])\n # TODO adjust user agent; see https://w.wiki/CX6\n sparql = SPARQLWrapper(endpoint_url, agent=user_agent)\n sparql.setQuery(query)\n sparql.setReturnFormat(JSON)\n return sparql.query().convert()\n\nresults = get_results(endpoint_url, query)\n\nfor result in results[\"results\"][\"bindings\"]:\n print(result)",
"_____no_output_____"
],
[
"#PUT THE DATA ON THE RIGHT FORMAT into pandas\nimport os\nimport json\nimport pandas as pd\nfrom pandas.io.json import json_normalize\n\n# Get the dataset, and transform string into floats for plotting\ndataFrame = pd.json_normalize(results[\"results\"][\"bindings\"]) #in a serialized json-based format\ndf = pd.DataFrame(dataFrame) # into pandas\np = r'(?P<latitude>-?\\d+\\.\\d+).*?(?P<longitude>-?\\d+\\.\\d+)' #get lat/lon from string coordinates\ndf[['longitude', 'latitude']] = df['coordinate.value'].str.extract(p, expand=True)\ndf['latitude'] = pd.to_numeric(df['latitude'], downcast='float')\ndf['longitude'] = pd.to_numeric(df['longitude'], downcast='float')\ndata = pd.DataFrame(df, columns = ['latitude','longitude','comLabel.value','coordinate.value', 'inception.value', 'industryLabel.value', 'com.value', 'industry.value', 'country.value','countryLabel.value'])\ndata=data.dropna(subset=['latitude', 'longitude'])\ndata.rename(columns={'comLabel.value':'company'}, inplace=True)\ndata.rename(columns={'coordinate.value':'coordinate'}, inplace=True)\ndata.rename(columns={'inception.value':'inception'}, inplace=True)\ndata.rename(columns={'industryLabel.value':'industry'}, inplace=True)\ndata.rename(columns={'com.value':'id'}, inplace=True)\ndata.rename(columns={'industry.value':'id_industry'}, inplace=True)\ndata.rename(columns={'country.value':'id_country'}, inplace=True)\ndata.rename(columns={'countryLabel.value':'country'}, inplace=True)\ndata = pd.DataFrame (data) #cluster maps works ONLY with dataframe\nprint(data.shape)\nprint(data.sample(5))\nprint(data.info())",
"_____no_output_____"
],
[
"#DATA index cleaning\nfrom sqlalchemy import create_engine\nfrom pandas.io import sql\nimport re\n\nIDs=[]\nfor name in data['id']:\n ID_n = name.rsplit('/', 1)[1]\n ID = re.findall('\\d+', ID_n)\n #print(ID[0], ID_n)\n IDs.append(ID[0])\ndata ['ID'] = IDs\nprint (data['ID'].describe())\ndata['ID']= data['ID'].astype(int)\n#print (data['ID'].describe())\ndata.rename(columns={'id':'URL'}, inplace=True)\ndata['company_foundation'] = data['inception'].str.extract(r'(\\d{4})')\npd.to_numeric(data['company_foundation'])\ndata = data.set_index(['ID'])\nprint(data.columns)",
"_____no_output_____"
],
[
"#GET company-industry relationship data\nindustries = data.dropna(subset=['id_industry'])\n#print(industries)\n\nindustries.groupby('id_industry')[['company', 'country']].apply(lambda x: x.values.tolist())\nprint(industries.info())\n\nindustries = pd.DataFrame (industries)\nprint(industries.sample(3))",
"_____no_output_____"
],
[
"IDs=[]\nfor name in industries['id_industry']:\n ID_n = name.rsplit('/', 1)[1]\n ID = re.findall('\\d+', ID_n)\n# print(ID, ID_n)\n IDs.append(ID[0])\n \nindustries ['ID_industry'] = IDs\nindustries['ID_industry']= industries['ID_industry'].astype(int)\nindustries.set_index([industries.index, 'ID_industry'], inplace=True)\nindustries['id_wikipedia']=industries['id_industry']\nindustries.drop('id_industry', axis=1, inplace=True) \n\nindustries = pd.DataFrame(industries)\nprint(industries.info())\nprint(industries.sample(3))",
"_____no_output_____"
],
[
"import plotly.express as px\nimport plotly.io as pio\n\npx.defaults.template = \"ggplot2\"\npx.defaults.color_continuous_scale = px.colors.sequential.Blackbody\n#px.defaults.width = 600\n#px.defaults.height = 400\n\n#data = data.dropna(subset=['country'])\n\nfig = px.scatter(data.dropna(subset=['country']), x=\"latitude\", y=\"longitude\", color=\"country\")# width=400)\nfig.show()\n#break born into quarters and use it for the x axis; y has number of companies;\n\n#fig = px.density_heatmap(countries_industries, x=\"country\", y=\"companies\", template=\"seaborn\")\nfig = px.density_heatmap(data, x=\"latitude\", y=\"longitude\")#, template=\"seaborn\")\nfig.show()",
"_____no_output_____"
],
[
"#COMPANIES IN COUNTRIES\nfig = px.histogram(data.dropna(subset=['country', 'industry']), x=\"country\",\n title='COMPANIES IN COUNTRIES',\n # labels={'industry':'industries'}, # can specify one label per df column\n opacity=0.8,\n log_y=False, # represent bars with log scale\n # color_discrete_sequence=['indianred'], # color of histogram bars\n color='industry',\n # marginal=\"rug\", # can be `box`, `violin`\n # hover_data=\"companies\"\n barmode='overlay'\n )\nfig.show()\n\n#INDUSTRIES IN COUNTRIES\nfig = px.histogram(data.dropna(subset=['industry', 'country']), x=\"industry\",\n title='INDUSTRIES IN COUNTRIES',\n # labels={'industry':'industries'}, # can specify one label per df column\n opacity=0.8,\n log_y=False, # represent bars with log scale\n # color_discrete_sequence=['indianred'], # color of histogram bars\n color='country',\n # marginal=\"rug\", # can be `box`, `violin`\n # hover_data=\"companies\"\n barmode='overlay'\n )\nfig.show()",
"_____no_output_____"
],
[
"#THIS IS THE 2D MAP I COULD FIND, :)\nimport plotly.graph_objects as go\ndata['text'] = 'COMPANY: '+ data['company'] + '<br>COUNTRY: ' + data['country'] + '<br>FOUNDATION: ' + data['company_foundation'].astype(str)\n\nfig = go.Figure(data=go.Scattergeo(\n locationmode = 'ISO-3',\n lon = data['longitude'],\n lat = data['latitude'],\n text = data['text'],\n mode = 'markers',\n marker = dict(\n size = 3,\n opacity = 0.8,\n reversescale = True,\n autocolorscale = False,\n symbol = 'square',\n line = dict(width=1, color='rgba(102, 102, 102)'),\n # colorgroup='country'\n # colorscale = 'Blues',\n # cmin = 0,\n # color = df['cnt'],\n # cmax = df['cnt'].max(),\n # colorbar_title=\"Incoming flights<br>February 2011\"\n )))\n\nfig.update_layout(\n title = 'Companies of the World<br>',\n geo = dict(\n scope='world',\n # projection_type='albers usa',\n showland = True,\n landcolor = \"rgb(250, 250, 250)\",\n subunitcolor = \"rgb(217, 217, 217)\",\n countrycolor = \"rgb(217, 217, 217)\",\n countrywidth = 0.5,\n subunitwidth = 0.5\n ),\n )\nfig.show()",
"_____no_output_____"
],
[
"print(data.info())\nimport tkinter as tk\nfrom tkinter import filedialog\nfrom pandas import DataFrame\n\nroot= tk.Tk()\ncanvas1 = tk.Canvas(root, width = 300, height = 300, bg = 'lightsteelblue2', relief = 'raised')\ncanvas1.pack()\n\ndef exportCSV ():\n global df\n \n export_file_path = filedialog.asksaveasfilename(defaultextension='.csv')\n data.to_csv (export_file_path, index = True, header=True)\n\nsaveAsButton_CSV = tk.Button(text='Export CSV', command=exportCSV, bg='green', fg='white', font=('helvetica', 12, 'bold'))\ncanvas1.create_window(150, 150, window=saveAsButton_CSV)\n\nroot.mainloop()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb82ed382baab6e9e16b0be23f2b6c57809269db
| 3,111 |
ipynb
|
Jupyter Notebook
|
examples/overviews/Scatter GL.ipynb
|
jmmease/ipyplotly
|
498b6ad362c5ffdb5106f8ab3566af68eb7076d9
|
[
"MIT"
] | 8 |
2017-12-20T21:19:05.000Z
|
2018-04-25T20:19:46.000Z
|
examples/overviews/Scatter GL.ipynb
|
jonmmease/ipyplotly
|
498b6ad362c5ffdb5106f8ab3566af68eb7076d9
|
[
"MIT"
] | 11 |
2018-01-19T17:14:27.000Z
|
2018-02-20T00:02:40.000Z
|
examples/overviews/Scatter GL.ipynb
|
jonmmease/ipyplotly
|
498b6ad362c5ffdb5106f8ab3566af68eb7076d9
|
[
"MIT"
] | 1 |
2020-05-26T10:48:14.000Z
|
2020-05-26T10:48:14.000Z
| 25.08871 | 116 | 0.534876 |
[
[
[
"## ScatterGL Example\nData is transfered to JS side using ipywidgets binary protocol without JSON serialization",
"_____no_output_____"
]
],
[
[
"# ipyplotly\nfrom ipyplotly.datatypes import FigureWidget\n\n# ipywidgets\nfrom IPython.display import display\n\n# numpy\nimport numpy as np\n\n# core\nimport datetime\nimport time",
"_____no_output_____"
],
[
"# One million points\nN = 1000000",
"_____no_output_____"
],
[
"f = FigureWidget()\nf",
"_____no_output_____"
],
[
"# Adding 1 million points takes ~5 seconds\nscatt1 = f.add_scattergl(x = np.random.randn(N), \n y = np.random.randn(N),\n mode = 'markers',\n marker={'opacity': 0.8, 'line': {'width': 1}})",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb82f53deed0467a015d6609b77ff2818b7325f7
| 359,770 |
ipynb
|
Jupyter Notebook
|
Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb
|
jeremymiller00/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
|
2024638d5936e85c4b40975abc2412d46bb9ac44
|
[
"MIT"
] | null | null | null |
Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb
|
jeremymiller00/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
|
2024638d5936e85c4b40975abc2412d46bb9ac44
|
[
"MIT"
] | null | null | null |
Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb
|
jeremymiller00/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
|
2024638d5936e85c4b40975abc2412d46bb9ac44
|
[
"MIT"
] | null | null | null | 305.926871 | 91,228 | 0.908408 |
[
[
[
"Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n##### Version 0.1\n\n`Original content created by Cam Davidson-Pilon`\n\n`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n___\n\n\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!",
"_____no_output_____"
],
[
"Chapter 1\n======\n***",
"_____no_output_____"
],
[
"The Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. ",
"_____no_output_____"
],
[
"\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. 
There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n",
"_____no_output_____"
],
[
"\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n#### Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. 
Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\")\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n",
"_____no_output_____"
],
[
"### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to })\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.",
"_____no_output_____"
],
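As a small numerical illustration of the theorem (a sketch only; the probabilities below are made-up values that happen to anticipate the debugging example worked through later in the chapter), the posterior follows from the prior, the likelihoods, and the law of total probability:

```python
# Illustrative numbers only
p_A = 0.2              # prior P(A)
p_X_given_A = 1.0      # P(X | A)
p_X_given_not_A = 0.5  # P(X | ~A)

# Marginal P(X) via the law of total probability
p_X = p_X_given_A * p_A + p_X_given_not_A * (1 - p_A)

# Bayes' Theorem: posterior is likelihood times prior, divided by the evidence
p_A_given_X = p_X_given_A * p_A / p_X
print(p_A_given_X)  # 0.333..., larger than the prior of 0.2
```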
[
"##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data. \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).",
"_____no_output_____"
]
],
[
[
"\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load(open(\"../styles/bmh_matplotlibrc.json\"))\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\nplt.style.use('ggplot')\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials)/2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. ",
"_____no_output_____"
],
[
"##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:",
"_____no_output_____"
],
[
"\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}",
"_____no_output_____"
],
[
"We have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? ",
"_____no_output_____"
]
],
[
[
"figsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\");",
"_____no_output_____"
]
],
[
[
"We can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n",
"_____no_output_____"
]
],
[
[
"figsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");",
"_____no_output_____"
]
],
[
[
"Notice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.",
"_____no_output_____"
],
[
"_______\n\n## Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n### Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.",
"_____no_output_____"
]
],
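A quick numerical check of the identity $E[\;Z\;|\;\lambda\;] = \lambda$ (a minimal sketch; the intensity 4.25 simply mirrors one of the values plotted below): the sample mean of many Poisson draws should land very close to $\lambda$.

```python
import scipy.stats as stats

lam = 4.25
draws = stats.poisson.rvs(lam, size=100000)

print(draws.mean())             # approximately 4.25
print(stats.poisson.mean(lam))  # exactly 4.25
```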
[
[
"figsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\");",
"_____no_output_____"
]
],
[
[
"### Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$",
"_____no_output_____"
]
],
[
[
"a = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0,1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");",
"_____no_output_____"
]
],
[
[
"\n### But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n",
"_____no_output_____"
],
[
"\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n",
"_____no_output_____"
]
],
[
[
"figsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);",
"_____no_output_____"
]
],
[
[
"Before we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. 
This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC3\n-----\n\nPyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.\n\nWe will model the problem above using PyMC3. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC3 code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables.",
"_____no_output_____"
]
],
[
[
"import pymc3 as pm\nimport theano.tensor as tt\n\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n \n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data - 1)",
"_____no_output_____"
]
],
[
[
"In the code above, we create the PyMC3 variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC3's *stochastic variables*, so-called because they are treated by the back end as random number generators.",
"_____no_output_____"
]
],
[
[
"with model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)",
"_____no_output_____"
],
[
"lambda_",
"_____no_output_____"
]
],
[
[
"This code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. The `switch()` function assigns `lambda_1` or `lambda_2` as the value of `lambda_`, depending on what side of `tau` we are on. The values of `lambda_` up until `tau` are `lambda_1` and the values afterwards are `lambda_2`.\n\nNote that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.",
"_____no_output_____"
]
],
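To make the switching behaviour concrete with fixed (non-random) numbers, here is a plain-NumPy analogue of the `switch()` call above; the particular `tau` and rate values are made up purely for illustration.

```python
import numpy as np

idx = np.arange(10)
tau_fixed, lambda_1_fixed, lambda_2_fixed = 4, 18.0, 23.0

# Same selection logic as pm.math.switch(tau > idx, lambda_1, lambda_2),
# but with deterministic values instead of random variables
lambda_fixed = np.where(tau_fixed > idx, lambda_1_fixed, lambda_2_fixed)
print(lambda_fixed)  # [18. 18. 18. 18. 23. 23. 23. 23. 23. 23.]
```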
[
[
"with model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)",
"_____no_output_____"
]
],
[
[
"The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `observed` keyword. \n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.",
"_____no_output_____"
]
],
[
[
"### Mysterious code to be explained in Chapter 3.\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)",
"Multiprocess sampling (4 chains in 4 jobs)\nCompoundStep\n>Metropolis: [tau]\n>Metropolis: [lambda_2]\n>Metropolis: [lambda_1]\n"
],
[
"lambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']",
"_____no_output_____"
],
[
"figsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", density=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", density=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\")\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. ",
"_____no_output_____"
],
[
"### Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. ",
"_____no_output_____"
]
],
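Before turning to the expected-count example, a short sketch of how the interpretation above can be backed up numerically from the traces already drawn (it assumes the `lambda_1_samples` and `lambda_2_samples` arrays defined earlier):

```python
import numpy as np

# Posterior means behind the "around 18" and "around 23" statements above
print(lambda_1_samples.mean(), lambda_2_samples.mean())

# Posterior probability that the texting rate actually increased
print((lambda_2_samples > lambda_1_samples).mean())

# 95% credible intervals for each rate
print(np.percentile(lambda_1_samples, [2.5, 97.5]))
print(np.percentile(lambda_2_samples, [2.5, 97.5]))
```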
[
[
"figsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");",
"_____no_output_____"
]
],
[
[
"Our analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n",
"_____no_output_____"
],
[
"##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?",
"_____no_output_____"
]
],
[
[
"#type your code here.\nprint(lambda_1_samples.mean())\nprint(lambda_2_samples.mean())",
"17.759335513669996\n22.690660793052064\n"
]
],
[
[
"2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.",
"_____no_output_____"
]
],
[
[
"#type your code here.\nprint( (lambda_1_samples / lambda_2_samples).mean() )\nprint(lambda_1_samples.mean() / lambda_2_samples.mean() )",
"0.7838908068925983\n0.7826715879119724\n"
]
],
[
[
"3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC3 part. Just consider all instances where `tau_samples < 45`.)",
"_____no_output_____"
]
],
[
[
"#type your code here.\nlambda_1_samples[tau_samples < 45].mean()",
"_____no_output_____"
],
[
"lambda_1_samples.mean()",
"_____no_output_____"
]
],
[
[
"### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. *PeerJ Computer Science* 2:e55 <https://doi.org/10.7717/peerj-cs.55>\n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. <https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1>.",
"_____no_output_____"
]
],
[
[
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb83041bab05eae652b8a3cef2e732acfc23b206
| 24,927 |
ipynb
|
Jupyter Notebook
|
docs_src/callbacks.tracker.ipynb
|
suleepkumar/fastai
|
728a7154dc120c177fe3499d2e3bf1ba389580fb
|
[
"Apache-2.0"
] | 1 |
2018-12-30T04:12:43.000Z
|
2018-12-30T04:12:43.000Z
|
docs_src/callbacks.tracker.ipynb
|
suleepkumar/fastai
|
728a7154dc120c177fe3499d2e3bf1ba389580fb
|
[
"Apache-2.0"
] | 2 |
2021-05-20T23:02:08.000Z
|
2021-09-28T05:48:00.000Z
|
docs_src/callbacks.tracker.ipynb
|
suleepkumar/fastai
|
728a7154dc120c177fe3499d2e3bf1ba389580fb
|
[
"Apache-2.0"
] | 1 |
2020-11-17T02:37:02.000Z
|
2020-11-17T02:37:02.000Z
| 30.698276 | 422 | 0.528784 |
[
[
[
"# Tracking Callbacks",
"_____no_output_____"
]
],
[
[
"from fastai.gen_doc.nbdoc import *\nfrom fastai.vision import *\nfrom fastai.callbacks import *",
"_____no_output_____"
]
],
[
[
"This module regroups the callbacks that track one of the metrics computed at the end of each epoch to take some decision about training. To show examples of use, we'll use our sample of MNIST and a simple cnn model.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_SAMPLE)\ndata = ImageDataBunch.from_folder(path)",
"_____no_output_____"
],
[
"show_doc(TerminateOnNaNCallback)",
"_____no_output_____"
]
],
[
[
"Sometimes, training diverges and the loss goes to nan. In that case, there's no point continuing, so this callback stops the training.",
"_____no_output_____"
]
],
[
[
"model = simple_cnn((3,16,16,2))\nlearn = Learner(data, model, metrics=[accuracy])\nlearn.fit_one_cycle(1,1e4)",
"_____no_output_____"
]
],
[
[
"Using it prevents that situation to happen.",
"_____no_output_____"
]
],
[
[
"model = simple_cnn((3,16,16,2))\nlearn = Learner(data, model, metrics=[accuracy], callbacks=[TerminateOnNaNCallback()])\nlearn.fit(2,1e4)",
"_____no_output_____"
]
],
[
[
"### Callback methods",
"_____no_output_____"
],
[
"You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.",
"_____no_output_____"
]
],
[
[
"show_doc(TerminateOnNaNCallback.on_batch_end)",
"_____no_output_____"
],
[
"show_doc(TerminateOnNaNCallback.on_epoch_end)",
"_____no_output_____"
],
[
"show_doc(EarlyStoppingCallback)",
"_____no_output_____"
]
],
[
[
"This callback tracks the quantity in `monitor` during the training of `learn`. `mode` can be forced to 'min' or 'max' but will automatically try to determine if the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Will stop training after `patience` epochs if the quantity hasn't improved by `min_delta`. ",
"_____no_output_____"
]
],
[
[
"model = simple_cnn((3,16,16,2))\nlearn = Learner(data, model, metrics=[accuracy], \n callback_fns=[partial(EarlyStoppingCallback, monitor='accuracy', min_delta=0.01, patience=3)])\nlearn.fit(50,1e-42)",
"_____no_output_____"
]
],
[
[
"### Callback methods",
"_____no_output_____"
],
[
"You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.",
"_____no_output_____"
]
],
[
[
"show_doc(EarlyStoppingCallback.on_train_begin)",
"_____no_output_____"
],
[
"show_doc(EarlyStoppingCallback.on_epoch_end)",
"_____no_output_____"
],
[
"show_doc(SaveModelCallback)",
"_____no_output_____"
]
],
[
[
"This callback tracks the quantity in `monitor` during the training of `learn`. `mode` can be forced to 'min' or 'max' but will automatically try to determine if the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Will save the model in `name` whenever determined by `every` ('improvement' or 'epoch'). Loads the best model at the end of training is `every='improvement'`.",
"_____no_output_____"
],
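The notebook gives no usage example for this callback; a hedged sketch, mirroring the `EarlyStoppingCallback` example above (the monitored metric, epoch count and file name are illustrative choices, not prescribed values):

```python
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy],
                callback_fns=[partial(SaveModelCallback, monitor='accuracy',
                                      every='improvement', name='best_model')])
learn.fit(5, 1e-3)
# With every='improvement', the weights achieving the best monitored value are
# written to models/best_model.pth and loaded back at the end of training.
```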
[
"### Callback methods",
"_____no_output_____"
],
[
"You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.",
"_____no_output_____"
]
],
[
[
"show_doc(SaveModelCallback.on_epoch_end)",
"_____no_output_____"
],
[
"show_doc(SaveModelCallback.on_train_end)",
"_____no_output_____"
],
[
"show_doc(ReduceLROnPlateauCallback)",
"_____no_output_____"
]
],
[
[
"This callback tracks the quantity in `monitor` during the training of `learn`. `mode` can be forced to 'min' or 'max' but will automatically try to determine if the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Will reduce the learning rate by `factor` after `patience` epochs if the quantity hasn't improved by `min_delta`. ",
"_____no_output_____"
],
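Again, no example accompanies this callback here; a hedged sketch in the same style (the parameter values are illustrative):

```python
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy],
                callback_fns=[partial(ReduceLROnPlateauCallback, monitor='valid_loss',
                                      patience=2, factor=0.2, min_delta=0.01)])
learn.fit(10, 1e-3)
# If valid_loss fails to improve by at least min_delta for `patience` epochs,
# the learning rate is multiplied by `factor`.
```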
[
"### Callback methods",
"_____no_output_____"
],
[
"You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.",
"_____no_output_____"
]
],
[
[
"show_doc(ReduceLROnPlateauCallback.on_train_begin)",
"_____no_output_____"
],
[
"show_doc(ReduceLROnPlateauCallback.on_epoch_end)",
"_____no_output_____"
],
[
"show_doc(TrackerCallback)",
"_____no_output_____"
],
[
"show_doc(TrackerCallback.get_monitor_value)",
"_____no_output_____"
]
],
[
[
"### Callback methods",
"_____no_output_____"
],
[
"You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.",
"_____no_output_____"
]
],
[
[
"show_doc(TrackerCallback.on_train_begin)",
"_____no_output_____"
]
],
[
[
"## Undocumented Methods - Methods moved below this line will intentionally be hidden",
"_____no_output_____"
],
[
"## New Methods - Please document or move to the undocumented section",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb8311c35fadfc4a71dadf046554406cec67a09b
| 417,641 |
ipynb
|
Jupyter Notebook
|
04 ML/10 gradboost/sem_10_lecturer.ipynb
|
ksetdekov/HSE_DS
|
619d5b84f9d9e97b58ca1f12c5914ec65456c2c8
|
[
"MIT"
] | 1 |
2020-09-26T18:48:11.000Z
|
2020-09-26T18:48:11.000Z
|
04 ML/10 gradboost/sem_10_lecturer.ipynb
|
ksetdekov/HSE_DS
|
619d5b84f9d9e97b58ca1f12c5914ec65456c2c8
|
[
"MIT"
] | null | null | null |
04 ML/10 gradboost/sem_10_lecturer.ipynb
|
ksetdekov/HSE_DS
|
619d5b84f9d9e97b58ca1f12c5914ec65456c2c8
|
[
"MIT"
] | null | null | null | 177.795232 | 44,390 | 0.848363 |
[
[
[
"## Современные библиотеки градиентного бустинга\n\nРанее мы использовали наивную версию градиентного бустинга из scikit-learn, [придуманную](https://projecteuclid.org/download/pdf_1/euclid.aos/1013203451) в 1999 году Фридманом. С тех пор было предложено много реализаций, которые оказываются лучше на практике. На сегодняшний день популярны три библиотеки, реализующие градиентный бустинг:\n* **XGBoost**. После выхода быстро набрала популярность и оставалась стандартом до конца 2016 года. Одними из основных особенностей имплементации были оптимизированность построения деревьев, а также различные регуляризации модели.\n* **LightGBM**. Отличительной чертой является быстрота построения композиции. Например, используется следующий трюк для ускорения обучения: при построении вершины дерева вместо перебора по всем значениям признака производится перебор значений гистограммы этого признака. Таким образом, вместо $O(\\ell)$ требуется $O(\\text{#bins})$. Кроме того, в отличие от других библиотек, которые строят дерево по уровням, LightGBM использует стратегию best-first, т.е. на каждом шаге строит вершину, дающую наибольшее уменьшение функционала. Таким образом, каждое дерево является цепочкой с прикрепленными листьями.\n* **CatBoost**. Библиотека от компании Яндекс. Позволяет автоматически обрабатывать категориальные признаки (даже если их значения представлены в виде строк). Кроме того, алгоритм является менее чувствительным к выбору конкретных гиперпараметров. За счёт этого уменьшается время, которое тратит человек на подбор оптимальных гиперпараметров.",
"_____no_output_____"
],
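To make the histogram and best-first tricks mentioned above concrete, here is a small illustrative sketch of the LightGBM knobs that control them (the values shown are arbitrary, not recommendations):

```python
from lightgbm import LGBMClassifier

# max_bin caps the number of histogram bins per feature, so every split search
# costs O(#bins) rather than O(number of training objects).
# num_leaves caps the best-first (leaf-wise) growth of each tree.
model = LGBMClassifier(
    n_estimators=100,
    learning_rate=0.1,
    max_bin=255,     # histogram resolution
    num_leaves=31,   # budget for leaf-wise growth
)
```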
[
"### Основные параметры\n\n(lightgbm/catboost)\n\n* `objective` – функционал, на который будет настраиваться композиция\n* `eta` / `learning_rate` – темп (скорость) обучения\n* `num_iterations` / `n_estimators` – число итераций бустинга\n\n#### Параметры, отвечающие за сложность деревьев\n* `max_depth` – максимальная глубина \n* `max_leaves` / num_leaves – максимальное число вершин в дереве\n* `gamma` / `min_gain_to_split` – порог на уменьшение функции ошибки при расщеплении в дереве\n* `min_data_in_leaf` – минимальное число объектов в листе\n* `min_sum_hessian_in_leaf` – минимальная сумма весов объектов в листе, минимальное число объектов, при котором делается расщепление \n* `lambda` – коэффициент регуляризации (L2)\n* `subsample` / `bagging_fraction` – какую часть объектов обучения использовать для построения одного дерева \n* `colsample_bytree` / `feature_fraction` – какую часть признаков использовать для построения одного дерева \n\nПодбор всех этих параметров — настоящее искусство. Но начать их настройку можно с самых главных параметров: `learning_rate` и `n_estimators`. Обычно один из них фиксируют, а оставшийся из этих двух параметров подбирают (например, фиксируют `n_estimators=1000` и подбирают `learning_rate`). Следующим по важности является `max_depth`. В силу того, что мы заинтересованы в неглубоких деревьях, обычно его перебирают из диапазона [3; 7].",
"_____no_output_____"
]
],
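A self-contained sketch of the tuning recipe just described — fix `n_estimators` and search only over `learning_rate`; the synthetic data and the grid of rates are purely illustrative:

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fix the ensemble size, tune only the learning rate.
best_lr, best_auc = None, 0.0
for lr in [0.005, 0.01, 0.03, 0.1, 0.3]:
    model = CatBoostClassifier(iterations=1000, learning_rate=lr,
                               logging_level='Silent')
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    if auc > best_auc:
        best_lr, best_auc = lr, auc
print("best learning_rate = %s, AUC-ROC = %.3f" % (best_lr, best_auc))
```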
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n\nplt.style.use('seaborn')\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (8, 5)",
"_____no_output_____"
],
[
"# !pip install catboost\n# !pip install lightgbm\n# !pip install xgboost",
"Requirement already satisfied: xgboost in /usr/local/lib/python3.7/dist-packages (0.90)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from xgboost) (1.4.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from xgboost) (1.19.5)\n"
],
[
"!pip install mlxtend",
"Requirement already satisfied: mlxtend in /usr/local/lib/python3.7/dist-packages (0.14.0)\nRequirement already satisfied: matplotlib>=1.5.1 in /usr/local/lib/python3.7/dist-packages (from mlxtend) (3.2.2)\nRequirement already satisfied: scipy>=0.17 in /usr/local/lib/python3.7/dist-packages (from mlxtend) (1.4.1)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from mlxtend) (57.0.0)\nRequirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.7/dist-packages (from mlxtend) (1.19.5)\nRequirement already satisfied: pandas>=0.17.1 in /usr/local/lib/python3.7/dist-packages (from mlxtend) (1.1.5)\nRequirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.7/dist-packages (from mlxtend) (0.22.2.post1)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5.1->mlxtend) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5.1->mlxtend) (1.3.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5.1->mlxtend) (2.4.7)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=1.5.1->mlxtend) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.17.1->mlxtend) (2018.9)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.18->mlxtend) (1.0.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib>=1.5.1->mlxtend) (1.15.0)\n"
],
[
"from sklearn.datasets import make_classification\nfrom sklearn.model_selection import train_test_split\n\nX, y = make_classification(n_samples=500, n_features=2, n_informative=2,\n n_redundant=0, n_repeated=0,\n n_classes=2, n_clusters_per_class=2,\n flip_y=0.05, class_sep=0.8, random_state=241)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=241)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"plt.scatter(X[:, 0], X[:, 1], c=y, cmap='YlGn');",
"_____no_output_____"
]
],
[
[
"## Catboost",
"_____no_output_____"
]
],
[
[
"from catboost import CatBoostClassifier\n??CatBoostClassifier",
"_____no_output_____"
]
],
[
[
"#### Задание 1. \n- Обучите CatBoostClassifier с дефолтными параметрами, используя 300 деревьев. \n- Нарисуйте decision boundary\n- Посчитайте roc_auc_score",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_auc_score\nfrom mlxtend.plotting import plot_decision_regions\n\nfig, ax = plt.subplots(1,1)\nclf = CatBoostClassifier(iterations=200, logging_level='Silent')\nclf.fit(X_train, y_train)\nplot_decision_regions(X_test, y_test, clf, ax=ax)\nprint(roc_auc_score(y_test, clf.predict_proba(X_test)[:,1]))",
"0.9196301564722617\n"
],
[
"y_test_pred = clf.predict_proba(X_test)[:, 1]\nroc_auc_score(y_test, y_test_pred)",
"_____no_output_____"
]
],
[
[
"### Learning rate\n\nDefault is 0.03\n\n#### Задание 2. \n- Обучите CatBoostClassifier с разными значениями `learning_rate`. \n- Посчитайте roc_auc_score на тестовой и тренировочной выборках\n- Написуйте график зависимости roc_auc от скорости обучения (learning_rate)",
"_____no_output_____"
]
],
[
[
"lrs = np.arange(0.001, 1.1, 0.005)\nquals_train = [] # to store roc auc on trian\nquals_test = [] # to store roc auc on test\n\nfor l in lrs:\n clf = CatBoostClassifier(iterations=150, logging_level='Silent',\n learning_rate=l)\n clf.fit(X_train, y_train)\n q_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:,1]) \n q_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])\n quals_train.append(q_train)\n quals_test.append(q_test)\n # YOUR CODE HERE\n\nplt.plot(lrs, quals_train, marker='.', label='train')\nplt.plot(lrs, quals_test, marker='.', label='test')\nplt.xlabel('LR')\nplt.ylabel('AUC-ROC')\nplt.legend()\n# YOUR CODE HERE (make the plot)",
"learning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\nlearning rate is greater than 1. You probably need to decrease learning rate.\n"
]
],
[
[
"### Number of trees\n\nВажно также подобрать количество деревьев\n\n#### Задание 3. \n- Обучите CatBoostClassifier с разными значениями `iterations`. \n- Посчитайте roc_auc_score на тестовой и тренировочной выборках\n- Написуйте график зависимости roc_auc от размера копозиции",
"_____no_output_____"
]
],
[
[
"%%timeit \nn_trees = [1, 5, 10, 100, 200, 300, 400, 500, 600, 700]\nquals_train = []\nquals_test = []\nfor n in n_trees:\n clf = CatBoostClassifier(iterations=n, logging_level='Silent', learning_rate=0.02)\n clf.fit(X_train, y_train)\n q_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:,1]) \n q_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])\n\n quals_train.append(q_train)\n quals_test.append(q_test)\n # YOUR CODE HERE\n\nplt.plot(n_trees, quals_train, marker='.', label='train')\nplt.plot(n_trees, quals_test, marker='.', label='test')\nplt.xlabel('N trees')\nplt.ylabel('AUC-ROC')\nplt.legend()",
"1 loop, best of 5: 3.2 s per loop\n"
],
[
"plt.plot(n_trees, quals_train, marker='.', label='train')\nplt.plot(n_trees, quals_test, marker='.', label='test')\nplt.xlabel('Number of trees')\nplt.ylabel('AUC-ROC')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Staged prediction\n\nКак сделать то же самое, но быстрее. Для этого в библиотеке CatBoost есть метод `staged_predict_proba`",
"_____no_output_____"
]
],
[
[
"%%timeit\n\n# train the model with max trees\nclf = CatBoostClassifier(iterations=700, \n logging_level='Silent',\n learning_rate = 0.01)\nclf.fit(X_train, y_train)\n\n# obtain staged predictiond on test\npredictions_test = clf.staged_predict_proba(\n data=X_test,\n ntree_start=0, \n ntree_end=700, \n eval_period=25\n)\n\n# obtain staged predictiond on train\npredictions_train = clf.staged_predict_proba(\n data=X_train,\n ntree_start=0, \n ntree_end=700, \n eval_period=25\n)\n\n# calculate roc_auc\nquals_train = []\nquals_test = []\nn_trees = []\nfor iteration, (test_pred, train_pred) in enumerate(zip(predictions_test, predictions_train)):\n n_trees.append((iteration+1)*25)\n quals_test.append(roc_auc_score(y_test, test_pred[:, 1]))\n quals_train.append(roc_auc_score(y_train, train_pred[:, 1]))",
"1 loop, best of 5: 820 ms per loop\n"
],
[
"plt.plot(n_trees, quals_train, marker='.', label='train')\nplt.plot(n_trees, quals_test, marker='.', label='test')\nplt.xlabel('Number of trees')\nplt.ylabel('AUC-ROC')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## LightGBM",
"_____no_output_____"
]
],
[
[
"from lightgbm import LGBMClassifier\n??LGBMClassifier",
"_____no_output_____"
]
],
[
[
"#### Задание 4. \n- Обучите LGBMClassifier с дефолтными параметрами, используя 300 деревьев. \n- Нарисуйте decision boundary\n- Посчитайте roc_auc_score",
"_____no_output_____"
]
],
[
[
"clf = LGBMClassifier(n_estimators=200)\nclf.fit(X_train, y_train)\nplot_decision_regions(X_test, y_test, clf)\nprint(roc_auc_score(y_test, clf.predict_proba(X_test)[:,1]))",
"0.8668207681365576\n"
],
[
"n_trees = [1, 5, 10, 100, 200, 300, 400, 500, 600, 700]\nquals_train = []\nquals_test = []\nfor n in n_trees:\n clf = LGBMClassifier(n_estimators=n)\n clf.fit(X_train, y_train)\n q_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1])\n q_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])\n quals_train.append(q_train)\n quals_test.append(q_test)\n \nplt.plot(n_trees, quals_train, marker='.', label='train')\nplt.plot(n_trees, quals_test, marker='.', label='test')\nplt.xlabel('Number of trees')\nplt.ylabel('AUC-ROC')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"Теперь попробуем взять фиксированное количество деревьев, но будем менять максимальнyю глубину",
"_____no_output_____"
]
],
[
[
"depth = list(range(1, 17, 2))\nquals_train = []\nquals_test = []\nfor d in depth:\n lgb = LGBMClassifier(n_estimators=100, max_depth=d)\n lgb.fit(X_train, y_train)\n q_train = roc_auc_score(y_train, lgb.predict_proba(X_train)[:, 1])\n q_test = roc_auc_score(y_test, lgb.predict_proba(X_test)[:, 1])\n quals_train.append(q_train)\n quals_test.append(q_test)\n \nplt.plot(depth, quals_train, marker='.', label='train')\nplt.plot(depth, quals_test, marker='.', label='test')\nplt.xlabel('Depth of trees')\nplt.ylabel('AUC-ROC')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"И сравним с Catboost:\n\n#### Задание 5. \n- Обучите CatBoostClassifier с разной глубиной\n- Посчитайте roc_auc_score, \n- Сравните лучший результат с LGBM",
"_____no_output_____"
]
],
[
[
"depth = list(range(1, 17, 2))\n\nquals_train = []\nquals_test = []\n\n",
"_____no_output_____"
]
],
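The cell above is left empty as an exercise; one possible completion (a sketch only — it reuses the `X_train`/`X_test` split created at the top of the notebook and mirrors the LightGBM depth loop above):

```python
from catboost import CatBoostClassifier
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt

depth = list(range(1, 17, 2))
quals_train, quals_test = [], []
for d in depth:
    cb = CatBoostClassifier(n_estimators=100, max_depth=d, logging_level='Silent')
    cb.fit(X_train, y_train)
    quals_train.append(roc_auc_score(y_train, cb.predict_proba(X_train)[:, 1]))
    quals_test.append(roc_auc_score(y_test, cb.predict_proba(X_test)[:, 1]))

plt.plot(depth, quals_train, marker='.', label='train')
plt.plot(depth, quals_test, marker='.', label='test')
plt.xlabel('Depth of trees')
plt.ylabel('AUC-ROC')
plt.legend()
print('best CatBoost test AUC-ROC: %.3f' % max(quals_test))
```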
[
[
"Теперь, когда у нас получились отличные модели, нужно их сохранить!",
"_____no_output_____"
]
],
[
[
"clf = CatBoostClassifier(n_estimators=200, learning_rate=0.01, \n max_depth=5, logging_level=\"Silent\")\nclf.fit(X_train, y_train)\nclf.save_model('catboost.cbm', format='cbm');",
"_____no_output_____"
],
[
"lgb = LGBMClassifier(n_estimators=100, max_depth=3)\nlgb.fit(X_train, y_train)\nlgb.booster_.save_model('lightgbm.txt')",
"_____no_output_____"
]
],
[
[
"И загрузим обратно, когда понадобится их применить",
"_____no_output_____"
]
],
[
[
"lgb = LGBMClassifier(model_file='lightgbm.txt')\n\nclf = clf.load_model('catboost.cbm')",
"_____no_output_____"
]
],
[
[
"## Блендинг и Стекинг",
"_____no_output_____"
],
[
"Блендинг представляет из себя \"мета-алгоритм\", предсказание которого строится как взвешенная сумма базовых алгоритмов. \n\nРассмотрим простой пример блендинга бустинга и линейной регрессии.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_boston\nfrom sklearn.metrics import mean_squared_error\n\ndata = load_boston()\nX = pd.DataFrame(data.data, columns=data.feature_names)\ny = data.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=10)",
"_____no_output_____"
]
],
[
[
"#### Задание 6. \n- Обучите CatBoostRegressor со следующими гиперпараметрами:\n`iterations=100, max_depth=4, learning_rate=0.01, loss_function='RMSE'`\n- Посчитайте предсказание и RMSE на тестовой и тренировочной выборках",
"_____no_output_____"
]
],
[
[
"from catboost import CatBoostRegressor\ncbm = CatBoostRegressor(iterations=100, max_depth=5, learning_rate=0.02,\n loss_function='RMSE', logging_level='Silent')\ncbm.fit(X_train, y_train)\n\ny_pred_cbm = cbm.predict(X_test)\ny_train_pred_cbm = cbm.predict(X_train)\n\nprint(\"Train RMSE = %.4f\" % mean_squared_error(y_train, y_train_pred_cbm))\nprint(\"Test RMSE = %.4f\" % mean_squared_error(y_test, y_pred_cbm))",
"Train RMSE = 14.8901\nTest RMSE = 25.2513\n"
]
],
[
[
"#### Задание 7. \n- Отмасштабируйте данные (StandardScaler) и обучите линейную регрессию\n- Посчитайте предсказание и RMSE на тестовой и тренировочной выборках",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import StandardScaler\n\nlr = LinearRegression(normalize=True)\nlr.fit(X_train, y_train)\ny_pred_lr = lr.predict(X_test)\ny_train_lr = lr.predict(X_train)\n\nprint(\"Train RMSE = %.4f\" % mean_squared_error(y_train, y_train_lr))\nprint(\"Test RMSE = %.4f\" % mean_squared_error(y_test, y_pred_lr))",
"Train RMSE = 19.4597\nTest RMSE = 29.3266\n"
]
],
[
[
"#### Блендинг\n\nБудем считать, что новый алгоритм $a(x)$ представим как\n$$\n a(x)\n =\n \\sum_{n = 1}^{N}\n w_n b_n(x),\n$$\nгде $\\sum\\limits_{n=1}^N w_n =1$\n\nНам нужно обучить линейную регрессию на предсказаниях двух обченных выше алгоритмов",
"_____no_output_____"
],
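For comparison with the regression-based blend below, the constraint that the weights sum to one can be illustrated with a single weight w chosen on the train predictions (a sketch assuming `y_train_lr`, `y_train_pred_cbm`, `y_pred_lr` and `y_pred_cbm` from the cells above):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Blend = w * linear regression + (1 - w) * CatBoost; pick w on the train set.
weights = np.linspace(0, 1, 101)
train_err = [mean_squared_error(y_train, w * y_train_lr + (1 - w) * y_train_pred_cbm)
             for w in weights]
best_w = weights[int(np.argmin(train_err))]

blend_test = best_w * y_pred_lr + (1 - best_w) * y_pred_cbm
print("w = %.2f, Test MSE = %.4f" % (best_w, mean_squared_error(y_test, blend_test)))
```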
[
"#### Задание 8. \n",
"_____no_output_____"
]
],
[
[
"predictions_train = pd.DataFrame([y_train_lr, y_train_pred_cbm]).T\npredictions_test = pd.DataFrame([y_pred_lr, y_pred_cbm]).T\n\nlr_blend = LinearRegression()\nlr_blend.fit(predictions_train, y_train)\n\ny_pred_blend = lr_blend.predict(predictions_test)\ny_train_blend = lr_blend.predict(predictions_train)\n\nprint(\"Train RMSE = %.4f\" % mean_squared_error(y_train, y_train_blend))\nprint(\"Test RMSE = %.4f\" % mean_squared_error(y_test, y_pred_blend))",
"Train RMSE = 8.8826\nTest RMSE = 15.7705\n"
]
],
[
[
"#### Стекинг\n\nТеперь обучим более сложную функцию композиции\n\n$$\n a(x) = f(b_1(x), b_2(x))\n$$\n\nгде $f()$ это обученная модель градиентного бустинга",
"_____no_output_____"
],
[
"#### Задание 9. \n",
"_____no_output_____"
]
],
[
[
"from lightgbm import LGBMRegressor\n\nlgb_stack = LGBMRegressor(n_estimators=100, max_depth=2)\nlgb_stack.fit(predictions_train, y_train)\n\ny_pred_stack = lgb_stack.predict(predictions_test)\nmean_squared_error(y_test, y_pred_stack)",
"_____no_output_____"
]
],
[
[
"В итоге получаем качество на тестовой выборке лучше, чем у каждого алгоритма в отдельности.",
"_____no_output_____"
],
[
"Полезные ссылки:\n\n* [Видео про стекинг](https://www.coursera.org/lecture/competitive-data-science/stacking-Qdtt6)",
"_____no_output_____"
],
[
"## XGBoost",
"_____no_output_____"
]
],
[
[
"# based on https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/\n\nimport pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import GridSearchCV \nfrom sklearn import metrics",
"_____no_output_____"
],
[
"titanic = pd.read_csv('titanic.csv')",
"_____no_output_____"
],
[
"X = titanic[['Pclass', 'Age', 'SibSp', 'Fare']]\ny = titanic.Survived.values\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\nX_train.shape, y_train.shape, X_test.shape, y_test.shape",
"_____no_output_____"
],
[
"from xgboost.sklearn import XGBClassifier\n??XGBClassifier",
"_____no_output_____"
],
[
"def modelfit(alg, dtrain, y, X_test=None, y_test=None, test=True): \n\n #Fit the algorithm on the data\n alg.fit(dtrain, y, eval_metric='auc')\n \n #Predict training set:\n dtrain_predictions = alg.predict(dtrain)\n dtrain_predprob = alg.predict_proba(dtrain)[:,1]\n \n #Print model report:\n print (\"\\nModel Report\")\n print (\"Accuracy (Train): %.4g\" % metrics.accuracy_score(y, dtrain_predictions))\n print (\"AUC Score (Train): %f\" % metrics.roc_auc_score(y, dtrain_predprob))\n if test:\n dtest_predictions = alg.predict(X_test)\n dtest_predprob = alg.predict_proba(X_test)[:,1]\n print (\"Accuracy (Test): %.4g\" % metrics.accuracy_score(y_test, dtest_predictions))\n print (\"AUC Score (Test): %f\" % metrics.roc_auc_score(y_test, dtest_predprob))\n # plot feature importance \n feat_imp = pd.Series(alg.get_booster().get_fscore()).sort_values(ascending=False)\n feat_imp.plot(kind='bar', title='Feature Importances')\n plt.ylabel('Feature Importance Score')",
"_____no_output_____"
]
],
[
[
"These parameters are used to define the optimization objective the metric to be calculated at each step.\n\n\n<table><tr>\n<td> <img src=\"https://github.com/AKuzina/ml_dpo/blob/main/practicals/xgb.png?raw=1\" alt=\"Drawing\" style=\"width: 700px;\"/> </td>\n</tr></table>",
"_____no_output_____"
]
],
[
[
"xgb1 = XGBClassifier(objective='binary:logistic',\n eval_metric='auc',\n learning_rate =0.1, \n n_estimators=1000,\n booster='gbtree',\n seed=27)",
"_____no_output_____"
],
[
"modelfit(xgb1, X_train, y_train, X_test, y_test)",
"/Users/annakuzina/anaconda3/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].\n warnings.warn(label_encoder_deprecation_msg, UserWarning)\n"
]
],
[
[
"#### Задание 10. \n- Задайте сетку для перечисленных ниже параметров\n\n`max_depth` - Maximum tree depth for base learners.\n\n`gamma` - Minimum loss reduction required to make a further partition on a leaf node of the tree.\n\n`subsample` - Subsample ratio of the training instance.\n\n`colsample_bytree` - Subsample ratio of columns when constructing each tree.\n\n`reg_alpha` - L1 regularization term on weights\n\n- Запустите поиск, используя `GridSearchCV` c 5 фолдами. Используйте смесь из 100 деревьев.",
"_____no_output_____"
]
],
[
[
"param_grid = {\n# YOUR CODE HERE\n}\n\ngsearch1 = # YOUR CODE HERE",
"Fitting 5 folds for each of 144 candidates, totalling 720 fits\n"
],
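The cell above is a template; one hedged way to fill it in is shown below — the grid values are illustrative (they do not necessarily reproduce the 144 candidates visible in the recorded output):

```python
param_grid = {
    'max_depth': [3, 5, 7],
    'gamma': [0, 0.1],
    'subsample': [0.8, 1.0],
    'colsample_bytree': [0.8, 1.0],
    'reg_alpha': [0, 0.01, 0.1],
}

gsearch1 = GridSearchCV(
    estimator=XGBClassifier(objective='binary:logistic', eval_metric='auc',
                            n_estimators=100, learning_rate=0.1, seed=27),
    param_grid=param_grid,
    scoring='roc_auc',
    cv=5,
    n_jobs=-1,
    verbose=1,
)
gsearch1.fit(X_train, y_train)
```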
[
"gsearch1.best_params_, gsearch1.best_score_",
"_____no_output_____"
]
],
[
[
"Теперь можем взять больше деревьев, но меньше lr",
"_____no_output_____"
]
],
[
[
"xgb_best = XGBClassifier(objective='binary:logistic',\n eval_metric='auc',\n learning_rate =0.01, \n n_estimators=1000,\n booster='gbtree',\n seed=27,\n max_depth = gsearch1.best_params_['max_depth'],\n gamma = gsearch1.best_params_['gamma'], \n subsample = gsearch1.best_params_['subsample'],\n colsample_bytree = gsearch1.best_params_['colsample_bytree'],\n reg_alpha = gsearch1.best_params_['reg_alpha']\n )\nmodelfit(xgb_best, X_train, y_train, X_test, y_test)",
"\nModel Report\nAccuracy (Train): 0.736\nAUC Score (Train): 0.805953\nAccuracy (Test): 0.7318\nAUC Score (Test): 0.822458\n"
]
],
[
[
"## Важность признаков\n\nВ курсе мы подробно обсуждаем, как добиваться хорошего качества решения задачи: имея выборку $X, y$, построить алгоритм с наименьшей ошибкой. Однако заказчику часто важно понимать, как работает алгоритм, почему он делает такие предсказания. Обсудим несколько мотиваций.\n\t\n#### Доверие алгоритму\nНапример, в банках на основе решений, принятых алгоритмом, выполняются финансовые операции, и менеджер, ответственный за эти операции, будет готов использовать алгоритм, только если он понимает, что его решения обоснованы. По этой причине в банках очень часто используют простые линейные алгоритмы. Другой пример из области медицины: поскольку цена ошибки может быть очень велика, врачи готовы использовать только интерпретируемые алгоритмы.\n\t\n#### Отсутствие дискриминации (fairness) \nВновь пример с банком: алгоритм кредитного скоринга не должен учитывать расовую принадлежность (racial bias) заемщика или его пол (gender bias). Между тем, такие зависимости часто могут присутствовать в датасете (исторические данные), на котором обучался алгоритм. Еще один пример: известно, что нейросетевые векторы слов содержат gender bias. Если эти вектора использовались при построении системы поиска по резюме для рекрутера, то, например, по запросу `technical skill` он может видеть женские резюме в конце ранжированного списка.\n\t\n#### Учет контекста\nДанные, на которых обучается алгоритм, не отображают всю предметную область. Интерпретация алгоритма позволит оценить, насколько найденные зависимости связаны с реальной жизнью. Если предсказания интерпретируемы, это также говорит о высокой обобщающей способности алгоритма. \n\nТеперь обсудим несколько вариантов, как можно оценивать важность признаков.",
"_____no_output_____"
],
[
"### Веса линейной модели\n\nСамый простой способ, который уже был рассмотрен на семинаре про линейные модели: после построения модели каждому признаку будет соответствовать свой вес - если признаки масштабированы, то чем он больше по модулю, тем важнее признак, а знак будет говорить о положительном или отрицательном влиянии на величину целевой переменной.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_boston\nfrom sklearn.metrics import mean_squared_error\n\ndata = load_boston()\nX = pd.DataFrame(data.data, columns=data.feature_names)\ny = data.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=10)",
"_____no_output_____"
]
],
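A minimal sketch of the linear-model-weights idea on the Boston data just loaded (features are standardized so that the coefficient magnitudes are comparable):

```python
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
import pandas as pd

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

lin = LinearRegression().fit(X_train_scaled, y_train)

# Larger |coefficient| -> more important feature; the sign gives the direction.
coefs = pd.Series(lin.coef_, index=X_train.columns)
print(coefs.reindex(coefs.abs().sort_values(ascending=False).index))
```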
[
[
"### FSTR (Feature strength)\n\n[Fstr](https://catboost.ai/docs/concepts/fstr.html) говорит, что важность признака — это то, насколько в среднем меняется ответ модели при изменении значения данного признака (изменении значения разбиения).\n\nРассчитать его можно так:\n\n$$feature\\_importance_{F} = \\sum_{tree, leaves_F} (v_1 - avr)^2\\cdot c_1 +(v_2 - avr)^2\\cdot c_2 = \\left(v_1 - v_2\\right)^2\\frac{c_1c_2}{c_1 + c_2}\\\\\n\\qquad avr = \\frac{v_1 \\cdot c_1 + v_2 \\cdot c_2}{c_1 + c_2}.$$\n\nМы сравниваем листы, отличающиеся значением сплита в узле на пути к ним: если условие сплита выполняется, объект попадает в левое поддерево, иначе — в правое. \n\n$c_1, c_2$ - число объектов обучающего датасета, попавших в левое и правое поддерево соответственно, либо суммарный вес этих объектов, если используются веса; $v_1, v_2$ - значение модели в левом и правом поддереве (например, среднее)\n\n\nДалее значения $feature\\_importance$ нормируются, и получаются величины, которые суммируются в 100.",
"_____no_output_____"
]
],
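A toy numeric check of the formula above for a single split (all numbers are invented for illustration):

```python
# A split sends c1 = 30 objects to the left leaf with value v1 = 2.0
# and c2 = 10 objects to the right leaf with value v2 = 5.0.
c1, c2 = 30, 10
v1, v2 = 2.0, 5.0

avr = (v1 * c1 + v2 * c2) / (c1 + c2)                 # 2.75
direct = (v1 - avr) ** 2 * c1 + (v2 - avr) ** 2 * c2  # 67.5
closed_form = (v1 - v2) ** 2 * c1 * c2 / (c1 + c2)    # 67.5, same value
print(direct, closed_form)
```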
[
[
"clf = CatBoostClassifier(n_estimators=200, learning_rate=0.01, \n max_depth=5, logging_level=\"Silent\")\n\n# load the trained catboost model\nclf = clf.load_model('catboost.cbm')",
"_____no_output_____"
],
[
"for val, name in sorted(zip(cbm.feature_importances_, data.feature_names))[::-1]:\n print(name, val)",
"LSTAT 44.57842395824728\nRM 36.850858175652654\nNOX 3.473133901367306\nPTRATIO 3.3887271637105134\nDIS 2.23667053074621\nINDUS 1.9016830079470277\nTAX 1.7781003730136429\nCRIM 1.5774334311195715\nAGE 1.4213295189456003\nRAD 1.3943420902109485\nB 0.6521915521536685\nCHAS 0.6057385942178969\nZN 0.14136770266764145\n"
],
[
"feature_importances = pd.DataFrame({'importance':cbm.feature_importances_}, index=data.feature_names)\nfeature_importances.sort_values('importance').plot.bar();",
"_____no_output_____"
],
[
"print(data.DESCR)",
".. _boston_dataset:\n\nBoston house prices dataset\n---------------------------\n\n**Data Set Characteristics:** \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/housing/\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n.. topic:: References\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n\n"
]
],
[
[
"### Impurity-based feature importances\n\nВажность признака рассчитывается как (нормированное) общее снижение критерия информативности за счет этого признака.",
"_____no_output_____"
],
[
"Приведем простейший пример, как можно получить такую оценку в sklearn-реализации RandomForest",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestRegressor\n\nclf = RandomForestRegressor(n_estimators=100, oob_score=True)\nclf.fit(X_train, y_train)\n\nclf.feature_importances_",
"_____no_output_____"
],
[
"feature_importances = pd.DataFrame({'importance':clf.feature_importances_}, index=X_train.columns)\nfeature_importances.sort_values('importance').plot.bar();",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb831554679f4a61bc513f368a9b8dce7cbce4b7
| 8,492 |
ipynb
|
Jupyter Notebook
|
25Noviembre.ipynb
|
ElydeAngel/daa_2021_1
|
67ddef17b1a5557dadebfac7a2e16bb14fd7f4c8
|
[
"MIT"
] | null | null | null |
25Noviembre.ipynb
|
ElydeAngel/daa_2021_1
|
67ddef17b1a5557dadebfac7a2e16bb14fd7f4c8
|
[
"MIT"
] | null | null | null |
25Noviembre.ipynb
|
ElydeAngel/daa_2021_1
|
67ddef17b1a5557dadebfac7a2e16bb14fd7f4c8
|
[
"MIT"
] | null | null | null | 51.156627 | 607 | 0.524611 |
[
[
[
"<a href=\"https://colab.research.google.com/github/ElydeAngel/daa_2021_1/blob/master/25Noviembre.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"frase = \"\"\"El lema que anima a la Universidad Nacional, Por mi raza hablará el espíritu, revela la vocación humanística con la que fue concebida. El autor de esta célebre frase, José Vasconcelos, asumió la rectoría en 1920, en una época en que las esperanzas de la Revolución aún estaban vivas, había una gran fe en la Patria y el ánimo redentor se extendía en el ambiente.\"\"\"\n\nfrase = frase.strip().replace(\"\\n\",\"\").replace(\",\",\"\").lower().split(\" \") #Quita signo, pasa a minusculas, separa palabras y mete en una lista\nprint(frase)\nfrecuencias = {}\nfor index in range (len(frase)):\n if frase [index] in frecuencias: # 'el' in frecuencias\n pass\n else:\n frecuencias[frase[index]] = 1 # frecuencias ['el'] = 1 --> {'el':1}\n for pivote in range (index +1, len(frase),1):\n #print(frase[index], \"compara contra:\" , frase[pivote])\n if frase[index] == frase[pivote]:\n frecuencias[frase[index]] += 1\nprint(frecuencias)",
"['el', 'lema', 'que', 'anima', 'a', 'la', 'universidad', 'nacional', 'por', 'mi', 'raza', 'hablará', 'el', 'espíritu', 'revela', 'la', 'vocación', 'humanística', 'con', 'la', 'que', 'fue', 'concebida.', 'el', 'autor', 'de', 'esta', 'célebre', 'frase', 'josé', 'vasconcelos', 'asumió', 'la', 'rectoría', 'en', '1920', 'en', 'una', 'época', 'en', 'que', 'las', 'esperanzas', 'de', 'la', 'revolución', 'aún', 'estaban', 'vivas', 'había', 'una', 'gran', 'fe', 'en', 'la', 'patria', 'y', 'el', 'ánimo', 'redentor', 'se', 'extendía', 'en', 'el', 'ambiente.']\n{'el': 5, 'lema': 1, 'que': 3, 'anima': 1, 'a': 1, 'la': 6, 'universidad': 1, 'nacional': 1, 'por': 1, 'mi': 1, 'raza': 1, 'hablará': 1, 'espíritu': 1, 'revela': 1, 'vocación': 1, 'humanística': 1, 'con': 1, 'fue': 1, 'concebida.': 1, 'autor': 1, 'de': 2, 'esta': 1, 'célebre': 1, 'frase': 1, 'josé': 1, 'vasconcelos': 1, 'asumió': 1, 'rectoría': 1, 'en': 5, '1920': 1, 'una': 2, 'época': 1, 'las': 1, 'esperanzas': 1, 'revolución': 1, 'aún': 1, 'estaban': 1, 'vivas': 1, 'había': 1, 'gran': 1, 'fe': 1, 'patria': 1, 'y': 1, 'ánimo': 1, 'redentor': 1, 'se': 1, 'extendía': 1, 'ambiente.': 1}\n"
],
[
"frase = \"\"\"El lema que anima a la Universidad Nacional, Por mi raza hablará el espíritu, revela la vocación humanística con la que fue concebida. El autor de esta célebre frase, José Vasconcelos, asumió la rectoría en 1920, en una época en que las esperanzas de la Revolución aún estaban vivas, había una gran fe en la Patria y el ánimo redentor se extendía en el ambiente.\"\"\"\nfrase = frase.strip().replace(\"\\n\",\"\").replace(\",\",\"\").lower().split(\" \") #Quita signo, pasa a minusculas, separa palabras y mete en una lista\nprint(frase)\n\nfrecuencias = {} # diccionario\nfor index in range(len(frase)): #recorre una sola vez\n print(hash( frase[index]) )\n if frase[index] in frecuencias:\n frecuencias[frase[index]] += 1\n else:\n frecuencias[frase[index]] = 1\nprint(frecuencias)\n",
"['el', 'lema', 'que', 'anima', 'a', 'la', 'universidad', 'nacional', 'por', 'mi', 'raza', 'hablará', 'el', 'espíritu', 'revela', 'la', 'vocación', 'humanística', 'con', 'la', 'que', 'fue', 'concebida.', 'el', 'autor', 'de', 'esta', 'célebre', 'frase', 'josé', 'vasconcelos', 'asumió', 'la', 'rectoría', 'en', '1920', 'en', 'una', 'época', 'en', 'que', 'las', 'esperanzas', 'de', 'la', 'revolución', 'aún', 'estaban', 'vivas', 'había', 'una', 'gran', 'fe', 'en', 'la', 'patria', 'y', 'el', 'ánimo', 'redentor', 'se', 'extendía', 'en', 'el', 'ambiente.']\n-6420825715739860024\n-3267096225767944473\n6047610273788763624\n-3327294466566997909\n-6999985278261386654\n-5853306873605185288\n-2370018837493743769\n3442657628610915374\n-3540575596727508260\n8732357658978860292\n6157369745521635824\n1497742989899001324\n-6420825715739860024\n2838000179673707862\n4902415110860913650\n-5853306873605185288\n-2272853733409959525\n-1790562958443373870\n-2903428457421741751\n-5853306873605185288\n6047610273788763624\n2476744260147196288\n-1179283161451881493\n-6420825715739860024\n-5666921058517157389\n1471715469393659690\n-8170025266296704636\n4177318711902954869\n-250230026231720605\n7575001863798271098\n-5599023044591901504\n3298137148539093647\n-5853306873605185288\n-811277114622954801\n534971966369562450\n-7984585798093365303\n534971966369562450\n2825450115223296436\n-8135030672224264166\n534971966369562450\n6047610273788763624\n2862228048235154126\n-4160240998206661216\n1471715469393659690\n-5853306873605185288\n-4057575655362849966\n-6745908737503104141\n7432291571042399636\n-7458703045859853786\n-1049079171517684492\n2825450115223296436\n-8463753400731575661\n5939465959475678223\n534971966369562450\n-5853306873605185288\n5596126206950465297\n-5412057018560123643\n-6420825715739860024\n-673635585896361419\n-6345420089723149580\n-3575011025375736507\n-3128676232897262845\n534971966369562450\n-6420825715739860024\n7846488063269184632\n{'el': 5, 'lema': 1, 'que': 3, 'anima': 1, 'a': 1, 'la': 6, 'universidad': 1, 'nacional': 1, 'por': 1, 'mi': 1, 'raza': 1, 'hablará': 1, 'espíritu': 1, 'revela': 1, 'vocación': 1, 'humanística': 1, 'con': 1, 'fue': 1, 'concebida.': 1, 'autor': 1, 'de': 2, 'esta': 1, 'célebre': 1, 'frase': 1, 'josé': 1, 'vasconcelos': 1, 'asumió': 1, 'rectoría': 1, 'en': 5, '1920': 1, 'una': 2, 'época': 1, 'las': 1, 'esperanzas': 1, 'revolución': 1, 'aún': 1, 'estaban': 1, 'vivas': 1, 'había': 1, 'gran': 1, 'fe': 1, 'patria': 1, 'y': 1, 'ánimo': 1, 'redentor': 1, 'se': 1, 'extendía': 1, 'ambiente.': 1}\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
]
] |
cb831c4c25ebce8d8abd582f87b9e524992b8a12
| 9,249 |
ipynb
|
Jupyter Notebook
|
src/0-index.ipynb
|
jamfeitosa/ia898
|
785852d243dec5224d9f4268575be9e51ca1f1e5
|
[
"MIT"
] | 14 |
2017-07-12T17:32:44.000Z
|
2021-08-19T13:30:46.000Z
|
src/0-index.ipynb
|
jamfeitosa/ia898
|
785852d243dec5224d9f4268575be9e51ca1f1e5
|
[
"MIT"
] | 1 |
2017-06-29T13:34:26.000Z
|
2017-06-29T13:34:26.000Z
|
src/0-index.ipynb
|
jamfeitosa/ia898
|
785852d243dec5224d9f4268575be9e51ca1f1e5
|
[
"MIT"
] | 19 |
2017-03-05T17:40:48.000Z
|
2020-03-09T17:01:20.000Z
| 42.62212 | 3,057 | 0.595199 |
[
[
[
"from IPython.core.display import HTML,display\nimport glob",
"_____no_output_____"
],
[
"notyet = []",
"_____no_output_____"
],
[
"files = glob.glob('*.ipynb')\nprint('n of files:',len(files))\ns = '<ol>'\nfor ny in notyet:\n s = s + '<li><a target=\"_blank\" href='+ ny + '>' + ny + '</a></li>'\ns = s + '</ol>'\ns = s + '<ol>'\nfor f in files:\n if f not in notyet:\n s = s + '<li><a target=\"_blank\" href='+ f + '>' + f + '</a></li>'\ns = s + '</ol>'\ndisplay(HTML(s))",
"n of files: 46\n"
]
],
[
[
"- [Generate Library](GeneratyLibrary.ipynb) - Gera os arquivos .py a partir dos .ipynb\n- [`__init__.py`](__init__.py) - Lista das funções do package",
"_____no_output_____"
]
],
[
[
"files = glob.glob('*.ipynb')\nnotgo = ['0-index.ipynb','GenerateLibrary.ipynb' ]\ns = ''\nfor f in files:\n if f not in notgo:\n s = s + \"! jupyter nbconvert --to 'python' \" + f + \"\\n\"\nprint(s)",
"! jupyter nbconvert --to 'python' affine.ipynb\n! jupyter nbconvert --to 'python' applylut.ipynb\n! jupyter nbconvert --to 'python' bwlp.ipynb\n! jupyter nbconvert --to 'python' circle.ipynb\n! jupyter nbconvert --to 'python' colormap.ipynb\n! jupyter nbconvert --to 'python' comb.ipynb\n! jupyter nbconvert --to 'python' conv.ipynb\n! jupyter nbconvert --to 'python' cos.ipynb\n! jupyter nbconvert --to 'python' dct.ipynb\n! jupyter nbconvert --to 'python' dctmatrix.ipynb\n! jupyter nbconvert --to 'python' dft.ipynb\n! jupyter nbconvert --to 'python' dftmatrix.ipynb\n! jupyter nbconvert --to 'python' dftshift.ipynb\n! jupyter nbconvert --to 'python' dftview.ipynb\n! jupyter nbconvert --to 'python' ellipse.ipynb\n! jupyter nbconvert --to 'python' gaussian.ipynb\n! jupyter nbconvert --to 'python' gshow.ipynb\n! jupyter nbconvert --to 'python' h2percentile.ipynb\n! jupyter nbconvert --to 'python' h2stats.ipynb\n! jupyter nbconvert --to 'python' haarmatrix.ipynb\n! jupyter nbconvert --to 'python' hadamard.ipynb\n! jupyter nbconvert --to 'python' hadamardmatrix.ipynb\n! jupyter nbconvert --to 'python' histogram.ipynb\n! jupyter nbconvert --to 'python' idct.ipynb\n! jupyter nbconvert --to 'python' idft.ipynb\n! jupyter nbconvert --to 'python' idftshift.ipynb\n! jupyter nbconvert --to 'python' ihadamard.ipynb\n! jupyter nbconvert --to 'python' interpollin.ipynb\n! jupyter nbconvert --to 'python' isccsym.ipynb\n! jupyter nbconvert --to 'python' isolines.ipynb\n! jupyter nbconvert --to 'python' log.ipynb\n! jupyter nbconvert --to 'python' logfilter.ipynb\n! jupyter nbconvert --to 'python' mosaic.ipynb\n! jupyter nbconvert --to 'python' normalize.ipynb\n! jupyter nbconvert --to 'python' pca.ipynb\n! jupyter nbconvert --to 'python' pconv.ipynb\n! jupyter nbconvert --to 'python' phasecorr.ipynb\n! jupyter nbconvert --to 'python' polar.ipynb\n! jupyter nbconvert --to 'python' ptrans.ipynb\n! jupyter nbconvert --to 'python' ramp.ipynb\n! jupyter nbconvert --to 'python' rectangle.ipynb\n! jupyter nbconvert --to 'python' rgb2hsv.ipynb\n! jupyter nbconvert --to 'python' sat.ipynb\n! jupyter nbconvert --to 'python' sobel.ipynb\n\n"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb832204024afa5b1796de7f0e29d42390c7c454
| 151,433 |
ipynb
|
Jupyter Notebook
|
examples/multitask-learning-model.ipynb
|
jinglescode/torchsignal
|
6172bc2b18eeafa9464cfba678e9c02ea4ed5e2a
|
[
"BSD-2-Clause"
] | 10 |
2020-09-17T10:34:26.000Z
|
2022-02-09T02:37:33.000Z
|
examples/multitask-learning-model.ipynb
|
jinglescode/torchsignal
|
6172bc2b18eeafa9464cfba678e9c02ea4ed5e2a
|
[
"BSD-2-Clause"
] | 1 |
2020-07-09T10:11:47.000Z
|
2020-07-09T10:11:51.000Z
|
examples/multitask-learning-model.ipynb
|
jinglescode/torchsignal
|
6172bc2b18eeafa9464cfba678e9c02ea4ed5e2a
|
[
"BSD-2-Clause"
] | 1 |
2021-07-27T11:36:53.000Z
|
2021-07-27T11:36:53.000Z
| 45.584889 | 20,888 | 0.659004 |
[
[
[
"%cd ../",
"D:\\workspace\\github\\torchsignal\n"
],
[
"from torchsignal.datasets import OPENBMI\nfrom torchsignal.datasets.multiplesubjects import MultipleSubjects\nfrom torchsignal.trainer.multitask import Multitask_Trainer\nfrom torchsignal.model import MultitaskSSVEP\n\nimport numpy as np\nimport torch\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import figure",
"_____no_output_____"
],
[
"config = {\n \"exp_name\": \"multitask-run1\",\n \"seed\": 12,\n \"segment_config\": {\n \"window_len\": 1,\n \"shift_len\": 1000,\n \"sample_rate\": 1000,\n \"add_segment_axis\": True\n },\n \"bandpass_config\": {\n \"sample_rate\": 1000,\n \"lowcut\": 1,\n \"highcut\": 40,\n \"order\": 6\n },\n \"train_subject_ids\": {\n \"low\": 1,\n \"high\": 54\n },\n \"test_subject_ids\": {\n \"low\": 1,\n \"high\": 54\n },\n \"root\": \"../data/openbmi\",\n \"selected_channels\": ['P7', 'P3', 'Pz', 'P4', 'P8', 'PO9', 'O1', 'Oz', 'O2', 'PO10'],\n \"sessions\": [1,2],\n \"tsdata\": False,\n \"num_classes\": 4,\n \"num_channel\": 10,\n \"batchsize\": 256,\n \"learning_rate\": 0.001,\n \"epochs\": 100,\n \"patience\": 5,\n \"early_stopping\": 10,\n \"model\": {\n \"n1\": 4,\n \"kernel_window_ssvep\": 59,\n \"kernel_window\": 19,\n \"conv_3_dilation\": 4,\n \"conv_4_dilation\": 4\n },\n \"gpu\": 0,\n \"multitask\": True,\n \"runkfold\": 4,\n \"check_model\": True\n}\n\ndevice = torch.device(\"cuda:\"+str(config['gpu']) if torch.cuda.is_available() else \"cpu\")\nprint('device', device)",
"device cuda:0\n"
]
],
[
[
"# Load Data - OPENBMI",
"_____no_output_____"
]
],
[
[
"subject_ids = list(np.arange(config['train_subject_ids']['low'], config['train_subject_ids']['high']+1, dtype=int))\n\nopenbmi_data = MultipleSubjects(\n dataset=OPENBMI, \n root=config['root'], \n subject_ids=subject_ids, \n sessions=config['sessions'],\n selected_channels=config['selected_channels'],\n segment_config=config['segment_config'],\n bandpass_config=config['bandpass_config'],\n one_hot_labels=True,\n)",
"Load subject: 1\nLoad subject: 2\nLoad subject: 3\nLoad subject: 4\nLoad subject: 5\nLoad subject: 6\nLoad subject: 7\nLoad subject: 8\nLoad subject: 9\nLoad subject: 10\nLoad subject: 11\nLoad subject: 12\nLoad subject: 13\nLoad subject: 14\nLoad subject: 15\nLoad subject: 16\nLoad subject: 17\nLoad subject: 18\nLoad subject: 19\nLoad subject: 20\nLoad subject: 21\nLoad subject: 22\nLoad subject: 23\nLoad subject: 24\nLoad subject: 25\nLoad subject: 26\nLoad subject: 27\nLoad subject: 28\nLoad subject: 29\nLoad subject: 30\nLoad subject: 31\nLoad subject: 32\nLoad subject: 33\nLoad subject: 34\nLoad subject: 35\nLoad subject: 36\nLoad subject: 37\nLoad subject: 38\nLoad subject: 39\nLoad subject: 40\nLoad subject: 41\nLoad subject: 42\nLoad subject: 43\nLoad subject: 44\nLoad subject: 45\nLoad subject: 46\nLoad subject: 47\nLoad subject: 48\nLoad subject: 49\nLoad subject: 50\nLoad subject: 51\nLoad subject: 52\nLoad subject: 53\nLoad subject: 54\n"
]
],
[
[
"# Train-Test model - leave one subject out",
"_____no_output_____"
]
],
[
[
"train_loader, val_loader, test_loader = openbmi_data.leave_one_subject_out(selected_subject_id=1)\ndataloaders_dict = {\n 'train': train_loader,\n 'val': val_loader\n}",
"_____no_output_____"
],
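[
"# Hedged sketch of the leave-one-subject-out idea used above (an illustration, not the MultipleSubjects\n# implementation): data from the held-out subject is reserved for testing, while recordings from every\n# other subject feed the train/val loaders.\nimport numpy as np\n\nall_subjects = np.arange(1, 55)              # subjects 1..54, as in the config above\nheld_out = 1                                 # matches selected_subject_id=1\nremaining = all_subjects[all_subjects != held_out]\nprint('test subject:', held_out)\nprint('subjects available for train/val:', len(remaining))",
"_____no_output_____"
],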
[
"check_model = config['check_model'] if 'check_model' in config else False\nif check_model:\n x = torch.ones((20, 10, 1000)).to(device)\n \n if config['tsdata'] == True:\n x = torch.ones((40, config['num_channel'], config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'])).to(device)\n\n model = MultitaskSSVEP(num_channel=config['num_channel'],\n num_classes=config['num_classes'],\n signal_length=config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'],\n filters_n1= config['model']['n1'],\n kernel_window_ssvep= config['model']['kernel_window_ssvep'],\n kernel_window= config['model']['kernel_window'],\n conv_3_dilation= config['model']['conv_3_dilation'],\n conv_4_dilation= config['model']['conv_4_dilation'],\n ).to(device)\n\n out = model(x)\n print('output',out.shape)\n\n def count_params(model):\n return sum(p.numel() for p in model.parameters() if p.requires_grad)\n print('model size',count_params(model))\n\n del model\n del out",
"output torch.Size([20, 4, 2])\nmodel size 56188\n"
],
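[
"# Hedged sketch: the shape check above returns (batch, num_classes, 2). Assuming the last dimension holds\n# per-class binary logits (target absent vs present), one plausible way to reduce the output to a single\n# predicted label per trial is shown below. This is an illustration, not how Multitask_Trainer computes its metrics.\nimport torch\n\ndummy_out = torch.randn(20, 4, 2)                         # same shape as the check above\npresent_score = torch.softmax(dummy_out, dim=-1)[..., 1]  # (batch, num_classes)\npredicted = present_score.argmax(dim=-1)                  # (batch,)\nprint(predicted.shape)",
"_____no_output_____"
],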
[
"model = MultitaskSSVEP(num_channel=config['num_channel'],\n num_classes=config['num_classes'],\n signal_length=config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'],\n filters_n1= config['model']['n1'],\n kernel_window_ssvep= config['model']['kernel_window_ssvep'],\n kernel_window= config['model']['kernel_window'],\n conv_3_dilation= config['model']['conv_3_dilation'],\n conv_4_dilation= config['model']['conv_4_dilation'],\n).to(device)\n\nepochs=config['epochs'] if 'epochs' in config else 50\npatience=config['patience'] if 'patience' in config else 20\nearly_stopping=config['early_stopping'] if 'early_stopping' in config else 40\n\ntrainer = Multitask_Trainer(model, model_name=\"multitask\", device=device, num_classes=config['num_classes'], multitask_learning=True, patience=patience, verbose=True)\n\ntrainer.fit(dataloaders_dict, num_epochs=epochs, early_stopping=early_stopping, topk_accuracy=1, save_model=False)",
"Layers with params to learn:\n\t 18 layers\n-------\nStarting training, on device: cuda:0\nEpoch 1 in 0s || Train loss=13.555, acc=0.469, f1=0.638 | Val loss=9.980, acc=0.700, f1=0.824 | LR=1.0e-03 | best=1 | improvement=True-10\nEpoch 2 in 0s || Train loss=9.308, acc=0.750, f1=0.857 | Val loss=8.608, acc=0.800, f1=0.889 | LR=1.0e-03 | best=2 | improvement=True-10\nEpoch 3 in 0s || Train loss=7.629, acc=0.825, f1=0.904 | Val loss=8.431, acc=0.700, f1=0.824 | LR=1.0e-03 | best=3 | improvement=True-10\nEpoch 4 in 0s || Train loss=6.016, acc=0.900, f1=0.947 | Val loss=7.491, acc=0.800, f1=0.889 | LR=1.0e-03 | best=4 | improvement=True-10\nEpoch 5 in 0s || Train loss=4.460, acc=0.919, f1=0.958 | Val loss=7.611, acc=0.750, f1=0.857 | LR=1.0e-03 | best=4 | improvement=False-10\nEpoch 6 in 0s || Train loss=3.654, acc=0.938, f1=0.968 | Val loss=6.661, acc=0.750, f1=0.857 | LR=1.0e-03 | best=6 | improvement=True-10\nEpoch 7 in 0s || Train loss=2.938, acc=0.938, f1=0.968 | Val loss=5.987, acc=0.850, f1=0.919 | LR=1.0e-03 | best=7 | improvement=True-10\nEpoch 8 in 0s || Train loss=2.211, acc=0.975, f1=0.987 | Val loss=4.994, acc=0.875, f1=0.933 | LR=1.0e-03 | best=8 | improvement=True-10\nEpoch 9 in 0s || Train loss=1.743, acc=0.975, f1=0.987 | Val loss=4.632, acc=0.875, f1=0.933 | LR=1.0e-03 | best=9 | improvement=True-10\nEpoch 10 in 0s || Train loss=1.606, acc=0.988, f1=0.994 | Val loss=4.181, acc=0.875, f1=0.933 | LR=1.0e-03 | best=10 | improvement=True-10\nEpoch 11 in 0s || Train loss=0.986, acc=0.994, f1=0.997 | Val loss=2.905, acc=0.900, f1=0.947 | LR=1.0e-03 | best=11 | improvement=True-10\nEpoch 12 in 0s || Train loss=0.877, acc=1.000, f1=1.000 | Val loss=2.880, acc=0.925, f1=0.961 | LR=1.0e-03 | best=12 | improvement=True-10\nEpoch 13 in 0s || Train loss=0.790, acc=0.981, f1=0.991 | Val loss=2.426, acc=0.925, f1=0.961 | LR=1.0e-03 | best=13 | improvement=True-10\nEpoch 14 in 0s || Train loss=0.451, acc=1.000, f1=1.000 | Val loss=2.845, acc=0.925, f1=0.961 | LR=1.0e-03 | best=13 | improvement=False-9\nEpoch 15 in 0s || Train loss=0.598, acc=0.994, f1=0.997 | Val loss=2.348, acc=0.925, f1=0.961 | LR=1.0e-03 | best=15 | improvement=True-10\nEpoch 16 in 0s || Train loss=0.365, acc=1.000, f1=1.000 | Val loss=1.794, acc=0.925, f1=0.961 | LR=1.0e-03 | best=16 | improvement=True-10\nEpoch 17 in 0s || Train loss=0.513, acc=0.994, f1=0.997 | Val loss=1.879, acc=0.950, f1=0.974 | LR=1.0e-03 | best=16 | improvement=False-9\nEpoch 18 in 0s || Train loss=0.433, acc=0.994, f1=0.997 | Val loss=1.896, acc=0.925, f1=0.961 | LR=1.0e-03 | best=16 | improvement=False-8\nEpoch 19 in 0s || Train loss=0.299, acc=1.000, f1=1.000 | Val loss=1.672, acc=0.950, f1=0.974 | LR=1.0e-03 | best=19 | improvement=True-10\nEpoch 20 in 0s || Train loss=0.174, acc=1.000, f1=1.000 | Val loss=1.399, acc=0.950, f1=0.974 | LR=1.0e-03 | best=20 | improvement=True-10\nEpoch 21 in 0s || Train loss=0.233, acc=1.000, f1=1.000 | Val loss=1.748, acc=0.950, f1=0.974 | LR=1.0e-03 | best=20 | improvement=False-9\nEpoch 22 in 0s || Train loss=0.249, acc=1.000, f1=1.000 | Val loss=1.419, acc=0.975, f1=0.987 | LR=1.0e-03 | best=20 | improvement=False-8\nEpoch 23 in 0s || Train loss=0.299, acc=1.000, f1=1.000 | Val loss=1.432, acc=0.975, f1=0.987 | LR=1.0e-03 | best=20 | improvement=False-7\nEpoch 24 in 0s || Train loss=0.221, acc=1.000, f1=1.000 | Val loss=1.582, acc=0.950, f1=0.974 | LR=1.0e-03 | best=20 | improvement=False-6\nEpoch 25 in 0s || Train loss=0.184, acc=1.000, f1=1.000 | Val loss=0.948, acc=1.000, f1=1.000 | LR=1.0e-03 | best=25 | 
improvement=True-10\nEpoch 26 in 0s || Train loss=0.166, acc=1.000, f1=1.000 | Val loss=1.194, acc=0.950, f1=0.974 | LR=1.0e-03 | best=25 | improvement=False-9\nEpoch 27 in 0s || Train loss=0.115, acc=1.000, f1=1.000 | Val loss=0.836, acc=1.000, f1=1.000 | LR=1.0e-03 | best=27 | improvement=True-10\nEpoch 28 in 0s || Train loss=0.118, acc=1.000, f1=1.000 | Val loss=0.747, acc=1.000, f1=1.000 | LR=1.0e-03 | best=28 | improvement=True-10\nEpoch 29 in 0s || Train loss=0.112, acc=1.000, f1=1.000 | Val loss=0.950, acc=1.000, f1=1.000 | LR=1.0e-03 | best=28 | improvement=False-9\nEpoch 30 in 0s || Train loss=0.195, acc=1.000, f1=1.000 | Val loss=0.942, acc=1.000, f1=1.000 | LR=1.0e-03 | best=28 | improvement=False-8\nEpoch 31 in 0s || Train loss=0.073, acc=1.000, f1=1.000 | Val loss=1.117, acc=0.975, f1=0.987 | LR=1.0e-03 | best=28 | improvement=False-7\nEpoch 32 in 0s || Train loss=0.132, acc=1.000, f1=1.000 | Val loss=0.927, acc=0.975, f1=0.987 | LR=1.0e-03 | best=28 | improvement=False-6\nEpoch 33 in 0s || Train loss=0.071, acc=1.000, f1=1.000 | Val loss=0.770, acc=1.000, f1=1.000 | LR=1.0e-03 | best=28 | improvement=False-5\nEpoch 34 in 0s || Train loss=0.048, acc=1.000, f1=1.000 | Val loss=0.704, acc=1.000, f1=1.000 | LR=1.0e-03 | best=34 | improvement=True-10\nEpoch 35 in 0s || Train loss=0.056, acc=1.000, f1=1.000 | Val loss=0.771, acc=1.000, f1=1.000 | LR=1.0e-03 | best=34 | improvement=False-9\nEpoch 36 in 0s || Train loss=0.079, acc=1.000, f1=1.000 | Val loss=0.861, acc=1.000, f1=1.000 | LR=1.0e-03 | best=34 | improvement=False-8\nEpoch 37 in 0s || Train loss=0.167, acc=0.994, f1=0.997 | Val loss=0.903, acc=0.975, f1=0.987 | LR=1.0e-03 | best=34 | improvement=False-7\nEpoch 38 in 0s || Train loss=0.108, acc=1.000, f1=1.000 | Val loss=1.262, acc=0.950, f1=0.974 | LR=1.0e-03 | best=34 | improvement=False-6\nEpoch 39 in 0s || Train loss=0.138, acc=1.000, f1=1.000 | Val loss=1.057, acc=0.950, f1=0.974 | LR=1.0e-03 | best=34 | improvement=False-5\nEpoch 40 in 0s || Train loss=0.058, acc=1.000, f1=1.000 | Val loss=0.806, acc=0.975, f1=0.987 | LR=1.0e-03 | best=34 | improvement=False-4\nEpoch 41 in 0s || Train loss=0.094, acc=1.000, f1=1.000 | Val loss=0.701, acc=1.000, f1=1.000 | LR=1.0e-04 | best=41 | improvement=True-10\nEpoch 42 in 0s || Train loss=0.110, acc=0.994, f1=0.997 | Val loss=0.784, acc=0.975, f1=0.987 | LR=1.0e-04 | best=41 | improvement=False-9\nEpoch 43 in 0s || Train loss=0.049, acc=1.000, f1=1.000 | Val loss=0.818, acc=0.975, f1=0.987 | LR=1.0e-04 | best=41 | improvement=False-8\nEpoch 44 in 0s || Train loss=0.064, acc=1.000, f1=1.000 | Val loss=0.783, acc=0.975, f1=0.987 | LR=1.0e-04 | best=41 | improvement=False-7\nEpoch 45 in 0s || Train loss=0.078, acc=1.000, f1=1.000 | Val loss=0.841, acc=0.950, f1=0.974 | LR=1.0e-04 | best=41 | improvement=False-6\nEpoch 46 in 0s || Train loss=0.052, acc=1.000, f1=1.000 | Val loss=0.831, acc=0.950, f1=0.974 | LR=1.0e-04 | best=41 | improvement=False-5\nEpoch 47 in 0s || Train loss=0.071, acc=1.000, f1=1.000 | Val loss=0.815, acc=0.975, f1=0.987 | LR=1.0e-04 | best=41 | improvement=False-4\nEpoch 48 in 0s || Train loss=0.030, acc=1.000, f1=1.000 | Val loss=0.898, acc=0.975, f1=0.987 | LR=1.0e-05 | best=41 | improvement=False-3\nEpoch 49 in 0s || Train loss=0.081, acc=1.000, f1=1.000 | Val loss=0.803, acc=0.975, f1=0.987 | LR=1.0e-05 | best=41 | improvement=False-2\nEpoch 50 in 0s || Train loss=0.043, acc=1.000, f1=1.000 | Val loss=0.903, acc=0.975, f1=0.987 | LR=1.0e-05 | best=41 | improvement=False-1\nEarly Stop\n\nTraining complete 
in 0m 9s\nEpoch with lowest val loss: 40\ntrain_loss: 0.09395\nval_loss: 0.70139\ntrain_acc: 1.00000\nval_acc: 1.00000\ntrain_classification_f1: 1.00000\nval_classification_f1: 1.00000\n\n"
],
[
"test_loss, test_acc, test_metric = trainer.validate(test_loader, 1)\nprint('test: {:.5f}, {:.5f}, {:.5f}'.format(test_loss, test_acc, test_metric))",
"test: 21.58886, 0.46500, 0.63500\n"
]
],
[
[
"# Train-Test model - k-fold and leave one subject out",
"_____no_output_____"
]
],
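[
[
"# Hedged sketch: once the loop below has filled subject_kfold_acc, per-subject results could be summarised\n# as mean and standard deviation across folds (illustration only; the notebook itself prints the raw dictionaries).\nimport numpy as np\n\ndef summarize_kfold(kfold_dict):\n    return {sid: (float(np.mean(v)), float(np.std(v))) for sid, v in kfold_dict.items()}\n\nprint(summarize_kfold({1: [0.94, 0.95, 0.94, 0.95]}))   # dummy numbers for illustration",
"_____no_output_____"
]
],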
[
[
"subject_kfold_acc = {}\nsubject_kfold_f1 = {}\n\ntest_subject_ids = list(np.arange(config['test_subject_ids']['low'], config['test_subject_ids']['high']+1, dtype=int))\n\nfor subject_id in test_subject_ids:\n print('Subject', subject_id)\n kfold_acc = []\n kfold_f1 = []\n \n for k in range(config['runkfold']):\n openbmi_data.split_by_kfold(kfold_k=k, kfold_split=config['runkfold'])\n train_loader, val_loader, test_loader = openbmi_data.leave_one_subject_out(selected_subject_id=subject_id, dataloader_batchsize=config['batchsize'])\n dataloaders_dict = {\n 'train': train_loader,\n 'val': val_loader\n }\n \n model = MultitaskSSVEP(num_channel=config['num_channel'],\n num_classes=config['num_classes'],\n signal_length=config['segment_config']['window_len'] * config['bandpass_config']['sample_rate'],\n filters_n1= config['model']['n1'],\n kernel_window_ssvep= config['model']['kernel_window_ssvep'],\n kernel_window= config['model']['kernel_window'],\n conv_3_dilation= config['model']['conv_3_dilation'],\n conv_4_dilation= config['model']['conv_4_dilation'],\n ).to(device)\n\n epochs=config['epochs'] if 'epochs' in config else 50\n patience=config['patience'] if 'patience' in config else 20\n early_stopping=config['early_stopping'] if 'early_stopping' in config else 40\n\n trainer = Multitask_Trainer(model, model_name=\"Network064b_1-8sub\", device=device, num_classes=config['num_classes'], multitask_learning=True, patience=patience, verbose=False)\n\n trainer.fit(dataloaders_dict, num_epochs=epochs, early_stopping=early_stopping, topk_accuracy=1, save_model=True)\n \n test_loss, test_acc, test_metric = trainer.validate(test_loader, 1)\n # print('test: {:.5f}, {:.5f}, {:.5f}'.format(test_loss, test_acc, test_metric))\n kfold_acc.append(test_acc)\n kfold_f1.append(test_metric)\n \n subject_kfold_acc[subject_id] = kfold_acc\n subject_kfold_f1[subject_id] = kfold_f1\n\nprint('results')\nprint('subject_kfold_acc', subject_kfold_acc)\nprint('subject_kfold_f1', subject_kfold_f1)",
"Subject 1\n\nTraining complete in 10m 43s\nEpoch with lowest val loss: 48\ntrain_loss: 1.84917\nval_loss: 2.08086\ntrain_acc: 0.95531\nval_acc: 0.94623\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 1.86716\nval_loss: 2.27328\ntrain_acc: 0.95519\nval_acc: 0.94151\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97000\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.76072\nval_loss: 2.21568\ntrain_acc: 0.95743\nval_acc: 0.94858\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97400\n\n\nTraining complete in 11m 50s\nEpoch with lowest val loss: 55\ntrain_loss: 1.86162\nval_loss: 2.16308\ntrain_acc: 0.95708\nval_acc: 0.94575\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97200\n\n\nTraining complete in 7m 31s\nEpoch with lowest val loss: 31\ntrain_loss: 2.02390\nval_loss: 2.03930\ntrain_acc: 0.94965\nval_acc: 0.94811\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97300\n\nSubject 2\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.06798\nval_loss: 2.15665\ntrain_acc: 0.94776\nval_acc: 0.94387\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 1.86636\nval_loss: 2.24235\ntrain_acc: 0.95601\nval_acc: 0.94151\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97000\n\n\nTraining complete in 11m 60s\nEpoch with lowest val loss: 56\ntrain_loss: 1.76648\nval_loss: 2.19006\ntrain_acc: 0.95802\nval_acc: 0.94575\ntrain_classification_f1: 0.97900\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 45s\nEpoch with lowest val loss: 49\ntrain_loss: 1.90669\nval_loss: 2.12420\ntrain_acc: 0.95094\nval_acc: 0.94906\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97400\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 1.89083\nval_loss: 2.01889\ntrain_acc: 0.95330\nval_acc: 0.94434\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\nSubject 3\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.02718\nval_loss: 2.12471\ntrain_acc: 0.95130\nval_acc: 0.94292\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 1.80016\nval_loss: 2.19608\ntrain_acc: 0.95672\nval_acc: 0.93915\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.96900\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 1.96834\nval_loss: 2.18685\ntrain_acc: 0.95059\nval_acc: 0.95000\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97400\n\n\nTraining complete in 12m 21s\nEpoch with lowest val loss: 58\ntrain_loss: 1.95315\nval_loss: 2.21583\ntrain_acc: 0.95142\nval_acc: 0.94340\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.85960\nval_loss: 2.10824\ntrain_acc: 0.95483\nval_acc: 0.94528\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\nSubject 4\n\nTraining complete in 7m 32s\nEpoch with lowest val loss: 31\ntrain_loss: 2.00084\nval_loss: 2.26098\ntrain_acc: 0.95165\nval_acc: 0.94009\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 10m 13s\nEpoch with lowest val loss: 46\ntrain_loss: 1.88073\nval_loss: 2.27427\ntrain_acc: 0.95542\nval_acc: 
0.93443\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96600\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.89022\nval_loss: 2.19850\ntrain_acc: 0.95035\nval_acc: 0.94340\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.07253\nval_loss: 2.29301\ntrain_acc: 0.94575\nval_acc: 0.94151\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97000\n\n\nTraining complete in 9m 20s\nEpoch with lowest val loss: 41\ntrain_loss: 1.87961\nval_loss: 2.04849\ntrain_acc: 0.95377\nval_acc: 0.94292\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\nSubject 5\n\nTraining complete in 10m 34s\nEpoch with lowest val loss: 48\ntrain_loss: 1.89065\nval_loss: 2.18497\ntrain_acc: 0.95425\nval_acc: 0.94198\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97000\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 2.00997\nval_loss: 2.26018\ntrain_acc: 0.95259\nval_acc: 0.94292\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\n\nTraining complete in 9m 20s\nEpoch with lowest val loss: 41\ntrain_loss: 1.82227\nval_loss: 2.14836\ntrain_acc: 0.95507\nval_acc: 0.94811\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97300\n\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.08192\nval_loss: 2.26354\ntrain_acc: 0.94953\nval_acc: 0.94717\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97300\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 1.92761\nval_loss: 2.00332\ntrain_acc: 0.95283\nval_acc: 0.94717\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\nSubject 6\n\nTraining complete in 8m 14s\nEpoch with lowest val loss: 35\ntrain_loss: 1.97558\nval_loss: 2.14628\ntrain_acc: 0.95212\nval_acc: 0.94434\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 13s\nEpoch with lowest val loss: 46\ntrain_loss: 1.83109\nval_loss: 2.28516\ntrain_acc: 0.95554\nval_acc: 0.94151\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.89614\nval_loss: 2.17314\ntrain_acc: 0.95059\nval_acc: 0.94811\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 37s\nEpoch with lowest val loss: 26\ntrain_loss: 2.03068\nval_loss: 2.28603\ntrain_acc: 0.94882\nval_acc: 0.94434\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 49s\nEpoch with lowest val loss: 27\ntrain_loss: 2.09173\nval_loss: 2.11630\ntrain_acc: 0.94764\nval_acc: 0.94434\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\nSubject 7\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.12465\nval_loss: 2.18852\ntrain_acc: 0.94458\nval_acc: 0.94198\ntrain_classification_f1: 0.97100\nval_classification_f1: 0.97000\n\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.10852\nval_loss: 2.36463\ntrain_acc: 0.94717\nval_acc: 0.93774\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96800\n\n\nTraining complete in 11m 38s\nEpoch with lowest val loss: 54\ntrain_loss: 1.86707\nval_loss: 2.24598\ntrain_acc: 0.95436\nval_acc: 0.94387\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.05875\nval_loss: 
2.21090\ntrain_acc: 0.95047\nval_acc: 0.94481\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 8m 57s\nEpoch with lowest val loss: 39\ntrain_loss: 1.98208\nval_loss: 2.11668\ntrain_acc: 0.95153\nval_acc: 0.94481\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\nSubject 8\n\nTraining complete in 8m 15s\nEpoch with lowest val loss: 35\ntrain_loss: 1.87134\nval_loss: 2.10575\ntrain_acc: 0.95448\nval_acc: 0.94387\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.90461\nval_loss: 2.31819\ntrain_acc: 0.95366\nval_acc: 0.93821\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96800\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.94987\nval_loss: 2.19403\ntrain_acc: 0.95071\nval_acc: 0.94858\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97400\n\n\nTraining complete in 4m 29s\nEpoch with lowest val loss: 14\ntrain_loss: 2.17415\nval_loss: 2.30415\ntrain_acc: 0.94670\nval_acc: 0.94623\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.90409\nval_loss: 2.11622\ntrain_acc: 0.95601\nval_acc: 0.94623\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97200\n\nSubject 9\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.13999\nval_loss: 2.24969\ntrain_acc: 0.94493\nval_acc: 0.94292\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97100\n\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.06739\nval_loss: 2.31982\ntrain_acc: 0.94906\nval_acc: 0.93491\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96600\n\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 2.02814\nval_loss: 2.25977\ntrain_acc: 0.94929\nval_acc: 0.94198\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 6m 39s\nEpoch with lowest val loss: 26\ntrain_loss: 2.05935\nval_loss: 2.23980\ntrain_acc: 0.94670\nval_acc: 0.94717\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97300\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.96724\nval_loss: 2.10456\ntrain_acc: 0.95342\nval_acc: 0.94387\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\nSubject 10\n\nTraining complete in 6m 16s\nEpoch with lowest val loss: 24\ntrain_loss: 2.17470\nval_loss: 2.25261\ntrain_acc: 0.94587\nval_acc: 0.94340\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97100\n\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.09233\nval_loss: 2.34956\ntrain_acc: 0.94858\nval_acc: 0.93774\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96800\n\n\nTraining complete in 13m 15s\nEpoch with lowest val loss: 63\ntrain_loss: 1.72038\nval_loss: 2.19018\ntrain_acc: 0.95908\nval_acc: 0.94717\ntrain_classification_f1: 0.97900\nval_classification_f1: 0.97300\n\n\nTraining complete in 8m 3s\nEpoch with lowest val loss: 34\ntrain_loss: 1.94526\nval_loss: 2.22440\ntrain_acc: 0.95189\nval_acc: 0.94858\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97400\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.82625\nval_loss: 2.00655\ntrain_acc: 0.95649\nval_acc: 0.94434\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97100\n\nSubject 11\n\nTraining complete in 5m 44s\nEpoch with lowest 
val loss: 21\ntrain_loss: 2.18259\nval_loss: 2.26333\ntrain_acc: 0.94575\nval_acc: 0.94245\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97000\n\n\nTraining complete in 11m 17s\nEpoch with lowest val loss: 52\ntrain_loss: 1.85138\nval_loss: 2.21545\ntrain_acc: 0.95519\nval_acc: 0.94387\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 2.13604\nval_loss: 2.34091\ntrain_acc: 0.94741\nval_acc: 0.94057\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96900\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.06150\nval_loss: 2.26156\ntrain_acc: 0.94953\nval_acc: 0.94245\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 9m 17s\nEpoch with lowest val loss: 41\ntrain_loss: 2.00644\nval_loss: 2.15161\ntrain_acc: 0.95059\nval_acc: 0.94104\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\nSubject 12\n\nTraining complete in 6m 16s\nEpoch with lowest val loss: 24\ntrain_loss: 2.17615\nval_loss: 2.28083\ntrain_acc: 0.94611\nval_acc: 0.94198\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 2.02124\nval_loss: 2.28889\ntrain_acc: 0.95094\nval_acc: 0.93538\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96700\n\n\nTraining complete in 8m 46s\nEpoch with lowest val loss: 38\ntrain_loss: 1.90945\nval_loss: 2.27482\ntrain_acc: 0.95354\nval_acc: 0.94670\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 5m 36s\nEpoch with lowest val loss: 20\ntrain_loss: 2.12949\nval_loss: 2.28412\ntrain_acc: 0.94717\nval_acc: 0.94575\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 1.88968\nval_loss: 2.08941\ntrain_acc: 0.95708\nval_acc: 0.94057\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.96900\n\nSubject 13\n\nTraining complete in 8m 14s\nEpoch with lowest val loss: 35\ntrain_loss: 1.92413\nval_loss: 2.02895\ntrain_acc: 0.95330\nval_acc: 0.94764\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.97660\nval_loss: 2.36608\ntrain_acc: 0.95236\nval_acc: 0.93679\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96700\n\n\nTraining complete in 8m 46s\nEpoch with lowest val loss: 38\ntrain_loss: 1.92704\nval_loss: 2.27758\ntrain_acc: 0.95377\nval_acc: 0.94528\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 12m 21s\nEpoch with lowest val loss: 58\ntrain_loss: 1.91722\nval_loss: 2.22829\ntrain_acc: 0.95165\nval_acc: 0.94764\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 1.95508\nval_loss: 2.02009\ntrain_acc: 0.95165\nval_acc: 0.94953\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97400\n\nSubject 14\n\nTraining complete in 8m 14s\nEpoch with lowest val loss: 35\ntrain_loss: 2.01514\nval_loss: 2.17097\ntrain_acc: 0.95189\nval_acc: 0.94245\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.94814\nval_loss: 2.31030\ntrain_acc: 0.95283\nval_acc: 0.93491\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96600\n\n\nTraining 
complete in 8m 3s\nEpoch with lowest val loss: 34\ntrain_loss: 1.98206\nval_loss: 2.27502\ntrain_acc: 0.94988\nval_acc: 0.94481\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 38s\nEpoch with lowest val loss: 26\ntrain_loss: 2.06765\nval_loss: 2.22348\ntrain_acc: 0.94847\nval_acc: 0.94670\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.00798\nval_loss: 2.05871\ntrain_acc: 0.95200\nval_acc: 0.94387\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\nSubject 15\n\nTraining complete in 6m 16s\nEpoch with lowest val loss: 24\ntrain_loss: 2.06726\nval_loss: 2.23588\ntrain_acc: 0.94882\nval_acc: 0.94481\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 5m 12s\nEpoch with lowest val loss: 18\ntrain_loss: 2.27326\nval_loss: 2.32845\ntrain_acc: 0.94517\nval_acc: 0.93585\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.96700\n\n\nTraining complete in 11m 27s\nEpoch with lowest val loss: 53\ntrain_loss: 1.80360\nval_loss: 2.15323\ntrain_acc: 0.95660\nval_acc: 0.94528\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 45s\nEpoch with lowest val loss: 49\ntrain_loss: 1.86329\nval_loss: 2.14589\ntrain_acc: 0.95436\nval_acc: 0.94670\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97300\n\n\nTraining complete in 9m 18s\nEpoch with lowest val loss: 41\ntrain_loss: 1.94704\nval_loss: 2.02971\ntrain_acc: 0.95354\nval_acc: 0.94764\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\nSubject 16\n\nTraining complete in 6m 16s\nEpoch with lowest val loss: 24\ntrain_loss: 2.19430\nval_loss: 2.29240\ntrain_acc: 0.94540\nval_acc: 0.94057\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.96900\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.97747\nval_loss: 2.28712\ntrain_acc: 0.94988\nval_acc: 0.93868\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96800\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.96984\nval_loss: 2.28598\ntrain_acc: 0.95118\nval_acc: 0.94764\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 10m 44s\nEpoch with lowest val loss: 49\ntrain_loss: 1.87551\nval_loss: 2.19521\ntrain_acc: 0.95460\nval_acc: 0.94528\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.88409\nval_loss: 2.03400\ntrain_acc: 0.95578\nval_acc: 0.94623\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\nSubject 17\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.13085\nval_loss: 2.21344\ntrain_acc: 0.94670\nval_acc: 0.94434\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 12m 11s\nEpoch with lowest val loss: 57\ntrain_loss: 1.77019\nval_loss: 2.22544\ntrain_acc: 0.95849\nval_acc: 0.93962\ntrain_classification_f1: 0.97900\nval_classification_f1: 0.96900\n\n\nTraining complete in 5m 23s\nEpoch with lowest val loss: 19\ntrain_loss: 2.05876\nval_loss: 2.17613\ntrain_acc: 0.94823\nval_acc: 0.94858\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97400\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 1.92788\nval_loss: 2.16861\ntrain_acc: 0.95153\nval_acc: 0.95047\ntrain_classification_f1: 
0.97500\nval_classification_f1: 0.97500\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.06832\nval_loss: 2.09106\ntrain_acc: 0.94988\nval_acc: 0.94340\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\nSubject 18\n\nTraining complete in 10m 34s\nEpoch with lowest val loss: 48\ntrain_loss: 1.86416\nval_loss: 2.26265\ntrain_acc: 0.95495\nval_acc: 0.94009\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96900\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.95865\nval_loss: 2.24911\ntrain_acc: 0.95200\nval_acc: 0.94292\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 34s\nEpoch with lowest val loss: 48\ntrain_loss: 1.93571\nval_loss: 2.29385\ntrain_acc: 0.95342\nval_acc: 0.94292\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 37s\nEpoch with lowest val loss: 26\ntrain_loss: 2.00250\nval_loss: 2.27768\ntrain_acc: 0.95224\nval_acc: 0.94623\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.10159\nval_loss: 2.14635\ntrain_acc: 0.94764\nval_acc: 0.94434\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\nSubject 19\n\nTraining complete in 6m 16s\nEpoch with lowest val loss: 24\ntrain_loss: 2.06298\nval_loss: 2.23848\ntrain_acc: 0.94906\nval_acc: 0.94245\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.95304\nval_loss: 2.32451\ntrain_acc: 0.95177\nval_acc: 0.93915\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 52s\nEpoch with lowest val loss: 33\ntrain_loss: 1.99410\nval_loss: 2.26710\ntrain_acc: 0.94953\nval_acc: 0.94434\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.06346\nval_loss: 2.23135\ntrain_acc: 0.94894\nval_acc: 0.94481\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 1.97530\nval_loss: 2.15020\ntrain_acc: 0.95165\nval_acc: 0.94434\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\nSubject 20\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.09376\nval_loss: 2.23937\ntrain_acc: 0.95153\nval_acc: 0.93726\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96800\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.89583\nval_loss: 2.28863\ntrain_acc: 0.95318\nval_acc: 0.93349\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96600\n\n\nTraining complete in 7m 53s\nEpoch with lowest val loss: 33\ntrain_loss: 1.99081\nval_loss: 2.28503\ntrain_acc: 0.95000\nval_acc: 0.94198\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.07528\nval_loss: 2.24874\ntrain_acc: 0.94729\nval_acc: 0.94575\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 12s\nEpoch with lowest val loss: 46\ntrain_loss: 2.00836\nval_loss: 2.07954\ntrain_acc: 0.95271\nval_acc: 0.94340\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\nSubject 21\n\nTraining complete in 8m 14s\nEpoch with lowest val loss: 35\ntrain_loss: 1.86312\nval_loss: 2.06950\ntrain_acc: 
0.95377\nval_acc: 0.94292\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.97466\nval_loss: 2.33170\ntrain_acc: 0.95200\nval_acc: 0.93915\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 2.01971\nval_loss: 2.28110\ntrain_acc: 0.94858\nval_acc: 0.94575\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 7m 31s\nEpoch with lowest val loss: 31\ntrain_loss: 1.94086\nval_loss: 2.15729\ntrain_acc: 0.95330\nval_acc: 0.94575\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.13503\nval_loss: 2.13340\ntrain_acc: 0.94599\nval_acc: 0.94245\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97000\n\nSubject 22\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.08204\nval_loss: 2.25012\ntrain_acc: 0.94953\nval_acc: 0.93774\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96800\n\n\nTraining complete in 8m 32s\nEpoch with lowest val loss: 36\ntrain_loss: 1.84874\nval_loss: 2.15827\ntrain_acc: 0.95271\nval_acc: 0.93726\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96800\n\n\nTraining complete in 8m 4s\nEpoch with lowest val loss: 33\ntrain_loss: 1.92272\nval_loss: 2.19263\ntrain_acc: 0.95177\nval_acc: 0.94575\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 13s\nEpoch with lowest val loss: 23\ntrain_loss: 2.05335\nval_loss: 2.20223\ntrain_acc: 0.95059\nval_acc: 0.94670\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 7m 9s\nEpoch with lowest val loss: 28\ntrain_loss: 2.00694\nval_loss: 2.06074\ntrain_acc: 0.95189\nval_acc: 0.94623\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\nSubject 23\n\nTraining complete in 10m 44s\nEpoch with lowest val loss: 48\ntrain_loss: 1.68279\nval_loss: 1.94773\ntrain_acc: 0.95861\nval_acc: 0.94858\ntrain_classification_f1: 0.97900\nval_classification_f1: 0.97400\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.66987\nval_loss: 2.03197\ntrain_acc: 0.95991\nval_acc: 0.94528\ntrain_classification_f1: 0.98000\nval_classification_f1: 0.97200\n\n\nTraining complete in 7m 52s\nEpoch with lowest val loss: 33\ntrain_loss: 1.77454\nval_loss: 2.06111\ntrain_acc: 0.95767\nval_acc: 0.94811\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 6s\nEpoch with lowest val loss: 23\ntrain_loss: 1.91294\nval_loss: 2.07909\ntrain_acc: 0.95448\nval_acc: 0.95236\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97600\n\n\nTraining complete in 10m 13s\nEpoch with lowest val loss: 46\ntrain_loss: 1.65080\nval_loss: 1.77756\ntrain_acc: 0.96167\nval_acc: 0.95613\ntrain_classification_f1: 0.98000\nval_classification_f1: 0.97800\n\nSubject 24\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.85612\nval_loss: 2.05831\ntrain_acc: 0.95165\nval_acc: 0.94387\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.96258\nval_loss: 2.31160\ntrain_acc: 0.95354\nval_acc: 0.94104\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 
1.97534\nval_loss: 2.26586\ntrain_acc: 0.95200\nval_acc: 0.94670\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 11m 38s\nEpoch with lowest val loss: 54\ntrain_loss: 1.83451\nval_loss: 2.21772\ntrain_acc: 0.95413\nval_acc: 0.94104\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 57s\nEpoch with lowest val loss: 39\ntrain_loss: 2.04837\nval_loss: 2.08138\ntrain_acc: 0.94941\nval_acc: 0.94387\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\nSubject 25\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.85303\nval_loss: 2.07921\ntrain_acc: 0.95330\nval_acc: 0.94387\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\n\nTraining complete in 12m 11s\nEpoch with lowest val loss: 57\ntrain_loss: 1.90807\nval_loss: 2.30378\ntrain_acc: 0.95566\nval_acc: 0.93868\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96800\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 2.03733\nval_loss: 2.27700\ntrain_acc: 0.95000\nval_acc: 0.94623\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 9m 40s\nEpoch with lowest val loss: 43\ntrain_loss: 1.83705\nval_loss: 2.18253\ntrain_acc: 0.95472\nval_acc: 0.94575\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\n\nTraining complete in 5m 44s\nEpoch with lowest val loss: 21\ntrain_loss: 2.12343\nval_loss: 2.09078\ntrain_acc: 0.94976\nval_acc: 0.94670\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97300\n\nSubject 26\n\nTraining complete in 9m 18s\nEpoch with lowest val loss: 41\ntrain_loss: 1.89918\nval_loss: 2.11686\ntrain_acc: 0.95401\nval_acc: 0.94245\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 2.06261\nval_loss: 2.36236\ntrain_acc: 0.95035\nval_acc: 0.93726\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96800\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 2.06026\nval_loss: 2.28667\ntrain_acc: 0.94858\nval_acc: 0.94481\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 9m 41s\nEpoch with lowest val loss: 43\ntrain_loss: 1.91239\nval_loss: 2.21337\ntrain_acc: 0.95307\nval_acc: 0.94811\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.03734\nval_loss: 2.10279\ntrain_acc: 0.95118\nval_acc: 0.94387\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\nSubject 27\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.94908\nval_loss: 2.20131\ntrain_acc: 0.95189\nval_acc: 0.94292\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 2.08331\nval_loss: 2.33918\ntrain_acc: 0.94847\nval_acc: 0.93962\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96900\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 2.00609\nval_loss: 2.28408\ntrain_acc: 0.95083\nval_acc: 0.94481\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 11m 38s\nEpoch with lowest val loss: 54\ntrain_loss: 1.88674\nval_loss: 2.19917\ntrain_acc: 0.95259\nval_acc: 0.94623\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 59s\nEpoch 
with lowest val loss: 28\ntrain_loss: 2.10280\nval_loss: 2.12126\ntrain_acc: 0.95000\nval_acc: 0.94151\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\nSubject 28\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 2.16485\nval_loss: 2.25946\ntrain_acc: 0.94906\nval_acc: 0.94198\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 28s\nEpoch with lowest val loss: 36\ntrain_loss: 1.97411\nval_loss: 2.30750\ntrain_acc: 0.95165\nval_acc: 0.93915\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 5m 59s\nEpoch with lowest val loss: 22\ntrain_loss: 2.01506\nval_loss: 2.30573\ntrain_acc: 0.95130\nval_acc: 0.94387\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 7m 32s\nEpoch with lowest val loss: 31\ntrain_loss: 1.90893\nval_loss: 2.16560\ntrain_acc: 0.95236\nval_acc: 0.95000\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97400\n\n\nTraining complete in 10m 13s\nEpoch with lowest val loss: 46\ntrain_loss: 1.79120\nval_loss: 2.04130\ntrain_acc: 0.95743\nval_acc: 0.94481\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97200\n\nSubject 29\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.93945\nval_loss: 2.06410\ntrain_acc: 0.95200\nval_acc: 0.94245\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 2.00823\nval_loss: 2.29549\ntrain_acc: 0.95142\nval_acc: 0.94151\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\n\nTraining complete in 5m 54s\nEpoch with lowest val loss: 22\ntrain_loss: 2.00325\nval_loss: 2.26247\ntrain_acc: 0.95177\nval_acc: 0.94670\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 7m 32s\nEpoch with lowest val loss: 31\ntrain_loss: 1.96393\nval_loss: 2.16367\ntrain_acc: 0.95283\nval_acc: 0.94717\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 5m 55s\nEpoch with lowest val loss: 22\ntrain_loss: 2.05242\nval_loss: 2.03933\ntrain_acc: 0.95035\nval_acc: 0.94623\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\nSubject 30\n\nTraining complete in 9m 19s\nEpoch with lowest val loss: 41\ntrain_loss: 1.96206\nval_loss: 2.11956\ntrain_acc: 0.95271\nval_acc: 0.94670\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 8m 29s\nEpoch with lowest val loss: 36\ntrain_loss: 2.04646\nval_loss: 2.35537\ntrain_acc: 0.95083\nval_acc: 0.93726\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 53s\nEpoch with lowest val loss: 33\ntrain_loss: 2.03386\nval_loss: 2.31950\ntrain_acc: 0.95177\nval_acc: 0.94340\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 9m 40s\nEpoch with lowest val loss: 43\ntrain_loss: 1.99227\nval_loss: 2.31316\ntrain_acc: 0.95224\nval_acc: 0.94575\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 1.93214\nval_loss: 2.09397\ntrain_acc: 0.95330\nval_acc: 0.94104\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\nSubject 31\n\nTraining complete in 10m 35s\nEpoch with lowest val loss: 48\ntrain_loss: 1.83174\nval_loss: 2.13866\ntrain_acc: 0.95554\nval_acc: 0.94387\ntrain_classification_f1: 0.97700\nval_classification_f1: 
0.97100\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.91310\nval_loss: 2.32612\ntrain_acc: 0.95472\nval_acc: 0.93726\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.00769\nval_loss: 2.26901\ntrain_acc: 0.94941\nval_acc: 0.94434\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 6s\nEpoch with lowest val loss: 23\ntrain_loss: 2.11149\nval_loss: 2.27286\ntrain_acc: 0.94776\nval_acc: 0.93821\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96800\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 1.97851\nval_loss: 2.05675\ntrain_acc: 0.94929\nval_acc: 0.94528\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\nSubject 32\n\nTraining complete in 7m 10s\nEpoch with lowest val loss: 29\ntrain_loss: 2.10652\nval_loss: 2.23938\ntrain_acc: 0.94835\nval_acc: 0.94009\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96900\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.98299\nval_loss: 2.27523\ntrain_acc: 0.95118\nval_acc: 0.94151\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.01830\nval_loss: 2.25927\ntrain_acc: 0.94917\nval_acc: 0.94670\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 38s\nEpoch with lowest val loss: 26\ntrain_loss: 2.03300\nval_loss: 2.22219\ntrain_acc: 0.94941\nval_acc: 0.94151\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 4s\nEpoch with lowest val loss: 34\ntrain_loss: 2.08598\nval_loss: 2.06572\ntrain_acc: 0.94988\nval_acc: 0.94528\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\nSubject 33\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 1.93924\nval_loss: 2.14514\ntrain_acc: 0.95118\nval_acc: 0.93868\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96800\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.97050\nval_loss: 2.35514\ntrain_acc: 0.95318\nval_acc: 0.93349\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96600\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.01242\nval_loss: 2.25217\ntrain_acc: 0.94906\nval_acc: 0.94575\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 7m 32s\nEpoch with lowest val loss: 31\ntrain_loss: 2.11237\nval_loss: 2.29477\ntrain_acc: 0.94658\nval_acc: 0.94434\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 1.89298\nval_loss: 2.02948\ntrain_acc: 0.95200\nval_acc: 0.94481\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\nSubject 34\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 1.97671\nval_loss: 2.05549\ntrain_acc: 0.95177\nval_acc: 0.94387\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 4m 29s\nEpoch with lowest val loss: 14\ntrain_loss: 2.07772\nval_loss: 2.28845\ntrain_acc: 0.95024\nval_acc: 0.93821\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 53s\nEpoch with lowest val loss: 33\ntrain_loss: 1.85994\nval_loss: 2.19973\ntrain_acc: 0.95354\nval_acc: 0.94575\ntrain_classification_f1: 
0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 6s\nEpoch with lowest val loss: 23\ntrain_loss: 2.01757\nval_loss: 2.13396\ntrain_acc: 0.95047\nval_acc: 0.94717\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 1.86332\nval_loss: 1.97926\ntrain_acc: 0.95413\nval_acc: 0.94387\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\nSubject 35\n\nTraining complete in 7m 10s\nEpoch with lowest val loss: 29\ntrain_loss: 1.93733\nval_loss: 2.16360\ntrain_acc: 0.95448\nval_acc: 0.94245\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.96380\nval_loss: 2.29391\ntrain_acc: 0.95130\nval_acc: 0.93962\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 1.96528\nval_loss: 2.23424\ntrain_acc: 0.95035\nval_acc: 0.94623\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 11m 18s\nEpoch with lowest val loss: 52\ntrain_loss: 1.90866\nval_loss: 2.22331\ntrain_acc: 0.95295\nval_acc: 0.94387\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\n\nTraining complete in 14m 32s\nEpoch with lowest val loss: 70\ntrain_loss: 1.80382\nval_loss: 2.02345\ntrain_acc: 0.95767\nval_acc: 0.94292\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97100\n\nSubject 36\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 2.14317\nval_loss: 2.17403\ntrain_acc: 0.94611\nval_acc: 0.94717\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97300\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 2.03595\nval_loss: 2.38928\ntrain_acc: 0.95271\nval_acc: 0.93726\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.09037\nval_loss: 2.31140\ntrain_acc: 0.94729\nval_acc: 0.94575\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 11m 18s\nEpoch with lowest val loss: 52\ntrain_loss: 1.98510\nval_loss: 2.28630\ntrain_acc: 0.95224\nval_acc: 0.94104\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 2.09580\nval_loss: 2.07420\ntrain_acc: 0.95071\nval_acc: 0.94575\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\nSubject 37\n\nTraining complete in 7m 11s\nEpoch with lowest val loss: 29\ntrain_loss: 2.01923\nval_loss: 2.22679\ntrain_acc: 0.95295\nval_acc: 0.94009\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.07784\nval_loss: 2.34277\ntrain_acc: 0.94823\nval_acc: 0.93868\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.14082\nval_loss: 2.30513\ntrain_acc: 0.94599\nval_acc: 0.94858\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97400\n\n\nTraining complete in 8m 4s\nEpoch with lowest val loss: 34\ntrain_loss: 2.17510\nval_loss: 2.23790\ntrain_acc: 0.94623\nval_acc: 0.94104\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97000\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.04024\nval_loss: 2.07873\ntrain_acc: 
0.94917\nval_acc: 0.94575\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\nSubject 38\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 2.14594\nval_loss: 2.27666\ntrain_acc: 0.94729\nval_acc: 0.94292\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 45\ntrain_loss: 1.88138\nval_loss: 2.28811\ntrain_acc: 0.95460\nval_acc: 0.93962\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 1.92746\nval_loss: 2.32168\ntrain_acc: 0.95483\nval_acc: 0.94292\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.09560\nval_loss: 2.25586\ntrain_acc: 0.94670\nval_acc: 0.94151\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97000\n\n\nTraining complete in 7m 11s\nEpoch with lowest val loss: 29\ntrain_loss: 2.15409\nval_loss: 2.14579\ntrain_acc: 0.94705\nval_acc: 0.94387\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\nSubject 39\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 2.13939\nval_loss: 2.19996\ntrain_acc: 0.94670\nval_acc: 0.94387\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 45\ntrain_loss: 1.96071\nval_loss: 2.32420\ntrain_acc: 0.95200\nval_acc: 0.93915\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 42s\nEpoch with lowest val loss: 32\ntrain_loss: 1.94753\nval_loss: 2.23775\ntrain_acc: 0.95142\nval_acc: 0.94481\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 7m 32s\nEpoch with lowest val loss: 31\ntrain_loss: 2.03488\nval_loss: 2.26753\ntrain_acc: 0.94906\nval_acc: 0.94528\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 9m 1s\nEpoch with lowest val loss: 39\ntrain_loss: 1.98153\nval_loss: 2.06361\ntrain_acc: 0.95224\nval_acc: 0.94387\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97100\n\nSubject 40\n\nTraining complete in 10m 35s\nEpoch with lowest val loss: 48\ntrain_loss: 1.89026\nval_loss: 2.10339\ntrain_acc: 0.95259\nval_acc: 0.94151\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 1.96516\nval_loss: 2.29651\ntrain_acc: 0.95212\nval_acc: 0.93726\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 1.99566\nval_loss: 2.25800\ntrain_acc: 0.95047\nval_acc: 0.94387\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 4s\nEpoch with lowest val loss: 34\ntrain_loss: 2.05209\nval_loss: 2.26324\ntrain_acc: 0.94835\nval_acc: 0.94009\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96900\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.00128\nval_loss: 2.06279\ntrain_acc: 0.95000\nval_acc: 0.94528\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\nSubject 41\n\nTraining complete in 7m 10s\nEpoch with lowest val loss: 29\ntrain_loss: 2.07796\nval_loss: 2.21192\ntrain_acc: 0.94764\nval_acc: 0.94575\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 
45\ntrain_loss: 1.91777\nval_loss: 2.38125\ntrain_acc: 0.95495\nval_acc: 0.93679\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96700\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.07051\nval_loss: 2.23373\ntrain_acc: 0.94741\nval_acc: 0.94481\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 5m 2s\nEpoch with lowest val loss: 17\ntrain_loss: 2.25794\nval_loss: 2.35048\ntrain_acc: 0.94575\nval_acc: 0.94387\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 1.94659\nval_loss: 2.00373\ntrain_acc: 0.95425\nval_acc: 0.94623\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\nSubject 42\n\nTraining complete in 6m 28s\nEpoch with lowest val loss: 25\ntrain_loss: 1.98680\nval_loss: 2.15259\ntrain_acc: 0.94965\nval_acc: 0.94292\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 12m 1s\nEpoch with lowest val loss: 56\ntrain_loss: 1.88257\nval_loss: 2.26464\ntrain_acc: 0.95448\nval_acc: 0.93962\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 42s\nEpoch with lowest val loss: 32\ntrain_loss: 1.86807\nval_loss: 2.17193\ntrain_acc: 0.95460\nval_acc: 0.94434\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 5s\nEpoch with lowest val loss: 34\ntrain_loss: 2.02293\nval_loss: 2.19379\ntrain_acc: 0.95071\nval_acc: 0.94481\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 1.89203\nval_loss: 1.88617\ntrain_acc: 0.95318\nval_acc: 0.95047\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97500\n\nSubject 43\n\nTraining complete in 10m 2s\nEpoch with lowest val loss: 45\ntrain_loss: 2.01895\nval_loss: 2.19542\ntrain_acc: 0.95153\nval_acc: 0.94434\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 1.91593\nval_loss: 2.28763\ntrain_acc: 0.95507\nval_acc: 0.93915\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.08390\nval_loss: 2.24762\ntrain_acc: 0.94693\nval_acc: 0.94623\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 11m 18s\nEpoch with lowest val loss: 52\ntrain_loss: 1.86603\nval_loss: 2.26587\ntrain_acc: 0.95519\nval_acc: 0.94292\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 36s\nEpoch with lowest val loss: 37\ntrain_loss: 1.95101\nval_loss: 1.98509\ntrain_acc: 0.95130\nval_acc: 0.94717\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97300\n\nSubject 44\n\nTraining complete in 6m 28s\nEpoch with lowest val loss: 25\ntrain_loss: 2.07190\nval_loss: 2.19753\ntrain_acc: 0.94764\nval_acc: 0.94151\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97000\n\n\nTraining complete in 12m 1s\nEpoch with lowest val loss: 56\ntrain_loss: 1.90153\nval_loss: 2.31091\ntrain_acc: 0.95259\nval_acc: 0.93868\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.07245\nval_loss: 2.25912\ntrain_acc: 0.94870\nval_acc: 0.94387\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 
11m 39s\nEpoch with lowest val loss: 54\ntrain_loss: 1.94377\nval_loss: 2.28289\ntrain_acc: 0.95271\nval_acc: 0.94104\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 2.06638\nval_loss: 2.17157\ntrain_acc: 0.94729\nval_acc: 0.93962\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96900\n\nSubject 45\n\nTraining complete in 10m 46s\nEpoch with lowest val loss: 49\ntrain_loss: 1.91760\nval_loss: 2.11815\ntrain_acc: 0.95271\nval_acc: 0.94481\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 2s\nEpoch with lowest val loss: 45\ntrain_loss: 1.77465\nval_loss: 2.23338\ntrain_acc: 0.95755\nval_acc: 0.94009\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.96900\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 1.94151\nval_loss: 2.30108\ntrain_acc: 0.95083\nval_acc: 0.94575\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\n\nTraining complete in 11m 18s\nEpoch with lowest val loss: 52\ntrain_loss: 1.89915\nval_loss: 2.18705\ntrain_acc: 0.95354\nval_acc: 0.94906\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97400\n\n\nTraining complete in 11m 29s\nEpoch with lowest val loss: 53\ntrain_loss: 1.98294\nval_loss: 2.04615\ntrain_acc: 0.95224\nval_acc: 0.94481\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\nSubject 46\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 1.96679\nval_loss: 2.04814\ntrain_acc: 0.95024\nval_acc: 0.94717\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97300\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 45\ntrain_loss: 2.05038\nval_loss: 2.30992\ntrain_acc: 0.94941\nval_acc: 0.94245\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 1.93174\nval_loss: 2.27553\ntrain_acc: 0.95189\nval_acc: 0.94198\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.09435\nval_loss: 2.32570\ntrain_acc: 0.94741\nval_acc: 0.94340\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 10m 2s\nEpoch with lowest val loss: 45\ntrain_loss: 2.00710\nval_loss: 2.08388\ntrain_acc: 0.95165\nval_acc: 0.94151\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\nSubject 47\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.81818\nval_loss: 1.95758\ntrain_acc: 0.95448\nval_acc: 0.94623\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 45\ntrain_loss: 1.87754\nval_loss: 2.17874\ntrain_acc: 0.95708\nval_acc: 0.94151\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97000\n\n\nTraining complete in 7m 0s\nEpoch with lowest val loss: 28\ntrain_loss: 1.87398\nval_loss: 2.16006\ntrain_acc: 0.95389\nval_acc: 0.94670\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 11m 18s\nEpoch with lowest val loss: 52\ntrain_loss: 1.80279\nval_loss: 2.14037\ntrain_acc: 0.95649\nval_acc: 0.94811\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97300\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 45\ntrain_loss: 1.83104\nval_loss: 1.87122\ntrain_acc: 0.95542\nval_acc: 0.94528\ntrain_classification_f1: 
0.97700\nval_classification_f1: 0.97200\n\nSubject 48\n\nTraining complete in 7m 22s\nEpoch with lowest val loss: 30\ntrain_loss: 2.06856\nval_loss: 2.17438\ntrain_acc: 0.94941\nval_acc: 0.93962\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96900\n\n\nTraining complete in 12m 1s\nEpoch with lowest val loss: 56\ntrain_loss: 1.91915\nval_loss: 2.20246\ntrain_acc: 0.95106\nval_acc: 0.94009\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96900\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.07985\nval_loss: 2.25860\ntrain_acc: 0.94917\nval_acc: 0.94340\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 2.07300\nval_loss: 2.29473\ntrain_acc: 0.94988\nval_acc: 0.94057\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.96900\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.03546\nval_loss: 2.10204\ntrain_acc: 0.95047\nval_acc: 0.94245\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97000\n\nSubject 49\n\nTraining complete in 7m 21s\nEpoch with lowest val loss: 30\ntrain_loss: 2.15434\nval_loss: 2.32865\ntrain_acc: 0.94823\nval_acc: 0.93774\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96800\n\n\nTraining complete in 10m 3s\nEpoch with lowest val loss: 45\ntrain_loss: 1.93978\nval_loss: 2.27477\ntrain_acc: 0.95224\nval_acc: 0.93538\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.96700\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 1.95776\nval_loss: 2.32246\ntrain_acc: 0.94976\nval_acc: 0.94198\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.07589\nval_loss: 2.28676\ntrain_acc: 0.94835\nval_acc: 0.94670\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97300\n\n\nTraining complete in 7m 0s\nEpoch with lowest val loss: 28\ntrain_loss: 2.18104\nval_loss: 2.21443\ntrain_acc: 0.94623\nval_acc: 0.94151\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97000\n\nSubject 50\n\nTraining complete in 10m 46s\nEpoch with lowest val loss: 49\ntrain_loss: 1.90365\nval_loss: 2.14873\ntrain_acc: 0.95354\nval_acc: 0.94528\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 38s\nEpoch with lowest val loss: 26\ntrain_loss: 2.02498\nval_loss: 2.32541\ntrain_acc: 0.95153\nval_acc: 0.93726\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.02974\nval_loss: 2.27546\ntrain_acc: 0.94953\nval_acc: 0.94292\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 59s\nEpoch with lowest val loss: 28\ntrain_loss: 2.03050\nval_loss: 2.21714\ntrain_acc: 0.94917\nval_acc: 0.94623\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 10m 45s\nEpoch with lowest val loss: 49\ntrain_loss: 1.96519\nval_loss: 2.03800\ntrain_acc: 0.95248\nval_acc: 0.94670\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\nSubject 51\n\nTraining complete in 6m 27s\nEpoch with lowest val loss: 25\ntrain_loss: 1.93717\nval_loss: 2.03958\ntrain_acc: 0.94976\nval_acc: 0.94623\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 38s\nEpoch with lowest val loss: 26\ntrain_loss: 1.88975\nval_loss: 2.22746\ntrain_acc: 
0.95354\nval_acc: 0.94198\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 47s\nEpoch with lowest val loss: 38\ntrain_loss: 1.94700\nval_loss: 2.22584\ntrain_acc: 0.95342\nval_acc: 0.94811\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97300\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 1.85267\nval_loss: 2.09835\ntrain_acc: 0.95366\nval_acc: 0.94953\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97400\n\n\nTraining complete in 10m 24s\nEpoch with lowest val loss: 47\ntrain_loss: 1.83635\nval_loss: 2.03962\ntrain_acc: 0.95519\nval_acc: 0.94717\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97300\n\nSubject 52\n\nTraining complete in 8m 25s\nEpoch with lowest val loss: 36\ntrain_loss: 1.86224\nval_loss: 2.05764\ntrain_acc: 0.95542\nval_acc: 0.94670\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.97300\n\n\nTraining complete in 12m 2s\nEpoch with lowest val loss: 56\ntrain_loss: 1.87232\nval_loss: 2.24422\ntrain_acc: 0.95448\nval_acc: 0.93868\ntrain_classification_f1: 0.97700\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.11399\nval_loss: 2.29197\ntrain_acc: 0.94587\nval_acc: 0.94292\ntrain_classification_f1: 0.97200\nval_classification_f1: 0.97100\n\n\nTraining complete in 6m 60s\nEpoch with lowest val loss: 28\ntrain_loss: 2.07405\nval_loss: 2.27304\ntrain_acc: 0.94858\nval_acc: 0.94245\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97000\n\n\nTraining complete in 8m 58s\nEpoch with lowest val loss: 39\ntrain_loss: 2.04124\nval_loss: 2.05182\ntrain_acc: 0.94894\nval_acc: 0.94292\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97100\n\nSubject 53\n\nTraining complete in 8m 16s\nEpoch with lowest val loss: 35\ntrain_loss: 2.10538\nval_loss: 2.22310\ntrain_acc: 0.94882\nval_acc: 0.94481\ntrain_classification_f1: 0.97400\nval_classification_f1: 0.97200\n\n\nTraining complete in 6m 39s\nEpoch with lowest val loss: 26\ntrain_loss: 2.12430\nval_loss: 2.38684\ntrain_acc: 0.94835\nval_acc: 0.93726\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.96800\n\n\nTraining complete in 7m 43s\nEpoch with lowest val loss: 32\ntrain_loss: 2.14493\nval_loss: 2.33198\ntrain_acc: 0.94705\nval_acc: 0.94340\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97100\n\n\nTraining complete in 7m 1s\nEpoch with lowest val loss: 28\ntrain_loss: 2.11463\nval_loss: 2.23129\ntrain_acc: 0.94800\nval_acc: 0.94528\ntrain_classification_f1: 0.97300\nval_classification_f1: 0.97200\n\n\nTraining complete in 9m 53s\nEpoch with lowest val loss: 44\ntrain_loss: 2.04780\nval_loss: 2.11065\ntrain_acc: 0.95035\nval_acc: 0.94623\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97200\n\nSubject 54\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 1.77589\nval_loss: 2.07137\ntrain_acc: 0.95849\nval_acc: 0.94387\ntrain_classification_f1: 0.97900\nval_classification_f1: 0.97100\n\n\nTraining complete in 12m 1s\nEpoch with lowest val loss: 56\ntrain_loss: 1.78864\nval_loss: 2.14116\ntrain_acc: 0.95660\nval_acc: 0.94245\ntrain_classification_f1: 0.97800\nval_classification_f1: 0.97000\n\n\nTraining complete in 10m 14s\nEpoch with lowest val loss: 46\ntrain_loss: 1.82791\nval_loss: 2.11005\ntrain_acc: 0.95318\nval_acc: 0.94906\ntrain_classification_f1: 0.97600\nval_classification_f1: 0.97400\n\n\nTraining complete in 8m 26s\nEpoch with lowest val loss: 36\ntrain_loss: 
1.74762\nval_loss: 2.02867\ntrain_acc: 0.95908\nval_acc: 0.95236\ntrain_classification_f1: 0.97900\nval_classification_f1: 0.97600\n\n\nTraining complete in 6m 28s\nEpoch with lowest val loss: 25\ntrain_loss: 2.05408\nval_loss: 1.97447\ntrain_acc: 0.95094\nval_acc: 0.94906\ntrain_classification_f1: 0.97500\nval_classification_f1: 0.97400\n\nresults\nsubject_kfold_acc {1: [0.905, 0.91, 0.905, 0.905, 0.905], 2: [0.96, 0.96, 0.955, 0.965, 0.97], 3: [0.955, 0.95, 0.965, 0.94, 0.945], 4: [0.99, 0.99, 0.99, 1.0, 0.995], 5: [0.875, 0.89, 0.895, 0.89, 0.875], 6: [0.985, 0.98, 0.96, 0.99, 0.98], 7: [0.995, 0.995, 0.995, 0.995, 0.995], 8: [0.935, 0.915, 0.94, 0.945, 0.915], 9: [0.98, 0.975, 0.97, 0.98, 0.98], 10: [0.95, 0.935, 0.935, 0.955, 0.965], 11: [1.0, 1.0, 1.0, 1.0, 1.0], 12: [0.97, 0.955, 0.955, 0.96, 0.95], 13: [0.93, 0.915, 0.935, 0.92, 0.91], 14: [0.905, 0.92, 0.9, 0.925, 0.94], 15: [0.94, 0.93, 0.95, 0.935, 0.935], 16: [0.95, 0.965, 0.96, 0.97, 0.96], 17: [0.875, 0.895, 0.885, 0.87, 0.88], 18: [1.0, 1.0, 1.0, 1.0, 1.0], 19: [0.94, 0.935, 0.935, 0.94, 0.935], 20: [0.945, 0.945, 0.955, 0.955, 0.955], 21: [0.95, 0.945, 0.935, 0.945, 0.935], 22: [0.875, 0.89, 0.895, 0.88, 0.91], 23: [0.435, 0.39, 0.39, 0.47, 0.415], 24: [0.955, 0.95, 0.955, 0.94, 0.96], 25: [0.925, 0.93, 0.935, 0.93, 0.94], 26: [0.97, 0.97, 0.975, 0.965, 0.97], 27: [0.985, 0.99, 0.985, 0.985, 0.985], 28: [0.98, 0.97, 0.975, 0.97, 0.97], 29: [0.95, 0.94, 0.945, 0.945, 0.955], 30: [1.0, 0.995, 0.98, 0.99, 0.99], 31: [0.96, 0.95, 0.965, 0.96, 0.975], 32: [0.835, 0.775, 0.88, 0.805, 0.865], 33: [0.975, 0.985, 0.99, 0.98, 0.99], 34: [0.81, 0.795, 0.82, 0.76, 0.815], 35: [0.86, 0.885, 0.865, 0.875, 0.895], 36: [1.0, 1.0, 1.0, 1.0, 1.0], 37: [0.995, 0.995, 0.995, 0.995, 0.995], 38: [0.98, 0.98, 0.98, 0.98, 0.98], 39: [0.98, 0.985, 0.975, 0.985, 0.985], 40: [0.96, 0.96, 0.955, 0.955, 0.96], 41: [0.965, 0.965, 0.97, 0.965, 0.96], 42: [0.825, 0.815, 0.825, 0.815, 0.815], 43: [1.0, 1.0, 1.0, 1.0, 1.0], 44: [1.0, 1.0, 0.995, 1.0, 1.0], 45: [0.945, 0.94, 0.95, 0.955, 0.955], 46: [0.985, 0.985, 0.99, 0.995, 0.98], 47: [0.78, 0.76, 0.77, 0.765, 0.785], 48: [0.98, 0.985, 0.975, 0.985, 0.975], 49: [0.975, 0.98, 0.985, 0.985, 0.98], 50: [0.915, 0.93, 0.915, 0.9, 0.9], 51: [0.765, 0.775, 0.75, 0.79, 0.765], 52: [0.965, 0.97, 0.955, 0.965, 0.97], 53: [0.995, 1.0, 0.995, 0.995, 1.0], 54: [0.7, 0.7, 0.685, 0.705, 0.68]}\nsubject_kfold_f1 {1: [0.95, 0.953, 0.95, 0.95, 0.95], 2: [0.98, 0.98, 0.977, 0.982, 0.985], 3: [0.977, 0.974, 0.982, 0.969, 0.972], 4: [0.995, 0.995, 0.995, 1.0, 0.997], 5: [0.933, 0.942, 0.945, 0.942, 0.933], 6: [0.992, 0.99, 0.98, 0.995, 0.99], 7: [0.997, 0.997, 0.997, 0.997, 0.997], 8: [0.966, 0.956, 0.969, 0.972, 0.956], 9: [0.99, 0.987, 0.985, 0.99, 0.99], 10: [0.974, 0.966, 0.966, 0.977, 0.982], 11: [1.0, 1.0, 1.0, 1.0, 1.0], 12: [0.985, 0.977, 0.977, 0.98, 0.974], 13: [0.964, 0.956, 0.966, 0.958, 0.953], 14: [0.95, 0.958, 0.947, 0.961, 0.969], 15: [0.969, 0.964, 0.974, 0.966, 0.966], 16: [0.974, 0.982, 0.98, 0.985, 0.98], 17: [0.933, 0.945, 0.939, 0.93, 0.936], 18: [1.0, 1.0, 1.0, 1.0, 1.0], 19: [0.969, 0.966, 0.966, 0.969, 0.966], 20: [0.972, 0.972, 0.977, 0.977, 0.977], 21: [0.974, 0.972, 0.966, 0.972, 0.966], 22: [0.933, 0.942, 0.945, 0.936, 0.953], 23: [0.606, 0.561, 0.561, 0.639, 0.587], 24: [0.977, 0.974, 0.977, 0.969, 0.98], 25: [0.961, 0.964, 0.966, 0.964, 0.969], 26: [0.985, 0.985, 0.987, 0.982, 0.985], 27: [0.992, 0.995, 0.992, 0.992, 0.992], 28: [0.99, 0.985, 0.987, 0.985, 0.985], 29: [0.974, 0.969, 0.972, 
0.972, 0.977], 30: [1.0, 0.997, 0.99, 0.995, 0.995], 31: [0.98, 0.974, 0.982, 0.98, 0.987], 32: [0.91, 0.873, 0.936, 0.892, 0.928], 33: [0.987, 0.992, 0.995, 0.99, 0.995], 34: [0.895, 0.886, 0.901, 0.864, 0.898], 35: [0.925, 0.939, 0.928, 0.933, 0.945], 36: [1.0, 1.0, 1.0, 1.0, 1.0], 37: [0.997, 0.997, 0.997, 0.997, 0.997], 38: [0.99, 0.99, 0.99, 0.99, 0.99], 39: [0.99, 0.992, 0.987, 0.992, 0.992], 40: [0.98, 0.98, 0.977, 0.977, 0.98], 41: [0.982, 0.982, 0.985, 0.982, 0.98], 42: [0.904, 0.898, 0.904, 0.898, 0.898], 43: [1.0, 1.0, 1.0, 1.0, 1.0], 44: [1.0, 1.0, 0.997, 1.0, 1.0], 45: [0.972, 0.969, 0.974, 0.977, 0.977], 46: [0.992, 0.992, 0.995, 0.997, 0.99], 47: [0.876, 0.864, 0.87, 0.867, 0.88], 48: [0.99, 0.992, 0.987, 0.992, 0.987], 49: [0.987, 0.99, 0.992, 0.992, 0.99], 50: [0.956, 0.964, 0.956, 0.947, 0.947], 51: [0.867, 0.873, 0.857, 0.883, 0.867], 52: [0.982, 0.985, 0.977, 0.982, 0.985], 53: [0.997, 1.0, 0.997, 0.997, 1.0], 54: [0.824, 0.824, 0.813, 0.827, 0.81]}\n"
],
[
"# acc\nsubjects = []\nacc = []\nacc_min = 1.0\nacc_max = 0.0\n\nfor subject_id in subject_kfold_acc:\n subjects.append(subject_id)\n avg_acc = np.mean(subject_kfold_acc[subject_id])\n if avg_acc < acc_min:\n acc_min = avg_acc\n if avg_acc > acc_max:\n acc_max = avg_acc\n acc.append(avg_acc)\n\n\nx_pos = [i for i, _ in enumerate(subjects)]\nfigure(num=None, figsize=(15, 3), dpi=80, facecolor='w', edgecolor='k')\nplt.bar(x_pos, acc, color='skyblue')\nplt.xlabel(\"Subject\")\nplt.ylabel(\"Accuracies\")\nplt.title(\"Average k-fold Accuracies by subjects\")\nplt.xticks(x_pos, subjects)\nplt.ylim([acc_min-0.02, acc_max+0.02])\nplt.show()\n\n# f1\nsubjects = []\nf1 = []\nf1_min = 1.0\nf1_max = 0.0\n\nfor subject_id in subject_kfold_f1:\n subjects.append(subject_id)\n avg_f1 = np.mean(subject_kfold_f1[subject_id])\n if avg_f1 < f1_min:\n f1_min = avg_f1\n if avg_f1 > f1_max:\n f1_max = avg_f1\n f1.append(avg_f1)\n\n\nx_pos = [i for i, _ in enumerate(subjects)]\nfigure(num=None, figsize=(15, 3), dpi=80, facecolor='w', edgecolor='k')\nplt.bar(x_pos, f1, color='skyblue')\nplt.xlabel(\"Subject\")\nplt.ylabel(\"Accuracies\")\nplt.title(\"Average k-fold F1 by subjects\")\nplt.xticks(x_pos, subjects)\nplt.ylim([f1_min-0.02, f1_max+0.02])\nplt.show()",
"_____no_output_____"
],
[
"print('Average acc:', np.mean(acc))\nprint('Average f1:', np.mean(f1))",
"Average acc: 0.9287592592592593\nAverage f1: 0.9598666666666666\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb83245d9862a047b3c6ab4d870fda1a5d5afe09
| 356,913 |
ipynb
|
Jupyter Notebook
|
notebooks/01c_EDA_HMDA_draft2_ak.ipynb
|
georgetown-analytics/Paddle-Your-Loan-Canoe
|
62b40c28ea9ff757a4fbcd31f6e899670f35df6b
|
[
"MIT"
] | null | null | null |
notebooks/01c_EDA_HMDA_draft2_ak.ipynb
|
georgetown-analytics/Paddle-Your-Loan-Canoe
|
62b40c28ea9ff757a4fbcd31f6e899670f35df6b
|
[
"MIT"
] | null | null | null |
notebooks/01c_EDA_HMDA_draft2_ak.ipynb
|
georgetown-analytics/Paddle-Your-Loan-Canoe
|
62b40c28ea9ff757a4fbcd31f6e899670f35df6b
|
[
"MIT"
] | null | null | null | 325.353692 | 211,728 | 0.910424 |
[
[
[
"**Exploratory Data Analysis for HMDA**\n\nIdeas: \nOutcome Variable,\nQuantity of Filers,\nProperty Type,\nLoan Type",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport os\nimport requests\nimport matplotlib\nimport numpy as np\nimport pandas as pd \nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom pandas.plotting import scatter_matrix",
"_____no_output_____"
]
],
[
[
"**Load data into a dataframe:**",
"_____no_output_____"
]
],
[
[
"filepath = os.path.abspath(os.path.join( \"..\", \"fixtures\", \"hmda2017sample.csv\"))\nDATA = pd.read_csv(filepath, low_memory=False)",
"_____no_output_____"
],
[
"DATA.head() ",
"_____no_output_____"
],
[
"DATA = DATA.drop(DATA.columns[0], axis=1)",
"_____no_output_____"
]
],
[
[
"**Summary statistics:**",
"_____no_output_____"
]
],
[
[
"DATA.describe()",
"_____no_output_____"
]
],
[
[
"**Create a binary outcome variable, 'action_taken'**",
"_____no_output_____"
]
],
[
[
"DATA['action_taken'] = DATA.action_taken_name.apply(lambda x: 1 if x in ['Loan purchased by the institution', 'Loan originated'] else 0)\npd.crosstab(DATA['action_taken_name'],DATA['action_taken'], margins=True)",
"_____no_output_____"
]
],
[
[
"#### Making a box plot:",
"_____no_output_____"
]
],
[
[
"matplotlib.style.use('ggplot')",
"_____no_output_____"
],
[
"DATA[[ 'population', \n 'number_of_owner_occupied_units', \n 'number_of_1_to_4_family_units', \n 'loan_amount_000s', \n 'applicant_income_000s' \n ]].plot(kind='box',figsize=(20,10))",
"_____no_output_____"
],
[
"DATA[[ 'population', \n 'number_of_owner_occupied_units', \n 'number_of_1_to_4_family_units', \n 'loan_amount_000s', \n 'applicant_income_000s' \n ]].hist(figsize=(20,10)) # Histogram for all features",
"_____no_output_____"
]
],
[
[
"#### Visualizing the distribution with a kernel density estimate:",
"_____no_output_____"
]
],
[
[
"DATA['number_of_1_to_4_family_units'].plot(kind='kde')",
"_____no_output_____"
]
],
[
[
"#### Making a scatter plot matrix:",
"_____no_output_____"
]
],
[
[
"DATA_targ_numeric = DATA[['action_taken',\n 'tract_to_msamd_income', \n 'population', \n 'minority_population', \n 'number_of_owner_occupied_units', \n 'number_of_1_to_4_family_units', \n 'loan_amount_000s', \n 'hud_median_family_income', \n 'applicant_income_000s' \n ]]",
"_____no_output_____"
],
[
"# Extract our X and y data\nX = DATA_targ_numeric[:-1]\ny = DATA_targ_numeric['action_taken']\n# Create a scatter matrix of the dataframe features\nfrom pandas.plotting import scatter_matrix\nscatter_matrix = scatter_matrix(X, alpha=0.2, figsize=(12, 12), diagonal='kde')\n\nfor ax in scatter_matrix.ravel():\n ax.set_xlabel(ax.get_xlabel(), fontsize = 6, rotation = 90)\n ax.set_ylabel(ax.get_ylabel(), fontsize = 6, rotation = 0)\n \nplt.show()",
"_____no_output_____"
]
],
[
[
"### Don't forget about Matplotlib...\n\nSometimes you'll want to something a bit more custom (or you'll want to figure out how to tweak the labels, change the colors, make small multiples, etc), so you'll want to go straight to the Matplotlib documentation.\nYou will learn more about matplotlib.pyplot on the next Lab.\n\n#### Tweak the labels\nFor example, say we want to tweak the labels on one of our graphs:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 4]\ny = [1, 4, 9, 6]\nlabels = ['Frogs', 'Hogs', 'Bogs', 'Slogs']\n\nplt.plot(x, y, 'ro')\n# You can specify a rotation for the tick labels in degrees or with keywords.\nplt.xticks(x, labels, rotation=30)\n# Pad margins so that markers don't get clipped by the axes\nplt.margins(0.2)\n# Tweak spacing to prevent clipping of tick-labels\nplt.subplots_adjust(bottom=0.15)\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"# Seaborn\n\n## Obtaining the Data For the Census Dataset\n\n### Exploratory Data Analysis (EDA)\n\n\n\n[Seaborn](https://seaborn.pydata.org/) is another great Python visualization library to have up your sleeve.\n\nSeaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. For a brief introduction to the ideas behind the package, you can read the introductory notes. More practical information is on the installation page. You may also want to browse the example gallery to get a sense for what you can do with seaborn and then check out the tutorial and API reference to find out how.\n\nSeaborn has a lot of the same methods as Pandas, like [boxplots](http://seaborn.pydata.org/generated/seaborn.boxplot.html?highlight=box%2520plot#seaborn.boxplot) and [histograms](http://seaborn.pydata.org/generated/seaborn.distplot.html) (albeit with slightly different syntax!), but also comes with some novel tools\n\nWe will now use the census dataset to explore the use of visualizations in feature analysis and selection using this library.",
"_____no_output_____"
],
[
"#### Making a Countplot:\n\nIn this dataset, our target variable is data['income'] which is categorical. It would be interesting to see the frequencies of each class, relative to the target of our classifier. To do this, we can use the countplot function from the Python visualization package Seaborn to count the occurrences of each data point. Let's take a look at the counts of different categories in data['occupation'] and in data['education'] — two likely predictors of income in the Census data:\n\nThe [Countplot](https://seaborn.pydata.org/generated/seaborn.countplot.html) function accepts either an x or a y argument to specify if this is a bar plot or a column plot. We chose to use the y argument so that the labels would be readable. The hue argument specifies a column for comparison; in this case we're concerned with the relationship of our categorical variables to the target income. Go ahead and explore other variables in the dataset, for example data.race and data.sex to see if those values are predictive of the level of income or not!",
"_____no_output_____"
]
],
[
[
"DATA.columns",
"_____no_output_____"
],
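[
"# A minimal Seaborn sketch (added for illustration, not in the original notebook):\n# the same kind of box plot and histogram as the Pandas plots above, but written\n# with Seaborn's data/x/y keyword syntax.\nsns.boxplot(x='action_taken', y='loan_amount_000s', data=DATA)\nplt.show()\n\nsns.distplot(DATA['loan_amount_000s'].dropna())\nplt.show()",
"_____no_output_____"
],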
[
"ax = sns.countplot(y='loan_type_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='state_abbr', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='purchaser_type_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='property_type_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='loan_purpose_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='loan_type_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='loan_type_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='hoepa_status_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='agency_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='applicant_sex_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='applicant_ethnicity_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='applicant_race_name_1', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='hoepa_status_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
],
[
"ax = sns.countplot(y='hoepa_status_name', hue='action_taken', data=DATA,)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb832d20e38f13c8c768e3eb42f5cd094d8ebf1d
| 24,194 |
ipynb
|
Jupyter Notebook
|
helper/data_summary.ipynb
|
jbkoh/Scrabble
|
6d64be2e9c7d0392332592c804eb15c20a3e2516
|
[
"BSD-3-Clause"
] | 6 |
2018-11-20T13:58:58.000Z
|
2020-07-10T13:43:37.000Z
|
helper/data_summary.ipynb
|
jbkoh/Scrabble
|
6d64be2e9c7d0392332592c804eb15c20a3e2516
|
[
"BSD-3-Clause"
] | null | null | null |
helper/data_summary.ipynb
|
jbkoh/Scrabble
|
6d64be2e9c7d0392332592c804eb15c20a3e2516
|
[
"BSD-3-Clause"
] | 2 |
2018-09-05T12:16:38.000Z
|
2022-03-18T07:29:41.000Z
| 70.331395 | 1,639 | 0.634537 |
[
[
[
"# TODO\n# 1. # of words\n# 2. # of sensor types\n# 3. how bag of words clustering works\n# 4. how data feature classification works on sensor types\n# 5. how data feature classification works on tag classification\n# 6. # of unique sentence structure",
"_____no_output_____"
],
[
"import json\nfrom functools import reduce\nimport os.path\nimport os\nimport random\n\nimport pandas as pd\nfrom scipy.stats import entropy\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.gaussian_process import GaussianProcessClassifier",
"_____no_output_____"
],
[
"nae_dict = {\n 'bonner': ['607', '608', '609', '557', '610'],\n 'ap_m': ['514', '513','604'],\n 'bsb': ['519', '568', '567', '566', '564', '565'],\n 'ebu3b': ['505', '506']\n}",
"_____no_output_____"
],
[
"def counterize_feature(feat):\n indexList = [not np.isnan(val) for val in feat]\n maxVal = max(feat.loc[indexList])\n minVal = min(feat.loc[indexList])\n gran = 100\n interval = (maxVal-minVal)/100.0\n keys = np.arange(minVal,maxVal,interval)\n resultDict = defaultdict(int)\n for key, val in feat.iteritems():\n try:\n if np.isnan(val):\n resultDict[None] += 1\n continue\n diffList = [abs(key-val) for key in keys]\n minVal = min(diffList)\n minIdx = diffList.index(minVal)\n minKey = keys[minIdx]\n resultDict[minKey] += 1\n except:\n print key, val\n return resultDict",
"_____no_output_____"
],
[
"true_df.loc[true_df['Unique Identifier']=='505_0_3000003']['Schema Label'].ravel()[0]\n#sklearn.ensemble.RandomForestClassifier",
"_____no_output_____"
],
[
"building_list = ['ebu3b']\nfor building_name in building_list:\n print(\"============ %s ===========\"%building_name)\n with open('metadata/%s_sentence_dict.json'%building_name, 'r') as fp:\n sentence_dict = json.load(fp)\n srcid_list = list(sentence_dict.keys())\n \n # 1. Number of unique words\n adder = lambda x,y:x+y\n num_remover = lambda xlist: [\"number\" if x.isdigit() else x for x in xlist]\n total_word_set = set(reduce(adder, map(num_remover,sentence_dict.values()), []))\n\n print(\"# of unique words: %d\"%(len(total_word_set)))\n\n # 2. of sensor types\n labeled_metadata_filename = 'metadata/%s_sensor_types_location.csv'%building_name\n if os.path.isfile(labeled_metadata_filename):\n true_df = pd.read_csv(labeled_metadata_filename)\n else:\n true_df = None\n\n if isinstance(true_df, pd.DataFrame):\n sensor_type_set = set(true_df['Schema Label'].ravel())\n print(\"# of unique sensor types: %d\"%(len(sensor_type_set)))\n else:\n sensor_type_set = None\n\n # 3. how bag of words clustering works\n \n with open('model/%s_word_clustering.json'%building_name, 'r') as fp:\n cluster_dict = json.load(fp)\n print(\"# of word clusterings: %d\"%(len(cluster_dict)))\n small_cluster_num = 0\n large_cluster_num = 0\n for cluster_id, srcids in cluster_dict.items():\n if len(srcids)<5:\n small_cluster_num +=1\n else:\n large_cluster_num +=1\n print(\"# of word small (<5)clusterings: %d\"%small_cluster_num)\n print(\"# of word large (>=5)clusterings: %d\"%large_cluster_num)\n \n # 4. how data feature classification works on sensor types\n with open('model/fe_%s.json'%building_name, 'r') as fp:\n #data_feature_dict = json.load(fp)\n pass\n with open('model/fe_%s_normalized.json'%building_name, 'r') as fp:\n data_feature_dict = json.load(fp)\n pass\n feature_num = len(list(data_feature_dict.values())[0])\n data_available_srcid_list = list(data_feature_dict.keys())\n if isinstance(true_df, pd.DataFrame):\n sample_num = 500\n sample_idx_list = random.sample(range(0,len(data_feature_dict)), sample_num)\n learning_srcid_list = [data_available_srcid_list[sample_idx]\n for sample_idx in sample_idx_list]\n learning_x = [data_feature_dict[srcid] for srcid in learning_srcid_list]\n learning_y = [true_df.loc[true_df['Unique Identifier']==srcid]\n ['Schema Label'].ravel()[0]\n for srcid in learning_srcid_list]\n test_srcid_list = [srcid for srcid in data_available_srcid_list \n if srcid not in learning_srcid_list]\n test_x = [data_feature_dict[srcid] for srcid in test_srcid_list]\n \n classifier_list = [RandomForestClassifier(),\n AdaBoostClassifier(),\n MLPClassifier(),\n KNeighborsClassifier(),\n SVC(),\n GaussianNB(),\n DecisionTreeClassifier()\n ]\n for classifier in classifier_list:\n classifier.fit(learning_x, learning_y)\n test_y = classifier.predict(test_x)\n precision = calc_accuracy(test_srcid_list, test_y)\n print(type(classifier).__name__, precision)\n \n # 5. How entropy varies in clusters\n entropy_dict = dict()\n for cluster_id, cluster in cluster_dict.items():\n entropy_list = list()\n for feature_idx in range(0,feature_num):\n entropy_list.append(\\\n entropy([data_feature_dict[srcid][feature_idx] + 0.01\n for srcid in cluster #random_sample_srcid_list \\\n if srcid in data_available_srcid_list]))\n entropy_dict[cluster_id] = entropy_list\n\n # 5. how data feature classification works on tag classification\n \n \n #if isinstance()\n # 6. # of unique sentence structure",
"============ ebu3b ===========\n# of unique words: 438\n# of unique sensor types: 164\n# of word clusterings: 348\n# of word small (<5)clusterings: 311\n# of word large (>=5)clusterings: 37\nRandomForestClassifier 0.5487595185458118\nAdaBoostClassifier 0.12576762466224514\nMLPClassifier 0.12306558585114223\nKNeighborsClassifier 0.33407025300908866\nSVC 0.18447555883075412\nGaussianNB 0.414394497666421\n"
],
[
"def feature_check(data_feature_dict):\n for srcid, features in data_feature_dict.items():\n for feat in features:\n #if np.isnan(feat):\n if feat < -100:\n print(srcid, features)",
"_____no_output_____"
],
[
"correct_cnt = 0\nfor i, srcid in enumerate(test_srcid_list):\n schema_label = true_df.loc[true_df['Unique Identifier']==srcid]['Schema Label'].ravel()[0]\n if schema_label==test_y[i]:\n correct_cnt += 1\nprint(correct_cnt)\nprint(correct_cnt/len(test_srcid_list))",
"1975\n0.5530663679641556\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb832eb478eb54639270f04631d1bb1bcbf4b686
| 20,574 |
ipynb
|
Jupyter Notebook
|
site/ko/tutorials/distribute/keras.ipynb
|
truongsinh/tensorflow-docs
|
e62749e757b0b9fd124f195e0463cfb0c913a835
|
[
"Apache-2.0"
] | 1 |
2020-09-04T05:34:14.000Z
|
2020-09-04T05:34:14.000Z
|
site/ko/tutorials/distribute/keras.ipynb
|
MartinSamanArata2018/docs
|
e62749e757b0b9fd124f195e0463cfb0c913a835
|
[
"Apache-2.0"
] | null | null | null |
site/ko/tutorials/distribute/keras.ipynb
|
MartinSamanArata2018/docs
|
e62749e757b0b9fd124f195e0463cfb0c913a835
|
[
"Apache-2.0"
] | 1 |
2019-09-17T05:17:18.000Z
|
2019-09-17T05:17:18.000Z
| 27.953804 | 439 | 0.483377 |
[
[
[
"##### Copyright 2019 The TensorFlow Authors.\n\n",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# 케라스를 사용한 분산 훈련",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/distribute/keras\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />TensorFlow.org에서 보기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/tutorials/distribute/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />구글 코랩(Colab)에서 실행하기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/ko/tutorials/distribute/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />깃허브(GitHub) 소스 보기</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도 불구하고 [공식 영문 문서](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/keras.ipynb)의 내용과 일치하지 않을 수 있습니다. 이 번역에 개선할 부분이 있다면 [tensorflow/docs](https://github.com/tensorflow/docs) 깃허브 저장소로 풀 리퀘스트를 보내주시기 바랍니다. 문서 번역이나 리뷰에 참여하려면 [[email protected]](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko)로 메일을 보내주시기 바랍니다.",
"_____no_output_____"
],
[
"## 개요\n\n`tf.distribute.Strategy` API는 훈련을 여러 처리 장치들로 분산시키는 것을 추상화한 것입니다. 기존의 모델이나 훈련 코드를 조금만 바꾸어 분산 훈련을 할 수 있게 하는 것이 분산 전략 API의 목표입니다.\n\n이 튜토리얼에서는 `tf.distribute.MirroredStrategy`를 사용합니다. 이 전략은 동기화된 훈련 방식을 활용하여 한 장비에 있는 여러 개의 GPU로 그래프 내 복제를 수행합니다. 다시 말하자면, 모델의 모든 변수를 각 프로세서에 복사합니다. 그리고 각 프로세서의 그래디언트(gradient)를 [올 리듀스(all-reduce)](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/)를 사용하여 모읍니다. 그다음 모아서 계산한 값을 각 프로세서의 모델 복사본에 적용합니다.\n\n`MirroredStategy`는 텐서플로에서 기본으로 제공하는 몇 가지 분산 전략 중 하나입니다. 다른 전략들에 대해서는 [분산 전략 가이드](../../guide/distribute_strategy.ipynb)를 참고하십시오.",
"_____no_output_____"
],
[
"### 케라스 API\n\n이 예는 모델과 훈련 루프를 만들기 위해 `tf.keras` API를 사용합니다. 직접 훈련 코드를 작성하는 방법은 [사용자 정의 훈련 루프로 분산 훈련하기](training_loops.ipynb) 튜토리얼을 참고하십시오.",
"_____no_output_____"
],
[
"## 필요한 패키지 가져오기",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\n# 텐서플로와 텐서플로 데이터셋 패키지 가져오기\n!pip install tensorflow-gpu==2.0.0-beta1\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\ntfds.disable_progress_bar()\n\nimport os",
"_____no_output_____"
]
],
[
[
"## 데이터셋 다운로드",
"_____no_output_____"
],
[
"MNIST 데이터셋을 [TensorFlow Datasets](https://www.tensorflow.org/datasets)에서 다운로드받은 후 불러옵니다. 이 함수는 `tf.data` 형식을 반환합니다.",
"_____no_output_____"
],
[
"`with_info`를 `True`로 설정하면 전체 데이터에 대한 메타 정보도 함께 불러옵니다. 이 정보는 `info` 변수에 저장됩니다. 여기에는 훈련과 테스트 샘플 수를 비롯한 여러가지 정보들이 들어있습니다.",
"_____no_output_____"
]
],
[
[
"datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)\n\nmnist_train, mnist_test = datasets['train'], datasets['test']",
"_____no_output_____"
]
],
[
[
"## 분산 전략 정의하기",
"_____no_output_____"
],
[
"분산과 관련된 처리를 하는 `MirroredStrategy` 객체를 만듭니다. 이 객체가 컨텍스트 관리자(`tf.distribute.MirroredStrategy.scope`)도 제공하는데, 이 안에서 모델을 만들어야 합니다.",
"_____no_output_____"
]
],
[
[
"strategy = tf.distribute.MirroredStrategy()",
"_____no_output_____"
],
[
"print('장치의 수: {}'.format(strategy.num_replicas_in_sync))",
"_____no_output_____"
]
],
[
[
"## 입력 파이프라인 구성하기",
"_____no_output_____"
],
[
"다중 GPU로 모델을 훈련할 때는 배치 크기를 늘려야 컴퓨팅 자원을 효과적으로 사용할 수 있습니다. 기본적으로는 GPU 메모리에 맞추어 가능한 가장 큰 배치 크기를 사용하십시오. 이에 맞게 학습률도 조정해야 합니다.",
"_____no_output_____"
]
],
[
[
"# 데이터셋 내 샘플의 수는 info.splits.total_num_examples 로도\n# 얻을 수 있습니다.\n\nnum_train_examples = info.splits['train'].num_examples\nnum_test_examples = info.splits['test'].num_examples\n\nBUFFER_SIZE = 10000\n\nBATCH_SIZE_PER_REPLICA = 64\nBATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync",
"_____no_output_____"
]
],
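[
[
"# A minimal sketch, not part of the original tutorial: one common heuristic for the\n# learning-rate adjustment mentioned above is to scale the base rate linearly with\n# the number of synchronized replicas (the base value of 1e-3 is an assumption).\nBASE_LEARNING_RATE = 1e-3\nscaled_lr = BASE_LEARNING_RATE * strategy.num_replicas_in_sync\nprint('Scaled learning rate: {}'.format(scaled_lr))",
"_____no_output_____"
]
],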
[
[
"픽셀의 값은 0~255 사이이므로 [0-1 범위로 정규화](https://en.wikipedia.org/wiki/Feature_scaling)해야 합니다. 정규화 함수를 정의합니다.",
"_____no_output_____"
]
],
[
[
"def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255\n\n return image, label",
"_____no_output_____"
]
],
[
[
"이 함수를 훈련과 테스트 데이터에 적용합니다. 훈련 데이터 순서를 섞고, [훈련을 위해 배치로 묶습니다](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch).",
"_____no_output_____"
]
],
[
[
"train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\neval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)",
"_____no_output_____"
]
],
[
[
"## 모델 만들기",
"_____no_output_____"
],
[
"`strategy.scope` 컨텍스트 안에서 케라스 모델을 만들고 컴파일합니다.",
"_____no_output_____"
]
],
[
[
"with strategy.scope():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"## 콜백 정의하기",
"_____no_output_____"
],
[
"여기서 사용하는 콜백은 다음과 같습니다.\n\n* *텐서보드(TensorBoard)*: 이 콜백은 텐서보드용 로그를 남겨서, 텐서보드에서 그래프를 그릴 수 있게 해줍니다.\n* *모델 체크포인트(Checkpoint)*: 이 콜백은 매 에포크(epoch)가 끝난 후 모델을 저장합니다.\n* *학습률 스케줄러*: 이 콜백을 사용하면 매 에포크 혹은 배치가 끝난 후 학습률을 바꿀 수 있습니다.\n\n콜백을 추가하는 방법을 보여드리기 위하여 노트북에 *학습률*을 표시하는 콜백도 추가하겠습니다.",
"_____no_output_____"
]
],
[
[
"# 체크포인트를 저장할 체크포인트 디렉터리를 지정합니다.\ncheckpoint_dir = './training_checkpoints'\n# 체크포인트 파일의 이름\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt_{epoch}\")",
"_____no_output_____"
],
[
"# 학습률을 점점 줄이기 위한 함수\n# 필요한 함수를 직접 정의하여 사용할 수 있습니다.\ndef decay(epoch):\n if epoch < 3:\n return 1e-3\n elif epoch >= 3 and epoch < 7:\n return 1e-4\n else:\n return 1e-5",
"_____no_output_____"
],
[
"# 에포크가 끝날 때마다 학습률을 출력하는 콜백.\nclass PrintLR(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs=None):\n print('\\n에포크 {}의 학습률은 {}입니다.'.format(epoch + 1,\n model.optimizer.lr.numpy()))",
"_____no_output_____"
],
[
"callbacks = [\n tf.keras.callbacks.TensorBoard(log_dir='./logs'),\n tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,\n save_weights_only=True),\n tf.keras.callbacks.LearningRateScheduler(decay),\n PrintLR()\n]",
"_____no_output_____"
]
],
[
[
"## 훈련과 평가",
"_____no_output_____"
],
[
"이제 평소처럼 모델을 학습합시다. 모델의 `fit` 함수를 호출하고 튜토리얼의 시작 부분에서 만든 데이터셋을 넘깁니다. 이 단계는 분산 훈련 여부와 상관없이 동일합니다.",
"_____no_output_____"
]
],
[
[
"model.fit(train_dataset, epochs=12, callbacks=callbacks)",
"_____no_output_____"
]
],
[
[
"아래에서 볼 수 있듯이 체크포인트가 저장되고 있습니다.",
"_____no_output_____"
]
],
[
[
"# 체크포인트 디렉터리 확인하기\n!ls {checkpoint_dir}",
"_____no_output_____"
]
],
[
[
"모델의 성능이 어떤지 확인하기 위하여, 가장 최근 체크포인트를 불러온 후 테스트 데이터에 대하여 `evaluate`를 호출합니다.\n\n평소와 마찬가지로 적절한 데이터셋과 함께 `evaluate`를 호출하면 됩니다.",
"_____no_output_____"
]
],
[
[
"model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))\n\neval_loss, eval_acc = model.evaluate(eval_dataset)\n\nprint('평가 손실: {}, 평가 정확도: {}'.format(eval_loss, eval_acc))",
"_____no_output_____"
]
],
[
[
"텐서보드 로그를 다운로드받은 후 터미널에서 다음과 같이 텐서보드를 실행하여 훈련 결과를 확인할 수 있습니다.\n\n```\n$ tensorboard --logdir=path/to/log-directory\n```",
"_____no_output_____"
]
],
[
[
"!ls -sh ./logs",
"_____no_output_____"
]
],
[
[
"## SavedModel로 내보내기",
"_____no_output_____"
],
[
"플랫폼에 무관한 SavedModel 형식으로 그래프와 변수들을 내보냅니다. 모델을 내보낸 후에는, 전략 범위(scope) 없이 불러올 수도 있고, 전략 범위와 함께 불러올 수도 있습니다.",
"_____no_output_____"
]
],
[
[
"path = 'saved_model/'",
"_____no_output_____"
],
[
"tf.keras.experimental.export_saved_model(model, path)",
"_____no_output_____"
]
],
[
[
"`strategy.scope` 없이 모델 불러오기.",
"_____no_output_____"
]
],
[
[
"unreplicated_model = tf.keras.experimental.load_from_saved_model(path)\n\nunreplicated_model.compile(\n loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\neval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)\n\nprint('평가 손실: {}, 평가 정확도: {}'.format(eval_loss, eval_acc))",
"_____no_output_____"
]
],
[
[
"`strategy.scope`와 함께 모델 불러오기.",
"_____no_output_____"
]
],
[
[
"with strategy.scope():\n replicated_model = tf.keras.experimental.load_from_saved_model(path)\n replicated_model.compile(loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\n eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)\n print ('평가 손실: {}, 평가 정확도: {}'.format(eval_loss, eval_acc))",
"_____no_output_____"
]
],
[
[
"### 예제와 튜토리얼\n\n케라스 적합/컴파일과 함께 분산 전략을 쓰는 예제들이 더 있습니다.\n\n1. `tf.distribute.MirroredStrategy`를 사용하여 학습한 [Transformer](https://github.com/tensorflow/models/blob/master/official/transformer/v2/transformer_main.py) 예제.\n2. `tf.distribute.MirroredStrategy`를 사용하여 학습한 [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) 예제.\n\n[분산 전략 가이드](../../guide/distribute_strategy.ipynb#examples_and_tutorials)에 더 많은 예제 목록이 있습니다.",
"_____no_output_____"
],
[
"## 다음 단계\n\n* [분산 전략 가이드](../../guide/distribute_strategy.ipynb)를 읽어보세요.\n* [사용자 정의 훈련 루프를 사용한 분산 훈련](training_loops.ipynb) 튜토리얼을 읽어보세요.\n\nNote: `tf.distribute.Strategy`은 현재 활발히 개발 중입니다. 근시일내에 예제나 튜토리얼이 더 추가될 수 있습니다. 한 번 사용해 보세요. [깃허브 이슈](https://github.com/tensorflow/tensorflow/issues/new)를 통하여 피드백을 주시면 감사하겠습니다.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb8332d684566a9525dcc4fbc0dab534cf020b1e
| 6,049 |
ipynb
|
Jupyter Notebook
|
module1/assignment_applied_modeling_1.ipynb
|
newnativeabq/DS-Unit-2-Applied-Modeling
|
7c5820ff67365815d233e045f3eef2888f2a8494
|
[
"MIT"
] | 1 |
2019-08-21T16:58:43.000Z
|
2019-08-21T16:58:43.000Z
|
module1/assignment_applied_modeling_1.ipynb
|
newnativeabq/DS-Unit-2-Applied-Modeling
|
7c5820ff67365815d233e045f3eef2888f2a8494
|
[
"MIT"
] | null | null | null |
module1/assignment_applied_modeling_1.ipynb
|
newnativeabq/DS-Unit-2-Applied-Modeling
|
7c5820ff67365815d233e045f3eef2888f2a8494
|
[
"MIT"
] | null | null | null | 37.339506 | 617 | 0.646057 |
[
[
[
"Lambda School Data Science, Unit 2: Predictive Modeling\n\n# Applied Modeling, Module 1\n\nYou will use your portfolio project dataset for all assignments this sprint.\n\n## Assignment\n\nComplete these tasks for your project, and document your decisions.\n\n- [ ] Choose your target. Which column in your tabular dataset will you predict?\n- [ ] Choose which observations you will use to train, validate, and test your model. And which observations, if any, to exclude.\n- [ ] Determine whether your problem is regression or classification.\n- [ ] Choose your evaluation metric.\n- [ ] Begin with baselines: majority class baseline for classification, or mean baseline for regression, with your metric of choice.\n- [ ] Begin to clean and explore your data.\n- [ ] Choose which features, if any, to exclude. Would some features \"leak\" information from the future?\n\n## Reading\n- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _\"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score.\"_\n- [How Shopify Capital Uses Quantile Regression To Help Merchants Succeed](https://engineering.shopify.com/blogs/engineering/how-shopify-uses-machine-learning-to-help-our-merchants-grow-their-business)\n- [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), **by Lambda DS3 student** Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.\n- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)\n- [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by Kevin Markham, with video\n- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)",
"_____no_output_____"
],
[
"## Overview of Intent",
"_____no_output_____"
],
[
"## Targets\n\n**Original Feature-Targets**\n\n* BMI\n* HealthScore\n\n**Generated Feature-Targets**\n\n* BMI -> Underweight, Average Weight, Overweight, Obese\n* health_trajectory (matched polynomial regression of BMI & health scores - still in development)",
"_____no_output_____"
],
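[
"A minimal sketch of the BMI-to-category encoding, assuming the combined data lives in a frame `df` with a `bmi` column (both names hypothetical):\n\n```python\nimport pandas as pd\n\n# Standard BMI cut points: <18.5, 18.5-25, 25-30, >=30\nbins = [0, 18.5, 25, 30, float('inf')]\nlabels = ['Underweight', 'Average Weight', 'Overweight', 'Obese']\ndf['bmi_class'] = pd.cut(df['bmi'], bins=bins, labels=labels, right=False)\n```",
"_____no_output_____"
],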
[
"## Features\n\n**Original Features**\n\nFrom Activity Response:\n\n* Case ID only (for join)\n\nFrom Child Response:\n\n* Case ID only (for join)\n\nFrom General Response (In Development):\n\n* (In Development)\n\n**Transformed Features**\n\nFrom Activity Response:\n\n* Total Secondary Eating (drinking not associated with primary meals)\n* Total Secondary Drinking (drinking not associated with primary meals)\n\nFrom Child Response:\n\n* Total Assisted Meals\n* Number Children Under 19 in Household\n\nFrom General Response (In Development):\n\n* (In Development)\n",
"_____no_output_____"
],
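[
"A minimal sketch of the per-case aggregation behind the transformed features, assuming long-format activity and child tables with hypothetical column names:\n\n```python\neating = (activity.groupby('case_id')\n          .agg(total_secondary_eating=('secondary_eating_min', 'sum'),\n               total_secondary_drinking=('secondary_drinking_min', 'sum'))\n          .reset_index())\n\nkids = (child.groupby('case_id')\n        .agg(total_assisted_meals=('assisted_meals', 'sum'),\n             n_children_under_19=('child_id', 'nunique'))\n        .reset_index())\n\nfeatures = eating.merge(kids, on='case_id', how='outer')\n```",
"_____no_output_____"
],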
[
"## Evaluation Metrics\n\n**Bifurcated design**\n\nOne phase will use regression forms to estimate future BMI & BMI Trajectory Over Time (coefficients of line)\n\nAnother will use classification in an attempt to model reported health status (1 thru 5; already encoded in data)\n\n**Useful Metrics**\n\nAccuracy scores and confusion matrices for health status. Balanced accuracy may be considered if target distribution is skewed.\n\nr^2 and t-statistics for BMI prediction. Will also look at explained variance to see how much the model is capturing.\n",
"_____no_output_____"
],
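[
"A minimal sketch of the baselines and metrics, assuming a classification target `y_class` (reported health status) and a regression target `y_bmi` (both names hypothetical):\n\n```python\nimport numpy as np\nfrom sklearn.metrics import accuracy_score, balanced_accuracy_score, r2_score\n\n# Majority-class baseline for the classification phase\nmajority = y_class.mode()[0]\npred_class = np.full(len(y_class), majority)\nprint('accuracy:', accuracy_score(y_class, pred_class))\nprint('balanced accuracy:', balanced_accuracy_score(y_class, pred_class))\n\n# Mean baseline for the regression phase (r^2 is ~0 by construction)\npred_bmi = np.full(len(y_bmi), y_bmi.mean())\nprint('r^2:', r2_score(y_bmi, pred_bmi))\n```",
"_____no_output_____"
],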
[
"## Data Cleaning and Exploration\n\n**For work on combining datasets and first pass feature engineering, see stitch.ipynb in this folder**",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb833fcdce6e0a60ce627081c172b5d9d2110030
| 707,497 |
ipynb
|
Jupyter Notebook
|
ExampleUsage.ipynb
|
ckoerber/distribution-analysis
|
9f562d100652738837c670073e2a88f5a8eb25d8
|
[
"MIT"
] | null | null | null |
ExampleUsage.ipynb
|
ckoerber/distribution-analysis
|
9f562d100652738837c670073e2a88f5a8eb25d8
|
[
"MIT"
] | null | null | null |
ExampleUsage.ipynb
|
ckoerber/distribution-analysis
|
9f562d100652738837c670073e2a88f5a8eb25d8
|
[
"MIT"
] | null | null | null | 540.074046 | 250,732 | 0.935502 |
[
[
[
"# Example Usage of plotDist",
"_____no_output_____"
],
[
"This notebook presents the usage of the `plotDist` and `utilities` module.\n\nThe general starting point is a 3-dimensional array `samples` of size `samples.shape = (nObservables, nXrange, nSamples)`, where\n* `nObservables` is the number of observables\n* `nXrange` the independent variable of the observables\n* `nSamples` the number of statistical data for each observable at each `xRange` point",
"_____no_output_____"
],
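[
"As a purely illustrative example (not used below), a toy `samples` array with this layout could look like:\n\n```python\nimport numpy as np\n\nnObservables, nXrange, nSamples = 2, 16, 50\nsamples = np.random.normal(size=(nObservables, nXrange, nSamples))\nsamples.shape  # (2, 16, 50)\n```",
"_____no_output_____"
],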
[
"## Init",
"_____no_output_____"
],
[
"### Import modules",
"_____no_output_____"
]
],
[
[
"# Numeric and aata handling modules\nimport numpy as np\nimport pandas as pd\n\n# Plotting modules\nimport matplotlib.pylab as plt\n\n# Local modules\nimport distalysis.plotDist as pD # Plotting\nimport distalysis.utilities as ut # Data manipulation",
"_____no_output_____"
]
],
[
[
"### Define statistic sample array",
"_____no_output_____"
],
[
"Compute random array for later use. This array follows exponenital shape which is 'smeared' by gaussian noise.",
"_____no_output_____"
]
],
[
[
"help(ut.generatePseudoSamples)",
"Help on function generatePseudoSamples in module distalysis.utilities:\n\ngeneratePseudoSamples(xRange, nSamples, expPars, mean=0.0, sDev=0.1)\n Generates 3-dimensional pseudo statistical correlator data.\n \n The first dimension being different observables, the second the x-data\n of the correlator and the last the number of statistical samples.\n The correlator follow an exponential shape\n $$\n C_i(x) = \\sum_{ a_{ij}, b_{ij} in expPars } \\exp( - a_{ij} x - b_{ij} )\n $$\n and Gaussian noise is added.\n \n Parameters\n ----------\n xRange : array\n The number of x range for the exponential input.\n Length of array is second dimension of output.\n \n nSamples : int\n The number of statistical repetitions. Third dimension of array.\n \n expPars : 3-dimensional np.array\n The shape is (number of observables, number of exponentials, 2)\n The exponential parameters. E.g.,\n expPars = [ [(a11, b11), (a12, b12)], [(a21, b21), (a22, b22)], ... ]\n \n mean : double\n The mean of the Gaussian noise.\n \n sDev : double\n The standard deviation of the Gaussian noise\n \n Returns\n -------\n samples : 3-dimensional array\n\n"
],
[
"# Parameters\nnC, nT, nSamples = 4, 32, 100\n\n# x-range\nnt = np.arange(nT)",
"_____no_output_____"
]
],
[
[
"Define the exponenotial parameters and generate the pseudo samples.",
"_____no_output_____"
]
],
[
[
"aAA1 = 1./nT; aAA2 = 4./nT\nbAA1 = 0.5; bAA2 = 1.0\n\naBB1 = -aAA1; aBB2 = -aAA2;\nbBB1 = bAA1 + nT*aAA1; bBB2 = bAA2 + nT*aAA2\n\nexpPars = np.array([\n [(aAA1, bAA1), (aAA2, bAA2)], # C_{AA}\n [(0., 20.), (0., 20.)], # C_{AB} set to zero\n [(0., 20.), (0., 20.)], # C_{BA} set to zero\n [(aBB1, bBB1), (aBB2, bBB2)], # C_{BA}\n])\n\nsamples = ut.generatePseudoSamples(nt, nSamples, expPars)",
"_____no_output_____"
]
],
[
[
"# Single sample plots",
"_____no_output_____"
],
[
"In this section you find routines for data of one `samples` array.",
"_____no_output_____"
],
[
"## Visualize sample data",
"_____no_output_____"
]
],
[
[
"help(pD.plotSamples)",
"Help on function plotSamples in module distalysis.plotDist:\n\nplotSamples(xRange, samples, ax=None, **kwargs)\n Creates errorbar plot for all the correlator components in one frame.\n \n Parameters\n ----------\n xRange : array\n The number of x-values for sample input.\n Length of array is second dimension of samples.\n \n samples : 3-dimensional array\n Dimensions are the number of observables, the number of x-values and\n the number of statistical samples.\n \n ax : 'matplotlib.axes'\n Plot samples in this figure. If 'None', create new object.\n \n **kwargs: keyword arguments\n Will be passed to 'ax.errorbar'.\n \n Returns\n -------\n ax : 'matplotlib.axes'\n\n"
],
[
"fig, ax = plt.subplots(dpi=400, figsize=(3, 2))\n\npD.plotSamples(nt, samples, ax=ax, marker=\".\", linestyle=\"None\", lw=1, ms=4)\nax.legend(loc=\"best\", fontsize=\"xx-small\")\n\nplt.show(fig)",
"_____no_output_____"
]
],
[
[
"## Plot distribution of individual samples",
"_____no_output_____"
]
],
[
[
"help(pD.plotSampleDistributions)",
"Help on function plotSampleDistributions in module distalysis.plotDist:\n\nplotSampleDistributions(samples, nXStart=0, nXStep=0, obsTitles=None, xRange=None)\n Creates a 'matplotlib.figure' which contains a grid of distribution plots.\n The data is plotted for all 'Observables' values on the x-axis and\n selected 'xRange' values on the y-axis.\n Each individual frame contains a distribution plot with a fitted PDF, Bins\n and KDE.\n \n Parameters\n ----------\n samples : array, shape = (nObservables, nXrange, nSamples)\n The statsitical HMC data.\n \n nXStart : int\n Index to nX dimension of samples array for plotting frames. Plots\n will start at this index.\n \n nXStep : int\n Stepindex to nX dimension of samples array for plotting frames. Only\n each 'nXStep' will be shown.\n \n obsTitles : None or list, length = nObservables\n Row titles for figure.\n \n xRange : None or iterable\n If 'None' creates a range of x-values from 'nXStart', 'nXSetp' and\n 'samples.shape'.\n If specified, takes this as the x-range. make sure it agrees with the\n 'samples.shape'.\n \n Returns\n -------\n fig : 'matplotlib.figure'\n\n"
],
[
"nTstart = 0; nTstep = 5\nobsTitles = [r\"$C_{%s}$\" % ij for ij in [\"AA\", \"AB\", \"BA\", \"BB\"]]\nfig = pD.plotSampleDistributions(samples, nXStart=nTstart, nXStep=nTstep, obsTitles=obsTitles)\nplt.show(fig)",
"_____no_output_____"
]
],
[
[
"## Plot individual Kernel Density Eesimtates (KDEs) for distributions",
"_____no_output_____"
]
],
[
[
"help(pD.plotDistribution)",
"Help on function plotDistribution in module distalysis.plotDist:\n\nplotDistribution(dist)\n Plots the fitted PDF, KDE and CDF as well as the PDF differences between\n fits, binning and KDE.\n The figure contains additional informations like:\n * Kolmogorov-Smirnov test statistics and P-values\n * The KDE difference defined by\n $$\n \\Delta PDF(x)\n = 2*[PDF_{KDE}(x) - PDF_{FIT}(x)]/[PDF_{KDE}(x) + PDF_{FIT}(x)]\n $$\n and the integrated KDE difference is given by\n $$ \\sqrt{ \\int dx [\\Delta PDF(x)]^2 } $$\n \n Parameters\n ----------\n dist : array or list, one dimensional\n \n Returns\n -------\n fig : 'matplotlib.figure'\n \n Note\n ----\n Abbreviations:\n * KDE : Kernel Density Estimate\n * PDF : Probability Density Function\n * CDF : Cumulative Density Function\n \n This routine uses seaborn to estimate the bins and KDE, scipy for the\n Kolmogorov-Smirnov test\n (https://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test)\n and 'statsmodels' for estimating the KDE\n '''python\n >>> import statsmodels.nonparametric.api as smnp\n >>> kde = smnp.KDEUnivariate(data)\n >>> kde.fit(kernel=\"gau\", bw=\"scott\", fft=True, gridsize=100, cut=3)\n '''\n 'seaborn' itself uses 'numpy' for binning where the number of bins is\n determined by the Freedman Diaconis Estimator\n (https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html).\n\n"
],
[
"AA = 0; nt = 1\ndist = samples[AA, nt]\n\nfig = pD.plotDistribution(dist)\nplt.show(fig)",
"_____no_output_____"
]
],
[
[
"# Collective sample plots",
"_____no_output_____"
],
[
"In this section you find routines for visualizing more than one independent variable dependencies.",
"_____no_output_____"
],
[
"## Get sample statistics",
"_____no_output_____"
]
],
[
[
"help(ut.getStatisticsFrame)",
"Help on function getStatisticsFrame in module distalysis.utilities:\n\ngetStatisticsFrame(samples, nXStart=0, nXStep=1, obsTitles=None)\n Computes a statistic frame for a given correlator bootstrap ensemble.\n \n This routine takes statistical data 'samples' (see parameters) as input.\n For each individual distribution within the sample data,\n this routine fits a Gaussian Probability Density Function (PDF) and computes\n Kernel Density Estimate (KDE).\n The output of this routine is a data frame, which contains the\n following information for each individual distribution of data within\n the samples array:\n * 'mean': the mean value of the distribution\n * 'sDev': the standard deviation of the individual distribution\n * 'kdeDiff': the relative vector norm of the KDE and the fitted PDF\n $$\n \\sqrt{\n \\int dx [ 2*(PDF_{KDE} - PDF_{FIT})/(PDF_{KDE} + PDF_{FIT}) ]^2\n }\n $$\n * 'Dn' and 'pValue': the statistic and the significance of the Hypothesis\n (normal distribution with given parameters)\n by the Kolmogorov-Smirnov test. of the Kolmogorov-Smirnov test\n (https://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test).\n The data is classified by 'nX' and the observable name\n ('obsTitles' if present).\n \n Parameters\n ----------\n samples : array, shape = (nObservables, nXrange, nSamples)\n The statsitical HMC data.\n \n nXStart : int\n Index to nX dimension of samples array for plotting frames. Plots\n will start at this index.\n \n nXStep : int\n Stepindex to nX dimension of samples array for plotting frames. Only\n each 'nXStep' will be shown.\n \n obsTitles : None or list, length = nObservables\n Row titles for figure.\n \n Returns\n -------\n df : 'pandas.DataFrame'\n \n Note\n ----\n For the Kolmogorov-Smirnov test see 'scipy.stats.kstest' and for the KDE see\n '''python\n >>> import statsmodels.nonparametric.api as smnp\n >>> kde = smnp.KDEUnivariate(dist)\n >>> kde.fit(kernel=\"gau\", bw=\"scott\", fft=True, gridsize=100, cut=3)\n '''\n\n"
],
[
"nTstart = 0; nTstep = 5\nobsTitles = [r\"$C_{%s}$\" % ij for ij in [\"AA\", \"AB\", \"BA\", \"BB\"]]\n\ndf = ut.getStatisticsFrame(samples, nXStart=nTstart, nXStep=nTstep, obsTitles=obsTitles)\n\nprint(df.describe())\ndf.head()",
" nX mean sDev Dn pValue kdeDiff\ncount 28.000000 28.000000 28.000000 28.000000 28.000000 28.000000\nmean 15.000000 0.244180 0.099686 0.061418 0.796467 0.459347\nstd 10.183502 0.294374 0.007139 0.017522 0.229418 0.184051\nmin 0.000000 -0.029611 0.086986 0.035827 0.217235 0.271162\n25% 5.000000 0.005008 0.094894 0.050452 0.670468 0.311737\n50% 15.000000 0.122004 0.099304 0.058393 0.883972 0.395268\n75% 25.000000 0.414171 0.103043 0.072313 0.960911 0.555293\nmax 30.000000 0.973170 0.115759 0.103689 0.999532 0.919964\n"
]
],
[
[
"## Prepare collective pseudo sample data frame ",
"_____no_output_____"
],
[
"Assume you have more than one indpendent variable and want to anaylze the collective dependence of the dependet variable.\nYou can mimic this dependence by adding new columns to the statistic frames.\nIn this case, the columns are named `nBinSize` and `nSamples`.",
"_____no_output_____"
]
],
[
[
"# Define independt variable ranges\nbinSizeRange = [1,2,5]\nsampleSizeRange = [400, 500, 700, 1000]\nnt = np.arange(nT)\n\n# Create storage frame\ndf = pd.DataFrame()\n\n# Generate pseudo samples for each parameter configuration\nfor nBinSize in binSizeRange:\n for nSamples in sampleSizeRange:\n ## Generate individual pseudo sample set\n samples = ut.generatePseudoSamples(nt, nSamples, expPars)\n ## Get temporary statistics data frame\n tmp = ut.getStatisticsFrame(samples, nXStart=nTstart, nXStep=nTstep, obsTitles=obsTitles)\n ## Store independent variable parameter\n tmp[\"nBinSize\"] = nBinSize\n tmp[\"nSamples\"] = nSamples\n ## Collect in data frame\n df = df.append(tmp)\n\ndf.head()",
"_____no_output_____"
]
],
[
[
"## Plot parameter dependent errorbars for mean values",
"_____no_output_____"
]
],
[
[
"help(pD.errBarPlot)",
"Help on function errBarPlot in module distalysis.plotDist:\n\nerrBarPlot(dataFrame, meanKey='mean', sDevKey='sDev', xKey='nBinSize', rowKey='observable', colKey='nX', colorKey='nSamples', errBarKwargs=None, shareY=False)\n Creates a grid of errorbar plots.\n \n Each frame in the grid plot displays the 'meanKey' and its\n standard deviation 'sDevKey' over the independent variable 'xKey'.\n The columns of the grid are given by the 'colKey' entries and\n the rows are given by the 'rowKey'.\n The 'colorKey' plots decides which entries are shown in different\n colors within each frame.\n \n Parameters\n ----------\n dataFrame : 'pandas.DataFrame'\n This data frame must contain the values of the following keys.\n \n meanKey : string\n Name of the dataFrame key which will used for plotting the mean value for\n each frame of the grid.\n \n sDevKey : string\n Name of the dataFrame key which will used for plotting the standard\n deviation value for each frame of the grid.\n \n xKey : string\n Name of the dataFrame key which will used as the dependent variable for\n each frame of the grid.\n \n rowKey : string\n Name of the dataFrame key which will used as the rows of the plot grid.\n \n colKey : string\n Name of the dataFrame key which will used as the columns of the plot grid.\n \n colorKey : string\n Name of the dataFrame key which will used for discriminating different\n plots values within each frame.\n \n errBarKwargs : dict\n Parameters which will be passed to 'plt.errorbar'.\n These parameters will overwrite default values.\n \n shareY : boolean\n Specifies whether the y-entries shall be displayed on the same range\n (row wise).\n \n Returns\n -------\n graph : 'matplotlib.figure'\n \n Notes\n -----\n The y-axis scales are different for each frame and can generally not be\n compared.\n\n"
],
[
"g = pD.errBarPlot(df)\nplt.show(g)",
"_____no_output_____"
]
],
[
[
"## Summary convergence plot for distributions",
"_____no_output_____"
],
[
"Suppose you want to summarize the previous frame for several ensembles at once. \nThis is done by the `plotFluctuations` method.\nIn this example, this method computes the average and std of `mean` and `sDev` over `nX` and `observables`.\nThe informations are seperatly computed for different values of `nBinSize` and `nSamples`.",
"_____no_output_____"
]
],
[
[
"help(ut.getFluctuationFrame)",
"Help on function getFluctuationFrame in module distalysis.utilities:\n\ngetFluctuationFrame(dataFrame, valueKeys, collectByKeys, averageOverKeys=None)\n Routine for computing the collective average and standard deviation\n information for specified keys in a data frame.\n \n Computes the mean and standard deviations for the 'valueKeys' over all keys\n no collected in 'collectByKeys'.\n Afterwards further averages over 'averageOverKeys':\n '''pseudo_code\n >>> avg[valueKey, collectAvgKey] = average(\n >>> average( df[collectKey, valueKey, restKeys], restKeys)[keys, valueKeys],\n >>> keys in averageOverKeys\n >>> )\n >>> std[valueKey, collectAvgKey] = average(\n >>> std( df[collectKey, valueKey, restKeys], restKeys)[keys, valueKeys],\n >>> keys in averageOverKeys\n >>> )\n '''\n for all valueKeys and collectAvgKeys, where\n '''pseudo_code\n >>> collectAvgKey in collectKey and not in averageOverKeys\n '''\n \n Parameters\n ----------\n dataFrame : 'pandas.DataFrame'\n Data frame which must contain the the values of the following keys.\n \n valueKeys : list of strings\n The target values for which the means and standard deviations will be\n computed.\n \n collectByKeys : list of strings\n Dependent columns for the data.\n These informations will be separated and not averaged out.\n \n averageOverKeys : None or list of strings\n If not None: This routines first computes the average and mean\n for the target values separated by by the 'collectByKeys' values.\n Afterwards, another average will be computed over these keys.\n \n Parameters\n ----------\n fluctFrame : 'pandas.DataFrame'\n \n \n Note\n ----\n 'averageOverKeys' does not affect the average values when ignoring it in\n 'collectByKeys': the 'avg_...' values are the same for\n '''pseudo_code\n >>> (collectByKeys = ['key1, key2'], averageOverKeys = ['key1, key2']) and\n >>> (collectByKeys = ['key1'], averageOverKeys = ['key1'])\n '''\n However, the standard deviation is affected by that.\n\n"
],
[
"fluctFrame = ut.getFluctuationFrame(\n df, \n valueKeys=[\"mean\", \"sDev\"], # present collective mean and sDev statistics\n collectByKeys=[\"nSamples\", \"nBinSize\"] # group by nSamples and nBinSize \n)\nfluctFrame.head()",
"_____no_output_____"
],
[
"help(pD.plotFluctuations)",
"Help on function plotFluctuations in module distalysis.plotDist:\n\nplotFluctuations(fluctFrame, valueKey, axisKey)\n Routine for visualizing fluctuations of statistical data.\n \n Plots the average values and standard deviations for collective datasets\n in bar plots.\n \n Parameters\n ----------\n fluctFrame : 'pandas.DataFrame'\n A data frame containing fluctuation data.\n It must have the columns '[\"avg_{valueKey}\", \"std_{valueKey}\"]' and\n 'axisKey' must be specified in the indices.\n The easiest way to generate such a frame is using the\n 'utilities.getFluctuationFrame' method.\n \n valueKey : string\n Name of the dataFrame key which dependence is analyzed.\n \n axisKey : string\n Name of the dataFrame key which will be displayed on the y-axis.\n Must be in the indices of the 'fluctFrame'\n \n Returns\n -------\n fig : 'matplotlib.figure'\n\n"
],
[
"fig = pD.plotFluctuations(fluctFrame, valueKey=\"sDev\", axisKey=\"nSamples\")\nplt.show(fig)",
"_____no_output_____"
]
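,
[
"# A hedged follow-up usage example (assuming the same 'fluctFrame' as above):\n# the fluctuations of the mean values can be inspected against the bin size\n# in the same way, using only the documented plotFluctuations arguments.\nfig = pD.plotFluctuations(fluctFrame, valueKey=\"mean\", axisKey=\"nBinSize\")\nplt.show(fig)",
"_____no_output_____"
]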
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb83416873d166ec3a006f19db9b7eaf54d2e4b4
| 893 |
ipynb
|
Jupyter Notebook
|
Exercises/E05-prophet.ipynb
|
tusharsinghsuryavanshi/AdvancedMethodsDataAnalysisClass
|
68ad736ea9e1e9e33226249144f15b5c21f56c03
|
[
"MIT"
] | 14 |
2020-06-03T21:19:36.000Z
|
2021-01-30T19:40:37.000Z
|
Exercises/E05-prophet.ipynb
|
tusharsinghsuryavanshi/AdvancedMethodsDataAnalysisClass
|
68ad736ea9e1e9e33226249144f15b5c21f56c03
|
[
"MIT"
] | 2 |
2020-10-01T21:55:26.000Z
|
2020-11-27T13:07:18.000Z
|
Exercises/E05-prophet.ipynb
|
tusharsinghsuryavanshi/AdvancedMethodsDataAnalysisClass
|
68ad736ea9e1e9e33226249144f15b5c21f56c03
|
[
"MIT"
] | 28 |
2020-06-02T23:36:40.000Z
|
2021-09-06T03:09:29.000Z
| 19.413043 | 75 | 0.545353 |
[
[
[
"# Exercise 05\n\n\n## Using the example_retail_sales dataset:\n- Standardize the dataset using np.log\n- Using prophet forescast the next 6 months of the sales logarithm\n- Predict the next 6 months of sales",
"_____no_output_____"
]
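,
[
"A minimal, hedged sketch of one possible approach to this exercise. It assumes the `example_retail_sales.csv` file from the Prophet examples (with columns `ds` and `y`) is available locally and that the `prophet` package is installed; older installations may need `from fbprophet import Prophet` instead.",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nfrom prophet import Prophet\n\n# Load the retail sales data (assumed to have the Prophet-style columns 'ds' and 'y')\ndf = pd.read_csv('example_retail_sales.csv')\n\n# Standardize the dataset using np.log\ndf['y'] = np.log(df['y'])\n\n# Forecast the next 6 months of the sales logarithm with Prophet\nm = Prophet()\nm.fit(df)\nfuture = m.make_future_dataframe(periods=6, freq='MS')\nforecast = m.predict(future)\n\n# Predict the next 6 months of sales by inverting the log transform\nforecast['sales_pred'] = np.exp(forecast['yhat'])\nprint(forecast[['ds', 'sales_pred']].tail(6))",
"_____no_output_____"
]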
]
] |
[
"markdown"
] |
[
[
"markdown"
]
] |
cb834ab19c4db24aee03197e5c74ae0124cff335
| 96,029 |
ipynb
|
Jupyter Notebook
|
custom_training_walkthrough.ipynb
|
kyle-w-brown/tensorflow-1.x
|
f31a9fd0c8f817c592242ad65e1c08786fde216d
|
[
"Apache-2.0"
] | null | null | null |
custom_training_walkthrough.ipynb
|
kyle-w-brown/tensorflow-1.x
|
f31a9fd0c8f817c592242ad65e1c08786fde216d
|
[
"Apache-2.0"
] | null | null | null |
custom_training_walkthrough.ipynb
|
kyle-w-brown/tensorflow-1.x
|
f31a9fd0c8f817c592242ad65e1c08786fde216d
|
[
"Apache-2.0"
] | null | null | null | 72.148009 | 26,570 | 0.717502 |
[
[
[
"# **Custom Training: Walkthrough `tf-1.x`**\r\n\r\n---",
"_____no_output_____"
],
[
"[](https://mybinder.org/v2/gh/kyle-w-brown/tensorflow-1.x.git/HEAD)",
"_____no_output_____"
],
[
"This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/guide/eager) to:\n1. Build a model,\n2. Train this model on example data, and\n3. Use the model to make predictions about unknown data.\n\n## TensorFlow programming\n\nThis guide uses these high-level TensorFlow concepts:\n\n* Enable an [eager execution](https://www.tensorflow.org/guide/eager) development environment,\n* Import data with the [Datasets API](https://www.tensorflow.org/guide/datasets),\n* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).\n\nThis tutorial is structured like many TensorFlow programs:\n\n1. Import and parse the data sets.\n2. Select the type of model.\n3. Train the model.\n4. Evaluate the model's effectiveness.\n5. Use the trained model to make predictions.",
"_____no_output_____"
],
[
"## Setup program",
"_____no_output_____"
],
[
"### Configure imports and eager execution\n\nImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [Tensorlow >=1.8](https://www.tensorflow.org/install/).\n\nOnce eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/guide/eager) for more details.",
"_____no_output_____"
]
],
[
[
"%tensorflow_version 1.x\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nprint(\"TensorFlow version: {}\".format(tf.__version__))\nprint(\"Eager execution: {}\".format(tf.executing_eagerly()))",
"TensorFlow 1.x selected.\nTensorFlow version: 1.15.2\nEager execution: True\n"
]
],
[
[
"## The Iris classification problem\n\nImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).\n\nThe Iris genus entails about 300 species, but our program will only classify the following three:\n\n* Iris setosa\n* Iris virginica\n* Iris versicolor\n\n<table>\n <tr><td>\n <img src=\"https://www.tensorflow.org/images/iris_three_species.jpg\"\n alt=\"Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://commons.wikimedia.org/w/index.php?curid=170298\">Iris setosa</a> (by <a href=\"https://commons.wikimedia.org/wiki/User:Radomil\">Radomil</a>, CC BY-SA 3.0), <a href=\"https://commons.wikimedia.org/w/index.php?curid=248095\">Iris versicolor</a>, (by <a href=\"https://commons.wikimedia.org/wiki/User:Dlanglois\">Dlanglois</a>, CC BY-SA 3.0), and <a href=\"https://www.flickr.com/photos/33397993@N05/3352169862\">Iris virginica</a> (by <a href=\"https://www.flickr.com/photos/33397993@N05\">Frank Mayfield</a>, CC BY-SA 2.0).<br/> \n </td></tr>\n</table>\n\nFortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems.",
"_____no_output_____"
],
[
"## Import and parse the training dataset\n\nDownload the dataset file and convert it into a structure that can be used by this Python program.\n\n### Download the dataset\n\nDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file.",
"_____no_output_____"
]
],
[
[
"train_dataset_url = \"https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv\"\n\ntrain_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),\n origin=train_dataset_url)\n\nprint(\"Local copy of the dataset file: {}\".format(train_dataset_fp))",
"Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv\n\r8192/2194 [================================================================================================================] - 0s 0us/step\nLocal copy of the dataset file: /root/.keras/datasets/iris_training.csv\n"
]
],
[
[
"### Inspect the data\n\nThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries:",
"_____no_output_____"
]
],
[
[
"!head -n5 {train_dataset_fp}",
"120,4,setosa,versicolor,virginica\n6.4,2.8,5.6,2.2,2\n5.0,2.3,3.3,1.0,1\n4.9,2.5,4.5,1.7,2\n4.9,3.1,1.5,0.1,0\n"
]
],
[
[
"From this view of the dataset, notice the following:\n\n1. The first line is a header containing information about the dataset:\n * There are 120 total examples. Each example has four features and one of three possible label names. \n2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/#example)* per line, where:\n * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/#feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements.\n * The last column is the *[label](https://developers.google.com/machine-learning/glossary/#label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.\n\nLet's write that out in code:",
"_____no_output_____"
]
],
[
[
"# column order in CSV file\ncolumn_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']\n\nfeature_names = column_names[:-1]\nlabel_name = column_names[-1]\n\nprint(\"Features: {}\".format(feature_names))\nprint(\"Label: {}\".format(label_name))",
"Features: ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']\nLabel: species\n"
]
],
[
[
"Each label is associated with string name (for example, \"setosa\"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:\n\n* `0`: Iris setosa\n* `1`: Iris versicolor\n* `2`: Iris virginica\n\nFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology).",
"_____no_output_____"
]
],
[
[
"class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']",
"_____no_output_____"
]
],
[
[
"### Create a `tf.data.Dataset`\n\nTensorFlow's [Dataset API](https://www.tensorflow.org/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n\n\nSince the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter.",
"_____no_output_____"
]
],
[
[
"batch_size = 32\n\ntrain_dataset = tf.contrib.data.make_csv_dataset(\n train_dataset_fp,\n batch_size, \n column_names=column_names,\n label_name=label_name,\n num_epochs=1)",
"WARNING:tensorflow:\nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\nWARNING:tensorflow:From <ipython-input-6-b631b53c09af>:8: make_csv_dataset (from tensorflow.contrib.data.python.ops.readers) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.experimental.make_csv_dataset(...)`.\nWARNING:tensorflow:From /tensorflow-1.15.2/python3.7/tensorflow_core/python/data/experimental/ops/readers.py:540: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_determinstic`.\n"
]
],
[
[
"The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`\n\nWith eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features:",
"_____no_output_____"
]
],
[
[
"features, labels = next(iter(train_dataset))\n\nfeatures",
"_____no_output_____"
]
],
[
[
"Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.\n\nYou can start to see some clusters by plotting a few features from the batch:",
"_____no_output_____"
]
],
[
[
"plt.scatter(features['petal_length'].numpy(),\n features['sepal_length'].numpy(),\n c=labels.numpy(),\n cmap='viridis')\n\nplt.xlabel(\"Petal length\")\nplt.ylabel(\"Sepal length\");",
"_____no_output_____"
]
],
[
[
"To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.\n\nThis function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension.",
"_____no_output_____"
]
],
[
[
"def pack_features_vector(features, labels):\n \"\"\"Pack the features into a single array.\"\"\"\n features = tf.stack(list(features.values()), axis=1)\n return features, labels",
"_____no_output_____"
]
],
[
[
"Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset:",
"_____no_output_____"
]
],
[
[
"train_dataset = train_dataset.map(pack_features_vector)",
"_____no_output_____"
]
],
[
[
"The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples:",
"_____no_output_____"
]
],
[
[
"features, labels = next(iter(train_dataset))\n\nprint(features[:5])",
"tf.Tensor(\n[[5.1 3.5 1.4 0.3]\n [6.4 2.8 5.6 2.2]\n [7.7 2.8 6.7 2. ]\n [6. 2.2 5. 1.5]\n [5. 3.2 1.2 0.2]], shape=(5, 4), dtype=float32)\n"
]
],
[
[
"## Select the type of model\n\n### Why model?\n\nA *[model](https://developers.google.com/machine-learning/crash-course/glossary#model)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.\n\nCould you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.\n\n### Select the model\n\nWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/#neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/#hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/#neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/#fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer:\n\n<table>\n <tr><td>\n <img src=\"https://www.tensorflow.org/images/custom_estimators/full_network.png\"\n alt=\"A diagram of the network architecture: Inputs, 2 hidden layers, and outputs\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/> \n </td></tr>\n</table>\n\nWhen the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossary#inference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*.",
"_____no_output_____"
],
[
"### Create a model using Keras\n\nThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.\n\nThe [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required.",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required\n tf.keras.layers.Dense(10, activation=tf.nn.relu),\n tf.keras.layers.Dense(3)\n])",
"_____no_output_____"
]
],
[
[
"The *[activation function](https://developers.google.com/machine-learning/crash-course/glossary#activation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossary#ReLU) is common for hidden layers.\n\nThe ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.",
"_____no_output_____"
],
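[
"# A small, hedged illustration (not part of the original tutorial): ReLU keeps\n# positive values unchanged and maps negative values to zero, which is the\n# non-linearity discussed above.\nprint(tf.nn.relu([-2.0, -0.5, 0.0, 1.0, 3.0]).numpy())",
"_____no_output_____"
],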
[
"### Using the model\n\nLet's have a quick look at what this model does to a batch of features:",
"_____no_output_____"
]
],
[
[
"predictions = model(features)\npredictions[:5]",
"_____no_output_____"
]
],
[
[
"Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logits) for each class. \n\nTo convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function:",
"_____no_output_____"
]
],
[
[
"tf.nn.softmax(predictions[:5])",
"_____no_output_____"
]
],
[
[
"Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions.",
"_____no_output_____"
]
],
[
[
"print(\"Prediction: {}\".format(tf.argmax(predictions, axis=1)))\nprint(\" Labels: {}\".format(labels))",
"Prediction: [2 1 1 1 2 1 1 1 1 1 1 2 2 1 1 2 1 1 1 2 1 1 1 1 1 1 1 1 1 1 2 1]\n Labels: [0 2 2 2 0 2 2 2 1 1 2 0 0 1 1 0 1 2 0 0 1 2 1 2 2 2 1 2 0 1 0 2]\n"
]
],
[
[
"## Train the model\n\n*[Training](https://developers.google.com/machine-learning/crash-course/glossary#training)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossary#overfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.\n\nThe Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/#supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/#unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features.",
"_____no_output_____"
],
[
"### Define the loss and gradient function\n\nBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossary#loss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.\n\nOur model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples.",
"_____no_output_____"
]
],
[
[
"def loss(model, x, y):\n y_ = model(x)\n return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)\n\n\nl = loss(model, features, labels)\nprint(\"Loss test: {}\".format(l))",
"Loss test: 1.1898300647735596\n"
]
],
[
[
"Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/guide/eager).",
"_____no_output_____"
]
],
[
[
"def grad(model, inputs, targets):\n with tf.GradientTape() as tape:\n loss_value = loss(model, inputs, targets)\n return loss_value, tape.gradient(loss_value, model.trainable_variables)",
"_____no_output_____"
]
],
[
[
"### Create an optimizer\n\nAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n\n<table>\n <tr><td>\n <img src=\"https://cs231n.github.io/assets/nn3/opt1.gif\" width=\"70%\"\n alt=\"Optimization algorithms visualized over time in 3D space.\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br/>(Source: <a href=\"http://cs231n.github.io/neural-networks-3/\">Stanford class CS231n</a>, MIT License, Image credit: <a href=\"https://twitter.com/alecrad\">Alec Radford</a>)\n </td></tr>\n</table>\n\nTensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results.",
"_____no_output_____"
],
[
"Let's setup the optimizer and the `global_step` counter:",
"_____no_output_____"
]
],
[
[
"optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)\n\nglobal_step = tf.Variable(0)",
"_____no_output_____"
]
],
[
[
"We'll use this to calculate a single optimization step:",
"_____no_output_____"
]
],
[
[
"loss_value, grads = grad(model, features, labels)\n\nprint(\"Step: {}, Initial Loss: {}\".format(global_step.numpy(),\n loss_value.numpy()))\n\noptimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)\n\nprint(\"Step: {}, Loss: {}\".format(global_step.numpy(),\n loss(model, features, labels).numpy()))",
"Step: 0, Initial Loss: 1.1898300647735596\nStep: 1, Loss: 1.1726126670837402\n"
]
],
[
[
"### Training loop\n\nWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:\n\n1. Iterate each *epoch*. An epoch is one pass through the dataset.\n2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).\n3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.\n4. Use an `optimizer` to update the model's variables.\n5. Keep track of some stats for visualization.\n6. Repeat for each epoch.\n\nThe `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/#hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation.",
"_____no_output_____"
]
],
[
[
"## Note: Rerunning this cell uses the same model variables\n\nfrom tensorflow import contrib\ntfe = contrib.eager\n\n# keep results for plotting\ntrain_loss_results = []\ntrain_accuracy_results = []\n\nnum_epochs = 201\n\nfor epoch in range(num_epochs):\n epoch_loss_avg = tfe.metrics.Mean()\n epoch_accuracy = tfe.metrics.Accuracy()\n\n # Training loop - using batches of 32\n for x, y in train_dataset:\n # Optimize the model\n loss_value, grads = grad(model, x, y)\n optimizer.apply_gradients(zip(grads, model.trainable_variables),\n global_step)\n\n # Track progress\n epoch_loss_avg(loss_value) # add current batch loss\n # compare predicted label to actual label\n epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y)\n\n # end epoch\n train_loss_results.append(epoch_loss_avg.result())\n train_accuracy_results.append(epoch_accuracy.result())\n \n if epoch % 50 == 0:\n print(\"Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}\".format(epoch,\n epoch_loss_avg.result(),\n epoch_accuracy.result()))",
"Epoch 000: Loss: 0.063, Accuracy: 99.167%\nEpoch 050: Loss: 0.062, Accuracy: 99.167%\nEpoch 100: Loss: 0.061, Accuracy: 99.167%\nEpoch 150: Loss: 0.061, Accuracy: 99.167%\nEpoch 200: Loss: 0.060, Accuracy: 99.167%\n"
]
],
[
[
"### Visualize the loss function over time",
"_____no_output_____"
],
[
"While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.\n\nInterpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up.",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))\nfig.suptitle('Training Metrics')\n\naxes[0].set_ylabel(\"Loss\", fontsize=14)\naxes[0].plot(train_loss_results)\n\naxes[1].set_ylabel(\"Accuracy\", fontsize=14)\naxes[1].set_xlabel(\"Epoch\", fontsize=14)\naxes[1].plot(train_accuracy_results);",
"_____no_output_____"
]
],
[
[
"## Evaluate the model's effectiveness\n\nNow that the model is trained, we can get some statistics on its performance.\n\n*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/#accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:\n\n<table cellpadding=\"8\" border=\"0\">\n <colgroup>\n <col span=\"4\" >\n <col span=\"1\" bgcolor=\"lightblue\">\n <col span=\"1\" bgcolor=\"lightgreen\">\n </colgroup>\n <tr bgcolor=\"lightgray\">\n <th colspan=\"4\">Example features</th>\n <th colspan=\"1\">Label</th>\n <th colspan=\"1\" >Model prediction</th>\n </tr>\n <tr>\n <td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align=\"center\">1</td><td align=\"center\">1</td>\n </tr>\n <tr>\n <td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align=\"center\">2</td><td align=\"center\">2</td>\n </tr>\n <tr>\n <td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align=\"center\">0</td><td align=\"center\">0</td>\n </tr>\n <tr>\n <td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align=\"center\">1</td><td align=\"center\" bgcolor=\"red\">2</td>\n </tr>\n <tr>\n <td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align=\"center\">1</td><td align=\"center\">1</td>\n </tr>\n <tr><td align=\"center\" colspan=\"6\">\n <b>Figure 4.</b> An Iris classifier that is 80% accurate.<br/> \n </td></tr>\n</table>",
"_____no_output_____"
],
[
"### Setup the test dataset\n\nEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossary#test_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.\n\nThe setup for the test `Dataset` is similar to the setup for training `Dataset`. Download the CSV text file and parse that values, then give it a little shuffle:",
"_____no_output_____"
]
],
[
[
"test_url = \"https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv\"\n\ntest_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),\n origin=test_url)",
"Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv\n\r8192/573 [============================================================================================================================================================================================================================================================================================================================================================================================================================================] - 0s 0us/step\n"
],
[
"test_dataset = tf.contrib.data.make_csv_dataset(\n test_fp,\n batch_size, \n column_names=column_names,\n label_name='species',\n num_epochs=1,\n shuffle=False)\n\ntest_dataset = test_dataset.map(pack_features_vector)",
"_____no_output_____"
]
],
[
[
"### Evaluate the model on the test dataset\n\nUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/#epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set.",
"_____no_output_____"
]
],
[
[
"test_accuracy = tfe.metrics.Accuracy()\n\nfor (x, y) in test_dataset:\n logits = model(x)\n prediction = tf.argmax(logits, axis=1, output_type=tf.int32)\n test_accuracy(prediction, y)\n\nprint(\"Test set accuracy: {:.3%}\".format(test_accuracy.result()))",
"Test set accuracy: 96.667%\n"
]
],
[
[
"We can see on the last batch, for example, the model is usually correct:",
"_____no_output_____"
]
],
[
[
"tf.stack([y,prediction],axis=1)",
"_____no_output_____"
]
],
[
[
"## Use the trained model to make predictions\n\nWe've trained a model and \"proven\" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/#unlabeled_example); that is, on examples that contain features but not a label.\n\nIn real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:\n\n* `0`: Iris setosa\n* `1`: Iris versicolor\n* `2`: Iris virginica",
"_____no_output_____"
]
],
[
[
"predict_dataset = tf.convert_to_tensor([\n [5.1, 3.3, 1.7, 0.5,],\n [5.9, 3.0, 4.2, 1.5,],\n [6.9, 3.1, 5.4, 2.1]\n])\n\npredictions = model(predict_dataset)\n\nfor i, logits in enumerate(predictions):\n class_idx = tf.argmax(logits).numpy()\n p = tf.nn.softmax(logits)[class_idx]\n name = class_names[class_idx]\n print(\"Example {} prediction: {} ({:4.1f}%)\".format(i, name, 100*p))",
"Example 0 prediction: Iris setosa (99.8%)\nExample 1 prediction: Iris versicolor (99.9%)\nExample 2 prediction: Iris virginica (95.9%)\n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb83558f3f6b728d6a6fd2e63babf77dbe122a7d
| 1,159 |
ipynb
|
Jupyter Notebook
|
web/12/Canonical_economic_models.ipynb
|
Jovansam/lectures-2021
|
32b0992a58191723ef660e1de629193862b19f52
|
[
"MIT"
] | null | null | null |
web/12/Canonical_economic_models.ipynb
|
Jovansam/lectures-2021
|
32b0992a58191723ef660e1de629193862b19f52
|
[
"MIT"
] | null | null | null |
web/12/Canonical_economic_models.ipynb
|
Jovansam/lectures-2021
|
32b0992a58191723ef660e1de629193862b19f52
|
[
"MIT"
] | 2 |
2021-06-26T01:52:28.000Z
|
2021-08-10T14:42:46.000Z
| 20.333333 | 176 | 0.563417 |
[
[
[
"# Lecture 12: Cannonical Economic Models",
"_____no_output_____"
],
[
"[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2021)\n\n[<img src=\"https://mybinder.org/badge_logo.svg\">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2021/master?urlpath=lab/tree/12/Canonical_economic_models.ipynb)",
"_____no_output_____"
],
[
"TO BE ADDED",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
]
] |
cb83570d8297d3e0267def8ba60d3718ffe06be1
| 25,782 |
ipynb
|
Jupyter Notebook
|
assignments/transfer.ipynb
|
VincentV0/8p361-project-imaging
|
2312e2a965b4fafb4adf853661eaef09f035bb50
|
[
"MIT"
] | null | null | null |
assignments/transfer.ipynb
|
VincentV0/8p361-project-imaging
|
2312e2a965b4fafb4adf853661eaef09f035bb50
|
[
"MIT"
] | null | null | null |
assignments/transfer.ipynb
|
VincentV0/8p361-project-imaging
|
2312e2a965b4fafb4adf853661eaef09f035bb50
|
[
"MIT"
] | null | null | null | 94.094891 | 14,161 | 0.812078 |
[
[
[
"# Assignment 4: Transfer learning\n\nThe goal of this assignment is to demonstrate a technique called transfer learning. Transfer learning is a good way to quickly get good performance on the Patch-CAMELYON benchmark.\n\n### Peliminaries\n\nTransfer learning is a technique where instead of random initialization of the parameters of a model, we use a model that was pre-trained for a different task as the starting point. The two ways by which the pre-trained model can be transferred to the new task is by fine-tuning the complete model, or using it as a fixed feature extractor on top of which a new (usually linear) model is trained. For example, we can take a neural network model that was trained on the popular [ImageNet](http://www.image-net.org/) dataset that consists of images of objects (including categories such as \"parachute\" and \"toaster\") and apply it to cancer metastases detection.\n\nThis technique is explained in more detail in the following [video](https://www.youtube.com/watch?v=yofjFQddwHE) by Andrew Ng:\n",
"_____no_output_____"
]
],
[
[
"from IPython.display import YouTubeVideo\n\nYouTubeVideo('yofjFQddwHE')",
"_____no_output_____"
]
],
[
[
"TL: haal de laatste layer weg en train deze opnieuw met random-initialized weights. Meer layers kunnen toegevoegd worden of meer layers kunnen opnieuw getraind worden.\n\nToepassing van TL: veel data beschikbaar voor oorsprong van transfer, weinig data beschikbaar voor doel van transfer (eg: veel annotated fotos van honden, weinig annotated X-rays)\n\n\n\nIf you are curious about different pre-training that you can use, you might want to have a look at [this paper]( https://arxiv.org/abs/1810.05444).",
"_____no_output_____"
],
[
"\n### Fine-tuning a pre-trained model\n\n*Note that the code blocks below are only illustrative snippets from* `transfer.py` *and cannot be executed on their own within the notebook.*\n\nAn example of fine tuning a model is given in the `transfer.py` file. This example is very similar to the convolutional neural network example from the third assignments, so we will just highlight the differences.\n\nThe Keras library includes quite a few pre-trained models that can be used for transfer learning. The examples uses the MobileNetV2 model that is described in details [here](https://arxiv.org/abs/1801.04381). This architecture is targeted for use on mobile devices. We chose it for this example since it is \"lightweight\" and it can be relatively efficiently trained even on the CPU.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input",
"_____no_output_____"
]
],
[
[
"In addition to the model, we also import the associated preprocessing function that is then used in the generator function instead of the rescale-only preprocessing used in the CNN example:",
"_____no_output_____"
]
],
[
[
"datagen = ImageDataGenerator(preprocessing_function=preprocess_input)",
"_____no_output_____"
]
],
[
[
"The code snippet below shows how to initialize the MobileNetV2 model for fine-tuning on the Patch-CAMELYON dataset. Compared to the previous examples that used the Keras Sequential API, this example uses the Keras Functional API.",
"_____no_output_____"
]
],
[
[
"input = Input(input_shape)\n\n# get the pretrained model, cut out the top layer\npretrained = MobileNetV2(input_shape=input_shape, include_top=False, weights='imagenet')\n\n# if the pretrained model it to be used as a feature extractor, and not for\n# fine-tuning, the weights of the model can be frozen in the following way\n# for layer in pretrained.layers:\n# layer.trainable = False\n\noutput = pretrained(input)\noutput = GlobalAveragePooling2D()(output)\noutput = Dropout(0.5)(output)\noutput = Dense(1, activation='sigmoid')(output)\n\nmodel = Model(input, output)\n\n# note the lower lr compared to the cnn example\nmodel.compile(SGD(lr=0.001, momentum=0.95), loss = 'binary_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"The architecture of the model is given below. The MobileNetV2 model takes the 96x96x3 images from the Patch-CAMELYON dataset and produces 1280 feature maps of size 3x3. The feature maps are then pooled and connected to the output layer of the model (with a dropout layer in between; see Exercise 3).",
"_____no_output_____"
]
],
[
[
"# _________________________________________________________________\n# Layer (type) Output Shape Param #\n# =================================================================\n# input_1 (InputLayer) (None, 96, 96, 3) 0\n# _________________________________________________________________\n# mobilenetv2_1.00_96 (Model) (None, 3, 3, 1280) 2257984\n# _________________________________________________________________\n# global_average_pooling2d_1 ( (None, 1280) 0\n# _________________________________________________________________\n# dropout_1 (Dropout) (None, 1280) 0\n# _________________________________________________________________\n# dense_1 (Dense) (None, 1) 1281\n# =================================================================\n# Total params: 2,259,265\n# Trainable params: 2,225,153\n# Non-trainable params: 34,112",
"_____no_output_____"
]
],
[
[
"The remainder of the code in `transfer.py` performs training (i.e. fine-tuning) of the model in much the same way as in the CNN example. One difference is that instead of training for a number of full epochs, we define \"mini-epochs\" that contain around 5% of the training and validation samples. Since the fine-tuning of the model converges fast (you can expect convergence in less than one epoch), this will provide more fine-grained feedback about the performance on the validation set. ",
"_____no_output_____"
],
[
"## Exercise 1\n\nWhen does transfer learning make sense? Hint: watch the video. Does it make sense to do transfer learning from ImageNet to the Patch-CAMELYON dataset?\n\n<i><b> ANSWER:</b> When a limited amount of images is available in the dataset that you want to model on, but there is another model trained on a dataset with more images. Since ImageNet trained on >14 million images and Patch-Camelyon on 144.000, it would make sense to do transfer learning from ImageNet to Patch-Camelyon. </i>\n\n\n\n## Exercise 2\n\nRun the example in `transfer.py`. Then, modify the code so that the MobileNetV2 model is not initialized from the ImageNet weights, but randomly (you can do that by setting the `weights` parameter to `None`). Analyze the results from both runs and compare them to the CNN example in assignment 3.\n\n<i><b> ANSWER: </b></i>\n \n \n| Metric (weights = `ImageNet`) | Score |\n|---------------------------|-------|\n| Loss (model.evaluate) | 0.883 |\n| Accuracy (model.evaluate) | 0.608 |\n| AUC (model.predict) | 0.826 |\n| F1 (model.predict) | 0.376 |\n| Accuracy (model.predict) | 0.608 |\n\n| Metric (weights = `None`) | Score |\n|---------------------------|-------|\n| Loss (model.evaluate) | 0.693 |\n| Accuracy (model.evaluate) | 0.5 |\n| AUC (model.predict) | 0.554 |\n| F1 (model.predict) | 0.0 |\n| Accuracy (model.predict) | 0.5 |\n\n\n\n\n## Exercise 3\n\nThe model in `transfer.py` uses a dropout layer. How does dropout work and what is the effect of adding dropout layers the the network architecture? What is the observed effect when removing the dropout layer from this model? Hint: check out the Keras documentation for this layer.\n\n<i><b> ANSWER: </b> </i>\n\n| Metric (weights = `ImageNet`, no dropout) | Score |\n|---------------------------|-------|\n| Loss (model.evaluate) | 2.542 |\n| Accuracy (model.evaluate) | 0.500 |\n| AUC (model.predict) | 0.704 |\n| F1 (model.predict) | 0.001 |\n| Accuracy (model.predict) | 0.500 |\n\n\n\n\n\n\n## Submission checklist\n\n- Exercise 1: Answers to the questions\n- Exercise 2: Answers to the questions and code\n- Exercise 3: Answers to the questions\n\n### Before you start working on the main project...\n\nAs mentioned before, transfer learning is a good way to quickly get good performance on the Patch-CAMELYON benchmark. Note, however, that this is not the objectives of the course. One of the main objectives is for the students to get \"insight in setting up a research question that can be quantitatively investigated\". While it would certainly be nice to score high on the challenge leaderboard, it is much more important to ask a good research question and properly investigate it. You are free to choose what you want to investigate and the course instructors can give you feedback.",
"_____no_output_____"
]
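,
[
"A minimal, hedged sketch related to Exercises 2 and 3 (assuming a TensorFlow/Keras installation with eager execution available). The commented line shows the single change in `transfer.py` needed for random initialization, and the small demo illustrates that `Dropout` only zeroes activations during training.",
"_____no_output_____"
],
[
"import tensorflow as tf\n\n# Exercise 2: the only change needed in transfer.py is the `weights` argument, e.g.\n# pretrained = MobileNetV2(input_shape=(96, 96, 3), include_top=False, weights=None)\n\n# Exercise 3: Dropout randomly zeroes a fraction `rate` of its inputs during training\n# and scales the remaining activations by 1/(1 - rate); at inference time it is inactive.\nlayer = tf.keras.layers.Dropout(0.5)\nx = tf.ones((1, 10))\nprint(layer(x, training=True).numpy())   # roughly half the entries are 0, the rest are 2\nprint(layer(x, training=False).numpy())  # identical to the input",
"_____no_output_____"
]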
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb836722715b026e1c8970aad4e9eb6a4a8fc48a
| 36,851 |
ipynb
|
Jupyter Notebook
|
System Automation API using Python .ipynb
|
rishiCSE17/py_Maths
|
699e39ec15f187f9a83c028b0fd1ed4862b82886
|
[
"MIT"
] | 1 |
2019-12-12T17:38:52.000Z
|
2019-12-12T17:38:52.000Z
|
System Automation API using Python .ipynb
|
rishiCSE17/py_Maths
|
699e39ec15f187f9a83c028b0fd1ed4862b82886
|
[
"MIT"
] | null | null | null |
System Automation API using Python .ipynb
|
rishiCSE17/py_Maths
|
699e39ec15f187f9a83c028b0fd1ed4862b82886
|
[
"MIT"
] | 2 |
2019-09-30T15:20:20.000Z
|
2021-01-05T15:02:28.000Z
| 25.484786 | 1,514 | 0.509294 |
[
[
[
"# Python Basics",
"_____no_output_____"
],
[
"## Variables\n\nPython variables are untyped, i.e. no datatype is required to define a variable",
"_____no_output_____"
]
],
[
[
"x=10 # static allocation ",
"_____no_output_____"
],
[
"print(x) # to print a variable ",
"10\n"
]
],
[
[
"Sometimes variables are allocated dynamically during runtime by user input. Python not only creates a new variable on-demand, also, it assigns corresponding type. ",
"_____no_output_____"
]
],
[
[
"y=input('Enter something : ')\nprint(y)",
"Enter something : hello there... \nhello there... \n"
]
],
[
[
"To check the type of a variable use `type()` method. ",
"_____no_output_____"
]
],
[
[
"type(x)",
"_____no_output_____"
],
[
"type(y)",
"_____no_output_____"
]
],
[
[
"Any input given is Python is by default of string type. You may use different __typecasting__ constructors to change it. ",
"_____no_output_____"
]
],
[
[
"# without typeasting \ny=input('Enter a number... ')\nprint(f'type of y is {type(y)}') # string formatting\n\n# with typeasting into integer\ny=int(input('Enter a number... '))\nprint(f'type of y is {type(y)}') # string formatting",
"Enter a number... 25\ntype of y is <class 'str'>\nEnter a number... 25\ntype of y is <class 'int'>\n"
]
],
[
[
"How to check if an existing variable is of a given type ?",
"_____no_output_____"
]
],
[
[
"x=10\nprint(isinstance(x,int))\nprint(isinstance(x,float))",
"True\nFalse\n"
]
],
[
[
"## Control Flow Structure ",
"_____no_output_____"
],
[
"### if-else-elif ",
"_____no_output_____"
]
],
[
[
"name = input ('Enter name...')\nage = int(input('Enter age... '))\nif age in range(0,150):\n if age < 18 : \n print(f'{name} is a minor')\n elif age >= 18 and age < 60:\n print(f'{name} is a young person')\n else:\n print(f'{name} is an elderly person')\nelse:\n print('Invalid age')",
"Enter name...Rishi\nEnter age... 30\nRishi is a young person\n"
]
],
[
[
"### For-loop",
"_____no_output_____"
]
],
[
[
"print('Table Generator\\n************************')\nnum = int(input('Enter a number... '))\nfor i in range(1,11):\n print(f'{num} x {i} \\t = {num*i}')",
"Table Generator\n************************\nEnter a number... 5\n5 x 1 \t = 5\n5 x 2 \t = 10\n5 x 3 \t = 15\n5 x 4 \t = 20\n5 x 5 \t = 25\n5 x 6 \t = 30\n5 x 7 \t = 35\n5 x 8 \t = 40\n5 x 9 \t = 45\n5 x 10 \t = 50\n"
]
],
[
[
"### While-loop",
"_____no_output_____"
]
],
[
[
"print('Table Generator\\n************************')\nnum = int(input('Enter a number...'))\ni = 1\nwhile i<11:\n print(f'{num} x {i} \\t = {num * i}')\n i += 1",
"Table Generator\n************************\nEnter a number...6\n6 x 1 \t = 6\n6 x 2 \t = 12\n6 x 3 \t = 18\n6 x 4 \t = 24\n6 x 5 \t = 30\n6 x 6 \t = 36\n6 x 7 \t = 42\n6 x 8 \t = 48\n6 x 9 \t = 54\n6 x 10 \t = 60\n"
]
],
[
[
"## Primitive Data-structures ",
"_____no_output_____"
],
[
"### List \nList is a heterogenous linked-list stucture in python",
"_____no_output_____"
]
],
[
[
"lst = [1,2,'a','b'] #creating a list",
"_____no_output_____"
],
[
"lst",
"_____no_output_____"
],
[
"type(lst)",
"_____no_output_____"
],
[
"loc=2\nprint(f'item at location {loc} is {lst[loc]}') # reading item by location",
"item at location 2 is a\n"
],
[
"lst[2]='abc' # updating an item in a list \nlst",
"_____no_output_____"
],
[
"lst.insert(2,'a') # inserting into a specific location\nlst",
"_____no_output_____"
],
[
"lst.pop(2) # deleting from a specific location\nlst",
"_____no_output_____"
],
[
"len(lst) # length of a list",
"_____no_output_____"
],
[
"lst.reverse() # reversing a list\nlst",
"_____no_output_____"
],
[
"test_list=[1,5,7,8,10] # sorting a list\ntest_list.sort(reverse=False)\ntest_list",
"_____no_output_____"
]
],
[
[
"### Set ",
"_____no_output_____"
]
],
[
[
"P = {2,3,5,7} # Set of single digit prime numbers \nO = {1,3,5,7} # Set of single digit odd numbers \nE = {0,2,4,6,8} # Set of ingle digit even numbers",
"_____no_output_____"
],
[
"type(P)",
"_____no_output_____"
],
[
"P.union(O) # odd or prime",
"_____no_output_____"
],
[
"P.intersection(E) # even and prime",
"_____no_output_____"
],
[
"P-E # Prime but not even ",
"_____no_output_____"
]
],
[
[
"Finding distinct numbers from a list of numbers by typecasting into set ",
"_____no_output_____"
]
],
[
[
"lst = [1,2,4,5,6,2,1,4,5,6,1]\nprint(lst)\nlst = list(set(lst)) # List --> Set --> List\nprint(lst)",
"[1, 2, 4, 5, 6, 2, 1, 4, 5, 6, 1]\n[1, 2, 4, 5, 6]\n"
]
],
[
[
"### Dictionarry \n\nUnordered named list, i.e. values are index by alphanumeric indices called key. ",
"_____no_output_____"
]
],
[
[
"import random as rnd\ntest_d = {\n 'name' : 'Something', #kay : value\n 'age' : rnd.randint(18,60),\n 'marks' : {\n 'Physics' : rnd.randint(0,100),\n 'Chemistry' : rnd.randint(0,100),\n 'Mathematics' : rnd.randint(0,100),\n 'Biology' : rnd.randint(0,100),\n }\n}",
"_____no_output_____"
],
[
"test_d",
"_____no_output_____"
]
],
[
[
"A list of dictionarry forms a tabular structure. Each key becomes a column and the corresponding value becomes the value that specific row at that coloumn. ",
"_____no_output_____"
]
],
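[
[
"The tabular view can be made explicit with a small, hypothetical example: a list of dictionaries converted into a `pandas` DataFrame. This is only an illustration and assumes the `pandas` package is installed; it is not used elsewhere in this notebook, and the record values are made up.",
"_____no_output_____"
]
],
[
[
"# Hypothetical illustration: a list of dictionaries rendered as a table.\n# Assumes the pandas package is installed; the records below are made up.\nimport pandas as pd\n\nrecords = [\n    {'name': 'abc', 'age': 25, 'city': 'London'},\n    {'name': 'xyz', 'age': 31, 'city': 'Delhi'},\n]\npd.DataFrame(records)  # keys become columns, each dictionary becomes a row",
"_____no_output_____"
]
],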
[
[
"test_d['marks'] # reading a value by key",
"_____no_output_____"
],
[
"test_d['name'] = 'anything' # updating a value by its key",
"_____no_output_____"
],
[
"test_d",
"_____no_output_____"
],
[
"for k in test_d.keys(): # reading values iteratively by its key\n print(f'value at key {k} is {test_d[k]} of type {type(test_d[k])}')",
"value at key name is anything of type <class 'str'>\nvalue at key age is 44 of type <class 'int'>\nvalue at key marks is {'Physics': 56, 'Chemistry': 0, 'Mathematics': 10, 'Biology': 58} of type <class 'dict'>\n"
]
],
[
[
"### Tuples \n\nImmutable ordered collection of heterogenous data. ",
"_____no_output_____"
]
],
[
[
"tup1 = ('a',1,2)",
"_____no_output_____"
],
[
"tup1",
"_____no_output_____"
],
[
"type(tup1)",
"_____no_output_____"
],
[
"tup1[1] # reading from index",
"_____no_output_____"
],
[
"tup1[1] = 3 # immutable collection, updation is not possible",
"_____no_output_____"
],
[
"lst1 = list(tup1) #typecast into list\nlst1",
"_____no_output_____"
]
],
[
[
"## Serialization\n\n### Theory\nComputer networks are defined as a collection interconnected autonomous systems. The connections (edges) between netwrok devices (nodes) are descibed by its Topology which is modeled by Graph Theoretic principles and the computing modeles i.e. Algorithms are designed based on Distributed Systems. The connetions are inheritly FIFO (Sequential) in nature, thus it cannot carry any non-linear data-structures. However, duting RPC communication limiting the procedures to only linear structures are not realistic, especially while using Objects, as Objects are stored in memory Heap. Therefore Data stored in a Non-Linear DS must be converted into a linear format (Byte-Stream) before transmitting in a way that the receiver must reconstruct the source DS and retrive the original data. This transformation is called Serialization. All Modern programming languages such as Java and Python support Serializtion.",
"_____no_output_____"
],
[
"### Serializing primitive ADTs ",
"_____no_output_____"
]
],
[
[
"test_d = {\n 'name' : 'Something', #kay : value\n 'age' : rnd.randint(18,60),\n 'marks' : {\n 'Physics' : rnd.randint(0,100),\n 'Chemistry' : rnd.randint(0,100),\n 'Mathematics' : rnd.randint(0,100),\n 'Biology' : rnd.randint(0,100),\n },\n 'optionals' : ['music', 'Mechanics']\n}",
"_____no_output_____"
],
[
"test_d",
"_____no_output_____"
],
[
"import json # default serialization library commonly used in RESTFul APIs",
"_____no_output_____"
],
[
"# Step 1\nser_dat = json.dumps(test_d) # Serialization\nprint(ser_dat)\nprint(type(ser_dat))",
"{\"name\": \"Something\", \"age\": 20, \"marks\": {\"Physics\": 89, \"Chemistry\": 55, \"Mathematics\": 89, \"Biology\": 25}, \"optionals\": [\"music\", \"Mechanics\"]}\n<class 'str'>\n"
],
[
"# Step 2\nbs_data = ser_dat.encode() # Encoding into ByteStream\nprint(bs_data)\nprint(type(bs_data))",
"b'{\"name\": \"Something\", \"age\": 20, \"marks\": {\"Physics\": 89, \"Chemistry\": 55, \"Mathematics\": 89, \"Biology\": 25}, \"optionals\": [\"music\", \"Mechanics\"]}'\n<class 'bytes'>\n"
],
[
"# Step 3\nser_data2 = bytes.decode(bs_data) # Decoding strings from ByteStream\nprint(ser_data2)\nprint(type(ser_data2))",
"{\"name\": \"Something\", \"age\": 20, \"marks\": {\"Physics\": 89, \"Chemistry\": 55, \"Mathematics\": 89, \"Biology\": 25}, \"optionals\": [\"music\", \"Mechanics\"]}\n<class 'str'>\n"
],
[
"# Step 4 \njson.loads(ser_data2) # Deserializing ",
"_____no_output_____"
]
],
[
[
"### Serializing Objects ",
"_____no_output_____"
]
],
[
[
"class MyClass: # defining class \n # member variables \n name \n age\n \n # member functions\n def __init__(self,name, age): #__init__() = Constructor \n self.name = name #'self' is like 'this' in java \n self.age = age\n \n def get_info(self): # returns a dictionary \n return {'name' : self.name , 'age' : self.age}\n \nobj1 = MyClass('abc',20) # crates an object\n\nobj1.get_info() #invoke functions from object",
"_____no_output_____"
],
[
"json.dumps(obj1) # object can't be serializable in string ",
"_____no_output_____"
],
[
"import pickle as pkl # pickle library is used to serialize objects\n\nbs_data = pkl.dumps(obj1) # serialization + encoding \n\nprint(bs_data) \nprint(type(bs_data))\n\nobj2 = pkl.loads(bs_data) # Decoding + Deserialization\n\nobj2.get_info()",
"b'\\x80\\x03c__main__\\nMyClass\\nq\\x00)\\x81q\\x01}q\\x02(X\\x04\\x00\\x00\\x00nameq\\x03X\\x03\\x00\\x00\\x00abcq\\x04X\\x03\\x00\\x00\\x00ageq\\x05K\\x14ub.'\n<class 'bytes'>\n"
]
],
[
[
"# Interfacing with the Operating System\n\nIn this section we will discuss various methods a Python script may use to interface with an Operating Systems. We'll fist understand the Local interfacing i.e. the script runs on top of the OS. Later, We'll see how it communicates with a remote computer using networking protocols such as Telnet and SSH. ",
"_____no_output_____"
],
[
"## Local interfacing ",
"_____no_output_____"
]
],
[
[
"import os\n\ncmd = 'dir *.exe' # command to be executed\nfor i in os.popen(cmd).readlines():\n print(i)",
" Volume in drive C has no label.\n\n Volume Serial Number is 720E-DBD8\n\n\n\n Directory of C:\\Users\\sapta\\Documents\n\n\n\n21/02/2020 20:13 9,916,256 FileZilla_3.46.3_win64_sponsored-setup.exe\n\n05/06/2019 22:32 63,046,477 kodi-18.2-Leia-x64.exe\n\n05/06/2019 01:31 23,130,408 XTUSetup.exe\n\n 3 File(s) 96,093,141 bytes\n\n 0 Dir(s) 116,005,609,472 bytes free\n\n"
]
],
[
[
"To run a command without any outputs",
"_____no_output_____"
]
],
[
[
"import os \n\n# write a batch of commnad \ncmds = ['md test_dir' ,\n 'cd test_dir' ,\n 'fsutil file createnew test1.txt 0',\n 'fsutil file createnew test2.txt 0',\n 'fsutil file createnew test3.txt 0',\n 'cd..'\n ]\n\n# call commands from the batch\nfor c in cmds:\n os.system(c)\n \n# verify \nfor i in os.popen('dir test*.txt').readlines():\n print(i)",
" Volume in drive C has no label.\n\n Volume Serial Number is 720E-DBD8\n\n\n\n Directory of C:\\Users\\sapta\\Documents\n\n\n\n06/12/2020 15:32 0 test1.txt\n\n06/12/2020 15:32 0 test2.txt\n\n06/12/2020 15:32 0 test3.txt\n\n 3 File(s) 0 bytes\n\n 0 Dir(s) 116,013,948,928 bytes free\n\n"
]
],
[
[
"## Remote Interfacing",
"_____no_output_____"
],
[
"* Install Telnet daemon on the Linux host : `sudo apt -y install telnetd`\n* Verify installation using : `namp localhiost`",
"_____no_output_____"
]
],
[
[
"import telnetlib as tn\nimport getpass\n\nhost = '192.168.1.84'\nuser = input(\"Enter your remote account: \")\npassword = getpass.getpass()\n\ntn_session = tn.Telnet(host)\n\ntn_session.read_until(b\"login: \")\ntn_session.write(user.encode('ascii') + b\"\\n\")\nif password:\n tn_session.read_until(b\"Password: \")\n tn_session.write(password.encode('ascii') + b\"\\n\")\n \ntn_session.write(b\"ls\\n\")\nprint(tn_session.read_all().decode('ascii'))\n",
"Enter your remote account: rishi\n········\n"
]
],
[
[
"",
"_____no_output_____"
],
[
"Remote config with SSH (Secure Communication)",
"_____no_output_____"
]
],
[
[
"import paramiko\nimport getpass\n\nhost = input('Enter host IP')\nport = 22\nusername = input(\"Enter your remote account: \")\npassword = getpass.getpass()\n\ncommand = \"ls\"\n\nssh = paramiko.SSHClient()\nssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\nssh.connect(host, port, username, password)\n\nstdin, stdout, stderr = ssh.exec_command(command)\nfor l in stdout.readlines():\n print(l)",
"Enter host IP192.168.1.84\nEnter your remote account: rishi\n········\ndistribution-karaf-0.4.4-Beryllium-SR4.tar.gz\n\ndownload\n\nodl\n\nShellMon_sock\n\ntest\n\ntest1.txt\n\ntest2.txt\n\ntest34.txt\n\ntest3.txt\n\ntest5.txt\n\ntest_dir\n\ntestfile2.txt\n\ntestfile.txt\n\ntest.sh\n\n"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Home Tasks",
"_____no_output_____"
],
[
"1. Write a python API that runs shell scripts on demand. The shell scripts must be present on the system. The API must take the name of the script as input and display output from the script. Create at least 3 shell scripts of your choice to demonstrate. \n\n2. Write a python API that automatically calls DHCP request for dynamic IP allocation on a given interface, if it doesnt have any IP address.\n\n3. Write a python API that organises files. \n * The API first takes a directory as input on which it will run the organization \n * Thereafter, it asks for a list of pairs (filetype, destination_folder).\n * For example, [('mp3','music'),('png','images'),('jpg','images'),('mov','videos')] means all '.mp3' files will be moved to 'Music' directory likewise for images and Videos. In case the directories do not exist, the API must create them. \n \n4. Write a python API that remotely monitors number of processes running on a system over a given period. ",
"_____no_output_____"
],
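[
"A minimal sketch for task 1, assuming the target shell scripts already exist in the current directory and can be run with `bash`; the script name in the usage example is a placeholder.",
"_____no_output_____"
],
[
"# Minimal sketch for home task 1 (assumes the shell scripts exist in the current directory).\nimport os\n\ndef run_script(script_name):\n    '''Run a shell script by name and print its output.'''\n    for line in os.popen(f'bash {script_name}').readlines():\n        print(line, end='')\n\n# Example usage with a placeholder script name:\n# run_script('backup.sh')",
"_____no_output_____"
],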
[
"# Course Suggestion\n\nhttps://www.linkedin.com/learning/python-essential-training-2/",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb83767da65788ff3bae3b0f49739194e066b8ce
| 904,364 |
ipynb
|
Jupyter Notebook
|
module3-permutation-boosting/Furkan_Onat_LS_DS_233_assignment.ipynb
|
furkanonat/DS-Unit-2-Applied-Modeling
|
e9e09e61a253126d3584bc58af117a59d9b23233
|
[
"MIT"
] | null | null | null |
module3-permutation-boosting/Furkan_Onat_LS_DS_233_assignment.ipynb
|
furkanonat/DS-Unit-2-Applied-Modeling
|
e9e09e61a253126d3584bc58af117a59d9b23233
|
[
"MIT"
] | null | null | null |
module3-permutation-boosting/Furkan_Onat_LS_DS_233_assignment.ipynb
|
furkanonat/DS-Unit-2-Applied-Modeling
|
e9e09e61a253126d3584bc58af117a59d9b23233
|
[
"MIT"
] | null | null | null | 122.725472 | 128,389 | 0.624266 |
[
[
[
"<a href=\"https://colab.research.google.com/github/furkanonat/DS-Unit-2-Applied-Modeling/blob/master/module3-permutation-boosting/Furkan_Onat_LS_DS_233_assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport os",
"_____no_output_____"
],
[
"from google.colab import files\nuploaded = files.upload()",
"_____no_output_____"
],
[
"df = pd.read_csv('freMTPL2freq.csv')\n",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.isnull().sum()",
"_____no_output_____"
],
[
"import sys\n!{sys.executable} -m pip install pandas-profiling\n",
"Requirement already satisfied: pandas-profiling in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (2.8.0)\nRequirement already satisfied: missingno>=0.4.2 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (0.4.2)\nRequirement already satisfied: requests>=2.23.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (2.23.0)\nRequirement already satisfied: confuse>=1.0.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (1.1.0)\nRequirement already satisfied: joblib in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (0.14.1)\nRequirement already satisfied: tangled-up-in-unicode>=0.0.6 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (0.0.6)\nRequirement already satisfied: ipywidgets>=7.5.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (7.5.1)\nRequirement already satisfied: tqdm>=4.43.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (4.46.0)\nRequirement already satisfied: htmlmin>=0.1.12 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (0.1.12)\nRequirement already satisfied: visions[type_image_path]==0.4.4 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (0.4.4)\nRequirement already satisfied: scipy>=1.4.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (1.4.1)\nRequirement already satisfied: numpy>=1.16.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (1.18.1)\nRequirement already satisfied: matplotlib>=3.2.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (3.2.1)\nRequirement already satisfied: phik>=0.9.10 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (0.9.12)\nRequirement already satisfied: jinja2>=2.11.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (2.11.1)\nRequirement already satisfied: pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (1.0.3)\nRequirement already satisfied: astropy>=4.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas-profiling) (4.0)\nRequirement already satisfied: seaborn in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from missingno>=0.4.2->pandas-profiling) (0.10.0)\nRequirement already satisfied: certifi>=2017.4.17 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from requests>=2.23.0->pandas-profiling) (2019.11.28)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from requests>=2.23.0->pandas-profiling) (1.25.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from requests>=2.23.0->pandas-profiling) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from requests>=2.23.0->pandas-profiling) (2.8)\nRequirement already satisfied: pyyaml in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from confuse>=1.0.0->pandas-profiling) (5.3)\nRequirement already satisfied: ipython>=4.0.0; python_version >= \"3.3\" in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=7.5.1->pandas-profiling) (7.12.0)\nRequirement already satisfied: ipykernel>=4.5.1 in 
/Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=7.5.1->pandas-profiling) (5.1.4)\nRequirement already satisfied: traitlets>=4.3.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=7.5.1->pandas-profiling) (4.3.3)\nRequirement already satisfied: widgetsnbextension~=3.5.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=7.5.1->pandas-profiling) (3.5.1)\nRequirement already satisfied: nbformat>=4.2.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=7.5.1->pandas-profiling) (5.0.4)\nRequirement already satisfied: attrs>=19.3.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from visions[type_image_path]==0.4.4->pandas-profiling) (19.3.0)\nRequirement already satisfied: networkx>=2.4 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from visions[type_image_path]==0.4.4->pandas-profiling) (2.4)\nRequirement already satisfied: imagehash; extra == \"type_image_path\" in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from visions[type_image_path]==0.4.4->pandas-profiling) (4.1.0)\nRequirement already satisfied: Pillow; extra == \"type_image_path\" in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from visions[type_image_path]==0.4.4->pandas-profiling) (7.0.0)\nRequirement already satisfied: cycler>=0.10 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from matplotlib>=3.2.0->pandas-profiling) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from matplotlib>=3.2.0->pandas-profiling) (2.4.6)\nRequirement already satisfied: python-dateutil>=2.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from matplotlib>=3.2.0->pandas-profiling) (2.8.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from matplotlib>=3.2.0->pandas-profiling) (1.1.0)\nRequirement already satisfied: numba>=0.38.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from phik>=0.9.10->pandas-profiling) (0.48.0)\nRequirement already satisfied: MarkupSafe>=0.23 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from jinja2>=2.11.1->pandas-profiling) (1.1.1)\nRequirement already satisfied: pytz>=2017.2 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3->pandas-profiling) (2019.3)\nRequirement already satisfied: backcall in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.1.0)\nRequirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (3.0.3)\nRequirement already satisfied: jedi>=0.10 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.14.1)\nRequirement already satisfied: pygments in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (2.5.2)\nRequirement already satisfied: pexpect; sys_platform != \"win32\" in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (4.8.0)\nRequirement already satisfied: setuptools>=18.5 in 
/Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (46.0.0.post20200309)\nRequirement already satisfied: decorator in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (4.4.1)\nRequirement already satisfied: pickleshare in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.7.5)\nRequirement already satisfied: appnope; sys_platform == \"darwin\" in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.1.0)\nRequirement already satisfied: tornado>=4.2 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas-profiling) (6.0.3)\nRequirement already satisfied: jupyter-client in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas-profiling) (5.3.4)\nRequirement already satisfied: six in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from traitlets>=4.3.1->ipywidgets>=7.5.1->pandas-profiling) (1.14.0)\nRequirement already satisfied: ipython-genutils in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from traitlets>=4.3.1->ipywidgets>=7.5.1->pandas-profiling) (0.2.0)\nRequirement already satisfied: notebook>=4.4.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (6.0.3)\nRequirement already satisfied: jupyter-core in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling) (4.6.1)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling) (3.2.0)\nRequirement already satisfied: PyWavelets in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from imagehash; extra == \"type_image_path\"->visions[type_image_path]==0.4.4->pandas-profiling) (1.1.1)\nRequirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from numba>=0.38.1->phik>=0.9.10->pandas-profiling) (0.31.0)\nRequirement already satisfied: wcwidth in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.1.8)\nRequirement already satisfied: parso>=0.5.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from jedi>=0.10->ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.5.2)\nRequirement already satisfied: ptyprocess>=0.5 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from pexpect; sys_platform != \"win32\"->ipython>=4.0.0; python_version >= \"3.3\"->ipywidgets>=7.5.1->pandas-profiling) (0.6.0)\nRequirement already satisfied: pyzmq>=13 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from jupyter-client->ipykernel>=4.5.1->ipywidgets>=7.5.1->pandas-profiling) (18.1.1)\nRequirement already satisfied: terminado>=0.8.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.8.3)\nRequirement already satisfied: Send2Trash in 
/Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (1.5.0)\nRequirement already satisfied: prometheus-client in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.7.1)\nRequirement already satisfied: nbconvert in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (5.6.1)\nRequirement already satisfied: pyrsistent>=0.14.0 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling) (0.15.7)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling) (1.5.0)\nRequirement already satisfied: mistune<2,>=0.8.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.8.4)\nRequirement already satisfied: bleach in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (3.1.0)\nRequirement already satisfied: defusedxml in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.6.0)\nRequirement already satisfied: pandocfilters>=1.4.1 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (1.4.2)\nRequirement already satisfied: entrypoints>=0.2.2 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.3)\nRequirement already satisfied: testpath in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.4.4)\nRequirement already satisfied: zipp>=0.5 in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from importlib-metadata; python_version < \"3.8\"->jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets>=7.5.1->pandas-profiling) (2.2.0)\nRequirement already satisfied: webencodings in /Users/fonat/opt/anaconda3/lib/python3.7/site-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.5.1->pandas-profiling) (0.5.1)\n"
],
[
"from pandas_profiling import ProfileReport\nprofile = ProfileReport(df, minimal=True).to_notebook_iframe()\n\nprofile\n\n",
"_____no_output_____"
],
[
"# Adding a feature for annualized claim frequency\n\ndf['Frequency'] = df['ClaimNb'] /df['Exposure']\ndf.head()",
"_____no_output_____"
],
[
"df['Frequency'].value_counts(normalize=True)",
"_____no_output_____"
],
[
"df['Frequency'].nunique()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df['Exposure'].value_counts()",
"_____no_output_____"
],
[
"df['ClaimNb'].value_counts()",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df.nunique()",
"_____no_output_____"
]
],
[
[
"#### Model 1 \nTarget= ClaimNb\nModel= DecisionTree Classifier\nEvaluation Metric. = Validation Accuracy\nDescription = Make ClaimNb feature 3-class feature\n Added Frequency Feature\n",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport category_encoders as ce\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import FunctionTransformer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier",
"_____no_output_____"
],
[
"df_model_1 = df.copy()",
"_____no_output_____"
],
[
"df_model_1['ClaimNb'].value_counts(normalize=True)",
"_____no_output_____"
],
[
"df_model_1['ClaimNb'].value_counts()",
"_____no_output_____"
],
[
"# I will create a new column for number of claims per policy.\ndf_model_1['ClaimNb_Adj'] = df_model_1['ClaimNb']",
"_____no_output_____"
],
[
"df_model_1.head()",
"_____no_output_____"
],
[
"# I modify the new 'ClaimNb' column to have just 3 classes : 'no claim', 'once', 'more than once'. \ndf_model_1['ClaimNb_Adj'] = df_model_1['ClaimNb_Adj'].replace({0: 'no claim', 1: 'once', 2: 'more than once', 3: 'more than once', 4: 'more than once', 11: 'more than once', 5: 'more than once', 16: 'more than once', 9: 'more than once', 8: 'more than once', 6: 'more than once'})\ndf_model_1.head()",
"_____no_output_____"
],
[
"# I will use \"ClaimNb_Adj\" feature as the target for the model\ny = df_model_1['ClaimNb_Adj']",
"_____no_output_____"
],
[
"# Baseline for the majority class\ndf_model_1['ClaimNb_Adj'].value_counts(normalize=True)",
"_____no_output_____"
],
[
"df_model_1.dtypes",
"_____no_output_____"
],
[
"# Split for test and train\ntrain, test = train_test_split(df_model_1, train_size=0.80, test_size=0.20, stratify=df_model_1['ClaimNb_Adj'], random_state=42)\n\ntrain.shape, test.shape",
"_____no_output_____"
],
[
"# Split for train and val \ntrain, val = train_test_split(train, train_size = 0.80, test_size=0.20, stratify=train['ClaimNb_Adj'], random_state=42)\n\ntrain.shape, val.shape",
"_____no_output_____"
],
[
"def wrangle(X):\n # Drop IDpol since it doesn't have any explanatory power\n # Drop ClaimNb and Frequency as they are a function of our target.\n column_drop = ['IDpol','ClaimNb', 'Frequency']\n X = X.drop(columns=column_drop)\n return X",
"_____no_output_____"
],
[
"train = wrangle(train)\nval = wrangle(val)\ntest = wrangle(test)",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"# Arranging features matrix and y target vector\ntarget = 'ClaimNb_Adj'\nX_train = train.drop(columns=target)\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]\nX_test = test.drop(columns=target)\ny_test = test[target]\n\npipeline = make_pipeline(\n ce.OrdinalEncoder(), \n DecisionTreeClassifier(max_depth = 3)\n)\n\npipeline.fit(X_train, y_train)\nprint('Validation Accuracy', pipeline.score(X_val, y_val))",
"Validation Accuracy 0.9497704688335392\n"
],
[
"import graphviz \nfrom sklearn.tree import export_graphviz\n\ntree = pipeline.named_steps['decisiontreeclassifier']\n\ndot_data = export_graphviz(tree,\n out_file=None,\n feature_names=X_train.columns,\n class_names=y_train.unique().astype(str),\n filled=True,\n impurity=False,\n proportion=True\n )\ngraphviz.Source(dot_data)\n \n ",
"_____no_output_____"
],
[
"y.value_counts(normalize=True)",
"_____no_output_____"
],
[
"# Getting feature importances\nrf = pipeline.named_steps['decisiontreeclassifier']\nimportances = pd.Series(rf.feature_importances_,X_train.columns)\n\n# plot feature importances\n%matplotlib inline\n\nn=11\nplt.figure(figsize=(5,n))\nplt.title(\"Feature Importances\")\nimportances.sort_values()[-n:].plot.barh(color='black');\n",
"_____no_output_____"
],
[
"importances.sort_values(ascending=False)",
"_____no_output_____"
],
[
"# Predict on Test\ny_pred = pipeline.predict(X_test)\ny_pred.shape, y_test.shape",
"_____no_output_____"
],
[
"print('Train Accuracy', pipeline.score(X_train, y_train))\nprint('Validation Accuracy', pipeline.score(X_val, y_val))\n",
"Train Accuracy 0.949763555244188\nValidation Accuracy 0.9497704688335392\n"
]
],
[
[
"A to Assignment Q: Validation Accuracy of Decision Tree Classifier model beats baseline narrowly as the majority class had a frequency of 94.9765%.",
"_____no_output_____"
]
],
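[
[
"A quick sanity check of that comparison (a sketch, assuming `y_val` is still the raw label Series created during the train/validation split): the relative frequency of the majority class is the accuracy that a constant classifier would achieve.",
"_____no_output_____"
]
],
[
[
"# Majority-class baseline: the accuracy of always predicting the most frequent label.\n# Assumes y_val is the raw label Series defined during the split above.\nbaseline = y_val.value_counts(normalize=True).max()\nprint('Majority-class baseline accuracy:', baseline)",
"_____no_output_____"
]
],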
[
[
"from sklearn.metrics import accuracy_score\n\n\n# print the accuracy\naccuracy = accuracy_score(y_pred, y_test)\n\nprint(\"Accuracy : %.4f%%\" % (accuracy * 100.0))",
"Accuracy : 94.9765%\n"
],
[
"from sklearn.metrics import confusion_matrix\ncnf_matrix = confusion_matrix(y_test, y_pred)\n\nprint('Confusion matrix:\\n', cnf_matrix)",
"Confusion matrix:\n [[ 0 376 0]\n [ 0 128791 0]\n [ 0 6436 0]]\n"
],
[
"# Explanatory graph: Confusion Matrix \nfrom sklearn.metrics import plot_confusion_matrix\nplot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')",
"_____no_output_____"
],
[
"# import the metric\nfrom sklearn.metrics import classification_report\n\n# print classification report\nprint(\"Classification Report:\\n\\n\", classification_report(y_test, y_pred))",
"/Users/fonat/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.\n _warn_prf(average, modifier, msg_start, len(result))\n"
]
],
[
[
"### Getting my model's permutation importances",
"_____no_output_____"
]
],
[
[
"transformers = make_pipeline(ce.OrdinalEncoder(), \n SimpleImputer(strategy='median'))\n\nX_train_transformed = transformers.fit_transform(X_train)\nX_val_transformed = transformers.transform(X_val)\n\nmodel=DecisionTreeClassifier()\nmodel.fit(X_train_transformed, y_train)",
"_____no_output_____"
],
[
"import eli5\nfrom eli5.sklearn import PermutationImportance\n\npermuter = PermutationImportance(\n model,\n scoring='accuracy', \n n_iter=5, \n random_state=42)\n\npermuter.fit(X_val_transformed, y_val)",
"/Users/fonat/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.metrics.scorer module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.\n warnings.warn(message, FutureWarning)\n/Users/fonat/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.feature_selection.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.feature_selection. Anything that cannot be imported from sklearn.feature_selection is now part of the private API.\n warnings.warn(message, FutureWarning)\n"
],
[
"feature_names= X_val.columns.tolist()\npd.Series(permuter.feature_importances_, feature_names).sort_values(ascending=False)",
"_____no_output_____"
],
[
"eli5.show_weights(permuter, top=None, feature_names=feature_names)",
"_____no_output_____"
]
],
[
[
"### Model 2: Xgboost",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBClassifier\n\npipeline = make_pipeline(ce.OrdinalEncoder(), \n XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1))\n\npipeline.fit(X_train, y_train)",
"_____no_output_____"
],
[
"# Validation accuracy\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy is ', accuracy_score(y_val,y_pred))",
"Validation Accuracy is 0.9497704688335392\n"
],
[
"# Validation accuracy of Decision Tree Classifier is 0.9497704688335392\n# which is same with Xgboost model's validation accuracy. ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb8388d617a41f0e41050a0094f6ffa546a2caec
| 19,337 |
ipynb
|
Jupyter Notebook
|
qiskit_version/10_Discrete_Optimization_and_Unsupervised_Learning.ipynb
|
pedrorohde/qml-mooc
|
430a488b9fa16f89b36545bc61769c87997087b5
|
[
"MIT"
] | null | null | null |
qiskit_version/10_Discrete_Optimization_and_Unsupervised_Learning.ipynb
|
pedrorohde/qml-mooc
|
430a488b9fa16f89b36545bc61769c87997087b5
|
[
"MIT"
] | null | null | null |
qiskit_version/10_Discrete_Optimization_and_Unsupervised_Learning.ipynb
|
pedrorohde/qml-mooc
|
430a488b9fa16f89b36545bc61769c87997087b5
|
[
"MIT"
] | null | null | null | 64.029801 | 8,176 | 0.759115 |
[
[
[
"Unsupervised learning means a lack of labels: we are looking for structure in the data, without having an *a priori* intuition what that structure might be. A great example is clustering, where the goal is to identify instances that clump together in some high-dimensional space. Unsupervised learning in general is a harder problem. Deep learning revolutionized supervised learning and it had made significant advances in unsupervised learning, but there remains plenty of room for improvement. In this notebook, we look at how we can map an unsupervised learning problem to graph optimization, which in turn we can solve on a quantum computer.\n\n# Mapping clustering to discrete optimization\n\nAssume that we have some points $\\{x_i\\}_{i=1}^N$ lying in some high-dimensional space $\\mathbb{R}^d$. How do we tell which ones are close to one another and which ones are distant? To get some intuition, let's generate a simple dataset with two distinct classes. The first five instances will belong to class 1, and the second five to class 2:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib inline\n\nn_instances = 4\nclass_1 = np.random.rand(n_instances//2, 3)/5\nclass_2 = (0.6, 0.1, 0.05) + np.random.rand(n_instances//2, 3)/5\ndata = np.concatenate((class_1, class_2))\ncolors = [\"red\"] * (n_instances//2) + [\"green\"] * (n_instances//2)\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d', xticks=[], yticks=[], zticks=[])\nax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors)",
"_____no_output_____"
]
],
[
[
"The high-dimensional space is endowed with some measure of distance, the Euclidean distance being the simplest case. We can calculate all pairwise distances between the data points:",
"_____no_output_____"
]
],
[
[
"import itertools\nw = np.zeros((n_instances, n_instances))\nfor i, j in itertools.product(*[range(n_instances)]*2):\n w[i, j] = np.linalg.norm(data[i]-data[j])",
"_____no_output_____"
]
],
[
[
"This matrix is sometimes called the Gram or the kernel matrix. The Gram matrix contains a fair bit of information about the topology of the points in the high-dimensional space, but it is not easy to see. We can think of the Gram matrix as the weighted adjacency matrix of a graph: two nodes represent two data instances. Their distance as contained in the Gram matrix is the weight on the edge that connects them. If the distance is zero, they are not connected by an edge. In general, this is a dense graph with many edges -- sparsity can be improved by a distance function that gets exponentially smaller.\n\nWhat can we do with this graph to find the clusters? We could look for the max-cut, that is, the collection of edges that would split the graph in exactly two if removed, while maximizing the total weight of these edges [[1](#1)]. This is a well-known NP-hard problem, but it also very naturally maps to an Ising model.\n\nThe spin variables $\\sigma_i \\in \\{-1, +1\\}$ take on value $\\sigma_i = +1$ if a data instance is in cluster 1 (nodes $V_1$ in the graph), and $\\sigma_i = -1$ if the data instance is in cluster 2 (nodes $V_2$ in the graph). The cost of a cut is\n\n$$\n\\sum_{i\\in V_1, j\\in V_2} w_{ij}\n$$\n\nLet us assume a fully connected graph. Then, accounting for the symmetry of the adjacency matrix, we can expand this as\n$$\n\\frac{1}{4}\\sum_{i, j} w_{ij} - \\frac{1}{4} \\sum_{i, j} w_{ij} \\sigma_i \\sigma_j\n$$\n$$\n= \\frac{1}{4}\\sum_{i, j\\in V} w_{ij} (1- \\sigma_i \\sigma_j).\n$$ \n\nBy taking the negative of this, we can directly solve the problem by a quantum optimizer.",
"_____no_output_____"
],
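[
"As a quick check of this formula, the cut value of a candidate spin assignment can be evaluated classically from the weight matrix `w` computed above. The assignment used below is a hypothetical one that places the first two points in one cluster and the last two in the other.",
"_____no_output_____"
],
[
"# Classical check of the cut cost 1/4 * sum_ij w_ij * (1 - sigma_i * sigma_j)\n# for a hypothetical spin assignment sigma in {-1, +1}^n.\nsigma = np.array([1, 1, -1, -1])\ncut_value = 0.25 * np.sum(w * (1 - np.outer(sigma, sigma)))\nprint(cut_value)",
"_____no_output_____"
],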
[
"# Solving the max-cut problem by QAOA\n\nMost quantum computing frameworks have convenience functions defined for common graph optimization algorithms, and max-cut is a staple. This reduces our task to importing the relevant functions:",
"_____no_output_____"
]
],
[
[
"from qiskit import Aer\nfrom qiskit.aqua import QuantumInstance\nfrom qiskit.aqua.algorithms import QAOA\nfrom qiskit.aqua.components.optimizers import COBYLA\nfrom qiskit.optimization.applications.ising import max_cut\nfrom qiskit.optimization.applications.ising.common import sample_most_likely",
"_____no_output_____"
]
],
[
[
"Setting $p=1$ in the QAOA algorithm, we can initialize it with the max-cut problem.",
"_____no_output_____"
]
],
[
[
"qubit_operators, offset = max_cut.get_operator(w)\np = 1\noptimizer = COBYLA()\nqaoa = QAOA(qubit_operators, optimizer, p)",
"_____no_output_____"
]
],
[
[
"Here the choice of the classical optimizer `COBYLA` was arbitrary. Let us run this and analyze the solution. This can take a while on a classical simulator.",
"_____no_output_____"
]
],
[
[
"backend = Aer.get_backend('statevector_simulator')\nquantum_instance = QuantumInstance(backend, shots=1)\nresult = qaoa.run(quantum_instance)\nx = sample_most_likely(result['eigenstate'])\ngraph_solution = max_cut.get_graph_solution(x)\nprint('energy:', result['eigenvalue'])\nprint('maxcut objective:', result['eigenvalue'] + offset)\nprint('solution:', max_cut.get_graph_solution(x))\nprint('solution objective:', max_cut.max_cut_value(x, w))",
"energy: (-0.4850874782714728+0j)\nmaxcut objective: (-1.8975760374781179+0j)\nsolution: [0. 0. 1. 1.]\nsolution objective: 2.47397534157203\n"
]
],
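[
[
"Since the choice of classical optimizer was arbitrary, it can be swapped for another optimizer from the same module. The sketch below uses SPSA purely as an illustration; the `max_trials` value is an arbitrary assumption, and the run is left commented out rather than repeated here.",
"_____no_output_____"
]
],
[
[
"# Sketch: the same QAOA problem with a different classical optimizer.\n# The max_trials value is an arbitrary choice for illustration.\nfrom qiskit.aqua.components.optimizers import SPSA\n\nqaoa_spsa = QAOA(qubit_operators, SPSA(max_trials=100), p)\n# result_spsa = qaoa_spsa.run(quantum_instance)  # uncomment to re-run with SPSA",
"_____no_output_____"
]
],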
[
[
"Looking at the solution, the cut matches the clustering structure.",
"_____no_output_____"
],
[
"# Solving the max-cut problem by annealing\n\nNaturally, the same problem can be solved on an annealer. Our only task is to translate the couplings and the on-site fields to match the programming interface:",
"_____no_output_____"
]
],
[
[
"import dimod\n\nJ, h = {}, {}\nfor i in range(n_instances):\n h[i] = 0\n for j in range(i+1, n_instances):\n J[(i, j)] = w[i, j]\n\nmodel = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)\nsampler = dimod.SimulatedAnnealingSampler()\nresponse = sampler.sample(model, num_reads=10)\nprint(\"Energy of samples:\")\nfor solution in response.data():\n print(\"Energy:\", solution.energy, \"Sample:\", solution.sample)",
"Energy of samples:\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: 1, 1: 1, 2: -1, 3: -1}\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: 1, 1: 1, 2: -1, 3: -1}\nEnergy: -2.12297356473077 Sample: {0: -1, 1: -1, 2: 1, 3: 1}\nEnergy: -2.12297356473077 Sample: {0: 1, 1: 1, 2: -1, 3: -1}\n"
]
],
[
[
"If you look at the first sample, you will see that the first five data instances belong to the same graph partition, matching the actual cluster.",
"_____no_output_____"
],
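[
"To make this explicit, the lowest-energy sample can be mapped back to cluster labels and compared with the classes used when the data was generated (a sketch; `response.data()` iterates over samples in order of increasing energy by default).",
"_____no_output_____"
],
[
"# Sketch: map the lowest-energy sample back to cluster labels.\nbest = next(iter(response.data()))  # dimod iterates samples in order of increasing energy\nlabels = [0 if best.sample[i] == 1 else 1 for i in range(n_instances)]\nprint('Cluster labels from annealing:', labels)",
"_____no_output_____"
],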
[
"# References\n\n[1] Otterbach, J. S., Manenti, R., Alidoust, N., Bestwick, A., Block, M., Bloom, B., Caldwell, S., Didier, N., Fried, E. Schuyler, Hong, S., Karalekas, P., Osborn, C. B., Papageorge, A., Peterson, E. C., Prawiroatmodjo, G., Rubin, N., Ryan, Colm A., Scarabelli, D., Scheer, M., Sete, E. A., Sivarajah, P., Smith, Robert S., Staley, A., Tezak, N., Zeng, W. J., Hudson, A., Johnson, Blake R., Reagor, M., Silva, M. P. da, Rigetti, C. (2017). [Unsupervised Machine Learning on a Hybrid Quantum Computer](https://arxiv.org/abs/1712.05771). *arXiv:1712.05771*. <a id='1'></a>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb839e4fc25427d6f8e0da8cf7fcbddc5e37e3cd
| 417,342 |
ipynb
|
Jupyter Notebook
|
ConvLSTM_HAR Spain Data+HAR Data.ipynb
|
alino93/HARDataset_LSTM
|
3ee8498ac7b4e3a4a12bd38ea85451e6a6eb0328
|
[
"MIT"
] | null | null | null |
ConvLSTM_HAR Spain Data+HAR Data.ipynb
|
alino93/HARDataset_LSTM
|
3ee8498ac7b4e3a4a12bd38ea85451e6a6eb0328
|
[
"MIT"
] | null | null | null |
ConvLSTM_HAR Spain Data+HAR Data.ipynb
|
alino93/HARDataset_LSTM
|
3ee8498ac7b4e3a4a12bd38ea85451e6a6eb0328
|
[
"MIT"
] | null | null | null | 417,342 | 417,342 | 0.953475 |
[
[
[
"%tensorflow_version 1.x\n!pip install -q h5py==2.10.0\nfrom scipy.io import loadmat\nfrom scipy import stats\nimport pandas as pd\nimport numpy as np\nimport pickle\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport tensorflow as tf\n#import tensorflow.compat.v1 as tf1\n#tf1.disable_v2_behavior()\nimport seaborn as sns\nfrom pylab import rcParams\nfrom sklearn import metrics\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras.utils import to_categorical\nfrom numpy import dstack\nfrom pandas import read_csv\nfrom sklearn.svm import SVC\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Flatten\nfrom keras.layers import Dropout\nfrom keras.layers import LSTM\nfrom keras.layers import TimeDistributed\nfrom keras.layers.convolutional import Conv1D\nfrom keras.layers.convolutional import MaxPooling1D\nfrom keras.layers import ConvLSTM2D",
"TensorFlow 1.x selected.\n"
],
[
"# load a single file as a numpy array\ndef load_file(filepath):\n\tdf = read_csv(filepath)\n # convert activity names to number\n\tdf.activity = pd.factorize(df.activity)[0]\n\t# select mean and var features\n\tx1 = df.iloc[:,3:9].values\n\tx2 = df.iloc[:,21:27].values\n\tx = np.append(x1,x2,axis=1)\n\ty = df.activity.values\n\tprint(x.shape, y.shape)\n\treturn x, y\n# load the dataset, returns train and test X and y elements\ndef load_dataset(prefix=''):\n\t# load all train\n\tx, y = load_file(prefix + '/sensoringData_feature_prepared_20_19.0_2'+'.csv')\n\t# transform for LSTM\n\tN_TIME_STEPS = 20\n\tstep = 15 # faster with bigger step but accuracy degrades fast\n\tX = []\n\tY = []\n\tnum = y.shape[0]\n\tfor i in range(0, num - N_TIME_STEPS, step):\n\t\t\tpart = x[i: i + N_TIME_STEPS]\n\t\t\tlabel = stats.mode(y[i: i + N_TIME_STEPS])[0][0]\n\t\t\tX.append(part)\n\t\t\tY.append(label)\n\n\ttrainX, testX, trainy, testy = train_test_split(np.array(X), np.array(Y), test_size=0.2, random_state=42)\n\t \n\tprint(trainX.shape, trainy.shape, testX.shape, testy.shape)\n\treturn trainX, trainy, testX, testy\n# run an experiment\n# load data\ntrainX1, trainy1, testX1, testy1 = load_dataset('drive/MyDrive/Thesis/Test Data/HAR Spain')",
"(499276, 12) (499276,)\n(26627, 20, 12) (26627,) (6657, 20, 12) (6657,)\n"
],
[
"# load a single file as a numpy array\ndef load_file1(filepath):\n\tdataframe = read_csv(filepath, header=None, delim_whitespace=True)\n\treturn dataframe.values\n\n# load a list of files and return as a 3d numpy array\ndef load_group1(filenames, prefix=''):\n\tloaded = list()\n\tfor name in filenames:\n\t\tdata = load_file1(prefix + name)\n\t\tloaded.append(data)\n\t# stack group so that features are the 3rd dimension\n\tloaded = dstack(loaded)\n\treturn loaded\n\n# load a dataset group, such as train or test\ndef load_dataset_group1(group, prefix=''):\n\tfilepath = prefix + group + '/Inertial Signals/'\n\t# load all 9 files as a single array\n\tfilenames = list()\n\t# body acceleration\n\tfilenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']\n\t# body gyroscope\n\tfilenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']\n\t# load input data\n\tX = load_group1(filenames, filepath)\n\t# load class output\n\ty = load_file1(prefix + group + '/y_'+group+'.txt')\n\treturn X, y\n\ndef extract_features(x, y, window, step):\n num = x.shape[0]\n X = []\n Y = []\n for i in range(0, num - window, step):\n part = np.append(np.mean(x[i:i + window], axis=0),np.var(x[i: i + window], axis=0), axis=1)\n part = np.mean(part,axis=0)\n label = stats.mode(y[i: i + window])[0][0]\n X.append(part)\n Y.append(label)\n return np.array(X), np.array(Y)\n\n\n# load the dataset, returns train and test X and y elements\ndef load_dataset1(prefix=''):\n # load all train\n trainX, trainy = load_dataset_group1('train', prefix + 'HARDataset/')\n print(trainX.shape, trainy.shape)\n # load all test\n testX, testy = load_dataset_group1('test', prefix + 'HARDataset/')\n print(testX.shape, testy.shape)\n # zero-offset class values\n trainy = trainy - 1\n testy = testy - 1\n window = 10\n # calculate mean and var\n window = 10\n step = 1\n x, y = extract_features(np.append(trainX,testX,axis=0), np.append(trainy,testy,axis=0), window, step)\n\n ind = np.argsort( y[:,0] )\n x = x[ind]\n y = y[ind]\n\n x = x[1832:4692]\n y = y[1832:4692]\n x[:, [2, 1,5,4]] = x[:, [1, 2,4,5]]\n # transform for LSTM\n N_TIME_STEPS = 20\n step = 1 # faster with bigger step but accuracy degrades fast\n X = []\n Y = []\n num = y.shape[0]\n for i in range(0, num - N_TIME_STEPS, step):\n part = x[i: i + N_TIME_STEPS]\n label = stats.mode(y[i: i + N_TIME_STEPS])[0][0]\n X.append(part)\n Y.append(label)\n \n trainX, testX, trainy, testy = train_test_split(np.array(X), np.array(Y), test_size=0.2, random_state=42)\n testy = testy.reshape(-1)\n trainy = trainy.reshape(-1)\n print(trainX.shape, trainy.shape, testX.shape, testy.shape)\n # offset upstairs label 4 abd downstair 5\n trainy = trainy + 3\n testy = testy + 3\n return trainX, trainy, testX, testy\n\n# load data\ntrainX2, trainy2, testX2, testy2 = load_dataset1('drive/MyDrive/Thesis/Test Data/')",
"(7352, 128, 6) (7352, 1)\n(2947, 128, 6) (2947, 1)\n(2272, 20, 12) (2272,) (568, 20, 12) (568,)\n"
],
[
"# append train data\ntrainX = np.append(trainX1,trainX2,axis=0)\ntestX = np.append(testX1,testX2,axis=0)\ntrainy = np.append(trainy1,trainy2,axis=0)\ntesty = np.append(testy1,testy2,axis=0)\n\nprint(trainX.shape, trainy.shape, testX.shape, testy.shape)",
"(28899, 20, 12) (28899,) (7225, 20, 12) (7225,)\n"
],
[
"#def classify_svm(x_train, y_train, x_test):\n# # train label SVM\n# clf = SVC(kernel='rbf', class_weight='balanced', C=1e3, gamma=0.1)\n# clf = clf.fit(x_train, y_train)\n\n # predict using svm\n# y_pred = clf.predict(x_test)\n\n# return y_pred\n\n#y_pred = classify_svm(testX, testy, testX)\n#from sklearn.metrics import accuracy_score\n#accuracy_score(testy, y_pred)",
"_____no_output_____"
],
[
"# fit and evaluate a model\ndef evaluate_model(trainX, trainy, testX, testy):\n\t# define model\n\tverbose, epochs, batch_size = 1, 10, 64\n\tn_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]\n\t# reshape into subsequences (samples, time steps, rows, cols, channels)\n\tn_steps, n_length = 2, 10\n\ttrainX = trainX.reshape((trainX.shape[0], n_steps, 1, n_length, n_features))\n\ttestX = testX.reshape((testX.shape[0], n_steps, 1, n_length, n_features))\n\t# define model\n\tmodel = Sequential()\n\tmodel.add(ConvLSTM2D(filters=64, kernel_size=(1,3), activation='relu', input_shape=(n_steps, 1, n_length, n_features)))\n\tmodel.add(Dropout(0.5))\n\tmodel.add(Flatten())\n\tmodel.add(Dense(100, activation='relu'))\n\tmodel.add(Dense(n_outputs, activation='softmax'))\n\tmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\t# fit network\n\thistory = model.fit(trainX, trainy, validation_split=0.33, epochs=epochs, batch_size=batch_size, verbose=verbose)\n\t# evaluate model\n\t_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)\n\treturn model,history,accuracy\n\n\n# repeat experiment\n# one hot encode y\ntrainy = to_categorical(trainy)\ntesty = to_categorical(testy)\nmodel,model_history,score = evaluate_model(trainX, trainy, testX, testy)\nscore = score * 100.0\nprint('>#%d: %.3f' % (1, score))\n\n",
"Train on 19362 samples, validate on 9537 samples\nEpoch 1/10\n19362/19362 [==============================] - 10s 534us/step - loss: 0.6866 - accuracy: 0.7790 - val_loss: 2.6717 - val_accuracy: 0.6372\nEpoch 2/10\n19362/19362 [==============================] - 10s 500us/step - loss: 0.4140 - accuracy: 0.8529 - val_loss: 3.6249 - val_accuracy: 0.6667\nEpoch 3/10\n19362/19362 [==============================] - 9s 490us/step - loss: 0.3292 - accuracy: 0.8880 - val_loss: 5.6280 - val_accuracy: 0.6821\nEpoch 4/10\n19362/19362 [==============================] - 10s 499us/step - loss: 0.2741 - accuracy: 0.9106 - val_loss: 6.2929 - val_accuracy: 0.6991\nEpoch 5/10\n19362/19362 [==============================] - 10s 491us/step - loss: 0.2491 - accuracy: 0.9186 - val_loss: 6.7523 - val_accuracy: 0.6994\nEpoch 6/10\n19362/19362 [==============================] - 9s 488us/step - loss: 0.2310 - accuracy: 0.9255 - val_loss: 7.2853 - val_accuracy: 0.7077\nEpoch 7/10\n19362/19362 [==============================] - 10s 492us/step - loss: 0.2216 - accuracy: 0.9284 - val_loss: 8.0207 - val_accuracy: 0.7067\nEpoch 8/10\n19362/19362 [==============================] - 14s 702us/step - loss: 0.2165 - accuracy: 0.9301 - val_loss: 9.4570 - val_accuracy: 0.7085\nEpoch 9/10\n19362/19362 [==============================] - 9s 490us/step - loss: 0.2077 - accuracy: 0.9333 - val_loss: 11.3773 - val_accuracy: 0.7115\nEpoch 10/10\n19362/19362 [==============================] - 10s 491us/step - loss: 0.1976 - accuracy: 0.9349 - val_loss: 11.6063 - val_accuracy: 0.7102\n>#1: 85.924\n"
],
[
"# reshape data into time steps of sub-sequences\nn_features, n_steps, n_length = trainX.shape[2], 2, 10\ntrainX = trainX.reshape((trainX.shape[0], n_steps, 1, n_length, n_features))\ntestX = testX.reshape((testX.shape[0], n_steps, 1, n_length, n_features))\nX = np.append(trainX, testX, axis=0)\nY = np.append(trainy, testy, axis=0)\nY_cat = np.argmax(Y, axis=1)\nind = np.argsort( Y_cat[:] )\nX = X[ind]\nY = Y[ind]\nY_pred = model.predict(np.array(X))\nnp.shape(Y_pred)",
"_____no_output_____"
],
[
"print(model_history.history.keys())\nplt.figure(figsize=(12, 8))\nplt.plot(np.array(model_history.history['loss']), \"r-\", label=\"Train loss\")\nplt.plot(np.array(model_history.history['accuracy']), \"g-\", label=\"Train accuracy\")\nplt.plot(np.array(model_history.history['val_loss']), \"r--\", label=\"Valid loss\")\nplt.plot(np.array(model_history.history['val_accuracy']), \"g--\", label=\"Valid accuracy\")\nplt.title(\"Training session's progress over iterations\")\nplt.legend(loc='upper right', shadow=True)\nplt.ylabel('Training Progress (Loss or Accuracy values)')\nplt.xlabel('Training Epoch')\nplt.ylim(0)\nplt.show()",
"dict_keys(['val_loss', 'val_accuracy', 'loss', 'accuracy'])\n"
],
[
"plt.figure(figsize=(12, 8))\nplt.plot(np.array(Y[:,3]), linewidth=5.0, label=\"True\")\nplt.plot(np.array(Y_pred[:,3]), '--', linewidth=0.20, label=\"Predict\")\nplt.title(\"Confidence of Driving Detection\")\nplt.legend(loc='upper right', shadow=True)\nplt.ylabel('Confidence (%)')\nplt.xlabel('Training Epoch')\nplt.ylim(0)\nplt.show()",
"_____no_output_____"
],
[
"N = 50\ny_f = np.convolve(np.array(Y_pred[:,3]), np.ones(N)/N, mode='valid')\n\nplt.figure(figsize=(12, 8))\nplt.plot(np.array(Y[:,3]), linewidth=5.0, label=\"True\")\nplt.plot(y_f, '--', linewidth=2.0, label=\"Predict\")\nplt.title(\"Confidence of Driving Detection\")\nplt.legend(loc='upper right', shadow=True)\nplt.ylabel('Confidence (%)')\nplt.xlabel('Training Epoch')\nplt.ylim(0)\nplt.show()",
"_____no_output_____"
],
[
"LABELS = ['Walking','Inactive','Active','Driving']\nY_cat = np.argmax(Y, axis=1)\nY_pred_cat = np.argmax(Y_pred, axis=1)\nconfusion_matrix = metrics.confusion_matrix(Y_cat, Y_pred_cat)\nplt.figure(figsize=(10, 8))\nsns.heatmap(confusion_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt=\"d\",cmap=\"Blues\");\nplt.title(\"Confusion matrix\")\nplt.ylabel('True label')\nplt.xlabel('Predicted label')\nplt.show();",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\naccuracy_score(Y_cat2, Y_pred2_cat)",
"_____no_output_____"
],
[
"def visualize_activity_recognition(t, label_true, label_pred_mode, label_classes,name):\n plt.figure(figsize=(10, 8))\n plt.title(\"Activity recognition {}\".format(name))\n plt.plot(t, label_true, linewidth=5.0)\n plt.plot(t.reshape(-1), label_pred_mode.reshape(-1), '--', linewidth=0.10)\n plt.yticks(np.arange(len(label_classes)), label_classes)\n plt.xlabel(\"time (s)\")\n plt.ylabel(\"Activity\")\n plt.legend([\"True\", \"Predict\"])\n plt.show()\n\nnum = len(Y_cat)\nt = np.arange(0,num*1,1)\nvisualize_activity_recognition(t, Y_cat, Y_pred_cat, LABELS,\"LSTM\")",
"_____no_output_____"
],
[
"model.save('drive/MyDrive/Thesis/Test Data/HAR Spain/Driving_model')\npickle.dump(model_history.history, open(\"drive/MyDrive/Thesis/Test Data/HAR Spain/Driving_model_history\", \"wb\"))\nprint(\"Saved model to disk\")",
"Saved model to disk\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
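The evaluate_model cell in the notebook above folds each (20, 12) window into 2 sub-sequences of 10 steps before it reaches ConvLSTM2D. Below is a minimal sketch of just that reshape on synthetic data; the shapes are taken from the notebook, while the random input and variable names are illustrative only.

    import numpy as np

    # Shapes from the notebook: 20 time steps x 12 features per window,
    # split into n_steps sub-sequences of n_length steps each.
    n_samples, n_timesteps, n_features = 4, 20, 12
    n_steps, n_length = 2, 10

    windows = np.random.rand(n_samples, n_timesteps, n_features)  # placeholder data

    # ConvLSTM2D expects (samples, time, rows, cols, channels); rows is 1 here.
    subseq = windows.reshape(n_samples, n_steps, 1, n_length, n_features)

    print(subseq.shape)  # (4, 2, 1, 10, 12)
    # The reshape preserves temporal order: the first sub-sequence is the first 10 steps.
    assert np.allclose(subseq[0, 0, 0], windows[0, :n_length])

The same arithmetic is why (28899, 20, 12) becomes (28899, 2, 1, 10, 12) in the later prediction cell.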
cb83a1c55a7ae38b3d257c3370ebe8fb15182185
| 285,829 |
ipynb
|
Jupyter Notebook
|
Scripts/notebooks/.ipynb_checkpoints/Figure2_Horiscale-checkpoint.ipynb
|
SridharJagannathan/Jagannathan_Neuroimage2018
|
498ad8a82c454fc7b43081f19f753cbc60898739
|
[
"MIT"
] | 3 |
2018-05-01T06:45:04.000Z
|
2021-10-04T10:02:11.000Z
|
Scripts/notebooks/.ipynb_checkpoints/Figure2_Horiscale-checkpoint.ipynb
|
SridharJagannathan/Jagannathan_Neuroimage2018
|
498ad8a82c454fc7b43081f19f753cbc60898739
|
[
"MIT"
] | null | null | null |
Scripts/notebooks/.ipynb_checkpoints/Figure2_Horiscale-checkpoint.ipynb
|
SridharJagannathan/Jagannathan_Neuroimage2018
|
498ad8a82c454fc7b43081f19f753cbc60898739
|
[
"MIT"
] | 1 |
2020-09-29T00:17:40.000Z
|
2020-09-29T00:17:40.000Z
| 658.592166 | 214,120 | 0.942592 |
[
[
[
"import scipy.io as io\nimport matplotlib.pyplot as plt\nimport matplotlib.pylab as pylab",
"_____no_output_____"
],
[
"#Set up parameters for figure display\nparams = {'legend.fontsize': 'x-large',\n 'figure.figsize': (8, 10),\n 'axes.labelsize': 'x-large',\n 'axes.titlesize':'x-large',\n 'axes.labelweight': 'bold',\n 'xtick.labelsize':'x-large',\n 'ytick.labelsize':'x-large'}\n\npylab.rcParams.update(params)\npylab.rcParams[\"font.family\"] = \"serif\"\npylab.rcParams[\"font.weight\"] = \"heavy\"",
"_____no_output_____"
],
[
"#Load the hori data from some samples..\nmat_hori = io.loadmat('/work/imagingQ/SpatialAttention_Drowsiness/Jagannathan_Neuroimage2018/'\n 'Scripts/mat_files/horigraphics.mat')\ndata_hori = mat_hori['Hori_graphics']",
"_____no_output_____"
],
[
"#take the data for different scales..\ny_hori1 = data_hori[0,]\ny_hori2 = data_hori[3,]\ny_hori3 = data_hori[6,]\ny_hori4 = data_hori[9,]\ny_hori5 = data_hori[12,]\ny_hori6 = data_hori[13,]\ny_hori7 = data_hori[15,]\ny_hori8 = data_hori[18,]\ny_hori9 = data_hori[21,]\ny_hori10 = data_hori[23,]",
"_____no_output_____"
],
[
"#Set the bolding range..\nx = list(range(0, 1001))\n\nbold_hori1a = slice(0, 500)\nbold_hori1b = slice(500, 1000)\n\nbold_hori2a = slice(50, 460)\nbold_hori2b = slice(625, 835)\n\nbold_hori3a = slice(825, 1000)\nbold_hori4a = slice(0, 1000)\nbold_hori6a = slice(800, 875)\n\nbold_hori7a = slice(200, 250)\nbold_hori7b = slice(280, 350)\nbold_hori7c = slice(450, 525)\nbold_hori7d = slice(550, 620)\nbold_hori7e = slice(750, 800)\n\nbold_hori8a = slice(650, 750)\nbold_hori8b = slice(750, 795)\n\nbold_hori9a = slice(200, 325)\nbold_hori10a = slice(720, 855)",
"_____no_output_____"
],
[
"#Set the main figure of the Hori scale..\nplt.style.use('ggplot')\n\nax1 = plt.subplot2grid((60, 1), (0, 0), rowspan=6)\nax2 = plt.subplot2grid((60, 1), (6, 0), rowspan=6)\nax3 = plt.subplot2grid((60, 1), (12, 0), rowspan=6)\nax4 = plt.subplot2grid((60, 1), (18, 0), rowspan=6)\nax5 = plt.subplot2grid((60, 1), (24, 0), rowspan=6)\nax6 = plt.subplot2grid((60, 1), (30, 0), rowspan=6)\nax7 = plt.subplot2grid((60, 1), (36, 0), rowspan=6)\nax8 = plt.subplot2grid((60, 1), (42, 0), rowspan=6)\nax9 = plt.subplot2grid((60, 1), (48, 0), rowspan=6)\nax10 = plt.subplot2grid((60, 1), (54, 0), rowspan=6)\n\n\nplt.setp(ax1, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax2, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax3, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax4, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax5, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax6, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax7, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax8, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax9, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\nplt.setp(ax10, xticks=[0, 250, 500, 750, 999], xticklabels=['0','1', '2', '3', '4'])\n\nplt.subplots_adjust(wspace=0, hspace=0)\n\nax1.plot(x, y_hori1, 'k-', alpha=0.5, linewidth=2.0)\nax1.plot(x[bold_hori1a], y_hori1[bold_hori1a], 'b-', alpha=0.75)\nax1.plot(x[bold_hori1b], y_hori1[bold_hori1b], 'b-', alpha=0.75)\nax1.set_ylim([-150, 150])\nax1.axes.xaxis.set_ticklabels([])\nax1.set_ylabel('1: Alpha wave \\ntrain', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)\n\nax2.plot(x, y_hori2, 'k-', alpha=0.5, linewidth=2.0)\nax2.plot(x[bold_hori2a], y_hori2[bold_hori2a], 'b-', alpha=0.75)\nax2.plot(x[bold_hori2b], y_hori2[bold_hori2b], 'b-', alpha=0.75)\nax2.set_ylim([-150, 150])\nax2.axes.xaxis.set_ticklabels([])\nax2.set_ylabel('2: Alpha wave \\nintermittent(>50%)', rotation=0,ha='right',va='center', \n fontsize=20, labelpad=10)\n\nax3.plot(x, y_hori3, 'k-', alpha=0.5, linewidth=2.0)\nax3.plot(x[bold_hori3a], y_hori3[bold_hori3a], 'b-', alpha=0.75)\nax3.set_ylim([-150, 150])\nax3.axes.xaxis.set_ticklabels([])\nax3.set_ylabel('3: Alpha wave \\nintermittent(<50%)', rotation=0,ha='right',va='center', \n fontsize=20, labelpad=10)\n\n\nax4.plot(x, y_hori4, 'g-', alpha=0.5, linewidth=2.0)\nax4.plot(x[bold_hori4a], y_hori4[bold_hori4a], 'g-', alpha=0.75)\nax4.set_ylim([-150, 150])\nax4.axes.xaxis.set_ticklabels([])\nax4.set_ylabel('4: EEG flattening', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)\n\n\nax5.plot(x, y_hori5, 'g-', alpha=0.5, linewidth=2.0)\nax5.plot(x[bold_hori4a], y_hori5[bold_hori4a], 'g-', alpha=0.75)\nax5.set_ylim([-150, 150])\nax5.axes.xaxis.set_ticklabels([])\nax5.set_ylabel('5: Ripples', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)\n\nax6.plot(x, y_hori6, 'k-', alpha=0.5, linewidth=2.0)\nax6.plot(x[bold_hori6a], y_hori6[bold_hori6a], 'r-', alpha=0.75)\nax6.set_ylim([-150, 150])\nax6.axes.xaxis.set_ticklabels([])\nax6.set_ylabel('6: Vertex sharp wave \\nsolitary', rotation=0,ha='right',va='center', \n fontsize=20, labelpad=10)\n\nax7.plot(x, y_hori7, 'k-', alpha=0.5, linewidth=2.0)\nax7.plot(x[bold_hori7a], y_hori7[bold_hori7a], 'r-', alpha=0.75)\nax7.plot(x[bold_hori7b], y_hori7[bold_hori7b], 
'r-', alpha=0.75)\nax7.plot(x[bold_hori7c], y_hori7[bold_hori7c], 'r-', alpha=0.75)\nax7.plot(x[bold_hori7d], y_hori7[bold_hori7d], 'r-', alpha=0.75)\nax7.plot(x[bold_hori7e], y_hori7[bold_hori7e], 'r-', alpha=0.75)\nax7.set_ylim([-150, 150])\nax7.set_ylabel('7: Vertex sharp wave \\nbursts', rotation=0,ha='right',va='center', \n fontsize=20, labelpad=10)\n\nax7.axes.xaxis.set_ticklabels([])\n\nax8.plot(x, y_hori8, 'k-', alpha=0.5, linewidth=2.0)\nax8.plot(x[bold_hori8a], y_hori8[bold_hori8a], 'r-', alpha=0.75)\nax8.plot(x[bold_hori8b], y_hori8[bold_hori8b], 'm-', alpha=0.75)\nax8.set_ylim([-150, 150])\nax8.set_ylabel('8: Vertex sharp wave \\nand incomplete spindles', rotation=0,ha='right',va='center', \n fontsize=20, labelpad=10)\n\nax8.axes.xaxis.set_ticklabels([])\n\nax9.plot(x, y_hori9, 'k-', alpha=0.5, linewidth=2.0)\nax9.plot(x[bold_hori9a], y_hori9[bold_hori9a], 'm-', alpha=0.75)\nax9.set_ylim([-40, 40])\nax9.set_ylabel('9: Spindles', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)\n\nax9.axes.xaxis.set_ticklabels([])\n\nax10.plot(x, y_hori10, 'k-', alpha=0.5, linewidth=2.0)\nax10.plot(x[bold_hori10a], y_hori10[bold_hori10a], 'c-', alpha=0.75)\nax10.set_ylim([-175, 175])\nax10.set_ylabel('10: K-complexes', rotation=0,ha='right',va='center', fontsize=20, labelpad=10)\nax10.set_xlabel('Time(seconds)', rotation=0,ha='center',va='center', fontsize=20, labelpad=10)\n\nax1.axes.yaxis.set_ticklabels([' ',' ',''])\nax2.axes.yaxis.set_ticklabels([' ',' ',''])\nax3.axes.yaxis.set_ticklabels([' ',' ',''])\nax4.axes.yaxis.set_ticklabels([' ',' ',''])\nax5.axes.yaxis.set_ticklabels([' ',' ',''])\nax6.axes.yaxis.set_ticklabels([' ',' ',''])\nax7.axes.yaxis.set_ticklabels([' ',' ',''])\nax8.axes.yaxis.set_ticklabels([' ',' ',''])\nax9.axes.yaxis.set_ticklabels([' ',' ',''])\nax10.axes.yaxis.set_ticklabels(['-100(uV)','','100(uV)'])\nax10.axes.yaxis.tick_right()\n\nax1.axes.yaxis.set_ticks([-100, 0, 100])\nax2.axes.yaxis.set_ticks([-100, 0, 100])\nax3.axes.yaxis.set_ticks([-100, 0, 100])\nax4.axes.yaxis.set_ticks([-100, 0, 100])\nax5.axes.yaxis.set_ticks([-100, 0, 100])\nax6.axes.yaxis.set_ticks([-100, 0, 100])\nax7.axes.yaxis.set_ticks([-100, 0, 100])\nax8.axes.yaxis.set_ticks([-100, 0, 100])\nax9.axes.yaxis.set_ticks([-100, 0, 100])\nax10.axes.yaxis.set_ticks([-100, 0, 100])\n\n\n\n# Here is the label of interest\nax2.annotate('Wake', xy=(-0.85, 0.90), xytext=(-0.85, 1.00), xycoords='axes fraction',rotation='vertical', \n fontsize=20, ha='center', va='center')\n\nax6.annotate('N1', xy=(-0.85, 1), xytext=(-0.85, 1), xycoords='axes fraction', rotation='vertical', \n fontsize=20, ha='center', va='center')\nax10.annotate('N2', xy=(-0.85, 0.90), xytext=(-0.85, 1.00), xycoords='axes fraction', rotation='vertical', \n fontsize=20, ha='center', va='center')",
"_____no_output_____"
],
[
"#Set up the vertex element now..\n\nparams = {'figure.figsize': (3, 6)}\n\npylab.rcParams.update(params)\n\ny_hori6 = data_hori[13,]\ny_hori7 = data_hori[15,]\n\n\nx = list(range(0, 101))\nx_spin = list(range(0, 301))\nx_kcomp = list(range(0, 301))\ny_hori6 = y_hori6[800:901]\ny_hori7 = y_hori7[281:382]\n\n\n#Vertex\nbold_biphasic = slice(8, 75)\nbold_monophasic = slice(8, 65)\n\nplt.style.use('ggplot')\nf, axarr = plt.subplots(2, sharey=True) # makes the 2 subplots share an axis.\nf.suptitle('Vertex element', size=12, fontweight='bold')\nplt.setp(axarr, xticks=[0, 50,100], xticklabels=['0', '0.5', '1'],\n yticks=[-150,0, 150])\naxarr[0].plot(x, y_hori6, 'k-', alpha=0.5, linewidth=2.0)\naxarr[0].plot(x[bold_biphasic], y_hori6[bold_biphasic], 'r-', alpha=0.75)\naxarr[0].set_title('Biphasic', fontsize=10, fontweight='bold')\naxarr[0].set_ylim([-150, 150])\n\naxarr[1].plot(x, y_hori7, 'k-', alpha=0.5, linewidth=2.0)\naxarr[1].plot(x[bold_monophasic], y_hori7[bold_monophasic], 'r-', alpha=0.75)\naxarr[1].set_title('Monophasic', fontsize=10, fontweight='bold')\n\naxarr[1].set_xlabel('Time(s)')\n\nf.text(-0.2, 0.5, 'Amp(uV)', va='center', rotation='vertical', fontsize=20)\n\nf.subplots_adjust(hspace=0.3)",
"_____no_output_____"
],
[
"#Set up the Spindle element now..\nparams = {'figure.figsize': (3, 1.5)}\n\npylab.rcParams.update(params)\nbold_spindle = slice(95, 205)\n\n\ny_hori8 = data_hori[21,]\ny_hori8 = y_hori8[101:402]\n\n\nfspin, axarrspin = plt.subplots(1, sharey=False) # makes the 2 subplots share an axis.\nplt.setp(axarrspin, xticks=[0, 150,300], xticklabels=['0', '1.5', '3'],\n yticks=[-100,0, 100])\naxarrspin.plot(x_spin, y_hori8, 'k-', alpha=0.5, linewidth=2.0)\naxarrspin.plot(x_spin[bold_spindle], y_hori8[bold_spindle], 'r-', alpha=0.75)\naxarrspin.set_title('', fontsize=10, fontweight='bold')\naxarrspin.set_ylim([-100, 100])\n\naxarrspin.set_xlabel('Time(s)')\n\nfspin.text(0.3, 1.5, 'Spindle element', va='center', rotation='horizontal', fontsize=12)\n\nfspin.subplots_adjust(hspace=0.3)",
"_____no_output_____"
],
[
"#Set up the K-complex element now..\nbold_kcomp = slice(20, 150)\n\ny_hori10 = data_hori[23,]\ny_hori10 = y_hori10[700:1007]\n\nfkcomp, axarrkcomp = plt.subplots(1, sharey=False) # makes the 2 subplots share an axis.\nplt.setp(axarrkcomp, xticks=[0, 150,300], xticklabels=['0', '1.5', '3'],\n yticks=[-200,0, 200])\naxarrkcomp.plot(x_kcomp, y_hori10, 'k-', alpha=0.5, linewidth=2.0)\naxarrkcomp.plot(x_kcomp[bold_kcomp], y_hori10[bold_kcomp], 'r-', alpha=0.75)\naxarrkcomp.set_title('', fontsize=10, fontweight='bold')\naxarrkcomp.set_ylim([-200, 200])\n\naxarrkcomp.set_xlabel('Time(s)')\n\nfkcomp.text(0.3, 1.5, 'K-complex element', va='center', rotation='horizontal', fontsize=12)\n\nfkcomp.subplots_adjust(hspace=0.3)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
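The figure-building cell above repeats one pattern ten times: plot the full trace in a de-emphasised colour, then re-plot a slice of it on top to highlight a grapho-element. A stripped-down sketch of that pattern with a synthetic trace (the real data comes from horigraphics.mat, which is not part of this record):

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic stand-in for one 4-second epoch sampled at 250 Hz.
    x = np.arange(1000)
    y = 100 * np.sin(2 * np.pi * x / 250)

    ax1 = plt.subplot2grid((2, 1), (0, 0))
    ax2 = plt.subplot2grid((2, 1), (1, 0))

    bold = slice(300, 500)  # region to emphasise, as in the notebook
    for ax in (ax1, ax2):
        ax.plot(x, y, 'k-', alpha=0.5, linewidth=2.0)    # full trace, de-emphasised
        ax.plot(x[bold], y[bold], 'r-', alpha=0.75)      # highlighted slice on top
        ax.set_ylim([-150, 150])

    ax1.axes.xaxis.set_ticklabels([])  # only the bottom panel keeps x tick labels
    ax2.set_xlabel('Time(seconds)')
    plt.show()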
cb83b49e45e12bef4e78328eea38441d3d0bfa19
| 2,366 |
ipynb
|
Jupyter Notebook
|
HW2/q8.ipynb
|
1997alireza/Optimization
|
a81178b0ea10b6c762988231c781b6eb9e73fe7e
|
[
"MIT"
] | 3 |
2020-06-19T03:35:45.000Z
|
2020-07-19T14:15:10.000Z
|
HW2/q8.ipynb
|
1997alireza/Optimization-Homework
|
a81178b0ea10b6c762988231c781b6eb9e73fe7e
|
[
"MIT"
] | null | null | null |
HW2/q8.ipynb
|
1997alireza/Optimization-Homework
|
a81178b0ea10b6c762988231c781b6eb9e73fe7e
|
[
"MIT"
] | 1 |
2022-01-18T06:19:46.000Z
|
2022-01-18T06:19:46.000Z
| 21.706422 | 132 | 0.532544 |
[
[
[
"from HW2.line_search_algorithms import *\nfrom HW2.functions import RosenbrockProvider",
"_____no_output_____"
],
[
"in_x = [\n np.array([1.2, 1.2]),\n np.array([-1.2, 1])\n]\nfunction_provider = RosenbrockProvider()",
"_____no_output_____"
],
[
"print('Results:', [line_search(xi, function_provider, steepest_descent_direction) for xi in in_x])",
"number of steps: 285\nnumber of steps: 1147\nResults: [array([1.00100739, 1.00201919]), array([0.99902909, 0.99805432])]\n"
],
[
"print('Results:', [line_search(xi, function_provider, newton_direction) for xi in in_x])",
"number of steps: 7\nnumber of steps: 21\nResults: [array([1.0000101 , 1.00001876]), array([0.99999994, 0.99999987])]\n"
],
[
"print('Results:', [line_search(xi, function_provider, BFGS_direction) for xi in in_x])",
"number of steps: 8\nnumber of steps: 29\nResults: [array([1.00001316, 1.0000253 ]), array([0.99995636, 0.99991369])]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
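The HW2 helpers used above (line_search, steepest_descent_direction, newton_direction, BFGS_direction) are not included in this record, so the sketch below only illustrates the general idea on the same test problem: steepest descent with Armijo backtracking on the Rosenbrock function, started from the same two points. Function names, tolerances and step parameters are assumptions, not the homework's API.

    import numpy as np

    def rosenbrock(x):
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    def rosenbrock_grad(x):
        return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                         200.0 * (x[1] - x[0]**2)])

    def steepest_descent(x0, max_iter=10000, rho=0.5, c=1e-4, tol=1e-6):
        """Gradient descent with Armijo backtracking (illustrative only)."""
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = rosenbrock_grad(x)
            if np.linalg.norm(g) < tol:
                break
            p = -g                      # steepest-descent direction
            alpha = 1.0
            while rosenbrock(x + alpha * p) > rosenbrock(x) + c * alpha * g.dot(p):
                alpha *= rho            # shrink the step until the Armijo condition holds
            x = x + alpha * p
        return x, k

    for x0 in ([1.2, 1.2], [-1.2, 1.0]):
        x_star, steps = steepest_descent(x0)
        print(x0, '->', x_star, 'after', steps, 'steps')

As the notebook's step counts show, the Newton and BFGS directions need far fewer iterations than plain steepest descent on this problem (7/21 and 8/29 versus 285/1147).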
cb83ba4ac39dfebe68fbee3d32c171203297bdbc
| 6,921 |
ipynb
|
Jupyter Notebook
|
notebook/numpy_sum_mean_axis.ipynb
|
puyopop/python-snippets
|
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
|
[
"MIT"
] | 174 |
2018-05-30T21:14:50.000Z
|
2022-03-25T07:59:37.000Z
|
notebook/numpy_sum_mean_axis.ipynb
|
puyopop/python-snippets
|
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
|
[
"MIT"
] | 5 |
2019-08-10T03:22:02.000Z
|
2021-07-12T20:31:17.000Z
|
notebook/numpy_sum_mean_axis.ipynb
|
puyopop/python-snippets
|
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
|
[
"MIT"
] | 53 |
2018-04-27T05:26:35.000Z
|
2022-03-25T07:59:37.000Z
| 16.170561 | 62 | 0.396041 |
[
[
[
"import numpy as np",
"_____no_output_____"
],
[
"a = np.arange(12).reshape(3, 4)\nprint(a.shape)\nprint(a)",
"(3, 4)\n[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n"
],
[
"print(np.sum(a))",
"66\n"
],
[
"print(np.sum(a, axis=0))\nprint(np.sum(a, axis=1))",
"[12 15 18 21]\n[ 6 22 38]\n"
],
[
"print(a.sum())",
"66\n"
],
[
"print(a.sum(axis=0))\nprint(a.sum(axis=1))",
"[12 15 18 21]\n[ 6 22 38]\n"
],
[
"print(np.mean(a))",
"5.5\n"
],
[
"print(np.mean(a, axis=0))\nprint(np.mean(a, axis=1))",
"[ 4. 5. 6. 7.]\n[ 1.5 5.5 9.5]\n"
],
[
"print(a.mean())",
"5.5\n"
],
[
"print(a.mean(axis=0))\nprint(a.mean(axis=1))",
"[ 4. 5. 6. 7.]\n[ 1.5 5.5 9.5]\n"
],
[
"print(np.min(a))\nprint(np.min(a, axis=0))",
"0\n[0 1 2 3]\n"
],
[
"print(a.max())\nprint(a.max(axis=1))",
"11\n[ 3 7 11]\n"
],
[
"b = np.arange(24).reshape(2, 3, 4)\nprint(b.shape)\nprint(b)",
"(2, 3, 4)\n[[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n\n [[12 13 14 15]\n [16 17 18 19]\n [20 21 22 23]]]\n"
],
[
"print(b.sum(axis=0))",
"[[12 14 16 18]\n [20 22 24 26]\n [28 30 32 34]]\n"
],
[
"print(b[0, :, :] + b[1, :, :])",
"[[12 14 16 18]\n [20 22 24 26]\n [28 30 32 34]]\n"
],
[
"print(b.sum(axis=1))",
"[[12 15 18 21]\n [48 51 54 57]]\n"
],
[
"print(b[:, 0, :] + b[:, 1, :] + b[:, 2, :])",
"[[12 15 18 21]\n [48 51 54 57]]\n"
],
[
"print(b.sum(axis=2))",
"[[ 6 22 38]\n [54 70 86]]\n"
],
[
"print(b[:, :, 0] + b[:, :, 1] + b[:, :, 2] + b[:, :, 3])",
"[[ 6 22 38]\n [54 70 86]]\n"
],
[
"print(b.sum(axis=(0, 1)))",
"[60 66 72 78]\n"
],
[
"print(b.sum(axis=(0, 2)))",
"[ 60 92 124]\n"
],
[
"print(b.sum(axis=(1, 2)))",
"[ 66 210]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
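One option the notebook above does not show is keepdims, which keeps the reduced axis as a length-1 dimension so the result broadcasts back against the original array. A small sketch on the same (2, 3, 4) array:

    import numpy as np

    b = np.arange(24).reshape(2, 3, 4)

    # keepdims=True turns the (2, 3) result into (2, 3, 1) ...
    m = b.mean(axis=2, keepdims=True)
    print(m.shape)               # (2, 3, 1)

    # ... which broadcasts cleanly, e.g. to centre each length-4 row:
    centred = b - m
    print(centred.shape)         # (2, 3, 4)
    print(centred.mean(axis=2))  # all zeros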
cb83d965d281bdb09d1cddac70265eb552234557
| 4,163 |
ipynb
|
Jupyter Notebook
|
1_kNN/dating_test.ipynb
|
wonderui/machine_learning
|
8c0d7e6bc5c7f480cfd474c71f523a7fd31d8ee9
|
[
"MIT"
] | null | null | null |
1_kNN/dating_test.ipynb
|
wonderui/machine_learning
|
8c0d7e6bc5c7f480cfd474c71f523a7fd31d8ee9
|
[
"MIT"
] | null | null | null |
1_kNN/dating_test.ipynb
|
wonderui/machine_learning
|
8c0d7e6bc5c7f480cfd474c71f523a7fd31d8ee9
|
[
"MIT"
] | null | null | null | 22.262032 | 98 | 0.523661 |
[
[
[
"from sklearn.neighbors import NearestNeighbors\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"'''\ntest data\n2299\t3.733617\t0.698269\tsmallDoses\n60613\t3.620110\t0.287767\tdidntLike\n35432\t7.398380\t0.684218\tlargeDoses\n7413\t0.000000\t1.020797\tsmallDoses\n58668\t6.556676\t0.055183\tdidntLike\n35018\t9.959588\t0.060020\tlargeDoses\n'''",
"_____no_output_____"
],
[
"def file2matrix(filename):\n love_dictionary={'largeDoses':3, 'smallDoses':2, 'didntLike':1}\n fr = open(filename)\n arrayOLines = fr.readlines()\n numberOfLines = len(arrayOLines) #get the number of lines in the file\n returnMat = np.zeros((numberOfLines,3)) #prepare matrix to return\n classLabelVector = [] #prepare labels return \n index = 0\n for line in arrayOLines:\n line = line.strip()\n listFromLine = line.split('\\t')\n returnMat[index,:] = listFromLine[0:3]\n if(listFromLine[-1].isdigit()):\n classLabelVector.append(int(listFromLine[-1]))\n else:\n classLabelVector.append(love_dictionary.get(listFromLine[-1]))\n index += 1\n return returnMat,classLabelVector",
"_____no_output_____"
],
[
"X = file2matrix('datingTestSet.txt')[0]",
"_____no_output_____"
],
[
"sample = pd.Series(file2matrix('datingTestSet.txt')[1])",
"_____no_output_____"
],
[
"neigh = NearestNeighbors(10, 1)",
"_____no_output_____"
],
[
"neigh.fit(X)",
"_____no_output_____"
],
[
"kneigh = neigh.kneighbors([[35432, 7.398380, 0.684218]], 10, return_distance=False).tolist()",
"_____no_output_____"
],
[
"kneigh_list = [item for sublist in kneigh for item in sublist]",
"_____no_output_____"
],
[
"sample[sample.index.isin(kneigh_list)].value_counts().index[0]",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
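The notebook above re-implements the k-NN vote by hand with NearestNeighbors plus a value_counts majority. scikit-learn's KNeighborsClassifier packages the same two steps; here is a sketch on synthetic stand-in data (the real features and labels come from datingTestSet.txt, which is not part of this record):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Synthetic stand-in: 3 features on roughly the dating-data scales, labels 1-3.
    rng = np.random.default_rng(0)
    X = rng.random((100, 3)) * np.array([90000.0, 10.0, 1.5])
    y = rng.integers(1, 4, size=100)

    # fit() stores the neighbours, predict() does the lookup and the majority vote.
    clf = KNeighborsClassifier(n_neighbors=10)
    clf.fit(X, y)
    print(clf.predict([[35432, 7.398380, 0.684218]]))

Because the three features live on very different scales, a real pipeline would usually normalise them first (e.g. with a min-max scaler) so the first column does not dominate the Euclidean distance.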
cb83fafde4845ad511b4f67b555e3a8c18a07cda
| 37,806 |
ipynb
|
Jupyter Notebook
|
course-1/week-2/122. pandas_indexing_selection.ipynb
|
MLunov/Machine-Learning-Data-Analysis-Specialization-MIPT-Yandex
|
857dc468d995d7b62f00a47711ac8ca8af8c5e0f
|
[
"MIT"
] | null | null | null |
course-1/week-2/122. pandas_indexing_selection.ipynb
|
MLunov/Machine-Learning-Data-Analysis-Specialization-MIPT-Yandex
|
857dc468d995d7b62f00a47711ac8ca8af8c5e0f
|
[
"MIT"
] | null | null | null |
course-1/week-2/122. pandas_indexing_selection.ipynb
|
MLunov/Machine-Learning-Data-Analysis-Specialization-MIPT-Yandex
|
857dc468d995d7b62f00a47711ac8ca8af8c5e0f
|
[
"MIT"
] | null | null | null | 27.179008 | 104 | 0.369016 |
[
[
[
"# Библиотека Pandas",
"_____no_output_____"
],
[
"## Data Frame",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"#создание DataFrame с помощью чтения данных из файла\nframe = pd.read_csv('data_sample_example.tsv', header=0, sep='\\t')",
"_____no_output_____"
],
[
"frame",
"_____no_output_____"
],
[
"frame.dtypes",
"_____no_output_____"
],
[
"#изменение типа столбца с помощью функции apply\nframe.Birth = frame.Birth.apply(pd.to_datetime)",
"_____no_output_____"
],
[
"frame",
"_____no_output_____"
],
[
"frame.dtypes",
"_____no_output_____"
],
[
"frame.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6 entries, 0 to 5\nData columns (total 4 columns):\nName 6 non-null object\nBirth 6 non-null datetime64[ns]\nCity 6 non-null object\nPosition 4 non-null object\ndtypes: datetime64[ns](1), object(3)\nmemory usage: 272.0+ bytes\n"
],
[
"#заполнение пропущенных значений с помощью метода fillna\nframe.fillna('разнорабочий')",
"_____no_output_____"
],
[
"#заполнение пропущенных значений с помощью метода fillna (inplace)\nframe.fillna('разнорабочий', inplace=True)",
"_____no_output_____"
],
[
"frame",
"_____no_output_____"
],
[
"frame.Position",
"_____no_output_____"
],
[
"frame[['Position']]",
"_____no_output_____"
],
[
"frame[['Name', 'Position']]",
"_____no_output_____"
],
[
"frame[:3] #выбираем первые три записи",
"_____no_output_____"
],
[
"frame[-3:] #выбираем три послдение записи",
"_____no_output_____"
],
[
"frame.loc[[0,1,2], [\"Name\", \"City\"]] #работает на основе имен",
"_____no_output_____"
],
[
"frame.iloc[[1,3,5], [0,1]] #работает на основе позиций",
"_____no_output_____"
],
[
"frame.ix[[0,1,2], [\"Name\", \"City\"]] #поддерживает и имена и позиции (пример с именами)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"frame.ix[[0,1,2], [0,1]] #поддерживает и имена и позиции (пример с позициями)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"#выбираем строки, которые удовлетворяют условию frame.Birth >= pd.datetime(1985,1,1)\nframe[frame.Birth >= pd.datetime(1985,1,1)]",
"_____no_output_____"
],
[
"#выбираем строки, удовлетворяющие пересечению условий\nframe[(frame.Birth >= pd.datetime(1985,1,1)) &\n (frame.City != 'Москва')]",
"_____no_output_____"
],
[
"#выбираем строки, удовлетворяющие объединению условий\nframe[(frame.Birth >= pd.datetime(1985,1,1)) |\n (frame.City == 'Волгоград')]",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
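The .ix indexer demonstrated above already raises a DeprecationWarning in the pandas version used and has been removed in later releases; the two calls map directly onto .loc and .iloc. A small sketch with an illustrative frame (not the data_sample_example.tsv used in the notebook):

    import pandas as pd

    frame = pd.DataFrame({'Name': ['Ivan', 'Anna', 'Oleg'],
                          'Birth': pd.to_datetime(['1985-03-01', '1990-07-15', '1979-11-30']),
                          'City': ['X', 'Y', 'Z']})

    # .ix[[0,1,2], ["Name", "City"]]  ->  label-based .loc
    print(frame.loc[[0, 1, 2], ['Name', 'City']])

    # .ix[[0,1,2], [0,1]]             ->  position-based .iloc
    print(frame.iloc[[0, 1, 2], [0, 1]])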
cb840f6d1492ed45b0d7faf60ffe5dd3ff8d8411
| 14,075 |
ipynb
|
Jupyter Notebook
|
notebooks/run04-L200only_reg_rf_boruta.ipynb
|
pritchardlabatpsu/cga
|
0a71c672b1348cebc724560643fd908d636fc133
|
[
"MIT"
] | null | null | null |
notebooks/run04-L200only_reg_rf_boruta.ipynb
|
pritchardlabatpsu/cga
|
0a71c672b1348cebc724560643fd908d636fc133
|
[
"MIT"
] | null | null | null |
notebooks/run04-L200only_reg_rf_boruta.ipynb
|
pritchardlabatpsu/cga
|
0a71c672b1348cebc724560643fd908d636fc133
|
[
"MIT"
] | 1 |
2022-02-08T01:06:20.000Z
|
2022-02-08T01:06:20.000Z
| 67.344498 | 410 | 0.678934 |
[
[
[
"from ceres_infer.session import workflow\nfrom ceres_infer.models import model_infer_ens_custom",
"/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tqdm/std.py:668: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version\n from pandas import Panel\nUsing TensorFlow backend.\n/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/Users/boyangzhao/anaconda/envs/cnp/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n"
],
[
"import logging\nlogging.basicConfig(level=logging.INFO)",
"_____no_output_____"
],
[
"params = {\n # directories\n 'outdir_run': '../out/20.0909 Lx/L200only_reg_rf_boruta/', # output dir for the run\n 'outdir_modtmp': '../out/20.0909 Lx/L200only_reg_rf_boruta/model_perf/', # intermediate files for each model\n 'indir_dmdata_Q3': '../out/20.0817 proc_data/gene_effect/dm_data.pkl', # pickled preprocessed DepMap Q3 data\n 'indir_dmdata_external': '../out/20.0817 proc_data/gene_effect/dm_data_Q4.pkl', # pickled preprocessed DepMap Q3 data\n 'indir_genesets': '../data/gene_sets/',\n 'indir_landmarks': '../out/19.1013 tight cluster/landmarks_n200_k200.csv', # csv file of landmarks [default: None]\n\n # notes\n 'session_notes': 'L200 landmarks only; regression with random forest-boruta lite iteration',\n\n # data\n 'external_data_name': 'p19q4', # name of external validation dataset\n 'opt_scale_data': False, # scale input data True/False\n 'opt_scale_data_types': '\\[(?:RNA-seq|CN)\\]', # data source types to scale; in regexp\n 'model_data_source': ['CERES_Lx'],\n 'anlyz_set_topN': 10, # for analysis set how many of the top features to look at\n 'perm_null': 1000, # number of samples to get build the null distribution, for corr\n 'useGene_dependency': False, # whether to use CERES gene dependency (true) or gene effect (false)\n 'scope': 'differential', # scope for which target genes to run on; list of gene names, or 'all', 'differential'\n\n # model\n 'model_name': 'rf',\n 'model_params': {'n_estimators':1000,'max_depth':15,'min_samples_leaf':5,'max_features':'log2'},\n 'model_paramsgrid': {},\n 'model_pipeline': model_infer_ens_custom,\n 'pipeline_params': {'sf_iterThresholds': [], 'sf_topK': None},\n \n # pipeline\n 'parallelize': False, # parallelize workflow\n 'processes': 1, # number of cpu processes to use\n \n # analysis\n 'metric_eval': 'score_test', # metric in model_results to evaluate, e.g. score_test, score_oob\n 'thresholds': {'score_rd10': 0.1, # score of reduced model - threshold for filtering\n 'recall_rd10': 0.95}, # recall of reduced model - threshold for filtering\n 'min_gs_size': 4 # minimum gene set size, to be derived\n}",
"_____no_output_____"
],
[
"wf = workflow(params)\npipeline = ['load_processed_data', 'infer']\nwf.create_pipe(pipeline)\nwf.run_pipe()",
"INFO:root:Loading preprocessed data...\nINFO:root:Adding landmarks...\nINFO:root:Running model building and inference...\n100%|██████████| 521/521 [12:32:26<00:00, 86.65s/it] \n"
],
[
"wf = workflow(params)\npipeline = ['load_processed_data', 'load_model_results', 'analyze', 'analyze_filtered', 'derive_genesets']\nwf.create_pipe(pipeline)\nwf.run_pipe()",
"INFO:root:Loading preprocessed data...\nINFO:root:Adding landmarks...\nINFO:root:Loading model results...\nINFO:root:Analyzing model results...\n/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:303: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n feat_summary = varExp_noNeg.groupby('target')['target', 'score_rd', 'score_full'].first()\n/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:36: MatplotlibDeprecationWarning: normalize=None does not normalize if the sum is less than 1 but this behavior is deprecated since 3.3 until two minor releases later. After the deprecation period the default value will be normalize=True. To prevent normalization pass normalize=False \n plt.pie(df_counts.values, labels=labels, autopct=autopct, colors=colors)\nINFO:root:Analyzing filtered results...\n/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:303: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n feat_summary = varExp_noNeg.groupby('target')['target', 'score_rd', 'score_full'].first()\n/Users/boyangzhao/Dropbox/Industry/Quantalarity/client Penn/proj_ceres/github/cnp_dev/src/ceres_infer/analyses.py:36: MatplotlibDeprecationWarning: normalize=None does not normalize if the sum is less than 1 but this behavior is deprecated since 3.3 until two minor releases later. After the deprecation period the default value will be normalize=True. To prevent normalization pass normalize=False \n plt.pie(df_counts.values, labels=labels, autopct=autopct, colors=colors)\nINFO:root:Deriving gene sets...\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. 
Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\nWARNING:matplotlib.axes._axes:*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
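The model itself in the params above is an ordinary random forest ('model_name': 'rf' with the listed model_params); everything else (landmark features, Boruta selection, model_infer_ens_custom) lives in the ceres_infer package and is not reproduced here. A hedged sketch of just that forest configuration, on synthetic data rather than the CERES matrices:

    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a 200-landmark feature matrix.
    X, y = make_regression(n_samples=300, n_features=200, n_informative=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Hyperparameters copied from 'model_params' in the cell above.
    rf = RandomForestRegressor(n_estimators=1000, max_depth=15,
                               min_samples_leaf=5, max_features='log2',
                               random_state=0, n_jobs=-1)
    rf.fit(X_train, y_train)
    print(rf.score(X_test, y_test))   # R^2 on held-out data, analogous to 'score_test'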
cb84226b70d931a074a58bf3031f1ae90b67e1fc
| 173,183 |
ipynb
|
Jupyter Notebook
|
Jupyter notebooks & Docs/Two moons clustering and DBCV.ipynb
|
MonicaSelvaraj/Drosophila-germ-plasm
|
abe3186a42400cfd757900cce7dd46a8afeea3f5
|
[
"MIT"
] | null | null | null |
Jupyter notebooks & Docs/Two moons clustering and DBCV.ipynb
|
MonicaSelvaraj/Drosophila-germ-plasm
|
abe3186a42400cfd757900cce7dd46a8afeea3f5
|
[
"MIT"
] | null | null | null |
Jupyter notebooks & Docs/Two moons clustering and DBCV.ipynb
|
MonicaSelvaraj/Drosophila-germ-plasm
|
abe3186a42400cfd757900cce7dd46a8afeea3f5
|
[
"MIT"
] | null | null | null | 202.316589 | 39,440 | 0.879601 |
[
[
[
"#Source for DBSCAN code: https://www.youtube.com/watch?v=5cOhL4B5waU&t=918s",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfrom sklearn.datasets import make_moons\nfrom sklearn.cluster import DBSCAN\nfrom sklearn.neighbors import NearestNeighbors\nfrom scipy.spatial.distance import euclidean\n\nimport numpy as np\nimport numpy.matlib\n\nplt.style.use('ggplot')\n%matplotlib inline",
"_____no_output_____"
],
[
"X, label = make_moons(n_samples=200, noise=0.1,random_state=19)\n#print(X[:5,])\nfig, ax = plt.subplots(figsize=(10,8))\nsctr1 = ax.scatter(X[:,0],X[:,1],s=140,alpha=0.9)\nlen(X)",
"_____no_output_____"
],
[
"#Using a kNN distance plot to find eps\n#Using k=12\n#Source - https://scikit-learn.org/stable/modules/neighbors.html\nnbrs = NearestNeighbors(n_neighbors=12).fit(X)\ndistances, indices = nbrs.kneighbors(X)\n#print(distances)\nsortedDistancesInc = sorted(distances[:,11],reverse=False)\nplt.plot(list(range(1,len(X)+1)), sortedDistancesInc)\n#plt.show()\n\n#Figuring out how to automatically get epsilon from the graph\n#The elbow point is the point on the curve with the maximum absolute second derivative \n#Source: https://dataplatform.cloud.ibm.com/analytics/notebooks/54d79c2a-f155-40ec-93ec-ed05b58afa39/view?access_token=6d8ec910cf2a1b3901c721fcb94638563cd646fe14400fecbb76cea6aaae2fb1\n\nx = list(range(1,len(X)+1))\ny = sortedDistancesInc\nkNNdata = np.vstack((x,y)).T\nnPoints = len(x)\n#print(kNNdata)\n\n#Drawing a line from the first point to the last point on the curve \nfirstPoint = kNNdata[0]\nlastPoint = kNNdata[-1]\nplt.scatter(firstPoint[0],firstPoint[1], c='blue',s=10)\nplt.scatter(lastPoint[0],lastPoint[1], c='blue',s=10)\nlv = lastPoint - firstPoint #Finding a vector between the first and last point\nlvn = lv/np.linalg.norm(lv)#Normalizing the vector\nplt.plot([firstPoint[0],lastPoint[0]],[firstPoint[1],lastPoint[1]])\n#plt.show()\n\n#Finding the distance to the line \nvecFromFirst = kNNdata - firstPoint\nscalarProduct = np.sum(vecFromFirst * np.matlib.repmat(lvn, nPoints, 1), axis=1)\nvecFromFirstParallel = np.outer(scalarProduct, lvn)\nvecToLine = vecFromFirst - vecFromFirstParallel\n\n# distance to line is the norm of vecToLine\ndistToLine = np.sqrt(np.sum(vecToLine ** 2, axis=1))\n\n# knee/elbow is the point with max distance value\nidxOfBestPoint = np.argmax(distToLine)\n\nprint (\"Knee of the curve is at index =\",idxOfBestPoint)\nprint (\"Knee value =\", kNNdata[idxOfBestPoint])\n\nplt.scatter(kNNdata[idxOfBestPoint][0],kNNdata[idxOfBestPoint][1])\nplt.show()\n",
"Knee of the curve is at index = 162\nKnee value = [163. 0.26893279]\n"
],
[
"#DBSCAN\nmodel = DBSCAN(eps=kNNdata[idxOfBestPoint][1],min_samples=12).fit(X)\nprint(model)",
"DBSCAN(algorithm='auto', eps=0.2689327859651778, leaf_size=30,\n metric='euclidean', metric_params=None, min_samples=12, n_jobs=None,\n p=None)\n"
],
[
"model.labels_",
"_____no_output_____"
],
[
"#To know which points are the core points\nmodel.core_sample_indices_",
"_____no_output_____"
],
[
"#Visualizing clusters\n#fig, ax = plt.subplots(figsize=(10,8))\n#sctr2 = ax.scatter(X[:,0],X[:,1],c=model.labels_,s=140,alpha=0.9,cmap=plt.cm.Set1)\n#fig.show()\n\nfrom sklearn.cluster import KMeans\n\nkmeans = KMeans(n_clusters=2)\nkmeans_labels = kmeans.fit_predict(X)\nplt.scatter(X[:,0], X[:,1], c=kmeans_labels)\nplt.show()\n\nkmeans_score = DBCV(X, kmeans_labels, dist_function=euclidean)\nprint(kmeans_score)\nprint(kmeans_labels)",
"_____no_output_____"
],
[
"import hdbscan\nprint(X)\nhdbscanner = hdbscan.HDBSCAN()\nhdbscan_labels = hdbscanner.fit_predict(X)\nplt.scatter(X[:,0], X[:,1], c=hdbscan_labels)\n\nhdbscan_score = DBCV(X, hdbscan_labels, dist_function=euclidean)\nprint(hdbscan_score)",
"[[ 2.81714569e-01 9.10444056e-01]\n [ 8.38924105e-01 -5.30053378e-01]\n [ 4.09154736e-01 8.09443517e-01]\n [-9.84152132e-01 1.31421552e-01]\n [ 1.15919021e+00 4.91042499e-01]\n [-9.67034864e-01 9.81273018e-02]\n [ 9.17391379e-01 -2.33492700e-01]\n [ 9.78275081e-01 5.01470015e-01]\n [ 1.85907097e+00 3.30871464e-01]\n [ 1.28971276e+00 -3.64160764e-01]\n [ 5.84428413e-01 7.67799476e-01]\n [-7.30832047e-01 4.56298974e-01]\n [ 7.94417589e-01 6.12926580e-01]\n [ 1.79695065e+00 -2.93018722e-02]\n [ 2.32147899e-01 9.24300964e-01]\n [ 1.54806066e+00 -3.39771532e-01]\n [-6.18477556e-01 8.99092698e-01]\n [-6.58859478e-02 4.56605027e-01]\n [ 1.11837392e+00 -4.01525312e-01]\n [ 1.18063952e+00 -2.33470696e-01]\n [ 1.49243452e-01 -1.28066249e-01]\n [-3.43639867e-01 8.37080031e-01]\n [-3.46496873e-01 7.71662626e-01]\n [ 1.10554355e+00 -3.66008783e-01]\n [ 1.77313541e+00 2.20647050e-01]\n [ 1.70382757e-01 7.30566191e-01]\n [-2.33630869e-01 1.01740159e+00]\n [ 1.77765377e+00 3.82306966e-01]\n [ 7.22840616e-01 -3.78990619e-01]\n [ 1.06124107e+00 -4.22074761e-01]\n [ 8.81390770e-01 -3.36024229e-02]\n [-3.39269973e-01 9.07231980e-01]\n [ 4.87196179e-01 9.17613431e-01]\n [ 5.07742972e-01 6.91930633e-01]\n [ 6.46502886e-01 7.49997777e-01]\n [ 1.91481372e+00 3.38433459e-01]\n [ 1.12951014e+00 3.36269110e-01]\n [ 3.09461855e-01 -1.19380361e-01]\n [-5.15164417e-01 7.88584465e-01]\n [ 7.51220238e-01 -5.42852616e-01]\n [ 1.27435393e+00 -4.90628430e-01]\n [ 9.07497256e-01 1.37175734e-01]\n [-9.34976907e-01 4.51134853e-01]\n [ 1.60442635e+00 -4.08584945e-01]\n [-2.78076901e-01 8.64095352e-01]\n [ 6.77256855e-01 -3.48885259e-01]\n [ 9.64398650e-01 3.78846390e-01]\n [ 1.80012064e+00 2.57766394e-01]\n [ 1.10609400e+00 -1.08145533e-02]\n [ 1.50330274e-01 2.07549945e-01]\n [ 7.58441172e-01 -3.15981629e-01]\n [ 8.49491194e-01 -4.67679590e-01]\n [ 2.67211232e-01 -3.95742005e-01]\n [ 1.98832264e+00 -1.24071427e-01]\n [ 2.06321875e+00 -7.67316517e-02]\n [ 9.21506727e-01 -4.92093137e-01]\n [ 1.91787271e-01 -9.76060743e-02]\n [ 1.37536098e+00 -2.99750569e-01]\n [-1.03145147e+00 1.50835667e-01]\n [-9.98085669e-01 1.66616496e-01]\n [ 7.11007501e-01 -4.80496835e-01]\n [ 6.21564717e-02 3.21238638e-01]\n [ 3.08723567e-01 9.59157093e-01]\n [ 1.70667440e-01 -3.75379077e-02]\n [ 6.99711516e-01 8.09784869e-01]\n [-9.40023394e-01 5.23730261e-01]\n [-6.26622817e-01 7.41426777e-01]\n [-1.06499351e+00 2.16361259e-01]\n [ 7.01949242e-01 5.89121985e-01]\n [-8.27492399e-01 2.30738787e-01]\n [ 5.37907661e-01 -5.03666583e-01]\n [ 7.16740148e-01 8.17543020e-01]\n [ 8.83262886e-01 1.13308858e-01]\n [ 7.17386403e-02 2.19313656e-01]\n [ 2.11374090e+00 5.38851941e-01]\n [ 1.93429241e+00 3.47896845e-01]\n [ 8.71826634e-01 5.93309354e-01]\n [ 1.64909327e+00 -3.25999634e-01]\n [ 1.91102249e+00 6.34066907e-02]\n [ 1.68040221e-01 -2.38311133e-01]\n [-7.36267166e-01 9.51043996e-01]\n [ 1.10415264e+00 -5.04586010e-01]\n [-3.97070930e-01 1.08231963e+00]\n [-1.27720474e-01 9.33783717e-01]\n [ 1.69827187e+00 -6.62905632e-02]\n [ 1.66614969e+00 -1.18278227e-01]\n [ 2.58576299e-01 9.82549519e-01]\n [ 9.32611783e-01 2.01505345e-01]\n [ 1.04432541e+00 -4.59045777e-01]\n [ 9.01205318e-01 3.52599263e-01]\n [-9.02171166e-01 7.12479147e-01]\n [-4.73335021e-01 9.01877413e-01]\n [-1.94960525e-04 9.98399326e-01]\n [ 3.62085170e-01 8.13342990e-01]\n [-7.87373871e-01 6.45036646e-01]\n [ 1.68677015e+00 -3.80182525e-01]\n [ 3.60087432e-01 1.06137585e+00]\n [ 2.85840535e-02 -1.32960537e-01]\n [ 4.73651695e-01 -2.94766313e-01]\n [-1.03825975e+00 -4.51232806e-02]\n [ 
2.03027244e+00 5.58906938e-01]\n [ 1.85953758e+00 2.07668729e-01]\n [ 3.16177556e-01 9.55986443e-01]\n [-1.06970612e+00 4.71387154e-01]\n [ 2.26019186e-01 1.01386122e+00]\n [ 4.40534021e-01 -2.73269706e-01]\n [ 4.63343062e-01 -3.17886974e-01]\n [ 1.46548840e-01 4.84263242e-02]\n [ 1.91907573e+00 1.94347026e-01]\n [-7.69634974e-01 6.79609239e-01]\n [-5.12525987e-01 6.81034015e-01]\n [ 8.63959309e-01 1.56695906e-01]\n [ 6.52884327e-01 -5.65466015e-01]\n [ 1.09799392e+00 -4.82593736e-01]\n [ 5.64459487e-01 -3.63350947e-01]\n [ 1.75317378e+00 -9.61061337e-02]\n [ 1.37029309e+00 -1.95206177e-01]\n [ 8.52279442e-02 1.06797257e+00]\n [ 7.64275847e-01 7.75467197e-01]\n [ 5.86898310e-01 8.30183231e-01]\n [ 4.70898960e-01 -6.69234020e-01]\n [-8.49875638e-01 4.70087006e-01]\n [ 8.53527883e-01 -4.38043488e-01]\n [ 1.75749230e+00 -1.74382024e-01]\n [ 1.91877898e+00 2.34344957e-01]\n [-6.91630977e-01 6.68556167e-01]\n [-1.02614698e+00 2.99792553e-01]\n [ 4.89998220e-01 8.09777973e-01]\n [-8.03920364e-01 6.11667154e-01]\n [-3.75865674e-01 8.37840919e-01]\n [ 1.91101144e-02 3.16166736e-01]\n [ 1.75298538e+00 1.76139968e-01]\n [ 1.94519664e+00 4.57764979e-01]\n [ 6.31647820e-02 5.31556595e-01]\n [ 1.24257701e+00 -4.12167882e-01]\n [-1.31335127e-01 3.11846289e-01]\n [ 1.18514446e-01 6.17020286e-02]\n [ 3.47530312e-01 -4.03862976e-01]\n [ 1.71898995e+00 -5.19388338e-02]\n [ 1.87280385e+00 -8.89216067e-02]\n [ 8.65346904e-02 -1.08446878e-01]\n [ 3.99036756e-02 1.13742982e+00]\n [-9.49120924e-01 4.14647603e-01]\n [ 6.64364029e-01 -5.56016890e-01]\n [ 1.67794606e+00 -2.11835660e-01]\n [ 8.00914913e-01 5.71138820e-01]\n [ 2.14318277e-01 1.18281935e+00]\n [-1.71538779e-02 3.09189745e-01]\n [ 1.91939489e+00 6.34607642e-02]\n [-4.20133067e-01 9.01687374e-01]\n [ 6.23735491e-01 6.93158411e-01]\n [ 1.91221212e-01 -3.10053225e-01]\n [ 7.06670811e-02 -5.23268877e-02]\n [ 1.52231355e+00 -4.44094684e-01]\n [ 3.69281106e-01 -2.00660744e-01]\n [ 3.05950835e-01 -1.76371745e-01]\n [-3.04786297e-02 5.72969174e-01]\n [ 9.11922443e-01 3.46300191e-01]\n [ 5.86574246e-01 -3.97795645e-01]\n [ 1.17649975e+00 -4.28208483e-01]\n [ 1.50337454e+00 -3.69673057e-01]\n [ 1.04407970e+00 5.53067481e-01]\n [ 9.65248990e-01 2.82047827e-01]\n [ 9.15718150e-02 1.03263694e+00]\n [ 1.23563502e+00 -4.49032628e-01]\n [-5.33633393e-01 8.12668516e-01]\n [ 1.79400878e+00 -2.58621816e-01]\n [ 1.24881298e-01 3.64564324e-01]\n [-2.07629568e-01 1.03465823e+00]\n [-1.17969384e-01 7.81704664e-01]\n [ 2.43386468e-01 9.02970150e-01]\n [-1.47728419e-01 1.15128550e+00]\n [ 3.97402170e-02 1.27351760e-02]\n [-8.67239841e-01 5.37456953e-01]\n [-8.49568329e-02 5.37052123e-01]\n [-9.31366638e-01 6.29593907e-01]\n [ 1.00174531e+00 3.80066392e-02]\n [ 8.85522951e-01 3.40571377e-01]\n [-8.65536351e-01 7.35630625e-01]\n [ 4.27897469e-01 1.08084422e+00]\n [ 7.91907322e-01 4.46229222e-01]\n [ 1.53672078e+00 -5.35309571e-01]\n [-2.52068165e-02 3.51484683e-01]\n [ 7.16854681e-02 1.07017727e+00]\n [-1.01977067e+00 3.97985171e-01]\n [ 1.09957077e+00 3.80012933e-01]\n [-1.17450457e+00 3.62294438e-02]\n [ 4.99866114e-01 -4.15292391e-01]\n [ 1.29635640e+00 -5.50057093e-01]\n [ 8.91714614e-01 6.67009267e-01]\n [ 1.87137062e+00 -1.74169842e-02]\n [-4.30965176e-01 8.34156844e-01]\n [ 1.23229816e+00 -6.07764691e-01]\n [-8.14160192e-01 8.16064583e-01]\n [-6.42195753e-02 1.02674814e+00]\n [ 8.00276170e-01 6.85654017e-01]\n [ 1.58007883e+00 -4.22033969e-01]\n [-1.02457058e+00 2.12367198e-01]\n [ 1.77371829e-01 1.87030390e-01]\n [-9.21240232e-01 1.88708009e-01]]\n"
],
[
"'''\nCluster validation \nDBCV evaluates the within- and between- cluster desnity connectedness of clustering results \nby measuring the least dense region inside and cluster and the most dense region between clusters.\nA relative measure for evaluation of of density-based clustering should be defined by means of densities\nrather than by distances.\n\nStep1: Define all points core distance (APCD) for each object in a cluster\n\nStep 2: Define mutual reachability distance (MRD) for every pair of points in a cluster \n\nStep 3: Build a fully connected graph G, for each cluster, based on mutual reachability distance\n\nStep 4: Find the minimum spanning tree of G\n\nStep 5: Using the MST define the density sparseness and density separation of each cluster \nDensity sparseness - maximum edge of the MST - can be interpreted as the area with the lowest density inside the cluster\nDensity separation - minimum MRD between the objects of two clusters - can be interpreted as the maximum density area between the clusters\n\nStep 6: Compute the validity index of a cluster (VC)\n\nStep 7: Compute the validity index of the clustering solution\n\nNote: Distances are Euclidean \n'''\n\ndbscan_score = DBCV(X,model.labels_,dist_function=euclidean)\nprint(model.labels_)\nprint(dbscan_score)",
"_____no_output_____"
],
[
"\"\"\"\nImplimentation of Density-Based Clustering Validation \"DBCV\"\nCitation:\nMoulavi, Davoud, et al. \"Density-based clustering validation.\"\nProceedings of the 2014 SIAM International Conference on Data Mining.\nSociety for Industrial and Applied Mathematics, 2014.\n\"\"\"\n\nimport numpy as np\nfrom scipy.spatial.distance import euclidean, cdist\nfrom scipy.sparse.csgraph import minimum_spanning_tree\nfrom scipy.sparse import csgraph\n\n\ndef DBCV(X, labels, dist_function=euclidean):\n \"\"\"\n Density Based clustering validation\n Args:\n X (np.ndarray): ndarray with dimensions [n_samples, n_features]\n data to check validity of clustering\n labels (np.array): clustering assignments for data X\n dist_dunction (func): function to determine distance between objects\n func args must be [np.array, np.array] where each array is a point\n Returns: cluster_validity (float)\n score in range[-1, 1] indicating validity of clustering assignments\n \"\"\"\n graph = _mutual_reach_dist_graph(X, labels, dist_function)\n mst = _mutual_reach_dist_MST(graph)\n cluster_validity = _clustering_validity_index(mst, labels)\n return cluster_validity\n\n\ndef _core_dist(point, neighbors, dist_function):\n \"\"\"\n Computes the core distance of a point.\n Core distance is the inverse density of an object.\n Args:\n point (np.array): array of dimensions (n_features,)\n point to compute core distance of\n neighbors (np.ndarray): array of dimensions (n_neighbors, n_features):\n array of all other points in object class\n dist_dunction (func): function to determine distance between objects\n func args must be [np.array, np.array] where each array is a point\n Returns: core_dist (float)\n inverse density of point\n \"\"\"\n n_features = np.shape(point)[0]\n n_neighbors = np.shape(neighbors)[1]\n\n distance_vector = cdist(point.reshape(1, -1), neighbors)\n distance_vector = distance_vector[distance_vector != 0]\n numerator = ((1/distance_vector)**n_features).sum()\n core_dist = (numerator / (n_neighbors)) ** (-1/n_features)\n return core_dist\n\n\ndef _mutual_reachability_dist(point_i, point_j, neighbors_i,\n neighbors_j, dist_function):\n \"\"\".\n Computes the mutual reachability distance between points\n Args:\n point_i (np.array): array of dimensions (n_features,)\n point i to compare to point j\n point_j (np.array): array of dimensions (n_features,)\n point i to compare to point i\n neighbors_i (np.ndarray): array of dims (n_neighbors, n_features):\n array of all other points in object class of point i\n neighbors_j (np.ndarray): array of dims (n_neighbors, n_features):\n array of all other points in object class of point j\n dist_dunction (func): function to determine distance between objects\n func args must be [np.array, np.array] where each array is a point\n Returns: mutual_reachability (float)\n mutual reachability between points i and j\n \"\"\"\n core_dist_i = _core_dist(point_i, neighbors_i, dist_function)\n core_dist_j = _core_dist(point_j, neighbors_j, dist_function)\n dist = dist_function(point_i, point_j)\n mutual_reachability = np.max([core_dist_i, core_dist_j, dist])\n return mutual_reachability\n\n\ndef _mutual_reach_dist_graph(X, labels, dist_function):\n \"\"\"\n Computes the mutual reach distance complete graph.\n Graph of all pair-wise mutual reachability distances between points\n Args:\n X (np.ndarray): ndarray with dimensions [n_samples, n_features]\n data to check validity of clustering\n labels (np.array): clustering assignments for data X\n dist_dunction (func): function to 
determine distance between objects\n func args must be [np.array, np.array] where each array is a point\n Returns: graph (np.ndarray)\n array of dimensions (n_samples, n_samples)\n Graph of all pair-wise mutual reachability distances between points.\n \"\"\"\n n_samples = np.shape(X)[0]\n graph = []\n counter = 0\n for row in range(n_samples):\n graph_row = []\n for col in range(n_samples):\n point_i = X[row]\n point_j = X[col]\n class_i = labels[row]\n class_j = labels[col]\n members_i = _get_label_members(X, labels, class_i)\n members_j = _get_label_members(X, labels, class_j)\n dist = _mutual_reachability_dist(point_i, point_j,\n members_i, members_j,\n dist_function)\n graph_row.append(dist)\n counter += 1\n graph.append(graph_row)\n graph = np.array(graph)\n return graph\n\n\ndef _mutual_reach_dist_MST(dist_tree):\n \"\"\"\n Computes minimum spanning tree of the mutual reach distance complete graph\n Args:\n dist_tree (np.ndarray): array of dimensions (n_samples, n_samples)\n Graph of all pair-wise mutual reachability distances\n between points.\n Returns: minimum_spanning_tree (np.ndarray)\n array of dimensions (n_samples, n_samples)\n minimum spanning tree of all pair-wise mutual reachability\n distances between points.\n \"\"\"\n mst = minimum_spanning_tree(dist_tree).toarray()\n return mst + np.transpose(mst)\n\n\ndef _cluster_density_sparseness(MST, labels, cluster):\n \"\"\"\n Computes the cluster density sparseness, the minimum density\n within a cluster\n Args:\n MST (np.ndarray): minimum spanning tree of all pair-wise\n mutual reachability distances between points.\n labels (np.array): clustering assignments for data X\n cluster (int): cluster of interest\n Returns: cluster_density_sparseness (float)\n value corresponding to the minimum density within a cluster\n \"\"\"\n indices = np.where(labels == cluster)[0]\n cluster_MST = MST[indices][:, indices]\n cluster_density_sparseness = np.max(cluster_MST)\n return cluster_density_sparseness\n\n\ndef _cluster_density_separation(MST, labels, cluster_i, cluster_j):\n \"\"\"\n Computes the density separation between two clusters, the maximum\n density between clusters.\n Args:\n MST (np.ndarray): minimum spanning tree of all pair-wise\n mutual reachability distances between points.\n labels (np.array): clustering assignments for data X\n cluster_i (int): cluster i of interest\n cluster_j (int): cluster j of interest\n Returns: density_separation (float):\n value corresponding to the maximum density between clusters\n \"\"\"\n indices_i = np.where(labels == cluster_i)[0]\n indices_j = np.where(labels == cluster_j)[0]\n shortest_paths = csgraph.dijkstra(MST, indices=indices_i)\n relevant_paths = shortest_paths[:, indices_j]\n density_separation = np.min(relevant_paths)\n return density_separation\n\n\ndef _cluster_validity_index(MST, labels, cluster):\n \"\"\"\n Computes the validity of a cluster (validity of assignmnets)\n Args:\n MST (np.ndarray): minimum spanning tree of all pair-wise\n mutual reachability distances between points.\n labels (np.array): clustering assignments for data X\n cluster (int): cluster of interest\n Returns: cluster_validity (float)\n value corresponding to the validity of cluster assignments\n \"\"\"\n min_density_separation = np.inf\n for cluster_j in np.unique(labels):\n if cluster_j != cluster:\n cluster_density_separation = _cluster_density_separation(MST,\n labels,\n cluster,\n cluster_j)\n if cluster_density_separation < min_density_separation:\n min_density_separation = cluster_density_separation\n 
cluster_density_sparseness = _cluster_density_sparseness(MST,\n labels,\n cluster)\n numerator = min_density_separation - cluster_density_sparseness\n denominator = np.max([min_density_separation, cluster_density_sparseness])\n cluster_validity = numerator / denominator\n return cluster_validity\n\n\ndef _clustering_validity_index(MST, labels):\n \"\"\"\n Computes the validity of all clustering assignments for a\n clustering algorithm\n Args:\n MST (np.ndarray): minimum spanning tree of all pair-wise\n mutual reachability distances between points.\n labels (np.array): clustering assignments for data X\n Returns: validity_index (float):\n score in range[-1, 1] indicating validity of clustering assignments\n \"\"\"\n n_samples = len(labels)\n validity_index = 0\n for label in np.unique(labels):\n fraction = np.sum(labels == label) / float(n_samples)\n cluster_validity = _cluster_validity_index(MST, labels, label)\n validity_index += fraction * cluster_validity\n return validity_index\n\n\ndef _get_label_members(X, labels, cluster):\n \"\"\"\n Helper function to get samples of a specified cluster.\n Args:\n X (np.ndarray): ndarray with dimensions [n_samples, n_features]\n data to check validity of clustering\n labels (np.array): clustering assignments for data X\n cluster (int): cluster of interest\n Returns: members (np.ndarray)\n array of dimensions (n_samples, n_features) of samples of the\n specified cluster.\n \"\"\"\n indices = np.where(labels == cluster)[0]\n members = X[indices]\n return members",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb8424f35dd632cd29a34ba761c65b36b067936f
| 12,390 |
ipynb
|
Jupyter Notebook
|
week3/make_heap/build heap.ipynb
|
Mstoned/Data-Structures
|
8113af839a6c58dae2dfc796f0e8a5edce4e9361
|
[
"MIT"
] | 2 |
2020-03-22T15:03:58.000Z
|
2020-04-03T03:27:55.000Z
|
week3/make_heap/build heap.ipynb
|
Mstoned/Data-Structures
|
8113af839a6c58dae2dfc796f0e8a5edce4e9361
|
[
"MIT"
] | null | null | null |
week3/make_heap/build heap.ipynb
|
Mstoned/Data-Structures
|
8113af839a6c58dae2dfc796f0e8a5edce4e9361
|
[
"MIT"
] | null | null | null | 32.519685 | 132 | 0.451655 |
[
[
[
"# sift down ",
"_____no_output_____"
]
],
[
[
"# python3\n\nclass HeapBuilder:\n def __init__(self):\n self._swaps = [] #array of tuples or arrays\n self._data = []\n\n def ReadData(self):\n n = int(input())\n self._data = [int(s) for s in input().split()]\n assert n == len(self._data)\n\n def WriteResponse(self):\n print(len(self._swaps))\n for swap in self._swaps:\n print(swap[0], swap[1])\n\n def swapdown(self,i):\n n = len(self._data)\n min_index = i\n l = 2*i+1 if (2*i+1<n) else -1 \n r = 2*i+2 if (2*i+2<n) else -1 \n\n if l != -1 and self._data[l] < self._data[min_index]:\n min_index = l\n\n if r != - 1 and self._data[r] < self._data[min_index]:\n min_index = r\n\n if i != min_index:\n self._swaps.append((i, min_index))\n self._data[i], self._data[min_index] = \\\n self._data[min_index], self._data[i]\n self.swapdown(min_index)\n\n def GenerateSwaps(self):\n for i in range(len(self._data)//2 ,-1,-1):\n self.swapdown(i)\n\n def Solve(self):\n self.ReadData()\n self.GenerateSwaps()\n self.WriteResponse()\n\nif __name__ == '__main__':\n heap_builder = HeapBuilder()\n heap_builder.Solve()\n",
"_____no_output_____"
]
],
[
[
"# sift up initialized swap array",
"_____no_output_____"
]
],
[
[
"%%time\n# python3\n\nclass HeapBuilder:\n# index =0 \n# global index\n def __init__(self,n):\n self._swaps = [[None ,None]]*4*n #array of tuples or arrays\n self._data = []\n self.n = n\n self.index = 0 \n\n def ReadData(self):\n# n = int(input())\n self._data = [int(s) for s in input().split()]\n assert self.n == len(self._data)\n\n def WriteResponse(self):\n print(self.index)\n for _ in range(self.index):\n print(self._swaps[_][0],self._swaps[_][1])\n# print(len(self._swaps))\n# for swap in self._swaps:\n# print(swap[0], swap[1])\n\n def swapup(self,i):\n if i !=0:\n# print(self._data[int((i-1)/2)], self._data[i])\n# print(self.index)\n# print(i)\n if self._data[int((i-1)/2)]> self._data[i]:\n# print('2')\n# self._swaps.append(((int((i-1)/2)),i))\n self._swaps[self.index] = ((int((i-1)/2)),i)\n# print(((int((i-1)/2)),i))\n self.index+=1\n# print(self.index)\n self._data[int((i-1)/2)], self._data[i] = self._data[i],self._data[int((i-1)/2)]\n self.swapup(int((i-1)/2))\n\n def GenerateSwaps(self):\n # The following naive implementation just sorts \n # the given sequence using selection sort algorithm\n # and saves the resulting sequence of swaps.\n # This turns the given array into a heap, \n # but in the worst case gives a quadratic number of swaps.\n #\n # TODO: replace by a more efficient implementation\n # efficient implementation is complete binary tree. but here you're not getting data 1 by 1, instead everything at once\n # so for i in range(0,n), implement swap up ai < a2i+1 ai < a2i+2\n for i in range(self.n-1,0,-1):\n# print(i)\n self.swapup(i)\n# print('1')\n# for j in range(i + 1, len(self._data)):\n# if self._data[i] > self._data[j]:\n# self._swaps.append((i, j))\n# self._data[i], self._data[j] = self._data[j], self._data[i]\n\n def Solve(self):\n self.ReadData()\n self.GenerateSwaps()\n self.WriteResponse()\n\nif __name__ == '__main__':\n n = int(input())\n heap_builder = HeapBuilder(n)\n heap_builder.Solve()\n assert(len(heap_builder._swaps)<=4*len(heap_builder._data))",
"_____no_output_____"
],
[
"a = [None]*4\nfor i in range(4):\n a[i] = (i,i+1000000)\nprint(a)\n",
"_____no_output_____"
],
[
"k = 100\nfor i in range(k,0,-1):\n print(i,end = ' ')",
"_____no_output_____"
],
[
"%%time\n# python3\n\nclass HeapBuilder:\n def __init__(self):\n self._swaps = [] #array of tuples or arrays\n self._data = []\n\n def ReadData(self):\n n = int(input())\n self._data = [int(s) for s in input().split()]\n assert n == len(self._data)\n\n def WriteResponse(self):\n print(len(self._swaps))\n for swap in self._swaps:\n print(swap[0], swap[1])\n\n def swapup(self,i):\n if i !=0:\n if self._data[int((i-1)/2)]> self._data[i]:\n self._swaps.append(((int((i-1)/2)),i))\n self._data[int((i-1)/2)], self._data[i] = self._data[i],self._data[int((i-1)/2)]\n self.swapup(int((i-1)/2))\n\n def GenerateSwaps(self):\n for i in range(len(self._data)-1,0,-1):\n self.swapup(i)\n\n def Solve(self):\n self.ReadData()\n self.GenerateSwaps()\n self.WriteResponse()\n\nif __name__ == '__main__':\n heap_builder = HeapBuilder()\n heap_builder.Solve()\n",
"_____no_output_____"
],
[
" 26148864/536870912",
"_____no_output_____"
],
[
"0.30/3.00",
"_____no_output_____"
],
[
"# python3\n\n\nclass HeapBuilder:\n \"\"\"Converts an array of integers into a min-heap.\n A binary heap is a complete binary tree which satisfies the heap ordering\n property: the value of each node is greater than or equal to the value of\n its parent, with the minimum-value element at the root.\n Samples:\n >>> heap = HeapBuilder()\n >>> heap.array = [5, 4, 3, 2, 1]\n >>> heap.generate_swaps()\n >>> heap.swaps\n [(1, 4), (0, 1), (1, 3)]\n >>> # Explanation: After swapping elements 4 in position 1 and 1 in position\n >>> # 4 the array becomes 5 1 3 2 4. After swapping elements 5 in position 0\n >>> # and 1 in position 1 the array becomes 1 5 3 2 4. After swapping\n >>> # elements 5 in position 1 and 2 in position 3 the array becomes\n >>> # 1 2 3 5 4, which is already a heap, because a[0] = 1 < 2 = a[1],\n >>> # a[0] = 1 < 3 = a[2], a[1] = 2 < 5 = a[3], a[1] = 2 < 4 = a[4].\n >>> heap = HeapBuilder()\n >>> heap.array = [1, 2, 3, 4, 5]\n >>> heap.generate_swaps()\n >>> heap.swaps\n []\n >>> # Explanation: The input array is already a heap, because it is sorted\n >>> # in increasing order.\n \"\"\"\n\n def __init__(self):\n self.swaps = []\n self.array = []\n\n @property\n def size(self):\n return len(self.array)\n\n def read_data(self):\n \"\"\"Reads data from standard input.\"\"\"\n n = int(input())\n self.array = [int(s) for s in input().split()]\n assert n == self.size\n\n def write_response(self):\n \"\"\"Writes the response to standard output.\"\"\"\n print(len(self.swaps))\n for swap in self.swaps:\n print(swap[0], swap[1])\n\n def l_child_index(self, index):\n \"\"\"Returns the index of left child.\n If there's no left child, returns -1.\n \"\"\"\n l_child_index = 2 * index + 1\n if l_child_index >= self.size:\n return -1\n return l_child_index\n\n def r_child_index(self, index):\n \"\"\"Returns the index of right child.\n If there's no right child, returns -1.\n \"\"\"\n r_child_index = 2 * index + 2\n if r_child_index >= self.size:\n return -1\n return r_child_index\n\n def sift_down(self, i):\n \"\"\"Sifts i-th node down until both of its children have bigger value.\n At each step of swapping, indices of swapped nodes are appended\n to HeapBuilder.swaps attribute.\n \"\"\"\n min_index = i\n l = self.l_child_index(i)\n r = self.r_child_index(i)\n print(i,l,r)\n\n if l != -1 and self.array[l] < self.array[min_index]:\n min_index = l\n\n if r != - 1 and self.array[r] < self.array[min_index]:\n min_index = r\n\n if i != min_index:\n self.swaps.append((i, min_index))\n self.array[i], self.array[min_index] = \\\n self.array[min_index], self.array[i]\n self.sift_down(min_index)\n\n def generate_swaps(self):\n \"\"\"Heapify procedure.\n It calls sift down procedure 'size // 2' times. It's enough to make\n the heap completed.\n \"\"\"\n for i in range(self.size // 2, -1, -1):\n self.sift_down(i)\n\n def solve(self):\n self.read_data()\n self.generate_swaps()\n self.write_response()\n\n\nif __name__ == \"__main__\":\n heap_builder = HeapBuilder()\n heap_builder.solve()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb842d56ddf1a1d61272797248c3f638b1414bc3
| 104,125 |
ipynb
|
Jupyter Notebook
|
examples/summarization.ipynb
|
dhawalkp/notebooks
|
3aa82be2ca52467db51d0a2db4a317cc2660e19e
|
[
"Apache-2.0"
] | 1 |
2021-07-13T02:21:09.000Z
|
2021-07-13T02:21:09.000Z
|
examples/summarization.ipynb
|
prettyprettyboy/notebooks
|
46a1605abe8c2bb484941882a35274509b1262cb
|
[
"Apache-2.0"
] | null | null | null |
examples/summarization.ipynb
|
prettyprettyboy/notebooks
|
46a1605abe8c2bb484941882a35274509b1262cb
|
[
"Apache-2.0"
] | null | null | null | 38.867115 | 8,330 | 0.598127 |
[
[
[
"If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Uncomment the following cell and run it.",
"_____no_output_____"
]
],
[
[
"#! pip install datasets transformers rouge-score nltk",
"_____no_output_____"
]
],
[
[
"If you're opening this notebook locally, make sure your environment has the last version of those libraries installed.\n\nYou can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq).",
"_____no_output_____"
],
[
"# Fine-tuning a model on a summarization task",
"_____no_output_____"
],
[
"In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model for a summarization task. We will use the [XSum dataset](https://arxiv.org/pdf/1808.08745.pdf) (for extreme summarization) which contains BBC articles accompanied with single-sentence summaries.\n\n\n\nWe will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using the `Trainer` API.",
"_____no_output_____"
]
],
[
[
"model_checkpoint = \"t5-small\"",
"_____no_output_____"
]
],
[
[
"This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`t5-small`](https://huggingface.co/t5-small) checkpoint. ",
"_____no_output_____"
],
[
"## Loading the dataset",
"_____no_output_____"
],
[
"We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. ",
"_____no_output_____"
]
],
[
[
"from datasets import load_dataset, load_metric\n\nraw_datasets = load_dataset(\"xsum\")\nmetric = load_metric(\"rouge\")",
"Using custom data configuration default\nReusing dataset xsum (/home/sgugger/.cache/huggingface/datasets/xsum/default/1.2.0/f9abaabb5e2b2a1e765c25417264722d31877b34ec34b437c53242f6e5c30d6d)\n"
]
],
[
[
"The `dataset` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set:",
"_____no_output_____"
]
],
[
[
"raw_datasets",
"_____no_output_____"
]
],
[
[
"To access an actual element, you need to select a split first, then give an index:",
"_____no_output_____"
]
],
[
[
"raw_datasets[\"train\"][0]",
"_____no_output_____"
]
],
[
[
"To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.",
"_____no_output_____"
]
],
[
[
"import datasets\nimport random\nimport pandas as pd\nfrom IPython.display import display, HTML\n\ndef show_random_elements(dataset, num_examples=5):\n assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n picks = []\n for _ in range(num_examples):\n pick = random.randint(0, len(dataset)-1)\n while pick in picks:\n pick = random.randint(0, len(dataset)-1)\n picks.append(pick)\n \n df = pd.DataFrame(dataset[picks])\n for column, typ in dataset.features.items():\n if isinstance(typ, datasets.ClassLabel):\n df[column] = df[column].transform(lambda i: typ.names[i])\n display(HTML(df.to_html()))",
"_____no_output_____"
],
[
"show_random_elements(raw_datasets[\"train\"])",
"_____no_output_____"
]
],
[
[
"The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):",
"_____no_output_____"
]
],
[
[
"metric",
"_____no_output_____"
]
],
[
[
"You can call its `compute` method with your predictions and labels, which need to be list of decoded strings:",
"_____no_output_____"
]
],
[
[
"fake_preds = [\"hello there\", \"general kenobi\"]\nfake_labels = [\"hello there\", \"general kenobi\"]\nmetric.compute(predictions=fake_preds, references=fake_labels)",
"_____no_output_____"
]
],
[
[
"## Preprocessing the data",
"_____no_output_____"
],
[
"Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that the model requires.\n\nTo do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:\n\n- we get a tokenizer that corresponds to the model architecture we want to use,\n- we download the vocabulary used when pretraining this specific checkpoint.\n\nThat vocabulary will be cached, so it's not downloaded again the next time we run the cell.",
"_____no_output_____"
]
],
[
[
"from transformers import AutoTokenizer\n \ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)",
"_____no_output_____"
]
],
[
[
"By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.",
"_____no_output_____"
],
[
"You can directly call this tokenizer on one sentence or a pair of sentences:",
"_____no_output_____"
]
],
[
[
"tokenizer(\"Hello, this one sentence!\")",
"_____no_output_____"
]
],
[
[
"Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.\n\nInstead of one sentence, we can pass along a list of sentences:",
"_____no_output_____"
]
],
[
[
"tokenizer([\"Hello, this one sentence!\", \"This is another sentence.\"])",
"_____no_output_____"
]
],
[
[
"To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:",
"_____no_output_____"
]
],
[
[
"with tokenizer.as_target_tokenizer():\n print(tokenizer([\"Hello, this one sentence!\", \"This is another sentence.\"]))",
"{'input_ids': [[8774, 6, 48, 80, 7142, 55, 1], [100, 19, 430, 7142, 5, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]}\n"
]
],
[
[
"If you are using one of the five T5 checkpoints we have to prefix the inputs with \"summarize:\" (the model can also translate and it needs the prefix to know which task it has to perform).",
"_____no_output_____"
]
],
[
[
"if model_checkpoint in [\"t5-small\", \"t5-base\", \"t5-larg\", \"t5-3b\", \"t5-11b\"]:\n prefix = \"summarize: \"\nelse:\n prefix = \"\"",
"_____no_output_____"
]
],
[
[
"We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer that what the model selected can handle will be truncated to the maximum length accepted by the model. The padding will be dealt with later on (in a data collator) so we pad examples to the longest length in the batch and not the whole dataset.",
"_____no_output_____"
]
],
[
[
"max_input_length = 1024\nmax_target_length = 128\n\ndef preprocess_function(examples):\n inputs = [prefix + doc for doc in examples[\"document\"]]\n model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)\n\n # Setup the tokenizer for targets\n with tokenizer.as_target_tokenizer():\n labels = tokenizer(examples[\"summary\"], max_length=max_target_length, truncation=True)\n\n model_inputs[\"labels\"] = labels[\"input_ids\"]\n return model_inputs",
"_____no_output_____"
]
],
[
[
"This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:",
"_____no_output_____"
]
],
[
[
"preprocess_function(raw_datasets['train'][:2])",
"_____no_output_____"
]
],
[
[
"To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.",
"_____no_output_____"
]
],
[
[
"tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)",
"Loading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/xsum/default/1.2.0/f9abaabb5e2b2a1e765c25417264722d31877b34ec34b437c53242f6e5c30d6d/cache-798454ba84bcf26e.arrow\nLoading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/xsum/default/1.2.0/f9abaabb5e2b2a1e765c25417264722d31877b34ec34b437c53242f6e5c30d6d/cache-7bca00a09a5288ab.arrow\nLoading cached processed dataset at /home/sgugger/.cache/huggingface/datasets/xsum/default/1.2.0/f9abaabb5e2b2a1e765c25417264722d31877b34ec34b437c53242f6e5c30d6d/cache-288693030a39fd3d.arrow\n"
]
],
[
[
"Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires to not use the cache data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files, you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again.\n\nNote that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.",
"_____no_output_____"
],
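[
"As a small, optional illustration (not part of the original notebook): if you wanted to force the preprocessing to run again instead of reusing the cache, the flag mentioned above would be passed like this:\n\n```python\ntokenized_datasets = raw_datasets.map(\n preprocess_function, batched=True, load_from_cache_file=False\n)\n```",
"_____no_output_____"
],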
[
"## Fine-tuning the model",
"_____no_output_____"
],
[
"Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `AutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.",
"_____no_output_____"
]
],
[
[
"from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer\n\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)",
"_____no_output_____"
]
],
[
[
"Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case.",
"_____no_output_____"
],
[
"To instantiate a `Seq2SeqTrainer`, we will need to define three more things. The most important is the [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments), which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:",
"_____no_output_____"
]
],
[
[
"batch_size = 16\nargs = Seq2SeqTrainingArguments(\n \"test-summarization\",\n evaluation_strategy = \"epoch\",\n learning_rate=2e-5,\n per_device_train_batch_size=batch_size,\n per_device_eval_batch_size=batch_size,\n weight_decay=0.01,\n save_total_limit=3,\n num_train_epochs=1,\n predict_with_generate=True,\n fp16=True,\n)",
"_____no_output_____"
]
],
[
[
"Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the `batch_size` defined at the top of the cell and customize the weight decay. Since the `Seq2SeqTrainer` will save the model regularly and our dataset is quite large, we tell it to make three saves maximum. Lastly, we use the `predict_with_generate` option (to properly generate summaries) and activate mixed precision training (to go a bit faster).\n\nThen, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels:",
"_____no_output_____"
]
],
[
[
"data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)",
"_____no_output_____"
]
],
[
[
"The last thing to define for our `Seq2SeqTrainer` is how to compute the metrics from the predictions. We need to define a function for this, which will just use the `metric` we loaded earlier, and we have to do a bit of pre-processing to decode the predictions into texts:",
"_____no_output_____"
]
],
[
[
"import nltk\nimport numpy as np\n\ndef compute_metrics(eval_pred):\n predictions, labels = eval_pred\n decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)\n # Replace -100 in the labels as we can't decode them.\n labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\n decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\n \n # Rouge expects a newline after each sentence\n decoded_preds = [\"\\n\".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]\n decoded_labels = [\"\\n\".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]\n \n result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\n # Extract a few results\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\n \n # Add mean generated length\n prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]\n result[\"gen_len\"] = np.mean(prediction_lens)\n \n return {k: round(v, 4) for k, v in result.items()}",
"_____no_output_____"
]
],
[
[
"Then we just need to pass all of this along with our datasets to the `Seq2SeqTrainer`:",
"_____no_output_____"
]
],
[
[
"trainer = Seq2SeqTrainer(\n model,\n args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"validation\"],\n data_collator=data_collator,\n tokenizer=tokenizer,\n compute_metrics=compute_metrics\n)",
"_____no_output_____"
]
],
[
[
"We can now finetune our model by just calling the `train` method:",
"_____no_output_____"
]
],
[
[
"trainer.train()",
"_____no_output_____"
]
],
[
[
"Don't forget to [upload your model](https://huggingface.co/transformers/model_sharing.html) on the [🤗 Model Hub](https://huggingface.co/models). You can then use it only to generate results like the one shown in the first picture of this notebook!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb8436165633fb83c974b79a4f513e329871f035
| 46,599 |
ipynb
|
Jupyter Notebook
|
notebooks/communities/pyladies/Python 101.ipynb
|
marchub/docker-demo-images
|
f0fc1347e0bda5a93af71c8cf9260cf198430bf2
|
[
"BSD-3-Clause"
] | 85 |
2015-01-21T18:42:35.000Z
|
2020-03-16T23:34:12.000Z
|
notebooks/communities/pyladies/Python 101.ipynb
|
Siddhesh2097/docker-demo-images
|
e9c7812503fe047b0b659e01598b95a20351148b
|
[
"BSD-3-Clause"
] | 85 |
2015-01-21T17:10:49.000Z
|
2018-03-04T12:24:32.000Z
|
notebooks/communities/pyladies/Python 101.ipynb
|
Siddhesh2097/docker-demo-images
|
e9c7812503fe047b0b659e01598b95a20351148b
|
[
"BSD-3-Clause"
] | 167 |
2015-01-22T18:34:09.000Z
|
2021-10-23T01:41:33.000Z
| 23.416583 | 425 | 0.491427 |
[
[
[
"# Welcome to Python 101\n\n<a href=\"http://pyladies.org\"><img align=\"right\" src=\"http://www.pyladies.com/assets/images/pylady_geek.png\" alt=\"Pyladies\" style=\"position:relative;top:-80px;right:30px;height:50px;\" /></a>\n\nWelcome! This notebook is appropriate for people who have never programmed before. A few tips:\n\n- To execute a cell, click in it and then type `[shift]` + `[enter]`\n- This notebook's kernel will restart if the page becomes idle for 10 minutes, meaning you'll have to rerun steps again\n- Try.jupyter.org is awesome, and <a href=\"http://rackspace.com\">Rackspace</a> is awesome for hosting this, but you will want your own Python on your computer too. Hopefully you are in a class and someone helped you install. If not:\n + [Anaconda][anaconda-download] is great if you use Windows\n or will only use Python for data analysis.\n + If you want to contribute to open source code, you want the standard\n Python release. (Follow\n the [Hitchhiker's Guide to Python][python-guide].)\n\n\n## Outline\n\n- Operators and functions\n- Data and container types\n- Control structures\n- I/O, including basic web APIs\n- How to write and run a Python script\n\n[anaconda-download]: http://continuum.io/downloads\n[python-guide]: http://docs.python-guide.org/",
"_____no_output_____"
],
[
"### First, try Python as a calculator.\n\nPython can be used as a shell interpreter. After you install Python, you can open a command line terminal (*e.g.* powershell or bash), type `python3` or `python`, and a Python shell will open.\n\nFor now, we are using the notebook.\n\nHere is simple math. Go to town!",
"_____no_output_____"
]
],
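[
[
"A hedged sketch (not part of the original notebook) of what such an interactive terminal session can look like; the `>>>` prompt means Python is waiting for your input:\n\n```\n$ python3\n>>> 1 + 1\n2\n>>> exit()\n```",
"_____no_output_____"
]
],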
[
[
"1 + 1",
"_____no_output_____"
],
[
"3 / 4 # caution: in Python 2 the result will be an integer",
"_____no_output_____"
],
[
"7 ** 3",
"_____no_output_____"
]
],
[
[
"## Challenge for you\nThe arithmetic operators in Python are:\n```python\n + - * / ** % //\n```\nUse the Python interpreter to calculate:\n\n- 16 times 26515\n- 1835 [modulo][wiki-modulo] 163\n\n\n<p style=\"font-size:smaller\">(psst...)\nIf you're stuck, try</p>\n\n```python\nhelp()\n```\n<p style=\"font-size:smaller\">and then in the interactive box, type <tt>symbols</tt>\n</p>\n\n[wiki-modulo]: https://en.wikipedia.org/wiki/Modulo_operation",
"_____no_output_____"
],
[
"## More math requires the math module",
"_____no_output_____"
]
],
[
[
"import math\n\nprint(\"The square root of 3 is:\", math.sqrt(3))\nprint(\"pi is:\", math.pi)\nprint(\"The sin of 90 degrees is:\", math.sin(math.radians(90)))",
"The square root of 3 is: 1.7320508075688772\npi is: 3.141592653589793\nThe sin of 90 degrees is: 1.0\n"
]
],
[
[
"- The `import` statement imports the module into the namespace\n- Then access functions (or constants) by using:\n```python\n <module>.<function>\n``` \n- And get help on what is in the module by using:\n```python\n help(<module>)\n```",
"_____no_output_____"
],
[
"## Challenge for you\nHint: `help(math)` will show all the functions...\n\n- What is the arc cosine of `0.743144` in degrees?",
"_____no_output_____"
]
],
[
[
"from math import acos, degrees # use 'from' sparingly\n\nint(degrees(acos(0.743144))) # 'int' to make an integer",
"_____no_output_____"
]
],
[
[
"## Math takeaways\n- Operators are what you think\n- Be careful of unintended integer math\n- the `math` module has the remaining functions",
"_____no_output_____"
],
[
"# Strings\n\n(Easier in Python than in any other language ever. Even Perl.)",
"_____no_output_____"
],
[
"## Strings\nUse `help(str)` to see available functions for string objects. For help on a particular function from the class, type the class name and the function name: `help(str.join)`\n\nString operations are easy:\n```\ns = \"foobar\"\n\n\"bar\" in s\ns.find(\"bar\")\nindex = s.find(\"bar\")\ns[:index]\ns[index:] + \" this is intuitive! Hooray!\"\ns[-1] # The last element in the list or string\n```\n\nStrings are **immutable**, meaning they cannot be modified, only copied or replaced. (This is related to memory use, and interesting for experienced programmers ... don't worry if you don't get what this means.)",
"_____no_output_____"
]
],
[
[
"# Here's to start.\ns = \"foobar\"\n\n\"bar\" in s",
"_____no_output_____"
],
[
"# You try out the other ones!\ns.find(\"bar\")",
"_____no_output_____"
]
],
[
[
"## Challenge for you\nUsing only string addition (concatenation) and the function `str.join`, combine `declaration` and `sayings` :\n\n```python\ndeclaration = \"We are the knights who say:\\n\"\nsayings = ['\"icky\"'] * 3 + ['\"p\\'tang\"']\n# the (\\') escapes the quote\n```\n\nto a variable, `sentence`, that when printed does this:\n\n```python\n>>> print(sentence)\nWe are the knights who say:\n\"icky\", \"icky\", \"icky\", \"p'tang\"!\n```",
"_____no_output_____"
]
],
[
[
"help(str.join)",
"Help on method_descriptor:\n\njoin(...)\n S.join(iterable) -> str\n \n Return a string which is the concatenation of the strings in the\n iterable. The separator between elements is S.\n\n"
],
[
"declaration = \"We are now the knights who say:\\n\"\nsayings = ['\"icky\"'] * 3 + ['\"p\\'tang\"']\n\n# You do the rest -- fix the below :-)\nprint(sayings)",
"['\"icky\"', '\"icky\"', '\"icky\"', '\"p\\'tang\"']\n"
]
],
[
[
"### Don't peek until you're done with your own code!",
"_____no_output_____"
]
],
[
[
"sentence = declaration + \", \".join(sayings) + \"!\"\nprint(sentence)\nprint() # empty 'print' makes a newline",
"We are now the knights who say:\n\"icky\", \"icky\", \"icky\", \"p'tang\"!\n\n"
],
[
"# By the way, you use 'split' to split a string...\n# (note what happens to the commas):\nprint(\" - \".join(['ni'] * 12))\nprint(\"\\n\".join(\"icky, icky, icky, p'tang!\".split(\", \")))",
"ni - ni - ni - ni - ni - ni - ni - ni - ni - ni - ni - ni\nicky\nicky\nicky\np'tang!\n"
]
],
[
[
"## String formatting\nThere are a bunch of ways to do string formatting:\n- C-style:\n```python\n\"%s is: %.3f (or %d in Indiana)\" % \\\n (\"Pi\", math.pi, math.pi)\n# %s = string\n# %0.3f = floating point number, 3 places to the left of the decimal\n# %d = decimal number\n#\n# Style notes:\n# Line continuation with '\\' works but\n# is frowned upon. Indent twice\n# (8 spaces) so it doesn't look\n# like a control statement\n```",
"_____no_output_____"
]
],
[
[
"print(\"%s is: %.3f (well, %d in Indiana)\" % (\"Pi\", math.pi, math.pi))",
"Pi is: 3.142 (well, 3 in Indiana)\n"
]
],
[
[
"- New in Python 2.6, `str.format` doesn't require types:\n```python\n\"{0} is: {1} ({1:3.2} truncated)\".format(\n \"Pi\", math.pi)\n# More style notes:\n# Line continuation in square or curly\n# braces or parenthesis is better.\n``` ",
"_____no_output_____"
]
],
[
[
"# Use a colon and then decimals to control the\n# number of decimals that print out.\n#\n# Also note the number {1} appears twice, so that\n# the argument `math.pi` is reused.\nprint(\"{0} is: {1} ({1:.3} truncated)\".format(\"Pi\", math.pi))",
"Pi is: 3.141592653589793 (3.14 truncated)\n"
]
],
[
[
"- And Python 2.7+ allows named specifications:\n```python\n\"{pi} is {pie:05.3}\".format(\n pi=\"Pi\", pie=math.pi)\n# 05.3 = zero-padded number, with\n# 5 total characters, and\n# 3 significant digits.\n```",
"_____no_output_____"
]
],
[
[
"# Go to town -- change the decimal places!\nprint(\"{pi} is: {pie:05.2}\".format(pi=\"Pi\", pie=math.pi))",
"Pi is: 003.1\n"
]
],
[
[
"## String takeaways\n- `str.split` and `str.join`, plus the **regex** module (pattern matching tools for strings), make Python my language of choice for data manipulation\n- There are many ways to format a string\n- `help(str)` for more",
"_____no_output_____"
],
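[
"An optional, hedged aside (not part of the original lesson): a tiny pattern-matching example with the standard-library `re` module could look like this:\n\n```python\nimport re\n\nm = re.search(r\"say: ([a-z]+)\", \"We are the knights who say: ni\")\nif m:\n print(m.group(1)) # prints: ni\n```",
"_____no_output_____"
],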
[
"# Quick look at other types",
"_____no_output_____"
]
],
[
[
"# Boolean\nx = True\ntype(x)",
"_____no_output_____"
]
],
[
[
"## Python has containers built in...\n\nLists, dictionaries, sets. We will talk about them later.\nThere is also a library [`collections`][collections] with additional specialized container types.\n\n[collections]: https://docs.python.org/3/library/collections.html",
"_____no_output_____"
]
],
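[
[
"# A hedged aside (not in the original notebook): the `collections` module\n# mentioned above provides specialized containers. For example, Counter\n# tallies how many times each item appears.\nfrom collections import Counter\n\nCounter(\"icky icky icky\".split())",
"_____no_output_____"
]
],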
[
[
"# Lists can contain multiple types\nx = [True, 1, 1.2, 'hi', [1], (1,2,3), {}, None]\ntype(x)\n# (the underscores are for special internal variables)",
"_____no_output_____"
],
[
"# List access. Try other numbers!\nx[1]",
"_____no_output_____"
],
[
"print(\"x[0] is:\", x[0], \"... and x[1] is:\", x[1]) # Python is zero-indexed",
"x[0] is: True ... and x[1] is: 1\n"
],
[
"x.append(set([\"a\", \"b\", \"c\"]))\n\nfor item in x:\n print(item, \"... type =\", type(item))",
"True ... type = <class 'bool'>\n1 ... type = <class 'int'>\n1.2 ... type = <class 'float'>\nhi ... type = <class 'str'>\n[1] ... type = <class 'list'>\n(1, 2, 3) ... type = <class 'tuple'>\n{} ... type = <class 'dict'>\nNone ... type = <class 'NoneType'>\n{'a', 'c', 'b'} ... type = <class 'set'>\n"
]
],
[
[
"If you need to check an object's type, do this:\n\n```python\nisinstance(x, list)\nisinstance(x[1], bool)\n```",
"_____no_output_____"
]
],
[
[
"# You do it!\nisinstance(x, tuple)",
"_____no_output_____"
]
],
[
[
"## Caveat\nLists, when copied, are copied by pointer. What that means is every symbol that points to a list, points to that same list.\n\nSame with dictionaries and sets.\n\n### Example:\n```python\nfifth_element = x[4]\nfifth_element.append(\"Both!\")\nprint(fifth_element)\nprint(x)\n```\n\nWhy? The assignment (`=`) operator copies the pointer to the place on the computer where the list (or dictionary or set) is: it does not copy the actual contents of the whole object, just the address where the data is in the computer. This is efficent because the object could be megabytes big.\n",
"_____no_output_____"
]
],
[
[
"# You do it!\nfifth_element = x[4]\nprint(fifth_element)\n\nfifth_element.append(\"Both!\")\nprint(fifth_element)\n\n# and see, the original list is changed too!\nprint(x)",
"[1]\n[1, 'Both!']\n[True, 1, 1.2, 'hi', [1, 'Both!'], (1, 2, 3), {}, None, {'a', 'c', 'b'}]\n"
]
],
[
[
"### To make a duplicate copy you must do it explicitly\n[The copy module ] [copy]\n\nExample:\n\n```python\nimport copy\n\n# -------------------- A shallow copy\nx[4] = [\"list\"]\nshallow_copy_of_x = copy.copy(x)\nshallow_copy_of_x[0] = \"Shallow copy\"\nfifth_element = x[4]\nfifth_element.append(\"Both?\")\n\ndef print_list(l):\n print(\"-\" * 10)\n for elem in l:\n print(elem)\n print()\n\n\n# look at them\nprint_list(shallow_copy_of_x)\nprint_list(x)\nfifth_element\n```\n\n[copy]: https://docs.python.org/3/library/copy.html",
"_____no_output_____"
]
],
[
[
"import copy\n\n# -------------------- A shallow copy\nx[4] = [\"list\"]\nshallow_copy_of_x = copy.copy(x)\nshallow_copy_of_x[0] = \"Shallow copy\"\nfifth_element = x[4]\nfifth_element.append(\"Both?\")",
"_____no_output_____"
],
[
"# look at them\ndef print_list(l):\n print(\"-\" * 8, \"the list, element-by-element\", \"-\" * 8)\n for elem in l:\n print(elem)\n print()\n\nprint_list(shallow_copy_of_x)\nprint_list(x)",
"-------- the list, element-by-element --------\nShallow copy\n1\n1.2\nhi\n['list', 'Both?']\n(1, 2, 3)\n{}\nNone\n{'a', 'c', 'b'}\n\n-------- the list, element-by-element --------\nTrue\n1\n1.2\nhi\n['list', 'Both?']\n(1, 2, 3)\n{}\nNone\n{'a', 'c', 'b'}\n\n"
]
],
[
[
"## And here is a deep copy\n\n```python\n# -------------------- A deep copy\n\nx[4] = [\"list\"]\ndeep_copy_of_x = copy.deepcopy(x)\ndeep_copy_of_x[0] = \"Deep copy\"\nfifth_element = deep_copy_of_x[4]\nfifth_element.append(\"Both?\")\n\n# look at them\nprint_list(deep_copy_of_x)\nprint_list(x)\nfifth_element\n```",
"_____no_output_____"
]
],
[
[
"# -------------------- A deep copy\n\nx[4] = [\"list\"]\ndeep_copy_of_x = copy.deepcopy(x)\ndeep_copy_of_x[0] = \"Deep copy\"\nfifth_element = deep_copy_of_x[4]\nfifth_element.append(\"Both? -- no, just this one got it!\")\n\n# look at them\nprint(fifth_element)\nprint(\"\\nand...the fifth element in the original list:\")\nprint(x[4])",
"['list', 'Both? -- no, just this one got it!']\n\nand...the fifth element in the original list:\n['list']\n"
]
],
[
[
"## Common atomic types\n\n<table style=\"border:3px solid white;\"><tr>\n<td> boolean</td>\n<td> integer </td>\n<td> float </td>\n<td>string</td>\n<td>None</td>\n</tr><tr>\n<td><tt>True</tt></td>\n<td><tt>42</tt></td>\n<td><tt>42.0</tt></td>\n<td><tt>\"hello\"</tt></td>\n<td><tt>None</tt></td>\n</tr></table>\n\n## Common container types\n\n<table style=\"border:3px solid white;\"><tr>\n<td> list </td>\n<td> tuple </td>\n<td> set </td>\n<td>dictionary</td>\n</tr><tr style=\"font-size:smaller;\">\n<td><ul style=\"margin:5px 2px 0px 15px;\"><li>Iterable</li><li>Mutable</li>\n <li>No restriction on elements</li>\n <li>Elements are ordered</li></ul></td>\n<td><ul style=\"margin:5px 2px 0px 15px;\"><li>Iterable</li><li>Immutable</li>\n <li>Elements must be hashable</li>\n <li>Elements are ordered</li></ul></td>\n<td><ul style=\"margin:5px 2px 0px 15px;\"><li>Iterable</li><li>Mutable</li>\n <li>Elements are<br/>\n unique and must<br/>\n be hashable</li>\n <li>Elements are not ordered</li></ul></td>\n<td><ul style=\"margin:5px 2px 0px 15px;\"><li>Iterable</li><li>Mutable</li>\n <li>Key, value pairs.<br/>\n Keys are unique and<br/>\n must be hashable</li>\n <li>Keys are not ordered</li></ul></td>\n</tr></table>",
"_____no_output_____"
],
[
"### Iterable\nYou can loop over it\n\n### Mutable\nYou can change it\n\n### Hashable\nA hash function converts an object to a number that will always be the same for the object. They help with identifying the object. A better explanation kind of has to go into the guts of the code...\n",
"_____no_output_____"
],
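[
"A hedged illustration (not part of the original): the built-in `hash` function shows what \"hashable\" means in practice. Immutable objects work; mutable containers do not.\n\n```python\nhash(\"spam\") # strings are hashable\nhash((1, 2)) # so are tuples of hashable items\n# hash([1, 2]) # would raise TypeError: unhashable type: 'list'\n```",
"_____no_output_____"
],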
[
"# Container examples",
"_____no_output_____"
],
[
"## List\n- To make a list, use square braces.",
"_____no_output_____"
]
],
[
[
"l = [\"a\", 0, [1, 2] ]\nl[1] = \"second element\"\n\ntype(l)",
"_____no_output_____"
],
[
"print(l)",
"['a', 'second element', [1, 2]]\n"
]
],
[
[
"- Items in a list can be anything: <br/>\n sets, other lists, dictionaries, atoms",
"_____no_output_____"
]
],
[
[
"indices = range(len(l))\nprint(indices)",
"range(0, 3)\n"
],
[
"# Iterate over the indices using i=0, i=1, i=2\nfor i in indices:\n print(l[i])",
"a\nsecond element\n[1, 2]\n"
],
[
"# Or iterate over the items in `x` directly\nfor x in l:\n print(x)",
"a\nsecond element\n[1, 2]\n"
]
],
[
[
"## Tuple\nTo make a tuple, use parenthesis.",
"_____no_output_____"
]
],
[
[
"t = (\"a\", 0, \"tuple\")\ntype(t)",
"_____no_output_____"
],
[
"for x in t:\n print x",
"_____no_output_____"
]
],
[
[
"## Set\nTo make a set, wrap a list with the function `set()`.\n- Items in a set are unique\n- Lists, dictionaries, and sets cannot be in a set",
"_____no_output_____"
]
],
[
[
"s = set(['a', 0])\nif 'b' in s:\n print(\"has b\")",
"_____no_output_____"
],
[
"s.add(\"b\")\ns.remove(\"a\")\n\nif 'b' in s:\n print(\"has b\")",
"has b\n"
],
[
"l = [1,2,3]\ntry:\n s.add(l)\nexcept TypeError:\n print(\"Could not add the list\")\n #raise # uncomment this to raise an error",
"Could not add the list\n"
]
],
[
[
"## Dictionary\nTo make a dictionary, use curly braces.\n- A dictionary is a set of key,value pairs where the keys\n are unique.\n- Lists, dictionaries, and sets cannot be dictionary keys\n- To iterate over a dictionary use `items`",
"_____no_output_____"
]
],
[
[
"# two ways to do the same thing\nd = {\"mother\":\"hamster\",\n \"father\":\"elderberries\"}\nd = dict(mother=\"hamster\",\n father=\"elderberries\")",
"_____no_output_____"
],
[
"d['mother']",
"_____no_output_____"
],
[
"print(\"the dictionary keys:\", d.keys())\nprint()\nprint(\"the dictionary values:\", d.values())",
"the dictionary keys: dict_keys(['father', 'mother'])\n\nthe dictionary values: dict_values(['elderberries', 'hamster'])\n"
],
[
"# When iterating over a dictionary, use items() and two variables:\nfor k, v in d.items():\n print(\"key: \", k, end=\" ... \")\n print(\"val: \", v)",
"key: father ... val: elderberries\nkey: mother ... val: hamster\n"
],
[
"# If you don't you will just get the keys:\nfor k in d:\n print(k)",
"father\nmother\n"
]
],
[
[
"## Type takeaways\n- Lists, tuples, dictionaries, sets all are base Python objects\n- Be careful of duck typing\n- Remember about copy / deepcopy\n\n```python\n# For more information, use help(object)\nhelp(tuple)\nhelp(set)\nhelp()\n```",
"_____no_output_____"
],
[
"## Function definition and punctuation\n\nThe syntax for creating a function is:\n\n```python\ndef function_name(arg1, arg2, kwarg1=default1):\n \"\"\"Docstring goes here -- triple quoted.\"\"\"\n pass # the 'pass' keyword means 'do nothing'\n \n\n# The next thing unindented statement is outside\n# of the function. Leave a blank line between the\n# end of the function and the next statement.\n```\n\n- The **def** keyword begins a function declaration.\n- The colon (`:`) finishes the signature.\n- The body must be indented. The indentation must be exactly the same.\n- There are no curly braces for function bodies in Python — white space at the beginning of a line tells Python that this line is \"inside\" the body of whatever came before it.\n\nAlso, at the end of a function, leave at least one blank line to separate the thought from the next thing in the script.",
"_____no_output_____"
]
],
[
[
"def function_name(arg1, arg2, kwarg1=\"my_default_value\"):\n \"\"\"Docstring goes here -- triple quoted.\"\"\"\n pass # the 'pass' keyword means 'do nothing'",
"_____no_output_____"
],
[
"# See the docstring appear when using `help`\nhelp(function_name)",
"Help on function function_name in module __main__:\n\nfunction_name(arg1, arg2, kwarg1='my_default_value')\n Docstring goes here -- triple quoted.\n\n"
]
],
[
[
"## Whitespace matters\nThe 'tab' character **'\\t'** counts as one single character even if it looks like multiple characters in your editor.\n\n**But indentation is how you denote nesting!**\n\nSo, this can seriously mess up your coding. The [Python style guide][pep8] recommends configuring your editor to make the tab keypress type four spaces automatically.\n\nTo set the spacing for Python code in Sublime, go to **Sublime Text** → **Preferences** → **Settings - More** → **Syntax Specific - User**\n\nIt will open up the file **Python.sublime-settings**. Please put this inside, then save and close.\n\n```\n{\n \"tab_size\": 4,\n \"translate_tabs_to_spaces\": true\n}\n```\n\n[pep8]: https://www.python.org/dev/peps/pep-0008/",
"_____no_output_____"
],
[
"## Your first function\nCopy this and paste it in the cell below\n```python\ndef greet_person(person):\n \"\"\"Greet the named person.\n\n usage:\n >>> greet_person(\"world\")\n hello world\n \"\"\"\n print('hello', person)\n```",
"_____no_output_____"
]
],
[
[
"# Paste the function definition below:\n",
"_____no_output_____"
],
[
"# Here's the help statement\nhelp(greet_person)",
"Help on function greet_person in module __main__:\n\ngreet_person(person)\n Greet the named person.\n \n usage:\n >>> greet_person(\"world\")\n\n"
],
[
"# And here's the function in action!\ngreet_person(\"world\")",
"hello world\n"
]
],
[
[
"## Duck typing\nPython's philosophy for handling data types is called **duck typing** (If it walks like a duck, and quacks like a duck, it's a duck). Functions do no type checking — they happily process an argument until something breaks. This is great for fast coding but can sometimes make for odd errors. (If you care to specify types, there is a [standard way to do it][pep484], but don't worry about this if you're a beginner.)\n\n[pep484]: https://www.python.org/dev/peps/pep-0484/",
"_____no_output_____"
],
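[
"A hedged aside (not part of the original lesson): if you do want to declare types, the optional annotation syntax from PEP 484 looks like this. Python does not enforce these hints at runtime; they only help readers and tools.\n\n```python\ndef greet_person(person: str) -> None:\n print('hello', person)\n```",
"_____no_output_____"
],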
[
"## Challenge for you\nCreate another function named `greet_people` that takes a list of people and greets them all one by one. Hint: you can call the function `greet_person`.",
"_____no_output_____"
]
],
[
[
"# your function\ndef greet_people(list_of_people)\n \"\"\"Documentation string goes here.\"\"\"\n # You do it here!\n pass",
"_____no_output_____"
]
],
[
[
"### don't peek...",
"_____no_output_____"
]
],
[
[
"def greet_people(list_of_people):\n for person in list_of_people:\n greet_person(person)",
"_____no_output_____"
],
[
"greet_people([\"world\", \"awesome python user!\", \"rockstar!!!\"])",
"hello world\nhello awesome python user!\nhello rockstar!!!\n"
]
],
[
[
"## Quack quack\nMake a list of all of the people in your group and use your function to greet them:\n\n```python\npeople = [\"King Arthur\",\n \"Sir Galahad\",\n \"Sir Robin\"]\ngreet_people(people)\n\n# What do you think will happen if I do:\ngreet_people(\"pyladies\")\n```",
"_____no_output_____"
]
],
[
[
"# Try it!\n",
"_____no_output_____"
]
],
[
[
"## WTW?\nRemember strings are iterable...\n\n\n<div style=\"text-align:center;\">quack!</div>\n<div style=\"text-align:right;\">quack!</div>",
"_____no_output_____"
],
[
"## Whitespace / duck typing takeways\n\n- Indentation is how to denote nesting in Python\n- Do not use tabs; expand them to spaces\n- If it walks like a duck and quacks like a duck, it's a duck ",
"_____no_output_____"
],
[
"# Control structures\n\n### Common comparison operators\n<table style=\"border:3px solid white;\"><tr>\n<td><tt>==</tt></td>\n<td><tt>!=</tt></td>\n<td><tt><=</tt> or <tt><</tt><br/>\n <tt>>=</tt> or <tt>></tt></td>\n<td><tt>x in (1, 2)</tt></td>\n<td><tt>x is None<br/>\n x is not None</tt></td>\n</tr><tr style=\"font-size:smaller;\">\n<td>equals</td>\n<td>not equals</td>\n<td>less or<br/>equal, etc.</td>\n<td>works for sets,<br/>\n lists, tuples,<br/>\n dictionary keys,<br/>\n strings</td>\n<td>just for <tt>None</tt></td>\n</tr></table>",
"_____no_output_____"
],
[
"### If statement\n\nThe `if` statement checks whether the condition after `if` is true.\nNote the placement of colons (`:`) and the indentation. These are not optional.\n\n- If it is, it does the thing below it.\n- Otherwise it goes to the next comparison.\n- You do not need any `elif` or `else` statements if you only\n want to do something if your test condition is true.\n\nAdvanced users, there is no switch statement in Python.",
"_____no_output_____"
]
],
[
[
"# Standard if / then / else statement.\n#\n# Go ahead and change `i`\ni = 1\n\nif i is None:\n print(\"None!\")\nelif i % 2 == 0:\n print(\"`i` is an even number!\")\nelse:\n print(\"`i` is neither None nor even\")",
"`i` is neither None nor even\n"
],
[
"# This format is for very short one-line if / then / else.\n# It is called a `ternary` statement.\n#\n\"Y\" if i==1 else \"N\"",
"_____no_output_____"
]
],
[
[
"### While loop\n\nThe `while` loop requires you to set up something first. Then it\ntests whether the statement after the `while` is true. \nAgain note the colon (`:`) and the indentation.\n\n- If the condition is true, then the body of\n the `while` loop will execute\n- Otherwise it will break out of the loop and go on\n to the next code underneath the `while` block",
"_____no_output_____"
]
],
[
[
"i = 0\nwhile i < 3:\n print(\"i is:\", i)\n i += 1\n\nprint(\"We exited the loop, and now i is:\", i)",
"i is: 0\ni is: 1\ni is: 2\nWe exited the loop, and now i is: 3\n"
]
],
[
[
"### For loop\n\nThe `for` loop iterates over the items after the `for`,\nexecuting the body of the loop once per item.",
"_____no_output_____"
]
],
[
[
"for i in range(3):\n print(\"in the for loop. `i` is:\", i)\n\nprint()\nprint(\"outside the for loop. `i` is:\", i)",
"in the for loop. `i` is: 0\nin the for loop. `i` is: 1\nin the for loop. `i` is: 2\n\noutside the for loop. `i` is: 2\n"
],
[
"# or loop directly over a list or tuple\nfor element in (\"one\", 2, \"three\"):\n print(\"in the for loop. `element` is:\", element)\n\nprint()\nprint(\"outside the for loop. `element` is:\", element)",
"in the for loop. `element` is: one\nin the for loop. `element` is: 2\nin the for loop. `element` is: three\n\noutside the for loop. `element` is: three\n"
]
],
[
[
"## Challenge for you\nPlease look at this code and think of what will happen, then copy it and run it. We introduce `break` and `continue`...can you tell what they do?\n\n- When will it stop?\n- What will it print out?\n- What will `i` be at the end?\n\n```python\nfor i in range(20):\n if i == 15:\n break\n elif i % 2 == 0:\n continue\n for j in range(5):\n print(i + j, end=\"...\")\n print() # newline\n```",
"_____no_output_____"
]
],
[
[
"# Paste it here, and run!",
"_____no_output_____"
]
],
[
[
"# You are done, welcome to Python!\n\n## ... and you rock!",
"_____no_output_____"
],
[
"### Now join (or start!) a friendly PyLadies group near you ...\n\n[PyLadies locations][locations]\n[locations]: http://www.pyladies.com/locations/",
"_____no_output_____"
],
[
"<div style=\"font-size:80%;color:#333333;text-align:center;\">\n<h4>Psst...contribute to this repo!</h4>\n\n<span style=\"font-size:70%;\">\nHere is the \n<a href=\"https://github.com/jupyter/docker-demo-images\">\nlink to the github repo that hosts these\n</a>.\nMake them better!\n</span>\n</div>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
cb844ec558a69a84cc8ffc1fcb3e3894afe18b65
| 4,635 |
ipynb
|
Jupyter Notebook
|
tutorials/W0D0_NeuroVideoSeries/W0D0_Tutorial6.ipynb
|
ofou/course-content
|
04cc6450ee20c57c7832f86da6826e516d2daeed
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
tutorials/W0D0_NeuroVideoSeries/W0D0_Tutorial6.ipynb
|
ofou/course-content
|
04cc6450ee20c57c7832f86da6826e516d2daeed
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null |
tutorials/W0D0_NeuroVideoSeries/W0D0_Tutorial6.ipynb
|
ofou/course-content
|
04cc6450ee20c57c7832f86da6826e516d2daeed
|
[
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | 28.090909 | 274 | 0.570874 |
[
[
[
"<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D0_NeuroVideoSeries/W0D0_Tutorial6.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D0_NeuroVideoSeries/W0D0_Tutorial6.ipynb)",
"_____no_output_____"
],
[
"# Brain Signals: LFP\n**Neuro Video Series**\n\n**By Neuromatch Academy**\n\n**Content creator**: Gaute Einevoll\n\n**Content reviewers**: Richard Gao, Jiaxin Tu, Tara van Viegen, Sirisha Sripada\n\n**Video editors, captioners, and translators**: Manisha Sinha, Tara van Viegen, Shuze Liu ",
"_____no_output_____"
],
[
"**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**\n\n<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>",
"_____no_output_____"
],
[
"## Video",
"_____no_output_____"
]
],
[
[
"# @markdown\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"https://player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV17o4y1C7K4\", width=854, height=480, fs=1)\n print(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"PwkYgrTE2fU\", width=854, height=480, fs=1, rel=0)\n print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"## Slides",
"_____no_output_____"
]
],
[
[
"# @markdown\nfrom IPython.display import IFrame\nurl = \"https://mfr.ca-1.osf.io/render?url=https://osf.io/w9tgp/?direct%26mode=render%26action=download%26mode=render\"\nIFrame(src=url, width=854, height=480)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb8451d794c5e662e1d36c2e0aa1ed1b93ff26ce
| 8,858 |
ipynb
|
Jupyter Notebook
|
retail/recommendation-system/bqml-scann/04_build_embeddings_scann.ipynb
|
barrosm/analytics-componentized-patterns
|
ebb80d118222f990a61a1ac739a375170e180376
|
[
"Apache-2.0"
] | 109 |
2020-07-30T13:25:24.000Z
|
2022-03-29T05:10:46.000Z
|
retail/recommendation-system/bqml-scann/04_build_embeddings_scann.ipynb
|
barrosm/analytics-componentized-patterns
|
ebb80d118222f990a61a1ac739a375170e180376
|
[
"Apache-2.0"
] | 16 |
2020-08-28T15:26:56.000Z
|
2022-03-21T13:29:26.000Z
|
retail/recommendation-system/bqml-scann/04_build_embeddings_scann.ipynb
|
barrosm/analytics-componentized-patterns
|
ebb80d118222f990a61a1ac739a375170e180376
|
[
"Apache-2.0"
] | 62 |
2020-08-28T14:18:37.000Z
|
2022-03-29T06:27:27.000Z
| 8,858 | 8,858 | 0.734816 |
[
[
[
"# Part 4: Create an approximate nearest neighbor index for the item embeddings\n\nThis notebook is the fourth of five notebooks that guide you through running the [Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN](https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/tree/master/retail/recommendation-system/bqml-scann) solution.\n\nUse this notebook to create an approximate nearest neighbor (ANN) index for the item embeddings by using the [ScaNN](https://github.com/google-research/google-research/tree/master/scann) framework. You create the index as a model, train the model on AI Platform Training, then export the index to Cloud Storage so that it can serve ANN information.\n\nBefore starting this notebook, you must run the [03_create_embedding_lookup_model](03_create_embedding_lookup_model.ipynb) notebook to process the item embeddings data and export it to Cloud Storage.\n\nAfter completing this notebook, run the [05_deploy_lookup_and_scann_caip](05_deploy_lookup_and_scann_caip.ipynb) notebook to deploy the solution. Once deployed, you can submit song IDs to the solution and get similar song recommendations in return, based on the ANN index.\n",
"_____no_output_____"
],
[
"## Setup\r\n\r\nImport the required libraries, configure the environment variables, and authenticate your GCP account.",
"_____no_output_____"
]
],
[
[
"!pip install -q scann",
"_____no_output_____"
]
],
[
[
"### Import libraries",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nfrom datetime import datetime",
"_____no_output_____"
]
],
[
[
"### Configure GCP environment settings\r\n\r\nUpdate the following variables to reflect the values for your GCP environment:\r\n\r\n+ `PROJECT_ID`: The ID of the Google Cloud project you are using to implement this solution.\r\n+ `BUCKET`: The name of the Cloud Storage bucket you created to use with this solution. The `BUCKET` value should be just the bucket name, so `myBucket` rather than `gs://myBucket`.\r\n+ `REGION`: The region to use for the AI Platform Training job.",
"_____no_output_____"
]
],
[
[
"PROJECT_ID = 'yourProject' # Change to your project.\nBUCKET = 'yourBucketName' # Change to the bucket you created.\nREGION = 'yourTrainingRegion' # Change to your AI Platform Training region.\nEMBEDDING_FILES_PREFIX = f'gs://{BUCKET}/bqml/item_embeddings/embeddings-*'\nOUTPUT_INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'",
"_____no_output_____"
]
],
[
[
"### Authenticate your GCP account\nThis is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.",
"_____no_output_____"
]
],
[
[
"try:\n from google.colab import auth\n auth.authenticate_user()\n print(\"Colab user is authenticated.\")\nexcept: pass",
"_____no_output_____"
]
],
[
[
"## Build the ANN index\r\n\r\nUse the `build` method implemented in the [indexer.py](index_builder/builder/indexer.py) module to load the embeddings from the CSV files, create the ANN index model and train it on the embedding data, and save the SavedModel file to Cloud Storage. You pass the following three parameters to this method:\r\n\r\n+ `embedding_files_path`, which specifies the Cloud Storage location from which to load the embedding vectors.\r\n+ `num_leaves`, which provides the value for a hyperparameter that tunes the model based on the trade-off between retrieval latency and recall. A higher `num_leaves` value will use more data and provide better recall, but will also increase latency. If `num_leaves` is set to `None` or `0`, the `num_leaves` value is the square root of the number of items.\r\n+ `output_dir`, which specifies the Cloud Storage location to write the ANN index SavedModel file to.\r\n\r\nOther configuration options for the model are set based on the [rules-of-thumb](https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md#rules-of-thumb) provided by ScaNN.",
"_____no_output_____"
],
[
"### Build the index locally",
"_____no_output_____"
]
],
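The `indexer.py` module itself is not reproduced in this notebook. As a rough, hypothetical sketch of what such a builder can look like with the public ScaNN Python API (the parameter values follow the ScaNN rules-of-thumb and are illustrative, not necessarily the solution's exact configuration):

```python
# hypothetical sketch of an index builder on top of the ScaNN Python API
import math
import numpy as np
import scann

def build_index(embeddings: np.ndarray, num_leaves: int = 0, num_neighbors: int = 10):
    # rule of thumb from the ScaNN docs: num_leaves ~ sqrt(number of items)
    if not num_leaves:
        num_leaves = int(math.sqrt(embeddings.shape[0]))

    searcher = (
        scann.scann_ops_pybind.builder(embeddings, num_neighbors, "dot_product")
        .tree(
            num_leaves=num_leaves,
            num_leaves_to_search=max(1, num_leaves // 10),
            training_sample_size=min(250_000, embeddings.shape[0]),
        )
        .score_ah(2, anisotropic_quantization_threshold=0.2)
        .reorder(100)
        .build()
    )
    return searcher

# usage sketch: neighbors, distances = build_index(item_embeddings).search(query_vector)
```

Note that the solution's actual module additionally wraps the index as a SavedModel so it can be written to Cloud Storage and served; the sketch above only covers the in-memory searcher.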
[
[
"from index_builder.builder import indexer\nindexer.build(EMBEDDING_FILES_PREFIX, OUTPUT_INDEX_DIR)",
"_____no_output_____"
]
],
[
[
"### Build the index using AI Platform Training\r\n\r\nSubmit an AI Platform Training job to build the ScaNN index at scale. The [index_builder](index_builder) directory contains the expected [training application packaging structure](https://cloud.google.com/ai-platform/training/docs/packaging-trainer) for submitting the AI Platform Training job.",
"_____no_output_____"
]
],
[
[
"if tf.io.gfile.exists(OUTPUT_INDEX_DIR):\n print(\"Removing {} contents...\".format(OUTPUT_INDEX_DIR))\n tf.io.gfile.rmtree(OUTPUT_INDEX_DIR)\n\nprint(\"Creating output: {}\".format(OUTPUT_INDEX_DIR))\ntf.io.gfile.makedirs(OUTPUT_INDEX_DIR)\n\ntimestamp = datetime.utcnow().strftime('%y%m%d%H%M%S')\njob_name = f'ks_bqml_build_scann_index_{timestamp}'\n\n!gcloud ai-platform jobs submit training {job_name} \\\n --project={PROJECT_ID} \\\n --region={REGION} \\\n --job-dir={OUTPUT_INDEX_DIR}/jobs/ \\\n --package-path=index_builder/builder \\\n --module-name=builder.task \\\n --config='index_builder/config.yaml' \\\n --runtime-version=2.2 \\\n --python-version=3.7 \\\n --\\\n --embedding-files-path={EMBEDDING_FILES_PREFIX} \\\n --output-dir={OUTPUT_INDEX_DIR} \\\n --num-leaves=500",
"_____no_output_____"
]
],
[
[
"After the AI Platform Training job finishes, check that the `scann_index` folder has been created in your Cloud Storage bucket:",
"_____no_output_____"
]
],
[
[
"!gsutil ls {OUTPUT_INDEX_DIR}",
"_____no_output_____"
]
],
[
[
"## Test the ANN index\r\n\r\nTest the ANN index by using the `ScaNNMatcher` class implemented in the [index_server/matching.py](index_server/matching.py) module.\r\n\r\nRun the following code snippets to create an item embedding from random generated values and pass it to `scann_matcher`, which returns the items IDs for the five items that are the approximate nearest neighbors of the embedding you submitted.",
"_____no_output_____"
]
],
[
[
"from index_server.matching import ScaNNMatcher\nscann_matcher = ScaNNMatcher(OUTPUT_INDEX_DIR)",
"_____no_output_____"
],
[
"vector = np.random.rand(50)\nscann_matcher.match(vector, 5)",
"_____no_output_____"
]
],
[
[
"## License\n\nCopyright 2020 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n\nSee the License for the specific language governing permissions and limitations under the License.\n\n**This is not an official Google product but sample code provided for an educational purpose**",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
cb845727ce7256132f87841363e83ea6161539d9
| 187,369 |
ipynb
|
Jupyter Notebook
|
Principal-Component-Analysis.ipynb
|
souparnabose99/PrincipalComponentAnalysisConcepts
|
497abc4db070217b31ba4e406e2b83aad10fff08
|
[
"MIT"
] | null | null | null |
Principal-Component-Analysis.ipynb
|
souparnabose99/PrincipalComponentAnalysisConcepts
|
497abc4db070217b31ba4e406e2b83aad10fff08
|
[
"MIT"
] | null | null | null |
Principal-Component-Analysis.ipynb
|
souparnabose99/PrincipalComponentAnalysisConcepts
|
497abc4db070217b31ba4e406e2b83aad10fff08
|
[
"MIT"
] | null | null | null | 175.933333 | 59,732 | 0.866605 |
[
[
[
"# Principal Component Analysis on Breast Cancer Dataset",
"_____no_output_____"
],
[
"<b>Transformation of data in order to find out what features explain the most variance in our data<b/>",
"_____no_output_____"
],
[
"<b>Import required libraries<b/>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"<b>Load dataset<b/>",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_breast_cancer",
"_____no_output_____"
],
[
"cancer = load_breast_cancer()",
"_____no_output_____"
],
[
"type(cancer)",
"_____no_output_____"
],
[
"cancer.keys()",
"_____no_output_____"
],
[
"print(cancer['DESCR'])",
".. _breast_cancer_dataset:\n\nBreast cancer wisconsin (diagnostic) dataset\n--------------------------------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 569\n\n :Number of Attributes: 30 numeric, predictive attributes and the class\n\n :Attribute Information:\n - radius (mean of distances from center to points on the perimeter)\n - texture (standard deviation of gray-scale values)\n - perimeter\n - area\n - smoothness (local variation in radius lengths)\n - compactness (perimeter^2 / area - 1.0)\n - concavity (severity of concave portions of the contour)\n - concave points (number of concave portions of the contour)\n - symmetry \n - fractal dimension (\"coastline approximation\" - 1)\n\n The mean, standard error, and \"worst\" or largest (mean of the three\n largest values) of these features were computed for each image,\n resulting in 30 features. For instance, field 3 is Mean Radius, field\n 13 is Radius SE, field 23 is Worst Radius.\n\n - class:\n - WDBC-Malignant\n - WDBC-Benign\n\n :Summary Statistics:\n\n ===================================== ====== ======\n Min Max\n ===================================== ====== ======\n radius (mean): 6.981 28.11\n texture (mean): 9.71 39.28\n perimeter (mean): 43.79 188.5\n area (mean): 143.5 2501.0\n smoothness (mean): 0.053 0.163\n compactness (mean): 0.019 0.345\n concavity (mean): 0.0 0.427\n concave points (mean): 0.0 0.201\n symmetry (mean): 0.106 0.304\n fractal dimension (mean): 0.05 0.097\n radius (standard error): 0.112 2.873\n texture (standard error): 0.36 4.885\n perimeter (standard error): 0.757 21.98\n area (standard error): 6.802 542.2\n smoothness (standard error): 0.002 0.031\n compactness (standard error): 0.002 0.135\n concavity (standard error): 0.0 0.396\n concave points (standard error): 0.0 0.053\n symmetry (standard error): 0.008 0.079\n fractal dimension (standard error): 0.001 0.03\n radius (worst): 7.93 36.04\n texture (worst): 12.02 49.54\n perimeter (worst): 50.41 251.2\n area (worst): 185.2 4254.0\n smoothness (worst): 0.071 0.223\n compactness (worst): 0.027 1.058\n concavity (worst): 0.0 1.252\n concave points (worst): 0.0 0.291\n symmetry (worst): 0.156 0.664\n fractal dimension (worst): 0.055 0.208\n ===================================== ====== ======\n\n :Missing Attribute Values: None\n\n :Class Distribution: 212 - Malignant, 357 - Benign\n\n :Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian\n\n :Donor: Nick Street\n\n :Date: November, 1995\n\nThis is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.\nhttps://goo.gl/U2Uwz2\n\nFeatures are computed from a digitized image of a fine needle\naspirate (FNA) of a breast mass. They describe\ncharacteristics of the cell nuclei present in the image.\n\nSeparating plane described above was obtained using\nMultisurface Method-Tree (MSM-T) [K. P. Bennett, \"Decision Tree\nConstruction Via Linear Programming.\" Proceedings of the 4th\nMidwest Artificial Intelligence and Cognitive Science Society,\npp. 97-101, 1992], a classification method which uses linear\nprogramming to construct a decision tree. Relevant features\nwere selected using an exhaustive search in the space of 1-4\nfeatures and 1-3 separating planes.\n\nThe actual linear program used to obtain the separating plane\nin the 3-dimensional space is that described in:\n[K. P. Bennett and O. L. 
Mangasarian: \"Robust Linear\nProgramming Discrimination of Two Linearly Inseparable Sets\",\nOptimization Methods and Software 1, 1992, 23-34].\n\nThis database is also available through the UW CS ftp server:\n\nftp ftp.cs.wisc.edu\ncd math-prog/cpo-dataset/machine-learn/WDBC/\n\n.. topic:: References\n\n - W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction \n for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on \n Electronic Imaging: Science and Technology, volume 1905, pages 861-870,\n San Jose, CA, 1993.\n - O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and \n prognosis via linear programming. Operations Research, 43(4), pages 570-577, \n July-August 1995.\n - W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques\n to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994) \n 163-171.\n"
]
],
[
[
"<b>Load data into Pandas Dataframe<b/>",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(cancer['data'], columns= cancer['feature_names'])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"cancer['target']",
"_____no_output_____"
],
[
"cancer['target_names']",
"_____no_output_____"
]
],
[
[
"<b>Scale Data<b/>",
"_____no_output_____"
]
],
[
[
"#StandardScaling used over MinMax\n#For example, in clustering analyses, standardization may be especially crucial in order to compare similarities between features based on certain distance measures. Another prominent example is the Principal Component Analysis (PCA), where we usually prefer standardization over Min-Max scaling since we are interested in the components that maximize the variance\n\n#However, this doesn’t mean that Min-Max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB colour range). Also, a typical neural network algorithm requires data that on a 0-1 scale.\nfrom sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"scaler = StandardScaler()",
"_____no_output_____"
],
[
"scaler.fit(df)",
"_____no_output_____"
],
[
"scaled_data = scaler.transform(df)",
"_____no_output_____"
]
],
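To make the standardization-versus-min-max comment above concrete, here is a tiny toy comparison on made-up numbers (not part of the original analysis):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

toy = np.array([[1.0, 200.0],
                [2.0, 300.0],
                [3.0, 400.0]])   # made-up values with very different column scales

print(StandardScaler().fit_transform(toy))  # each column: zero mean, unit variance
print(MinMaxScaler().fit_transform(toy))    # each column: squeezed into [0, 1]
```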
[
[
"<b>Perform PCA<b/>",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA(n_components=2)",
"_____no_output_____"
],
[
"pca.fit(scaled_data)",
"_____no_output_____"
],
[
"trans_pca = pca.transform(scaled_data)",
"_____no_output_____"
],
[
"trans_pca.shape",
"_____no_output_____"
],
[
"scaled_data.shape",
"_____no_output_____"
]
],
[
[
"<b>Visualizing Data<b/>",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(9,5))\nplt.scatter(trans_pca[:,0], trans_pca[:,1])\nplt.xlabel('First Principal Component')\nplt.ylabel('Second Principal Component')",
"_____no_output_____"
],
[
"plt.figure(figsize=(9,5))\nplt.scatter(trans_pca[:,0], trans_pca[:,1], c= cancer['target'])\nplt.xlabel('First Principal Component')\nplt.ylabel('Second Principal Component')",
"_____no_output_____"
]
],
[
[
"<b>Understanding principal components<b/>",
"_____no_output_____"
]
],
[
[
"pca.components_",
"_____no_output_____"
],
[
"pcomp_df = pd.DataFrame(pca.components_, columns=cancer['feature_names'])",
"_____no_output_____"
],
[
"pcomp_df.head()",
"_____no_output_____"
],
[
"plt.figure(figsize=(9,5))\nsns.heatmap(pcomp_df)",
"_____no_output_____"
],
[
"plt.figure(figsize=(9,5))\nsns.heatmap(pcomp_df, cmap='plasma')",
"_____no_output_____"
]
],
[
[
"<b>Each Principal Component is shown here as a row. Higher the number/more hotter looking color towards yellow, its more corelated to a specific feature in the colums<b/>",
"_____no_output_____"
]
],
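A self-contained sketch of reading the loadings numerically rather than from the heatmap; it simply refits the same scaler and PCA as above:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

cancer = load_breast_cancer()
X = StandardScaler().fit_transform(cancer['data'])
pca = PCA(n_components=2).fit(X)

# share of the total variance captured by each principal component
print(pca.explained_variance_ratio_)

# features with the largest absolute loading on each component
loadings = pd.DataFrame(pca.components_, columns=cancer['feature_names'])
for i, row in loadings.iterrows():
    print(f"PC{i + 1} strongest loadings:")
    print(row.abs().sort_values(ascending=False).head(5), "\n")
```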
[
[
"#Can use SVM over PCA for a classification problem",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb845aff6cbe05d78ffee5cb198bab83bedd9a49
| 18,292 |
ipynb
|
Jupyter Notebook
|
notebooks/04 working with text data.ipynb
|
lampsonnguyen/ml-training-advance
|
992c8304683879ade23410cfa4478622980ef420
|
[
"MIT"
] | null | null | null |
notebooks/04 working with text data.ipynb
|
lampsonnguyen/ml-training-advance
|
992c8304683879ade23410cfa4478622980ef420
|
[
"MIT"
] | null | null | null |
notebooks/04 working with text data.ipynb
|
lampsonnguyen/ml-training-advance
|
992c8304683879ade23410cfa4478622980ef420
|
[
"MIT"
] | 2 |
2018-04-20T03:09:43.000Z
|
2021-07-23T05:48:42.000Z
| 24.920981 | 116 | 0.559644 |
[
[
[
"# Working with Text data",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom preamble import *",
"_____no_output_____"
]
],
[
[
"# http://ai.stanford.edu/~amaas/data/sentiment/",
"_____no_output_____"
],
[
"## Example application: Sentiment analysis of movie reviews",
"_____no_output_____"
]
],
[
[
"!tree -L 2 data/aclImdb",
"_____no_output_____"
],
[
"from sklearn.datasets import load_files\n\nreviews_train = load_files(\"data/aclImdb/train/\")\n# load_files returns a bunch, containing training texts and training labels\ntext_train, y_train = reviews_train.data, reviews_train.target\nprint(\"type of text_train: {}\".format(type(text_train)))\nprint(\"length of text_train: {}\".format(len(text_train)))\nprint(\"text_train[1]:\\n{}\".format(text_train[1]))",
"_____no_output_____"
],
[
"text_train = [doc.replace(b\"<br />\", b\" \") for doc in text_train]",
"_____no_output_____"
],
[
"print(\"Samples per class (training): {}\".format(np.bincount(y_train)))",
"_____no_output_____"
],
[
"reviews_test = load_files(\"data/aclImdb/test/\")\ntext_test, y_test = reviews_test.data, reviews_test.target\nprint(\"Number of documents in test data: {}\".format(len(text_test)))\nprint(\"Samples per class (test): {}\".format(np.bincount(y_test)))\ntext_test = [doc.replace(b\"<br />\", b\" \") for doc in text_test]",
"_____no_output_____"
]
],
[
[
"### Representing text data as Bag of Words",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"#### Applying bag-of-words to a toy dataset",
"_____no_output_____"
]
],
[
[
"bards_words = [\"The fool doth think he is wise,\",\n \"but the wise man knows himself to be a fool\"]",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer\nvect = CountVectorizer()\nvect.fit(bards_words)",
"_____no_output_____"
],
[
"print(\"Vocabulary size: {}\".format(len(vect.vocabulary_)))\nprint(\"Vocabulary content:\\n {}\".format(vect.vocabulary_))",
"_____no_output_____"
],
[
"bag_of_words = vect.transform(bards_words)\nprint(\"bag_of_words: {}\".format(repr(bag_of_words)))",
"_____no_output_____"
],
[
"print(\"Dense representation of bag_of_words:\\n{}\".format(\n bag_of_words.toarray()))",
"_____no_output_____"
],
[
"vect.get_feature_names()",
"_____no_output_____"
],
[
"vect.inverse_transform(bag_of_words)",
"_____no_output_____"
]
],
[
[
"### Bag-of-word for movie reviews",
"_____no_output_____"
]
],
[
[
"vect = CountVectorizer().fit(text_train)\nX_train = vect.transform(text_train)\nprint(\"X_train:\\n{}\".format(repr(X_train)))",
"_____no_output_____"
],
[
"feature_names = vect.get_feature_names()\nprint(\"Number of features: {}\".format(len(feature_names)))\nprint(\"First 20 features:\\n{}\".format(feature_names[:20]))\nprint(\"Features 20010 to 20030:\\n{}\".format(feature_names[20010:20030]))\nprint(\"Every 2000th feature:\\n{}\".format(feature_names[::2000]))",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LogisticRegression\n\nscores = cross_val_score(LogisticRegression(), X_train, y_train, cv=5)\nprint(\"Mean cross-validation accuracy: {:.2f}\".format(np.mean(scores)))",
"_____no_output_____"
],
[
"from sklearn.model_selection import GridSearchCV\nparam_grid = {'C': [0.001, 0.01, 0.1, 1]}\ngrid = GridSearchCV(LogisticRegression(), param_grid, cv=5)\ngrid.fit(X_train, y_train)\nprint(\"Best cross-validation score: {:.2f}\".format(grid.best_score_))\nprint(\"Best parameters: \", grid.best_params_)",
"_____no_output_____"
],
[
"X_test = vect.transform(text_test)\nprint(\"Test score: {:.2f}\".format(grid.score(X_test, y_test)))",
"_____no_output_____"
],
[
"vect = CountVectorizer(min_df=5).fit(text_train)\nX_train = vect.transform(text_train)\nprint(\"X_train with min_df: {}\".format(repr(X_train)))",
"_____no_output_____"
],
[
"feature_names = vect.get_feature_names()\n\nprint(\"First 50 features:\\n{}\".format(feature_names[:50]))\nprint(\"Features 20010 to 20030:\\n{}\".format(feature_names[20010:20030]))\nprint(\"Every 700th feature:\\n{}\".format(feature_names[::700]))",
"_____no_output_____"
],
[
"grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)\ngrid.fit(X_train, y_train)\nprint(\"Best cross-validation score: {:.2f}\".format(grid.best_score_))",
"_____no_output_____"
]
],
[
[
"### Stop-words",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS\nprint(\"Number of stop words: {}\".format(len(ENGLISH_STOP_WORDS)))\nprint(\"Every 10th stopword:\\n{}\".format(list(ENGLISH_STOP_WORDS)[::10]))",
"_____no_output_____"
],
[
"# specifying stop_words=\"english\" uses the build-in list.\n# We could also augment it and pass our own.\nvect = CountVectorizer(min_df=5, stop_words=\"english\").fit(text_train)\nX_train = vect.transform(text_train)\nprint(\"X_train with stop words:\\n{}\".format(repr(X_train)))",
"_____no_output_____"
],
[
"grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)\ngrid.fit(X_train, y_train)\nprint(\"Best cross-validation score: {:.2f}\".format(grid.best_score_))",
"_____no_output_____"
]
],
[
[
"### Rescaling the data with TFIDF\n\\begin{equation*}\n\\text{tfidf}(w, d) = \\text{tf} \\log\\big(\\frac{N + 1}{N_w + 1}\\big) + 1\n\\end{equation*}",
"_____no_output_____"
]
],
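A small worked example of the formula with made-up counts; this matches scikit-learn's default smoothed idf, and note that `TfidfVectorizer` additionally L2-normalizes each document row before storing the values:

```python
import numpy as np

# made-up toy numbers: the term appears tf = 3 times in the document,
# the corpus has N = 4 documents and N_w = 2 of them contain the term
tf, N, N_w = 3, 4, 2

idf = np.log((1 + N) / (1 + N_w)) + 1   # smoothed idf, natural log
print(tf * idf)                          # raw tf-idf weight, roughly 4.53
```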
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.pipeline import make_pipeline\npipe = make_pipeline(TfidfVectorizer(min_df=5),\n LogisticRegression())\nparam_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10]}\n\ngrid = GridSearchCV(pipe, param_grid, cv=5)\ngrid.fit(text_train, y_train)\nprint(\"Best cross-validation score: {:.2f}\".format(grid.best_score_))",
"_____no_output_____"
],
[
"vectorizer = grid.best_estimator_.named_steps[\"tfidfvectorizer\"]\n# transform the training dataset:\nX_train = vectorizer.transform(text_train)\n# find maximum value for each of the features over dataset:\nmax_value = X_train.max(axis=0).toarray().ravel()\nsorted_by_tfidf = max_value.argsort()\n# get feature names\nfeature_names = np.array(vectorizer.get_feature_names())\n\nprint(\"Features with lowest tfidf:\\n{}\".format(\n feature_names[sorted_by_tfidf[:20]]))\n\nprint(\"Features with highest tfidf: \\n{}\".format(\n feature_names[sorted_by_tfidf[-20:]]))",
"_____no_output_____"
],
[
"sorted_by_idf = np.argsort(vectorizer.idf_)\nprint(\"Features with lowest idf:\\n{}\".format(\n feature_names[sorted_by_idf[:100]]))",
"_____no_output_____"
]
],
[
[
"#### Investigating model coefficients",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(20, 5), dpi=300)\nmglearn.tools.visualize_coefficients(\n grid.best_estimator_.named_steps[\"logisticregression\"].coef_,\n feature_names, n_top_features=40)",
"_____no_output_____"
]
],
[
[
"# Bag of words with more than one word (n-grams)",
"_____no_output_____"
]
],
[
[
"print(\"bards_words:\\n{}\".format(bards_words))",
"_____no_output_____"
],
[
"cv = CountVectorizer(ngram_range=(1, 1)).fit(bards_words)\nprint(\"Vocabulary size: {}\".format(len(cv.vocabulary_)))\nprint(\"Vocabulary:\\n{}\".format(cv.get_feature_names()))",
"_____no_output_____"
],
[
"cv = CountVectorizer(ngram_range=(2, 2)).fit(bards_words)\nprint(\"Vocabulary size: {}\".format(len(cv.vocabulary_)))\nprint(\"Vocabulary:\\n{}\".format(cv.get_feature_names()))",
"_____no_output_____"
],
[
"print(\"Transformed data (dense):\\n{}\".format(cv.transform(bards_words).toarray()))",
"_____no_output_____"
],
[
"cv = CountVectorizer(ngram_range=(1, 3)).fit(bards_words)\nprint(\"Vocabulary size: {}\".format(len(cv.vocabulary_)))\nprint(\"Vocabulary:{}\\n\".format(cv.get_feature_names()))",
"_____no_output_____"
],
[
"pipe = make_pipeline(TfidfVectorizer(min_df=5), LogisticRegression())\n# running the grid-search takes a long time because of the\n# relatively large grid and the inclusion of trigrams\nparam_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10, 100],\n \"tfidfvectorizer__ngram_range\": [(1, 1), (1, 2), (1, 3)]}\n\ngrid = GridSearchCV(pipe, param_grid, cv=5)\ngrid.fit(text_train, y_train)\nprint(\"Best cross-validation score: {:.2f}\".format(grid.best_score_))\nprint(\"Best parameters:\\n{}\".format(grid.best_params_))",
"_____no_output_____"
],
[
"len(CountVectorizer().fit(text_train).get_feature_names())",
"_____no_output_____"
],
[
"len(CountVectorizer(min_df=5).fit(text_train).get_feature_names())",
"_____no_output_____"
],
[
"len(CountVectorizer(ngram_range=(1, 2)).fit(text_train).get_feature_names())",
"_____no_output_____"
],
[
"len(CountVectorizer(ngram_range=(1, 2), min_df=5).fit(text_train).get_feature_names())",
"_____no_output_____"
],
[
"len(CountVectorizer(ngram_range=(1, 2), min_df=5, stop_words=\"english\").fit(text_train).get_feature_names())",
"_____no_output_____"
],
[
"# extract scores from grid_search\nscores = grid.cv_results_['mean_test_score'].reshape(-1, 3).T\n# visualize heatmap\nheatmap = mglearn.tools.heatmap(\n scores, xlabel=\"C\", ylabel=\"ngram_range\", cmap=\"viridis\", fmt=\"%.3f\",\n xticklabels=param_grid['logisticregression__C'],\n yticklabels=param_grid['tfidfvectorizer__ngram_range'])\nplt.colorbar(heatmap)",
"_____no_output_____"
],
[
"# extract feature names and coefficients\nvect = grid.best_estimator_.named_steps['tfidfvectorizer']\nfeature_names = np.array(vect.get_feature_names())\ncoef = grid.best_estimator_.named_steps['logisticregression'].coef_\nmglearn.tools.visualize_coefficients(coef, feature_names, n_top_features=40)\nplt.ylim(-22, 22)",
"_____no_output_____"
],
[
"# find 3-gram features\nmask = np.array([len(feature.split(\" \")) for feature in feature_names]) == 3\n# visualize only 3-gram features:\nmglearn.tools.visualize_coefficients(coef.ravel()[mask],\n feature_names[mask], n_top_features=40)\nplt.ylim(-22, 22)",
"_____no_output_____"
]
],
[
[
"# Exercise\nCompare unigram and bigram models on the 20 newsgroup dataset",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import fetch_20newsgroups\ncategories = [\n 'alt.atheism',\n 'talk.religion.misc',\n 'comp.graphics',\n 'sci.space',\n]\nremove = ('headers', 'footers', 'quotes')\n\ndata_train = fetch_20newsgroups(subset='train', categories=categories,\n shuffle=True, random_state=42,\n remove=remove)\n\ndata_test = fetch_20newsgroups(subset='test', categories=categories,\n shuffle=True, random_state=42,\n remove=remove)",
"_____no_output_____"
],
[
"data_train.data[0]",
"_____no_output_____"
]
]
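One possible way to run the comparison, sketched here rather than given as the official solution; it reuses the `data_train` / `data_test` splits loaded above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

pipe = make_pipeline(TfidfVectorizer(min_df=5), LogisticRegression(max_iter=1000))
param_grid = {
    "tfidfvectorizer__ngram_range": [(1, 1), (1, 2)],   # unigrams vs. unigrams + bigrams
    "logisticregression__C": [0.01, 0.1, 1, 10],
}

grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(data_train.data, data_train.target)

print("Best parameters:", grid.best_params_)
print("Best cross-validation accuracy: {:.3f}".format(grid.best_score_))
print("Test accuracy: {:.3f}".format(grid.score(data_test.data, data_test.target)))
```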
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb8472b7796707877d7e56a39b7091843d05a8bc
| 1,668 |
ipynb
|
Jupyter Notebook
|
Scripts/parsimonious/Untitled.ipynb
|
sawa25/PDFs-TextExtract
|
bdc4469deab8b023135165ce8dbc63577927a508
|
[
"MIT"
] | null | null | null |
Scripts/parsimonious/Untitled.ipynb
|
sawa25/PDFs-TextExtract
|
bdc4469deab8b023135165ce8dbc63577927a508
|
[
"MIT"
] | null | null | null |
Scripts/parsimonious/Untitled.ipynb
|
sawa25/PDFs-TextExtract
|
bdc4469deab8b023135165ce8dbc63577927a508
|
[
"MIT"
] | null | null | null | 30.888889 | 172 | 0.630695 |
[
[
[
"pip install parsimonious",
"Collecting parsimonious\n Downloading parsimonious-0.8.1.tar.gz (45 kB)\nRequirement already satisfied: six>=1.9.0 in c:\\users\\sawa\\anaconda3\\lib\\site-packages (from parsimonious) (1.15.0)\nBuilding wheels for collected packages: parsimonious\n Building wheel for parsimonious (setup.py): started\n Building wheel for parsimonious (setup.py): finished with status 'done'\n Created wheel for parsimonious: filename=parsimonious-0.8.1-py3-none-any.whl size=42715 sha256=a4a032218f65c9d499312599b69e2ef72dcbc4b613c453dfc30a2f547ea67689\n Stored in directory: c:\\users\\sawa\\appdata\\local\\pip\\cache\\wheels\\d8\\af\\19\\fb896f509a437aca2dcf62583e84d7fb2cd5b628c1564a609c\nSuccessfully built parsimonious\nInstalling collected packages: parsimonious\nSuccessfully installed parsimonious-0.8.1\nNote: you may need to restart the kernel to use updated packages.\n"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
cb8479c3434cd3c717c13f0337910b32eb0a67fb
| 8,046 |
ipynb
|
Jupyter Notebook
|
2019/Day 24.ipynb
|
AwesomeGitHubRepos/adventofcode
|
84ba7963a5d7905973f14bb1c2e3a59165f8b398
|
[
"MIT"
] | 96 |
2018-04-21T07:53:34.000Z
|
2022-03-15T11:00:02.000Z
|
2019/Day 24.ipynb
|
AwesomeGitHubRepos/adventofcode
|
84ba7963a5d7905973f14bb1c2e3a59165f8b398
|
[
"MIT"
] | 17 |
2019-02-07T05:14:47.000Z
|
2021-12-27T12:11:04.000Z
|
2019/Day 24.ipynb
|
AwesomeGitHubRepos/adventofcode
|
84ba7963a5d7905973f14bb1c2e3a59165f8b398
|
[
"MIT"
] | 14 |
2019-02-05T06:34:15.000Z
|
2022-01-24T17:35:00.000Z
| 35.60177 | 451 | 0.530077 |
[
[
[
"# Day 24 - Cellular automaton\n\nWe are back to [cellar automatons](https://en.wikipedia.org/wiki/Cellular_automaton), in a finite 2D grid, just like [day 18 of 2018](../2018/Day%2018.ipynb). I'll use similar techniques, with [`scipy.signal.convolve2d()`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve2d.html) to turn neighbor counts into the next state. Our state is simpler, a simple on or off so we can use simple boolean selections here.",
"_____no_output_____"
]
],
[
[
"from __future__ import annotations\nfrom typing import Set, Sequence, Tuple\n\nimport numpy as np\nfrom scipy.signal import convolve2d\n\n\ndef readmap(maplines: Sequence[str]) -> np.array:\n return np.array([\n c == \"#\" for line in maplines for c in line\n ]).reshape((5, -1))\n\ndef biodiversity_rating(matrix: np.array) -> int:\n # booleans -> single int by multiplying with powers of 2, then summing\n return (\n matrix.reshape((-1)) * \n np.logspace(0, matrix.size - 1, num=matrix.size, base=2, dtype=np.uint)\n ).sum()\n\ndef find_repeat(matrix: np.array) -> int:\n # the four adjacent tiles matter, not the diagonals\n kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])\n # previous states seen (matrix flattened to a tuple)\n seen: Set[Tuple] = set()\n while True:\n counts = convolve2d(matrix, kernel, mode='same')\n matrix = (\n # A bug dies (becoming an empty space) unless there is exactly one bug adjacent to it.\n (matrix & (counts == 1)) |\n # An empty space becomes infested with a bug if exactly one or two bugs are adjacent to it.\n (~matrix & ((counts == 1) | (counts == 2)))\n )\n key = tuple(matrix.flatten())\n if key in seen:\n return biodiversity_rating(matrix)\n seen.add(key)\n\ntest_matrix = readmap(\"\"\"\\\n....#\n#..#.\n#..##\n..#..\n#....\"\"\".splitlines())\nassert find_repeat(test_matrix) == 2129920",
"_____no_output_____"
],
[
"import aocd\ndata = aocd.get_data(day=24, year=2019)\nerismap = readmap(data.splitlines())",
"_____no_output_____"
],
[
"print(\"Part 1:\", find_repeat(erismap))",
"Part 1: 13500447\n"
],
[
"# how fast is this?\n%timeit find_repeat(erismap)",
"248 µs ± 9.49 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
]
],
[
[
"## Part 2, adding a 3rd dimension\n\nI'm not sure if we might be able to use [`scipy.signal.convolve()`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve.html#scipy.signal.convolve) (the N-dimensional variant of `convolve2d()`) to count neighbours across multiple layers in one go. It works for counting neighbours across a single layer however, and for 200 steps, the additional 8 computations are not exactly strenuous.\n\nI'm creating all layers needed to fit all the steps. An empty layer is filled across 2 steps; first the inner ring, then the outer ring, at which point another layer is needed. So for 200 steps we need 100 layers below and a 100 layers above, ending up with 201 layers. These are added by using [np.pad()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html).\n\nThen use `convolve()` to count neighbours on the same level, and a few sums for additional counts from the levels above and below.",
"_____no_output_____"
]
],
[
[
"from scipy.signal import convolve\n\ndef run_multidimensional(matrix: np.array, steps: int = 200) -> int:\n # 3d kernel; only those on the same level, not above or below\n kernel = np.array([\n [[0, 0, 0], [0, 0, 0], [0, 0, 0]],\n [[0, 1, 0], [1, 0, 1], [0, 1, 0]],\n [[0, 0, 0], [0, 0, 0], [0, 0, 0]],\n ])\n matrix = np.pad(matrix[None], [((steps + 1) // 2,), (0,), (0,)])\n\n for _ in range(steps):\n # count neighbours on the same layer, then clear the hole\n counts = convolve(matrix, kernel, mode='same')\n counts[:, 2, 2] = 0\n # layer below, counts[:-1, ...] are updated from kernel[1:, ...].sum()s\n counts[:-1, 1, 2] += matrix[1:, 0, :].sum(axis=1) # cell above hole += top row next level\n counts[:-1, 3, 2] += matrix[1:, -1, :].sum(axis=1) # cell below hole += bottom row next level\n counts[:-1, 2, 1] += matrix[1:, :, 0].sum(axis=1) # cell left of hole += left column next level\n counts[:-1, 2, 3] += matrix[1:, :, -1].sum(axis=1) # cell right of hole += right column next level\n # layer above, counts[1-:, ...] slices are updated from kernel[:-1, ...] indices (true -> 1)\n counts[1:, 0, :] += matrix[:-1, 1, 2, None] # top row += cell above hole next level\n counts[1:, -1, :] += matrix[:-1, 3, 2, None] # bottom row += cell below hole next level\n counts[1:, :, 0] += matrix[:-1, 2, 1, None] # left column += cell left of hole next level\n counts[1:, :, -1] += matrix[:-1, 2, 3, None] # right column += cell right of hole next level\n\n # next step is the same as part 1:\n matrix = (\n # A bug dies (becoming an empty space) unless there is exactly one bug adjacent to it.\n (matrix & (counts == 1)) |\n # An empty space becomes infested with a bug if exactly one or two bugs are adjacent to it.\n (~matrix & ((counts == 1) | (counts == 2)))\n )\n \n return matrix.sum()\n\nassert run_multidimensional(test_matrix, 10) == 99",
"_____no_output_____"
],
[
"print(\"Part 2:\", run_multidimensional(erismap))",
"Part 2: 2120\n"
],
[
"# how fast is this?\n%timeit run_multidimensional(erismap)",
"316 ms ± 4.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb84853b63bf21f165481a46fc4064768437d8fe
| 118,015 |
ipynb
|
Jupyter Notebook
|
lectures/11-regression-discontinuity/notebook.ipynb
|
milakis/microeconometrics
|
6ede1eceb25e578b3109c03d35f26d34d41777aa
|
[
"MIT"
] | null | null | null |
lectures/11-regression-discontinuity/notebook.ipynb
|
milakis/microeconometrics
|
6ede1eceb25e578b3109c03d35f26d34d41777aa
|
[
"MIT"
] | null | null | null |
lectures/11-regression-discontinuity/notebook.ipynb
|
milakis/microeconometrics
|
6ede1eceb25e578b3109c03d35f26d34d41777aa
|
[
"MIT"
] | null | null | null | 145.697531 | 39,928 | 0.838207 |
[
[
[
"import statsmodels.formula.api as smf\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nnp.random.seed(123)",
"_____no_output_____"
]
],
[
[
"---\n# Lecture 11: Regression discontinuity\n---",
"_____no_output_____"
],
[
"---\n## Lee (2008)\n---",
"_____no_output_____"
],
[
"The author studies the \"incumbency advantage\", i.e. the overall causal impact of being the current incumbent party in a district on the votes obtained in the district's election.\n\n* Lee, David S. (2008). Randomized experiments from non-random selection in U.S. House elections. Journal of Econometrics.",
"_____no_output_____"
]
],
[
[
"df_base = pd.read_csv(\"../../datasets/processed/msc/house.csv\")\ndf_base.head()",
"_____no_output_____"
]
],
[
[
"---\n## What are the basic characteristics of the dataset?\n---",
"_____no_output_____"
]
],
[
[
"df_base.plot.scatter(x=0, y=1)",
"_____no_output_____"
]
],
[
[
"What is the re-election rate?",
"_____no_output_____"
]
],
[
[
"pd.crosstab(\n df_base.vote_last > 0.0,\n df_base.vote_next > 0.5,\n margins=True,\n normalize=\"columns\",\n)",
"_____no_output_____"
]
],
[
[
"---\n## Regression discontinuity design\n---",
"_____no_output_____"
],
[
"How does the average vote in the next election look like as we move along last year's election.",
"_____no_output_____"
]
],
[
[
"df_base[\"bin\"] = pd.cut(df_base.vote_last, 200, labels=False)\ndf_base.groupby(\"bin\").vote_next.mean().plot()",
"_____no_output_____"
]
],
[
[
"Now we turn to an explicit model of the conditional mean.",
"_____no_output_____"
]
],
[
[
"def fit_regression(incumbent, level=4):\n assert incumbent in [\"republican\", \"democratic\"]\n\n if incumbent == \"republican\":\n df_incumbent = df_base.loc[df_base.vote_last < 0.0, :]\n else:\n df_incumbent = df_base.loc[df_base.vote_last > 0.0, :]\n\n for level in range(2, level + 1):\n label = \"vote_last_{:}\".format(level)\n df_incumbent.loc[:, label] = df_incumbent[\"vote_last\"] ** level\n\n formula = \"vote_next ~ vote_last + vote_last_2 + vote_last_3 + vote_last_4\"\n rslt = smf.ols(formula=formula, data=df_incumbent).fit()\n\n return rslt\n\n\nrslt = dict()\nfor incumbent in [\"republican\", \"democratic\"]:\n rslt = fit_regression(incumbent, level=4)\n title = \"\\n\\n {:}\\n\".format(incumbent.capitalize())\n print(title, rslt.summary())",
"\n\n Republican\n OLS Regression Results \n==============================================================================\nDep. Variable: vote_next R-squared: 0.277\nModel: OLS Adj. R-squared: 0.276\nMethod: Least Squares F-statistic: 262.5\nDate: Tue, 28 Apr 2020 Prob (F-statistic): 4.33e-191\nTime: 13:15:10 Log-Likelihood: 1761.3\nNo. Observations: 2740 AIC: -3513.\nDf Residuals: 2735 BIC: -3483.\nDf Model: 4 \nCovariance Type: nonrobust \n===============================================================================\n coef std err t P>|t| [0.025 0.975]\n-------------------------------------------------------------------------------\nIntercept 0.4542 0.009 49.743 0.000 0.436 0.472\nvote_last 0.5236 0.148 3.529 0.000 0.233 0.814\nvote_last_2 1.5292 0.696 2.199 0.028 0.165 2.893\nvote_last_3 4.2201 1.172 3.600 0.000 1.922 6.518\nvote_last_4 3.0452 0.623 4.885 0.000 1.823 4.268\n==============================================================================\nOmnibus: 208.853 Durbin-Watson: 1.877\nProb(Omnibus): 0.000 Jarque-Bera (JB): 1143.885\nSkew: -0.033 Prob(JB): 4.06e-249\nKurtosis: 6.165 Cond. No. 655.\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n\n\n Democratic\n OLS Regression Results \n==============================================================================\nDep. Variable: vote_next R-squared: 0.380\nModel: OLS Adj. R-squared: 0.379\nMethod: Least Squares F-statistic: 583.2\nDate: Tue, 28 Apr 2020 Prob (F-statistic): 0.00\nTime: 13:15:10 Log-Likelihood: 2056.4\nNo. Observations: 3818 AIC: -4103.\nDf Residuals: 3813 BIC: -4071.\nDf Model: 4 \nCovariance Type: nonrobust \n===============================================================================\n coef std err t P>|t| [0.025 0.975]\n-------------------------------------------------------------------------------\nIntercept 0.5308 0.009 56.720 0.000 0.512 0.549\nvote_last 0.5431 0.143 3.807 0.000 0.263 0.823\nvote_last_2 -0.7048 0.617 -1.143 0.253 -1.914 0.504\nvote_last_3 1.2362 0.960 1.288 0.198 -0.646 3.118\nvote_last_4 -0.7304 0.481 -1.518 0.129 -1.674 0.213\n==============================================================================\nOmnibus: 437.431 Durbin-Watson: 2.136\nProb(Omnibus): 0.000 Jarque-Bera (JB): 1986.195\nSkew: -0.473 Prob(JB): 0.00\nKurtosis: 6.405 Cond. No. 664.\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
]
],
[
[
"How does the predictions look like?",
"_____no_output_____"
]
],
[
[
"for incumbent in [\"republican\", \"democratic\"]:\n\n rslt = fit_regression(incumbent, level=4)\n\n # For our predictions, we need to set up a grid for the evaluation.\n if incumbent == \"republican\":\n grid = np.linspace(-0.5, 0.0, 100)\n else:\n grid = np.linspace(+0.0, 0.5, 100)\n\n df_grid = pd.DataFrame(grid, columns=[\"vote_last\"])\n for level in range(2, 5):\n label = \"vote_last_{:}\".format(level)\n df_grid.loc[:, label] = df_grid[\"vote_last\"] ** level\n\n ax = rslt.predict(df_grid).plot(title=incumbent.capitalize())\n plt.show()",
"_____no_output_____"
]
],
[
[
"We can now compute the difference at the cutoffs to get an estimate for the treatment effect.",
"_____no_output_____"
]
],
[
[
"before_cutoff = df_base.groupby(\"bin\").vote_next.mean()[99]\nafter_cutoff = df_base.groupby(\"bin\").vote_next.mean()[100]\n\neffect = after_cutoff - before_cutoff\nprint(\"Treatment Effect: {:5.3f}%\".format(effect * 100))",
"Treatment Effect: 8.823%\n"
]
],
[
[
"---\n## How does the estimated treatment effect depend on the choice of the bin width?\n---",
"_____no_output_____"
]
],
[
[
"for num_bins in [100, 200]:\n df = df_base.copy(deep=True)\n df[\"bin\"] = pd.cut(df_base.vote_last, num_bins, labels=False)\n info = df.groupby(\"bin\").vote_next.mean()\n lower = (num_bins / 2) - 1\n effect = info[lower + 1] - info[lower]\n print(\n \" Number of bins: {:}, Width {:>5}, Effect {:5.2f}%\".format(\n num_bins, 1.0 / num_bins, effect * 100\n )\n )",
" Number of bins: 100, Width 0.01, Effect 8.12%\n Number of bins: 200, Width 0.005, Effect 8.82%\n"
]
],
[
[
"---\n## Regression \n---",
"_____no_output_____"
],
[
"There are several alternatives to estimate the conditional mean functions.\n\n* pooled regressions\n\n* local linear regressions",
"_____no_output_____"
]
],
[
[
"# It will be useful to split the sample by the cutoff value\n# for easier access going forward.\ndf_base[\"D\"] = df_base.vote_last > 0",
"_____no_output_____"
]
],
[
[
"### Pooled regression\n\nWe estimate the conditinal mean using the whole function.\n\n\\begin{align*}\nY = \\alpha_r + \\tau D + \\beta X + \\epsilon\n\\end{align*}\n\nThis allows for a difference in levels but not slope.\n",
"_____no_output_____"
]
],
[
[
"smf.ols(formula=\"vote_next ~ vote_last + D\", data=df_base).fit().summary()",
"_____no_output_____"
]
],
[
[
"### Local linear regression\n\nWe now turn to local regressions by restricting the estimation to observations close to the cutoff.\n\n\\begin{align*}\nY = \\alpha_r + \\tau D + \\beta X + \\gamma X D + \\epsilon,\n\\end{align*}\n\nwhere $-h \\geq X \\geq h$. This allows for a difference in levels and slope.",
"_____no_output_____"
]
],
[
[
"for h in [0.3, 0.2, 0.1, 0.05, 0.01]:\n # We restrict the sample to observations close\n # to the cutoff.\n df = df_base[df_base.vote_last.between(-h, h)]\n\n formula = \"vote_next ~ D + vote_last + D * vote_last\"\n rslt = smf.ols(formula=formula, data=df).fit()\n info = [h, rslt.params[1] * 100, rslt.pvalues[1]]\n print(\n \" Bandwidth: {:>4} Effect {:5.3f}% pvalue {:5.3f}\".format(*info)\n )",
" Bandwidth: 0.3 Effect 8.318% pvalue 0.000\n Bandwidth: 0.2 Effect 7.818% pvalue 0.000\n Bandwidth: 0.1 Effect 6.058% pvalue 0.000\n Bandwidth: 0.05 Effect 4.870% pvalue 0.010\n Bandwidth: 0.01 Effect 9.585% pvalue 0.001\n"
]
],
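Equivalently, the estimated effect can be read off as the fitted jump at the cutoff, i.e. the difference between the two regression lines evaluated at zero. A short sketch for one example bandwidth, reusing `df_base` and the `D` indicator defined above:

```python
import pandas as pd
import statsmodels.formula.api as smf

h = 0.2   # example bandwidth
df = df_base[df_base.vote_last.between(-h, h)]
rslt = smf.ols("vote_next ~ D + vote_last + D * vote_last", data=df).fit()

# predict just at the cutoff for both sides; the interaction drops out at vote_last = 0
grid = pd.DataFrame({"vote_last": [0.0, 0.0], "D": [False, True]})
left, right = rslt.predict(grid)
print("Jump at the cutoff: {:.3f}%".format((right - left) * 100))
```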
[
[
"There exists some work that can guide the choice of the bandwidth.\n\nNow, let's return to the slides to summarize the key issues and some review best practices.",
"_____no_output_____"
],
[
"---\n## Resources\n---",
"_____no_output_____"
],
[
"* **Lee, D. S. (2008)**. [Randomized experiments from non-random selection in us house elections](https://reader.elsevier.com/reader/sd/pii/S0304407607001121?token=B2B8292E08E07683C3CAFB853380CD4C1E5D1FD17982228079F6EE672298456ED7D6692F0598AA50D54463AC0A849065). In *Journal of Econometrics*, 142(2), 675–697.\n\n\n* **Lee, D. S., and Lemieux, T. (2010)**. [Regression discontinuity designs in economics](https://www.princeton.edu/~davidlee/wp/RDDEconomics.pdf). In *Journal of Economic Literature*, 48(2), 281–355.",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
cb84866260cfc405cf6591b360cf3d4c85855240
| 23,486 |
ipynb
|
Jupyter Notebook
|
tutorials/asr/Online_Offline_Speech_Commands_Demo.ipynb
|
GNroy/NeMo
|
3d0c29a317b89b20c93757010db80271eeea6816
|
[
"Apache-2.0"
] | 4,145 |
2019-09-13T08:29:43.000Z
|
2022-03-31T18:31:44.000Z
|
tutorials/asr/Online_Offline_Speech_Commands_Demo.ipynb
|
GNroy/NeMo
|
3d0c29a317b89b20c93757010db80271eeea6816
|
[
"Apache-2.0"
] | 2,031 |
2019-09-17T16:51:39.000Z
|
2022-03-31T23:52:41.000Z
|
tutorials/asr/Online_Offline_Speech_Commands_Demo.ipynb
|
GNroy/NeMo
|
3d0c29a317b89b20c93757010db80271eeea6816
|
[
"Apache-2.0"
] | 1,041 |
2019-09-13T10:08:21.000Z
|
2022-03-30T06:37:38.000Z
| 31.567204 | 259 | 0.537171 |
[
[
[
"\"\"\"\nPlease run notebook locally (if you have all the dependencies and a GPU). \nTechnically you can run this notebook on Google Colab but you need to set up microphone for Colab.\n \nInstructions for setting up Colab are as follows:\n1. Open a new Python 3 notebook.\n2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n4. Run this cell to set up dependencies.\n5. Set up microphone for Colab\n\"\"\"\n# If you're using Google Colab and not running locally, run this cell.\n\n## Install dependencies\n!pip install wget\n!apt-get install sox libsndfile1 ffmpeg portaudio19-dev\n!pip install unidecode\n!pip install pyaudio\n\n# ## Install NeMo\nBRANCH = 'main'\n!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]\n\n## Install TorchAudio\n!pip install torchaudio>=0.10.0 -f https://download.pytorch.org/whl/torch_stable.html",
"_____no_output_____"
]
],
[
[
"This notebook demonstrates offline and online (from a microphone's stream in NeMo) speech commands recognition ",
"_____no_output_____"
],
[
"The notebook requires PyAudio library to get a signal from an audio device.\nFor Ubuntu, please run the following commands to install it:\n```\nsudo apt-get install -y portaudio19-dev\npip install pyaudio\n```",
"_____no_output_____"
],
[
"This notebook requires the `torchaudio` library to be installed for MatchboxNet. Please follow the instructions available at the [torchaudio Github page](https://github.com/pytorch/audio#installation) to install the appropriate version of torchaudio.\n\nIf you would like to install the latest version, please run the following command to install it:\n\n```\nconda install -c pytorch torchaudio\n```",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pyaudio as pa\nimport os, time\nimport librosa\nimport IPython.display as ipd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport nemo\nimport nemo.collections.asr as nemo_asr",
"_____no_output_____"
],
[
"# sample rate, Hz\nSAMPLE_RATE = 16000",
"_____no_output_____"
]
],
[
[
"## Restore the model from NGC",
"_____no_output_____"
]
],
[
[
"mbn_model = nemo_asr.models.EncDecClassificationModel.from_pretrained(\"commandrecognition_en_matchboxnet3x1x64_v2\")",
"_____no_output_____"
]
],
[
[
"Since speech commands model MatchBoxNet doesn't consider non-speech scenario, \nhere we use a Voice Activity Detection (VAD) model to help reduce false alarm for background noise/silence. When there is speech activity detected, the speech command inference will be activated. \n",
"_____no_output_____"
],
[
"**Please note the VAD model is not perfect for various microphone input and you might need to finetune on your input and play with different parameters.**",
"_____no_output_____"
]
],
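A minimal sketch of the gating logic (the threshold value is illustrative, and `vad` / `mbn` refer to the `FrameASR` instances created further down in this notebook):

```python
VAD_THRESHOLD = 0.8   # illustrative speech-probability threshold

def classify_frame(signal, vad, mbn, threshold=VAD_THRESHOLD):
    vad_result = vad.transcribe(signal)   # index 3 holds the speech probability
    mbn_result = mbn.transcribe(signal)
    if vad_result and vad_result[3] >= threshold:
        return mbn_result                  # speech detected: report the command
    return ["no-speech"]                   # otherwise suppress the classifier output
```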
[
[
"vad_model = nemo_asr.models.EncDecClassificationModel.from_pretrained('vad_marblenet')",
"_____no_output_____"
]
],
[
[
"## Observing the config of the model",
"_____no_output_____"
]
],
[
[
"from omegaconf import OmegaConf\nimport copy",
"_____no_output_____"
],
[
"# Preserve a copy of the full config\nvad_cfg = copy.deepcopy(vad_model._cfg)\nmbn_cfg = copy.deepcopy(mbn_model._cfg)\nprint(OmegaConf.to_yaml(mbn_cfg))",
"_____no_output_____"
]
],
[
[
"## What classes can this model recognize?\n\nBefore we begin inference on the actual audio stream, let's look at what are the classes this model was trained to recognize. \n\n**MatchBoxNet model is not designed to recognize words out of vocabulary (OOV).**",
"_____no_output_____"
]
],
[
[
"labels = mbn_cfg.labels\nfor i in range(len(labels)):\n print('%-10s' % (labels[i]), end=' ')",
"_____no_output_____"
]
],
[
[
"## Setup preprocessor with these settings",
"_____no_output_____"
]
],
[
[
"# Set model to inference mode\nmbn_model.eval();\nvad_model.eval();",
"_____no_output_____"
]
],
[
[
"## Setting up data for Streaming Inference",
"_____no_output_____"
]
],
[
[
"from nemo.core.classes import IterableDataset\nfrom nemo.core.neural_types import NeuralType, AudioSignal, LengthsType\nimport torch\nfrom torch.utils.data import DataLoader",
"_____no_output_____"
],
[
"# simple data layer to pass audio signal\nclass AudioDataLayer(IterableDataset):\n @property\n def output_types(self):\n return {\n 'audio_signal': NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),\n 'a_sig_length': NeuralType(tuple('B'), LengthsType()),\n }\n\n def __init__(self, sample_rate):\n super().__init__()\n self._sample_rate = sample_rate\n self.output = True\n \n def __iter__(self):\n return self\n \n def __next__(self):\n if not self.output:\n raise StopIteration\n self.output = False\n return torch.as_tensor(self.signal, dtype=torch.float32), \\\n torch.as_tensor(self.signal_shape, dtype=torch.int64)\n \n def set_signal(self, signal):\n self.signal = signal.astype(np.float32)/32768.\n self.signal_shape = self.signal.size\n self.output = True\n\n def __len__(self):\n return 1",
"_____no_output_____"
],
[
"data_layer = AudioDataLayer(sample_rate=mbn_cfg.train_ds.sample_rate)\ndata_loader = DataLoader(data_layer, batch_size=1, collate_fn=data_layer.collate_fn)",
"_____no_output_____"
]
],
[
[
"## inference method for audio signal (single instance)",
"_____no_output_____"
]
],
[
[
"def infer_signal(model, signal):\n data_layer.set_signal(signal)\n batch = next(iter(data_loader))\n audio_signal, audio_signal_len = batch\n audio_signal, audio_signal_len = audio_signal.to(model.device), audio_signal_len.to(model.device)\n logits = model.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)\n return logits",
"_____no_output_____"
]
],
[
[
"we don't include postprocessing techniques here. ",
"_____no_output_____"
]
],
[
[
"# class for streaming frame-based ASR\n# 1) use reset() method to reset FrameASR's state\n# 2) call transcribe(frame) to do ASR on\n# contiguous signal's frames\nclass FrameASR:\n \n def __init__(self, model_definition,\n frame_len=2, frame_overlap=2.5, \n offset=0):\n '''\n Args:\n frame_len (seconds): Frame's duration\n frame_overlap (seconds): Duration of overlaps before and after current frame.\n offset: Number of symbols to drop for smooth streaming.\n '''\n self.task = model_definition['task']\n self.vocab = list(model_definition['labels'])\n \n self.sr = model_definition['sample_rate']\n self.frame_len = frame_len\n self.n_frame_len = int(frame_len * self.sr)\n self.frame_overlap = frame_overlap\n self.n_frame_overlap = int(frame_overlap * self.sr)\n timestep_duration = model_definition['AudioToMFCCPreprocessor']['window_stride']\n for block in model_definition['JasperEncoder']['jasper']:\n timestep_duration *= block['stride'][0] ** block['repeat']\n self.buffer = np.zeros(shape=2*self.n_frame_overlap + self.n_frame_len,\n dtype=np.float32)\n self.offset = offset\n self.reset()\n \n @torch.no_grad()\n def _decode(self, frame, offset=0):\n assert len(frame)==self.n_frame_len\n self.buffer[:-self.n_frame_len] = self.buffer[self.n_frame_len:]\n self.buffer[-self.n_frame_len:] = frame\n\n if self.task == 'mbn':\n logits = infer_signal(mbn_model, self.buffer).to('cpu').numpy()[0]\n decoded = self._mbn_greedy_decoder(logits, self.vocab)\n \n elif self.task == 'vad':\n logits = infer_signal(vad_model, self.buffer).to('cpu').numpy()[0]\n decoded = self._vad_greedy_decoder(logits, self.vocab)\n \n else:\n raise(\"Task should either be of mbn or vad!\")\n \n return decoded[:len(decoded)-offset]\n \n def transcribe(self, frame=None,merge=False):\n if frame is None:\n frame = np.zeros(shape=self.n_frame_len, dtype=np.float32)\n if len(frame) < self.n_frame_len:\n frame = np.pad(frame, [0, self.n_frame_len - len(frame)], 'constant')\n unmerged = self._decode(frame, self.offset)\n return unmerged\n \n \n def reset(self):\n '''\n Reset frame_history and decoder's state\n '''\n self.buffer=np.zeros(shape=self.buffer.shape, dtype=np.float32)\n self.mbn_s = []\n self.vad_s = []\n \n @staticmethod\n def _mbn_greedy_decoder(logits, vocab):\n mbn_s = []\n if logits.shape[0]:\n class_idx = np.argmax(logits)\n class_label = vocab[class_idx]\n mbn_s.append(class_label) \n return mbn_s\n \n \n @staticmethod\n def _vad_greedy_decoder(logits, vocab):\n vad_s = []\n if logits.shape[0]:\n probs = torch.softmax(torch.as_tensor(logits), dim=-1)\n probas, preds = torch.max(probs, dim=-1)\n vad_s = [preds.item(), str(vocab[preds]), probs[0].item(), probs[1].item(), str(logits)]\n return vad_s\n",
"_____no_output_____"
]
],
[
[
"# Streaming Inference",
"_____no_output_____"
],
[
"## offline inference\nHere we show an example of offline streaming inference. you can use your file or download the provided demo audio file. \n",
"_____no_output_____"
],
[
"Streaming inference depends on a few factors, such as the frame length (STEP) and buffer size (WINDOW SIZE). Experiment with a few values to see their effects in the below cells.",
"_____no_output_____"
]
],
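A quick look at the sample-count arithmetic behind these two knobs (the values mirror the cell below; only the resulting sizes matter):

```python
SAMPLE_RATE = 16000
STEP = 0.25          # seconds of new audio consumed per inference call
WINDOW_SIZE = 1.28   # seconds of audio actually fed to the network

frame_len = int(STEP * SAMPLE_RATE)                      # 4000 new samples per step
overlap = int((WINDOW_SIZE - STEP) / 2 * SAMPLE_RATE)    # 8240 samples kept on each side
buffer_len = 2 * overlap + frame_len                     # 20480 samples = 1.28 s of audio
print(frame_len, overlap, buffer_len)
```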
[
[
"STEP = 0.25\nWINDOW_SIZE = 1.28 # input segment length for NN we used for training",
"_____no_output_____"
],
[
"import wave\n\ndef offline_inference(wave_file, STEP = 0.25, WINDOW_SIZE = 0.31):\n \"\"\"\n Arg:\n wav_file: wave file to be performed inference on.\n STEP: infer every STEP seconds \n WINDOW_SIZE : lenght of audio to be sent to NN.\n \"\"\"\n \n FRAME_LEN = STEP \n CHANNELS = 1 # number of audio channels (expect mono signal)\n RATE = SAMPLE_RATE # sample rate, 16000 Hz\n \n CHUNK_SIZE = int(FRAME_LEN * SAMPLE_RATE)\n \n mbn = FrameASR(model_definition = {\n 'task': 'mbn',\n 'sample_rate': SAMPLE_RATE,\n 'AudioToMFCCPreprocessor': mbn_cfg.preprocessor,\n 'JasperEncoder': mbn_cfg.encoder,\n 'labels': mbn_cfg.labels\n },\n frame_len=FRAME_LEN, frame_overlap = (WINDOW_SIZE - FRAME_LEN)/2,\n offset=0)\n\n wf = wave.open(wave_file, 'rb')\n data = wf.readframes(CHUNK_SIZE)\n\n while len(data) > 0:\n\n data = wf.readframes(CHUNK_SIZE)\n signal = np.frombuffer(data, dtype=np.int16)\n mbn_result = mbn.transcribe(signal)\n \n if len(mbn_result):\n print(mbn_result)\n \n mbn.reset()",
"_____no_output_____"
],
[
"demo_wave = 'SpeechCommands_demo.wav'\nif not os.path.exists(demo_wave):\n !wget \"https://dldata-public.s3.us-east-2.amazonaws.com/SpeechCommands_demo.wav\"",
"_____no_output_____"
],
[
"wave_file = demo_wave\n\nCHANNELS = 1\naudio, sample_rate = librosa.load(wave_file, sr=SAMPLE_RATE)\ndur = librosa.get_duration(audio)\nprint(dur)",
"_____no_output_____"
],
[
"ipd.Audio(audio, rate=sample_rate)",
"_____no_output_____"
],
[
"# Ground-truth is Yes No\noffline_inference(wave_file, STEP, WINDOW_SIZE)",
"_____no_output_____"
]
],
[
[
"## Online inference through microphone",
"_____no_output_____"
],
[
"Please note MatchBoxNet and VAD model are not perfect for various microphone input and you might need to finetune on your input and play with different parameter. \\\n**We also recommend to use a headphone.**",
"_____no_output_____"
]
],
[
[
"vad_threshold = 0.8 \n\nSTEP = 0.1 \nWINDOW_SIZE = 0.15\nmbn_WINDOW_SIZE = 1\n\nCHANNELS = 1 \nRATE = SAMPLE_RATE\nFRAME_LEN = STEP # use step of vad inference as frame len\n\nCHUNK_SIZE = int(STEP * RATE)\nvad = FrameASR(model_definition = {\n 'task': 'vad',\n 'sample_rate': SAMPLE_RATE,\n 'AudioToMFCCPreprocessor': vad_cfg.preprocessor,\n 'JasperEncoder': vad_cfg.encoder,\n 'labels': vad_cfg.labels\n },\n frame_len=FRAME_LEN, frame_overlap=(WINDOW_SIZE - FRAME_LEN) / 2, \n offset=0)\n\nmbn = FrameASR(model_definition = {\n 'task': 'mbn',\n 'sample_rate': SAMPLE_RATE,\n 'AudioToMFCCPreprocessor': mbn_cfg.preprocessor,\n 'JasperEncoder': mbn_cfg.encoder,\n 'labels': mbn_cfg.labels\n },\n frame_len=FRAME_LEN, frame_overlap = (mbn_WINDOW_SIZE-FRAME_LEN)/2,\n offset=0)",
"_____no_output_____"
],
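[
"# (Editor's note, not part of the original notebook.)\n# FrameASR._vad_greedy_decoder above returns a list of the form\n#   [predicted_class_index, predicted_label, probs[0], probs[1], raw_logits]\n# so vad_result[3] in the callback below is the probability of the second VAD\n# class, which the callback treats as the speech probability and compares\n# against vad_threshold.",
"_____no_output_____"
],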
[
"vad.reset()\nmbn.reset()\n\n# Setup input device\np = pa.PyAudio()\nprint('Available audio input devices:')\ninput_devices = []\nfor i in range(p.get_device_count()):\n dev = p.get_device_info_by_index(i)\n if dev.get('maxInputChannels'):\n input_devices.append(i)\n print(i, dev.get('name'))\n\nif len(input_devices):\n dev_idx = -2\n while dev_idx not in input_devices:\n print('Please type input device ID:')\n dev_idx = int(input())\n\n \n def callback(in_data, frame_count, time_info, status):\n \"\"\"\n callback function for streaming audio and performing inference\n \"\"\"\n signal = np.frombuffer(in_data, dtype=np.int16)\n vad_result = vad.transcribe(signal) \n mbn_result = mbn.transcribe(signal) \n \n if len(vad_result):\n # if speech prob is higher than threshold, we decide it contains speech utterance \n # and activate MatchBoxNet \n if vad_result[3] >= vad_threshold: \n print(mbn_result) # print mbn result when speech present\n else:\n print(\"no-speech\")\n return (in_data, pa.paContinue)\n\n # streaming\n stream = p.open(format=pa.paInt16,\n channels=CHANNELS,\n rate=SAMPLE_RATE,\n input=True,\n input_device_index=dev_idx,\n stream_callback=callback,\n frames_per_buffer=CHUNK_SIZE)\n\n \n print('Listening...')\n stream.start_stream()\n \n # Interrupt kernel and then speak for a few more words to exit the pyaudio loop !\n try:\n while stream.is_active():\n time.sleep(0.1)\n finally: \n stream.stop_stream()\n stream.close()\n p.terminate()\n print()\n print(\"PyAudio stopped\")\n \nelse:\n print('ERROR: No audio input device found.')",
"_____no_output_____"
]
],
[
[
"## ONNX Deployment\nYou can also export the model to ONNX file and deploy it to TensorRT or MS ONNX Runtime inference engines. If you don't have one installed yet, please run:",
"_____no_output_____"
]
],
[
[
"!pip install --upgrade onnxruntime # for gpu, use onnxruntime-gpu\n# !mkdir -p ort\n# %cd ort\n# !git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git .\n# !./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel\n# !pip install ./build/Linux/Release/dist/onnxruntime*.whl\n# %cd ..",
"_____no_output_____"
]
],
[
[
"Then just replace `infer_signal` implementation with this code:",
"_____no_output_____"
]
],
[
[
"import onnxruntime\nmbn_model.export('mbn.onnx')\nort_session = onnxruntime.InferenceSession('mbn.onnx')\n\ndef to_numpy(tensor):\n return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()\n\ndef infer_signal(signal):\n data_layer.set_signal(signal)\n batch = next(iter(data_loader))\n audio_signal, audio_signal_len = batch\n audio_signal, audio_signal_len = audio_signal.to(mbn_model.device), audio_signal_len.to(mbn_model.device)\n processed_signal, processed_signal_len = mbn_model.preprocessor(\n input_signal=audio_signal, length=audio_signal_len,\n )\n ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal), }\n ologits = ort_session.run(None, ort_inputs)\n alogits = np.asarray(ologits)\n logits = torch.from_numpy(alogits[0])\n return logits",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb8498eeeb1e8bc6d1e0271e0560eeca0f2a50d3
| 42,648 |
ipynb
|
Jupyter Notebook
|
examples/swi_examples/swiex1.ipynb
|
kwilcox/flopy
|
527c4ee452ea779bdebd6c1c540452d145e26943
|
[
"BSD-3-Clause"
] | null | null | null |
examples/swi_examples/swiex1.ipynb
|
kwilcox/flopy
|
527c4ee452ea779bdebd6c1c540452d145e26943
|
[
"BSD-3-Clause"
] | null | null | null |
examples/swi_examples/swiex1.ipynb
|
kwilcox/flopy
|
527c4ee452ea779bdebd6c1c540452d145e26943
|
[
"BSD-3-Clause"
] | null | null | null | 129.629179 | 33,143 | 0.855609 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
cb849f33bd5b262e6e9710a9085d140a4f753f2a
| 5,648 |
ipynb
|
Jupyter Notebook
|
notebooks/movie_frame_generator.ipynb
|
jswelling/CMU-MS-DAS-Vis-S22
|
6098249ef9c53776888540381fa877251fab9f16
|
[
"CC0-1.0"
] | null | null | null |
notebooks/movie_frame_generator.ipynb
|
jswelling/CMU-MS-DAS-Vis-S22
|
6098249ef9c53776888540381fa877251fab9f16
|
[
"CC0-1.0"
] | null | null | null |
notebooks/movie_frame_generator.ipynb
|
jswelling/CMU-MS-DAS-Vis-S22
|
6098249ef9c53776888540381fa877251fab9f16
|
[
"CC0-1.0"
] | 3 |
2022-03-22T14:50:41.000Z
|
2022-03-24T03:15:30.000Z
| 26.641509 | 87 | 0.465475 |
[
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom os import mkdir\nfrom os.path import join",
"_____no_output_____"
],
[
"bov_counter = 0\n\ndef writeBOV(g):\n \"\"\"g is presumed to be a numpy 2D array of doubles\"\"\"\n global bov_counter\n bovNm = 'file_%03d.bov' % bov_counter\n dataNm = 'file_%03d.doubles' % bov_counter\n bov_counter += 1\n try:\n mkdir('frames')\n except FileExistsError:\n pass\n with open(join('frames', bovNm), 'w') as f:\n f.write('TIME: %g\\n' % float(bov_counter))\n f.write('DATA_FILE: %s\\n' % dataNm)\n f.write('DATA_SIZE: %d %d 1\\n' % g.shape)\n f.write('DATA_FORMAT: DOUBLE\\n')\n f.write('VARIABLE: U\\n')\n f.write('DATA_ENDIAN: LITTLE\\n')\n f.write('CENTERING: ZONAL\\n')\n f.write('BRICK_ORIGIN: 0. 0. 0.\\n')\n f.write('BRICK_SIZE: 1.0 1.0 1.0\\n')\n with open(join('frames', dataNm), 'w') as f:\n g.T.tofile(f) # BOV format expects Fortran order\n",
"_____no_output_____"
],
[
"#\n# Scaling constants\n#\n# You'll have to pick a value for dt which produces stable evolution\n# for your stencil!\n\nXDIM = 101\nYDIM = 101\ntMax = 5.0\ndx = 0.1\ndy = 0.1\ndt = 0.025 # FIX ME! \nvel = 1.0\nxMin = -(XDIM//2)*dx\nyMin = -(YDIM//2)*dy",
"_____no_output_____"
],
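[
"# (Editor's note, not part of the original notebook.)\n# A standard stability guideline for the explicit 2D wave stencil used below is\n# the CFL condition dt <= dx / (vel * sqrt(2)) when dx == dy; here that bound is\n# roughly 0.07, so dt = 0.025 should evolve stably.\nprint(dx / (vel * np.sqrt(2.0)))",
"_____no_output_____"
],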
[
"def initialize():\n \"\"\"Create the grid and apply the initial condition\"\"\"\n U = np.zeros([YDIM, XDIM]) # We just use this for shape\n\n ctrX= 0.0\n ctrY= 0.0\n sigma= 0.25\n maxU= 5.0\n\n grid = np.indices(U.shape)\n x = (grid[1] * dx) + xMin # a full grid of X coordinates\n y = (grid[0] * dy) + yMin # a full grid of Y coordinates\n distSqr = np.square(x - ctrX) + np.square(y - ctrY)\n U = maxU * np.exp(-distSqr/(sigma*sigma))\n \n return U",
"_____no_output_____"
],
[
"# test writeBOV\nbov_counter = 0\nwriteBOV(initialize())",
"_____no_output_____"
],
[
"def doTimeStep(U, UOld):\n \"\"\"\n Step your solution forward in time. You need to calculate\n UNew in the grid area [1:-1, 1:-1]. The 'patch the boundaries'\n bit below will take care of the edges at i=0, i=XDIM-1, j=0,\n and j=YDIM-1. Note that the array indices are ordered like U[j][i]!\n \"\"\"\n\n xRatioSqr= (dt*dt*vel*vel)/(dx*dx)\n yRatioSqr= (dt*dt*vel*vel)/(dy*dy)\n\n UNew = np.empty_like(U)\n\n dxxterm = xRatioSqr * (U[1:-1, 2:] + U[1:-1, 0:-2] - 2*U[1:-1, 1:-1])\n dyyterm = yRatioSqr * (U[2:, 1:-1] + U[0:-2, 1:-1] - 2*U[1:-1, 1:-1])\n UNew[1:-1, 1:-1] = 2*U[1:-1,1:-1] + (dxxterm + dyyterm) - UOld[1:-1, 1:-1]\n\n # Patch the boundaries. This mapping makes the surface into a torus.\n UNew[:, 0] = UNew[:, 1]\n UNew[:, -1] = UNew[:, -2]\n UNew[0, :] = UNew[1, :]\n UNew[-1, :] = UNew[-2, :]\n\n return UNew",
"_____no_output_____"
],
[
"def timeToOutput(t, count):\n \"\"\"A little test to tell how often to dump output\"\"\"\n return (count % 4 == 0)",
"_____no_output_____"
],
[
"\nU = initialize()\n\nUOld = np.copy(U)\n\nt = 0.0\ncount = 0\nwhile t < tMax:\n if timeToOutput(t, count):\n writeBOV(U)\n print ('Output at t = %s: min = %f, max = %f'\n % (t, np.amin(U), np.amax(U)))\n UNew = doTimeStep(U, UOld)\n UOld = U\n U = UNew\n t += dt\n count += 1",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb84baf3b71dfccf4f97dc43e9ee56e2fce50058
| 46,508 |
ipynb
|
Jupyter Notebook
|
5.5-names/names.ipynb
|
0xBADC0FFEE/netology-practice
|
838f8a382fa16024fb0d3de1a6c72ca651487a92
|
[
"MIT"
] | null | null | null |
5.5-names/names.ipynb
|
0xBADC0FFEE/netology-practice
|
838f8a382fa16024fb0d3de1a6c72ca651487a92
|
[
"MIT"
] | null | null | null |
5.5-names/names.ipynb
|
0xBADC0FFEE/netology-practice
|
838f8a382fa16024fb0d3de1a6c72ca651487a92
|
[
"MIT"
] | null | null | null | 38.183908 | 264 | 0.480584 |
[
[
[
"# Домашнее задание 2 по обработке текстов\n\nРассмотрим задачу бинарной классификации. Пусть дано два списка имен: мужские и женские имена. Требуется разработать классификатор, который по данному имени будет определять мужское оно или женское.\n\nДанные: \n* Женские имена: female.txt\n* Мужские имена: male.txt",
"_____no_output_____"
]
],
[
[
"# plots\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom pylab import rcParams\n\nfrom plotly.offline import init_notebook_mode, iplot\nimport plotly\nimport plotly.graph_objs as go\ninit_notebook_mode(connected=True)\n\n%config InlineBackend.figure_format = 'retina'\n%matplotlib inline\n%pylab inline\n%config InlineBackend.figure_format = 'png'\nrcParams['figure.figsize'] = (16, 6)",
"_____no_output_____"
],
[
"! ls",
"female.txt male.txt names.ipynb\r\n"
],
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Часть 1. Предварительная обработка данных\n\n1. Удалите неоднозначные имена (те имена, которые являются и мужскими, и женскими дновременно), если такие есть; \n2. Создайте обучающее и тестовое множество так, чтобы в обучающем множестве классы были сбалансированы, т.е. к классу принадлежало бы одинаковое количество имен;",
"_____no_output_____"
]
],
[
[
"df_male = pd.read_csv('male.txt', sep=\",\", header=None, names=['name'])\ndisplay(df_male.head(), df_male.describe(), df_male.info())",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2943 entries, 0 to 2942\nData columns (total 1 columns):\nname 2943 non-null object\ndtypes: object(1)\nmemory usage: 23.1+ KB\n"
],
[
"df_female = pd.read_csv('female.txt', sep=\",\", header=None, names=['name'])\ndisplay(df_female.head(), df_female.describe(), df_female.info())",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 5001 entries, 0 to 5000\nData columns (total 1 columns):\nname 5001 non-null object\ndtypes: object(1)\nmemory usage: 39.1+ KB\n"
],
[
"df_male['male'] = 1\ndf_female['male'] = 0\ndf_all = df_male.append(df_female)\ndf_all['name'] = df_all['name'].str.lower()\ndf_all.drop_duplicates(subset=['name'], inplace=True, keep=False)\ndisplay(df_all.head(), df_all.describe(), df_all.info())",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 7208 entries, 0 to 5000\nData columns (total 2 columns):\nname 7208 non-null object\nmale 7208 non-null int64\ndtypes: int64(1), object(1)\nmemory usage: 168.9+ KB\n"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"df_train, df_test = train_test_split(df_all, test_size=0.2, random_state=42, stratify=df_all['male'])\ndf_train.reset_index(inplace = True, drop = True)\ndf_test.reset_index(inplace = True, drop = True)\ndisplay(df_train[\"male\"].value_counts(), df_test[\"male\"].value_counts())",
"_____no_output_____"
]
],
[
[
"## Часть 2. Базовый метод классификации\n\nИспользуйте метод наивного Байеса или логистическую регрессию для классификации имен: в качестве признаков используйте символьные $n$-граммы. Сравните результаты, получаемые при разных $n=2,3,4$ по $F$-мере и аккуратности. В каких случаях метод ошибается?\n\nДля генерации $n$-грамм используйте:",
"_____no_output_____"
]
],
[
[
"# from nltk.util import ngrams",
"_____no_output_____"
],
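[
"# (Editor's illustrative sketch, not part of the original notebook.)\n# Example of generating character n-grams with nltk, as mentioned above:\nfrom nltk.util import ngrams\n\nprint(list(ngrams('anna', 2)))  # [('a', 'n'), ('n', 'n'), ('n', 'a')]\nprint(list(ngrams('anna', 3)))  # [('a', 'n', 'n'), ('n', 'n', 'a')]",
"_____no_output_____"
],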
[
"from sklearn.metrics import *\n\ndef print_score(y_test, y_pred):\n print(\"Accuracy: {0:.2f}\".format(accuracy_score(y_test, y_pred))) \n print(\"F1-measure: {0:.2f}\".format(f1_score(y_test, y_pred, average='macro')))\n print(\"Precision: {0:.2f}\".format(precision_score(y_test, y_pred)))\n print(\"Recall: {0:.2f}\".format(recall_score(y_test, y_pred)))\n \n print(classification_report(y_test, y_pred, target_names=['female', 'male']))",
"_____no_output_____"
],
[
"from sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.naive_bayes import MultinomialNB",
"_____no_output_____"
],
[
"clf = Pipeline([\n ('vectorizer', CountVectorizer(analyzer='char_wb')),\n ('tfidf', TfidfTransformer()), \n ('clf', MultinomialNB()),\n])",
"_____no_output_____"
],
[
"from sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import StratifiedKFold",
"_____no_output_____"
],
[
"params = {\n 'vectorizer__ngram_range': [(1, 1), (1, 3), (2, 2), (2, 3), (2, 4)],\n 'tfidf__use_idf': (True, False), \n 'clf__alpha': (0.001, 0.01, 0.1, 1)\n}\n\ncv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)\nclf = GridSearchCV(clf, params, scoring='f1', cv=cv, n_jobs=-1)\nclf.fit(df_train['name'], df_train['male'])",
"_____no_output_____"
],
[
"predictions = clf.best_estimator_.predict(df_test.name)\nprint_score(df_test['male'].values, predictions)",
"Accuracy: 0.90\nF1-measure: 0.89\nPrecision: 0.92\nRecall: 0.80\n precision recall f1-score support\n\n female 0.90 0.96 0.93 926\n male 0.92 0.80 0.85 516\n\navg / total 0.90 0.90 0.90 1442\n\n"
]
],
[
[
"## Часть 3. Нейронная сеть\n\n\nИспользуйте реккурентную нейронную сеть с LSTM для решения задачи. В ней может быть несколько слоев с LSTM, несколько слоев c Bidirectional(LSTM). У нейронной сети один выход, определяющий класс имени. \n\nПредставление имени для классификации в этом случае: бинарная матрица размера (количество букв в алфавите $\\times$ максимальная длина имени). Обозначим его через $x$. Если первая буква имени a, то $x[1][1] = 1$, если вторая – b, то $x[2][1] = 1$. \n\nНе забудьте про регуляризацию нейронной сети дропаутами. \n\nСравните результаты классификации разными методами. Какой метод лучше и почему?\n\nСравните результаты, получаемые при разных значениях дропаута, разных числах узлов на слоях нейронной сети по $F$-мере и аккуратности. В каких случаях нейронная сеть ошибается?",
"_____no_output_____"
],
[
"Если совсем не получается запрограммировать нейронную сеть самостоятельно, обратитесь к туториалу тут: https://github.com/divamgupta/lstm-gender-predictor",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers import Dense, Activation, Dropout, Bidirectional\nfrom keras.layers import LSTM\nfrom keras.utils import to_categorical",
"Using TensorFlow backend.\n"
],
[
"longest_name_length = df_all['name'].str.len().max()\n\nchars = sorted(list(set(\"\".join(df_train['name']))))\nprint('total chars:', len(chars))\nchar_indices = dict((c, i) for i, c in enumerate(chars))\nindices_char = dict((i, c) for i, c in enumerate(chars))\n\nX_train = np.zeros((len(df_train), longest_name_length, len(chars)), dtype=np.int)\ny_train = np.zeros((len(df_train), 1), dtype=np.int)\n\nfor i in range(len(df_train)): \n for t, char in enumerate(df_train['name'][i]):\n X_train[i, t, char_indices[char]] = 1\n y_train[i] = df_train['male'][i]\n\nX_test = np.zeros((len(df_test), longest_name_length, len(chars)), dtype=np.int)\ny_test = np.zeros((len(df_test), 1), dtype=np.int)\n\nfor i in range(len(df_test)): \n for t, char in enumerate(df_test['name'][i]): \n X_test[i, t, char_indices[char]] = 1\n y_test[i] = df_test['male'][i]\n \nprint(X_train.shape, y_train.shape, X_test.shape, y_test.shape)",
"total chars: 29\n(5766, 15, 29) (5766, 1) (1442, 15, 29) (1442, 1)\n"
],
[
"model = Sequential()\nmodel.add(LSTM(256, return_sequences=True, input_shape=(longest_name_length, len(chars))))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(256, return_sequences=False))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(32))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(2))\nmodel.add(Activation('softmax'))\nmodel.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])",
"_____no_output_____"
],
[
"from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\nSVG(model_to_dot(model, show_shapes=True, rankdir='LR').create(prog='dot', format='svg'))",
"_____no_output_____"
],
[
"model.fit(X_train, y_train, batch_size=16, epochs=30)",
"Epoch 1/30\n5766/5766 [==============================] - 19s 3ms/step - loss: 0.5398 - acc: 0.7385\nEpoch 2/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.4815 - acc: 0.7790\nEpoch 3/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.4551 - acc: 0.7896\nEpoch 4/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.4369 - acc: 0.8054\nEpoch 5/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.4120 - acc: 0.8146\nEpoch 6/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.3951 - acc: 0.8255\nEpoch 7/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.3714 - acc: 0.8368\nEpoch 8/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.3578 - acc: 0.8408\nEpoch 9/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.3460 - acc: 0.8535\nEpoch 10/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.3340 - acc: 0.8599\nEpoch 11/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.3196 - acc: 0.8623\nEpoch 12/30\n5766/5766 [==============================] - 18s 3ms/step - loss: 0.3013 - acc: 0.8715\nEpoch 13/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2884 - acc: 0.8788\nEpoch 14/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2777 - acc: 0.8828\nEpoch 15/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2682 - acc: 0.8892\nEpoch 16/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2503 - acc: 0.8951\nEpoch 17/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2365 - acc: 0.9048\nEpoch 18/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2241 - acc: 0.9079\nEpoch 19/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2123 - acc: 0.9148\nEpoch 20/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.2044 - acc: 0.9174\nEpoch 21/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.1932 - acc: 0.9275\nEpoch 22/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.1821 - acc: 0.9237\nEpoch 23/30\n5766/5766 [==============================] - 18s 3ms/step - loss: 0.1688 - acc: 0.9346\nEpoch 24/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.1657 - acc: 0.9364\nEpoch 25/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.1476 - acc: 0.9450\nEpoch 26/30\n5766/5766 [==============================] - 17s 3ms/step - loss: 0.1383 - acc: 0.9475\nEpoch 27/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.1163 - acc: 0.9546\nEpoch 28/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.1189 - acc: 0.9547\nEpoch 29/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.1194 - acc: 0.9551\nEpoch 30/30\n5766/5766 [==============================] - 16s 3ms/step - loss: 0.1123 - acc: 0.9613\n"
],
[
"y_pred = model.predict_classes(X_test)\nprint_score(y_test, y_pred)",
"Accuracy: 0.88\nF1-measure: 0.87\nPrecision: 0.81\nRecall: 0.84\n precision recall f1-score support\n\n female 0.91 0.89 0.90 926\n male 0.81 0.84 0.83 516\n\navg / total 0.88 0.88 0.88 1442\n\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb84c2c3e68d4345dce82cab17c39689cdb55994
| 149,984 |
ipynb
|
Jupyter Notebook
|
content/ch-algorithms/quantum-counting.ipynb
|
DaniloZZZ/qiskit-textbook
|
8d1f98224522bd168e7269da71fc1fba333874e1
|
[
"Apache-2.0"
] | 1 |
2021-03-13T12:15:44.000Z
|
2021-03-13T12:15:44.000Z
|
content/ch-algorithms/quantum-counting.ipynb
|
DaniloZZZ/qiskit-textbook
|
8d1f98224522bd168e7269da71fc1fba333874e1
|
[
"Apache-2.0"
] | null | null | null |
content/ch-algorithms/quantum-counting.ipynb
|
DaniloZZZ/qiskit-textbook
|
8d1f98224522bd168e7269da71fc1fba333874e1
|
[
"Apache-2.0"
] | 1 |
2020-07-15T03:48:47.000Z
|
2020-07-15T03:48:47.000Z
| 42.70615 | 652 | 0.488365 |
[
[
[
"# Quantum Counting",
"_____no_output_____"
],
[
"To understand this algorithm, it is important that you first understand both Grover’s algorithm and the quantum phase estimation algorithm. Whereas Grover’s algorithm attempts to find a solution to the Oracle, the quantum counting algorithm tells us how many of these solutions there are. This algorithm is interesting as it combines both quantum search and quantum phase estimation.\n\n## Contents\n\n1. [Overview](#overview) \n 1.1 [Intuition](#intuition) \n 1.2 [A Closer Look](#closer_look) \n2. [The Code](#code) \n 2.1 [Initialising our Code](#init_code) \n 2.2 [The Controlled-Grover Iteration](#cont_grover) \n 2.3 [The Inverse QFT](#inv_qft) \n 2.4 [Putting it Together](#putting_together) \n3. [Simulating](#simulating) \n4. [Finding the Number of Solutions](#finding_m)\n5. [Exercises](#exercises)\n6. [References](#references)",
"_____no_output_____"
],
[
"## 1. Overview <a id='overview'></a>",
"_____no_output_____"
],
[
"### 1.1 Intuition <a id='intuition'></a>",
"_____no_output_____"
],
[
"In quantum counting, we simply use the quantum phase estimation algorithm to find an eigenvalue of a Grover search iteration. You will remember that an iteration of Grover’s algorithm, $G$, rotates the state vector by $\\theta$ in the $|\\omega\\rangle$, $|s’\\rangle$ basis:\n\n",
"_____no_output_____"
],
[
"The percentage number of solutions in our search space affects the difference between $|s\\rangle$ and $|s’\\rangle$. For example, if there are not many solutions, $|s\\rangle$ will be very close to $|s’\\rangle$ and $\\theta$ will be very small. It turns out that the eigenvalues of the Grover iterator are $e^{\\pm i\\theta}$, and we can extract this using quantum phase estimation (QPE) to estimate the number of solutions ($M$).",
"_____no_output_____"
],
[
"### 1.2 A Closer Look <a id='closer_look'></a>",
"_____no_output_____"
],
[
"In the $|\\omega\\rangle$,$|s’\\rangle$ basis we can write the Grover iterator as the matrix:\n\n$$\nG =\n\\begin{pmatrix}\n\\cos{\\theta} && -\\sin{\\theta}\\\\\n\\sin{\\theta} && \\cos{\\theta}\n\\end{pmatrix}\n$$\n\nThe matrix $G$ has eigenvectors:\n\n$$\n\\begin{pmatrix}\n-i\\\\\n1\n\\end{pmatrix}\n,\n\\begin{pmatrix}\ni\\\\\n1\n\\end{pmatrix}\n$$\n\nWith the aforementioned eigenvalues $e^{\\pm i\\theta}$. Fortunately, we do not need to prepare our register in either of these states, the state $|s\\rangle$ is in the space spanned by $|\\omega\\rangle$, $|s’\\rangle$, and thus is a superposition of the two vectors.\n$$\n|s\\rangle = \\alpha |\\omega\\rangle + \\beta|s'\\rangle\n$$\n\nAs a result, the output of the QPE algorithm will be a superposition of the two phases, and when we measure the register we will obtain one of these two values! We can then use some simple maths to get our estimate of $M$.\n\n\n",
"_____no_output_____"
],
[
"## 2. The Code <a id='code'></a>",
"_____no_output_____"
],
[
"### 2.1 Initialising our Code <a id='init_code'></a>\n\nFirst, let’s import everything we’re going to need:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport math\n\n# importing Qiskit\nimport qiskit\nfrom qiskit import IBMQ, Aer\nfrom qiskit import QuantumCircuit, execute\n\n# import basic plot tools\nfrom qiskit.visualization import plot_histogram",
"_____no_output_____"
]
],
[
[
"In this guide will choose to ‘count’ on the first 4 qubits on our circuit (we call the number of counting qubits $t$, so $t = 4$), and to 'search' through the last 4 qubits ($n = 4$). With this in mind, we can start creating the building blocks of our circuit.",
"_____no_output_____"
],
[
"### 2.2 The Controlled-Grover Iteration <a id='cont_grover'></a>",
"_____no_output_____"
],
[
"We have already covered Grover iterations in the Grover’s algorithm section. Here is an example with an Oracle we know has 5 solutions ($M = 5$) of 16 states ($N = 2^n = 16$), combined with a diffusion operator:",
"_____no_output_____"
]
],
[
[
"def example_grover_iteration():\n \"\"\"Small circuit with 5/16 solutions\"\"\"\n # Do circuit\n qc = QuantumCircuit(4)\n # Oracle\n qc.h([2,3])\n qc.ccx(0,1,2)\n qc.h(2)\n qc.x(2)\n qc.ccx(0,2,3)\n qc.x(2)\n qc.h(3)\n qc.x([1,3])\n qc.h(2)\n qc.mct([0,1,3],2)\n qc.x([1,3])\n qc.h(2)\n # Diffuser\n qc.h(range(3))\n qc.x(range(3))\n qc.z(3)\n qc.mct([0,1,2],3)\n qc.x(range(3))\n qc.h(range(3))\n qc.z(3)\n return qc",
"_____no_output_____"
]
],
[
[
"Notice the python function takes no input and returns a `QuantumCircuit` object with 4 qubits. In the past the functions you created might have modified an existing circuit, but a function like this allows us to turn the `QuantmCircuit` object into a single gate we can then control.\n\nWe can use `.to_gate()` and `.control()` to create a controlled gate from a circuit. We will call our Grover iterator `grit` and the controlled Grover iterator `cgrit`:",
"_____no_output_____"
]
],
[
[
"# Create controlled-Grover\ngrit = example_grover_iteration().to_gate()\ncgrit = grit.control()\ncgrit.label = \"Grover\"",
"_____no_output_____"
]
],
[
[
"### 2.3 The Inverse QFT <a id='inv_qft'></a>\nWe now need to create an inverse QFT. This code implements the QFT on n qubits:",
"_____no_output_____"
]
],
[
[
"def qft(n):\n \"\"\"Creates an n-qubit QFT circuit\"\"\"\n circuit = QuantumCircuit(4)\n def swap_registers(circuit, n):\n for qubit in range(n//2):\n circuit.swap(qubit, n-qubit-1)\n return circuit\n def qft_rotations(circuit, n):\n \"\"\"Performs qft on the first n qubits in circuit (without swaps)\"\"\"\n if n == 0:\n return circuit\n n -= 1\n circuit.h(n)\n for qubit in range(n):\n circuit.cu1(np.pi/2**(n-qubit), qubit, n)\n qft_rotations(circuit, n)\n \n qft_rotations(circuit, n)\n swap_registers(circuit, n)\n return circuit",
"_____no_output_____"
]
],
[
[
"Again, note we have chosen to return another `QuantumCircuit` object, this is so we can easily invert the gate. We create the gate with t = 4 qubits as this is the number of counting qubits we have chosen in this guide:",
"_____no_output_____"
]
],
[
[
"qft_dagger = qft(4).to_gate().inverse()\nqft_dagger.label = \"QFT†\"",
"_____no_output_____"
]
],
[
[
"### 2.4 Putting it Together <a id='putting_together'></a>\n\nWe now have everything we need to complete our circuit! Let’s put it together.\n\nFirst we need to put all qubits in the $|+\\rangle$ state:",
"_____no_output_____"
]
],
[
[
"# Create QuantumCircuit\nt = 4 # no. of counting qubits\nn = 4 # no. of searching qubits\nqc = QuantumCircuit(n+t, t) # Circuit with n+t qubits and t classical bits\n\n# Initialise all qubits to |+>\nfor qubit in range(t+n):\n qc.h(qubit)\n\n# Begin controlled Grover iterations\niterations = 1\nfor qubit in range(t):\n for i in range(iterations):\n qc.append(cgrit, [qubit] + [*range(t, n+t)])\n iterations *= 2\n \n# Do inverse QFT on counting qubits\nqc.append(qft_dagger, range(t))\n\n# Measure counting qubits\nqc.measure(range(t), range(t))\n\n# Display the circuit\nqc.draw()",
"_____no_output_____"
]
],
[
[
"Great! Now let’s see some results.",
"_____no_output_____"
],
[
"## 3. Simulating <a id='simulating'></a>",
"_____no_output_____"
]
],
[
[
"# Execute and see results\nemulator = Aer.get_backend('qasm_simulator')\njob = execute(qc, emulator, shots=2048 )\nhist = job.result().get_counts()\nplot_histogram(hist)",
"_____no_output_____"
]
],
[
[
"We can see two values stand out, having a much higher probability of measurement than the rest. These two values correspond to $e^{i\\theta}$ and $e^{-i\\theta}$, but we can’t see the number of solutions yet. We need to little more processing to get this information, so first let us get our output into something we can work with (an `int`).\n\nWe will get the string of the most probable result from our output data:",
"_____no_output_____"
]
],
[
[
"measured_str = max(hist, key=hist.get)",
"_____no_output_____"
]
],
[
[
"Let us now store this as an integer:",
"_____no_output_____"
]
],
[
[
"measured_int = int(measured_str,2)\nprint(\"Register Output = %i\" % measured_int)",
"Register Output = 5\n"
]
],
[
[
"## 4. Finding the Number of Solutions (M) <a id='finding_m'></a>\n\nWe will create a function, `calculate_M()` that takes as input the decimal integer output of our register, the number of counting qubits ($t$) and the number of searching qubits ($n$).\n\nFirst we want to get $\\theta$ from `measured_int`. You will remember that QPE gives us a measured $\\text{value} = 2^n \\phi$ from the eigenvalue $e^{2\\pi i\\phi}$, so to get $\\theta$ we need to do:\n\n$$\n\\theta = \\text{value}\\times\\frac{2\\pi}{2^t}\n$$\n\nOr, in code:",
"_____no_output_____"
]
],
[
[
"theta = (measured_int/(2**t))*math.pi*2\nprint(\"Theta = %.5f\" % theta)",
"Theta = 1.96350\n"
]
],
[
[
"You may remember that we can get the angle $\\theta/2$ can from the inner product of $|s\\rangle$ and $|s’\\rangle$:\n\n\n\n$$\n\\langle s'|s\\rangle = \\cos{\\tfrac{\\theta}{2}}\n$$\n\nAnd that the inner product of these vectors is:\n\n$$\n\\langle s'|s\\rangle = \\sqrt{\\frac{N-M}{N}}\n$$\n\nWe can combine these equations, then use some trigonometry and algebra to show:\n\n$$\nN\\sin^2{\\frac{\\theta}{2}} = M\n$$\n\nFrom the [Grover's algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) chapter, you will remember that a common way to create a diffusion operator, $U_s$, is actually to implement $-U_s$. This implementation is used in the Grover iteration provided in this chapter. In a normal Grover search, this phase is global and can be ignored, but now we are controlling our Grover iterations, this phase does have an effect. The result is that we have effectively searched for the states that are _not_ solutions, and our quantum counting algorithm will tell us how many states are _not_ solutions. To fix this, we simply calculate $N-M$.\n\nAnd in code:",
"_____no_output_____"
]
],
[
[
"N = 2**n\nM = N * (math.sin(theta/2)**2)\nprint(\"No. of Solutions = %.1f\" % (N-M))",
"No. of Solutions = 4.9\n"
]
],
[
[
"And we can see we have (approximately) the correct answer! We can approximately calculate the error in this answer using:",
"_____no_output_____"
]
],
[
[
"m = t - 1 # Upper bound: Will be less than this \nerr = (math.sqrt(2*M*N) + N/(2**(m-1)))*(2**(-m))\nprint(\"Error < %.2f\" % err)",
"Error < 2.85\n"
]
],
[
[
"Explaining the error calculation is outside the scope of this article, but an explanation can be found in [1].\n\nFinally, here is the finished function `calculate_M()`:",
"_____no_output_____"
]
],
[
[
"def calculate_M(measured_int, t, n):\n \"\"\"For Processing Output of Quantum Counting\"\"\"\n # Calculate Theta\n theta = (measured_int/(2**t))*math.pi*2\n print(\"Theta = %.5f\" % theta)\n # Calculate No. of Solutions\n N = 2**n\n M = N * (math.sin(theta/2)**2)\n print(\"No. of Solutions = %.1f\" % (N-M))\n # Calculate Upper Error Bound\n m = t - 1 #Will be less than this (out of scope) \n err = (math.sqrt(2*M*N) + N/(2**(m-1)))*(2**(-m))\n print(\"Error < %.2f\" % err)",
"_____no_output_____"
]
],
[
[
"## 5. Exercises <a id='exercises'></a>\n\n1.\tCan you create an oracle with a different number of solutions? How does the accuracy of the quantum counting algorithm change?\n2.\tCan you adapt the circuit to use more or less counting qubits to get a different precision in your result?\n",
"_____no_output_____"
],
[
"## 6. References <a id='references'></a>\n\n[1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA. ",
"_____no_output_____"
]
],
[
[
"import qiskit\nqiskit.__qiskit_version__",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
cb84f067155cfd4c638c1bc6d3c6692a0b6a67ff
| 493,420 |
ipynb
|
Jupyter Notebook
|
Notebook_CORD-19_1_overview.ipynb
|
Giovanni1085/cwts_covid
|
82ce50356949edf2052e6b4dfca3859a87f89174
|
[
"MIT"
] | 12 |
2020-03-30T15:33:21.000Z
|
2021-12-05T15:11:06.000Z
|
Notebook_CORD-19_1_overview.ipynb
|
Giovanni1085/cwts_covid
|
82ce50356949edf2052e6b4dfca3859a87f89174
|
[
"MIT"
] | 18 |
2020-03-30T18:30:04.000Z
|
2020-07-08T08:39:04.000Z
|
Notebook_CORD-19_1_overview.ipynb
|
CWTSLeiden/cwts_covid
|
82ce50356949edf2052e6b4dfca3859a87f89174
|
[
"MIT"
] | 4 |
2020-04-01T19:35:23.000Z
|
2021-03-24T01:18:49.000Z
| 194.412924 | 84,020 | 0.869075 |
[
[
[
"# CORD-19 overview\n\nIn this notebook, we provide an overview of publication medatata for CORD-19.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# magics and warnings\n%load_ext autoreload\n%autoreload 2\nimport warnings; warnings.simplefilter('ignore')\n\nimport os, random, codecs, json\nimport pandas as pd\nimport numpy as np\n\nseed = 99\nrandom.seed(seed)\nnp.random.seed(seed)\n\nimport nltk, sklearn\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style=\"white\")\nsns.set_context(\"notebook\", font_scale=1.2, rc={\"lines.linewidth\": 2.5})",
"_____no_output_____"
],
[
"# load metadata\n\ndf_meta = pd.read_csv(\"datasets_output/df_pub.csv\",compression=\"gzip\")\ndf_datasource = pd.read_csv(\"datasets_output/sql_tables/datasource.csv\",sep=\"\\t\",header=None,names=['datasource_metadata_id', 'datasource', 'url'])\ndf_pub_datasource = pd.read_csv(\"datasets_output/sql_tables/pub_datasource.csv\",sep=\"\\t\",header=None,names=['pub_id','datasource_metadata_id'])\ndf_cord_meta = pd.read_csv(\"datasets_output/sql_tables/cord19_metadata.csv\",sep=\"\\t\",header=None,names=[ 'cord19_metadata_id', 'source', 'license', 'ms_academic_id',\n 'who_covidence', 'sha', 'full_text', 'pub_id'])",
"_____no_output_____"
],
[
"df_meta.head()",
"_____no_output_____"
],
[
"df_meta.columns",
"_____no_output_____"
],
[
"df_datasource",
"_____no_output_____"
],
[
"df_pub_datasource.head()",
"_____no_output_____"
],
[
"df_cord_meta.head()",
"_____no_output_____"
]
],
[
[
"#### Select just CORD-19",
"_____no_output_____"
]
],
[
[
"df_meta = df_meta.merge(df_pub_datasource, how=\"inner\", left_on=\"pub_id\", right_on=\"pub_id\")\ndf_meta = df_meta.merge(df_datasource, how=\"inner\", left_on=\"datasource_metadata_id\", right_on=\"datasource_metadata_id\")\ndf_cord19 = df_meta[df_meta.datasource_metadata_id==0]\ndf_cord19 = df_cord19.merge(df_cord_meta, how=\"inner\", left_on=\"pub_id\", right_on=\"pub_id\")",
"_____no_output_____"
],
[
"df_meta.shape",
"_____no_output_____"
],
[
"df_cord19.shape",
"_____no_output_____"
],
[
"df_cord19.head()",
"_____no_output_____"
]
],
[
[
"#### Publication years",
"_____no_output_____"
]
],
[
[
"import re\n\ndef clean_year(s):\n if pd.isna(s):\n return np.nan\n if not (s>1900):\n return np.nan\n elif s>2020:\n return 2020\n return s\n\ndf_cord19[\"publication_year\"] = df_cord19[\"publication_year\"].apply(clean_year)",
"_____no_output_____"
],
[
"df_cord19.publication_year.describe()",
"_____no_output_____"
],
[
"sns.distplot(df_cord19.publication_year.tolist(), bins=60, kde=False)\nplt.xlabel(\"Publication year\", fontsize=15)\nplt.ylabel(\"Publication count\", fontsize=15)\nplt.tight_layout()\nplt.savefig(\"figures/publication_year_all.pdf\")",
"_____no_output_____"
],
[
"sns.distplot(df_cord19[(pd.notnull(df_cord19.publication_year)) & (df_cord19.publication_year > 2000)].publication_year.tolist(), bins=20, hist=True, kde=False)\nplt.xlabel(\"Publication year\", fontsize=15)\nplt.ylabel(\"Publication count\", fontsize=15)\nplt.tight_layout()\nplt.savefig(\"figures/publication_year_2000.pdf\")",
"_____no_output_____"
],
[
"which = \"PMC\"\n\nsns.distplot(df_cord19[(pd.notnull(df_cord19.publication_year)) & (df_cord19.publication_year > 2000) & (df_cord19.source == which)].publication_year.tolist(), bins=20, hist=True, kde=False)\nplt.xlabel(\"Publication year\", fontsize=15)\nplt.ylabel(\"Publication count\", fontsize=15)\nplt.tight_layout()",
"_____no_output_____"
],
[
"# recent uptake\ndf_cord19[df_cord19.publication_year>2018].groupby([(df_cord19.publication_year),(df_cord19.publication_month)]).count().pub_id",
"_____no_output_____"
]
],
[
[
"#### Null values",
"_____no_output_____"
]
],
[
[
"df_cord19.shape",
"_____no_output_____"
],
[
"df_cord19[\"abstract_length\"] = df_cord19.abstract.str.len()",
"_____no_output_____"
],
[
"df_cord19[df_cord19.abstract_length>0].shape",
"_____no_output_____"
],
[
"sum(pd.notnull(df_cord19.abstract))",
"_____no_output_____"
],
[
"sum(pd.notnull(df_cord19.doi))",
"_____no_output_____"
],
[
"sum(pd.notnull(df_cord19.pmcid))",
"_____no_output_____"
],
[
"sum(pd.notnull(df_cord19.pmid))",
"_____no_output_____"
],
[
"sum(pd.notnull(df_cord19.journal))",
"_____no_output_____"
]
],
[
[
"#### Journals",
"_____no_output_____"
]
],
[
[
"df_cord19.journal.value_counts()[:30]",
"_____no_output_____"
],
[
"df_sub = df_cord19[df_cord19.journal.isin(df_cord19.journal.value_counts()[:20].index.tolist())]\nb = sns.countplot(y=\"journal\", data=df_sub, order=df_sub['journal'].value_counts().index)\n#b.axes.set_title(\"Title\",fontsize=50)\nb.set_xlabel(\"Publication count\",fontsize=15)\nb.set_ylabel(\"Journal\",fontsize=15)\nb.tick_params(labelsize=12)\nplt.tight_layout()\nplt.savefig(\"figures/journals.pdf\")",
"_____no_output_____"
]
],
[
[
"#### Sources and licenses",
"_____no_output_____"
]
],
[
[
"# source\ndf_sub = df_cord19[df_cord19.source.isin(df_cord19.source.value_counts()[:10].index.tolist())]\nb = sns.countplot(y=\"source\", data=df_sub, order=df_sub['source'].value_counts().index)\n#b.axes.set_title(\"Title\",fontsize=50)\nb.set_xlabel(\"Publication count\",fontsize=15)\nb.set_ylabel(\"Source\",fontsize=15)\nb.tick_params(labelsize=12)\nplt.tight_layout()\nplt.savefig(\"figures/sources.pdf\")",
"_____no_output_____"
],
[
"# license\ndf_sub = df_cord19[df_cord19.license.isin(df_cord19.license.value_counts()[:30].index.tolist())]\nb = sns.countplot(y=\"license\", data=df_sub, order=df_sub['license'].value_counts().index)\n#b.axes.set_title(\"Title\",fontsize=50)\nb.set_xlabel(\"Publication count\",fontsize=15)\nb.set_ylabel(\"License\",fontsize=15)\nb.tick_params(labelsize=12)\nplt.tight_layout()\nplt.savefig(\"figures/licenses.pdf\")",
"_____no_output_____"
]
],
[
[
"#### Full text availability",
"_____no_output_____"
]
],
[
[
"df_cord19[\"has_full_text\"] = pd.notnull(df_cord19.full_text)",
"_____no_output_____"
],
[
"df_cord19[\"has_full_text\"].sum()",
"_____no_output_____"
],
[
"# full text x source\ndf_plot = df_cord19.groupby(['has_full_text', 'source']).size().reset_index().pivot(columns='has_full_text', index='source', values=0)\ndf_plot.plot(kind='bar', stacked=True)\nplt.xlabel(\"Source\", fontsize=15)\nplt.ylabel(\"Publication count\", fontsize=15)\n#plt.tight_layout()\nplt.savefig(\"figures/source_ft.pdf\")",
"_____no_output_____"
],
[
"# full text x journal\ndf_sub = df_cord19[df_cord19.journal.isin(df_cord19.journal.value_counts()[:20].index.tolist())]\ndf_plot = df_sub.groupby(['has_full_text', 'journal']).size().reset_index().pivot(columns='has_full_text', index='journal', values=0)\ndf_plot.plot(kind='bar', stacked=True)\nplt.xlabel(\"Source\", fontsize=15)\nplt.ylabel(\"Publication count\", fontsize=15)\n#plt.tight_layout()\nplt.savefig(\"figures/journal_ft.pdf\")",
"_____no_output_____"
],
[
"# full text x year\ndf_sub = df_cord19[(pd.notnull(df_cord19.publication_year)) & (df_cord19.publication_year > 2000)]\ndf_plot = df_sub.groupby(['has_full_text', 'publication_year']).size().reset_index().pivot(columns='has_full_text', index='publication_year', values=0)\ndf_plot.plot(kind='bar', stacked=True)\nplt.xticks(np.arange(20), [int(x) for x in df_plot.index.values], rotation=45)\nplt.xlabel(\"Publication year\", fontsize=15)\nplt.ylabel(\"Publication count\", fontsize=15)\nplt.tight_layout()\nplt.savefig(\"figures/year_ft.pdf\")",
"_____no_output_____"
]
],
[
[
"## Dimensions",
"_____no_output_____"
]
],
[
[
"# load Dimensions data (you will need to download it on your own!)\n\ndirectory_name = \"datasets_output/json_dimensions_cwts\"\n\nall_dimensions = list()\nfor root, dirs, files in os.walk(directory_name):\n for file in files:\n if \".json\" in file:\n all_data = codecs.open(os.path.join(root,file)).read()\n for record in all_data.split(\"\\n\"):\n if record:\n all_dimensions.append(json.loads(record))",
"_____no_output_____"
],
[
"df_dimensions = pd.DataFrame.from_dict({\n \"id\":[r[\"id\"] for r in all_dimensions],\n \"publication_type\":[r[\"publication_type\"] for r in all_dimensions],\n \"doi\":[r[\"doi\"] for r in all_dimensions],\n \"pmid\":[r[\"pmid\"] for r in all_dimensions],\n \"issn\":[r[\"journal\"][\"issn\"] for r in all_dimensions],\n \"times_cited\":[r[\"times_cited\"] for r in all_dimensions],\n \"relative_citation_ratio\":[r[\"relative_citation_ratio\"] for r in all_dimensions],\n \"for_top\":[r[\"for\"][0][\"first_level\"][\"name\"] if len(r[\"for\"])>0 else \"\" for r in all_dimensions],\n \"for_bottom\":[r[\"for\"][0][\"second_level\"][\"name\"] if len(r[\"for\"])>0 else \"\" for r in all_dimensions],\n \"open_access_versions\":[r[\"open_access_versions\"] for r in all_dimensions]\n})",
"_____no_output_____"
],
[
"df_dimensions.head()",
"_____no_output_____"
],
[
"df_dimensions.pmid = df_dimensions.pmid.astype(float)",
"_____no_output_____"
],
[
"df_dimensions.shape",
"_____no_output_____"
],
[
"df_joined_doi = df_cord19[pd.notnull(df_cord19.doi)].merge(df_dimensions[pd.notnull(df_dimensions.doi)], how=\"inner\", left_on=\"doi\", right_on=\"doi\")",
"_____no_output_____"
],
[
"df_joined_doi.shape",
"_____no_output_____"
],
[
"df_joined_pmid = df_cord19[pd.isnull(df_cord19.doi) & pd.notnull(df_cord19.pmid)].merge(df_dimensions[pd.isnull(df_dimensions.doi) & pd.notnull(df_dimensions.pmid)], how=\"inner\", left_on=\"pmid\", right_on=\"pmid\")",
"_____no_output_____"
],
[
"df_joined_pmid.shape",
"_____no_output_____"
],
[
"df_joined = pd.concat([df_joined_doi,df_joined_pmid])",
"_____no_output_____"
],
[
"# nearly all publications from CORD-19 are in Dimensions",
"_____no_output_____"
],
[
"df_joined.shape",
"_____no_output_____"
],
[
"df_cord19.shape",
"_____no_output_____"
],
[
"# publication type\n\ndf_sub = df_joined[df_joined.publication_type.isin(df_joined.publication_type.value_counts()[:10].index.tolist())]\nb = sns.countplot(y=\"publication_type\", data=df_sub, order=df_sub['publication_type'].value_counts().index)\n#b.axes.set_title(\"Title\",fontsize=50)\nb.set_xlabel(\"Publication count\",fontsize=15)\nb.set_ylabel(\"Publication type\",fontsize=15)\nb.tick_params(labelsize=12)\nplt.tight_layout()\nplt.savefig(\"figures/dim_pub_type.pdf\")",
"_____no_output_____"
]
],
[
[
"#### Citation counts",
"_____no_output_____"
]
],
[
[
"# scatter of citations vs time of publication\n\nsns.scatterplot(df_joined.publication_year.to_list(),df_joined.times_cited.to_list())\nplt.xlabel(\"Publication year\", fontsize=15)\nplt.ylabel(\"Citation count\", fontsize=15)\nplt.tight_layout()\nplt.savefig(\"figures/dim_citations_year.png\")",
"_____no_output_____"
],
[
"# most cited papers\n\ndf_joined[[\"title\",\"times_cited\",\"relative_citation_ratio\",\"journal\",\"publication_year\",\"doi\"]].sort_values(\"times_cited\",ascending=False).head(20)",
"_____no_output_____"
],
[
"# same but in 2020; note that duplicates are due to SI or pre-prints with different PMIDs\n\ndf_joined[df_joined.publication_year>2019][[\"title\",\"times_cited\",\"relative_citation_ratio\",\"journal\",\"publication_year\",\"doi\"]].sort_values(\"times_cited\",ascending=False).head(10)",
"_____no_output_____"
],
[
"# most cited journals\n\ndf_joined[['journal','times_cited']].groupby('journal').sum().sort_values('times_cited',ascending=False).head(20)",
"_____no_output_____"
]
],
[
[
"#### Categories",
"_____no_output_____"
]
],
[
[
"# FOR jeywords distribution, TOP\n\ndf_sub = df_joined[df_joined.for_top.isin(df_joined.for_top.value_counts()[:10].index.tolist())]\nb = sns.countplot(y=\"for_top\", data=df_sub, order=df_sub['for_top'].value_counts().index)\n#b.axes.set_title(\"Title\",fontsize=50)\nb.set_xlabel(\"Publication count\",fontsize=15)\nb.set_ylabel(\"FOR first level\",fontsize=15)\nb.tick_params(labelsize=12)\nplt.tight_layout()\nplt.savefig(\"figures/dim_for_top.pdf\")",
"_____no_output_____"
],
[
"# FOR jeywords distribution, TOP\n\ndf_sub = df_joined[df_joined.for_bottom.isin(df_joined.for_bottom.value_counts()[:10].index.tolist())]\nb = sns.countplot(y=\"for_bottom\", data=df_sub, order=df_sub['for_bottom'].value_counts().index)\n#b.axes.set_title(\"Title\",fontsize=50)\nb.set_xlabel(\"Publication count\",fontsize=15)\nb.set_ylabel(\"FOR second level\",fontsize=15)\nb.tick_params(labelsize=12)\nplt.tight_layout()\nplt.savefig(\"figures/dim_for_bottom.pdf\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb850cda1ee4185437c30c1c3b5b4b1a375ede9b
| 278,678 |
ipynb
|
Jupyter Notebook
|
benchmarking/case_studies/case_studies.ipynb
|
justinshaffer/DEICODE
|
b7cd4da09c993bdd9ab536b1a5919dbc28d2b9ca
|
[
"BSD-3-Clause"
] | null | null | null |
benchmarking/case_studies/case_studies.ipynb
|
justinshaffer/DEICODE
|
b7cd4da09c993bdd9ab536b1a5919dbc28d2b9ca
|
[
"BSD-3-Clause"
] | null | null | null |
benchmarking/case_studies/case_studies.ipynb
|
justinshaffer/DEICODE
|
b7cd4da09c993bdd9ab536b1a5919dbc28d2b9ca
|
[
"BSD-3-Clause"
] | null | null | null | 351.422446 | 245,304 | 0.914733 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport os\n#import data\nfrom biom import load_table\nfrom gneiss.util import match\n#deicode\nfrom deicode.optspace import OptSpace\nfrom deicode.preprocessing import rclr\nfrom deicode.ratios import log_ratios\n#skbio\nimport warnings; warnings.simplefilter('ignore') #for PCoA warning\nfrom skbio import DistanceMatrix\nfrom skbio.stats.ordination import pcoa\nfrom scipy.stats import pearsonr\nfrom matplotlib import cm\nfrom skbio.stats.composition import clr,centralize\n#plotting\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport matplotlib.gridspec as gridspec\nfrom matplotlib import ticker\nimport matplotlib.colors as mcolors\nplt.style.use('seaborn-paper')\npaper_rc = {'lines.linewidth': 1.5} \nsns.set_context(\"paper\", rc = paper_rc) \nplt.rcParams[\"axes.labelsize\"] = 25\nplt.rcParams['xtick.labelsize'] = 25\nplt.rcParams['ytick.labelsize'] = 25\n\n\ndef plot_pcoa(samples, md, ax, factor_, colors_map):\n \"\"\" \n Parameters\n ----------\n samples : pd.DataFrame\n Contains PCoA coordinates\n md : pd.Dataframe\n Metadata object\n ax : matplotlib.Axes\n Contains matplotlib axes object\n \"\"\"\n classes=np.sort(list(set(md[factor_].values)))\n cmap_out={}\n for sub_class,color_ in zip(classes,colors_map):\n idx = md[factor_] == sub_class \n ax.scatter(samples.loc[idx, 'PC1'],\n samples.loc[idx, 'PC2'], \n label=sub_class.replace('stressed','Stressed'),\n facecolors=color_,\n edgecolors=color_,\n alpha=.8,linewidth=3) \n cmap_out[sub_class]=color_\n ax.grid()\n ax.spines['right'].set_visible(False)\n ax.spines['top'].set_visible(False)\n ax.set_xticks([])\n ax.set_yticks([])\n ax.set_xlabel('PC1',fontsize=15)\n ax.set_ylabel('PC2',fontsize=15)\n \n return ax,cmap_out\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Case Study Benchmark Sub-sample",
"_____no_output_____"
]
],
[
[
"# store info\nboth_perm_res={}\nboth_perm_res['Sponges']=pd.read_csv('subsample_results/Sponges_health_status_fstat.csv', index_col=[0,1,2])\nboth_perm_res['Sleep_Apnea']=pd.read_csv('subsample_results/Sleep_Apnea_exposure_type_fstat.csv', index_col=[0,1,2])\nboth_nn={}\nboth_nn['Sponges']=pd.read_csv('subsample_results/Sponges_health_status_classifier.csv', index_col=[0,1,2])\nboth_nn['Sleep_Apnea']=pd.read_csv('subsample_results/Sleep_Apnea_exposure_type_classifier.csv', index_col=[0,1,2])\nfactor={}\nfactor['Sponges']='health_status'\nfactor['Sleep_Apnea']='exposure_type'\n\n#clean up the dataframes\nrename_m={'Bray_Curtis':'Bray-Curtis',\n 'GUniFrac_Alpha_Half':'Generalized UniFrac $\\\\alpha$=0.5',\n 'GUniFrac_Alpha_One':'Generalized UniFrac $\\\\alpha$=1.0',\n 'GUniFrac_Alpha_Zero':'Generalized UniFrac $\\\\alpha$=0.0',\n 'Jaccard':'Jaccard',\n 'Robust_Aitchison':'Robust Aitchison'}\n#colors to use later\ncolors_={'Bray-Curtis':'#1f78b4',\n 'Generalized UniFrac $\\\\alpha$=0.5':'#e31a1c',\n 'Generalized UniFrac $\\\\alpha$=1.0':'#984ea3',\n 'Generalized UniFrac $\\\\alpha$=0.0':'#ff7f00',\n 'Jaccard':'#e6ab02',\n 'Robust Aitchison':'#33a02c'}\n\n\nfor dataset_,results_permanova in both_perm_res.items():\n df_ = pd.DataFrame(results_permanova.copy().stack())\n df_.reset_index(inplace=True)\n df_.columns = ['Fold','N-Samples','Metric','Method','Values']\n df_['Method'] = [rename_m[x] for x in df_.Method]\n df_=df_[df_['N-Samples']>=70]\n both_perm_res[dataset_]=df_[df_.Metric.isin(['test statistic'])]\n \nfor dataset_,results_nn in both_nn.items():\n df_ = pd.DataFrame(results_nn.copy().stack())\n df_.reset_index(inplace=True)\n df_.columns = ['Fold','N-Samples','Metric','Method','Values']\n df_['Method'] = [rename_m[x] for x in df_.Method]\n df_=df_[df_['N-Samples']>=70]\n both_nn[dataset_]=df_[df_.Metric.isin(['R^{2}'])]\n \n",
"_____no_output_____"
]
],
[
[
"# Figure 3",
"_____no_output_____"
]
],
[
[
"plt.rcParams[\"axes.labelsize\"] = 14\nplt.rcParams['xtick.labelsize'] = 14\nplt.rcParams['ytick.labelsize'] = 14\ncolors_map=['#1f78b4','#e31a1c']\nsubpath_sp='sub_sample/biom_tables_Sponges'\nsubpath_sl='sub_sample/biom_tables_Sleep_Apnea'\n\nfontsize_ = 18\n\nfig = plt.figure(figsize=(20, 15), facecolor='white')\ngs = gridspec.GridSpec(300, 240)\nx_1=10+45\nx_2=x_1+10\nx_3=x_2+45\n\nx_4=x_3+30\nx_5=x_4+45\nx_6=x_5+10\nx_7=x_6+45\n\n\n# benchmarking (clasification)\nfstat_ax1 = plt.subplot(gs[:50, 10:x_1])\nclasif_ax2 = plt.subplot(gs[:50:, x_2:x_3])\nfstat_ax3 = plt.subplot(gs[:50, x_4:x_5])\nclasif_ax4 = plt.subplot(gs[:50:, x_6:x_7])\n# RPCA\nRPCA_ax1 = plt.subplot(gs[100:145, 10:x_1])\nRPCA_ax2 = plt.subplot(gs[100:145:, x_2:x_3])\nRPCA_ax3 = plt.subplot(gs[100:145, x_4:x_5])\nRPCA_ax4 = plt.subplot(gs[100:145:, x_6:x_7])\n# WUNI\nWUNI_ax1 = plt.subplot(gs[175:220, 10:x_1])\nWUNI_ax2 = plt.subplot(gs[175:220:, x_2:x_3])\nWUNI_ax3 = plt.subplot(gs[175:220, x_4:x_5])\nWUNI_ax4 = plt.subplot(gs[175:220:, x_6:x_7])\n# BC\nBC_ax1 = plt.subplot(gs[240:285, 10:x_1])\nBC_ax2 = plt.subplot(gs[240:285:, x_2:x_3])\nBC_ax3 = plt.subplot(gs[240:285, x_4:x_5])\nBC_ax4 = plt.subplot(gs[240:285:, x_6:x_7])\n\n# plot benchmarking\nfstat_ax1.set_title('PERMANOVA F-statistic', fontsize=fontsize_)\nsns.pointplot(x='N-Samples',y='Values',hue='Method',\n data=both_perm_res['Sponges'].sort_values('Method',ascending=False),\n palette=colors_, ci=0, ax=fstat_ax1)\nfstat_ax1.legend_.remove()\n\nclasif_ax2.set_title('KNN Classification Accuracy', fontsize=fontsize_)\nsns.pointplot(x='N-Samples',y='Values',hue='Method',\n data=both_nn['Sponges'].sort_values('Method',ascending=False),\n palette=colors_, ci=0, ax=clasif_ax2)\nclasif_ax2.legend(loc=2, \n bbox_to_anchor=(-1.3, 1.95),\n prop={'size':26},\n fancybox=True, framealpha=0.5,ncol=4\n , markerscale=2, facecolor=\"grey\")\n\n\nfstat_ax3.set_title('PERMANOVA F-statistic', fontsize=fontsize_)\nsns.pointplot(x='N-Samples',y='Values',hue='Method',\n data=both_perm_res['Sleep_Apnea'].sort_values('Method',ascending=False),\n palette=colors_, ci=0, ax=fstat_ax3)\nfstat_ax3.legend_.remove()\n\n\nclasif_ax4.set_title('KNN Classification Accuracy', fontsize=fontsize_)\nsns.pointplot(x='N-Samples',y='Values',hue='Method',\n data=both_nn['Sleep_Apnea'].sort_values('Method',ascending=False),\n palette=colors_, ci=0, ax=clasif_ax4)\nclasif_ax4.legend_.remove()\n\n\nfstat_ax1.set_ylabel('')\nclasif_ax2.set_ylabel('')\nfstat_ax3.set_ylabel('')\nclasif_ax4.set_ylabel('')\n\n# set titles for case-study \nfstat_ax1.annotate('Sponges',(2.5,770), \n annotation_clip=False,\n fontsize=fontsize_+20)\nfstat_ax3.annotate('Sleep Apnea',(2,168), \n annotation_clip=False,\n fontsize=fontsize_+20)\n\n# 30 samples total sponge\nmeta_ = pd.read_table(os.path.join(subpath_sp,'1_70','metadata.tsv'), index_col=0)\nrpca_tmp = pd.read_table(os.path.join(subpath_sp,'1_70','Robust_Aitchison_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nrpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]\nrpca_tmp.index=meta_.index\n\nbc_tmp = pd.read_table(os.path.join(subpath_sp,'1_70','Bray_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nbc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]\nbc_tmp.index=meta_.index\n\nwun_tmp = pd.read_table(os.path.join(subpath_sp,'1_70','GUniFrac_alpha_one_Distance.tsv'), \n 
index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nwun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]\nwun_tmp.index=meta_.index\n\nplot_pcoa(rpca_tmp, meta_, RPCA_ax1, factor['Sponges'], colors_map)\nplot_pcoa(wun_tmp, meta_, WUNI_ax1, factor['Sponges'], colors_map)\nplot_pcoa(bc_tmp, meta_, BC_ax1, factor['Sponges'], colors_map)\n\nRPCA_ax1.legend(loc=2, \n bbox_to_anchor=(0, 1.8),\n prop={'size':26},\n fancybox=True, framealpha=0.5,ncol=2\n , markerscale=2, facecolor=\"grey\")\n\n\nRPCA_ax1.set_title('RPCA (70-Samples)', fontsize=fontsize_)\nWUNI_ax1.set_title('W-UniFrac (70-Samples)', fontsize=fontsize_)\nBC_ax1.set_title('Bray-Curtis (70-Samples)', fontsize=fontsize_)\n\n# 30 samples total sleep\nmeta_ = pd.read_table(os.path.join(subpath_sl,'1_70','metadata.tsv'), index_col=0)\nrpca_tmp = pd.read_table(os.path.join(subpath_sl,'1_70','Robust_Aitchison_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nrpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]\nrpca_tmp.index=meta_.index\n\nbc_tmp = pd.read_table(os.path.join(subpath_sl,'1_70','Bray_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nbc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]\nbc_tmp.index=meta_.index\n\nwun_tmp = pd.read_table(os.path.join(subpath_sl,'1_70','GUniFrac_alpha_one_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nwun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]\nwun_tmp.index=meta_.index\n\nplot_pcoa(rpca_tmp, meta_, RPCA_ax3, factor['Sleep_Apnea'], colors_map)\nplot_pcoa(wun_tmp, meta_, WUNI_ax3, factor['Sleep_Apnea'], colors_map)\nplot_pcoa(bc_tmp, meta_, BC_ax3, factor['Sleep_Apnea'], colors_map)\n\n\nRPCA_ax3.set_title('RPCA (70-Samples)', fontsize=fontsize_)\nWUNI_ax3.set_title('W-UniFrac (70-Samples)', fontsize=fontsize_)\nBC_ax3.set_title('Bray-Curtis (70-Samples)', fontsize=fontsize_)\n\nRPCA_ax3.legend(loc=2, \n bbox_to_anchor=(0.25, 1.8),\n prop={'size':26},\n fancybox=True, framealpha=0.5,ncol=2\n , markerscale=2, facecolor=\"grey\")\n\n# max samp sponge \nmeta_ = pd.read_table(os.path.join(subpath_sp,'1_158','metadata.tsv'), index_col=0)\nrpca_tmp = pd.read_table(os.path.join(subpath_sp,'1_158','Robust_Aitchison_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nrpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]\nrpca_tmp.index=meta_.index\n\nbc_tmp = pd.read_table(os.path.join(subpath_sp,'1_158','Bray_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nbc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]\nbc_tmp.index=meta_.index\n\nwun_tmp = pd.read_table(os.path.join(subpath_sp,'1_158','GUniFrac_alpha_one_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nwun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]\nwun_tmp.index=meta_.index\n\nplot_pcoa(rpca_tmp, meta_, RPCA_ax2, factor['Sponges'], colors_map)\nplot_pcoa(wun_tmp, meta_, WUNI_ax2, factor['Sponges'], colors_map)\nplot_pcoa(bc_tmp, meta_, BC_ax2, factor['Sponges'], colors_map)\n\nRPCA_ax2.set_title('RPCA (158-Samples)', fontsize=fontsize_)\nWUNI_ax2.set_title('W-UniFrac (158-Samples)', fontsize=fontsize_)\nBC_ax2.set_title('Bray-Curtis (158-Samples)', fontsize=fontsize_)\n\n\n# max samp sleep \nmeta_ = pd.read_table(os.path.join(subpath_sl,'1_184','metadata.tsv'), 
index_col=0)\nrpca_tmp = pd.read_table(os.path.join(subpath_sl,'1_184','Robust_Aitchison_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nrpca_tmp=pcoa(DistanceMatrix(rpca_tmp)).samples[['PC1','PC2']]\nrpca_tmp.index=meta_.index\n\nbc_tmp = pd.read_table(os.path.join(subpath_sl,'1_184','Bray_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nbc_tmp=pcoa(DistanceMatrix(bc_tmp)).samples[['PC1','PC2']]\nbc_tmp.index=meta_.index\n\nwun_tmp = pd.read_table(os.path.join(subpath_sl,'1_184','GUniFrac_alpha_one_Distance.tsv'), \n index_col=0,low_memory=False).reindex(index=meta_.index,columns=meta_.index)\nwun_tmp=pcoa(DistanceMatrix(wun_tmp)).samples[['PC1','PC2']]\nwun_tmp.index=meta_.index\n\nplot_pcoa(rpca_tmp, meta_, RPCA_ax4, factor['Sleep_Apnea'], colors_map)\nplot_pcoa(wun_tmp, meta_, WUNI_ax4, factor['Sleep_Apnea'], colors_map)\nplot_pcoa(bc_tmp, meta_, BC_ax4, factor['Sleep_Apnea'], colors_map)\n\nRPCA_ax4.set_title('RPCA (184-Samples)', fontsize=fontsize_)\nWUNI_ax4.set_title('W-UniFrac (184-Samples)', fontsize=fontsize_)\nBC_ax4.set_title('Bray-Curtis (184-Samples)', fontsize=fontsize_)\n\nfig.savefig('figures/figure4.png',dpi=300, \n bbox_inches='tight',facecolor='white')\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"# Figure 4",
"_____no_output_____"
]
],
[
[
"from numpy.polynomial.polynomial import polyfit\n\n\ndef plot_biplot(samples, md, ax, factor_, y_axis_, x_axis, regcol,colors_map=['#1f78b4','#e31a1c']):\n \"\"\" \n Parameters\n ----------\n samples : pd.DataFrame\n Contains PCoA coordinates\n md : pd.Dataframe\n Metadata object\n ax : matplotlib.Axes\n Contains matplotlib axes object\n \"\"\"\n cmap_out={}\n classes=np.sort(list(set(md[factor_].values)))\n for sub_class,color_ in zip(classes,colors_map):\n idx = md[factor_] == sub_class\n ax.scatter(samples.loc[idx, y_axis_],\n samples.loc[idx, x_axis], \n label=sub_class.replace('stressed','Stressed'),\n facecolors=color_,\n edgecolors=color_,\n alpha=.8,linewidth=3) \n cmap_out[sub_class]=color_\n \n \n fit_=samples.dropna(subset=[y_axis_,x_axis])\n x=fit_.loc[:, y_axis_]\n y=fit_.loc[:, x_axis]\n # Fit with polyfit\n b, m = polyfit(x, y, 1)\n ax.plot(x, b + m * x, '-', lw=2, color=regcol, label='_nolegend_')\n\n \n ax.grid()\n ax.spines['right'].set_visible(False)\n ax.spines['top'].set_visible(False)\n ax.set_yticks([])\n ax.xaxis.set_tick_params(labelsize=20)\n return ax,cmap_out\n\nclass MidpointNormalize(colors.Normalize):\n def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):\n self.midpoint = midpoint\n colors.Normalize.__init__(self, vmin, vmax, clip)\n\n def __call__(self, value, clip=None):\n x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]\n return np.ma.masked_array(np.interp(value, x, y))\n",
"_____no_output_____"
],
[
"from biom.util import biom_open\nfrom biom import load_table\nfrom gneiss.util import match\nfrom skbio.stats.ordination import OrdinationResults\n\nlr_datasets = {}\nfor dataset_,sub_ in zip(['Sponges','Sleep_Apnea'],\n ['biom_tables_Sponges/1_248','biom_tables_Sleep_Apnea/1_184']):\n lr_datasets[dataset_]={}\n \n # get table\n in_biom = 'sub_sample/'+sub_+'/table.biom'\n table = load_table(in_biom)\n table = table.to_dataframe().T\n #remove few read sotus to help lr\n table = table.T[table.sum()>50].T \n\n # get ordination file\n in_ord = 'sub_sample/'+sub_+'/RPCA_Ordination.txt'\n sample_loading = OrdinationResults.read(in_ord).samples\n feat_loading = OrdinationResults.read(in_ord).features\n\n # taxonomy file\n tax_col = ['kingdom', 'phylum', 'class', 'order',\n 'family', 'genus', 'species']\n taxon = pd.read_table('data/'+dataset_+'/taxonomy.tsv',index_col=0)\n taxon = {i:pd.Series(j) for i,j in taxon['Taxon'].str.split(';').items()}\n taxon = pd.DataFrame(taxon).T\n taxon.columns = tax_col\n\n # metadata\n meta = pd.read_table('sub_sample/'+sub_+'/metadata.tsv',index_col=0)\n\n #match em\n table,meta=match(table,meta)\n table,taxon=match(table.T,taxon)\n feat_loading,table=match(feat_loading,table)\n sample_loading,table=match(sample_loading,table.T)\n table,meta=match(table,meta)\n\n #sort em\n table=table.T.sort_index().T\n feat_loading=feat_loading.reindex(index=table.columns)\n taxon=taxon.reindex(index=table.columns)\n \n # relabel otus \n oturelabel=['sOTU'+str(i) for i in range(len(feat_loading.index))] \n feat_loading.index = oturelabel\n taxa_mapback={ind_:[oturelabel[count_]] for count_,ind_ in enumerate(taxon.index)}\n #taxon['sequence'] = taxon.index\n taxon.index = feat_loading.index\n table.columns = feat_loading.index\n \n lr_datasets[dataset_]['maptaxa']=taxa_mapback\n lr_datasets[dataset_]['table']=table.copy()\n lr_datasets[dataset_]['meta']=meta.copy()\n lr_datasets[dataset_]['fl']=feat_loading.copy()\n lr_datasets[dataset_]['sl']=sample_loading.copy()\n lr_datasets[dataset_]['taxon']=taxon.copy()\n",
"_____no_output_____"
],
[
"for dataset_,axsor in zip(['Sponges','Sleep_Apnea'],[0,1]):\n \n\n tabletmp_=lr_datasets[dataset_]['table'].copy()\n metatmp_=lr_datasets[dataset_]['meta'].copy()\n fltmp=lr_datasets[dataset_]['fl'].copy()\n sltmp=lr_datasets[dataset_]['sl'].copy()\n txtmp=lr_datasets[dataset_]['taxon'].copy()\n\n # remove some sparsity to get better lr\n tabletmp_=tabletmp_.T[tabletmp_.sum()>50].T\n\n #match em\n tabletmp_,metatmp_=match(tabletmp_,metatmp_)\n tabletmp_,txtmp=match(tabletmp_.T,txtmp)\n fltmp,tabletmp_=match(fltmp,tabletmp_)\n sltmp,tabletmp_=match(sltmp,tabletmp_.T)\n tabletmp_,metatmp_=match(tabletmp_,metatmp_)\n\n #save em\n fltmp_=fltmp.copy()\n fltmp_.columns = ['PC1','PC2','PC3'][:len(fltmp_.columns)]\n savem_=pd.concat([fltmp_,txtmp],axis=1)\n savem_ = savem_.sort_values('PC1')\n savem_.to_csv(dataset_+'_ranking.tsv',sep='\\t')\n \n #sort em\n tabletmp_=tabletmp_.T.sort_index().T\n fltmp=fltmp.reindex(index=tabletmp_.columns)\n txtmp=txtmp.reindex(index=tabletmp_.columns)\n\n \n #table_clean = Table(tabletmp_.T.values, \n # tabletmp_.T.index, \n # tabletmp_.T.columns)\n\n #with biom.util.biom_open(dataset_+'_table.biom', 'w') as f:\n # table_clean.to_hdf5(f, \"filtered\")\n\n #metatmp_.to_csv(dataset_+'_metadata.tsv',sep='\\t')\n\n\n logdf_tmp=log_ratios(tabletmp_.copy(), \n fltmp.copy(),\n sltmp.copy()\n ,taxa_tmp=txtmp\n ,axis_sort=axsor\n ,N_show=int(int(len(txtmp.index)/2)-5))\n\n lr_datasets[dataset_]['lr']=pd.concat([logdf_tmp,metatmp_],axis=1).copy()\n lr_datasets[dataset_]['fl_sub']=fltmp.copy()*-1\n lr_datasets[dataset_]['table_sub']=tabletmp_.copy()\n lr_datasets[dataset_]['meta_sub']=metatmp_.copy()\n\n\n",
"_____no_output_____"
],
[
"plt.rcParams[\"axes.labelsize\"] = 14\nplt.rcParams['xtick.labelsize'] = 14\nplt.rcParams['ytick.labelsize'] = 14\ncolors_map=['#1f78b4','#e31a1c']\nsubpath_sp='sub_sample/biom_tables_Sponges'\nsubpath_sl='sub_sample/biom_tables_Sleep_Apnea'\n\nfontsize_ = 18\n\nfig = plt.figure(figsize=(15, 18), facecolor='white')\ngs = gridspec.GridSpec(205, 100)\n\n#biplots\nax1 = plt.subplot(gs[95:140, :45])\nax2 = plt.subplot(gs[95:140, 55:])\nax3 = plt.subplot(gs[160:, :45])\nax4 = plt.subplot(gs[160:, 55:])\n# heatmaps\nax9 = plt.subplot(gs[:35, 3:44])\nax11 = plt.subplot(gs[:35, 58:99])\n# heatmap bars\nax10 = plt.subplot(gs[:35, :3])\nax12 = plt.subplot(gs[:35, 55:58])\n# heat map color bars\nax7 = plt.subplot(gs[:35, 44:45])\nax8 = plt.subplot(gs[:35, 99:100])\n# rankings\nax5 = plt.subplot(gs[40:90, :45])\nax6 = plt.subplot(gs[40:90, 55:])\n\n# iterating variables \naxn=[ax1,ax2,ax3,ax4]\naxbar=[ax5,ax6,ax5,ax6]\naxmap=[ax9,ax11,ax9,ax1]\naxmapbar=[ax10,ax12,ax10,ax12]\naxmacbar=[ax7,ax8,ax7,ax8]\nregcolor=['#969696','#969696','#969696','#969696']\ndtst=['Sponges','Sleep_Apnea',\n 'Sponges','Sleep_Apnea']\nlrs=['log(\\\\dfrac{Synechococcophycideae( c)_{ID:sOTU984}}{Nitrosopumilus( g)_{ID:sOTU14}})',\n 'log(\\\\dfrac{Coriobacteriaceae( f)_{ID:sOTU258}}{Clostridium( g)_{ID:sOTU133}})',\n 'log(\\\\dfrac{Bacteria(k)_{ID:sOTU1224}}{A4b( f)_{ID:sOTU30}})',\n 'log(\\\\dfrac{Ruminococcus( g)_{ID:sOTU124}}{Clostridiales( o)_{ID:sOTU256}})']\nys=[0,1,0,1]\ntext_loc=[[(.21,.1),(.8,.85)], \n [(.24,.1),(.84,.85)],\n [(.40,.3),(.7,.65)],\n [(.35,.3),(.7,.65)]] \n\narrow_loc=[[(0.015,.2),(.99,.81)], \n [(0.015,.2),(.99,.81)],\n [(.365,.4),(.63,.61)],\n [(.31,.4),(.685,.62)]] \n\nfor (count_,ax_),dataset_,lr_,y_tmp,x_,X_arrow in zip(enumerate(axn), \n dtst,lrs,ys,text_loc,arrow_loc):\n \n logdf_tmp=lr_datasets[dataset_]['lr']\n factor_=factor[dataset_]\n _,cmap_out=plot_biplot(logdf_tmp, logdf_tmp, \n ax_, factor_, lr_, y_tmp, regcolor[count_])\n ax_.set_ylabel('PC1',fontsize=16) \n #r^2\n logdf_tmp_p=logdf_tmp.dropna(subset=[lr_,y_tmp])\n r_=pearsonr(logdf_tmp_p[lr_].values,logdf_tmp_p[y_tmp].values)\n r_=np.around(r_[0],2)\n ax_.annotate('$R^{2}$='+str(abs(r_)),(.7,.80), \n xycoords='axes fraction',\n fontsize=22,bbox=dict(facecolor='lightgray',\n edgecolor='None',alpha=1.0))\n #fix axis\n \n \n \n X_1=lr_.split('{')[2].replace('}','').replace('ID:','')\n Y_1=lr_.split('{')[4].replace('}','').replace('ID:','').replace(')','')\n lr_title_=lr_.replace(X_1,'').replace(Y_1,'').replace('_{ID:}','').replace(' ','').replace('Bacteria(k)','Cereibacter(g)').replace('A4b(f)','Methylonatrum(g)').replace('Synechococcophycideae(c)','Synechococcus(g)') \n X_1_sp=lr_title_.split('{')[1].replace('}','').replace('Bacteria(k)','Cereibacter(g)').replace('Synechococcophycideae(c)','Synechococcus(g)')\n Y_1_sp=lr_title_.split('{')[2].replace('})','').replace('A4b(f)','Methylonatrum(g)')\n #'sOTU1224'->'Cereibacter(g)' by blast\n #'sOTU30'->'Methylonatrum(g)' by blast\n \n ax_.set_xlabel('$'+lr_title_+'$',fontsize=16)\n \n \n ## barplot\n axbar_=axbar[count_] #bar\n fltmp=lr_datasets[dataset_]['fl_sub'].sort_values(y_tmp,ascending=False)\n fltmp=fltmp[abs(fltmp[y_tmp])>0.8]\n ind = np.arange(fltmp.shape[0])\n nxy_=list(fltmp[~fltmp.index.isin([X_1,Y_1])].index)\n fltmp_bars=fltmp.copy()\n fltmp_bars.loc[nxy_,y_tmp]=0\n fltmp_bars['group']=((fltmp_bars[y_tmp]<0).astype(int)*-1)+(fltmp_bars[y_tmp]>0).astype(int)\n colorsmap={0:'#a6cee3',1:'#1f78b4',-1:'#e41a1c'}\n 
fltmp_bars[y_tmp].plot(kind='bar',color=list(fltmp_bars['group'].map(colorsmap))\n ,width=int(len(fltmp_bars)/100)*2,\n ax=axbar_)\n axbar_.annotate(X_1_sp,\n x_[0],\n fontsize=16, ha='center',\n xycoords='axes fraction',\n bbox=dict(facecolor='#1f78b4',\n edgecolor='None',alpha=.2))\n \n axbar_.annotate('', xy=X_arrow[0], xycoords='axes fraction',\n ha='center', xytext=(X_arrow[0][0],0.51), \n arrowprops=dict(arrowstyle=\"<-\", color='#1f78b4', lw=5,alpha=.8))\n \n \n axbar_.annotate(Y_1_sp,\n x_[1],\n fontsize=16, ha='center',\n xycoords='axes fraction',\n bbox=dict(facecolor='#e41a1c',\n edgecolor='None',alpha=.2))\n\n axbar_.annotate('', xy=X_arrow[1], xycoords='axes fraction',\n ha='center', xytext=(X_arrow[1][0],0.48), \n arrowprops=dict(arrowstyle=\"<-\", color='#e41a1c', lw=5,alpha=.8))\n \n \n color_map_con = cm.Greys(np.linspace(0,1,len(fltmp)))\n fltmp[y_tmp].plot(kind='area',color='black',stacked=False,ax=axbar_)\n axbar_.axhline(0,c='black',lw=1,ls='-')\n axbar_.set_ylim(-10, 10)\n axbar_.set_xticks([])\n \n if count_ in [0,1]:\n ax_.legend(loc='upper center', \n bbox_to_anchor=(0.5, 3.4),\n prop={'size':22},\n fancybox=True, framealpha=0.5,ncol=2\n , markerscale=2, facecolor=\"grey\")\n \n # set titles for case-study \n ax_.annotate(dataset_.replace('_',' '),(0.5,3.5), \n annotation_clip=False,ha='center',\n xycoords='axes fraction',\n fontsize=33)\n \n # plot map\n colors_map=['#1f78b4','#e31a1c']\n table_tmp=lr_datasets[dataset_]['table_sub'].copy()\n sort_meta=lr_datasets[dataset_]['meta_sub'][factor[dataset_]].sort_values()\n sorted_df = table_tmp.reindex(index=sort_meta.index, columns=fltmp.index)\n sorted_df = sorted_df.loc[:, sorted_df.sum(axis=0) > 10] #make clusters more evident \n img = axmap[count_].imshow(clr(centralize(sorted_df+1)), aspect='auto', \n norm=MidpointNormalize(midpoint=0.),\n interpolation='nearest', cmap='PiYG')\n axmap[count_].set_xticks([])\n axmap[count_].set_yticks([])\n \n # add color bar \n fig.colorbar(img, cax=axmacbar[count_])\n axmacbar[count_].tick_params(labelsize=8) \n\n\n # color map-bars\n unique_values = sorted(set(sort_meta.values)) \n colors_map=list(cmap_out.values())\n vmap = { c : i for i, c in enumerate(unique_values) }\n mapper = lambda t: vmap[str(t)]\n cmap_object = mcolors.LinearSegmentedColormap.from_list('custom', colors_map, N=len(colors_map))\n sns.heatmap(pd.DataFrame(sort_meta).applymap(mapper), \n cmap=cmap_object,ax=axmapbar[count_],\n yticklabels=False,xticklabels=False,cbar=False,alpha=.6)\n axmapbar[count_].set_xlabel('')\n axmapbar[count_].set_ylabel('')\n\nfig.savefig('figures/figure5.png',dpi=300, \n bbox_inches='tight',facecolor='white')\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb850d656000a15d42c9088f690a73ef51322d99
| 393,583 |
ipynb
|
Jupyter Notebook
|
examples/rps_oscillator.ipynb
|
UnHumbleBen/population-protocols-python-package
|
eb39f4f5ac82d44fd5534fe72790fb36c5e7055c
|
[
"MIT"
] | null | null | null |
examples/rps_oscillator.ipynb
|
UnHumbleBen/population-protocols-python-package
|
eb39f4f5ac82d44fd5534fe72790fb36c5e7055c
|
[
"MIT"
] | null | null | null |
examples/rps_oscillator.ipynb
|
UnHumbleBen/population-protocols-python-package
|
eb39f4f5ac82d44fd5534fe72790fb36c5e7055c
|
[
"MIT"
] | null | null | null | 791.917505 | 206,444 | 0.946245 |
[
[
[
"from ppsim import Simulation, StatePlotter, time_trials\nimport numpy as np\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n# Either this backend or the qt backend is necessary to use the StatePlotter Snapshot object for dynamic visualization while the simulation runs\n%matplotlib notebook",
"_____no_output_____"
]
],
[
[
"# 3-state oscillator\n\nThe 3-state rock-paper-scissors protocol is a simple set of rules that gives oscillatory dynamics:",
"_____no_output_____"
]
],
[
[
"r, p, s = 'rock', 'paper', 'scissors'\nrps = {\n (r,s): (r,r),\n (p,r): (p,p),\n (s,p): (s,s)\n}",
"_____no_output_____"
]
],
[
[
"This rule has been studied in many different contexts, such as [evolutionary game theory](https://www.cambridge.org/core/books/evolutionary-games-and-population-dynamics/A8D94EBE6A16837E7CB3CED24E1948F8). \n\nThis exact protocol has also been [implemented experimentally using DNA strand displacement](https://science.sciencemag.org/content/358/6369/eaal2052)\n\n<img src=\"https://science.sciencemag.org/content/sci/358/6369/eaal2052/F1.large.jpg\" width=\"400\" />\n\nLet's take a look and what the dynamics do, starting from a uniform initial distribution:",
"_____no_output_____"
]
],
[
[
"def uniform_config(n):\n return {r: n // 3, p: n // 3, s: n // 3}\nn = 500\nsim = Simulation(uniform_config(n), rps)\nsim.run()\nsim.history.plot()",
"_____no_output_____"
]
],
[
[
"We can see the amplitude of the fluctuations varies until one species dies out. [Several](https://arxiv.org/abs/1001.5235) [papers](https://arxiv.org/abs/q-bio/0605042) have analyzed these dynamics using stochastic differential equations to get analytic estimates for this time to extinction.\n\n\n\nOne considers the more general case of varied reaction rates:\n\n\n\nWe can simulate that by adding varied probabilies associated to each interaction:",
"_____no_output_____"
]
],
[
[
"p_r, p_p, p_s = 0.9, 0.6, 0.3\nimbalanced_rps = {\n (r,s): {(r,r): p_r},\n (p,r): {(p,p): p_p},\n (s,p): {(s,s): p_s}\n}\n\nn = 1000\nsim = Simulation(uniform_config(n), imbalanced_rps)\nsim.run()\nsim.history.plot()",
"_____no_output_____"
]
],
[
[
"A [population protocols paper](https://hal.inria.fr/hal-01137486/document) gave some rigorous bounds on the behavior of this protocol.\nThey first showed that it belongs to a wider family of protocols that all become extinct in at most polynomial time:\n\n\n\nThey also show that for most initial configurations, the state we converge to is equally likely to be any of the three states, making the consensus decision a 'fair die roll':\n\n\n\nTaking a look a larger simulation, we can see that the time to extinction is scaling with the population size:",
"_____no_output_____"
]
],
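[
[
"# Sketch (not in the original notebook): empirically check the 'fair die roll' claim above\n# by running a number of small simulations to silence and recording which species survives.\n# The trial count (30) and population size (60) are arbitrary illustrative choices; this\n# assumes, as in the cells above, that run() with no arguments runs until silence.\nfrom collections import Counter\nwinners = Counter()\nfor _ in range(30):\n    s = Simulation(uniform_config(60), rps)\n    s.run()\n    cfg = s.config_dict\n    winners[max(cfg, key=cfg.get)] += 1\nprint(winners)",
"_____no_output_____"
]
],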
[
[
"n = 10 ** 4\nsim = Simulation(uniform_config(n), rps)\nsim.run()\nsim.history.plot()",
"_____no_output_____"
]
],
[
[
"Let's run some trials to get a sense of what this rate of growth might be.",
"_____no_output_____"
]
],
[
[
"ns = [int(n) for n in np.geomspace(50, 10 ** 3, 10)]\ndf = time_trials(rps, ns, uniform_config, num_trials=1000, max_wallclock_time = 60)\nfig, ax = plt.subplots()\nsns.lineplot(x='n', y='time', data=df, ax = ax)",
"_____no_output_____"
]
],
[
[
"This suggests that the population becomes silent in linear time. This makes it pretty expensive to get larger population sizes, so we would need to increase `max_wallclock_time` and be more patient if we wanted to get good data that spans more orders of magnitude. We can take a look at the distributions of the times for each population size, which shows the silence time is pretty heavy tailed, so there seems to be a lot of variance in how long it takes.",
"_____no_output_____"
]
],
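[
[
"# Sketch (not in the original notebook): quantify the growth rate suggested above by fitting\n# a line to median silence time versus n on log-log axes; a slope near 1 is consistent with\n# the population becoming silent in roughly linear time. Reuses the df built by time_trials.\nmedians = df.groupby('n')['time'].median()\nslope, intercept = np.polyfit(np.log(medians.index.values), np.log(medians.values), 1)\nprint('estimated exponent: {:.2f}'.format(slope))",
"_____no_output_____"
]
],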
[
[
"fig, ax = plt.subplots()\nax = sns.violinplot(x=\"n\", y=\"time\", data=df, palette=\"muted\", scale=\"count\")",
"_____no_output_____"
]
],
[
[
"# 7-state oscillator\n\nA 7-state variant of the rock-paper-scissors oscillator was defined in the paper [Universal Protocols for Information Dissemination Using Emergent Signals](https://arxiv.org/abs/1705.09798):\nThe first state added is a control state `x`. One of the goals of their paper was the detection problem, where they will detect the presence of `x` because of the way the presence or absence of even a single copy affect the global dynamics.\n\n`x` brings the other agent to a random state, which serves to bring the system toward the equilibrium of equal rock/paper/scissors and keeps any states from becoming extinct.",
"_____no_output_____"
]
],
[
[
"x = 'x'\nrpsx = {\n (r,s): (r,r),\n (p,r): (p,p),\n (s,p): (s,s),\n (x,r): {(x,r):1/3, (x,p): 1/3, (x,s): 1/3},\n (x,p): {(x,r):1/3, (x,p): 1/3, (x,s): 1/3},\n (x,s): {(x,r):1/3, (x,p): 1/3, (x,s): 1/3},\n}\nn = 100\n# Start with 1 copy of x in an otherwise silent configuration\ninit_config = {x: 1, r: n - 1}\nsim = Simulation(init_config, rpsx)\nsim.run(500)\nsim.history.plot()",
"_____no_output_____"
]
],
[
[
"The absence of `x` will take a relatively long time to have an effect in large populations, because we saw that the time to become silent is scaling linearly with population size. To speed this up, the 7-state oscillator adds a 'lazy' and 'aggressive' variant of each state, whose dynamics serve to more quickly reach extinction in the absence of `x`.\n\n\n\nWe can translate this pseudocode into a function that defines the rule:",
"_____no_output_____"
]
],
[
[
"# 7 states, the source, then 'rock', 'paper', 'scissors' in lazy '+' or aggressive '++' variants\nstates = ['x','0+','0++','1+','1++','2+','2++']\n\n# The protocol is one-way, only\ndef seven_state_oscillator(a, b, p):\n if p > 0.5:\n return ValueError('p must be at most 0.5.')\n # (5) The source converts any receiver into a lazy state of a uniformly random species:\n if a == 'x' and b != 'x':\n return {(a, str(i) + '+'): 1/3 for i in range(3)}\n if b == 'x':\n return\n # (1) Interaction with an initiator from the same species makes receiver aggressive:\n if a[0] == b[0]:\n return a, b[0] + '++'\n # (2) Interaction with an initator from a different species makes receiver lazy (case of no attack):\n if int(b[0]) == (int(a[0]) + 1) % 3:\n return a, b[0] + '+'\n # (3) A lazy initiator has a probability p of performing a successful attack on its prey:\n if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 2:\n return {(a, a[0] + '+'): p, (a, b[0] + '+'): 1-p}\n # (3) An aggressive initiator has a probability 2p of performing a successful attack on its prey:\n if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 3:\n return {(a, a[0] + '+'): 2*p, (a, b[0] + '+'): 1-2*p}",
"_____no_output_____"
],
[
"n = 10 ** 3\n# Start with 1 copy of x in an otherwise silent configuration\ninit_config = {x: 1, '0+': n - 1}\nsim = Simulation(init_config, seven_state_oscillator, p = 0.1)",
"_____no_output_____"
]
],
[
[
"We can confirm that we got the logic correct by looking at the reachable states and reactions:",
"_____no_output_____"
]
],
[
[
"print(sim.state_list)\nprint(sim.reactions)",
"_____no_output_____"
],
[
"sim.run(100 * int(np.log(n)))\nsim.history.plot()",
"_____no_output_____"
]
],
[
[
"Now we can try adding and removing `x` mid-simulation to verify that the starts and stops the oscillations.",
"_____no_output_____"
]
],
[
[
"n = 10 ** 7\n# Start with 1 copy of x in an otherwise silent configuration\ninit_config = {x: 1, '0+': n - 1}\nsim = Simulation(init_config, seven_state_oscillator, p = 0.1)\nsp = StatePlotter()\nsim.add_snapshot(sp)\nsp.ax.set_yscale('symlog')",
"_____no_output_____"
]
],
[
[
"If we want to watch the simulation in real time, we need to create this interactive figure before we tell it to run.",
"_____no_output_____"
]
],
[
[
"sim.run(100 * int(np.log(n)))\n# Remove the one copy of x\nprint('removing x')\nd = sim.config_dict\nd[x] = 0\nsim.set_config(d)\nsim.run(100 * int(np.log(n)))\n# Add back one copy of x\nprint('adding x')\nd = sim.config_dict\nd[x] = 1\nsim.set_config(d)\nsim.run(100 * int(np.log(n)))\n\nsim.history.plot()",
"_____no_output_____"
]
],
[
[
"Notice the simulator was able to skip right through the middle period because once the clock shut down, there were no applicable transitions.\n\n# Basis for a Phase Clock\n\nThis oscillator was used in the paper [Population Protocols are Fast](https://arxiv.org/abs/1802.06872) as the basis for a constant-state phase clock. For that, we need to create a small count of the signal `x`, and to be sure the clock keeps running, the count of `x` must stay positive. The simplest way to do this is simply to start with the entire population in state `x` and now allow multiple `x` to eliminate each other. This only requires changing one line of our protocol:",
"_____no_output_____"
]
],
[
[
"def seven_state_oscillator_leader_election(a, b, p):\n if p > 0.5:\n return ValueError('p must be at most 0.5.')\n # (5) The source converts any receiver into a lazy state of a uniformly random species:\n # Now this also applies to input pair (x, x), so the state x is doing simple leader election to eventually get down to one state\n if a == 'x':\n return {(a, str(i) + '+'): 1/3 for i in range(3)}\n if b == 'x':\n return\n # (1) Interaction with an initiator from the same species makes receiver aggressive:\n if a[0] == b[0]:\n return a, b[0] + '++'\n # (2) Interaction with an initator from a different species makes receiver lazy (case of no attack):\n if int(b[0]) == (int(a[0]) + 1) % 3:\n return a, b[0] + '+'\n # (3) A lazy initiator has a probability p of performing a successful attack on its prey:\n if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 2:\n return {(a, a[0] + '+'): p, (a, b[0] + '+'): 1-p}\n # (3) An aggressive initiator has a probability 2p of performing a successful attack on its prey:\n if int(b[0]) == (int(a[0]) - 1) % 3 and len(a) == 3:\n return {(a, a[0] + '+'): 2*p, (a, b[0] + '+'): 1-2*p}",
"_____no_output_____"
]
],
[
[
"Once the count of `x` gets down to $O(n^{1-\\epsilon})$, then we should start seeing oscillations start, which have a period of $O(\\log n)$.",
"_____no_output_____"
]
],
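[
[
"# Sketch (assumption, not in the original notebook): before the large n = 10**7 run below,\n# check at a smaller population size how many agents remain in state 'x' after a logarithmic\n# number of time units of the leader-election variant. The parameters (n_small and the run\n# length) are illustrative choices, not values from the paper.\nn_small = 10 ** 5\nsim_small = Simulation({x: n_small}, seven_state_oscillator_leader_election, p=0.1)\nsim_small.run(10 * int(np.log(n_small)))\nprint('remaining x agents:', sim_small.config_dict.get(x, 0))",
"_____no_output_____"
]
],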
[
[
"n = 10 ** 7\ninit_config = {x: n}\nsim = Simulation(init_config, seven_state_oscillator_leader_election, p = 0.1)\nsp = StatePlotter()\nsim.add_snapshot(sp)\nsp.ax.set_yscale('symlog')",
"_____no_output_____"
],
[
"sim.run(100 * int(np.log(n)))\nsim.history.plot()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb8515c5c600391998ac8ebee4295522c42cfd11
| 208,498 |
ipynb
|
Jupyter Notebook
|
src/uvceti/03b-make_flare_table_and_figs_byhand.ipynb
|
MillionConcepts/gfcat_gj65
|
0655bb7536aba38420ef8502c3971ad098872035
|
[
"BSD-3-Clause"
] | null | null | null |
src/uvceti/03b-make_flare_table_and_figs_byhand.ipynb
|
MillionConcepts/gfcat_gj65
|
0655bb7536aba38420ef8502c3971ad098872035
|
[
"BSD-3-Clause"
] | null | null | null |
src/uvceti/03b-make_flare_table_and_figs_byhand.ipynb
|
MillionConcepts/gfcat_gj65
|
0655bb7536aba38420ef8502c3971ad098872035
|
[
"BSD-3-Clause"
] | null | null | null | 407.222656 | 89,412 | 0.908642 |
[
[
[
"import matplotlib\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport os\nimport pandas as pd\nfrom gPhoton import galextools as gt\nplt.rcParams.update({'font.size': 18})",
"_____no_output_____"
],
[
"# Import the function definitions that accompany this notebook tutorial.\nnb_funcdef_file = \"function_defs.py\"\nif os.path.isfile(nb_funcdef_file):\n from function_defs import listdir_contains, read_lightcurve, refine_flare_ranges, calculate_flare_energy\n from function_defs import is_left_censored, is_right_censored, is_peak_censored, peak_flux, peak_time\nelse:\n raise IOError(\"Could not find function definition file '\" + nb_funcdef_file + \"' that goes with this notebook.\")",
"_____no_output_____"
],
[
"# Restore the output directory. Note: this assumes you've run the \"generate_products\" notebook already. If not you\n# will need to specify the location of the products made from the \"generate_products\" notebook.\n%store -r data_directory\n# If you have not run the \"generate_products\" notebook during this session, uncomment the line below and specify\n# the location of the output products.\ndata_directory = \"./raw_files/\"",
"no stored variable or alias data_directory\n"
],
[
"# Restore the distance parameter. Note: this assumes you've run the \"generate_products\" notebook already. If not you\n# will need to specify the distance to use.\n%store -r distance\n# If you have not run the \"generate_products\" notebook during this session, uncomment the line below and specify\n# the distance to the system in parsecs.\ndistance = 1/(372.1631/1000) # parsecs",
"no stored variable or alias distance\n"
],
[
"# Locate the photon files.\nphoton_files = {'NUV':listdir_contains(data_directory,'nd-30s.csv'),\n 'FUV':listdir_contains(data_directory,'fd-30s.csv')}",
"_____no_output_____"
],
[
"def get_flareranges_byhand(flare_num, orig_flare_ranges):\n \"\"\"\n In this notebook, we are going to break up flare events into individual flares based on the presence of peaks.\n This is an alternative to our algorithm that defines a single flare event based on a return of the flux to the\n INFF value. Instead, we break them up into components to consider the scenario where a complex flare morphology\n is composed of multiple individual flares instead of a single flare with a complex shape.\n \n The orig_flare_ranges is the flare range as defined by the original algorithm, in case there is a single peak\n and we don't need to modify it at all.\n \n If we modify the flare range by hand, then we need to turn off the extra checking about 3-sigma passes, since\n if we split it into components those components won't necessarily pass those checks done by the original\n algorithm used to define the flare range in the first place.\n \"\"\"\n modded = False\n if flare_num==0:\n flare_ranges = orig_flare_ranges\n elif flare_num==1:\n flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46],\n [47, 48, 49, 50, 51, 52, 53, 54, 55, 56]]\n modded=True\n elif flare_num==2:\n flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],\n [42, 43, 44, 45, 46, 47, 48, 49], [50, 51, 52, 53, 54, 55, 56]]\n modded=True\n elif flare_num==3:\n flare_ranges = orig_flare_ranges\n elif flare_num==4:\n flare_ranges = orig_flare_ranges\n elif flare_num==5:\n flare_ranges = orig_flare_ranges\n elif flare_num==6:\n flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],\n [30, 31, 32, 33, 34, 35],\n [37, 38, 39, 40, 41, 42, 43, 44]]\n modded=True\n elif flare_num==7:\n flare_ranges = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],\n [36, 37, 38, 39, 40], [41, 42, 43],\n [44, 45, 46, 47, 48, 49, 50, 51], [52, 53, 54, 55, 56]]\n modded=True\n elif flare_num==8:\n flare_ranges = [[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],\n [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]]\n else:\n raise ValueError(\"More flares than expected passed to function.\")\n\n return (flare_ranges, modded)",
"_____no_output_____"
],
[
"# Also creates the figures used in the Appendix.\nflare_table = pd.DataFrame()\nn_visits = 9 # The last visit does not have a real flare in it that satisifes our basic criteria.\nfor i in np.arange(n_visits):\n lc_nuv = read_lightcurve(photon_files['NUV'][i])\n lc_fuv = read_lightcurve(photon_files['FUV'][i])\n (flare_ranges, quiescence, quiescence_err) = refine_flare_ranges(lc_nuv, makeplot=False)\n # Search for FUV flares, but use the NUV flare ranges rather than searching for new flare ranges\n # based on the FUV light curve (a good choice for GJ 65, but not necessarily the case in general).\n (flare_ranges_fuv, quiescence_fuv, quiescence_err_fuv) = refine_flare_ranges(lc_fuv, makeplot=False,\n flare_ranges=flare_ranges)\n \n # Override the algorithmic flare ranges by-hand instead, in this notebook.\n (flare_ranges, modded) = get_flareranges_byhand(i, flare_ranges)\n \n fig = plt.figure(figsize=(10, 8), constrained_layout=False)\n gs = fig.add_gridspec(2, len(flare_ranges), height_ratios=[1,2], hspace=0.4)\n ax1 = fig.add_subplot(gs[0,:])\n # Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.\n where_plot_fuv_full = np.where(lc_fuv['expt'] >= 20.0)[0]\n where_plot_nuv_full = np.where(lc_nuv['expt'] >= 20.0)[0]\n ax1.set_title('Visit #{i} - Full Light Curve'.format(i=i+1))\n ax1.errorbar((lc_fuv['t0']-min(lc_nuv['t0']))[where_plot_fuv_full], lc_fuv['flux'][where_plot_fuv_full],\n yerr=1.*lc_fuv['flux_err'][where_plot_fuv_full], fmt='-bo')\n ax1.errorbar((lc_nuv['t0']-min(lc_nuv['t0']))[where_plot_nuv_full], lc_nuv['flux'][where_plot_nuv_full],\n yerr=1.*lc_nuv['flux_err'][where_plot_nuv_full], fmt='-ko')\n ylim = [min([min((lc_nuv['flux']-4*lc_nuv['flux_err'])[where_plot_nuv_full]),\n min((lc_fuv['flux']-4*lc_fuv['flux_err'])[where_plot_fuv_full])]),\n max([max((lc_nuv['flux']+4*lc_nuv['flux_err'])[where_plot_nuv_full]),\n max((lc_fuv['flux']+4*lc_fuv['flux_err'])[where_plot_fuv_full])])]\n n_found = 0\n for flare_range in flare_ranges:\n nuv_3sig = np.array(flare_range)[np.where((np.array(lc_nuv['cps'].iloc[flare_range].values)-\n 3*np.array(lc_nuv['cps_err'].iloc[flare_range].values) >= quiescence))[0]].tolist()\n fuv_3sig = np.array(flare_range)[np.where((np.array(lc_fuv['cps'].iloc[flare_range].values)-\n 3*np.array(lc_fuv['cps_err'].iloc[flare_range].values) >= quiescence_fuv))[0]].tolist()\n # Check that flux is simultaneously >3-sigma above quiescence in both bands (dual-band detection criteria),\n # or there are at least TWO NUV fluxes at >3-sigma above quiescence (single-band detection criteria).\n if not modded:\n real = (any(set(nuv_3sig) & set(fuv_3sig)) or len(nuv_3sig)>1) # force detection conditions\n else:\n real = True\n if not real:\n continue\n # Add a panel for the zoom-in view. No visit has more than three real flares in it.\n n_found += 1\n subax = fig.add_subplot(gs[1,n_found-1])\n flare_data = {'visit_num':i,'flare_num':len(flare_table)+1,'duration':len(flare_range)*30}\n # We pass the NUV quiescence values because we want to use the flare ranges found in the\n # NUV flare search for *both* NUV and FUV. 
So we do NOT pass a quiescence parameter when\n # calling the FUV energy calculation.\n energy_nuv = calculate_flare_energy(lc_nuv, flare_range, distance, binsize=30, band='NUV',\n quiescence=[quiescence, quiescence_err])\n energy_fuv = calculate_flare_energy(lc_fuv, flare_range, distance, binsize=30, band='FUV')\n nuv_sn = max(((np.array(lc_nuv['cps'].iloc[flare_range].values) -\n 3*np.array(lc_nuv['cps_err'].iloc[flare_range].values)) / quiescence))\n flare_data['energy_nuv'] = energy_nuv[0]\n flare_data['energy_err_nuv'] = energy_nuv[1]\n flare_data['energy_fuv'] = energy_fuv[0]\n flare_data['energy_err_fuv'] = energy_fuv[1]\n flare_data['nuv_sn'] = nuv_sn\n # If the flare is detected because it has at least one FUV and one NUV at the same time\n # above 3*INFF, this will be True.\n flare_data['detmeth_nf'] = any(set(nuv_3sig) & set(fuv_3sig))\n # If the flare is detected because it has at least two NUV fluxes that are both\n # above 3*INFF, this will be True.\n flare_data['detmeth_nn'] = len(nuv_3sig) > 1\n flare_data['left_censored'] = is_left_censored(flare_range)\n flare_data['right_censored'] = is_right_censored(lc_nuv,flare_range)\n flare_data['peak_flux_nuv'] = peak_flux(lc_nuv,flare_range)\n flare_data['peak_t0_nuv'] = peak_time(lc_nuv,flare_range)\n flare_data['peak_censored'] = is_peak_censored(lc_nuv,flare_range)\n flare_data['peak_flux_fuv'] = peak_flux(lc_fuv,flare_range)\n flare_data['peak_t0_fuv'] = peak_time(lc_fuv,flare_range)\n flare_data['quiescence_nuv'] = quiescence\n flare_data['quiescence_err_nuv'] = quiescence_err\n flare_data['quiescence_fuv'] = quiescence_fuv\n flare_data['quiescence_err_fuv'] = quiescence_err_fuv\n flare_data['flare_range'] = flare_range\n flare_table = flare_table.append(flare_data,ignore_index=True)\n # Make plots\n commentstr = 'Truncation: '\n if flare_data['left_censored']:\n commentstr += 'Left;'\n if flare_data['right_censored']:\n commentstr += 'Right;'\n if flare_data['peak_censored']:\n commentstr += 'Peak;'\n detectstr = 'Detection: '\n if flare_data['detmeth_nf']:\n detectstr += 'FUV+NUV;'\n if flare_data['detmeth_nn']:\n detectstr += 'Multi NUV;'\n # Added too much buffer in x-direction, use x-labels to identify what part of a visit\n # this flare comes from.\n t_buffer = 30.\n # Ignore any data points that have bad time bins. 
NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.\n where_plot_fuv = list(set(list(np.where(lc_fuv['expt'] >= 20.0)[0])).intersection(flare_range))\n where_plot_nuv = list(set(list(np.where(lc_nuv['expt'] >= 20.0)[0])).intersection(flare_range))\n where_plot_fuv.sort()\n where_plot_nuv.sort()\n subax.errorbar((lc_fuv['t0'].iloc[where_plot_fuv]-min(lc_nuv['t0'])),\n lc_fuv['flux'].iloc[where_plot_fuv],\n yerr=1.*lc_fuv['flux_err'].iloc[where_plot_fuv], fmt='bo-', label=\"FUV\")\n subax.errorbar((lc_nuv['t0']-min(lc_nuv['t0'])).iloc[where_plot_nuv],\n lc_nuv['flux'].iloc[where_plot_nuv],\n yerr=1.*lc_nuv['flux_err'].iloc[where_plot_nuv], fmt='ko-', label=\"NUV\")\n subax.set_xlim([lc_nuv['t0'].iloc[flare_range].min()-min(lc_nuv['t0'])-t_buffer,\n lc_nuv['t1'].iloc[flare_range].max()-min(lc_nuv['t0'])+t_buffer])\n subax.set_ylim([min((lc_nuv['flux']-4*lc_nuv['flux_err']).iloc[flare_range].min(),\n (lc_fuv['flux']-4*lc_fuv['flux_err']).iloc[flare_range].min()),\n max((lc_nuv['flux']+4*lc_nuv['flux_err']).iloc[flare_range].max(),\n (lc_fuv['flux']+4*lc_fuv['flux_err']).iloc[flare_range].max())])\n subax.hlines(gt.counts2flux(quiescence,'NUV'), lc_nuv['t0'].min()-min(lc_nuv['t0']),\n lc_nuv['t0'].max()-min(lc_nuv['t0']), label='NUV quiescence',linestyles='dashed',color='k')\n vlinecolor = 'black'\n if n_found == 2:\n vlinecolor = \"dimgrey\"\n ax1.vlines(lc_nuv['t0'].iloc[flare_range].min()-min(lc_nuv['t0']), -999, 999, color=vlinecolor)\n ax1.vlines(lc_nuv['t0'].iloc[flare_range].max()-min(lc_nuv['t0']), -999, 999, color=vlinecolor)\n ax1.text((lc_nuv['t0'].iloc[flare_range].max() - lc_nuv['t0'].iloc[flare_range].min())/2.-min(lc_nuv['t0']) +\n lc_nuv['t0'].iloc[flare_range].min(), ylim[1]*0.95,\n \"Flare #{m}\".format(m=len(flare_table)), color=vlinecolor)\n subax.set_title('Flare #{m}'.format(m=len(flare_table)))\n subax.legend(ncol=2, fontsize=8)\n ax1.set_ylim(ylim)\n ax1.set_xlabel('Seconds (from start of visit)')\n ax1.set_ylabel('Flux (erg/s/cm^2)')\n fig.savefig('figures/visit_{i}_byhand.eps'.format(i=i+1), dpi=600)\n plt.close(fig)",
"The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not 
support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\n"
],
[
"# Creates the figure used in the main part of the paper.\nvisit_arr = [0,1,2,3,4]\nfig, axs = plt.subplots(len(visit_arr), 1, figsize=(10, 8), constrained_layout=True)\nfor ii,i in enumerate(visit_arr):\n lc_nuv = read_lightcurve(photon_files['NUV'][i])\n lc_fuv = read_lightcurve(photon_files['FUV'][i])\n # Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.\n where_plot_fuv_full = np.where(lc_fuv['expt'] >= 20.0)[0]\n where_plot_nuv_full = np.where(lc_nuv['expt'] >= 20.0)[0]\n axs[ii].errorbar((lc_fuv['t0']-min(lc_nuv['t0']))[where_plot_fuv_full], lc_fuv['flux'][where_plot_fuv_full],\n yerr=1.*lc_fuv['flux_err'][where_plot_fuv_full], fmt='-bo')\n axs[ii].errorbar((lc_nuv['t0']-min(lc_nuv['t0']))[where_plot_nuv_full], lc_nuv['flux'][where_plot_nuv_full],\n yerr=1.*lc_nuv['flux_err'][where_plot_nuv_full], fmt='-ko')\n ylim = [min([min((lc_nuv['flux']-4*lc_nuv['flux_err'])[where_plot_nuv_full]),\n min((lc_fuv['flux']-4*lc_fuv['flux_err'])[where_plot_fuv_full])]),\n max([max((lc_nuv['flux']+4*lc_nuv['flux_err'])[where_plot_nuv_full]),\n max((lc_fuv['flux']+4*lc_fuv['flux_err'])[where_plot_fuv_full])])]\n axs[ii].set_ylim(ylim)\n # Create shade regions for each flare in this visit.\n for fr, vn in zip(flare_table['flare_range'], flare_table['visit_num']):\n if int(vn) == i:\n mint = (lc_nuv['t0']-lc_nuv['t0'][0])[min(fr)]\n maxt = (lc_nuv['t0']-lc_nuv['t0'][0])[max(fr)]\n axs[ii].fill([mint, maxt, maxt, mint], [-999, -999, 999, 999], '0.9')\nfig.text(0.5, -0.02, 'Seconds (from start of visit)', ha='center', va='center')\nfig.text(-0.02, 0.5, 'Flux (erg/s/cm^2/Angstrom)', ha='center', va='center', rotation='vertical')\nfig.savefig('figures/all_visits_01_byhand.eps', dpi=600)\n\n\n\nvisit_arr = [5,6,7,8]\nfig, axs = plt.subplots(len(visit_arr), 1, figsize=(10, 8), constrained_layout=True)\nfor ii,i in enumerate(visit_arr):\n lc_nuv = read_lightcurve(photon_files['NUV'][i])\n lc_fuv = read_lightcurve(photon_files['FUV'][i])\n # Ignore any data points that have bad time bins. NOTE: THIS ASSUMES A 30-SECOND BIN SIZE.\n where_plot_fuv_full = np.where(lc_fuv['expt'] >= 20.0)[0]\n where_plot_nuv_full = np.where(lc_nuv['expt'] >= 20.0)[0]\n axs[ii].errorbar((lc_fuv['t0']-min(lc_nuv['t0']))[where_plot_fuv_full], lc_fuv['flux'][where_plot_fuv_full],\n yerr=1.*lc_fuv['flux_err'][where_plot_fuv_full], fmt='-bo')\n axs[ii].errorbar((lc_nuv['t0']-min(lc_nuv['t0']))[where_plot_nuv_full], lc_nuv['flux'][where_plot_nuv_full],\n yerr=1.*lc_nuv['flux_err'][where_plot_nuv_full], fmt='-ko')\n ylim = [min([min((lc_nuv['flux']-4*lc_nuv['flux_err'])[where_plot_nuv_full]),\n min((lc_fuv['flux']-4*lc_fuv['flux_err'])[where_plot_fuv_full])]),\n max([max((lc_nuv['flux']+4*lc_nuv['flux_err'])[where_plot_nuv_full]),\n max((lc_fuv['flux']+4*lc_fuv['flux_err'])[where_plot_fuv_full])])]\n axs[ii].set_ylim(ylim)\n # Create shade regions for each flare in this visit.\n for fr, vn in zip(flare_table['flare_range'], flare_table['visit_num']):\n if int(vn) == i:\n mint = (lc_nuv['t0']-lc_nuv['t0'][0])[min(fr)]\n maxt = (lc_nuv['t0']-lc_nuv['t0'][0])[max(fr)]\n axs[ii].fill([mint, maxt, maxt, mint], [-999, -999, 999, 999], '0.9')\nfig.text(0.5, -0.02, 'Seconds (from start of visit)', ha='center', va='center')\nfig.text(-0.02, 0.5, 'Flux (erg/s/cm^2/Angstrom)', ha='center', va='center', rotation='vertical')\nfig.savefig('figures/all_visits_02_byhand.eps', dpi=600)",
"_____no_output_____"
],
[
"# Make the table of flare properties in the paper.\n# Reformat the energy measurements to include error bars and reasonable sigfigs\nnuv_energy_string, fuv_energy_string = [], []\nfor e, e_err in zip(np.array(np.log10(flare_table['energy_nuv']), dtype='float16'),\n np.array(np.log10(flare_table['energy_err_nuv']), dtype='float16')):\n nuv_energy_string += ['{:4.2f} pm {:4.2f}'.format(e, e_err, 2)]\nfor e, e_err in zip(np.array(np.log10(flare_table['energy_fuv']), dtype='float16'),\n np.array(np.log10(flare_table['energy_err_fuv']), dtype='float16')):\n fuv_energy_string += ['{:4.2f} pm {:4.2f}'.format(e, e_err, 2)]\n\n# Reformat the peak timestamps\nt = pd.to_datetime(flare_table['peak_t0_nuv'] + gt.GPSSECS, unit='s')\nt.iloc[np.where(np.array(flare_table['peak_censored'], dtype='bool'))] = 'peak not measured'\n\n# Convert key columns in the flare table into a format suitable for printing in a LaTeX table\nsummary_table = pd.DataFrame({\n 'Flare':np.array(flare_table['flare_num'],dtype='int16'),\n 'Visit':np.array(flare_table['visit_num'],dtype='int16')+1,\n 'NUV Peak Time (UTC)':t,\n 'Duration':np.array(flare_table['duration']/60.,dtype='float16'),\n 'log(E_NUV)*':nuv_energy_string,\n 'log(E_FUV)':fuv_energy_string,\n 'NUV Strength':flare_table['nuv_sn'],\n})\nprint(summary_table.to_latex(index=False))",
"\\begin{tabular}{rrlrllr}\n\\toprule\n Flare & Visit & NUV Peak Time (UTC) & Duration & log(E\\_NUV)* & log(E\\_FUV) & NUV Strength \\\\\n\\midrule\n 1 & 1 & 2005-11-18 06:54:45 & 9.5 & 29.45 pm 27.92 & 28.86 pm 27.84 & 8.498467 \\\\\n 2 & 2 & peak not measured & 5.5 & 28.95 pm 27.80 & 28.00 pm 27.64 & 1.761480 \\\\\n 3 & 2 & 2005-11-18 08:30:49 & 5.5 & 28.59 pm 27.75 & 27.92 pm 27.67 & 1.185004 \\\\\n 4 & 2 & 2005-11-18 08:30:19 & 5.0 & 28.56 pm 27.72 & 28.12 pm 27.67 & 1.293150 \\\\\n 5 & 3 & 2005-11-18 11:54:02 & 12.0 & 29.17 pm 27.97 & 28.64 pm 27.89 & 1.397152 \\\\\n 6 & 3 & 2005-11-18 11:48:02 & 4.0 & 28.69 pm 27.72 & 28.28 pm 27.67 & 1.660546 \\\\\n 7 & 3 & 2005-11-18 11:48:32 & 3.5 & 28.61 pm 27.69 & 28.14 pm 27.62 & 1.184765 \\\\\n 8 & 4 & 2005-11-18 18:22:56 & 5.0 & 29.42 pm 27.88 & 29.02 pm 27.83 & 3.686975 \\\\\n 9 & 5 & 2005-11-18 20:01:01 & 5.0 & 28.73 pm 27.77 & 27.91 pm 27.67 & 1.329063 \\\\\n 10 & 5 & 2005-11-18 20:12:01 & 18.0 & 31.31 pm 28.73 & 30.80 pm 28.66 & 150.754465 \\\\\n 11 & 6 & 2005-11-19 00:56:50 & 5.5 & 28.73 pm 27.83 & 28.53 pm 27.81 & 1.194350 \\\\\n 12 & 7 & 2005-11-19 05:52:08 & 6.0 & 29.48 pm 27.97 & 28.23 pm 27.77 & 5.832215 \\\\\n 13 & 7 & 2005-11-19 05:53:08 & 3.0 & 28.66 pm 27.73 & 28.38 pm 27.69 & 1.252815 \\\\\n 14 & 7 & 2005-11-19 05:52:08 & 4.0 & 28.67 pm 27.78 & 27.80 pm 27.70 & 1.078322 \\\\\n 15 & 8 & peak not measured & 6.0 & 28.98 pm 27.91 & 27.80 pm 27.75 & 1.192360 \\\\\n 16 & 8 & 2005-11-19 09:09:50 & 2.5 & 28.36 pm 27.69 & 27.88 pm 27.61 & 1.095557 \\\\\n 17 & 8 & 2005-11-19 09:09:20 & 1.5 & 27.91 pm 27.56 & 27.83 pm 27.52 & 0.961061 \\\\\n 18 & 8 & 2005-11-19 09:11:20 & 4.0 & 29.11 pm 27.86 & 28.52 pm 27.78 & 1.428352 \\\\\n 19 & 8 & 2005-11-19 09:09:50 & 2.5 & 28.92 pm 27.75 & 28.27 pm 27.67 & 1.662115 \\\\\n 20 & 9 & 2005-11-19 23:57:14 & 6.5 & 29.06 pm 27.81 & 28.50 pm 27.73 & 2.524152 \\\\\n 21 & 9 & 2005-11-19 23:57:44 & 6.5 & 29.31 pm 27.86 & 28.88 pm 27.81 & 3.180030 \\\\\n\\bottomrule\n\\end{tabular}\n\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb85217f3402bd138293bbcf4faeeb51fe22bf6a
| 707,767 |
ipynb
|
Jupyter Notebook
|
2.2) CNN Models - Test Cases.ipynb
|
henrrydegee/plaquebox-paper
|
05a06cadaf7c47b52c93e460a51cfa56862b2521
|
[
"MIT"
] | 1 |
2019-07-19T03:22:29.000Z
|
2019-07-19T03:22:29.000Z
|
2.2) CNN Models - Test Cases.ipynb
|
henrrydegee/plaquebox-paper
|
05a06cadaf7c47b52c93e460a51cfa56862b2521
|
[
"MIT"
] | null | null | null |
2.2) CNN Models - Test Cases.ipynb
|
henrrydegee/plaquebox-paper
|
05a06cadaf7c47b52c93e460a51cfa56862b2521
|
[
"MIT"
] | 2 |
2020-02-06T19:03:59.000Z
|
2020-09-13T22:01:20.000Z
| 1,353.282983 | 237,492 | 0.955268 |
[
[
[
"### 2.2 CNN Models - Test Cases\n\nThe trained CNN model was performed to a hold-out test set with 10,873 images.\n\nThe network obtained 0.743 and 0.997 AUC-PRC on the hold-out test set for cored plaque and diffuse plaque respectively.",
"_____no_output_____"
]
],
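[
[
"# A minimal, self-contained sketch (not part of the original notebook) of how an AUC-PRC\n# value like the ones quoted above is computed. The labels and scores here are hypothetical;\n# the real evaluation below uses the hold-out set and the trained model.\nfrom sklearn.metrics import precision_recall_curve, auc\nimport numpy as np\ny_true = np.array([0, 0, 1, 1, 0, 1])\ny_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])\nprecision, recall, _ = precision_recall_curve(y_true, y_score)\nprint('AUC-PRC:', auc(recall, precision))",
"_____no_output_____"
]
],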
[
[
"import time, os\n\nimport torch\ntorch.manual_seed(42)\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.optim import lr_scheduler\n\nimport torchvision\nfrom torchvision import transforms\n\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"CSV_DIR = 'data/CSVs/test.csv'\nMODEL_DIR = 'models/CNN_model_parameters.pkl'\n\nIMG_DIR = 'data/tiles/hold-out/'\nNEGATIVE_DIR = 'data/seg/negatives/'\nSAVE_DIR = 'data/outputs/'",
"_____no_output_____"
],
[
"if not os.path.exists(SAVE_DIR):\n os.makedirs(SAVE_DIR)",
"_____no_output_____"
],
[
"batch_size = 32\nnum_workers = 8\n\nnorm = np.load('utils/normalization.npy', allow_pickle=True).item()",
"_____no_output_____"
],
[
"from torch.utils.data import Dataset\nfrom PIL import Image\n\nclass MultilabelDataset(Dataset):\n def __init__(self, csv_path, img_path, transform=None):\n \"\"\"\n Args:\n csv_path (string): path to csv file\n img_path (string): path to the folder where images are\n transform: pytorch transforms for transforms and tensor conversion\n \"\"\"\n self.data_info = pd.read_csv(csv_path)\n self.img_path = img_path\n self.transform = transform\n c=torch.Tensor(self.data_info.loc[:,'cored'])\n d=torch.Tensor(self.data_info.loc[:,'diffuse'])\n a=torch.Tensor(self.data_info.loc[:,'CAA'])\n c=c.view(c.shape[0],1)\n d=d.view(d.shape[0],1)\n a=a.view(a.shape[0],1)\n self.raw_labels = torch.cat([c,d,a], dim=1)\n self.labels = (torch.cat([c,d,a], dim=1)>0.99).type(torch.FloatTensor)\n\n def __getitem__(self, index):\n # Get label(class) of the image based on the cropped pandas column\n single_image_label = self.labels[index]\n raw_label = self.raw_labels[index]\n # Get image name from the pandas df\n single_image_name = str(self.data_info.loc[index,'imagename'])\n # Open image\n try:\n img_as_img = Image.open(self.img_path + single_image_name)\n except:\n img_as_img = Image.open(NEGATIVE_DIR + single_image_name)\n # Transform image to tensor\n if self.transform is not None:\n img_as_img = self.transform(img_as_img)\n # Return image and the label\n return (img_as_img, single_image_label, raw_label, single_image_name)\n\n def __len__(self):\n return len(self.data_info.index)",
"_____no_output_____"
],
[
"data_transforms = {\n 'test' : transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(norm['mean'], norm['std'])\n ])\n }\n\nimage_datasets = {'test': MultilabelDataset(CSV_DIR, IMG_DIR, \n data_transforms['test'])}\n\ndataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], \n batch_size=batch_size,\n shuffle=False,\n num_workers=num_workers)\n for x in ['test']}\n\ndataset_sizes = {x: len(image_datasets[x]) for x in ['test']}\n\nimage_classes = ['cored','diffuse','CAA']\n\nuse_gpu = torch.cuda.is_available()",
"_____no_output_____"
],
[
"def imshow(inp, title=None):\n \"\"\"Imshow for Tensor.\"\"\"\n inp = inp.numpy().transpose((1, 2, 0))\n mean = np.array(norm['mean'])\n std = np.array(norm['std'])\n inp = std * inp + mean\n inp = np.clip(inp, 0, 1)\n plt.figure()\n plt.imshow(inp)\n if title is not None:\n plt.title(title)\n plt.pause(0.001) # pause a bit so that plots are updated\n\n# Get a batch of training data\ninputs, labels, raw_labels, names = next(iter(dataloaders['test']))\n# Make a grid from batch\nout = torchvision.utils.make_grid(inputs)\nimshow(out)",
"_____no_output_____"
],
[
"class Net(nn.Module):\n\n def __init__(self, fc_nodes=512, num_classes=3, dropout=0.5):\n super(Net, self).__init__()\n\n def forward(self, x):\n \n x = self.features(x)\n x = x.view(x.size(0), -1)\n x = self.classifier(x)\n\n return x",
"_____no_output_____"
],
[
"def dev_model(model, criterion, phase='test', gpu_id=None):\n phase = phase\n since = time.time()\n \n dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size,\n shuffle=False, num_workers=num_workers)\n for x in [phase]}\n\n model.train(False) \n\n running_loss = 0.0\n running_corrects = torch.zeros(len(image_classes))\n running_preds = torch.Tensor(0)\n running_predictions = torch.Tensor(0)\n running_labels = torch.Tensor(0)\n running_raw_labels = torch.Tensor(0)\n\n # Iterate over data.\n step = 0\n for data in dataloaders[phase]:\n step += 1 \n # get the inputs\n inputs, labels, raw_labels, names = data\n running_labels = torch.cat([running_labels, labels])\n running_raw_labels = torch.cat([running_raw_labels, raw_labels])\n\n # wrap them in Variable\n if use_gpu:\n inputs = Variable(inputs.cuda(gpu_id))\n labels = Variable(labels.cuda(gpu_id))\n else:\n inputs, labels = Variable(inputs), Variable(labels)\n\n # forward\n outputs = model(inputs)\n preds = F.sigmoid(outputs) #posibility for each class\n #print(preds)\n if use_gpu:\n predictions = (preds>0.5).type(torch.cuda.FloatTensor)\n else:\n predictions = (preds>0.5).type(torch.FloatTensor)\n \n loss = criterion(outputs, labels)\n\n preds = preds.data.cpu()\n predictions = predictions.data.cpu()\n labels = labels.data.cpu()\n\n # statistics\n running_loss += loss.data[0]\n running_corrects += torch.sum(predictions==labels, 0).type(torch.FloatTensor)\n running_preds = torch.cat([running_preds, preds])\n running_predictions = torch.cat([running_predictions, predictions])\n\n\n epoch_loss = running_loss / dataset_sizes[phase]\n epoch_acc = running_corrects / dataset_sizes[phase]\n\n print('{} Loss: {:.4f}\\n Cored: {:.4f} Diffuse: {:.4f} CAA: {:.4f}'.format(\n phase, epoch_loss, epoch_acc[0], epoch_acc[1], epoch_acc[2]))\n\n print()\n\n time_elapsed = time.time() - since\n print('Prediction complete in {:.0f}m {:.0f}s'.format(\n time_elapsed // 60, time_elapsed % 60))\n\n return epoch_acc, running_preds, running_predictions, running_labels",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_curve, auc, precision_recall_curve\n\ndef plot_roc(preds, label, image_classes, size=20, path=None):\n colors = ['pink','c','deeppink', 'b', 'g', 'm', 'y', 'r', 'k']\n fig = plt.figure(figsize=(1.2*size, size))\n ax = plt.axes()\n for i in range(preds.shape[1]):\n fpr, tpr, _ = roc_curve(label[:,i].ravel(), preds[:,i].ravel())\n lw = 0.2*size\n # Plot all ROC curves\n ax.plot([0, 1], [0, 1], 'k--', lw=lw, label='random')\n ax.plot(fpr, tpr,\n label='ROC-curve of {}'.format(image_classes[i])+ '( area = {0:0.3f})'\n ''.format(auc(fpr, tpr)),\n color=colors[(i+preds.shape[1])%len(colors)], linewidth=lw)\n \n \n ax.set_xlim([0.0, 1.0])\n ax.set_ylim([0.0, 1.05])\n ax.set_xlabel('False Positive Rate', fontsize=1.8*size)\n ax.set_ylabel('True Positive Rate', fontsize=1.8*size)\n ax.set_title('Receiver operating characteristic Curve', fontsize=1.8*size, y=1.01)\n ax.legend(loc=0, fontsize=1.5*size)\n ax.xaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)\n ax.yaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)\n \n if path != None:\n fig.savefig(path)\n# plt.close(fig)\n print('saved')\n\n \ndef plot_prc(preds, label, image_classes, size=20, path=None):\n colors = ['pink','c','deeppink', 'b', 'g', 'm', 'y', 'r', 'k']\n \n fig = plt.figure(figsize=(1.2*size,size))\n ax = plt.axes()\n \n for i in range(preds.shape[1]):\n rp = (label[:,i]>0).sum()/len(label)\n precision, recall, _ = precision_recall_curve(label[:,i].ravel(), preds[:,i].ravel())\n \n lw=0.2*size\n \n ax.plot(recall, precision,\n label='PR-curve of {}'.format(image_classes[i])+ '( area = {0:0.3f})'\n ''.format(auc(recall, precision)),\n color=colors[(i+preds.shape[1])%len(colors)], linewidth=lw)\n\n ax.plot([0, 1], [rp, rp], 'k--', color=colors[(i+preds.shape[1])%len(colors)], lw=lw, label='random')\n \n ax.set_xlim([0.0, 1.0])\n ax.set_ylim([0.0, 1.05])\n ax.set_xlabel('Recall', fontsize=1.8*size)\n ax.set_ylabel('Precision', fontsize=1.8*size)\n ax.set_title('Precision-Recall curve', fontsize=1.8*size, y=1.01)\n ax.legend(loc=\"lower left\", bbox_to_anchor=(0.01, 0.1), fontsize=1.5*size)\n ax.xaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)\n ax.yaxis.set_tick_params(labelsize=1.6*size, size=size/2, width=0.2*size)\n \n if path != None:\n fig.savefig(path)\n# plt.close(fig)\n print('saved')",
"_____no_output_____"
],
[
"def auc_roc(preds, label):\n aucroc = []\n for i in range(preds.shape[1]):\n fpr, tpr, _ = roc_curve(label[:,i].ravel(), preds[:,i].ravel())\n aucroc.append(auc(fpr, tpr))\n return aucroc\n \ndef auc_prc(preds, label):\n aucprc = []\n for i in range(preds.shape[1]):\n precision, recall, _ = precision_recall_curve(label[:,i].ravel(), preds[:,i].ravel())\n aucprc.append(auc(recall, precision))\n return aucprc",
"_____no_output_____"
],
[
"criterion = nn.MultiLabelSoftMarginLoss(size_average=False)\nmodel = torch.load(MODEL_DIR, map_location=lambda storage, loc: storage)\nif use_gpu:\n model = model.module.cuda()",
"/usr/local/lib/python3.6/dist-packages/torch/serialization.py:325: SourceChangeWarning: source code of class 'torch.nn.parallel.data_parallel.DataParallel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n/usr/local/lib/python3.6/dist-packages/torch/serialization.py:325: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n/usr/local/lib/python3.6/dist-packages/torch/serialization.py:325: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.MaxPool2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n/usr/local/lib/python3.6/dist-packages/torch/serialization.py:325: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n"
],
[
"# take 10s running on single GPU\ntry:\n acc, pred, prediction, target = dev_model(model.module, criterion, phase='test', gpu_id=None)\nexcept:\n acc, pred, prediction, target = dev_model(model, criterion, phase='test', gpu_id=None)",
"test Loss: 0.1798\n Cored: 0.9706 Diffuse: 0.9651 CAA: 0.9963\n\nPrediction complete in 0m 25s\n"
],
[
"label = target.numpy()\npreds = pred.numpy()\n\noutput = {}\nfor i in range(3):\n fpr, tpr, _ = roc_curve(label[:,i].ravel(), preds[:,i].ravel())\n\n precision, recall, _ = precision_recall_curve(label[:,i].ravel(), preds[:,i].ravel())\n \n output['{} fpr'.format(image_classes[i])] = fpr\n output['{} tpr'.format(image_classes[i])] = tpr\n output['{} precision'.format(image_classes[i])] = precision\n output['{} recall'.format(image_classes[i])] = recall\n\noutcsv = pd.DataFrame(dict([ (k,pd.Series(v)) for k,v in output.items() ]))\noutcsv.to_csv(SAVE_DIR+'CNN_test_output.csv', index=False)",
"_____no_output_____"
],
[
"plot_roc(pred.numpy(), target.numpy(), image_classes, size=30)\nplot_prc(pred.numpy(), target.numpy(), image_classes, size=30)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb852e6b469cbfdab235986face41423978897d8
| 1,041,940 |
ipynb
|
Jupyter Notebook
|
notebooks/optimization_playground.ipynb
|
davidetalon/StyleCLIP
|
1cbf552b322cd90c417f26a259143382e2b7af8f
|
[
"MIT"
] | 70 |
2021-04-20T10:01:34.000Z
|
2022-03-29T11:35:16.000Z
|
notebooks/optimization_playground.ipynb
|
davidetalon/StyleCLIP
|
1cbf552b322cd90c417f26a259143382e2b7af8f
|
[
"MIT"
] | null | null | null |
notebooks/optimization_playground.ipynb
|
davidetalon/StyleCLIP
|
1cbf552b322cd90c417f26a259143382e2b7af8f
|
[
"MIT"
] | 23 |
2021-04-20T10:01:43.000Z
|
2022-01-19T21:07:50.000Z
| 2,637.822785 | 1,023,616 | 0.958167 |
[
[
[
"# Text-Guided Editing of Images (Using CLIP and StyleGAN)",
"_____no_output_____"
]
],
[
[
"#@title Setup (may take a few minutes)\n!git clone https://github.com/orpatashnik/StyleCLIP.git\n\nimport os\nos.chdir(f'./StyleCLIP')\n\n!pip install ftfy regex tqdm\n!pip install git+https://github.com/openai/CLIP.git\n\nfrom pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nfrom oauth2client.client import GoogleCredentials\n\n# Authenticate and create the PyDrive client.\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\ndrive = GoogleDrive(gauth)\n\nfile_id = '1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT'\ndownloaded = drive.CreateFile({'id':file_id})\ndownloaded.FetchMetadata(fetch_all=True)\ndownloaded.GetContentFile(downloaded.metadata['title'])",
"Cloning into 'StyleCLIP'...\nremote: Enumerating objects: 35, done.\u001b[K\nremote: Counting objects: 100% (35/35), done.\u001b[K\nremote: Compressing objects: 100% (24/24), done.\u001b[K\nremote: Total 35 (delta 7), reused 32 (delta 7), pack-reused 0\u001b[K\nUnpacking objects: 100% (35/35), done.\nCollecting ftfy\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/04/06/e5c80e2e0f979628d47345efba51f7ba386fe95963b11c594209085f5a9b/ftfy-5.9.tar.gz (66kB)\n\u001b[K |████████████████████████████████| 71kB 3.6MB/s \n\u001b[?25hRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (2019.12.20)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (4.41.1)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy) (0.2.5)\nBuilding wheels for collected packages: ftfy\n Building wheel for ftfy (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for ftfy: filename=ftfy-5.9-cp36-none-any.whl size=46451 sha256=df0996d8a9c25cbb1571fbd1cd33affd5fc3b0e68330f8cb286e54bf92386525\n Stored in directory: /root/.cache/pip/wheels/5e/2e/f0/b07196e8c929114998f0316894a61c752b63bfa3fdd50d2fc3\nSuccessfully built ftfy\nInstalling collected packages: ftfy\nSuccessfully installed ftfy-5.9\nCollecting git+https://github.com/openai/CLIP.git\n Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-2y4pedfk\n Running command git clone -q https://github.com/openai/CLIP.git /tmp/pip-req-build-2y4pedfk\nRequirement already satisfied: ftfy in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (5.9)\nRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (2019.12.20)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (4.41.1)\nCollecting torch~=1.7.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/90/4f/acf48b3a18a8f9223c6616647f0a011a5713a985336088d7c76f3a211374/torch-1.7.1-cp36-cp36m-manylinux1_x86_64.whl (776.8MB)\n\u001b[K |████████████████████████████████| 776.8MB 24kB/s \n\u001b[?25hCollecting torchvision~=0.8.2\n\u001b[33m WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))': /packages/19/f1/d1d9b2be9f50e840accfa180ec2fb759dd2504f2b3a12a232398d5fa00ae/torchvision-0.8.2-cp36-cp36m-manylinux1_x86_64.whl\u001b[0m\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/19/f1/d1d9b2be9f50e840accfa180ec2fb759dd2504f2b3a12a232398d5fa00ae/torchvision-0.8.2-cp36-cp36m-manylinux1_x86_64.whl (12.8MB)\n\u001b[K |████████████████████████████████| 12.8MB 241kB/s \n\u001b[?25hRequirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy->clip==1.0) (0.2.5)\nRequirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (0.8)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (3.7.4.3)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (1.19.5)\nRequirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision~=0.8.2->clip==1.0) (7.0.0)\nBuilding wheels for collected packages: clip\n Building wheel for clip (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for clip: filename=clip-1.0-cp36-none-any.whl size=1368563 sha256=3b332a6f3a48f96e7b2526eab81a09c5b3613b120238a5bd20ec12c56574277c\n Stored in directory: /tmp/pip-ephem-wheel-cache-rqyayw55/wheels/79/51/d7/69f91d37121befe21d9c52332e04f592e17d1cabc7319b3e09\nSuccessfully built clip\nInstalling collected packages: torch, torchvision, clip\n Found existing installation: torch 1.7.0+cu101\n Uninstalling torch-1.7.0+cu101:\n Successfully uninstalled torch-1.7.0+cu101\n Found existing installation: torchvision 0.8.1+cu101\n Uninstalling torchvision-0.8.1+cu101:\n Successfully uninstalled torchvision-0.8.1+cu101\nSuccessfully installed clip-1.0 torch-1.7.1 torchvision-0.8.2\n"
],
[
"experiment_type = 'edit' #@param ['edit', 'free_generation']\n\ndescription = 'A person with purple hair' #@param {type:\"string\"}\n\nlatent_path = None #@param {type:\"string\"}\n\noptimization_steps = 100 #@param {type:\"number\"}\n\nl2_lambda = 0.008 #@param {type:\"number\"}\n\ncreate_video = True #@param {type:\"boolean\"}",
"_____no_output_____"
],
[
"#@title Additional Arguments\nargs = {\n \"description\": description,\n \"ckpt\": \"stylegan2-ffhq-config-f.pt\",\n \"stylegan_size\": 1024,\n \"lr_rampup\": 0.05,\n \"lr\": 0.1,\n \"step\": optimization_steps,\n \"mode\": experiment_type,\n \"l2_lambda\": l2_lambda,\n \"latent_path\": latent_path,\n \"truncation\": 0.7,\n \"save_intermediate_image_every\": 1 if create_video else 20,\n \"results_dir\": \"results\"\n}",
"_____no_output_____"
],
[
"from optimization.run_optimization import main\nfrom argparse import Namespace\nresult = main(Namespace(**args))",
"100%|███████████████████████████████████████| 354M/354M [00:05<00:00, 63.8MiB/s]\nloss: 0.7222;: 100%|██████████| 100/100 [03:16<00:00, 1.97s/it]\n"
],
[
"#@title Visualize Result\nfrom torchvision.utils import make_grid\nfrom torchvision.transforms import ToPILImage\nresult_image = ToPILImage()(make_grid(result.detach().cpu(), normalize=True, scale_each=True, range=(-1, 1), padding=0))\nh, w = result_image.size\nresult_image.resize((h // 2, w // 2))",
"_____no_output_____"
],
[
"#@title Create and Download Video\n\n!ffmpeg -r 15 -i results/%05d.png -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4\nfrom google.colab import files\nfiles.download('out.mp4')",
"ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers\n built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)\n configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared\n libavutil 55. 78.100 / 55. 78.100\n libavcodec 57.107.100 / 57.107.100\n libavformat 57. 83.100 / 57. 83.100\n libavdevice 57. 10.100 / 57. 10.100\n libavfilter 6.107.100 / 6.107.100\n libavresample 3. 7. 0 / 3. 7. 0\n libswscale 4. 8.100 / 4. 8.100\n libswresample 2. 9.100 / 2. 9.100\n libpostproc 54. 7.100 / 54. 7.100\nInput #0, image2, from 'results/%05d.png':\n Duration: 00:00:04.00, start: 0.000000, bitrate: N/A\n Stream #0:0: Video: png, rgb24(pc), 1024x1024, 25 fps, 25 tbr, 25 tbn, 25 tbc\nStream mapping:\n Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))\nPress [q] to stop, [?] 
for help\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0musing cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mprofile High, level 3.2\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0m264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00\nOutput #0, mp4, to 'out.mp4':\n Metadata:\n encoder : Lavf57.83.100\n Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1024x1024, q=-1--1, 25 fps, 12800 tbn, 25 tbc\n Metadata:\n encoder : Lavc57.107.100 libx264\n Side data:\n cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1\nframe= 167 fps=9.4 q=-1.0 Lsize= 2713kB time=00:00:06.56 bitrate=3388.4kbits/s speed=0.369x \nvideo:2711kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.103037%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mframe I:2 Avg QP:22.24 size: 59578\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mframe P:48 Avg QP:23.84 size: 32531\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mframe B:117 Avg QP:24.85 size: 9354\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mconsecutive B-frames: 2.4% 12.0% 1.8% 83.8%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mmb I I16..4: 12.8% 67.3% 19.9%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mmb P I16..4: 7.9% 23.7% 5.5% P16..4: 36.5% 11.3% 5.6% 0.0% 0.0% skip: 9.5%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mmb B I16..4: 1.7% 2.8% 0.7% B16..8: 18.2% 3.9% 1.2% direct: 8.9% skip:62.5% L0:44.9% L1:44.4% BI:10.7%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0m8x8 transform intra:61.9% inter:77.5%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mcoded y,uvDC,uvAC intra: 55.3% 50.7% 9.1% inter: 22.2% 28.1% 0.0%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mi16 v,h,dc,p: 36% 20% 9% 35%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mi8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 10% 20% 4% 10% 17% 6% 5% 3%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mi4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 5% 9% 2% 18% 28% 8% 4% 2%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mi8c dc,h,v,p: 63% 12% 21% 4%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mWeighted P-Frames: Y:41.7% UV:41.7%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mref P L0: 46.8% 15.3% 18.3% 13.5% 6.1%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mref B L0: 77.4% 14.4% 8.3%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mref B L1: 99.0% 1.0%\n\u001b[1;36m[libx264 @ 0x55dffdc4fe00] \u001b[0mkb/s:3323.37\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb852e9dadf37c37a82930a4a7f19f2b95216f91
| 26,748 |
ipynb
|
Jupyter Notebook
|
Big-Data-Clusters/CU8/Public/content/monitor-k8s/tsg023-run-kubectl-get-all.ipynb
|
meenal-gupta141/tigertoolbox
|
5c432392f7cab091121a8879ea886b39c54f519b
|
[
"MIT"
] | 541 |
2019-05-07T11:41:25.000Z
|
2022-03-29T17:33:19.000Z
|
Big-Data-Clusters/CU8/Public/content/monitor-k8s/tsg023-run-kubectl-get-all.ipynb
|
sqlworldwide/tigertoolbox
|
2abcb62a09daf0116ab1ab9c9dd9317319b23297
|
[
"MIT"
] | 89 |
2019-05-09T14:23:52.000Z
|
2022-01-13T20:21:04.000Z
|
Big-Data-Clusters/CU8/Public/content/monitor-k8s/tsg023-run-kubectl-get-all.ipynb
|
sqlworldwide/tigertoolbox
|
2abcb62a09daf0116ab1ab9c9dd9317319b23297
|
[
"MIT"
] | 338 |
2019-05-08T05:45:16.000Z
|
2022-03-28T15:35:03.000Z
| 57.031983 | 408 | 0.415657 |
[
[
[
"TSG023 - Get all BDC objects (Kubernetes)\n=========================================\n\nDescription\n-----------\n\nGet a summary of all Kubernetes resources for the system namespace and\nthe Big Data Cluster namespace\n\nSteps\n-----\n\n### Common functions\n\nDefine helper functions used in this notebook.",
"_____no_output_____"
]
],
[
[
"# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows\nimport sys\nimport os\nimport re\nimport json\nimport platform\nimport shlex\nimport shutil\nimport datetime\n\nfrom subprocess import Popen, PIPE\nfrom IPython.display import Markdown\n\nretry_hints = {} # Output in stderr known to be transient, therefore automatically retry\nerror_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help\ninstall_hint = {} # The SOP to help install the executable if it cannot be found\n\nfirst_run = True\nrules = None\ndebug_logging = False\n\ndef run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):\n \"\"\"Run shell command, stream stdout, print stderr and optionally return output\n\n NOTES:\n\n 1. Commands that need this kind of ' quoting on Windows e.g.:\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}\n\n Need to actually pass in as '\"':\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='\"'data-pool'\"')].metadata.name}\n\n The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:\n \n `iter(p.stdout.readline, b'')`\n\n The shlex.split call does the right thing for each platform, just use the '\"' pattern for a '\n \"\"\"\n MAX_RETRIES = 5\n output = \"\"\n retry = False\n\n global first_run\n global rules\n\n if first_run:\n first_run = False\n rules = load_rules()\n\n # When running `azdata sql query` on Windows, replace any \\n in \"\"\" strings, with \" \", otherwise we see:\n #\n # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')\n #\n if platform.system() == \"Windows\" and cmd.startswith(\"azdata sql query\"):\n cmd = cmd.replace(\"\\n\", \" \")\n\n # shlex.split is required on bash and for Windows paths with spaces\n #\n cmd_actual = shlex.split(cmd)\n\n # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries\n #\n user_provided_exe_name = cmd_actual[0].lower()\n\n # When running python, use the python in the ADS sandbox ({sys.executable})\n #\n if cmd.startswith(\"python \"):\n cmd_actual[0] = cmd_actual[0].replace(\"python\", sys.executable)\n\n # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail\n # with:\n #\n # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)\n #\n # Setting it to a default value of \"en_US.UTF-8\" enables pip install to complete\n #\n if platform.system() == \"Darwin\" and \"LC_ALL\" not in os.environ:\n os.environ[\"LC_ALL\"] = \"en_US.UTF-8\"\n\n # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`\n #\n if cmd.startswith(\"kubectl \") and \"AZDATA_OPENSHIFT\" in os.environ:\n cmd_actual[0] = cmd_actual[0].replace(\"kubectl\", \"oc\")\n\n # To aid supportability, determine which binary file will actually be executed on the machine\n #\n which_binary = None\n\n # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to\n # get JWT tokens, it returns \"(56) Failure when receiving data from the peer\". If another instance\n # of CURL exists on the machine use that one. 
(Unfortunately the curl.exe in System32 is almost\n # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we\n # look for the 2nd installation of CURL in the path)\n if platform.system() == \"Windows\" and cmd.startswith(\"curl \"):\n path = os.getenv('PATH')\n for p in path.split(os.path.pathsep):\n p = os.path.join(p, \"curl.exe\")\n if os.path.exists(p) and os.access(p, os.X_OK):\n if p.lower().find(\"system32\") == -1:\n cmd_actual[0] = p\n which_binary = p\n break\n\n # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this\n # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) \n #\n # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.\n #\n if which_binary == None:\n which_binary = shutil.which(cmd_actual[0])\n\n # Display an install HINT, so the user can click on a SOP to install the missing binary\n #\n if which_binary == None:\n print(f\"The path used to search for '{cmd_actual[0]}' was:\")\n print(sys.path)\n\n if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:\n display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\")\n else: \n cmd_actual[0] = which_binary\n\n start_time = datetime.datetime.now().replace(microsecond=0)\n\n print(f\"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)\")\n print(f\" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})\")\n print(f\" cwd: {os.getcwd()}\")\n\n # Command-line tools such as CURL and AZDATA HDFS commands output\n # scrolling progress bars, which causes Jupyter to hang forever, to\n # workaround this, use no_output=True\n #\n\n\n # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait\n #\n wait = True \n\n try:\n if no_output:\n p = Popen(cmd_actual)\n else:\n p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)\n with p.stdout:\n for line in iter(p.stdout.readline, b''):\n line = line.decode()\n if return_output:\n output = output + line\n else:\n if cmd.startswith(\"azdata notebook run\"): # Hyperlink the .ipynb file\n regex = re.compile(' \"(.*)\"\\: \"(.*)\"') \n match = regex.match(line)\n if match:\n if match.group(1).find(\"HTML\") != -1:\n display(Markdown(f' - \"{match.group(1)}\": \"{match.group(2)}\"'))\n else:\n display(Markdown(f' - \"{match.group(1)}\": \"[{match.group(2)}]({match.group(2)})\"'))\n\n wait = False\n break # otherwise infinite hang, have not worked out why yet.\n else:\n print(line, end='')\n if rules is not None:\n apply_expert_rules(line)\n\n if wait:\n p.wait()\n except FileNotFoundError as e:\n if install_hint is not None:\n display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\") from e\n\n exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()\n\n if not no_output:\n for line in iter(p.stderr.readline, b''):\n try:\n line_decoded = line.decode()\n except UnicodeDecodeError:\n # NOTE: Sometimes we get characters back that cannot be decoded(), e.g.\n #\n # \\xa0\n #\n # For example see this in the response from `az group 
create`:\n #\n # ERROR: Get Token request returned http error: 400 and server \n # response: {\"error\":\"invalid_grant\",# \"error_description\":\"AADSTS700082: \n # The refresh token has expired due to inactivity.\\xa0The token was \n # issued on 2018-10-25T23:35:11.9832872Z\n #\n # which generates the exception:\n #\n # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte\n #\n print(\"WARNING: Unable to decode stderr line, printing raw bytes:\")\n print(line)\n line_decoded = \"\"\n pass\n else:\n\n # azdata emits a single empty line to stderr when doing an hdfs cp, don't\n # print this empty \"ERR:\" as it confuses.\n #\n if line_decoded == \"\":\n continue\n \n print(f\"STDERR: {line_decoded}\", end='')\n\n if line_decoded.startswith(\"An exception has occurred\") or line_decoded.startswith(\"ERROR: An error occurred while executing the following cell\"):\n exit_code_workaround = 1\n\n # inject HINTs to next TSG/SOP based on output in stderr\n #\n if user_provided_exe_name in error_hints:\n for error_hint in error_hints[user_provided_exe_name]:\n if line_decoded.find(error_hint[0]) != -1:\n display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))\n\n # apply expert rules (to run follow-on notebooks), based on output\n #\n if rules is not None:\n apply_expert_rules(line_decoded)\n\n # Verify if a transient error, if so automatically retry (recursive)\n #\n if user_provided_exe_name in retry_hints:\n for retry_hint in retry_hints[user_provided_exe_name]:\n if line_decoded.find(retry_hint) != -1:\n if retry_count < MAX_RETRIES:\n print(f\"RETRY: {retry_count} (due to: {retry_hint})\")\n retry_count = retry_count + 1\n output = run(cmd, return_output=return_output, retry_count=retry_count)\n\n if return_output:\n if base64_decode:\n import base64\n\n return base64.b64decode(output).decode('utf-8')\n else:\n return output\n\n elapsed = datetime.datetime.now().replace(microsecond=0) - start_time\n\n # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so\n # don't wait here, if success known above\n #\n if wait: \n if p.returncode != 0:\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(p.returncode)}.\\n')\n else:\n if exit_code_workaround !=0 :\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(exit_code_workaround)}.\\n')\n\n print(f'\\nSUCCESS: {elapsed}s elapsed.\\n')\n\n if return_output:\n if base64_decode:\n import base64\n\n return base64.b64decode(output).decode('utf-8')\n else:\n return output\n\ndef load_json(filename):\n \"\"\"Load a json file from disk and return the contents\"\"\"\n\n with open(filename, encoding=\"utf8\") as json_file:\n return json.load(json_file)\n\ndef load_rules():\n \"\"\"Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable\"\"\"\n\n # Load this notebook as json to get access to the expert rules in the notebook metadata.\n #\n try:\n j = load_json(\"tsg023-run-kubectl-get-all.ipynb\")\n except:\n pass # If the user has renamed the book, we can't load ourself. 
NOTE: Is there a way in Jupyter, to know your own filename?\n else:\n if \"metadata\" in j and \\\n \"azdata\" in j[\"metadata\"] and \\\n \"expert\" in j[\"metadata\"][\"azdata\"] and \\\n \"expanded_rules\" in j[\"metadata\"][\"azdata\"][\"expert\"]:\n\n rules = j[\"metadata\"][\"azdata\"][\"expert\"][\"expanded_rules\"]\n\n rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.\n\n # print (f\"EXPERT: There are {len(rules)} rules to evaluate.\")\n\n return rules\n\ndef apply_expert_rules(line):\n \"\"\"Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so\n inject a 'HINT' to the follow-on SOP/TSG to run\"\"\"\n\n global rules\n\n for rule in rules:\n notebook = rule[1]\n cell_type = rule[2]\n output_type = rule[3] # i.e. stream or error\n output_type_name = rule[4] # i.e. ename or name \n output_type_value = rule[5] # i.e. SystemExit or stdout\n details_name = rule[6] # i.e. evalue or text \n expression = rule[7].replace(\"\\\\*\", \"*\") # Something escaped *, and put a \\ in front of it!\n\n if debug_logging:\n print(f\"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.\")\n\n if re.match(expression, line, re.DOTALL):\n\n if debug_logging:\n print(\"EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'\".format(output_type_name, output_type_value, expression, notebook))\n\n match_found = True\n\n display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))\n\n\n\nprint('Common functions defined successfully.')\n\n# Hints for binary (transient fault) retry, (known) error and install guide\n#\nretry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}\nerror_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}\ninstall_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}",
"_____no_output_____"
]
],
[
[
"### Run kubectl get all for the system namespace",
"_____no_output_____"
]
],
[
[
"run(\"kubectl get all\")",
"_____no_output_____"
]
],
[
[
"### Get the Kubernetes namespace for the big data cluster\n\nGet the namespace of the Big Data Cluster use the kubectl command line\ninterface .\n\n**NOTE:**\n\nIf there is more than one Big Data Cluster in the target Kubernetes\ncluster, then either:\n\n- set \\[0\\] to the correct value for the big data cluster.\n- set the environment variable AZDATA\\_NAMESPACE, before starting\n Azure Data Studio.",
"_____no_output_____"
]
],
[
[
"# Place Kubernetes namespace name for BDC into 'namespace' variable\n\nif \"AZDATA_NAMESPACE\" in os.environ:\n namespace = os.environ[\"AZDATA_NAMESPACE\"]\nelse:\n try:\n namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)\n except:\n from IPython.display import Markdown\n print(f\"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.\")\n display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))\n raise\n\nprint(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')",
"_____no_output_____"
]
],
[
[
"### Run kubectl get all for the Big Data Cluster namespace",
"_____no_output_____"
]
],
[
[
"run(f\"kubectl get all -n {namespace}\")",
"_____no_output_____"
],
[
"print('Notebook execution complete.')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb85430367f46a7c2fd1894bdfed6425385586ac
| 22,036 |
ipynb
|
Jupyter Notebook
|
examples/PlotActivity.ipynb
|
wahab2604/pycalphad
|
dba5c0fe91167a3769f1fbc88ae1f4672f4f998f
|
[
"MIT"
] | 162 |
2016-02-15T00:04:28.000Z
|
2022-03-29T06:10:07.000Z
|
examples/PlotActivity.ipynb
|
wahab2604/pycalphad
|
dba5c0fe91167a3769f1fbc88ae1f4672f4f998f
|
[
"MIT"
] | 323 |
2016-01-16T18:41:57.000Z
|
2022-03-29T22:07:52.000Z
|
examples/PlotActivity.ipynb
|
wahab2604/pycalphad
|
dba5c0fe91167a3769f1fbc88ae1f4672f4f998f
|
[
"MIT"
] | 76 |
2016-02-08T21:05:45.000Z
|
2022-03-04T12:12:41.000Z
| 117.839572 | 17,348 | 0.879742 |
[
[
[
"# Calculate and Plot Activity\n\n\nGiven an existing database for Al-Zn, we would like to calculate the activity of the liquid.\n\n## Experimental activity results\n\nIn order to make sure we are correct, we'll compare the values with experimental results.\nExperimental activities are digtized from Fig 18 in A. Yazawa, Y.K. Lee, Thermodynamic Studies of the Liquid Aluminum Alloy Systems, Trans. Japan Inst. Met. 11 (1970) 411–418. https://doi.org/10.2320/matertrans1960.11.411.\n\nThe measurements at at 1073 K and they used a reference state of the pure Zn at that temperature.\n",
"_____no_output_____"
]
],
[
[
"exp_x_zn = [0.0482, 0.1990, 0.3550, 0.5045, 0.6549, 0.8070, 0.9569]\nexp_acr_zn = [0.1154, 0.3765, 0.5411, 0.6433, 0.7352, 0.8384, 0.9531]",
"_____no_output_____"
]
],
[
[
"## Set up the database\n\nAl-Zn database is taken from S. Mey, Reevaluation of the Al-Zn system, Zeitschrift F{ü}r Met. 84 (1993) 451–455.",
"_____no_output_____"
]
],
[
[
"from pycalphad import Database, equilibrium, variables as v\nimport numpy as np\n\ndbf = Database('alzn_mey.tdb') \n\ncomps = ['AL', 'ZN', 'VA']\nphases = list(dbf.phases.keys())",
"_____no_output_____"
]
],
[
[
"## Calculate the reference state\n\nBecause all chemical activities must be specified with a reference state, we're going to choose a reference state as the pure element at the same temperature, consistent with the experimental data.",
"_____no_output_____"
]
],
[
[
"ref_eq = equilibrium(dbf, comps, phases, {v.P: 101325, v.T: 1023, v.X('ZN'): 1})",
"_____no_output_____"
]
],
[
[
"## Calculate the equilibria\n\nDo the calculation over the composition range",
"_____no_output_____"
]
],
[
[
"eq = equilibrium(dbf, comps, phases, {v.P: 1013325, v.T: 1023, v.X('ZN'): (0, 1, 0.005)})",
"_____no_output_____"
]
],
[
[
"## Get the chemical potentials and calculate activity\n\nWe need to select the chemical potentials from the xarray Dataset and calculate the activity.",
"_____no_output_____"
]
],
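The activity computed in the next cell follows from the standard relation between chemical potential and activity relative to the chosen reference state (written here for Zn, with R ≈ 8.315 J/(mol·K) and T = 1073 K, matching the code):

```latex
\mu_{\mathrm{Zn}} = \mu_{\mathrm{Zn}}^{\mathrm{ref}} + RT \ln a_{\mathrm{Zn}}
\quad\Longrightarrow\quad
a_{\mathrm{Zn}} = \exp\!\left(\frac{\mu_{\mathrm{Zn}} - \mu_{\mathrm{Zn}}^{\mathrm{ref}}}{RT}\right)
```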
[
[
"chempot_ref = ref_eq.MU.sel(component='ZN').squeeze()\nchempot = eq.MU.sel(component='ZN').squeeze()\n\nacr_zn = np.exp((chempot - chempot_ref)/(8.315*1023))",
"_____no_output_____"
]
],
[
[
"## Plot the result",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.plot(eq.X.sel(component='ZN', vertex=0).squeeze(), acr_zn, label='Calculated')\n# add experimental data\nplt.scatter(exp_x_zn, exp_acr_zn, label='Yazawa 1970')\n\nplt.xlim((0, 1))\nplt.ylim((0, 1))\nplt.gca().set_aspect(1)\nplt.xlabel('X(ZN)')\nplt.ylabel('ACR(ZN)')\nplt.title('Activity of Zn at 1073K')\nplt.legend(loc=0)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb8546801075fb5c1fe0e3583180056b069dade0
| 147,544 |
ipynb
|
Jupyter Notebook
|
learn/array/numpy_array/array-v3.ipynb
|
midorigreen/data-kaggle
|
f41ba65165482b8790e5ff32ab7f4d02ae3dd0c9
|
[
"MIT"
] | null | null | null |
learn/array/numpy_array/array-v3.ipynb
|
midorigreen/data-kaggle
|
f41ba65165482b8790e5ff32ab7f4d02ae3dd0c9
|
[
"MIT"
] | null | null | null |
learn/array/numpy_array/array-v3.ipynb
|
midorigreen/data-kaggle
|
f41ba65165482b8790e5ff32ab7f4d02ae3dd0c9
|
[
"MIT"
] | null | null | null | 147.691692 | 62,952 | 0.892581 |
[
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"points = np.arange(-5, 5, 0.1)",
"_____no_output_____"
],
[
"dx, dy = np.meshgrid(points, points)",
"_____no_output_____"
],
[
"dx",
"_____no_output_____"
],
[
"dy",
"_____no_output_____"
],
[
"dx[0].shape",
"_____no_output_____"
],
[
"dx.shape",
"_____no_output_____"
]
],
[
[
"## カラーグラフへの描画",
"_____no_output_____"
]
],
[
[
"plt.imshow(dx)",
"_____no_output_____"
],
[
"plt.imshow(dy)",
"_____no_output_____"
],
[
"z = (np.sin(dx) + np.sin(dy))",
"_____no_output_____"
],
[
"plt.imshow(z)",
"_____no_output_____"
],
[
"plt.imshow(z)\nplt.colorbar()\nplt.title('Plot for sin(x) + sin(y)')",
"_____no_output_____"
]
],
[
[
"## 計算",
"_____no_output_____"
]
],
[
[
"A = np.array([1, 2, 3, 4])",
"_____no_output_____"
],
[
"B = np.array([1000,2000,3000,4000])",
"_____no_output_____"
],
[
"condition = np.array([True,False,False,True])",
"_____no_output_____"
]
],
[
[
"※ 速度が遅い + 多次元に対応できない",
"_____no_output_____"
]
],
[
[
"ans = [(a if cond else b) for a,b,cond in zip(A,B,condition)]",
"_____no_output_____"
],
[
"ans",
"_____no_output_____"
],
[
"ans2 = np.where(condition,A,B)",
"_____no_output_____"
],
[
"ans2",
"_____no_output_____"
],
[
"from numpy.random import randn",
"_____no_output_____"
]
],
[
[
"正規分布の5x5のランダム行列",
"_____no_output_____"
]
],
[
[
"arr = randn(5,5)",
"_____no_output_____"
],
[
"arr",
"_____no_output_____"
]
],
[
[
"where(条件, trueの場合, falseの場合)",
"_____no_output_____"
]
],
[
[
"np.where(arr < 0, 0, arr)",
"_____no_output_____"
],
[
"arr = np.array([[1,2,3],[4,5,6],[7,8,9]])",
"_____no_output_____"
],
[
"arr",
"_____no_output_____"
],
[
"arr.sum()",
"_____no_output_____"
]
],
[
[
"行方向(0軸)に足し合わせ",
"_____no_output_____"
]
],
[
[
"arr.sum(0)",
"_____no_output_____"
]
],
[
[
"列方向(1軸)に足し合わせ",
"_____no_output_____"
]
],
[
[
"arr.sum(1)",
"_____no_output_____"
]
],
[
[
"**平均値**",
"_____no_output_____"
]
],
[
[
"arr.mean()",
"_____no_output_____"
]
],
[
[
"**標準偏差**",
"_____no_output_____"
]
],
[
[
"arr.std()",
"_____no_output_____"
]
],
[
[
"**分散**",
"_____no_output_____"
]
],
[
[
"arr.var()",
"_____no_output_____"
]
],
[
[
"### 真偽値判定",
"_____no_output_____"
]
],
[
[
"bool_arr = np.array([True,False,True])",
"_____no_output_____"
]
],
[
[
"一つでもTrueがあればTrue",
"_____no_output_____"
]
],
[
[
"bool_arr.any()",
"_____no_output_____"
]
],
[
[
"全てTrueでTrue",
"_____no_output_____"
]
],
[
[
"bool_arr.all()",
"_____no_output_____"
]
],
[
[
"### ソート",
"_____no_output_____"
]
],
[
[
"arr = randn(5)",
"_____no_output_____"
],
[
"arr",
"_____no_output_____"
]
],
[
[
"昇順",
"_____no_output_____"
]
],
[
[
"arr.sort()",
"_____no_output_____"
],
[
"arr",
"_____no_output_____"
]
],
[
[
"逆順に取り出す(降順)",
"_____no_output_____"
]
],
[
[
"arr[::-1]",
"_____no_output_____"
]
],
[
[
"#### 重複",
"_____no_output_____"
]
],
[
[
"cs = np.array(['USA', 'Japan', 'Russia', 'France', 'Japan', 'USA', 'Mexico'])",
"_____no_output_____"
],
[
"cs",
"_____no_output_____"
],
[
"np.unique(cs)",
"_____no_output_____"
]
],
[
[
"1つの目の引数の配列が2つめの引数に入っているか個別に判定",
"_____no_output_____"
]
],
[
[
"np.in1d(['USA', 'France', 'Sweaden'], cs)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb854a80c5881fc9b31778b746ad12a59894de95
| 8,104 |
ipynb
|
Jupyter Notebook
|
tests/notebooks/ipynb_coconut/coconut_homepage_demo.ipynb
|
Skylion007/jupytext
|
efd9bec9abc1091db145b411e794851760f0f2cf
|
[
"MIT"
] | 1 |
2021-02-25T19:11:41.000Z
|
2021-02-25T19:11:41.000Z
|
tests/notebooks/ipynb_coconut/coconut_homepage_demo.ipynb
|
Skylion007/jupytext
|
efd9bec9abc1091db145b411e794851760f0f2cf
|
[
"MIT"
] | null | null | null |
tests/notebooks/ipynb_coconut/coconut_homepage_demo.ipynb
|
Skylion007/jupytext
|
efd9bec9abc1091db145b411e794851760f0f2cf
|
[
"MIT"
] | null | null | null | 20.159204 | 64 | 0.466436 |
[
[
[
"Taken fron [coconut-lang.org](coconut-lang.org)",
"_____no_output_____"
],
[
"pipeline-style programming",
"_____no_output_____"
]
],
[
[
"\"hello, world!\" |> print",
"hello, world!\n"
]
],
[
[
" prettier lambdas",
"_____no_output_____"
]
],
[
[
"x -> x ** 2",
"_____no_output_____"
]
],
[
[
"partial application",
"_____no_output_____"
]
],
[
[
"range(10) |> map$(pow$(?, 2)) |> list",
"_____no_output_____"
]
],
[
[
"pattern-matching",
"_____no_output_____"
]
],
[
[
"match [head] + tail in [0, 1, 2, 3]:\n print(head, tail)",
"0 [1, 2, 3]\n"
]
],
[
[
"destructuring assignment",
"_____no_output_____"
]
],
[
[
"{\"list\": [0] + rest} = {\"list\": [0, 1, 2, 3]}",
"_____no_output_____"
]
],
[
[
"infix notation",
"_____no_output_____"
]
],
[
[
"# 5 `mod` 3 == 2",
"_____no_output_____"
]
],
[
[
"operator functions",
"_____no_output_____"
]
],
[
[
"product = reduce$(*)",
"_____no_output_____"
]
],
[
[
"function composition",
"_____no_output_____"
]
],
[
[
"# (f..g..h)(x, y, z)",
"_____no_output_____"
]
],
[
[
"lazy lists",
"_____no_output_____"
]
],
[
[
"# (| first_elem() |) :: rest_elems()",
"_____no_output_____"
]
],
[
[
"parallel programming",
"_____no_output_____"
]
],
[
[
"range(100) |> parallel_map$(pow$(2)) |> list",
"_____no_output_____"
]
],
[
[
"tail call optimization",
"_____no_output_____"
]
],
[
[
"def factorial(n, acc=1):\n case n:\n match 0:\n return acc\n match _ is int if n > 0:\n return factorial(n-1, acc*n)",
"_____no_output_____"
]
],
[
[
"algebraic data types",
"_____no_output_____"
]
],
[
[
"data Empty()\ndata Leaf(n)\ndata Node(l, r)\n\ndef size(Empty()) = 0\n\naddpattern def size(Leaf(n)) = 1\n\naddpattern def size(Node(l, r)) = size(l) + size(r)",
"_____no_output_____"
]
],
[
[
"and much more!\n\nLike what you see? Don't forget to star Coconut on GitHub!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb854d3bb2f55e92affe14156aa42a82ce91e0ff
| 4,172 |
ipynb
|
Jupyter Notebook
|
models/02a_logit__single_yr__bbz.ipynb
|
georgetown-analytics/Paddle-Your-Loan-Canoe
|
62b40c28ea9ff757a4fbcd31f6e899670f35df6b
|
[
"MIT"
] | null | null | null |
models/02a_logit__single_yr__bbz.ipynb
|
georgetown-analytics/Paddle-Your-Loan-Canoe
|
62b40c28ea9ff757a4fbcd31f6e899670f35df6b
|
[
"MIT"
] | null | null | null |
models/02a_logit__single_yr__bbz.ipynb
|
georgetown-analytics/Paddle-Your-Loan-Canoe
|
62b40c28ea9ff757a4fbcd31f6e899670f35df6b
|
[
"MIT"
] | null | null | null | 30.903704 | 320 | 0.595638 |
[
[
[
"# HMDA Data -- Regression Modeling\n\n## Using ML with *scikit-learn* for modeling -- (02) Logistical Regression \n\n\n\nThis notebook explores the Home Mortgage Disclosure Act (HMDA) data for one year -- 2015. We use concepts from as well as tools from our own research and further readings to create a machine learning logistical regression model along with Naive Bayes classifers for predictive properties of loan approval rates.\n\n*Note that as of July 12, 2019, HMDA data is publically available for 2007 - 2017. \nhttps://www.consumerfinance.gov/data-research/hmda/explore\n\n--\n\n**Documentation:** \n(1) See below in '02'\n\n*There are many learning sources and prior work around similar topics: We draw inspiration from past Cohorts as well as learning materials from peer sources such as Kaggle and Towards Data Science*.\n\n---",
"_____no_output_____"
],
[
"## Importing Libraries and Loading the Data\n\nFirst, we need to import all the libraries we are going to utilize throughout this notebook. We import everything at the very top of this notebook for order and best practice.",
"_____no_output_____"
]
],
[
[
"# Importing Libraries.\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport math\n\nimport os\nimport psycopg2\nimport pandas.io.sql as psql\nimport sqlalchemy\nfrom sqlalchemy import create_engine\n\nfrom sklearn import preprocessing\nfrom sklearn import model_selection\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import accuracy_score\n\nfrom scipy import stats\nfrom pylab import*\nfrom matplotlib.ticker import LogLocator\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'",
"_____no_output_____"
]
],
[
[
"*-----*\n \nSecond, we establish the connection to the AWS PostgreSQL Relational Database System.",
"_____no_output_____"
]
],
[
[
"# Postgres (username, password, and database name) -- we define variables and put it into a function to easily call using an engine.\npostgres_host = 'aws-pgsql-loan-canoe.cr3nrpkvgwaj.us-east-2.rds.amazonaws.com' \npostgres_port = '5432' \npostgres_username = 'reporting_user' \npostgres_password = 'team_loan_canoe2019'\npostgres_dbname = \"paddle_loan_canoe\"\npostgres_str = ('postgresql://{username}:{password}@{host}:{port}/{dbname}'\n .format(username = postgres_username,\n password = postgres_password,\n host = postgres_host,\n port = postgres_port,\n dbname = postgres_dbname)\n )\n\n# Creating the connection.\ncnx = create_engine(postgres_str)",
"_____no_output_____"
]
]
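As a rough sketch of where the modeling heads from here — the table name `hmda_lar_2015` and the two feature columns below are placeholders for illustration, not the project's actual schema — one could pull a sample through the `cnx` engine created above and fit a scikit-learn logistic regression:

```python
# Sketch only: table/column names are assumed; relies on the imports and `cnx` engine defined above.
df = pd.read_sql_query("SELECT loan_amount_000s, applicant_income_000s, action_taken "
                       "FROM hmda_lar_2015 LIMIT 50000;", cnx)
df = df.dropna()

X = df[['loan_amount_000s', 'applicant_income_000s']]
y = (df['action_taken'] == 1).astype(int)   # action_taken == 1 -> loan originated (approved)

X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.2, random_state=42)

logit = LogisticRegression(solver='liblinear').fit(X_train, y_train)
print('Hold-out accuracy:', accuracy_score(y_test, logit.predict(X_test)))
```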
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb8553db85cec2d1946d862d990db10d8b99e309
| 3,509 |
ipynb
|
Jupyter Notebook
|
proj4/InClass-Hypothesis Testing Exercise.ipynb
|
sspickle/instrumentation-projects
|
ed6e67a07141df92b98a79ee2e8bbaece9a25e09
|
[
"MIT"
] | 12 |
2018-03-25T23:26:04.000Z
|
2021-09-01T08:32:15.000Z
|
proj4/InClass-Hypothesis Testing Exercise.ipynb
|
sspickle/instrumentation-projects
|
ed6e67a07141df92b98a79ee2e8bbaece9a25e09
|
[
"MIT"
] | 4 |
2021-02-23T11:18:07.000Z
|
2021-11-04T09:23:07.000Z
|
proj4/InClass-Hypothesis Testing Exercise.ipynb
|
sspickle/instrumentation-projects
|
ed6e67a07141df92b98a79ee2e8bbaece9a25e09
|
[
"MIT"
] | 5 |
2020-01-23T14:12:29.000Z
|
2022-02-05T21:46:53.000Z
| 24.886525 | 141 | 0.524651 |
[
[
[
"import pymc3 as pm\nimport arviz as az\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"Suppose you measure two sets of data, $x_1$ and $x_2$:",
"_____no_output_____"
]
],
[
[
"\nx1 = np.array([3.25466863, 2.97370402, 2.91113498, 3.4574893 , 3.17937048,\n 3.03048094, 3.21812428, 2.81350504, 2.9976349 , 2.97788408,\n 3.1813029 , 2.87498481, 2.90372449, 3.46095383, 3.11570786,\n 2.69100383, 2.97142051, 2.72968174, 2.48244642, 2.8584929 ])\nx1",
"_____no_output_____"
],
[
"x2 = np.array([3.58365047, 3.04506491, 3.35190893, 2.76485786, 3.8494015 ,\n 3.17593123, 3.03499338, 2.31533078, 2.58647626, 3.47397813,\n 2.9985396 , 3.46170964, 3.23908075, 2.78904992, 3.000179 ,\n 3.23386923, 3.10856455, 3.24167989, 2.92353227, 3.09131427])\nx2",
"_____no_output_____"
]
],
[
[
"They *appear* to have different means:",
"_____no_output_____"
]
],
[
[
"x1_mean = x1.mean()\nx2_mean = x2.mean()\n\nprint(\"<x1>=%4.3f <x2>=%4.3f -> <x2-x1> = %4.3f\" % (x1_mean, x2_mean, x2_mean-x1_mean))",
"<x1>=3.004 <x2>=3.113 -> <x2-x1> = 0.109\n"
]
],
[
[
"Use Bayesian inference to find the posterior distribution of the difference. How likely is it that $x_2$ is *really* larger than $x_1$?",
"_____no_output_____"
]
]
] |
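One possible way to set this up — a sketch only; the priors below are illustrative choices, and on older PyMC3 versions `return_inferencedata=True` may need to be dropped — is a two-group model with a `Deterministic` node for the difference of the means:

```python
# Two-group model; priors are illustrative, not prescriptive.
with pm.Model() as model:
    mu1 = pm.Normal('mu1', mu=3.0, sigma=1.0)
    mu2 = pm.Normal('mu2', mu=3.0, sigma=1.0)
    sigma1 = pm.HalfNormal('sigma1', sigma=1.0)
    sigma2 = pm.HalfNormal('sigma2', sigma=1.0)

    pm.Normal('obs1', mu=mu1, sigma=sigma1, observed=x1)
    pm.Normal('obs2', mu=mu2, sigma=sigma2, observed=x2)

    diff = pm.Deterministic('diff', mu2 - mu1)
    idata = pm.sample(2000, tune=1000, return_inferencedata=True)

az.plot_posterior(idata, var_names=['diff'], ref_val=0)
# P(mean of x2 > mean of x1) is approximately the fraction of posterior 'diff' samples above zero
print(float((idata.posterior['diff'] > 0).mean()))
```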
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb856b64c15761d206a413bba8187d42fef599c6
| 16,368 |
ipynb
|
Jupyter Notebook
|
Python_CH10.ipynb
|
AminOroji/Elementary-Python-suf
|
733bba0b2142f1ff00795abf122af5aa2c1ac626
|
[
"MIT"
] | null | null | null |
Python_CH10.ipynb
|
AminOroji/Elementary-Python-suf
|
733bba0b2142f1ff00795abf122af5aa2c1ac626
|
[
"MIT"
] | null | null | null |
Python_CH10.ipynb
|
AminOroji/Elementary-Python-suf
|
733bba0b2142f1ff00795abf122af5aa2c1ac626
|
[
"MIT"
] | null | null | null | 20.771574 | 302 | 0.458333 |
[
[
[
"# Chapter 10: Tuples",
"_____no_output_____"
],
[
"### Tuples are immutable",
"_____no_output_____"
],
[
"A tuple is a sequence of values much like a list. The values stored in a tuple can be **any type**, and they are **indexed by integers**.",
"_____no_output_____"
]
],
[
[
"tp1 = 'a', 'b', 'c', 'd', 'e'\ntype(tp1)",
"_____no_output_____"
],
[
"tp2 = ('a', 'b', 'c', 'd', 'e')\ntype(tpl2)",
"_____no_output_____"
],
[
"tp1 is tp2",
"_____no_output_____"
],
[
"# Without the comma Python treats ('a') as an expression with a string in parentheses that evaluates to a string:\nst = ('Amin')\ntype(st)",
"_____no_output_____"
],
[
"# empty tuple\nempty_tuple = tuple()\nempty_tuple",
"_____no_output_____"
],
[
"dir(tp1)",
"_____no_output_____"
],
[
"tp1.count('a')",
"_____no_output_____"
],
[
"help(tp1.index)",
"Help on built-in function index:\n\nindex(value, start=0, stop=9223372036854775807, /) method of builtins.tuple instance\n Return first index of value.\n \n Raises ValueError if the value is not present.\n\n"
],
[
"# If the argument is a sequence (string, list, or tuple), \n# the result of the call to tuple is a tuple with the elements of the sequence:\nnew_tuple = tuple('Hello World!')\nnew_tuple",
"_____no_output_____"
],
[
"new_tuple.count('l')",
"_____no_output_____"
],
[
"new_tuple.index('l',4)",
"_____no_output_____"
],
[
"new_tuple[0] = 'Amin'",
"_____no_output_____"
],
[
"new_tuple = 'new string is assigned'\nprint(new_tuple)\nprint('type is: ', type(new_tuple))",
"new string is assigned\ntype is: <class 'str'>\n"
],
[
"new_tuple = tuple('new string is assigned')\nprint(new_tuple)\nprint('type: ', type(new_tuple))",
"('n', 'e', 'w', ' ', 's', 't', 'r', 'i', 'n', 'g', ' ', 'i', 's', ' ', 'a', 's', 's', 'i', 'g', 'n', 'e', 'd')\ntype: <class 'tuple'>\n"
]
],
[
[
"## Comparing tuples",
"_____no_output_____"
],
[
"The comparison operators work with tuples and other sequences. Python starts by comparing the first element from each sequence. If they are equal, it goes on to the next element, and so on, until it finds elements that differ. Subsequent elements are not considered (even if they are really big).",
"_____no_output_____"
]
],
[
[
"print((0, 1, 2) < (0, 3, 4))\nprint((0, 5, 0) < (0, 3, 4))\nprint((0, 1, 2000000) < (0, 1))",
"True\nFalse\nFalse\n"
]
],
[
[
"### IMPORTANT POINT",
"_____no_output_____"
]
],
[
[
"### IMPORTANT POINT\nm = ('Amin', 'Oroji', 'Armin ')\nx, y, z = m\nprint(x)\nprint(y)\nprint(z)",
"Amin\nOroji\nArmin \n"
],
[
"print(type(x), type(m))",
"<class 'str'> <class 'tuple'>\n"
]
],
[
[
"The same as lists, it means:\n\nx = m[0] and y = m[1]\n\n",
"_____no_output_____"
]
],
[
[
"a, b = 1, 2, 3",
"_____no_output_____"
],
[
"# example\naddr = '[email protected]'\nuname, domain = addr.split('@')\nprint('username: ', uname)\nprint('Domain: ', domain)",
"username: amin\nDomain: prata-tech.com\n"
]
],
[
[
"### Dictionaries and Tuples",
"_____no_output_____"
]
],
[
[
"### Dictionaries and Tuples\nemployees = {'CEO':'Amin', 'IT Support': 'Milad', 'DM Manager': 'Sahar', 'AI Eng': 'Armin',\n 'Graphic Designer':'Raana', 'UX Designer':'Narges'\n }\nitems = list(employees.items())\nitems",
"_____no_output_____"
],
[
"type(items[0])",
"_____no_output_____"
],
[
"items.sort()\nitems",
"_____no_output_____"
]
],
[
[
"### Multiple assignment with dictionaries",
"_____no_output_____"
]
],
[
[
"for key, val in items:\n print('%s : %s' %(val, key))",
"Armin : AI Eng\nAmin : CEO\nSahar : DM Manager\nRaana : Graphic Designer\nMilad : IT Support\nNarges : UX Designer\n"
],
[
"items.sort(reverse = True)\nitems",
"_____no_output_____"
],
[
"new_list = list()\nfor key, val in items:\n new_list.append((val,key))\n\nnew_list",
"_____no_output_____"
]
],
[
[
"### Using Tuples as Keys in Dictionaries",
"_____no_output_____"
],
[
"Tuples are hashable unlike lists.",
"_____no_output_____"
]
],
[
[
"phonebook = {('Amin','Oroji'):'09981637510', ('Armin','Golzar'):'09981637520',('Milad','Gashtil'):'09981637530'}\nphonebook",
"_____no_output_____"
],
[
"phonebook[('Amin','Oroji')]",
"_____no_output_____"
]
],
[
[
"## Excercise: Write a script to create your phonebook ((name,surname),number). update your phonebook with new entries.",
"_____no_output_____"
]
]
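One possible solution sketch for the exercise (the extra names and numbers are made up for illustration):

```python
# Phonebook keyed by (name, surname) tuples; values are made-up numbers.
phonebook = {('Amin', 'Oroji'): '09981637510',
             ('Armin', 'Golzar'): '09981637520'}

# Update the phonebook with new entries
phonebook[('Milad', 'Gashtil')] = '09981637530'
phonebook.update({('Sahar', 'Tehrani'): '09981637540'})

for (name, surname), number in phonebook.items():
    print('%s %s: %s' % (name, surname, number))
```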
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
cb856d0d67783e109cb12b1cc998d4b697359631
| 97,909 |
ipynb
|
Jupyter Notebook
|
1_getting_started.ipynb
|
zhengxingXue/rl-tutorial-jnrr19
|
14164f8f1b4cf9a5608cbcad1750b4f7baa5719e
|
[
"MIT"
] | 1 |
2020-05-14T15:00:39.000Z
|
2020-05-14T15:00:39.000Z
|
1_getting_started.ipynb
|
zhengxingXue/rl-tutorial-jnrr19
|
14164f8f1b4cf9a5608cbcad1750b4f7baa5719e
|
[
"MIT"
] | null | null | null |
1_getting_started.ipynb
|
zhengxingXue/rl-tutorial-jnrr19
|
14164f8f1b4cf9a5608cbcad1750b4f7baa5719e
|
[
"MIT"
] | null | null | null | 102.201461 | 68,023 | 0.826369 |
[
[
[
"<a href=\"https://colab.research.google.com/github/araffin/rl-tutorial-jnrr19/blob/master/1_getting_started.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Stable Baselines Tutorial - Getting Started\n\nGithub repo: https://github.com/araffin/rl-tutorial-jnrr19\n\nStable-Baselines: https://github.com/hill-a/stable-baselines\n\nDocumentation: https://stable-baselines.readthedocs.io/en/master/\n\nRL Baselines zoo: https://github.com/araffin/rl-baselines-zoo\n\nMedium article: [https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82](https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82)\n\n[RL Baselines Zoo](https://github.com/araffin/rl-baselines-zoo) is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines.\n\nIt also provides basic scripts for training, evaluating agents, tuning hyperparameters and recording videos.\n\n\n## Introduction\n\nIn this notebook, you will learn the basics for using stable baselines library: how to create a RL model, train it and evaluate it. Because all algorithms share the same interface, we will see how simple it is to switch from one algorithm to another.\n\n\n## Install Dependencies and Stable Baselines Using Pip\n\nList of full dependencies can be found in the [README](https://github.com/hill-a/stable-baselines).\n\n```\nsudo apt-get update && sudo apt-get install cmake libopenmpi-dev zlib1g-dev\n```\n\n\n```\npip install stable-baselines[mpi]\n```",
"_____no_output_____"
]
],
[
[
"# Stable Baselines only supports tensorflow 1.x for now\n%tensorflow_version 1.x\n!apt-get install ffmpeg freeglut3-dev xvfb # For visualization\n!pip install stable-baselines[mpi]==2.10.0",
"_____no_output_____"
]
],
[
[
"## Imports",
"_____no_output_____"
],
[
"Stable-Baselines works on environments that follow the [gym interface](https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html).\nYou can find a list of available environment [here](https://gym.openai.com/envs/#classic_control).\n\nIt is also recommended to check the [source code](https://github.com/openai/gym) to learn more about the observation and action space of each env, as gym does not have a proper documentation.\nNot all algorithms can work with all action spaces, you can find more in this [recap table](https://stable-baselines.readthedocs.io/en/master/guide/algos.html)",
"_____no_output_____"
]
],
[
[
"import gym\nimport numpy as np",
"_____no_output_____"
]
],
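[
[
"As a quick check of the interface described above, the observation and action spaces of an environment can be inspected directly; a minimal sketch using the same CartPole environment as the rest of the notebook (the variable name check_env and the comments are just illustrative):",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: inspect the spaces of the CartPole environment\ncheck_env = gym.make('CartPole-v1')\nprint(check_env.observation_space)  # a 4-dimensional Box: cart position/velocity, pole angle/velocity\nprint(check_env.action_space)       # Discrete(2): push the cart left or right\nobs = check_env.reset()\nprint(obs.shape)                    # (4,) feature vector, not an image\ncheck_env.close()",
"_____no_output_____"
]
],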
[
[
"The first thing you need to import is the RL model, check the documentation to know what you can use on which problem",
"_____no_output_____"
]
],
[
[
"from stable_baselines import PPO2",
"_____no_output_____"
]
],
[
[
"The next thing you need to import is the policy class that will be used to create the networks (for the policy/value functions).\nThis step is optional as you can directly use strings in the constructor: \n\n```PPO2('MlpPolicy', env)``` instead of ```PPO2(MlpPolicy, env)```\n\nNote that some algorithms like `SAC` have their own `MlpPolicy` (different from `stable_baselines.common.policies.MlpPolicy`), that's why using string for the policy is the recommened option.",
"_____no_output_____"
]
],
[
[
"from stable_baselines.common.policies import MlpPolicy",
"_____no_output_____"
]
],
[
[
"## Create the Gym env and instantiate the agent\n\nFor this example, we will use CartPole environment, a classic control problem.\n\n\"A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. \"\n\nCartpole environment: [https://gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)\n\n\n\n\nWe chose the MlpPolicy because the observation of the CartPole task is a feature vector, not images.\n\nThe type of action to use (discrete/continuous) will be automatically deduced from the environment action space\n\nHere we are using the [Proximal Policy Optimization](https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html) algorithm (PPO2 is the version optimized for GPU), which is an Actor-Critic method: it uses a value function to improve the policy gradient descent (by reducing the variance).\n\nIt combines ideas from [A2C](https://stable-baselines.readthedocs.io/en/master/modules/a2c.html) (having multiple workers and using an entropy bonus for exploration) and [TRPO](https://stable-baselines.readthedocs.io/en/master/modules/trpo.html) (it uses a trust region to improve stability and avoid catastrophic drops in performance).\n\nPPO is an on-policy algorithm, which means that the trajectories used to update the networks must be collected using the latest policy.\nIt is usually less sample efficient than off-policy alorithms like [DQN](https://stable-baselines.readthedocs.io/en/master/modules/dqn.html), [SAC](https://stable-baselines.readthedocs.io/en/master/modules/sac.html) or [TD3](https://stable-baselines.readthedocs.io/en/master/modules/td3.html), but is much faster regarding wall-clock time.\n",
"_____no_output_____"
]
],
[
[
"env = gym.make('CartPole-v1')\n\nmodel = PPO2(MlpPolicy, env, verbose=0)",
"_____no_output_____"
]
],
[
[
"We create a helper function to evaluate the agent:",
"_____no_output_____"
]
],
[
[
"def evaluate(model, num_episodes=100):\n \"\"\"\n Evaluate a RL agent\n :param model: (BaseRLModel object) the RL Agent\n :param num_episodes: (int) number of episodes to evaluate it\n :return: (float) Mean reward for the last num_episodes\n \"\"\"\n # This function will only work for a single Environment\n env = model.get_env()\n all_episode_rewards = []\n for i in range(num_episodes):\n episode_rewards = []\n done = False\n obs = env.reset()\n while not done:\n # _states are only useful when using LSTM policies\n action, _states = model.predict(obs)\n # here, action, rewards and dones are arrays\n # because we are using vectorized env\n obs, reward, done, info = env.step(action)\n episode_rewards.append(reward)\n\n all_episode_rewards.append(sum(episode_rewards))\n\n mean_episode_reward = np.mean(all_episode_rewards)\n print(\"Mean reward:\", mean_episode_reward, \"Num episodes:\", num_episodes)\n\n return mean_episode_reward",
"_____no_output_____"
]
],
[
[
"Let's evaluate the un-trained agent, this should be a random agent.",
"_____no_output_____"
]
],
[
[
"# Random Agent, before training\nmean_reward_before_train = evaluate(model, num_episodes=100)",
"Mean reward: 25.09 Num episodes: 100\n"
]
],
[
[
"Stable-Baselines already provides you with that helper:",
"_____no_output_____"
]
],
[
[
"from stable_baselines.common.evaluation import evaluate_policy",
"_____no_output_____"
],
[
"mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100)\n\nprint(f\"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}\")",
"mean_reward:32.08 +/- 12.17\n"
]
],
[
[
"## Train the agent and evaluate it",
"_____no_output_____"
]
],
[
[
"# Train the agent for 10000 steps\nmodel.learn(total_timesteps=10000)",
"_____no_output_____"
],
[
"# Evaluate the trained agent\nmean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=100)\n\nprint(f\"mean_reward:{mean_reward:.2f} +/- {std_reward:.2f}\")",
"mean_reward:276.61 +/- 118.51\n"
]
],
[
[
"Apparently the training went well, the mean reward increased a lot ! ",
"_____no_output_____"
],
[
"### Prepare video recording",
"_____no_output_____"
]
],
[
[
"# Set up fake display; otherwise rendering will fail\nimport os\nos.system(\"Xvfb :1 -screen 0 1024x768x24 &\")\nos.environ['DISPLAY'] = ':1'",
"_____no_output_____"
],
[
"import base64\nfrom pathlib import Path\n\nfrom IPython import display as ipythondisplay\n\ndef show_videos(video_path='', prefix=''):\n \"\"\"\n Taken from https://github.com/eleurent/highway-env\n\n :param video_path: (str) Path to the folder containing videos\n :param prefix: (str) Filter the video, showing only the only starting with this prefix\n \"\"\"\n html = []\n for mp4 in Path(video_path).glob(\"{}*.mp4\".format(prefix)):\n video_b64 = base64.b64encode(mp4.read_bytes())\n html.append('''<video alt=\"{}\" autoplay \n loop controls style=\"height: 400px;\">\n <source src=\"data:video/mp4;base64,{}\" type=\"video/mp4\" />\n </video>'''.format(mp4, video_b64.decode('ascii')))\n ipythondisplay.display(ipythondisplay.HTML(data=\"<br>\".join(html)))",
"_____no_output_____"
]
],
[
[
"We will record a video using the [VecVideoRecorder](https://stable-baselines.readthedocs.io/en/master/guide/vec_envs.html#vecvideorecorder) wrapper, you will learn about those wrapper in the next notebook.",
"_____no_output_____"
]
],
[
[
"from stable_baselines.common.vec_env import VecVideoRecorder, DummyVecEnv\n\ndef record_video(env_id, model, video_length=500, prefix='', video_folder='videos/'):\n \"\"\"\n :param env_id: (str)\n :param model: (RL model)\n :param video_length: (int)\n :param prefix: (str)\n :param video_folder: (str)\n \"\"\"\n eval_env = DummyVecEnv([lambda: gym.make(env_id)])\n # Start the video at step=0 and record 500 steps\n eval_env = VecVideoRecorder(eval_env, video_folder=video_folder,\n record_video_trigger=lambda step: step == 0, video_length=video_length,\n name_prefix=prefix)\n\n obs = eval_env.reset()\n for _ in range(video_length):\n action, _ = model.predict(obs)\n obs, _, _, _ = eval_env.step(action)\n\n # Close the video recorder\n eval_env.close()",
"_____no_output_____"
]
],
[
[
"### Visualize trained agent\n\n",
"_____no_output_____"
]
],
[
[
"record_video('CartPole-v1', model, video_length=500, prefix='ppo2-cartpole')",
"Saving video to /content/videos/ppo2-cartpole-step-0-to-step-500.mp4\n"
],
[
"show_videos('videos', prefix='ppo2')",
"_____no_output_____"
]
],
[
[
"## Bonus: Train a RL Model in One Line\n\nThe policy class to use will be inferred and the environment will be automatically created. This works because both are [registered](https://stable-baselines.readthedocs.io/en/master/guide/quickstart.html).",
"_____no_output_____"
]
],
[
[
"model = PPO2('MlpPolicy', \"CartPole-v1\", verbose=1).learn(1000)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Train a DQN agent\n\nIn the previous example, we have used PPO, which one of the many algorithms provided by stable-baselines.\n\nIn the next example, we are going train a [Deep Q-Network agent (DQN)](https://stable-baselines.readthedocs.io/en/master/modules/dqn.html), and try to see possible improvements provided by its extensions (Double-DQN, Dueling-DQN, Prioritized Experience Replay).\n\nThe essential point of this section is to show you how simple it is to tweak hyperparameters.\n\nThe main advantage of stable-baselines is that it provides a common interface to use the algorithms, so the code will be quite similar.\n\n\nDQN paper: https://arxiv.org/abs/1312.5602\n\nDueling DQN: https://arxiv.org/abs/1511.06581\n\nDouble-Q Learning: https://arxiv.org/abs/1509.06461\n\nPrioritized Experience Replay: https://arxiv.org/abs/1511.05952",
"_____no_output_____"
],
[
"### Vanilla DQN: DQN without extensions",
"_____no_output_____"
]
],
[
[
"# Same as before we instantiate the agent along with the environment\nfrom stable_baselines import DQN\n\n# Deactivate all the DQN extensions to have the original version\n# In practice, it is recommend to have them activated\nkwargs = {'double_q': False, 'prioritized_replay': False, 'policy_kwargs': dict(dueling=False)}\n\n# Note that the MlpPolicy of DQN is different from the one of PPO\n# but stable-baselines handles that automatically if you pass a string\ndqn_model = DQN('MlpPolicy', 'CartPole-v1', verbose=1, **kwargs)",
"_____no_output_____"
],
[
"# Random Agent, before training\nmean_reward_before_train = evaluate(dqn_model, num_episodes=100)",
"Mean reward: 9.29 Num episodes: 100\n"
],
[
"# Train the agent for 10000 steps\ndqn_model.learn(total_timesteps=10000, log_interval=10)",
"_____no_output_____"
],
[
"# Evaluate the trained agent\nmean_reward = evaluate(dqn_model, num_episodes=100)",
"Mean reward: 130.02 Num episodes: 100\n"
]
],
[
[
"### DQN + Prioritized Replay",
"_____no_output_____"
]
],
[
[
"# Activate only the prioritized replay\nkwargs = {'double_q': False, 'prioritized_replay': True, 'policy_kwargs': dict(dueling=False)}\n\ndqn_per_model = DQN('MlpPolicy', 'CartPole-v1', verbose=1, **kwargs)",
"_____no_output_____"
],
[
"dqn_per_model.learn(total_timesteps=10000, log_interval=10)",
"_____no_output_____"
],
[
"# Evaluate the trained agent\nmean_reward = evaluate(dqn_per_model, num_episodes=100)",
"Mean reward: 110.18 Num episodes: 100\n"
]
],
[
[
"### DQN + Prioritized Experience Replay + Double Q-Learning + Dueling",
"_____no_output_____"
]
],
[
[
"# Activate all extensions\nkwargs = {'double_q': True, 'prioritized_replay': True, 'policy_kwargs': dict(dueling=True)}\n\ndqn_full_model = DQN('MlpPolicy', 'CartPole-v1', verbose=1, **kwargs)",
"Creating environment from the given name, wrapped in a DummyVecEnv.\n"
],
[
"dqn_full_model.learn(total_timesteps=10000, log_interval=10)",
"_____no_output_____"
],
[
"mean_reward = evaluate(dqn_per_model, num_episodes=100)",
"Mean reward: 110.02 Num episodes: 100\n"
]
],
[
[
"In this particular example, the extensions does not seem to give any improvement compared to the simple DQN version.\nThey are several reasons for that:\n\n1. `CartPole-v1` is a pretty simple environment\n2. We trained DQN for very few timesteps, not enough to see any difference\n3. The default hyperparameters for DQN are tuned for atari games, where the number of training timesteps is much larger (10^6) and input observations are images\n4. We have only compared one random seed per experiment",
"_____no_output_____"
],
[
"## Conclusion\n\nIn this notebook we have seen:\n- how to define and train a RL model using stable baselines, it takes only one line of code ;)\n- how to use different RL algorithms and change some hyperparameters",
"_____no_output_____"
]
],
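[
[
"To make point 4 above concrete, a fairer comparison would average results over several random seeds. A minimal sketch that reuses the kwargs dictionary and the evaluate_policy helper from the cells above; with so few timesteps and seeds the numbers remain only indicative:",
"_____no_output_____"
]
],
[
[
"# Sketch: average DQN performance over a few random seeds instead of a single run\n# (assumes the stable-baselines model constructor accepts a seed argument, available in recent 2.x releases)\nseed_rewards = []\nfor seed in [0, 1, 2]:\n    seeded_model = DQN('MlpPolicy', 'CartPole-v1', verbose=0, seed=seed, **kwargs)\n    seeded_model.learn(total_timesteps=10000, log_interval=10)\n    mean_r, _ = evaluate_policy(seeded_model, seeded_model.get_env(), n_eval_episodes=20)\n    seed_rewards.append(mean_r)\nprint('Mean reward over seeds:', np.mean(seed_rewards), '+/-', np.std(seed_rewards))",
"_____no_output_____"
]
],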
[
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
cb856dc7628ff9c0c7048b8aa1560dec09c220a7
| 64,444 |
ipynb
|
Jupyter Notebook
|
IBM Cloud/Custom ML Provider/OpenScale Custom ML Provider - All Monitors.ipynb
|
parthakom2/watson-openscale-samples
|
a134126791d64ea1c6bddfb563e21701138fe8f6
|
[
"Apache-2.0"
] | 11 |
2021-01-08T13:21:45.000Z
|
2022-02-01T17:03:29.000Z
|
IBM Cloud/Custom ML Provider/OpenScale Custom ML Provider - All Monitors.ipynb
|
parthakom2/watson-openscale-samples
|
a134126791d64ea1c6bddfb563e21701138fe8f6
|
[
"Apache-2.0"
] | 18 |
2021-05-19T04:20:56.000Z
|
2022-01-05T08:55:50.000Z
|
IBM Cloud/Custom ML Provider/OpenScale Custom ML Provider - All Monitors.ipynb
|
parthakom2/watson-openscale-samples
|
a134126791d64ea1c6bddfb563e21701138fe8f6
|
[
"Apache-2.0"
] | 16 |
2021-01-06T13:55:47.000Z
|
2022-03-22T09:48:15.000Z
| 31.80849 | 926 | 0.596052 |
[
[
[
"<img src=\"https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png\" align=\"left\" alt=\"banner\">",
"_____no_output_____"
],
[
"# Working with Watson OpenScale - Custom Machine Learning Provider",
"_____no_output_____"
],
[
"This notebook should be run using with **Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following services:\n * Watson OpenScale\n * A Custom ML provider which is hosted in a VM that can be accessible from CPD PODs, specifically OpenScale PODs namely ML Gateway fairness, quality, drift, and explain.\n * DB2 - as part of this notebook, we make use of an existing data mart.\n\n \nThe notebook will configure a OpenScale data mart subscription for Custom ML Provider deployment. We configure and execute the fairness, explain, quality and drift monitors.",
"_____no_output_____"
],
[
"## Custom Machine Learning Provider Setup\nFollowing code can be used to start a gunicorn/flask application that can be hosted in a VM, such that it can be accessable from CPD system.\nThis code does the following:\n* It wraps a Watson Machine Learning model that is deployed to a space.\n* So the hosting application URL should contain the SPACE ID and the DEPLOYMENT ID. Then, the same can be used to talk to the target WML model/deployment.\n* Having said that, this is only for this tutorial purpose, and you can define your Custom ML provider endpoint in any fashion you want, such that it wraps your own custom ML engine.\n* The scoring request and response payload should confirm to the schema as described here at: https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-custom.html\n* To start the application using the below code, make sure you install following python packages in your VM:\n\npython -m pip install gunicorn\npython -m pip install flask\npython -m pip install numpy\npython -m pip install pandas\npython -m pip install requests\npython -m pip install joblib==0.11\npython -m pip install scipy==0.19.1\npython -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose\npython -m pip install ibm_watson_machine_learning\n\n-----------------\n\n```\nfrom flask import Flask, request, abort, jsonify\nimport json\nimport base64\nimport requests, io\nimport pandas as pd\nfrom ibm_watson_machine_learning import APIClient\n\napp = Flask(__name__)\n\nWML_CREDENTIALS = {\n \"url\": \"https://namespace1-cpd-namespace1.apps.xxxxx.os.fyre.ibm.com\",\n \"username\": \"admin\",\n \"password\" : \"xxxx\",\n \"instance_id\": \"wml_local\",\n \"version\" : \"3.5\"\n }\n\[email protected]('/spaces/<space_id>/deployments/<deployment_id>/predictions', methods=['POST'])\ndef wml_scoring(space_id, deployment_id):\n\tif not request.json:\n\t\tabort(400)\n\twml_credentials = WML_CREDENTIALS\n\tpayload_scoring = {\n \"input_data\": [\n request.json\n ]\n }\n\n\twml_client = APIClient(wml_credentials)\n\twml_client.set.default_space(space_id)\n\n\trecords_list=[]\n\tscoring_response = wml_client.deployments.score(deployment_id, payload_scoring)\n\treturn jsonify(scoring_response[\"predictions\"][0])\n\nif __name__ == '__main__':\n app.run(host='xxxx.fyre.ibm.com', port=9443, debug=True)\n```\n-----------------",
"_____no_output_____"
],
[
"# Setup <a name=\"setup\"></a>",
"_____no_output_____"
],
[
"## Package installation",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"!pip install --upgrade pyspark==2.4 --no-cache | tail -n 1\n\n!pip install --upgrade pandas==0.25.3 --no-cache | tail -n 1\n!pip install --upgrade requests==2.23 --no-cache | tail -n 1\n!pip install numpy==1.16.4 --no-cache | tail -n 1\n!pip install scikit-learn==0.20 --no-cache | tail -n 1\n!pip install SciPy --no-cache | tail -n 1\n!pip install lime --no-cache | tail -n 1\n\n!pip install --upgrade ibm-watson-machine-learning --user | tail -n 1\n!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1\n!pip install --upgrade ibm-wos-utils --no-cache | tail -n 1",
"_____no_output_____"
]
],
[
[
"### Action: restart the kernel!",
"_____no_output_____"
],
[
"## Configure credentials",
"_____no_output_____"
],
[
"- WOS_CREDENTIALS (CP4D)\n- WML_CREDENTIALS (CP4D)\n- DATABASE_CREDENTIALS (DB2 on CP4D or Cloud Object Storage (COS))\n- SCHEMA_NAME",
"_____no_output_____"
]
],
[
[
"#masked\nWOS_CREDENTIALS = {\n \"url\": \"https://namespace1-cpd-namespace1.apps.xxxxx.os.fyre.ibm.com\",\n \"username\": \"admin\",\n \"password\": \"xxxxx\",\n \"version\": \"3.5\"\n}",
"_____no_output_____"
],
[
"CUSTOM_ML_PROVIDER_SCORING_URL = 'https://xxxxx.fyre.ibm.com:9443/spaces/$SPACE_ID/deployments/$DEPLOYMENT_ID/predictions'\nscoring_url = CUSTOM_ML_PROVIDER_SCORING_URL",
"_____no_output_____"
],
[
"label_column=\"Risk\"\nmodel_type = \"binary\"",
"_____no_output_____"
],
[
"import os\nimport base64\nimport json\nimport requests\nfrom requests.auth import HTTPBasicAuth",
"_____no_output_____"
]
],
[
[
"## Save training data to Cloud Object Storage\n\n### Cloud object storage details¶\n\nIn next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet please visit getting started with COS tutorial. You can find COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in menu of your COS instance. Used COS Service Credentials must be created with Role parameter set as Writer. Later training data file will be loaded to the bucket of your instance and used as training refecence in subsription. COS_ENDPOINT variable can be found in Endpoint field of the menu.",
"_____no_output_____"
]
],
[
[
"IAM_URL=\"https://iam.ng.bluemix.net/oidc/token\"",
"_____no_output_____"
],
[
"# masked\nCOS_API_KEY_ID = \"*****\"\nCOS_RESOURCE_CRN = \"*****\"\nCOS_ENDPOINT = \"https://s3.us.cloud-object-storage.appdomain.cloud\" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints\nBUCKET_NAME = \"*****\"\nFILE_NAME = \"german_credit_data_biased_training.csv\"",
"_____no_output_____"
]
],
[
[
"# Load and explore data",
"_____no_output_____"
]
],
[
[
"!rm german_credit_data_biased_training.csv\n!wget https://raw.githubusercontent.com/pmservice/ai-openscale-tutorials/master/assets/historical_data/german_credit_risk/wml/german_credit_data_biased_training.csv",
"_____no_output_____"
]
],
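[
[
"If the training file is not already present in the bucket referenced above, it can be uploaded with the IBM COS SDK. A minimal sketch, assuming the ibm-cos-sdk package (ibm_boto3) is installed and reusing the credential variables defined earlier:",
"_____no_output_____"
]
],
[
[
"# Sketch: upload the training CSV to the COS bucket used as the training data reference\n# (assumes ibm_boto3 from the ibm-cos-sdk package is available in the runtime)\nimport ibm_boto3\nfrom ibm_botocore.client import Config\n\ncos_client = ibm_boto3.client(\n    \"s3\",\n    ibm_api_key_id=COS_API_KEY_ID,\n    ibm_service_instance_id=COS_RESOURCE_CRN,\n    ibm_auth_endpoint=IAM_URL,\n    config=Config(signature_version=\"oauth\"),\n    endpoint_url=COS_ENDPOINT\n)\ncos_client.upload_file(Filename=FILE_NAME, Bucket=BUCKET_NAME, Key=FILE_NAME)",
"_____no_output_____"
]
],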
[
[
"## Explore data",
"_____no_output_____"
]
],
[
[
"training_data_references = [\n {\n \"id\": \"Credit Risk\",\n \"type\": \"s3\",\n \"connection\": {\n \"access_key_id\": COS_API_KEY_ID,\n \"endpoint_url\": COS_ENDPOINT,\n \"resource_instance_id\":COS_RESOURCE_CRN\n },\n \"location\": {\n \"bucket\": BUCKET_NAME,\n \"path\": FILE_NAME,\n }\n }\n ]",
"_____no_output_____"
]
],
[
[
"## Construct the scoring payload",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_csv(\"german_credit_data_biased_training.csv\")\ndf.head()",
"_____no_output_____"
],
[
"cols_to_remove = [label_column]\ndef get_scoring_payload(no_of_records_to_score = 1):\n\n for col in cols_to_remove:\n if col in df.columns:\n del df[col] \n\n fields = df.columns.tolist()\n values = df[fields].values.tolist()\n\n payload_scoring ={\"fields\": fields, \"values\": values[:no_of_records_to_score]} \n return payload_scoring",
"_____no_output_____"
],
[
"#debug\npayload_scoring = get_scoring_payload(1)\npayload_scoring",
"_____no_output_____"
]
],
[
[
"## Method to perform scoring",
"_____no_output_____"
]
],
[
[
"def custom_ml_scoring():\n header = {\"Content-Type\": \"application/json\", \"x\":\"y\"}\n \n print(scoring_url)\n scoring_response = requests.post(scoring_url, json=payload_scoring, headers=header, verify=False)\n jsonify_scoring_response = scoring_response.json()\n return jsonify_scoring_response",
"_____no_output_____"
]
],
[
[
"## Method to perform payload logging",
"_____no_output_____"
]
],
[
[
"import uuid\nscoring_id = None",
"_____no_output_____"
],
[
"from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord\ndef payload_logging(payload_scoring, scoring_response):\n scoring_id = str(uuid.uuid4())\n records_list=[]\n \n #manual PL logging for custom ml provider\n pl_record = PayloadRecord(scoring_id=scoring_id, request=payload_scoring, response=scoring_response, response_time=int(460))\n records_list.append(pl_record)\n wos_client.data_sets.store_records(data_set_id = payload_data_set_id, request_body=records_list)\n \n time.sleep(5)\n pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)\n print(\"Number of records in the payload logging table: {}\".format(pl_records_count))\n return scoring_id",
"_____no_output_____"
]
],
[
[
"## Score the model and print the scoring response\n### Sample Scoring",
"_____no_output_____"
]
],
[
[
"custom_ml_scoring()",
"_____no_output_____"
]
],
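[
[
"For orientation, both the scoring request sent to the custom ML provider and the response it returns are plain fields/values structures, with the prediction and probability columns appended in the response. The dictionaries below are only an illustration: the values are placeholders and the field list is truncated.",
"_____no_output_____"
]
],
[
[
"# Illustrative shapes only (placeholder values, truncated field list)\nexample_request = {\n    \"fields\": [\"CheckingStatus\", \"LoanDuration\", \"CreditHistory\"],\n    \"values\": [[\"no_checking\", 13, \"credits_paid_to_date\"]]\n}\n\nexample_response = {\n    \"fields\": [\"CheckingStatus\", \"LoanDuration\", \"CreditHistory\", \"predictedLabel\", \"probability\"],\n    \"values\": [[\"no_checking\", 13, \"credits_paid_to_date\", \"No Risk\", [0.79, 0.21]]]\n}",
"_____no_output_____"
]
],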
[
[
"# Configure OpenScale \n\nThe notebook will now import the necessary libraries and set up a Python OpenScale client.",
"_____no_output_____"
]
],
[
[
"from ibm_watson_openscale import APIClient\nfrom ibm_watson_openscale.utils import *\nfrom ibm_watson_openscale.supporting_classes import *\nfrom ibm_watson_openscale.supporting_classes.enums import *\nfrom ibm_watson_openscale.base_classes.watson_open_scale_v2 import *\nfrom ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator\n\nimport json\nimport requests\nimport base64\nfrom requests.auth import HTTPBasicAuth\nimport time",
"_____no_output_____"
]
],
[
[
"## Get a instance of the OpenScale SDK client",
"_____no_output_____"
]
],
[
[
"authenticator = CloudPakForDataAuthenticator(\n url=WOS_CREDENTIALS['url'],\n username=WOS_CREDENTIALS['username'],\n password=WOS_CREDENTIALS['password'],\n disable_ssl_verification=True\n )\n\nwos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator)\nwos_client.version",
"_____no_output_____"
]
],
[
[
"## Set up datamart\n\nWatson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were not supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there unless there is an existing datamart and the KEEP_MY_INTERNAL_POSTGRES variable is set to True. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.\n\nPrior instances of the model will be removed from OpenScale monitoring.",
"_____no_output_____"
]
],
[
[
"wos_client.data_marts.show()",
"_____no_output_____"
],
[
"data_marts = wos_client.data_marts.list().result.data_marts\nif len(data_marts) == 0:\n raise Exception(\"Missing data mart.\")\ndata_mart_id=data_marts[0].metadata.id\nprint('Using existing datamart {}'.format(data_mart_id))",
"_____no_output_____"
],
[
"data_mart_details = wos_client.data_marts.list().result.data_marts[0]\ndata_mart_details.to_dict()",
"_____no_output_____"
],
[
"wos_client.service_providers.show()",
"_____no_output_____"
]
],
[
[
"## Remove existing service provider connected with used WML instance.\n\nMultiple service providers for the same engine instance are avaiable in Watson OpenScale. To avoid multiple service providers of used WML instance in the tutorial notebook the following code deletes existing service provder(s) and then adds new one.",
"_____no_output_____"
]
],
[
[
"SERVICE_PROVIDER_NAME = \"Custom ML Provider Demo - All Monitors\"\nSERVICE_PROVIDER_DESCRIPTION = \"Added by tutorial WOS notebook to showcase monitoring Fairness, Quality, Drift and Explainability against a Custom ML provider.\"",
"_____no_output_____"
],
[
"service_providers = wos_client.service_providers.list().result.service_providers\nfor service_provider in service_providers:\n service_instance_name = service_provider.entity.name\n if service_instance_name == SERVICE_PROVIDER_NAME:\n service_provider_id = service_provider.metadata.id\n wos_client.service_providers.delete(service_provider_id)\n print(\"Deleted existing service_provider for WML instance: {}\".format(service_provider_id))",
"_____no_output_____"
]
],
[
[
"## Add service provider\n\nWatson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.\nNote: You can bind more than one engine instance if needed by calling wos_client.service_providers.add method. Next, you can refer to particular service provider using service_provider_id.",
"_____no_output_____"
]
],
[
[
"request_headers = {\"Content-Type\": \"application/json\", \"Custom_header_X\": \"Custom_header_X_value_Y\"}\nMLCredentials = {}\nadded_service_provider_result = wos_client.service_providers.add(\n name=SERVICE_PROVIDER_NAME,\n description=SERVICE_PROVIDER_DESCRIPTION,\n service_type=ServiceTypes.CUSTOM_MACHINE_LEARNING,\n request_headers=request_headers,\n operational_space_id = \"production\",\n credentials=MLCredentials,\n background_mode=False\n ).result\nservice_provider_id = added_service_provider_result.metadata.id",
"_____no_output_____"
],
[
"print(wos_client.service_providers.get(service_provider_id).result)",
"_____no_output_____"
],
[
"print('Data Mart ID : ' + data_mart_id)\nprint('Service Provider ID : ' + service_provider_id)",
"_____no_output_____"
]
],
[
[
"## Subscriptions",
"_____no_output_____"
],
[
"Remove existing credit risk subscriptions\n\nThis code removes previous subscriptions to the model to refresh the monitors with the new model and new data.",
"_____no_output_____"
]
],
[
[
"wos_client.subscriptions.show()",
"_____no_output_____"
]
],
[
[
"## Remove the existing subscription",
"_____no_output_____"
]
],
[
[
"SUBSCRIPTION_NAME = \"Custom ML Subscription - All Monitors\"",
"_____no_output_____"
],
[
"subscriptions = wos_client.subscriptions.list().result.subscriptions\nfor subscription in subscriptions:\n if subscription.entity.asset.name == \"[asset] \" + SUBSCRIPTION_NAME:\n sub_model_id = subscription.metadata.id\n wos_client.subscriptions.delete(subscription.metadata.id)\n print('Deleted existing subscription for model', sub_model_id)",
"_____no_output_____"
]
],
[
[
"This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself.",
"_____no_output_____"
]
],
[
[
"feature_columns=[\"CheckingStatus\",\"LoanDuration\",\"CreditHistory\",\"LoanPurpose\",\"LoanAmount\",\"ExistingSavings\",\"EmploymentDuration\",\"InstallmentPercent\",\"Sex\",\"OthersOnLoan\",\"CurrentResidenceDuration\",\"OwnsProperty\",\"Age\",\"InstallmentPlans\",\"Housing\",\"ExistingCreditsCount\",\"Job\",\"Dependents\",\"Telephone\",\"ForeignWorker\"]\ncat_features=[\"CheckingStatus\",\"CreditHistory\",\"LoanPurpose\",\"ExistingSavings\",\"EmploymentDuration\",\"Sex\",\"OthersOnLoan\",\"OwnsProperty\",\"InstallmentPlans\",\"Housing\",\"Job\",\"Telephone\",\"ForeignWorker\"]",
"_____no_output_____"
],
[
"import uuid\nasset_id = str(uuid.uuid4())\nasset_name = '[asset] ' + SUBSCRIPTION_NAME\nurl = ''\n\nasset_deployment_id = str(uuid.uuid4())\nasset_deployment_name = asset_name\nasset_deployment_scoring_url = scoring_url\n\nscoring_endpoint_url = scoring_url\nscoring_request_headers = {\n \"Content-Type\": \"application/json\",\n \"Custom_header_X\": \"Custom_header_X_value_Y\"\n }",
"_____no_output_____"
],
[
"subscription_details = wos_client.subscriptions.add(\n data_mart_id=data_mart_id,\n service_provider_id=service_provider_id,\n asset=Asset(\n asset_id=asset_id,\n name=asset_name,\n url=url,\n asset_type=AssetTypes.MODEL,\n input_data_type=InputDataType.STRUCTURED,\n problem_type=ProblemType.BINARY_CLASSIFICATION\n ),\n deployment=AssetDeploymentRequest(\n deployment_id=asset_deployment_id,\n name=asset_deployment_name,\n deployment_type= DeploymentTypes.ONLINE,\n scoring_endpoint=ScoringEndpointRequest(\n url=scoring_endpoint_url,\n request_headers=scoring_request_headers\n )\n ),\n asset_properties=AssetPropertiesRequest(\n label_column=label_column,\n probability_fields=[\"probability\"],\n prediction_field=\"predictedLabel\",\n feature_fields = feature_columns,\n categorical_fields = cat_features,\n training_data_reference=TrainingDataReference(type=\"cos\",\n location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME,\n file_name = FILE_NAME),\n connection=COSTrainingDataReferenceConnection.from_dict({\n \"resource_instance_id\": COS_RESOURCE_CRN,\n \"url\": COS_ENDPOINT,\n \"api_key\": COS_API_KEY_ID,\n \"iam_url\": IAM_URL}))\n )\n ).result\nsubscription_id = subscription_details.metadata.id",
"_____no_output_____"
],
[
"print('Subscription ID: ' + subscription_id)",
"_____no_output_____"
],
[
"import time\n\ntime.sleep(5)\npayload_data_set_id = None\npayload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING, \n target_target_id=subscription_id, \n target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id\nif payload_data_set_id is None:\n print(\"Payload data set not found. Please check subscription status.\")\nelse:\n print(\"Payload data set id:\", payload_data_set_id)",
"_____no_output_____"
]
],
[
[
"### Before the payload logging",
"_____no_output_____"
],
[
"wos_client.subscriptions.get(subscription_id).result.to_dict()",
"_____no_output_____"
],
[
"# Score the model so we can configure monitors\n\nNow that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model.",
"_____no_output_____"
]
],
[
[
"no_of_records_to_score = 100",
"_____no_output_____"
]
],
[
[
"### Construct the scoring payload",
"_____no_output_____"
]
],
[
[
"payload_scoring = get_scoring_payload(no_of_records_to_score)",
"_____no_output_____"
]
],
[
[
"### Perform the scoring against the Custom ML Provider",
"_____no_output_____"
]
],
[
[
"scoring_response = custom_ml_scoring()",
"_____no_output_____"
]
],
[
[
"### Perform payload logging by passing the scoring payload and scoring response",
"_____no_output_____"
]
],
[
[
"scoring_id = payload_logging(payload_scoring, scoring_response)",
"_____no_output_____"
]
],
[
[
"### The scoring id, which would be later used for explanation of the randomly picked transactions",
"_____no_output_____"
]
],
[
[
"print('scoring_id: ' + str(scoring_id))",
"_____no_output_____"
]
],
[
[
"# Fairness configuration <a name=\"Fairness\"></a>",
"_____no_output_____"
],
[
"The code below configures fairness monitoring for our model. It turns on monitoring for two features, sex and age. In each case, we must specify:\n \nWhich model feature to monitor One or more majority groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes One or more minority groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes The threshold at which we would like OpenScale to display an alert if the fairness measurement falls below (in this case, 80%) Additionally, we must specify which outcomes from the model are favourable outcomes, and which are unfavourable. We must also provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 100 records have been added. Finally, to calculate fairness, OpenScale must perform some calculations on the training data, so we provide the dataframe containing the data.",
"_____no_output_____"
],
[
"### Create Fairness Monitor Instance",
"_____no_output_____"
]
],
[
[
"target = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n\n)\nparameters = {\n \"features\": [\n {\"feature\": \"Sex\",\n \"majority\": ['male'],\n \"minority\": ['female']\n },\n {\"feature\": \"Age\",\n \"majority\": [[26, 75]],\n \"minority\": [[18, 25]]\n }\n ],\n \"favourable_class\": [\"No Risk\"],\n \"unfavourable_class\": [\"Risk\"],\n \"min_records\": 100\n}\nthresholds = [{\n \"metric_id\": \"fairness_value\",\n \"specific_values\": [{\n \"applies_to\": [{\n \"key\": \"feature\",\n \"type\": \"tag\",\n \"value\": \"Age\"\n }],\n \"value\": 95\n },\n {\n \"applies_to\": [{\n \"key\": \"feature\",\n \"type\": \"tag\",\n \"value\": \"Sex\"\n }],\n \"value\": 95\n }\n ],\n \"type\": \"lower_limit\",\n \"value\": 80.0\n}]\n\nfairness_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,\n target=target,\n parameters=parameters,\n thresholds=thresholds).result",
"_____no_output_____"
],
[
"fairness_monitor_instance_id = fairness_monitor_details.metadata.id",
"_____no_output_____"
]
],
[
[
"### Get Fairness Monitor Instance",
"_____no_output_____"
]
],
[
[
"wos_client.monitor_instances.show()",
"_____no_output_____"
]
],
[
[
"### Get run details\nIn case of production subscription, initial monitoring run is triggered internally. Checking its status",
"_____no_output_____"
]
],
[
[
"runs = wos_client.monitor_instances.list_runs(fairness_monitor_instance_id, limit=1).result.to_dict()\nfairness_monitoring_run_id = runs[\"runs\"][0][\"metadata\"][\"id\"]\nrun_status = None\nwhile(run_status not in [\"finished\", \"error\"]):\n run_details = wos_client.monitor_instances.get_run_details(fairness_monitor_instance_id, fairness_monitoring_run_id).result.to_dict()\n run_status = run_details[\"entity\"][\"status\"][\"state\"]\n print('run_status: ', run_status)\n if run_status in [\"finished\", \"error\"]:\n break\n time.sleep(10)",
"_____no_output_____"
]
],
[
[
"### Fairness run output",
"_____no_output_____"
]
],
[
[
"wos_client.monitor_instances.get_run_details(fairness_monitor_instance_id, fairness_monitoring_run_id).result.to_dict()",
"_____no_output_____"
],
[
"wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)",
"_____no_output_____"
]
],
[
[
"# Configure Explainability <a name=\"explain\"></a>\nWe provide OpenScale with the training data to enable and configure the explainability features.",
"_____no_output_____"
]
],
[
[
"target = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n)\nparameters = {\n \"enabled\": True\n}\nexplain_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,\n target=target,\n parameters=parameters\n).result\n\nexplain_monitor_details.metadata.id",
"_____no_output_____"
],
[
"scoring_ids = []\nsample_size = 2\nimport random\nfor i in range(0, sample_size):\n n = random.randint(1,100)\n scoring_ids.append(scoring_id + '-' + str(n))\nprint(\"Running explanations on scoring IDs: {}\".format(scoring_ids))",
"_____no_output_____"
],
[
"explanation_types = [\"lime\", \"contrastive\"]\nresult = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result\nprint(result)",
"_____no_output_____"
]
],
[
[
"### Explanation tasks",
"_____no_output_____"
]
],
[
[
"explanation_task_ids=result.metadata.explanation_task_ids\nexplanation_task_ids",
"_____no_output_____"
]
],
[
[
"### Wait for the explanation tasks to complete - all of them",
"_____no_output_____"
]
],
[
[
"import time\ndef finish_explanation_tasks():\n finished_explanations = []\n finished_explanation_task_ids = []\n \n # Check for the explanation task status for finished status. \n # If it is in-progress state, then sleep for some time and check again. \n # Perform the same for couple of times, so that all tasks get into finished state.\n for i in range(0, 5):\n # for each explanation\n print('iteration ' + str(i))\n \n #check status for all explanation tasks\n for explanation_task_id in explanation_task_ids:\n if explanation_task_id not in finished_explanation_task_ids:\n result = wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result\n print(explanation_task_id + ' : ' + result.entity.status.state)\n if (result.entity.status.state == 'finished' or result.entity.status.state == 'error') and explanation_task_id not in finished_explanation_task_ids:\n finished_explanation_task_ids.append(explanation_task_id)\n finished_explanations.append(result)\n\n\n # if there is altest one explanation task that is not yet completed, then sleep for sometime, \n # and check for all those tasks, for which explanation is not yet completeed.\n \n if len(finished_explanation_task_ids) != sample_size:\n print('sleeping for some time..')\n time.sleep(10)\n else:\n break\n \n return finished_explanations",
"_____no_output_____"
]
],
[
[
"### You may have to run the below multiple times till all explanation tasks are either finished or error'ed.",
"_____no_output_____"
]
],
[
[
"finished_explanations = finish_explanation_tasks()",
"_____no_output_____"
],
[
"len(finished_explanations)",
"_____no_output_____"
],
[
"def construct_explanation_features_map(feature_name, feature_weight):\n if feature_name in explanation_features_map:\n explanation_features_map[feature_name].append(feature_weight)\n else:\n explanation_features_map[feature_name] = [feature_weight]",
"_____no_output_____"
],
[
"explanation_features_map = {}\nfor result in finished_explanations:\n print('\\n>>>>>>>>>>>>>>>>>>>>>>\\n')\n print('explanation task: ' + str(result.metadata.explanation_task_id) + ', perturbed:' + str(result.entity.perturbed))\n if result.entity.explanations is not None:\n explanations = result.entity.explanations\n for explanation in explanations:\n if 'predictions' in explanation:\n predictions = explanation['predictions']\n for prediction in predictions:\n predicted_value = prediction['value']\n probability = prediction['probability']\n print('prediction : ' + str(predicted_value) + ', probability : ' + str(probability))\n if 'explanation_features' in prediction:\n explanation_features = prediction['explanation_features']\n for explanation_feature in explanation_features:\n feature_name = explanation_feature['feature_name']\n feature_weight = explanation_feature['weight']\n if (feature_weight >= 0 ):\n feature_weight_percent = round(feature_weight * 100, 2)\n print(str(feature_name) + ' : ' + str(feature_weight_percent))\n task_feature_weight_map = {}\n task_feature_weight_map[result.metadata.explanation_task_id] = feature_weight_percent\n construct_explanation_features_map(feature_name, feature_weight_percent)\n print('\\n>>>>>>>>>>>>>>>>>>>>>>\\n')\nexplanation_features_map",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfor key in explanation_features_map.keys():\n #plot_graph(key, explanation_features_map[key])\n values = explanation_features_map[key]\n plt.title(key)\n plt.ylabel('Weight')\n plt.bar(range(len(values)), values)\n plt.show()",
"_____no_output_____"
]
],
[
[
"# Quality monitoring and feedback logging <a name=\"quality\"></a>",
"_____no_output_____"
],
[
"## Enable quality monitoring",
"_____no_output_____"
],
[
"The code below waits ten seconds to allow the payload logging table to be set up before it begins enabling monitors. First, it turns on the quality (accuracy) monitor and sets an alert threshold of 70%. OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the curve, in the case of a binary classifier) falls below this threshold.\n\nThe second paramater supplied, min_records, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until an additional 50 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint.",
"_____no_output_____"
]
],
[
[
"import time\n\n#time.sleep(10)\ntarget = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n)\nparameters = {\n \"min_feedback_data_size\": 90\n}\nthresholds = [\n {\n \"metric_id\": \"area_under_roc\",\n \"type\": \"lower_limit\",\n \"value\": .80\n }\n ]\nquality_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,\n target=target,\n parameters=parameters,\n thresholds=thresholds\n).result",
"_____no_output_____"
],
[
"quality_monitor_instance_id = quality_monitor_details.metadata.id\nquality_monitor_instance_id",
"_____no_output_____"
]
],
[
[
"## Feedback logging",
"_____no_output_____"
],
[
"The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface.",
"_____no_output_____"
]
],
[
[
"!rm additional_feedback_data_v2.json\n!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/credit_risk/additional_feedback_data_v2.json",
"_____no_output_____"
]
],
[
[
"## Get feedback logging dataset ID",
"_____no_output_____"
]
],
[
[
"feedback_dataset_id = None\nfeedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK, \n target_target_id=subscription_id, \n target_target_type=TargetTypes.SUBSCRIPTION).result\nfeedback_dataset_id = feedback_dataset.data_sets[0].metadata.id\nif feedback_dataset_id is None:\n print(\"Feedback data set not found. Please check quality monitor status.\")",
"_____no_output_____"
],
[
"with open('additional_feedback_data_v2.json') as feedback_file:\n additional_feedback_data = json.load(feedback_file)",
"_____no_output_____"
],
[
"wos_client.data_sets.store_records(feedback_dataset_id, request_body=additional_feedback_data, background_mode=False)",
"_____no_output_____"
],
[
"wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id)",
"_____no_output_____"
],
[
"run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result",
"_____no_output_____"
],
[
"wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)",
"_____no_output_____"
]
],
[
[
"# Drift configuration <a name=\"drift\"></a>",
"_____no_output_____"
],
[
"# Drift detection model generation\n\nPlease update the score function which will be used forgenerating drift detection model which will used for drift detection . This might take sometime to generate model and time taken depends on the training dataset size. The output of the score function should be a 2 arrays 1. Array of model prediction 2. Array of probabilities \n\n- User is expected to make sure that the data type of the \"class label\" column selected and the prediction column are same . For eg : If class label is numeric , the prediction array should also be numeric\n\n- Each entry of a probability array should have all the probabities of the unique class lable .\n For eg: If the model_type=multiclass and unique class labels are A, B, C, D . Each entry in the probability array should be a array of size 4 . Eg : [ [50,30,10,10] ,[40,20,30,10]...]\n \n**Note:**\n- *User is expected to add \"score\" method , which should output prediction column array and probability column array.*\n- *The data type of the label column and prediction column should be same . User needs to make sure that label column and prediction column array should have the same unique class labels*\n- **Please update the score function below with the help of templates documented [here](https://github.com/IBM-Watson/aios-data-distribution/blob/master/Score%20function%20templates%20for%20drift%20detection.md)**",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_csv(\"german_credit_data_biased_training.csv\")\ndf.head()",
"_____no_output_____"
],
[
"def score(training_data_frame):\n #The data type of the label column and prediction column should be same .\n #User needs to make sure that label column and prediction column array should have the same unique class labels\n prediction_column_name = \"predictedLabel\"\n probability_column_name = \"probability\"\n \n feature_columns = list(training_data_frame.columns)\n training_data_rows = training_data_frame[feature_columns].values.tolist()\n \n payload_scoring_records = {\n \"fields\": feature_columns,\n \"values\": [x for x in training_data_rows]\n }\n \n header = {\"Content-Type\": \"application/json\", \"x\":\"y\"}\n scoring_response_raw = requests.post(scoring_url, json=payload_scoring_records, headers=header, verify=False)\n scoring_response = scoring_response_raw.json()\n\n probability_array = None\n prediction_vector = None\n \n prob_col_index = list(scoring_response.get('fields')).index(probability_column_name)\n predict_col_index = list(scoring_response.get('fields')).index(prediction_column_name)\n\n if prob_col_index < 0 or predict_col_index < 0:\n raise Exception(\"Missing prediction/probability column in the scoring response\")\n\n import numpy as np\n probability_array = np.array([value[prob_col_index] for value in scoring_response.get('values')])\n prediction_vector = np.array([value[predict_col_index] for value in scoring_response.get('values')])\n\n return probability_array, prediction_vector",
"_____no_output_____"
]
],
[
[
"### Define the drift detection input",
"_____no_output_____"
]
],
[
[
"drift_detection_input = {\n \"feature_columns\": feature_columns,\n \"categorical_columns\": cat_features,\n \"label_column\": label_column,\n \"problem_type\": model_type\n}\nprint(drift_detection_input)",
"_____no_output_____"
]
],
[
[
"### Generate drift detection model",
"_____no_output_____"
]
],
[
[
"!rm drift_detection_model.tar.gz",
"_____no_output_____"
],
[
"from ibm_wos_utils.drift.drift_trainer import DriftTrainer\ndrift_trainer = DriftTrainer(df,drift_detection_input)\nif model_type != \"regression\":\n #Note: batch_size can be customized by user as per the training data size\n drift_trainer.generate_drift_detection_model(score,batch_size=df.shape[0])\n\n#Note: Two column constraints are not computed beyond two_column_learner_limit(default set to 200)\n#User can adjust the value depending on the requirement\ndrift_trainer.learn_constraints(two_column_learner_limit=200)\ndrift_trainer.create_archive()",
"_____no_output_____"
],
[
"!ls -al",
"_____no_output_____"
],
[
"filename = 'drift_detection_model.tar.gz'",
"_____no_output_____"
]
],
[
[
"### Upload the drift detection model to OpenScale subscription",
"_____no_output_____"
]
],
[
[
"wos_client.monitor_instances.upload_drift_model(\n model_path=filename,\n archive_name=filename,\n data_mart_id=data_mart_id,\n subscription_id=subscription_id,\n enable_data_drift=True,\n enable_model_drift=True\n )",
"_____no_output_____"
]
],
[
[
"### Delete the existing drift monitor instance for the subscription",
"_____no_output_____"
]
],
[
[
"monitor_instances = wos_client.monitor_instances.list().result.monitor_instances\nfor monitor_instance in monitor_instances:\n monitor_def_id=monitor_instance.entity.monitor_definition_id\n if monitor_def_id == \"drift\" and monitor_instance.entity.target.target_id == subscription_id:\n wos_client.monitor_instances.delete(monitor_instance.metadata.id)\n print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)",
"_____no_output_____"
],
[
"target = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n\n)\nparameters = {\n \"min_samples\": 100,\n \"drift_threshold\": 0.1,\n \"train_drift_model\": False,\n \"enable_model_drift\": True,\n \"enable_data_drift\": True\n}\n\ndrift_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,\n target=target,\n parameters=parameters\n).result\n\ndrift_monitor_instance_id = drift_monitor_details.metadata.id\ndrift_monitor_instance_id",
"_____no_output_____"
]
],
[
[
"### Drift run",
"_____no_output_____"
]
],
[
[
"drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)",
"_____no_output_____"
],
[
"time.sleep(5)\nwos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)",
"_____no_output_____"
]
],
[
[
"## Summary\n\nAs part of this notebook, we have performed the following:\n* Create a subscription to an custom ML end point\n* Scored the custom ML provider with 100 records\n* With the scored payload and also the scored response, we called the DataSets SDK method to store the payload logging records into the data mart. While doing so, we have set the scoring_id attribute.\n* Configured the fairness monitor and executed it and viewed the fairness metrics output.\n* Configured explainabilty monitor\n* Randomly selected 5 transactions for which we want to get the prediction explanation.\n* Submitted explainability tasks for the selected scoring ids, and waited for their completion.\n* In the end, we composed a weight map of feature and its weight across transactions. And plotted the same.\n* For example:\n```\n{'ForeignWorker': [33.29, 5.23],\n 'OthersOnLoan': [15.96, 19.97, 12.76],\n 'OwnsProperty': [15.43, 3.92, 4.44, 10.36],\n 'Dependents': [9.06],\n 'InstallmentPercent': [9.05],\n 'CurrentResidenceDuration': [8.74, 13.15, 12.1, 10.83],\n 'Sex': [2.96, 12.76],\n 'InstallmentPlans': [2.4, 5.67, 6.57],\n 'Age': [2.28, 8.6, 11.26],\n 'Job': [0.84],\n 'LoanDuration': [15.02, 10.87, 18.91, 12.72],\n 'EmploymentDuration': [14.02, 14.05, 12.1],\n 'LoanAmount': [9.28, 12.42, 7.85],\n 'Housing': [4.35],\n 'CreditHistory': [6.5]}\n ```\n\nThe understanding of the above map is like this:\n* LoanDuration, CurrentResidenceDuration, OwnsProperty are the most contributing features across transactions for their respective prediction. Their weights for the respective prediction can also be seen.\n* And the low contributing features are CreditHistory, Housing, Job, InstallmentPercent and Dependents, with their respective weights can also be seen as printed.\n\n* We configured quality monitor and uploaded feedback data, and thereby ran the quality monitor\n* For drift monitoring purposes, we created the drift detection model and uploaded to the OpenScale subscription.\n* Executed the drift monitor.\n\nThank You! for working on tutorial notebook.",
"_____no_output_____"
],
[
"Author: Ravi Chamarthy ([email protected])",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
cb857aa8962535837525b0ed51f43072a53f4161
| 13,733 |
ipynb
|
Jupyter Notebook
|
lda_sorghum_svm_rbf.ipynb
|
adusumalliakhil/Crop-varietal-identification-with-SCIO
|
6ddf89b703047ee041efc506963e948bad6f7cb7
|
[
"Apache-2.0"
] | null | null | null |
lda_sorghum_svm_rbf.ipynb
|
adusumalliakhil/Crop-varietal-identification-with-SCIO
|
6ddf89b703047ee041efc506963e948bad6f7cb7
|
[
"Apache-2.0"
] | null | null | null |
lda_sorghum_svm_rbf.ipynb
|
adusumalliakhil/Crop-varietal-identification-with-SCIO
|
6ddf89b703047ee041efc506963e948bad6f7cb7
|
[
"Apache-2.0"
] | null | null | null | 29.91939 | 365 | 0.43552 |
[
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_csv(\"D:\\\\newproject\\\\New folder\\\\Sorghum.data.csv\")",
"_____no_output_____"
],
[
"#Na Handling\ndf.isnull().values.any()",
"_____no_output_____"
],
[
"df=df.dropna()",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_predict, cross_val_score\nfrom sklearn.metrics import accuracy_score, classification_report, confusion_matrix",
"_____no_output_____"
],
[
"X = df.drop(['Predictor'], axis=1)\nX_col = X.columns",
"_____no_output_____"
],
[
"y = df['Predictor']",
"_____no_output_____"
],
[
"\n#Savitzky-Golay filter with second degree derivative.\nfrom scipy.signal import savgol_filter \n\nsg=savgol_filter(X,window_length=11, polyorder=3, deriv=2, delta=1.0)",
"C:\\Users\\akhil\\Anaconda3\\lib\\site-packages\\scipy\\signal\\_arraytools.py:45: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n b = a[a_slice]\n"
],
[
"sg_x=pd.DataFrame(sg, columns=X_col)\nsg_x.head()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nscaled_data = scaler.fit_transform(sg)",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(scaled_data, y,\n train_size=0.8,\n random_state=103,stratify = y)",
"C:\\Users\\akhil\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_split.py:2179: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.\n FutureWarning)\n"
],
[
"from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\n\nlda = LDA(n_components=8) \nX_train = lda.fit_transform(X_train, y_train) \nX_test = lda.transform(X_test)",
"C:\\Users\\akhil\\Anaconda3\\lib\\site-packages\\sklearn\\discriminant_analysis.py:388: UserWarning: Variables are collinear.\n warnings.warn(\"Variables are collinear.\")\n"
],
[
"from sklearn import svm\nclf = svm.SVC(kernel=\"rbf\")\n\nclf.fit(X_train, y_train) \ny_pred = clf.predict(X_test) ",
"C:\\Users\\akhil\\Anaconda3\\lib\\site-packages\\sklearn\\svm\\base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.\n \"avoid this warning.\", FutureWarning)\n"
],
[
"from sklearn.metrics import confusion_matrix \nfrom sklearn.metrics import accuracy_score\n\ncm = confusion_matrix(y_test, y_pred) \nprint(cm) \nprint('Accuracy' + str(accuracy_score(y_test, y_pred))) ",
"[[ 9 1 0 0 0 0 0 0 0 0]\n [ 1 9 0 0 0 0 0 0 0 0]\n [ 0 0 10 0 0 0 0 0 0 0]\n [ 0 0 0 8 0 0 0 0 0 2]\n [ 0 0 0 0 10 0 0 0 0 0]\n [ 0 0 0 0 0 10 0 0 0 0]\n [ 0 0 0 0 0 0 8 0 1 1]\n [ 0 0 0 0 0 0 0 10 0 0]\n [ 0 0 0 0 0 0 0 0 10 0]\n [ 0 0 0 2 0 0 0 0 0 8]]\nAccuracy0.92\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb8580b7a80e1f20cfa42d2e0235777b285e73bd
| 13,029 |
ipynb
|
Jupyter Notebook
|
examples/notebooks/ets.ipynb
|
sergiolevin/statsmodels
|
13a901edfc0d4ab05e09438749df2487af04d77e
|
[
"BSD-3-Clause"
] | 1 |
2021-08-05T13:30:46.000Z
|
2021-08-05T13:30:46.000Z
|
examples/notebooks/ets.ipynb
|
barryquinn1/statsmodels
|
305258f5245e76409f6deab24d217ffbbc352ba0
|
[
"BSD-3-Clause"
] | null | null | null |
examples/notebooks/ets.ipynb
|
barryquinn1/statsmodels
|
305258f5245e76409f6deab24d217ffbbc352ba0
|
[
"BSD-3-Clause"
] | null | null | null | 35.793956 | 351 | 0.604114 |
[
[
[
"# ETS models\n\nThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).\n\nThis notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.\n\n`statmodels` implements all combinations of:\n- additive and multiplicative error model\n- additive and multiplicative trend, possibly dampened\n- additive and multiplicative seasonality\n\nHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.\n\n[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7",
"_____no_output_____"
]
],
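[
[
"To make the list of combinations above concrete, here is a small added sketch (not part of the original notebook) of how the error, trend and seasonality choices map onto `ETSModel` arguments; the toy series and every parameter value below are made up purely for illustration.\n```python\nimport numpy as np\nimport pandas as pd\nfrom statsmodels.tsa.exponential_smoothing.ets import ETSModel\n\n# made-up positive monthly series, only so the model object can be constructed\ntoy = pd.Series(10 + np.random.rand(48), index=pd.date_range('2000-01-01', periods=48, freq='MS'))\n\n# an ETS(M, Ad, M) specification: multiplicative error, additive damped trend, multiplicative seasonality\nmodel = ETSModel(toy, error='mul', trend='add', damped_trend=True, seasonal='mul', seasonal_periods=12)\n# model.fit() would then estimate it, as done for the real examples below\n```",
"_____no_output_____"
]
],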
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n%matplotlib inline\nfrom statsmodels.tsa.exponential_smoothing.ets import ETSModel",
"_____no_output_____"
],
[
"plt.rcParams['figure.figsize'] = (12, 8)",
"_____no_output_____"
]
],
[
[
"## Simple exponential smoothing\n\nThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\n\n\\begin{align}\ny_{t} &= y_{t-1} + e_t\\\\\nl_{t} &= l_{t-1} + \\alpha e_t\\\\\n\\end{align}\n\nThis state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\n\n\\begin{align}\n\\hat{y}_{t|t-1} &= l_{t-1}\\\\\nl_{t} &= \\alpha y_{t-1} + (1 - \\alpha) l_{t-1}\n\\end{align}\n\nHere, $\\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.",
"_____no_output_____"
]
],
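[
[
"As a small added illustration, the state space recursions above can be run by hand on a toy series; the observations and the smoothing parameter alpha below are made up.\n```python\nimport numpy as np\n\ny = np.array([10.0, 12.0, 11.0, 13.0])   # made-up observations\nalpha = 0.5                              # made-up smoothing parameter\nlevel = y[0]                             # initialize the level at the first observation\nfor t in range(1, len(y)):\n    forecast = level                     # the forecast for y_t is the previous level\n    error = y[t] - forecast              # e_t\n    level = level + alpha * error        # level update\n    print(t, round(forecast, 2), round(level, 2))\n```",
"_____no_output_____"
]
],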
[
[
"oildata = [\n 111.0091, 130.8284, 141.2871, 154.2278,\n 162.7409, 192.1665, 240.7997, 304.2174,\n 384.0046, 429.6622, 359.3169, 437.2519,\n 468.4008, 424.4353, 487.9794, 509.8284,\n 506.3473, 340.1842, 240.2589, 219.0328,\n 172.0747, 252.5901, 221.0711, 276.5188,\n 271.1480, 342.6186, 428.3558, 442.3946,\n 432.7851, 437.2497, 437.2092, 445.3641,\n 453.1950, 454.4096, 422.3789, 456.0371,\n 440.3866, 425.1944, 486.2052, 500.4291,\n 521.2759, 508.9476, 488.8889, 509.8706,\n 456.7229, 473.8166, 525.9509, 549.8338,\n 542.3405\n]\noil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))\noil.plot()\nplt.ylabel(\"Annual oil production in Saudi Arabia (Mt)\");",
"_____no_output_____"
]
],
[
[
"The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).\nBelow you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.",
"_____no_output_____"
]
],
[
[
"model = ETSModel(oil, error='add', trend='add', damped_trend=True)\nfit = model.fit(maxiter=10000)\noil.plot(label='data')\nfit.fittedvalues.plot(label='statsmodels fit')\nplt.ylabel(\"Annual oil production in Saudi Arabia (Mt)\");\n\n# obtained from R\nparams_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]\nyhat = model.smooth(params_R).fittedvalues\nyhat.plot(label='R fit', linestyle='--')\n\nplt.legend();",
"_____no_output_____"
]
],
[
[
"By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.",
"_____no_output_____"
]
],
[
[
"model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,\n initialization_method='heuristic')\nfit_heuristic = model_heuristic.fit()\noil.plot(label='data')\nfit.fittedvalues.plot(label='estimated')\nfit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')\nplt.ylabel(\"Annual oil production in Saudi Arabia (Mt)\");\n\n# obtained from R\nparams = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]\nyhat = model.smooth(params).fittedvalues\nyhat.plot(label='with R params', linestyle=':')\n\nplt.legend();",
"_____no_output_____"
]
],
[
[
"The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is a bit lower than the one using a heuristic for the initial states.\nAdditionally, we see that $\\beta$ (`smoothing_trend`) is at the boundary of the default parameter bounds, and therefore it's not possible to estimate confidence intervals for $\\beta$.",
"_____no_output_____"
]
],
[
[
"fit.summary()",
"_____no_output_____"
],
[
"fit_heuristic.summary()",
"_____no_output_____"
]
],
[
[
"## Holt-Winters' seasonal method\n\nThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\n\n\\begin{align}\ny_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\\\\nl_{t} &= l_{t-1} + b_{t-1} + \\alpha e_t\\\\\nb_{t} &= b_{t-1} + \\beta e_t\\\\\ns_{t} &= s_{t-m} + \\gamma e_t\n\\end{align}\n\n",
"_____no_output_____"
]
],
[
[
"austourists_data = [\n 30.05251300, 19.14849600, 25.31769200, 27.59143700,\n 32.07645600, 23.48796100, 28.47594000, 35.12375300, \n 36.83848500, 25.00701700, 30.72223000, 28.69375900, \n 36.64098600, 23.82460900, 29.31168300, 31.77030900,\n 35.17787700, 19.77524400, 29.60175000, 34.53884200,\n 41.27359900, 26.65586200, 28.27985900, 35.19115300,\n 42.20566386, 24.64917133, 32.66733514, 37.25735401,\n 45.24246027, 29.35048127, 36.34420728, 41.78208136,\n 49.27659843, 31.27540139, 37.85062549, 38.83704413,\n 51.23690034, 31.83855162, 41.32342126, 42.79900337,\n 55.70835836, 33.40714492, 42.31663797, 45.15712257,\n 59.57607996, 34.83733016, 44.84168072, 46.97124960,\n 60.01903094, 38.37117851, 46.97586413, 50.73379646,\n 61.64687319, 39.29956937, 52.67120908, 54.33231689,\n 66.83435838, 40.87118847, 51.82853579, 57.49190993,\n 65.25146985, 43.06120822, 54.76075713, 59.83447494,\n 73.25702747, 47.69662373, 61.09776802, 66.05576122,\n]\nindex = pd.date_range(\"1999-03-01\", \"2015-12-01\", freq=\"3MS\")\naustourists = pd.Series(austourists_data, index=index)\naustourists.plot()\nplt.ylabel('Australian Tourists');",
"_____no_output_____"
],
[
"# fit in statsmodels\nmodel = ETSModel(austourists, error=\"add\", trend=\"add\", seasonal=\"add\",\n damped_trend=True, seasonal_periods=4)\nfit = model.fit()\n\n# fit with R params\nparams_R = [\n 0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357, \n 0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637\n]\nfit_R = model.smooth(params_R)\n\naustourists.plot(label='data')\nplt.ylabel('Australian Tourists')\n\nfit.fittedvalues.plot(label='statsmodels fit')\nfit_R.fittedvalues.plot(label='R fit', linestyle='--')\nplt.legend();",
"_____no_output_____"
],
[
"fit.summary()",
"_____no_output_____"
]
],
[
[
"## Predictions\n\nThe ETS model can also be used for predicting. There are several different methods available:\n- `forecast`: makes out of sample predictions\n- `predict`: in sample and out of sample predictions\n- `simulate`: runs simulations of the statespace model\n- `get_prediction`: in sample and out of sample predictions, as well as prediction intervals\n\nWe can use them on our previously fitted model to predict from 2014 to 2020.",
"_____no_output_____"
]
],
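[
[
"For completeness, a brief added sketch (not in the original notebook) of the first two methods listed above, reusing the fitted Holt-Winters results object `fit`; the step count and dates are arbitrary examples, and the argument names are assumed to follow the usual statsmodels conventions.\n```python\n# out-of-sample point forecasts for the next 8 quarters\npoint_forecasts = fit.forecast(steps=8)\n\n# in-sample and out-of-sample predictions over an explicit date range\npreds = fit.predict(start='2015-03-01', end='2017-12-01')\n```",
"_____no_output_____"
]
],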
[
[
"pred = fit.get_prediction(start='2014', end='2020')",
"_____no_output_____"
],
[
"df = pred.summary_frame(alpha=0.05)\ndf",
"_____no_output_____"
]
],
[
[
"In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.\n\nWe can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.",
"_____no_output_____"
]
],
[
[
"simulated = fit.simulate(anchor=\"end\", nsimulations=17, repetitions=100)",
"_____no_output_____"
],
[
"for i in range(simulated.shape[1]):\n simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)\ndf[\"mean\"].plot(label='mean prediction')\ndf[\"pi_lower\"].plot(linestyle='--', color='tab:blue', label='95% interval')\ndf[\"pi_upper\"].plot(linestyle='--', color='tab:blue', label='_')\npred.endog.plot(label='data')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"In this case, we chose \"end\" as simulation anchor, which means that the first simulated value will be the first out of sample value. It is also possible to choose other anchor inside the sample.",
"_____no_output_____"
]
]
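,
[
[
"As a quick added sketch of that last point, `anchor` can also be set to a point inside the sample, for example a date; the date below is an arbitrary in-sample example.\n```python\nsimulated_in_sample = fit.simulate(anchor='2014-03-01', nsimulations=17, repetitions=100)\nsimulated_in_sample.plot(color='gray', alpha=0.1, legend=False)\n```",
"_____no_output_____"
]
]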
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
cb8580bd009709726cabb0379c5f4b698d134f31
| 177,947 |
ipynb
|
Jupyter Notebook
|
nbs/examples/byol_iwang_256.ipynb
|
adbmd/self_supervised
|
d87ebd9b4961c7da0efd6073c42782bbc61aaa2e
|
[
"Apache-2.0"
] | 1 |
2020-09-22T14:29:07.000Z
|
2020-09-22T14:29:07.000Z
|
nbs/examples/byol_iwang_256.ipynb
|
adbmd/self_supervised
|
d87ebd9b4961c7da0efd6073c42782bbc61aaa2e
|
[
"Apache-2.0"
] | null | null | null |
nbs/examples/byol_iwang_256.ipynb
|
adbmd/self_supervised
|
d87ebd9b4961c7da0efd6073c42782bbc61aaa2e
|
[
"Apache-2.0"
] | null | null | null | 31.685719 | 13,464 | 0.361681 |
[
[
[
"## **Bootstrap Your Own Latent A New Approach to Self-Supervised Learning:** https://arxiv.org/pdf/2006.07733.pdf",
"_____no_output_____"
]
],
[
[
"# !pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html",
"_____no_output_____"
],
[
"# !pip install -qqU fastai fastcore",
"_____no_output_____"
],
[
"# !pip install nbdev",
"_____no_output_____"
],
[
"import fastai, fastcore, torch",
"_____no_output_____"
],
[
"fastai.__version__ , fastcore.__version__, torch.__version__",
"_____no_output_____"
],
[
"from fastai.vision.all import *",
"_____no_output_____"
]
],
[
[
"### Sizes\n\nResize -> RandomCrop\n\n320 -> 256 | 224 -> 192 | 160 -> 128",
"_____no_output_____"
]
],
[
[
"resize = 320\nsize = 256",
"_____no_output_____"
]
],
[
[
"## 1. Implementation Details (Section 3.2 from the paper)",
"_____no_output_____"
],
[
"### 1.1 Image Augmentations\n\nSame as SimCLR with optional grayscale",
"_____no_output_____"
]
],
[
[
"import kornia\ndef get_aug_pipe(size, stats=imagenet_stats, s=.6):\n \"SimCLR augmentations\"\n rrc = kornia.augmentation.RandomResizedCrop((size, size), scale=(0.2, 1.0), ratio=(3/4, 4/3))\n rhf = kornia.augmentation.RandomHorizontalFlip()\n rcj = kornia.augmentation.ColorJitter(0.8*s, 0.8*s, 0.8*s, 0.2*s)\n rgs = kornia.augmentation.RandomGrayscale(p=0.2)\n \n tfms = [rrc, rhf, rcj, rgs, Normalize.from_stats(*stats)]\n pipe = Pipeline(tfms)\n pipe.split_idx = 0\n return pipe",
"_____no_output_____"
]
],
[
[
"### 1.2 Architecture",
"_____no_output_____"
]
],
[
[
"def create_encoder(arch, n_in=3, pretrained=True, cut=None, concat_pool=True):\n \"Create encoder from a given arch backbone\"\n encoder = create_body(arch, n_in, pretrained, cut)\n pool = AdaptiveConcatPool2d() if concat_pool else nn.AdaptiveAvgPool2d(1)\n return nn.Sequential(*encoder, pool, Flatten())",
"_____no_output_____"
],
[
"class MLP(Module):\n \"MLP module as described in paper\"\n def __init__(self, dim, projection_size=256, hidden_size=2048):\n self.net = nn.Sequential(\n nn.Linear(dim, hidden_size),\n nn.BatchNorm1d(hidden_size),\n nn.ReLU(inplace=True),\n nn.Linear(hidden_size, projection_size)\n )\n\n def forward(self, x):\n return self.net(x)",
"_____no_output_____"
],
[
"class BYOLModel(Module):\n \"Compute predictions of v1 and v2\" \n def __init__(self,encoder,projector,predictor):\n self.encoder,self.projector,self.predictor = encoder,projector,predictor \n\n def forward(self,v1,v2):\n q1 = self.predictor(self.projector(self.encoder(v1)))\n q2 = self.predictor(self.projector(self.encoder(v2)))\n return (q1,q2)",
"_____no_output_____"
],
[
"def create_byol_model(arch=resnet50, hidden_size=4096, pretrained=True, projection_size=256, concat_pool=False):\n encoder = create_encoder(arch, pretrained=pretrained, concat_pool=concat_pool)\n with torch.no_grad(): \n x = torch.randn((2,3,128,128))\n representation = encoder(x)\n\n projector = MLP(representation.size(1), projection_size, hidden_size=hidden_size) \n predictor = MLP(projection_size, projection_size, hidden_size=hidden_size)\n apply_init(projector)\n apply_init(predictor)\n return BYOLModel(encoder, projector, predictor)",
"_____no_output_____"
]
],
[
[
"### 1.3 BYOLCallback",
"_____no_output_____"
]
],
[
[
"def _mse_loss(x, y):\n x = F.normalize(x, dim=-1, p=2)\n y = F.normalize(y, dim=-1, p=2)\n return 2 - 2 * (x * y).sum(dim=-1)\n\ndef symmetric_mse_loss(pred, *yb):\n (q1,q2),z1,z2 = pred,*yb\n return (_mse_loss(q1,z2) + _mse_loss(q2,z1)).mean()",
"_____no_output_____"
],
[
"x = torch.randn((64,256))\ny = torch.randn((64,256))\ntest_close(symmetric_mse_loss((x,y),y,x), 0) # perfect\ntest_close(symmetric_mse_loss((x,y),x,y), 4, 1e-1) # random",
"_____no_output_____"
]
],
[
[
"Useful Discussions and Supportive Material:\n\n- https://www.reddit.com/r/MachineLearning/comments/hju274/d_byol_bootstrap_your_own_latent_cheating/fwohtky/\n- https://untitled-ai.github.io/understanding-self-supervised-contrastive-learning.html",
"_____no_output_____"
]
],
[
[
"import copy\n\nclass BYOLCallback(Callback):\n \"Implementation of https://arxiv.org/pdf/2006.07733.pdf\"\n def __init__(self, T=0.99, debug=True, size=224, **aug_kwargs): \n self.T, self.debug = T, debug\n self.aug1 = get_aug_pipe(size, **aug_kwargs)\n self.aug2 = get_aug_pipe(size, **aug_kwargs)\n\n\n def before_fit(self):\n \"Create target model\"\n self.target_model = copy.deepcopy(self.learn.model).to(self.dls.device) \n self.T_sched = SchedCos(self.T, 1) # used in paper\n # self.T_sched = SchedNo(self.T, 1) # used in open source implementation\n \n \n def before_batch(self):\n \"Generate 2 views of the same image and calculate target projections for these views\"\n if self.debug: print(f\"self.x[0]: {self.x[0]}\")\n \n v1,v2 = self.aug1(self.x), self.aug2(self.x.clone())\n self.learn.xb = (v1,v2)\n \n if self.debug:\n print(f\"v1[0]: {v1[0]}\\nv2[0]: {v2[0]}\")\n self.show_one()\n assert not torch.equal(*self.learn.xb)\n\n with torch.no_grad():\n z1 = self.target_model.projector(self.target_model.encoder(v1))\n z2 = self.target_model.projector(self.target_model.encoder(v2))\n self.learn.yb = (z1,z2)\n\n\n def after_step(self):\n \"Update target model and T\"\n self.T = self.T_sched(self.pct_train)\n with torch.no_grad():\n for param_k, param_q in zip(self.target_model.parameters(), self.model.parameters()):\n param_k.data = param_k.data * self.T + param_q.data * (1. - self.T)\n \n\n def show_one(self):\n b1 = self.aug1.normalize.decode(to_detach(self.learn.xb[0]))\n b2 = self.aug1.normalize.decode(to_detach(self.learn.xb[1]))\n i = np.random.choice(len(b1))\n show_images([b1[i],b2[i]], nrows=1, ncols=2)\n\n def after_train(self): \n if self.debug: self.show_one()\n\n def after_validate(self): \n if self.debug: self.show_one()",
"_____no_output_____"
]
],
[
[
"## 2. Pretext Training",
"_____no_output_____"
]
],
[
[
"sqrmom=0.99\nmom=0.95\nbeta=0.\neps=1e-4\nopt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)",
"_____no_output_____"
],
[
"bs=128",
"_____no_output_____"
],
[
"def get_dls(size, bs, workers=None):\n path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG\n source = untar_data(path)\n \n files = get_image_files(source)\n tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=0.9)], \n [parent_label, Categorize()]]\n \n dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))\n \n batch_tfms = [IntToFloatTensor]\n dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)\n return dls",
"_____no_output_____"
],
[
"dls = get_dls(resize, bs)\nmodel = create_byol_model(arch=xresnet34, pretrained=False)\nlearn = Learner(dls, model, symmetric_mse_loss, opt_func=opt_func,\n cbs=[BYOLCallback(T=0.99, size=size, debug=False), TerminateOnNaNCallback()])\nlearn.to_fp16();",
"_____no_output_____"
],
[
"learn.lr_find()",
"_____no_output_____"
],
[
"lr=1e-3\nwd=1e-2\nepochs=100",
"_____no_output_____"
],
[
"learn.unfreeze()\nlearn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)",
"_____no_output_____"
],
[
"save_name = f'byol_iwang_sz{size}_epc{epochs}'",
"_____no_output_____"
],
[
"learn.save(save_name)\ntorch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')",
"_____no_output_____"
],
[
"learn.load(save_name);",
"_____no_output_____"
],
[
"lr=1e-4\nwd=1e-2\nepochs=100",
"_____no_output_____"
],
[
"learn.unfreeze()\nlearn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)",
"_____no_output_____"
],
[
"save_name = f'byol_iwang_sz{size}_epc200'",
"_____no_output_____"
],
[
"learn.save(save_name)\ntorch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')",
"_____no_output_____"
],
[
"lr=1e-4\nwd=1e-2\nepochs=30",
"_____no_output_____"
],
[
"learn.unfreeze()\nlearn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)",
"_____no_output_____"
],
[
"save_name = f'byol_iwang_sz{size}_epc230'",
"_____no_output_____"
],
[
"learn.save(save_name)\ntorch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')",
"_____no_output_____"
],
[
"lr=5e-5\nwd=1e-2\nepochs=30",
"_____no_output_____"
],
[
"learn.unfreeze()\nlearn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)",
"_____no_output_____"
],
[
"save_name = f'byol_iwang_sz{size}_epc260'",
"_____no_output_____"
],
[
"learn.save(save_name)\ntorch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')",
"_____no_output_____"
],
[
"learn.recorder.plot_loss()",
"_____no_output_____"
],
[
"save_name",
"_____no_output_____"
]
],
[
[
"## 3. Downstream Task - Image Classification",
"_____no_output_____"
]
],
[
[
"def get_dls(size, bs, workers=None):\n path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG\n source = untar_data(path)\n files = get_image_files(source, folders=['train', 'val'])\n splits = GrandparentSplitter(valid_name='val')(files)\n \n item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]\n tfms = [[PILImage.create, ToTensor, *item_aug], \n [parent_label, Categorize()]]\n \n dsets = Datasets(files, tfms=tfms, splits=splits)\n \n batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]\n dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)\n return dls",
"_____no_output_____"
],
[
"def do_train(epochs=5, runs=5, lr=2e-2, size=size, bs=bs, save_name=None):\n dls = get_dls(size, bs)\n for run in range(runs):\n print(f'Run: {run}')\n learn = cnn_learner(dls, xresnet34, opt_func=opt_func, normalize=False,\n metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),\n pretrained=False)\n# learn.to_fp16()\n \n if save_name is not None:\n state_dict = torch.load(learn.path/learn.model_dir/f'{save_name}_encoder.pth')\n learn.model[0].load_state_dict(state_dict)\n print(\"Model loaded...\")\n \n learn.unfreeze()\n learn.fit_flat_cos(epochs, lr, wd=wd)",
"_____no_output_____"
]
],
[
[
"### ImageWang Leaderboard",
"_____no_output_____"
],
[
"**sz-256**\n\n\n\n**Contrastive Learning**\n\n- 5 epochs: 67.70%\n- 20 epochs: 70.03%\n- 80 epochs: 70.71%\n- 200 epochs: 71.78%\n\n\n**BYOL**\n\n- 5 epochs: 64.74%\n- 20 epochs: **71.01%**\n- 80 epochs: **72.58%**\n- 200 epochs: **72.13%**\n",
"_____no_output_____"
],
[
"### 5 epochs",
"_____no_output_____"
]
],
[
[
"# we are using old pretrained model with size 192 for transfer learning\n# link: https://github.com/KeremTurgutlu/self_supervised/blob/252269827da41b41091cf0db533b65c0d1312f85/nbs/byol_iwang_192.ipynb\nsave_name = 'byol_iwang_sz192_epc230'",
"_____no_output_____"
],
[
"lr = 1e-2\nwd=1e-2\nbs=128\nepochs = 5\nruns = 5",
"_____no_output_____"
],
[
"do_train(epochs, runs, lr=lr, bs=bs, save_name=save_name)",
"Run: 0\nModel loaded...\n"
],
[
"np.mean([0.657165,0.637312,0.631967,0.646729,0.664291])",
"_____no_output_____"
]
],
[
[
"### 20 epochs",
"_____no_output_____"
]
],
[
[
"lr=2e-2\nepochs = 20\nruns = 3",
"_____no_output_____"
],
[
"do_train(epochs, runs, lr=lr, save_name=save_name)",
"Run: 0\nModel loaded...\n"
],
[
"np.mean([0.711631, 0.705269, 0.713413])",
"_____no_output_____"
]
],
[
[
"### 80 epochs",
"_____no_output_____"
]
],
[
[
"epochs = 80\nruns = 1",
"_____no_output_____"
],
[
"do_train(epochs, runs, save_name=save_name)",
"Run: 0\nModel loaded...\n"
]
],
[
[
"### 200 epochs",
"_____no_output_____"
]
],
[
[
"epochs = 200\nruns = 1",
"_____no_output_____"
],
[
"do_train(epochs, runs, save_name=save_name)",
"Run: 0\nModel loaded...\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb8583a0a47197ae79dbae8c07827bf7355dc849
| 364,195 |
ipynb
|
Jupyter Notebook
|
CTC Multi.ipynb
|
nayyarv/ctcinvestigations
|
11bedd6fcdec440dc5e2249e36e58d6e92eb2f11
|
[
"MIT"
] | null | null | null |
CTC Multi.ipynb
|
nayyarv/ctcinvestigations
|
11bedd6fcdec440dc5e2249e36e58d6e92eb2f11
|
[
"MIT"
] | null | null | null |
CTC Multi.ipynb
|
nayyarv/ctcinvestigations
|
11bedd6fcdec440dc5e2249e36e58d6e92eb2f11
|
[
"MIT"
] | null | null | null | 1,043.538682 | 95,428 | 0.952265 |
[
[
[
"import torch\nfrom torch import nn\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nsns.set()",
"_____no_output_____"
],
[
"hval = {}\n \ndef dhook(name):\n def inner_hook(grad):\n global hval\n hval[name] = grad\n return grad\n return inner_hook\n\ndef to_plot(tensor):\n return tensor.squeeze(1).T.detach().numpy()\n\ndef posteriorgram(data, xlab, ylab, title, **kwargs):\n sns.heatmap(data, cmap=\"YlGnBu\", cbar=True, **kwargs)\n plt.xlabel(xlab)\n plt.ylabel(ylab)\n plt.title(title)\n plt.gca().invert_yaxis()\n\ndef double_posterior(dat, gradient, **kwargs):\n plt.figure(figsize=(18,14))\n plt.subplot(211)\n posteriorgram(dat, \"Time\", \"Phones\", \"Activations\", **kwargs)\n plt.subplot(212)\n posteriorgram(gradient, \"Time\", \"Phones\", \"Gradient\", **kwargs)",
"_____no_output_____"
]
],
[
[
"## CTC and Baum Welch\n\nThe Baum Welch procedure is the foundation of CTC, however CTC is more constrained. The two main constrains\n\n1. The blank label - CTC mandates a label as the blank label in which the network doesn't make any predictions. This allows the network to be selective with it's predictions.\n2. Constrained transition matrix - the CTC transition matrix is to either to stay in current phone, go blank or go to next voiced phone. (If prediction is blank, it stays blank or to next phone). Phones can be repeated, but you don't have branching paths, nor recursive transitions.\n\nThese are not well articulated (in fact Graves does not make any explicit links to Baum Welch) and are poorly justified. We will focus on part 2 since blank label investigations require real data for good intuition.\n\nWe refer to \"Multi-path CTC\" for the equations derived there. They have been verified to work within the pytorch framework of autograd. This is a huge result, as implementing CTC is non-trivial, and a version with custom transition matrices would be an incredible software engineering undertaking. ",
"_____no_output_____"
]
],
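[
[
"To make the constrained transition structure concrete, here is a small added sketch (not from the original write-up) of the standard CTC collapse rule that this structure enforces: repeated symbols are merged and blanks are then removed.\n```python\ndef ctc_collapse(path, blank=0):\n    # merge consecutive repeats, then drop blanks\n    out = []\n    prev = None\n    for symbol in path:\n        if symbol != prev and symbol != blank:\n            out.append(symbol)\n        prev = symbol\n    return out\n\n# e.g. [1, 1, 0, 1, 2, 2, 0] -> [1, 1, 2]: the blank separates a genuinely repeated phone\nprint(ctc_collapse([1, 1, 0, 1, 2, 2, 0]))\n```",
"_____no_output_____"
]
],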
[
[
"def multi_ctc(ctc, data, targets, inlen, target_len):\n \"\"\"\n For simplicity, we assume that the targets are equal length\n This is not necessary, but makes for slightly simpler code here\n \"\"\"\n loss = 0\n for target in targets:\n l = ctc(data, target, inlen, target_len)\n# print(f\"{l:.4f}:{np.exp(-l.item()):.4f} for {target}\")\n loss += torch.exp(-l)\n totloss = -torch.log(loss)\n# print(f\"combo: {totloss}, {loss}\")\n return totloss\n\n",
"_____no_output_____"
],
[
"def train_multi(data, targets, epochs=100):\n T, N, C = data.shape\n data = data.requires_grad_(True)\n # if we don't take sum, the CTCLoss will be\n # averaged across time and this means that our \n # equation won't be correct since each path sum is kind of normalised\n ctcloss = nn.CTCLoss(reduction=\"sum\")\n inlen = torch.IntTensor([T])\n target_len = torch.IntTensor([len(targets[0])])\n \n\n global hval\n hval = {}\n \n for epoch in range(epochs):\n data.register_hook(dhook(\"dgrad\"))\n ds = data.log_softmax(2)\n ds.register_hook(dhook(\"ds\"))\n loss = multi_ctc(ctcloss, ds, targets, inlen, target_len)\n loss.backward()\n # bootleg SGD\n data = data - .5 * hval[\"dgrad\"]\n \n endp = data.softmax(2).squeeze(1).T.detach().numpy()\n grad = hval[\"ds\"].squeeze(1).T.detach().numpy()\n return endp, grad\n ",
"_____no_output_____"
],
[
"data = torch.zeros(3, 1, 5)\ntargets = [torch.IntTensor([1,3, 4]) , torch.IntTensor([1,2,4])]\n\nend, g = train_multi(data, targets, 1)\ndouble_posterior(to_plot(data.softmax(2)), g, annot=True)",
"_____no_output_____"
],
[
"end, g = train_multi(data, targets, 100)\ndouble_posterior(end, g, annot=True)",
"_____no_output_____"
],
[
"data_b = torch.rand(3, 1, 5)\n# data_b[0,0,1] += 10\ndata_b[1,0,2] += 2\n# data_b[2,0,4] += 10\ntargets = [torch.IntTensor([1,2, 4]), torch.IntTensor([1,3,4])]\nplt.figure(figsize=(14,7))\nposteriorgram(to_plot(data_b.softmax(2)), \"time\", \"phone\", \"data\", annot=True)",
"_____no_output_____"
],
[
"end, g = train_multi(data_b, targets, 20)\ndouble_posterior(end, g, annot=True)",
"_____no_output_____"
],
[
"data_N = torch.rand(3, 1, 5)\ndata_N[1,0,:] = 0\ndata_N[1,0,2] = 1\ndata_N[1,0,3] = 1.01\n# data_b[2,0,4] += 10\ntargets = [torch.IntTensor([1,2, 4]), torch.IntTensor([1,3,4])]\n\nplt.figure(figsize=(18,7))\nposteriorgram(to_plot(data_N.softmax(2)), \"time\", \"phone\", \"data\", annot=True)\nend, g = train_multi(data_N, targets, 1000)\ndouble_posterior(end, g, annot=True)",
"_____no_output_____"
]
],
[
[
"The above plots show that the multi-path CTC training that would be a natural extension from baum welch doesn't actually work very well here. Even from small starting differences, the eventual result is that the result is that the slightly more likely path becomes the overwhelming favourite. In contrast, values that are identical, remain so the entire way",
"_____no_output_____"
]
],
[
[
"data_N = torch.rand(10, 1, 5)\n# data_N[1,0,2] = 1\n# data_N[1,0,3] = 1.1\n# data_b[2,0,4] += 10\ntargets = [torch.IntTensor([1,2, 4]), torch.IntTensor([1,3,4])]\n\nplt.figure(figsize=(18,7))\nposteriorgram(to_plot(data_N.softmax(2)), \"time\", \"phone\", \"data\", annot=True)\nend, g = train_multi(data_N, targets, 1000)\ndouble_posterior(end, g, annot=True)",
"_____no_output_____"
]
],
[
[
"What does this mean when we have training data? This can be illustrated easily with a few thought experiments.\n\nConsider the phones \"ah\" and \"a\". This is a common difference in pronounciation (tomato, alexa etc), so being able to learn the correct phone is useful. Let us assume we've done a curriculum learning, and our network is sort of good at picking these sounds. Now let us introduce branching paths.\n\n1. The output of the a vs ah phone is close to 1, it’s that confident about the different phones. In this case, what will the gradients be? In this situation, since the gradients are weighted by likelihood, the correct path is identified and the gradients should be mostly 0. Hence, our network should be stable, but this is expected since it’s already converged on the correct phones - the fact that we are training isn’t helpful at this point.\n2. What about a situation where the a vs ah sound is not so clear, say 0.7 for the correct phone and 0.3 for the wrong phone. In this case, each training example will make the correct pronunciation more correct. In the context of a batch, the gradients are now effectively weighted by the makeup of the pronunciations.\n \n - I.e. if the batch is evenly weighted 50-50 “ah” and ’a’ then the gradients effectively cancel the other out and again, we learn nothing between the phones in the batch. In this case, I believe we would eventually lose out ability to identify between ’a’ and ’ah’, always putting out 0.5, 0.5 for the phones (or would it just learn nothing in this case?)\n - If the batch is not evenly weighted. The pronunciation that is more common will dominate and our network will switch to predicting the more common pronunciation and the other phone would no longer be predicted at all.",
"_____no_output_____"
],
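[
"A tiny added numeric check of the \"gradients are weighted by likelihood\" point, using the same exp/log combination as `multi_ctc`; the 0.7/0.3 values are the hypothetical path probabilities from the thought experiment.\n```python\nimport torch\n\n# per-path negative log-likelihoods, roughly -log(0.7) and -log(0.3)\npath_nll = torch.tensor([0.357, 1.204], requires_grad=True)\nloss = -torch.log(torch.exp(-path_nll).sum())\nloss.backward()\nprint(path_nll.grad)  # approximately [0.7, 0.3]: each path is pulled in proportion to its likelihood\n```",
"_____no_output_____"
],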
[
"The above results were seen in experimental tests. Training on Australian accents and then adding in a large number of American accent examples (large misbalance) resulted in our network shifting to always predicting american accents. ",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb859901dc3aa44bb7fcf2f150048412137269cd
| 129,595 |
ipynb
|
Jupyter Notebook
|
fitbit/fitbit_data_analysis.ipynb
|
Zackhardtoname/qs_ledger
|
77d15079e90be40429b99be8abaa5a51423585d8
|
[
"MIT"
] | 755 |
2018-06-17T08:28:38.000Z
|
2022-03-27T05:37:02.000Z
|
fitbit/fitbit_data_analysis.ipynb
|
Zackhardtoname/qs_ledger
|
77d15079e90be40429b99be8abaa5a51423585d8
|
[
"MIT"
] | 17 |
2019-03-31T08:26:09.000Z
|
2022-03-31T05:33:22.000Z
|
fitbit/fitbit_data_analysis.ipynb
|
Zackhardtoname/qs_ledger
|
77d15079e90be40429b99be8abaa5a51423585d8
|
[
"MIT"
] | 195 |
2018-08-30T11:41:28.000Z
|
2022-03-31T11:35:20.000Z
| 121.799812 | 28,082 | 0.858312 |
[
[
[
"# Fitbit Data Analysis",
"_____no_output_____"
],
[
"## About Fitbit Data Analysis\n\nThis project provides some high-level data analysis of steps, sleep, heart rate and weight data from Fitbit tracking.\n\nPlease using fitbit_downloader file to first collect and export your data. ",
"_____no_output_____"
],
[
"-------",
"_____no_output_____"
],
[
"### Dependencies and Libraries",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt, matplotlib.font_manager as fm\nfrom datetime import datetime\nimport seaborn\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"-------",
"_____no_output_____"
],
[
"# Steps",
"_____no_output_____"
]
],
[
[
"daily_steps = pd.read_csv('data/daily_steps.csv', encoding='utf-8')",
"_____no_output_____"
],
[
"daily_steps['Date'] = pd.to_datetime(daily_steps['Date'])\ndaily_steps['dow'] = daily_steps['Date'].dt.weekday\ndaily_steps['day_of_week'] = daily_steps['Date'].dt.weekday_name",
"_____no_output_____"
],
[
"daily_steps.tail()",
"_____no_output_____"
],
[
"len(daily_steps)",
"_____no_output_____"
],
[
"# drop days with now steps\ndaily_steps = daily_steps[daily_steps.Steps > 0]\nlen(daily_steps)",
"_____no_output_____"
],
[
"daily_steps.Steps.max()",
"_____no_output_____"
],
[
"daily_steps.Steps.min()",
"_____no_output_____"
],
[
"daily_steps.Steps.max()",
"_____no_output_____"
]
],
[
[
"### Step Charts",
"_____no_output_____"
]
],
[
[
"daily_steps['RollingMeanSteps'] = daily_steps.Steps.rolling(window=10, center=True).mean()\ndaily_steps.plot(x='Date', y='RollingMeanSteps', title= 'Daily step counts rolling mean over 10 days')",
"_____no_output_____"
],
[
"daily_steps.groupby(['dow'])['Steps'].mean()",
"_____no_output_____"
],
[
"ax = daily_steps.groupby(['dow'])['Steps'].mean().plot(kind='bar', x='day_of_week')\nplt.suptitle('Average Steps by Day of the Week', fontsize=16)\nplt.xlabel('Day of Week: 0 = Monday, 6 = Sunday', fontsize=12, color='red')",
"_____no_output_____"
]
],
[
[
"# Sleep",
"_____no_output_____"
]
],
[
[
"daily_sleep = pd.read_csv('data/daily_sleep.csv', encoding='utf-8')\ndaily_inbed = pd.read_csv('data/daily_inbed.csv', encoding='utf-8')\nlen(daily_sleep)",
"_____no_output_____"
],
[
"sleep_data = pd.merge(daily_sleep, daily_inbed, how='inner', on='Date')",
"_____no_output_____"
],
[
"sleep_data['Date'] = pd.to_datetime(sleep_data['Date'])",
"_____no_output_____"
],
[
"sleep_data['dow'] = sleep_data['Date'].dt.weekday\nsleep_data['day_of_week'] = sleep_data['Date'].dt.weekday_name",
"_____no_output_____"
],
[
"sleep_data['day_of_week'] = sleep_data[\"day_of_week\"].astype('category')",
"_____no_output_____"
],
[
"sleep_data['InBedHours'] = round((sleep_data.InBed / 60), 2)",
"_____no_output_____"
],
[
"sleep_data = sleep_data[sleep_data.Sleep > 0]",
"_____no_output_____"
],
[
"len(daily_sleep)",
"_____no_output_____"
],
[
"sleep_data.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 35 entries, 1 to 35\nData columns (total 7 columns):\nDate 35 non-null datetime64[ns]\nSleep 35 non-null int64\nHours 35 non-null float64\nInBed 35 non-null int64\ndow 35 non-null int64\nday_of_week 35 non-null category\nInBedHours 35 non-null float64\ndtypes: category(1), datetime64[ns](1), float64(2), int64(3)\nmemory usage: 2.3 KB\n"
],
[
"sleep_data.tail()",
"_____no_output_____"
],
[
"sleep_data.describe()",
"_____no_output_____"
],
[
"sleep_data.plot(x='Date', y='Hours')",
"_____no_output_____"
],
[
"sleep_data['RollingMeanSleep'] = sleep_data.Sleep.rolling(window=10, center=True).mean()\nsleep_data.plot(x='Date', y='RollingMeanSleep', title= 'Daily sleep counts rolling mean over 10 days')",
"_____no_output_____"
],
[
"sleep_data.groupby(['dow'])['Hours'].mean()",
"_____no_output_____"
],
[
"ax = sleep_data.groupby(['dow'])['Hours'].mean().plot(kind='bar', x='day_of_week')\nplt.suptitle('Average Sleep by Night of the Week', fontsize=16)\nplt.xlabel('Day of Week: 0 = Monday, 6 = Sunday', fontsize=12, color='red')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb859c7a542a8fafe26ed643e0633382a17cacba
| 1,211 |
ipynb
|
Jupyter Notebook
|
03 - A little inspiration.ipynb
|
AccordionGuy/JavaScript-Course-2020
|
5dd847ee5da44dbeb1a1af962575d47a9899ddd4
|
[
"BSD-3-Clause"
] | null | null | null |
03 - A little inspiration.ipynb
|
AccordionGuy/JavaScript-Course-2020
|
5dd847ee5da44dbeb1a1af962575d47a9899ddd4
|
[
"BSD-3-Clause"
] | null | null | null |
03 - A little inspiration.ipynb
|
AccordionGuy/JavaScript-Course-2020
|
5dd847ee5da44dbeb1a1af962575d47a9899ddd4
|
[
"BSD-3-Clause"
] | null | null | null | 23.288462 | 214 | 0.565648 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
cb85a0c65230d08a2c6631c20ddb05721e8aed89
| 106,752 |
ipynb
|
Jupyter Notebook
|
3 - Airbnb - Scripts/1. Airbnb - Importing libraries, checks and first descriptive analysis.ipynb
|
LauraAstraData/AirbnbAccomodationTrendsBerlin
|
8e84063eb6e79a87d980cae0cf1a56dc3a4e1858
|
[
"CC0-1.0"
] | null | null | null |
3 - Airbnb - Scripts/1. Airbnb - Importing libraries, checks and first descriptive analysis.ipynb
|
LauraAstraData/AirbnbAccomodationTrendsBerlin
|
8e84063eb6e79a87d980cae0cf1a56dc3a4e1858
|
[
"CC0-1.0"
] | null | null | null |
3 - Airbnb - Scripts/1. Airbnb - Importing libraries, checks and first descriptive analysis.ipynb
|
LauraAstraData/AirbnbAccomodationTrendsBerlin
|
8e84063eb6e79a87d980cae0cf1a56dc3a4e1858
|
[
"CC0-1.0"
] | null | null | null | 38.084909 | 243 | 0.350082 |
[
[
[
"#### 1. Importing libraries and datasets",
"_____no_output_____"
]
],
[
[
"# Import libraries\nimport pandas as pd\nimport numpy as np\nimport os",
"_____no_output_____"
],
[
"# Import Airbnb listing dataset\npath = r'C:\\Users\\",
"_____no_output_____"
],
[
"df_airbnb = pd.read_csv(os.path.join(path, 'Original Data', 'listings2021.csv'), index_col = False)",
"_____no_output_____"
],
[
"# check imported dataset\ndf_airbnb.head()",
"_____no_output_____"
],
[
"df_airbnb.tail()",
"_____no_output_____"
],
[
"df_airbnb.shape",
"_____no_output_____"
],
[
"df_airbnb.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 19858 entries, 0 to 19857\nData columns (total 16 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 id 19858 non-null int64 \n 1 name 19826 non-null object \n 2 host_id 19858 non-null int64 \n 3 host_name 18926 non-null object \n 4 neighbourhood_group 19858 non-null object \n 5 neighbourhood 19858 non-null object \n 6 latitude 19858 non-null float64\n 7 longitude 19858 non-null float64\n 8 room_type 19858 non-null object \n 9 price 19858 non-null int64 \n 10 minimum_nights 19858 non-null int64 \n 11 number_of_reviews 19858 non-null int64 \n 12 last_review 15753 non-null object \n 13 reviews_per_month 15753 non-null float64\n 14 calculated_host_listings_count 19858 non-null int64 \n 15 availability_365 19858 non-null int64 \ndtypes: float64(3), int64(7), object(6)\nmemory usage: 2.4+ MB\n"
],
[
"df_airbnb.dtypes",
"_____no_output_____"
],
[
"df_airbnb.describe()",
"_____no_output_____"
],
[
"# checking data consistency for df_airbnb\ndf_airbnb.isnull().sum()",
"_____no_output_____"
]
],
[
[
"#### 2. Checking for 'NaN' and '0'",
"_____no_output_____"
]
],
[
[
"df_airbnb_nan1 = df_airbnb[df_airbnb['name'].isnull() == True]",
"_____no_output_____"
],
[
"df_airbnb_nan1",
"_____no_output_____"
]
],
[
[
"#### housing name / title missing for 32 entries. No changes will be made as the 'name' column won´t have an impact on the analysis",
"_____no_output_____"
]
],
[
[
"df_airbnb_nan2 = df_airbnb[df_airbnb['host_name'].isnull() == True]",
"_____no_output_____"
],
[
"df_airbnb_nan2",
"_____no_output_____"
]
],
[
[
"#### 'host_name' column shouldn´t have any impact on the analyis, column might be dropped to a later point for data privacy",
"_____no_output_____"
]
],
[
[
"df_airbnb_nan3 = df_airbnb[df_airbnb['last_review'].isnull() == True]",
"_____no_output_____"
],
[
"df_airbnb_nan3",
"_____no_output_____"
]
],
[
[
"#### 'last_review' represent the date of the last review received, keeping the column for now as won´t have an impact on the analysis",
"_____no_output_____"
]
],
[
[
"df_airbnb_nan4 = df_airbnb[df_airbnb['reviews_per_month'].isnull() == True]",
"_____no_output_____"
],
[
"df_airbnb_nan4",
"_____no_output_____"
]
],
[
[
"#### the information 'reviews_per_month´ is based on the column 'number_of_reviews'. Value for this column as NaN is applied when 0 is the 'reviews_per_month' column.\n",
"_____no_output_____"
]
],
[
[
"# changing na in the 'reviews_per_month' column with 0\ndf_airbnb['reviews_per_month'] = df_airbnb['reviews_per_month'].fillna(0)",
"_____no_output_____"
],
[
"df_airbnb['reviews_per_month'].isnull().sum()",
"_____no_output_____"
],
[
"# checking duplicates df_airbnb\ndf_airbnb_dups = df_airbnb[df_airbnb.duplicated()]",
"_____no_output_____"
],
[
"df_airbnb_dups",
"_____no_output_____"
]
],
[
[
"#### No duplicates were found",
"_____no_output_____"
]
],
[
[
"# saving cleaned data set\ndf_airbnb.to_csv(os.path.join(path, 'Prepared Data','df_airbnb_clean.csv'))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb85b942bb57f6a7f699c9e8d80fc9cbb43d28fe
| 78,471 |
ipynb
|
Jupyter Notebook
|
Optimal Design.ipynb
|
pablocarb/sbc-doe
|
3467a42765bae03aedfd24924e4c18213753d27a
|
[
"MIT"
] | null | null | null |
Optimal Design.ipynb
|
pablocarb/sbc-doe
|
3467a42765bae03aedfd24924e4c18213753d27a
|
[
"MIT"
] | null | null | null |
Optimal Design.ipynb
|
pablocarb/sbc-doe
|
3467a42765bae03aedfd24924e4c18213753d27a
|
[
"MIT"
] | null | null | null | 31.782503 | 346 | 0.30475 |
[
[
[
"Evaluate experimental design using D-Efficiency.\n\n**Definitions**:\n\n$\\mathbf{X}$ is the model matrix: A row for each run and a column for each term in the model.\n\nFor instance, a model assuming only main effects:\n\n$\\mathbf{Y} = \\mathbf{X} \\beta + \\alpha$\n\n$\\mathbf{X}$ will contain $p = m + 1$ columns (number of factors + intercept).\n\n\nD-optimality:\n\n\n*D-efficiency* $= \\left(\\frac{1}{n}|\\mathbf{X}'\\mathbf{X}|^{1/p}\\right)$\n\n**D-efficiency**\n\nD-efficiency compares the design $\\mathbf{X}$ with the D-optimal design $\\mathbf{X_D}$ for the assumed model: \n\n*D-efficiency* $= \\left[ \\frac{|\\mathbf{X}'\\mathbf{X}|}{|\\mathbf{X_D}'\\mathbf{X_D}|} \\right]^{1/p}$\n\n\nIn JMP, the D-efficiency compares the design with an orthogonal design in terms of D-optimality criterion:\n\n*D-efficiency* $= 100 \\left(\\frac{1}{n}|\\mathbf{X}'\\mathbf{X}|^{1/p}\\right)$\n\nIn orthogonal designs (see Olive, D.J. (2017) Linear Regression, Springer, New York, NY.):\n\n * the entries in the model matrix $\\mathbf{X}$ are either -1 or 1, \n * the columns are orthogonals $c_i^T c_j = 0$ for $i \\neq j$,\n * $c_i^T c_i = n$, where $n$ is the number of experiments,\n * the sum of the absolute value of the columns is $n$.\n\nFor an orthogonal design with $p$ factors and $n$ experiments:\n\n$\\mathbf{X}'\\mathbf{X} = n \\mathbf{I}$\n\nand therefore:\n\n$|\\mathbf{X}'\\mathbf{X}| = n^p $.\n\n**Designs**\n\nD-optimal designs maximize $D$:\n\n$D = |\\mathbf{X}'\\mathbf{X}|$\n\n(no need for the other terms because they are constant)\n\nD-optimal split-plot designs maximize:\n\n$D = |\\mathbf{X}'\\mathbf{V}^{-1}\\mathbf{X}|$\n\nwhere $\\mathbf{V}^{-1}$ is the block diagonal covariance matrix of the responses.\n\nSplit-plot designs are those that some factors are harder to vary than others. The covariance indiciates the ratio between the whole variance and the subplot variance to the error variance. \n\n**Estimation efficiency**\n\nThere are several parameters (see JMP guide), but they are related. The basic is the relateive std error to estimate, i.e., how large the standard errors of the model's parameter estimates are relative to the error standard deviation.\n\n*SE* $= \\sqrt{\\left(\\mathbf{X}'\\mathbf{X}\\right)_{ii}^{-1}}$\n\nwhere $\\left(\\mathbf{X}'\\mathbf{X}\\right)_{ii}^{-1}$ is the $i$ diagonal of $\\left(\\mathbf{X}'\\mathbf{X}\\right)^{-1}$.",
"_____no_output_____"
],
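[
"As an added numeric check (not in the original), for an orthogonal two-level design $\\mathbf{X}'\\mathbf{X} = n\\mathbf{I}$, so the JMP-style D-efficiency above evaluates to 100. A quick sketch with a $2^2$ full factorial plus intercept:\n```python\nimport numpy as np\n\n# 2^2 full factorial in -1/+1 coding with an intercept column: n = 4 runs, p = 3 columns\nX = np.array([[1, -1, -1],\n              [1, -1,  1],\n              [1,  1, -1],\n              [1,  1,  1]])\nn, p = X.shape\nd_eff = 100.0 / n * np.linalg.det(X.T @ X) ** (1.0 / p)\nprint(d_eff)  # ~100 for this orthogonal design (up to floating point)\n```",
"_____no_output_____"
],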
[
"### Calculation of D-efficiency with categorical variables\n\nWith categorical variables with $l$ levels, we need to use $l-1$ dummy variables. There are several possible [constrat codings](https://juliastats.github.io/StatsModels.jl/latest/contrasts.html). \n * One possibility is to perform a **one-hot encoding** with values -1 and 1 (an hypercube) and take the $l-1$ levels. \n * In order to perform the dimension reduction, one approach is to express the points in their $l-1$ principal directions.\n * In order to make the design orthogonal, the product of the columns of the dummy variables need to add up to the number of variables $\\sum_{j=1}^{l-1} c_{ij}^T c_{ij} = l-1$. My approach: a) normalize the resulting vectors, b) multiply them for $\\sqrt{l-1}$.",
"_____no_output_____"
],
[
"# Example\n\nAn example for 2-factor model with main effects:",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"X1 = np.matrix( \"[1 1 1; 1 -1 1; 1 -1 -1]\")\nX1",
"_____no_output_____"
],
[
"def Deff(X):\n # D-efficiency\n return (100.0/X.shape[0]) * ( np.linalg.det( np.dot( np.transpose( X ), X ) )**(1.0/X.shape[1]))\ndef Dopt(X):\n # D-optimality\n return np.linalg.det( np.dot( np.transpose( X ), X ) )\ndef SE(X):\n # Estimation efficiency\n return np.diag( np.linalg.inv( np.dot( np.transpose( X ), X ) ) )\ndef Contrib(X):\n cn = []\n for i in range(0, X.shape[0]):\n cn.append( Dopt( np.vstack( [X[:i,:], X[(i+1):,:]] ) ) )\n return cn\ndef VarAdd(X,xj):\n # Variance of adding/removing one\n return np.dot( np.dot( np.transpose(xj) , np.linalg.inv( np.dot( np.transpose( X ), X) ) ), xj )\n ",
"_____no_output_____"
],
[
"Dopt( np.vstack( [X[:i,:], X[i:,:]] ) )",
"_____no_output_____"
],
[
"Contrib(X1)",
"(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n(10, 6) (10, 6)\n"
],
[
"SE(X1)",
"_____no_output_____"
]
],
[
[
"# Algorithm",
"_____no_output_____"
]
],
[
[
"import itertools\n\n\np = 10 # Number of factors print(w,Deff(X),i,j\nn = 24 # Number of runs\n# Initial design\nX = np.random.randint(0,2,(n,p+1))\nval = []\nfor x in np.arange(p):\n val.append([0,1])\nffact = []\nfor m in itertools.product(*val):\n ffact.append(m)\nffact = np.array(ffact)\nX = ffact[np.random.randint(0,len(ffact),n),:]\nJ = 0\nw = 0\nwhile ( (J < 1e4) and (w < 10000) ):\n d2 = None\n d6 = None\n w += 1\n try:\n d1 = Deff( X )\n d2 = Dopt( X )\n d3 = SE( X )\n d4 = Contrib( X )\n except:\n continue\n J = max(J, d1)\n i = np.argmin( d4 )\n j = np.argmax( d3 )\n X1 = X.copy()\n k = 0\n while( k < 10 ):\n i = np.argsort( d4 )[ np.random.randint(0,5)]\n j = np.flip( np.argsort( d3 ) )[0] # [ np.random.randint(0,5)]\n X1[i,:] = ffact[np.random.randint(0,len(ffact),1), :]\n# if X[i,j] == 0:\n# X1[i,j] = 1\n# else:\n# X1[i,j] = 0\n k += 1\n try:\n d5 = Deff( X1 )\n d6 = Dopt( X1 )\n d7 = SE( X1 )\n d8 = Contrib( X1 )\n \n except:\n continue\n if d6 > d2:\n X = X1\n print(w,J,d1,d2,d6,i,j)\n\nprint(w,J,d1,d2,d6,i,j)\n ",
"39 24.757137280437647 24.757137280437647 54842426.99999994 55795928.00000006 16 6\n70 24.799847417156084 24.799847417156084 55795928.00000006 63224372.00000023 2 2\n270 25.111763392576464 25.111763392576464 63224372.00000023 64715250.00000028 12 1\n1070 25.170359681548877 25.170359681548877 64715250.00000028 78980367.0000001 5 1\n2767 25.676786679666098 25.676786679666098 78980367.0000001 87557751.00000007 12 0\n3639 25.942881867260958 25.942881867260958 87557751.00000007 90560945.00000003 2 7\n10000 26.030520536933427 26.030520536933427 90560945.00000003 6691743.000000008 20 0\n"
],
[
"p = 100 # Number of factors\nn = 150 # Number of runs\nm = 100 # Sampled design space per iteration\n\n# Here we generate a full factorial but this is not possible for large designs\n# Replace by random sampling, ideally some descent algorithm (TBD)\n# For categorical variables, we randomize the levels that are then mapped into the dummy variables\nFULL = False\nif FULL:\n val = []\n for x in np.arange(p+1):\n val.append([-1,1])\n ffact = []\n for m in itertools.product(*val):\n ffact.append(m)\n ffact = np.array(ffact)\n# Initial design: could we do something better than a purely random start?\nX = np.array([-1,1])[ np.random.randint(0,2,(n,p+1)) ]\n# D-Efficiency of the initial design\n# Here I have implemented a simple DETMAX algorithm. At each iteration: \n# - remove the design with the lowest variance\n# - add the design with the highest variance\n# Many more efficent variants exist (kl-exchange, etc..)\nJ = Deff(X)\nprint(J)\nw = 0\nwhile ((J<99.0) and (w < 100)):\n # X1 is the design space sample in the iteration. \n # Here we loop through the full factorial, which is computationally expensive\n # First thing to fdo is to change it to random generation of a subset of the full library\n # It would be better to move across some surface like gradient descent...\n if FULL:\n X1 = ffact\n else:\n X1 = np.array([-1,1])[ np.random.randint(0,2,(m,p+1)) ]\n sub = []\n for i in np.arange(X.shape[0]):\n sub.append( VarAdd(X, X[i,:]) )\n w += 1\n Xsub = None\n dList = np.argsort( sub )[0:1]\n for i in np.arange(X.shape[0]):\n if i in dList:\n continue\n else:\n if Xsub is None:\n Xsub = X[i,:]\n else:\n Xsub = np.vstack( [Xsub, X[i,:]] )\n add = []\n for j in np.arange(X1.shape[0]):\n add.append( VarAdd( Xsub, X1[j,:] ) ) \n aList = np.flip( np.argsort( add ) )[0:1]\n Xn = Xsub\n for j in aList:\n Xn = np.vstack( [Xn, X1[j,:] ] )\n if w % 100 == 0:\n print(w,J,i,j, Dopt(X), Dopt(Xsub), Dopt(Xn))\n if Dopt(Xn) > Dopt(X):\n X = Xn\n J = Deff(X)\n elif Dopt(Xn) == Dopt(X):\n break\nprint(w,J,i,j)\n ",
"63.10813627200491\n100 75.63574184274607 149 90 3.440735760266953e+207 1.331020674558307e+207 3.5931443869773624e+207\n100 75.66820654669202 149 90\n"
],
[
"X.shape",
"_____no_output_____"
],
[
"X = np.random.randint(0,2,(n,p+1))\nX1 = np.random.randint(0,2,(n,p+1))\nadd = []\nfor i in np.arange(X1.shape[0]):\n add.append( VarAdd(X, X1[i,:]) )\nsub = []\nfor i in np.arange(X.shape[0]):\n sub.append( VarAdd(X, X[i,:]) )",
"_____no_output_____"
],
[
"sub",
"_____no_output_____"
],
[
"Dopt( X1 )",
"_____no_output_____"
],
[
"d4",
"_____no_output_____"
],
[
" np.dot( np.transpose( X), np.array( X ) ) ",
"_____no_output_____"
],
[
"np.transpose( X )",
"_____no_output_____"
],
[
"np.linalg.det( np.dot( np.transpose( X ), X ) )",
"_____no_output_____"
],
[
"24**11",
"_____no_output_____"
],
[
"import dexpy.optimal\nfrom dexpy.model import ModelOrder\nreaction_design = dexpy.optimal.build_optimal(50, run_count=64, order=ModelOrder.linear)\n#reaction_design",
"_____no_output_____"
],
[
"from sklearn.preprocessing import OneHotEncoder",
"_____no_output_____"
]
],
[
[
"dexpy library works fine, but it does not work with dummy variables. Therefore, if we add dummy variables, there will be clashes between levels in the factor (multiple level set simultaneously). Also, if we increase the order of the model, it will generate intermediate values that are not actually meaningful for dummy variables. \n\nDummy variables could probably be better processed by generating a full factorial with labels and then convert to dummy after sampling.",
"_____no_output_____"
]
],
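[
[
"A rough added sketch of the workaround suggested above: enumerate a labelled full factorial, sample candidate runs from it, and only then expand the labels into dummy columns, so no run can have two levels of the same factor set at once. The factor names and levels are made up.\n```python\nimport itertools\nimport pandas as pd\n\nlevels = {'temp': ['low', 'mid', 'high'], 'catalyst': ['A', 'B']}  # made-up factors\nfull = pd.DataFrame(list(itertools.product(*levels.values())), columns=list(levels.keys()))\nsample = full.sample(n=4, random_state=0)           # candidate runs, still as labels\ndummies = pd.get_dummies(sample, drop_first=True)   # convert to dummy variables only after sampling\nprint(dummies)\n```",
"_____no_output_____"
]
],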
[
[
"reaction_design",
"_____no_output_____"
],
[
"np.matrix( \"[1 1 1; 2 3 4]\")",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_excel('/mnt/SBC1/data/OptimalDesign/data/CD.xlsx')\nXX = np.array( df.iloc[:,0:5] )",
"_____no_output_____"
],
[
"Deff(XX)",
"_____no_output_____"
],
[
"X = np.array([-1,-1])[ np.random.randint(0,2,(n,p+1)) ]",
"_____no_output_____"
],
[
"X",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb85c3feee3313fe1dd6b92e4eb3f8170c9003c8
| 56,921 |
ipynb
|
Jupyter Notebook
|
drafts/1_neural-networks-by-hand.ipynb
|
jydiw/deeplearning.ai
|
9b825dc9351ac611f354139dae596d1c0a1f3834
|
[
"MIT"
] | null | null | null |
drafts/1_neural-networks-by-hand.ipynb
|
jydiw/deeplearning.ai
|
9b825dc9351ac611f354139dae596d1c0a1f3834
|
[
"MIT"
] | null | null | null |
drafts/1_neural-networks-by-hand.ipynb
|
jydiw/deeplearning.ai
|
9b825dc9351ac611f354139dae596d1c0a1f3834
|
[
"MIT"
] | null | null | null | 116.165306 | 42,532 | 0.860139 |
[
[
[
"# Introduction to Neural Networks\r\n\r\nBased off of the lab exercises from deeplearning.ai, using public datasets and personal flair.",
"_____no_output_____"
],
[
"## Objectives\r\n- Build the general architecture of a learning algorithm, including:\r\n - initializing parameters\r\n - calculating the cost function and its gradient\r\n - using an optimization algorithm\r\n- Gather all three functions above into a main model function, in the right order.",
"_____no_output_____"
],
[
"## Import Packages",
"_____no_output_____"
]
],
[
[
"import os\r\nimport random\r\nimport re\r\n\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport scipy\r\n\r\nfrom PIL import Image\r\nfrom scipy import ndimage\r\nfrom sklearn.model_selection import train_test_split\r\n\r\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Dataset\r\n\r\nData will be taken from Kaggle's [Dogs vs. Cats](https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/data) dataset.\r\n\r\nFrom Kaggle's description:\r\n\r\n>The train folder contains 25,000 images of dogs and cats. Each image in this folder has the label as part of the filename. The test folder contains 12,500 images, named according to a numeric id. For each image in the test set, you should predict a probability that the image is a dog (1 = dog, 0 = cat).\r\n\r\nSteps to reproduce:\r\n- preprocess train and validation set\r\n - (optional) select subset of training set\r\n - resize images to all be the same (64x64)\r\n - flatten images\r\n- build logistic regression model as a single-layer neural network\r\n - initialize weight matrix\r\n - write forward and backprop functions, defining the log loss cost function\r\n - optimize learning",
"_____no_output_____"
]
],
[
[
"TRAIN_PATH = 'C:/Users/JYDIW/Documents/kaggle-datasets/dogs-vs-cats-redux-kernels-edition/train/'\r\nTEST_PATH = 'C:/Users/JYDIW/Documents/kaggle-datasets/dogs-vs-cats-redux-kernels-edition/test/'\r\n\r\nROWS = 128\r\nCOLS = 128\r\nCHANNELS = 3\r\n\r\nm_train = 800\r\nm_val = 200\r\nm_total = m_train + m_val\r\n\r\nall_train_dogs = [TRAIN_PATH+f for f in os.listdir(TRAIN_PATH) if 'dog' in f]\r\nall_train_cats = [TRAIN_PATH+f for f in os.listdir(TRAIN_PATH) if 'cat' in f]\r\n\r\nall_train_images = random.sample(all_train_dogs, m_total//2) + random.sample(all_train_cats, m_total//2)\r\nrandom.shuffle(all_train_images)\r\n\r\ntrain_images, val_images = train_test_split(all_train_images, test_size=m_val)\r\n\r\n# all_test_images = [TEST_PATH+f for f in os.listdir(TEST_PATH)]\r\n# test_images = random.sample(all_test_images, m_test)",
"_____no_output_____"
],
[
"def read_image(image_path, as_array=False):\r\n img = Image.open(image_path)\r\n if as_array:\r\n return np.asarray(img.resize((COLS, ROWS)))\r\n return img.resize((COLS, ROWS))\r\n\r\ndef resize_images(images):\r\n count = len(images)\r\n data = np.ndarray((count, ROWS, COLS, CHANNELS), dtype=np.uint8)\r\n for i, file in enumerate(images):\r\n img = read_image(file, as_array=True)\r\n data[i] = img\r\n if (i+1)%250 == 0:\r\n print(f'Processed {i+1} of {count}')\r\n return data",
"_____no_output_____"
],
[
"print(read_image(train_images[0], as_array=True).shape)\r\nread_image(train_images[0])",
"(128, 128, 3)\n"
],
[
"train_images_resized = resize_images(train_images)\r\nval_images_resized = resize_images(val_images)",
"Processed 250 of 800\nProcessed 500 of 800\nProcessed 750 of 800\n"
],
[
"def generate_labels(images):\r\n labels = np.zeros((1, np.array(images).shape[0]), dtype=np.uint8)\r\n for i, img in enumerate(images):\r\n if re.findall('.+\\/(\\w+)\\.\\d+\\.jpg', img)[0] == 'dog':\r\n labels[0][i] = 1\r\n# else:\r\n# labels[0][i] = 0\r\n return labels",
"_____no_output_____"
],
[
"y_train = generate_labels(train_images)\r\ny_val = generate_labels(val_images)",
"_____no_output_____"
],
[
"def flatten_and_normalize_images(images):\r\n return images.reshape(images.shape[0], -1).T / 255",
"_____no_output_____"
],
[
"X_train = flatten_and_normalize_images(train_images_resized)\r\nX_val = flatten_and_normalize_images(val_images_resized)",
"_____no_output_____"
],
[
"print(X_train.shape)\r\nprint(y_train.shape)",
"(49152, 800)\n(1, 800)\n"
]
],
[
[
"## Building the Algorithm\r\n\r\nThe main steps for building a Neural Network are:\r\n1. Define the model structure (such as number of input features) \r\n2. Initialize the model's parameters\r\n3. Loop:\r\n - Calculate current loss (forward propagation)\r\n - Calculate current gradient (backward propagation)\r\n - Update parameters (gradient descent)",
"_____no_output_____"
]
],
[
[
"def sigmoid(z):\r\n return 1 / (1 + np.exp(-1 * z))",
"_____no_output_____"
],
[
"def initialize_with_zeros(dim):\r\n w = np.zeros((dim, 1))\r\n b = 0\r\n return w, b",
"_____no_output_____"
],
[
"def negative_log_likelihood(A, y, m):\r\n J = -1 * np.sum(y * np.log(A) + (1 - y) * np.log(1 - A)) / m\r\n return J",
"_____no_output_____"
],
[
"def propagate(w, b, X, y):\r\n m = X.shape[1]\r\n A = sigmoid(np.dot(w.T, X) + b)\r\n cost = negative_log_likelihood(A, y, m)\r\n\r\n dw = np.dot(X, (A - y).T) / m\r\n db = np.sum(A - y) / m\r\n\r\n cost = np.squeeze(cost)\r\n grads = {\"dw\": dw, \"db\": db}\r\n\r\n return grads, cost\r\n",
"_____no_output_____"
],
[
"def optimize(w, b, X, y, num_iterations, learning_rate, verbose=False):\r\n costs = []\r\n \r\n for i in range(num_iterations):\r\n grads, cost = propagate(w, b, X, y)\r\n dw = grads['dw']\r\n db = grads['db']\r\n \r\n w -= learning_rate * dw\r\n b -= learning_rate * db\r\n \r\n if i % 100 == 0:\r\n costs.append(cost)\r\n if verbose:\r\n print(f'cost after iteration {i}: {cost}')\r\n \r\n params = {'w': w, 'b': b}\r\n grads = {'dw': dw, 'db': db}\r\n \r\n return params, grads, costs",
"_____no_output_____"
],
[
"def predict(w, b, X):\r\n m = X.shape[-1]\r\n y_pred = np.zeros((1, m))\r\n w = w.reshape(X.shape[0], 1)\r\n \r\n A = sigmoid(np.dot(w.T, X) + b)\r\n \r\n for i in range(A.shape[1]):\r\n y_pred[0][i] = (A[0][i] > 0.5)\r\n \r\n return y_pred",
"_____no_output_____"
],
[
"def model(X_train, y_train, X_val, y_val, num_iterations=2000, learning_rate=0.5, verbose=False):\r\n w, b = initialize_with_zeros(X_train.shape[0])\r\n params, grads, costs = optimize(w, b, X_train, y_train, num_iterations, learning_rate, verbose)\r\n \r\n w = params['w']\r\n b = params['b']\r\n \r\n y_pred_train = predict(w, b, X_train)\r\n y_pred_val = predict(w, b, X_val)\r\n \r\n print(f'train accuracy: {(100 - np.mean(np.abs(y_pred_train - y_train)) * 100)}')\r\n print(f'test accuracy: {(100 - np.mean(np.abs(y_pred_val - y_val)) * 100)}')\r\n\r\n\r\n d = {\"costs\": costs,\r\n \"y_prediction_test\": y_pred_val, \r\n \"y_prediction_train\" : y_pred_train, \r\n \"w\" : w, \r\n \"b\" : b,\r\n \"learning_rate\" : learning_rate,\r\n \"num_iterations\": num_iterations}\r\n \r\n return d",
"_____no_output_____"
],
[
"m = model(X_train, y_train, X_val, y_val, 2000, 0.005, True)",
"cost after iteration 0: 0.6931471805599452\ncost after iteration 100: 3.454169192861785\ncost after iteration 200: 4.19750334914926\ncost after iteration 300: 4.1188392553789885\ncost after iteration 400: 4.039392025305311\ncost after iteration 500: 3.9681554335302214\ncost after iteration 600: 3.8840734705378397\ncost after iteration 700: 3.793686363109777\ncost after iteration 800: 3.703834827497441\ncost after iteration 900: 3.6200087518981126\ncost after iteration 1000: 3.542935255901365\ncost after iteration 1100: 3.470934366372168\ncost after iteration 1200: 3.4007262730644823\ncost after iteration 1300: 3.3279209612208023\ncost after iteration 1400: 3.250476606041328\ncost after iteration 1500: 3.1700452565226076\ncost after iteration 1600: 3.088383833332973\ncost after iteration 1700: 3.0062164166896674\ncost after iteration 1800: 2.923499855149795\ncost after iteration 1900: 2.8399148978577227\ntrain accuracy: 62.375\ntest accuracy: 56.0\n"
]
],
[
[
"## Adding Layers to the Model\r\n\r\nSteps to reproduce:\r\n- Initialize the parameters for a two-layer network and for an $L$-layer neural network.\r\n- Implement the forward propagation module.\r\n - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\r\n - We give you the ACTIVATION function (relu/sigmoid).\r\n - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\r\n - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.\r\n- Compute the loss.\r\n- Implement the backward propagation module (denoted in red in the figure below).\r\n - Complete the LINEAR part of a layer's backward propagation step.\r\n - We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) \r\n - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.\r\n - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function\r\n- Finally update the parameters.",
"_____no_output_____"
]
]
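A minimal sketch of the [LINEAR -> ACTIVATION] forward step described above. The `relu` helper and the parameter names (`A_prev`, `W`, `b`) are assumptions for illustration; this is not the lab's reference implementation:

```python
import numpy as np

def relu(z):
    # elementwise ReLU (assumed helper, not defined earlier in this notebook)
    return np.maximum(0, z)

def linear_forward(A_prev, W, b):
    # LINEAR part of one layer: Z = W . A_prev + b
    Z = np.dot(W, A_prev) + b
    return Z, (A_prev, W, b)

def linear_activation_forward(A_prev, W, b, activation="relu"):
    # LINEAR -> ACTIVATION for one layer
    Z, linear_cache = linear_forward(A_prev, W, b)
    if activation == "relu":
        A = relu(Z)
    else:  # "sigmoid"
        A = 1 / (1 + np.exp(-Z))
    return A, (linear_cache, Z)

# tiny smoke test: 3 input features, 2 examples, 4 hidden units
A_prev = np.random.randn(3, 2)
W, b = np.random.randn(4, 3), np.zeros((4, 1))
A, _ = linear_activation_forward(A_prev, W, b, "relu")
print(A.shape)  # (4, 2)
```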
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
cb85c41223f83cfcbc51aad9e6b8b3ec64231117
| 4,099 |
ipynb
|
Jupyter Notebook
|
Untitled.ipynb
|
escolmebartlebooth/usdc-t1-p4-detection
|
98ae6ffadc7d8117cca223a038274746f59d1986
|
[
"MIT"
] | null | null | null |
Untitled.ipynb
|
escolmebartlebooth/usdc-t1-p4-detection
|
98ae6ffadc7d8117cca223a038274746f59d1986
|
[
"MIT"
] | null | null | null |
Untitled.ipynb
|
escolmebartlebooth/usdc-t1-p4-detection
|
98ae6ffadc7d8117cca223a038274746f59d1986
|
[
"MIT"
] | null | null | null | 18.976852 | 83 | 0.42181 |
[
[
[
"import numpy as np\nimport collections\nimport random",
"_____no_output_____"
],
[
"n = collections.deque(maxlen=5)",
"_____no_output_____"
],
[
"for i in range(10):\n arr = []\n for x in range(3):\n arr.append(random.randrange(0,10))\n n.append(arr)\n print(n)",
"deque([[8, 0, 4]], maxlen=5)\ndeque([[8, 0, 4], [6, 0, 0]], maxlen=5)\ndeque([[8, 0, 4], [6, 0, 0], [0, 6, 6]], maxlen=5)\ndeque([[8, 0, 4], [6, 0, 0], [0, 6, 6], [1, 9, 9]], maxlen=5)\ndeque([[8, 0, 4], [6, 0, 0], [0, 6, 6], [1, 9, 9], [6, 2, 4]], maxlen=5)\ndeque([[6, 0, 0], [0, 6, 6], [1, 9, 9], [6, 2, 4], [2, 0, 3]], maxlen=5)\ndeque([[0, 6, 6], [1, 9, 9], [6, 2, 4], [2, 0, 3], [3, 4, 9]], maxlen=5)\ndeque([[1, 9, 9], [6, 2, 4], [2, 0, 3], [3, 4, 9], [3, 1, 7]], maxlen=5)\ndeque([[6, 2, 4], [2, 0, 3], [3, 4, 9], [3, 1, 7], [1, 3, 4]], maxlen=5)\ndeque([[2, 0, 3], [3, 4, 9], [3, 1, 7], [1, 3, 4], [4, 6, 3]], maxlen=5)\n"
],
[
"np.mean(n)",
"_____no_output_____"
],
[
"x = np.mean(n, axis=0)",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"z = x.copy()",
"_____no_output_____"
],
[
"z = np.add(z, 4)",
"_____no_output_____"
],
[
"z",
"_____no_output_____"
],
[
"a = np.subtract(x,z)\na",
"_____no_output_____"
],
[
"np.absolute(a)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb85c5bac664465aecf2631220ec5f0e766845be
| 14,787 |
ipynb
|
Jupyter Notebook
|
r_examples/r_api_serving_examples/API Serving Examples.ipynb
|
eugeneteoh/amazon-sagemaker-examples
|
15c006d367d27371a407706953e2e962fe3bbe48
|
[
"Apache-2.0"
] | 2,610 |
2020-10-01T14:14:53.000Z
|
2022-03-31T18:02:31.000Z
|
r_examples/r_api_serving_examples/API Serving Examples.ipynb
|
eugeneteoh/amazon-sagemaker-examples
|
15c006d367d27371a407706953e2e962fe3bbe48
|
[
"Apache-2.0"
] | 1,959 |
2020-09-30T20:22:42.000Z
|
2022-03-31T23:58:37.000Z
|
r_examples/r_api_serving_examples/API Serving Examples.ipynb
|
eugeneteoh/amazon-sagemaker-examples
|
15c006d367d27371a407706953e2e962fe3bbe48
|
[
"Apache-2.0"
] | 2,052 |
2020-09-30T22:11:46.000Z
|
2022-03-31T23:02:51.000Z
| 24.201309 | 404 | 0.565159 |
[
[
[
"# R API Serving Examples\n\nIn this example, we demonstrate how to quickly compare the runtimes of three methods for serving a model from an R hosted REST API. The following SageMaker examples discuss each method in detail:\n\n* **Plumber**\n * Website: [https://www.rplumber.io/](https://www.rplumber.io)\n * SageMaker Example: [r_serving_with_plumber](../r_serving_with_plumber)\n* **RestRServe**\n * Website: [https://restrserve.org](https://restrserve.org)\n * SageMaker Example: [r_serving_with_restrserve](../r_serving_with_restrserve)\n* **FastAPI** (reticulated from Python)\n * Website: [https://fastapi.tiangolo.com](https://fastapi.tiangolo.com)\n * SageMaker Example: [r_serving_with_fastapi](../r_serving_with_fastapi)\n \nWe will reuse the docker images from each of these examples. Each one is configured to serve a small XGBoost model which has already been trained on the classical Iris dataset.",
"_____no_output_____"
],
[
"## Building Docker Images for Serving\n\nFirst, we will build each docker image from the provided SageMaker Examples.",
"_____no_output_____"
],
[
"### Plumber Serving Image",
"_____no_output_____"
]
],
[
[
"!cd .. && docker build -t r-plumber -f r_serving_with_plumber/Dockerfile r_serving_with_plumber",
"_____no_output_____"
]
],
[
[
"### RestRServe Serving Image",
"_____no_output_____"
]
],
[
[
"!cd .. && docker build -t r-restrserve -f r_serving_with_restrserve/Dockerfile r_serving_with_restrserve",
"_____no_output_____"
]
],
[
[
"### FastAPI Serving Image",
"_____no_output_____"
]
],
[
[
"!cd .. && docker build -t r-fastapi -f r_serving_with_fastapi/Dockerfile r_serving_with_fastapi",
"_____no_output_____"
]
],
[
[
"## Launch Serving Containers",
"_____no_output_____"
],
[
"Next, we will launch each search container. The containers will be launch on the following ports to avoid port collisions on your local machine or SageMaker Notebook instance:",
"_____no_output_____"
]
],
[
[
"ports = {\n \"plumber\": 5000,\n \"restrserve\": 5001,\n \"fastapi\": 5002,\n}",
"_____no_output_____"
],
[
"!bash launch.sh",
"_____no_output_____"
],
[
"!docker container list",
"_____no_output_____"
]
],
[
[
"## Define Simple Client",
"_____no_output_____"
]
],
[
[
"import requests\nfrom tqdm import tqdm\nimport pandas as pd",
"_____no_output_____"
],
[
"def get_predictions(examples, instance=requests, port=5000):\n payload = {\"features\": examples}\n return instance.post(f\"http://127.0.0.1:{port}/invocations\", json=payload)",
"_____no_output_____"
],
[
"def get_health(instance=requests, port=5000):\n instance.get(f\"http://127.0.0.1:{port}/ping\")",
"_____no_output_____"
]
],
[
[
"## Define Example Inputs",
"_____no_output_____"
],
[
"Next, we define a example inputs from the classical [Iris](https://archive.ics.uci.edu/ml/datasets/iris) dataset.\n* Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.",
"_____no_output_____"
]
],
[
[
"column_names = [\"Sepal.Length\", \"Sepal.Width\", \"Petal.Length\", \"Petal.Width\", \"Label\"]\niris = pd.read_csv(\n \"s3://sagemaker-sample-files/datasets/tabular/iris/iris.data\", names=column_names\n)",
"_____no_output_____"
],
[
"iris_features = iris[[\"Sepal.Length\", \"Sepal.Width\", \"Petal.Length\", \"Petal.Width\"]]",
"_____no_output_____"
],
[
"example = iris_features.values[:1].tolist()",
"_____no_output_____"
],
[
"many_examples = iris_features.values[:100].tolist()",
"_____no_output_____"
]
],
[
[
"## Testing\n\nNow it's time to test how each API server performs under stress.",
"_____no_output_____"
],
[
"We will test two use cases:\n* **New Requests**: In this scenario, we test how quickly the server can respond with predictions when each client request establishes a new connection with the server. This simulates the server's ability to handle real-time requests. We could make this more realistic by creating an asynchronous environment that tests the server's ability to fulfill concurrent rather than sequential requests.\n* **Keep Alive / Reuse Session**: In this scenario, we test how quickly the server can respond with predictions when each client request uses a session to keep its connection to the server alive between requests. This simulates the server's ability to handle sequential batch requests from the same client.",
"_____no_output_____"
],
[
"For each of the two use cases, we will test the performance on following situations:\n\n* 1000 requests of a single example\n* 1000 requests of 100 examples\n* 1000 pings for health status",
"_____no_output_____"
],
[
"## New Requests",
"_____no_output_____"
],
[
"### Plumber",
"_____no_output_____"
]
],
[
[
"# verify the prediction output\nget_predictions(example, port=ports[\"plumber\"]).json()",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(example, port=ports[\"plumber\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(many_examples, port=ports[\"plumber\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n get_health(port=ports[\"plumber\"])",
"_____no_output_____"
]
],
[
[
"### RestRserve",
"_____no_output_____"
]
],
[
[
"# verify the prediction output\nget_predictions(example, port=ports[\"restrserve\"]).json()",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(example, port=ports[\"restrserve\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(many_examples, port=ports[\"restrserve\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n get_health(port=ports[\"restrserve\"])",
"_____no_output_____"
]
],
[
[
"### FastAPI",
"_____no_output_____"
]
],
[
[
"# verify the prediction output\nget_predictions(example, port=ports[\"fastapi\"]).json()",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(example, port=ports[\"fastapi\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(many_examples, port=ports[\"fastapi\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n get_health(port=ports[\"fastapi\"])",
"_____no_output_____"
]
],
[
[
"## Keep Alive (Reuse Session)",
"_____no_output_____"
],
[
"Now, let's test how each one performs when each request reuses a session connection. ",
"_____no_output_____"
]
],
[
[
"# reuse the session for each post and get request\ninstance = requests.Session()",
"_____no_output_____"
]
],
[
[
"### Plumber",
"_____no_output_____"
]
],
[
[
"for i in tqdm(range(1000)):\n _ = get_predictions(example, instance=instance, port=ports[\"plumber\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(many_examples, instance=instance, port=ports[\"plumber\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n get_health(instance=instance, port=ports[\"plumber\"])",
"_____no_output_____"
]
],
[
[
"### RestRserve",
"_____no_output_____"
]
],
[
[
"for i in tqdm(range(1000)):\n _ = get_predictions(example, instance=instance, port=ports[\"restrserve\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(many_examples, instance=instance, port=ports[\"restrserve\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n get_health(instance=instance, port=ports[\"restrserve\"])",
"_____no_output_____"
]
],
[
[
"### FastAPI",
"_____no_output_____"
]
],
[
[
"for i in tqdm(range(1000)):\n _ = get_predictions(example, instance=instance, port=ports[\"fastapi\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n _ = get_predictions(many_examples, instance=instance, port=ports[\"fastapi\"])",
"_____no_output_____"
],
[
"for i in tqdm(range(1000)):\n get_health(instance=instance, port=ports[\"fastapi\"])",
"_____no_output_____"
]
],
[
[
"### Stop All Serving Containers",
"_____no_output_____"
],
[
"Finally, we will shut down the serving containers we launched for the tests.",
"_____no_output_____"
]
],
[
[
"!docker kill $(docker ps -q)",
"_____no_output_____"
]
],
[
[
"## Conclusion",
"_____no_output_____"
],
[
"In this example, we demonstrated how to conduct a simple performance benchmark across three R model serving solutions. We leave the choice of serving solution up to the reader since in some cases it might be appropriate to customize the benchmark in the following ways:\n\n* Update the serving example to serve a specific model\n* Perform the tests across multiple instances types\n* Modify the serving example and client to test asynchronous requests.\n* Deploy the serving examples to SageMaker Endpoints to test within an autoscaling environment.\n\nFor more information on serving your models in custom containers on SageMaker, please see our [support documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-main.html) for the latest updates and best practices.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
cb85c5dd57532574f2a81dd6d7c1477acb11cc31
| 805,114 |
ipynb
|
Jupyter Notebook
|
AI_CA5.ipynb
|
alizare1/Covid-19-Xray-Classifier-Neural-Network
|
23b1e40aaaed5a2b2add31eca638d0521135a773
|
[
"MIT"
] | null | null | null |
AI_CA5.ipynb
|
alizare1/Covid-19-Xray-Classifier-Neural-Network
|
23b1e40aaaed5a2b2add31eca638d0521135a773
|
[
"MIT"
] | null | null | null |
AI_CA5.ipynb
|
alizare1/Covid-19-Xray-Classifier-Neural-Network
|
23b1e40aaaed5a2b2add31eca638d0521135a773
|
[
"MIT"
] | null | null | null | 363.154714 | 96,970 | 0.902409 |
[
[
[
"# CA5 Phase 2\n## Mohammad Ali Zare\n### 810197626\n\nIn this assignment we must classify Xray scans of patients and tell if they're **Normal**, or they have **Covid19**/**Pneuma**.\n\n\nWe do this using neural networks and will try different parameters to see how the performance of the model would change.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import classification_report\n",
"_____no_output_____"
],
[
"LABELS = ['COVID19', 'NORMAL', 'PNEUMA']",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')\n!unzip /content/drive/MyDrive/AI/xray.zip",
"_____no_output_____"
],
[
"train_dataset = keras.preprocessing.image_dataset_from_directory(\n './Data/train', color_mode='grayscale', batch_size=32, image_size=(80, 80))",
"Found 5144 files belonging to 3 classes.\n"
],
[
"test_dataset = keras.preprocessing.image_dataset_from_directory(\n './Data/test', color_mode='grayscale', batch_size=32, image_size=(80, 80))",
"Found 1288 files belonging to 3 classes.\n"
]
],
[
[
"# 2",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(10, 10))\nfor imgs, labels in train_dataset.take(1):\n class_i = []\n class_i.append(np.where(labels == 0)[0][0])\n class_i.append(np.where(labels == 1)[0][0])\n class_i.append(np.where(labels == 2)[0][0])\n for j,i in enumerate(class_i):\n ax = plt.subplot(3, 3, j+1)\n plt.imshow(imgs[i].numpy().astype(\"uint8\")[:,:,0], cmap='Greys_r')\n plt.title(train_dataset.class_names[labels[i]])\n plt.axis('off')\n ",
"_____no_output_____"
],
[
"counts = [0, 0, 0]\nfor imgs, labels in train_dataset.take(-1):\n for i,c in enumerate(np.unique(labels, return_counts=True)[1]):\n counts[i] += c",
"_____no_output_____"
],
[
"barWidth = 0.4\nplt.figure(figsize=(7, 7))\n\nbars1 = counts\n \nr1 = np.arange(len(bars1))\n \nplt.bar(r1, bars1, color='#7f6d5f', width=barWidth, edgecolor='white', label='Test')\n \nplt.xlabel('Label', fontweight='bold')\nplt.ylabel('Frequency', fontweight='bold')\nplt.xticks([r + 0.01 for r in range(len(bars1))], train_dataset.class_names)\n\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"--------",
"_____no_output_____"
]
],
[
[
"def plot_report(report):\n plt.figure(figsize=(8,8))\n plt.plot(report.history['loss'], label='Train')\n plt.plot(report.history['val_loss'], label='Test')\n plt.ylabel('Loss')\n plt.xlabel('Epoch')\n plt.title('Loss Plot')\n plt.legend()\n plt.show()\n\n plt.figure(figsize=(8,8))\n plt.plot(report.history['accuracy'], label='Train')\n plt.plot(report.history['val_accuracy'], label='Test')\n plt.ylabel('Accuracy')\n plt.xlabel('Epoch')\n plt.title('Accuracy Plot')\n plt.legend()\n plt.show()",
"_____no_output_____"
]
],
[
[
"----\n",
"_____no_output_____"
],
[
"# 3. Model Summary",
"_____no_output_____"
]
],
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))",
"_____no_output_____"
],
[
"inp = keras.layers.Input(shape=(80,80,1))\nout = keras.layers.Flatten()(inp)\nout = keras.layers.Dense(1024, activation='relu')(out)\nout = keras.layers.Dense(1024, activation='relu')(out)\nout = keras.layers.Dense(3, activation='softmax')(out)\n\nmodel = keras.models.Model(inputs=inp, outputs=out)",
"_____no_output_____"
],
[
"model.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01), \n metrics=['accuracy', keras.metrics.Precision()]\n)",
"_____no_output_____"
]
],
[
[
"### Summary\n\nThe summary shows that our network has 4 layers. The first Layer is our input layers which has 80*80 nuerons (pixel count). Second and third layers are our hidden layers and both have 1024 neurons. And the last layers is our output layers which has 3 neurons (each representing our classes). Overall we have 7,607,299 parameters (weights and biases) to train.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten_2 (Flatten) (None, 6400) 0 \n_________________________________________________________________\ndense_6 (Dense) (None, 1024) 6554624 \n_________________________________________________________________\ndense_7 (Dense) (None, 1024) 1049600 \n_________________________________________________________________\ndense_8 (Dense) (None, 3) 3075 \n=================================================================\nTotal params: 7,607,299\nTrainable params: 7,607,299\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
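As a quick check of the 7,607,299 figure, the parameter count can be reproduced by hand: each Dense layer has (inputs × units) weights plus one bias per unit.

```python
# Reproducing the parameter count reported by model.summary()
flat = 80 * 80 * 1                 # 6400 inputs after Flatten

dense1 = flat * 1024 + 1024        # 6,554,624 weights + biases
dense2 = 1024 * 1024 + 1024        # 1,049,600
dense3 = 1024 * 3 + 3              # 3,075

print(dense1, dense2, dense3, dense1 + dense2 + dense3)  # total: 7,607,299
```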
[
[
"------------",
"_____no_output_____"
],
[
"# 4. Tanh and Relu (non-normalized data)",
"_____no_output_____"
]
],
[
[
"dataGen = keras.preprocessing.image.ImageDataGenerator()\n\nntrain_dataset = dataGen.flow_from_directory(\n '/content/Data/train',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32\n)\n\nntest_dataset = dataGen.flow_from_directory(\n '/content/Data/test',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32\n)\n\nnunshuffled_train = dataGen.flow_from_directory(\n '/content/Data/train',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32,\n shuffle=False\n)\n\nnunshuffled_test = dataGen.flow_from_directory(\n '/content/Data/test',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32,\n shuffle=False\n)",
"Found 5144 images belonging to 3 classes.\nFound 1288 images belonging to 3 classes.\nFound 5144 images belonging to 3 classes.\nFound 1288 images belonging to 3 classes.\n"
]
],
[
[
"## Relu\n\nAs it can be seen in plot and report of the model, it doesn't train at all. The reason is our activation function is Relu and the data is not normalized. As the Relu function is not bounded, it can produce large numbers so it may cause gradiant explosion with large inputs and numbers would overflow. For this reason the loss in first epoch increased instantly and then became NaN so the network couldn't learn.",
"_____no_output_____"
]
],
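A small standalone NumPy illustration of why raw 0–255 pixel values are problematic with ReLU layers: pre-activations (and hence gradients) grow with the input scale. The weight-init scale here is an assumed "typical" small value, not the Keras default:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(1024, 6400))   # assumed small init scale

x_raw = rng.integers(0, 256, size=(6400, 1)).astype(float)   # raw pixel values
x_norm = x_raw / 255.0                                        # normalized values

z_raw = W @ x_raw     # pre-activations before ReLU
z_norm = W @ x_norm

print(np.abs(z_raw).max())    # hundreds to thousands -> explosion / overflow risk
print(np.abs(z_norm).max())   # a few units -> stable updates
```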
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01), \n metrics=['accuracy']\n)\n\nreport0 = model.fit(ntrain_dataset, validation_data=ntest_dataset, epochs=10)\n\nplot_report(report0)\n\npred1_test = model.predict(nunshuffled_test, batch_size=32)\npred1_train = model.predict(nunshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(nunshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(nunshuffled_train.labels, np.argmax(pred1_train, axis=1)))",
"Epoch 1/10\n161/161 [==============================] - 144s 893ms/step - loss: nan - accuracy: 0.1391 - val_loss: nan - val_accuracy: 0.0901\nEpoch 2/10\n161/161 [==============================] - 144s 894ms/step - loss: nan - accuracy: 0.0942 - val_loss: nan - val_accuracy: 0.0901\nEpoch 3/10\n161/161 [==============================] - 144s 894ms/step - loss: nan - accuracy: 0.0880 - val_loss: nan - val_accuracy: 0.0901\nEpoch 4/10\n161/161 [==============================] - 143s 889ms/step - loss: nan - accuracy: 0.0867 - val_loss: nan - val_accuracy: 0.0901\nEpoch 5/10\n161/161 [==============================] - 143s 887ms/step - loss: nan - accuracy: 0.0956 - val_loss: nan - val_accuracy: 0.0901\nEpoch 6/10\n161/161 [==============================] - 142s 884ms/step - loss: nan - accuracy: 0.0873 - val_loss: nan - val_accuracy: 0.0901\nEpoch 7/10\n161/161 [==============================] - 143s 889ms/step - loss: nan - accuracy: 0.0987 - val_loss: nan - val_accuracy: 0.0901\nEpoch 8/10\n161/161 [==============================] - 143s 888ms/step - loss: nan - accuracy: 0.0940 - val_loss: nan - val_accuracy: 0.0901\nEpoch 9/10\n161/161 [==============================] - 143s 888ms/step - loss: nan - accuracy: 0.0869 - val_loss: nan - val_accuracy: 0.0901\nEpoch 10/10\n161/161 [==============================] - 142s 884ms/step - loss: nan - accuracy: 0.0903 - val_loss: nan - val_accuracy: 0.0901\n"
]
],
[
[
"## Tanh\n\nWe can see in the plot that the accuracy and loss aren't changing much. The reason here is **Vanishing Gradiant**. As the output of Tanh is in range of -1 and 1, the gradiants would be too small, and furthermore, in the back-prop process they become even smaller as they are using the chain rule. So as a result of this, the updates to weights and biases are too small and not much learning will happen.",
"_____no_output_____"
]
],
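A small sketch of the saturation effect: with unnormalized inputs the pre-activations are large, tanh saturates near ±1, and its derivative 1 - tanh(z)^2 is effectively zero, so the weight updates vanish:

```python
import numpy as np

z = np.array([0.5, 2.0, 10.0, 100.0])   # pre-activation magnitudes
grad = 1 - np.tanh(z) ** 2               # derivative of tanh at z

for zi, gi in zip(z, grad):
    print(f"z = {zi:7.1f}   tanh'(z) = {gi:.2e}")
# large |z| (typical with unnormalized inputs) -> gradient ~ 0 -> tiny updates
```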
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='tanh'))\nmodel.add(keras.layers.Dense(1024, activation='tanh'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01), \n metrics=['accuracy']\n)\n\nreport00 = model.fit(ntrain_dataset, validation_data=ntest_dataset, epochs=10)\n\nplot_report(report00)\n\npred1_test = model.predict(nunshuffled_test, batch_size=32)\npred1_train = model.predict(nunshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(nunshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(nunshuffled_train.labels, np.argmax(pred1_train, axis=1)))",
"Epoch 1/10\n161/161 [==============================] - 144s 895ms/step - loss: 1.5753 - accuracy: 0.5213 - val_loss: 0.8660 - val_accuracy: 0.6638\nEpoch 2/10\n161/161 [==============================] - 143s 886ms/step - loss: 0.9836 - accuracy: 0.6224 - val_loss: 1.1966 - val_accuracy: 0.6638\nEpoch 3/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.9550 - accuracy: 0.6031 - val_loss: 0.8489 - val_accuracy: 0.6638\nEpoch 4/10\n161/161 [==============================] - 144s 891ms/step - loss: 0.9743 - accuracy: 0.5918 - val_loss: 0.8519 - val_accuracy: 0.6638\nEpoch 5/10\n161/161 [==============================] - 143s 891ms/step - loss: 0.8903 - accuracy: 0.6373 - val_loss: 0.8475 - val_accuracy: 0.6638\nEpoch 6/10\n161/161 [==============================] - 143s 892ms/step - loss: 0.8943 - accuracy: 0.6364 - val_loss: 0.8385 - val_accuracy: 0.6638\nEpoch 7/10\n161/161 [==============================] - 142s 886ms/step - loss: 0.8762 - accuracy: 0.6610 - val_loss: 0.8713 - val_accuracy: 0.6638\nEpoch 8/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.9066 - accuracy: 0.6432 - val_loss: 0.8769 - val_accuracy: 0.6638\nEpoch 9/10\n161/161 [==============================] - 143s 888ms/step - loss: 0.8658 - accuracy: 0.6653 - val_loss: 0.8610 - val_accuracy: 0.6638\nEpoch 10/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.8817 - accuracy: 0.6303 - val_loss: 0.8338 - val_accuracy: 0.6638\n"
]
],
[
[
"## 4.3.\n\nTanh output is bounded so it won't cause overflow as the Relu did. But in both cases the network couldn't learn at all. Normalization can help with it. Because for Relu the values would be smaller and won't cause overflow or explosion and in Tanh the updates will be more effective.",
"_____no_output_____"
],
[
"# 5\n\nWith trying different networks, the current model had acceptable F1 scores above 0.90.",
"_____no_output_____"
]
],
[
[
"dataGenNorm = keras.preprocessing.image.ImageDataGenerator(rescale=1/255.0)\n\ntrain_dataset = dataGenNorm.flow_from_directory(\n '/content/Data/train',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32,\n)\n\ntest_dataset = dataGenNorm.flow_from_directory(\n '/content/Data/test',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32\n)\n\nunshuffled_train = dataGenNorm.flow_from_directory(\n '/content/Data/train',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32,\n shuffle=False\n)\n\nunshuffled_test = dataGenNorm.flow_from_directory(\n '/content/Data/test',\n target_size=(80,80),\n color_mode='grayscale',\n batch_size=32,\n shuffle=False\n)",
"Found 5144 images belonging to 3 classes.\nFound 1288 images belonging to 3 classes.\nFound 5144 images belonging to 3 classes.\nFound 1288 images belonging to 3 classes.\n"
],
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01), \n metrics=['accuracy', keras.metrics.Precision()]\n)",
"_____no_output_____"
],
[
"report1 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)",
"Epoch 1/10\n161/161 [==============================] - 144s 893ms/step - loss: 0.6856 - accuracy: 0.7033 - precision_2: 0.7236 - val_loss: 0.5432 - val_accuracy: 0.7384 - val_precision_2: 0.7510\nEpoch 2/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.4016 - accuracy: 0.8366 - precision_2: 0.8413 - val_loss: 0.2639 - val_accuracy: 0.9123 - val_precision_2: 0.9209\nEpoch 3/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.3387 - accuracy: 0.8655 - precision_2: 0.8697 - val_loss: 0.2398 - val_accuracy: 0.9107 - val_precision_2: 0.9153\nEpoch 4/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.3157 - accuracy: 0.8762 - precision_2: 0.8792 - val_loss: 0.2023 - val_accuracy: 0.9262 - val_precision_2: 0.9281\nEpoch 5/10\n161/161 [==============================] - 142s 882ms/step - loss: 0.2889 - accuracy: 0.8920 - precision_2: 0.8973 - val_loss: 0.2035 - val_accuracy: 0.9239 - val_precision_2: 0.9280\nEpoch 6/10\n161/161 [==============================] - 142s 883ms/step - loss: 0.2898 - accuracy: 0.8924 - precision_2: 0.8982 - val_loss: 0.2266 - val_accuracy: 0.9061 - val_precision_2: 0.9109\nEpoch 7/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.2542 - accuracy: 0.9009 - precision_2: 0.9037 - val_loss: 0.1914 - val_accuracy: 0.9317 - val_precision_2: 0.9349\nEpoch 8/10\n161/161 [==============================] - 144s 895ms/step - loss: 0.2172 - accuracy: 0.9149 - precision_2: 0.9188 - val_loss: 0.1885 - val_accuracy: 0.9309 - val_precision_2: 0.9321\nEpoch 9/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.2323 - accuracy: 0.9128 - precision_2: 0.9143 - val_loss: 0.1815 - val_accuracy: 0.9379 - val_precision_2: 0.9407\nEpoch 10/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.2203 - accuracy: 0.9129 - precision_2: 0.9169 - val_loss: 0.1663 - val_accuracy: 0.9394 - val_precision_2: 0.9430\n"
],
[
"plot_report(report1)",
"_____no_output_____"
],
[
"pred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))",
"Test:\n precision recall f1-score support\n\n 0 0.96 0.87 0.91 116\n 1 0.91 0.89 0.90 317\n 2 0.95 0.97 0.96 855\n\n accuracy 0.94 1288\n macro avg 0.94 0.91 0.92 1288\nweighted avg 0.94 0.94 0.94 1288\n\n\nTrain:\n precision recall f1-score support\n\n 0 0.93 0.89 0.91 460\n 1 0.90 0.85 0.88 1266\n 2 0.94 0.97 0.95 3418\n\n accuracy 0.93 5144\n macro avg 0.93 0.90 0.91 5144\nweighted avg 0.93 0.93 0.93 5144\n\n"
]
],
[
[
"# 6. Optimizers\n\n#### I. Momentum\n\nNormally, the updates to weights are product of learning rate and the gradiant of error/loss. But if we use momentum, we add a new term. For each new weight update, we add the product of momentum value and the previous update. So the previous updates are considered on each. Because of this one batch can't change the direction of descent if most of the previous batchs were moving toward a direction.\n\nUsing momentum the learning will happen faster as we will have bigger updates. It can also help us to skip the local minima because of the big update.\n\n#### III. Is momentum always good?\n\nA very large momentum can cause random behavior and can stop the learning proccess. And a very small value won't make much change.\n\n#### IV. Adam ",
"_____no_output_____"
],
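The momentum update rule described above, written out as a minimal NumPy sketch on a 1-D quadratic loss. The `velocity` term accumulates a decaying sum of past updates, so a single noisy batch cannot reverse the descent direction; the toy loss and hyperparameters are illustrative assumptions:

```python
import numpy as np

def sgd_momentum_step(w, velocity, grad, lr=0.01, momentum=0.9):
    # velocity remembers previous updates; momentum controls how much of it is kept
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# toy example: minimize L(w) = w^2, so dL/dw = 2w
w, v = 5.0, 0.0
for step in range(5):
    g = 2 * w
    w, v = sgd_momentum_step(w, v, g, lr=0.1, momentum=0.9)
    print(f"step {step}: w = {w:.4f}, velocity = {v:.4f}")
```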
[
"### 6.2. momentum = 0.5",
"_____no_output_____"
]
],
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.5), \n metrics=['accuracy', keras.metrics.Precision()]\n)",
"_____no_output_____"
],
[
"report2 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)\n\nplot_report(report2)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))",
"Epoch 1/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.6620 - accuracy: 0.7358 - precision_5: 0.7620 - val_loss: 0.2544 - val_accuracy: 0.9208 - val_precision_5: 0.9289\nEpoch 2/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.3298 - accuracy: 0.8647 - precision_5: 0.8696 - val_loss: 0.1951 - val_accuracy: 0.9309 - val_precision_5: 0.9342\nEpoch 3/10\n161/161 [==============================] - 143s 886ms/step - loss: 0.2785 - accuracy: 0.9008 - precision_5: 0.9041 - val_loss: 0.3045 - val_accuracy: 0.8734 - val_precision_5: 0.8752\nEpoch 4/10\n161/161 [==============================] - 142s 887ms/step - loss: 0.2926 - accuracy: 0.8885 - precision_5: 0.8930 - val_loss: 0.1935 - val_accuracy: 0.9348 - val_precision_5: 0.9369\nEpoch 5/10\n161/161 [==============================] - 142s 886ms/step - loss: 0.2505 - accuracy: 0.9065 - precision_5: 0.9081 - val_loss: 0.2837 - val_accuracy: 0.8843 - val_precision_5: 0.8847\nEpoch 6/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.2370 - accuracy: 0.9034 - precision_5: 0.9070 - val_loss: 0.1691 - val_accuracy: 0.9464 - val_precision_5: 0.9484\nEpoch 7/10\n161/161 [==============================] - 142s 886ms/step - loss: 0.2147 - accuracy: 0.9178 - precision_5: 0.9202 - val_loss: 0.1895 - val_accuracy: 0.9332 - val_precision_5: 0.9388\nEpoch 8/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.2109 - accuracy: 0.9197 - precision_5: 0.9246 - val_loss: 0.1786 - val_accuracy: 0.9387 - val_precision_5: 0.9406\nEpoch 9/10\n161/161 [==============================] - 143s 886ms/step - loss: 0.2046 - accuracy: 0.9233 - precision_5: 0.9259 - val_loss: 0.2070 - val_accuracy: 0.9216 - val_precision_5: 0.9251\nEpoch 10/10\n161/161 [==============================] - 143s 886ms/step - loss: 0.2186 - accuracy: 0.9168 - precision_5: 0.9200 - val_loss: 0.1501 - val_accuracy: 0.9495 - val_precision_5: 0.9508\n"
]
],
[
[
"### 6.2. momentum = 0.9",
"_____no_output_____"
]
],
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9), \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport3 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)\n\nplot_report(report3)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))\n\n",
"Epoch 1/10\n161/161 [==============================] - 144s 893ms/step - loss: 0.7076 - accuracy: 0.7163 - precision_8: 0.7280 - val_loss: 0.2486 - val_accuracy: 0.9216 - val_precision_8: 0.9340\nEpoch 2/10\n161/161 [==============================] - 143s 888ms/step - loss: 0.3150 - accuracy: 0.8848 - precision_8: 0.8901 - val_loss: 0.2349 - val_accuracy: 0.9146 - val_precision_8: 0.9178\nEpoch 3/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.2896 - accuracy: 0.8874 - precision_8: 0.8906 - val_loss: 0.1891 - val_accuracy: 0.9231 - val_precision_8: 0.9309\nEpoch 4/10\n161/161 [==============================] - 143s 888ms/step - loss: 0.2805 - accuracy: 0.8932 - precision_8: 0.8983 - val_loss: 0.2415 - val_accuracy: 0.9068 - val_precision_8: 0.9102\nEpoch 5/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.2507 - accuracy: 0.9047 - precision_8: 0.9082 - val_loss: 0.2455 - val_accuracy: 0.9014 - val_precision_8: 0.9066\nEpoch 6/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.2358 - accuracy: 0.9099 - precision_8: 0.9135 - val_loss: 0.3292 - val_accuracy: 0.8626 - val_precision_8: 0.8759\nEpoch 7/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.2888 - accuracy: 0.8931 - precision_8: 0.8983 - val_loss: 0.1860 - val_accuracy: 0.9379 - val_precision_8: 0.9414\nEpoch 8/10\n161/161 [==============================] - 143s 892ms/step - loss: 0.2327 - accuracy: 0.9125 - precision_8: 0.9149 - val_loss: 0.1740 - val_accuracy: 0.9410 - val_precision_8: 0.9417\nEpoch 9/10\n161/161 [==============================] - 143s 888ms/step - loss: 0.2306 - accuracy: 0.9135 - precision_8: 0.9146 - val_loss: 0.1651 - val_accuracy: 0.9410 - val_precision_8: 0.9438\nEpoch 10/10\n161/161 [==============================] - 143s 890ms/step - loss: 0.2081 - accuracy: 0.9198 - precision_8: 0.9237 - val_loss: 0.1636 - val_accuracy: 0.9441 - val_precision_8: 0.9468\n"
]
],
[
[
"### 6.2. momentum = 0.99",
"_____no_output_____"
]
],
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.99), \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport4 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)\n\nplot_report(report4)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))\n\n",
"Epoch 1/10\n161/161 [==============================] - 144s 892ms/step - loss: 0.7848 - accuracy: 0.6799 - precision_7: 0.6915 - val_loss: 0.2682 - val_accuracy: 0.8998 - val_precision_7: 0.9084\nEpoch 2/10\n161/161 [==============================] - 143s 886ms/step - loss: 0.4398 - accuracy: 0.8463 - precision_7: 0.8529 - val_loss: 0.9324 - val_accuracy: 0.6747 - val_precision_7: 0.6747\nEpoch 3/10\n161/161 [==============================] - 142s 884ms/step - loss: 1.0046 - accuracy: 0.6676 - precision_7: 0.6679 - val_loss: 0.8351 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 4/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.8399 - accuracy: 0.6647 - precision_7: 0.6647 - val_loss: 0.8472 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 5/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.8276 - accuracy: 0.6756 - precision_7: 0.6756 - val_loss: 0.8362 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 6/10\n161/161 [==============================] - 143s 890ms/step - loss: 0.8259 - accuracy: 0.6719 - precision_7: 0.6719 - val_loss: 0.8353 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 7/10\n161/161 [==============================] - 143s 888ms/step - loss: 0.8277 - accuracy: 0.6698 - precision_7: 0.6698 - val_loss: 0.8345 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 8/10\n161/161 [==============================] - 143s 889ms/step - loss: 0.8525 - accuracy: 0.6514 - precision_7: 0.6514 - val_loss: 0.8374 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 9/10\n161/161 [==============================] - 143s 887ms/step - loss: 0.8581 - accuracy: 0.6501 - precision_7: 0.6501 - val_loss: 0.8347 - val_accuracy: 0.6638 - val_precision_7: 0.6638\nEpoch 10/10\n161/161 [==============================] - 143s 891ms/step - loss: 0.8401 - accuracy: 0.6607 - precision_7: 0.6607 - val_loss: 0.8345 - val_accuracy: 0.6638 - val_precision_7: 0.6638\n"
]
],
[
[
"## 6.2. Different Momentums\n\nWe can see with large momentum (0.99) the models doesn't perform good and stops learning.\n\nWith momentum = 0.5, the previous values aren't affecting the result as much as momentum = 0.9. As a result of this, the low momentum (0.5) still responds to the noises and has a more zigzagy behavior, but the 0.9 one is moving smoother because previous batches have more effect. Overall 0.9 seems a better compromise between the very high and low values.\n",
"_____no_output_____"
],
[
"## 6.4 Adam\n\nWe can see from the plot, Adam has less zigzagy behavior and moves faster toward a lower loss but SGD got slightly better results . This optimizer doesn't require much custom tuning Where SGD sometimes requires custom tuning for its parameters (we saw momentum as an example).",
"_____no_output_____"
]
],
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport5 = model.fit(train_dataset, validation_data=test_dataset, epochs=10)\n\nplot_report(report5)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))\n\n",
"Epoch 1/10\n161/161 [==============================] - 144s 893ms/step - loss: 1.7614 - accuracy: 0.6907 - precision_6: 0.7108 - val_loss: 0.2427 - val_accuracy: 0.9030 - val_precision_6: 0.9095\nEpoch 2/10\n161/161 [==============================] - 142s 883ms/step - loss: 0.2940 - accuracy: 0.8849 - precision_6: 0.8871 - val_loss: 0.4026 - val_accuracy: 0.8346 - val_precision_6: 0.8484\nEpoch 3/10\n161/161 [==============================] - 143s 886ms/step - loss: 0.3097 - accuracy: 0.8824 - precision_6: 0.8905 - val_loss: 0.2521 - val_accuracy: 0.9068 - val_precision_6: 0.9152\nEpoch 4/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.2583 - accuracy: 0.9083 - precision_6: 0.9142 - val_loss: 0.1940 - val_accuracy: 0.9262 - val_precision_6: 0.9311\nEpoch 5/10\n161/161 [==============================] - 142s 883ms/step - loss: 0.2528 - accuracy: 0.9001 - precision_6: 0.9036 - val_loss: 0.1945 - val_accuracy: 0.9262 - val_precision_6: 0.9291\nEpoch 6/10\n161/161 [==============================] - 142s 885ms/step - loss: 0.2613 - accuracy: 0.8969 - precision_6: 0.9014 - val_loss: 0.1871 - val_accuracy: 0.9301 - val_precision_6: 0.9335\nEpoch 7/10\n161/161 [==============================] - 142s 886ms/step - loss: 0.2087 - accuracy: 0.9247 - precision_6: 0.9271 - val_loss: 0.2074 - val_accuracy: 0.9301 - val_precision_6: 0.9370\nEpoch 8/10\n161/161 [==============================] - 142s 883ms/step - loss: 0.2352 - accuracy: 0.9063 - precision_6: 0.9088 - val_loss: 0.2003 - val_accuracy: 0.9247 - val_precision_6: 0.9322\nEpoch 9/10\n161/161 [==============================] - 142s 883ms/step - loss: 0.2122 - accuracy: 0.9213 - precision_6: 0.9241 - val_loss: 0.2878 - val_accuracy: 0.8952 - val_precision_6: 0.8977\nEpoch 10/10\n161/161 [==============================] - 142s 884ms/step - loss: 0.2433 - accuracy: 0.9098 - precision_6: 0.9130 - val_loss: 0.1594 - val_accuracy: 0.9433 - val_precision_6: 0.9498\n"
]
],
[
[
"# 7. Epoch\n\n#### II. Why we need multiple epochs?\n\nModels need to learn parameters for the input data. It learns gradually using usually with a gradual descent algorithm to minimize the loss/error. With a set of data the algorithm may not be able to update its parameters and generelize enough. Because of this we do the learning proccess in multiple iterations (epochs) to learn the data better. Another reason is we may not be able to have a lot of new data to train the model on, so we use the same data in multiple epochs to learn its features. But if we have enough data we may get the optimal result in one iteration.\n\n#### III. Is more epochs always better?\n\nHaving a lot epochs may lead to overfitting. Because it learns the features of the data too much, that it even catches the noises. So it can't generalize well and although it has good performance on the train data, it doesn't perform good on the test data.\n\nTo prevent overfitting, we can use early-stopping techniques. It means we stop the learning process when the it evaluation data results start to worsening.",
"_____no_output_____"
]
],
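A minimal sketch of early stopping with Keras, monitoring the validation loss. The `patience=3` and `restore_best_weights=True` settings are example choices, and the commented `fit` call assumes the `model`, `train_dataset`, and `test_dataset` defined earlier in this notebook:

```python
from tensorflow import keras

# Stop when val_loss has not improved for 3 consecutive epochs and
# roll back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
    restore_best_weights=True,
)

# report = model.fit(
#     train_dataset,
#     validation_data=test_dataset,
#     epochs=50,                 # upper bound; training may stop earlier
#     callbacks=[early_stop],
# )
```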
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport6 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)\n\nplot_report(report6)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))\n\n",
"Epoch 1/20\n161/161 [==============================] - 144s 893ms/step - loss: 1.6424 - accuracy: 0.6909 - precision_9: 0.7086 - val_loss: 0.3559 - val_accuracy: 0.8439 - val_precision_9: 0.8489\nEpoch 2/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.3614 - accuracy: 0.8515 - precision_9: 0.8563 - val_loss: 0.2295 - val_accuracy: 0.9146 - val_precision_9: 0.9165\nEpoch 3/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2974 - accuracy: 0.8941 - precision_9: 0.8967 - val_loss: 0.2980 - val_accuracy: 0.8874 - val_precision_9: 0.8949\nEpoch 4/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2955 - accuracy: 0.8851 - precision_9: 0.8887 - val_loss: 0.1988 - val_accuracy: 0.9262 - val_precision_9: 0.9288\nEpoch 5/20\n161/161 [==============================] - 142s 882ms/step - loss: 0.2762 - accuracy: 0.8938 - precision_9: 0.8989 - val_loss: 0.1876 - val_accuracy: 0.9394 - val_precision_9: 0.9408\nEpoch 6/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2419 - accuracy: 0.9087 - precision_9: 0.9106 - val_loss: 0.1724 - val_accuracy: 0.9410 - val_precision_9: 0.9437\nEpoch 7/20\n161/161 [==============================] - 142s 880ms/step - loss: 0.2311 - accuracy: 0.9089 - precision_9: 0.9119 - val_loss: 0.2092 - val_accuracy: 0.9348 - val_precision_9: 0.9373\nEpoch 8/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2384 - accuracy: 0.9100 - precision_9: 0.9123 - val_loss: 0.1973 - val_accuracy: 0.9317 - val_precision_9: 0.9353\nEpoch 9/20\n161/161 [==============================] - 142s 880ms/step - loss: 0.2075 - accuracy: 0.9174 - precision_9: 0.9199 - val_loss: 0.1597 - val_accuracy: 0.9449 - val_precision_9: 0.9469\nEpoch 10/20\n161/161 [==============================] - 142s 882ms/step - loss: 0.2238 - accuracy: 0.9180 - precision_9: 0.9202 - val_loss: 0.1591 - val_accuracy: 0.9457 - val_precision_9: 0.9464\nEpoch 11/20\n161/161 [==============================] - 141s 879ms/step - loss: 0.2217 - accuracy: 0.9140 - precision_9: 0.9170 - val_loss: 0.2254 - val_accuracy: 0.9193 - val_precision_9: 0.9219\nEpoch 12/20\n161/161 [==============================] - 142s 883ms/step - loss: 0.2019 - accuracy: 0.9326 - precision_9: 0.9343 - val_loss: 0.2280 - val_accuracy: 0.9216 - val_precision_9: 0.9237\nEpoch 13/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2163 - accuracy: 0.9243 - precision_9: 0.9267 - val_loss: 0.1525 - val_accuracy: 0.9464 - val_precision_9: 0.9493\nEpoch 14/20\n161/161 [==============================] - 142s 884ms/step - loss: 0.2007 - accuracy: 0.9243 - precision_9: 0.9275 - val_loss: 0.1600 - val_accuracy: 0.9449 - val_precision_9: 0.9469\nEpoch 15/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.1905 - accuracy: 0.9272 - precision_9: 0.9306 - val_loss: 0.2894 - val_accuracy: 0.8882 - val_precision_9: 0.8905\nEpoch 16/20\n161/161 [==============================] - 142s 886ms/step - loss: 0.1981 - accuracy: 0.9279 - precision_9: 0.9310 - val_loss: 0.2641 - val_accuracy: 0.8929 - val_precision_9: 0.8942\nEpoch 17/20\n161/161 [==============================] - 141s 879ms/step - loss: 0.1992 - accuracy: 0.9326 - precision_9: 0.9338 - val_loss: 0.2189 - val_accuracy: 0.9239 - val_precision_9: 0.9253\nEpoch 18/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.1795 - accuracy: 0.9345 - precision_9: 0.9352 - val_loss: 0.3009 - val_accuracy: 0.8936 - val_precision_9: 0.8954\nEpoch 
19/20\n161/161 [==============================] - 142s 875ms/step - loss: 0.1829 - accuracy: 0.9269 - precision_9: 0.9293 - val_loss: 0.1634 - val_accuracy: 0.9441 - val_precision_9: 0.9469\nEpoch 20/20\n161/161 [==============================] - 142s 883ms/step - loss: 0.1724 - accuracy: 0.9307 - precision_9: 0.9341 - val_loss: 0.1694 - val_accuracy: 0.9379 - val_precision_9: 0.9419\n"
]
],
[
[
"# 8. Loss\n\n#### I. MSE\n\nWe can see the model with MSE isn't good and it stop learning and stalls.\n\n#### II. Why MSE is not good for classifaction?\n\nThe formula for MSE is \n\n$\\frac{1}{n}((actual_1 - predict_1)^2 + ... + (actual_n - predict_n)^2)$\n\nIf we calculate the gradian $\\frac{dL}{dW}$ We reach something that has the term $(predict - actual)(x*predict(1-predict)$ and because we are classifying, the $predict$ is in range 0 and 1. So as the prediction approaches very close to 1 or 0 (being certain about a class) the gradian would become too small (because of $predict(1-predict)$). This causes very small updates to weights and the learning process will eventually stall early with no further update.\n\n\nMSE is used mostly for regression problems.",
"_____no_output_____"
]
],
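[
[
"A quick numeric sketch of the claim above (illustrative only, not part of the experiment): for a sigmoid output with target 1, the MSE gradient with respect to the logit carries the factor $predict(1-predict)$ and vanishes once the prediction saturates, while the cross-entropy gradient is simply $predict - actual$ and stays informative. The prediction values below are made up.",
"_____no_output_____"
]
],
[
[
"# Hypothetical sketch (not part of the experiment): gradients w.r.t. the logit z\n# for a single sigmoid unit with target y = 1, at a few made-up predictions p.\nimport numpy as np\n\np = np.array([0.5, 0.9, 0.99, 0.999, 0.000001])  # example predictions (illustrative)\ny = 1.0\n\n# MSE: L = (y - p)^2, with p = sigmoid(z) -> dL/dz = -2*(y - p) * p*(1 - p)\ngrad_mse = -2 * (y - p) * p * (1 - p)\n\n# Cross-entropy: L = -y*log(p) - (1-y)*log(1-p) -> dL/dz = p - y\ngrad_ce = p - y\n\nprint('p       :', p)\nprint('MSE grad:', grad_mse)   # shrinks toward 0 as p saturates, even when p is badly wrong\nprint('CE grad :', grad_ce)    # stays proportional to the error",
"_____no_output_____"
]
],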
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='mean_squared_error', \n optimizer='adam', \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport8 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)\n\nplot_report(report8)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))\n\n",
"Epoch 1/20\n161/161 [==============================] - 145s 897ms/step - loss: 0.2209 - accuracy: 0.6631 - precision_10: 0.6703 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 2/20\n161/161 [==============================] - 143s 893ms/step - loss: 0.2226 - accuracy: 0.6662 - precision_10: 0.6662 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 3/20\n161/161 [==============================] - 143s 893ms/step - loss: 0.2210 - accuracy: 0.6686 - precision_10: 0.6686 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 4/20\n161/161 [==============================] - 144s 895ms/step - loss: 0.2267 - accuracy: 0.6600 - precision_10: 0.6600 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 5/20\n161/161 [==============================] - 143s 892ms/step - loss: 0.2244 - accuracy: 0.6634 - precision_10: 0.6634 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 6/20\n161/161 [==============================] - 144s 894ms/step - loss: 0.2172 - accuracy: 0.6742 - precision_10: 0.6742 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 7/20\n161/161 [==============================] - 143s 890ms/step - loss: 0.2213 - accuracy: 0.6681 - precision_10: 0.6681 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 8/20\n161/161 [==============================] - 144s 893ms/step - loss: 0.2224 - accuracy: 0.6664 - precision_10: 0.6664 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 9/20\n161/161 [==============================] - 143s 892ms/step - loss: 0.2182 - accuracy: 0.6728 - precision_10: 0.6728 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 10/20\n161/161 [==============================] - 143s 892ms/step - loss: 0.2208 - accuracy: 0.6689 - precision_10: 0.6689 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 11/20\n161/161 [==============================] - 143s 892ms/step - loss: 0.2138 - accuracy: 0.6793 - precision_10: 0.6793 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 12/20\n161/161 [==============================] - 143s 891ms/step - loss: 0.2198 - accuracy: 0.6703 - precision_10: 0.6703 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 13/20\n161/161 [==============================] - 142s 883ms/step - loss: 0.2223 - accuracy: 0.6666 - precision_10: 0.6666 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 14/20\n161/161 [==============================] - 141s 877ms/step - loss: 0.2220 - accuracy: 0.6670 - precision_10: 0.6670 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 15/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2261 - accuracy: 0.6608 - precision_10: 0.6608 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 16/20\n161/161 [==============================] - 141s 880ms/step - loss: 0.2351 - accuracy: 0.6474 - precision_10: 0.6474 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 17/20\n161/161 [==============================] - 142s 880ms/step - loss: 0.2202 - accuracy: 0.6697 - precision_10: 0.6697 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 18/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2179 - accuracy: 0.6731 - precision_10: 0.6731 - val_loss: 0.2241 - val_accuracy: 0.6638 - 
val_precision_10: 0.6638\nEpoch 19/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2277 - accuracy: 0.6584 - precision_10: 0.6584 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\nEpoch 20/20\n161/161 [==============================] - 141s 878ms/step - loss: 0.2191 - accuracy: 0.6714 - precision_10: 0.6714 - val_loss: 0.2241 - val_accuracy: 0.6638 - val_precision_10: 0.6638\n"
]
],
[
[
"# 9. Regularization\n\nRegularization methods are used to prevent overfitting.",
"_____no_output_____"
],
[
"### 9.2. L2=0.0001\n\nUsing L2 method, in each update to weight, we add a product of constant and the previous weight. This constant is predefined and independent of the learning process, so this change to weight, keeps the model from overfitting to the train data and being perfect on that.",
"_____no_output_____"
]
],
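[
[
"A tiny made-up illustration of the update rule described above: the L2 penalty $\\lambda w^2$ contributes $2\\lambda w$ to the gradient, so every step also shrinks the weight a little toward zero (weight decay). The numbers are invented for illustration; only the coefficient 0.0001 matches the model below.",
"_____no_output_____"
]
],
[
[
"# Hypothetical one-step illustration of L2 weight decay (values are made up).\nw = 0.8          # current weight\ngrad_data = 0.1  # gradient of the data loss w.r.t. w\nlr = 0.01        # learning rate\nlam = 0.0001     # L2 coefficient, same value as used in the model below\n\nstep_plain = w - lr * grad_data               # update without regularization\nstep_l2 = w - lr * (grad_data + 2 * lam * w)  # L2 adds 2*lam*w to the gradient\n\nprint('plain update:', step_plain)\nprint('with L2     :', step_l2)  # slightly smaller: weights are continually pulled toward 0",
"_____no_output_____"
]
],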
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu', kernel_regularizer=keras.regularizers.l2(l2=0.0001)))\nmodel.add(keras.layers.Dense(1024, activation='relu', kernel_regularizer=keras.regularizers.l2(l2=0.0001)))\nmodel.add(keras.layers.Dense(3, activation='softmax', kernel_regularizer=keras.regularizers.l2(l2=0.0001)))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport8 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)",
"Epoch 1/20\n161/161 [==============================] - 149s 914ms/step - loss: 1.6743 - accuracy: 0.7084 - precision: 0.7338 - val_loss: 0.4033 - val_accuracy: 0.9224 - val_precision: 0.9283\nEpoch 2/20\n161/161 [==============================] - 143s 891ms/step - loss: 0.4860 - accuracy: 0.8760 - precision: 0.8801 - val_loss: 0.5935 - val_accuracy: 0.8261 - val_precision: 0.8310\nEpoch 3/20\n161/161 [==============================] - 142s 886ms/step - loss: 0.5019 - accuracy: 0.8642 - precision: 0.8675 - val_loss: 0.3192 - val_accuracy: 0.9301 - val_precision: 0.9336\nEpoch 4/20\n161/161 [==============================] - 141s 881ms/step - loss: 0.3795 - accuracy: 0.9031 - precision: 0.9062 - val_loss: 0.3114 - val_accuracy: 0.9286 - val_precision: 0.9347\nEpoch 5/20\n161/161 [==============================] - 141s 880ms/step - loss: 0.3505 - accuracy: 0.9092 - precision: 0.9129 - val_loss: 0.2789 - val_accuracy: 0.9371 - val_precision: 0.9397\nEpoch 6/20\n161/161 [==============================] - 141s 879ms/step - loss: 0.3414 - accuracy: 0.9060 - precision: 0.9106 - val_loss: 0.2911 - val_accuracy: 0.9262 - val_precision: 0.9346\nEpoch 7/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.3252 - accuracy: 0.9065 - precision: 0.9105 - val_loss: 0.2532 - val_accuracy: 0.9387 - val_precision: 0.9407\nEpoch 8/20\n161/161 [==============================] - 141s 879ms/step - loss: 0.2787 - accuracy: 0.9220 - precision: 0.9243 - val_loss: 0.2611 - val_accuracy: 0.9317 - val_precision: 0.9375\nEpoch 9/20\n161/161 [==============================] - 142s 882ms/step - loss: 0.2752 - accuracy: 0.9239 - precision: 0.9258 - val_loss: 0.2776 - val_accuracy: 0.9130 - val_precision: 0.9171\nEpoch 10/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2894 - accuracy: 0.9158 - precision: 0.9186 - val_loss: 0.2310 - val_accuracy: 0.9433 - val_precision: 0.9452\nEpoch 11/20\n161/161 [==============================] - 142s 885ms/step - loss: 0.2806 - accuracy: 0.9152 - precision: 0.9187 - val_loss: 0.2586 - val_accuracy: 0.9270 - val_precision: 0.9297\nEpoch 12/20\n161/161 [==============================] - 141s 876ms/step - loss: 0.2600 - accuracy: 0.9254 - precision: 0.9272 - val_loss: 0.2192 - val_accuracy: 0.9340 - val_precision: 0.9360\nEpoch 13/20\n161/161 [==============================] - 141s 879ms/step - loss: 0.2315 - accuracy: 0.9331 - precision: 0.9346 - val_loss: 0.2483 - val_accuracy: 0.9177 - val_precision: 0.9218\nEpoch 14/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2316 - accuracy: 0.9316 - precision: 0.9328 - val_loss: 0.2131 - val_accuracy: 0.9371 - val_precision: 0.9385\nEpoch 15/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2340 - accuracy: 0.9346 - precision: 0.9354 - val_loss: 0.2483 - val_accuracy: 0.9154 - val_precision: 0.9168\nEpoch 16/20\n161/161 [==============================] - 141s 880ms/step - loss: 0.2272 - accuracy: 0.9305 - precision: 0.9329 - val_loss: 0.2204 - val_accuracy: 0.9348 - val_precision: 0.9375\nEpoch 17/20\n161/161 [==============================] - 141s 879ms/step - loss: 0.2253 - accuracy: 0.9326 - precision: 0.9336 - val_loss: 0.1916 - val_accuracy: 0.9464 - val_precision: 0.9472\nEpoch 18/20\n161/161 [==============================] - 142s 881ms/step - loss: 0.2146 - accuracy: 0.9356 - precision: 0.9363 - val_loss: 0.1998 - val_accuracy: 0.9441 - val_precision: 0.9440\nEpoch 19/20\n161/161 [==============================] - 142s 883ms/step - loss: 0.1868 - 
accuracy: 0.9485 - precision: 0.9488 - val_loss: 0.3068 - val_accuracy: 0.9006 - val_precision: 0.9023\nEpoch 20/20\n161/161 [==============================] - 142s 879ms/step - loss: 0.2172 - accuracy: 0.9346 - precision: 0.9364 - val_loss: 0.2573 - val_accuracy: 0.9130 - val_precision: 0.9163\n"
],
[
"plot_report(report8)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))",
"_____no_output_____"
]
],
[
[
"### 9.3. dropout = 0.1\n\nTo combat overfitting, ensemble methods can be used where multiple models results are combined to give one final result. But training multiple architectures is expensive. With dropout method we can simulate having multiple models. On each layer we randomly drop/ignore some neurons' outputs so it's like we have a layer with less neurons (different architecture). On each iteration the data are viewed from different model, so the overall result would be more generalized.\n\nWe can see from the plots and reports, this models performs good and doesn't suffer from overfitting.",
"_____no_output_____"
]
],
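[
[
"A small made-up simulation of the mechanism described above: inverted dropout keeps each activation with probability 0.9, zeroes the rest, and rescales the kept ones by 1/0.9 so the expected activation is unchanged. This is only an illustration and is separate from the Keras `Dropout` layers used in the model below.",
"_____no_output_____"
]
],
[
[
"# Hypothetical sketch of (inverted) dropout with rate 0.1 -- not part of the experiment.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nrate = 0.1\nx = np.ones(10)                   # pretend activations of one layer\n\nmask = rng.random(10) >= rate     # keep each unit with probability 1 - rate\ndropped = x * mask / (1 - rate)   # zero some units, rescale the survivors\n\nprint('mask  :', mask.astype(int))\nprint('output:', dropped)         # at inference time the layer is just the identity",
"_____no_output_____"
]
],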
[
[
"model = keras.models.Sequential()\nmodel.add(keras.layers.Input(shape=(80, 80, 1)))\nmodel.add(keras.layers.Flatten())\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dropout(0.1))\nmodel.add(keras.layers.Dense(1024, activation='relu'))\nmodel.add(keras.layers.Dropout(0.1))\nmodel.add(keras.layers.Dense(3, activation='softmax'))\n\nmodel.compile(\n loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy', keras.metrics.Precision()]\n)\n\nreport9 = model.fit(train_dataset, validation_data=test_dataset, epochs=20)\n\nplot_report(report9)\n\npred1_test = model.predict(unshuffled_test, batch_size=32)\npred1_train = model.predict(unshuffled_train, batch_size=32)\nprint(\"Test:\")\nprint(classification_report(unshuffled_test.labels, np.argmax(pred1_test, axis=1)))\nprint(\"\\nTrain:\")\nprint(classification_report(unshuffled_train.labels, np.argmax(pred1_train, axis=1)))\n\n",
"Epoch 1/20\n161/161 [==============================] - 145s 899ms/step - loss: 1.6637 - accuracy: 0.6771 - precision_1: 0.6975 - val_loss: 0.2380 - val_accuracy: 0.9138 - val_precision_1: 0.9143\nEpoch 2/20\n161/161 [==============================] - 144s 896ms/step - loss: 0.3299 - accuracy: 0.8717 - precision_1: 0.8755 - val_loss: 0.2324 - val_accuracy: 0.9154 - val_precision_1: 0.9220\nEpoch 3/20\n161/161 [==============================] - 143s 893ms/step - loss: 0.2964 - accuracy: 0.8832 - precision_1: 0.8891 - val_loss: 0.2656 - val_accuracy: 0.9053 - val_precision_1: 0.9083\nEpoch 4/20\n161/161 [==============================] - 143s 892ms/step - loss: 0.3493 - accuracy: 0.8714 - precision_1: 0.8759 - val_loss: 0.2496 - val_accuracy: 0.9022 - val_precision_1: 0.9090\nEpoch 5/20\n161/161 [==============================] - 143s 892ms/step - loss: 0.2654 - accuracy: 0.9012 - precision_1: 0.9037 - val_loss: 0.1919 - val_accuracy: 0.9332 - val_precision_1: 0.9405\nEpoch 6/20\n161/161 [==============================] - 144s 892ms/step - loss: 0.2312 - accuracy: 0.9167 - precision_1: 0.9179 - val_loss: 0.2291 - val_accuracy: 0.9045 - val_precision_1: 0.9083\nEpoch 7/20\n161/161 [==============================] - 144s 897ms/step - loss: 0.2388 - accuracy: 0.9123 - precision_1: 0.9154 - val_loss: 0.1962 - val_accuracy: 0.9348 - val_precision_1: 0.9361\nEpoch 8/20\n161/161 [==============================] - 145s 901ms/step - loss: 0.2466 - accuracy: 0.9069 - precision_1: 0.9113 - val_loss: 0.1667 - val_accuracy: 0.9449 - val_precision_1: 0.9484\nEpoch 9/20\n161/161 [==============================] - 144s 894ms/step - loss: 0.2590 - accuracy: 0.9032 - precision_1: 0.9070 - val_loss: 0.2031 - val_accuracy: 0.9239 - val_precision_1: 0.9302\nEpoch 10/20\n161/161 [==============================] - 144s 893ms/step - loss: 0.2377 - accuracy: 0.9090 - precision_1: 0.9109 - val_loss: 0.1946 - val_accuracy: 0.9255 - val_precision_1: 0.9290\nEpoch 11/20\n161/161 [==============================] - 144s 894ms/step - loss: 0.2358 - accuracy: 0.9146 - precision_1: 0.9173 - val_loss: 0.2583 - val_accuracy: 0.8960 - val_precision_1: 0.9000\nEpoch 12/20\n161/161 [==============================] - 144s 894ms/step - loss: 0.2490 - accuracy: 0.9046 - precision_1: 0.9080 - val_loss: 0.2121 - val_accuracy: 0.9138 - val_precision_1: 0.9186\nEpoch 13/20\n161/161 [==============================] - 144s 894ms/step - loss: 0.2399 - accuracy: 0.9121 - precision_1: 0.9145 - val_loss: 0.1826 - val_accuracy: 0.9387 - val_precision_1: 0.9407\nEpoch 14/20\n161/161 [==============================] - 146s 905ms/step - loss: 0.2215 - accuracy: 0.9162 - precision_1: 0.9183 - val_loss: 0.1537 - val_accuracy: 0.9449 - val_precision_1: 0.9476\nEpoch 15/20\n161/161 [==============================] - 144s 893ms/step - loss: 0.2161 - accuracy: 0.9153 - precision_1: 0.9176 - val_loss: 0.1888 - val_accuracy: 0.9332 - val_precision_1: 0.9389\nEpoch 16/20\n161/161 [==============================] - 144s 895ms/step - loss: 0.2112 - accuracy: 0.9258 - precision_1: 0.9284 - val_loss: 0.2387 - val_accuracy: 0.8975 - val_precision_1: 0.9001\nEpoch 17/20\n161/161 [==============================] - 144s 894ms/step - loss: 0.2253 - accuracy: 0.9164 - precision_1: 0.9192 - val_loss: 0.1904 - val_accuracy: 0.9309 - val_precision_1: 0.9315\nEpoch 18/20\n161/161 [==============================] - 144s 893ms/step - loss: 0.2047 - accuracy: 0.9265 - precision_1: 0.9294 - val_loss: 0.1845 - val_accuracy: 0.9441 - val_precision_1: 0.9462\nEpoch 
19/20\n161/161 [==============================] - 144s 893ms/step - loss: 0.2000 - accuracy: 0.9254 - precision_1: 0.9284 - val_loss: 0.1843 - val_accuracy: 0.9348 - val_precision_1: 0.9375\nEpoch 20/20\n161/161 [==============================] - 145s 902ms/step - loss: 0.2013 - accuracy: 0.9260 - precision_1: 0.9290 - val_loss: 0.1998 - val_accuracy: 0.9224 - val_precision_1: 0.9249\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
cb85c7e9e789d746dc318a9ef226403aee765653
| 35,593 |
ipynb
|
Jupyter Notebook
|
jupyter_notebooks/8_0_3_quick_auditory_test.ipynb
|
charlieccarey/rdoc
|
2e857f29e128f893706d042d583eec698c0bc56a
|
[
"CC-BY-4.0"
] | null | null | null |
jupyter_notebooks/8_0_3_quick_auditory_test.ipynb
|
charlieccarey/rdoc
|
2e857f29e128f893706d042d583eec698c0bc56a
|
[
"CC-BY-4.0"
] | 5 |
2016-05-07T04:42:06.000Z
|
2018-04-19T01:08:38.000Z
|
jupyter_notebooks/8_0_3_quick_auditory_test.ipynb
|
charlieccarey/rdoc
|
2e857f29e128f893706d042d583eec698c0bc56a
|
[
"CC-BY-4.0"
] | null | null | null | 30.950435 | 235 | 0.533953 |
[
[
[
"# Quick and dirty test Auditory perception whole docs vs. other categories",
"_____no_output_____"
]
],
[
[
"### Positive corpus from all Auditory abstracts\n- 146 documents in batch_05_AP_pmids (most are actually AP)\n\n### Compare Auditory perception to corpus for other topics\nDecreasing distance:\n- 1000 disease documents\n- 1000 arousal documents\n- 1000 auditory perception documents\n- 1000 psychology documents, psyc_1000_ids\n- 156 new arousal documents, batch_04_AR_pmids (most are probably AP, but a few prob are not.)",
"_____no_output_____"
],
[
"## Setup our deepdive app",
"_____no_output_____"
],
[
"deepdive_app/my_app/:\n - db.url # name for this db\n - deepdive.conf # contains extractors, inference rules, specify holdout.\n - input/raw_sentences\n - input/annotated_sentences\n - input/init.sh\n - udf/* # user defined functions used within deepdive.conf\n \nSteps to build app:\n\ndeepdive initdb:\n - db started with schema\n - runs init.sh to preload deepdive postgres db with raw, annotated sentences\n\ndeepdive run:\n - creates run/* directory for each run\n - runs deepdive.conf which holds the deepdive pipeline extractors and rules\n - in particular, the extractors set up the features and rules to be used by deepdive\n \nWe have some of these items as templates in a template directory.",
"_____no_output_____"
]
],
[
[
"!pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start # deepdive and medic",
"pg_ctl: another server might be running; trying to start server anyway\r\nserver starting\r\n"
],
[
"templates='/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/templates_deepdive_app_bagofwords'\n\napp_dir='/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception'\n%mkdir {app_dir}",
"mkdir: /Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception: File exists\r\n"
],
[
"%cd {app_dir}",
"/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception\n"
],
[
"%cp -r {templates}/* {app_dir}/",
"mkdir: /Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception: File exists\n/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception\n"
],
[
"# modify the postgres db name\n# \n!echo postgresql://localhost/8_0_3_quick_auditory_perception > db.url",
"_____no_output_____"
]
],
[
[
"## Fill input directory based on document abstracts",
"_____no_output_____"
]
],
[
[
"%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception/input'",
"/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception/input\n"
]
],
[
[
"### Prepping the Auditory perception positive, negative training, and a set of unkowns to test.\ntraining set:\n- 146 documents as postive\n- 1000 documents as negative\n\nour unknowns are a mix of positive, likely negative, most likely negative.",
"_____no_output_____"
],
[
"abstracts from other topics",
"_____no_output_____"
]
],
[
[
"def get_abstracts(pmid_list_file):\n abstracts=!medic --format tsv write --pmid-list {pmid_list_file} 2>/dev/null\n return([a.split('\\t', 2) for a in abstracts])",
"_____no_output_____"
],
[
"ap_146 = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/batch_05_AP_pmids')\nap_1000 = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/AP00_1000_ids')\ndiss = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/diss_1000_ids')\n# psyc = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/psyc_1000_ids')\n# ar_1000 = get_abstracts('/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/task_data_temp/AR_1000_ids')",
"_____no_output_____"
]
],
[
[
"### annotated sentences\n my_id sentences [tf] \\N\n - column 3, true, false, null\n - column 4 null id for deepdive's use",
"_____no_output_____"
]
],
[
[
"import io\nimport codecs\nfrom spacy.en import English\nnlp = English(parser=True, tagger=True) # so we can sentence parse",
"_____no_output_____"
],
[
"def spacy_lemma_gt_len(text, length=2):\n '''Create bag of unique lemmas, requiring lemma length > length\n \n Note: setting length to 1 may mess up our postgres arrays as we would\n get commas here, unless we were to quote everything.\n '''\n tokens = []\n #doc = nlp(text.decode('utf8')) #\"This is a sentence. Here's another...\".decode('utf8'))\n parsed_data = nlp(text) #\"This is a sentence. Here's another...\".decode('utf8'))\n for token in parsed_data:\n if len(token.lemma_) > length:\n tokens.append(token.lemma_.lower())\n return(list(set(tokens)))\n\n# def remove_stop_words():\n# pass\n\n# def spacy_lemma_biwords_gt_len(text, length=3):\n# '''Create bag of unique bi-lemmas, requiring lemma length > length\n \n# We are crudely eliminating any bi-lemmas that have commas in them to save us in loading postgres arrays.\n# '''\n# biwords = []\n# parsed_data = nlp(text)\n# skip_chars = [',', '\"', \"'\"]\n# for i in range(1, len(parsed_data) - 1):\n# skip = False\n# biword = u'{} {}'.format(parsed_data[i].lemma_.lower(), parsed_data[i+1].lemma_.lower())\n# if (parsed_data[i].lemma_ in skip_chars or parsed_data[i+1].lemma_ in skip_chars):\n# skip = True\n# if len(biword) > length and not skip:\n# biwords.append(biword)\n# return(list(set(biwords)))\n\ndef get_scored_abstract_bow(abstracts, score):\n '''Return annotated bag of words.\n my_id sentences [tf] \\N\n - score (postgres boolean) : t f \\N\n - column 3, true, false, null.\n - column 4 null id for deepdive's use.\n - {{}} is to wrap list as postgres array.\n '''\n results = []\n for a in abstracts:\n # bow = spacy_lemma_gt_len(a[2].decode('utf8'), length=2)\n bow = spacy_lemma_gt_len(a[2].decode('utf-8'), length=2)\n # maybe remove stop words\n bow = u', '.join(bow)\n results.append(u'{}\\t{{{}}}\\t{}\\t{}'.format(a[0], bow, score, '\\N'))\n return(results)\n\n",
"_____no_output_____"
],
[
"def write_raw_sentences(fname, annotations, score=None):\n '''\n Annotations (list of lists) : [[id, title, abstract],...]\n score (postgres boolean) : t f \\N'''\n with codecs.open(fname, 'a', encoding = 'utf-8') as f:\n for a in annotations:\n f.write(u'{}\\t{}\\t{}\\N\\n'.format(a[0], a[2].decode('utf-8'), score))\n \ndef write_annotated_sentences(fname, annotations):\n ''' \n Annotations (list of strings) : [\"id\\tbagofwords\\tpostgres_boolean\\t\\N\",...]\n '''\n with codecs.open(fname, 'a', encoding = 'utf-8') as f:\n for a in annotations:\n a = a.replace('\"', '') # avoid postgres malformed array on unescaped quotes\n f.write(a + '\\n')",
"_____no_output_____"
],
[
"%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception/input'\n%rm ./raw_sentences\nwrite_raw_sentences('raw_sentences', ap_146, 't')\nwrite_raw_sentences('raw_sentences', diss, 'f')\nwrite_raw_sentences('raw_sentences', ap_1000, '\\N')",
"/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception/input\n"
],
[
"ap_146_pos = get_scored_abstract_bow(ap_146, 't')\ndiss_neg = get_scored_abstract_bow(diss, 'f')\nap_1000_null = get_scored_abstract_bow(ap_1000, '\\N')",
"_____no_output_____"
],
[
"%rm './annotated_sentences'\nwrite_annotated_sentences('annotated_sentences', ap_146_pos)\nwrite_annotated_sentences('annotated_sentences', diss_neg)\nwrite_annotated_sentences('annotated_sentences', ap_1000_null)",
"_____no_output_____"
]
],
[
[
"### Add in Null equivalents for the entire training set.\nThis is so we can see their predictions, whether they are in the holdout fraction or not.\nOtherwise we can not see the results of the non-holdout portion.",
"_____no_output_____"
]
],
[
[
"write_raw_sentences('raw_sentences', ap_146, '\\N')\nwrite_raw_sentences('raw_sentences', diss, '\\N')\n\nap_146_null = get_scored_abstract_bow(ap_146, '\\N')\ndiss_null = get_scored_abstract_bow(diss, '\\N')\nwrite_annotated_sentences('annotated_sentences', ap_146_null)\nwrite_annotated_sentences('annotated_sentences', diss_null)",
"_____no_output_____"
]
],
[
[
"## Explanation of how the sentences get into deepdive.\ninput/init.sh is executed when we run deepdive initdb",
"_____no_output_____"
]
],
[
[
"# contents of input/init.sh\ndeepdive sql \"COPY _raw_sentences FROM STDIN\" < ${APP_HOME}\"/input/raw_sentences\"\ndeepdive sql \"COPY _annotated_sentences FROM STDIN\" < ${APP_HOME}\"/input/annotated_sentences\"",
"_____no_output_____"
]
],
[
[
"## init and run deepdive app\nFrom the top level of this deepdive app.",
"_____no_output_____"
]
],
[
[
"%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception'",
"/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception\n"
]
],
[
[
"# on command line\ndeepdive initdb\ndeepdive run",
"_____no_output_____"
]
],
[
[
"## inspect results\n1. inspect - deepdive's calibration graphs showing accuracy, holdout and holdout + unknowns (nulls)\n2. extract and inspect expectation vs test values\n3. Recall that only the 'holdout' portion of the training data gets an expectation assigned.\n\n### How to get reports out on all the training, not just the holdout?\nJust put all the training back in as nulls.",
"_____no_output_____"
]
],
[
[
"cmd = ('select has_term,sentence_id,id,category,expectation '\n 'from _annotated_sentences_has_term_inference order by random() limit 10')\n!deepdive sql \"{cmd}\"\n# output tsv\n# !deepdive sql eval \"{cmd}\" format=tsv",
" has_term | sentence_id | id | category | expectation \r\n----------+-------------+------+----------+-------------\r\n | 24386403 | 1900 | 1 | 0.998\r\n | 24617559 | 3032 | 1 | 0\r\n f | 24902046 | 963 | 1 | 0\r\n | 24161466 | 1868 | 1 | 1\r\n | 23510647 | 1712 | 1 | 0\r\n | 26527069 | 3279 | 1 | 0\r\n f | 24286024 | 836 | 1 | 0\r\n | 23285949 | 1654 | 1 | 0\r\n | 23179223 | 2735 | 1 | 0.004\r\n f | 23063979 | 570 | 1 | 0\r\n(10 rows)\r\n\r\n"
]
],
[
[
" has_term | sentence_id | id | category | expectation\n----------+-------------+------+----------+-------------\n | 23117535 | 1617 | 1 | 0.012\n t | 21305666 | 53 | 1 | 1\n | 23128585 | 1619 | 1 | 0.908\n | 23442569 | 1691 | 1 | 0.312\n | 24793771 | 1971 | 1 | 1\n f | 23423257 | 653 | 1 | 0\n f | 24659875 | 913 | 1 | 0\n | 23432759 | 1687 | 1 | 0.034\n | 24383225 | 1893 | 1 | 0.076\n | 22790547 | 1554 | 1 | 0.808",
"_____no_output_____"
]
],
[
[
"### Running deepdive another time, get different holdouts.\n- The holdout fraction in the deepdive.conf file hasn't changed.\n- The holdout fraction seems to simply be a rough guide.\n- There is nothing in documentation about specifying the random seed or how the random selection is made.",
"_____no_output_____"
]
],
[
[
"$ deepdive initdb\n$ deepdive run\n$ deepdive sql 'select has_term,sentence_id,id,category,expectation from _annotated_sentences_has_term_inference' | cut -f1 -d'|' | sort | uniq -c\n 1\n2146\n 250 f\n 1 has_term\n 37 t\n 1 (2433 rows)\n 1 ----------+-------------+------+----------+-------------\n\n$ deepdive initdb\n$ deepdive run\n$ deepdive sql 'select has_term,sentence_id,id,category,expectation from _annotated_sentences_has_term_inference' | cut -f1 -d'|' | sort | uniq -c\n 1\n2146\n 259 f\n 1 has_term\n 33 t\n 1 (2433 rows)\n 1 ----------+-------------+------+----------+-------------",
"_____no_output_____"
]
],
[
[
"## Review our expected input numbers\nYes, everything checks out. We have duplicates due to the pseudolabeling of the training test. And a few duplicates due to not having cleaned up our 1000 unknown-to-predict set that might have overlapped with the training set.\n\n1146 training records, 146 true, 1000 false.",
"_____no_output_____"
]
],
[
[
"!wc input/raw_sentences",
" 3292 687066 4742740 input/raw_sentences\r\n"
],
[
"!cut -f1 input/raw_sentences | sort | uniq | wc",
" 2134 2134 19206\r\n"
],
[
"!cut -f1 input/annotated_sentences | sort | uniq | wc",
" 2134 2134 19206\r\n"
],
[
"!cut -f1 input/raw_sentences | sort | uniq -c | sort | grep -v ' 1 ' | wc",
" 1146 2292 16044\r\n"
]
],
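[
[
"The same bookkeeping can be cross-checked in plain Python (a sketch; it assumes the tab-separated `input/raw_sentences` file written above, with the pmid in the first column).",
"_____no_output_____"
]
],
[
[
"# Sketch: cross-check the shell counts above in Python.\n# Assumes input/raw_sentences is the tab-separated file written earlier (pmid in column 1).\nfrom collections import Counter\n\nwith open('input/raw_sentences') as f:\n    pmids = [line.split('\\t', 1)[0] for line in f if line.strip()]\n\ncounts = Counter(pmids)\nprint('total records : {}'.format(len(pmids)))\nprint('unique pmids  : {}'.format(len(counts)))\n# pmids appearing more than once: the 1146 pseudo-nulled training pmids\nprint('seen > 1 time : {}'.format(sum(1 for c in counts.values() if c > 1)))\n# pmids appearing three times: training docs that also sit in the 1000 to-be-predicted set\nprint('seen 3 times  : {}'.format(sum(1 for c in counts.values() if c == 3)))",
"_____no_output_____"
]
],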
[
[
"Not shown: 12 in the training set were also in the 1000 to be predicted set.",
"_____no_output_____"
]
],
[
[
"!cut -f1 input/raw_sentences | sort | uniq -c | sort | grep -v ' 1 ' | sed 's/ *. //' > mult_rec_pmids\n!cut -f1 input/raw_sentences | sort | uniq -c | sort | grep -v ' 1 ' | sed 's/ *. //' | wc # number training records",
" 1146 1146 10314\r\n"
],
[
"#!grep -h -f mult_rec_pmids input/raw_sentences input/annotated_sentences | cut -f1,3 | sort | uniq -c | wc",
" 2292 6876 37818\r\n"
]
],
[
[
"## pull out clean results sets of the full training and unkown test sets.",
"_____no_output_____"
]
],
[
[
" Table \"public._raw_sentences\"\n Column | Type | Modifiers | Storage | Stats target | Description\n-------------+------+-----------+----------+--------------+-------------\n sentence_id | text | | extended | |\n sentence | text | | extended | |\n terms | text | | extended | |\n\n View \"public._annotated_sentences_has_term_inference\"\n Column | Type | Modifiers | Storage | Description\n-------------+------------------+-----------+----------+-------------\n sentence_id | text | | extended |\n words | text[] | | extended |\n has_term | boolean | | plain |\n id | bigint | | plain |\n category | bigint | | plain |\n expectation | double precision | | plain |\n\ncategory is all 1 (we had a single category prediction)",
"_____no_output_____"
]
],
[
[
"fields = 'terms,has_term,sentence_id,expectation,sentence'\n\n!deepdive sql 'DROP TABLE cc_neg_holdout'\n!deepdive sql 'DROP TABLE cc_pos_holdout'\n!deepdive sql 'DROP TABLE cc_training'",
"ERROR: table \"cc_neg_holdout\" does not exist\nDROP TABLE\nDROP TABLE\n"
],
[
"neg_holdout = ('SELECT DISTINCT r.terms,has_term,a.sentence_id,expectation INTO '\n 'cc_neg_holdout FROM '\n '_annotated_sentences_has_term_inference as a JOIN '\n '_raw_sentences as r ON '\n 'a.sentence_id = r.sentence_id WHERE '\n 'NOT a.has_term '\n 'ORDER BY a.sentence_id') # if include r.terms would see we pseudonulled all these.\npos_holdout = ('SELECT DISTINCT r.terms,has_term,a.sentence_id,expectation INTO '\n 'cc_pos_holdout FROM '\n '_annotated_sentences_has_term_inference as a JOIN '\n '_raw_sentences as r ON '\n 'a.sentence_id = r.sentence_id WHERE '\n 'a.has_term '\n 'ORDER BY a.sentence_id') # if include r.terms would see we pseudonulled all these.\n\n# test = (\"SELECT DISTINCT r.terms,a.has_term,a.sentence_id,a.expectation FROM \"\n# \"_annotated_sentences_has_term_inference as a JOIN \"\n# \"_raw_sentences as r ON \"\n# \"a.sentence_id = r.sentence_id JOIN \"\n# \"cc_neg_holdout as n ON a.sentence_id = n.sentence_id WHERE \"\n# \"a.has_term IS NULL AND r.terms IS NOT NULL \"\n# \"ORDER BY a.sentence_id\") # 259\n\npos_neg_input = (\"SELECT DISTINCT r.terms,a.has_term,a.sentence_id,a.expectation INTO \"\n \"cc_all_input FROM \"\n \"_annotated_sentences_has_term_inference as a JOIN \"\n \"_raw_sentences as r ON \"\n \"a.sentence_id = r.sentence_id LEFT JOIN \"\n \"cc_neg_holdout as n ON a.sentence_id = n.sentence_id WHERE \"\n \"a.has_term IS NULL AND r.terms IS NOT NULL \"\n \"ORDER BY a.sentence_id\")\n\ntraining = (\"SELECT DISTINCT a.terms,a.has_term,a.sentence_id,a.expectation INTO \"\n \"cc_training FROM \"\n \"cc_all_input as a LEFT JOIN \"\n \"cc_pos_holdout as p ON \"\n \"a.sentence_id = p.sentence_id LEFT JOIN \"\n \"cc_neg_holdout as n ON \"\n \"a.sentence_id = n.sentence_id WHERE \"\n \"p.sentence_id IS NULL AND n.sentence_id IS NULL AND a.terms IS NOT NULL\")\n\nreport = \"select * from cc_pos_holdout UNION ALL select cc_pos_holdout\" # as p union all select t.terms from cc_training as t\"\n\n# report = (\"SELECT * \"\n# \"cc_all_input UNION ALL \"\n# \"SELECT * FROM cc_pos_holdout UNION ALL \"\n# \"SELECT * FROM cc_training \")\n# WHERE \"\n# \"c.has_term IS NULL AND r.terms IS NOT NULL \"\n# \"ORDER BY a.sentence_id\")\n\n# unk_input = (\"SELECT DISTINCT r.terms,has_term,a.sentence_id,category,expectation FROM \"\n# \"_annotated_sentences_has_term_inference as a JOIN \"\n# \"_raw_sentences as r ON \"\n# \"a.sentence_id = r.sentence_id\")\n#result=!deepdive sql \"{neg_holdout}\"\n#result=!deepdive sql \"{pos_holdout}\"\n#pos_neg_input_results=!deepdive sql \"{pos_neg_input}\" # should be 1000\n###unk_input_results=!deepdive sql \"{unk_input}\" # should be 1000\n#test=!deepdive sql \"{test}\"\n#test = !deepdive sql \"{training}\"\ntest = !deepdive sql \"{report}\"",
"_____no_output_____"
],
[
"print(len(test))\ntest[0:6]\n",
"3\n"
],
[
"neg_training = 'select has_term,sentence_id,category,expectation from _annotated_sentences_has_term_inference as a WHERE NOT a.has_term'\n\ncmd = ('select {} FROM '\n '_annotated_sentences_has_term_inference as a JOIN '\n '_raw_sentences as r ON '\n 'a.sentence_id = r.sentence_id WHERE'.format(fields))\nresults=!deepdive sql eval \"{cmd}\" format=tsv\nfields = fields.split(',')",
"_____no_output_____"
],
[
"print(fields)\n#results[0:10]",
"_____no_output_____"
]
],
[
[
"## plot our curves by getting predictions from deepdive by sql.\nHoldout data has_term is t or f\n\n Total returned = holdout + null labeled trues + null labeled falses + to be predicteds\n 2438 = (259f + 33t) + 146 + 1000 + 1000\n\nAs sentences (pubmed ids) are shared between classe:\n- remove the holdouts (they are retained in the pseudo-null labeled classes)\n- extract correct labels onto the pseudo-null\n - maybe by queries back to _annotated_sentences\n- remove any 'to be predicteds' that are also in the pseudo-null\n - because we hadn't cleaned these out prior to building our input files.",
"_____no_output_____"
]
],
[
[
"cmd = ('SELECT has_term,sentence_id,id,category,expectation '\n 'FROM _annotated_sentences_has_term_inference, _annotated_sentences')\ndeepdive sql 'select has_term,sentence_id,id,category,expectation from _annotated_sentences_has_term_inference' \npdrx = !deepdive sql eval \"{cmd}\" format=tsv",
"_____no_output_____"
],
[
"print(len(pdrx))\n# Total returned = holdout + null labeled trues + null labeled falses + to be predicteds\n# 2438 = (259f + 33t) + 146 + 1000 + 1000",
"2438\n"
]
],
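[
[
"A sketch of the three cleanup steps listed above, done in pandas rather than SQL (it assumes `pdrx` from the previous cell and the `ap_146` / `diss` abstract lists loaded earlier, with the column order used in the query).",
"_____no_output_____"
]
],
[
[
"# Sketch only: the cleanup steps described above, applied to the pulled predictions.\n# Assumes pdrx (tsv rows) from the previous cell and the ap_146 / diss lists from earlier.\nimport pandas as pd\n\ncols = ['has_term', 'sentence_id', 'id', 'category', 'expectation']\npreds = pd.DataFrame([row.split('\\t') for row in pdrx], columns=cols)\n\ntrain_true = set(a[0] for a in ap_146)\ntrain_false = set(a[0] for a in diss)\n\n# 1. drop the holdout rows (they are the ones still carrying a t/f label)\nnulls = preds[~preds['has_term'].isin(['t', 'f'])]\n\n# 2. put the correct labels back onto the pseudo-null training rows\ntrain_rows = nulls[nulls['sentence_id'].isin(train_true | train_false)].copy()\ntrain_rows['true_label'] = train_rows['sentence_id'].isin(train_true)\n\n# 3. keep only 'to be predicted' rows that are not also training rows\nto_predict = nulls[~nulls['sentence_id'].isin(train_true | train_false)]\n\nprint('pulled rows     : {}'.format(len(preds)))\nprint('training rows   : {}'.format(len(train_rows)))\nprint('rows to predict : {}'.format(len(to_predict)))",
"_____no_output_____"
]
],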
[
[
"# The R Script is in appendix",
"_____no_output_____"
]
],
[
[
"%cd '/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception'",
"/Users/ccarey/Documents/Projects/NAMI/rdoc/tasks/deepdive_app/8_0_3_quick_auditory_perception\n"
],
[
"%alias plot_cal /Users/ccarey/Documents/Projects/NAMI/rdoc/scripts/plot_deepdive_calibration.R",
"_____no_output_____"
],
[
"%plot_cal ./run/LATEST/calibration/_annotated_sentences.has_term.tsv custom_stats_plots/test > /dev/null 2>&1",
"_____no_output_____"
]
],
[
[
"<!---\nimages are loaded from the root of the notebook rather than the current directory\n-->",
"_____no_output_____"
]
],
[
[
"# side by side\n# <tr>\n# <td><img src=./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_histogram.png width=200 height=200 /> </td>\n# <td><img src=./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_stacked_histogram.png /> </td> \n# </tr>\n# or \n# ![my image]./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_stacked_histogram.png\n# or (also works for pdf) \nfrom IPython.display import HTML \nHTML('<iframe src=./tasks/deepdive_app/8_0_3_quick_auditory_perception/custom_stats_plots/test_histogram.png width=350 height=350></iframe>')",
"_____no_output_____"
]
],
[
[
"# Appendix 1. In R, plot our own curves from deepdive's calibration\n\nDeepDive produces some diagnostics.\n\n- *calibration/....png*. But can't distinguish the holdouts from the predictions on the unkowns in DeepDive's png.\n\n- *calibration/....tsv*. See deepdive documentation.\n\ntsv columns:\n\n [bucket_from] [bucket_to] [num_predictions] [num_true] [num_false]\n\n- buckets are the min and max extent of the probability bins. (1.00 = 100% probability document is on topic).\n- Columns 3 is predicted from unknowns + holdouts\n- Columns 4 and 5 are predicted only from the holdouts.",
"_____no_output_____"
]
],
[
[
"#!/usr/bin/env Rscript\n#------------------------------------------------------------------\n#\n# Plot a calibration curve for deepdive.\n#\n# Usage:\n#\n# Rscript --vanilla plot_deepdive_calibration.R my_dd_predictions.tsv ofile\n#\n#------------------------------------------------------------------\nlibrary(ggplot2)\n\nargs = commandArgs(trailingOnly=TRUE)\nifile = args[1]\nofile_stem = args[2]\n\nprint(paste('Reading :', ifile))\n\nt <- read.table(args[1])\nt$V6 <- t$V3 - t$V4 - t$V5\n\n\npos_lab = 'Positive training'\nneg_lab = 'Negative training'\nneut_lab = 'Unknown' # includes pseudolabeled unkowns (i.e. were labeled positive / negative, but now unkown for prediction)\n\npos <- cbind.data.frame(class=pos_lab, bin=t$V1 + 0.1, count=t$V4)\nneg <- cbind.data.frame(class=neg_lab, bin=t$V1 + 0.1, count=t$V5)\nn <- cbind.data.frame(class=neut_lab, bin=t$V1 + 0.1, count=t$V6)\nall <- rbind.data.frame(neg,n,pos)\n\n\nprint(paste0('Writing results to : ', ofile, '_histogram.pdf'))\n\npdf(paste0(ofile, '_histogram.pdf'))\nggplot(all, aes(x=as.character(bin), y=count, fill=class, xlab='probability bin')) +\n geom_bar(stat='identity', position='dodge') +\n ggtitle('histogram') +\n xlab('probability bin')\ndev.off()\n\nprint(paste0('Writing results to : ', ofile, '_stacked_histogram.pdf'))\n\npdf(paste0(ofile, '_stacked_histogram.pdf'))\nggplot(all, aes(x=bin, y=count, fill = class)) + \n geom_bar(stat = \"identity\") + \n ggtitle('Stacked histogram') +\n xlab('input class')\ndev.off()\n\n#plot(x=t$V1+0.1, y=t$V3, xlim = c(0,1), ylim = c(0,max(t)), col = 'grey', main='calibration curve deepdive documentation')\n# points(x=t$V1+0.1, y=t$V4, col = 'blue')\n# points(x=t$V1+0.1, y=t$V5, col = 'red')\n\n",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw",
"markdown",
"code",
"raw",
"markdown",
"code",
"raw",
"markdown",
"raw",
"markdown",
"code",
"raw",
"code",
"markdown",
"raw",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"markdown",
"raw"
] |
[
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code"
],
[
"raw"
],
[
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code"
],
[
"markdown"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"raw"
]
] |
cb85d65d7f83fba5748ebade17d44a41dbbcc3d0
| 7,769 |
ipynb
|
Jupyter Notebook
|
analysis/.ipynb_checkpoints/milestone1-checkpoint.ipynb
|
aadiraju/animetrics
|
69292f481998222b360d9b8ae059d012d8c5188f
|
[
"MIT"
] | null | null | null |
analysis/.ipynb_checkpoints/milestone1-checkpoint.ipynb
|
aadiraju/animetrics
|
69292f481998222b360d9b8ae059d012d8c5188f
|
[
"MIT"
] | 1 |
2020-11-25T09:37:13.000Z
|
2020-11-25T09:37:14.000Z
|
analysis/.ipynb_checkpoints/milestone1-checkpoint.ipynb
|
data301-2020-winter1/course-project-solo_100
|
ef885d3e001759554c2cf88885c76d7b3037aeec
|
[
"MIT"
] | null | null | null | 28.774074 | 158 | 0.370189 |
[
[
[
"# MyAnimeList Recommendations Database analysis\n\n## Step 1 : Loading the Data\n\nLet's import all the modules we need for the anaysis and start loading the files we need into `pandas` dataframes from their respective `.csv` files.\n\nFirst up is the **Anime metadata**, located in `../data/raw/anime.csv`:",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.style as style\nimport csv\nimport pandas_profiling as pprof\n\nanime_df = pd.read_csv('../data/raw/anime.csv')\nanime_df.head()",
"_____no_output_____"
]
],
[
[
"Looking good!\n\nNow let's do the same for the **User Ratings Data**, located in `../data/raw/rating.csv`:",
"_____no_output_____"
]
],
[
[
"ratings_df = pd.read_csv('../data/raw/rating.csv')\nratings_df.head()",
"_____no_output_____"
]
],
[
[
"Perfect! No issues so far. (Continued in milestone2.ipynb)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb85e06e44aae65ed7bb33ee97d67583fd2388a9
| 7,714 |
ipynb
|
Jupyter Notebook
|
Unsupervised Learning/03 Gaussian Mixture Models and Cluster Validation/GMM_Clustering_and_Cluster_Validation_Lab-zh.ipynb
|
stephengineer/Introduction-to-Machine-Learning-with-TensorFlow
|
fc13795db3e20d87f625864e4e7ff68b4afcedb3
|
[
"MIT"
] | null | null | null |
Unsupervised Learning/03 Gaussian Mixture Models and Cluster Validation/GMM_Clustering_and_Cluster_Validation_Lab-zh.ipynb
|
stephengineer/Introduction-to-Machine-Learning-with-TensorFlow
|
fc13795db3e20d87f625864e4e7ff68b4afcedb3
|
[
"MIT"
] | null | null | null |
Unsupervised Learning/03 Gaussian Mixture Models and Cluster Validation/GMM_Clustering_and_Cluster_Validation_Lab-zh.ipynb
|
stephengineer/Introduction-to-Machine-Learning-with-TensorFlow
|
fc13795db3e20d87f625864e4e7ff68b4afcedb3
|
[
"MIT"
] | null | null | null | 26.784722 | 386 | 0.582836 |
[
[
[
"## 1. KMeans vs GMM \n\n在第一个例子中,我们将生成一个高斯数据集,并尝试对其进行聚类,看看其聚类结果是否与数据集的原始标签相匹配。\n\n我们可以使用 sklearn 的 [make_blobs] (http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html) 函数来创建高斯 blobs 的数据集:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import cluster, datasets, mixture\n\n%matplotlib inline\n\nn_samples = 1000\n\nvaried = datasets.make_blobs(n_samples=n_samples,\n cluster_std=[5, 1, 0.5],\n random_state=3)\nX, y = varied[0], varied[1]\n\nplt.figure( figsize=(16,12))\nplt.scatter(X[:,0], X[:,1], c=y, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"现在,当我们把这个数据集交给聚类算法时,我们显然不会传入标签。所以让我们从 k-means 开始,看看它是如何处理这个数据集的。是否会产生与原标签相匹配的聚类?",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import KMeans\n\nkmeans = KMeans(n_clusters=3)\npred = kmeans.fit_predict(X)",
"_____no_output_____"
],
[
"plt.figure( figsize=(16,12))\nplt.scatter(X[:,0], X[:,1], c=pred, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"k-means 的表现怎么样?它是否能够找到与原始标签匹配或相似的聚类?\n\n现在让我们尝试使用 [GaussianMixture](http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html) 进行聚类:",
"_____no_output_____"
]
],
[
[
"# TODO: Import GaussianMixture\nfrom import \n\n# TODO: Create an instance of Gaussian Mixture with 3 components\ngmm = \n\n# TODO: fit the dataset\ngmm = \n\n# TODO: predict the clustering labels for the dataset\npred_gmm = ",
"_____no_output_____"
],
[
"# Plot the clusters\nplt.figure( figsize=(16,12))\nplt.scatter(X[:,0], X[:,1], c=pred_gmm, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"通过视觉比较k-means和GMM聚类的结果,哪一个能更好地匹配原始标签?",
"_____no_output_____"
],
[
"# 2. KMeans vs GMM - 鸢尾花(Iris)数据集\n\n对于第二个示例,我们将使用一个具有两个以上特征的数据集。鸢尾花(Iris)数据集在这方面做得很好,因为可以合理地假设它的数据分布是高斯分布。\n\n鸢尾花(Iris)数据集是一个带标签的数据集,具有四个特征:\n",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\n\niris = sns.load_dataset(\"iris\")\n\niris.head()",
"_____no_output_____"
]
],
[
[
"有几种方法 (例如 [PairGrid](https://seaborn.pydata.org/generated/seaborn.PairGrid.html), [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), 或 [用 PCA 投影到一个较低的数维](http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_iris.html#sphx-glr-auto-examples-decomposition-plot-pca-iris-py))。让我们尝试用 PairGrid 进行可视化,因为它不会扭曲数据集 --它只是在一个子图中将每一对特征进行相互对应:",
"_____no_output_____"
]
],
[
[
"g = sns.PairGrid(iris, hue=\"species\", palette=sns.color_palette(\"cubehelix\", 3), vars=['sepal_length','sepal_width','petal_length','petal_width'])\ng.map(plt.scatter)\nplt.show()",
"_____no_output_____"
]
],
[
[
"If we cluster the Iris datset using KMeans, how close would the resulting clusters match the original labels?",
"_____no_output_____"
]
],
[
[
"kmeans_iris = KMeans(n_clusters=3)\npred_kmeans_iris = kmeans_iris.fit_predict(iris[['sepal_length','sepal_width','petal_length','petal_width']])",
"_____no_output_____"
],
[
"iris['kmeans_pred'] = pred_kmeans_iris\n\ng = sns.PairGrid(iris, hue=\"kmeans_pred\", palette=sns.color_palette(\"cubehelix\", 3), vars=['sepal_length','sepal_width','petal_length','petal_width'])\ng.map(plt.scatter)\nplt.show()",
"_____no_output_____"
]
],
[
[
"How do these clusters match the original labels?\n\nYou can clearly see that visual inspection is no longer useful if we're working with multiple dimensions like this. So how can we evaluate the clustering result versus the original labels? \n\nYou guessed it. We can use an external cluster validation index such as the [adjusted Rand score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) which generates a score between -1 and 1 (where an exact match will be scored as 1).",
"_____no_output_____"
]
],
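[
[
"As a quick toy check (made up for illustration, not part of the lab): the adjusted Rand score compares groupings rather than label values, so a clustering that reproduces the true partition under different label names still scores 1, while a grouping that ignores the true partition scores much lower.",
"_____no_output_____"
]
],
[
[
"# Toy illustration (not part of the lab): ARI compares partitions, not label names.\nfrom sklearn.metrics import adjusted_rand_score\n\nlabels_true = [0, 0, 1, 1, 2, 2]\nsame_partition = [1, 1, 0, 0, 2, 2]   # identical grouping, different label names\npoor_partition = [0, 1, 0, 1, 0, 1]   # ignores the true grouping\n\nprint(adjusted_rand_score(labels_true, same_partition))   # 1.0\nprint(adjusted_rand_score(labels_true, poor_partition))   # much lower (can even be negative)",
"_____no_output_____"
]
],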
[
[
"# TODO: Import adjusted rand score\nfrom import \n\n# TODO: calculate adjusted rand score passing in the original labels and the kmeans predicted labels \niris_kmeans_score = \n\n# Print the score\niris_kmeans_score",
"_____no_output_____"
]
],
[
[
"What if we cluster using Gaussian Mixture models? Would it earn a better ARI score?",
"_____no_output_____"
]
],
[
[
"gmm_iris = GaussianMixture(n_components=3).fit(iris[['sepal_length','sepal_width','petal_length','petal_width']])\npred_gmm_iris = gmm_iris.predict(iris[['sepal_length','sepal_width','petal_length','petal_width']])",
"_____no_output_____"
],
[
"iris['gmm_pred'] = pred_gmm_iris\n\n# TODO: calculate adjusted rand score passing in the original \n# labels and the GMM predicted labels iris['species']\niris_gmm_score = \n\n# Print the score\niris_gmm_score",
"_____no_output_____"
]
],
[
[
"Thanks to ARI socres, we have a clear indicator which clustering result better matches the original dataset.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
cb85e1202300a5f89a07435ff2de193c2dfd257f
| 2,035 |
ipynb
|
Jupyter Notebook
|
Coursera/Linux Tools for Developers/Week-1/Quiz/Essential-Command-Line-Tools.ipynb
|
manipiradi/Online-Courses-Learning
|
2a4ce7590d1f6d1dfa5cfde632660b562fcff596
|
[
"MIT"
] | 331 |
2019-10-22T09:06:28.000Z
|
2022-03-27T13:36:03.000Z
|
Coursera/Linux Tools for Developers/Week-1/Quiz/Essential-Command-Line-Tools.ipynb
|
manipiradi/Online-Courses-Learning
|
2a4ce7590d1f6d1dfa5cfde632660b562fcff596
|
[
"MIT"
] | 8 |
2020-04-10T07:59:06.000Z
|
2022-02-06T11:36:47.000Z
|
Coursera/Linux Tools for Developers/Week-1/Quiz/Essential-Command-Line-Tools.ipynb
|
manipiradi/Online-Courses-Learning
|
2a4ce7590d1f6d1dfa5cfde632660b562fcff596
|
[
"MIT"
] | 572 |
2019-07-28T23:43:35.000Z
|
2022-03-27T22:40:08.000Z
| 20.765306 | 133 | 0.522359 |
[
[
[
"#### 1. Which command will list all files under the current directory with a .cfg extension, and then delete them?",
"_____no_output_____"
],
[
"##### Ans: find . -name \"*.cfg\" -exec rm {} ';'",
"_____no_output_____"
],
[
"#### 2. Which command will list all files and directories on the system with cfg in their name?",
"_____no_output_____"
],
[
"##### Ans: ls -l $(locate cfg)",
"_____no_output_____"
],
[
"#### 3. Which command will find all files and directories in the system whose name ends with cfg?",
"_____no_output_____"
],
[
"##### Ans: locate -r \"cfg$\"",
"_____no_output_____"
],
[
"#### 4. Which commands can change all occurrences within a file of the string boris to natasha (Select all answers that apply)?",
"_____no_output_____"
],
[
"##### Ans: \n- sed -e s:boris:natasha:g file\n- sed -e s/boris/natasha/g file",
"_____no_output_____"
],
[
"#### 5. Which command will print out all lines beginning with \"X\" in all files in the current directory?",
"_____no_output_____"
],
[
"##### Ans: grep \"^X\" *",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
cb85e85637d3ce5383413aa1f42c60f10510ae3f
| 7,030 |
ipynb
|
Jupyter Notebook
|
Chapter05/IMDB_1DCNN.ipynb
|
suryapa1/Tensorflow_stack
|
6a6c82ed30401b115cf106ad8271763dda518fa8
|
[
"MIT"
] | 86 |
2017-12-28T12:18:02.000Z
|
2022-03-14T03:41:29.000Z
|
Chapter05/IMDB_1DCNN.ipynb
|
suryapa1/Tensorflow_stack
|
6a6c82ed30401b115cf106ad8271763dda518fa8
|
[
"MIT"
] | 1 |
2019-02-15T18:02:17.000Z
|
2020-09-23T05:36:17.000Z
|
Chapter05/IMDB_1DCNN.ipynb
|
suryapa1/Tensorflow_stack
|
6a6c82ed30401b115cf106ad8271763dda518fa8
|
[
"MIT"
] | 82 |
2017-12-21T09:25:04.000Z
|
2021-04-22T02:38:29.000Z
| 34.80198 | 127 | 0.494168 |
[
[
[
"from __future__ import print_function\n\nfrom keras.preprocessing import sequence\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation\nfrom keras.layers import Embedding\nfrom keras.layers import Conv1D, GlobalMaxPooling1D\nfrom keras.datasets import imdb\n\n# max number of features\nmax_features = 20000\n# max sentence lenght\nmax_sentence_len = 80 \n# dimensions in the embedding space\nembedding_dims = 50\n# batch size\nbatch_size = 32\n# filters used in Conv!D\nfilters = 250\n# size of the kernels in Conv!D\nkernel_size = 3\n# hidden dimensions for MLP\nhidden_dims = 250\n# training epocs\nepochs = 20",
"_____no_output_____"
],
[
"(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)\nprint(len(x_train), 'train sequences')\nprint(len(x_test), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nx_train = sequence.pad_sequences(x_train, maxlen=max_sentence_len)\nx_test = sequence.pad_sequences(x_test, maxlen=max_sentence_len)\nprint('x_train shape:', x_train.shape)\nprint('x_test shape:', x_test.shape)",
"25000 train sequences\n25000 test sequences\nPad sequences (samples x time)\nx_train shape: (25000, 80)\nx_test shape: (25000, 80)\n"
],
[
"print('Build model...')\nmodel = Sequential()\nmodel.add(Embedding(max_features, embedding_dims))\nmodel.add(Dropout(0.2))\nmodel.add(Conv1D(filters,\n kernel_size,\n padding='valid',\n activation='relu',\n strides=1))\n# we use max pooling:\nmodel.add(GlobalMaxPooling1D())\nmodel.add(Dense(hidden_dims))\nmodel.add(Dropout(0.2))\nmodel.add(Activation('relu'))\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))",
"Build model...\n"
],
[
"model.compile(loss='binary_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\nmodel.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n validation_data=(x_test, y_test))",
"Train on 25000 samples, validate on 25000 samples\nEpoch 1/20\n25000/25000 [==============================] - 35s - loss: 0.4584 - acc: 0.7682 - val_loss: 0.3564 - val_acc: 0.8437\nEpoch 2/20\n25000/25000 [==============================] - 34s - loss: 0.2689 - acc: 0.8885 - val_loss: 0.3573 - val_acc: 0.8464\nEpoch 3/20\n25000/25000 [==============================] - 35s - loss: 0.1481 - acc: 0.9444 - val_loss: 0.4275 - val_acc: 0.8397\nEpoch 4/20\n25000/25000 [==============================] - 33s - loss: 0.0687 - acc: 0.9752 - val_loss: 0.5693 - val_acc: 0.8298\nEpoch 5/20\n25000/25000 [==============================] - 35s - loss: 0.0337 - acc: 0.9880 - val_loss: 0.6721 - val_acc: 0.8274\nEpoch 6/20\n25000/25000 [==============================] - 35s - loss: 0.0240 - acc: 0.9924 - val_loss: 0.7733 - val_acc: 0.8238\nEpoch 7/20\n25000/25000 [==============================] - 35s - loss: 0.0229 - acc: 0.9920 - val_loss: 0.8571 - val_acc: 0.8218\nEpoch 8/20\n25000/25000 [==============================] - 34s - loss: 0.0180 - acc: 0.9940 - val_loss: 0.9327 - val_acc: 0.8190\nEpoch 9/20\n25000/25000 [==============================] - 34s - loss: 0.0150 - acc: 0.9951 - val_loss: 0.9652 - val_acc: 0.8185\nEpoch 10/20\n25000/25000 [==============================] - 35s - loss: 0.0140 - acc: 0.9955 - val_loss: 0.8886 - val_acc: 0.8228\nEpoch 11/20\n25000/25000 [==============================] - 33s - loss: 0.0131 - acc: 0.9957 - val_loss: 0.9696 - val_acc: 0.8197\nEpoch 12/20\n25000/25000 [==============================] - 32s - loss: 0.0111 - acc: 0.9963 - val_loss: 1.0649 - val_acc: 0.8238\nEpoch 13/20\n25000/25000 [==============================] - 32s - loss: 0.0101 - acc: 0.9963 - val_loss: 1.0606 - val_acc: 0.8217\nEpoch 14/20\n25000/25000 [==============================] - 32s - loss: 0.0103 - acc: 0.9968 - val_loss: 1.1543 - val_acc: 0.8050\nEpoch 15/20\n25000/25000 [==============================] - 34s - loss: 0.0097 - acc: 0.9967 - val_loss: 1.1228 - val_acc: 0.8113\nEpoch 16/20\n25000/25000 [==============================] - 35s - loss: 0.0109 - acc: 0.9961 - val_loss: 1.0900 - val_acc: 0.8123\nEpoch 17/20\n25000/25000 [==============================] - 35s - loss: 0.0056 - acc: 0.9978 - val_loss: 1.2230 - val_acc: 0.8125\nEpoch 18/20\n25000/25000 [==============================] - 35s - loss: 0.0061 - acc: 0.9976 - val_loss: 1.2706 - val_acc: 0.8170\nEpoch 19/20\n25000/25000 [==============================] - 32s - loss: 0.0083 - acc: 0.9972 - val_loss: 1.2003 - val_acc: 0.8109\nEpoch 20/20\n25000/25000 [==============================] - 33s - loss: 0.0067 - acc: 0.9976 - val_loss: 1.2917 - val_acc: 0.8105\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
cb85f3e019e497c74aee6932b560005235d1c3be
| 14,621 |
ipynb
|
Jupyter Notebook
|
Untitled.ipynb
|
KylinDC/Analysis-to-Chinese-political-coordinates-test
|
89848446c2cf162fae28047893339b808311a8ae
|
[
"MIT"
] | null | null | null |
Untitled.ipynb
|
KylinDC/Analysis-to-Chinese-political-coordinates-test
|
89848446c2cf162fae28047893339b808311a8ae
|
[
"MIT"
] | null | null | null |
Untitled.ipynb
|
KylinDC/Analysis-to-Chinese-political-coordinates-test
|
89848446c2cf162fae28047893339b808311a8ae
|
[
"MIT"
] | null | null | null | 34.894988 | 93 | 0.299364 |
[
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"data = pd.read_csv(r\"data/2014data_split.csv\")",
"_____no_output_____"
]
],
[
[
"去除多余列",
"_____no_output_____"
]
],
[
[
"data.drop(data.columns[3],axis=1,inplace=True)",
"_____no_output_____"
]
],
[
[
"将编号列作为索引",
"_____no_output_____"
]
],
[
[
"data.set_index('编号', inplace = True)",
"_____no_output_____"
],
[
"data.replace({'强烈反对': -2, '反对':- 1, '同意':1, '强烈同意':2}, inplace = True)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.corrwith(data[[-3]])",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
cb860578c1106126817e03147a0468fda29e4115
| 1,664 |
ipynb
|
Jupyter Notebook
|
docs/contents/tools/classes/mmtf_MMTFDecoder/to_molsysmt_Trajectory.ipynb
|
dprada/molsysmt
|
83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d
|
[
"MIT"
] | null | null | null |
docs/contents/tools/classes/mmtf_MMTFDecoder/to_molsysmt_Trajectory.ipynb
|
dprada/molsysmt
|
83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d
|
[
"MIT"
] | null | null | null |
docs/contents/tools/classes/mmtf_MMTFDecoder/to_molsysmt_Trajectory.ipynb
|
dprada/molsysmt
|
83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d
|
[
"MIT"
] | null | null | null | 20.292683 | 84 | 0.545072 |
[
[
[
"# To molsysmt.Trajectory",
"_____no_output_____"
]
],
[
[
"from molsysmt.tools import mmtf_MMTFDecoder",
"Warning: importing 'simtk.openmm' is deprecated. Import 'openmm' instead.\n"
],
[
"#mmtf_MMTFDecoder.to_molsysmt_Trajectory(item)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
]
] |
cb861662fd80842118d18b2261b28a4241ea9b3c
| 12,838 |
ipynb
|
Jupyter Notebook
|
Pytorch10_automatic_differentiation_with_autograd.ipynb
|
RehoboamX/learn_pytorch
|
7ed3f38b22e3ddf2a4db8fa9d4f5affd72f5486e
|
[
"MIT"
] | null | null | null |
Pytorch10_automatic_differentiation_with_autograd.ipynb
|
RehoboamX/learn_pytorch
|
7ed3f38b22e3ddf2a4db8fa9d4f5affd72f5486e
|
[
"MIT"
] | null | null | null |
Pytorch10_automatic_differentiation_with_autograd.ipynb
|
RehoboamX/learn_pytorch
|
7ed3f38b22e3ddf2a4db8fa9d4f5affd72f5486e
|
[
"MIT"
] | null | null | null | 35.366391 | 495 | 0.588254 |
[
[
[
"# AUTOMATIC DIFFERENTIATION WITH [TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html#automatic-differentiation-with-torch-autograd)",
"_____no_output_____"
],
[
"When training neural networks, the most frequently used algorithm is **back propagation**. In this algorithm, parameters (model weights) are adjusted according to the **gradient** of the loss function with respect to the given parameter.\n\nTo compute those gradients, PyTorch has a built-in differentiation engine called **torch.autograd**. It supports automatic computation of gradient for any computational graph.\n\nConsider the simplest one-layer neural network, with input ***x***, parameters ***w*** and ***b***, and some loss function. It can be defined in PyTorch in the following manner:",
"_____no_output_____"
]
],
[
[
"import torch\n\nx = torch.ones(5) # input tensor\ny = torch.zeros(3) # expected output\nw = torch.randn(5, 3, requires_grad=True)\nb = torch.randn(3, requires_grad=True)\nz = torch.matmul(x, w)+b\nloss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) # 二分类输入为logits对交叉熵损失",
"_____no_output_____"
]
],
[
[
"## Tensors, Functions and Computational graph",
"_____no_output_____"
],
[
"This code defines the following **computational graph**:\n\nIn this network, ***w*** and ***b*** are **parameters**, which we need to optimize. Thus, we need to be able to compute the gradients of loss function with respect to those variables. In order to do that, we set the **requires_grad** property of those tensors.",
"_____no_output_____"
],
[
"**NOTE**: \n- You can set the value of **requires_grad** when creating a tensor, or later by using **x.requires_grad_(True)** method.",
"_____no_output_____"
],
[
"A function that we apply to tensors to construct computational graph is in fact an object of class **Function**. This object knows how to compute the function in the forward direction, and also how to compute its derivative during the ***backward propagation step***. A reference to the backward propagation function is stored in **grad_fn** property of a tensor. You can find more information of **Function** in the [documentation](https://pytorch.org/docs/stable/autograd.html#function).",
"_____no_output_____"
]
],
[
[
"print(f\"Gradient function for z = {z.grad_fn}\")\nprint(f\"Gradient function for loss = {loss.grad_fn}\")",
"Gradient function for z = <AddBackward0 object at 0x7fc873fb2490>\nGradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x7fc873fb2ee0>\n"
]
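,
[
"# A small sketch (not part of the original tutorial): the note above mentions that requires_grad\n# can also be switched on after a tensor is created, using the in-place requires_grad_() method.\nt = torch.zeros(3)\nprint(t.requires_grad)  # False by default\nt.requires_grad_(True)  # enable gradient tracking in place\nprint(t.requires_grad)  # True",
"_____no_output_____"
]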
],
[
[
"## Computing Gradients",
"_____no_output_____"
],
[
"To optimize weights of parameters in the neural network, we need to compute the derivatives of our loss function with respect to parameters, namely, we need $\\frac{\\partial loss}{\\partial w}$\n and $\\frac{\\partial loss}{\\partial b}$ under some fixed values of ***x*** and ***y***. To compute those derivatives, we call **loss.backward()**, and then retrieve the values from **w.grad** and **b.grad**:",
"_____no_output_____"
]
],
[
[
"loss.backward()\nprint(w.grad)\nprint(b.grad)",
"tensor([[0.0859, 0.1106, 0.2762],\n [0.0859, 0.1106, 0.2762],\n [0.0859, 0.1106, 0.2762],\n [0.0859, 0.1106, 0.2762],\n [0.0859, 0.1106, 0.2762]])\ntensor([0.0859, 0.1106, 0.2762])\n"
]
],
[
[
"**NOTE**:\n- We can only obtain the **grad** properties for the ***leaf nodes*** of the computational graph, which have **requires_grad** property set to **True**. For all other nodes in our graph, gradients will not be available. \n- We can only perform gradient calculations using **backward** once on a given graph, for performance reasons. If we need to do several **backward** calls on the same graph, we need to pass **retain_graph=True** to the **backward** call.",
"_____no_output_____"
],
[
"## Disabling Gradient Tracking",
"_____no_output_____"
],
[
"By default, all tensors with **requires_grad=True** are tracking their computational history and support gradient computation. However, there are some cases when we do not need to do that, for example, when we have trained the model and just want to apply it to some input data, i.e. we only want to do forward computations through the network. We can stop tracking computations by surrounding our computation code with **torch.no_grad()** block:",
"_____no_output_____"
]
],
[
[
"z = torch.matmul(x, w)+b\nprint(z.requires_grad)\n\nwith torch.no_grad(): # 以下的操作不生成计算图\n z = torch.matmul(x, w)+b\nprint(z.requires_grad)",
"True\nFalse\n"
]
],
[
[
"Another way to achieve the same result is to use the **detach()** method on the tensor:",
"_____no_output_____"
]
],
[
[
"z = torch.matmul(x, w)+b\nz_det = z.detach() # 复制张量z并从原有计算图上剥离下来,与clone()做好区分\nprint(z_det.requires_grad)",
"False\n"
]
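,
[
"# A hedged sketch of one practical reason to stop gradient tracking: 'freezing' parameters so they\n# are never updated, a common pattern when finetuning a pretrained network. The tiny model below is\n# only an illustration and is not part of the original tutorial.\nimport torch.nn as nn\n\ntiny_net = nn.Sequential(nn.Linear(5, 3), nn.ReLU(), nn.Linear(3, 1))\nfor param in tiny_net[0].parameters():  # freeze only the first layer\n    param.requires_grad_(False)\nprint([p.requires_grad for p in tiny_net.parameters()])",
"_____no_output_____"
]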
],
[
[
"There are reasons you might want to disable gradient tracking:\n- To mark some parameters in your neural network as **frozen parameters**. This is a very common scenario for [finetuning a pretrained network](https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html).\n- To **speed up computations** when you are only doing forward pass, because computations on tensors that do not track gradients would be more efficient.",
"_____no_output_____"
],
[
"## More on Computational Graphs ",
"_____no_output_____"
],
[
"Conceptually, autograd keeps a record of data (tensors) and all executed operations (along with the resulting new tensors) in a directed acyclic graph (DAG) consisting of [Function](https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function) objects. In this DAG, leaves are the input tensors, roots are the output tensors. By tracing this graph from roots to leaves, you can automatically compute the gradients using the chain rule.",
"_____no_output_____"
],
[
"In a forward pass, autograd does two things simultaneously:\n- run the requested operation to compute a resulting tensor.\n- maintain the operation’s gradient function in the DAG. ",
"_____no_output_____"
],
[
"The backward pass kicks off when **.backward()** is called on the DAG root. **autograd** then:\n- computes the gradients from each **.grad_fn**,\n- accumulates them in the respective tensor’s **.grad** attribute\n- using the chain rule, propagates all the way to the leaf tensors.",
"_____no_output_____"
],
[
"**NOTE**:\n- ***DAGs are dynamic in PyTorch*** An important thing to note is that the graph is recreated from scratch; after each **.backward()** call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size and operations at every iteration if needed.",
"_____no_output_____"
],
[
"## Optional Reading: Tensor Gradients and Jacobian Products ",
"_____no_output_____"
],
[
"In many cases, we have a scalar loss function, and we need to compute the gradient with respect to some parameters. However, there are cases when the output function is an arbitrary tensor. In this case, PyTorch allows you to compute so-called ***Jacobian product***, and not the actual gradient.\n\n\nFor a vector function $\\vec{y}=f(\\vec{x})$ , where $\\vec{x}=\\langle x_1,\\dots,x_n\\rangle$ and $\\vec{y}=\\langle y_1,\\dots,y_m\\rangle$ , a gradient of $\\vec{y}$ with respect to $\\vec{x}$ is given by Jacobian matrix:\n\n$$J=\\left(\\begin{array}{ccc} \\frac{\\partial y_{1}}{\\partial x_{1}} & \\cdots & \\frac{\\partial y_{1}}{\\partial x_{n}}\\\\ \\vdots & \\ddots & \\vdots\\\\ \\frac{\\partial y_{m}}{\\partial x_{1}} & \\cdots & \\frac{\\partial y_{m}}{\\partial x_{n}} \\end{array}\\right)$$\nInstead of computing the Jacobian matrix itself, PyTorch allows you to compute **Jacobian Product** $v^T\\cdot J$\n for a given input vector $v=(v_1 \\dots v_m)$. This is achieved by calling **backward** with ***v*** as an argument. The size of vv should be the same as the size of the original tensor, with respect to which we want to compute the product:\n\n",
"_____no_output_____"
]
],
[
[
"inp = torch.eye(5, requires_grad=True)\nout = (inp+1).pow(2) # shape为5*5\nout.backward(torch.ones_like(inp), retain_graph=True) # 这里的torch.ones_like()生成对正是上文中v,正常情况下都使用全1的tensor\nprint(f\"First call\\n{inp.grad}\")\nout.backward(torch.ones_like(inp), retain_graph=True)\nprint(f\"\\nSecond call\\n{inp.grad}\")\ninp.grad.zero_()\nout.backward(torch.ones_like(inp), retain_graph=True)\nprint(f\"\\nCall after zeroing gradients\\n{inp.grad}\")",
"First call\ntensor([[4., 2., 2., 2., 2.],\n [2., 4., 2., 2., 2.],\n [2., 2., 4., 2., 2.],\n [2., 2., 2., 4., 2.],\n [2., 2., 2., 2., 4.]])\n\nSecond call\ntensor([[8., 4., 4., 4., 4.],\n [4., 8., 4., 4., 4.],\n [4., 4., 8., 4., 4.],\n [4., 4., 4., 8., 4.],\n [4., 4., 4., 4., 8.]])\n\nCall after zeroing gradients\ntensor([[4., 2., 2., 2., 2.],\n [2., 4., 2., 2., 2.],\n [2., 2., 4., 2., 2.],\n [2., 2., 2., 4., 2.],\n [2., 2., 2., 2., 4.]])\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
cb8618975968645d69ec79de317ef87b5eabc8be
| 4,464 |
ipynb
|
Jupyter Notebook
|
Prelim_Exam.ipynb
|
Alfredzzx/Linear-Algebra-58020
|
3085b0a97780090f83597f3d8b9449a8eed9d084
|
[
"Apache-2.0"
] | null | null | null |
Prelim_Exam.ipynb
|
Alfredzzx/Linear-Algebra-58020
|
3085b0a97780090f83597f3d8b9449a8eed9d084
|
[
"Apache-2.0"
] | null | null | null |
Prelim_Exam.ipynb
|
Alfredzzx/Linear-Algebra-58020
|
3085b0a97780090f83597f3d8b9449a8eed9d084
|
[
"Apache-2.0"
] | null | null | null | 23.744681 | 238 | 0.432124 |
[
[
[
"<a href=\"https://colab.research.google.com/github/Alfredzzx/Linear-Algebra-58020/blob/main/Prelim_Exam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"Question 1. (20 points) Create a 4 x 4 matrix whose diagonal elements are all one (1's). Name it as matrix \"C\". Show your solutions using Python codes and do not forget to label them on the Text Cell.",
"_____no_output_____"
]
],
[
[
"#Matrix C\nimport numpy as np\nf = np.eye(4)\nprint(f)",
"[[1. 0. 0. 0.]\n [0. 1. 0. 0.]\n [0. 0. 1. 0.]\n [0. 0. 0. 1.]]\n"
],
[
"",
"_____no_output_____"
]
],
[
[
"Question 2. (20 points) In relation to Question 1, show a solution that doubles all the values of each element. Show your solutions using Python codes and do not forget to label them on the Text Cell.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nf = np.eye(4) * 2\nprint(f)",
"[[2. 0. 0. 0.]\n [0. 2. 0. 0.]\n [0. 0. 2. 0.]\n [0. 0. 0. 2.]]\n"
],
[
"",
"_____no_output_____"
]
],
[
[
"\nQuestion 3. (10 points) Find the cross-product of matrices, A = [2,7,4] and\n\nB = [3,9,8]. Show your solutions using Python codes and do not forget to label them on the Text Cell.",
"_____no_output_____"
]
],
[
[
"A = np.array([2,7,4])\nB = np.array([3,9,8])\ncross = np.cross(A,B)\nprint(cross)",
"[20 -4 -3]\n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb8618eda043b8cfcb805b63ae30d683fda0256d
| 2,337 |
ipynb
|
Jupyter Notebook
|
savingData.ipynb
|
ABPande/MyPythonRepo
|
51de39aee6c99b6ea2eb47fb199925ee63ca3750
|
[
"Apache-2.0"
] | null | null | null |
savingData.ipynb
|
ABPande/MyPythonRepo
|
51de39aee6c99b6ea2eb47fb199925ee63ca3750
|
[
"Apache-2.0"
] | null | null | null |
savingData.ipynb
|
ABPande/MyPythonRepo
|
51de39aee6c99b6ea2eb47fb199925ee63ca3750
|
[
"Apache-2.0"
] | null | null | null | 17.058394 | 74 | 0.460847 |
[
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]])",
"_____no_output_____"
],
[
"df.to_csv(\"sampledf\")",
"_____no_output_____"
],
[
"np.save(\"wtf\",np.array([1,2,3]))",
"_____no_output_____"
],
[
"np.savez(\"2_arrays\", a = np.array([1,2,3]), b = np.array([1,2,3]))",
"_____no_output_____"
],
[
"np.load(\"wtf.npy\")",
"_____no_output_____"
],
[
"arch = np.load(\"2_arrays.npz\")",
"_____no_output_____"
],
[
"arch['a']",
"_____no_output_____"
],
[
"np.savetxt(\"text\", np.array([1,2,3]))",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
cb862132fb7b989ebd6ec5588dbcf6443f9f2cb9
| 14,756 |
ipynb
|
Jupyter Notebook
|
10 Randomness/donow/DoNow_10.ipynb
|
barjacks/algorithms_mine
|
bc248ed9ebb88aed73c6e8da3d3b9553d9173cdd
|
[
"MIT"
] | null | null | null |
10 Randomness/donow/DoNow_10.ipynb
|
barjacks/algorithms_mine
|
bc248ed9ebb88aed73c6e8da3d3b9553d9173cdd
|
[
"MIT"
] | null | null | null |
10 Randomness/donow/DoNow_10.ipynb
|
barjacks/algorithms_mine
|
bc248ed9ebb88aed73c6e8da3d3b9553d9173cdd
|
[
"MIT"
] | null | null | null | 36.07824 | 5,820 | 0.687449 |
[
[
[
"## Create a classifier to predict the wine color from wine quality attributes using this dataset: http://archive.ics.uci.edu/ml/datasets/Wine+Quality",
"_____no_output_____"
],
[
"## The data is in the database we've been using\n+ host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com'\n+ database='training'\n+ port=5432\n+ user='dot_student'\n+ password='qgis'\n+ table name = 'winequality'",
"_____no_output_____"
]
],
[
[
"import pg8000\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom sklearn import datasets, tree, metrics\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Query for the data and create a numpy array",
"_____no_output_____"
]
],
[
[
"conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', user='dot_student', password='qgis', database=\"training\")\ncursor = conn.cursor()\ncursor.execute(\"select * from information_schema.columns where table_name='winequality'\")\nresults = cursor.fetchall()",
"_____no_output_____"
],
[
"conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', user='dot_student', password='qgis', database=\"training\")\ncursor = conn.cursor()\ncursor.execute(\"select column_name from information_schema.columns where table_name='winequality'\") #LIMIT 10\nresults = cursor.fetchall()",
"_____no_output_____"
],
[
"conn.rollback()",
"_____no_output_____"
],
[
"for x in results:\n print(x)",
"['fixed_acidity']\n['volatile_acidity']\n['citric_acid']\n['residual_sugar']\n['chlorides']\n['free_sulfur_dioxide']\n['total_sulfur_dioxide']\n['density']\n['ph']\n['sulphates']\n['alcohol']\n['color']\n"
],
[
"conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', user='dot_student', password='qgis', database=\"training\")\ncursor = conn.cursor()\ndb = []\ncursor.execute(\"SELECT * from winequality\")\nfor item in cursor.fetchall():\n db.append(item)",
"_____no_output_____"
],
[
"results_list = []\nfor y in results:\n results_list.append(y)",
"_____no_output_____"
],
[
"result_array = np.array(db)",
"_____no_output_____"
]
],
[
[
"## Split the data into features (x) and target (y, the last column in the table)",
"_____no_output_____"
],
[
"### Remember you can cast the results into an numpy array and then slice out what you want",
"_____no_output_____"
]
],
[
[
"x = result_array[:,:11]\ny = result_array[:,11]",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
"## Create a decision tree with the data",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier",
"_____no_output_____"
],
[
"dt = DecisionTreeClassifier()",
"_____no_output_____"
],
[
"dt = dt.fit(x,y)",
"_____no_output_____"
]
],
[
[
"## Run 10-fold cross validation on the model",
"_____no_output_____"
]
],
[
[
"from sklearn.cross_validation import cross_val_score",
"_____no_output_____"
],
[
"scores = cross_val_score(dt,x,y,cv=10)",
"_____no_output_____"
],
[
"np.mean(scores)",
"_____no_output_____"
]
],
[
[
"## If you have time, calculate the feature importance and graph based on the code in the [slides from last class](http://ledeprogram.github.io/algorithms/class9/#21)",
"_____no_output_____"
],
[
"### Use [this tip for getting the column names from your cursor object](http://stackoverflow.com/questions/10252247/how-do-i-get-a-list-of-column-names-from-a-psycopg2-cursor)",
"_____no_output_____"
]
],
[
[
"plt.plot(dt.feature_importances_,'o')\nplt.ylim(0,1)",
"_____no_output_____"
]
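,
[
"# A hedged follow-up to the tip above: label the importances with the column names pulled from\n# information_schema earlier (results_list holds one-element rows such as ['fixed_acidity']).\nfeature_names = [row[0] for row in results_list]\nimportances = pd.Series(dt.feature_importances_, index=feature_names[:len(dt.feature_importances_)])\nimportances.sort_values().plot.barh()\nplt.xlabel('feature importance')",
"_____no_output_____"
]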
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
cb8622d8e2eeb7216aaff037f750126d0644a1d5
| 7,553 |
ipynb
|
Jupyter Notebook
|
nbs/index.ipynb
|
kikejimenez/superstore_forecast
|
c80388b94de491cb89ff85d577d2084202dca022
|
[
"Apache-2.0"
] | null | null | null |
nbs/index.ipynb
|
kikejimenez/superstore_forecast
|
c80388b94de491cb89ff85d577d2084202dca022
|
[
"Apache-2.0"
] | null | null | null |
nbs/index.ipynb
|
kikejimenez/superstore_forecast
|
c80388b94de491cb89ff85d577d2084202dca022
|
[
"Apache-2.0"
] | null | null | null | 44.692308 | 479 | 0.639084 |
[
[
[
"#hide\nfrom your_lib.core import *",
"_____no_output_____"
],
[
"import pandas_profiling\n",
"_____no_output_____"
],
[
"pandas_profiling.ProfileReport?",
"_____no_output_____"
]
],
[
[
"# Project name here\n\n> Summary description here.",
"_____no_output_____"
],
[
"This file will become your README and also the index of your documentation.",
"_____no_output_____"
],
[
"## Install",
"_____no_output_____"
],
[
"`pip install your_project_name`",
"_____no_output_____"
],
[
"## How to use",
"_____no_output_____"
],
[
"Fill me in please! Don't forget code examples:",
"_____no_output_____"
]
],
[
[
"1+1",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
cb862a6b95efd20d94e0cc91ea419588cb746996
| 7,917 |
ipynb
|
Jupyter Notebook
|
app.ipynb
|
Poojakottae/Chatbot
|
c619bbf2c07a7b59f50c075c5aaf3da91ae0343b
|
[
"MIT"
] | null | null | null |
app.ipynb
|
Poojakottae/Chatbot
|
c619bbf2c07a7b59f50c075c5aaf3da91ae0343b
|
[
"MIT"
] | null | null | null |
app.ipynb
|
Poojakottae/Chatbot
|
c619bbf2c07a7b59f50c075c5aaf3da91ae0343b
|
[
"MIT"
] | null | null | null | 44.982955 | 261 | 0.572186 |
[
[
[
"from flask import Flask, render_template, request\nfrom flask import jsonify\n#import xlrd\n#import Chatbot2\nfrom Chatbot2 import response",
"WARNING:tensorflow:From C:\\Users\\POOJA\\anaconda\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\nInstructions for updating:\nnon-resource variables are not supported in the long term\ncurses is not supported on this machine (please install/reinstall curses for an optimal experience)\nWARNING:tensorflow:From C:\\Users\\POOJA\\anaconda\\lib\\site-packages\\tflearn\\initializations.py:164: calling TruncatedNormal.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\n[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]\n['', 'Advantages', 'Charging cable', 'EV Brand', 'Price', 'Range', 'Types of EV', 'VOVO Electric', 'VOVO Offer', 'availability', 'city', 'duration', 'ev charging station', 'goodbye', 'greeting', 'help', 'installation', 'payments', 'safety', 'thanks']\nINFO:tensorflow:Restoring parameters from C:\\Users\\POOJA\\chatbot-master\\model.tflearn\n"
],
[
"app = Flask(__name__)\[email protected](\"/\")\ndef index(name=None):\n return render_template('index.html',name=name)",
"_____no_output_____"
],
[
"\ndef home():\n \n return render_template(\"index.html\")",
"_____no_output_____"
],
[
"@app.route(\"/get\", methods=['POST', 'GET'])\ndef getresponse():\n userText = request.args.get('msg')\n userText= userText.lower()\n response1 = str(response(userText))\n return jsonify(response1)\n \n ",
"_____no_output_____"
],
[
"if __name__ == \"__main__\":\n app.run()",
" * Serving Flask app \"__main__\" (lazy loading)\n * Environment: production\n WARNING: This is a development server. Do not use it in a production deployment.\n Use a production WSGI server instead.\n * Debug mode: off\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code"
]
] |
cb865623f89a95cef255aa9b04072b86a29e265f
| 6,199 |
ipynb
|
Jupyter Notebook
|
backend/preprocessing/AdjacencyMatrix.ipynb
|
mikewong-sfsu/GeneDive
|
241f7c8864f0323a428bcae27881a15af1722e04
|
[
"MIT"
] | 1 |
2019-01-31T05:35:38.000Z
|
2019-01-31T05:35:38.000Z
|
backend/preprocessing/AdjacencyMatrix.ipynb
|
mikewong-sfsu/GeneDive
|
241f7c8864f0323a428bcae27881a15af1722e04
|
[
"MIT"
] | null | null | null |
backend/preprocessing/AdjacencyMatrix.ipynb
|
mikewong-sfsu/GeneDive
|
241f7c8864f0323a428bcae27881a15af1722e04
|
[
"MIT"
] | null | null | null | 24.895582 | 106 | 0.494596 |
[
[
[
"** Build Adjacency Matrix **",
"_____no_output_____"
],
[
"**Note:** You must put the generated JSON file into a zip file. We probably should code this in too.",
"_____no_output_____"
]
],
[
[
"import sqlite3\nimport json",
"_____no_output_____"
],
[
"# Progress Bar I found on the internet.\n# https://github.com/alexanderkuk/log-progress\nfrom progress_bar import log_progress",
"_____no_output_____"
],
[
"\nPLOS_PMC_DB = 'sqlite_data/data.plos-pmc.sqlite'\nALL_DB = 'sqlite_data/data.all.sqlite'\n\nPLOS_PMC_MATRIX = 'json_data/plos-pmc/adjacency_matrix.json'\nALL_MATRIX = 'json_data/all/adjacency_matrix.json'",
"_____no_output_____"
],
[
"conn_plos_pmc = sqlite3.connect(PLOS_PMC_DB)\ncursor_plos_pmc = conn_plos_pmc.cursor()\n\nconn_all = sqlite3.connect(ALL_DB)\ncursor_all = conn_all.cursor()",
"_____no_output_____"
]
],
[
[
"Queries",
"_____no_output_____"
]
],
[
[
"# For getting the maximum row id\nQUERY_MAX_ID = \"SELECT id FROM interactions ORDER BY id DESC LIMIT 1\"\n\n# Get interaction data\nQUERY_INTERACTION = \"SELECT geneids1, geneids2, probability FROM interactions WHERE id = {}\"\n\n# Get all at once\nQUERY_ALL_INTERACTION = \"SELECT geneids1, geneids2, probability FROM interactions\"",
"_____no_output_____"
],
[
"actions = [\n# {\n# \"db\":PLOS_PMC_DB,\n# \"matrix\" : PLOS_PMC_MATRIX,\n# \"conn\": conn_plos_pmc,\n# \"cursor\": cursor_plos_pmc,\n# },\n {\n \"db\":ALL_DB,\n \"matrix\" : ALL_MATRIX,\n \"conn\": conn_all,\n \"cursor\": cursor_all,\n },\n]",
"_____no_output_____"
]
],
[
[
"Step through every interaction.\n\n1. If geneids1 not in matrix - insert it as dict.\n2. If geneids2 not in matrix[geneids1] - insert it as []\n3. If probability not in matrix[geneids1][geneids2] - insert it.\n4. Perform the reverse.",
"_____no_output_____"
]
],
[
[
"# for action in actions:\nfor action in log_progress(actions, every=1, name=\"Matrix\"):\n print(\"Executing SQL query. May take a minute.\")\n matrix = {}\n cursor = action[\"cursor\"].execute(QUERY_ALL_INTERACTION)\n interactions = cursor.fetchall()\n print(\"Query complete\")\n for row in log_progress(interactions, every=10000, name=action[\"matrix\"]+\" rows\"):\n if row == None:\n continue\n \n id1 = row[0]\n id2 = row[1]\n try:\n prob = int(round(row[2],2) * 1000)\n except Exception:\n continue\n\n # Forward\n if id1 not in matrix:\n matrix[id1] = {}\n\n if id2 not in matrix[id1]:\n matrix[id1][id2] = []\n\n if prob not in matrix[id1][id2]:\n matrix[id1][id2].append(prob)\n\n # Backwards\n if id2 not in matrix:\n matrix[id2] = {}\n\n if id1 not in matrix[id2]:\n matrix[id2][id1] = []\n\n if prob not in matrix[id2][id1]:\n matrix[id2][id1].append(prob)\n \n with open(action[\"matrix\"], \"w+\") as file:\n file.write(json.dumps( matrix ))\n \nprint(\"All Matrices generated\")\n ",
"_____no_output_____"
],
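[
"# A hedged sketch for the note at the top ('put the generated JSON file into a zip file'):\n# compress each written matrix with the standard-library zipfile module. The .zip file name is an\n# assumption; adjust it to whatever the downstream code expects.\nimport os\nimport zipfile\n\nfor action in actions:\n    json_path = action[\"matrix\"]\n    zip_path = json_path + \".zip\"\n    with zipfile.ZipFile(zip_path, \"w\", zipfile.ZIP_DEFLATED) as zip_file:\n        zip_file.write(json_path, arcname=os.path.basename(json_path))\n    print(\"Wrote\", zip_path)",
"_____no_output_____"
],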
[
"action[\"conn\"].close()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
cb865cb87a676dc756b7fd18ceb875719e4a801f
| 27,927 |
ipynb
|
Jupyter Notebook
|
OpenCV Friendly Intro.ipynb
|
jotathebest/pymedellin
|
f20c8a755a78edeaeded3e16d139c2fa86d11a74
|
[
"MIT"
] | null | null | null |
OpenCV Friendly Intro.ipynb
|
jotathebest/pymedellin
|
f20c8a755a78edeaeded3e16d139c2fa86d11a74
|
[
"MIT"
] | null | null | null |
OpenCV Friendly Intro.ipynb
|
jotathebest/pymedellin
|
f20c8a755a78edeaeded3e16d139c2fa86d11a74
|
[
"MIT"
] | 1 |
2018-12-06T00:13:49.000Z
|
2018-12-06T00:13:49.000Z
| 31.520316 | 517 | 0.57593 |
[
[
[
"# Introduction to Digital Image Treatment\n\nOpenCV is one of the most popular libraries for DIT, it was originally wrote in C but since some time ago we can find Python bindings that let us to use with the simplied pythonic synthaxis.\n\nLet's begin to play some with the library",
"_____no_output_____"
]
],
[
[
"# Import modules\nimport cv2\nimport numpy as np\nimport imutils\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"* **CV2**: The OpenCV python binding library, it will let us to access to all the functions available in the library\n* **Numpy**: Numpy is an efficient matrix manipulation library, we will see that all our images will be treated as matrixes or vectors so we need a library to manipulate them.\n* **Imutils**: An useful image transformation library.\n* **Matplotlib**: As this library can be useful for different purposes, we will use it mainly for visualization purposes.",
"_____no_output_____"
]
],
[
[
"# Let's load an image\n\nimage = cv2.imread(\"teach_images/yiyo_pereza.png\") # Loads color image\nprint(image)",
"_____no_output_____"
]
],
[
[
"What did I print? What those numbers mean?\n\nMost of the computers interpret an image as an array of color representation of 8 bits, this means, a number between 0 and 255. What you see in the last print statemen is this representation splitted into three different color channels: Red, Green and Blue, represented by a matrix that stores the pixels color representation of the image, this is what your computer 'sees' and that is why artificial vision applications requires some of math to be developed. For our fortune, OpenCV treats with it for us !!\n\n<img src=\"teach_images/image_representation.png\">\n\nThe RGB color space means that you have a 8-bit unsigned representation for every color, below you can find some examples:\n\n\n* <img src=\"teach_images/rgb_255_0_0.png\", width=\"20px\" align=\"left\" top=\"10px\" bottom=\"10px\"> **Pure Red** [255, 0, 0]\n\n* <img src=\"teach_images/rgb_0_255_0.png\" width=\"20px\" align=\"left\" top=\"10px\" bottom=\"10px\"> **Pure Green** [0, 255, 0]\n* <img src=\"teach_images/rgb_255_255_0.png\" width=\"20px\" align=\"left\" top=\"10px\" bottom=\"10px\"> **Mix of Red and Green** [255, 255, 0]\n* <img src=\"teach_images/rgb_0_127_127.png\" width=\"20px\" align=\"left\" top=\"10px\" bottom=\"10px\"> **Attenuate mix of Green and Blue** [0, 127, 127]\n\nKeeping this in mind, it means that the last pixel from Yiyo's picture ```[207 221 220]``` has this color:\n\n* <img src=\"teach_images/rgb_207_221_220.png\" width=\"20px\" align=\"left\" top=\"10px\" bottom=\"10px\"> ** Yiyo's last pixel** [207 221 220]\n\nFinally, most of applications manages the color space as **RGB** but OpenCV manages it as **BGR**, so please **keep this in mind** as it will be very important in the incoming lesson",
"_____no_output_____"
]
],
[
[
"# Let's show the image to see how this array 'looks'\n%matplotlib inline\n\nplt.axis('off')\nplt.imshow(image)\nplt.show()",
"_____no_output_____"
]
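,
[
"# A small sketch (not part of the original lesson) to make the BGR ordering concrete:\n# build a solid-color image directly with NumPy and inspect one of its pixels.\nsolid = np.zeros((100, 100, 3), dtype=np.uint8)\nsolid[:] = (0, 0, 255)  # in OpenCV's BGR order this is pure red\nprint(solid[0, 0])      # -> [  0   0 255]\nplt.axis('off')\nplt.imshow(cv2.cvtColor(solid, cv2.COLOR_BGR2RGB))\nplt.show()",
"_____no_output_____"
]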
],
[
[
"**Why the first signal is plotted in a 'strange' way?**\n\nImages are treated as an array of three channels: Red, Green and Blue. Opencv treats with them as BGR while matplot treats with them as RGB.\n\nLet's solve this",
"_____no_output_____"
]
],
[
[
"image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\nplt.axis('off')\nplt.imshow(image_rgb)\nplt.show()",
"_____no_output_____"
],
[
"# Nice, now let's split the channels\n\nimage = cv2.imread(\"teach_images/yiyo_pereza.png\")\nblue, green, red = cv2.split(image)\n\n# Original one\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.show()\n\n# Splitted channels\nplt.figure(figsize=(20, 20))\nplt.subplot(131)\nplt.axis('off')\nplt.title('blue')\nplt.imshow(blue)\nplt.subplot(132)\nplt.axis('off')\nplt.title('green')\nplt.imshow(green)\nplt.subplot(133)\nplt.title('red')\nplt.axis('off')\nplt.imshow(red)",
"_____no_output_____"
],
[
"# Let's split another image\nimage = cv2.imread(\"teach_images/jerico_2.png\")\nblue, green, red = cv2.split(image)\n\n# Original one\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.show()\n\n# Splitted channels\nplt.figure(figsize=(20, 20))\nplt.subplot(131)\nplt.axis('off')\nplt.title('blue')\nplt.imshow(blue)\nplt.subplot(132)\nplt.axis('off')\nplt.title('green')\nplt.imshow(green)\nplt.subplot(133)\nplt.title('red')\nplt.axis('off')\nplt.imshow(red)",
"_____no_output_____"
],
[
"# Let's split another image\nimage = cv2.imread(\"teach_images/rose.png\")\nblue, green, red = cv2.split(image)\n\n# Original one\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.show()\n\n# Splitted channels\nplt.figure(figsize=(20, 20))\nplt.subplot(131)\nplt.axis('off')\nplt.title('blue')\nplt.imshow(blue)\nplt.subplot(132)\nplt.axis('off')\nplt.title('green')\nplt.imshow(green)\nplt.subplot(133)\nplt.title('red')\nplt.axis('off')\nplt.imshow(red)",
"_____no_output_____"
]
],
[
[
"One of the main RGB split channels are mask, look that the red channel would serve us to get most of the information of the rose's petals. We will review the mask concept later in the lesson.",
"_____no_output_____"
],
[
"## Grayspace Images\n\nFrom our previous job, we know that an image is represented as a RGB matrix, but to process a three columns matrix can be computationally costly and because of this many applications transforms the image in a vector array of 'gray' pixels intensities.\n\nBe careful, a grayscale space vector does not mean black or white, it is a new representation of the RGB space:\n\nY = 0.299 x R + 0.587 x G + 0.114 x B\n\ndue to the cones and receptors in our eyes, we are able to perceive nearly 2x the amount of green than red. And similarly, we notice over twice the amount of red than blue. Thus, we make sure to account for this when converting from RGB to grayscale.",
"_____no_output_____"
]
],
[
[
"# Let's create a grayscale image\n\nimage = cv2.imread(\"teach_images/yiyo_pereza.png\")\n\ngray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\nplt.figure(figsize=(20, 20))\nplt.subplot(121)\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.subplot(122)\nplt.title('gray')\nplt.axis('off')\nplt.imshow(gray, cmap='gray')\n",
"_____no_output_____"
],
[
"# Let's print the gray array\n\nprint(gray)",
"_____no_output_____"
],
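[
"# A quick sanity check of the luminance formula above (a sketch, not part of the original lesson):\n# compute Y = 0.299*R + 0.587*G + 0.114*B for one pixel and compare it with cv2's result.\nb, g, r = image[0, 0].astype(float)  # OpenCV stores the pixel as BGR\nmanual_y = 0.299 * r + 0.587 * g + 0.114 * b\nprint(manual_y, gray[0, 0])          # the two values should agree up to rounding",
"_____no_output_____"
],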
[
"# Let's print the shapes of both color and gray\n\nprint(image.shape)\nprint(gray.shape)",
"_____no_output_____"
]
],
[
[
"## Binary Images\n\nOne of the main application of the grayscale images is to create binary images, that contains **only** two possible pixel values: 0 or 255. The main application of binary images is mask creation to extract a region of interest of the image based on the number of pixels that are equals to zero or higher than zero.\n\nLet's create some examples",
"_____no_output_____"
]
],
[
[
"# Our objective will be to extract a mask with only the letters and number of a license plate\n\nplate = cv2.imread(\"teach_images/license_plate.png\")\ngray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)\n\nplt.figure(figsize=(20, 20))\nplt.subplot(121)\nplt.title('Original Plate')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(plate, cv2.COLOR_BGR2RGB))\nplt.subplot(122)\nplt.title('Gray Plate')\nplt.axis('off')\nplt.imshow(gray, cmap='gray')",
"_____no_output_____"
]
],
[
[
"## Thresholding\n\nThresholding is one of the most common (and basic) segmentation techniques in computer vision and it allows us to separate the foreground (i.e. the objects that we are interested in) from the background of the image.\n\nFor the example, we will use a simple thresholding, We must specify a threshold value T. All pixel intensities below T are set to 255. And all pixel intensities greater than T are set to 0.\n\nKeep in mind that our main objetive is to extract the letter and numbers of the plate, so we want to extract intensities of 255 which are 'pure' black.",
"_____no_output_____"
]
],
[
[
"# Let's look some representations of some aparts of the image\n\n# The top of the image, which is only yellow\nprint(gray[0:10])\n\nplt.imshow(gray[0:10], cmap='gray')",
"_____no_output_____"
],
[
"# Let's look some representations of some aparts of the image\n\n# One of the letters (mainly black)\nprint(gray[40: 110, 110:150])\n\nplt.imshow(gray[40: 110, 110:150], cmap='gray')",
"_____no_output_____"
],
[
"# Let's look some representations of some aparts of the image\n\n# One of the noisy region\nprint(gray[100: 170, 80:150])\nplt.subplot(121)\nplt.title('Original Plate')\nplt.imshow(cv2.cvtColor(plate[100: 170, 80:150], cv2.COLOR_BGR2RGB))\nplt.subplot(122)\nplt.title('Gray Plate')\nplt.imshow(gray[100: 170, 80:150], cmap='gray')",
"_____no_output_____"
],
[
"# Let's apply a threshold\n\n# If a pixel value is greater than our threshold (in this case,\n# 213), we set it to be BLACK, otherwise it is WHITE.\n# REMEMBER: Black regions are in the order of 213\n(T, thresh_1) = cv2.threshold(gray, 213, 255, cv2.THRESH_BINARY_INV)\n\n# If a pixel value is greater than our threshold (in this case,\n# 212), we set it to be BLACK, otherwise it is WHITE.\n(T, thresh_2) = cv2.threshold(gray, 212, 255, cv2.THRESH_BINARY_INV)\n\n# If a pixel value is greater than our threshold (in this case,\n# 80), we set it to be BLACK, otherwise it is WHITE.\n(T, thresh_3) = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)\n\n# If a pixel value is greater than our threshold (in this case,\n# 40), we set it to be BLACK, otherwise it is WHITE.\n(T, thresh_4) = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)\n\nplt.subplot(221)\nplt.title('Greater than 213')\nplt.axis('off')\nplt.imshow(thresh_1, cmap='gray')\nplt.subplot(222)\nplt.title('Greater than 212')\nplt.axis('off')\nplt.imshow(thresh_2, cmap='gray')\nplt.subplot(223)\nplt.title('Greater than 80')\nplt.axis('off')\nplt.imshow(thresh_3, cmap='gray')\nplt.subplot(224)\nplt.title('Greater than 80')\nplt.axis('off')\nplt.imshow(thresh_4, cmap='gray')",
"_____no_output_____"
]
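,
[
"# An optional alternative (a sketch, not the approach used in this lesson): instead of hand-tuning T,\n# Otsu's method picks a threshold automatically from the histogram of the grayscale plate.\n(T_otsu, thresh_otsu) = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)\nprint('Otsu threshold:', T_otsu)\nplt.axis('off')\nplt.imshow(thresh_otsu, cmap='gray')",
"_____no_output_____"
]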
],
[
[
"For this particular application, a threshold of 40 takes off most of our noise, nice!",
"_____no_output_____"
],
[
"## Dilation and Erosion\n\nMorphological transformations have a wide array of uses, i.e. :\n\n* Removing noise\n* Isolation of individual elements and joining disparate elements in an image.\n* Finding of intensity bumps or holes in an image\n\nLet's look some of them.\n\n* **Dilation**: As its name suggests, it 'dilates' a binary region transforming black pixels into white pixels\n\n <img src=\"teach_images/dilation.png\">\n\n* **Erosion**: The opposite to dilation, it transforms white pixels into black pixels.\n\n<img src=\"teach_images/dilation.png\">\n\n* **Opening**: Opening is just another name of erosion followed by dilation\n\n<img src=\"teach_images/opening.png\">\n\n* **Closing**: Closing is reverse of Opening, Dilation followed by Erosion\n\n<img src=\"teach_images/closing.png\">\n\nNow that you know the basics of image transformation, lets eliminate the noise",
"_____no_output_____"
]
],
[
[
"# Let's use an opening transformation to eliminate most of the noise\n\nkernel = np.ones((5,5),np.uint8)\nopening = cv2.morphologyEx(thresh_4, cv2.MORPH_OPEN, kernel)\n\nplt.subplot(121)\nplt.title('Original thresholded')\nplt.axis('off')\nplt.imshow(thresh_4, cmap='gray')\nplt.subplot(122)\nplt.title('Closing')\nplt.axis('off')\nplt.imshow(opening, cmap='gray')",
"_____no_output_____"
]
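,
[
"# The section above also describes plain erosion and dilation; here is a minimal sketch of those two\n# basic operators applied to the same thresholded plate (an addition, shown only for comparison).\neroded = cv2.erode(thresh_4, kernel, iterations=1)\ndilated = cv2.dilate(thresh_4, kernel, iterations=1)\n\nplt.subplot(121)\nplt.title('Eroded')\nplt.axis('off')\nplt.imshow(eroded, cmap='gray')\nplt.subplot(122)\nplt.title('Dilated')\nplt.axis('off')\nplt.imshow(dilated, cmap='gray')",
"_____no_output_____"
]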
],
[
[
"Nice, most of the noise was taked off!! Now let's begin with a masking job to extract ROIS from our original image\n\n# Masking\n\n'Masking' is the process of extract a Region of Interest from our images, this is basically approached using a bitwise-and operation:\n```\n 0101\nAND 0011\n = 0001\n```\nIn OpenCV, what we will have is a mask with only two possible values: 0 or 255 (remember, a binary image). Let's look with a concrete example from our initial splitted RGB splitted rose. The main goal is to extract its petals",
"_____no_output_____"
]
],
[
[
"# Let's split another image\nrose = cv2.imread(\"teach_images/rose.png\")\nblue, green, red = cv2.split(rose)\n\n# Original one\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(rose, cv2.COLOR_BGR2RGB))\nplt.show()\n\n# Splitted channels\nplt.figure(figsize=(20, 20))\nplt.subplot(131)\nplt.axis('off')\nplt.title('blue')\nplt.imshow(blue)\nplt.subplot(132)\nplt.axis('off')\nplt.title('green')\nplt.imshow(green)\nplt.subplot(133)\nplt.title('red')\nplt.axis('off')\nplt.imshow(red)",
"_____no_output_____"
],
[
"# Red channel is the most useful for our purposes\n\n(T, thresh_rose) = cv2.threshold(red, 40, 255, cv2.THRESH_BINARY)",
"_____no_output_____"
],
[
"plt.subplot(131)\nplt.title('Original')\nplt.axis('off')\nplt.imshow(rose)\nplt.subplot(132)\nplt.title('Red')\nplt.axis('off')\nplt.imshow(red)\nplt.subplot(133)\nplt.title('thresholded')\nplt.axis('off')\nplt.imshow(thresh_rose, cmap='gray')",
"_____no_output_____"
],
[
"# Let's use an some morphological transformations to eliminate most of the noise\n\nkernel_1 = np.ones((50,50),np.uint8)\nopening_rose = cv2.morphologyEx(thresh_rose, cv2.MORPH_OPEN, kernel_1)\nkernel_2 = np.ones((5,5),np.uint8)\nclosing = cv2.morphologyEx(opening_rose, cv2.MORPH_CLOSE, kernel_1)\n\nplt.figure(figsize=(20, 20))\nplt.subplot(231)\nplt.title('Original')\nplt.axis('off')\nplt.imshow(rose)\nplt.subplot(232)\nplt.title('Red')\nplt.axis('off')\nplt.imshow(red)\nplt.subplot(233)\nplt.title('thresholded')\nplt.axis('off')\nplt.imshow(thresh_rose, cmap='gray')\nplt.subplot(234)\nplt.title('Opening')\nplt.axis('off')\nplt.imshow(opening_rose, cmap='gray')\nplt.subplot(235)\nplt.title('Closing')\nplt.axis('off')\nplt.imshow(closing, cmap='gray')",
"_____no_output_____"
],
[
"# Now, let's implement a masking!!\nclone = rose.copy()\nmasked = cv2.bitwise_and(clone, clone, mask=closing)\n\nplt.subplot(221)\nplt.title('Original')\nplt.axis('off')\nplt.imshow(rose)\nplt.subplot(222)\nplt.title('Red')\nplt.axis('off')\nplt.imshow(red)\nplt.subplot(223)\nplt.title('Mask')\nplt.axis('off')\nplt.imshow(closing, cmap='gray')\nplt.subplot(224)\nplt.title('Masked')\nplt.axis('off')\nplt.imshow(masked)",
"_____no_output_____"
]
],
[
[
"Nice, isn't it? :)",
"_____no_output_____"
],
[
"## Contours\n\nThere is another basic Image processing concept to review: Contours. Contours are simply the outlines of an object in an image. If the image is simple enough, we might be able to get away with using the grayscale image as an input. if not, we will need to apply some transformation and artificial vision techniques to obtain a properly objetive image as we made with our rose masking.\n\nLet's define an initial goal, we want to know in a tetris game how many pieces we need to play at time:\n\n<img src=\"teach_images/tetris_goal.png\">\n\nFor the example above, there are three pieces to be played properly to keep our 'live' in the game. Let's begin.\n",
"_____no_output_____"
]
],
[
[
"# Loads the image\n\ntetris = cv2.imread(\"teach_images/tetris_1.png\")\ngray = cv2.imread(\"teach_images/tetris_1.png\", 0)\n\n# Applies a threshold\n(T, thresh_tetris) = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY)\n\n# Applies a closing\nkernel = np.ones((5,5),np.uint8)\nclosing = cv2.morphologyEx(thresh_tetris, cv2.MORPH_CLOSE, kernel)\n\n# Find contours \ncnts = cv2.findContours(closing.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]\nclone = gray.copy()\nclone = cv2.cvtColor(clone, cv2.COLOR_GRAY2BGR)\n\n# draw the contours\n\ncv2.drawContours(clone, cnts, -1, (0, 0, 255), 2)\nprint(\"Found {} contours\".format(len(cnts)))\n\n# Shows initial results\n\nplt.figure(figsize=(20, 20))\nplt.subplot(231)\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(tetris, cv2.COLOR_BGR2RGB))\nplt.subplot(232)\nplt.title('Gray')\nplt.axis('off')\nplt.imshow(gray, cmap='gray')\nplt.subplot(233)\nplt.title('Thresholded')\nplt.axis('off')\nplt.imshow(thresh_tetris, cmap='gray')\nplt.subplot(234)\nplt.title('Closing')\nplt.axis('off')\nplt.imshow(closing, cmap='gray')\nplt.subplot(235)\nplt.title('Contours')\nplt.axis('off')\nplt.imshow(clone)",
"_____no_output_____"
]
],
[
[
"Look that we have drawn all the contours of our image, but we have found five of them, how can we know which of them are tetris pieces?\n\nLet's look some more concepts:\n\n### Area: \n\nThe number of pixels that reside inside the contour outline. We will expect a fixed max and min area for our tetris pieces.\n\n### Aspect Ratio\n\nThe actual definition of the a contour’s aspect ratio is as follows:\n\n```\naspect ratio = image width / image height\n```\n\nWe will expect this aspect ratio: \n\n```[0.2, 0.4] and [0.6, 1.7]```\n\nLet's add just some line to inclue the area and the aspect ratio",
"_____no_output_____"
]
],
[
[
"# Loads the image\n\ntetris = cv2.imread(\"teach_images/tetris_1.png\")\ngray = cv2.imread(\"teach_images/tetris_1.png\", 0)\n\n# Applies a threshold\n(T, thresh_tetris) = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY)\n\n# Applies a closing\nkernel = np.ones((5,5),np.uint8)\nclosing = cv2.morphologyEx(thresh_tetris, cv2.MORPH_CLOSE, kernel)\n\n# Find contours \ncontours = cv2.findContours(closing.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]\nclone = gray.copy()\nclone = cv2.cvtColor(clone, cv2.COLOR_GRAY2BGR)\ncnts = []\n\nfor (i, c) in enumerate(contours):\n area = cv2.contourArea(c)\n (x, y, w, h) = cv2.boundingRect(c)\n checked = 0\n \n aspect_ratio = w / float(h)\n \n checked = (area>400 and area<700) \n checked = checked and ((aspect_ratio>=0.2 and aspect_ratio<=0.4) or (aspect_ratio>=0.6 and aspect_ratio<=1.8))\n \n print(\"contour number: {} area: {} aspect_ratio: {}\".format(i, area, aspect_ratio))\n print(checked)\n \n if checked:\n cnts.append(c)\n\n# draw the contours\ncv2.drawContours(clone, cnts, -1, (0, 0, 255), 2)\n# Shows initial results\n\nplt.figure(figsize=(20, 20))\nplt.subplot(231)\nplt.title('Original')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(tetris, cv2.COLOR_BGR2RGB))\nplt.subplot(232)\nplt.title('Gray')\nplt.axis('off')\nplt.imshow(gray, cmap='gray')\nplt.subplot(233)\nplt.title('Thresholded')\nplt.axis('off')\nplt.imshow(thresh_tetris, cmap='gray')\nplt.subplot(234)\nplt.title('Closing')\nplt.axis('off')\nplt.imshow(closing, cmap='gray')\nplt.subplot(235)\nplt.title('Contours')\nplt.axis('off')\nplt.imshow(clone)\n\n# Prints the result\n\nprint(\"The number of pieces to play is {}\".format(len(cnts)))",
"_____no_output_____"
]
],
[
[
"## Image Comparing\n\nFinally, let's apply all that we have seen for image comparing.\n",
"_____no_output_____"
]
],
[
[
"# Image compare script\n\ntemplate = cv2.imread(\"teach_images/pycon_template.png\")\ntesting = cv2.imread(\"teach_images/pycon_testing.png\")\n\ntemplate_gray = cv2.cvtColor(template, cv2.COLOR_RGB2GRAY)\ntesting_gray = cv2.cvtColor(testing, cv2.COLOR_RGB2GRAY)\n\n# Applies a bitwise XOR operation that will return one only if some pixel is different\nxor = np.bitwise_xor(template_gray, testing_gray)\nones = cv2.countNonZero(xor)\n\nprint(ones)\n\n# Let's paint the differences, if there is any\n\nif ones > 0:\n result = cv2.absdiff(template, testing)\n gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)\n ret, thresh = cv2.threshold(gray, 1, 255, 0)\n \n # Find contours\n cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[1]\n cv2.drawContours(result, cnts, -1, (0, 0, 255), 1)\n\n\nplt.figure(figsize=(30, 30))\nplt.subplot(221)\nplt.title('Template')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(template, cv2.COLOR_BGR2RGB))\nplt.subplot(222)\nplt.title('Testing')\nplt.axis('off')\nplt.imshow(cv2.cvtColor(template, cv2.COLOR_BGR2RGB))\nplt.subplot(223)\nplt.title('Thresholded')\nplt.axis('off')\nplt.imshow(thresh, cmap='gray')\nplt.subplot(224)\nplt.title('Contours')\nplt.axis('off')\nplt.imshow(result, cmap='gray')",
"_____no_output_____"
]
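,
[
"# A small follow-up sketch (not part of the original script): express the XOR count as a fraction of\n# the image, which gives a rough 'how different are these pictures' score.\ntotal_pixels = template_gray.shape[0] * template_gray.shape[1]\nprint('Different pixels: {} of {} ({:.2%})'.format(ones, total_pixels, ones / total_pixels))",
"_____no_output_____"
]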
],
[
[
"Nice!!! You have drawn the images differences.\n\nHope you have enjoyed this lesson :)\n\nAll the best,\n\nJosé García",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
cb865e59b7d47eda0e5a1179df04fd09057ef9e5
| 2,230 |
ipynb
|
Jupyter Notebook
|
fuel_tutorial/fuel_server.ipynb
|
raghavchalapathy/summerschool2015
|
c96da4af353fc1b0c1a7e3a08863c6de89072b19
|
[
"BSD-3-Clause"
] | 437 |
2015-07-16T22:14:15.000Z
|
2019-02-02T23:47:59.000Z
|
fuel_tutorial/fuel_server.ipynb
|
mila-iqia/summerschool2015
|
c96da4af353fc1b0c1a7e3a08863c6de89072b19
|
[
"BSD-3-Clause"
] | 21 |
2015-07-03T00:39:53.000Z
|
2016-04-27T07:56:40.000Z
|
fuel_tutorial/fuel_server.ipynb
|
mila-udem/summerschool2015
|
c96da4af353fc1b0c1a7e3a08863c6de89072b19
|
[
"BSD-3-Clause"
] | 201 |
2015-07-13T17:12:31.000Z
|
2019-01-09T12:11:07.000Z
| 25.930233 | 77 | 0.564126 |
[
[
[
"# Fuel data processing server\n\nThis notebook goes in pair with the `fuel_tutorial` notebook.",
"_____no_output_____"
]
],
[
[
"import time\n\nfrom fuel.datasets import IndexableDataset\nfrom fuel.schemes import ShuffledScheme\nfrom fuel.server import start_server\nfrom fuel.streams import DataStream\nfrom fuel.transformers import Transformer\n\n\nclass Bottleneck(Transformer):\n def __init__(self, data_stream, **kwargs):\n self.slowdown = kwargs.pop('slowdown', 0)\n super(Bottleneck, self).__init__(\n data_stream, data_stream.produces_examples, **kwargs)\n\n def get_data(self, request=None):\n if request is not None:\n raise ValueError\n time.sleep(self.slowdown)\n return next(self.child_epoch_iterator)\n\n\ndef create_data_stream(slowdown=0):\n dataset = IndexableDataset({'features': [[0] * 128] * 1000})\n iteration_scheme = ShuffledScheme(examples=1000, batch_size=100)\n data_stream = Bottleneck(\n data_stream=DataStream.default_stream(\n dataset=dataset, iteration_scheme=iteration_scheme),\n slowdown=slowdown)\n return data_stream",
"_____no_output_____"
],
[
"start_server(create_data_stream(0.005))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
]
] |
cb8665d061cb1f39517587eb63a119004302eace
| 355,413 |
ipynb
|
Jupyter Notebook
|
module4/assignment_applied_modeling_1.ipynb
|
skredenmathias/DS-Unit-2-Applied-Modeling
|
bffb71f317194a72856eb3945b134483af4e806d
|
[
"MIT"
] | 1 |
2019-12-16T23:02:12.000Z
|
2019-12-16T23:02:12.000Z
|
module4/assignment_applied_modeling_1.ipynb
|
skredenmathias/DS-Unit-2-Applied-Modeling
|
bffb71f317194a72856eb3945b134483af4e806d
|
[
"MIT"
] | null | null | null |
module4/assignment_applied_modeling_1.ipynb
|
skredenmathias/DS-Unit-2-Applied-Modeling
|
bffb71f317194a72856eb3945b134483af4e806d
|
[
"MIT"
] | null | null | null | 78.302049 | 116,524 | 0.629063 |
[
[
[
"<a href=\"https://colab.research.google.com/github/skredenmathias/DS-Unit-2-Applied-Modeling/blob/master/module4/assignment_applied_modeling_1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 3, Module 1*\n\n---\n\n\n# Define ML problems\n\nYou will use your portfolio project dataset for all assignments this sprint.\n\n## Assignment\n\nComplete these tasks for your project, and document your decisions.\n\n- [ ] Choose your target. Which column in your tabular dataset will you predict?\n- [ ] Is your problem regression or classification?\n- [ ] How is your target distributed?\n - Classification: How many classes? Are the classes imbalanced?\n - Regression: Is the target right-skewed? If so, you may want to log transform the target.\n- [ ] Choose your evaluation metric(s).\n - Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?\n - Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?\n- [ ] Choose which observations you will use to train, validate, and test your model.\n - Are some observations outliers? Will you exclude them?\n - Will you do a random split or a time-based split?\n- [ ] Begin to clean and explore your data.\n- [ ] Begin to choose which features, if any, to exclude. Would some features \"leak\" future information?\n\nIf you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.",
"_____no_output_____"
]
],
[
[
"# I will use the worlds_2019 dataset for now.\nimport pandas as pd\n\n!git clone https://github.com/skredenmathias/DS-Unit-1-Build.git\npath = '/content/DS-Unit-1-Build/'\nworlds_2019 = pd.read_excel(path+'2019-summer-match-data-OraclesElixir-2019-11-10.xlsx')\n(print(worlds_2019.shape))\nworlds_2019.head()",
"Cloning into 'DS-Unit-1-Build'...\nremote: Enumerating objects: 35, done.\u001b[K\nremote: Counting objects: 100% (35/35), done.\u001b[K\nremote: Compressing objects: 100% (34/34), done.\u001b[K\nremote: Total 35 (delta 11), reused 0 (delta 0), pack-reused 0\u001b[K\nUnpacking objects: 100% (35/35), done.\n(1428, 98)\n"
]
],
[
[
"Choose your target. Which column in your tabular dataset will you predict?",
"_____no_output_____"
]
],
[
[
"df = worlds_2019\ntarget = df['result']\n# Initially I seek if I can predict if a team will win or lose.\n# The goal is to see how much variance is explained by each factor.\n# / see how much certain factors contribute to the result.",
"_____no_output_____"
],
[
"# From here I might look at questions such as:\n\n# How do win conditions change per patch?\n\n# What are the win percentages for red / blue side? Are different objectives\n# more important to one side?\n\n# See how different positions have different degrees of impact based on teams.\n\n# ",
"_____no_output_____"
]
],
[
[
"Is your problem regression or classification?",
"_____no_output_____"
]
],
[
[
"# Classification.",
"_____no_output_____"
]
],
[
[
" How is your target distributed?\nClassification: How many classes? Are the classes imbalanced?",
"_____no_output_____"
]
],
[
[
"target.value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
" Choose your evaluation metric(s).\n\nClassification: Is your majority class frequency >= 50% and < 70% ? \nIf so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. \n\nWhat evaluation metric will you choose, in addition to or instead of accuracy?",
"_____no_output_____"
]
],
[
[
"# Accuracy. What others could I choose?",
"_____no_output_____"
]
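,
[
"# A hedged sketch of two common alternatives for a roughly balanced binary target. Left commented out\n# because the pipeline, y_val and y_pred are only defined further down in this notebook.\n# from sklearn.metrics import classification_report, roc_auc_score\n# print(classification_report(y_val, y_pred))\n# print('ROC AUC:', roc_auc_score(y_val, pipeline.predict_proba(X_val)[:, 1]))",
"_____no_output_____"
]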
],
[
[
" Choose which observations you will use to train, validate, and test your model.\n\nAre some observations outliers? Will you exclude them?\n\nWill you do a random split or a time-based split?",
"_____no_output_____"
]
],
[
[
"# Depends on ceteris paribus, other things equal:\n# I might have to keep it on the same patch.\n# Feature importances will differ across regions & tournaments & patches.\n# Gamelength might be a leak?\n\n# Should I also make a separate df with all 5 players grouped as a team w/\n# most of the stats retained?\n\n# Outliers:\n# Gamelength beyond 50 minutes. I can filter these out if needed.\n\n# Leaks / uninteresting columns:\n# gameid, url, (league), (split), date, week, game, (patchno), playerid, \n# (position), (team), gamelength?, total gold?, firsttothreetowers?,\n# teamtowerkills?, opptowerkills?, ",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
]
],
[
[
" Begin to clean and explore your data.",
"_____no_output_____"
]
],
[
[
"# Lot's of cleaning done in the unit 1 build notebook.\n# Will focus on exploration here for now.\n\ndf['gamelength'].plot.hist() # We see a small outlier here.",
"_____no_output_____"
],
[
"df['gamelength'].describe()",
"_____no_output_____"
],
[
"import seaborn as sns\n# Note: Will be other outliers in the big dataset.\n# Couldn't upload full dataset to Git, 80mb is too big.\n# Why is it so big, it's just a text file?\nsns.distplot(df['gamelength']);",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Fast first model",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\ntrain, val = train_test_split(df, test_size=.25)",
"_____no_output_____"
],
[
"!pip install category_encoders\nimport category_encoders as ce",
"Collecting category_encoders\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/a0/52/c54191ad3782de633ea3d6ee3bb2837bda0cf3bc97644bb6375cf14150a0/category_encoders-2.1.0-py2.py3-none-any.whl (100kB)\n\r\u001b[K |███▎ | 10kB 18.7MB/s eta 0:00:01\r\u001b[K |██████▌ | 20kB 6.6MB/s eta 0:00:01\r\u001b[K |█████████▉ | 30kB 9.1MB/s eta 0:00:01\r\u001b[K |█████████████ | 40kB 5.8MB/s eta 0:00:01\r\u001b[K |████████████████▍ | 51kB 7.0MB/s eta 0:00:01\r\u001b[K |███████████████████▋ | 61kB 8.3MB/s eta 0:00:01\r\u001b[K |██████████████████████▉ | 71kB 9.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████▏ | 81kB 10.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 92kB 11.5MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 102kB 6.0MB/s \n\u001b[?25hRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.21.3)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.25.3)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.5.1)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.17.4)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.3.3)\nRequirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.10.2)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders) (0.14.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2018.9)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2.6.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders) (1.12.0)\nInstalling collected packages: category-encoders\nSuccessfully installed category-encoders-2.1.0\n"
],
[
"from sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"target = 'result'",
"_____no_output_____"
],
[
"X_train = train.drop(columns=target)\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]\n# X_test = test.drop(columns=target)\n# y_test = test[target]",
"_____no_output_____"
],
[
"X_train.shape, X_val.shape",
"_____no_output_____"
],
[
"pipeline = make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(),\n RandomForestClassifier(random_state=42, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)",
"/usr/local/lib/python3.6/dist-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n"
],
[
"# Get validation accuracy\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy:', pipeline.score(X_val, y_val)) # We've got leakage!",
"Validation Accuracy: 1.0\n"
],
[
"print('X_train shape before encoding', X_train.shape)\n\nencoder = pipeline.named_steps['ordinalencoder']\nencoded = encoder.transform(X_train)\n\nprint('X_train shape after encoding', encoded.shape)",
"X_train shape before encoding (1071, 97)\nX_train shape after encoding (1071, 97)\n"
],
[
"# Plot feature importances to find leak\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Get feature importances\nrf = pipeline.named_steps['randomforestclassifier']\nimportances = pd.Series(rf.feature_importances_, encoded.columns)\n\n# Plot top n feature importances\nn = 20\nplt.figure(figsize=(10,n/2))\nplt.title(f'Top {n} features')\nimportances.sort_values()[-n:].plot.barh(color='grey')",
"_____no_output_____"
]
],
[
[
"# XGBoost",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBClassifier\n\npipeline = make_pipeline(\n ce.OrdinalEncoder(),\n XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy', accuracy_score(y_val, y_pred))",
"Validation Accuracy 1.0\n"
]
],
[
[
"# Partial dependence plots",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n# plt.rcParams['figure.dpi] = 72",
"_____no_output_____"
],
[
"!pip install pdpbox\n!pip install shap\nfrom pdpbox.pdp import pdp_isolate, pdp_plot\n\nfeature = 'teamtowerkills'\n\nisolated = pdp_isolate(\n model=pipeline,\n dataset=X_val,\n model_features=X_val.columns,\n feature=feature,\n num_grid_points=50\n)",
"Requirement already satisfied: pdpbox in /usr/local/lib/python3.6/dist-packages (0.2.0)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from pdpbox) (0.14.1)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from pdpbox) (0.21.3)\nRequirement already satisfied: matplotlib>=2.1.2 in /usr/local/lib/python3.6/dist-packages (from pdpbox) (3.1.2)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from pdpbox) (1.3.3)\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from pdpbox) (0.25.3)\nRequirement already satisfied: psutil in /usr/local/lib/python3.6/dist-packages (from pdpbox) (5.4.8)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pdpbox) (1.17.4)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (2.6.1)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (1.1.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.2->pdpbox) (2.4.5)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->pdpbox) (2018.9)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib>=2.1.2->pdpbox) (1.12.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=2.1.2->pdpbox) (42.0.2)\nRequirement already satisfied: shap in /usr/local/lib/python3.6/dist-packages (0.33.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from shap) (1.17.4)\nRequirement already satisfied: tqdm>4.25.0 in /usr/local/lib/python3.6/dist-packages (from shap) (4.28.1)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from shap) (1.3.3)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from shap) (0.21.3)\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from shap) (0.25.3)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->shap) (0.14.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2018.9)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas->shap) (2.6.1)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.6.1->pandas->shap) (1.12.0)\n"
],
[
"pdp_plot(isolated, feature_name=feature, plot_lines=True,\n frac_to_plot=0.1) # leakage\nplt.xlim(5, 12);",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Permutation importances",
"_____no_output_____"
]
],
[
[
"!pip install eli5\nimport eli5\nfrom eli5.sklearn import PermutationImportance",
"Collecting eli5\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/97/2f/c85c7d8f8548e460829971785347e14e45fa5c6617da374711dec8cb38cc/eli5-0.10.1-py2.py3-none-any.whl (105kB)\n\r\u001b[K |███ | 10kB 23.8MB/s eta 0:00:01\r\u001b[K |██████▏ | 20kB 6.6MB/s eta 0:00:01\r\u001b[K |█████████▎ | 30kB 9.4MB/s eta 0:00:01\r\u001b[K |████████████▍ | 40kB 6.1MB/s eta 0:00:01\r\u001b[K |███████████████▌ | 51kB 7.5MB/s eta 0:00:01\r\u001b[K |██████████████████▋ | 61kB 8.8MB/s eta 0:00:01\r\u001b[K |█████████████████████▊ | 71kB 10.0MB/s eta 0:00:01\r\u001b[K |████████████████████████▊ | 81kB 11.2MB/s eta 0:00:01\r\u001b[K |███████████████████████████▉ | 92kB 12.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████████ | 102kB 9.9MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 112kB 9.9MB/s \n\u001b[?25hRequirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.21.3)\nRequirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (1.17.4)\nRequirement already satisfied: attrs>16.0.0 in /usr/local/lib/python3.6/dist-packages (from eli5) (19.3.0)\nRequirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from eli5) (2.10.3)\nRequirement already satisfied: tabulate>=0.7.7 in /usr/local/lib/python3.6/dist-packages (from eli5) (0.8.6)\nRequirement already satisfied: graphviz in /usr/local/lib/python3.6/dist-packages (from eli5) (0.10.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from eli5) (1.12.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from eli5) (1.3.3)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.18->eli5) (0.14.1)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->eli5) (1.1.1)\nInstalling collected packages: eli5\nSuccessfully installed eli5-0.10.1\n"
],
[
"transformers = make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(strategy='median')\n)\n\nX_train_transformed = transformers.fit_transform(X_train)\nX_val_transformed = transformers.transform(X_val)\n\nmodel = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)\nmodel.fit(X_train_transformed, y_train)",
"_____no_output_____"
],
[
"permuter = PermutationImportance(\n model,\n scoring='accuracy',\n n_iter=10,\n random_state=42\n)\n\npermuter.fit(X_val_transformed, y_val)",
"_____no_output_____"
],
[
"feature_names = X_val.columns.tolist()\npd.Series(permuter.feature_importances_, feature_names).sort_values()",
"_____no_output_____"
],
[
"eli5.show_weights(\n permuter,\n top=None,\n feature_names=feature_names\n)",
"_____no_output_____"
]
],
[
[
"# Dropping 'teamtowerkills' & 'opptowerkills'",
"_____no_output_____"
]
],
[
[
"def wrangle(X):\n X = X.copy()\n\n # Drop teamtowerkills & opptowerkills\n model_breakers = ['teamtowerkills','opptowerkills']\n X = X.drop(columns = model_breakers)\n return X\n\n# train = wrangle(train)\nval = wrangle(val)",
"_____no_output_____"
],
[
"val.columns",
"_____no_output_____"
],
[
"train.shape, val.shape",
"_____no_output_____"
]
],
[
[
"# Running XGBoost again",
"_____no_output_____"
]
],
[
[
"X_train = train.drop(columns=target)\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]",
"_____no_output_____"
],
[
"pipeline = make_pipeline(\n ce.OrdinalEncoder(),\n XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy', accuracy_score(y_val, y_pred))",
"_____no_output_____"
]
],
[
[
"# Feature importances, again",
"_____no_output_____"
]
],
[
[
"transformers = make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(strategy='median')\n)\n\nX_train_transformed = transformers.fit_transform(X_train)\nX_val_transformed = transformers.transform(X_val)\n\nmodel = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)\nmodel.fit(X_train_transformed, y_train)",
"_____no_output_____"
],
[
"permuter = PermutationImportance(\n model,\n scoring='accuracy',\n n_iter=10,\n random_state=42\n)\n\npermuter.fit(X_val_transformed, y_val)",
"_____no_output_____"
],
[
"feature_names = X_val.columns.tolist()\npd.Series(permuter.feature_importances_, feature_names).sort_values()",
"_____no_output_____"
],
[
"eli5.show_weights(\n permuter,\n top=None,\n feature_names=feature_names\n)",
"_____no_output_____"
]
],
[
[
"# Dropping a fuckton of columns, then repeat",
"_____no_output_____"
]
],
[
[
"# Leaks / uninteresting columns:\n# gameid, url, (league), (split), date, week, game, (patchno), playerid, \n# (position), (team), gamelength?, total gold?, firsttothreetowers?,\n# teamtowerkills?, opptowerkills?, ",
"_____no_output_____"
],
[
"def wrangle2(X):\n X = X.copy()\n\n # Drops\n low_importance = ['gameid', 'url', 'league', 'split', 'date', 'week',\n 'patchno', 'position', 'gamelength']\n X = X.drop(columns = low_importance)\n return X\n\n# train = wrangle(train)\ntrain = wrangle2(train)\nval = wrangle2(val)",
"_____no_output_____"
],
[
"train.columns",
"_____no_output_____"
],
[
"X_train = train.drop(columns=target)\ny_train = train[target]\nX_val = val.drop(columns=target)\ny_val = val[target]",
"_____no_output_____"
],
[
"pipeline = make_pipeline(\n ce.OrdinalEncoder(),\n XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)\n)\n\npipeline.fit(X_train, y_train)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\ny_pred = pipeline.predict(X_val)\nprint('Validation Accuracy', accuracy_score(y_val, y_pred))",
"_____no_output_____"
]
],
[
[
"# Feature importances, iterations",
"_____no_output_____"
]
],
[
[
"transformers = make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(strategy='median')\n)\n\nX_train_transformed = transformers.fit_transform(X_train)\nX_val_transformed = transformers.transform(X_val)\n\nmodel = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)\nmodel.fit(X_train_transformed, y_train)",
"_____no_output_____"
],
[
"permuter = PermutationImportance(\n model,\n scoring='accuracy',\n n_iter=10,\n random_state=42\n)\n\npermuter.fit(X_val_transformed, y_val)",
"_____no_output_____"
],
[
"feature_names = X_val.columns.tolist()\npd.Series(permuter.feature_importances_, feature_names).sort_values()",
"_____no_output_____"
],
[
"eli5.show_weights(\n permuter,\n top=None,\n feature_names=feature_names\n)",
"_____no_output_____"
]
],
[
[
"# Shapley plot",
"_____no_output_____"
]
],
[
[
"row = X_val.iloc[[0]]",
"_____no_output_____"
],
[
"y_val.iloc[[0]]",
"_____no_output_____"
],
[
"row #model.predict(row)",
"_____no_output_____"
],
[
"import shap\nexplainer = shap.TreeExplainer(model)\nshap_values = explainer.shap_values(row)\n\nshap.initjs()\nshap.force_plot(\n base_value = explainer.expected_value,\n shap_values = shap_values,\n features=row\n)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Using importances for feature selection",
"_____no_output_____"
]
],
[
[
"X_train.shape",
"_____no_output_____"
],
[
"minimum_importance = 0\nmask = permuter.feature_importances_ > minimum_importance",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
cb868138a429a2f74aef6187016f541638bea6be
| 3,345 |
ipynb
|
Jupyter Notebook
|
seminario1/04. Strings.ipynb
|
jmarinma/Julia-seminarios
|
114ae06bfd9c3a889c87dc289cc41620553c9c7b
|
[
"MIT"
] | null | null | null |
seminario1/04. Strings.ipynb
|
jmarinma/Julia-seminarios
|
114ae06bfd9c3a889c87dc289cc41620553c9c7b
|
[
"MIT"
] | 1 |
2021-03-11T11:53:43.000Z
|
2021-03-11T11:53:43.000Z
|
seminario1/04. Strings.ipynb
|
jmarinma/Julia-seminarios
|
114ae06bfd9c3a889c87dc289cc41620553c9c7b
|
[
"MIT"
] | 1 |
2021-03-08T16:44:44.000Z
|
2021-03-08T16:44:44.000Z
| 20.648148 | 187 | 0.513303 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
cb869132f7c207e92bda7996e6b6d688ca242eda
| 3,236 |
ipynb
|
Jupyter Notebook
|
Assignments/AssignmentTemplate.ipynb
|
gavin971/ClimateModeling_courseware
|
9c8b446d6a274d88868c24570155f50c32d27b89
|
[
"MIT"
] | 4 |
2017-12-06T04:36:30.000Z
|
2020-12-02T13:16:02.000Z
|
Assignments/AssignmentTemplate.ipynb
|
gavin971/ClimateModeling_courseware
|
9c8b446d6a274d88868c24570155f50c32d27b89
|
[
"MIT"
] | null | null | null |
Assignments/AssignmentTemplate.ipynb
|
gavin971/ClimateModeling_courseware
|
9c8b446d6a274d88868c24570155f50c32d27b89
|
[
"MIT"
] | 4 |
2018-08-09T04:03:45.000Z
|
2021-12-20T11:28:17.000Z
| 33.708333 | 288 | 0.627627 |
[
[
[
"# [ATM 623: Climate Modeling](../index.ipynb)\n[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany\n# Assignment title",
"_____no_output_____"
],
[
"### About these notes:\n\nThis document uses the interactive [`IPython notebook`](http://ipython.org/notebook.html) format (now also called [`Jupyter`](https://jupyter.org)). The notes can be accessed in several different ways:\n\n- The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware\n- The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb)\n- A complete snapshot of the notes as of May 2015 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html).\n\nMany of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab",
"_____no_output_____"
],
[
"Some text.\n\n## Your assigment\n\n1. Do the assignment.\n3. Write up your answers (including text, code and figures) in a new IPython notebook. *Try to make sure that your notebook runs cleanly from start to finish, and explicitly imports every package that it uses.*\n5. Save your notebook as `[your last name].ipynb`, e.g. my notebook should be called `Rose.ipynb`. *This makes it easier for me when I collect all your answers*\n4. Submit your answers by email before class on **DUE DATE**.",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n[Back to ATM 623 notebook home](../index.ipynb)\n</div>",
"_____no_output_____"
],
[
"____________\n## Credits\n\nThe author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\nIt was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Envionmental Sciences](http://www.albany.edu/atmos/index.php), offered in Spring 2015.\n____________\n\n",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |